
Windows Virtualization Team Blog

The Microsoft Windows Virtualization Product Group Team Blog

Guest Post: Virtualization; Something Old, Someone New and Some Really Big Cost Savings

Hello, I’m Kevin Knox, VP Worldwide Commercial Business at AMD. Microsoft invited me to do this guest blog post in conjunction with our sponsorship of Microsoft’s “Get Virtual Now” event on Sept. 8.

I always find it interesting to hear people singing the praises of x86 virtualization and talking about how this recently introduced technology is already revolutionizing the industry. The fact of the matter is that virtualization technology was originally introduced for the purpose of time sharing on mainframes in the early 1970s. One could probably trace the roots of x86 virtualization to the early ’90s, when IT managers finally realized that people’s desks and wiring closets were no place for servers and started to relocate them into the datacenter. The next step in the evolution of x86 virtualization came a few years later, when IT managers realized they could safely run multiple applications on a single server simultaneously. And if you are reading this blog, you can probably figure out what came next: running multiple instances of an OS on a single piece of hardware, or what has affectionately become known as virtualization. While certainly an interesting history, there are two major happenings on the near horizon that I believe will permanently change the face of virtualization.

First is the notion of virtualization becoming “intelligent.” Think about the possibilities when an enterprise infrastructure becomes smart enough to essentially manage itself based on a predefined set of rules, policies and current application requirements. Application A runs for an hour; when it finishes, it shuts down, a new virtual session with half its resources starts a nightly backup, and the other half is reallocated to a session getting ready to kick off a major batch run. While just an example, hopefully it shows the possibilities of what some are calling Dynamic Provisioning. What this will ultimately enable is a sharp reduction in manual administration, which for most IT shops is still the largest budget item they have. No longer will masses of IT people be needed to start, stop, configure and manage virtual sessions. On top of that, optimizing the software environment this way means less hardware, better utilized.
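The rule-driven reallocation described above can be sketched in a few lines. This is purely illustrative; the `Pool` class and the rule here are my own toy names, not any real provisioning product's API:

```python
# Illustrative sketch of the "Dynamic Provisioning" idea: a fixed budget of
# resource units is moved between virtual sessions by a predefined rule.
# All names here are hypothetical, not a real product API.

class Pool:
    """Tracks a fixed budget of resource units shared by virtual sessions."""
    def __init__(self, units):
        self.free = units
        self.sessions = {}

    def start(self, name, units):
        assert units <= self.free, "not enough free capacity"
        self.free -= units
        self.sessions[name] = units

    def stop(self, name):
        self.free += self.sessions.pop(name)

# Application A holds the whole pool while it runs.
pool = Pool(units=8)
pool.start("app_a", 8)

# Rule fires when app_a completes: split its resources between a nightly
# backup session and a major batch run.
released = pool.sessions["app_a"]
pool.stop("app_a")
pool.start("backup", released // 2)
pool.start("batch", released - released // 2)

print(pool.sessions)   # {'backup': 4, 'batch': 4}
print(pool.free)       # 0
```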

The second and probably more important happening is Microsoft’s entry into the virtualization space. While Microsoft has invested in a variety of virtualization areas over the years, including partnerships, acquisitions and new technologies, they now seem to have a clear strategy and roadmap for delivering broad virtualization technologies across the enterprise. This is not to suggest they will be better or worse than anyone else, but rather that having Microsoft in the game will help drive innovation, push the competition and allow for a more competitive marketplace, which ends up being a good thing for end users. Microsoft’s years of experience in operating systems and infrastructure software will also help enable new features and functions, and enable emerging technologies such as Dynamic Provisioning. One thing I can assure you from my years at AMD is that competition drives innovation and value for end users, for the ecosystem and for the industry as a whole.

To illustrate this, consider the advances that AMD has driven into the x86 processor over the past 5 years, which, together with greater choices in virtualization software, are starting to open the doors for IT departments, enabling them to run a larger range of workloads on virtualized servers. 64-bit and multi-core processor technologies now provide the robust platform needed for memory-intensive virtualization. More advanced power management capabilities in the processor are helping to reduce power and cooling costs. AMD-V technology, the hardware-enabled virtualization inherent in our processors, is enabling software like Hyper-V to handle the most demanding applications.

So what does the future hold? Well, I can’t tell you that, but I can assure you we are in the midst of the most interesting time in the history of x86 virtualization, from both a competitive and an innovation perspective. As Bob Dylan once sang, “The Times They Are A-Changin’.”

Kevin Knox is Vice President of Worldwide Commercial Business at AMD.  His postings are his own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites are provided for convenience and unless explicitly stated, AMD is not responsible for the contents of such linked sites and no endorsement is implied.

 

by porourke | 2 Comments

Thoughts on today's virtualization licensing and support news

Today we announced some changes to server application licensing and support policies related to running Microsoft server apps on top of anyone's hypervisor. Several folks have written or blogged about it; here are some:

Chris Wolf (Burton Group)

Virtualization.info

NetworkWorld

Windows IT Pro 

Thoughts on application mobility licensing

As of Sept. 1, 41 Microsoft server applications are covered, both per-processor apps and server/CAL applications that are available via Volume Licensing [Enterprise Agreement and Open Agreements]. In essence, the 90-day mobility rule is removed for these applications. With these changes, both licenses and software can move more freely across servers in a server farm [up to two data centers, each physically located in time zones within four hours of one another, or within the EU and/or EFTA].

Here are some examples of how the licensing might work for you:

·         A customer has a server farm with 8 4-processor servers, running a total of 4 copies of Exchange.

o    Under the old rules, they would need to either manually move the Exchange instances to another server that is already licensed for Exchange, OR they would need to license all 8 possible servers for Exchange.

o    Starting Sept. 1, they will need to have a license for each running instance (4) and those licenses can be moved from one physical server to another as needed.

·         A customer has a server farm with 8 4-processor servers, running a total of 4 instances of SQL Server Enterprise Edition under the per-processor model.

o    Under the old rules, the customer would need to manually move an instance from one licensed processor to another, or they would need as many as 32 licenses (8 x 4).

o    Starting Sept 1, the customer will need a maximum of 4 licenses. Because Microsoft allows unlimited instances on a processor licensed for SQL Server Enterprise Edition, the customer could have as few as one license if all 4 instances are always moved together.    
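For readers who like to see the arithmetic, the SQL Server example works out like this. A throwaway sketch; the license counts come straight from the scenario above:

```python
# Quick arithmetic for the SQL Server Enterprise per-processor example.
# Variable names are mine; the numbers are from the post itself.

servers = 8
procs_per_server = 4
running_instances = 4

# Old rules: license every processor an instance might land on.
old_worst_case = servers * procs_per_server      # 32 per-processor licenses

# New rules (Sept. 1): license only the running instances; the licenses
# move with the VMs as needed.
new_max = running_instances                      # 4 licenses at most

# Enterprise Edition allows unlimited instances on a licensed processor,
# so if all 4 instances always move together, one license can suffice.
new_min = 1

print(old_worst_case, new_max, new_min)  # 32 4 1
```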

 A few other points:

  • These licensing rights don't apply to server apps acquired via OEM or retail channels
  • They apply only to the products listed; they don't apply to client access licenses (CALs), management licenses (MLs) or Windows Server
  • If you've recently purchased the server apps listed in the "Application Server License Mobility" VL Brief based on the rules prior to Sept. 1, talk to your Microsoft partner or account rep to receive maximum value.
  • Why no Windows Server? Check out the use rights with Windows Server enterprise and datacenter editions.

 Thoughts on technical support

As of today, you'll see that we're expanding our tech support policy for an initial 31 server applications for customers that run these apps on WS08 Hyper-V, Microsoft Hyper-V Server or any other validated hypervisor (type 1 or 2). The nut of it is ... customers will be able to get the same level of tech support for virtualized workloads that they get today with non-virtualized workloads.

The kicker here, and where many journos reported inaccurate information, is that 3rd-party vendors' hypervisors must first pass the validation test before customers can get cooperative support from Microsoft and that vendor. For example, it was reported that VMware signed an agreement to participate in the Server Virtualization Validation Program. That much is true. However, it doesn't mean that cooperative support is now in place. First, ESX Server must go through and pass the validation test. Once validated, they'll be added to KB article 944987, where we list "support partners for non-Microsoft hardware virtualization software." Today only Novell is listed, and that's due to the broader technical collaboration agreement in place between the companies.

The other thing to note is that the server application teams have posted configurations that will be supported running on validated hypervisors. For example, the Exchange team posted a blog about their policy, which can be summarized as: 

  • Exchange Server 2007 Service Pack 1 is supported on Hyper-V and other validated hypervisors when deployed according to the guidelines published on TechNet. 

The SharePoint team blogged about their policy today and posted an FAQ here.

Let us know if you have questions. Cheers,

Patrick O'Rourke

by porourke | 7 Comments

Guest Post: MaximumASP and Hyper-V

Hi, my name is Dominic Foster, Chief Technology Officer at MaximumASP, a web hosting and IT services provider that sells, markets and services Microsoft technologies for the SMB market.

A core element of the MaximumASP business model is a complete commitment to exclusively offering hosting solutions that deliver Microsoft technologies.  This overall market and product focus has been the leading success factor in our growth over the past eight years.

Server consolidation was a very appealing aspect of virtualization: it quite simply allows us to reduce the number of physical machines in our datacenter. This consolidation gives us greater utilization of our hardware and reduces spending on machines and licensing, while fewer physical servers in the datacenter also mean lower power and cooling costs and lower total energy consumption.

Virtualization also gives us the flexibility and scalability we need to make smooth changes to the underlying infrastructure with minimal impact on systems or applications.

During our research we found the TCO of Windows Server 2008 Hyper-V with System Center to be lower than that of a similar toolset from VMware. Being completely manageable by System Center gave us the flexibility to use the same management tools for both Hyper-V and physical servers. An added benefit was the consistent Microsoft look and feel of System Center, so there was not the typical steep learning curve associated with a new toolset. Strong support from Microsoft and the MVP program facilitated our quick adoption of Hyper-V.

It was important for us to be involved in the Hyper-V Go Live program to get access to resources and contacts that could help us bring our solution to market faster. It was also an opportunity to connect with Microsoft to get our questions answered, or routed to the product team, and to brainstorm our ideas. We've been able to connect with technical and marketing contacts at Microsoft who help us further develop relationships in one of our core competencies and focus areas.

In May of 2008 we dispatched a virtualization survey to 2,313 customers and received a 10% response rate. The response was overwhelmingly positive with 79% of respondents saying they would consider running their web applications on a virtual server.

Here are some open ended comments from the survey:

·         "Being able to store a configured image of a machine and just click-and-drag that image into reality is earth shattering. Deploying virtual SANs, load balancers, and firewalls for pennies on the dollar would give my company a real advantage in our niche."

·         "The line between software and hardware is becoming increasingly blurry. Companies like mine must leverage these innovations or yield market to those who can."

 

Virtualization will continue to be a major focus for us, not only for reasons of consolidation and reduction in operating costs, but also to enable more rapid deployment of new application servers, reduced down time for server maintenance, to reduce driver dependencies for new hardware, and the assistance in disaster recovery planning.

Dominic Foster, Chief Technology Officer, MaximumASP

www.maximumasp.com

 UPDATE (August 11th)

Thanks for the comment by wanderson. Microsoft has asked me to update my original guest post so I can address Wayne's comment.

 

There were a few key features that we required when evaluating solutions. While VMware has a rich feature set in VI3 Enterprise, some features simply were not needed. For our environment, Hyper-V managed by the System Center Server Management Suite, not just SCVMM, was a perfect fit. Some of the key features we needed were:

 

·         64-bit Support - Both support this.

·         Migration - While quick migration from Microsoft does incur downtime while the machine saves state and the LUN owner shifts, we found the period in our environment rather short. It is obviously not a live migration like VMotion, not yet anyway, but we did not require this.

·         High Availability - Both support HA/clustering.

·         Online Backup - We wanted to provide two methods of backup, so we decided on the Volume Shadow Copy Service from the host and System Center Data Protection Manager inside the guest.

 

During the evaluation period, the cost was structured as follows:

VMware, per server

Virtual Machine Solution, VI3 Enterprise, 2 processors: $5,750

Management Server (VirtualCenter): $6,044

 

Hyper-V and System Center, per server

Server Management Suite Enterprise: $1,290

 

VMware pricing from http://www.vmware.com/products/vi/vc/buy.html

Microsoft pricing from http://www.microsoft.com/systemcenter/en/us/management-suites.aspx#Anchor3

 

I have omitted the cost of the Windows OS, since in either scenario the cost is the same.
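To make the comparison concrete, here is a rough sketch using the list prices quoted above for a hypothetical 10-server farm. The assumptions (two-socket servers, one VirtualCenter license for the whole environment, one Server Management Suite Enterprise license per managed server) are mine, so adjust for your own environment:

```python
# Rough cost comparison using the 2008 list prices quoted in the post.
# Assumptions (mine, not the author's): two-socket servers, one VirtualCenter
# license per environment, one SMSE license per managed server.

servers = 10  # hypothetical farm size

vmware = servers * 5750 + 6044   # VI3 Enterprise per 2-proc server + VirtualCenter
microsoft = servers * 1290       # SMSE per server (Hyper-V ships with Windows)

print(vmware)     # 63544
print(microsoft)  # 12900
```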

 

In addition, we already use pieces of System Center in our environment and would have to purchase it anyway if using VMware.

 

In closing, I have to say that if the cost of the products during our testing had been equal, we would have chosen VMware. It has been the industry leader, and it is more mature in its development and feature set. However, from the numbers above you can see there is a drastic difference in the pricing of the two products, and our requirements did not warrant the additional cost.

 

by porourke | 4 Comments

Guest Post: Why Microsoft and Hyper-V for HostBasket

Hi, my name is Bert Van Pottelberghe, business unit manager at HostBasket, the leading hosting company and SaaS provider in Belgium, with over 30,000 SMB customers.

In a recent survey of our datacenter of over 1,000 servers, we saw that average CPU usage was only 12%. On the other hand, investments in new server hardware and datacenter space, and the cost of power and cooling (now at an all-time high), keep prices for dedicated servers high. The hosting industry is very competitive, so we needed to come up with an answer.

We had been investigating virtualization technologies such as Xen, VMware and Virtuozzo, but always found problems (such as security issues, complex and expensive licensing, stability or scalability) that kept us from creating a virtual machine offer.

Our Windows Server 2008 Hyper-V offering has two components:

·         Flex Servers: virtual servers on a shared hardware platform. More info here.

·         Hyper-V servers: a dedicated hardware server with 2 virtual machines (an application server and a database server, for example).

 

Both are positioned as dedicated servers, not as virtual servers. With these two offers we address the needs of two customer groups: the Flex Servers are for websites and applications that have outgrown shared hosting, while the Hyper-V servers are for larger applications that used to be deployed on two (or more) separate physical servers.

We opted not to use the term “Virtual servers” for our offer, because a “virtual server” implies less value for money than a dedicated server. Also, “Virtual servers” are often associated with cheap solutions based on Parallels Virtuozzo.

Our solutions based on Windows Server 2008 Hyper-V provide more functionality (like snapshotting, easy installation and flexible upgrading), are more secure and stable, and have features that deliver higher uptime than dedicated servers.

All customers involved during the beta stage decided to renew their subscriptions and gave positive comments on the stability and performance.

Bert Van Pottelberghe

Business Unit Manager

HostBasket

by porourke | 2 Comments

Guest Post: Going Live with Hyper-V for myhosting.com

Hi, my name is Stephen Nichols, VP of sales and marketing for SoftCom Technology Consulting, the company behind myhosting.com, a leading global provider of affordable web, email and application hosting. We have been actively engaged with Microsoft technologies since we entered the hosting industry back in 1997. We are part of the Gold Certified Partner program, with competencies in Data Management Solutions, Information Worker Solutions, Mobility Solutions and Networking Infrastructure Solutions. Based on our strength and experience in shared hosting, we see the greatest opportunity for growth in building new solutions on this foundation. The development of these solutions will be driven by customer requirements, and they need to be delivered cost-effectively and on demand.

The best prospect for future profits is to move beyond commoditized hosting of simple websites sold on large amounts of storage, bandwidth and email addresses. To future-proof our business, we have begun to offer unique solutions backed by exceptional support. As part of our process of finding and developing tools and strategies to differentiate our services in the market, our entire organization was keen to implement some form of virtual server hosting.

In the words of our Operations Manager: “Modern computer systems are extremely powerful; with 4-socket, quad-core CPUs, these systems are able to support many gigabytes of memory and storage. Running one operating system and a single application on this type of machine would be inefficient. By using virtualization technology, we can consolidate multiple physical servers onto one physical machine.”

We had already implemented virtualization technology within our infrastructure for our internal server needs and saw that server consolidation gives us a smaller footprint and a lower cost of ownership. Fewer physical servers reduce power consumption, both by the servers and by the cooling infrastructure, lowering costs and at the same time making our solution “greener.” The next step was to build a hosting solution that could take advantage of virtualization technology to provide our customers the tools they need to run their businesses. As not all virtualization platforms are created equal, we took our time to find the best fit.

At the end of the day the clear choice for us was to go with Microsoft’s Windows Server 2008 Hyper-V platform. Some of the key factors we evaluated include the following.

Efficient cost and licensing model: As a Gold Partner under the SPLA model, the licensing for the entire platform was included in one package, which made things simple. There is no need to deal with multiple vendors' systems for reporting.

Hardware support and flexibility: Within our data center we have several server configurations. Hyper-V's support for a wide range of hardware makes it attractive for us. Once we install and configure Windows Server 2008, we enable Hyper-V and we are ready to go.

Server management tools: Hyper-V allows our Operations Team to use the same familiar Microsoft tools to manage both the physical and the virtual server environments in one place.

Reliability and Scalability: We need to be able to balance the needs of the few with the needs of the many. Hyper-V enables us to do this by providing each customer with an independent operating system that can be customized by the customer without affecting other customers.

We have been involved in Go Live programs with Microsoft in the past, so we were eager to participate this time around. The exciting part for myhosting.com is having access not only to the code but to a team of knowledgeable people at Microsoft to collaborate with. The main benefit so far has been how straightforward it has been to implement an integrated hosted virtual server solution. We are able to offer customers a transactional hosting experience and at the same time provide the latest platform of Windows Server 2008 and SQL Server 2008. Customers get the best of all worlds: a full operating system supported by Microsoft in a virtual environment, 24/7 live support, scalability that is cost effective and, best of all, a solution that just works.

 

Stephen Nichols
VP, Sales and Marketing
SoftCom Technology Consulting

See our blog here.

by porourke | 1 Comment

VM standards, offline patching tools

Quick post here on two items.

The Offline Virtual Machine Servicing Tool can be downloaded here, as blogged about by the System Center team blog, Virtualization.info and the InfoWorld blog.

This tool automates the updating of large-scale deployments of virtual machines, leveraging PowerShell, System Center Virtual Machine Manager (SCVMM) 2007 and WSUS 3.0 (or Configuration Manager 2007). As Alessandro pointed out, "It just automates the VM power-on, update deployment through virtual network access, and VM shutdown." You can use this Solution Accelerator to help you with business scenarios such as these:

  • Your IT organization is converting physical servers to virtual machines to reduce costs, including administrative overhead. How can you regularly update offline virtual machines while minimizing administrative costs?
  • Your IT organization has thousands of virtual machines stored for months at a time in a number of libraries. How do you keep the virtual machines reliably up to date?
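Conceptually, the servicing workflow the tool automates looks like the loop below. In reality this is driven by PowerShell against SCVMM and WSUS; every function name here is a hypothetical stand-in, not the tool's actual API:

```python
# A schematic sketch of the offline VM servicing loop. Every callback name
# here is a hypothetical stand-in for what SCVMM/WSUS actually do.

def service_offline_vms(library_vms, deploy, apply_updates, store):
    """Power on each stored VM, patch it over the virtual network, store it back."""
    for vm in library_vms:
        host = deploy(vm)          # move VM from library to a maintenance host
        apply_updates(vm, host)    # WSUS/Configuration Manager pushes patches
        store(vm)                  # shut down and return VM to the library

# Tiny demonstration with stub callbacks that just record the steps:
log = []
service_offline_vms(
    ["vm1", "vm2"],
    deploy=lambda vm: log.append(("deploy", vm)) or "host1",
    apply_updates=lambda vm, host: log.append(("update", vm)),
    store=lambda vm: log.append(("store", vm)),
)
print(log[:3])  # [('deploy', 'vm1'), ('update', 'vm1'), ('store', 'vm1')]
```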

Second, today Citrix issued an announcement about "Project Kensho," which it describes as a project that:

will deliver Open Virtual Machine Format (OVF) tools that, for the first time, allow independent software vendors (ISVs) and enterprise IT managers to easily create hypervisor-independent, portable enterprise application workloads. These tools will allow application workloads to be imported and run across Citrix XenServer, Microsoft Windows Server 2008 Hyper-V and VMware ESX virtual environments.

 While you can read ComputerWorld's article or Alessandro's blog on the announcement, I think the most interesting perspective comes from Simon Crosby's June 27 post. Here's an excerpt from his post:

Kensho showcases our commitment to open standards based virtual infrastructure management using DMTF CIM based interfaces, and will in the not too distant future allow Microsoft System Center VMM to manage XenServer.   It also allows users to quickly and easily export their virtualized workloads to and import them from the new industry standard portable virtual machine format, OVF.   You'll be hearing much more about Kensho and its features in the near future.

The OVF standard, which I was fortunate to be able to help develop, offers ISVs and enterprise IT staff a hypervisor-independent portable virtual machine format that packages a complete application workload with its resource requirements, configuration and customization parameters, licensing and signatures to facilitate appliance integrity and security checking, as an open standard. Virtualized data center workloads captured in OVF format can be installed and run on any DMTF-compliant virtualization platform. OVF also supports software license checking for the enclosed VMs, and allows an installed VM to localize the applications it contains and optimize its performance for a given virtualization environment. At the DMTF interoperability event, we used Project Kensho to create VMs from VMware, Hyper-V & XenServer in the OVF format. We also used Kensho to import and run OVF virtual appliances on XenServer and Hyper-V. Kensho will allow application vendors and IT users to produce virtual appliances once as "golden application templates", independent of the virtualization platform used to deploy them - and is a clear demonstration of how Citrix will add value to Hyper-V.
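For readers who haven't seen OVF, a package is described by an XML envelope along these lines. This is a rough, hand-written sketch of the envelope's shape (element names as I understand the OVF envelope schema), not a complete or validated descriptor:

```xml
<!-- Illustrative sketch only: the shape of an OVF descriptor, referencing a
     virtual disk and declaring one virtual system with its hardware needs. -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="disk1" ovf:href="appliance-disk1.vhd"/>
  </References>
  <DiskSection>
    <Info>Virtual disks used by the appliance</Info>
    <Disk ovf:diskId="vmdisk1" ovf:fileRef="disk1" ovf:capacity="20480"/>
  </DiskSection>
  <VirtualSystem ovf:id="appliance">
    <Info>A single virtual machine</Info>
    <VirtualHardwareSection>
      <Info>CPU, memory and device requirements</Info>
      <!-- Resource items (CPU count, memory, NICs) go here -->
    </VirtualHardwareSection>
  </VirtualSystem>
</Envelope>
```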

The Citrix news release says Kensho will be available as a download in September.

As for Microsoft, we support the OVF standards work, which isn't complete yet. There's no public schedule for when OVF will be supported in our products, such as Hyper-V or SCVMM, but it's on the board. It's great to see partners like Citrix building conversion tools for interop based on the DMTF standard interface. And we'll continue to work with Citrix, Novell and Sun on interoperability, in addition to making technology (like the VHD image format and the hypercall API) available via the Open Specification Promise.

Patrick O'Rourke

by porourke | 2 Comments

Rumor Mill: Dispelling the "Microsoft Virtualization is not Ready for 'Prime Time'" Myth...

 Greetings! Chris Steffen here again from Kroll Factual Data. I want to share some thoughts on what I have heard about Microsoft virtualization in the enterprise data center. I will also be the first to admit that I am not the average user of Microsoft’s virtualization technologies and that I probably have a bit of a bias toward the Microsoft solution. But the bias did not come without some pretty compelling reasons.

 

There are several requirements we each consider when evaluating a server virtualization product. Maybe the primary requirement is cost. Maybe it is flexibility. Maybe it is manageability, or ease of use, or compatibility, or reliability. Each of these is a valid reason to choose a certain virtualization product, and in my evaluation, each of them is answered by the Microsoft virtualization solution. I am not going to deep-dive into a sales pitch here on the benefits of Virtual Server over any of the other solutions, but I did want to address a specific concern I have heard while attending conferences and talking with industry analysts: that Microsoft’s virtualization products are not ready for enterprise production environments.

 

For some background, Kroll Factual Data has been using Microsoft Virtual Server in our production environment for nearly five years (since Virtual Server 2003). We have tested and implemented every subsequent version since, and currently have an environment of more than 1,600 virtual machines consisting of Virtual Server 2005 R2 and Hyper-V. We run 300,000 business transactions per day in this environment, and the flexibility afforded by an 85% virtualized data center allows us to deploy additional capacity on demand, nearly instantly.

 

When we started down the virtualization path, I will be the first to admit that some of the other virtualization solutions had an advantage over Microsoft, specifically in their management tools. Virtual Machine Manager (VMM) took care of that, and the improvements coming in VMM 2008 make the Microsoft virtualization management solution the best in class.

 

We have had Hyper-V in our production environment for months now and are migrating our existing Virtual Server 2005 R2 VMs to Hyper-V hosts as fast as our IT team can work. It has been stable, the support has been outstanding (through the Microsoft TAP program), and we are seeing about a 20% increase in resource utilization over Virtual Server 2005 R2.

 

Based on Factual Data’s use of the products, Microsoft’s virtualization suite is ready for the big leagues. I mentioned earlier that I have a bias toward the Microsoft virtualization solution, and I have this bias because Virtual Server, Hyper-V and VMM have proven to me that they work as promised. They have provided the cost-effective, reliable and easy-to-manage solution that we needed, and that we will continue to use.

 

Some may say that they came a bit late to the party, but I would contend that the party is just beginning.

 

-Chris

by porourke | 3 Comments

Hyper-V available via Windows Update today

The update will be classified as a "recommended update", which means it will flow down automatically according to the settings you’ve selected for your Windows Server 2008 OS.  It’s also being released to Windows Server Update Services.

Taylor has a nice screen capture here. Let the fun times begin!

Patrick

by porourke | 3 Comments

Linux Integration Components for Windows Server 2008 Hyper-V

Daniel asked about the Linux integration components for Hyper-V. They've now reached RC2 status, according to Mike Sterling, and are available from http://connect.microsoft.com/

Hang on – did you say RC2?

 

Due to customer feedback from the beta version, we added a couple of additional features.

  

·         Mouse Support: Support for the synthetic mouse device has been added in the RC2 release. This new mouse support allows the mouse to move in and out of the VM window without having to use the CTRL-ALT-LEFTARROW key command to break out.

·         Fastpath Boot Support: Support for faster single disk configurations has been added to the RC2 release. Boot devices now take advantage of the storage VSC to provide enhanced performance.

We’ve reached RTM on the hypercall adapter, the Linux implementation of VMBus, and the network and storage VSCs.

 

Patrick

 

UPDATE (July 8, 2008) - Vacations slowed my response to the comments. The Linux ICs were taken down so that folks could review the licensing one more time. Licensing is tricky when open source and proprietary software are packaged together. I'm trying to get an ETA and will update this post when I know a date.

by porourke | 13 Comments

RTM Announcement: Microsoft Assessment and Planning Toolkit 3.1 for Hyper-V

As many of you know, the much-anticipated Hyper-V hypervisor was released just last week. Many IT professionals like you have now begun to use Hyper-V server virtualization technology to take back control of your datacenter. With energy costs going sky-high and the demand for IT services increasing, it's no wonder many of you are now looking to consolidate physical servers using virtualization.

With server consolidation being the way forward, we are pleased to announce the RTM of the Microsoft Assessment and Planning Toolkit 3.1 for Hyper-V.

What is the Microsoft Assessment and Planning Toolkit 3.1?

The Microsoft Assessment and Planning Toolkit 3.1 (or MAP) is a network-wide, agent-less tool that can help you quickly find out where your desktops and servers are, as well as auto-generate upgrade recommendations for multiple products and technologies, including server, desktop and virtualization migration scenarios covering:

·         Hyper-V virtualization candidates assessment (New in MAP 3.1)

·         SQL server discovery and assessment (New in MAP 3.1)

·         Desktop Security Center assessment (New in MAP 3.1)

·         Windows Server 2008 hardware and device compatibility assessment including server discovery

·         Windows Vista hardware and device compatibility assessment including PC discovery

·         Office 2007 hardware compatibility assessment

·         Microsoft Application Virtualization hardware compatibility assessment

·         SNMP inventory reporting

 

How can I use the MAP Toolkit to plan for my server consolidation project?

The MAP Toolkit can help you determine which subset of your current physical servers are good candidates for Hyper-V virtualization. By capturing the workload and utilization of each of your servers over a defined period of time, the tool can recommend a set of consolidated hosts for your existing servers. It looks at many factors, such as CPU utilization, memory utilization, network I/O and disk I/O rates, and auto-generates a set of proposal documents and detailed spreadsheets for your virtualization planning process.
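At its core, the placement step is a packing problem: fit measured workloads onto as few hosts as possible. The first-fit-decreasing sketch below is my own simplification along a single CPU dimension (MAP also weighs memory, disk and network I/O) and is not MAP's actual algorithm:

```python
# First-fit-decreasing sketch of consolidation planning. One CPU dimension
# only; a toy illustration, not MAP's real multi-factor placement logic.

def consolidate(cpu_demands, host_capacity):
    """Greedily assign workloads (peak CPU units) to hosts, largest first."""
    hosts = []       # each entry is the remaining capacity of one host
    placement = {}   # workload name -> host index
    for name, demand in sorted(cpu_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                placement[name] = i
                break
        else:  # no existing host fits; provision a new one
            hosts.append(host_capacity - demand)
            placement[name] = len(hosts) - 1
    return placement, len(hosts)

# Hypothetical measured peak-CPU demands for five physical servers:
demands = {"web1": 30, "web2": 25, "sql1": 60, "file1": 15, "dc1": 10}
placement, host_count = consolidate(demands, host_capacity=100)
print(host_count)  # 2: five physical servers consolidate onto two hosts
```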

How does remote inventory work?  Is it agent-based or agent-less?

Leveraging Windows Management Instrumentation (WMI), all you need is a machine with the MAP Toolkit 3.1 installed, the right set of administrative credentials for the “target machines” (i.e., the machines you want the MAP Toolkit to inventory and assess), and the other WMI prerequisites in place.  The MAP Toolkit will then remotely and securely query each machine across your network, all without installing any software agents on the individual target machines.  In other words, the MAP Toolkit is completely agentless, with zero footprint!
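The agentless pattern is easy to picture in code. In the sketch below the WMI transport is abstracted as a caller-supplied query function so the logic runs anywhere; the machine names, the `fake_query` stand-in, and the returned fields are illustrative assumptions, not real MAP behavior:

```python
# Conceptual sketch of an agentless inventory sweep: one collector
# machine queries each target remotely; nothing is installed on the
# targets. A real tool would issue authenticated WMI queries over the
# network; here the transport is a plain function so this is runnable.

def inventory(targets, query):
    """Collect facts for each reachable target machine.
    query(host) returns a dict of facts, or raises OSError on failure."""
    report, unreachable = {}, []
    for host in targets:
        try:
            report[host] = query(host)
        except OSError:
            unreachable.append(host)  # note the failure, keep sweeping
    return report, unreachable

# Stand-in for a remote WMI query (hypothetical data, illustrative only).
def fake_query(host):
    data = {"srv01": {"cpus": 4, "ram_mb": 8192},
            "srv02": {"cpus": 2, "ram_mb": 4096}}
    if host not in data:
        raise OSError(f"{host} unreachable")
    return data[host]

report, missing = inventory(["srv01", "srv02", "srv99"], fake_query)
print(report, missing)
```

The key design point is on the collector side: because all the querying logic lives on one machine, there is nothing to deploy, patch, or clean up on the hundreds of targets being assessed.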

Where do I go to get the MAP Toolkit 3.1 and learn more about it?

·         Download the MAP Toolkit 3.1: http://go.microsoft.com/fwlink/?LinkId=111000

·         Learn more on TechNet: http://www.microsoft.com/MAP

·         Read tips and tricks: http://blogs.technet.com/MAPBLOG

What else should I leverage to accelerate the planning, deployment, and operation of Hyper-V?

To help you accelerate the entire lifecycle of your virtualization project, the Microsoft Solution Accelerators team has developed tools and guidance including Microsoft Assessment and Planning Toolkit 3.1, Infrastructure Planning and Design Guides for Virtualization (including guides for Hyper-V, App-V, Terminal Services, and SCVMM 2008), Windows Server 2008 Security Guide, Microsoft Deployment Toolkit, and Offline Virtual Machine Servicing Tool.

Check out the latest Virtualization Solution Accelerators at:         http://www.microsoft.com/VSA

by porourke | 4 Comments


Top 5 things to know about Hyper-V

My name is Ronald Beekelaar. I'm a Microsoft MVP for Virtual Machine technology, based in Amsterdam. I have my own consultancy firm, and since 2002 I have focused on virtualization. At first this was strictly VMware-oriented, but a few years later it came to include Microsoft's virtualization products as well.

 

Since the first public beta of Hyper-V more than a year ago, I have given many presentations about Hyper-V at various events, and talked to a lot of customers about transitioning to Hyper-V. The people I talk to can be divided into two groups: they either have experience with Microsoft Virtual PC and Virtual Server, or they only know the VMware products and are just now looking into Hyper-V. For both groups, despite their very different perspectives, there are five topics that always come up in discussion. Below is my list of the top five things you should know and understand about Hyper-V.

 

5) Understand the hypervisor model and performance consequences.

This is an especially big one for people who know Virtual PC and Virtual Server. The virtualization model that Hyper-V uses is very different from the one Virtual PC and Virtual Server use, and it allows for much better I/O performance in virtual machines. This is mainly due to the new 64-bit hypervisor layer that sits underneath everything, including even the host operating system, and to the new high-speed VMBus "synthetic" drivers that run in the virtual machines.

 

In particular, the use of optimized synthetic disk and network drivers that talk to the VMBus, instead of normal "hardware-oriented" drivers, makes for a much faster I/O path from applications inside Hyper-V virtual machines to the physical hardware. To make use of these synthetic drivers, be sure to run an operating system inside the virtual machine for which Microsoft provides so-called Integration Components; when you install the Integration Components, the synthetic drivers are installed as well.

 

Please see Microsoft Knowledge Base article 954958 for the list of operating systems in which you can install Integration Components.

 

4) Understand the use of snapshots

Snapshots in Hyper-V are very different from those found in Virtual PC and Virtual Server. A snapshot saves the current point-in-time state of your running or stopped virtual machine, so you can later return to that particular state. This is great for testing, troubleshooting, and rolling back virtual machine state.

 

This takes the place of undo disks, saved states, and to some extent even differencing disks in Virtual PC and Virtual Server.

 

Make sure you understand the power of snapshots, and also the scenarios where you should not roll back the state of your virtual machine. Any scenario involving a distributed database (such as domain controllers) is not a good candidate for snapshotting.
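The semantics are easy to demonstrate with a toy model: a snapshot is a point-in-time copy, and reverting silently discards everything that happened after it. This is a conceptual sketch only, not the Hyper-V snapshot (AVHD) implementation:

```python
# Toy model of point-in-time snapshots. Reverting restores the saved
# state and discards all later changes - exactly the behavior that is
# dangerous for replicated systems like domain controllers.

class ToyVM:
    def __init__(self, state):
        self.state = dict(state)
        self.snapshots = {}

    def take_snapshot(self, name):
        self.snapshots[name] = dict(self.state)   # point-in-time copy

    def revert(self, name):
        self.state = dict(self.snapshots[name])   # later changes are lost

vm = ToyVM({"patch_level": 1})
vm.take_snapshot("before-update")
vm.state["patch_level"] = 2          # apply an update
vm.revert("before-update")           # roll back for troubleshooting
print(vm.state["patch_level"])       # back to 1, as if the update never happened
```

That last line is exactly why domain controllers are poor candidates: after a revert, the machine sincerely believes it is at an earlier point in time, while its replication partners have already recorded the changes it just forgot.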

 

3) How to use Quick Migration

No discussion of virtualization is complete without addressing fail-over support and virtual machine management. Those are topics 3 and 2 on the list.

 

It doesn't take long to realize that whenever you run multiple virtual machines on the same physical Hyper-V server, you have to think about how to handle maintenance on the physical server (planned), or, worse, what happens when the physical server suddenly stops working due to a loss of power or similar failure (unplanned).

 

For both the planned and unplanned scenarios, Hyper-V supports host clustering. Windows Server 2008 clustering treats virtual machines as fully managed clustered resources. For fail-over, clustering moves the virtual machine from one node to another; in Hyper-V terminology this is called Quick Migration. Because shared storage is used, only the memory contents of the running virtual machine need to be copied to the other node.

 

2) Consider System Center Virtual Machine Manager

Virtual machines created with Hyper-V need to be managed as well. Enter System Center Virtual Machine Manager 2008. With Virtual Machine Manager you can simplify many virtual machine management tasks. This includes easier virtual host cluster support, automatic provisioning of new virtual machines from templates (including taking care of the "sysprep" step that makes multiple virtual machines from the same template unique on the network), and a straightforward physical-to-virtual (P2V) conversion process to move existing physical computers onto Hyper-V as virtual machines.

 

Interestingly enough, SCVMM 2008 can manage Hyper-V servers, Virtual Server, and even VMware ESX virtualized infrastructure, and it comes with a nifty virtual-to-virtual (V2V) option to move existing VMware virtual machines to Hyper-V.

 

As you would expect of a new server product from Microsoft, SCVMM 2008 fully supports automation with PowerShell.

 

1) Hyper-V can run on Server Core Installation

The number one thing to know about Hyper-V is that it runs perfectly well on a Server Core installation of Windows Server 2008. This means that on the physical server you only need to install the absolute minimum "host OS," and you still get the full Hyper-V functionality. Having fewer moving parts and services running on the Hyper-V computer naturally reduces the number of times you need to patch the server, and it shrinks the attack surface exposed to the network.

 

For any serious production installation of Hyper-V, and for any serious comparison with VMware ESX, being able to run Hyper-V on a Windows Server 2008 Server Core installation is essential.

 

Ronald

 

 


Video: Ronald Beekelaar on Virtualization

by porourke | 17 Comments

Achtung! 3,500 BMW dealerships going virtual

Continuing with yesterday's theme... on the first day of "Virt-Mas" my true love brought to me - 3,500 BMW dealerships that will be deploying Windows Server 2008 Hyper-V later this year.

Before sharing BMW's story, I first wanted to say "thanks" to Mark and Dugie for their comments to me today. And Frank at QLogic launched a Web page today where you can learn more about their benchmark measuring QLogic HBA and Hyper-V performance. Their results surpass VMware's May 2008 test, which may surprise some people. Now onto BMW ...

Unless you read German and troll the German newswires, you probably missed Microsoft's news release about a project, built on the Windows platform, that BMW is deploying to 3,500 dealerships around the world. I don't speak German ["Ich spreche nicht Deutsch"], but a colleague in MS-Germany translated the announcement for me. Following is a translation of some of the news release:

Microsoft Consulting Services has developed a retail server platform based on Windows Server 2003 and Virtual Server 2005 for a globally operating automobile manufacturer. The solution permits the rollout of new applications and services in a typical branch office infrastructure, while ensuring a high degree of standardization, a minimum of operating effort, and the necessary system security. These requirements count among the major challenges in the contract dealer and franchise environments, where operators traditionally tend to operate more independently. Dealers’ customers – and their own employees – profit from optimized advisory procedures, shorter waiting times, and improved quality of advice.

The use of virtualization technology allows the corresponding multiprocessor hardware to be utilized flexibly. It also guarantees a high degree of security against failure, while at the same time reducing the amount of testing necessary for simultaneous use of different applications. With the help of Microsoft Forefront Intelligent Application Gateway 2007, a remote access solution has been implemented that allows access to 3rd level support in the heterogeneous network structures of the workshops while taking into account the high data protection requirements of the dealers.

The BMW group is relying on the Microsoft server platform to underpin its rollout of a new hardware and application infrastructure for its more than 3,500 service operations worldwide. This infrastructure is known as the Integrated Service Infrastructure Server (ISIS). Based on the platform, premium aftersales services will be offered that are standardized across the globe. This rollout will be completed in the first quarter of 2008.

An upgrade to the 64-bit Windows Server 2008 with Microsoft's new Hyper-V virtualization technology is already planned, in order to utilize the performance capabilities of the hardware to an even greater extent. BMW also plans to roll out SQL Server and BizTalk Server in the course of this migration. SQL Server 2005 allows the platform to be laid out in a more homogeneous fashion, while the use of BizTalk Server 2006 represents a significant step toward a uniform modular front end, including the optimization of processes and the integration of various backends.

In short, BMW's Integrated Service Infrastructure Server will be the technology architecture for after-sales service at 3,500 BMW dealerships around the world. I recall seeing a slogan BMW uses (or used) for this project. It was something like, "the first car is sold by marketing; the second by service." Here are a few other details that didn't make the news release:

  • On average, dealers will have 2 servers and roughly 20 client devices per location, which they will use for everything from sourcing spare parts to accessing diagnostic manuals and consulting customer service policies.
  • BMW is initially using Virtual Server 2005 R2 (on Windows Server 2003 EE) to consolidate and run .NET-based applications, business process integration and other applications on each server.
  • BMW is using System Center Operations Manager; no word if they'll use SCVMM 2007/2008.

So what's my point? Why does anyone care about a deployment using Virtual Server the day AFTER we released Hyper-V? I'm interested to hear your thoughts. Here are mine:

  • If you're a Virtual Server customer, now you know there's someone else who plans to move a large number of Virtual Server-based VHDs to Hyper-V.
  • Server virtualization (be it Type 1 or Type 2) is great, but it's just part of the architecture needed to deploy their new after-sales service. As a colleague always tells me, "a hypervisor by itself can't even run Calculator; you need apps and an OS to do something." For BMW, it's handy that the hypervisor can come from the same vendor as the applications, dev tools, management tools and OS.
  • If you're an SI partner (perhaps going to the WW Partner Conference next month), this represents a services opportunity: helping customers design their architectures, virtualize packaged and custom apps, and integrate those with other systems.
  • If you're a VMW shareholder, that's approximately 28,000 VMs that will be running on Hyper-V in the near future ;-)

11 more days until Hyper-V is available via Windows Update (WU). Have a good weekend.

Patrick

by porourke | 3 Comments

WU-hoo! Only 12 Days to WU

It’s Christmas in July…

As you may have heard, Windows Server 2008 Hyper-V was released today and is now available for download. July 8 will mark Hyper-V's release to Windows Update. You may remember that when we released the first Hyper-V beta, it was right around Christmas. We're bringing the joy of the holidays back to you with the "12 Days of Virt-Mas."

 

And so, let’s make merry…

In celebration of the Hyper-V news, and as part of the "12 Days of Virt-Mas" countdown to Hyper-V availability, I, along with some of the other Hyper-V product managers, will be posting daily spotlights on specific features and benefits of Hyper-V technology beginning tomorrow. We might also throw in the occasional guest blogger or customer story (like Microsoft's own deployment and results with Hyper-V, as showcased today in Rob Emanuel's guest blog and video). Because it's virtualization, there's no need to worry about any of us running out of things to share: it is the season of giving.

 

I’ll be back tomorrow to bring you the 1st day of “Virt-Mas.” Stay tuned…

 

Patrick

 

by porourke | 18 Comments

Hyper-V WMI – Startup-Shutdown-Recover/Snapshots/Error Messages

On Saturday I posted that I have started my own blog and promised to provide summary posts to this blog for a while…  There are three new posts at http://blogs.msdn.com/taylorb that are not on this blog.

Hyper-V WMI – Configuring Automatic Startup/Shutdown/Recovery Actions for Virtual Machines

Hyper-V WMI: Creating/Applying/Deleting Virtual Machine Snapshots

Hyper-V WMI: Rich Error Messages for Non-Zero ReturnValue (no more 32773, 32768, 32700…)

 

Thanks, and Happy Virtualizing!

Taylor Brown
Hyper-V Integration Test Lead
http://blogs.msdn.com/taylorb


by Taylorb | 0 Comments


Great source of Hyper-V performance-oriented information

Virtualization Nation,

Greetings! Quick note today. :-)

If you are looking for some good performance-oriented information on Hyper-V, please check out Tony Voellm's blog. Tony is the Hyper-V Performance Team Lead, Performance Architect, Performance Nut... you get the idea. His blog is here: http://blogs.msdn.com/tvoellm/archive/tags/Hyper-V/default.aspx

Some recent posts have included:

  • Hyper-V Performance FAQ
  • Performance Counter Details
  • Performance oriented bugs (memory, timers, …)

The goal of the blog is to present the technical side of performance and help users understand the system and what happens under the covers.

When we ship the final Hyper-V RTM, Tony will update the blog...

Cheers, -Jeff

by WSV_GUY | 1 Comments

© 2008 Microsoft Corporation. All rights reserved. Terms of Use  |  Trademarks  |  Privacy Statement