Hardware-assisted virtualization

In computing, hardware-assisted virtualization is a platform virtualization approach that enables efficient full virtualization using help from hardware capabilities, primarily from the host processors. Full virtualization is used to simulate a complete hardware environment, or virtual machine, in which an unmodified guest operating system (using the same instruction set as the host machine) executes in complete isolation. Hardware-assisted virtualization was added to x86 processors (Intel VT-x or AMD-V) in 2006.

Hardware-assisted virtualization is also known as accelerated virtualization; Xen calls it hardware virtual machine (HVM), and Virtual Iron calls it native virtualization.

History

Hardware-assisted virtualization first appeared on the IBM System/370 in 1972, for use with VM/370, the first virtual-machine operating system. Despite increasing demand for high-performance computing (e.g. CAD), virtualization of mainframes received less attention in the late 1970s, as the emerging minicomputers fostered resource allocation through distributed computing, followed by the commoditization of microcomputers.

IBM offers hardware virtualization for its POWER CPUs under AIX (e.g. System p) and for its System z mainframes. IBM refers to its specific form of hardware virtualization as a "logical partition", more commonly known as an LPAR.

The increase in compute capacity per x86 server (and in particular the substantial increase in modern networks' bandwidth) rekindled interest in data-center computing built on virtualization techniques. The primary driver was the potential for server consolidation: virtualization allowed a single server to cost-efficiently consolidate the compute power of multiple underutilized dedicated servers. The most visible hallmark of this return to the roots of computing is cloud computing, essentially data-center (or mainframe-like) computing delivered through high-bandwidth networks; it is closely connected to virtualization.

The initial implementation of the x86 architecture did not meet the Popek and Goldberg virtualization requirements to achieve "classical virtualization":

  - equivalence: a program running under the virtual machine monitor (VMM) should exhibit a behavior essentially identical to that demonstrated when running on an equivalent machine directly;
  - resource control (also called safety): the VMM must be in complete control of the virtualized resources;
  - efficiency: a statistically dominant fraction of machine instructions must be executed without VMM intervention.

This made it difficult to implement a virtual machine monitor for this type of processor. Specific limitations included the inability to trap on some privileged instructions.[1]

To compensate for these architectural limitations, designers have accomplished virtualization of the x86 architecture through two methods: full virtualization or paravirtualization.[2] Both create the illusion of physical hardware to achieve the goal of operating system independence from the hardware but present some trade-offs in performance and complexity.

  1. Paravirtualization is a technique in which the hypervisor provides an API and the OS of the guest virtual machine calls that API, requiring OS modifications.
  2. Full virtualization was implemented in first-generation x86 VMMs. It relies on binary translation to trap and virtualize the execution of certain sensitive, non-virtualizable instructions. With this approach, critical instructions are discovered (statically or dynamically at run-time) and replaced with traps into the VMM to be emulated in software (a toy sketch of this idea follows the list). Binary translation can incur a large performance overhead in comparison to a virtual machine running on natively virtualizable architectures such as the IBM System/370. VirtualBox, VMware Workstation (for 32-bit guests only) and Microsoft Virtual PC are well-known commercial implementations of full virtualization.
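
To make the replace-with-trap step concrete, here is a toy sketch in C (an illustration, not from the article's sources): it scans a buffer of raw x86 machine code for the single-byte POPF opcode (0x9D), one of the classic sensitive-but-unprivileged instructions, and overwrites it with the INT3 breakpoint opcode (0xCC), which is guaranteed to trap and could thus be intercepted by a VMM. A real binary translator must fully decode x86's variable-length instructions and rewrite whole basic blocks; the byte-wise scan below is a deliberate simplification.

    #include <stdio.h>
    #include <stddef.h>

    /* Toy illustration of dynamic binary patching: replace the sensitive
     * POPF instruction (opcode 0x9D) with INT3 (0xCC), so that executing
     * it would trap into a monitor. Real translators decode x86's
     * variable-length instructions; a byte-wise scan can misfire on
     * 0x9D bytes embedded in longer instructions. */
    static size_t patch_sensitive(unsigned char *code, size_t len)
    {
        size_t patched = 0;
        for (size_t i = 0; i < len; i++) {
            if (code[i] == 0x9D) {      /* POPF: modifies EFLAGS without trapping */
                code[i] = 0xCC;         /* INT3: guaranteed to trap */
                patched++;
            }
        }
        return patched;
    }

    int main(void)
    {
        /* PUSHF; POPF; NOP; RET encoded as raw bytes */
        unsigned char code[] = { 0x9C, 0x9D, 0x90, 0xC3 };

        size_t n = patch_sensitive(code, sizeof code);
        printf("patched %zu sensitive instruction(s)\n", n);
        return 0;
    }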

In 2005 and 2006, Intel and AMD (working independently) created new processor extensions to the x86 architecture, called Intel VT-x and AMD-V respectively (on the Itanium architecture, hardware-assisted virtualization is known as VT-i). The first generation of x86 processors to support these extensions was released in late 2005 and early 2006:

  - On November 13, 2005, Intel released two models of the Pentium 4 (Model 662 and 672) as the first Intel processors to support VT-x.
  - On May 23, 2006, AMD released the Athlon 64 ("Orleans"), the Athlon 64 X2 ("Windsor") and the Athlon 64 FX ("Windsor") as the first AMD processors to support AMD-V.

Well-known implementations of hardware-assisted x86 virtualization include VMware Workstation (for 64-bit guests only), Xen 3.x (including derivatives like Virtual Iron), Linux KVM and Microsoft Hyper-V.

Pros

Hardware-assisted virtualization reduces the maintenance overhead of paravirtualization, as it reduces (ideally, eliminates) the changes needed in the guest operating system. It also makes it considerably easier to obtain good performance. Practical benefits of hardware-assisted virtualization have been cited by VMware engineers[3] and by Virtual Iron.

Cons

Hardware-assisted virtualization requires explicit support in the host CPU, which is not available on all x86/x86_64 processors.
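
Whether a processor advertises these extensions can be queried from software via the CPUID instruction. The minimal C sketch below (an illustration, not from the article; it assumes GCC or Clang with the compiler-provided <cpuid.h> helpers) tests the documented feature bits: leaf 1, ECX bit 5 for Intel VMX (VT-x), and extended leaf 0x80000001, ECX bit 2 for AMD SVM (AMD-V). A set bit only shows that the CPU implements the extension; firmware can still disable it.

    #include <stdio.h>
    #include <cpuid.h>  /* GCC/Clang CPUID helpers */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 1, ECX bit 5: Intel VMX (VT-x). */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            puts("Intel VT-x (VMX) reported by CPUID");

        /* Extended leaf 0x80000001, ECX bit 2: AMD SVM (AMD-V). */
        if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
            puts("AMD-V (SVM) reported by CPUID");

        return 0;
    }

On Linux, the same information appears as the vmx and svm flags in /proc/cpuinfo.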

A "pure" hardware-assisted virtualization approach, using entirely unmodified guest operating systems, involves many VM traps, and thus high CPU overheads, limiting scalability and the efficiency of server consolidation.[4] This performance hit can be mitigated by the use of paravirtualized drivers; the combination has been called "hybrid virtualization".[5]

In 2006, first-generation 32- and 64-bit x86 hardware support was found to rarely offer performance advantages over software virtualization.[6]

References

  1. Adams, Keith. "A Comparison of Software and Hardware Techniques for x86 Virtualization" (PDF). Retrieved 20 January 2013.
  2. Barclay, Chris. "New approach to virtualizing x86s". Network World, October 20, 2006.
  3. See http://x86vmm.blogspot.com/2005/12/graphics-and-io-virtualization.html
  4. See http://www.valinux.co.jp/documents/tech/presentlib/2007/2007xenconf/Intel.pdf
  5. Nakajima, Jun; Mallick, Asit K. "Hybrid-Virtualization—Enhanced Virtualization for Linux". Proceedings of the Linux Symposium, Ottawa, June 2007. http://ols.108.redhat.com/2007/Reprints/nakajima-Reprint.pdf
  6. Adams, Keith; Agesen, Ole. "A Comparison of Software and Hardware Techniques for x86 Virtualization". VMware, ASPLOS'06, October 21–25, 2006, San Jose, California, USA: "Surprisingly, we find that the first-generation hardware support rarely offers performance advantages over existing software techniques. We ascribe this situation to high VMM/guest transition costs and a rigid programming model that leaves little room for software flexibility in managing either the frequency or cost of these transitions."

