OpenVZ

Developer(s) Community project, supported by Parallels, Inc.
Initial release 2005
Written in C
Operating system Linux
Platform x86, x86-64
Available in English
Type OS-level virtualization
License GNU GPL v2
Website openvz.org

OpenVZ (Open Virtuozzo) is an operating system-level virtualization technology based on the Linux kernel and operating system. OpenVZ allows a physical server to run multiple isolated operating system instances, called containers, Virtual Private Servers (VPSs), or Virtual Environments (VEs). OpenVZ is similar to FreeBSD jails and Solaris Zones.

OpenVZ compared to other virtualization technologies

OpenVZ is not true virtualization but containerization, like FreeBSD jails. Technologies such as VMware and Xen are more flexible in that they virtualize the entire machine and can run multiple operating systems and different kernel versions. OpenVZ uses a single patched Linux kernel, so it can run only Linux guests, and all containers share the host's architecture and kernel version. Because it lacks the overhead of a true hypervisor, however, it is very fast and efficient.[1] The trade-off is the single kernel: every guest must work with the same kernel version that the host runs.

The advantage is that memory allocation is soft: memory not used by one virtual environment can be used by others or for disk caching. Older versions of OpenVZ used a common file system, where each virtual environment was simply a directory of files isolated using chroot; newer versions allow each container to have its own file system.[2] A container can therefore be cloned by copying its files to a new directory, creating a configuration file for it, and starting it, as sketched below.
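
For illustration, cloning container 101 into a new container 102 might look like the following minimal sketch (the container IDs are arbitrary examples, and the paths assume the default OpenVZ directory layout):

  vzctl stop 101                                   # source container should not be running
  cp -a /vz/private/101 /vz/private/102            # copy the container's private area
  cp /etc/vz/conf/101.conf /etc/vz/conf/102.conf   # copy the configuration file
  # Edit 102.conf: at minimum VE_PRIVATE, VE_ROOT, HOSTNAME and IP_ADDRESS
  # must be changed so they do not clash with container 101.
  vzctl start 102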

Kernel

The OpenVZ kernel is a Linux kernel, modified to add support for OpenVZ containers. The modified kernel provides virtualization, isolation, resource management, and checkpointing. As of vzctl 4.0, OpenVZ can work with unpatched Linux 3.x kernels, with a reduced feature set.[3]

Virtualization and isolation

Each container is a separate entity, and behaves largely as a physical server would. Each has its own:

Files
System libraries, applications, virtualized /proc and /sys, virtualized locks, etc.
Users and groups
Each container has its own root user, as well as other users and groups.
Process tree
A container sees only its own processes (starting from init). PIDs are virtualized, so that init has PID 1 inside the container, as it should (see the example after this list).
Network
Virtual network device, which allows a container to have its own IP addresses, as well as its own sets of netfilter (iptables) and routing rules.
Devices
If needed, any container can be granted access to real devices like network interfaces, serial ports, disk partitions, etc.
IPC objects
Shared memory, semaphores, messages.
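
The process-tree isolation above, for example, can be observed directly from the host (the container ID 101 is an arbitrary example):

  vzctl exec 101 ps ax   # lists only the container's processes; its init appears as PID 1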

Resource management

OpenVZ resource management consists of four components: two-level disk quota, fair CPU scheduler, disk I/O scheduler, and user beancounters. These settings can be changed while a container is running, eliminating the need to reboot.
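
For example, memory limits can be raised on a live container with vzctl (the values are illustrative; the --ram and --swap shortcuts assume a vswap-capable kernel):

  vzctl set 101 --ram 512M --swap 1G --save   # applies immediately; --save persists it to the config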

Two-level disk quota

Each container can have its own disk quotas, measured in terms of disk blocks and inodes (roughly number of files). Within the container, it is possible to use standard tools to set UNIX per-user and per-group disk quotas.
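Per-container quotas are set as soft:hard pairs with vzctl (the values here are illustrative):

  vzctl set 101 --diskspace 10G:11G --save          # disk space barrier:limit
  vzctl set 101 --diskinodes 200000:220000 --save   # inode barrier:limit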

CPU scheduler

The CPU scheduler in OpenVZ is a two-level implementation of the fair-share scheduling strategy.

On the first level, the scheduler decides which container to give a CPU time slice to, based on per-container cpuunits values. On the second level, the standard Linux scheduler decides which process to run within that container, using standard Linux process priorities.

It is possible to set different cpuunits values for each container. Real CPU time is then distributed proportionally to these values: for example, under contention a container with cpuunits 2000 receives twice as much CPU time as a container with cpuunits 1000.

In addition to the above, OpenVZ provides[4] ways to (see the example after this list):

  • set strict CPU limits, such as 10% of total CPU time (--cpulimit);
  • limit the number of CPU cores available to a container (--cpus);
  • bind a container to a specific set of CPUs (--cpumask).
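
Taken together, the CPU controls might be applied as follows (the container ID and values are illustrative):

  vzctl set 101 --cpuunits 2000 --save   # relative weight under contention
  vzctl set 101 --cpulimit 10 --save     # hard cap: 10% of total CPU time
  vzctl set 101 --cpus 2 --save          # expose at most two CPU cores
  vzctl set 101 --cpumask 0-3 --save     # run only on physical CPUs 0-3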

I/O scheduler

Like the CPU scheduler described above, the I/O scheduler in OpenVZ is also two-level, utilizing Jens Axboe's CFQ I/O scheduler on its second level.

Each container is assigned an I/O priority, and the scheduler distributes the available I/O bandwidth according to the priorities assigned. Thus no single container can saturate an I/O channel.
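
I/O priorities are assigned per container with vzctl (the range is 0-7, with higher values receiving a larger share; the value here is illustrative):

  vzctl set 101 --ioprio 6 --save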

User Beancounters

User Beancounters is a set of per-container counters, limits, and guarantees, meant to prevent a single container from monopolizing system resources. In current OpenVZ kernels (RHEL6-based 042stab*) there are two primary parameters (ram and swap, a.k.a. physpages and swappages), and others are optional.[5]

Other resources are mostly memory and various in-kernel objects such as IPC shared memory segments and network buffers. Each resource can be seen from /proc/user_beancounters and has five values associated with it: current usage, maximum usage (for the lifetime of a container), barrier, limit, and fail counter. The meaning of barrier and limit is parameter-dependent; in short, those can be thought of as a soft limit and a hard limit. If any resource hits the limit, the fail counter for it is increased. This allows the owner to detect problems by monitoring /proc/user_beancounters in the container.
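
Since failcnt is the last column of /proc/user_beancounters, a simple check for resources that have hit their limits might look like this (a minimal sketch, assuming the standard layout of that file):

  awk 'NR > 2 && $NF > 0' /proc/user_beancounters   # print entries whose fail counter is non-zero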

Checkpointing and live migration

A live migration and checkpointing feature was released for OpenVZ in mid-April 2006. It makes it possible to move a container from one physical server to another without shutting the container down. The process is known as checkpointing: a container is frozen and its whole state is saved to a file on disk. The file can then be transferred to another machine, where the container can be unfrozen (restored); the delay is roughly a few seconds. Because the container's state is preserved completely, the pause may appear to be an ordinary computational delay.
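
The two halves of the operation are exposed by vzctl, and the vzmigrate wrapper combines them (the host address and dump file path are placeholders):

  vzctl chkpnt 101 --dumpfile /tmp/dump.101    # freeze the container and dump its state
  vzctl restore 101 --dumpfile /tmp/dump.101   # restore it (normally on another host)

  vzmigrate --online 192.0.2.10 101            # checkpoint, transfer, and restore in one step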

Limitations

By default, OpenVZ restricts container access to real physical devices (thus making a container hardware-independent). An OpenVZ administrator can enable container access to various real devices, such as disk drives, USB ports,[6] PCI devices,[7] or physical network cards.[8]
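
Such access is granted with vzctl (the device and interface names here are illustrative):

  vzctl set 101 --devnodes sdb1:rw --save   # read-write access to a disk partition
  vzctl set 101 --netdev_add eth1 --save    # move a physical NIC into the container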

Access to /dev/loopN is often restricted in deployments (loop devices rely on kernel threads, which may pose a security issue), which limits the ability to mount disk images. A work-around is to use FUSE.

OpenVZ is limited to providing only some VPN technologies based on PPP (such as PPTP/L2TP) and TUN/TAP. IPsec is supported inside containers since kernel 2.6.32.
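
A commonly cited recipe for enabling TUN/TAP-based VPNs inside a container looks like this (run on the host; the container ID is an arbitrary example):

  modprobe tun                                     # load the TUN/TAP module on the host
  vzctl set 101 --devices c:10:200:rw --save       # character device 10:200 is /dev/net/tun
  vzctl set 101 --capability net_admin:on --save   # allow network administration inside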

A graphical user interface called EasyVZ was attempted in 2007,[9] but it did not progress beyond version 0.1.

