What Performs Best? Bare-Metal vs. Hosted Virtualization
Virtualization has become a ubiquitous, end-to-end technology for data centers, edge computing installations, networks, storage and even endpoint desktop systems. However, admins and decision-makers should remember that virtualization techniques differ. Bare-metal virtualization is the preeminent technology for many IT goals, but hosted hypervisor technology works better for certain virtualization tasks.
By installing a hypervisor to abstract software from the underlying physical hardware, IT admins can increase the utilization of computing resources while supporting greater workload flexibility and resilience. Take a fresh look at the two classic virtualization approaches and examine the current state of both technologies.
What is bare-metal virtualization?
Bare-metal virtualization installs a Type 1 hypervisor — a software layer that handles virtualization tasks — directly onto the hardware, before the system installs any other OSes, drivers or applications. Common bare-metal hypervisors include VMware ESXi and Microsoft Hyper-V. Admins often refer to bare-metal hypervisors as the OSes of virtualization, though hypervisors aren’t operating systems in the traditional sense.
Once admins install a bare-metal hypervisor, that hypervisor can discover and virtualize the system’s available CPU, memory and other resources. The hypervisor creates a virtual image of the system’s resources, which it can then provision to create independent VMs. VMs are essentially individual groups of resources that run OSes and applications. The hypervisor manages the connection and translation between physical and virtual resources, so VMs and the software that they run only use virtualized resources.
Since virtualized resources and physical resources are inherently bound to each other, virtual resources are finite. This means the number of VMs a bare-metal hypervisor can create is contingent upon available resources. For example, if a server has 24 CPU cores and the hypervisor translates those physical CPU cores into 24 vCPUs, you can create any mix of VMs that uses up to that total number of vCPUs — e.g., 24 VMs with one vCPU each, 12 VMs with two vCPUs each and so on. Though a system can share physical resources across additional VMs — a practice known as oversubscription — doing so forces VMs to compete for the same hardware and can degrade performance.
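As a concrete illustration of that arithmetic, the short Python sketch below checks a hypothetical VM mix against a host’s physical core count and flags oversubscription. The host size, VM names and vCPU counts are made-up values, not output from any particular hypervisor.

```python
# Hypothetical capacity check: does a proposed VM mix fit the host's cores?
PHYSICAL_CORES = 24  # physical cores exposed as vCPUs, assuming a 1:1 mapping

proposed_vms = {     # VM name -> vCPUs requested (placeholder values)
    "db01": 8,
    "app01": 4,
    "app02": 4,
    "web01": 2,
    "web02": 2,
}

requested = sum(proposed_vms.values())
print(f"Requested vCPUs: {requested} of {PHYSICAL_CORES} physical cores")

if requested > PHYSICAL_CORES:
    ratio = requested / PHYSICAL_CORES
    print(f"Oversubscribed at {ratio:.2f}:1 -- VMs will contend for physical cores")
else:
    print(f"Fits within capacity; {PHYSICAL_CORES - requested} vCPUs remain unallocated")
```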
Once the hypervisor creates a VM, admins can configure that VM by installing an OS such as Windows Server 2019 and an application such as a database. The critical characteristic of a bare-metal hypervisor and its VMs is that every VM remains completely isolated from and independent of every other VM: no VM within a system shares resources with, or is even aware of, any other VM on that system.
Because a VM runs within a system’s memory, admins can save a fully configured and functional VM to disk as a set of files. From there, they can back up the VM and reload it onto the same or a different server later, or duplicate it to run multiple instances of the same VM on other servers in the environment.
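The exact mechanics are hypervisor-specific, but as one hedged sketch, the libvirt Python bindings (commonly used to manage KVM/QEMU VMs) can save a running VM’s state to a file and later restore it on the same host. The VM name and file path below are placeholders.

```python
import libvirt  # libvirt Python bindings (pip install libvirt-python)

# Connect to the local hypervisor; qemu:///system is the typical URI for KVM/QEMU.
conn = libvirt.open("qemu:///system")

# Look up a running VM by name ("web01" is a placeholder).
dom = conn.lookupByName("web01")

# Write the VM's memory and device state to a file and stop the VM.
dom.save("/var/lib/libvirt/save/web01.save")

# Later, restore the VM on the same host exactly where it left off.
conn.restore("/var/lib/libvirt/save/web01.save")

conn.close()
```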
Advantages and disadvantages of bare-metal virtualization
Virtualization is a mature and reliable technology, and VMs provide powerful isolation and mobility. With bare-metal virtualization, every VM is logically isolated from every other VM, even when those VMs coexist on the same hardware. A single VM cannot directly share data with other VMs, disrupt their operation, or access their memory contents or network traffic. In addition, a fault or failure in one VM does not disrupt the operation of other VMs. In fact, the only real way for one VM to interact with another is to exchange traffic through the network, as if each VM were its own separate server.
Bare-metal virtualization also supports live VM migration, which enables VMs to move from one virtualized system to another without halting VM operations. Live migration enables admins to easily balance server workloads or offload VMs from a server that requires maintenance, upgrades or replacements. Live migration also increases efficiency compared to manually reinstalling applications and copying data sets.
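Live migration works differently on every platform, but as a rough sketch under the same libvirt assumptions as above, the Python bindings expose it for KVM/QEMU environments. The host URIs and VM name are placeholders, and a real deployment would also need shared storage (or extra flags to copy disks) for the migration to succeed.

```python
import libvirt

# Connect to the source host (local) and the destination host (remote over SSH).
src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://dest-host.example.com/system")

dom = src.lookupByName("web01")  # placeholder VM name

# Live migration: memory pages copy to the destination while the VM keeps running,
# then execution switches over with only a brief pause.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```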
However, the hypervisor itself poses a potential single point of failure (SPOF) for a virtualized system. In practice, virtualization technology is mature and stable enough that modern hypervisors, such as VMware ESXi 7, rarely exhibit such flaws or exploitable attack vectors. If a VM fails, the cause probably lies in that VM’s OS or application, rather than in the hypervisor.
What is hosted virtualization?
Hosted virtualization offers many of the same characteristics and behaviors as bare-metal virtualization. The difference lies in how the system installs the hypervisor. In a hosted environment, the system installs a host OS first, then installs a suitable Type 2 hypervisor — such as VMware Workstation, KVM or Oracle VirtualBox — atop that OS.
Once the system installs a hosted hypervisor, the hypervisor operates much like a bare-metal hypervisor. It discovers and virtualizes resources and then provisions those virtualized resources to create VMs. The hosted hypervisor and the host OS manage the connection between physical and virtual resources so that VMs — and the software that runs within them — only use those virtualized resources.
However, with hosted virtualization, the system can’t virtualize the resources that the host OS and its installed applications already consume. This means a hosted hypervisor can only create as many VMs as the remaining resources allow: the system’s total resources minus the physical resources the host OS requires.
The VMs the hypervisor creates can each receive guest operating systems and applications. In addition, every VM created under a hosted hypervisor is isolated from every other VM. Similar to bare-metal virtualization, VMs in a hosted system run in memory and the system can save or load them as disk files to protect, restore or duplicate the VM as desired.
Hosted hypervisors are most commonly used in endpoint systems, such as laptop and desktop PCs, to run two or more desktop environments, each with potentially different OSes. This can benefit business activities such as software development.
Even so, organizations use hosted virtualization less often because the presence of a host OS offers no benefit to virtualization or VM performance. The host OS imposes an unnecessary layer of translation between the VMs and the underlying hardware, and it poses a SPOF for the entire computer: a fault in the host OS affects the hosted hypervisor and all of its VMs.
Although hosted hypervisors have fallen by the wayside for many enterprise tasks, the technology has found new life in container-based virtualization. Containers are a form of virtualization that relies on a container engine, such as Docker or LXC, in much the same role as a hosted hypervisor. The container engine creates and manages virtual instances — the containers — that share the services of a common host OS such as Linux.
The crucial difference between hosted VMs and containers is that the system isolates VMs from each other, while containers share the same underlying OS kernel. This enables containers to consume far fewer system resources than VMs. Containers also start much faster and can run in far greater numbers than VMs, enabling greater dynamic scalability for workloads built on microservices-style architectures, as well as important enterprise services such as network load balancers.
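To make the contrast concrete, the hedged sketch below uses the Docker SDK for Python to start a container on a local Docker engine; the image and container name are arbitrary examples. Because no guest OS has to boot, the container is typically ready in about a second.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()  # connect to the local Docker engine

# Start a lightweight container; it shares the host kernel rather than booting a guest OS.
container = client.containers.run("nginx:alpine", detach=True, name="web-demo")

container.reload()               # refresh cached state from the engine
print(container.status)          # typically "running" almost immediately
print(client.containers.list())  # all running containers on this host

# Clean up the demo container.
container.stop()
container.remove()
```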