Understanding The Basics of Virtualization


Virtualization is the technology that made cloud computing possible, and it is a big part of what made Linux popular with cloud computing vendors.  This article from Secur explores:

  • What is virtualization?
  • The different types of virtualization.
  • The implementation of virtualization in a Linux environment.

What is a Hypervisor?

When designing the architecture for an application that supports a large number of clients, an industry-standard practice is to separate the different functions of the application onto separate servers.

  • From a performance perspective, this practice makes sense as you can dedicate separate computing resources to each element.
  • From a security standpoint, it compartmentalizes access, making attacks a little more difficult to execute.
[Diagram: traditional server application setup.]

As the diagram above depicts, this means locating the application server, the web server, and the database server on separate servers.

  • Customers only communicate with the front-end web server.
  • The web server passes the connections to the application, which in turn communicates with the database server.

Having said all that, as server capacity has increased, this model has become inefficient from a cost and operational point of view.  Dedicating an entire physical server to just running a web server, another physical server to just running the database server, and yet a third physical server to just running the application software doesn’t utilize the full power of the servers.  To avoid this operational (and cost) inefficiency, you can take advantage of virtualization, which lets you run multiple smaller virtual server environments on a single physical server.

[Diagram: a virtual server setup, with multiple virtual machines running on a single physical server.]

As laid out in the diagram above, virtual servers operate as standalone servers running on a single piece of physical server hardware; these virtual servers are what is referred to as virtual machines, or VMs.

As none of the virtual servers interact with each other, they act just as if they were located on separate physical servers.  For virtual servers to share the resources of a physical server without conflict, you need a “hypervisor”, also called a virtual machine monitor (VMM).  The hypervisor manages the physical server resources shared between the virtual machines, providing each virtual machine running on the physical server with its own allocation of CPU time, memory space, and storage space.  From the perspective of each virtual machine, it operates as if it has direct access to the server resources, without any noticeable interference from the hypervisor.

As each virtual machine on a server is a separate entity, you can run different operating systems on the different virtual machines without having to incur the costs of additional servers.
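
A quick way to check whether a Linux host’s CPU exposes the hardware virtualization extensions that hypervisors rely on (the vmx flag on Intel CPUs, svm on AMD CPUs) is to look at /proc/cpuinfo:

    # Count the CPU entries that advertise hardware virtualization support
    # (vmx = Intel VT-x, svm = AMD-V); a result of 0 means none was detected
    grep -E -c '(vmx|svm)' /proc/cpuinfo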

Different Types of Hypervisors

There are two different hypervisor implementation methods: bare metal (Type I) and hosted (Type II). With a Type I hypervisor, you must dedicate the server to hosting virtual machines, while with a Type II hypervisor, the server can perform some (although not many) other functions while it hosts virtual machines.

The attraction of using a Type II hypervisor is that you can run it on an already installed operating system and you don’t need to create a new server environment to run virtual machines.

Type I/Bare Metal Hypervisors

Type I hypervisors, also known as bare-metal hypervisors, run directly on the server hardware; the hypervisor software interacts directly with the CPU, memory, and storage on the system, allocating them to each virtual machine as needed.  There are two popular Type I hypervisors:

  • The Linux Kernel-based Virtual Machine (KVM): Utilizes a standard Linux kernel along with a special hypervisor module, depending on the CPU used (see the quick check after this list).
    • Can host any type of guest operating system.
  • XEN: An open-source standard for hardware virtualization.
    • Supports Intel, AMD, and Arm CPUs.
    • Includes additional software for managing the hypervisor from a guest operating system.
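
As a quick sanity check on a Linux host, you can confirm that the KVM hypervisor module mentioned above is loaded and that the KVM device node exists:

    # List the loaded KVM kernel modules (kvm plus kvm_intel or kvm_amd)
    lsmod | grep kvm

    # The /dev/kvm device node is present when the KVM hypervisor is usable
    ls -l /dev/kvm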

Type II/Hosted Hypervisors

Type II/hosted hypervisors run on top of an existing operating system installation, just like any other application on the host. They run guest virtual machines as separate processes on the host operating system, and the guest operating systems are completely separate from the host operating system.

This lets you use a macOS host operating system and still run Windows or Linux guest operating systems.

There are many Windows and macOS Type II hypervisors, such as VMware Workstation and QEMU, but for Linux the one commonly used is Oracle VirtualBox.
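
To make the Type II workflow concrete, VirtualBox ships with a command-line tool called VBoxManage. A minimal sketch of defining and starting a guest, using an arbitrary virtual machine name of demo-vm (no disk or installation media is attached yet), might look like this:

    # Create and register a new virtual machine definition
    VBoxManage createvm --name demo-vm --ostype Ubuntu_64 --register

    # Give the virtual machine 2 GB of memory and 2 virtual CPUs
    VBoxManage modifyvm demo-vm --memory 2048 --cpus 2

    # Start the virtual machine without opening a GUI window
    VBoxManage startvm demo-vm --type headless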

Hypervisor Templates

Hypervisor templates are how you manage the configuration settings for virtual machines.  The open-source standard for virtual machine configurations is the Open Virtualization Format (OVF), which creates a distribution package consisting of multiple files.  OVF uses a single XML configuration file to define the virtual machine’s hardware environment requirements.  The additional files define the network access, virtual drive requirements, and any operating system requirements.

The configuration settings for virtual machines (resource needs, hardware interactions, etc) can be saved to template files, allowing  easy duplication of the virtual machine environment on either the same hypervisor or a separate hypervisor server.

OVF templates, however, are challenging to distribute because they consist of multiple files. The solution is the Open Virtualization Appliance (OVA) format, which bundles all of the OVF files into a single tar archive file for easy distribution.
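
For example, VirtualBox can export an existing virtual machine to an OVA archive and import it again on another host; the machine name demo-vm and the file name below are just placeholders:

    # Export the virtual machine "demo-vm" into a single OVA archive
    VBoxManage export demo-vm -o demo-vm.ova

    # Import the OVA on another VirtualBox host to recreate the machine
    VBoxManage import demo-vm.ova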

Exploring Containers

If you are just looking to distribute an application, there’s no need to duplicate an entire operating system environment; the efficient way to distribute an application is to use what is called a “container”.

What Are Containers?

As any software developer knows, all too often an application works just fine in development and then comes crashing down when deployed to a production environment that doesn’t accurately reproduce the development environment.  This happens because all the ancillary files required to run the application are not properly replicated in the production environment.

Most applications require a lot of files to run, and those files are usually colocated in a single directory. Often, however, additional library files are required for interfacing the application with databases, desktop management software, or built-in operating system functions, and those libraries are scattered in various places around the Linux virtual directory.

The “cute” term for this is dependency hell: issues that arise when applications share packages or libraries but depend on different, incompatible versions of them.

  • If the shared package or library can only be installed in a single version, obtaining a newer or older version to satisfy one application breaks other dependencies and pushes the problem on to another set of packages. (The quick check below shows how many shared libraries even a simple command relies on.)
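
To see how quickly shared-library dependencies pile up, you can list the libraries that even a small utility is linked against; /bin/ls is used here purely as an example:

    # Print the shared libraries that the ls binary depends on,
    # along with the paths where the dynamic loader found them
    ldd /bin/ls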

Containers are designed to solve this problem: the container provides a self-sufficient runtime environment for the application by gathering all of the files necessary to run it (the runtime files, library files, database files, and any operating system–specific files) and storing them within the container.  Since containers don’t contain an entire operating system, they’re more lightweight than a full virtual machine, making them easier to distribute.  Containers use the chroot jail method of separating applications running on a Linux system, with some additional features.

If you run multiple applications on a server, you can install multiple containers, each providing a self-contained environment for a particular application.  Application containers are portable: you can run the same container in any host environment and expect the same behavior from the application.  This is ideal for application developers, who can build the application container in one environment, copy it to a test environment, and then deploy it to a production environment, all without worrying about missing files.

 

Container Software

Two main container packages are commonly used in Linux:

  • LXC: Developed as an open-source standard for creating containers.
    • Each LXC container is a little more involved than a standard lightweight application container, but not quite as heavy as a full virtual machine.
    • LXC containers include their own minimal operating system that interfaces with the host system hardware, bypassing the host operating system.
    • While they contain their own mini operating system and are sometimes referred to as virtual machines, they still require a host operating system to operate.
  • Docker: Released as an open-source project, it is an extremely lightweight technology that allows several containers to run on the same host Linux system.
    • Uses a separate daemon that runs on the host Linux system to manage the installed Docker images.
      • The daemon listens for requests from the individual containers as well as from the Docker command-line interface, which allows you to control the container environments (see the example commands after this list).
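
A minimal sketch of that command-line workflow, using the publicly available nginx image purely as an example, might look like this:

    # Download a prebuilt application image from a registry
    docker pull nginx

    # Start a container from the image, mapping host port 8080
    # to port 80 inside the container
    docker run -d --name web -p 8080:80 nginx

    # List the running containers managed by the Docker daemon
    docker ps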

Container Templates

As with virtual machines, containers allow the use of templates to duplicate container environments. The different types of Linux containers use different methods for distributing templates.

  • LXC: Uses a separate utility called LXD to manage containers. 
  • Docker: Uses container image files to store container configurations.
    • A container image file is a read-only image that can be used to store and distribute an application container (the sketch after this list shows one way to do that).
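
For example, a Docker image can be written out to a single archive file and loaded on another host, much as an OVA bundle distributes a virtual machine; the image name nginx is again just an example:

    # Save the image to a tar archive for offline distribution
    docker save -o nginx-image.tar nginx

    # Load the archive into Docker on another host
    docker load -i nginx-image.tar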
