How to Configure Virtual Machines


Knowing how to configure virtual machines requires knowing enough to figure out:

  • How virtual and physical networks interoperate
  • The different disk storage choices available
  • How to automate booting a system
  • How to install Linux distributions on virtual machines
  • Basic virtual machine creation and management tools

Read on as Secur helps you navigate the sometimes confusing world of configuring virtual machines.

Understanding Virtual Machine Configuration Utilities

As with all things open source, there is a range of virtual machine utilities to create, destroy, boot, shut down, and configure your guest VMs. Some work only at the command line and are often used within shell scripts, while others are graphical. In the following sections, we’ll look at a few of these tools.

Working with The libvirt Library

Most command-line utilities whose names start with “vir” or “virt” employ the “libvirt” library, so first things first: run the ldd command against one of these utilities to make sure the system has libvirt and the hypervisor packages installed.

(Screenshot: Checking to make sure libvirt is installed on the system.)
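A quick way to perform this check is sketched below; “virsh” is assumed to be the libvirt client you want to test, and any other “vir*” utility can be substituted:

```shell
#!/bin/sh
# Check whether the virsh binary is linked against libvirt.
# "virsh" is just one libvirt client; substitute any "vir*" utility.
check_libvirt() {
    if command -v virsh >/dev/null 2>&1; then
        ldd "$(command -v virsh)" | grep -i libvirt \
            || echo "virsh present but not linked against libvirt"
    else
        echo "virsh not found - install the libvirt client packages first"
    fi
}

check_libvirt
```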

The “libvirt” library is a popular virtualization management utility that includes:

  • An API library that is incorporated into several open-source VMMs (hypervisors), such as KVM.
  • The libvirtd daemon that manages the VM host system and executes VM guest system management tasks, such as starting and stopping the VM
  • Command-line utilities, such as virt-install and virsh, that operate on the VM host system and are used to control and manage VM guest systems

Viewing virsh: The Virtual Interactive Shell

The “virsh” shell is a basic shell, built on the “libvirt” library, used to manage a system’s virtual machines. If you have a hypervisor installed, use the “virsh” shell to create, remove, start, stop, and manage virtual machines on your system. You can see an example of this in the screenshot below.

(Screenshot: The virsh interactive console.)

“Virsh” commands can also be entered directly from the Bash shell, so you do not need to enter the “virsh” shell to manage the virtual machines. This makes virsh useful for those who wish to employ its commands in shell scripts to automate virtual machine administration.
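For example, a script might wrap virsh in small helper functions; this is a sketch, “webvm” is a hypothetical guest name, and libvirt must be installed for the calls to succeed:

```shell
# Thin wrappers around virsh for use in automation scripts.
# These assume libvirt is installed and libvirtd is running.
start_vm()    { virsh start "$1"; }       # boot a defined guest
shutdown_vm() { virsh shutdown "$1"; }    # request a clean shutdown
list_vms()    { virsh list --all; }       # show all guests, running or not

# Usage (requires an existing guest, e.g.):
#   start_vm webvm
```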

Using the Virtual Machine Manager

Part of the “virt-manager” package, the Virtual Machine Manager (VMM) is a Python-based desktop application for creating and managing virtual machines. It is initiated from a terminal emulator within the graphical environment via the “virt-manager” command and features:

  • Performance statistic graphs.
  • A way to modify guest virtual machines’ configurations, such as their virtual networks.
  • A virtual network computing (VNC) client viewer (“virt-viewer”), allowing a graphical desktop environment console to be attached to any running virtual machine.

Understanding Bootstrapping

Bootstrapping a system is the process of installing a new system using a configuration file or image of an earlier system install.

Booting a VM with Shell Scripts

While there are a number of ways to boot a system, starting a few VMs via a GUI is a different beast from starting up hundreds of virtual machines, so you need to figure out how to automate the process. While there are many shell scripts you could swipe from the Internet to help boot up your systems, booting virtual machines is typically a build-your-own affair, especially for booting guest virtual machines on a company-owned/managed host machine.
 
The most efficient way to go about this is to create configuration files for the various virtual machines on the system and read them into the shell script(s) for booting as needed. This is a flexible approach that allows a great deal of customization, as guests can be booted when the host system starts, at predetermined times, or on demand.
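One possible sketch of this approach, assuming a hypothetical configuration file that lists one guest name per line:

```shell
#!/bin/sh
# Boot every guest named in a config file (one name per line).
# "/etc/vmboot.conf" is a hypothetical path; blank lines and
# lines starting with '#' are skipped. Requires virsh/libvirt.
CONF="${VMBOOT_CONF:-/etc/vmboot.conf}"

boot_guests() {
    while IFS= read -r guest; do
        if [ -z "$guest" ]; then continue; fi        # skip blank lines
        case "$guest" in '#'*) continue ;; esac      # skip comments
        virsh start "$guest"                         # boot this guest
    done < "$CONF"
}

# Call boot_guests from an init script, a cron job, or on demand.
```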

Kick-Starting with Anaconda

You can easily bootstrap a new system (physical or virtual) using the kickstart installation method. Setting up and conducting a kickstart installation consists of the following steps:
1. Create a kickstart file to configure the system.
2. Store the kickstart file on the network or on a detachable device.
3. Place the installation source (e.g., ISO file) where it is accessible to the kickstart process.
4. Create a boot medium that will initiate the kickstart process.
5. Kick off the kickstart installation.

Creating the Kickstart File

A kickstart file is a text file that contains all the installation choices you desire for a new system. While you could manually create this file with a text editor, it is far easier to start from an anaconda file.

On Red Hat distros this file, named “anaconda-ks.cfg”, is created at installation and stored in the “/root” directory; it contains all the installation choices that were made when the system was installed.

Unlike Red Hat, Ubuntu distributions do not use anaconda files; you need to install the “system-config-kickstart” utility and use it to create a kickstart file. Alternatively, you can use Ubuntu’s “preseed” application, its native bootstrapping product.

It is important to know that the root password and primary user password are stored in this file, so it must be kept secure to avoid compromising any of the virtual or physical systems this file is used to bootstrap.

To create the kickstart file for a system installation, copy the anaconda file for your new machine and, by convention, label it “ks.cfg”; then open the file in a text editor and make any necessary modifications. Use the “ksvalidator” utility to find syntax issues in a kickstart file.
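For illustration, a minimal kickstart file might look like the sketch below; all values are placeholders, and a real anaconda-ks.cfg will contain many more directives:

```
# ks.cfg - minimal kickstart sketch (illustrative values only)
lang en_US.UTF-8
keyboard us
timezone America/New_York
rootpw --iscrypted <hashed-password-from-anaconda-ks.cfg>
autopart --type=lvm
reboot

%packages
@core
%end
```

After editing, run “ksvalidator ks.cfg” to catch syntax errors before using the file.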

Storing the Kickstart File

It goes without saying that this configuration file needs to be properly stored/protected:

  • Physical system installations: Use removable media to store a configured kickstart file.
  • Virtual machine creation: Store it locally on the host system.

Placing the Installation Source

The installation source is either:

  • The ISO file you are employing to install the Linux distribution.
  • An installation tree containing the extracted contents of an installation ISO file.

For a virtual machine installation, store the ISO or installation tree on the host system. For a regular physical system, the ISO is often stored on removable media or a network location.

Creating a Boot Medium

Physical installation:  The method/medium depends on:

  • The various system boot options
  • The type of system on which you will be performing the installation
    • A simple method for servers that have the ability to boot from USB drives or DVDs is to store a bootable live ISO on one of these choices.

Virtual machine installation: No need to create a boot medium. 

Kicking Off the Installation

For a physical system:

  • Start the boot process
  • Reach a boot prompt, and
  • Enter a command such as “linux ks=hd:sdc1:/ks.cfg”, depending on the hardware environment, bootloader, and the location of the kickstart file.

 

For a virtual system:

  • Employ the virt-install command.
  • Add two options similar to these:

--initrd-inject /root/VM-Install/ks.cfg

--extra-args="ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8"

    • Consider creating a shell script with a loop that reissues the “virt-install” command multiple times to create as many virtual machines as you need.
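Such a loop might be sketched as follows; the guest names, memory and disk sizes, and ISO path are illustrative assumptions:

```shell
#!/bin/sh
# Create several guests from one kickstart file.
# All names, sizes, and paths below are illustrative.
create_guests() {
    count="${1:-3}"
    i=1
    while [ "$i" -le "$count" ]; do
        virt-install \
            --name "guest$i" \
            --memory 2048 \
            --disk size=20 \
            --location /var/lib/libvirt/isos/distro.iso \
            --initrd-inject /root/VM-Install/ks.cfg \
            --extra-args "ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8" \
            --noautoconsole
        i=$((i + 1))
    done
}

# Usage: create_guests 5
```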

Initializing with Cloud-init

Cloud-init, developed by Canonical, the company that produces Ubuntu, is a tool for bootstrapping virtualized machines. The “cloud-init” tool is a Python application that works both with cloud-based virtualization services, such as Amazon Web Services (AWS), Microsoft Azure, and DigitalOcean, and with cloud-based management operating systems, such as OpenStack, automatically applying user data to a platform’s virtual machine instances. The tool can also bootstrap local virtual machines using VMM (hypervisor) products such as VMware and KVM, and it is supported by most major Linux distributions. It makes use of “user-data”, which is either:

  • A string of information 
  • Data stored in YAML (“YAML Ain’t Markup Language”) formatted files.

Cloud-init allows Linux administrators to:

  • Configure a virtual machine’s hostname, temporary mount points, and the default locale.
  • Create pre-generated OpenSSH keys to provide encrypted access to the virtualized system.
  • Employ customized scripts when the virtual machine is bootstrapped. 

Cloud-init is configured primarily via the “/etc/cloud/cloud.cfg” file, and its command-line utility, “cloud-init”, is employed only on a host machine where virtual machines are created. When working with cloud-based virtualization services, you provide the user-data file/information via their management interface to bootstrap newly created virtual machines.
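A minimal user-data file in YAML form might look like this sketch; the hostname, key, and commands are placeholder values:

```
#cloud-config
# Minimal cloud-init user-data sketch; all values are illustrative.
hostname: demo-vm
locale: en_US.UTF-8
ssh_authorized_keys:
  - ssh-ed25519 AAAA... user@example.com
runcmd:
  - echo "bootstrapped by cloud-init" >> /var/log/bootstrap.log
```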

Managing Virtual Machine Storage Issues

When setting up a virtual system, especially at scale, you need to understand the various disk configuration options, as the choices you make affect the virtual machine’s performance. A virtual machine’s disk drives are simply files on a physical host’s disk; depending on the VMM (hypervisor) employed and its configuration settings, a single virtual disk may be represented by either a single physical file or multiple physical files. In addition to this high-level knowledge of virtualization, you need to understand the following configuration terms:

Provisioning: Beyond just selecting disk storage size when creating a virtual machine, you must consider whether you want thick or thin provisioning:

  • Thick provisioning: A static setting that preallocates the physical file(s) created on the physical disk based on the virtual disk size; if you select 50GB as your virtual disk size, 50GB of space is consumed on the physical drive.
  • Thin provisioning: Allows disk size to grow dynamically, so the hypervisor only consumes the amount of disk space actually used for the virtual drive. If you select 50GB for your virtual disk size and only 10GB of space is written to the virtual drive, only 10GB of space is consumed on the physical drive. As more data is written to the virtual drive, more space is utilized on the physical drive, up to the 50GB setting. Note that when you delete data from the virtual drive, the space is not automatically freed on the physical drive.
    • Thin provisioning allows “overprovisioning”, which occurs when more virtual disk space is assigned than is available on the host, on the assumption that you can scale up the physical storage as needed to meet virtual machine demand.
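The difference can be seen with QEMU’s qemu-img utility; in this sketch the file names are illustrative, and a 1GB size is used for demonstration:

```shell
#!/bin/sh
# Thin vs. thick provisioning with qemu-img (part of the QEMU tools).
if command -v qemu-img >/dev/null 2>&1; then
    # Thin: the qcow2 file starts tiny and grows as data is written.
    qemu-img create -f qcow2 thin.qcow2 1G
    # Thick: preallocation reserves the full size up front.
    qemu-img create -f raw -o preallocation=falloc thick.img 1G \
        || echo "preallocation not supported on this filesystem"
    # "disk size" in the output shows actual physical usage.
    qemu-img info thin.qcow2
else
    echo "qemu-img not installed - skipping demonstration"
fi
```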

Persistent Volumes: Used by many virtualization products, such as OpenStack and Kubernetes, virtualized persistent volumes operate in a similar fashion to a physical disk.

  • Data is kept on the disk until the system or user overwrites it.
  • The data stays on the disk, whether the virtual machine is running or not.
    • With some virtualization products, it can remain even after the virtual machine using it is destroyed.

Blobs: A term used in the Microsoft Azure cloud platform’s technical documentation, blob storage holds large unstructured data, which is offered over the Internet and can be manipulated with .NET code. Blob data items, grouped together into a container for a particular user account, are classified as one of three types:

  • Append blobs: Blocks of text and binary data optimized for efficient append operations; as a result, they are often used for logging data.
  • Block blobs: Blocks of text and binary data with a size limit of 4.7TB.
  • Page blobs: Random access files up to 8TB in size; used as virtual disks for Azure virtual machines.

Virtual Machine Network Configurations

Virtual machines can have any number of virtualized NICs that interact with the virtualized internal switches provided by the hypervisor. Properly configured, this setup results in higher network/application performance and improved security.

Virtualizing the Network

Initially referring to the virtualization of switches and routers running at OSI layers 2 and 3, network virtualization worked its way higher up the OSI model to incorporate firewalls and server load balancing, with some providers offering Network as a Service (NaaS). There are two basic network virtualization concepts you need to familiarize yourself with:

  • Virtualized local area networks (VLANs): As previously discussed, a local area network (LAN) contains systems and various devices located in a small area, such as an office or building, which share a common communications line or wireless link. These networks are often broken up into different network segments, VLANs, where network traffic travels at relatively high speeds, even if the group of systems and devices is physically located across various LAN subnets. VLANs are based on logical and virtualized connections, using layer 2 broadcast messages and layer 3 routers to implement this LAN virtualization.
  • Overlay networks: Offering better flexibility, utilization, cost, and scalability than non-virtualized network solutions, overlay networks use encapsulation and communication channel bandwidth tunneling to virtualize a network. A network’s communication medium is split into different channels, with each channel assigned to a particular service or device.
    • Packets traveling over the channels are encapsulated inside another packet for the trip.
    • When the receiving end of the tunneled channel gets the encapsulated packet, the packet is removed from its capsule and handled.
    • An overlay network employs virtual switches, tunneling protocols, and software-defined networking (SDN) in addition to the typical network hardware. Software-defined networking is a method for controlling and managing network communications via software, with an SDN controller program and northbound and southbound APIs. Other applications on the network see the SDN as a logical network switch.

Configuring Virtualized NICs

Depending on the configuration and the employed hypervisor, virtual NICs (adapters) sometimes connect to the host’s physical NIC and other times connect to a virtual switch.

(Diagram: A virtual machine with virtual NICs connected to virtual switches.)

Network interface cards on virtual machines offer many configuration choices, so you need to know what you are doing to make sure the configuration is done properly.

Host-only/local adapter: Connects to a virtual network contained within the virtual machine’s host system; there is no connection to the external physical/virtual network to which the host system is attached. The benefits of a host-only adapter are:

  • Speed: This is a very fast connection if the host system has two or more virtual machines, because the VMs’ network traffic takes place in the host system’s RAM and not on external networks.
  • Security: When using two virtual machines, one can act as a proxy server, utilizing a different NIC configuration to access the external network. The second, employing a host-only adapter, sends and receives its web requests through the VM with the connection to the outside network; the end result is that the first VM functions as a proxy server.

Bridged adapter: Makes the virtual machine function as a node on the LAN or VLAN to which the host system is attached. The VM gets its own IP address and can be seen on the network. The virtual NIC is connected to the host machine’s physical NIC and transmits its own traffic to/from the external physical (or virtual) network. In the example above, the VM functioning as the proxy server uses a bridged adapter.

Network address translation (NAT) adapter:   The configuration of a virtual NAT is similar to a physical NAT.  A physical NAT uses a network device, such as a router, to “hide” a LAN computer system’s IP address when that computer sends traffic out onto another network segment. All the other LAN systems’ IP addresses are translated into a single IP address to other network segments. The router tracks each LAN computer’s external traffic, so when traffic is sent back to that system, it is routed to the appropriate computer. 

  • A virtualized NAT table is maintained by the hypervisor instead of a network device; the hypervisor uses the IP address of the host system as the single IP address that is sent out onto the external network.
  • Each virtual machine has its own IP address within the host system’s virtual network.
  • Offers enhanced security by keeping internal IP addresses private from the external network.
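With libvirt, for example, a NAT-mode virtual network can be defined with an XML file like the sketch below and loaded with “virsh net-define”; the network name, bridge name, and addresses are illustrative:

```
<network>
  <name>nat-net</name>
  <forward mode='nat'/>
  <bridge name='virbr1'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.100'/>
    </dhcp>
  </ip>
</network>
```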

Dual/Multi-homed: A dual- or multi-homed system is a computer that has two or more active network adapters, providing redundancy and load balancing of external network traffic. In the virtual world, many virtual machines are dual-homed or even multi-homed, depending on the virtual networking environment configuration and goals.

In our virtual proxy server example, the proxy server VM is dual-homed:

  • It has one internal network NIC (host-only) to communicate with the protected virtual machine.
  • It has a bridged adapter to transmit and receive packets on the external network.

The image below shows the complete network picture of this virtual proxy server.

(Diagram: The complete network layout of the virtual proxy server.)
