Comparing Public Cloud Performance – Part Three – GCP

In the first part of this series I looked at Azure VMs and provided a comparison with IONOS Enterprise Cloud, and in the second part we looked at AWS. This final post of the series compares Google Cloud Platform (GCP).

As a bit of background in case you haven't read the first or second parts yet, I've been working with the major cloud vendors for some years now, and for me performance has always been a key factor when choosing the right platform. I've always struggled to find the right balance of cost vs performance, and I've created this blog to highlight some of the differences.

I've just started a new role as Cloud Architect for 1&1 IONOS Enterprise Cloud, and one of the main factors in coming here was the technology and some of the claims it makes, especially around performance and simplicity. This blog will put those performance claims to the test and also highlight the cost benefit of choosing the right cloud provider.

For these tests I've kept it simple. I'm using small instances that will host microservices, so cost is one variable but performance is another. I will be creating an instance with 1 vCPU and 2GB RAM as a baseline for testing, and I will use Novabench (novabench.com) for some basic CPU and RAM performance modelling. There are plenty of tools out there, but I find this one really quick and simple for testing some key attributes, and using the same tool across all the instances should give unbiased results too.

So on with the comparison, and next up is GCP. For this I've selected a custom VM size, as this is as near as possible to the instances on the other clouds I have been testing. The CPU used is an Intel Xeon 2.3GHz, and the price for this, including Windows Server licensing and support costs, comes out at £50.64 per month.


GCP Pricing calculator for Custom VM

For IONOS Enterprise Cloud I've selected a similar spec to GCP, 1 CPU and 2GB RAM, and have used the Intel Haswell E5-2660 v3 based chip for the OS, as this will be as close as possible to the custom VM in GCP. Like GCP, I've also included the Windows Server license cost in the subscription along with 24/7 support, which is actually free. The monthly cost for this server is £59.18, so comparing costs you would save £102.48 over the year by using GCP. So GCP has a cost edge over IONOS, but what about the performance?


IONOS Enterprise Cloud Pricing for GCP 1 CPU 2GB RAM equivalent

First I wanted to see how the external and internal internet connectivity was performing. To no big surprise, IONOS way outperformed GCP by a factor of 2, which is to be expected given the infrastructure back-end design running on InfiniBand and the datacentre interconnects. The download speed, though, was comparable for Google, which you would expect from the internet giant.

GCP Speedtest performance rating

IONOS Enterprise Cloud Speedtest performance rating

Next the focus turned to CPU, RAM and disk performance. For this I ran the Novabench performance utility on both servers, and the tests did throw up some major differences between the two. Let's take a look at GCP first.

GCP custom 1 vCPU & 2GB RAM VM Novabench Results

The GCP results were interesting in that twice the resources would be required to get to the level of the IONOS instance. The GCP instance scored more or less half that of IONOS for its CPU, RAM and disk benchmarks, though it must be noted that GCP resources are shared with other instances hosted on GCP. The RAM throughput was also much lower, with a difference of 11964 MB/s, but what was most noticeable was that the disk read and write performance was half that of IONOS; the write speed was not what you would expect from SSD storage.

The IONOS Enterprise Cloud exhibited nearly twice the values of GCP across the results.

IONOS Instance Novabench result

Conclusion

Due to the dedicated resources used by IONOS Enterprise Cloud, it becomes apparent that other public cloud vendors have to double (GCP & AWS) or even quadruple (Azure) their resource configurations to be comparable in performance to IONOS. For GCP to catch up to similar performance to the IONOS Enterprise Cloud instance, the GCP instance would need to be reconfigured to a custom VM with 2 vCPUs and 4GB RAM, twice the resources of the IONOS instance. That would increase the monthly cost to £94.57, which equates to £1134.84 for the year, meaning you would pay an extra £424.68 per year for an instance of equal performance to the IONOS one.

GCP custom 2 vCPU & 4GB RAM VM Novabench Results

Can you really justify spending an additional £400-plus per year for just one system to get the same performance? IONOS Enterprise Cloud provides dedicated CPU and memory and is surely the way to go.

Don’t just take my word for it, give it a go yourself, I’m sure you’ll be impressed with the results.

Get your free 30 day no obligation trial at https://www.ionos.co.uk/pro/enterprise-cloud/

Comparing Public Cloud Performance – Part Two – AWS

In the first part of this series I looked at Azure VMs and provided a comparison with IONOS Enterprise Cloud; this next part will focus on AWS.

As a bit of background in case you haven't read the first part yet, I've been working with the major cloud vendors for some years now, and for me performance has always been a key factor when choosing the right platform. I've always struggled to find the right balance of cost vs configuration, and I've created this blog to highlight some of the differences.

I've just started a new role as Cloud Architect for 1&1 IONOS Enterprise Cloud, and one of the main factors in coming here was the technology and some of the claims it makes, especially around performance and simplicity. This blog will put those performance claims to the test and also highlight the cost benefit of choosing the right cloud provider.

For these tests I've kept it simple. I'm using small instances that will host microservices, so cost is one variable but performance is another. I will be creating an instance with 1 vCPU and 2GB RAM as a baseline for testing, and I will use Novabench (novabench.com) for some basic CPU and RAM performance modelling. There are plenty of tools out there, but I find this one really quick and simple for testing some key attributes, and using the same tool across all the instances will show unbiased results too.

So on with the comparison, and next up is AWS. As AWS doesn't have a 1 CPU and 2GB RAM flavour to choose from, I've selected the m4.large size, as this is as near as possible to the instances on the other clouds I have been testing, albeit double the IONOS Enterprise Cloud size. The CPU used is an Intel Haswell E5-2660, and the price for this, including Windows Server licensing and support costs, comes out at $140.55 per month, which equates to £109.22 as calculated by Google currency converter at the time of writing.

AWS Pricing calculator for M4 Large

For IONOS Enterprise Cloud I've selected a slightly reduced spec compared to AWS and have used the Intel Haswell E5-2660 v3 based chip for the OS, as going by my testing this should be very close to the M4 Large instance in AWS. As with AWS, I've also included the Windows Server license cost in the subscription along with 24/7 support, which is actually free. The monthly cost for this server is £50.96, so comparing costs there would be a saving of £699.12 over the year with IONOS Enterprise Cloud. A saving is a saving, so on paper the costs look good so far.

IONOS Enterprise Cloud Pricing

Now what about performance tests between the two? First I wanted to see how the external and internal internet connectivity was performing. To no big surprise, IONOS way outperformed AWS by a factor of 2, which is to be expected given the infrastructure back-end design running on InfiniBand and the datacentre interconnects.

AWS Speedtest performance rating

IONOS Enterprise Cloud Speedtest performance rating

Next the focus turned to CPU, RAM and disk performance. For this I ran the Novabench performance utility on both servers, and the tests did throw up some major differences between the two. Let's take a look at AWS first.

AWS M4 Large Instance Novabench Results

The AWS results were interesting in that twice the resources were required to get to the level of the IONOS instance. The AWS instance scored more or less equal for its CPU, RAM and disk benchmarks, but it must be noted that AWS resources are shared with other instances hosted on AWS. The RAM throughput was also lower, with a difference of 5733 MB/s, but what was most noticeable was that the disk read and write performance was half that of IONOS.

The IONOS Enterprise Cloud exhibited similar results to AWS while consuming half the resources.

IONOS Instance Novabench result

Conclusion

Due to the dedicated resources used by IONOS Enterprise Cloud, it becomes apparent that other public cloud vendors have to double (AWS & Google) or even quadruple (Azure) their resource configurations to be comparable in performance to IONOS. To get to similar performance to the IONOS Enterprise Cloud instance, the AWS instance has to be double the IONOS spec, which means the m4.large at $140.55 or £109.22 per month, equating to £1310.64 for the year, of which £699.12 is the premium over an IONOS instance of equal performance. And don't forget this is for a single system; once you're deploying 100s or 1000s of instances that soon racks up.

Can you really justify spending an additional £700 per year for one system to get the same performance? IONOS Enterprise Cloud provides dedicated CPU and memory and is surely the way to go.

Get your free 30 day no obligation trial at https://www.ionos.co.uk/pro/enterprise-cloud/

Comparing Public Cloud Performance – Part One – Microsoft Azure

I've been working with the major cloud vendors for some years now, and for me performance has always been a key factor when choosing the right platform for Infrastructure-as-a-Service. I've always struggled to find the right balance of cost vs configuration, and I've created this 3-part blog to highlight some of the differences I've seen between Azure, AWS and Google Cloud.

I've just started a new role as Cloud Architect for 1&1 IONOS, working in the Enterprise Cloud division, and one of the main factors in coming here was the technology stack, the surrounding network design and some of the claims it makes, especially around performance and simplicity. This blog will put those performance claims to the test and also highlight the cost benefit of choosing the right cloud provider.

For the tests I've kept it simple. I will be using small instances that will eventually host microservices with Docker, so cost will be one variable but performance is another. I will be creating an instance with 1 vCPU and 2GB RAM as a baseline for testing, and I will use Novabench (novabench.com) for some basic CPU and RAM performance modelling. There are plenty of tools out there, but I find this one really quick and simple for testing some key attributes, and I will use the same tool for all the cloud vendors' instances, so this should show unbiased results too.

Let's start by looking at Azure. For this I've selected the A1_v2 size, as this is consistent with the instances on the other clouds I will be testing. The CPU used is an Intel Haswell E5-2673 v3, and the price for this, including Windows Server licensing and support costs, comes out at £62.20 per month.

Azure Pricing calculator for A1_v2

For IONOS Enterprise Cloud I've selected a similar spec and have used the Intel Haswell E5-2660 v3 based chip for the OS, as this will be very close to the A1_v2 instance in Azure. Like Azure, I've also included the Windows Server license cost in the subscription along with 24/7 support, which is actually free. The monthly cost for this server is £50.96, so comparing costs there would be a saving of £134.88 over the year with IONOS Enterprise Cloud. A saving is a saving, so on paper the costs look good so far.

IONOS Enterprise Cloud Pricing for A1_v2 equivalent

Now, what about performance tests between the two? First I wanted to see how the external and internal internet connectivity was performing. To no big surprise, IONOS way outperformed Azure by a factor of 3, which is to be expected given the infrastructure back-end design running on InfiniBand and the datacentre interconnects.

Azure Speedtest performance rating

IONOS Enterprise Cloud Speedtest performance rating

Next, the focus turned to CPU, RAM and disk performance. For this I ran the Novabench performance utility on both servers, and the tests did throw up some major differences between the two. Let's take a look at Azure first.

Azure A1_v2 Instance Novabench Results

The Azure instance had a low score for its CPU benchmark, which makes sense as the CPU is a shared resource with other instances hosted on that Hyper-V cluster node within the Azure cloud. The RAM score was also low, with a throughput of 3929 MB/s. What was noticeable was that the disk read performance was good, with a throughput of 163 MB/s, but the write speeds were the complete polar opposite.

The IONOS Enterprise Cloud eclipsed the metrics of the Azure instance and really showed off the advantage of having dedicated CPU and memory resources for the instance.

IONOS Instance Novabench result

The CPU performance was 385% that of the Azure CPU; for Azure to achieve a similar score, an additional 3 CPUs would have to be added. The RAM speed was also way beyond Azure, achieving 19318 MB/s, a factor of 3 faster. The disk read and write performance both outperformed Azure, maintaining an equal throughput for reads and writes, with writes outperforming Azure by 18 times. Just a note here that I used a standard HDD as the storage medium; I could have used an SSD instead, which would have increased the performance even more.

Finally, I configured another instance in IONOS Enterprise Cloud using an AMD Opteron 62xx 2.8GHz processor to see if it could match the Intel-based Azure instance, and for much of the benchmark it scored comparably to the Azure instance. Even better, the cost of the instance was £31.52 a month, giving a saving of £368.16 over the year. It should be mentioned that IONOS Enterprise Cloud lets you configure cores and storage at will in the most granular way possible: core by core and gigabyte by gigabyte.

IONOS AMD Instance Novabench result

Conclusion

For Azure to catch up to similar performance to that of IONOS Enterprise Cloud, the Azure instance would need to be reconfigured to an A4_v2 size, 4 times the resources of the IONOS instance. That would increase the monthly cost to £182.44, which equates to £2189.28 for the year, of which £1577.76 is the premium over an IONOS instance of equal performance.

Azure A4_v2 Instance Novabench Results

Can you really justify spending nearly £1600 more per year for the same performance? IONOS Enterprise Cloud employs KVM-based virtualisation, makes extensive use of hardware virtualisation, maps the CPU power of a real core to a vCPU and provides dedicated memory, so it is surely the way to go.

Get your free 30 day no obligation trial at https://www.ionos.co.uk/pro/enterprise-cloud/

Openstack – Configuring for LVM Storage Backend

The volume service is able to make use of a volume group attached directly to the server on which the service runs. This volume group must be created exclusively for use by the block storage service and the configuration updated to point to the name of the volume group.
The following steps must be performed while logged into the system hosting the volume service as the root user:
  1. Use the pvcreate command to create a physical volume.
    # pvcreate DEVICE
      Physical volume "DEVICE" successfully created
    Replace DEVICE with the path to a valid, unused, device. For example:

    # pvcreate /dev/sdX
  2. Use the vgcreate command to create a volume group.
    # vgcreate cinder-volumes DEVICE
      Volume group "cinder-volumes" successfully created
    Replace DEVICE with the path to the device used when creating the physical volume. Optionally replace cinder-volumes with an alternative name for the new volume group.
  3. Set the volume_group configuration key to the name of the newly created volume group.
    # openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT volume_group cinder-volumes
    The name provided must match the name of the volume group created in the previous step.
  4. Ensure that the correct volume driver for accessing LVM storage is in use by setting the volume_driver configuration key to cinder.volume.drivers.lvm.LVMISCSIDriver.
    # openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
The volume service has been configured to use LVM storage.
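
As a quick sanity check you can read the keys back and restart the volume service so the changes take effect. This assumes the openstack-config utility used above and the openstack-cinder-volume service name that RDO-style installs use:

# openstack-config --get /etc/cinder/cinder.conf DEFAULT volume_group
cinder-volumes
# vgs cinder-volumes
# service openstack-cinder-volume restart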

Openstack Juno – RDO Packstack deployment to an external network & config via Neutron

Openstack is a solution that I have been following on and off over the past couple of years but never really had enough time to give it the focus it probably deserves. The current project I am working on has an element of interaction with Openstack, so it seems a great opportunity to gain some in-depth hands-on experience, giving me greater insight into how the various Openstack components click together and the level of interaction required with existing environments.

Having already built a fairly solid VMware and Hyper-V lab environment meant that I wasn't going to crash and burn what I already have; I needed to shoehorn an Openstack deployment into the lab environment, utilizing the existing network and storage facilities already available. This blog post will endeavor to lay out the steps required to take an Openstack deployment from start to operational build and go over some of the hurdles I encountered along the way. As some background, in my existing lab I use a typical 192.168.1.0/24 range of IPv4 addresses and have a router to the outside world at 192.168.1.254. If your lab's the same then it's just a matter of running the commands; if not, modify the address ranges to suit yours.

So many flavors to choose from.

Before I go into the steps, I wanted to highlight some of the hurdles I encountered in building the Openstack deployment. The first question I asked myself was which distribution to use; initially I reviewed the Openstack docs to see the process for building the latest version of Openstack, Juno. Ubuntu and Centos seemed to be the most common distributions in use, and I went for Ubuntu first because of the Devstack deployment process, which a friend of mine suggested I check out. The docs surrounding Devstack (http://docs.openstack.org/developer/devstack/) are good, but not so straightforward; it wasn't clear exactly which files needed creating or modifying to build the environment. For example, it wasn't clear whether you needed to create the configuration file (local.conf or localrc) to get the options you want installed and configured. After a couple of attempts I did get a working environment going, but initially it was a basic Nova/Network setup; only after finding the correct way to configure the local.conf file did I get Neutron installed, although configuring it was another matter. I had many late nights trying to get a working environment but eventually gave up on it.
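
For anyone going down the Devstack route, the kind of thing that eventually got Neutron enabled for me was a minimal local.conf in the root of the devstack checkout along these lines; treat it as a sketch (the passwords are placeholders) rather than a definitive recipe:

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# swap nova-network for Neutron
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta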

After ditching the Ubuntu build I looked at building with Centos; having used Redhat for many years it felt much more comfortable. I carried out some research on the options for Centos and went for an automated installation process using RDO (https://www.rdoproject.org/Main_Page), a community project for Redhat, Fedora and Centos deployments, supported by the users of that community. One thing I have found with both Devstack and RDO is that the information is out there, but it is spread all over the place and not all sites are up to date; for example, some still focus on Havana or Icehouse and not many have info on Juno. Hopefully this guide will bring the installation steps into a single document which will help you.

Building out the Openstack environment following steps 1 to 27

Below are the steps I have created to build out an Openstack Juno deployment on a single VM or physical system based on Centos 7. It will use Neutron and connect to the existing external lab network of 192.168.1.0/24. The Openstack VM will have an IP of 192.168.1.150, which we will configure as a bridge. We will create a new network for the Openstack instances using a private IP pool of 10.0.0.0/24 and floating IPs on 192.168.1.0/24, with a floating IP range of 192.168.1.201-192.168.1.220 so that 20 IPs are available for instances if needed.

I will use vSphere 6, but vSphere 5.x would be fine too; my vSphere servers can run nested virtualization, which is ideal as I can create a snapshot and revert it if certain things fail.

1.      Create a new VM. For my requirements I have created a VM with 16GB RAM, which is enough to run a couple of instances alongside Openstack itself. It has a boot disk of 20GB, and I added another 100GB disk which I will use for Cinder (block storage). I have also attached 2 virtual network cards, both directly connected to the main network.

2.     Install Centos 7.0 on the VM or physical system; I used CentOS-7.0-1406-x86_64-Minimal.iso for my build. Install the OS, providing the configuration inputs as requested by the install process.

3.     Some additional housekeeping I do on the image is to rename the enoXXXXXXXX network devices to eth0 and eth1; I'm a bit old school with device naming.

Modify /etc/default/grub and append ‘net.ifnames=0 biosdevname=0‘ to the GRUB_CMDLINE_LINUX= statement.

# vi /etc/default/grub
GRUB_CMDLINE_LINUX="rd.lvm.lv=rootvg/usrlv rd.lvm.lv=rootvg/swaplv crashkernel=auto vconsole.keymap=us rd.lvm.lv=rootvg/rootlv vconsole.font=latarcyrheb-sun16 rhgb quiet net.ifnames=0 biosdevname=0"

4.     Next, regenerate the grub config

# grub2-mkconfig -o /boot/grub2/grub.cfg

5.     Rename the config files for both eno devices

# mv /etc/sysconfig/network-scripts/ifcfg-eno16777736 /etc/sysconfig/network-scripts/ifcfg-eth0

6.     Repeat for eth1

# mv /etc/sysconfig/network-scripts/ifcfg-eno32111211 /etc/sysconfig/network-scripts/ifcfg-eth1
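
Renaming the files alone isn't enough; the NAME= and DEVICE= entries inside each file still reference the old eno names. Assuming the stock CentOS 7 file contents, a quick search and replace covers it:

# sed -i 's/eno16777736/eth0/g' /etc/sysconfig/network-scripts/ifcfg-eth0
# sed -i 's/eno32111211/eth1/g' /etc/sysconfig/network-scripts/ifcfg-eth1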

7.      Reboot to apply the changes.

# reboot

The RDO Install process

8.     Bring the Centos OS up to date

# yum update -y

9.     Open up the SELinux barriers a bit; this is a lab environment so we can loosen the security a little

# vi /etc/selinux/config
Change SELINUX=enforcing to SELINUX=permissive

10.  Install the epel repository

# yum install epel-release -y

11.   Modify the epel repo and enable core, debuginfo and source sections.

# vi /etc/yum.repos.d/epel.repo

[epel]
enabled=1
[epel-debuginfo]
enabled=1
[epel-source]
enabled=1

12.   Install net tools

# yum install net-tools -y

13.   Install the RDO release

# yum install -y http://rdo.fedorapeople.org/rdo-release.rpm

14.   Install openstack packstack

# yum install -y openstack-packstack

15.   Install openvswitch

# yum install openvswitch -y

16.   Final update

# yum update -y

Cinder volume preparation

17.   Install lvm2

# yum install lvm2 -y

18. Build out using packstack puppet process

# packstack --allinone --provision-all-in-one-ovs-bridge=n
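
A side note: --allinone decides everything for you in one shot. If you want a repeatable or tweakable build, packstack can first write its choices to an answer file which you edit and then apply; the file name here is just an example:

# packstack --gen-answer-file=/root/answers.txt
# vi /root/answers.txt
# packstack --answer-file=/root/answers.txt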

19.  Remove the 20GB loopback file created by the Packstack install and create a new cinder-volumes group on the 100GB virtual disk

# vgremove cinder-volumes
# fdisk /dev/sdb
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb

UPDATE

Instead of the changes to eth1 and br-ex below, I have found a simpler method of using eth1 as the NIC attached to the OVS switch. Just remember that if the server is rebooted, check that eth1 is still connected to the br-ex port group.

20. Add eth1 to the openvswitch br-ex ports

# ovs-vsctl add-port br-ex eth1
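
You can confirm the port was added with a quick look at the bridge:

# ovs-vsctl list-ports br-ex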

Change network configuration for /etc/sysconfig/network-scripts/ifcfg-br-ex & /etc/sysconfig/network-scripts/ifcfg-eth1

# vi /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.1.150
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
DNS1=192.168.1.1
DNS2=192.168.1.254
ONBOOT=yes

# vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
HWADDR=52:54:00:92:05:AE # your hwaddr
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

21.  Additional network configuration for the bridge; the OVS plugin needs to know the physical network name and its mapping to the bridge, i.e. the following settings (see the sketch after these settings for where to put them):

network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex
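
On an RDO Packstack Juno box these normally live in the plugin file referenced by /etc/neutron/plugin.ini (an assumption on my part; check where your build keeps its [ovs] section) and can be set with the same openstack-config tool used elsewhere, then the agent restarted:

# openstack-config --set /etc/neutron/plugin.ini ovs network_vlan_ranges physnet1
# openstack-config --set /etc/neutron/plugin.ini ovs bridge_mappings physnet1:br-ex
# systemctl restart neutron-openvswitch-agent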

22.   Restart the network services so that the config takes effect

# service network restart

Configure new network and router to connect onto external network

23.  Remove old network configuration settings

# . keystonerc_admin
# neutron router-gateway-clear router1
# neutron subnet-delete public_subnet
# neutron subnet-delete private_subnet
# neutron net-delete private
# neutron net-delete public

24.  Open ports for icmp pings and connection via ssh

# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
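
If you want to double-check the rules landed, list them back out:

# nova secgroup-list-rules default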

25.  Create new private network on 10.0.0.0/24 subnet

# neutron net-create private 
# neutron subnet-create private 10.0.0.0/24 --name private --dns-nameserver 8.8.8.8

26.  Create new public network on 192.168.1.0/24 subnet

# neutron net-create homelan --router:external=True 
# neutron subnet-create homelan 192.168.1.0/24 --name homelan --enable_dhcp False --allocation_pool start=192.168.1.201,end=192.168.1.220 --gateway 192.168.1.254

27.  Create new virtual router to connect private and public networks

# HOMELAN_NETWORK_ID=`neutron net-list | grep homelan | awk '{ print $2 }'` 
# PRIVATE_SUBNET_ID=`neutron subnet-list | grep private | awk '{ print $2}'` 
# ADMIN_TENANT_ID=`keystone tenant-list | grep admin | awk '{ print $2}'` 
# neutron router-create --tenant-id $ADMIN_TENANT_ID router 
# neutron router-gateway-set router $HOMELAN_NETWORK_ID
# neutron router-interface-add router $PRIVATE_SUBNET_ID
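
At this point it's worth confirming the router actually picked up a gateway address from the homelan allocation pool; the external_gateway_info field should show an IP from the 192.168.1.201-220 range:

# neutron router-show router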

That’s the install and configuration process complete. I will continue this series of blogs with deployment of instances and floating IP allocation.

Hope this has helped you deploy Openstack. Feel free to leave me a comment.

Getting started with KVM

This is the first in my series of KVM tutorials. This guide will walk you through the installation and configuration of KVM.

Getting the system ready for KVM Virtualization

For those of you that have played with Xen, you will have noticed that it is normally necessary to have the correct version of the kernel to run it; with KVM this is not the case. Today's versions of Linux will more than likely be ready to support KVM from within their kernels, and all that is left for you to do is to install the KVM kernel module.
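
Before installing anything it's worth confirming the CPU exposes the hardware virtualization extensions KVM relies on (vmx for Intel VT-x, svm for AMD-V); any non-zero count from the following means you're good to go:

egrep -c '(vmx|svm)' /proc/cpuinfo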

With a standard installation, the modules and tools are not installed by default unless you specifically select them, for example within the RHEL 6 installation process.

To install KVM from the command prompt, execute the following command in a terminal window with root privileges:

yum install kvm virt-manager libvirt

If the installation fails just check that you have not attempted to install KVM on a 32-bit system, or a 64-bit system running a 32-bit version of RHEL 6:

Error: libvirt conflicts with kvm

Once the installation of KVM is complete, it is recommended to close any running applications and reboot the system.

Once the system has restarted, you can check that everything is OK with the installation by making sure that the two KVM modules have been loaded into the kernel. This can be done by running the following command:

su -
lsmod | grep kvm

The output of the above command should look similar to the following:

lsmod | grep kvm
kvm_intel              45578  0
kvm                   291875  1 kvm_intel

The installation should have configured the libvirtd daemon to run in the background. Using a terminal window with super user privileges, run the following command to check that libvirtd is running:

/sbin/service libvirtd status
libvirtd (pid  xxxx) is running...

If the process is not running, it can be started as follows:

/sbin/service libvirtd start

You're now ready to launch the Virtual Machine Manager “virt-manager” by selecting Applications > System Tools > Virtual Machine Manager. If the QEMU entry is not listed, select the File > Add Connection menu option and select the QEMU/KVM hypervisor before clicking on the Connect button.
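
If you prefer to verify the connection from the command line first, virsh (installed alongside libvirt) talks to the same hypervisor; an empty list is fine, it simply proves the QEMU/KVM connection works:

su -
virsh -c qemu:///system list --all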

If all went OK then you should now be ready to create virtual machines into which guest operating systems may be installed.

In the next post we shall look at virtual machine creation and general configuration for KVM such as Disks, Networks and system management.