Openstack – Configuring for LVM Storage Backend

The volume service is able to make use of a volume group attached directly to the server on which the service runs. This volume group must be created exclusively for use by the block storage service and the configuration updated to point to the name of the volume group.
The following steps must be performed while logged into the system hosting the volume service as the root user:
  1. Use the pvcreate command to create a physical volume.
    # pvcreate DEVICE
      Physical volume "DEVICE" successfully created
    Replace DEVICE with the path to a valid, unused, device. For example:

    # pvcreate /dev/sdX
  2. Use the vgcreate command to create a volume group.
    # vgcreate cinder-volumes DEVICE
      Volume group "cinder-volumes" successfully created
    Replace DEVICE with the path to the device used when creating the physical volume. Optionally replace cinder-volumes with an alternative name for the new volume group.
  3. Set the volume_group configuration key to the name of the newly created volume group.
    # openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT volume_group cinder-volumes
    The name provided must match the name of the volume group created in the previous step.
  4. Ensure that the correct volume driver for accessing LVM storage is in use by setting the volume_driver configuration key to cinder.volume.drivers.lvm.LVMISCSIDriver.
    # openstack-config --set /etc/cinder/cinder.conf \
    DEFAULT volume_driver cinder.volume.drivers.lvm.LVMISCSIDriver
The volume service has been configured to use LVM storage.
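
With the driver and volume group set, the changes only take effect once the volume service is restarted. A minimal follow-up sketch (the openstack-cinder-volume service name assumes an RDO/Packstack-style install; adjust to your distribution):

```shell
# Restart the volume service so it picks up the new volume_group and
# volume_driver settings (service name assumed from an RDO-style install).
service openstack-cinder-volume restart

# Confirm the volume group is visible and has free space.
vgs cinder-volumes
```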
Posted in KVM, Openstack

Openstack Liberty Error: Unable to retrieve volume limit information.

After an Openstack Liberty deployment you may encounter one of the following errors: "Error: Unable to retrieve volume limit information." or "Danger: There was an error submitting the form. Please try again."

These errors are the result of a misconfiguration in Cinder. To resolve them, edit the /etc/cinder/cinder.conf file and make sure the following lines exist:

auth_uri = http://keystone_ip:5000
auth_url = http://keystone_ip:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = services
username = cinder
password = [cinder password]  # from the Packstack answer file; stored in CONFIG_CINDER_KS_PW
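
Before restarting anything, it can help to confirm that the file really contains every key. The helper below is a hypothetical convenience, not part of OpenStack; it simply greps a cinder.conf-style file for each of the lines listed above and reports any that are missing:

```shell
# check_cinder_conf FILE: print "missing: KEY" for each required
# keystone credential key absent from FILE (hypothetical helper,
# not part of OpenStack).
check_cinder_conf() {
  for key in auth_uri auth_url auth_plugin project_domain_id \
             user_domain_id project_name username password; do
    grep -q "^${key}[[:space:]]*=" "$1" || echo "missing: $key"
  done
}
# Usage: check_cinder_conf /etc/cinder/cinder.conf
```

If the function prints nothing, all the keys are present and you can move on to restarting the services.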

After you have verified or added these lines, restart the Cinder services by running:

# service openstack-cinder-api restart
# service openstack-cinder-backup restart
# service openstack-cinder-scheduler restart
# service openstack-cinder-volume restart

Posted in Uncategorized

Using multiple external networks in OpenStack Neutron

I haven’t found a lot of documentation about this, but basically, here’s how to do it. Let’s assume the following:

  • you start from a single external network, which is connected to ‘br-ex’
  • you want to attach the new external network to ‘eth1’.

On the network node (where neutron-l3-agent, neutron-dhcp-agent, etc. run):

  • Create a second OVS bridge, which will provide connectivity to the new external network:
ovs-vsctl add-br br-eth1

ovs-vsctl add-port br-eth1 eth1

ip link set eth1 up
  • (Optionally) If you want to plug a virtual interface into this bridge and add a local IP on the node to this network for testing:
ovs-vsctl add-port br-eth1 vi1 -- set Interface vi1 type=internal

ip addr add dev vi1 # you may adjust your network CIDR, or set your system configuration to set this up at boot.
  • Edit your /etc/neutron/l3_agent.ini and set/change the following (leaving both values empty):
gateway_external_network_id =

external_network_bridge =

This change tells the l3 agent that it must rely on the physnet<->bridge mappings in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini; it will automatically patch those bridges and router interfaces together. For example, in tunneling mode, it will patch br-int to the external bridges and set the external qrouter interfaces on br-int.

  • Edit your /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to map ‘logical physical nets’ to ‘external bridges’
bridge_mappings = physnet1:br-ex,physnet2:br-eth1
  • Restart your neutron-l3-agent and your neutron-openvswitch-agent
service neutron-l3-agent restart

service neutron-openvswitch-agent restart

At this point, you can create two external networks. (Please note: if you don’t make the l3_agent.ini changes, the l3 agent will start complaining and will refuse to work.)

neutron net-create ext_net --provider:network_type flat --provider:physical_network physnet1 --router:external=True

neutron net-create ext_net2 --provider:network_type flat --provider:physical_network physnet2 --router:external=True

And for example create a couple of internal subnets and routers:

# for the first external net

neutron subnet-create ext_net --gateway  --enable_dhcp=False # with no explicit allocation pool here, all the available IPs go into the pool

neutron router-create router1

neutron router-gateway-set router1 ext_net

neutron net-create privnet

neutron subnet-create privnet --gateway  --name privnet_subnet

neutron router-interface-add router1 privnet_subnet

# for the second external net

neutron subnet-create ext_net2 --allocation-pool start=,end= --gateway= --enable_dhcp=False

neutron router-create router2

neutron router-gateway-set router2 ext_net2

neutron net-create privnet2

neutron subnet-create privnet2 --gateway  --name privnet2_subnet

neutron router-interface-add router2 privnet2_subnet
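
The bridge_mappings value used earlier is a comma-separated list of physnet:bridge pairs. As a minimal sketch of how that string decomposes (print_bridge_mappings is a hypothetical illustrative helper, not a Neutron tool):

```shell
# print_bridge_mappings VALUE: expand a Neutron bridge_mappings string
# into one "physnet -> bridge" line per mapping (illustrative only).
print_bridge_mappings() {
  echo "$1" | tr ',' '\n' | while IFS=: read -r physnet bridge; do
    echo "$physnet -> $bridge"
  done
}

print_bridge_mappings "physnet1:br-ex,physnet2:br-eth1"
```

Each physnet name on the left is what you pass to --provider:physical_network; the bridge on the right is the OVS bridge the agent patches traffic to.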
Posted in Openstack

Setting Up a Flat Network with Openstack Neutron

This setup will allow the VMs to use an existing network. In this example, eth2 is connected to the pre-existing network that we want to use for the OpenStack VMs.

All the configuration is done on the dedicated network node.

1. Set up the Open vSwitch bridge:

# ovs-vsctl add-br br-eth2
# ovs-vsctl add-port br-eth2 eth2

2. Set up /etc/network/interfaces (the node’s IP is assigned to the bridge):

auto eth2
iface eth2 inet manual
up ifconfig $IFACE up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

auto br-eth2
iface br-eth2 inet static

3. Tell Open vSwitch to use the bridge connected to eth2 (br-eth2) and map physnet1 to it /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth2

4. Tell Neutron not to use the metadata proxy in /etc/nova/nova.conf (otherwise you get an HTTP 400 when querying the metadata service):


Note that the metadata service is still managed by Neutron as usual, via the neutron-metadata-agent service.

5. Create the network telling to use the physnet1 mapped above to br-eth2:

# neutron net-create flat-provider-network --shared  --provider:network_type flat --provider:physical_network physnet1

6. Create the subnet:

# neutron subnet-create --name flat-provider-subnet --gateway --dns-nameserver  --allocation-pool start=,end=  flat-provider-network

That’s it. Now VMs will get an IP from the specified range and will be directly connected to our network via Open vSwitch.
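
As an optional hedged check (the cirros image and m1.tiny flavor are placeholders for whatever exists in your environment), you can boot a test VM directly on the new network:

```shell
# Look up the network ID and boot a throwaway VM on the flat provider
# network; image and flavor names are assumptions -- substitute your own.
NET_ID=$(neutron net-list | awk '/ flat-provider-network / {print $2}')
nova boot --flavor m1.tiny --image cirros --nic net-id="$NET_ID" testvm
```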

Posted in Openstack

Fixing Openstack Kilo Horizon Re-login issue

There is an issue in OpenStack Kilo where logging back in to Horizon fails because of a bad session cookie. To fix it, make sure the following line is set:

# vi /etc/openstack-dashboard/local_settings
AUTH_USER_MODEL = 'openstack_auth.User'
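
For the change to take effect, the dashboard web server normally needs a restart; on a RHEL/CentOS-style install (an assumption based on the /etc/openstack-dashboard path) that is:

```shell
# Restart Apache so Horizon reloads local_settings
# (the httpd service name assumes a RHEL/CentOS install).
service httpd restart
```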
Posted in Openstack

Disaster Recovery as a Service: Ten steps to success

Disaster recovery is becoming top of mind for many CIOs. Understanding the success criteria to make the disaster recovery journey of your own organization smooth and successful is critical, but the path to getting there can be difficult.

Follow the ten key steps below to guide you on the right path to success.

  1. Understand why disaster recovery is important to your business, and what your specific disaster recovery requirements are.

The first key step is understanding why you are looking for a disaster recovery solution for your business, and what your requirements are, both from a disaster recovery perspective and for the solution itself. Running a Business Impact Analysis (BIA) will help quantify the impact of a disruption to your business, including the effect on your reputation and of any loss of data or staff. The BIA is very much the building block and foundation of your disaster recovery planning, and knowing the business impact of outages is probably the most important aspect in answering the “why” question. Knowing the business impact will not only drive the Service Level Agreements (SLAs) for your business processes; it will also help the disaster recovery plan minimise any prolonged outages caused by human error during the recovery process. If these aspects haven’t been thought through yet, then running a Business Impact Analysis should be the first thing that you do, and it will put you in good stead as you move forward.

An additional aspect of the disaster recovery process is understanding your Recovery Point Objective (RPO) and Recovery Time Objective (RTO). From an SLA perspective, think about the amount of downtime and data loss your business can incur. Zero data loss is obviously ideal, but it can exponentially drive up the cost of the solution; setting a realistic limit on the data loss each business service can incur is more practical. The downtime and data-loss windows translate to your RTO and RPO respectively.

Additionally, does your business require adherence to any regulatory compliance or operating rules? For example, do you need to provide proof of a quarterly or yearly disaster recovery test? Disaster recovery testing is important, and there are a lot of factors to take into consideration here. What kind of replication technology would you choose: expensive hardware-based replication, host-based replication, or even replication to the cloud? What you choose depends on various factors including cost, business policies, SLA requirements and, importantly, environmental factors. For instance, if your data center is located in an area which gets affected by floods, then your disaster recovery location needs to be in a separate geographic area or even in the cloud.

  2. Should you build your own or buy off the shelf?

The next step is driven by how much investment you want to make, either operationally or in capital expenditure. You have probably already invested quite heavily in infrastructure at your primary data center location: server hardware, virtualization technologies and storage. You could take a simple approach and invest in another physical data center for disaster recovery, but this would not only double your software and hardware infrastructure costs, it would also add the cost of a second physical location. A savvier approach would be to utilize a vendor to supply disaster recovery services at a fraction of the cost of running dual locations. Keep in mind that choosing the right vendor is important too: look for a leader in the managed disaster recovery services space with years of credible experience.

  3. Understand the difference between disaster recovery as-a-service and backup and recovery as-a-service.

Understand that disaster recovery and backup are different ball games. While backup is a necessary part of a business continuity strategy, it lends itself to SLAs of hours to days. On the other hand, disaster recovery is better suited to SLA requirements in minutes to hours. Based on the business uptime and data loss requirements specific to a business service, your business would deploy a disaster recovery solution for your business-critical applications, while backup would be sufficient for those non-critical business services which can take some downtime.  Choose a disaster recovery as-a-service solution that can protect your entire estate or at least the critical elements of it that drive your business. This includes physical and virtual systems, as well as the mix of different OSs that typically are run within enterprise businesses today. The disaster recovery as-a-service solution that you choose should also be able to provide you with the ability to run your systems within their cloud location for a period of time, until you can get your infrastructure back up and running and transfer services back to your primary site.

  4. Choose the right cloud hypervisor.

It may seem like an easy decision to make: you would seek a vendor that runs the same hypervisor on the back end as you run on your primary site, but keep in mind this is not a necessity. If you are using VMware vSphere or Microsoft Hyper-V, then running these types of hypervisors in the cloud is going to incur some additional licensing costs in a DR solution. Another thing to think about is whether you really need all the bells and whistles once you have invoked disaster recovery. Most of your time is going to be taken up with getting services back up and running at your own location as quickly as possible, so maybe not. What you basically need is a hypervisor to host your systems that provides the basic performance, scale and resilience you require. A more cost-efficient stance would be to utilise a KVM-based hypervisor running within OpenStack. This ticks the boxes in terms of being enterprise-ready, and best of all, the service costs should yield a better ROI than those running proprietary hypervisor technologies, saving your business considerable money.

  5. Plan for all business services that need to be protected, including multi-tier services.

Now we’re getting down to the nitty-gritty details. The business services that need to be protected will be primarily driven by the SLAs that brought you down this path. Make sure that you capture all the operating system types that these business services run on, and also think about how you will handle any physical systems that have not yet been virtualized. Moving virtualized applications to the cloud is an easy process, as these are already encapsulated by the hypervisor in use, but purely physical business applications are another matter altogether. It is not impossible to move physical application data to the cloud, but when it comes to a failback scenario, if the service you select does not have this capability, then you are a sitting duck. This is especially important in the case where a complete outage has occurred and a rebuild is needed. Another thing to think about is what happens when your business services or applications are started in the cloud: can you start or stop these systems in a particular order if a business service is made up of different processes, such as a multi-tier application, and can you inject manual steps into your failover plan if required? Controlling multi-tier business applications that span systems is going to be a high priority, not only while invoking disaster recovery but also when you’re performing a disaster recovery test.

  6. Plan for your RTOs, RPOs, bandwidth, latency and IOPS.

Understand how you can achieve your Recovery Point Objective (RPO) and Recovery Time Objective (RTO). The IO load of your virtual machines, and the peaky nature of writes through the business day, will help you understand what your required WAN bandwidth should be. Determine whether your disaster recovery service vendor can guarantee these RTOs and RPOs, because every additional minute or hour that your business is down, as defined by the Business Impact Analysis, is going to cost you. If you aim for an RPO of 15 minutes or less, then your bandwidth to the cloud needs to be big enough to cope with extended periods of heavy IO within your systems. If your RTO is something like 4 hours, then you need to know whether your systems can recover within that time period, keeping in mind that other operations also need to be managed, such as DNS and AD/LDAP updates, including any additional infrastructure services that your business needs.

  7. Avoid vendor lock-in while moving data to the cloud.

Understanding how your data will be sent to the cloud provider site is important. A solution that employs VMware vSphere on-premises and in the cloud limits you to a replication solution that works only for virtualized systems with no choice of protecting physical OS systems. This may seem acceptable at the time, but you will be locked into this solution and switching DR providers in the future may be difficult.  Seeking a solution that is flexible and can protect all types of major virtualization platforms as well as physical OS gives you the flexibility of choice for the future.

  8. Run successful disaster recovery rehearsals without unexpected costs.

Rehearsals or exercises are probably the most important aspect of any disaster recovery solution. Not having an automated disaster recovery rehearsal process that you test on a regular basis can leave your business vulnerable. Your recovery rehearsals should not affect your running production environment. Any rehearsal system should run in parallel albeit within a separate network VLAN, but still have some type of access to infrastructure services such as AD, LDAP and DNS etc. so that full disaster recovery testing can be carried out. Once testing is complete, it is essential that the solution include a provision to easily remove and clean up the rehearsal processes.

  9. How long can you stay in the cloud?

For a moment let’s imagine that the unthinkable has happened, and you have invoked disaster recovery to your cloud service provider. The nature of the outage at your primary location will dictate how long you will need to keep your business applications running on your service provider’s infrastructure. It is imperative that you are aware of any clauses within your contract that pertain to the length of time you can keep your business running on the cloud provider’s site. There is also a big pull to get enterprises to think about running in the cloud and staying there, but this is a big decision to make. Performance of the systems is going to be one metric to poll against, as is performance of storage, or more precisely the quality of service of the storage that the cloud vendor will provide. On the whole, it makes sense to get back onto your own infrastructure as quickly as possible, since it is custom built to support your business.

  10. How easy is it to failback business services to your own site?

Getting your data back, or reversing the replication data path, is going to be important, especially as you don’t want to affect your running systems within the cloud by injecting more downtime. Rebuilding your infrastructure is one aspect that needs to be meticulously planned, and any assistance that the solution itself can provide to make this process smoother is a bonus. Your on-premises location is going to need a full re-sync of data from the cloud location, which may take some time, so the solution should be able to handle a two-step approach to failback: the re-sync happens in one operation and, once complete, the process to switch back your systems can be done at a time that suits your business.

Success! You’re now armed to create a robust business continuity plan.

Follow the steps above to gain an understanding of what’s needed to be successful on your disaster recovery as a service journey, and use them as checkpoints while developing your own robust business continuity plan.

Posted in Microsoft, VMware

VMware Integrated Openstack 2.0 set for release before the end of Q3 2015

It’s been just six months since VMware released version 1.0 of VMware Integrated Openstack for general availability, and the next release is expected to be available for download before the end of Q3 2015. Here’s what’s new in the 2.0 release:

  • Kilo-based: VMware Integrated OpenStack 2.0 will be based on the OpenStack Kilo release, making it current with upstream OpenStack code.
  • Seamless OpenStack Upgrade: VMware Integrated OpenStack 2.0 will introduce an industry-first seamless upgrade capability between OpenStack releases. Customers will now be able to upgrade from v1.0 (Icehouse) to v2.0 (Kilo), and even roll back if anything goes wrong, in a more operationally efficient manner.
  • Additional Language Support: VMware Integrated OpenStack 2.0 will now be available in six more languages: German, French, Traditional Chinese, Simplified Chinese, Japanese and Korean.
  • LBaaS: Load Balancing as a Service will be supported through VMware NSX.
  • Ceilometer Support: VMware Integrated OpenStack 2.0 will now support Ceilometer with MongoDB as the backend database.
  • App-Level Auto Scaling using Heat: Auto Scaling will enable users to set up metrics that scale up or down application components. This will enable development teams to address unpredictable changes in demand for the app services. Ceilometer will provide the alarms and triggers, Heat will orchestrate the creation (or deletion) of scale out components and LBaaS will provide load balancing for the scale out components.
  • Backup and Restore: VMware Integrated OpenStack 2.0 will include the ability to backup and restore OpenStack services and configuration data.
  • Advanced vSphere Integration: VMware Integrated OpenStack 2.0 will expose vSphere Windows Guest Customization. VMware admins will be able to specify various attributes, such as the ability to generate new SIDs, assign admin passwords for the VM, manage computer names, etc. There will also be added support for more granular placement of VMs by leveraging vSphere features such as affinity and anti-affinity settings.
  • Qcow2 Image Support: VMware Integrated OpenStack 2.0 will support the popular qcow2 Virtual Machine image format.
  • Available through our vCloud Air Network Partners: Customers will be able to use OpenStack on top of VMware through any of the service providers in our vCloud Air Partner Network.
Posted in VMware