OpenStack Juno: RDO Packstack deployment to an external network & config via Neutron

OpenStack is a solution I have been loosely following for the past couple of years, but I never really had enough time to give it the focus it deserves. The project I am currently working on involves some interaction with OpenStack, so it seemed a great opportunity to gain some in-depth, hands-on experience, giving me greater insight into how the various OpenStack components fit together and the level of integration required with existing environments.

Having already built a fairly solid VMware and Hyper-V lab environment, I wasn't going to crash and burn what I already have; I needed to shoehorn an OpenStack deployment into the existing lab, reusing the network and storage facilities already available. This blog post lays out the steps from scratch to an operational build and goes over some of the hurdles I encountered along the way. As background, my existing lab uses a typical 192.168.1.0/24 range of IPv4 addresses and has a router to the outside world at 192.168.1.254. If your lab is the same, it's just a matter of running the commands; if not, modify the address ranges to suit yours.

So many flavors to choose from.

Before I go into the steps, I want to highlight some of the hurdles I encountered while building the OpenStack deployment. The first question I asked myself was which distribution to choose. I initially reviewed the OpenStack docs on building the latest release, Juno; Ubuntu and CentOS seemed to be the most common distributions used. I went for Ubuntu first because of the DevStack deployment process, which a friend of mine suggested I check out. The docs surrounding DevStack (http://docs.openstack.org/developer/devstack/) are good, but not entirely straightforward: it wasn't clear exactly which files needed creating or modifying to build the environment, for example whether you need to create a configuration file (local.conf or localrc) to get the options you want installed and configured. After a couple of attempts I did get a working environment, but initially only a basic Nova networking setup; once I found the correct way to configure local.conf I got Neutron installed, although configuring it was another matter. After many late nights trying to get a working environment, I eventually gave up on it.

After ditching the Ubuntu build I looked at building with CentOS. Having used Red Hat for many years, it felt much more comfortable. I researched the options available on CentOS and went for an automated installation process using RDO (https://www.rdoproject.org/Main_Page), a community project for OpenStack deployments on Red Hat, Fedora and CentOS, supported by users of the community. One thing I found with both DevStack and RDO is that the information is out there, but it is spread all over the place and not all sites are up to date: some still focus on Havana or Icehouse, and not many cover Juno. Hopefully this guide brings the installation steps together into a single document that will help you.

Building out the OpenStack environment: steps 1 to 27

Below are the steps I have put together to build an OpenStack Juno deployment on a single VM or physical system running CentOS 7. It uses Neutron and connects to the existing external lab network of 192.168.1.0/24. The OpenStack VM will have an IP of 192.168.1.150, which we will configure as a bridge. We will create a new network for the OpenStack instances using a private IP pool of 10.0.0.0/24, and floating IPs on 192.168.1.0/24 with an allocation range of 192.168.1.201-192.168.1.220, giving 20 IPs available for instances if needed.
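The addressing plan can be summarised in a few shell variables. The values below are the ones used in this lab, so adjust them to your own network; the quick arithmetic at the end confirms the size of the floating IP pool:

```shell
# Lab addressing plan (values are lab-specific; change to suit your network)
HOST_IP=192.168.1.150        # the OpenStack VM, bridged via br-ex
EXT_GATEWAY=192.168.1.254    # existing lab router to the outside world
EXT_CIDR=192.168.1.0/24      # external lab network
PRIVATE_CIDR=10.0.0.0/24     # private pool for OpenStack instances
POOL_START=201               # floating IP pool starts at 192.168.1.201
POOL_END=220                 # ...and ends at 192.168.1.220

# How many floating IPs does the pool give us?
POOL_SIZE=$(( POOL_END - POOL_START + 1 ))
echo "Floating IP pool: 192.168.1.$POOL_START-192.168.1.$POOL_END ($POOL_SIZE addresses)"
```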

I will use vSphere 6, but vSphere 5.x would be fine too. My vSphere servers can run nested virtualization, which is ideal: I can take a snapshot and revert it if certain things fail.

1.      Create a new VM. For my requirements I created a VM with 16 GB of RAM, which is enough to run a couple of instances alongside OpenStack, and a boot disk of 20 GB. I also added another disk of 100 GB which I will use for Cinder (block storage), and attached two virtual network cards, both connected directly to the main network.

2.     Install CentOS 7.0 on the VM or physical system; I used CentOS-7.0-1406-x86_64-Minimal.iso for my build. Install the OS, following the configuration prompts of the install process.

3.     Some additional housekeeping I do on the image is to rename the enoXXXXXXXX network devices to eth0 and eth1; I'm a bit old school with device naming.

Modify /etc/default/grub and append 'net.ifnames=0 biosdevname=0' to the GRUB_CMDLINE_LINUX= statement.

# vi /etc/default/grub
GRUB_CMDLINE_LINUX="rd.lvm.lv=rootvg/usrlv rd.lvm.lv=rootvg/swaplv crashkernel=auto vconsole.keymap=us rd.lvm.lv=rootvg/rootlv vconsole.font=latarcyrheb-sun16 rhgb quiet net.ifnames=0 biosdevname=0"

4.     Next, regenerate the GRUB configuration

# grub2-mkconfig -o /boot/grub2/grub.cfg

5.     Rename the config file for the first eno device (also update the DEVICE= and NAME= entries inside the file to match the new name)

# mv /etc/sysconfig/network-scripts/ifcfg-eno16777736 /etc/sysconfig/network-scripts/ifcfg-eth0

6.     Repeat for eth1

# mv /etc/sysconfig/network-scripts/ifcfg-eno32111211 /etc/sysconfig/network-scripts/ifcfg-eth1

7.      Reboot to apply the changes.

# reboot

The RDO Install process

8.     Bring the Centos OS up to date

# yum update -y

9.     Relax SELinux a little; this is a lab environment, so we can afford to loosen the security

# vi /etc/selinux/config
SELINUX=permissive    # changed from SELINUX=enforcing

10.  Install the epel repository

# yum install epel-release -y

11.   Modify the epel repo to enable the main, debuginfo and source sections.

# vi /etc/yum.repos.d/epel.repo

[epel]
enabled=1

[epel-debuginfo]
enabled=1

[epel-source]
enabled=1

12.   Install net tools

# yum install net-tools -y

13.   Install the RDO release

# yum install -y http://rdo.fedorapeople.org/rdo-release.rpm

14.   Install openstack packstack

# yum install -y openstack-packstack

15.   Install openvswitch

# yum install openvswitch -y

16.   Final update

# yum update -y

Cinder volume preparation

17.   Install lvm2

# yum install lvm2 -y

18. Build out using the Packstack Puppet process

# packstack --allinone --provision-all-in-one-ovs-bridge=n
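Packstack records every option it used in an answer file (packstack-answers-&lt;timestamp&gt;.txt). If you need to re-run or tweak the deployment, you can generate one up front with packstack --gen-answer-file=answers.txt, edit it, and deploy with --answer-file=answers.txt. A few keys relevant to this build are shown below; the names are from the Juno-era Packstack, so verify them against your own generated file:

```
CONFIG_NEUTRON_INSTALL=y                   # use Neutron rather than Nova networking
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n   # same effect as the command-line flag above
CONFIG_CINDER_VOLUMES_CREATE=n             # skip the loopback file; we use the second disk in step 19
```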

19.  Remove the 20 GB loopback-backed cinder-volumes group created by the Packstack install and recreate it on the 100 GB virtual disk

# vgremove cinder-volumes
# fdisk /dev/sdb
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb

UPDATE

Instead of the ifcfg changes for eth1 and br-ex below, I have found a simpler method: just add eth1 as the NIC attached to the OVS switch. Remember that if the server is rebooted, you should check that eth1 is still connected to the br-ex bridge.

20. Add eth1 to the openvswitch br-ex ports

# ovs-vsctl add-port br-ex eth1
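The add-port above does not survive a reboot on its own. One option (an assumption on my part, not something Packstack sets up) is to make the command idempotent with --may-exist and run it at boot from /etc/rc.d/rc.local:

```
# /etc/rc.d/rc.local (ensure the file is executable: chmod +x /etc/rc.d/rc.local)
ovs-vsctl --may-exist add-port br-ex eth1
```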

To make the configuration persist across reboots, change the network configuration in /etc/sysconfig/network-scripts/ifcfg-br-ex and /etc/sysconfig/network-scripts/ifcfg-eth1

# vi /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.1.150
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
DNS1=192.168.1.1
DNS2=192.168.1.254
ONBOOT=yes

# vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
HWADDR=52:54:00:92:05:AE # your hwaddr
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

21.  Additional network configurations for the bridge

# network_vlan_ranges = physnet1
# bridge_mappings = physnet1:br-ex
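For reference, those two settings belong in the [ovs] section of the Neutron OVS plugin configuration; on an RDO Juno install that is typically /etc/neutron/plugin.ini (a symlink to the ML2 configuration). The exact path can vary by release, so check your own install:

```
[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex
```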

22.   Restart the network services so that the config takes effect

# service network restart

Configure new network and router to connect onto external network

23.  Remove old network configuration settings

# . keystonerc_admin
# neutron router-gateway-clear router1
# neutron subnet-delete public_subnet
# neutron subnet-delete private_subnet
# neutron net-delete private
# neutron net-delete public

24.  Open the default security group for ICMP pings and SSH connections

# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

25.  Create new private network on 10.0.0.0/24 subnet

# neutron net-create private 
# neutron subnet-create private 10.0.0.0/24 --name private --dns-nameserver 8.8.8.8

26.  Create new public network on 192.168.1.0/24 subnet

# neutron net-create homelan --router:external=True 
# neutron subnet-create homelan 192.168.1.0/24 --name homelan --enable_dhcp False --allocation_pool start=192.168.1.201,end=192.168.1.220 --gateway 192.168.1.254

27.  Create new virtual router to connect private and public networks

# HOMELAN_NETWORK_ID=`neutron net-list | grep homelan | awk '{ print $2 }'` 
# PRIVATE_SUBNET_ID=`neutron subnet-list | grep private | awk '{ print $2}'` 
# ADMIN_TENANT_ID=`keystone tenant-list | grep admin | awk '{ print $2}'` 
# neutron router-create --name router --tenant-id $ADMIN_TENANT_ID 
# neutron router-gateway-set router $HOMELAN_NETWORK_ID
# neutron router-interface-add router $PRIVATE_SUBNET_ID
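The grep/awk pipelines above simply pull the UUID out of the second column of the CLI's ASCII table output. A quick illustration against a mocked-up neutron net-list table (the UUID here is made up for the example):

```shell
# Simulated `neutron net-list` output; the real table has the same layout
sample_output='+--------------------------------------+---------+
| id                                   | name    |
+--------------------------------------+---------+
| 8d2c3ef1-1234-4a61-9c1d-abcdef012345 | homelan |
+--------------------------------------+---------+'

# Same extraction as used above: find the matching row, print the second field
NET_ID=$(echo "$sample_output" | grep homelan | awk '{ print $2 }')
echo "$NET_ID"
```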

That's the install and configuration process complete. I will continue this series of posts with instance deployment and floating IP allocation.

Hope this has helped you deploy Openstack. Feel free to leave me a comment.

About virtuallylg

Hello, my name is Lorenzo Galelli. I have been working with availability and virtualization solutions at Symantec for over a decade now, and it's amazing to see the impact virtualization has brought to the world of IT. During my time at Symantec I have worked as a systems engineer for customers big and small and seen a vast array of different virtualization projects. I am currently Technical Product Manager for ApplicationHA for VMware and KVM, and I also focus on VDI, especially with Symantec's VirtualStore and FileStore technologies. Follow my blog for all things Symantec and virtualization. Opinions expressed here are my own.
This entry was posted in KVM. Bookmark the permalink.

One Response to OpenStack Juno: RDO Packstack deployment to an external network & config via Neutron

  1. Krish says:

    Thanks for the guide. I've followed the same and installed; however, when I add 'eth0' to br-ex I'm unable to ping the gateway from the VM. My OpenStack VM is 172.16.31.161 on 172.16.31.0/24, and the gateway is 172.16.31.2. Can you suggest how I should troubleshoot this issue?
