VMware Integrated OpenStack 2.0 set for release before the end of Q3 2015

It's been just six months since VMware released version 1.0 of VMware Integrated OpenStack for general availability, and the next release is expected to be available for download before the end of Q3 2015. Here's what's new in the 2.0 release:

  • Kilo-based: VMware Integrated OpenStack 2.0 will be based on the OpenStack Kilo release, making it current with upstream OpenStack code.
  • Seamless OpenStack Upgrade: VMware Integrated OpenStack 2.0 will introduce an industry-first seamless upgrade capability between OpenStack releases. Customers will be able to upgrade from v1.0 (Icehouse) to v2.0 (Kilo), and even roll back if anything goes wrong, in a more operationally efficient manner.
  • Additional Language Support: VMware Integrated OpenStack 2.0 will now be available in six more languages: German, French, Traditional Chinese, Simplified Chinese, Japanese and Korean.
  • LBaaS: Load Balancing as a Service will be supported through VMware NSX.
  • Ceilometer Support: VMware Integrated OpenStack 2.0 will now support Ceilometer with MongoDB as the backend database.
  • App-Level Auto Scaling using Heat: Auto scaling will enable users to define metrics that scale application components up or down, letting development teams address unpredictable changes in demand for their app services. Ceilometer will provide the alarms and triggers, Heat will orchestrate the creation (or deletion) of the scaled-out components, and LBaaS will provide load balancing across them (see the sketch after this list).
  • Backup and Restore: VMware Integrated OpenStack 2.0 will include the ability to backup and restore OpenStack services and configuration data.
  • Advanced vSphere Integration: VMware Integrated OpenStack 2.0 will expose vSphere Windows guest customization. VMware admins will be able to specify attributes such as generating new SIDs, assigning admin passwords for the VM, managing computer names, and so on. There will also be support for more granular placement of VMs by leveraging vSphere features such as affinity and anti-affinity settings.
  • Qcow2 Image Support: VMware Integrated OpenStack 2.0 will support the popular qcow2 Virtual Machine image format.
  • Available through our vCloud Air Network Partners: Customers will be able to use OpenStack on top of VMware through any of the service providers in our vCloud Air Partner Network.
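To give a feel for how the Ceilometer/Heat/LBaaS pieces fit together, here is a minimal, illustrative HOT template sketch using standard upstream resource types (OS::Heat::AutoScalingGroup, OS::Heat::ScalingPolicy and OS::Ceilometer::Alarm). The image name, flavor and thresholds are placeholders, and this is a generic OpenStack example rather than anything taken from the VIO 2.0 release itself:

heat_template_version: 2014-10-16

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: OS::Nova::Server
        properties:
          image: my-app-image   # placeholder image name
          flavor: m1.small      # placeholder flavor

  scale_up_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: asg }
      scaling_adjustment: 1
      cooldown: 60

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up_policy, alarm_url] }

A matching scale-down policy and an LBaaS pool in front of the group would complete the picture; the stack is then launched with something like heat stack-create -f autoscale.yaml web-asg.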

OpenStack Juno - RDO Packstack deployment to an external network & config via Neutron

OpenStack is a solution I have been loosely following over the past couple of years but never really had the time to give it the focus it probably deserves. The current project I am working on involves some interaction with OpenStack, so it seems a great opportunity to gain in-depth, hands-on experience and a better understanding of how the various OpenStack components fit together and how much integration is required with existing environments.

Having already built a fairly solid VMware and Hyper-V lab environment, I wasn't going to crash and burn what I already have; I needed to shoehorn an OpenStack deployment into the lab, utilizing the existing network and storage facilities already available. This blog post will lay out the steps required to take an OpenStack deployment from start to an operational build and go over some of the hurdles I encountered along the way. As some background, my existing lab uses a typical 192.168.1.0/24 range of IPv4 addresses and has a router to the outside world at 192.168.1.254. If your lab is the same then it's just a matter of running the commands; if not, modify the address ranges to suit yours.

So many flavors to choose from.

Before I go into the steps, I also wanted to highlight some of the hurdles I hit while building the OpenStack deployment. The first question I asked myself was which distribution to use. Initially I reviewed the OpenStack docs to see the process for building the latest Juno release. Ubuntu and CentOS seemed to be the most commonly used distributions, and I went for Ubuntu first because of the DevStack deployment process, which a friend of mine suggested I check out. The docs around DevStack (http://docs.openstack.org/developer/devstack/) are good, but they are not entirely straightforward: it wasn't clear exactly which files needed creating or modifying to build the environment, for example whether you need to create a configuration file (local.conf or localrc) to get the options you want installed and configured. After a couple of attempts I did get a working environment, but initially it was a basic Nova/nova-network setup only; once I found the correct way to configure local.conf I got Neutron installed, although configuring it was another matter. After many late nights trying to get a working environment I eventually gave up on it.
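For reference, a minimal DevStack local.conf along these lines (dropped into the root of the devstack checkout) should be enough to swap nova-network for Neutron; treat it as a hedged sketch rather than a definitive file, the passwords are placeholders and the floating range should be adjusted to a free slice of your own lab network:

[[local|localrc]]
HOST_IP=192.168.1.150
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
# disable nova-network and enable the Neutron services instead
disable_service n-net
enable_service q-svc q-agt q-dhcp q-l3 q-meta
FLOATING_RANGE=192.168.1.192/27

Once the file is in place, running ./stack.sh picks it up automatically.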

After ditching the Ubuntu build I looked at building with CentOS; having used Red Hat for many years, it felt much more comfortable. I researched the options for CentOS and went for an automated installation using RDO (https://www.rdoproject.org/Main_Page), a community project for Red Hat, Fedora and CentOS deployments supported by its user community. One thing I have found with both DevStack and RDO is that the information is out there, but it is spread all over the place and not all sites are up to date; for example, some still focus on Havana or Icehouse and not many cover Juno. Hopefully this guide brings the installation steps together into a single document that will help you.

Building out the OpenStack environment following steps 1 to 27

Below are the steps I have put together to build out an OpenStack Juno deployment on a single CentOS 7 VM or physical system. It will use Neutron and connect to the existing external lab network of 192.168.1.0/24. The OpenStack host will have an IP of 192.168.1.150, which we will configure on a bridge. We will create a new network for the OpenStack instances that uses a private IP pool of 10.0.0.0/24, with floating IPs taken from 192.168.1.0/24; the floating IP allocation pool will be 192.168.1.201-192.168.1.220, giving 20 addresses for instances if needed.

I will use vSphere 6, but vSphere 5.x would be fine too. My vSphere servers can run nested virtualization, which is ideal as I can take a snapshot and revert it if anything fails.

1.      Create a new VM. For my requirements I have created a VM with 16 GB of RAM, which is enough to run a couple of instances alongside OpenStack, a 20 GB boot disk, and a second 100 GB disk that I will use for Cinder (block storage). I have also attached two virtual network cards, both connected directly to the main network.

2.     Install CentOS 7.0 on the VM or physical system; I used CentOS-7.0-1406-x86_64-Minimal.iso for my build. Install the OS, providing the configuration inputs requested by the installer.

3.     Some additional housekeeping I do on the image is to rename the enoXXXXXXXX network devices to eth0 and eth1; I'm a bit old school with device naming.

Modify /etc/default/grub and append ‘net.ifnames=0 biosdevname=0‘ to the GRUB_CMDLINE_LINUX= line.

# vi /etc/default/grub
GRUB_CMDLINE_LINUX="rd.lvm.lv=rootvg/usrlv rd.lvm.lv=rootvg/swaplv crashkernel=auto vconsole.keymap=us rd.lvm.lv=rootvg/rootlv vconsole.font=latarcyrheb-sun16 rhgb quiet net.ifnames=0 biosdevname=0"

4.     Next, generate a new GRUB config

# grub2-mkconfig -o /boot/grub2/grub.cfg

5.     Rename the config files for both eno devices

# mv /etc/sysconfig/network-scripts/ifcfg-eno16777736 /etc/sysconfig/network-scripts/ifcfg-eth0

6.     Repeat for eth1

# mv /etc/sysconfig/network-scripts/ifcfg-eno32111211 /etc/sysconfig/network-scripts/ifcfg-eth1

7.      Reboot so the changes take effect.

# reboot

The RDO Install process

8.     Bring CentOS up to date

# yum update -y

9.     Relax SELinux a bit; this is a lab environment, so we can loosen the security a little

# vi /etc/selinux/config

Change SELINUX=enforcing to SELINUX=permissive
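If you want the relaxed mode to apply immediately without waiting for a reboot, you can also switch SELinux at runtime:

# setenforce 0
# getenforce
Permissive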

10.  Install the epel repository

# yum install epel-release -y

11.   Modify the EPEL repo file and enable the epel, epel-debuginfo and epel-source sections.

# vi /etc/yum.repos.d/epel.repo

[epel]
enabled=1

[epel-debuginfo]
enabled=1

[epel-source]
enabled=1

12.   Install net tools

# yum install net-tools -y

13.   Install the RDO release

# yum install -y http://rdo.fedorapeople.org/rdo-release.rpm

14.   Install openstack packstack

# yum install -y openstack-packstack

15.   Install openvswitch

# yum install openvswitch -y

16.   Final update

# yum update -y

Cinder volume preparation

17.   Install lvm2

# yum install lvm2 -y

18. Build out the deployment using the Packstack Puppet process

# packstack --allinone --provision-all-in-one-ovs-bridge=n
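If you want more control than the all-in-one defaults give you, Packstack can also generate an answer file that you edit (the CONFIG_* options) and replay; the file name below is just an example:

# packstack --gen-answer-file=/root/answers.txt
# vi /root/answers.txt
# packstack --answer-file=/root/answers.txt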

19.  Remove the 20 GB loopback-based cinder-volumes volume group created by the Packstack install and recreate it on the 100 GB virtual disk

# vgremove cinder-volumes
# fdisk /dev/sdb
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
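A quick sanity check, before carrying on, that the recreated volume group really is sitting on the 100 GB disk:

# pvs
# vgs cinder-volumes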

UPDATE

Instead of the original changes to eth1 and br-ex, I have found a simpler method: use eth1 as the NIC attached to the OVS switch. Just remember, if the server is rebooted, to check that eth1 is still connected to the br-ex bridge (see the check after step 20 below).

20. Add eth1 to the Open vSwitch br-ex bridge as a port

# ovs-vsctl add-port br-ex eth1
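To confirm that eth1 is attached to the bridge (worth re-running after any reboot):

# ovs-vsctl list-ports br-ex
# ovs-vsctl show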

Change network configuration for /etc/sysconfig/network-scripts/ifcfg-br-ex & /etc/sysconfig/network-scripts/ifcfg-eth1

# vi /etc/sysconfig/network-scripts/ifcfg-br-ex

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.1.150
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
DNS1=192.168.1.1
DNS2=192.168.1.254
ONBOOT=yes

# vi /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1
HWADDR=52:54:00:92:05:AE # replace with the MAC address of your eth1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

21.  Additional network configuration for the bridge. Set the following in the Neutron OVS plugin configuration (the exact file depends on the release; on an RDO Juno Packstack install the ML2/OVS settings live under /etc/neutron/plugins/):

network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex
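It is also worth restarting the Neutron Open vSwitch agent so it picks up the new bridge mapping (the network service restart in the next step covers the interfaces themselves):

# systemctl restart neutron-openvswitch-agent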

22.   Restart the network services so that the config takes effect

# service network restart

Configure a new network and router to connect to the external network

23.  Remove old network configuration settings

# . keystonerc_admin
# neutron router-gateway-clear router1
# neutron subnet-delete public_subnet
# neutron subnet-delete private_subnet
# neutron net-delete private
# neutron net-delete public

24.  Open the default security group for ICMP pings and SSH connections

# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

25.  Create new private network on 10.0.0.0/24 subnet

# neutron net-create private 
# neutron subnet-create private 10.0.0.0/24 --name private --dns-nameserver 8.8.8.8

26.  Create new public network on 192.168.1.0/24 subnet

# neutron net-create homelan --router:external=True 
# neutron subnet-create homelan 192.168.1.0/24 --name homelan --enable_dhcp False --allocation_pool start=192.168.1.201,end=192.168.1.220 --gateway 192.168.1.254

27.  Create new virtual router to connect private and public networks

# HOMELAN_NETWORK_ID=`neutron net-list | grep homelan | awk '{ print $2 }'` 
# PRIVATE_SUBNET_ID=`neutron subnet-list | grep private | awk '{ print $2}'` 
# ADMIN_TENANT_ID=`keystone tenant-list | grep admin | awk '{ print $2}'` 
# neutron router-create --tenant-id $ADMIN_TENANT_ID router
# neutron router-gateway-set router $HOMELAN_NETWORK_ID
# neutron router-interface-add router $PRIVATE_SUBNET_ID
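Before calling it done, a couple of quick checks confirm that the router has its gateway on homelan and an interface on the private subnet:

# neutron router-show router
# neutron router-port-list router
# neutron net-list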

That’s the install and configuration process complete. I will continue this series of posts with instance deployment and floating IP allocation.

Hope this has helped you deploy OpenStack. Feel free to leave me a comment.


Goodbye vSphere AppHA, you were just not up to the job, enter Symantec ApplicationHA to the rescue

Well, I thought this day would come eventually, but I am surprised to see it so soon. It's official, folks: vSphere AppHA is no more as of vSphere 6.0; the official announcement is here. With the effort required to provide continual support for old and new applications, and for their updates, it looks like this was not something VMware wanted to keep investing in. Don't think you're covered by backups, replication, vSphere HA or vSphere FT, though; none of those will get your application back up and running automatically should it fail.

Don’t worry though

Symantec ApplicationHA comes to the rescue…..

As one of the first third-party vendors to provide support for application availability within virtual machines, Symantec has always been at the forefront of providing resilience for applications running within VMware vSphere. ApplicationHA is one solution that has been doing this, and for the past four years it has been going from strength to strength, adding functionality, automation and, importantly, resilience for mission-critical applications that lets our customers sleep at night. If you're unfamiliar with Symantec ApplicationHA, take a look at this comparison I made a while back; it's very detailed but will give you an insight into ApplicationHA's true potential. It's inexpensive and doesn't need vSphere Enterprise Plus to work. It's stable, mature technology built on Veritas Cluster Server heritage. The development effort required to keep on top of platform and application updates is a challenge, but it's worth it; after all, it's the applications that drive your business, and providing resilience for them should be top of mind.

More info on Symantec ApplicationHA can be found here; there's also a free trial that you can test drive for 60 days if you like.


What’s new in VMware Fault Tolerance 6.0

VMware Fault Tolerance (FT) in vSphere 5.5 is one of those features you would love to use, but because of its vCPU limitation it was not much help in protecting mission-critical applications, so many left it behind. With vSphere 6.0, VMware has broken the single-vCPU limitation for Fault Tolerance: an FT VM now supports up to 4 vCPUs and 64 GB of RAM. With vSMP support, FT can be used to protect your mission-critical applications. Along with vSMP support, let's take a look at what's new in vSphere 6.0 Fault Tolerance (FT).


Benefits of Fault Tolerance

  • Continuous availability with zero downtime and zero data loss
  • No TCP connection loss during failover
  • Fault Tolerance is completely transparent to the guest OS
  • FT does not depend on the guest OS or the application
  • Instantaneous failover from the primary VM to the secondary VM in the event of an ESXi host failure

What’s New in vSphere 6.0 Fault Tolerance

  • FT now supports up to 4 vCPUs and 64 GB RAM
  • Fast checkpointing, a new scalable technology, replaces “Record-Replay” to keep the primary and secondary in sync
  • vSphere 6.0 supports vMotion of both the primary and the secondary virtual machine
  • With vSphere 6.0 you will be able to back up FT virtual machines: FT supports vStorage APIs for Data Protection (VADP), and it works with the leading VADP solutions on the market from Symantec, EMC, HP, etc.
  • With vSphere 6.0, FT supports all virtual disk types: eager-zeroed thick (EZT), thick or thin provisioned disks. vSphere 5.5 and earlier supported only eager-zeroed thick.
  • Snapshots of FT-configured virtual machines are supported with vSphere 6.0
  • The new version of FT keeps separate copies of the VM files (such as the .vmx and .vmdk files) to protect the primary VM from both host and storage failures; you can keep the primary and secondary VM files on different datastores.

Difference between vSphere 5.5 and vSphere 6.0 Fault Tolerance (FT)

[Table: differences between Fault Tolerance in vSphere 5.5 and vSphere 6.0]

One thing to be aware of with VMware FT is that it does not monitor the application; it is still only virtual-machine-level protection, so you still need to think about the application and how it will be protected.


What new features are in vSphere 6.0

Well, there has been public information out there for some time on some of the features that would, or might, be in vSphere 6.0, mainly from VMworld 2014 and some from the beta, which, although public, still carried an NDA. As VMware announces the new release, below are some of the new features that made the cut into the new version.

vSphere Platform (including ESXi)

  • Increase in vSphere Host Configuration Maximums
    • 480 Physical CPUs per Host
    • Up to 12 TB of Physical Memory
    • Up to 1000 VMs per Host
    • Up to 6000 VMs per Cluster
  • Virtual Hardware v11
    • 128 vCPUs per VM
    • 4 TB RAM per VM
    • Hot-add RAM now vNUMA aware
    • Serial and parallel port enhancements
      • A virtual machine can now have a maximum of 32 serial ports
      • Serial and parallel ports can now be removed
  • ESXi Account & Password Management
    • New ESXCLI commands to add/modify/remove local user accounts
    • Configurable account lockout policies
    • Password complexity setting via VIM API & vCenter Host Advanced System Settings
  • Improved Auditability of ESXi Admin Actions
    • Prior to vSphere 6.0, actions taken through vCenter by any user would show up as ‘vpxuser’ in ESXi logs.
    • In vSphere 6.0, actions taken through vCenter will show the actual username in the ESXi logs
  • Enhanced Microsoft Clustering (MSCS) Support
    • Support for Windows 2012 R2 and SQL 2012
    • Failover Clustering and AlwaysOn Availability Groups
    • IPv6 Support
    • PVSCSI & SCSI controller support
    • vMotion Support
      • Clustering across physical hosts with Physical Compatibility Mode RDMs (Raw Device Mapping)
      • Supported on Windows 2008, 2008 R2, 2012, and 2012 R2

vCenter 6.0

  • Scalability Improvements
    • 1000 Hosts per vCenter
    • 10,000 VMs per vCenter
    • 64 Hosts per cluster (including VSAN!)
    • 6000 VMs per cluster
    • Linked Mode no longer requires MS ADAM
  • New Simplified Architecture with Platform Services Controller
    • Centralizes common services
    • Embedded or Centralized deployment models
  • Content Library
    • Repository for vApps, VM templates, and ISOs
    • Publisher/Subscriber model with two replication models
    • Allow content to be stored in one location and replicated out to “Subscriber” vCenters
  • Certificate Management
    • Certificate management for ESXi hosts & vCenter
    • New VMware Endpoint Certificate Service (VECS)
    • New VMware Certificate Authority
  • New vMotion Capabilities
    • Cross vSwitch vMotion
    • Cross vCenter vMotion
    • Long Distance vMotion
    • vMotion across L3 boundaries

Storage & Availability

  • VMware Virtual Volumes (VVOLS)
    • Logical extension of virtualization into the storage world
    • Policy based management of storage on per-VM basis
    • Offloaded data services
    • Eliminates LUN management
  • Storage Policy-Based Management
    • Leverages VASA API to intelligently map storage to policies and capabilities
    • Policies are assigned to VMs and ensure storage performance & availability
  • Fault Tolerance
    • Multi-vCPU FT for up to 4 vCPUs
    • Enhanced virtual disk format support (thin & thick disks)
    • Ability to hot configure FT
    • Greatly increased FT host compatibility
    • Backup support with snapshots through VADP
    • Now uses copies of VMDKs for added storage redundancy (allowed to be on separate datastores)
  • vSphere Replication
    • End-to-end network compression
    • Network traffic isolation
    • Linux file system quiescing
    • Fast full sync
    • Move replicas without full sync
    • IPv6 support
  • vSphere Data Protection
    • VDP Advanced has been rolled into VDP and is no longer available for purchase (the features of VDP-A are now available for free to Essentials Plus and higher editions of vSphere!)
    • Protects up to 800 VMs per vCenter
    • Up to 20 VDP appliances per vCenter
    • Replicate backup data between VDP & EMC Avamar
    • EMC Data Domain support with DD Boost
    • Automated backup verification

So there you have it: a pretty long list of updates for vSphere 6.0. One thing I was surprised to see is that vSphere AppHA has been removed in vSphere 6.0 due to a lack of demand for the feature. Oddly, that's not something we have seen at Symantec, where our user base still grows quarter by quarter and Symantec ApplicationHA goes on.


Providing high availability and disaster recovery for virtualized SAP within VMware the right way

Over the past couple of years I have been getting more and more involved in SAP architecture designs for HA and DR, and one of my pet hates at the start of that journey was the lack of basic information on what the SAP components were for and how they interacted with each other; it was a hard slog. For those venturing into SAP, or even the hardened SAP veterans out there, the paper below covers SAP in great detail and, more importantly, covers how SAP deployments should be done correctly, especially when high availability and disaster recovery are requirements.

Many organizations rely on SAP applications to support vital business processes. Any disruption of these services translates directly into bottom-line losses. As organizations' information systems become increasingly integrated and interdependent, the potential impact of failures and outages grows to enormous proportions.

The challenge for IT organizations is to maintain continuous SAP application availability in a complex, interconnected, and heterogeneous  application environment. The difficulties are significant:

  • there are many potential points of failure or disruption
  • the interdependencies between components complicate administration
  • the infrastructure itself undergoes constant change

To gain additional competitive advantage, enterprises must now work more closely together and integrate their SAP environment with those of other organizations, such as partners, customers, or suppliers. The availability of these applications is therefore essential.

There are three main availability classes, depending on the degree of availability required:

  • Standard Availability – achievable availability without additional measures
  • High Availability – increased availability after elimination of single points of failure within the local datacenter
  • Disaster Recovery – highest availability, which even overcomes the failure of an entire production site

Symantec helps the organizations that rely on SAP applications with an integrated, out-of-the-box solution for SAP availability. Symantec’s High Availability and Disaster Recovery solutions for SAP enhance both local and global availability for business critical SAP applications.

Local high availability: By clustering critical application components with application-specific monitoring and failover, Symantec’s solutions simplify the management of complex environments. Administrators can manually move services for preventative and proactive maintenance, and the software automatically migrates and restarts applications in case of failures.

Global availability/disaster recovery: By replicating data across geographically dispersed data centers and using global failover capabilities, companies can provide access to essential services in the event of major site disruptions. Using Symantec’s solutions, administrators can migrate applications or an entire data center within minutes, with a single click through a central console. Symantec’s flexible, hardware independent solutions support a variety of cost-effective strategies for leveraging your investment in disaster recovery resources.

Symantec provides High Availability and Disaster Recovery solutions for SAP, utilizing Symantec™ Storage Foundation, powered by Veritas, Symantec™ Replicator Option, Symantec™ Cluster Server, powered by Veritas, and Cluster Server agents that are designed specifically for SAP applications. The result is an out-of-the-box solution that you can quickly deploy to protect critical SAP applications immediately from either planned or unplanned downtime.

Download the full white paper below.

WP-High-Availability-Disaster-Recovery-for-SAP-applications-1114


Fix – vSphere Replication – Cannot connect to the specified site – due to change in default ports

I keep meaning to document this one, so here goes.

Adding a site in VMware vSphere Replication fails with the error: Cannot connect to the specified site, site might not be available on the network or the network configuration may not be correct.

This may happen if you change the default network port for the vCenter Servers from 80 to another port number.

To resolve this issue when you are not using the standard port 80 or port 443, specify the port number in the vSphere Replication: Add Site dialog.

[Screenshot: vSphere Replication Add Site dialog]
For example: If vCenter Server at IP address 192.168.1.10 is accessed over port 8081, enter 192.168.1.10:8081 in the vSphere Replication: Add Site dialog.

Hope this helps you out.

Thanks for reading.
