Deploying multi-node Openstack with Opencontrail

In this post, I have chosen Opencontrail as the SDN solution for Openstack Neutron networking.
Opencontrail on its own has a large number of components/services, which is difficult to cover in a single post.
If you are not familiar with it, refer to the Opencontrail architecture documentation.

The Opencontrail site contains lots of interesting blog posts on various topics… it's an endless world out there :)

This post is intended to get you started with an environment to play around with and get a feel for.
I will primarily focus on creating a multi-node setup using Openstack and Opencontrail.
We will see the configuration points for Openstack and Opencontrail.
Then we will do some basic operations, like launching VMs and using ssh and ping to check connectivity.

So, I guess, enough talking. Let's get technical.

What do you need for the environment?

  • A machine which support virtualization (Intel-VT/AMD-V)
  • Memory: 8 GB minimum (about 5.5 GB will be used by the three VMs)
  • Disk 60 GB free space
  • VirtualBox/VMware Player/KVM host – I have VirtualBox
  • A cup of Coffee!

Create three virtual machines with Ubuntu 14.04 server.

VM1 specification (contrail-openstack-controller):

eth0: 192.168.1.11 (bridged adapter)
eth1: 10.10.10.3 (Host-only adapter)

Contrail-Openstack-Controller

VM2 specification (compute1):

eth0: 192.168.1.12 (bridged adapter)
eth1: 10.10.10.4 (Host-only adapter)

Compute Node 1

VM3 specification (compute2):

eth0: 192.168.1.13 (bridged adapter)
eth1: 10.10.10.5 (Host-only adapter)

Compute Node 2

After logging in to the machines, clone the following repositories.

https://github.com/Juniper/contrail-installer.git

https://github.com/openstack-dev/devstack.git

For the installation of contrail, you can follow the contrail-installer README.

VM1:

This VM acts as the controller; hence, we will be running all the Openstack services except openstack-nova-compute and opencontrail-vrouter-agent. Following is the localrc for Opencontrail.

vm1-contrail-localrc

Follow the steps as per the contrail-installer README and install the contrail services.

cd contrail-installer
cp samples/localrc-all localrc (edit localrc as needed)
./contrail.sh build
./contrail.sh install
./contrail.sh configure
#Before starting contrail services, remove agent service from ENABLED_SERVICES
https://github.com/Juniper/contrail-installer/blob/master/contrail.sh#L12
./contrail.sh start

Devstack localrc: most of the services, like cinder, heat, tempest and compute, are disabled.

vm1-devstack-localrc

Copy opencontrail plugin for devstack to use:

cd devstack
cp ~/contrail-installer/devstack/lib/neutron_plugins/opencontrail lib/neutron_plugins/
./stack.sh

VM2 & VM3

These will be the compute nodes; hence, they will be running the nova-compute and contrail-vrouter-agent services.

The localrc files for contrail and devstack are as follows:

Note that the contrail localrc uses INSTALL_PROFILE=COMPUTE.

vm2-contrail-localrc           vm2-devstack-localrc

Verify the setup with the screen outputs on all the VMs:

screen -x contrail
screen -x stack

Screens for contrail and devstack on controller node

controller contrail screen

controller devstack screen

Screens for contrail and devstack on compute node

nova compute screen

Contrail agent on compute node

Check that the compute nodes are registered with 'nova hypervisor-list', and launch 2 VMs (a client-side sketch for this follows below).

Booted 2 VMs on two compute nodes

Test ping and ssh between the VMs.

vm ssh and ping test
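If you prefer to drive this step from Python instead of the CLI, here is a minimal sketch using the legacy python-novaclient API. The credentials, auth URL, image, flavor and network names are assumptions for a typical devstack setup and will almost certainly differ in your environment.

# Minimal sketch with the legacy python-novaclient ("2" compute API version).
# Credentials, auth_url and the image/flavor/network names are assumptions for devstack.
from novaclient import client as nova_client

nova = nova_client.Client("2", "admin", "secret", "demo",
                          "http://192.168.1.11:5000/v2.0")

# Equivalent of 'nova hypervisor-list': both compute nodes should show up here.
for hyp in nova.hypervisors.list():
    print(hyp.hypervisor_hostname)

# Boot two test VMs; the scheduler should spread them across the two compute nodes.
image = nova.images.find(name="cirros-0.3.2-x86_64-uec")   # assumed image name
flavor = nova.flavors.find(name="m1.tiny")
net = nova.networks.find(label="private")                  # assumed network label

for name in ("test-vm1", "test-vm2"):
    nova.servers.create(name=name, image=image, flavor=flavor,
                        nics=[{"net-id": net.id}])

Once both instances are ACTIVE, 'nova list' (or 'nova show <vm>') will tell you which compute node each one landed on, and you can run the ssh and ping tests described above.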

Let me know wherever you stumble across a problem during this setup. I will try my best to help you out.

The patch below just got merged; it was necessary for the multi-node contrail setup.

https://github.com/Juniper/contrail-installer/pull/105


Networking in OpenStack : Panoramic view

This article gives a basic introduction to Openstack Neutron and the different types of network modes and plugins available in Openstack Networking.

Neutron is an OpenStack project to provide “networking as a service” between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).

Neutron was introduced as a core part of OpenStack with the initiative’s Folsom release. Prior to the Folsom release, networking functionality was hard-coded in the Nova compute module of OpenStack, which required developers to modify both compute and network features of OpenStack together. With Neutron, networking is a more modular element of OpenStack that can evolve independently.

The core Neutron API includes support for Layer 2 networking and IP address management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. Neutron includes a growing list of plugins that enable interoperability with various commercial and open source network technologies, including routers, switches, virtual switches and software-defined networking (SDN) controllers.

Note:  The OpenStack Foundation changed the name of its networking project from Quantum to Neutron due to a trademark conflict with a manufacturer of tape-based data backup systems.

Why Neutron?

  • Give cloud tenants an API to build rich networking topologies, and configure advanced network policies in the cloud.
    • Example: create multi-tier web application topology
  • Enable innovation via plugins (open and closed source) that introduce advanced network capabilities
    • Example: use L2-in-L3 tunneling to avoid VLAN limits, provide end-to-end QoS guarantees, use monitoring protocols like NetFlow.
  • Let anyone build advanced network services (open and closed source) that plug into Openstack tenant networks.
    • Examples: LB-aaS, VPN-aaS, firewall-aaS, IDS-aaS, data-center-interconnect-aaS.

Concepts

  • Network, representing isolated virtual Layer-2 domains; a network can also be regarded as a virtual (or logical) switch;
  • Subnet, representing IPv4 or IPv6 address blocks from which IPs are assigned to VMs on a given network.
  • Port, representing virtual (or logical) switch ports on a given network. Virtual instances attach their interfaces to ports. The logical port also defines the MAC address and the IP address(es) to be assigned to the interfaces plugged into it. When an IP address is associated with a port, this also implies the port is associated with a subnet, since the IP address is taken from the allocation pool of a specific subnet. These concepts are illustrated in the figure below, and the sketch after it shows how to create them with the neutron client:

neutron_concept
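To make these three concepts concrete, here is a minimal sketch that creates a network, a subnet and a port with the python-neutronclient v2.0 API; the credentials and auth URL are placeholders.

# Minimal sketch with python-neutronclient (v2.0 API); credentials are placeholders.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username="demo", password="secret",
                                tenant_name="demo",
                                auth_url="http://controller:5000/v2.0")

# Network: an isolated virtual Layer-2 domain (a logical switch).
net = neutron.create_network({"network": {"name": "net1"}})["network"]

# Subnet: the IPv4 block from which instance IPs on net1 will be allocated.
neutron.create_subnet({"subnet": {"network_id": net["id"],
                                  "ip_version": 4,
                                  "cidr": "10.0.0.0/24"}})

# Port: a logical switch port on net1; Neutron picks a MAC and an IP from the subnet.
port = neutron.create_port({"port": {"network_id": net["id"]}})["port"]
print(port["mac_address"], port["fixed_ips"])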

High-level flow

  • Tenant creates a network (e.g., “net1”)
  • Tenant associates a subnet with that network (e.g., “10.0.0.0/24”)
  • Tenant boots a VM, specifying a single NIC connected to “net1” (e.g.: nova boot --image <image_name> --nic net-id=<id_of_net1> <server_name>)
  • Nova contacts Neutron and creates port1 on net1.
  • Neutron assigns an IP to port1. (The IP is chosen by Neutron.)
  • Tenant destroys the VM.
  • Nova contacts Neutron and destroys port1. The allocated IP is returned to the pool of available IP addresses. (A small sketch for inspecting the port and its IP follows the figure below.)

highlevelview

Fig : High level flow
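A quick way to see steps 4–6 of this flow in action is to look up the port that Neutron created for the instance. The sketch below again uses python-neutronclient; the credentials and the instance UUID are placeholders.

# Look up the port(s) Neutron created for a given instance; placeholders throughout.
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username="demo", password="secret",
                                tenant_name="demo",
                                auth_url="http://controller:5000/v2.0")

server_id = "REPLACE-WITH-INSTANCE-UUID"   # e.g. from 'nova list'

for port in neutron.list_ports(device_id=server_id)["ports"]:
    # The fixed IP here is the one Neutron chose from the subnet's allocation pool;
    # when the VM is destroyed, Nova deletes the port and the IP returns to the pool.
    print(port["id"], port["fixed_ips"])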

Openstack Networking Architecture:

Neutron-PhysNet-Diagram

Fig : Network Connectivity for Physical Hosts

A standard OpenStack Networking setup has up to four distinct physical data center networks:

  • Management network. Used for internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center.
  • Data network. Used for VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plugin being used.
  • External network. Used to provide VMs with Internet access in some deployment scenarios. IP addresses on this network should be reachable by anyone on the Internet.
  • API network. Exposes all OpenStack APIs, including the OpenStack Networking API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. The API network may be the same as the external network, because it is possible to create an external-network subnet that is allocated IP ranges that use less than the full range of IP addresses in an IP block.

Network modes in Openstack :

  • Flat mode
  • Flat DHCP mode
  • VLAN DHCP mode

Flat mode

Flat mode is the simplest networking mode. Each instance receives a fixed IP from the pool. All instances are attached to the same bridge (br100) by default. The bridge must be configured manually. The networking configuration is injected into the instance before it is booted. And there is no floating IP feature in this mode.

flatnetwork-diagram-1

Fig : FlatManager network topology

Flat DHCP mode

This is similar to flat mode in that all instances are attached to the same bridge. In this mode Nova does a bit more configuration; it will attempt to bridge into an Ethernet device (eth0 by default). It will also run dnsmasq as a DHCP server listening on this bridge. Instances receive their fixed IPs by doing a DHCP discover. Moreover, the floating IP feature is provided.

flatdhcpmanager-topology-single-instance

Fig: FlatDHCPManager – network topology

VLAN Network Mode

It is the default mode for Nova. It provides a private network segment for each project’s instances that can be accessed via a dedicated VPN connection from the Internet.

In this mode, each project gets its own VLAN, Linux networking bridge, and subnet. The subnets are specified by the network administrator, and are assigned dynamically to a project when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project.

vlanmanager-generic-config-v2-2-tenants-2

Fig: VLANManager – network topology

A number of plugins are currently included in the OpenStack Networking distribution.

Plugins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because OpenStack Networking supports a large number of plugins, the cloud administrator is able to weigh the different options and decide which networking technology is right for the deployment.

OpenStack-Cinder: create volume data/code flow

Hello Folks,

I guess it's been quite some time since my last post. Actually, I couldn't find some free time to write this post.

Thanks to my dear friend Rahul Updadhyaya, who pulled me out today to write this up :)

Apologies for not putting in diagrams right now; I will work on them soon.

This post is about cinder create volume data/code flow.

Example: cinder create 1 --display-name firstvol --volume-type FCVolumeType

1. Start with calling the cinder api:

File: cinder/api/v1/volumes.py

Method: def create(self, req, body): #validates cinder api arguments.

2. The above method calls “cinder.volume.api.API”, taken from the “volume_api_class” flag in cinder.conf.

self.volume_api.create()  # here self.volume_api is created from “cinder.volume.api.API”

File: cinder.volume.api.API

Method: def create(self, ...)

This function stores the necessary data into the database with the volume status as ‘creating’.

3. The above method further calls the cinder scheduler.

self.scheduler_rpcapi.create_volume()  # makes an asynchronous cast on cinder-scheduler

4. Next, the asynchronous cast makes it to cinder/scheduler/manager.py, def create_volume().

This in turn calls self.driver.schedule_create_volume()  # here, self.driver points to the scheduler_driver flag in cinder.conf

Now, this could be SimpleScheduler or FilterScheduler (in the case of multi-backend).

5. In the case of SimpleScheduler, the above method calls:

File: cinder/scheduler/simple.py

Method: def schedule_create_volume()

The above method next calls self.volume_rpcapi.create_volume()  # which makes an asynchronous cast on the selected host

Here, you can view the host info with # cinder-manage host list

6. The message reaches volume/manager.py.

File: cinder/volume/manager.py

Method: def create_volume()  # calls _create_volume() and makes a call to the volume driver’s create_volume()

self.driver.create_volume()  # the driver is chosen with the volume_driver flag from cinder.conf; this returns metadata about the volume

And the volume status is changed from ‘creating’ to ‘available’.
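To watch these state transitions from the client side, here is a minimal sketch for the same example command using the legacy python-cinderclient (volume API v1); the credentials, auth URL and the FCVolumeType volume type are assumptions.

# Minimal sketch with the legacy python-cinderclient (volume API v1).
# Credentials, auth_url and the 'FCVolumeType' volume type are assumptions.
import time
from cinderclient import client as cinder_client

cinder = cinder_client.Client("1", "admin", "secret", "admin",
                              "http://controller:5000/v2.0")

# Equivalent of: cinder create 1 --display-name firstvol --volume-type FCVolumeType
vol = cinder.volumes.create(1, display_name="firstvol",
                            volume_type="FCVolumeType")

# cinder-api returns immediately with status 'creating'; the scheduler and the
# volume manager then move it to 'available' (or 'error') asynchronously.
while vol.status == "creating":
    time.sleep(2)
    vol = cinder.volumes.get(vol.id)
print(vol.id, vol.status)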

Tempest: An OpenStack Integration test suite

Integration Testing:

  • Integration Testing is the phase in software testing in which individual software modules are combined and tested as a group.
  • It occurs after unit testing and before system testing.
  • Tests the actual functionality of the application/software you have written.

Why waste time on integration tests when I have the option to check the functionality manually?

  • Because every time you add a new feature you will have to check and re-check the functionalities manually which is tedious, time-consuming and error-prone.
  • What if you have thousands of features? You certainly can’t afford to check them one by one.
  • If you claim that your software is platform-independent. How will you make sure that all the functionalities of your software are intact on multiple platforms? That’s where a ‘test-suite’ comes into picture. You can create a dedicated test environment to run your test-suite across different platforms adhering to your scale and stress needs.

I have written unit tests properly. Why should I write integration tests?

  • Because unit-tests test the source code you have written, the semantic and the syntax, not the actual functionality of your software.
  • Integration tests make sure that your source code, when integrated with all your components, serves the real purpose.
  • With addition of a new feature you can just re-trigger the suite and hence make sure that your new feature doesn’t break any existing functionality.

What is tempest?

  • One of Tempest’s prime functions is to ensure that your OpenStack cloud works with the OpenStack API as documented. The current largest portion of Tempest code is devoted to test cases that do exactly this.
  • It’s also important to test not only the expected positive path on APIs, but also to provide them with invalid data to ensure they fail in expected and documented ways. Over the course of the OpenStack project Tempest has discovered many fundamental bugs by doing just this.
  • In order for some APIs to return meaningful results, there must be enough data in the system. This means these tests might start by spinning up a server, image, etc., then operating on it.
  • Currently tempest has support to test both APIs and CLIs.

How to run the tempest tests?

  • There are multiple ways to run the tests. You have a choice to run the entire test suite, all the tests in a directory, a test module or a single test.
  • This document describes how to run the tempest tests using nosetests.
  • Install the basic dependencies.
  • Go to the tempest directory and execute the tests with the following commands:
    • nosetests -sv tempest.tests.identity.admin.test_services [This runs all the tests inside the module test_services.py]
    • nosetests -sv tempest.tests.identity.admin.test_services:ServicesTestJSON.test_create_get_delete_service [This runs a specific test inside the class ServicesTestJSON of the module test_services.py]

Basic Architecture

  • Tempest tests can be executed from either outside the openstack setup or from the openstack controller itself.
    • Tempest has a REST client implemented in the suite which takes care of the REST operations.
    • All the user inputs for performing the REST operations on the nova APIs are read from the tempest.conf file.
    • The suite first makes a REST call to the keystone API to get the auth token and tenant details for authorization and authentication.
    • Using the auth token obtained from the response, the tests perform the required operations (a stripped-down sketch of this authentication step follows below).
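As a rough, stripped-down illustration of that first authentication step, the snippet below does what the suite's REST client does before anything else: it asks keystone for a token and checks the response. The endpoint and credentials would normally come from tempest.conf and are placeholders here.

# Stripped-down illustration of the first step of a tempest run: get an auth token.
# The endpoint and credentials are placeholders (normally read from tempest.conf).
import unittest
import requests


class KeystoneAuthTest(unittest.TestCase):
    AUTH_URL = "http://controller:5000/v2.0/tokens"

    def test_get_token(self):
        body = {"auth": {"passwordCredentials": {"username": "demo",
                                                 "password": "secret"},
                         "tenantName": "demo"}}
        resp = requests.post(self.AUTH_URL, json=body)
        self.assertEqual(200, resp.status_code)
        token = resp.json()["access"]["token"]
        # The token id and tenant details are reused for all subsequent API calls.
        self.assertIn("id", token)
        self.assertIn("tenant", token)


if __name__ == "__main__":
    unittest.main()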

More to follow…

  • Debugging using eclipse..
  • Extending tempest framework for your project..

Debug Openstack code Local / Remote with Eclipse and PyDev Plug-In

This article is a result of my exhaustive search for finding a concrete way to debug Openstack. After referring several places, I have come up with a manual of my own on how to setup eclipse environment to debug and understand openstack code flow. It should be a good read if you have similar questions as posted below in your mind.

  • Is it possible to debug openstack code end-to-end?
  • Should I debug locally (everything configure inside eclipse)?
  • How to debug remotely running openstack services? Or a combination of the above two?
  • What developer tools/IDEs to use for debugging (eclipse + pydev, pdb, winpdb, pycharm)?
  • What’s the best/easiest/most sophisticated method to get everything set up quickly?

And there are a bunch of other questions, followed by multiple alternatives to choose from.

Here in this post, I have tried debugging using Eclipse with pydev plug-in.

Development Environment:

Linux distro: centos/ubuntu (I used VMware Workstation)
Install eclipse as per os type 32/64 bit on one of the VM: http://www.eclipse.org/
Configure python Interpreter in eclipse.
Install git plugin (only for local debug): http://download.eclipse.org/egit/updates, add this in Help-> install new software

How to Debug Openstack Services Locally?

To begin with, you can try with keystone in eclipse.

Also, set up the environment variables under the debug configuration for the keystone service to pick up:
OS_USERNAME,
OS_PASSWORD,
OS_TENANT_NAME,
OS_REGION_NAME,
OS_AUTH_URL
Optionally, set up the keystone.conf file as an argument under the debug configuration dialog.
For example, to test setup, put a break-point at:
File: keystone/identity/core.py
Method: def get_all_tenants(self, context, **kw):
Now, execute keystone-all (debug as-> python run) from eclipse

As you have already installed keystoneclient by following the above link, execute from a terminal:

$keystone tenant-list

(Check that the db is running and the iptables service is not blocking the port – just in case you get a 500 error with tenant-list.)
This should hit the break-point in the keystone service running in eclipse and ask to move to the debug perspective.

Voila, you have just got set up for local debugging.

Remote Debugging: 

Development Environment: 

In this case, I have used two VMs, one CentOS and the other Ubuntu 12.04.
Ubuntu VM – running the eclipse IDE with the pydev plug-in.
CentOS VM – running the openstack services.
Configure python Interpreter in eclipse.

Configure pydev debug server in eclipse.

To Remote debug, following link has most of the answers:

http://pydev.org/manual_adv_remote_debugger.html

Now, copy the /pysrc directory from the ubuntu VM to the centos VM.

/pysrc can be found in the eclipse installation under plugins/org.python.pydev_<version>/pysrc

On centos (the remote machine), the preferred place to copy it is under the python site-packages directory.

Ex: /usr/lib/python2.6/site-packages/pysrc/

Example-1: Remote debug keystone

Run the debug server in eclipse, note the debug server port.
File: keystone/keystone/identity/core.py

Function: def get_all_tenants(self, context, **kw):  # gives tenant-list

Under this function add line:

import pydevd; pydevd.settrace('<IP addr of eclipse vm>', port=8765, stdoutToServer=True, stderrToServer=True, suspend=True)

Next, File: /keystone/bin/keystone-all
To add pysrc to PYTHONPATH, add the following line after the “import sys” line:

sys.path.append('/usr/lib/python2.6/site-packages/pysrc/')

eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=monkeypatch_thread)
Comment out the above line, and add the following line:

eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=False)

This is most important for debugging; otherwise you will receive a “ThreadSuspended” error in eclipse.

As the debug server listens to a single thread, the above line takes away the green threading of the thread module.

Restart the keystone service:
$service keystone restart

$keystone tenant-list

On eclipse, switch to debug perspective.

You should be able to hit the break-point in the core.py file and step through further execution.

Eclipse_debug_server2
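One optional convenience, so you do not have to keep adding and removing the settrace line by hand: wrap it in a small helper that only attaches when an environment variable is set. This is just a sketch; pydevd here is the module copied over in the pysrc step above.

# Optional helper: attach to the PyDev debug server only when REMOTE_DEBUG=1 is set,
# so the settrace call can stay in the code without affecting normal runs.
import os

def maybe_settrace(host, port=8765):
    if os.environ.get("REMOTE_DEBUG") != "1":
        return
    import pydevd   # available once pysrc is on sys.path
    pydevd.settrace(host, port=port, stdoutToServer=True,
                    stderrToServer=True, suspend=True)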

Example-2: Debugging keystone(get auth-token) + nova-api

$nova flavor-list                #will debug this cli

File: /keystone/keystone/service.py
Class: class TokenController
Method: def authenticate(self, context, auth=None):

Add the following line:
import pydevd; pydevd.settrace('<IP addr of eclipse vm>', port=8765, stdoutToServer=True, stderrToServer=True, suspend=True)

Next, File: /keystone/bin/keystone-all
To add pysrc to PYTHONPATH, add the following line after the “import sys” line:
sys.path.append('/usr/lib/python2.6/site-packages/pysrc/')
eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=monkeypatch_thread)
Comment out the above line, and add the following line:

eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=False)

This is most important for debugging; otherwise you will receive a “ThreadSuspended” error in eclipse.

As the debug server listens to a single thread, the above line takes away the green threading of the thread module.

Restart the keystone service.

Next, File: nova/nova/api/openstack/compute/flavors.py
Class: class Controller(wsgi.Controller):
Method:
@wsgi.serializers(xml=FlavorsTemplate)

def detail(self, req):

Add the following line under this function:

import pydevd; pydevd.settrace('<IP addr of eclipse vm>', port=8765, stdoutToServer=True, stderrToServer=True, suspend=True)

File: nova/bin/nova-api
Add the following line after “import sys”:

sys.path.append('/usr/lib/python2.6/site-packages/pysrc/')

eventlet.monkey_patch(os=False)
Comment out the above line, and change it to:

eventlet.monkey_patch(all=False, socket=True, time=True, os=False, thread=False)

$service keystone restart
$service nova-api restart
$ nova flavor-list

In eclipse, this should hit the break-point in service.py for keystone.

Eclipse_debug_server3

After the keystone token is generated, control moves to flavors.py.

Eclipse_debug_server4

Things observed:

Path resolution:

If the python paths and/or openstack code paths are different on the two VMs, eclipse will not be able to locate the correct file to open and will respond with an open-file dialog; just cancel the dialog and the file from the remote machine gets displayed. This file gets stored into the pysrc/temporary_file directory.
To avoid this, on the server running the openstack service, go to the pysrc directory and modify the file pydevd_file_utils.py.

More info on this: http://pydev.org/manual_adv_remote_debugger.html

The whole idea of this blog post is to try out alternatives for debugging openstack code.
I have taken the simplest possible examples, put together in a very short time, to demonstrate that it works!

Request Flow for Provisioning Instance in Openstack

One of the most important use-cases in any cloud is provisioning a VM. In this article we shall walk through an instance (VM) being provisioned in an Openstack-based cloud. This article deals with the request flow and the component interaction of the various projects under Openstack. The end result will be booting up a VM.

request flow

Provisioning a new instance involves the interaction between multiple components inside OpenStack :

  • CLI: the command line interface for submitting commands to OpenStack Compute.
  • Dashboard (“Horizon”) provides the interface for all the OpenStack services.
  • Compute (“Nova”) retrieves virtual disk images (“Glance”), attaches the flavor and associated metadata, and transforms end-user API requests into running instances.
  • Network (“Quantum”) provides virtual networking for Compute, which allows users to create their own networks and then link them to the instances.
  • Block Storage (“Cinder”) provides persistent storage volumes for Compute instances.
  • Image (“Glance”) can store the actual virtual disk files in the Image Store.
  • Identity (“Keystone”) provides authentication and authorization for all OpenStack services.
  • Message Queue (“RabbitMQ”) handles the internal communication within Openstack components such as Nova, Quantum and Cinder.

The request flow for provisioning an Instance goes like this:

  1. The Dashboard or CLI gets the user credentials and does a REST call to Keystone for authentication.
  2. Keystone authenticates the credentials and generates & sends back an auth-token, which will be used for sending requests to the other components through REST calls.
  3. The Dashboard or CLI converts the new instance request specified in the ‘launch instance’ or ‘nova boot’ form to a REST API request and sends it to nova-api.
  4. nova-api receives the request and sends a request to keystone for validation of the auth-token and access permissions.
  5. Keystone validates the token and sends updated auth headers with roles and permissions.
  6. nova-api interacts with the nova-database.
  7. Creates the initial db entry for the new instance.
  8. nova-api sends the rpc.call request to nova-scheduler, expecting to get an updated instance entry with the host ID specified.
  9. nova-scheduler picks the request from the queue.
  10. nova-scheduler interacts with the nova-database to find an appropriate host via filtering and weighing.
  11. Returns the updated instance entry with the appropriate host ID after filtering and weighing.
  12. nova-scheduler sends the rpc.cast request to nova-compute for ‘launching the instance’ on the appropriate host.
  13. nova-compute picks the request from the queue.
  14. nova-compute sends the rpc.call request to nova-conductor to fetch instance information such as the host ID and flavor (RAM, CPU, disk).
  15. nova-conductor picks the request from the queue.
  16. nova-conductor interacts with the nova-database.
  17. Returns the instance information.
  18. nova-compute picks the instance information from the queue.
  19. nova-compute does a REST call by passing the auth-token to glance-api to get the image URI by image ID from glance and fetch the image from the image storage.
  20. glance-api validates the auth-token with keystone.
  21. nova-compute gets the image metadata.
  22. nova-compute does a REST call by passing the auth-token to the Network API to allocate and configure the network such that the instance gets an IP address.
  23. quantum-server validates the auth-token with keystone.
  24. nova-compute gets the network info.
  25. nova-compute does a REST call by passing the auth-token to the Volume API to attach volumes to the instance.
  26. cinder-api validates the auth-token with keystone.
  27. nova-compute gets the block storage info.
  28. nova-compute generates data for the hypervisor driver and executes the request on the hypervisor (via libvirt or an API).
The table below represents the instance state at various steps during provisioning; a small sketch reproducing steps 1–4 by hand follows the table:

Status   Task                   Power state   Steps
Build    scheduling             None          3-12
Build    networking             None          22-24
Build    block_device_mapping   None          25-27
Build    spawning               None          28
Active   none                   Running
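Steps 1–4 can be reproduced by hand with two REST calls, which is a handy sanity check when debugging this flow. The sketch below uses requests against the keystone v2.0 and nova v2 APIs; all endpoints, credentials and IDs are placeholders.

# Reproduce steps 1-4 by hand: authenticate with keystone, then POST a server-create
# request to nova-api. Endpoints, credentials, image and network IDs are placeholders.
import requests

KEYSTONE = "http://controller:5000/v2.0"
NOVA = "http://controller:8774/v2/%s"

# Steps 1-2: get an auth token (and the tenant id) from keystone.
auth = requests.post(KEYSTONE + "/tokens", json={
    "auth": {"passwordCredentials": {"username": "demo", "password": "secret"},
             "tenantName": "demo"}}).json()["access"]
token = auth["token"]["id"]
tenant_id = auth["token"]["tenant"]["id"]

# Steps 3-4: send the boot request to nova-api, which validates the token with
# keystone and then hands the request to nova-scheduler over the message queue.
resp = requests.post(NOVA % tenant_id + "/servers",
                     headers={"X-Auth-Token": token},
                     json={"server": {"name": "demo-vm",
                                      "imageRef": "REPLACE-WITH-IMAGE-UUID",
                                      "flavorRef": "1",
                                      "networks": [{"uuid": "REPLACE-WITH-NET-UUID"}]}})
print(resp.status_code, resp.json())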

Messaging in Openstack using RabbitMQ

AMQP is the messaging technology chosen by the OpenStack cloud. The OpenStack components such as Nova, Cinder and Quantum communicate internally via AMQP (Advanced Message Queuing Protocol) and with each other using REST calls. The AMQP broker, RabbitMQ, sits between any two internal Openstack components and allows them to communicate in a loosely coupled fashion, i.e. each component has, or makes use of, little or no knowledge of the definitions of the other components. More precisely, the Nova components (nova-api, nova-scheduler, nova-compute) use Remote Procedure Calls (RPC) to communicate with one another.

Generally, Openstack components use direct, fanout, and topic-based exchanges, which have been discussed in a previous blog post.

Openstack Messaging has two modes:

  • rpc.cast – don’t wait for result
  • rpc.call – wait for result (when there is something to return)

What if we need to run a function on a remote computer and wait for the result? This pattern is commonly known as Remote Procedure Call, or RPC, and it is a pretty common pattern in computing.

In OpenStack, Nova, Cinder and Quantum implement RPC (both request+response and one-way, respectively nicknamed ‘rpc.call’ and ‘rpc.cast’) over AMQP by providing an adapter class which takes care of marshaling and unmarshaling of messages into function calls. Each Nova component (for example api, compute, scheduler, etc.), Cinder component (for example volume, scheduler) and Quantum component (for example quantum-server, agents, plugins) creates two queues at initialization time: one which accepts messages with routing key ‘NODE-TYPE.NODE-ID’ (for example compute.hostname) and another which accepts messages with the generic routing key ‘NODE-TYPE’ (for example compute).

This is used specifically when nova-api needs to redirect commands to a specific node, as in ‘nova delete <instance>’. In this case, only the compute node whose host’s hypervisor is running the virtual machine can kill the instance. The API acts as a consumer when RPC calls are request/response; otherwise it acts as a publisher only.

openstackrabbitmq

Fig : Messaging in Openstack using RabbitMQ (Queue–server)

A single RabbitMQ instance is deployed and shared in the OpenStack cloud. Every component connects to the message broker and, depending on its personality (for example compute node, cinder or quantum), may use the queue either as an Invoker (such as API or Scheduler) or a Worker (such as Compute or Quantum). Invokers and Workers do not actually exist in the Nova object model, but we are going to use them as an abstraction for the sake of clarity.

An Invoker is a component that sends messages in the queuing system via two operations:  rpc.call and rpc.cast

A Worker is a component that receives messages from the queuing system and replies accordingly to rpc.call operations.

The following are the elements of a message broker node (referred to as a RabbitMQ node):

  • Topic Publisher: deals with an rpc.call or an rpc.cast operation and is used to push a message to the queuing system. Every publisher always connects to the same topic-based exchange; its life-cycle is limited to the message delivery.
  • Direct Consumer: deals only with rpc.call operations and is used to receive a response message from the queuing system. Every consumer connects to a unique direct-based exchange via a unique exclusive queue; its life-cycle is limited to the message delivery.
  • Topic Consumer: it is activated when a Worker is instantiated and exists throughout its life-cycle; it is used to receive messages from the queue and it invokes the appropriate action as defined by the Worker role. A Topic Consumer connects to the same topic-based exchange either via a shared queue or via a unique exclusive queue. Every Worker has two topic consumers, one that is addressed only during rpc.cast operations (and connects to a shared queue whose exchange key is ‘topic’) and the other that is addressed only during rpc.call operations (and connects to a unique queue whose exchange key is ‘topic.host’).
  • Direct Publisher: it comes to life only during rpc.call operations and is instantiated to return the message required by the request/response operation. The object connects to a direct-based exchange whose identity is dictated by the incoming message.

 

RPC calls in Openstack :

The diagram below shows the message flow during an rpc.call operation:

  1. A Topic Publisher is instantiated to send the message request to the queuing system; immediately before the publishing operation, a Direct Consumer is instantiated to wait for the response message.
  2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as ‘topic.host’) and passed to the Worker in charge of the task.
  3. Once the task is completed, a Direct Publisher is allocated to send the response message to the queuing system.
  4. Once the message is dispatched by the exchange, it is fetched by the Direct Consumer dictated by the routing key (such as ‘msg_id’) and passed to the Invoker.

rpccall

Fig: RPC calls in Openstack


RPC cast in Openstack

The diagram below shows the message flow during an rpc.cast operation:

  1. A Topic Publisher is instantiated to send the message request to the queuing system.
  2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as ‘topic’) and passed to the Worker in charge of the task.

rpccast

Fig : RPC cast in Openstack

The publisher (API) sends the message to a topic exchange (compute topic). A consumer (compute worker) retrieves the message from the queue. No response is expected as it is a cast and not a call.
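The cast pattern is easy to reproduce with Kombu, the messaging library Nova itself uses (see the note at the end of this post): a publisher pushes a message to a topic exchange and a consumer bound with the matching routing key picks it up, with no reply. The broker URL, exchange name and routing key below are stand-ins, not the exact names Nova uses.

# rpc.cast-style sketch with Kombu: publish to a topic exchange, consume, no reply.
# Broker URL, exchange name and routing key are stand-ins, not Nova's real names.
from kombu import Connection, Exchange, Queue, Producer, Consumer

exchange = Exchange("demo_nova", type="topic")
queue = Queue("demo_compute.host1", exchange, routing_key="demo_compute.host1")

def handle(body, message):
    # The "Worker" side: act on the requested method, then ack; nothing is sent back.
    print("cast received:", body)
    message.ack()

with Connection("amqp://guest:guest@localhost//") as conn:
    channel = conn.channel()

    # "Invoker" side: a topic publisher fires the message and forgets about it.
    Producer(channel, exchange=exchange).publish(
        {"method": "run_instance", "args": {"instance_id": "vm-1"}},
        routing_key="demo_compute.host1", declare=[queue])

    # "Worker" side: a topic consumer bound to the routing key picks it up.
    with Consumer(channel, queues=queue, callbacks=[handle]):
        conn.drain_events(timeout=5)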
 

The exchanges and queues created by the Openstack components are:

Exchanges and their types:

  • amq.direct (direct)
  • cinder-scheduler_fanout (fanout)
  • conductor_fanout (fanout)
  • amq.topic (topic)
  • cinder (topic)
  • amq.rabbitmq.trace (topic)
  • compute_fanout (fanout)
  • amq.rabbitmq.log (topic)
  • amq.fanout (fanout)
  • q-agent-notifier-network-delete_fanout (fanout)
  • cinder-volume_fanout (fanout)
  • amq.headers (headers)
  • nova (topic)
  • scheduler_fanout (fanout)
  • quantum (topic)
  • amq.match (headers)
  • dhcp_agent_fanout (fanout)
  • q-agent-notifier-security_group-update_fanout (fanout)
  • q-agent-notifier-port-update_fanout (fanout)

Queues (current message count in parentheses):

  • scheduler_fanout_300bc05b412948ca91e9c2609022d94a (0)
  • compute.localhost (0)
  • cinder-scheduler (0)
  • notifications.info (16)
  • q-agent-notifier-port-update_fanout_e84cd1190d3d4d6fab9c92b9903ad1ee (0)
  • compute_fanout_ae1e11827f144d5886f96cdcaba7f90b (0)
  • cinder-scheduler_fanout_ebe88ad41b7d450a95b183e6e7a404f0 (0)
  • conductor_fanout_d82adea2be344983bdc36756e58849f9 (0)
  • q-plugin (0)
  • dhcp_agent (0)
  • q-agent-notifier-network-delete_fanout_68eb13d73ccb4d97b84e2534f7181f02 (0)
  • conductor.localhost (0)
  • compute (0)
  • scheduler.localhost (0)
  • scheduler (0)
  • dhcp_agent_fanout_d00b708d17994e31bdad92876dcbafc5 (0)
  • q-agent-notifier-security_group-update_fanout_62f50e6f6327453ca02efb9e67212a53 (0)
  • conductor (0)
  • cinder-scheduler.localhost (0)
  • dhcp_agent.localhost (0)

Nova uses Kombu to connect to the RabbitMQ environment. Kombu is a messaging framework for Python. If you are interested in how the rpc-over-amqp stuff works, look at /nova/openstack/common/rpc/impl_kombu.py.

References :

For more details on RPC in Openstack :
http://docs.openstack.org/developer/nova/devref/rpc.html

Introduction to Openstack

openstack

OpenStack is a collection of open source technologies delivering a massively scalable cloud operating system.

OpenStack cloud operating system controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

We can think of it as software to power our own Infrastructure as a Service (IaaS) offering like Amazon Web Services.

openstack-software-diagram

release cycle

Fig : OpenStack and its release cycle

Openstack Projects :

Project          Codename
Dashboard        Horizon
Compute          Nova
Identity         Keystone
Network          Quantum
Image Service    Glance
Block Storage    Cinder
Object Storage   Swift

OpenStack Components :

There are currently seven core components of OpenStack, and how they conceptually interact with each other is shown below:

openstack-conceptual-arch

Fig : OpenStack Conceptual Architecture 

Now let's discuss each component and its services…

1. Horizon – Dashboard

 It provides a modular web-based user interface for all the OpenStack services. With this web GUI, you can perform most operations on your cloud like launching an instance, assigning IP addresses and setting access controls.

horizon-screenshot

Fig : Openstack Dashboard

2. Keystone – Identity

  • Keystone is a framework for authentication and authorization for all the OpenStack services.
  • Keystone handles API requests as well as providing configurable catalog, policy, token and identity services.
  • It provides the ability to add users to groups (also known as tenants) and to manage permissions between users and groups. Permissions include the ability to launch and terminate instances.

keystone

Fig : Openstack keystone 
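As a quick illustration of the token handling described above, here is a minimal sketch using the legacy python-keystoneclient v2.0 API; the auth URL and credentials are assumptions for a typical devstack setup.

# Minimal sketch with the legacy python-keystoneclient (identity API v2.0).
# The auth_url and credentials are assumptions for a typical devstack setup.
from keystoneclient.v2_0 import client as keystone_client

keystone = keystone_client.Client(username="admin",
                                  password="secret",
                                  tenant_name="admin",
                                  auth_url="http://192.168.1.11:5000/v2.0")

# The client authenticates on creation and receives a token...
print("Token:", keystone.auth_token)

# ...which is then used for authorized calls, such as listing tenants.
for tenant in keystone.tenants.list():
    print(tenant.id, tenant.name)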

3. Nova – Compute

It provides virtual servers upon demand. Nova is the most complicated and distributed component of OpenStack. A large number of processes cooperate to turn end user API requests into running virtual machines.
List of these processes and their functions:

  • nova-api: a RESTful API web service which accepts incoming commands to interact with the OpenStack cloud.
  • nova-compute: a worker daemon which creates and terminates virtual machine instances via the hypervisor’s APIs.
  • nova-scheduler: takes a request from the queue and determines which compute server host it should run on.
  • nova-conductor: provides services for nova-compute, such as completing database updates and handling long-running tasks.
  • nova database: stores most of the build-time and run-time state for a cloud infrastructure.
  • The queue provides a central hub for passing messages between daemons. This is usually implemented with RabbitMQ.
  • Nova also provides console services to allow end users to access their virtual instance’s console through a proxy. This involves several daemons (nova-console, nova-novncproxy and nova-consoleauth).
  • nova-network: a worker daemon very similar to nova-compute. It accepts networking tasks from the queue and then performs tasks to manipulate the network (such as setting up bridging interfaces or changing iptables rules). This functionality is being migrated to Quantum, a separate OpenStack service.
  • nova-volume: manages creation, attaching and detaching of persistent volumes to compute instances. This functionality is being migrated to Cinder, a separate OpenStack service.

 nova-compute
Fig: Openstack Nova

Nova also interacts with many other OpenStack services: Keystone for authentication, Glance for images and Horizon for the web interface. The Glance interactions are central: the API process can upload to and query Glance, while nova-compute will download images for use in launching instances.

4. Glance – Image Store

It provides discovery, registration and delivery services for disk and server images.
List of processes and their functions:

  • glance-api: accepts Image API calls for image discovery, image retrieval and image storage.
  • glance-registry: stores, processes and retrieves metadata about images (size, type, etc.).
  • glance database: a database to store the image metadata.
  • A storage repository for the actual image files. Glance supports normal filesystems, RADOS block devices, Amazon S3, HTTP and Swift.

Glance accepts API requests for images (or image metadata) from end users or Nova components and can store its disk files in the object storage service (Swift) or another storage repository; a minimal client-side query is sketched after the figure below.

glance

Fig: Openstack Glance
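For example, querying the image catalogue the way Nova does can be sketched with the legacy python-glanceclient v1 API; the endpoint is a placeholder and the token is assumed to come from keystone.

# Minimal sketch with the legacy python-glanceclient (image API v1).
# The glance endpoint is a placeholder; the token is assumed to come from keystone.
from glanceclient import Client as GlanceClient

glance = GlanceClient("1", endpoint="http://controller:9292",
                      token="REPLACE-WITH-KEYSTONE-TOKEN")

# Image metadata (name, size, disk format, ...) comes from glance-registry;
# the actual image bits live in the configured store (filesystem, Swift, S3, ...).
for image in glance.images.list():
    print(image.id, image.name, image.disk_format, image.size)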

5. Quantum – Network

It provides “network connectivity as a service” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., nova). The service works by allowing users to create their own networks and then attach interfaces to them. Quantum has a pluggable architecture to support many popular networking vendors and technologies.

  • quantum-server accepts API requests and routes them to the correct quantum plugin.
  • Plugins and agents perform the actual actions, like plugging/unplugging ports, creating networks and subnets, and IP addressing.
  • It also has a message queue to route info between quantum-server and the various agents.
  • It has a quantum database to store the networking state for particular plugins.

quantum

Fig: Openstack Quantum

Quantum will interact mainly with Nova, where it will provide networks and connectivity for its instances.

6. Cinder – Block Storage

Cinder allows block devices to be exposed and connected to compute instances for expanded storage & better performance.

  • cinder-api accepts requests and routes them to cinder-volume for action.
  • cinder-volume acts by reading from or writing to the cinder database to maintain state, interacting with other processes (like cinder-scheduler) through a message queue and directly with the block-storage-providing hardware or software.
  • cinder-scheduler picks the optimal block storage node to create the volume on.
  • The message queue routes information between the Cinder processes.
  • A cinder database stores volume state.

cinder

Fig: Openstack Cinder

Like Quantum, Cinder will mainly interact with Nova, providing volumes for its instances.

7. Swift – Object Storage

Object store allows you to store or retrieve files. It provides a fully distributed, API-accessible storage platform that can be integrated directly into applications or used for backup, archiving and data retention.

Note : Object Storage is not a traditional file system, but rather a distributed storage system for static data such as virtual machine images, photo storage, email storage, backups and archives.

  • The proxy server (swift-proxy-server) accepts incoming requests, like files to upload, modifications to metadata or container creation; it also serves files and container listings.
  • Account servers manage accounts defined with the object storage service.
  • Container servers manage the mapping of containers (folders) within the object store service.
  • Object servers manage the actual objects (files) on the storage nodes.

object store

Fig: Openstack Swift

Replication services also run to provide consistency and availability across the cluster, along with audit and update services.

All these components and how they relate to each other are shown in the simplest way in the OpenStack logical architecture below:

openstack-arch-grizzly-v1-logical

Fig : Openstack logical Architecture 

Features & Benefits of OpenStack

  • Instance life cycle management i.e. Run, reboot, suspend, resize and terminate instances.
  • Management of compute resources i.e. CPU, memory, disk, and network interfaces.
  • Management of Local Area Networks (Flat, Flat DHCP, VLAN DHCP and IPv6) through programmatically allocated IPs and VLANs.
  • API with rate limiting and Authentication to manage who has access to compute resources and prevent users from impacting each other with excessive API utilization.
  • Distributed and asynchronous architecture for massively scalable and highly available system.
  • Virtual Machine (VM) image management i.e. store, import, share, and query images.
  • Floating IP addresses i.e. Ability to assign (and re-assign) IP addresses to VMs.
  • Security Groups i.e. flexibility to assign and control access to VM instances by creating separation between resource pools.
  • Role Based Access Control (RBAC) to ensure security by user, role and project.
  • Projects & Quotas i.e. ability to allocate, track and limit resource utilization.
  • REST-based API.

References:

For more details on Openstack :

OpenStack Compute Administration Manual