Networking in OpenStack : Panoramic view

This article provides a basic introduction to OpenStack Neutron and to the different network modes and plugins available in OpenStack Networking.

Neutron is an OpenStack project to provide “networking as a service” between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).

Neutron was introduced as a core part of OpenStack with the initiative’s Folsom release. Prior to the Folsom release, networking functionality was hard-coded in the Nova compute module of OpenStack, which required developers to modify both compute and network features of OpenStack together. With Neutron, networking is a more modular element of OpenStack that can evolve independently.

The core Neutron API includes support for Layer 2 networking and IP address management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. Neutron includes a growing list of plugins that enable interoperability with various commercial and open source network technologies, including routers, switches, virtual switches and software-defined networking (SDN) controllers.

Note:  The OpenStack Foundation changed the name of its networking project from Quantum to Neutron due to a trademark conflict with a manufacturer of tape-based data backup systems.

Why Neutron?

  • Give cloud tenants an API to build rich networking topologies, and configure advanced network policies in the cloud.
    • Example: create multi-tier web application topology
  • Enable innovation through plugins (open and closed source) that introduce advanced network capabilities
    • Example: use L2-in-L3 tunneling to avoid VLAN limits, provide end-to-end QoS guarantees, use monitoring protocols like NetFlow.
  • Let anyone build advanced network services (open and closed source) that plug into Openstack tenant networks.
    • Examples: LB-aaS, VPN-aaS, firewall-aaS, IDS-aaS, data-center-interconnect-aaS.

Concepts

  • Network, representing isolated virtual Layer-2 domains; a network can also be regarded as a virtual (or logical) switch;
  • Subnet, representing an IPv4 or IPv6 address block from which IP addresses are assigned to the VMs on a given network.
  • Port, representing virtual (or logical) switch ports on a given network. Virtual instances attach their interfaces to ports. The logical port also defines the MAC address and the IP address(es) to be assigned to the interfaces plugged into it. When an IP address is associated with a port, this also implies the port is associated with a subnet, since the IP address is taken from the allocation pool of a specific subnet. These concepts are illustrated in the figure below:

Fig : Neutron concepts – network, subnet and port

High-level flow

  • Tenant creates a network (e.g., “net1”)
  • Tenant associates a subnet with that network (e.g., “10.0.0.0/24”)
  • Tenant boots a VM, specifying a single NIC connected to “net1” (e.g.: nova boot --image <image_name> --nic net-id=<id_of_net1> <server_name>)
  • Nova contacts Neutron and creates port1 on net1.
  • Neutron assigns an IP address to port1. (The IP is chosen by Neutron.)
  • Tenant destroys VM.
  • Nova contacts Neutron and destroys port1. The allocated IP is returned to the pool of available IP addresses.


Fig : High level flow
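The same flow can also be driven programmatically. Below is a minimal sketch using the python-neutronclient and python-novaclient libraries (assuming they are installed); the credentials, auth URL and the image/flavor IDs are placeholders that must be replaced with values from your own deployment.

from neutronclient.v2_0 import client as neutron_client
from novaclient.v1_1 import client as nova_client

# Placeholder credentials and endpoint -- substitute your own.
neutron = neutron_client.Client(username='demo', password='secret',
                                tenant_name='demo',
                                auth_url='http://controller:5000/v2.0')
nova = nova_client.Client('demo', 'secret', 'demo',
                          'http://controller:5000/v2.0')

# Tenant creates a network and associates a subnet with it.
net = neutron.create_network({'network': {'name': 'net1'}})
net_id = net['network']['id']
neutron.create_subnet({'subnet': {'network_id': net_id,
                                  'ip_version': 4,
                                  'cidr': '10.0.0.0/24'}})

# Tenant boots a VM with a single NIC on net1; Nova asks Neutron to create
# a port on net1 and Neutron picks an IP from the subnet's allocation pool.
server = nova.servers.create(name='server1',
                             image='<image_id>', flavor='<flavor_id>',
                             nics=[{'net-id': net_id}])

# Destroying the VM later makes Nova delete the port, returning the IP to the pool:
# nova.servers.delete(server)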

Openstack Networking Architecture:


Fig : Network Connectivity for Physical Hosts

A standard OpenStack Networking setup has up to four distinct physical data center networks:

  • Management network. Used for internal communication between OpenStack Components.   IP addresses on this network should be reachable only within the data center.
  • Data network. Used for VM data communication within the cloud deployment.  The IP addressing requirements of this network depend on the OpenStack Networking plugin being used.
  • External network. Used to provide VMs with Internet access in some deployment scenarios.  IP addresses on this network should be reachable by anyone on the Internet.
  • API network. Exposes all OpenStack APIs, including the OpenStack Networking API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. The API network may be the same as the external network, because it is possible to create an external-network subnet that is allocated IP ranges that use less than the full range of IP addresses in an IP block.

Network modes in Openstack :

  • Flat mode
  • Flat DHCP mode
  • VLAN DHCP mode

Flat mode

Flat mode is the simplest networking mode. Each instance receives a fixed IP from the pool. All instances are attached to the same bridge (br100 by default), which must be configured manually. The networking configuration is injected into the instance before it is booted, and there is no floating IP feature in this mode.


Fig : FlatManager network topology

Flat DHCP mode

This is similar to flat mode in that all instances are attached to the same bridge. In this mode Nova does a bit more configuration: it will attempt to bridge into an Ethernet device (eth0 by default), and it will also run dnsmasq as a DHCP server listening on this bridge. Instances receive their fixed IPs via DHCP discovery. Moreover, the floating IP feature is provided.


Fig: FlatDHCPManager – network topology

VLAN Network Mode

It is the default mode for Nova. It provides a private network segment for each project’s instances that can be accessed via a dedicated VPN connection from the Internet.

In this mode, each project gets its own VLAN, Linux networking bridge, and subnet. The subnets are specified by the network administrator, and are assigned dynamically to a project when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project.


Fig: VLANManager – network topology
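For reference, these modes are selected through the nova-network configuration rather than through Neutron. The nova.conf flags below are an illustrative sketch for VLAN mode (the values are examples, not recommendations); switching network_manager to FlatManager or FlatDHCPManager selects the other two modes.

# nova.conf (illustrative values only)
network_manager=nova.network.manager.VlanManager
vlan_interface=eth0
vlan_start=100
fixed_range=10.0.0.0/8
network_size=256
# For the flat modes instead:
# network_manager=nova.network.manager.FlatDHCPManager
# flat_network_bridge=br100
# flat_interface=eth0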

Plugins can have different properties in terms of hardware requirements, features, performance, scale, and operator tools. Because OpenStack Networking supports a large number of plugins (several are included in the OpenStack Networking distribution itself), the cloud administrator is able to weigh the different options and decide which networking technology is right for the deployment.


Debug OpenStack Code Locally / Remotely with Eclipse and the PyDev Plug-In

This article is the result of my exhaustive search for a concrete way to debug OpenStack. After referring to several sources, I have come up with a manual of my own on how to set up an Eclipse environment to debug and understand the OpenStack code flow. It should be a good read if you have questions like those posted below in your mind.

  • Is it possible to debug OpenStack code end-to-end?
  • Should I debug locally (everything configured inside Eclipse)?
  • How do I debug remotely running OpenStack services? Or a combination of the above two?
  • What developer tools/IDEs should be used for debugging (Eclipse + PyDev, pdb, winpdb, PyCharm)?
  • What is the best/easiest/most sophisticated method to get everything set up quickly?

And there are a bunch of other questions, followed by multiple alternatives to choose from.

Here in this post, I have tried debugging using Eclipse with the PyDev plug-in.

Development Environment:

Linux distro: CentOS / Ubuntu (I used VMware Workstation VMs)
Install Eclipse (32/64-bit, as per your OS) on one of the VMs: http://www.eclipse.org/
Configure the Python interpreter in Eclipse.
Install the git plugin (only for local debug): http://download.eclipse.org/egit/updates (add this under Help -> Install New Software).

How to Debug OpenStack Services Locally?

To begin with, you can try with Keystone in Eclipse.

Also, set up environment variables under the debug configuration for the keystone service to pick up:
OS_USERNAME,
OS_PASSWORD,
OS_TENANT_NAME,
OS_REGION_NAME,
OS_AUTH_URL
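For example, the variables could look like the following (these values are placeholders for a typical devstack-style setup, not required defaults):

OS_USERNAME=admin
OS_PASSWORD=secret
OS_TENANT_NAME=demo
OS_REGION_NAME=RegionOne
OS_AUTH_URL=http://127.0.0.1:5000/v2.0/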
Optionally, set up the keystone.conf file as an argument under the debug configuration dialog.
For example, to test setup, put a break-point at:
File: keystone/identity/core.py
Method: def get_all_tenants(self, context, **kw):
Now, execute keystone-all (Debug As -> Python Run) from Eclipse.

Assuming you have already installed python-keystoneclient, execute the following from a terminal:

$keystone tenant-list

(Check that the database is running and that the iptables service is not blocking the port, in case you get a 500 error with tenant-list.)
This should hit the break-point in the keystone service running in Eclipse and ask you to switch to the debug perspective.

Voila, you have just set everything up for local debugging.

Remote Debugging: 

Development Environment: 

In this case, I have used two VMs, one running CentOS and the other Ubuntu 12.04.
Ubuntu VM – running the Eclipse IDE with the PyDev plug-in.
CentOS VM – running the OpenStack services.
Configure the Python interpreter in Eclipse.

Configure pydev debug server in eclipse.

For remote debugging, the following link has most of the answers:

http://pydev.org/manual_adv_remote_debugger.html

Now, copy the /pysrc directory from the Ubuntu VM to the CentOS VM.

/pysrc can be found in the Eclipse installation under plugins/org.python.pydev_<version>/pysrc

On CentOS (the remote machine), the preferred place to copy it is under the Python site-packages directory.

Ex: /usr/lib/python2.6/site-packages/pysrc/

Example-1: Remote debug keystone

Run the debug server in eclipse, note the debug server port.
File: keystone/keystone/identity/core.py

Function: def get_all_tenants(self, context, **kw):  # gives tenant-list

Under this function, add the line:

import pydevd; pydevd.settrace('<IP addr of eclipse vm>', port=8765, stdoutToServer=True, stderrToServer=True, suspend=True)

Next, edit the file /keystone/bin/keystone-all.
To add pysrc to PYTHONPATH, add the following line after the “import sys” line:

sys.path.append('/usr/lib/python2.6/site-packages/pysrc/')

Then comment out the line

eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=monkeypatch_thread)

and add the following line in its place:

eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=False)

This step is most important for debugging; otherwise you will receive a “ThreadSuspended” error in Eclipse.

Since the debug server listens on a single thread, the line above takes away the green threading of the thread module.

Restart keystone service
$service keystone restart

$keystone tenant-list

On eclipse, switch to debug perspective.

You should be able to hit the break-point in the core.py file and step through the rest of the execution.

Fig: Eclipse debug server – break-point hit in keystone core.py

Example-2: Debugging keystone (get auth-token) + nova-api

$nova flavor-list                # we will debug this CLI call

File: /keystone/keystone/service.py
Class: class TokenController
Method:def authenticate(self, context, auth=None):

Add the following line:
import pydevd; pydevd.settrace('<IP addr of eclipse vm>', port=8765, stdoutToServer=True, stderrToServer=True, suspend=True)

Next, modify /keystone/bin/keystone-all exactly as in Example-1 (if not already done): add

sys.path.append('/usr/lib/python2.6/site-packages/pysrc/')

after the “import sys” line, and replace the eventlet monkey_patch line with

eventlet.patcher.monkey_patch(all=False, socket=True, time=True, thread=False)

so that the single-threaded debug server does not trigger a “ThreadSuspended” error in Eclipse.

Restart keystone service

Next, File: nova/nova/api/openstack/compute/flavors.py
Class: class Controller(wsgi.Controller):
Method:
@wsgi.serializers(xml=FlavorsTemplate)

def detail(self, req):

Add the following line under this function:

import pydevd; pydevd.settrace('<IP addr of eclipse vm>', port=8765, stdoutToServer=True, stderrToServer=True, suspend=True)

File: nova/bin/nova-api
Add the following line after “import sys”:

sys.path.append('/usr/lib/python2.6/site-packages/pysrc/')

Then comment out the line

eventlet.monkey_patch(os=False)

and change it to:

eventlet.monkey_patch(all=False, socket=True, time=True, os=False, thread=False)

$service keystone restart
$service nova-api restart
$ nova flavor-list

In Eclipse, this should hit the break-point in service.py for Keystone.

Fig: Eclipse debug server – break-point hit in keystone service.py

After the Keystone token is generated, control moves to flavors.py.

Fig: Eclipse debug server – control moved to nova flavors.py

Things observed:

Path resolution:

If the Python paths and/or OpenStack code paths differ between the two VMs, Eclipse will not be able to locate the correct file to open and will respond with an open-file dialog; just cancel the dialog and the file from the remote machine will be displayed. This file gets stored in the pysrc/temporary_file directory.
To avoid this, on the server running the OpenStack service, go to the pysrc directory and modify the file pydevd_file_utils.py.

More info on this: http://pydev.org/manual_adv_remote_debugger.html

The whole idea of this blog post is to try out alternatives for debugging OpenStack code.
I have taken the simplest possible examples, put together in a very short time, to demonstrate that it works!

Request Flow for Provisioning Instance in Openstack

One of the most important use-cases in any cloud is provisioning a VM. In this article we shall walk through how an instance (VM) is provisioned in an OpenStack-based cloud, covering the request flow and the component interaction of the various projects under OpenStack. The end result will be a booted VM.

Fig : Request flow for provisioning an instance

Provisioning a new instance involves the interaction between multiple components inside OpenStack :

  • CLI – the command-line interface for submitting commands to OpenStack Compute.
  • Dashboard (“Horizon”) provides the interface for all the OpenStack services.
  • Compute (“Nova”) retrieves virtual disk images from Image (“Glance”), attaches the flavor and associated metadata, and transforms end-user API requests into running instances.
  • Network (“Quantum”) provides virtual networking for Compute, which allows users to create their own networks and then link them to the instances.
  • Block Storage (“Cinder”) provides persistent storage volumes for Compute instances.
  • Image (“Glance”) can store the actual virtual disk files in the Image Store.
  • Identity (“Keystone”) provides authentication and authorization for all OpenStack services.
  • Message Queue (“RabbitMQ”) handles the internal communication within OpenStack components such as Nova, Quantum and Cinder.

The request flow for provisioning an Instance goes like this:

  1. Dashboard or CLI gets the user credential and does the REST call to Keystone for authentication.
  2. Keystone authenticates the credentials and generates and sends back an auth-token, which will be used for sending requests to other components through REST calls.
  3. Dashboard or CLI converts the new instance request specified in the ‘launch instance’ or ‘nova-boot’ form into a REST API request and sends it to nova-api.
  4. nova-api receives the request and sends a request to Keystone to validate the auth-token and check access permissions.
  5. Keystone validates the token and sends updated auth headers with roles and permissions.
  6. nova-api interacts with nova-database.
  7. Creates an initial database entry for the new instance.
  8. nova-api sends an rpc.call request to nova-scheduler expecting to get an updated instance entry with the host ID specified.
  9. nova-scheduler picks the request from the queue.
  10. nova-scheduler interacts with nova-database to find an appropriate host via filtering and weighing.
  11. Returns the updated instance entry with appropriate host ID after filtering and weighing.
  12. nova-scheduler sends an rpc.cast request to nova-compute for launching the instance on the appropriate host.
  13. nova-compute picks the request from the queue.
  14. nova-compute sends an rpc.call request to nova-conductor to fetch the instance information such as host ID and flavor (RAM, CPU, disk).
  15. nova-conductor picks the request from the queue.
  16. nova-conductor interacts with nova-database.
  17. Returns the instance information.
  18. nova-compute picks the instance information from the queue.
  19. nova-compute does a REST call by passing the auth-token to glance-api to get the image URI by image ID and downloads the image from the image store.
  20. glance-api validates the auth-token with Keystone.
  21. nova-compute gets the image metadata.
  22. nova-compute does a REST call by passing the auth-token to the Network API to allocate and configure the network so that the instance gets an IP address.
  23. quantum-server validates the auth-token with Keystone.
  24. nova-compute gets the network info.
  25. nova-compute does a REST call by passing the auth-token to the Volume API to attach volumes to the instance.
  26. cinder-api validates the auth-token with Keystone.
  27. nova-compute gets the block storage info.
  28. nova-compute generates data for the hypervisor driver and executes the request on the hypervisor (via libvirt or an API).
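As a rough illustration of steps 1-3, the snippet below obtains a token from Keystone and then asks nova-api to boot an instance, using urllib2 in the same style as the REST examples later in this post; the endpoints, credentials, tenant ID and image/flavor references are placeholder assumptions.

import json
import urllib2

# Steps 1-2: authenticate against Keystone and obtain an auth-token.
auth_body = json.dumps({'auth': {'passwordCredentials':
                                 {'username': 'demo', 'password': 'secret'},
                                 'tenantName': 'demo'}})
req = urllib2.Request('http://127.0.0.1:5000/v2.0/tokens', auth_body,
                      {'Content-type': 'application/json'})
token = json.loads(urllib2.urlopen(req).read())['access']['token']['id']

# Step 3: send the boot request to nova-api with the token in X-Auth-Token.
boot_body = json.dumps({'server': {'name': 'test-vm',
                                   'imageRef': '<image_id>',
                                   'flavorRef': '1'}})
req = urllib2.Request('http://127.0.0.1:8774/v2/<tenant_id>/servers', boot_body,
                      {'Content-type': 'application/json',
                       'X-Auth-Token': token})
print urllib2.urlopen(req).read()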

The table below shows the instance state at various steps during provisioning:

Status   Task                   Power state   Steps
Build    scheduling             None          3-12
Build    networking             None          22-24
Build    block_device_mapping   None          25-27
Build    spawning               None          28
Active   none                   Running

Understanding REST: Implementation

REST web APIs

A REST web API is a web implementation of the principles of REST using HTTP. It is a collection of resources, with defined facets:

  • the base URI for the API, such as http://www.iLearnStack.com/phonebook/UserDetails/
  • the internet media type of the data supported by the web API, such as JSON, XML
  • the set of operations supported by the API, such as GET, PUT, POST, DELETE ( known as request methods for HTTP)
  • the API must be hypertext driven

Considering a generic example, the standard behavior of the request methods can be summarized as:
    Resource: http://www.iLearnStack.com/phonebook/UserDetails/ (Collection URI)
        GET: List the members of the addressed collection (in the given scenario, UserDetails)
        PUT: Replace the entire addressed collection with another collection
        POST: Create a new entry in the addressed collection
        DELETE: Delete the entire addressed collection

    Resource: http://www.iLearnStack.com/phonebook/UserDetails/12345
        GET: Retrieve the representation of the addressed member (in given scenario; 12345) of collection
        PUT: Replace the addressed member of the collection, or if it doesn’t exist, create it.
        POST: Not generally used. Treat the addressed member as a collection in its own right and create a new entry in it.
        DELETE: Delete the addressed member of the collection.

Using REST in Python

Issuing GET Requests

The Python module urllib2 is used to read URLs:

import urllib2
url = 'http://www.iLearnStack.com/phonebook/UserDetails/12345'
response = urllib2.urlopen(url).read()

Issuing POST Requests

A POST Request is simply passing request data as an encoded parameter to urlopen, hence:

import urllib2
import urllib
url = 'http://www.iLearnStack.com/phonebook/UserDetails/12345'
parameters = urllib.urlencode({
    'name': 'alpha',
    'surname': 'beta'
    })
response = urllib2.urlopen(url, parameters).read()
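urllib2 only issues GET and POST on its own; a common workaround (sketched below, other HTTP libraries work equally well) is to override the request method in order to send the PUT and DELETE verbs listed above:

import urllib
import urllib2

url = 'http://www.iLearnStack.com/phonebook/UserDetails/12345'
parameters = urllib.urlencode({'name': 'alpha', 'surname': 'beta'})

# PUT: replace the addressed member (or create it if it does not exist).
request = urllib2.Request(url, parameters)
request.get_method = lambda: 'PUT'
response = urllib2.urlopen(request).read()

# DELETE: delete the addressed member of the collection.
request = urllib2.Request(url)
request.get_method = lambda: 'DELETE'
response = urllib2.urlopen(request).read()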

REST: Openstack Implementation

Getting Credentials

Credentials are a combination of a user name, a password, and the tenant (or project) your cloud is running under. An additional token needs to be generated if you are interacting with the cloud directly through the API endpoints rather than with a client. The cloud administrator can provide a user name and a password, as well as a tenant identifier, so that authorization tokens can be generated.
The tokens generated are typically good for 24 hours. The work flow goes like this:

 

  • Begin API requests by asking for an authorization token from the endpoint your cloud administrator gave you. You send your user name, password, and what group or account you are with (the “tenant” in auth-speak).
curl -k -X 'POST' -v https://arm.trystack.org:5443/v2.0/tokens -d '{"auth":{"passwordCredentials":{"username": "joecool", "password":           "coolword"}, "tenantId":"5"}}' -H 'Content-type: application/json'
  • The server returns a response containing the 24-hour token. Use that token to send API requests with X-Auth-Token included as a header field.
curl -k -D - -H "X-Auth-Token: 7d2f63fd-4dcc-4752-8e9b-1d08f989cc00" -X 'GET' -v https://arm.trystack.org:9774/v1.1/296/extensions -H 'Content-type: application/json'
  • Repeatedly send API requests with that token in the X-Auth-Token header until either: 1) the job’s done or 2) you get a 401 (Unauthorized) code in return.
  • Request a token again when you get a 401 (Unauthorized) response.

Sending Requests to the API

A couple of options are available for sending requests to OpenStack through an API. Developers and testers may prefer to use cURL, the command-line tool. With cURL, HTTP requests can be sent and responses received from the command line.

For a graphical interface, the REST client for Firefox also works well for testing and trying out commands.

Tokens need to be generated if cURL or a REST client is used.

The following example illustrates a curl command that checks the list of servers.

curl -v -H "X-Auth-Token:tokengoeshere" http://127.0.0.1:8774/v2/tenantnnnnnn/servers

The output returned for the number of running servers looks like:

{
    "servers": [
        {
            "id": "server***",
            "links": [
                {
                    "href": "http://127.0.0.1:8774/v2/tenantnnnnnn/servers/server***",
                    "rel": "self"
                },
                {
                    "href": "http://127.0.0.1:8774/tenantnnnnnn/servers/server***",
                    "rel": "bookmark"
                }
            ],
            "name": "Test Server"
        }
    ]
}
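The same response can of course be consumed programmatically; a small sketch (the token and tenant ID are the placeholders used above):

import json
import urllib2

req = urllib2.Request('http://127.0.0.1:8774/v2/tenantnnnnnn/servers',
                      headers={'X-Auth-Token': 'tokengoeshere'})
servers = json.loads(urllib2.urlopen(req).read())['servers']
for server in servers:
    print server['id'], server['name']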

For the detailed APIs available for OpenStack, this link can be referred to.

References

Learn REST

Openstack API start

Messaging in Openstack using RabbitMQ

AMQP is the messaging technology chosen by the OpenStack cloud. OpenStack components such as Nova, Cinder and Quantum communicate internally via AMQP (Advanced Message Queuing Protocol) and with each other through REST calls. The AMQP broker, RabbitMQ, sits between any two internal OpenStack components and allows them to communicate in a loosely coupled fashion, i.e., each component has, or makes use of, little or no knowledge of the definitions of the other components. More precisely, the Nova components (nova-api, nova-scheduler, nova-compute) use Remote Procedure Calls (RPC) to communicate with one another.

Generally, OpenStack components use the direct, fanout and topic-based exchanges that were discussed in this previous blog post.

Openstack Messaging has two modes:

  • rpc.cast – don’t wait for result
  • rpc.call – wait for result (when there is something to return)

What if we need to run a function on a remote computer and wait for the result? This is a pretty common pattern in computing, known as Remote Procedure Call or RPC.

In OpenStack, Nova, Cinder and Quantum implement RPC (both request/response and one-way, nicknamed ‘rpc.call’ and ‘rpc.cast’ respectively) over AMQP by providing an adapter class which takes care of marshalling and unmarshalling messages into function calls. Each Nova component (for example api, compute, scheduler), Cinder component (for example volume, scheduler) and Quantum component (for example quantum-server, agents, plugins) creates two queues at initialization time: one which accepts messages with routing key ‘NODE-TYPE.NODE-ID’ (for example compute.hostname) and another which accepts messages with the generic routing key ‘NODE-TYPE’ (for example compute).

The former is used specifically when nova-api needs to redirect commands to a specific node, such as ‘nova delete instance’: in this case, only the compute node whose hypervisor is running the virtual machine can kill the instance. The API acts as a consumer when RPC calls are request/response; otherwise it acts as a publisher only.


Fig : Messaging in OpenStack using RabbitMQ (queue server)

A single instance of the message broker is deployed and shared in an OpenStack cloud. Every component connects to the message broker and, depending on its personality (for example compute node, Cinder or Quantum), may use the queue either as an Invoker (such as API or Scheduler) or as a Worker (such as Compute or Quantum). Invokers and Workers do not actually exist in the Nova object model, but we are going to use them as an abstraction for the sake of clarity.

An Invoker is a component that sends messages in the queuing system via two operations:  rpc.call and rpc.cast

A Worker is a component that receives messages from the queuing system and replies to rpc.call operations accordingly.

The following are the elements of a message broker node (referred to as a RabbitMQ node):

  • Topic Publisher:  deals with an rpc.call or an rpc.cast operation and used to push a message to the queuing system. Every publisher connects always to the same topic-based exchange; its life-cycle is limited to the message delivery. 
  • Direct Consumer: deals with only rpc.call operation used to receive a response message from the queuing system; Every consumer connects to a unique direct-based exchange via a unique exclusive queue; its life-cycle is limited to the message delivery. 
  • Topic Consumer: it is activated when a Worker is instantiated and exists throughout its life-cycle; it is used to receive messages from the queue and it invokes the appropriate action as defined by the Worker role. A Topic Consumer connects to the same topic-based exchange either via a shared queue or via a unique exclusive queue. Every Worker has two topic consumers, one that is addressed only during rpc.cast operations (and it connects to a shared queue whose exchange key is ‘topic’) and the other that is addressed only during rpc.call operations (and it connects to a unique queue whose exchange key is ‘topic.host’).
  • Direct Publisher: it comes to life only during rpc.call operations and it is instantiated to return the message required by the request/response operation. The object connects to a direct-based exchange whose identity is dictated by the incoming message.

 

RPC calls in Openstack :

The diagram below shows the message flow during an rpc.call operation:

  1. a Topic Publisher is instantiated to send the message request to the queuing system; immediately before the publishing operation, a Direct Consumer is instantiated to wait for the response message.
  2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as ‘topic.host’) and passed to the Worker in charge of the task.
  3. Once the task is completed, a Direct Publisher is allocated to send the response message to the queuing system.
  4. Once the message is dispatched by the exchange, it is fetched by the Direct Consumer dictated by the routing key (such as ‘msg_id’) and passed to the Invoker.


Fig: RPC calls in Openstack


RPC cast in Openstack

The diagram below shows the message flow during an rpc.cast operation:

  1. A Topic Publisher is instantiated to send the message request to the queuing system.
  2. Once the message is dispatched by the exchange, it is fetched by the Topic Consumer dictated by the routing key (such as ‘topic’) and passed to the Worker in charge of the task.


Fig : RPC cast in Openstack

The publisher (API) sends the message to a topic exchange (compute topic). A consumer (compute worker) retrieves the message from the queue. No response is expected as it is a cast and not a call.
 

The exchanges and queues created by the OpenStack components are listed below.

Exchanges and their types:

  • amq.direct      direct
  • cinder-scheduler_fanout fanout
  • conductor_fanout        fanout
  • amq.topic       topic
  • cinder  topic
  • amq.rabbitmq.trace      topic
  • compute_fanout  fanout
  • amq.rabbitmq.log        topic
  • amq.fanout      fanout
  • q-agent-notifier-network-delete_fanout  fanout
  • cinder-volume_fanout    fanout
  • amq.headers     headers
  • nova    topic
  • scheduler_fanout        fanout
  • quantum topic
  • amq.match       headers
  • dhcp_agent_fanout       fanout
  • q-agent-notifier-security_group-update_fanout   fanout
  • q-agent-notifier-port-update_fanout     fanout

Queues:

  • scheduler_fanout_300bc05b412948ca91e9c2609022d94a       0
  • compute.localhost   0
  • cinder-scheduler        0
  • notifications.info      16
  • q-agent-notifier-port-update_fanout_e84cd1190d3d4d6fab9c92b9903ad1ee    0
  • compute_fanout_ae1e11827f144d5886f96cdcaba7f90b 0
  • cinder-scheduler_fanout_ebe88ad41b7d450a95b183e6e7a404f0        0
  • conductor_fanout_d82adea2be344983bdc36756e58849f9       0
  • q-plugin        0
  • dhcp_agent      0
  • q-agent-notifier-network-delete_fanout_68eb13d73ccb4d97b84e2534f7181f02 0
  • conductor.localhost 0
  • compute 0
  • scheduler.localhost 0
  • scheduler       0
  • dhcp_agent_fanout_d00b708d17994e31bdad92876dcbafc5      0
  • q-agent-notifier-security_group-update_fanout_62f50e6f6327453ca02efb9e67212a53  0
  • conductor       0
  • cinder-scheduler.localhost    0
  • dhcp_agent.localhost  0

Nova uses Kombu to connect to the RabbitMQ environment. Kombu is a messaging framework for Python. If you are interested in how the rpc-over-amqp machinery works, look at /nova/openstack/common/rpc/impl_kombu.py.
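To get a feel for what happens under the hood, here is a minimal, simplified sketch of an rpc.cast-style interaction written directly with Kombu. This is not Nova's actual code; the broker URL, exchange and queue names are illustrative assumptions.

from kombu import Connection, Exchange, Queue

# Illustrative names only -- not the exact exchanges/queues Nova declares.
nova_exchange = Exchange('nova', type='topic', durable=False)
compute_queue = Queue('compute', exchange=nova_exchange,
                      routing_key='compute', durable=False)

with Connection('amqp://guest:guest@localhost:5672//') as conn:
    # Invoker side (an rpc.cast): publish the message and do not wait for a reply.
    producer = conn.Producer()
    producer.publish({'method': 'run_instance', 'args': {'instance_id': 42}},
                     exchange=nova_exchange, routing_key='compute',
                     declare=[compute_queue])

    # Worker side: a topic consumer fetches the message and invokes the
    # appropriate action as defined by the worker role.
    def handle(body, message):
        print 'received:', body
        message.ack()

    with conn.Consumer(compute_queue, callbacks=[handle]):
        conn.drain_events(timeout=5)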

References :

For more details on RPC in Openstack :
http://docs.openstack.org/developer/nova/devref/rpc.html

Introduction to Openstack


OpenStack is a collection of open source technologies delivering a massively scalable cloud operating system.

OpenStack cloud operating system controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

We can think of it as software to power our own Infrastructure as a Service (IaaS) offering like Amazon Web Services.

Fig : OpenStack software diagram


Fig : OpenStack and its release cycle

Openstack Projects :

Project          Codename
Dashboard        Horizon
Compute          Nova
Identity         Keystone
Network          Quantum
Image Service    Glance
Block Storage    Cinder
Object Storage   Swift

OpenStack Components :

There are currently seven core components of OpenStack; how they conceptually interact with each other is shown below:

openstack-conceptual-arch

Fig : OpenStack Conceptual Architecture 

Now let’s discuss each component and its services.

1.  Horizon – Dashboard

It provides a modular web-based user interface for all the OpenStack services. With this web GUI, you can perform most operations on your cloud, like launching an instance, assigning IP addresses and setting access controls.


Fig : Openstack Dashboard

2. Keystone – Identity

  • Keystone is a framework for authentication and authorization for all the OpenStack services.
  • Keystone handles API requests as well as providing configurable catalog, policy, token and identity services.
  • It provides the ability to add users to groups (also known as tenants) and to manage permissions between users and groups. Permissions include the ability to launch and terminate instances.


Fig : Openstack keystone 
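As a small illustration, the Python keystoneclient library (assuming it is installed) can be used to authenticate and inspect tenants; the credentials and auth URL below are placeholders.

from keystoneclient.v2_0 import client

keystone = client.Client(username='admin', password='secret',
                         tenant_name='admin',
                         auth_url='http://127.0.0.1:5000/v2.0/')

# The token obtained here is what the other services validate requests against.
print keystone.auth_token
for tenant in keystone.tenants.list():
    print tenant.name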

3. Nova – Compute

It provides virtual servers upon demand. Nova is the most complicated and distributed component of OpenStack. A large number of processes cooperate to turn end user API requests into running virtual machines.
List of these processes and their functions:

  • nova-api: it’s a RESTful API web service which accepts incoming commands to interact with the OpenStack cloud.
  • nova-compute: it’s a worker daemon which creates and terminates virtual machine instances via the hypervisor’s APIs.
  • nova-scheduler: it takes a request from the queue and determines which compute server host it should run on.
  • nova-conductor: it provides services for nova-compute, such as completing database updates and handling long-running tasks.
  • nova database: It stores most of the build-time and run-time state for a cloud infrastructure.
  • The queue provides a central hub for passing messages between daemons. This is usually implemented with RabbitMQ.
  • Nova also provides console services to allow end users to access their virtual instance’s console through a proxy. This involves several daemons (nova-console, nova-novncproxy and nova-consoleauth).
  • nova-network : it’s a worker daemon very similar to nova-compute. It accepts networking tasks from the queue and then performs tasks to manipulate the network (such as setting up bridging interfaces or changing iptables rules). This functionality is being migrated to Quantum, a separate OpenStack service.
  • nova-volume : Manages creation, attaching and detaching of persistent volumes to compute instances. This functionality is being migrated to Cinder, a separate OpenStack service.

Fig: Openstack Nova

Nova also interacts with many other OpenStack services: Keystone for authentication, Glance for images and Horizon for the web interface. The Glance interactions are central: the API process can upload and query Glance, while nova-compute downloads images for use in launching instances.

4. Glance – Image Store

It provides discovery, registration and delivery services for disk and server images.
List of processes and their functions:

  • glance-api :  It accepts Image API calls for image discovery, image retrieval and image storage.
  • glance-registry : it stores, processes and retrieves metadata about images (size, type, etc.).
  • glance database : A database to store the image metadata.
  • A storage repository for the actual image files. Glance supports normal filesystems, RADOS block devices, Amazon S3, HTTP and Swift.

Glance accepts API requests for images (or image metadata) from end users or Nova components and can store its disk files in the object storage service, Swift or other storage repository.


Fig: Openstack Glance

5. Quantum – Network

It provides “ network connectivity as a service ” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., nova). The service works by allowing users to create their own networks and then attach interfaces to them. Quantum has a pluggable architecture to support many popular networking vendors and technologies.

  • quantum-server accepts API requests and routes them to the correct quantum plugin.
  • Plugins and agents perform the actual actions, like plugging/unplugging ports, creating networks and subnets, and IP addressing.
  • It also has a message queue to route information between quantum-server and the various agents.
  • It has a quantum database to store the networking state for particular plugins.


Fig: Openstack Quantum

Quantum will interact mainly with Nova, where it will provide networks and connectivity for its instances.

6. Cinder – Block Storage

Cinder allows block devices to be exposed and connected to compute instances for expanded storage & better performance.

  • cinder-api accepts requests and routes them to cinder-volume for action.
  • cinder-volume acts on the requests, reading from and writing to the cinder database to maintain state, interacting with other processes (like cinder-scheduler) through a message queue, and working directly on the block storage hardware or software.
  • cinder-scheduler picks the optimal block storage node to create the volume on.
  • A message queue routes information between the Cinder processes.
  • A cinder database stores volume state.


Fig: Openstack Cinder

Like Quantum, Cinder will mainly interact with Nova, providing volumes for its instances.

7. Swift – Object Storage

Object store allows you to store or retrieve files. It provides a fully distributed, API-accessible storage platform that can be integrated directly into applications or used for backup, archiving and data retention.

Note : Object Storage is not a traditional file system, but rather a distributed storage system for static data such as virtual machine images, photo storage, email storage, backups and archives.

  • The proxy server (swift-proxy-server) accepts incoming requests, like files to upload, modifications to metadata, or container creation; it also serves files and container listings.
  • The account server manages accounts defined within the object storage service.
  • Container servers manage a mapping of containers (folders) within the object storage service.
  • Object servers manage the actual objects (files) on the storage nodes.


Fig: Openstack Swift

Replication services also run to provide consistency and availability across the cluster, along with audit and update services.

All these components and how they relate to each other are shown in the simplest way in the OpenStack logical architecture below:


Fig : Openstack logical Architecture 

Features & Benefits of OpenStack

  • Instance life cycle management i.e. Run, reboot, suspend, resize and terminate instances.
  • Management of compute resources i.e. CPU, memory, disk, and network interfaces.
  • Management of Local Area Networks (Flat, Flat DHCP, VLAN DHCP and IPv6) through programmatic allocation of IPs and VLANs.
  • API with rate limiting and authentication to manage who has access to compute resources and to prevent users from impacting each other with excessive API utilization.
  • Distributed and asynchronous architecture for massively scalable and highly available system.
  • Virtual Machine (VM) image management i.e. store, import, share, and query images.
  • Floating IP addresses i.e. Ability to assign (and re-assign) IP addresses to VMs.
  • Security Groups i.e. flexibility to assign and control access to VM instances by creating separation between resource pools.
  • Role Based Access Control (RBAC) to ensure security by user, role and project.
  • Projects & Quotas i.e. ability to allocate, track and limit resource utilization.
  • REST-based API.

References:

For more details on Openstack :

OpenStack Compute Administration Manual