Deploying multi-node OpenStack with OpenContrail

In this post, I have chosen OpenContrail as the SDN solution for OpenStack Neutron networking.
OpenContrail on its own has a large number of components/services, which is difficult to cover in a single post.
If you are not familiar with it, refer to the OpenContrail architecture documentation.

The OpenContrail site contains lots of interesting blog posts on various topics… it’s an endless world out there 🙂

This post is intended to get you started with an environment you can play around with and get a feel for.
I will primarily focus on creating a multi-node setup using OpenStack and OpenContrail.
We will look at the configuration points for OpenStack and OpenContrail,
then do some basic operations, like launching VMs and running ssh and ping to check the connectivity.

So, I guess, enough talking. Let’s get technical.

What do you need for the environment?

  • A machine which supports virtualization (Intel-VT/AMD-V)
  • Memory: 8 GB minimum; about 5.5 GB will be used by the three VMs
  • Disk: 60 GB free space
  • VirtualBox/VMware Player/KVM host – I have VirtualBox
  • A cup of coffee!

Create three virtual machines with Ubuntu 14.04 server.

VM1 specification (contrail-openstack-controller):

eth0: 192.168.1.11 (bridged adapter)
eth1: 10.10.10.3 (Host-only adapter)

[Screenshot: Contrail-Openstack-Controller]

VM2 specification (compute1):

eth0: 192.168.1.12 (bridged adapter)
eth1: 10.10.10.4 (Host-only adapter)

[Screenshot: Compute Node 1]

VM3 specification (compute2):

eth0: 192.168.1.13 (bridged adapter)
eth1: 10.10.10.5 (Host-only adapter)

[Screenshot: Compute Node 2]

After logging in to the machines, clone the following repositories.

https://github.com/Juniper/contrail-installer.git

https://github.com/openstack-dev/devstack.git

For the installation of Contrail, you can follow the contrail-installer README.

VM1:

VM1 acts as the controller; hence we will be running all the OpenStack services except openstack-nova-compute and opencontrail-vrouter-agent. The following is the localrc for OpenContrail.

vm1-contrail-localrc
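
For reference, here is a minimal sketch of what vm1-contrail-localrc might contain. Apart from INSTALL_PROFILE and the IPs from the specs above, the variable names are assumptions modeled on samples/localrc-all, so treat that file as the authority.

# Hypothetical vm1-contrail-localrc sketch; variable names other than
# INSTALL_PROFILE are assumptions, see samples/localrc-all for the real set
SERVICE_HOST=10.10.10.3        # controller's host-only IP from the spec above
PHYSICAL_INTERFACE=eth1        # interface on the host-only network
INSTALL_PROFILE=ALL            # the controller runs all Contrail services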

Follow the steps as per the contrail-installer README and install the contrail services.

cd contrail-installer
cp samples/localrc-all localrc   # edit localrc as needed
./contrail.sh build
./contrail.sh install
./contrail.sh configure
# Before starting contrail services, remove the agent service from ENABLED_SERVICES:
# https://github.com/Juniper/contrail-installer/blob/master/contrail.sh#L12
./contrail.sh start

Devstack localrc: disable most of the services, such as cinder, heat, tempest, and compute.

vm1-devstack-localrc
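
As a rough sketch, assuming standard DevStack service names (n-cpu for nova-compute, c-* for cinder, h-* for heat), vm1-devstack-localrc might look like this:

# Hypothetical vm1-devstack-localrc sketch; service names follow DevStack conventions
disable_service n-cpu                            # no nova-compute on the controller
disable_service c-api c-sched c-vol              # no cinder
disable_service h-eng h-api h-api-cfn h-api-cw   # no heat
disable_service tempest
Q_PLUGIN=opencontrail                            # use the copied opencontrail plugin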

Copy the opencontrail plugin for devstack to use:

cd devstack
cp ~/contrail-installer/devstack/lib/neutron_plugins/opencontrail lib/neutron_plugins/
./stack.sh

VM2 & VM3

These will be the compute nodes; hence they will be running the nova-compute and contrail-vrouter-agent services.

The localrc files for contrail and devstack are as follows; note that the contrail localrc sets INSTALL_PROFILE=COMPUTE.

vm2-contrail-localrc

vm2-devstack-localrc
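
For orientation, a minimal sketch of the compute-node files; everything here except INSTALL_PROFILE=COMPUTE and the IPs from the specs above is an assumption:

# Hypothetical vm2-contrail-localrc sketch
INSTALL_PROFILE=COMPUTE        # only the vrouter/compute pieces of Contrail
SERVICE_HOST=10.10.10.3        # Contrail controller (VM1) host-only IP
PHYSICAL_INTERFACE=eth1

# Hypothetical vm2-devstack-localrc sketch
ENABLED_SERVICES=n-cpu,neutron # just nova-compute plus the neutron client bits
SERVICE_HOST=192.168.1.11      # OpenStack controller (VM1)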

Verify the setup with the screen outputs on all the VMs:

screen -x contrail
screen -x stack

Screens for contrail and devstack on the controller node:

[Screenshot: controller contrail screen]

[Screenshot: controller devstack screen]

Screens for contrail and devstack on a compute node:

[Screenshot: nova compute screen]

[Screenshot: contrail agent on compute node]

Check that the compute nodes are registered with ‘nova hypervisor-list’, and launch two VMs.
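
For instance (the image, flavor, and network names below are placeholders):

nova hypervisor-list                 # both compute nodes should be listed
nova boot --image cirros --flavor m1.tiny --nic net-id=<net1-id> vm-a
nova boot --image cirros --flavor m1.tiny --nic net-id=<net1-id> vm-b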

[Screenshot: 2 VMs booted on two compute nodes]

Test ping and ssh between the VMs:

[Screenshot: VM ssh and ping test]

Let me know if you stumble across a problem during this setup. I will try my best to help you out.

The patch below just got merged; it was necessary for the multi-node contrail setup.

https://github.com/Juniper/contrail-installer/pull/105


Networking in OpenStack: Panoramic view

This article gives a basic introduction to OpenStack Neutron and to the different network modes and plugins available in OpenStack Networking.

Neutron is an OpenStack project to provide “networking as a service” between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., Nova).

Neutron was introduced as a core part of OpenStack with the initiative’s Folsom release. Prior to the Folsom release, networking functionality was hard-coded in the Nova compute module of OpenStack, which required developers to modify both compute and network features of OpenStack together. With Neutron, networking is a more modular element of OpenStack that can evolve independently.

The core Neutron API includes support for Layer 2 networking and IP address management (IPAM), as well as an extension for a Layer 3 router construct that enables routing between Layer 2 networks and gateways to external networks. Neutron includes a growing list of plugins that enable interoperability with various commercial and open source network technologies, including routers, switches, virtual switches and software-defined networking (SDN) controllers.

Note:  The OpenStack Foundation changed the name of its networking project from Quantum to Neutron due to a trademark conflict with a manufacturer of tape-based data backup systems.

Why Neutron?

  • Give cloud tenants an API to build rich networking topologies, and configure advanced network policies in the cloud.
    • Example: create multi-tier web application topology
  • Enable innovation through plugins (open and closed source) that introduce advanced network capabilities
    • Example: use L2-in-L3 tunneling to avoid VLAN limits, provide end-to-end QoS guarantees, and use monitoring protocols like NetFlow.
  • Let anyone build advanced network services (open and closed source) that plug into OpenStack tenant networks.
    • Examples: LB-aaS, VPN-aaS, firewall-aaS, IDS-aaS, data-center-interconnect-aaS.

Concepts

  • Network, representing isolated virtual Layer-2 domains; a network can also be regarded as a virtual (or logical) switch;
  • Subnet, representing IPv4 or IPv6 address blocks from which IPs to be assigned to VMs on a given network are selected.
  • Port, representing virtual (or logical) switch ports on a given network. Virtual instances attach their interfaces to ports. The logical port also defines the MAC address and the IP address(es) to be assigned to the interfaces plugged into it. When an IP address is associated with a port, this also implies the port is associated with a subnet, since the IP address is taken from the allocation pool of a specific subnet. These concepts are illustrated in the figure below:

Fig: Neutron concepts

High-level flow

  • Tenant creates a network (e.g., “net1”)
  • Tenant associates a subnet with that network (e.g., “10.0.0.0/24”)
  • Tenant boots a VM, specifying a single NIC connected to “net1” (e.g.: nova boot --image <image_name> --nic net-id=<id_of_net1> <server_name>)
  • Nova contacts Neutron and creates port1 on net1.
  • Neutron assigns an IP to port1. (The IP is chosen by Neutron.)
  • Tenant destroys the VM.
  • Nova contacts Neutron and destroys port1. The allocated IP is returned to the pool of available IP addresses.

Fig: High-level flow
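
To make the flow concrete, here is roughly what it looks like with the classic CLI clients of that era (net1 and subnet1 are just the example names from the list above):

neutron net-create net1
neutron subnet-create net1 10.0.0.0/24 --name subnet1
nova boot --image <image_name> --nic net-id=<id_of_net1> server1
# Nova asks Neutron for a port on net1; Neutron picks a free IP from subnet1
nova delete server1                  # port1 is destroyed and its IP returns to the pool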

OpenStack Networking Architecture:

Fig: Network Connectivity for Physical Hosts

A standard OpenStack Networking setup has up to four distinct physical data center networks:

  • Management network. Used for internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center.
  • Data network. Used for VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plugin being used.
  • External network. Used to provide VMs with Internet access in some deployment scenarios. IP addresses on this network should be reachable by anyone on the Internet.
  • API network. Exposes all OpenStack APIs, including the OpenStack Networking API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. The API network may be the same as the external network, since it is possible to create an external-network subnet whose allocated IP range uses less than the full range of addresses in an IP block.

Network modes in OpenStack:

  • Flat mode
  • Flat DHCP mode
  • VLAN network mode

Flat mode

Flat mode is the simplest networking mode. Each instance receives a fixed IP from the pool. All instances are attached to the same bridge (br100 by default). The bridge must be configured manually. The networking configuration is injected into the instance before it is booted, and there is no floating IP feature in this mode.
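
As an illustration, a flat network with nova-network might be set up roughly like this (the network label and range are placeholders, and the flags follow nova-network era conventions):

sudo brctl addbr br100               # in flat mode the bridge must be created manually
nova network-create private --fixed-range-v4 10.0.0.0/24 --bridge br100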

Fig: FlatManager network topology

Flat DHCP mode

This is similar to flat mode in that all instances are attached to the same bridge. In this mode Nova does a bit more configuration; it will attempt to bridge into an Ethernet device (eth0 by default). It will also run dnsmasq as a DHCP server listening on this bridge. Instances receive their fixed IPs by doing a DHCPDISCOVER. Moreover, the floating IP feature is provided.
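
A rough equivalent for Flat DHCP mode; here Nova bridges into eth0 itself, so no manual brctl step is needed (again a sketch using nova-network era flags):

nova network-create private --fixed-range-v4 10.0.0.0/24 \
    --bridge br100 --bridge-interface eth0
# Nova bridges eth0 into br100 and runs dnsmasq on the bridge;
# instances obtain their fixed IPs via DHCPDISCOVER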

Fig: FlatDHCPManager network topology

VLAN Network Mode

It is the default mode for Nova. It provides a private network segment for each project’s instances, which can be accessed via a dedicated VPN connection from the Internet.

In this mode, each project gets its own VLAN, Linux networking bridge, and subnet. The subnets are specified by the network administrator, and are assigned dynamically to a project when required. A DHCP Server is started for each VLAN to pass out IP addresses to VM instances from the subnet assigned to the project. All instances belonging to one project are bridged into the same VLAN for that project.
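
For example, a per-project VLAN network might be created like this (the VLAN ID, range, and tenant ID are placeholders; flags per nova-network conventions):

nova network-create net-tenantA --fixed-range-v4 10.1.0.0/24 \
    --vlan 101 --project-id <tenant_id>
# VlanManager bridges this project's instances into VLAN 101 and starts
# a dnsmasq instance serving the 10.1.0.0/24 subnet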

Fig: VLANManager network topology

A number of plugins are currently included in the OpenStack Networking distribution, among them the Open vSwitch and Linux Bridge plugins as well as various vendor plugins.

Plugins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because OpenStack Networking supports a large number of plugins, the cloud administrator is able to weigh the different options and decide which networking technology is right for the deployment.