

BGPaaS in OpenStack - Kubernetes with Calico in OpenStack with OpenContrail

Jim Phillips - August 12, 2016
It’s been a while since version 3.x of OpenContrail was released, and we have not had much time to take a good look at the new features of this most widely deployed SDN/NFV solution for OpenStack. This blog post therefore brings our perspective on a specific use case: how to use BGP as a Service in an OpenStack private cloud.

BGPaaS, together with configurable ECMP, Intel DPDK, and SR-IOV support, is a key feature of the new release. All of these features show that OpenContrail has become the number one SDN/NFV solution for telcos and service providers, simply because carriers such as Deutsche Telekom, France Telecom, and AT&T have picked it as their SDN solution. The last of these has significantly influenced the features in the latest OpenContrail release. For the reasoning behind these requirements and decisions, you can watch the Austin OpenContrail meetup videos, where AT&T explained their use cases and why they like the MPLS L3VPN approach.

At tcp cloud, we try to bring real use cases not only to telcos running virtual router appliances as VNFs. In that spirit, we want to show the global community another interesting use case in which BGPaaS plays an important role outside of VNFs: we deployed Kubernetes with Calico on top of OpenStack with OpenContrail and redistributed routes through BGPaaS.

BGP as a Service

BGP as a Service (BGPaaS) allows a guest virtual machine (VM) to place routes in its own virtual routing and forwarding (VRF) instance using BGP. It has been implemented according to the corresponding OpenContrail blueprint.

But why do we need BGP route redistribution within Contrail at all? By default, a virtual machine has only directly connected routes and a default route pointing to the Contrail IRB interface, where all unknown traffic is sent. The route lookup then occurs in the particular VRF. Normally a VRF contains only the /32 routes of virtual machines and sometimes routes propagated via BGP from the cloud gateway. When no route matches the lookup, the traffic is discarded.
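For illustration, the routing table inside such a guest VM looks roughly like this (a sketch using the lab addressing from the sections below; the exact output depends on your image and subnet):

root@rtr02:~# ip route
default via 172.16.10.1 dev eth0
172.16.10.0/24 dev eth0  proto kernel  scope link  src 172.16.10.115

Everything that is not directly connected is handed to 172.16.10.1, and the lookup then happens in the Contrail VRF.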

You can run into several issues with this default behavior. For example, Calico does not use overlay tunnels between its containers/VMs, so the traffic goes transparently through your infrastructure. That means all networking devices between Calico nodes must be aware of the Calico routes so that traffic can be routed properly.

I’ll explain this issue using one of our use cases: Kubernetes with Calico. When we operate Kubernetes on top of OpenStack with OpenContrail, the problem occurs as soon as the first container starts. Calico allocates a /26 subnet for the Kubernetes node where the container started, and this route is distributed via BGP to all the other Kubernetes nodes. When you then try to access the container, traffic reaches the particular Kubernetes node; the problem is with the traffic going back. Reverse Path Forwarding is enabled by default, so when the traffic returns, the Contrail VRF discards it because it knows nothing about the /26 source subnet. The only solution prior to the OpenContrail 3.x release was to use static routes. That is not very agile, since the subnets for Calico nodes are generated dynamically, and at larger scale it would be really painful to maintain all of them. In the 3.x release we can either use BGPaaS or disable Reverse Path Forwarding. Since this blog shows how BGPaaS is implemented, we leave Reverse Path Forwarding enabled. A more detailed explanation follows in the next section.
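To make the asymmetry concrete (a sketch using the addresses from our lab below), each Kubernetes node learns its peers’ pod subnets via BIRD, while the Contrail VRF initially does not:

root@kubernetes-node01:~# ip route | grep 192.168.156
192.168.156.192/26 via 172.16.10.113 dev eth0  proto bird

The same 192.168.156.192/26 prefix is missing from the VRF on the compute nodes, which is exactly the gap that BGPaaS (or static routes) has to fill.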

Standard BGPaaS use cases are the following:
  • Dynamic Tunnel Insertion Within a Tenant Overlay
  • Dynamic Network Reachability of Applications
  • Liveness Detection for High Availability
More information about this feature in general is available in the OpenContrail documentation.

Kubernetes with Calico in OpenStack with OpenContrail

The motivation for this use case is to use the BGPaaS feature not just for NFV/VNF service providers, but also for standard private clouds where Kubernetes is deployed on OpenStack. Kubernetes can be used with the OpenContrail plugin, especially for mixing VMs with containers (see our multi-cloud networking blog). However, overlay on top of overlay is not a good idea from a performance point of view. The OpenContrail community has already discussed reusing the underlay vRouter instead of running a vRouter inside a vRouter, which is somewhat similar to the BGPaaS approach of propagating routing information from VMs to the underlay.

Based on this, we decided to use Calico as the network plugin for Kubernetes, since it uses the BIRD routing engine without any overlay technology.

Let’s explain the BGPaaS solution. Since Calico uses BIRD, you could create BGP peerings directly from each Calico node to OpenContrail. However, this full-mesh approach does not scale very well, so we decided to create two VMs running the BIRD service and use them as route reflectors for Calico. We then use these VMs as BGP peers with OpenContrail. The route exchange is described further in the following architecture section.

Lab Architecture

[Figure: bgpAsAService-calico2.png – lab architecture]
Let’s have a closer look at this figure. The red and black lines stand for BGP peerings between our BIRD route reflector VMs (RTR01 and RTR02) and the OpenContrail controllers. When you want to use BGPaaS, you first create a peering with .1, which stands for the default gateway (peering with ntw01), and with .2, which stands for DNS (peering with ntw02). Both are OpenContrail interfaces, but the actual peering is done with the controllers; .1 and .2 act only as BGP proxies. There is also BGP peering between all Calico nodes and the RTR01/RTR02 route reflectors. Finally, there are the default XMPP connections between the Contrail controllers and the vRouters, which are used to learn and distribute routing information between vRouters.

Now that we have all the information about the connections in our use case, we can explain the control plane workflow marked by the yellow numbers. We start by creating a pod on the Kubernetes master (1). The Kubernetes scheduler schedules the pod onto Kubernetes node02, and Calico allocates a /26 network for that node as well as a /32 route for the pod (2). The /26 is distributed via BGP to the route reflectors (3). The route reflectors then send the update to the other Kubernetes nodes as well as to the Contrail controllers (4). At this point all Kubernetes nodes are aware of the subnet and could route traffic between themselves, but the route information is needed in the VRF as well. That is achieved in step (5), where the route is distributed via XMPP to the vRouters. Now we have a dynamic Kubernetes-with-Calico environment on top of OpenStack with OpenContrail.
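To illustrate step (3), you can check on a route reflector that the pod subnet has really been learned from a Calico node, using BIRD’s command-line client (the output below is abbreviated and illustrative; the protocol name matches the configuration in the next section):

root@rtr02:~# birdc show route protocol calico_node2
192.168.156.192/26 via 172.16.10.113 on eth0 [calico_node2 13:14:54] * (100/0) [i]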

Configuration and Outputs

First we had to set up and configure the BIRD service on the OpenStack VMs RTR01 and RTR02. Each route reflector peers with the default gateway and the DNS server addresses, which the vRouter proxies through to the OpenContrail control nodes. It then peers with each Calico node and with the other route reflector (the configuration below is taken from RTR02).
#Peering with default GW/vRouter
    protocol bgp contrail1 {
            debug all;
            local as 64512;
            neighbor 172.16.10.1 as 64512;
            import all;
            export all;
            source address 172.16.10.115;
    }

#Peering with default DNS server/vRouter
    protocol bgp contrail2 {
            debug all;
            local as 64512;
            neighbor 172.16.10.2 as 64512;
            import all;
            export all;
    }

#Peering with calico nodes
    protocol bgp calico_master {
            local as 64512;
            neighbor 172.16.10.111 as 64512;
            rr client;
            import all;
            export all;
    }

    protocol bgp calico_node1 {
            local as 64512;
            neighbor 172.16.10.112 as 64512;
            rr client;
            import all;
            export all;
    }

    protocol bgp calico_node2 {
            local as 64512;
            neighbor 172.16.10.113 as 64512;
            rr client;
            import all;
            export all;
    }

#Peering with the other route reflector (RTR01)
    protocol bgp rtr1 {
            local as 64512;
            neighbor 172.16.10.114 as 64512;
            import all;
            export all;
    }
After that we configured a new BGPaaS object in the OpenContrail UI under Configure -> Services -> BGPaaS.

[Figure: create_bgp.png – creating the BGPaaS object in the OpenContrail UI]
Then we can see the established BGP peerings (172.16.10.114 and .115) under Peers on the Control Nodes page.

[Figure: peering.png – established BGP peers on the Control Nodes page]
Calico uses a BGP full-mesh topology by default. We had to disable the full mesh and configure only the peerings with the route reflectors (RTR01 and RTR02).
root@kubernetes-node01:~# calicoctl bgp node-mesh off
root@kubernetes-node01:~# calicoctl bgp peer add 172.16.10.114 as 64512
root@kubernetes-node01:~# calicoctl bgp peer add 172.16.10.115 as 64512
calicoctl status shows the peerings with our RTR01 and RTR02 as Established.
root@kubernetes-node01:~# calicoctl status
calico-node container is running. Status: Up 44 hours
Running felix version 1.4.0rc2

IPv4 BGP status
IP: 172.16.10.111    AS Number: 64512 (inherited)
+---------------+-----------+-------+----------+-------------+
| Peer address  | Peer type | State |  Since   |    Info     |
+---------------+-----------+-------+----------+-------------+
| 172.16.10.114 | global    | up    | 13:14:54 | Established |
| 172.16.10.115 | global    | up    | 07:26:10 | Established |
+---------------+-----------+-------+----------+-------------+
Finally, we can see part of the VRF routing table for our virtual network on compute node 01. It shows a directly connected interface route for the RTR01 VM (172.16.10.114/32) and a tunnel to RTR02 (172.16.10.115/32). The subnet 192.168.156.192/26 belongs to the Kubernetes pods and is dynamically propagated by Calico through the BIRD route reflectors.

[Figure: vrouter_routing_table.png – VRF routing table on compute node 01]

Conclusion

In this blog post, we showed how easy it is to use BGPaaS in OpenContrail and how to apply it to a general use case: running Kubernetes on top of OpenStack. The whole OpenContrail setup can be automated via Heat templates, but the contrail-heat resources for BGPaaS require some modifications to work properly.

Jakub Pavlik & Marek Celoud
