

Tieto's path to containerized OpenStack, or How I learned to stop worrying and love containers

Nick Chase - October 21, 2016
Tieto is the #1 cloud service provider in Northern Europe, with over 150 cloud customers in the region and revenues in the neighborhood of €1.5 billion (with a "b"). So when the company decided to take the leap into OpenStack, it was a decision that wasn't taken lightly -- or without very strict requirements.
Now, we've been talking a lot about containerized OpenStack here at Mirantis lately, and at the OpenStack Summit in Barcelona, our Director of Product Engineering will join Tieto's Cloud Architect Lukáš Kubín to explain the company's journey from a traditional architecture to a fully adaptable cloud infrastructure. So we wanted to take a moment and ask the question:
How does a company decide that containerized OpenStack is a good idea?

What Tieto wanted

At its heart, Tieto wanted to deliver a bimodal multicloud solution that would help customers digitize their businesses. In order to do that, it needed an infrastructure in which it could have confidence, and OpenStack was chosen as the platform for cloud native application delivery. The company had the following goals:
  • Remove vendor lock-in
  • Achieve elasticity through seamless, on-demand capacity fulfillment
  • Rely on robust automation and orchestration
  • Adopt innovative open source solutions
  • Implement Infrastructure as Code
It was this last item, implementing Infrastructure as Code, that was perhaps the biggest challenge from an OpenStack standpoint.

Where we started

In fact, Tieto had been working with OpenStack since 2013, when it began evaluating the Havana and Icehouse releases against internal software development projects; at that time, the target architecture included Neutron and Open vSwitch.
By 2015, the company was providing scale-up focused IaaS cloud offerings and unique application-focused PaaS services, but what was lacking was a shared platform with fully API-controlled infrastructure for horizontally scalable workloads.
Finally, this year, the company announced its OpenStack Cloud offering, based on the OpenStack distribution of tcp cloud (now part of Mirantis) and using OpenContrail rather than Open vSwitch.
Why OpenContrail? The company cited several reasons:
  • Licensing: OpenContrail is an open source solution, but commercial support is available from vendors such as Mirantis.
  • High Availability: OpenContrail includes native HA support.
  • Cloud gateway routing: North-South traffic must be routed on physical edge routers instead of software gateways to work with existing solutions.
  • Performance: OpenContrail provides excellent pps, bandwidth, scalability, and so on (up to 9.6 Gbps).
  • Interconnection between SDN and fabric: OpenContrail supports dynamic connections to legacy environments through EVPN or ToR switches.
  • Containers: OpenContrail includes support for containers, making it possible to use one networking framework for multiple environments.
Once completed, the Tieto Proof of Concept cloud included:
  • OpenContrail 2.21
  • 20 compute nodes
  • Glance and Cinder running on Ceph
  • Heat orchestration
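To give a sense of what the Heat orchestration piece looks like in practice, here is a minimal HOT template — a sketch only, with parameters and resources that are illustrative rather than taken from Tieto's environment. A workload is described declaratively in a versionable text file, and Heat makes the deployment match it:

```yaml
heat_template_version: 2015-10-15

description: >
  Hypothetical example: a single server with its own Neutron port,
  declared as code rather than created by hand.

parameters:
  image:
    type: string
    description: Name or ID of the Glance image to boot from
  flavor:
    type: string
    default: m1.small
  network:
    type: string
    description: Name or ID of the tenant network

resources:
  app_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: network }

  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - port: { get_resource: app_port }

outputs:
  server_ip:
    description: Address assigned to the server
    value: { get_attr: [app_server, first_address] }
```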
Tieto had achieved Infrastructure as Code: deployment and operations were controlled through OpenStack-Salt formulas. This architecture enabled the company to apply DevOps principles, using declarative configurations that could be stored in a repository and reused as necessary.
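As a rough sketch of what that means, here is a heavily simplified, hypothetical Salt state for a single OpenStack service; the real OpenStack-Salt formulas are far more complete, but the principle is the same: the desired state of each node is declared in text files that live in version control and can be re-applied at any time.

```yaml
# glance.sls -- hypothetical, simplified state; not the actual
# OpenStack-Salt formula, which handles far more configuration.
glance_packages:
  pkg.installed:
    - name: glance

/etc/glance/glance-api.conf:
  file.managed:
    - source: salt://glance/files/glance-api.conf   # template kept in the repo
    - template: jinja
    - require:
      - pkg: glance_packages

glance_api_service:
  service.running:
    - name: glance-api
    - enable: True
    - watch:
      - file: /etc/glance/glance-api.conf           # restart on config change
```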
What's more, the company had an architecture that worked, and that included commercial support for OpenContrail (through Mirantis).
But there was still something missing.

What was missing

With operations support and Infrastructure as Code, Tieto's OpenStack Cloud was already beyond what many deployments ever achieve, but it still wasn't as straightforward as the company would have liked.  
As designed, the OpenStack architecture consisted of almost two dozen VMs on at least 3 physical KVM nodes -- and that was just the control plane!
As you might imagine, trying to keep all of those VMs up to date through operating system updates and other changes made operations more complex than it needed to be. Any time an update needed to be applied, it had to be applied to each and every VM. Sure, that process was easier because of the DevOps advantages introduced by the OpenStack-Salt formulas that were already in the repository, but that was still an awful lot of moving parts.
There had to be a better way.

How to meet that challenge

That "better way" involves treating OpenStack as a containerized application in order to take advantage of the efficiencies this architecture enables, including:
  • Easier operations, because each service no longer has its own VM, with its own operating system to worry about
  • Better reliability and easier manageability, because containers and Dockerfiles can be tested as part of a CI/CD workflow
  • Easier upgrades, because once OpenStack has been converted to a microservices architecture, it's much easier to simply replace one service
  • Better performance and scalability, because the containerized OpenStack services can be orchestrated by a tool such as Kubernetes.
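To make that last point a little more concrete, here is a minimal, hypothetical Kubernetes Deployment for a single containerized OpenStack service; the image name, labels, and configuration wiring are placeholders rather than Tieto's actual manifests:

```yaml
# Hypothetical manifest: the Glance API packaged as a container image
# and managed by Kubernetes instead of living in its own VM.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: glance-api
spec:
  replicas: 2                      # scale the service, not an entire VM
  selector:
    matchLabels:
      app: glance-api
  template:
    metadata:
      labels:
        app: glance-api
    spec:
      containers:
        - name: glance-api
          image: registry.example.com/openstack/glance-api:newton  # placeholder
          ports:
            - containerPort: 9292            # default Glance API port
          volumeMounts:
            - name: glance-config
              mountPath: /etc/glance
      volumes:
        - name: glance-config
          configMap:
            name: glance-config              # rendered config, e.g. from Salt
```

Upgrading or scaling the service then comes down to changing the image tag or the replica count in a file like this, rather than touching a fleet of VMs one by one.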
So that's the "why".  But what about the "how"?  Well, that's a tale for another day, but if you'll be in Barcelona, join us at 12:15pm on Wednesday to get the full story and maybe even see a demo of the new system in action!
