What's the big deal about running OpenStack in containers?
But a few people realized there was another possibility: containers could actually save OpenStack.
Look, it's no secret that deploying and managing OpenStack is difficult at best, and frustratingly impossible at worst. So what if I told you that using Kubernetes and containers could make it easy?
Mirantis has been experimenting with container-based OpenStack for the past several years -- since before it was "cool" -- and recently we settled on an architecture that enables us to take advantage of the management capabilities and scalability that come with the Kubernetes container orchestration engine. (You might have seen the news that we've also acquired TCP Cloud, which will help us jump our R&D forward about 9 months.)
Specifically, using Kubernetes as an OpenStack underlay lets us turn a monolithic software package into discrete services with well-defined APIs that can be freely distributed, orchestrated, recovered, upgraded and replaced -- often automatically based on configured business logic.
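To make "discrete services with well-defined APIs" concrete, here's a rough sketch of how a single OpenStack component such as Keystone might be described to Kubernetes. This is purely illustrative -- the image name, port, and probe endpoint are assumptions, not Mirantis's actual manifests:

```yaml
# Hypothetical Kubernetes Deployment for one OpenStack service (Keystone).
# Kubernetes supplies the recovery (restart failed pods), upgrade (rolling
# updates), and scaling (replicas) behavior -- the "business logic" lives
# in declarative configuration rather than hand-run procedures.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keystone
spec:
  replicas: 2                    # scale the identity API independently
  selector:
    matchLabels:
      app: keystone
  strategy:
    type: RollingUpdate          # replace pods gradually during an upgrade
  template:
    metadata:
      labels:
        app: keystone
    spec:
      containers:
      - name: keystone-api
        image: example/keystone:latest   # assumed image name
        ports:
        - containerPort: 5000
        livenessProbe:                   # auto-recovery: restart if unhealthy
          httpGet:
            path: /v3
            port: 5000
```

Each OpenStack service gets its own manifest like this, which is what lets them be distributed, recovered, and upgraded independently of one another.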
That said, it's more than just dropping OpenStack into containers, and talk is cheap. It's one thing for me to say that Kubernetes makes it easy to deploy OpenStack services. And frankly, almost anything would be easier than deploying, say, a new controller with today's systems.
But what if I told you you could turn an empty bare metal node into an OpenStack controller just by adding a couple of tags to it?
Have a look at this video:
Containerizing the OpenStack Control Plane on Kubernetes: auto-scaling OpenStack services
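In Kubernetes terms, that "couple of tags" would be node labels -- something like `kubectl label node <node> openstack-role=controller`, where the label name is a hypothetical example. A workload with a matching `nodeSelector` then lands on that node automatically. A minimal sketch, with all names assumed for illustration:

```yaml
# Hypothetical DaemonSet: runs control-plane pods on every node carrying
# the (assumed) openstack-role=controller label. Labeling a fresh bare
# metal node is then enough for Kubernetes to schedule the services onto it.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openstack-controller
spec:
  selector:
    matchLabels:
      app: openstack-controller
  template:
    metadata:
      labels:
        app: openstack-controller
    spec:
      nodeSelector:
        openstack-role: controller   # the "tag" that makes a node a controller
      containers:
      - name: controller
        image: example/openstack-control-plane:latest  # assumed image
```

Removing the label works in reverse: Kubernetes evicts the control-plane pods from that node, which is what makes this kind of reassignment so much cheaper than rebuilding a controller by hand.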
I know, right? Are you as excited about this as I am?