What is OpenStack, What is MOSK, and Why Do You Care?
Mirantis OpenStack for Kubernetes (MOSK) is our fully-supported enterprise distribution of OpenStack – the leading open source infrastructure-as-a-service (i.e., virtual-machine-centric cloud) platform. We just released MOSK 24.1 with OpenStack Antelope: current MOSK users will want to read our release blog for details.
We wrote this blog, meanwhile, to explain more about what OpenStack is, and why you should care.
When we say ‘virtual-machine-centric cloud platform,’ think ‘VMware vSphere or VCF, or the AWS EC2 codebase in a box, but make it open source.’ Begun around 2010, OpenStack is a vibrant community of open source projects, managed by the OpenInfra Foundation, collaborating to evolve a foundational IaaS cloud framework. In other words, OpenStack is FOSS software that organizations can use to build and operate private or public IaaS clouds.
OpenStack lets you virtualize physical compute, storage, and network (i.e., actual computers networked together) to create software-defined datacenters hosting virtual machines – themselves running Linux, Windows, or potentially other operating systems, plus more software.
That software can include any kind of application built to run directly on a host operating system (many enterprise, scientific, and other applications are still built like this). The advantage of doing this in a ‘private cloud’ (as opposed to running it directly on bare-metal machines, or on independent virtual machines under a hypervisor) is that ‘software-defined’ part.
Using OpenStack, you can point and click (or use scripts) to create a whole hierarchy of nested abstractions – orgs, projects, etc. (think AWS VPCs) – then permission users into them and give them quotas (think AWS IAM). Your users can then create flocks of sub-projects, networks, and virtual machines of different sizes and capacities (think AWS machine types, but you define them: RAM size, virtual CPU cores, etc.), add virtual SSDs, network cards, and other virtual peripherals – and boot them up and run software on them.
By doing this, you can use your hardware very efficiently and your IT operations can be speedy, convenient, dynamic, and secure. All the basic ‘ops’ stuff OpenStack cloud operators, application operators, development teams, etc., need to do becomes point-and-click and/or software that talks to the OpenStack API … plus software that talks with virtual machines and installs and manages software on them.
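Everything described above is driven by the OpenStack API, so it scripts cleanly. Here's a minimal sketch using the openstacksdk Python library to create a network and boot a VM – the cloud entry name ‘mycloud’, the image ‘ubuntu-22.04’, and the flavor ‘m1.small’ are placeholder names you'd swap for values from your own cloud:

```python
import openstack

# Credentials come from clouds.yaml or OS_* environment variables;
# "mycloud" is a placeholder entry name.
conn = openstack.connect(cloud="mycloud")

# Create a project-private network and subnet.
network = conn.network.create_network(name="demo-net")
subnet = conn.network.create_subnet(
    name="demo-subnet",
    network_id=network.id,
    ip_version=4,
    cidr="10.0.0.0/24",
)

# Boot a VM ("server") from an image, sized by a flavor the cloud operator defined.
image = conn.compute.find_image("ubuntu-22.04")   # placeholder image name
flavor = conn.compute.find_flavor("m1.small")     # placeholder flavor name
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # "ACTIVE" once the VM is up
```

The same operations are available from the openstack CLI and the Horizon dashboard; the point is that every resource – project, quota, network, flavor, VM – is an API object you can automate against.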
Obviously, this is better than racking and stacking physical machines. It’s functionally equivalent to using AWS EC2 or another public cloud. But OpenStack runs on your choice of infrastructure, in a datacenter that you control. When you run it yourself, it is usually cheaper than any public cloud or VMware. Read our Total Guide to Private Cloud to learn even more.
OpenStack is very popular
In a world where it feels like AWS and VMware have become the defaults for public and private cloud, respectively, discovering OpenStack’s popularity can come as a shock. But OpenStack does such a good job with cloud that it now powers 45 million compute cores worldwide. It’s used at large scale (up to hundreds of thousands of cores) in private clouds – OpenStack is the primary open source alternative to VMware for this use-case. OpenStack also powers a chunk of the world’s ‘public cloud’ footprint – over 300 datacenters now run OpenStack on behalf of end-customers as a public cloud service.
OpenStack hosts Kubernetes and other platforms
OpenStack virtual machines are also used to host a lot of Kubernetes clusters. Over 70% of OpenStack datacenters now implement some version of the LOKI stack (Linux, OpenStack, Kubernetes Infrastructure) – and even more use OpenStack VMs to host container runtimes, Docker Swarm clusters, and other container orchestrators, as well as serverless compute and functions-as-a-service platforms that run on Kubernetes. Think VMware + Tanzu, but with lots more options: a single, open source infrastructure – OpenStack – that can host VM workloads (most organizations still need quite a few of these), and deliver many flavors of container orchestration on demand as well.
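If your cloud runs OpenStack's Magnum service, you can even ask OpenStack to stand up a Kubernetes cluster for you. Here's a rough, hedged sketch using openstacksdk's container-infra proxy – the template, image, keypair, network, and flavor names are all hypothetical, and the attributes you actually need depend on how Magnum is configured in your cloud:

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # placeholder cloud entry

# A cluster template tells Magnum how to build Kubernetes clusters.
# All names below are placeholders to be replaced with real resources.
template = conn.container_infrastructure_management.create_cluster_template(
    name="k8s-template",
    coe="kubernetes",
    image_id="fedora-coreos-38",
    keypair_id="mykey",
    external_network_id="public",
    flavor_id="m1.medium",
    master_flavor_id="m1.medium",
)

# Ask Magnum to build a small cluster from the template.
cluster = conn.container_infrastructure_management.create_cluster(
    name="demo-k8s",
    cluster_template_id=template.uuid,
    master_count=1,
    node_count=3,
)
print(cluster.uuid)  # poll get_cluster() until it reports CREATE_COMPLETE
```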
Where Mirantis comes in
Mirantis has a long history with OpenStack. As a long-time sponsor of the OpenInfra Foundation, Mirantis has contributed to OpenStack since the Cactus release in 2011. We’re a top-five lifetime contributor to OpenStack by commits and/or lines of code. Our CEO, Alex Freedland, served on the Foundation's board. We still work with and on the upstream projects (some of which we founded), adding features, fixing bugs, and hardening code.
Through the mid-2010s, Mirantis built – and still supports – a conventional, process-based distribution of OpenStack called MOS (Mirantis OpenStack), and we used it – along with an OpenStack deployment system we created, called Fuel – to build some of the largest private clouds in the world. But along the way, we kept hearing that – no matter how well OpenStack was hardened and automated – building, scaling, and updating conventional clusters was perceived as very difficult. Once their clouds were stable, organizations using OpenStack tended to keep a specific version running for a long time, even though OpenStack releases happen twice a year. That meant users routinely missed out on new features and innovation. Eventually, slow-to-update users got left behind by the OpenStack project – despite the project being very good about patches and backports – meaning that third parties, including Mirantis, had to pick up the slack of keeping aging clusters functional and safe.
What is MOSK?
In 2015, Mirantis began experimenting with running containerized OpenStack control plane components on Kubernetes. Eventually, this work got rolled into a new OpenStack distro, now called MOSK – Mirantis OpenStack for Kubernetes. MOSK changed … a lot:
Running OpenStack on Kubernetes makes OpenStack radically simpler to deploy – given a highly-available Kubernetes host cluster, you launch control plane containers, then configure bare metal machines as OpenStack worker nodes and attach them.
The control plane can then be scaled flexibly – you can launch a full set of control plane components on a Kubernetes worker, or scale individual components independently, based on component-specific traffic and performance requirements (see the sketch after this list).
Kubernetes complements OpenStack’s native resilience: if a containerized component fails, Kubernetes can restart it automatically.
Updates and even upgrades are also now much simpler: point to new container images, and Kubernetes can roll components forward in place, non-disruptively.
Mirantis now releases MOSK updates and upgrades several times yearly, staying roughly one calendar year behind the upstream release schedule to ensure that version releases are entirely stable and appropriately hardened.
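To make the ‘scale individual components’ and ‘update in place’ points concrete, here's a hedged sketch of the underlying Kubernetes mechanics using the official Python client. This is not how MOSK's own lifecycle management drives these operations – MOSK automates them for you – and the deployment name ‘nova-api’, the ‘openstack’ namespace, and the image tag are hypothetical:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (e.g., ~/.kube/config).
config.load_kube_config()
apps = client.AppsV1Api()

# Scale one control plane component independently of the others.
apps.patch_namespaced_deployment_scale(
    name="nova-api",        # hypothetical deployment name
    namespace="openstack",  # hypothetical namespace
    body={"spec": {"replicas": 3}},
)

# Point the component at a new container image; Kubernetes performs a
# rolling update, replacing pods one at a time, non-disruptively.
apps.patch_namespaced_deployment(
    name="nova-api",
    namespace="openstack",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "nova-api", "image": "registry.example.com/nova-api:24.1"}  # hypothetical image
    ]}}}},
)
```

And if a component's pod fails, its Deployment simply restarts it – which is the resilience point above.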
Mirantis provides multiple tiers of support for MOSK, plus comprehensive professional and managed services, including flexible, rapid cloud architecture, deployment, and testing, as well as training. We even offer MOSK as a fully-managed service, where Mirantis engineers monitor your MOSK cloud 24/7 and contact you proactively to mitigate issues. They ensure that your cloud functions optimally and stays secure, and that upgrades, scaling, and other operations tasks are planned and executed efficiently. You just use your cloud, worry-free.
Learn more, and contact us!
Our MOSK TCO calculator will give you an approximation of how much you can save by moving to Mirantis OpenStack for Kubernetes (e.g., from VMware). If you’re interested in a deeper dive into MOSK benefits, please contact us!