Total Guide to Private Cloud:
ZeroOps for Cloud on Prem
Managed Cloud, On-Premises or Anywhere, Delivered as a Service
Time to read: 28 minutes
Organizations that undertake an on-premises cloud project generally start with a good idea of the technologies they need to provide. Business leadership, developers, line-of-business product owners, and others will have asked for them. You’ll have a mandate – in some cases, the point of the spear for an enterprise-wide IT facelift or multi-year “digital transformation” effort.
So you start with the obvious, and go on from there. If your organization isn’t an edge case – for example, a greenfield SaaS startup – you’ll likely require an Infrastructure-as-a-Service (IaaS) solution that hosts virtual machines, suitable for running conventional application workloads on Linux or Windows. Most organizations have many such workloads to wrangle: a mix of legacy and contemporary applications, databases, websites, and other services.
Most will also require – if not now, then soon – capacity to deploy environments that orchestrate containerized workloads. As you probably already know, container-based development is the foundation of cloud native application engineering. Containerization limits dependency issues, enabling “write once, run anywhere” movement of workload components from developer desktops to QA/test to production. It enables radical use of automation to speed development of high-quality applications. And properly-designed containerized applications work with container orchestration to reduce cloud operations costs, enable scale, and increase application reliability.
Basic use of containers requires two main components: a container runtime (a single-node execution environment for containers) and a container registry (a service that stores ‘images’ of containerized applications for retrieval and execution). The quality of these components, and their degree of integration with one another, is important for performance and security.
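To make the interaction between these two components concrete, here’s a minimal sketch in Python using the Docker SDK (the `docker` package): it pulls an image from a registry and executes it on the local runtime. The registry hostname and image name are hypothetical placeholders.

```python
import docker

# Connect to the local Docker-compatible container runtime.
client = docker.from_env()

# Pull an application image from a registry (hypothetical address/repo).
image = client.images.pull("registry.example.com/acme/webapp", tag="1.4.2")

# Execute the pulled image as a detached container on this runtime node.
container = client.containers.run(image, detach=True, name="webapp")
print(container.short_id, container.status)
```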
Most organizations want a fully Docker-compatible container runtime. (Developers familiar with container development will typically have cut their teeth in Docker environments.) Many – particularly in regulated industries, government, and other more demanding use cases – need a runtime with encryption services built in: for example, encryption validated against Federal Information Processing Standard (FIPS) 140-2.
ENTERPRISES MAY NEED A VM-ORIENTED CLOUD, PLUS FLEXIBLE CONTAINER ORCHESTRATION CAPABILITY
Likewise, most organizations will want a Docker-compatible container registry. The best will provide built-in container security scanning – automatic evaluation of container images to identify known exploits, driven from a continually-updated, authoritative database of Common Vulnerabilities and Exposures (CVEs). A container registry should also provide the ability to sign container images cryptographically, preventing tampering and identifying specific images as having been approved for production use. Ideally, the registry and runtime can be integrated in a secure software supply chain enabling automatic security and approval policy management: that is, only properly-signed (approved) container images will be permitted to execute on the runtime.
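To illustrate the policy idea in miniature (this is a hypothetical sketch, not any particular registry’s API), the check below stands in for real cryptographic signature verification and gates whether the runtime may start an image:

```python
import docker

# Hypothetical allowlist standing in for real signature verification;
# a production system would verify cryptographic signatures via the
# registry's own tooling instead.
APPROVED_IMAGES = {"registry.example.com/acme/webapp:1.4.2"}

def verify_signature(image_ref: str) -> bool:
    # Placeholder check: treat allowlisted references as "properly signed."
    return image_ref in APPROVED_IMAGES

def run_if_approved(image_ref: str):
    client = docker.from_env()
    # Enforce the supply-chain policy: unapproved images never execute.
    if not verify_signature(image_ref):
        raise PermissionError(f"{image_ref} is not approved for production")
    client.images.pull(image_ref)
    return client.containers.run(image_ref, detach=True)
```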
Container orchestration refers to software platforms that coordinate potentially-many container runtimes, creating a reliable execution environment for production applications. Container orchestrators may be relatively simple, like Swarm – a friendly environment for Docker-oriented developers that helps make workloads scale (up to a point) and makes them resilient. Or they may be more complex, operationally sophisticated, and powerful, like Kubernetes – a go-forward platform for enterprise-wide cloud native application development, testing, and production operations.
Note that some enterprise Kubernetes distributions, like Mirantis Kubernetes Engine, support both Kubernetes and Swarm orchestration, within the same cluster, enabling management of both orchestration types from a single interface and set of APIs.
Kubernetes and Swarm are powerfully self-healing: by restarting stopped containers and redistributing container workloads to healthy worker nodes, the orchestrator routes around even severe hardware failures to keep applications available. Kubernetes itself is also easy to update: non-disruptive, simple-to-use rolling update and rollback capabilities are built right into the system. Containerized applications, meanwhile, can be updated at will if properly designed: just push a new configuration that points to the new version of a container, and Kubernetes will retrieve and execute the updated workload with minimal impact on application availability.
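As a minimal sketch of that update pattern using the Kubernetes Python client (the Deployment, namespace, container, and image names below are placeholder assumptions), pushing a new image reference triggers a rolling update:

```python
from kubernetes import client, config

# Load cluster credentials from the local kubeconfig.
config.load_kube_config()
apps = client.AppsV1Api()

# Point the Deployment's container at a new image version; Kubernetes
# then rolls pods over to it gradually, preserving availability, and
# the rollout can later be rolled back the same way.
patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "webapp",
                     "image": "registry.example.com/acme/webapp:1.5.0"}
                ]
            }
        }
    }
}
apps.patch_namespaced_deployment(name="webapp", namespace="default", body=patch)
```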
Unless you have particular performance requirements that compel running container orchestration on bare metal, you’ll likely want to host Kubernetes and Swarm on virtual machines – in other words, on your IaaS. This makes a lot of sense for operations. VM-hosted container orchestrators can be deployed quickly. You can hand out isolated Kubernetes and Swarm environments to developers and teams as needed. Clusters can be scaled easily, up to available bare-metal capacity.
Historically, IaaS frameworks have been installed directly on bare metal hosts, necessitating creation of an entire supporting architecture for control-plane high availability, networking, and storage. Once in place, complex procedures were required to update the many components and dependencies in complex IaaS stacks, slowing adoption of new features, and potentially increasing exposure to security exploits.
These problems have largely been solved by using Kubernetes on bare metal to host containerized IaaS control planes. Hosted on Kubernetes, containerized IaaS functionality can leverage high-efficiency virtual networking and storage from the underlying Kubernetes cluster, scale easily on demand, and enjoy very high availability thanks to Kubernetes’ powerful self-healing capabilities. Kubernetes makes it simple, as noted above, to update itself and underlying host operating systems, and to scale bare metal capacity. Running within Kubernetes, containerized IaaS components are also easy to update non-disruptively, meaning your IaaS can stay patched and current.
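To make the scaling point concrete: once IaaS control-plane services run as Kubernetes workloads, capacity changes reduce to ordinary Kubernetes operations. In this hedged sketch, the Deployment and namespace names are illustrative placeholders, not components of any specific product:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Scale a containerized IaaS control-plane service by adjusting its
# replica count; Kubernetes schedules the new replicas across healthy
# nodes and reschedules them automatically if a node fails.
apps.patch_namespaced_deployment_scale(
    name="iaas-api",          # hypothetical control-plane Deployment
    namespace="iaas-system",  # hypothetical namespace
    body={"spec": {"replicas": 3}},
)
```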
Satisfying the need for IaaS and the need for container orchestration, both at scale, and doing so on a uniform Kubernetes substrate sounds complicated (and it is). Doing it via DIY means bringing all the relevant components and their interfaces into a single schema – building the discipline and best practices for component and cloud lifecycle management into everything as you go.
It’s an enormous task – and in practice, for DIY clouds, it almost never happens. What happens instead is that each component and cloud gets built and managed as its own standalone entity. At first, this looks like it keeps things simple. In practice, it leads to a proliferation of strategies for platform engineering, SRE, and DevOps; no common standards or consistent best practices for developer self-service (specialized help required every time a dev needs a cluster); abandonment of centralized governance (or a proliferation of governance methodologies); and growing – in fact, unknowable – risks to security, access control, and compliance at all points.
Lack of a single point of access, administration, and automation thus undermines cloud efficiency and works against ROI in numerous ways. Perhaps the worst impact of a DIY strategy is that the ensuing proliferation prevents platform engineers, site reliability engineers (SREs), and DevOps from performing their inter-related functions effectively: platform engineers automating the platforms themselves; SREs building and maintaining common services and disciplines for application observability, availability, security, and scale; and DevOps consuming those services to ensure that specific applications evolve quickly, find markets, delight customers, and generate profits. The whole “shift left” ethos of cloud hinges on the cloud working as a logical whole.
The take-away here is that DIY cloud doesn’t really work – both because building up a multi-layer IaaS + container orchestration cloud stack is intrinsically complex, and because most organizations can’t do so in a consistent, uniform way. They fail to create a single, “as a service” operator and developer experience – one that makes ongoing operations easier to automate, simpler, and thus more affordable, and that makes automated operations easier to extend to improve application-layer operations (where the upsides live).
VIDEO: The On-Premise Data Center is Dead? Think again…
Singular on-premises clouds aren’t the whole story. Most organizations will eventually need more: for example, a second private cloud at an alternate location (providing redundancy, enabling compliance, or sitting closer to customers). Or they might need the ability to create distributed IaaS or container-oriented datacenters – for example, at multiple factory or retail locations. In all cases, it might also be helpful to be able to swiftly recruit public cloud capacity as needed – to deal with seasonal traffic spikes, to host applications lacking the stringent security and performance requirements that compel hosting on premises, to experiment with new technologies before investing, or for many other reasons.
A hybrid or multi-cloud estate benefits you in many ways when it remains consistent with the configuration of your private cloud and can be operated the same way – leveraging the private cloud’s centralized affordances and user experience for operations, development, and self-service, and exploiting the fast-growing and valuable automation, observability, procedures (for example, for data protection and compliance), and application development tooling and workflows you’ll devise for use on premises.
YOUR CLOUD SHOULD WORK AS ONE
In short, however far it may extend across platforms, your cloud should work as one coherent system, and it should be accessible through one set of webUIs and APIs – just like public cloud, that is, “as a service.” Only in this scenario does expanding your cloud to diverse platforms not mean significant increases in operating costs, the need for new platform-wrangling skills, and the obligation to rebuild automation around platform specifics.
Again, organizations that build clouds without outside support rarely, if ever, achieve anything like orderly, simple, centralized control. What they get, instead, is proliferating environments – now on many platforms – each a standalone entity making its own demands on operations.
Mirantis can engage at any point in your cloud journey to help structure decision-making around your use cases, develop requirements, compare options, prove business value, and determine the right cloud architecture for your organization. We can then help you build the cloud you envision – quickly and with confidence – and help you fully operationalize it, customizing solutions to improve operator and developer experience.
Typical engagements begin with an Architecture Design Assessment (ADA) – a structured process that assesses your business objectives, current state of infrastructure and applications, desired end state, and use cases. Upon completion of the assessment, Mirantis delivers a comprehensive architecture design document that provides an actionable plan to deploy a cloud solution. In addition, the ADA assesses your organization’s cloud-readiness in detail, and provides prescriptions and options for training, ongoing support, managed services, and point services such as application modernization and developer workflow automation.
LEARN MORE ABOUT ARCHITECTURAL DESIGN ASSESSMENT
Engagements then typically proceed to deployment. Mirantis offers a range of deployment services to help you quickly get production environments up and running, whether you are deploying a single cloud or need a multi-cloud solution. Solutions that conform to Mirantis Reference Architectures for our primary solution stack are implemented, complete with the centralized automation needed to operationalize them. This process can be customized and extended to suit special requirements or use cases demanding deviations from reference architecture specifications.
LEARN MORE ABOUT CLOUD BUILD SERVICES
Once the cloud is on its feet, Mirantis offers a range of options for conventional support (such as email, tickets, and standard SLAs), but also a full portfolio of “ZeroOps” services for remote cloud operations and lifecycle management, and for managed, customized developer workflow automation.
Gaining efficiency means leveraging centralized automation in every layer of an increasingly-complex multi-platform stack, achieving speed, consistency, and predictability, and gaining the ability to host and administer applications of many types (VM-oriented, containerized, cloud native) seamlessly across several-to-many infrastructures. Mirantis provides a complete, open-source-based, pre-integrated solution stack: engineered to enable frictionless adoption of IaaS, container, and container orchestration technologies and centralized operations, and optimized for lifecycle management and cloud operations by Mirantis – in effect, “private cloud as a service.”
Mirantis Container Cloud (MCC) is a Kubernetes-hosted solution engineered to work as the foundation of such a stack, administering everything below what we call the “application horizon line.” MCC aggregates tools, driven manually (webUI) or automatically (API), to:
Provision and observe bare-metal, hosted bare-metal (Equinix), private cloud, and popular public cloud infrastructures (AWS, Azure) and services on Linux and (for worker nodes) Windows hosts.
Deploy, observe, and lifecycle manage Mirantis Kubernetes Engine (Kubernetes/Swarm hybrid-capable orchestration) clusters in Gov/Mil/Fintech-secure, highly-available, performant, and consistent configurations that can be:
Updated continuously, scaled, and access-managed from the MCC central point of authority, policy, and governance.
Delivered rapidly and flawlessly, on-demand, on any supported infrastructure, in validated configurations for use in dev, test, and production.
Additionally, atop the Kubernetes substrate, MCC can deploy and provide observability for Mirantis OpenStack for Kubernetes (MOSK) – a hardened, containerized-control-plane OpenStack distribution that provides extremely full-featured, open source infrastructure-as-a-service (IaaS: a virtual-machine-oriented cloud, comparable to VMware) in a way that leverages Kubernetes’ powerful features to provide high performance and self-healing resilience for the IaaS layer.
Enable observability of MCC itself, ensuring continued health of the centralized point of platform operations.
MCC is engineered to centralize and provide a common point for manual (webUI) and automated (API) administration and observation for all infrastructures and platforms, and also – because it’s securely multi-tenant – provides secure, policy-manageable self-service, everywhere. It thus solves, in principle, all problems below the “application horizon line”:
MCC provides a host of dependable self-service capabilities to DevOps and individual teams developing, observing, and operating cloud native containerized and virtual-machine-based applications (for example, the ability to automate Blue/Green deployments in production – see the sketch following this list).
MCC provides platform engineers, SREs, security engineers, regulatory governance, and other central pools of expertise with tools they need to provide standards, lock down and protect enterprise assets, manage by policy, solve problems, audit compliance, and help developers move fast with less risk.
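As a hedged illustration of the Blue/Green pattern mentioned above (not MCC’s own API – `webapp`, `blue`, and `green` are placeholder names), traffic can be cut over by repointing a Kubernetes Service selector from the “blue” deployment to the “green” one:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Assume two Deployments, labeled color=blue (live) and color=green (new),
# sit behind a single Service. Repointing the Service's selector cuts
# traffic over to green; pointing it back to blue rolls the change back.
core.patch_namespaced_service(
    name="webapp",
    namespace="default",
    body={"spec": {"selector": {"app": "webapp", "color": "green"}}},
)
```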
To provide this high level of integration and consistency, MCC assembles and coordinates a host of open-source-based technologies and solutions including:
Mirantis Kubernetes Engine
Mirantis Container Runtime (FIPS 140-2 validated)
Mirantis Secure Registry (scanning and image signing)
Mirantis OpenStack for Kubernetes (MOSK)
Mirantis StackLight – comprehensive open source observability and metrics based on Prometheus, Grafana, OpenSearch, and other broadly-accepted and reliable open source standard components
The Mirantis technical solution stack thus provides the benefits of open source – rapid, community-driven improvements, transparent security standards, absence of proprietary requirements and other forms of lock-in, significant flexibility of configuration – while also providing tested, opinionated solutions that work together reliably at very large scales and are designed to reduce time-to-results for initial deployment, lifecycle management and scaling, extension to new hardware and clouds, and all other aspects of operations. The solution stack is also engineered to facilitate comprehensive remote operations by Mirantis.
LEARN MORE ABOUT MIRANTIS CONTAINER CLOUD
LEARN MORE ABOUT MIRANTIS KUBERNETES ENGINE
LEARN MORE ABOUT SWARM VS. KUBERNETES
LEARN MORE ABOUT MIRANTIS OPENSTACK FOR KUBERNETES
LEARN MORE ABOUT MCC ON EQUINIX METAL
VIDEO: The Future of OpenStack: Mirantis OpenStack for Kubernetes
ZeroOps is a new model for onsite/multisite remote managed services, where Mirantis employs its purpose-engineered technology and tested best practices, along with its globally-distributed bench of cloud experts and smart automation, to deliver the cloud platforms and solutions you need, where you need them, both below and above the application horizon line. Working with Mirantis’ open source-based, integrated infrastructure and platform operations technologies – all tuned to enable remote observability and intervention – your multi-platform, multi-infrastructure cloud estates can literally work “as a service” in a ZeroOps regime.
VIDEO: What is a Cloud Native Data Center & How Does Mirantis Deliver it to You as-a-Service