Return of the Smesh (Spinnaker Shpinnaker and Istio Shmistio to make a Smesh! Part 2)
Yes, the Kubernetes framework does satisfy a host of application needs in an acceptable manner for most applications. But what happens when your application depends more and more on the flow of data between components, and the distances between the providing resources grow greater? Issues such as Quality of Service (QoS) become very important, for one thing. What if there is a greater need for secured access to the individual services? These issues point to needs that are not addressed within the Kubernetes framework itself. This is where the concept of the Smesh (Service Mesh) comes in to fill the gap. (Learn more about what a service mesh is by reading our guide to Istio.)
Before we go right to the heart of the Smesh, let’s take a closer look at the Microservices architecture and the needs that it is designed to address.
The Microservices Architecture
Martin Fowler, renowned British author and software developer, described the microservice architectural style as "an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms," often via an HTTP resource or API. Providing a native microservice-capable platform such as Kubernetes is essential to supporting the Microservices Architecture properly.
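To make that concrete, here is a bare-bones sketch of what one such small service might look like, built with nothing but Python's standard library. The "inventory" service, its route, and its data are invented for illustration; in a Kubernetes deployment this process would be packaged in a container and run as one or more service instances.

```python
# A minimal, self-contained "inventory" microservice (hypothetical example).
# It owns one small responsibility and exposes it over a lightweight HTTP API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"widget": 42, "gadget": 7}  # toy in-memory data owned by this service

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /stock/widget  ->  {"item": "widget", "quantity": 42}
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "stock" and parts[1] in STOCK:
            body = json.dumps({"item": parts[1], "quantity": STOCK[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Each microservice runs in its own process; other services call it over HTTP.
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```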
Below is an example of how the Microservices Architecture is laid out, and a rudimentary diagram of how the services interact:
Istio is a service mesh layered on top of the Kubernetes framework to define and enforce access policies between services, improve the performance of service-to-service communication, and control the flow of data between microservices.
What is a Smesh (Service Mesh)? Regarding Istio and other tools...
A service mesh is a configurable infrastructure layer for a microservices application. It makes communication between service instances flexible, reliable, and fast. The mesh provides service discovery, load balancing, encryption, authentication and authorization, support for the circuit breaker pattern, and other capabilities. William Morgan described the service mesh as "a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud native application."
Service mesh technology comes with its own lexicon of new terms for old features and capabilities. Some of the more important terms and concepts are listed below for reference:
- Container orchestration framework - Kubernetes is the most common framework filling this need, but there are others.
- Services vs. service instances - A service is the definition: the logical name and the policies and behavior expected of it. A service instance is a single running copy of that service, such as a pod in Kubernetes, and it is the instances that actually receive requests.
- Sidecar proxy - A sidecar proxy attaches to a specific service instance. It is managed by the orchestration framework and handles communication with the other proxies on behalf of its instance, reducing the demand on the instance itself.
- Service discovery - This capability enables the different services to “discover” each other when needed. The Kubernetes framework keeps a list of instances that are healthy and ready to receive requests.
- Load balancing - In a service mesh, the load balancer ranks instances by how busy they are and routes new requests to the least busy ones, so that no single instance is overloaded while idle instances go unused (a rough sketch follows this list).
- Encryption - Instead of having each service implement its own encryption and decryption, the service mesh can encrypt and decrypt requests and responses on the services’ behalf.
- Authentication and authorization - The service mesh can validate requests BEFORE they are sent to the service instances.
- Support for the circuit breaker pattern - The service mesh can stop requests from ever being sent to an unhealthy instance. We will discuss this specific feature later.
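As a rough illustration of the load-balancing bullet above, here is a small sketch of "least busy" instance selection. The instance names and request counters are hypothetical; a real mesh proxy tracks this per connection and per request, but the idea is the same.

```python
# Sketch of least-busy (least-connections) load balancing across service instances.
# Instance names and counters are hypothetical.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    active_requests: int = 0  # how "busy" this instance currently is

class LeastBusyBalancer:
    def __init__(self, instances):
        self.instances = list(instances)

    def pick(self) -> Instance:
        # Route the next request to the instance with the fewest active requests,
        # so busy instances are not piled on while idle ones sit unused.
        chosen = min(self.instances, key=lambda i: i.active_requests)
        chosen.active_requests += 1
        return chosen

    def release(self, instance: Instance) -> None:
        # Called when a request completes.
        instance.active_requests -= 1

balancer = LeastBusyBalancer([Instance("payments-a"), Instance("payments-b")])
target = balancer.pick()            # -> whichever instance is least busy right now
print("routing request to", target.name)
balancer.release(target)
```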
Istio also brings circuit-breaking to the application development process. Circuit-breaking helps guard against partial or total cascading network communication failures by tracking the health and viability of each service instance, and the circuit-breaker feature determines whether traffic should continue to be routed to a given instance. As a design consideration, the application developer must decide what the application should do once a service instance has been marked as not accepting requests.
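To make that design consideration concrete, below is a minimal sketch of the circuit breaker pattern itself, not Istio's or Envoy's implementation. The failing service call, the thresholds, and the fallback are all assumptions chosen for illustration: after a configured number of consecutive failures the breaker opens, and the application falls back instead of sending requests to the unhealthy instance.

```python
import time

def call_recommendation_service():
    # Stand-in for a real network call to another service; it always fails here
    # so the example exercises the breaker. The name is hypothetical.
    raise ConnectionError("service instance unavailable")

class CircuitBreaker:
    """Minimal circuit breaker sketch: open after N consecutive failures,
    then refuse to send requests until a cool-down period has passed."""

    def __init__(self, max_failures=3, reset_seconds=30):
        self.max_failures = max_failures
        self.reset_seconds = reset_seconds
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed (instance considered healthy)

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_seconds:
                return fallback()      # circuit open: do not call the instance at all
            self.opened_at = None      # cool-down elapsed: allow a trial request
            self.failures = 0
        try:
            result = fn()
            self.failures = 0          # a success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()  # mark the instance as not accepting requests
            return fallback()

# The application decides what degraded behavior looks like (the design consideration above).
breaker = CircuitBreaker(max_failures=2, reset_seconds=30)
for _ in range(4):
    print(breaker.call(call_recommendation_service,
                       fallback=lambda: {"recommendations": []}))
```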
Envoy, which Istio uses as its sidecar proxy, treats circuit-breaking as a subset of load balancing and health checking. Envoy separates its routing methods from the communication with the actual backend clusters, eliminating routes to those service instances that are unhealthy or unable to accept requests. This approach allows many different routes to be created that map traffic only to healthy, request-accepting backends.
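One way to picture that separation: a route resolves only to a named backend cluster, and the cluster filters out its unhealthy hosts before a backend is chosen, so the routing table itself never changes when an instance goes bad. The sketch below is a conceptual illustration in Python, not Envoy's actual configuration or API, and the cluster and host names are made up.

```python
# Conceptual sketch: routing resolves a path to a cluster name, and the cluster
# excludes unhealthy hosts before a backend is picked. Names are hypothetical.
import random

class Cluster:
    def __init__(self, name, hosts):
        self.name = name
        self.health = {host: True for host in hosts}  # filled in by health checking

    def mark_unhealthy(self, host):
        self.health[host] = False

    def pick_host(self):
        healthy = [h for h, ok in self.health.items() if ok]
        if not healthy:
            raise RuntimeError(f"no healthy hosts in cluster {self.name}")
        return random.choice(healthy)

class Router:
    def __init__(self, routes, clusters):
        self.routes = routes      # path prefix -> cluster name
        self.clusters = clusters  # cluster name -> Cluster

    def route(self, path):
        for prefix, cluster_name in self.routes.items():
            if path.startswith(prefix):
                # The route only names a cluster; unhealthy hosts were already
                # excluded inside the cluster, not in the routing table.
                return self.clusters[cluster_name].pick_host()
        raise RuntimeError(f"no route for {path}")

orders = Cluster("orders", ["10.0.0.1:8080", "10.0.0.2:8080"])
orders.mark_unhealthy("10.0.0.2:8080")
router = Router({"/orders": "orders"}, {"orders": orders})
print(router.route("/orders/42"))  # always one of the healthy hosts
```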
Below is a diagram of the Istio architecture for reference:
The Istio components and their functions are listed below:
Control plane:
- Istio-Manager: provides routing rules and service discovery information to the Envoy proxies.
- Mixer: collects telemetry from each Envoy proxy and enforces access control policies.
- Istio-Auth: provides “service to service” and “user to service” authentication. This component also converts unencrypted traffic to TLS-based traffic between services, as needed.
Data plane:
- Envoy: a feature-rich proxy managed by the control plane components. Envoy intercepts traffic to and from the service, applying routing and access policies according to the rules set in the control plane.
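To picture how these responsibilities divide, the toy sketch below models a sidecar proxy that receives routing rules from a manager component and consults a policy component before forwarding a request. It is a deliberately simplified illustration of the control plane/data plane split, not real Istio code or APIs; the class names, rules, and services are invented.

```python
# Toy model of the control-plane / proxy split (not real Istio components or APIs).
class Manager:
    """Hands out routing rules, in the spirit of the rule-distribution component."""
    def routing_rules(self):
        return {"reviews": "reviews-v2"}  # e.g. send all 'reviews' traffic to the v2 backend

class PolicyChecker:
    """Decides whether a request is allowed, in the spirit of policy enforcement."""
    def allow(self, source, destination):
        return source != "anonymous"  # hypothetical rule: anonymous callers are rejected

class SidecarProxy:
    """Sits next to a service instance and applies the rules it was given."""
    def __init__(self, manager, policy):
        self.rules = manager.routing_rules()  # obtained from the control plane
        self.policy = policy

    def forward(self, source, service, request):
        destination = self.rules.get(service, service)
        if not self.policy.allow(source, destination):
            return "403 denied by policy"
        return f"forwarded {request} to {destination}"

proxy = SidecarProxy(Manager(), PolicyChecker())
print(proxy.forward("productpage", "reviews", "GET /reviews/1"))  # routed to reviews-v2
print(proxy.forward("anonymous", "reviews", "GET /reviews/1"))    # blocked before the service
```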