What is Kubernetes Ingress?
Ingress routes and manages traffic from the outside world to workloads running on Kubernetes
Out of the box, a minimal Kubernetes cluster provides several abstractions for letting applications receive requests both from other apps inside the cluster and from the outside world. When developers want to expose an application to traffic (e.g., for testing), they typically define one of these basic Service types as a starting point:
ClusterIP – Assigns the workload a virtual IP address reachable only within the cluster. Lets a workload receive requests from other applications and entities inside the cluster.
NodePort – Lets a workload receive requests from the outside world on a specific port (allocated from a known range, 30000-32767 by default), exposed on all (or a subset of) cluster node IP addresses. A minimal example appears after this list.
LoadBalancer – Assigns an external load balancer (and optionally, a DNS name) to the workload’s NodePort. The load balancer must be provided by surrounding infrastructure, marshaled by a specific infrastructure provider running on the cluster; for example, a Kubernetes cluster running on AWS might integrate with Elastic Load Balancer, another AWS service, via an AWS provider. The load balancer can then balance requests across the nodes exposing the workload.
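For instance, a minimal NodePort Service might look like the following sketch (the myapp names and port numbers are illustrative, not prescribed):

apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector:
    app: myapp              # matches Pods labeled app: myapp
  ports:
  - port: 80                # the Service's cluster-internal port
    targetPort: 8080        # the container port traffic is forwarded to
    nodePort: 30080         # opened on every node (default range 30000-32767)

With this in place, a request to any node’s IP address on port 30080 reaches the workload.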
These primitive service types, however, don’t support all features production applications need. They can’t terminate SSL connections. They can’t support rewrite rules. They can’t (at least not easily) support conditional routing schemes — for example, sending requests to myapp/dothis to one workload, and requests to myapp/dothat to another.
In conventional web environments, features like these are provided by the web server/proxy (e.g., nginx), by adaptations on the host supporting the web server (e.g., installed certificates), by helper mechanisms like .htaccess files, by front-end proxies, or even by applications themselves.
Kubernetes, however, is designed to encourage:
Decoupling of workloads from housekeeping – Simpler workloads are easier to maintain, improve, and assemble dynamically to achieve operational goals. Ideally, each container or service should just do its assigned job, robustly and statelessly, making as few assumptions about its environment as possible. Routing should be managed outside of applications.
Aggregation and simplification of configuration – Ideally, it should be possible to collect the configuration defining something as potentially complicated as traffic routing for a complex application in one place (or as few places as possible), and to represent that configuration in standard ways, rather than trying to produce desired effects by coordinating diverse configs for many different entities.
Enterprise Kubernetes ingress
Ingress is a Kubernetes API resource designed to solve these problems. It provides a standard way of describing routing, TLS termination, URL rewriting, and other rules in a YAML configuration file, plus a standard contract for the applications/services (called ingress controllers) that read and implement these configurations.
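For example, a standard Ingress resource implementing the kind of conditional routing described earlier might look like this sketch (the hostname, Service names, and Secret name are hypothetical):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls            # TLS certificate stored as a Secret
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /dothis
        pathType: Prefix
        backend:
          service:
            name: dothis-service     # requests to myapp/dothis go here...
            port:
              number: 80
      - path: /dothat
        pathType: Prefix
        backend:
          service:
            name: dothat-service     # ...requests to myapp/dothat go here
            port:
              number: 80

The Ingress resource itself is just declarative data; an ingress controller running in the cluster watches for such resources and configures its proxy accordingly.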
Kubernetes itself doesn’t implement an ingress solution, and most minimal cluster models (k0s being a good example) don’t support one out of the box: users must choose an ingress controller, run it on their cluster, and integrate with it (NGINX is a frequent default choice).
Enterprise Kubernetes solutions, by contrast, are frequently provided with an ingress controller pre-integrated. An enterprise-grade ingress controller often provides features beyond those defined by standard Kubernetes ingress, and offers ways to configure these additional features alongside basic ones in otherwise-standard ingress configuration files. For example, the well-known Istio ingress controller provides a means of blacklisting IP address ranges (perhaps because those addresses are recognized as a source of denial-of-service attacks):
apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
  name: blacklisthandler
  namespace: istio-system
spec:
  compiledAdapter: listchecker
  params:
    overrides:             # the addresses/ranges to block
    - 37.72.166.13
    - <IP/CIDR TO BE BLACKLISTED>
    blacklist: true        # treat the list as a blacklist, not a whitelist
    entryType: IP_ADDRESSES
    refresh_interval: 1s
    ttl: 1s
    caching_interval: 1s
---
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: blacklistinstance
  namespace: istio-system
spec:
  compiledTemplate: listentry
  params:
    # Check the client IP reported by the proxy, falling back to 0.0.0.0
    value: ip(request.headers["x-forwarded-for"]) || ip("0.0.0.0")
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: blacklistcidrblock
  namespace: istio-system
spec:
  # Apply only to traffic entering through the Istio ingress gateway
  match: (source.labels["istio"] | "") == "ingressgateway"
  actions:
  - handler: blacklisthandler
    instances:
    - blacklistinstance
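Applied to the cluster (for example, with kubectl apply -f), these three resources work together: the instance extracts each client’s IP address from the x-forwarded-for header, the handler checks that address against the blacklist, and the rule scopes the check to traffic arriving through Istio’s ingress gateway.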
Is ingress a load balancer?
Not exactly, though the two are related. Ingress is similar to Kubernetes load balancing in that ingress functionality is specified by Kubernetes but implemented by a third-party solution. An enterprise Kubernetes solution will typically implement both integrations, so that ingress (which manages routing) can work behind external load balancing (which terminates traffic on FQDNs and distributes it across the nodes running instances of an application).
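A common pattern, sketched below, is to expose the ingress controller itself through a LoadBalancer Service: the external load balancer spreads traffic across the controller’s replicas, and the controller then routes each request onward to the right workload (the name, namespace, and labels here assume an NGINX ingress controller deployment, and may differ in practice):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer                          # external LB provisioned by the cloud provider
  selector:
    app.kubernetes.io/name: ingress-nginx     # the ingress controller's Pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443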
Ingress can be hard to integrate
An enterprise-grade ingress solution like Istio can be fairly challenging to integrate with a Kubernetes cluster manually. A full implementation may require running the controller itself in a clustered configuration, plus sidecar proxies injected into application pods, as well as integration with other cluster services (e.g., cert-manager, for SSL certificate management) and metrics solutions like Prometheus (for enabling dynamic routing schemes based on changes in traffic, for example).
Ingress delivers big benefits
Fully integrated in an enterprise Kubernetes solution, however, ingress becomes easy for operators and developers to use, and confers huge benefits. In the most basic sense, ingress provides the routing functionality needed to weave together the components of microservices applications. Ingress helps applications keep processing traffic seamlessly while Kubernetes helps them self-repair and scale according to conditions. It provides fundamental security services, like enabling HTTPS, and more advanced services, like the ability to quickly apply blacklists and make apps more resilient against concerted attacks.
Ingress can also play an important role in accelerating delivery of new features to end-users. For example, a paradigm now growing in popularity among developers is to implement so-called “canary deployments” of new application releases. In a canary deployment, a new release gets deployed alongside a stable release, and configured to receive traffic from only a known subset of end-users. Developers can then watch what happens, evaluate, and if needed, roll back the new release to fix problems — all without causing disruption for the whole customer base. Ingress is typically the way modern Kubernetes apps manage this trick: an ingress configuration is created to identify traffic from the target customer pool, and route it to the new release, while letting most traffic continue to the stable release.
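With the NGINX ingress controller, for example, such a canary scheme can be expressed as a second Ingress resource carrying canary annotations, as in this sketch (the host and Service names are hypothetical, and other controllers express canaries differently):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"    # send 10% of traffic to the canary
    # Alternatively, target a known user pool by request header:
    # nginx.ingress.kubernetes.io/canary-by-header: "X-Canary-User"
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-v2             # the new release under evaluation
            port:
              number: 80

Rolling back then amounts to deleting the canary Ingress (or setting its weight to zero).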
Learn more about enterprise Kubernetes.
Learn more about production Kubernetes.
Learn more about the role of secure container runtime.
Learn more about the importance of a secure registry.
Or download Mirantis Kubernetes Engine or k0s – zero friction Kubernetes.