By Andrew Martin
This article was originally published in JAX Magazine
Kubernetes has continued its meteoric rise in 2018, with 83% of CNCF survey respondents using it to run their workloads. But with so many interested participants in the ecosystem, it can be difficult to separate the signal from the noise. Here are some wishes and expectations for the future of the Kubernetes ecosystem in 2019.
Google’s Kubernetes Engine (GKE) has been ahead of the competition since launch, and continues to ship features faster, with hosted Istio recently hitting beta. Microsoft’s Azure is a strong competitor, with node autoscaling and network policy both launching in late 2018. By contrast, Amazon have been notably slow to deliver a hosted Kubernetes solution, favouring their own ECS service; however, the Elastic Kubernetes Service (EKS) has finally launched and, along with the eksctl tool, now provides a viable alternative to user-provisioned masters on EC2. DigitalOcean now has a managed offering too, and as these managed services converge we can hope to see wider feature sets, deeper service integrations (such as AWS’s Service Operator), and tighter default security profiles.
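To give a flavour of how far eksctl lowers the barrier to entry, here is a sketch of its declarative cluster config file (cluster name, region, and node group sizing are illustrative, not from the original article):

```yaml
# cluster.yaml — consumed by `eksctl create cluster -f cluster.yaml`
# Names, region, and sizes are illustrative
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: eu-west-1
nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 2
```

A single command from this file provisions the EKS control plane, VPC networking, and worker nodes that would otherwise each be separate manual steps.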
Containers revolutionised web application development and deployment, stealing market share from virtual machines with faster startup and smaller footprints, and now the circle is closing. With projects like Kata Containers, Nabla Containers, Google’s gVisor, and now AWS’s Firecracker, container-compatible virtual machines are fighting for market share. Relying on virtual machines for isolation requires fine-tuning of start times, security settings, and developer experience to match what we have become used to with containers. As the projects mature, Kubernetes should be able to orchestrate VMs transparently to the end user - projects such as KubeVirt and firecracker-containerd have begun this process already. The option to wrap each process in whichever isolation technology is most appropriate to its workload could greatly enhance security without compromising usability or performance. The holy grail!
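Kubernetes’ RuntimeClass resource (beta from 1.14) is the hook that makes this per-workload choice possible. A sketch, assuming a node whose container runtime has a Kata Containers handler configured (the handler and pod names are illustrative):

```yaml
# Map a RuntimeClass name to a CRI runtime handler configured on the node
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: kata          # e.g. Kata Containers registered in containerd/CRI-O
---
# A pod opts into VM-grade isolation with a single field
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: sandboxed
  containers:
    - name: app
      image: nginx:1.15
```

Pods that omit `runtimeClassName` keep running on the default runtime, so stronger isolation can be adopted workload by workload.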
Kubernetes requires a lot of YAML to configure, and the difficulty of taming this complexity has spawned various approaches and tools. Helm is the most used and the most flexible templating solution, but its in-cluster Tiller component has had some security issues. ksonnet offers a hierarchical method for templating that favours inheritance, while at the other end of the spectrum users are deploying applications with tools like Ansible and Terraform, which are arguably the wrong abstractions for the job. And now kustomize has been merged into kubectl - and with it yet another YAML format for generating Kubernetes resources. As applications tend to choose a single templating tool to deliver YAML, I hope to see some standardisation across these tools, or possibly a mechanism to transform between them.
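kustomize’s approach is template-free: plain manifests are patched with overlays. A minimal `kustomization.yaml` sketch (file paths, prefix, and image name are illustrative):

```yaml
# kustomization.yaml — layers environment-specific changes over base manifests
resources:
  - deployment.yaml
  - service.yaml
namePrefix: staging-        # prepended to every resource name
commonLabels:
  env: staging              # added to every resource and selector
images:
  - name: example/app
    newTag: v1.2.3          # override the image tag without editing the base
```

The rendered output can be inspected with `kubectl kustomize ./` or applied directly with `kubectl apply -k ./`.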
Supply chain security - compromising an upstream supplier and using the target’s trust in them to compromise the target - is gaining recognition as an easy attack vector. The NotPetya ransomware, seeded through a compromised Ukrainian software update and famously crippling Maersk, Magecart’s attacks on Ticketmaster’s and BA’s suppliers, and the npm event-stream module poisoning all suggest attackers will be looking to exploit the supply chain in 2019. Fortunately Kubernetes and container supply chains have been the subject of scrutiny in recent years, with tools such as Notary (ensuring images match their expected content with signed metadata based on TUF), Grafeas (Google Cloud’s Binary Authorisation technology exposed as an open source project), and in-toto (pipeline metadata security and policy control) all exposing admission controllers to validate images as they are deployed to Kubernetes. These tools dramatically increase an organisation’s compromise resilience, and can be used to limit supply chain attack vectors both in build pipelines and for images deployed to Kubernetes. They need greater awareness as projects start to distribute signed software, which ultimately leads to increased trust in Kubernetes workloads.
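The admission-controller pattern these tools share can be sketched as a validating webhook that intercepts pod creation and asks a signature-verification service for a verdict. This is a hypothetical configuration - the verifier service, namespace, and path are illustrative, not from any specific project:

```yaml
# Route pod admission through a (hypothetical) image-signature verifier
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-signature-check
webhooks:
  - name: verify.images.example.com
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    failurePolicy: Fail          # fail closed: no verdict, no pod
    clientConfig:
      service:
        namespace: image-verify
        name: verifier
        path: /validate
```

`failurePolicy: Fail` is the security-relevant choice here: if the verifier is unreachable, unsigned images cannot slip through.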
The oldest criticism of Docker is that its daemon runs as root, so an escape from a container via the container runtime can potentially gain root on the host. The last few years have seen progress on some of the challenges of integrating user namespaces to allow running unprivileged containers. LXC already solves some of these problems, but is not supported by Kubernetes. However, an experimental binary distribution of Kubernetes called Usernetes runs rootless Moby (Docker) and CRI-O without root privilege by using user namespaces. If this approach gains traction we will see a dramatic improvement in the safety of containerised workloads, and therefore of Kubernetes itself.
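Docker’s existing user-namespace remapping gives a flavour of the mapping involved - it keeps the daemon itself running as root (unlike fully rootless mode), but remaps container UIDs so that root inside a container is an unprivileged user on the host. A minimal daemon config sketch:

```json
{
  "userns-remap": "default"
}
```

With this in `/etc/docker/daemon.json`, the daemon allocates subordinate UID/GID ranges (via `/etc/subuid` and `/etc/subgid`), so a process that is UID 0 inside the container maps to a high, unprivileged UID on the host - exactly the property the rootless projects extend to the runtime itself.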
2018 saw the Cluster API introduce an API for machine and cluster provisioning, the kubeadm control plane provisioner reach general availability, and the GitOps application deployment pattern rise as the logical progression of infrastructure as code. Each of these projects addresses a deployment problem end users have struggled with at a different layer (provisioning of machines, the Kubernetes control plane, and application workloads), and we will continue to see adoption of these projects as they reach maturity. Notably absent is Federation v2, the SIG’s rework based on the lessons learned implementing cluster Federation v1. The complexity of herding distributed systems has yielded some valuable lessons, but the project may need more than a year to ensure sufficient testing for production readiness.
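The Cluster API’s core idea is to describe machines with the same declarative resources used for workloads. A heavily simplified sketch of the v1alpha1-era Machine resource - field names vary between provider implementations, and the provider configuration shown is illustrative:

```yaml
# A worker machine declared as a Kubernetes resource (Cluster API sketch)
apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: worker-0
spec:
  versions:
    kubelet: 1.13.1          # desired kubelet version on the machine
  providerSpec:
    value:                   # opaque, provider-specific configuration
      instanceType: m5.large
```

A provider-specific controller reconciles these resources into real instances, which is what makes the GitOps pattern extend naturally down to the machine layer.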
Service meshes hijacked KubeCon Austin in 2017, and again in Copenhagen and Seattle in 2018, with the Kubernetes-native Istio and Linkerd 2 the front-runners. Envoy, the proxy that powers Istio, has already won the hearts of the cloud native community with its snappy performance, container-friendly immutable configuration model, and hot reload capability. Commercial entities are building around Envoy (including Tetrate, Solo, and Octarine, AWS’s App Mesh, HashiCorp’s Consul Connect, and a slew of others), whilst Google’s Knative has launched a full developer-focused platform on top of Istio. The steep learning curve will start to be outweighed by the security, availability, and observability guarantees of stable service meshes. Expect to see general adoption by high-compliance enterprises that would otherwise have to manage their own network encryption and policy.
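The encryption guarantee is a good example of the payoff: with Istio’s authentication API as it stood at the time of writing, mutual TLS for a whole namespace is a few lines of configuration rather than per-service certificate management (namespace name is illustrative):

```yaml
# Require mutual TLS for all services in a namespace (Istio 1.0-era API)
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default          # "default" makes the policy namespace-wide
  namespace: payments
spec:
  peers:
    - mtls: {}           # sidecars enforce mTLS on all inbound traffic
```

The Envoy sidecars handle certificate issuance, rotation, and handshakes, which is precisely the work a high-compliance enterprise would otherwise build itself.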
Rootless container image builds (as distinct from rootless container runtimes) have been on the horizon for a couple of years, with orca-build, BuildKit, and img proving the concept. They allow container images to be built without exposing the Docker socket, which can be used to escalate privilege - and is probably a backdoor into most Kubernetes-based CI build farms. With a slew of new rootless tooling emerging, including Red Hat’s buildah, Google’s kaniko, and Uber’s Makisu, we will see build systems that will eventually support building untrusted Dockerfiles - although there are outstanding issues that prevent these tools from achieving that today.
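kaniko illustrates the pattern: the build runs as an ordinary in-cluster pod with no Docker socket mounted. A sketch, where the Git context, destination registry, and credentials secret name are illustrative:

```yaml
# Build and push an image from inside the cluster, without the Docker socket
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/app.git
        - --destination=registry.example.com/app:latest
      volumeMounts:
        - name: docker-config       # registry credentials only - no daemon
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: registry-credentials
```

Compromising this pod yields registry push credentials at worst, rather than root on the build node via the Docker socket.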
Serverless offerings, also referred to as Functions as a Service (FaaS), will continue to fight for market share - there is obvious interest in the promises of reduced resource utilisation and pay-per-use compute. The original managed services that triggered the trend have seen huge adoption, and AWS’s Lambda has finally introduced a layered ZIP format that allows the same type of composition as Docker images. The Kubernetes-hosted equivalents OpenFaaS, Knative, Kubeless, and Fission will battle to deliver the smoothest developer experience and greatest feature set.
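The developer experience these projects compete on is largely about how little ceremony a function needs. As a sketch, an OpenFaaS stack file reduces a function to a handler directory and an image name (the function name, gateway address, and image are illustrative):

```yaml
# stack.yml — deployed with `faas-cli up`
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  hello:
    lang: python3          # language template supplies the scaffolding
    handler: ./hello       # directory containing the handler code
    image: example/hello:latest
```

Everything else - the HTTP server, packaging, and Kubernetes Deployment - is generated by the template, which is the experience the managed services set the bar for.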
Now that Kubernetes 1.13 supports multi-version Custom Resource Definitions (CRDs) and conversion between versions via webhooks, the reimplementation of Third Party Resources (deprecated in 1.7) is complete. CRDs allow the Kubernetes API to be extended with entirely new resource types. As databases such as Vitess, Oracle, and MongoDB launch operators that manage their products at runtime using CRDs, application developers will follow, using application scaffolding like the Operator Framework to manage Kubernetes-native applications and decrease the operational burden on SREs.
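A sketch of what the 1.13 multi-version support looks like in practice - the group, kind, and conversion service here are illustrative, not from a real operator:

```yaml
# A CRD serving two versions, with conversion delegated to a webhook
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  names:
    kind: Database
    plural: databases
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: false       # old version still served for existing clients
    - name: v1beta1
      served: true
      storage: true        # canonical version persisted in etcd
  conversion:
    strategy: Webhook      # operator-owned service converts between versions
    webhookClientConfig:
      service:
        namespace: default
        name: database-conversion
        path: /convert
```

This is what lets operator authors evolve their APIs without breaking existing custom resources - the same deprecation-and-conversion story the core Kubernetes APIs have always had.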
The Kubernetes community continues to innovate and inspire, driven as much by open source interests as commercial entities. The work done behind the scenes by SIG leads, developers, community and conference organisers, and end users is invaluable to the growth of the ecosystem, and with the predicted growth in 2019 it’s hard to see an end in sight.