Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Automated rollouts and rollbacks
Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time.
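A minimal sketch of how this is expressed in practice: a Deployment with a `RollingUpdate` strategy replaces Pods gradually, and a readiness probe keeps traffic away from new Pods until they pass a health check. The names, image, and paths below are hypothetical placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # never take down more than one Pod at a time
      maxSurge: 1              # allow one extra Pod during the rollout
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:v2   # hypothetical image
        readinessProbe:        # the rollout only proceeds as new Pods become ready
          httpGet:
            path: /healthz
            port: 8080
```

If the new version misbehaves, `kubectl rollout undo deployment/my-app` rolls back to the previous revision.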
Service discovery and load balancing
No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
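As a sketch, a Service gives a set of Pods one stable DNS name and load-balances across them; clients just connect to the name. The names and ports here are hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app           # other Pods in the namespace resolve the DNS name "my-app"
spec:
  selector:
    app: my-app          # matches the label on the backing Pods
  ports:
  - port: 80             # the Service's stable port
    targetPort: 8080     # the container port traffic is forwarded to
```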
Service topology
Routing of service traffic based upon cluster topology.
Storage orchestration
Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
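The usual way an application asks for storage without caring about the backend is a PersistentVolumeClaim; the cluster satisfies it from whichever storage system is configured. The name and size below are hypothetical.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data         # hypothetical claim name, mounted into a Pod via a volume
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi      # the backend (cloud disk, NFS, Ceph, ...) is abstracted away
```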
Secret and configuration management
Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.
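A minimal sketch, with hypothetical names and a placeholder value: configuration lives in a ConfigMap and sensitive data in a Secret, both managed independently of the container image.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
stringData:
  DB_PASSWORD: "change-me"   # placeholder; never commit real secrets
```

A container can then pull these in as environment variables (e.g. with `envFrom: [{configMapRef: {name: app-config}}, {secretRef: {name: app-secret}}]`), so updating configuration or rotating a secret never requires rebuilding the image.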
Automatic bin packing
Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
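The "resource requirements" the scheduler packs against are declared per container as requests and limits; a sketch with hypothetical names and values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker            # hypothetical
spec:
  containers:
  - name: worker
    image: example.com/worker:latest   # hypothetical image
    resources:
      requests:                 # what the scheduler uses to place the Pod
        cpu: "500m"
        memory: "256Mi"
      limits:                   # hard caps enforced at runtime
        cpu: "1"
        memory: "512Mi"
```

Pods that declare no requests at all are treated as best-effort and are the first to be evicted under pressure, which is how critical and best-effort workloads can safely share the same nodes.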
Batch execution
In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
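Batch work is typically modeled as a Job, which runs Pods to completion and retries failures. The name, image, and command below are hypothetical.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report          # hypothetical
spec:
  backoffLimit: 3               # retry failed Pods up to three times
  template:
    spec:
      restartPolicy: Never      # let the Job controller handle retries
      containers:
      - name: report
        image: example.com/report:latest   # hypothetical image
        command: ["run-report"]            # hypothetical entrypoint
```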
IPv4/IPv6 dual-stack
Allocation of IPv4 and IPv6 addresses to Pods and Services.
Horizontal scaling
Scale your application up and down with a simple command, with a UI, or automatically based on CPU usage.
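The "simple command" is `kubectl scale deployment/my-app --replicas=5`; the automatic variant is a HorizontalPodAutoscaler. A sketch with hypothetical names and thresholds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa              # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # add replicas when average CPU exceeds 70%
```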
Self-healing
Restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
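The two health checks mentioned map to two probe types on a container spec. A fragment (the paths and port are hypothetical):

```yaml
    livenessProbe:              # container is killed and restarted if this fails
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:             # Pod is not advertised to clients until this passes
      httpGet:
        path: /ready
        port: 8080
```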
Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior were handled by a system?
That’s where Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
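One common way to express a canary in plain Kubernetes (a sketch with hypothetical names and images) is two Deployments whose Pods share an `app` label: a Service selecting only `app: my-app` then spreads roughly 10% of traffic onto the single canary replica.

```yaml
# stable track: nine replicas of the current version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: my-app, track: stable}
  template:
    metadata:
      labels: {app: my-app, track: stable}
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:v1   # hypothetical image
---
# canary track: one replica of the new version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app, track: canary}
  template:
    metadata:
      labels: {app: my-app, track: canary}
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:v2   # hypothetical image
```

If the canary looks healthy, you scale it up and the stable track down; if not, you scale the canary to zero and no clients are left pointing at it.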
Pinterest’s journey to a Kubernetes platform
Challenges faced by Pinterest before Kubernetes:
Pinterest, a social media web and mobile app that allows users to save or “pin” information, has a huge user base who have collectively saved more than 200 billion pins across 4 billion boards. As a result of this volume and the associated growth of their infrastructure stack, the Pinterest team faced several challenges. They stated that their engineers didn’t have a unified experience when launching their workloads and that managing huge numbers of virtual machines was creating a huge maintenance load for the infrastructure team. Furthermore, it was hard to build infrastructure governance tools across the separate systems and to determine which resources could be recycled.
After eight years in existence, Pinterest had grown into 1,000 microservices and multiple layers of infrastructure and diverse set-up tools and platforms. In 2016 the company launched a roadmap towards a new compute platform, led by the vision of creating the fastest path from an idea to production, without making engineers worry about the underlying infrastructure.
According to lead author Lida Li and team, the Cloud Management Platform team started their journey with Kubernetes in 2017 by dockerizing their production workloads and evaluating different container orchestration systems. The Kubernetes native workload model covered Deployments, Jobs, and DaemonSets, but the team needed more to model their workloads. They stated that usability issues were ‘huge blockers’ on the way to adopting Kubernetes and that it would have been difficult to support different versions of runtime support on the same Kubernetes cluster. Their solution was to design custom resource definitions (CRDs). This formed a pre-release deploy workflow available to early adopters of the new Kubernetes-based Compute Platform, and the team was integrating this workflow into their CI/CD platform to create a cleaner service for their engineers.
Pinterest designed its CRDs to achieve various ends that may also be informative for engineers considering Kubernetes adoption. Firstly, they wanted to bundle various native Kubernetes resources to work as a single workload, which saved their engineers from doing this piece by piece. Secondly, they wanted to inject necessary runtime support for their applications by adding the necessary sidecars, init containers, environment variables and volumes into the specification. Lastly, these definitions were used to perform the lifecycle management for native resources, such as reconciling the specifications and updating the event record. The Pinterest team concluded that this evolution significantly reduced the workload on engineers and therefore the risk of error. This echoes the experience which the Shopify team shared at QCon New York last year.
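Pinterest’s actual CRD schemas aren’t reproduced here, but a minimal hypothetical CRD shows the mechanism: it teaches the API server a new resource kind, and a custom controller watching those objects then creates the bundled native resources (Deployment, Service, sidecars, init containers, environment variables, volumes) on the engineer’s behalf. All names below are invented for illustration.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webservices.platform.example.com    # hypothetical group and name
spec:
  group: platform.example.com
  scope: Namespaced
  names:
    kind: WebService          # the single, high-level workload engineers interact with
    plural: webservices
    singular: webservice
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              image:          # engineers specify intent...
                type: string
              replicas:       # ...and the controller expands it into native resources
                type: integer
```

With this in place, an engineer writes one short `WebService` object instead of assembling the underlying Kubernetes resources piece by piece.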
One consideration for engineers taking on similar problems is that in order to avoid inconsistencies between applications as well as bloating maintenance and support burdens, Pinterest found their infrastructure team needed to deploy all workflow types such as pod-level sidecars, node-level daemonsets or VM-level daemons. Tinder, whose platform has run exclusively on Kubernetes since March 2019, took the opposite approach and its infrastructure responsibility is shared between all engineers in the organisation.
Another consideration is that the Pinterest team built an end-to-end test pipeline on top of the native Kubernetes test infrastructure with tests deployed to all clusters. This mitigated risks associated with going beyond the Kubernetes native workflow model, and the engineers stated it caught many regressions before they reached production. The Pinterest team was also integrating their deployment workflow into their new CI/CD platform.
I hope you gained some good insights from the Pinterest–Kubernetes case study.
THANK YOU FOR READING!!
A special thanks to World Record Holder Vimal Daga sir for his extraordinary teaching skills and for providing a platform where we can develop ourselves and learn new technologies and their integrations, built on his years of hard work and research. I consider myself really lucky to be a part of his trainings, where I get to improve myself and learn new and exciting things every day.