Industry Use Case of Kubernetes

Abhinav Shreyash
6 min read · May 14, 2022

Hello, everyone. After researching industry use cases of Kubernetes, I have shortlisted some of the case studies that I found most informative. Many of them are sourced from sites like kubernetes.io and others like it.

For an introduction to what we are talking about, read the following paragraphs.

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.

Kubernetes provides you with:

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment stays stable.
  • Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all their resources into the new containers.
  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve (see the sketch after this list).
  • Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
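
To make these features concrete, here is a minimal sketch using the official Kubernetes Python client (it is not taken from any of the case studies below). The Deployment name `web`, the `nginx` image, the namespace, and the probe path are illustrative assumptions. It declares a desired state of three replicas with a user-defined health check, which is what the automated-rollout and self-healing behaviour described above acts on.

```python
# Minimal sketch with the official Kubernetes Python client (pip install kubernetes).
# All names, images, and the probe path are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config; use load_incluster_config() inside a cluster
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state: Kubernetes keeps three replicas running (self-healing)
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                        # user-defined health check: pods failing it are restarted and
                        # not advertised to clients until ready
                        liveness_probe=client.V1Probe(
                            http_get=client.V1HTTPGetAction(path="/", port=80),
                            initial_delay_seconds=5,
                            period_seconds=10,
                        ),
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```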

WHAT IS KUBERNETES USED FOR?

As a Kubernetes user, you can define how your applications should run and the ways they should be able to interact with other applications or the outside world. You can scale your services up or down, perform graceful rolling updates, and switch traffic between different versions of your applications to test features or roll back problematic deployments. Kubernetes provides interfaces and composable platform primitives that allow you to define and manage your applications with high degrees of flexibility, power, and reliability.
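
As a hedged illustration of one of those operations, the snippet below triggers a rolling update by patching the pod template of the hypothetical `web` Deployment from the earlier sketch; Kubernetes then replaces old pods with new ones at a controlled rate. The new image tag is a placeholder.

```python
# Rolling update sketch: patch the Deployment's pod template with a new image.
# "web", "default", and the nginx tag are assumptions carried over from the earlier example.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Changing the pod template starts a controlled rollout: old pods are replaced
# gradually while the Deployment stays available.
apps.patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "nginx:1.26"}
    ]}}}},
)
```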

Here is one such case study from kubernetes.io, which walks through the experience of the travel site Booking.com.

Booking.com

“After Learning the Ropes with a Kubernetes Distribution, Booking.com Built a Platform of Its Own”

Challenge

In 2016, Booking.com migrated to an OpenShift platform, which gave product developers faster access to infrastructure. But because Kubernetes was abstracted away from the developers, the infrastructure team became a “knowledge bottleneck” when challenges arose. Trying to scale that support wasn’t sustainable.

Solution

After a year of operating OpenShift, the platform team decided to build its own vanilla Kubernetes platform, and developers needed to learn this technology in order to build such a system.

Impact (on the business of adopting Kubernetes)

Despite the learning curve, there’s been a great uptick in adoption of the new Kubernetes platform. Before containers, creating a new service could take a couple of days if the developers understood Puppet, or weeks if they didn’t. On the new platform, it can take as few as 10 minutes. About 500 new services were built on the platform in the first 8 months.

KUBERNETES AT IBM

The Challenge:

IBM Cloud offers public, private, and hybrid cloud functionality across a diverse set of runtimes from its OpenWhisk-based function as a service (FaaS) offering, managed Kubernetes and containers, to Cloud Foundry platform as a service (PaaS). These runtimes are combined with the power of the company’s enterprise technologies, such as MQ and DB2, its modern artificial intelligence (AI) Watson, and data analytics services.

Users of IBM Cloud can exploit capabilities from more than 170 different cloud native services in its catalog, including capabilities such as IBM’s Weather Company API and data services. In the later part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.

The Solution:

The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the Cloud Native Computing Foundation (CNCF) open source project Notary, according to Michael Hough, a software developer with the IBM Cloud Container Registry team.

Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM’s trust story, since it makes it possible for users to consume the company’s Notary offering from within their IKS clusters.

In this offering, the Notary server runs in IBM’s cloud and Portieris runs inside the IKS cluster. This lets users have their IKS cluster verify that the images they load containers from contain exactly what they expect, and Portieris is what allows an IKS cluster to apply that verification.
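
For illustration only, the sketch below creates such a per-namespace image policy with the Kubernetes Python client. The CRD group, version, plural, and field names are assumptions based on the open-source Portieris project and may differ between releases; check the project’s documentation for the exact schema.

```python
# Hedged sketch: create a Portieris-style ImagePolicy as a custom resource.
# The CRD group/version/plural and the spec fields below are assumptions, not a verified schema.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

image_policy = {
    "apiVersion": "portieris.cloud.ibm.com/v1",   # assumed CRD group/version
    "kind": "ImagePolicy",
    "metadata": {"name": "signed-images-only"},
    "spec": {
        "repositories": [
            {
                "name": "icr.io/my-team/*",                  # hypothetical registry namespace
                "policy": {"trust": {"enabled": True}},      # require Notary-signed images
            }
        ]
    },
}

custom.create_namespaced_custom_object(
    group="portieris.cloud.ibm.com",  # assumed group
    version="v1",
    namespace="production",           # policies apply per namespace
    plural="imagepolicies",           # assumed plural
    body=image_policy,
)
```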

The Impact:

IBM’s intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers. “Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem,” Hough says.

The company had not been offering image signing before, and Notary is the tool it used to implement that capability. “We had a multi-tenant Docker Registry with private image hosting,” Hough says. “The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose.”

OpenAI Case Study

Challenge

An artificial intelligence research lab, OpenAI needed infrastructure for deep learning that would allow experiments to be run either in the cloud or in its own data center, and to easily scale. Portability, speed, and cost were the main drivers.

Solution

OpenAI began running Kubernetes on top of AWS in 2016, and in early 2017 migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity. “We use Kubernetes mainly as a batch scheduling system and rely on our autoscaler to dynamically scale up and down our cluster,” says Christopher Berner, Head of Infrastructure. “This lets us significantly reduce costs for idle nodes, while still providing low latency and rapid iteration.”
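
The case study does not show OpenAI’s actual job definitions, but as a generic sketch of using Kubernetes as a batch scheduler, the snippet below submits a GPU Job with the Python client; the image, namespace, resource names, and parallelism figures are placeholders. The scheduler places the workers wherever capacity exists, and a cluster autoscaler (if configured) can add or remove nodes to match demand.

```python
# Generic batch-workload sketch: submit a Job and let the scheduler handle placement.
# Image, namespace, and names are placeholder assumptions.
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="training-run-42"),  # hypothetical experiment name
    spec=client.V1JobSpec(
        completions=4,
        parallelism=4,      # run four workers at once
        backoff_limit=2,    # retry a failed pod at most twice
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="trainer",
                        image="registry.example.com/trainer:latest",  # placeholder image
                        resources=client.V1ResourceRequirements(
                            limits={"nvidia.com/gpu": "1"}  # one GPU per worker
                        ),
                    )
                ],
            )
        ),
    ),
)

batch.create_namespaced_job(namespace="experiments", body=job)
```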

Impact

The company has benefited from greater portability: “Because Kubernetes provides a consistent API, we can move our research experiments very easily between clusters,” says Berner. Being able to use its own data centers when appropriate is “lowering costs and providing us access to hardware that we wouldn’t necessarily have access to in the cloud,” he adds. “As long as the utilization is high, the costs are much lower there.” Launching experiments also takes far less time: “One of our researchers who is working on a new distributed training system has been able to get his experiment running in two or three days. In a week or two he scaled it out to hundreds of GPUs. Previously, that would have easily been a couple of months of work.”

Problem Solved Using Kubernetes

  • Automatic deployment of application services
  • Automatic configuration of application network
  • Automatic distribution of services across infrastructure
  • Automatic resource allocation for application services
  • Automatic load balancing between application components
  • Automatic replication of application components
  • Automatic failover of application components
  • Easily scale out application services when required (see the autoscaler sketch after this list)
  • Sharing of storage across multiple containers
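
As a minimal sketch of the scaling item above, the snippet below attaches a HorizontalPodAutoscaler to the hypothetical `web` Deployment used in the earlier examples, letting Kubernetes add or remove replicas based on CPU utilization; the thresholds and names are illustrative assumptions.

```python
# Autoscaling sketch: scale the "web" Deployment between 2 and 10 replicas on CPU load.
# Names and thresholds are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"  # target the earlier Deployment
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,  # add pods when average CPU exceeds 70%
    ),
)

autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```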

Conclusion:

Kubernetes has benefited many firms. Since its creator, Google, decided to open-source it and donate it to the Cloud Native Computing Foundation, people have had every reason to learn it, and once they do they understand its importance and why so many companies are rapidly adopting it in their architectures: it seriously eases the administration of a company’s systems, as you can see from the case studies above.

As for the future scope of Kubernetes, it is definitely going to grow. More features will be added as time passes, we will see more of its applications in the coming years, and we have only seen the start.

It has already become an integral part of the DevOps industry.
