KUBERNETES METRICS AND MONITORING

This post explores the current state of metrics and monitoring in Kubernetes by walking through the gradual thought process I went through when learning this topic. Kubernetes needs some metrics for its basic out-of-the-box functionality, such as autoscaling and scheduling, regardless of any monitoring solution you may want for troubleshooting and alerting. This is often referred to as the ‘core metrics pipeline’, in contrast to a general monitoring solution. Heapster, the cluster-wide resource aggregator that Kubernetes used to depend on, is now deprecated; in its place, Kubernetes introduced a new API for describing metrics – the ‘Metrics API’.
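
As a taste of what this API serves, below is a sketch of a NodeMetrics object as exposed by metrics-server under /apis/metrics.k8s.io/v1beta1/nodes; the node name, timestamp, and usage values are illustrative.

    apiVersion: metrics.k8s.io/v1beta1
    kind: NodeMetrics
    metadata:
      name: node-1                     # hypothetical node name
    timestamp: "2019-04-01T12:00:00Z"  # end of the measurement window
    window: 30s                        # interval the usage was averaged over
    usage:
      cpu: 250m                        # CPU in millicores
      memory: 1024Mi                   # working-set memory

The Horizontal Pod Autoscaler and kubectl top consume exactly this resource-metrics data.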

Read more

KUBERNETES OPERATIONS: PRIORITIZE WORKLOAD IN OVERCOMMITTED CLUSTERS

One of the benefits of adopting a system like Kubernetes is that it facilitates burstable, scalable workloads. Horizontal application scaling involves adding or removing instances of an application to match demand, and the Kubernetes Horizontal Pod Autoscaler automates pod scaling based on that demand. This is cool, but it can lead to unpredictable load on the cluster, which may put the cluster into an overcommitted state. The following image represents a three-node cluster that runs three applications, of which pink is the most critical.
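
Kubernetes expresses this kind of relative importance with PriorityClass; here is a minimal sketch, assuming a hypothetical class name for the pink application.

    apiVersion: scheduling.k8s.io/v1beta1
    kind: PriorityClass
    metadata:
      name: pink-critical    # hypothetical class for the critical app
    value: 1000000           # higher value = higher scheduling priority
    globalDefault: false
    description: "Class for the most critical workload; may preempt others."

Pods opt in by setting priorityClassName: pink-critical in their spec, so that in an overcommitted cluster the scheduler can preempt lower-priority pods to place the critical ones.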

Read more

USE ISTIO TRAFFIC MIRRORING FOR QUICKER DEBUGGING

Often when an error occurs, especially in production, one needs to debug the application to create a fix. Unfortunately, the input that triggered the issue is gone, and the test data on file does not reproduce the error (otherwise it would have been fixed before delivery). Likewise, when writing new code, one often wants to see what values a client actually supplies (and, to be honest, I have used Wireshark more than once to see what is being sent). Istio’s traffic mirroring feature can help: it lets an application receive real traffic, which is processed by the main version, while a copy of each request is sent to the mirror service.
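
A minimal VirtualService sketch of that setup, assuming hypothetical subsets v1 (live) and v2 (mirror) already defined in a DestinationRule:

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: my-service            # hypothetical service name
    spec:
      hosts:
      - my-service
      http:
      - route:
        - destination:
            host: my-service
            subset: v1            # the live version answers the client
          weight: 100
        mirror:
          host: my-service
          subset: v2              # the mirror receives a copy of each request

Mirrored traffic is fire-and-forget: responses from the v2 subset are discarded, so the copy never affects the caller.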

Read more

KUBERNETES INGRESS CONTROLLERS: HOW TO CHOOSE THE RIGHT ONE: PART 1

In this article, I will share my experiences with 3 major types of Kubernetes ingress solutions. Let’s go through their pros and cons and find out which one suits your needs. How does it work behind the scenes? First, let’s deploy a hello-world service with 2 Pods running in the demo namespace. Next, we apply the hello-world ingress resource file as below. When the Ingress resource is deployed, how does the ingress controller translate it into Nginx configuration? For the API path /api/hello-world, the controller generates an upstream directive that routes incoming traffic to the Service hello-world, that is, to its 2 destination Pod IPs on container port 8080 in the demo namespace.
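
A sketch of the hello-world Ingress resource described above (assuming the Service exposes port 8080):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: hello-world
      namespace: demo
    spec:
      rules:
      - http:
          paths:
          - path: /api/hello-world
            backend:
              serviceName: hello-world
              servicePort: 8080    # assumes the Service listens on 8080

The Nginx ingress controller renders this into an upstream block whose servers are the Pod IPs behind the Service rather than the Service’s cluster IP.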

Read more

WHEN AWS AUTOSCALE DOESN’T

The premise behind autoscaling in AWS is simple: you can maximize your ability to handle load spikes and minimize costs by automatically scaling your application out and in based on metrics like CPU or memory utilization. If you need 100 Docker containers to support your load during the day but only 10 when load is lower at night, running 100 containers at all times means that every night you’re using 900% more capacity than you need. With a constant container count, you’re either spending more money than necessary most of the time, or your service will likely fall over during a load spike.
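
As an illustration of that 10-to-100 range, here is a hedged CloudFormation sketch of target-tracking autoscaling for a hypothetical ECS service (cluster, service, and role names are made up):

    Resources:
      ScalableTarget:
        Type: AWS::ApplicationAutoScaling::ScalableTarget
        Properties:
          ServiceNamespace: ecs
          ScalableDimension: ecs:service:DesiredCount
          ResourceId: service/my-cluster/my-service    # hypothetical names
          MinCapacity: 10      # nighttime floor
          MaxCapacity: 100     # daytime peak
          RoleARN: arn:aws:iam::123456789012:role/ecs-autoscale  # hypothetical
      CpuScalingPolicy:
        Type: AWS::ApplicationAutoScaling::ScalingPolicy
        Properties:
          PolicyName: cpu-target-tracking
          PolicyType: TargetTrackingScaling
          ScalingTargetId: !Ref ScalableTarget
          TargetTrackingScalingPolicyConfiguration:
            TargetValue: 70.0  # keep average CPU near 70%
            PredefinedMetricSpecification:
              PredefinedMetricType: ECSServiceAverageCPUUtilization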

Read more

KUBERNETES AT CERN: USE CASES, INTEGRATION AND CHALLENGES

Source: speakerdeck.com

ISTIO AND KUBERNETES IN PRODUCTION. PART 2. TRACING

In the previous post, we took a look at the building blocks of the Istio service mesh, got familiar with the system, and answered the questions that new Istio users often ask. In this post, we will look at how to organize the collection of tracing information over the network. Tracing is the first thing that developers and system administrators think about when they hear the term ‘service mesh’.

Read more

EXPOSING MICROSERVICES RUNNING IN AWS EKS WITH A MICROSERVICES/API GATEWAY LIKE SOLO GLOO

So you’ve decided to run your Kubernetes workloads in AWS. As we’ve seen before, setting up AWS EKS requires a lot of patience and can cause plenty of headaches. You may be able to get it working on your own; otherwise, check out the eksctl tool from Weaveworks. Now that you’ve got a Kubernetes cluster, you want to start deploying your microservices to it and begin exposing and integrating APIs and services for your clients and other parts of your organization. At Solo.io we’ve open-sourced Gloo, a microservices gateway built on top of Envoy Proxy.
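
A heavily hedged sketch of a Gloo VirtualService routing a path prefix to an upstream (field names follow the gateway.solo.io/v1 CRD as I understand it and may differ between Gloo versions; the prefix and upstream name are hypothetical):

    apiVersion: gateway.solo.io/v1
    kind: VirtualService
    metadata:
      name: default
      namespace: gloo-system
    spec:
      virtualHost:
        domains:
        - '*'                                 # match any host header
        routes:
        - matcher:
            prefix: /api/hello                # hypothetical route prefix
          routeAction:
            single:
              upstream:
                name: demo-hello-world-8080   # hypothetical upstream
                namespace: gloo-system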

Read more

AMBASSADOR: BUILDING A CONTROL PLANE FOR AN ENVOY-POWERED API GATEWAY ON KUBERNETES

This article provides insight into the creation of the Ambassador open source API gateway for Kubernetes, and discusses the technical challenges and lessons learned from building a developer-focused control plane for managing ingress or ‘edge’ traffic within microservice-based applications. Key takeaways: developed by Datawire, Ambassador is an open source API gateway designed specifically for use with the Kubernetes container orchestration framework; at its core, it is a control plane tailored for edge/API configuration that manages the Envoy Proxy “data plane”.
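
Ambassador’s classic configuration pattern embeds a Mapping in a Service annotation, which the control plane translates into Envoy route configuration; a minimal sketch with hypothetical names:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service               # hypothetical service
      annotations:
        getambassador.io/config: |
          ---
          apiVersion: ambassador/v1
          kind: Mapping
          name: my-service_mapping
          prefix: /my-service/       # edge path handled by Envoy
          service: my-service        # upstream Kubernetes Service
    spec:
      selector:
        app: my-service
      ports:
      - port: 80
        targetPort: 8080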

Read more

POSEIDON-FIRMAMENT SCHEDULER – FLOW NETWORK GRAPH BASED SCHEDULER

In this blog post, we briefly describe the novel Firmament flow-network-graph-based scheduling approach (OSDI paper) in Kubernetes. We specifically describe the Firmament Scheduler and how it integrates with the Kubernetes cluster manager using Poseidon as the integration glue. We have seen extremely impressive scheduling-throughput numbers in benchmarks of this novel approach. The Firmament Scheduler was originally conceptualized, designed, and implemented by the University of Cambridge researchers Malte Schwarzkopf and Ionel Gog. At a very high level, the Poseidon-Firmament scheduler augments the current Kubernetes scheduling capabilities by adding flow-network-graph-based scheduling alongside the default Kubernetes scheduler. It models scheduling as a constraint-based optimization over a flow network graph, reducing it to a min-cost max-flow problem.
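
Because Poseidon runs alongside the default scheduler, workloads opt in per pod via schedulerName; a sketch follows (the deployment name and image are illustrative, and the scheduler name assumes a stock Poseidon deployment):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: batch-workload             # hypothetical batch job
    spec:
      replicas: 50
      selector:
        matchLabels:
          app: batch-workload
      template:
        metadata:
          labels:
            app: batch-workload
        spec:
          schedulerName: poseidon      # send these pods to Poseidon-Firmament
          containers:
          - name: worker
            image: busybox             # placeholder image
            command: ["sleep", "3600"]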

Read more