News

Kubernetes Metrics and Monitoring

This post explores the current state of metrics and monitoring in Kubernetes by walking through the thought process I went through while learning the topic. Kubernetes needs certain metrics for its basic out-of-the-box functionality, such as autoscaling and scheduling, regardless of any monitoring solution you may add for troubleshooting and alerting.
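
As a rough illustration of where those built-in metrics come from, the sketch below queries the resource metrics API (metrics.k8s.io) with the official Python client. It assumes metrics-server is installed in the cluster and that a local kubeconfig is available; it is not code from the article.

```python
# Hedged sketch: list node-level CPU/memory usage from the metrics.k8s.io API.
# Assumes metrics-server is running in the cluster and ~/.kube/config is valid.
from kubernetes import client, config

config.load_kube_config()          # use the local kubeconfig
api = client.CustomObjectsApi()

# The resource metrics API is served as a custom API group (metrics.k8s.io/v1beta1).
node_metrics = api.list_cluster_custom_object(
    group="metrics.k8s.io", version="v1beta1", plural="nodes"
)

for item in node_metrics.get("items", []):
    name = item["metadata"]["name"]
    usage = item["usage"]          # e.g. {'cpu': '250m', 'memory': '1024Ki'}
    print(f"{name}: cpu={usage['cpu']} memory={usage['memory']}")
```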

Read More
Kubernetes Operations: Prioritize Workload in Overcommitted Clusters

One of the benefits of adopting a system like Kubernetes is its support for burstable, scalable workloads. Horizontal application scaling involves adding or removing instances of an application to match demand, and the Kubernetes Horizontal Pod Autoscaler automates pod scaling based on that demand.
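
As a minimal sketch of what that looks like in practice (the Deployment name, replica bounds, and 70% CPU target below are placeholders, not values from the article), an autoscaling/v2 HorizontalPodAutoscaler manifest can be built and printed as YAML:

```python
# Hedged sketch: a minimal autoscaling/v2 HorizontalPodAutoscaler manifest.
# "web" and the 70% CPU target are hypothetical values, not from the article.
import yaml  # PyYAML

hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "web"},
        "minReplicas": 2,
        "maxReplicas": 10,
        "metrics": [{
            "type": "Resource",
            "resource": {
                "name": "cpu",
                "target": {"type": "Utilization", "averageUtilization": 70},
            },
        }],
    },
}

print(yaml.safe_dump(hpa, sort_keys=False))  # apply with: kubectl apply -f <file>
```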

Read More
Use Istio traffic mirroring for quicker debugging

Often when an error occurs, especially in production, you need to debug the application to create a fix. Unfortunately, the input that caused the issue is gone, and the test data on file does not trigger the error (otherwise it would have been fixed before delivery).
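
To give a concrete idea of the feature the post builds on (the service names below are hypothetical, not from the article), an Istio VirtualService can mirror live traffic to a second deployment while callers still only see the primary's response:

```python
# Hedged sketch: an Istio VirtualService that mirrors traffic to a debug copy.
# "reviews" / "reviews-debug" are hypothetical service names.
import yaml  # PyYAML

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [{"destination": {"host": "reviews"}}],  # the live service answers
            "mirror": {"host": "reviews-debug"},               # a copy also receives each request
            "mirrorPercentage": {"value": 100.0},              # mirror all traffic
        }],
    },
}

print(yaml.safe_dump(virtual_service, sort_keys=False))
```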

Read More
When AWS Autoscale Doesn’t

The premise behind autoscaling in AWS is simple: you can maximize your ability to handle load spikes and minimize costs by automatically scaling your application out and in based on metrics like CPU or memory utilization. If you need 100 Docker containers to support your load during the day but only 10 when load drops at night, running 100 containers around the clock means you’re using 900% more capacity than you need every night. With a constant container count, you’re either spending more money than you need to most of the time, or your service will likely fall over during a load spike.
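
For example (the cluster and service names, capacity bounds, and CPU target below are hypothetical), a target-tracking policy on an ECS service lets AWS scale the container count between a floor and a ceiling based on average CPU utilization:

```python
# Hedged sketch: target-tracking autoscaling for an ECS service via boto3.
# Cluster/service names and the 60% CPU target are made-up placeholders.
import boto3

autoscaling = boto3.client("application-autoscaling")

# Allow the service to scale between 10 and 100 tasks.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=10,
    MaxCapacity=100,
)

# Add and remove tasks to keep average CPU around 60%.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```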

Read More
Kubernetes at CERN: Use Cases, Integration and Challenges

Read More
Istio and Kubernetes in production. Part 2. Tracing

In the previous post, we looked at the building blocks of the Istio service mesh, got familiar with the system, and answered the questions that new Istio users often ask. In this post, we look at how to organize the collection of tracing information across the network. Tracing is the first thing that developers and system administrators think of when they hear the term service mesh.
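
One point worth keeping in mind up front (a general Istio requirement, not a detail taken from the article): Envoy generates the spans, but the application must forward the tracing headers on its outbound calls so those spans join into one trace. A minimal sketch, assuming a Flask service calling a hypothetical downstream service with requests:

```python
# Hedged sketch: propagate Istio/Envoy tracing headers to downstream calls.
# The downstream URL is a placeholder; the header list follows Istio's documentation.
import requests
from flask import Flask, request

app = Flask(__name__)

TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
]

@app.route("/handle")
def handle():
    # Copy incoming trace headers so the downstream call stays in the same trace.
    forwarded = {h: request.headers[h] for h in TRACE_HEADERS if h in request.headers}
    resp = requests.get("http://downstream:8080/work", headers=forwarded)
    return resp.text
```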

Read More
Poseidon-Firmament Scheduler – Flow Network Graph Based Scheduler

In this blog post, we briefly describe Firmament, a novel flow-network-graph-based scheduling approach in Kubernetes (OSDI paper). We specifically describe the Firmament scheduler and how it integrates with the Kubernetes cluster manager, using Poseidon as the integration glue. We have seen extremely impressive scheduling-throughput numbers in benchmarks of this approach.
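
For orientation (the scheduler name below is an assumption about a typical Poseidon deployment, not a value taken from the post), a pod opts into an alternate scheduler such as Poseidon-Firmament through the spec.schedulerName field:

```python
# Hedged sketch: a Pod that asks to be placed by an alternate scheduler.
# "poseidon" is an assumed schedulerName; the real value depends on the deployment.
import yaml  # PyYAML

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "firmament-scheduled-pod"},
    "spec": {
        "schedulerName": "poseidon",   # the default is "default-scheduler"
        "containers": [{"name": "app", "image": "nginx"}],
    },
}

print(yaml.safe_dump(pod, sort_keys=False))
```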

Read More