Istio and Kubernetes in production. Part 2. Tracing

February 8, 2019

In the previous post, we took a look at the building blocks of the Istio service mesh, got familiar with the system, and went through the questions that new Istio users often ask. In this post, we will look at how to organize the collection of tracing information across the network. Tracing is the first thing that developers and system administrators think of when they hear the term Service Mesh.

Indeed, we add a dedicated proxy to each microservice, and all TCP traffic passes through it. It may seem that collecting information about every networking event is now trivial. Unfortunately, in reality there are many nuances that have to be kept in mind.

Let’s have a look at them. What is relatively easy to get is only a diagram of our system’s nodes connected by arrows, together with the data rate between services (in effect, just bytes per unit of time). However, in most situations the services communicate over some application-layer protocol, such as HTTP, gRPC, Redis, and so on.

And, of course, we want tracing information at the level of these protocols: we want to see the rate of application-level requests, not just the rate of data. We also want to know the request latency for each protocol. Finally, we want to see the full path a request takes, from the moment it enters the system to the moment the response is received.

Unfortunately, this is not an easy task.
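
One of the main nuances is that the Envoy sidecars can report a span for every hop, but they cannot join those spans into a single trace on their own: the application itself has to copy the trace context headers from each incoming request to every outgoing one. Below is a minimal sketch of what that looks like in a Go service; the downstream URL and the port are assumptions made purely for illustration.

```go
// Minimal sketch: a Go HTTP handler behind an Istio/Envoy sidecar that
// forwards the B3 trace context headers to its downstream call, so that
// all spans of one user request end up in the same trace.
package main

import (
	"io"
	"net/http"
)

// Headers used by Istio/Envoy for trace context propagation.
var traceHeaders = []string{
	"x-request-id",
	"x-b3-traceid",
	"x-b3-spanid",
	"x-b3-parentspanid",
	"x-b3-sampled",
	"x-b3-flags",
	"x-ot-span-context",
}

func handler(w http.ResponseWriter, r *http.Request) {
	// The downstream service address is a hypothetical example.
	req, err := http.NewRequest(http.MethodGet,
		"http://backend.default.svc.cluster.local/data", nil)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Without this loop each hop would start a brand-new trace,
	// and the full path of the request would fall apart.
	for _, h := range traceHeaders {
		if v := r.Header.Get(h); v != "" {
			req.Header.Set(h, v)
		}
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	io.Copy(w, resp.Body)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```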

Source: medium.com

