Kiali with production-scale Prometheus

  • May 6, 2020

Of course, the definition of “production-scale Prometheus” can be as broad as the variety of ways Istio and Prometheus are used in production, so in the context of this article we have to make some assumptions. First of all, this article focuses on Istio using Telemetry v2, which is enabled by default starting from Istio 1.5.

This feature was also available as an experimental option (disabled by default) in previous Istio releases. Secondly, this post is written in reaction to the Istio guidelines that describe how to set up Prometheus for production scale. You can refer to those guidelines, or to the article that inspired them, for the details of that setup.

But let me summarize the key points: unlike in Telemetry v1, Envoy sidecars expose the Istio metrics directly to Prometheus instead of going through Mixer. The main motivation is to remove the telemetry bottleneck that Mixer represented. A side effect, however, is that metrics cardinality increases, because metrics are now reported per pod rather than per workload (Mixer used to perform that per-workload aggregation).
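To make the first point more concrete, here is a minimal sketch of what direct scraping can look like: a Prometheus job that discovers sidecar-injected pods through the standard prometheus.io/* annotations. The job name and the final label names (namespace, pod) are illustrative choices of mine, not something prescribed by the article.

```yaml
scrape_configs:
  # Illustrative job: scrape every pod that opts in via the standard
  # prometheus.io/* annotations (Istio's sidecar injection can set these
  # so that the Envoy metrics endpoint is discovered automatically).
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods explicitly marked for scraping.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Honor a custom metrics path if the annotation provides one.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Rewrite the target address to the annotated port.
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      # Attach per-pod labels: this is precisely what drives cardinality up,
      # since every pod now produces its own set of time series.
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
```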

The production-scale setup addresses this issue.
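As a rough sketch of that setup (following the recording-rules-plus-federation approach described in those guidelines; the rule names, intervals, and addresses below are my assumptions, not values from the article), a cluster-local Prometheus first collapses per-pod series into workload-level series, and a central, long-retention Prometheus then federates only the aggregated series:

```yaml
# (1) Rules file loaded by the cluster-local Prometheus: aggregate per-pod
#     series into workload-level series (names here are illustrative).
groups:
  - name: istio.workload.aggregation
    interval: 10s
    rules:
      - record: workload:istio_requests_total
        # Drop the per-pod labels; all other Istio labels are preserved.
        expr: sum without (instance, pod) (istio_requests_total)
---
# (2) Scrape config on the central Prometheus: federate only the
#     pre-aggregated "workload:" series from the cluster-local instance.
scrape_configs:
  - job_name: istio-federation
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{__name__=~"workload:(.*)"}'
    static_configs:
      - targets: ['prometheus.istio-system:9090']  # assumed in-cluster address
    metric_relabel_configs:
      # Strip the "workload:" prefix so Kiali and dashboards keep querying
      # the usual metric names (e.g. istio_requests_total).
      - source_labels: [__name__]
        regex: 'workload:(.*)'
        target_label: __name__
        action: replace
```

Because the pod and instance labels are dropped before federation, the central Prometheus stores one set of series per workload rather than per pod, which is what keeps cardinality manageable at scale.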

Source: medium.com
