How does a Prometheus Histogram work?

  • October 5, 2019
We looked previously at the counter, gauge, and summary; how does the Prometheus histogram work? The histogram has several similarities to the summary.

A histogram is a combination of various counters. Like summary metrics, histogram metrics are used to track the size of events, usually how long they take, via their observe method. There are usually also the same utilities to make it easy to time things as there are for summaries.
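As a concrete illustration, here is a minimal sketch using the Python client, prometheus_client; the metric name and the timed function are hypothetical:

    from prometheus_client import Histogram

    # A hypothetical metric, using the client library's default buckets.
    REQUEST_TIME = Histogram('request_duration_seconds',
                             'Time spent processing a request')

    # Track the size of an event directly via observe...
    REQUEST_TIME.observe(0.42)

    # ...or use the same timing decorator that summaries offer.
    @REQUEST_TIME.time()
    def handle_request():
        ...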

Where they differ is their handling of quantiles.
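Where a summary calculates quantiles client-side and exposes them directly, a histogram exposes only a set of cumulative counters, one per bucket boundary, plus a running sum and count of all observations. A scrape of the metric above might return something like this, where the bucket boundaries and values are purely illustrative:

    request_duration_seconds_bucket{le="0.1"} 2
    request_duration_seconds_bucket{le="0.5"} 5
    request_duration_seconds_bucket{le="1.0"} 6
    request_duration_seconds_bucket{le="+Inf"} 6
    request_duration_seconds_sum 1.87
    request_duration_seconds_count 6

Quantiles are then estimated at query time, for example a 95th percentile with histogram_quantile(0.95, rate(request_duration_seconds_bucket[5m])), which also allows aggregation across instances in a way summary quantiles do not.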

Source: robustperception.io


Related Posts

How much disk space do Prometheus blocks use?

Memory for ingestion is just one part of the resources Prometheus uses; let's look at disk blocks. Every 2 hours Prometheus compacts the data that has been buffered up in memory into blocks on disk. These include the chunks, indexes, tombstones, and various metadata.
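As a rough sketch of what such a block looks like on disk, each block is a self-contained directory; the ULID-style directory name below is made up:

    data/
    └── 01EXAMPLEULID/    # one 2-hour block; name is hypothetical
        ├── chunks/
        │   └── 000001    # compressed chunk segment files
        ├── index         # index over series and labels
        ├── tombstones    # deletion markers
        └── meta.json     # block metadata (time range, stats)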

Read More
How Uber Monitors 4,000 Microservices

With 4,000 proprietary microservices and a growing number of open source systems that needed to be monitored, by late 2014 Uber was outgrowing its usage of Graphite and Nagios for metrics. They evaluated several technologies, including Atlas and OpenTSDB, but the fact that a growing number of open source systems were adding native support for the Prometheus Metrics Exporter format tipped the scales in that direction. Uber found that with its use of Prometheus and M3, its storage costs for ingesting metrics became 8.53x more cost-effective per metric per replica.

Read More
Optimising Prometheus 2.6.0 Memory Usage with pprof

There have been some reports that compaction was causing larger memory spikes than was desirable. I dug into this and improved it for Prometheus 2.6.0, so let's see how. First I wrote a test setup that created some samples for 100k time series, in a way that would require compaction.

Read More