Exporter for Celery/Flower metrics, inspired by https://github.com/vooydzig/flower-prometheus-exporter
The iPerf3 exporter is configured via command-line flags. To view all available command-line flags, run iperf3_exporter -h. The timeout of each probe is automatically determined from the scrape_timeout in the Prometheus config. This can also be limited by the iperf3.timeout command-line flag. If neither is specified, it defaults to 30 seconds.

Prometheus Configuration

The iPerf3 exporter needs to be passed the target as a parameter; this can be done with relabelling. Optionally, pass the port that the target iperf3 server is listening on as the "port" parameter.

Example config:

  scrape_configs:
    - job_name: 'iperf3'
      metrics_path: /probe
      static_configs:
        - targets:
            - foo.server
            - bar.server
      params:
        port: ['5201']
      relabel_configs:
        - source_labels: [__address__]
          target_label: __param_target
        - source_labels: [__param_target]
          target_label: instance
        - target_label: __address__
          replacement: 127.0.0.1:5201  # The iPerf3 exporter's real hostname:port.
Prometheus exporter for machine metrics, written in Go with pluggable metric collectors.

Collectors

There is varying support for collectors on each operating system. The lists below show all existing collectors and the supported systems (in parentheses). Which collectors are used is controlled by the --collectors.enabled flag.

Enabled by default

conntrack (Linux): Shows conntrack statistics (does nothing if no /proc/sys/net/netfilter/ present).
cpu (FreeBSD): Exposes CPU statistics.
diskstats (Linux): Exposes disk I/O statistics from /proc/diskstats.
entropy (Linux): Exposes available entropy.
filefd (Linux): Exposes file descriptor statistics from /proc/sys/fs/file-nr.
filesystem (FreeBSD, Dragonfly, Linux, OpenBSD): Exposes filesystem statistics, such as disk space used.
loadavg (Darwin, Dragonfly, FreeBSD, Linux, NetBSD, OpenBSD, Solaris): Exposes load average.
mdadm (Linux): Exposes statistics about devices in /proc/mdstat (does nothing if no /proc/mdstat present).
meminfo (Dragonfly, FreeBSD, Linux): Exposes memory statistics.
netdev (Dragonfly, FreeBSD, Linux, OpenBSD): Exposes network interface statistics such as bytes transferred.
netstat (Linux): Exposes network statistics from /proc/net/netstat. This is the same information as netstat -s.
stat (Linux): Exposes various statistics from /proc/stat. This includes CPU usage, boot time, forks and interrupts.
textfile (any): Exposes statistics read from local disk. The --collector.textfile.directory flag must be set.
time (any): Exposes the current system time.
vmstat (Linux): Exposes statistics from /proc/vmstat.

Disabled by default

bonding (Linux): Exposes the number of configured and active slaves of Linux bonding interfaces.
devstat (FreeBSD): Exposes device statistics.
gmond (any): Exposes statistics from Ganglia.
interrupts (Linux, OpenBSD): Exposes detailed interrupts statistics.
ipvs (Linux): Exposes IPVS status from /proc/net/ip_vs and stats from /proc/net/ip_vs_stats.
ksmd (Linux): Exposes kernel and system statistics from /sys/kernel/mm/ksm.
logind (Linux): Exposes session counts from logind.
megacli (Linux): Exposes RAID statistics from MegaCLI.
meminfo_numa (Linux): Exposes memory statistics from /proc/meminfo_numa.
ntp (any): Exposes time drift from an NTP server.
runit (any): Exposes service status from runit.
supervisord (any): Exposes service status from supervisord.
systemd (Linux): Exposes service and system status from systemd.
tcpstat (Linux): Exposes TCP connection status information from /proc/net/tcp and /proc/net/tcp6. (Warning: the current version has potential performance issues in high load situations.)

Textfile Collector

The textfile collector is similar to the Pushgateway, in that it allows exporting of statistics from batch jobs. It can also be used to export static metrics, such as what role a machine has. The Pushgateway should be used for service-level metrics. The textfile module is for metrics that are tied to a machine. To use it, set the --collector.textfile.directory flag on the Node exporter. The collector will parse all files in that directory matching the glob *.prom using the text format.
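As a rough illustration of the textfile collector, a batch job or provisioning script might write a file like the following into the configured directory; the filename, metric names, and label below are made up for the example, not taken from the exporter:

  # /var/lib/node_exporter/textfile/role.prom (hypothetical path and metrics)
  # HELP node_role Role assigned to this machine (1 = this role is active).
  # TYPE node_role gauge
  node_role{role="webserver"} 1
  # HELP batch_job_last_success_unixtime Unix timestamp of the last successful batch run.
  # TYPE batch_job_last_success_unixtime gauge
  batch_job_last_success_unixtime 1700000000

The collector re-reads these files on every scrape, so the writing job only needs to replace the file atomically (for example, write to a temporary file and rename it) whenever the values change.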
Prometheus exporter for PostgreSQL server metrics.

Flags

help: Show context-sensitive help (also try --help-long and --help-man).
web.listen-address: Address to listen on for web interface and telemetry. Default is :9187.
web.telemetry-path: Path under which to expose metrics. Default is /metrics.
disable-default-metrics: Use only metrics supplied from queries.yaml via --extend.query-path.
disable-settings-metrics: Use the flag if you don't want to scrape pg_settings.
auto-discover-databases: Whether to discover the databases on a server dynamically.
extend.query-path: Path to a YAML file containing custom queries to run. Check out queries.yaml for examples of the format.
dumpmaps: Do not run; print the internal representation of the metric maps. Useful when debugging a custom queries file.
constantLabels: Labels to set in all metrics. A list of label=value pairs, separated by commas.
version: Show application version.
exclude-databases: A list of databases to remove when autoDiscoverDatabases is enabled.
log.level: Set logging level: one of debug, info, warn, error, fatal.
log.format: Set the log output target and format, e.g. logger:syslog?appname=bob&local=7 or logger:stdout?json=true. Defaults to logger:stderr.

Environment Variables

The following environment variables configure the exporter:

DATA_SOURCE_NAME: The default legacy format. Accepts URI form and key=value form arguments. The URI may contain the username and password to connect with.
DATA_SOURCE_URI: An alternative to DATA_SOURCE_NAME which exclusively accepts the hostname without a username and password component. For example, my_pg_hostname or my_pg_hostname?sslmode=disable.
DATA_SOURCE_URI_FILE: The same as above, but reads the URI from a file.
DATA_SOURCE_USER: When using DATA_SOURCE_URI, this environment variable is used to specify the username.
DATA_SOURCE_USER_FILE: The same, but reads the username from a file.
DATA_SOURCE_PASS: When using DATA_SOURCE_URI, this environment variable is used to specify the password to connect with.
DATA_SOURCE_PASS_FILE: The same as above, but reads the password from a file.
PG_EXPORTER_WEB_LISTEN_ADDRESS: Address to listen on for web interface and telemetry. Default is :9187.
PG_EXPORTER_WEB_TELEMETRY_PATH: Path under which to expose metrics. Default is /metrics.
PG_EXPORTER_DISABLE_DEFAULT_METRICS: Use only metrics supplied from queries.yaml. Value can be true or false. Default is false.
PG_EXPORTER_DISABLE_SETTINGS_METRICS: Use the flag if you don't want to scrape pg_settings. Value can be true or false. Default is false.
PG_EXPORTER_AUTO_DISCOVER_DATABASES: Whether to discover the databases on a server dynamically. Value can be true or false. Default is false.
PG_EXPORTER_EXTEND_QUERY_PATH: Path to a YAML file containing custom queries to run. Check out queries.yaml for examples of the format.
PG_EXPORTER_CONSTANT_LABELS: Labels to set in all metrics. A list of label=value pairs, separated by commas.
PG_EXPORTER_EXCLUDE_DATABASES: A comma-separated list of databases to remove when autoDiscoverDatabases is enabled. Default is empty string.

Settings set by environment variables starting with PG_ will be overwritten by the corresponding CLI flag if given.
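To make the connection string and custom query file concrete, here is a minimal sketch; the credentials, hostname, and query below are placeholder examples, not values from this description:

  # Legacy DATA_SOURCE_NAME form (URI style), with placeholder credentials.
  export DATA_SOURCE_NAME="postgresql://exporter_user:secret@localhost:5432/postgres?sslmode=disable"

  # queries.yaml, passed via --extend.query-path, defining one custom gauge.
  pg_replication:
    query: "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())) AS lag"
    metrics:
      - lag:
          usage: "GAUGE"
          description: "Replication lag behind the primary in seconds"

With a file like this, the exporter runs the query on each scrape and exposes the result as pg_replication_lag, alongside (or, with disable-default-metrics, instead of) the built-in metrics.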
Prometheus exporter for RabbitMQ metrics.
Prometheus Exporter for Redis Metrics. Supports Redis 2.x, 3.x, 4.x, 5.x and 6.x
Prometheus exporter for Squid metrics.
statsd_exporter receives StatsD-style metrics and exports them as Prometheus metrics.

Overview

With StatsD

To pipe metrics from an existing StatsD environment into Prometheus, configure StatsD's repeater backend to repeat all received metrics to a statsd_exporter process. This exporter translates StatsD metrics to Prometheus metrics via configured mapping rules.

  +----------+                         +-------------------+                        +--------------+
  |  StatsD  |---(UDP/TCP repeater)--->|  statsd_exporter  |<---(scrape /metrics)---|  Prometheus  |
  +----------+                         +-------------------+                        +--------------+

Without StatsD

Since the StatsD exporter uses the same line protocol as StatsD itself, you can also configure your applications to send StatsD metrics directly to the exporter. In that case, you don't need to run a StatsD server anymore. We recommend this only as an intermediate solution and recommend switching to native Prometheus instrumentation in the long term.

Tagging Extensions

The exporter supports Librato, InfluxDB, and DogStatsD-style tags, which will be converted into Prometheus labels.

Librato-style tags must be appended to the metric name with a delimiting #, like so:

  metric.name#tagName=val,tag2Name=val2:0|c

See the statsd-librato-backend README for a more complete description.

InfluxDB-style tags must be appended to the metric name with a delimiting comma, like so:

  metric.name,tagName=val,tag2Name=val2:0|c

See this InfluxDB blog post for a larger overview.

DogStatsD-style tags are appended as a |#-delimited section at the end of the metric, like so:

  metric.name:0|c|#tagName=val,tag2Name=val2

See Tags in the DogStatsD documentation for the concept description and Datagram Format. If you encounter problems, note that this tagging style is incompatible with the original statsd implementation.

Be aware: if you mix tag styles (e.g., Librato/InfluxDB with DogStatsD), the exporter will consider this an error and the sample will be discarded. Also, tags without values (#some_tag) are not supported and will be ignored.
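The mapping rules mentioned above live in a YAML file passed to the exporter. The following is a minimal sketch with a hypothetical metric name and labels, showing how the components of a dotted StatsD name can be turned into a labeled Prometheus metric:

  mappings:
    - match: "myapp.dispatcher.*.*"
      name: "dispatcher_events_total"
      labels:
        processor: "$1"
        outcome: "$2"

With this rule, a sample such as myapp.dispatcher.worker1.success:1|c would be exported as dispatcher_events_total{processor="worker1",outcome="success"}.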
Loki: like Prometheus, but for logs.

Loki is a horizontally-scalable, highly-available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.

Compared to other log aggregation systems, Loki:

- does not do full text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
- indexes and groups log streams using the same labels you're already using with Prometheus, enabling you to seamlessly switch between metrics and logs.
- is an especially good fit for storing Kubernetes Pod logs. Metadata such as Pod labels is automatically scraped and indexed.
- has native support in Grafana (requires Grafana v6.0 or later).

A Loki-based logging stack consists of 3 components:

- promtail is the agent, responsible for gathering logs and sending them to Loki.
- loki is the main server, responsible for storing logs and processing queries.
- Grafana for querying and displaying the logs.

Loki is like Prometheus, but for logs: we prefer a multidimensional label-based approach to indexing, and want a single-binary, easy-to-operate system with no dependencies. Loki differs from Prometheus by focusing on logs instead of metrics, and delivering logs via push instead of pull.
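As a rough sketch of the promtail side, a minimal configuration might look like the following; it assumes a Loki server reachable at localhost:3100, and the job name and log path are placeholders:

  server:
    http_listen_port: 9080
    grpc_listen_port: 0

  positions:
    filename: /tmp/positions.yaml

  clients:
    - url: http://localhost:3100/loki/api/v1/push

  scrape_configs:
    - job_name: system
      static_configs:
        - targets:
            - localhost
          labels:
            job: varlogs
            __path__: /var/log/*.log

The labels attached here (job, plus anything added by relabelling) become the index for the resulting log stream, which is what lets you query logs in Grafana with the same label selectors you use for metrics.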
Grafana Mimir is an open source software project that provides scalable long-term storage for Prometheus. Some of the core strengths of Grafana Mimir include:

- Easy to install and maintain: Grafana Mimir's extensive documentation, tutorials, and deployment tooling make it quick to get started. Using its monolithic mode, you can get Grafana Mimir up and running with just one binary and no additional dependencies. Once deployed, the best-practice dashboards, alerts, and playbooks packaged with Grafana Mimir make it easy to monitor the health of the system.
- Massive scalability: You can run Grafana Mimir's horizontally-scalable architecture across multiple machines, resulting in the ability to process orders of magnitude more time series than a single Prometheus instance. Internal testing shows that Grafana Mimir handles up to 1 billion active time series.
- Global view of metrics: Grafana Mimir enables you to run queries that aggregate series from multiple Prometheus instances, giving you a global view of your systems. Its query engine extensively parallelizes query execution, so that even the highest-cardinality queries complete with blazing speed.
- Cheap, durable metric storage: Grafana Mimir uses object storage for long-term data storage, allowing it to take advantage of this ubiquitous, cost-effective, high-durability technology. It is compatible with multiple object store implementations, including AWS S3, Google Cloud Storage, Azure Blob Storage, and OpenStack Swift, as well as any S3-compatible object storage.
- High availability: Grafana Mimir replicates incoming metrics, ensuring that no data is lost in the event of machine failure. Its horizontally scalable architecture also means that it can be restarted, upgraded, or downgraded with zero downtime, which means no interruptions to metrics ingestion or querying.
- Natively multi-tenant: Grafana Mimir's multi-tenant architecture enables you to isolate data and queries from independent teams or business units, making it possible for these groups to share the same cluster. Advanced limits and quality-of-service controls ensure that capacity is shared fairly among tenants.
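As a rough sketch of how an existing Prometheus server might ship data into Mimir, the remote_write block below assumes a hypothetical Mimir endpoint and tenant ID; both are placeholders to adjust for your deployment:

  # In prometheus.yml (hostname and tenant ID are placeholders).
  remote_write:
    - url: http://mimir.example.com/api/v1/push
      headers:
        X-Scope-OrgID: team-a

The tenant ID header is how Mimir's multi-tenancy isolates data and queries per team; with multi-tenancy disabled, the header can be omitted.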