This library allows exporting metrics data to Prometheus. Installation: pip install opentelemetry-exporter-prometheus. References: OpenTelemetry Prometheus Exporter, Prometheus, OpenTelemetry Project.
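For orientation, a minimal sketch of wiring this exporter into a Python application, assuming a recent opentelemetry-sdk alongside prometheus_client; class and parameter names may differ between releases:

    from prometheus_client import start_http_server
    from opentelemetry import metrics
    from opentelemetry.exporter.prometheus import PrometheusMetricReader
    from opentelemetry.sdk.metrics import MeterProvider

    # Expose a Prometheus scrape endpoint (port 8000 is an arbitrary example).
    start_http_server(port=8000)

    # The reader converts OpenTelemetry metrics to Prometheus format on each scrape.
    reader = PrometheusMetricReader()
    metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

    meter = metrics.get_meter("example-meter")
    requests_counter = meter.create_counter("requests", description="Number of requests handled")
    requests_counter.add(1, {"endpoint": "/"})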
Prometheus Exporter for Alibaba Cloud
Prometheus Exporter for Redis Metrics. Supports Valkey and Redis 2.x, 3.x, 4.x, 5.x, and 6.x.
Prometheus exporter for PostgreSQL server metrics.

Flags:
help: Show context-sensitive help (also try --help-long and --help-man).
web.listen-address: Address to listen on for web interface and telemetry. Default is :9187.
web.telemetry-path: Path under which to expose metrics. Default is /metrics.
disable-default-metrics: Use only metrics supplied from queries.yaml via --extend.query-path.
disable-settings-metrics: Use this flag if you don't want to scrape pg_settings.
auto-discover-databases: Whether to discover the databases on a server dynamically.
extend.query-path: Path to a YAML file containing custom queries to run. Check out queries.yaml for examples of the format.
dumpmaps: Do not run; print the internal representation of the metric maps. Useful when debugging a custom queries file.
constantLabels: Labels to set in all metrics. A list of label=value pairs, separated by commas.
version: Show application version.
exclude-databases: A list of databases to remove when autoDiscoverDatabases is enabled.
log.level: Set logging level: one of debug, info, warn, error, fatal.
log.format: Set the log output target and format, e.g. logger:syslog?appname=bob&local=7 or logger:stdout?json=true. Defaults to logger:stderr.

Environment Variables: The following environment variables configure the exporter.
DATA_SOURCE_NAME: The default legacy format. Accepts URI form and key=value form arguments. The URI may contain the username and password to connect with.
DATA_SOURCE_URI: An alternative to DATA_SOURCE_NAME which exclusively accepts the hostname without a username and password component. For example, my_pg_hostname or my_pg_hostname?sslmode=disable.
DATA_SOURCE_URI_FILE: The same as above, but reads the URI from a file.
DATA_SOURCE_USER: When using DATA_SOURCE_URI, this environment variable is used to specify the username.
DATA_SOURCE_USER_FILE: The same, but reads the username from a file.
DATA_SOURCE_PASS: When using DATA_SOURCE_URI, this environment variable is used to specify the password to connect with.
DATA_SOURCE_PASS_FILE: The same as above, but reads the password from a file.
PG_EXPORTER_WEB_LISTEN_ADDRESS: Address to listen on for web interface and telemetry. Default is :9187.
PG_EXPORTER_WEB_TELEMETRY_PATH: Path under which to expose metrics. Default is /metrics.
PG_EXPORTER_DISABLE_DEFAULT_METRICS: Use only metrics supplied from queries.yaml. Value can be true or false. Default is false.
PG_EXPORTER_DISABLE_SETTINGS_METRICS: Use this if you don't want to scrape pg_settings. Value can be true or false. Default is false.
PG_EXPORTER_AUTO_DISCOVER_DATABASES: Whether to discover the databases on a server dynamically. Value can be true or false. Default is false.
PG_EXPORTER_EXTEND_QUERY_PATH: Path to a YAML file containing custom queries to run. Check out queries.yaml for examples of the format.
PG_EXPORTER_CONSTANT_LABELS: Labels to set in all metrics. A list of label=value pairs, separated by commas.
PG_EXPORTER_EXCLUDE_DATABASES: A comma-separated list of databases to remove when autoDiscoverDatabases is enabled. Default is empty string.

Settings set by environment variables starting with PG_ will be overwritten by the corresponding CLI flag if given.
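As a quick sanity check of the defaults above, a small sketch that fetches and parses the exporter's endpoint with the Python prometheus_client library, assuming the exporter is running locally on the default :9187 address and /metrics path:

    from urllib.request import urlopen
    from prometheus_client.parser import text_string_to_metric_families

    # Default listen address and telemetry path of the PostgreSQL exporter.
    METRICS_URL = "http://localhost:9187/metrics"

    # Fetch the plain-text exposition format and parse it into metric families.
    body = urlopen(METRICS_URL).read().decode("utf-8")
    for family in text_string_to_metric_families(body):
        # Print each family name with its number of samples.
        print(f"{family.name}: {len(family.samples)} samples")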
Features:
MongoDB Server Status metrics (cursors, operations, indexes, storage, etc.)
MongoDB Replica Set metrics (members, ping, replication lag, etc.)
MongoDB Replication Oplog metrics (size, length in time, etc.)
MongoDB Sharding metrics (shards, chunks, db/collections, balancer operations)
MongoDB RocksDB storage-engine metrics (levels, compactions, cache usage, I/O rates, etc.)
MongoDB WiredTiger storage-engine metrics (cache, block manager, tickets, etc.)
MongoDB Top metrics per collection (writeLock, readLock, query, etc.)
netbox-plugin-prometheus-sd provides a Prometheus http_sd-compatible API endpoint with data from Netbox. HTTP SD, available since Prometheus 2.28.0, allows targets to be discovered via a URL instead of just files. This plugin implements API endpoints in Netbox to make devices, services, IPs and virtual machines available to Prometheus. Compatibility: We aim to support the latest major versions of Netbox. For now we support Netbox >= 4.0, including bugfix versions. All relevant target versions are tested in CI; check the .github/workflows/ci.yml pipeline for the currently tested builds. Older versions may work, but without any guarantee. Installation: The plugin is available as a Python package on PyPI and can be installed with pip: pip install netbox-plugin-prometheus-sd. Enable the plugin in /opt/netbox/netbox/netbox/configuration.py: PLUGINS = ['netbox_prometheus_sd']. The plugin has no further configuration. A sketch of querying its service-discovery output follows below.
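To illustrate the HTTP SD contract the plugin serves, a sketch that fetches one of its endpoints and prints the discovered targets; the endpoint path and token header here are assumptions modelled on typical Netbox plugin URLs, so verify them against your installation:

    import json
    from urllib.request import Request, urlopen

    # Assumed base URL, endpoint path, and API token; adjust for your Netbox instance.
    NETBOX_URL = "https://netbox.example.com"
    ENDPOINT = "/api/plugins/prometheus-sd/devices/"
    TOKEN = "changeme"

    req = Request(NETBOX_URL + ENDPOINT, headers={"Authorization": f"Token {TOKEN}"})
    targets = json.load(urlopen(req))

    # Prometheus http_sd expects a list of {"targets": [...], "labels": {...}} objects.
    for entry in targets:
        print(entry["targets"], entry.get("labels", {}))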
See https://github.com/prometheus/snmp_exporter/blob/master/README.md for documentation.
The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus. The Pushgateway is explicitly not an aggregator, but rather a metrics cache. It does not have statsd-like semantics. The metrics pushed are exactly the same as you would present for scraping in a permanently running program. For machine-level metrics, the textfile collector of the Node exporter is usually more appropriate. The Pushgateway is best used for service-level metrics.
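As an illustration of the push model described above, a minimal sketch using the Python prometheus_client library, assuming a Pushgateway reachable at localhost:9091 (the job and metric names are placeholders):

    from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

    # Use a dedicated registry so only this job's metrics are pushed.
    registry = CollectorRegistry()
    last_success = Gauge(
        "batch_job_last_success_unixtime",
        "Unix timestamp of the last successful batch run",
        registry=registry,
    )
    last_success.set_to_current_time()

    # Push (replace) this job's metric group on the Pushgateway.
    push_to_gateway("localhost:9091", job="example_batch_job", registry=registry)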