
Modified items

All recently modified items, latest first.
RPMPackage python-rpy2-2.8.3-1.lbn19.x86_64
Python interface to the R language (embedded R)
RPMPackage prometheus-snmp-exporter-0.0.5-1.lbn19.noarch
See https://github.com/prometheus/snmp_exporter/blob/master/README.md for documentation.
RPMPackage prometheus-pushgateway-0.3.0-1.gitc3b9ef4.lbn19.x86_64
The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus. The Pushgateway is explicitly not an aggregator, but rather a metrics cache: it does not have statsd-like semantics, and the metrics pushed are exactly the same as you would present for scraping in a permanently running program. For machine-level metrics, the textfile collector of the Node exporter is usually more appropriate. The Pushgateway is best used for service-level metrics.
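As a sketch of the push flow described above, the snippet below renders a single gauge in the Prometheus text exposition format and shows how it would be PUT to the Pushgateway's job-scoped push endpoint (`/metrics/job/<job>`). The gateway address, job name, and metric name are hypothetical.

```python
from urllib import request


def metric_payload(name, value, help_text):
    """Render one gauge in the Prometheus text exposition format."""
    return (f"# HELP {name} {help_text}\n"
            f"# TYPE {name} gauge\n"
            f"{name} {value}\n")


def push(gateway, job, payload):
    """PUT the payload to the Pushgateway's job-scoped push endpoint."""
    req = request.Request(f"{gateway}/metrics/job/{job}",
                          data=payload.encode(), method="PUT")
    return request.urlopen(req)


payload = metric_payload("batch_rows_processed", 4096,
                         "Rows processed by the last batch run.")
# Requires a running Pushgateway, e.g.:
# push("http://localhost:9091", "nightly_batch", payload)
```

On the next scrape of the Pushgateway, Prometheus picks up the cached metric exactly as if the batch job itself were still running and exposing it.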
RPMPackage prometheus-promu-0.0.0-1.gitd007363.lbn19.x86_64
promu is the utility tool for Prometheus projects.

Usage:
  promu [flags]
  promu [command]

Available Commands:
  build       Build a Go project
  crossbuild  Crossbuild a Go project using Golang builder Docker images
  info        Print info about current project and exit
  release     Upload tarballs to the Github release
  tarball     Create a tarball from the built Go project
  version     Print the version and exit

Flags:
      --config string   Config file (default is ./.promu.yml)
  -v, --verbose         Verbose output
      --viper           Use Viper for configuration (default true)

Use "promu [command] --help" for more information about a command.
RPMPackage prometheus-cli-0.3.0-1.gita5c0897.lbn19.x86_64
Usage:
  prometheus-cli [flags] query <expression>
  prometheus-cli [flags] query_range <expression> <end_timestamp> <range_seconds> [<step_seconds>]
  prometheus-cli [flags] metrics

Flags:
  -csv=true: Whether to format output as CSV
  -csvDelimiter=";": Single-character delimiter to use in CSV output
  -server="": URL of the Prometheus server to query
  -timeout=1m0s: Timeout to use when querying the Prometheus server
RPMPackage prometheus-alertmanager-0.3.0-1.git97eca6d.lbn19.x86_64
The Alertmanager handles alerts sent by client applications such as the Prometheus server. It takes care of deduplicating, grouping, and routing them to the correct receiver integration such as email, PagerDuty, or OpsGenie. It also takes care of silencing and inhibition of alerts.
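To illustrate the routing described above, here is a minimal sketch of an alertmanager.yml that groups alerts by name and routes them to a single email receiver; all addresses and the receiver name are hypothetical.

```yaml
# Minimal sketch; hosts and addresses are made up.
global:
  smtp_smarthost: 'mail.example.com:587'
  smtp_from: 'alertmanager@example.com'

route:
  group_by: ['alertname']   # deduplicate and group alerts by name
  receiver: team-email      # default receiver for all routes

receivers:
  - name: team-email
    email_configs:
      - to: 'ops@example.com'
```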
RPMPackage prometheus-1.2.1-1.lbn19.x86_64
Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
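The "configured targets at given intervals" part of that description lives in prometheus.yml; a minimal sketch (the target address is hypothetical) looks like:

```yaml
# Minimal sketch of a prometheus.yml scrape configuration.
global:
  scrape_interval: 15s        # how often to scrape targets
  evaluation_interval: 15s    # how often to evaluate rule expressions

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']   # e.g. a Node exporter
```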
RPMPackage packetbeat-5.0.0-3.rc1.b85820f.lbn19.x86_64
Packetbeat is an open source network packet analyzer that ships the data to Elasticsearch. Think of it like a distributed real-time Wireshark with a lot more analytics features. The Packetbeat shippers sniff the traffic between your application processes, parse protocols like HTTP, MySQL, PostgreSQL, Redis, or Thrift on the fly, and correlate the messages into transactions. For each transaction, the shipper inserts a JSON document into Elasticsearch, where it is stored and indexed. You can then use Kibana to view key metrics and do ad-hoc queries against the data. To learn more about Packetbeat, check out https://www.elastic.co/products/beats/packetbeat.
RPMPackage opentsdb-2.2.0-1.lbn19.noarch
OpenTSDB is a distributed, scalable Time Series Database (TSDB) written on top of HBase. OpenTSDB was written to address a common need: store, index and serve metrics collected from computer systems (network gear, operating systems, applications) at a large scale, and make this data easily accessible and graphable. Thanks to HBase's scalability, OpenTSDB allows you to collect many thousands of metrics from thousands of hosts and applications, at a high rate (every few seconds). OpenTSDB will never delete or downsample data and can easily store billions of data points.
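To make the data model concrete: each OpenTSDB data point pairs a metric name, a timestamp, a value, and one or more tags. Over the telnet-style interface a point is written with the put command; the metric, timestamp, and tag below are made-up examples.

```
put sys.cpu.user 1356998400 42.5 host=web01
```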
RPMPackage nodejs-elasticsearch-browser-12.0.0-1.lbn19.noarch
The official low-level Elasticsearch client, for use in the browser.
RPMPackage nodejs-elasticsearch-12.0.0-1.lbn19.noarch
The official low-level Elasticsearch client for Node.js and the browser.
RPMPackage nodejs-elasticdump-2.4.0-1.lbn19.noarch
Import and export tools for Elasticsearch
RPMPackage nodejs-elastic-eslint-config-kibana-0.1.0-1.lbn19.noarch
The ESLint config used by the Kibana team
RPMPackage nodejs-elastic-datemath-2.3.0-1.lbn19.noarch
Elasticsearch datemath parser, used in Kibana
RPMPackage nodejs-bigfunger-jsondiffpatch-0.1.38_webpack-1.lbn19.noarch
Diff & Patch for JavaScript objects
RPMPackage nodejs-bigfunger-decompress-zip-0.2.0_stripfix2-1.lbn19.noarch
Extract files from a ZIP archive
RPMPackage nodejs-autoprefixer-loader-3.2.0-1.lbn19.noarch
[deprecated] Autoprefixer loader for webpack
RPMPackage nodejs-autoprefixer-6.3.4-1.lbn19.noarch
Parse CSS and add vendor prefixes to CSS rules using values from the Can I Use website
RPMPackage morgoth-0.2.0-0.0.1.git368693b.lbn19.x86_64
Morgoth is a framework for flexible anomaly detection algorithms, packaged to be used with Kapacitor. Morgoth provides a framework for implementing the smaller pieces of an anomaly detection problem. The basic approach is that Morgoth maintains a dictionary of normal behaviors and compares new windows of data to that dictionary. If a new window of data is not found in the dictionary, it is considered anomalous. Morgoth uses algorithms, called fingerprinters, to compare windows of data and determine whether they are similar. The Lossy Counting Algorithm (LCA) is used to maintain the dictionary of normal windows; LCA is a space-efficient algorithm that can account for drift in the normal dictionary (more on LCA below). Morgoth uses a consensus model in which each fingerprinter votes on whether it thinks the current window is anomalous. If the percentage of votes is greater than a consensus threshold, the window is considered anomalous.

Fingerprinters

A fingerprinter is a method that can determine whether a window of data is similar to a previously seen window. In effect, the fingerprinters take fingerprints of the incoming data and compare fingerprints of new data to see if they match. These fingerprinting algorithms provide the core of Morgoth, as they are the means by which Morgoth determines whether a new window of data is new or something already observed. An example fingerprinting algorithm is a sigma algorithm that computes the mean and standard deviation of a window and stores them as the fingerprint for that window. When a new window arrives, it compares the fingerprint (mean, stddev) of the new window to the previous one; if the two windows are too far apart, they are not considered a match. By defining several fingerprinting algorithms, Morgoth can decide whether new data is anomalous or normal.

Lossy Counting Algorithm

The LCA counts frequent items in a stream of data. It is lossy because, to conserve space, it drops less frequent items. As a result, the algorithm finds frequent items but may lose track of less frequent ones. More on the specific mathematical properties of the algorithm can be found below.

There are two parameters to the algorithm: error tolerance (e) and minimum support (m). First, e is in the range [0, 1] and is an error bound, interpreted as a percentage. For example, given e = 0.01 (1%), items less than 1% frequent in the data set can be dropped. Decreasing e requires more space but keeps track of less frequent items; increasing e requires less space but loses track of less frequent items. Second, m is in the range [0, 1] and is a minimum support such that items considered frequent have at least m% frequency. For example, if m = 0.05 (5%), then an item with support below 5% is not considered frequent, i.e. normal. The minimum support becomes the threshold at which items are considered anomalous.

Note that m > e; this reduces the number of false positives. For example, say we set e = 5% and m = 5%. If a normal behavior X has a true frequency of 6%, then, based on variations in the true frequency, X might fall below 5% for a small interval and be dropped. This would cause X's frequency to be underestimated, which would cause it to be flagged as an anomaly, triggering a false positive. By setting e < m we have a buffer that helps mitigate false positives.

Properties

The Lossy Counting Algorithm has three properties: there are no false negatives; false positives are guaranteed to have a frequency of at least (m - e)*N; and the frequency of an item can be underestimated by at most e*N, where N is the number of items encountered. The space requirements of the algorithm are at most (1 / e) * log(e*N). It has also been shown that if the items with low frequency are uniformly random, then the space requirement is no more than 7 / e. This means that as Morgoth continues to process windows of data, its memory usage grows with the log of the number of windows and can reach a stable upper bound.
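The lossy counting bookkeeping described above can be sketched as follows. This is a minimal illustration, not Morgoth's implementation: the class and variable names are my own, items stand in for window fingerprints, and the pruning rule is the standard one (drop an entry when its count plus its maximum error no longer exceeds the current bucket number).

```python
from math import ceil


class LossyCounter:
    """Approximate frequency counts over a stream (Lossy Counting).

    With error tolerance e, any count is underestimated by at most e*N,
    where N is the number of items seen so far.
    """

    def __init__(self, e):
        self.e = e
        self.width = ceil(1 / e)   # bucket width: prune every 1/e items
        self.counts = {}           # item -> (count, max_error)
        self.n = 0                 # total items seen
        self.bucket = 1            # current bucket id

    def add(self, item):
        self.n += 1
        if item in self.counts:
            count, err = self.counts[item]
            self.counts[item] = (count + 1, err)
        else:
            # A new entry may have been dropped before: record the
            # maximum possible undercount as bucket - 1.
            self.counts[item] = (1, self.bucket - 1)
        if self.n % self.width == 0:
            # End of bucket: drop entries that cannot be frequent.
            for it, (count, err) in list(self.counts.items()):
                if count + err <= self.bucket:
                    del self.counts[it]
            self.bucket += 1

    def frequent(self, m):
        """Items with estimated frequency at least m (choose m > e)."""
        return {it for it, (count, _) in self.counts.items()
                if count >= (m - self.e) * self.n}


# Stream of 1000 items: 'a' 60% of the time, 'b' 30%, unique noise 10%.
lc = LossyCounter(e=0.01)
for i in range(1000):
    lc.add('a' if i % 10 < 6 else ('b' if i % 10 < 9 else f'x{i}'))
```

With m = 0.5, only 'a' clears the (m - e)*N threshold; 'b' and the one-off noise items are dropped or fall below it. This mirrors how Morgoth keeps only the frequently recurring window fingerprints as its dictionary of normal behavior.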
RPMPackage metricbeat-5.0.0-3.rc1.b85820f.lbn19.x86_64
Metricbeat fetches a set of metrics on a predefined interval from the operating system and services such as Apache web server, Redis, and more.