Every module should be compatible with python2 and python3.
All third-party libraries should be installed system-wide or in the python_modules directory.
Module configurations are written in YAML and pyYAML is required.
Every configuration file must have one of two formats:
- Configuration for only one job:

  update_every : 2 # update frequency
  retries : 1 # how many failures in update() are tolerated
  priority : 20000 # where it is shown on dashboard
  other_var1 : bla # variables passed to module
  other_var2 : alb

- Configuration for many jobs (ex. mysql):

  # module defaults:
  update_every : 2
  retries : 1
  priority : 20000

  local: # job name
    update_every : 5 # job update frequency
    other_var1 : some_val # module specific variable

  other_job:
    priority : 5 # job position on dashboard
    retries : 20 # job retries
    other_var2 : val # module specific variable
update_every, retries, and priority are always optional.
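To illustrate how the two levels interact (a minimal sketch; the job names and the some_var key are placeholders, not real module options), job-level values are expected to override the module-level defaults, so job_one below would be collected every 5 seconds while job_two keeps the default of 2:

update_every : 2
retries : 1

job_one:
  update_every : 5
  some_var : 'value for job_one'

job_two:
  some_var : 'value for job_two'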
The following python.d modules are supported:
This module will monitor one or more apache servers depending on configuration.
Requirements:
- apache with mod_status enabled
It produces the following charts:
- Requests in requests/s
- requests
- Connections
- connections
- Async Connections
- keepalive
- closing
- writing
- Bandwidth in kilobytes/s
- sent
- Workers
- idle
- busy
- Lifetime Avg. Requests/s in requests/s
- requests_sec
- Lifetime Avg. Bandwidth/s in kilobytes/s
- size_sec
- Lifetime Avg. Response Size in bytes/request
- size_req
Needs only the url to the server's server-status?auto page.
Here is an example for 2 servers:

update_every : 10
priority : 90100

local:
  url : 'http://localhost/server-status?auto'
  retries : 20

remote:
  url : 'http://www.apache.org/server-status?auto'
  update_every : 5
  retries : 4

Without configuration, the module attempts to connect to http://localhost/server-status?auto
This module monitors the apache mod_cache log and produces only one chart:
- cached responses in percent cached
- hit
- miss
- other
Sample:
update_every : 10
priority : 120000
retries : 5
log_path : '/var/log/apache2/cache.log'
If no configuration is given, the module will attempt to read the log file at /var/log/apache2/cache.log
This module parses the bind dump file to collect real-time performance metrics.
Requirements:
- Version of bind must be 9.6+
- Netdata must have permissions to run rndc status
It produces:
- Name server statistics
- requests
- responses
- success
- auth_answer
- nonauth_answer
- nxrrset
- failure
- nxdomain
- recursion
- duplicate
- rejections
- Incoming queries
- RESERVED0
- A
- NS
- CNAME
- SOA
- PTR
- MX
- TXT
- X25
- AAAA
- SRV
- NAPTR
- A6
- DS
- RRSIG
- DNSKEY
- SPF
- ANY
- DLV
- Outgoing queries
- Same as Incoming queries
Sample:
local:
  named_stats_path : '/var/log/bind/named.stats'
If no configuration is given, the module will attempt to read the named.stats file at /var/log/bind/named.stats
This module monitors the precision and statistics of a local chronyd server.
It produces:
- frequency
- last offset
- RMS offset
- residual freq
- root delay
- root dispersion
- skew
- system time
Requirements:
Verify that the user netdata can execute chronyc tracking. If necessary, update /etc/chrony.conf (cmdallow).
Sample:
# data collection frequency:
update_every: 1
# chrony query command:
local:
  command: 'chronyc -n tracking'
This module monitors vital statistics of a local Apache CouchDB 2.x server, including:
- Overall server reads/writes
- HTTP traffic breakdown
- Request methods (GET, PUT, POST, etc.)
- Response status codes (200, 201, 4xx, etc.)
- Active server tasks
- Replication status (CouchDB 2.1 and up only)
- Erlang VM stats
- Optional per-database statistics: sizes, # of docs, # of deleted docs
Sample for a local server running on port 5984:
local:
  user: 'admin'
  pass: 'password'
  node: 'couchdb@127.0.0.1'
Be sure to specify a correct admin-level username and password.
You may also need to change the node name; this should match the value of -name NODENAME in your CouchDB's etc/vm.args file. Typically this is of the form couchdb@fully.qualified.domain.name in a cluster, or couchdb@127.0.0.1 / couchdb@localhost for a single-node server.
If you want per-database statistics, these need to be added to the configuration, separated by spaces:
local:
  ...
  databases: 'db1 db2 db3 ...'
This module shows the current CPU frequency as set by the cpufreq kernel module.
Requirement:
You need to have CONFIG_CPU_FREQ and (optionally) CONFIG_CPU_FREQ_STAT enabled in your kernel.
This module tries to read from one of two possible locations. On
initialization, it tries to read the time_in_state
files provided by
cpufreq_stats. If this file does not exist, or doesn't contain valid data, it
falls back to using the more inaccurate scaling_cur_freq
file (which only
represents the current CPU frequency, and doesn't account for any state
changes which happen between updates).
It produces one chart with multiple lines (one line per core).
Sample:
sys_dir: "/sys/devices"
If no configuration is given, the module will search for cpufreq files in the /sys/devices directory.
The directory is also prefixed with NETDATA_HOST_PREFIX if specified.
This module monitors the usage of CPU idle states.
Requirement:
Your kernel needs to have CONFIG_CPU_IDLE enabled.
It produces one stacked chart per CPU, showing the percentage of time spent in each state.
This module provides DNS query time statistics.
Requirement:
python-dnspython package
It produces one aggregate chart or one chart per DNS server, showing the query time.
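No sample is provided above, so here is only a rough, hypothetical sketch of a job; the option names (dns_servers, domains, aggregate) are assumptions and should be checked against the bundled dns_query_time.conf:

# hypothetical sketch; verify the option names against dns_query_time.conf
update_every : 10

my_resolvers:
  dns_servers : '8.8.8.8 8.8.4.4' # assumed key: space-separated resolvers to query
  domains : 'example.com example.org' # assumed key: domains used for the test queries
  aggregate : yes # assumed key: one aggregate chart instead of per-server charts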
This module provides statistics information from a dovecot server.
Statistics are taken from the dovecot socket by executing the EXPORT global command.
More information about dovecot stats can be found on the project wiki page.
Requirement: Dovecot unix socket with R/W permissions for the user netdata, or dovecot with a configured TCP/IP socket.
The module gives information with the following charts:
- sessions
- active sessions
- logins
- logins
- commands - number of IMAP commands
- commands
- Faults
- minor
- major
- Context Switches
- voluntary
- involuntary
- disk in bytes/s
- read
- write
- bytes in bytes/s
- read
- write
- number of syscalls in syscalls/s
- read
- write
- lookups - number of lookups per second
- path
- attr
- hits - number of cache hits
- hits
- attempts - authorization attempts
- success
- failure
- cache - cached authorization hits
- hit
- miss
Sample:
localtcpip:
  name : 'local'
  host : '127.0.0.1'
  port : 24242

localsocket:
  name : 'local'
  socket : '/var/run/dovecot/stats'
If no configuration is given, the module will attempt to connect to dovecot using the unix socket located at /var/run/dovecot/stats.
This module monitors elasticsearch performance and health metrics.
It produces:
- Search performance charts:
- Number of queries, fetches
- Time spent on queries, fetches
- Query and fetch latency
- Indexing performance charts:
- Number of documents indexed, index refreshes, flushes
- Time spent on indexing, refreshing, flushing
- Indexing and flushing latency
- Memory usage and garbage collection charts:
- JVM heap currently in use, committed
- Count of garbage collections
- Time spent on garbage collections
- Host metrics charts:
- Available file descriptors in percent
- Opened HTTP connections
- Cluster communication transport metrics
- Queues and rejections charts:
- Number of queued/rejected threads in thread pool
- Fielddata cache charts:
- Fielddata cache size
- Fielddata evictions and circuit breaker tripped count
- Cluster health API charts:
- Cluster status
- Nodes and tasks statistics
- Shards statistics
- Cluster stats API charts:
- Nodes statistics
- Query cache statistics
- Docs statistics
- Store statistics
- Indices and shards statistics
Sample:
local:
  host : 'ipaddress' # Server ip address or hostname
  port : 'port' # Port on which elasticsearch listens
  cluster_health : True/False # Calls to the cluster health elasticsearch API. Enabled by default.
  cluster_stats : True/False # Calls to the cluster stats elasticsearch API. Enabled by default.

If no configuration is given, the module will fail to run.
A simple module executing exim -bpc to grab the exim queue.
This command can take a lot of time to finish its execution, thus it is not recommended to run it every second.
It produces only one chart:
- Exim Queue Emails
- emails
Configuration is not needed.
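That said, because exim -bpc can be slow, one optional tweak (a sketch using only the global options documented at the top of this page) is to poll the queue less frequently than the default:

# optional override: check the exim queue every 30 seconds
update_every : 30
retries : 3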
This module monitors the fail2ban log file to show all bans for all active jails.
Requirements:
- fail2ban.log file MUST BE readable by netdata (A good idea is to add create 0640 root netdata to fail2ban conf at logrotate.d)
It produces one chart with multiple lines (one line per jail)
Sample:
local:
  log_path: '/var/log/fail2ban.log'
  conf_path: '/etc/fail2ban/jail.local'
  exclude: 'dropbear apache'
If no configuration is given, the module will attempt to read the log file at /var/log/fail2ban.log and the conf file at /etc/fail2ban/jail.local.
If the conf file is not found, the default jail is ssh.
Uses the radclient command to provide freeradius statistics. It is not recommended to run it every second.
It produces:
- Authentication counters:
- access-accepts
- access-rejects
- auth-dropped-requests
- auth-duplicate-requests
- auth-invalid-requests
- auth-malformed-requests
- auth-unknown-types
- Accounting counters: [optional]
- accounting-requests
- accounting-responses
- acct-dropped-requests
- acct-duplicate-requests
- acct-invalid-requests
- acct-malformed-requests
- acct-unknown-types
- Proxy authentication counters: [optional]
- proxy-access-accepts
- proxy-access-rejects
- proxy-auth-dropped-requests
- proxy-auth-duplicate-requests
- proxy-auth-invalid-requests
- proxy-auth-malformed-requests
- proxy-auth-unknown-types
- Proxy accounting counters: [optional]
- proxy-accounting-requests
- proxy-accounting-responses
- proxy-acct-dropped-requests
- proxy-acct-duplicate-requests
- proxy-acct-invalid-requests
- proxy-acct-malformed-requests
- proxy-acct-unknown-types
Sample:
local:
  host : 'localhost'
  port : '18121'
  secret : 'adminsecret'
  acct : False # Freeradius accounting statistics.
  proxy_auth : False # Freeradius proxy authentication statistics.
  proxy_acct : False # Freeradius proxy accounting statistics.
Freeradius server configuration:
The configuration for the status server is automatically created in the sites-available directory. By default, the server is enabled and can be queried from every client. FreeRADIUS will only respond to status-server messages if the status-server virtual server has been enabled.
To do this, create a link from the sites-enabled directory to the status file in the sites-available directory:
- cd sites-enabled
- ln -s ../sites-available/status status
and restart/reload your FreeRADIUS server.
The go_expvar module can monitor any Go application that exposes its metrics with the use of the expvar package from the Go standard library.
go_expvar produces charts for Go runtime memory statistics and optionally any number of custom charts. Please see the wiki page for more info.
For the memory statistics, it produces the following charts:
- Heap allocations in kB
- alloc: size of objects allocated on the heap
- inuse: size of allocated heap spans
- Stack allocations in kB
- inuse: size of allocated stack spans
- MSpan allocations in kB
- inuse: size of allocated mspan structures
- MCache allocations in kB
- inuse: size of allocated mcache structures
- Virtual memory in kB
- sys: size of reserved virtual address space
- Live objects
- live: number of live objects in memory
- GC pauses average in ns
- avg: average duration of all GC stop-the-world pauses
Please see the wiki page for detailed info about module configuration.
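As a rough, hypothetical sketch only (the url and collect_memstats option names are assumptions; the wiki page and the bundled go_expvar.conf document the real options), a job pointing at an application that exposes the standard expvar endpoint /debug/vars might look like:

# hypothetical sketch; see the wiki page for the actual configuration options
app1:
  name : 'my_go_app'
  url : 'http://localhost:8080/debug/vars' # expvar endpoint exposed by the Go application
  collect_memstats : true # assumed option: collect the runtime memory charts listed above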
This module monitors frontend and backend metrics such as bytes in, bytes out, sessions current, and sessions in queue current, as well as health metrics such as backend server status (server check should be used).
The plugin can obtain data from a url OR a unix socket.
Requirement: The socket MUST be readable AND writable by the netdata user.
It produces:
- Frontend family charts
- Kilobytes in/s
- Kilobytes out/s
- Sessions current
- Sessions in queue current
- Backend family charts
- Kilobytes in/s
- Kilobytes out/s
- Sessions current
- Sessions in queue current
- Health chart
- number of failed servers for every backend (in DOWN state)
Sample:
via_url:
  user : 'username' # ONLY IF stats auth is used
  pass : 'password' # ONLY IF stats auth is used
  url : 'http://ip.address:port/url;csv;norefresh'

OR

via_socket:
  socket : 'path/to/haproxy/sock'

If no configuration is given, the module will fail to run.
This module monitors disk temperatures from one or more hddtemp daemons.
Requirement:
Running hddtemp in daemonized mode with access on a tcp port
It produces one chart, Temperature, with a dynamic number of dimensions (one per disk).
Sample:
update_every: 3
host: "127.0.0.1"
port: 7634
If no configuration is given, the module will attempt to connect to the hddtemp daemon at the 127.0.0.1:7634 address.
This module monitors basic IPFS information.
- Bandwidth in kbits/s
- in
- out
- Peers
- peers
Only the url to the IPFS server is needed.
Sample:

localhost:
  name : 'local'
  url : 'http://localhost:5001'
This module monitors the leases database to show all active leases for the given pools.
Requirements:
- dhcpd leases file MUST BE readable by netdata
- pools MUST BE in CIDR format
It produces:
- Pools utilization - aggregate chart for all pools
- utilization in percent
- Total leases
- leases (overall number of leases for all pools)
- Active leases for every pool
- leases (number of active leases in the pool)
Sample:
local:
  leases_path : '/var/lib/dhcp/dhcpd.leases'
  pools : '192.168.3.0/24 192.168.4.0/24 192.168.5.0/24'
In case of python2 you need to install py2-ipaddress to make the plugin work.
The module will not work if no configuration is given.
This module monitors /proc/mdstat.
It produces:
- Health - number of failed disks in every array (aggregate chart)
- Disks stats
- total (number of devices the array ideally would have)
- inuse (number of devices currently in use)
- Current status
- resync in percent
- recovery in percent
- reshape in percent
- check in percent
- Operation status (if resync/recovery/reshape/check is active)
- finish in minutes
- speed in megabytes/s
No configuration is needed.
Memcached monitoring module. Data is grabbed from the stats interface.
- Network in kilobytes/s
- read
- written
- Connections per second
- current
- rejected
- total
- Items in cluster
- current
- total
- Evicted and Reclaimed items
- evicted
- reclaimed
- GET requests/s
- hits
- misses
- GET rate in requests/s
- rate
- SET rate in requests/s
- rate
- DELETE requests/s
- hits
- misses
- CAS requests/s
- hits
- misses
- bad value
- Increment requests/s
- hits
- misses
- Decrement requests/s
- hits
- misses
- Touch requests/s
- hits
- misses
- Touch rate in requests/s
- rate
Sample:

localtcpip:
  name : 'local'
  host : '127.0.0.1'
  port : 24242
If no configuration is given, the module will attempt to connect to a memcached instance at the 127.0.0.1:11211 address.
This module monitors mongodb performance and health metrics.
Requirements:
python-pymongo package. You need to install it manually.
The number of charts depends on the mongodb version, storage engine and other features (replication):
- Read requests:
- query
- getmore (operation the cursor executes to get additional data from query)
- Write requests:
- insert
- delete
- update
- Active clients:
- readers (number of clients with read operations in progress or queued)
- writers (number of clients with write operations in progress or queued)
- Journal transactions:
- commits (count of transactions that have been written to the journal)
- Data written to the journal:
- volume (volume of data)
- Background flush (MMAPv1):
- average ms (average time taken by flushes to execute)
- last ms (time taken by the last flush)
- Read tickets (WiredTiger):
- in use (number of read tickets in use)
- available (number of available read tickets remaining)
- Write tickets (WiredTiger):
- in use (number of write tickets in use)
- available (number of available write tickets remaining)
- Cursors:
- opened (number of cursors currently opened by MongoDB for clients)
- timedOut (number of cursors that have timed out)
- noTimeout (number of open cursors with timeout disabled)
- Connections:
- connected (number of clients currently connected to the database server)
- unused (number of unused connections available for new clients)
- Memory usage metrics:
- virtual
- resident (amount of memory used by the database process)
- mapped
- non mapped
- Page faults:
- page faults (number of times MongoDB had to request from disk)
- Cache metrics (WiredTiger):
- percentage of bytes currently in the cache (amount of space taken by cached data)
- percentage of tracked dirty bytes in the cache (amount of space taken by dirty data)
- Pages evicted from cache (WiredTiger):
- modified
- unmodified
- Queued requests:
- readers (number of read requests currently queued)
- writers (number of write requests currently queued)
- Errors:
- msg (number of message assertions raised)
- warning (number of warning assertions raised)
- regular (number of regular assertions raised)
- user (number of assertions corresponding to errors generated by users)
- Storage metrics (one chart for every database)
- dataSize (size of all documents + padding in the database)
- indexSize (size of all indexes in the database)
- storageSize (size of all extents in the database)
- Documents in the database (one chart for all databases)
- documents (number of objects in the database among all the collections)
- tcmalloc metrics
- central cache free
- current total thread cache
- pageheap free
- pageheap unmapped
- thread cache free
- transfer cache free
- heap size
- Commands total/failed rate
- count
- createIndex
- delete
- eval
- findAndModify
- insert
- Locks metrics (acquireCount metrics - number of times the lock was acquired in the specified mode)
- Global lock
- Database lock
- Collection lock
- Metadata lock
- oplog lock
- Replica set members state
- state
- Oplog window
- window (interval of time between the oldest and the latest entries in the oplog)
- Replication lag
- member (time when last entry from the oplog was applied for every member)
- Replication set member heartbeat latency
- member (time when last heartbeat was received from replica set member)
Sample:
local:
  name : 'local'
  host : '127.0.0.1'
  port : 27017
  user : 'netdata'
  pass : 'netdata'
If no configuration is given, the module will attempt to connect to the mongodb daemon at the 127.0.0.1:27017 address.
This module monitors one or more mysql servers.
Requirements:
It will produce the following charts (if data is available):
- Bandwidth in kbps
- in
- out
- Queries in queries/sec
- queries
- questions
- slow queries
- Operations in operations/sec
- opened tables
- flush
- commit
- delete
- prepare
- read first
- read key
- read next
- read prev
- read random
- read random next
- rollback
- save point
- update
- write
- Table Locks in locks/sec
- immediate
- waited
- Select Issues in issues/sec
- full join
- full range join
- range
- range check
- scan
- Sort Issues in issues/sec
- merge passes
- range
- scan
You can provide, per server, the following:
- username which has access to the database (defaults to 'root')
- password (defaults to none)
- mysql my.cnf configuration file
- mysql socket (optional)
- mysql host (ip or hostname)
- mysql port (defaults to 3306)
Here is an example for 3 servers:
update_every : 10
priority : 90100
retries : 5

local:
  'my.cnf' : '/etc/mysql/my.cnf'
  priority : 90000

local_2:
  user : 'root'
  pass : 'blablablabla'
  socket : '/var/run/mysqld/mysqld.sock'
  update_every : 1

remote:
  user : 'admin'
  pass : 'bla'
  host : 'example.org'
  port : 9000
  retries : 20
If no configuration is given, the module will attempt to connect to the mysql server via the unix socket at /var/run/mysqld/mysqld.sock without a password and with the username root.
This module will monitor one or more nginx servers depending on configuration. Servers can be either local or remote.
Requirements:
- nginx with configured 'ngx_http_stub_status_module'
- 'location /stub_status'
Example nginx configuration can be found in 'python.d/nginx.conf'
It produces the following charts:
- Active Connections
- active
- Requests in requests/s
- requests
- Active Connections by Status
- reading
- writing
- waiting
- Connections Rate in connections/s
- accepts
- handled
Needs only the url to the server's stub_status page.
Here is an example for a local server:

update_every : 10
priority : 90100

local:
  url : 'http://localhost/stub_status'
  retries : 10

Without configuration, the module attempts to connect to http://localhost/stub_status
This module uses the nsd-control stats_noreset command to provide nsd statistics.
Requirements:
- Version of nsd must be 4.0+
- Netdata must have permissions to run nsd-control stats_noreset
It produces:
- Queries
- queries
- Zones
- master
- slave
- Protocol
- udp
- udp6
- tcp
- tcp6
- Query Type
- A
- NS
- CNAME
- SOA
- PTR
- HINFO
- MX
- NAPTR
- TXT
- AAAA
- SRV
- ANY
- Transfer
- NOTIFY
- AXFR
- Return Code
- NOERROR
- FORMERR
- SERVFAIL
- NXDOMAIN
- NOTIMP
- REFUSED
- YXDOMAIN
Configuration is not needed.
This module monitors the openvpn-status log file.
Requirements:
- If you are running multiple OpenVPN instances out of the same directory, MAKE SURE TO EDIT DIRECTIVES which create output files so that multiple instances do not overwrite each other's output files.
- Make sure the NETDATA USER CAN READ openvpn-status.log
- The update_every interval MUST MATCH the interval on which OpenVPN writes operational status to the log file.
It produces:
- Users OpenVPN active users
- users
- Traffic OpenVPN overall bandwidth usage in kilobit/s
- in
- out
Sample:
default:
  log_path : '/var/log/openvpn-status.log'
This module will monitor one or more php-fpm instances depending on configuration.
Requirements:
- php-fpm with the status page enabled
- access to the status page via the web server
It produces the following charts:
- Active Connections
- active
- maxActive
- idle
- Requests in requests/s
- requests
- Performance
- reached
- slow
Needs only the url to the server's status page.
Here is an example for a local instance:

update_every : 3
priority : 90100

local:
  url : 'http://localhost/status'
  retries : 10

Without configuration, the module attempts to connect to http://localhost/status
A simple module executing postfix -p to grab the postfix queue.
It produces only two charts:
- Postfix Queue Emails
- emails
- Postfix Queue Emails Size in KB
- size
Configuration is not needed.
This module monitors one or more postgres servers.
Requirements:
python-psycopg2 package. You have to install it manually.
The following charts are drawn:
- Database size MB
- size
- Current Backend Processes processes
- active
- Write-Ahead Logging Statistics files/s
- total
- ready
- done
- Checkpoints writes/s
- scheduled
- requested
- Current connections to db count
- connections
- Tuples returned from db tuples/s
- sequential
- bitmap
- Tuple reads from db reads/s
- disk
- cache
- Transactions on db transactions/s
- committed
- rolled back
- Tuples written to db writes/s
- inserted
- updated
- deleted
- conflicts
- Locks on db count per type
- locks
socket:
  name : 'socket'
  user : 'postgres'
  database : 'postgres'

tcp:
  name : 'tcp'
  user : 'postgres'
  database : 'postgres'
  host : 'localhost'
  port : 5432

When no configuration file is found, the module tries to connect to a TCP/IP socket: localhost:5432.
This module monitors powerdns performance and health metrics.
The following charts are drawn:
- Queries and Answers
- udp-queries
- udp-answers
- tcp-queries
- tcp-answers
- Cache Usage
- query-cache-hit
- query-cache-miss
- packetcache-hit
- packetcache-miss
- Cache Size
- query-cache-size
- packetcache-size
- key-cache-size
- meta-cache-size
- Latency
- latency
local:
  name : 'local'
  url : 'http://127.0.0.1:8081/api/v1/servers/localhost/statistics'
  header:
    X-API-Key: 'change_me'

This module monitors rabbitmq performance and health metrics.
The following charts are drawn:
- Queued Messages
- ready
- unacknowledged
- Message Rates
- ack
- redelivered
- deliver
- publish
- Global Counts
- channels
- consumers
- connections
- queues
- exchanges
- File Descriptors
- used descriptors
- Socket Descriptors
- used descriptors
- Erlang processes
- used processes
- Memory
- free memory in megabytes
- Disk Space
- free disk space in gigabytes
socket:
  name : 'local'
  host : '127.0.0.1'
  port : 15672
  user : 'guest'
  pass : 'guest'

When no configuration file is found, the module tries to connect to localhost:15672.
Gets INFO data from a redis instance.
The following charts are drawn:
- Operations per second
- operations
- Hit rate in percent
- rate
- Memory utilization in kilobytes
- total
- lua
- Database keys
- lines are created dynamically based on how many databases there are
- Clients
- connected
- blocked
- Slaves
- connected
socket:
  name : 'local'
  socket : '/var/lib/redis/redis.sock'

localhost:
  name : 'local'
  host : 'localhost'
  port : 6379

When no configuration file is found, the module tries to connect to a TCP/IP socket: localhost:6379.
Performance metrics of Samba file sharing.
It produces the following charts:
- Syscall R/Ws in kilobytes/s
- sendfile
- recvfile
- Smb2 R/Ws in kilobytes/s
- readout
- writein
- readin
- writeout
- Smb2 Create/Close in operations/s
- create
- close
- Smb2 Info in operations/s
- getinfo
- setinfo
- Smb2 Find in operations/s
- find
- Smb2 Notify in operations/s
- notify
- Smb2 Lesser Ops as counters
- tcon
- negprot
- tdis
- cancel
- logoff
- flush
- lock
- keepalive
- break
- sessetup
Requires that smbd has been compiled with profiling enabled. It is also required that smbd was started either with the -P 1 option, or with smbd profiling level set inside smb.conf.
This plugin uses smbstatus -P, which can only be executed by root. It uses sudo and assumes that it is configured such that the netdata user can execute smbstatus as root without a password.
For example:
netdata ALL=(ALL) NOPASSWD: /usr/bin/smbstatus -P
update_every : 5 # update frequency
System sensors information.
Charts are created dynamically.
For detailed configuration information please read the sensors.conf file.
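As a purely hypothetical sketch (the types option name and its values are assumptions; the bundled sensors.conf documents the actual options), a configuration restricting which sensor types are charted might look like:

# hypothetical sketch; check sensors.conf for the real option names
update_every : 5
types:
  - temperature
  - fan
  - voltage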
There have been reports from users that on certain servers, ACPI ring buffer errors are printed by the kernel (dmesg
) when ACPI sensors are being accessed.
We are tracking such cases in issue #827.
Please join this discussion for help.
This module will monitor one or more squid instances depending on configuration.
It produces the following charts:
- Client Bandwidth in kilobits/s
- in
- out
- hits
- Client Requests in requests/s
- requests
- hits
- errors
- Server Bandwidth in kilobits/s
- in
- out
- Server Requests in requests/s
- requests
- errors
priority : 50000
local:
  request : 'cache_object://localhost:3128/counters'
  host : 'localhost'
  port : 3128

Without any configuration, the module will try to autodetect where squid presents its counters data.
This module monitors smartd log files to collect HDD/SSD S.M.A.R.T attributes.
It produces the following charts (you can add additional attributes in the module configuration file):
- Read Error Rate (attribute 1)
- Start/Stop Count (attribute 4)
- Reallocated Sectors Count (attribute 5)
- Seek Error Rate (attribute 7)
- Power-On Hours Count (attribute 9)
- Power Cycle Count (attribute 12)
- Load/Unload Cycles (attribute 193)
- Temperature (attribute 194)
- Current Pending Sectors (attribute 197)
- Off-Line Uncorrectable (attribute 198)
- Write Error Rate (attribute 200)
local:
  log_path : '/var/log/smartd/'

If no configuration is given, the module will attempt to read log files in the /var/log/smartd/ directory.
Presents memory utilization of tomcat containers.
Charts:
- Requests per second
- accesses
- Volume in KB/s
- volume
- Threads
- current
- busy
- JVM Free Memory in MB
- jvm
localhost:
  name : 'local'
  url : 'http://127.0.0.1:8080/manager/status?XML=true'
  user : 'tomcat_username'
  pass : 'secret_tomcat_password'

Without configuration, the module attempts to connect to http://localhost:8080/manager/status?XML=true without any credentials, so it will probably fail.
This module uses the varnishstat command to provide varnish cache statistics.
It produces:
- Client metrics
- session accepted
- session dropped
- good client requests received
- All history hit rate ratio
- cache hits in percent
- cache miss in percent
- cache hits for pass percent
- Current poll hit rate ratio
- cache hits in percent
- cache miss in percent
- cache hits for pass percent
- Thread-related metrics (only for varnish version 4+)
- total number of threads
- threads created
- threads creation failed
- threads hit max
- length of session queue
- sessions queued for thread
- Backend health
- backend conn. success
- backend conn. not attempted
- backend conn. too many
- backend conn. failures
- backend conn. reuses
- backend conn. recycles
- backend conn. retry
- backend requests made
- Memory usage
- memory available in megabytes
- memory allocated in megabytes
- Problems summary
- session dropped
- session accept failures
- session pipe overflow
- backend conn. not attempted
- fetch failed (all causes)
- backend conn. too many
- threads hit max
- threads destroyed
- length of session queue
- HTTP header overflows
- ESI parse errors
- ESI parse warnings
- Uptime
- varnish instance uptime in seconds
No configuration is needed.
Tails the apache/nginx/lighttpd/gunicorn log files to collect real-time web-server statistics.
It produces the following charts:
- Response by type requests/s
- success (1xx, 2xx, 304)
- error (5xx)
- redirect (3xx except 304)
- bad (4xx)
- other (all other responses)
- Response by code family requests/s
- 1xx (informational)
- 2xx (successful)
- 3xx (redirect)
- 4xx (bad)
- 5xx (internal server errors)
- other (non-standard responses)
- unmatched (the lines in the log file that are not matched)
- Detailed Response Codes requests/s (number of responses for each response code family individually)
- Bandwidth KB/s
- received (bandwidth of requests)
- sent (bandwidth of responses)
- Timings ms (request processing time)
- min (minimum request processing time)
- max (maximum request processing time)
- average (average request processing time)
- Request per url requests/s (configured by user)
- Http Methods requests/s (requests per http method)
- Http Versions requests/s (requests per http version)
- IP protocols requests/s (requests per ip protocol version)
- Current Poll Unique Client IPs unique ips/s (unique client IPs per data collection iteration)
- All Time Unique Client IPs unique ips/s (unique client IPs since the last restart of netdata)
nginx_log:
  name : 'nginx_log'
  path : '/var/log/nginx/access.log'

apache_log:
  name : 'apache_log'
  path : '/var/log/apache/other_vhosts_access.log'
  categories:
    cacti : 'cacti.*'
    observium : 'observium'

The module has preconfigured jobs for nginx, apache and gunicorn on various distros.