3.0
2017-07-05T09:03:49Z
Templates
ceph-mgr Zabbix module
ceph-mgr Zabbix module
Templates
Ceph
-
Number of Monitors
2
0
ceph.num_mon
0
90
365
0
3
0
0
0
0
1
0
0
Number of Monitors configured in Ceph cluster
0
Ceph
-
Number of OSDs
2
0
ceph.num_osd
0
90
365
0
3
0
0
0
0
1
0
0
Number of OSDs in Ceph cluster
0
Ceph
-
Number of OSDs in state: IN
2
0
ceph.num_osd_in
0
90
365
0
3
0
0
0
0
1
0
0
Total number of IN OSDs in Ceph cluster
0
Ceph
-
Number of OSDs in state: UP
2
0
ceph.num_osd_up
0
90
365
0
3
0
0
0
0
1
0
0
Total number of UP OSDs in Ceph cluster
0
Ceph
-
Number of Placement Groups
2
0
ceph.num_pg
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in Ceph cluster
0
Ceph
-
Number of Placement Groups in Temporary state
2
0
ceph.num_pg_temp
0
90
365
0
3
0
0
0
0
1
0
0
Total number of Placement Groups in pg_temp state
0
Ceph
-
Number of Pools
2
0
ceph.num_pools
0
90
365
0
3
0
0
0
0
1
0
0
Total number of pools in Ceph cluster
0
Ceph
-
Ceph OSD avg fill
2
0
ceph.osd_avg_fill
0
90
365
0
0
0
0
0
0
1
0
0
Average fill of OSDs
0
Ceph
-
Ceph backfill full ratio
2
1
ceph.osd_backfillfull_ratio
0
90
365
0
0
0
0
0
0
100
0
0
Backfill full ratio setting of Ceph cluster as configured on OSDMap
0
Ceph
-
Ceph full ratio
2
1
ceph.osd_full_ratio
0
90
365
0
0
0
0
0
0
100
0
0
Full ratio setting of Ceph cluster as configured on OSDMap
0
Ceph
-
Ceph OSD Apply latency Avg
2
0
ceph.osd_latency_apply_avg
0
90
365
0
0
0
0
0
0
1
0
0
Average apply latency of OSDs
0
Ceph
-
Ceph OSD Apply latency Max
2
0
ceph.osd_latency_apply_max
0
90
365
0
0
0
0
0
0
1
0
0
Maximum apply latency of OSDs
0
Ceph
-
Ceph OSD Apply latency Min
2
0
ceph.osd_latency_apply_min
0
90
365
0
0
0
0
0
0
1
0
0
Minimum apply latency of OSDs
0
Ceph
-
Ceph OSD Commit latency Avg
2
0
ceph.osd_latency_commit_avg
0
90
365
0
0
0
0
0
0
1
0
0
Average commit latency of OSDs
0
Ceph
-
Ceph OSD Commit latency Max
2
0
ceph.osd_latency_commit_max
0
90
365
0
0
0
0
0
0
1
0
0
Maximum commit latency of OSDs
0
Ceph
-
Ceph OSD Commit latency Min
2
0
ceph.osd_latency_commit_min
0
90
365
0
0
0
0
0
0
1
0
0
Minimum commit latency of OSDs
0
Ceph
-
Ceph OSD max fill
2
0
ceph.osd_max_fill
0
90
365
0
0
0
0
0
0
1
0
0
Percentage fill of maximum filled OSD
0
Ceph
-
Ceph OSD min fill
2
0
ceph.osd_min_fill
0
90
365
0
0
0
0
0
0
1
0
0
Percentage fill of minimum filled OSD
0
Ceph
-
Ceph nearfull ratio
2
1
ceph.osd_nearfull_ratio
0
90
365
0
0
0
0
0
0
100
0
0
Near full ratio setting of Ceph cluster as configured on OSDMap
0
Ceph
-
Overall Ceph status
2
0
ceph.overall_status
0
90
0
0
4
0
0
0
0
1
0
0
Overall Ceph cluster status, e.g. HEALTH_OK, HEALTH_WARN or HEALTH_ERR
0
Ceph
-
Overall Ceph status (numeric)
2
0
ceph.overall_status_int
0
90
365
0
3
0
0
0
0
1
0
0
Overall Ceph status as a numeric value. OK: 0, WARN: 1, ERR: 2
0
Ceph
-
Ceph Read bandwidth
2
0
ceph.rd_bytes
0
90
365
0
3
b
1
0
0
0
1
0
0
Global read bandwidth
0
Ceph
-
Ceph Read operations
2
0
ceph.rd_ops
0
90
365
0
3
1
0
0
0
1
0
0
Global read operations per second
0
Ceph
-
Total bytes available
2
0
ceph.total_avail_bytes
0
90
365
0
3
B
0
0
0
0
1
0
0
Total bytes available in Ceph cluster
0
Ceph
-
Total bytes
2
0
ceph.total_bytes
0
90
365
0
3
B
0
0
0
0
1
0
0
Total (RAW) capacity of Ceph cluster in bytes
0
Ceph
-
Total number of objects
2
0
ceph.total_objects
0
90
365
0
3
0
0
0
0
1
0
0
Total number of objects in Ceph cluster
0
Ceph
-
Total bytes used
2
0
ceph.total_used_bytes
0
90
365
0
3
B
0
0
0
0
1
0
0
Total bytes used in Ceph cluster
0
Ceph
-
Ceph Write bandwidth
2
0
ceph.wr_bytes
0
90
365
0
3
b
1
0
0
0
1
0
0
Global write bandwidth
0
Ceph
-
Ceph Write operations
2
0
ceph.wr_ops
0
90
365
0
3
1
0
0
0
1
0
0
Global write operations per second
0
Ceph
{ceph-mgr Zabbix module:ceph.overall_status_int.last()}=2
Ceph cluster in ERR state
0
5
Ceph cluster is in ERR state
0
{ceph-mgr Zabbix module:ceph.overall_status_int.avg(1h)}=1
Ceph cluster in WARN state
0
4
Issue a trigger if Ceph cluster is in WARN state for >1h
0
{ceph-mgr Zabbix module:ceph.num_osd_in.change()}&lt;0
Number of IN OSDs decreased
0
2
Number of OSDs in IN state decreased
0
{ceph-mgr Zabbix module:ceph.num_osd_up.change()}&lt;0
Number of UP OSDs decreased
0
2
Number of OSDs in UP state decreased
0
Ceph bandwidth
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
0
0
0
0
0
0
1A7C11
0
2
0
-
ceph-mgr Zabbix module
ceph.rd_bytes
1
0
F63100
0
2
0
-
ceph-mgr Zabbix module
ceph.wr_bytes
Ceph free space
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
1
2
0
ceph-mgr Zabbix module
ceph.total_bytes
0
0
2774A4
0
2
0
-
ceph-mgr Zabbix module
ceph.total_avail_bytes
Ceph health
900
200
0.0000
2.0000
1
1
0
1
0
0.0000
0.0000
1
1
0
0
0
0
1A7C11
0
7
0
-
ceph-mgr Zabbix module
ceph.overall_status_int
Ceph I/O
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
0
0
0
0
0
0
1A7C11
0
2
0
-
ceph-mgr Zabbix module
ceph.rd_ops
1
0
F63100
0
2
0
-
ceph-mgr Zabbix module
ceph.wr_ops
Ceph OSD latency
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
0
0
0
0
0
0
1A7C11
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_latency_apply_avg
1
0
F63100
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_latency_commit_avg
2
0
2774A4
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_latency_apply_max
3
0
A54F10
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_latency_commit_max
4
0
FC6EA3
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_latency_apply_min
5
0
6C59DC
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_latency_commit_min
Ceph OSD utilization
900
200
0.0000
100.0000
1
1
0
1
0
0.0000
0.0000
1
1
0
0
0
0
0000CC
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_nearfull_ratio
1
0
F63100
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_full_ratio
2
0
CC00CC
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_backfillfull_ratio
3
0
A54F10
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_max_fill
4
0
FC6EA3
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_avg_fill
5
0
6C59DC
0
2
0
-
ceph-mgr Zabbix module
ceph.osd_min_fill
Ceph storage overview
900
200
0.0000
0.0000
0
0
2
1
0
0.0000
0.0000
0
0
0
0
0
0
F63100
0
2
0
-
ceph-mgr Zabbix module
ceph.total_used_bytes
1
0
00CC00
0
2
0
-
ceph-mgr Zabbix module
ceph.total_avail_bytes