
Infrastructure Integration

Configuration

  1. Configure the agent by editing /etc/nutanix/epoch-dd-agent/conf.d/ceph.yaml on the collectors.

Example:

    init_config:
    instances:
    #  - tags:
    #    - name:ceph_cluster
    #
    #    ceph_cmd: /usr/bin/ceph
    #    ceph_cluster: ceph
    #
    # If your environment requires sudo, please add a line like:
    #          dd-agent ALL=(ALL) NOPASSWD:/usr/bin/ceph
    # to your sudoers file, and uncomment the option below.
    #
    #    use_sudo: True
  2. Check that all YAML files are valid with the following command:

    /etc/init.d/epoch-collectors configcheck
    
  3. Restart the Agent using the following command:

    /etc/init.d/epoch-collectors restart
    
  4. Execute the info command to verify that the integration check has passed:

    /etc/init.d/epoch-collectors info
    
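With the comments removed, a minimal instance definition for step 1 might look like the following. This is a sketch assuming the default cluster name and ceph binary path shown in the commented template; adjust both to match your environment:

```yaml
init_config:

instances:
    # Path to the ceph binary and the cluster name (defaults shown).
  - ceph_cmd: /usr/bin/ceph
    ceph_cluster: ceph
    # Optional tags attached to every metric from this instance.
    tags:
      - name:ceph_cluster
    # Uncomment if running ceph requires sudo (see the sudoers note above).
    # use_sudo: True
```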

The output of the info command should contain a section similar to the following:

    Checks
    ======
      [...]
      ceph
      ----------
          - instance #0 [OK]
          - Collected 8 metrics & 0 events
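If you automate deployments, the same verification can be scripted by parsing the info output. A minimal sketch; the `check_passed` helper and the embedded sample output are illustrative, not part of the agent:

```python
import re

# Sample "info" output for the ceph check (taken from the section above).
INFO_OUTPUT = """\
Checks
======
  [...]
  ceph
  ----------
      - instance #0 [OK]
      - Collected 8 metrics & 0 events
"""

def check_passed(output: str, check: str) -> bool:
    """Return True if every instance of `check` reports [OK]."""
    # Locate the check's section: its name, a dashed underline, then the body.
    section = re.search(
        rf"{re.escape(check)}\n\s*-+\n(.*?)(?:\n\S|\Z)", output, re.S
    )
    if not section:
        return False
    # Collect the status of each "- instance #N [STATUS]" line.
    statuses = re.findall(r"instance #\d+ \[(\w+)\]", section.group(1))
    return bool(statuses) and all(s == "OK" for s in statuses)

print(check_passed(INFO_OUTPUT, "ceph"))  # → True
```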

Infrastructure Datasources

| Datasource | Available Aggregations | Unit | Description |
| --- | --- | --- | --- |
| ceph.commit_latency_ms | avg, max, min, sum | millisecond | Time taken to commit an operation to the journal |
| ceph.apply_latency_ms | avg, max, min, sum | millisecond | Time taken to flush an update to disks |
| ceph.op_per_sec | avg, max, min, sum | operation/second | IO operations per second for a given pool |
| ceph.read_bytes_sec | avg, max, min, sum | byte | Bytes/second being read |
| ceph.write_bytes_sec | avg, max, min, sum | byte | Bytes/second being written |
| ceph.num_osds | avg, max, min, sum | item | Number of known storage daemons |
| ceph.num_in_osds | avg, max, min, sum | item | Number of participating storage daemons |
| ceph.num_up_osds | avg, max, min, sum | item | Number of online storage daemons |
| ceph.num_pgs | avg, max, min, sum | item | Number of placement groups available |
| ceph.num_mons | avg, max, min, sum | item | Number of monitor daemons |
| ceph.aggregate_pct_used | avg, max, min, sum | percent | Overall capacity usage |
| ceph.total_objects | avg, max, min, sum | item | Object count from the underlying object store |
| ceph.num_objects | avg, max, min, sum | item | Object count for a given pool |
| ceph.read_bytes | avg, max, min, sum | byte | Per-pool read bytes |
| ceph.write_bytes | avg, max, min, sum | byte | Per-pool write bytes |
| ceph.num_pools | avg, max, min, sum | item | Number of pools |
| ceph.pgstate.active_clean | avg, max, min, sum | item | Number of active+clean placement groups |
| ceph.read_op_per_sec | avg, max, min, sum | operation/second | Per-pool read operations/second |
| ceph.write_op_per_sec | avg, max, min, sum | operation/second | Per-pool write operations/second |
| ceph.num_near_full_osds | avg, max, min, sum | item | Number of nearly full OSDs |
| ceph.num_full_osds | avg, max, min, sum | item | Number of full OSDs |
| ceph.osd.pct_used | avg, max, min, sum | percent | Percentage used of full/nearly full OSDs |
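Several of these datasources correspond to counts reported by ceph's own JSON status output. The following sketch shows how such values could be extracted; the embedded JSON fragment and its nested `osdmap` shape are assumptions for illustration, since the real `ceph status -f json` output varies by Ceph release and contains many more fields:

```python
import json

# Sample fragment of `ceph status -f json` output (shape assumed for
# illustration; real output differs by release).
STATUS_JSON = """
{
  "osdmap": {"osdmap": {"num_osds": 3, "num_up_osds": 3, "num_in_osds": 2}},
  "pgmap": {"num_pgs": 128}
}
"""

def osd_metrics(raw: str) -> dict:
    """Extract the OSD/PG counts that map onto the datasources above."""
    status = json.loads(raw)
    osdmap = status["osdmap"]["osdmap"]
    return {
        "ceph.num_osds": osdmap["num_osds"],       # known storage daemons
        "ceph.num_up_osds": osdmap["num_up_osds"], # online storage daemons
        "ceph.num_in_osds": osdmap["num_in_osds"], # participating daemons
        "ceph.num_pgs": status["pgmap"]["num_pgs"],# placement groups
    }

print(osd_metrics(STATUS_JSON))
```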