

Set Up Collectors

Epoch collectors can run in both containerized and non-containerized environments. Only one collector is needed per host (VM or bare-metal OS). Follow the environment-specific installation instructions below.


Before you begin: Ensure that your environment meets Supported Platforms and Collector Requirements before installing.

Installing on Kubernetes

  1. Save the manifest below as epoch-ns.yaml.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: epoch
  2. Create the namespace.

    kubectl create -f epoch-ns.yaml
  3. Save the manifest below as collector.yaml.

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      namespace: epoch
      name: collector
      labels:
        app: epoch
        component: collector
    spec:
      minReadySeconds: 0
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      template:
        metadata:
          labels:
            app: epoch
            component: collector
        spec:
          hostNetwork: true
          dnsPolicy: ClusterFirstWithHostNet
          containers:
            - name: collector
              image: epoch/collectors:latest
              command: ["/bin/bash","-c","while true ; do EPOCH_AOC_HOST=$EPOCH_SERVICE_HOST /opt/nutanix/epoch/collectors/ ; echo Exiting, possibly to upgrade ; sleep 5 ; done"]
              securityContext:
                capabilities:
                  add:
                    - NET_RAW
                    - NET_ADMIN
              env:
                # DO NOT prepend http:// or https:// to the EPOCH_SERVICE_HOST value
                - name: EPOCH_SERVICE_HOST
                  value: ${your_epoch_host}
                - name: EPOCH_ORGANIZATION_ID
                  value: ${organizationId}
                - name: EPOCH_ANALYSIS_DEPTH
                  value: "layer7"
                - name: EPOCH_L7_SAMPLINGRATE
                  value: "20"
                - name: EPOCH_INTERFACE
                  value: "any"
                - name: DEPLOY_ENV
                  value: "docker"
                - name: KUBERNETES
                  value: "yes"
                - name: SD_BACKEND
                  value: "docker"
              resources:
                requests:
                  memory: "512Mi"
                  cpu: "1000m"
                limits:
                  memory: "1Gi"
                  cpu: "2000m"
              volumeMounts:
                - name: cgroup
                  mountPath: /host/sys/fs/cgroup/
                  readOnly: true
                - name: proc
                  mountPath: /host/proc/
                  readOnly: true
                - name: docker-sock
                  mountPath: /var/run/docker.sock
                  readOnly: true
          volumes:
            - name: cgroup
              hostPath:
                path: /sys/fs/cgroup/
            - name: proc
              hostPath:
                path: /proc/
            - name: docker-sock
              hostPath:
                path: /var/run/docker.sock
  4. Install the collectors as a DaemonSet.

    kubectl create -f collector.yaml

Installing on Kubernetes 1.7 and above

  1. If you are installing on Kubernetes 1.7 or a later version, add an environment variable that reads the node's IP address through the downward API to the env section of your collector.yaml, as shown below.

        # Variable name as used by dd-agent-based collectors; confirm it against your downloaded manifest
        - name: KUBERNETES_KUBELET_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP

Installing on Kubernetes 1.6 and Below

  1. If you are installing on Kubernetes 1.6 or a lower version, download a separate collector.yaml manifest.
  2. In the manifest, uncomment the EPOCH_SERVICE_HOST parameter and set its value to ${your_epoch_host}, and uncomment the EPOCH_ORGANIZATION_ID parameter and set its value to ${organizationId}.

    - name: EPOCH_SERVICE_HOST
      value: ${your_epoch_host}
    - name: EPOCH_ORGANIZATION_ID
      value: ${organizationId}
  3. Add the following environment variables.

      value: yes
    - name: K8S_NAMESERVER
      value: <kube-dns-address>

    Replace <kube-dns-address> with the address of your Kubernetes DNS server. Without these two variables, the kubernetes_state metrics from the Kubernetes integration will not work.
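As an illustration, the variable might be set as follows. The address here is an assumption: many clusters expose kube-dns at the ClusterIP 10.96.0.10, but yours may differ, so verify it first.

```yaml
# Hypothetical example; verify the address in your cluster with:
#   kubectl get svc -n kube-system kube-dns
- name: K8S_NAMESERVER
  value: "10.96.0.10"
```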

RBAC Setup

If you have RBAC enabled in your Kubernetes cluster, give the collector the proper permissions before installation.

  1. Add the field serviceAccountName: epoch to the template spec of your collectors manifest.

      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: epoch
      - name: collector
        image: epoch/collectors:latest
  2. Download rbac-setup.yaml.

  3. Give the collector the authentication and authorization it needs to work with your Kubernetes cluster's RBAC.

    kubectl create -f rbac-setup.yaml
  4. Redeploy the collectors manifest with kubectl.
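For reference, an RBAC manifest for a monitoring collector of this kind typically bundles a ServiceAccount with read-only cluster permissions. The sketch below is illustrative only — the resource list and role names are assumptions, so treat the downloaded rbac-setup.yaml as the source of truth.

```yaml
# Illustrative sketch only; the actual rbac-setup.yaml may differ.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: epoch
  namespace: epoch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: epoch-collector
rules:
  - apiGroups: [""]
    resources: ["nodes", "pods", "services", "endpoints", "events"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: epoch-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: epoch-collector
subjects:
  - kind: ServiceAccount
    name: epoch
    namespace: epoch
```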

Installing Integrations

Creating and Persisting the Config File

You can create the config file such that it will be persisted across container restarts in one of three ways:

  • ConfigMaps
  • Custom images
  • Volume mounting

If the service you are integrating with runs as a container, its integration likely supports autoconf. In this case, your integration's config file should reside in the /etc/nutanix/epoch-dd-agent/conf.d/auto_conf directory of your collectors. Use %%host%% and %%port%% in place of any hardcoded host and port parameters in the config file.

Use the Configuration section from the instructions page for your integration as reference for the name and contents of the config file.
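For example, an autoconf file for a containerized Redis might look like the sketch below. This follows the dd-agent style the collectors are built on, but the file name (redisdb.yaml) and keys are assumptions — confirm them on the integration's instructions page.

```yaml
# Hypothetical /etc/nutanix/epoch-dd-agent/conf.d/auto_conf/redisdb.yaml
docker_images:
  - redis              # containers whose image matches get this check
init_config:
instances:
  # %%host%% and %%port%% are filled in per discovered container
  - host: "%%host%%"
    port: "%%port%%"
```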


ConfigMaps

To use ConfigMaps to configure collector integrations, do the following.

  1. In your collector.yaml, add the following block to the volumeMounts section.

    - name: configmap-volume
      mountPath: /conf.d/
    - name: configmap-auto-conf
      mountPath: /etc/nutanix/epoch-dd-agent/auto_conf/
  2. Add the following block to the volumes section.

    - name: configmap-volume
      configMap:
        name: integrations
    - name: configmap-auto-conf
      configMap:
        name: auto-conf
  3. Download integrations-configmap.yaml and auto-configmap.yaml.

  4. If you want to add more integrations, append them to integrations-configmap.yaml and create the integrations.

    kubectl create -f integrations-configmap.yaml
  5. If you want to add more integrations of the autoconf variety, append them to auto-configmap.yaml and create the integrations.

    kubectl create -f auto-configmap.yaml
  6. Install the collectors.

    kubectl create -f collector.yaml
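For reference, an entry in the integrations ConfigMap might look like the sketch below. The http_check integration, its file name, and the URL are hypothetical examples — use your integration's Configuration section for the real keys.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: integrations        # must match the name in the configmap-volume block
  namespace: epoch
data:
  # Each key becomes a file in the mounted conf.d directory
  http_check.yaml: |
    init_config:
    instances:
      - name: backend-health
        url: http://backend.default.svc:8080/health
```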

Custom Images

To create a custom container image that derives from the base collector image provided by Epoch, use the following template Dockerfile.

  # Collectors follow the same versioning scheme as the AOC. Replace x.x.x with your AOC version
  FROM epoch/collectors:stable-x.x.x

  # Copy the ".yaml" file(s) at collector build time
  COPY *.yaml /etc/nutanix/epoch-dd-agent/conf.d/auto_conf


Volume Mounting

To use volume mounting, do the following.

  1. Mount the configuration directory from the host so that the YAML configuration files on the host file system are visible to the collector container.

  2. Provide the following parameter to the container's run command.

    -v /etc/nutanix/epoch-dd-agent/conf.d/:/etc/nutanix/epoch-dd-agent/conf.d/:ro
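For example, a check configuration placed on the host under /etc/nutanix/epoch-dd-agent/conf.d/ becomes visible inside the container through this read-only mount. The file below is a hypothetical http_check example; the file name and fields depend on the integration you are configuring.

```yaml
# Hypothetical host file: /etc/nutanix/epoch-dd-agent/conf.d/http_check.yaml
init_config:
instances:
  - name: local-backend            # example instance label
    url: http://127.0.0.1:8080/health
```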

Running the Integration

After you have persisted the configuration file, the integration should start automatically when the collector image itself is run.

Checking Configuration

To check that all YAML files are valid, run the following command:

kubectl exec -n epoch <collector-pod> -- /etc/init.d/epoch-collectors configcheck

Checking Runtime

To check that the integration is running correctly, run the following command:

kubectl exec -n epoch <collector-pod> -- /etc/init.d/epoch-collectors info

The output of the info command should contain a section similar to the following:

          - instance #0 [OK]
          - Collected 8 metrics & 0 events

Reporting Troubleshooting Information

If you are having issues with your collectors, run the inspect command, which gathers troubleshooting information about the collectors along with any necessary logs.

  1. Generate the tar file.

    kubectl exec -n epoch <collector-pod> -- /etc/init.d/epoch-collectors inspect

    The tar file is created in the /tmp directory, and the file name begins with epoch-collectors-inspection.

  2. Get the exact name of the tar file.

    kubectl exec -n epoch <collector-pod> -- ls /tmp
  3. Retrieve the tar file.

    kubectl cp -n epoch <collector-pod>:/tmp/<inspect-tar-filename> <inspect-tar-filename>
  4. Send the tar file to Epoch support through email.


Uninstalling Collectors

To uninstall the collectors, run the following command:

kubectl delete -f collector.yaml