Kubernetes
Set Up Collectors
Epoch collectors can run in both containerized and non-containerized environments. Only one collector is needed per host (VM or bare-metal OS). Follow the environment-specific installation instructions below.
Installation
Before you begin: ensure that your environment meets the Supported Platforms and Collector Requirements.
- Save the manifest below as `epoch-ns.yaml`:

  ```yaml
  apiVersion: v1
  kind: Namespace
  metadata:
    name: epoch
  ```
- Create the namespace:

  ```shell
  kubectl create -f epoch-ns.yaml
  ```
- Save the manifest below as `collector.yaml`:

  ```yaml
  apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    namespace: epoch
    name: collector
    labels:
      app: epoch
      component: collector
  spec:
    minReadySeconds: 0
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1
    template:
      metadata:
        labels:
          app: epoch
          component: collector
      spec:
        hostNetwork: true
        dnsPolicy: ClusterFirstWithHostNet
        containers:
          - name: collector
            image: gcr.io/nutanix-epoch/collectors:latest
            command: ["/bin/bash", "-c", "while true ; do EPOCH_AOC_HOST=$EPOCH_SERVICE_HOST /opt/nutanix/epoch/collectors/start.sh ; echo Exiting, possibly to upgrade ; sleep 5 ; done"]
            securityContext:
              capabilities:
                add:
                  - NET_RAW
                  - NET_ADMIN
            env:
              # DO NOT prepend http:// or https:// to the EPOCH_SERVICE_HOST value
              - name: EPOCH_SERVICE_HOST
                value: ${your_epoch_host}
              - name: EPOCH_ORGANIZATION_ID
                value: ${organizationId}
              - name: EPOCH_ANALYSIS_DEPTH
                value: "layer7"
              - name: EPOCH_L7_SAMPLINGRATE
                value: "20"
              - name: EPOCH_INTERFACE
                value: "any"
              - name: DEPLOY_ENV
                value: "docker"
              - name: KUBERNETES
                value: "yes"
              - name: SD_BACKEND
                value: "docker"
            resources:
              requests:
                memory: "512Mi"
                cpu: "1000m"
              limits:
                memory: "1Gi"
                cpu: "2000m"
            volumeMounts:
              - name: cgroup
                mountPath: /host/sys/fs/cgroup/
                readOnly: true
              - name: proc
                mountPath: /host/proc/
                readOnly: true
              - name: docker-sock
                mountPath: /var/run/docker.sock
                readOnly: true
        volumes:
          - name: cgroup
            hostPath:
              path: /sys/fs/cgroup/
          - name: proc
            hostPath:
              path: /proc/
          - name: docker-sock
            hostPath:
              path: /var/run/docker.sock
  ```
- Install the collectors as a DaemonSet:

  ```shell
  kubectl create -f collector.yaml
  ```
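Before applying the manifest, it can help to sanity-check the values you filled in for the placeholders. The following is a minimal sketch, not part of the Epoch tooling; `validate_collector_env` is a hypothetical helper that encodes only the rules stated in the manifest above (no URL scheme on the host, organization ID required).

```python
def validate_collector_env(env: dict) -> list:
    """Return a list of human-readable problems; an empty list means the
    env values look sane. Rules come from the collector.yaml comments:
    EPOCH_SERVICE_HOST must be a bare host (no http:// or https://)."""
    problems = []
    host = env.get("EPOCH_SERVICE_HOST", "")
    if not host:
        problems.append("EPOCH_SERVICE_HOST is not set")
    elif host.startswith(("http://", "https://")):
        problems.append("EPOCH_SERVICE_HOST must not include a URL scheme")
    if not env.get("EPOCH_ORGANIZATION_ID"):
        problems.append("EPOCH_ORGANIZATION_ID is not set")
    return problems
```

For example, `validate_collector_env({"EPOCH_SERVICE_HOST": "https://example.com"})` flags both the URL scheme and the missing organization ID.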
Installing on Kubernetes 1.7 and above
- If you are installing on Kubernetes 1.7 or a later version, add the following environment variable to your `collector.yaml`:

  ```yaml
  - name: KUBERNETES_KUBELET_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
  ```
Installing on Kubernetes 1.6
- If you are installing on Kubernetes 1.6 or a lower version, download a separate `collector.yaml` manifest.
- In the manifest, uncomment the `EPOCH_SERVICE_HOST` parameter and replace its value with `${your_epoch_host}`, and uncomment the `EPOCH_ORGANIZATION_ID` parameter and replace its value with `${organizationId}`:

  ```yaml
  - name: EPOCH_SERVICE_HOST
    value: ${your_epoch_host}
  - name: EPOCH_ORGANIZATION_ID
    value: ${organizationId}
  ```
- Add the following environment variables:

  ```yaml
  - name: OVERWRITE_RESOLVCONF
    value: "yes"
  - name: K8S_NAMESERVER
    value: <kube-dns-address>
  ```
  Replace `<kube-dns-address>` with the location of your Kubernetes DNS server. Without these two variables, the `kubernetes_state` metrics from the `kubernetes` integration will not work.
RBAC Setup
If you have RBAC enabled in your Kubernetes cluster, give the collector the proper permissions before installation.
- Add the field `serviceAccountName: epoch` to the template spec of your collectors manifest:

  ```yaml
  spec:
    hostNetwork: true
    dnsPolicy: ClusterFirstWithHostNet
    serviceAccountName: epoch
    containers:
      - name: collector
        image: epoch/collectors:latest
  ```
- Download `rbac-setup.yaml`.
- Give the collector the authentication and authorization it needs to work with your cluster's RBAC:

  ```shell
  kubectl create -f rbac-setup.yaml
  ```
- Redeploy the collectors manifest with `kubectl`.
Installing Integrations
Creating and Persisting the Config File
You can create the config file so that it persists across container restarts in one of three ways:
- ConfigMaps
- Custom images
- Volume mounting
If the service you are integrating with is running as a container, its integration likely supports autoconf. In this case, your integration's config file should reside in the `/etc/nutanix/epoch-dd-agent/conf.d/auto_conf` directory of your collectors. Use `%%host%%` and `%%port%%` in place of any hardcoded `host` and `port` parameters in the config file.
Use the Configuration section from the instructions page for your integration as a reference for the name and contents of the config file.
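To illustrate how the `%%host%%` and `%%port%%` template variables behave, here is a hedged sketch. The agent performs this resolution internally for each discovered container; `render_autoconf` is a hypothetical helper shown only to make the substitution concrete.

```python
def render_autoconf(template: str, host: str, port: int) -> str:
    """Substitute autoconf template variables with the values discovered
    for one container instance (illustration only, not the agent's code)."""
    return template.replace("%%host%%", host).replace("%%port%%", str(port))

# Example autoconf-style snippet with template variables instead of
# hardcoded connection parameters (hypothetical integration config):
template = "instances:\n  - host: %%host%%\n    port: %%port%%"
```

Calling `render_autoconf(template, "10.0.0.5", 6379)` yields a config whose `host` and `port` match the discovered container, which is why hardcoded values are unnecessary in autoconf files.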
ConfigMaps
To use ConfigMaps to configure collector integrations, do the following.
- In your `collector.yaml`, add the following block to the `volumeMounts` section:

  ```yaml
  - name: configmap-volume
    mountPath: /conf.d/
  - name: configmap-auto-conf
    mountPath: /etc/nutanix/epoch-dd-agent/auto_conf/
  ```
- Add the following block to the `volumes` section:

  ```yaml
  - name: configmap-volume
    configMap:
      name: integrations
  - name: configmap-auto-conf
    configMap:
      name: auto-conf
  ```
- Download `integrations-configmap.yaml` and `auto-configmap.yaml`.
- If you want to add more integrations, append them to `integrations-configmap.yaml`, then create the integrations:

  ```shell
  kubectl create -f integrations-configmap.yaml
  ```
- If you want to add more integrations of the autoconf variety, append them to `auto-configmap.yaml`, then create the integrations:

  ```shell
  kubectl create -f auto-configmap.yaml
  ```
- Install the collectors:

  ```shell
  kubectl create -f collector.yaml
  ```
Custom Images
To create a custom container image that derives from the base collector image provided by Epoch, use the following template Dockerfile.
```dockerfile
# Collectors follow the same versioning scheme as the AOC. Replace x.x.x with your AOC version.
FROM epoch/collectors:stable-x.x.x

# Copy the .yaml config file(s) into the image at build time
COPY *.yaml /etc/nutanix/epoch-dd-agent/conf.d/auto_conf/
```
Volume-mounting
To use volume-mounting, do the following.
- Mount the configuration directory from the host so that the `.yaml` configuration files are visible on the host file system.
Provide the following parameter to the container's
run
command.-v /etc/nutanix/epoch-dd-agent/conf.d/:/etc/nutanix/epoch-dd-agent/conf.d/:ro
Running the Integration
After you have persisted the configuration file, the integration should start automatically when the collector image itself is run.
Checking Configuration
To check that all `.yaml` files are valid, run the following command:

```shell
kubectl exec -n epoch <collector-pod> /etc/init.d/epoch-collectors configcheck
```
Checking Runtime
To check that the integration is running correctly, run the following command:

```shell
kubectl exec -n epoch <collector-pod> /etc/init.d/epoch-collectors info
```
The output of the `info` command should contain a section similar to the following:

```
Checks
======
[...]

  <name-of-integration>
  ----------
    - instance #0 [OK]
    - Collected 8 metrics & 0 events
```
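When checking many pods, it can be convenient to script this verification rather than read the output by hand. The sketch below assumes only the output format shown above; `integration_ok` is a hypothetical helper, not part of the collector tooling.

```python
def integration_ok(info_output: str, integration: str) -> bool:
    """Scan the Checks section of `epoch-collectors info` output and report
    whether the named integration has at least one instance marked [OK]."""
    in_section = False
    for line in info_output.splitlines():
        stripped = line.strip()
        if stripped == integration:
            in_section = True       # found the integration's header line
            continue
        if not in_section:
            continue
        if "[OK]" in stripped:
            return True
        # Instance lines and the header underline start with "-"; any other
        # non-empty line means the integration's section has ended.
        if stripped and not stripped.startswith("-"):
            break
    return False
```

You would feed it the captured output of `kubectl exec -n epoch <collector-pod> /etc/init.d/epoch-collectors info` and the integration name you expect to see.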
Reporting Troubleshooting Information
If you are having issues with your collectors, you can run the `inspect` command, which gathers troubleshooting information about the collectors as well as any necessary logs.
- Generate the tar file:

  ```shell
  kubectl exec -n epoch <collector-pod> /etc/init.d/epoch-collectors inspect
  ```

  The tar file is created in the `/tmp` directory, and its name begins with `epoch-collectors-inspection`.
. -
Get the exact name of the tar file.
kubectl exec -n epoch <collector-pod> ls /tmp
- Retrieve the tar file:

  ```shell
  kubectl cp -n epoch <collector-pod>:/tmp/<inspect-tar-filename> <inspect-tar-filename>
  ```
- Send the tar file to Epoch support through email at epoch-support@nutanix.com.
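If you script the retrieval step, you need to pick the inspection bundle's name out of the `ls /tmp` listing. A small sketch, assuming only the `epoch-collectors-inspection` name prefix stated above (`find_inspection_tarballs` is a hypothetical helper):

```python
def find_inspection_tarballs(ls_output: str) -> list:
    """Filter `ls /tmp` output down to inspection bundles, identified by
    the epoch-collectors-inspection name prefix; sorted so the last entry
    is the lexicographically newest."""
    names = [n for n in ls_output.split()
             if n.startswith("epoch-collectors-inspection")]
    return sorted(names)
```

The returned name can then be substituted for `<inspect-tar-filename>` in the `kubectl cp` command.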
Uninstallation
To uninstall the collectors, run the following command:

```shell
kubectl delete -f collector.yaml
```