Sometimes you will face a scenario where you have a Kubernetes cluster monitored with Prometheus, plus services that live outside the K8S cluster which you would like to monitor with Prometheus as well. So how do you do that with a ServiceMonitor? By creating a Service without selectors and manually defining the Endpoints.
Having a single monitoring interface is beneficial for a lot of reasons: one of them is reusing the monitoring tools we already run in the cluster, instead of setting up separate monitoring stacks across VMs, K8S clusters etc. (and carrying the burden of managing them).
Create the Service and Endpoints resources
apiVersion: v1
kind: Service
metadata:
  name: external-application
  namespace: monitoring
  labels:
    app: external-application
spec:
  ports:
  - name: metrics
    port: 9100
    protocol: TCP
    targetPort: 9100
As you can see, the selector section is omitted, so Kubernetes does not generate a matching Endpoints resource for us. Now define the following:
apiVersion: v1
kind: Endpoints
metadata:
  name: external-application
  namespace: monitoring
subsets:
- addresses:
  - ip: 10.10.129.6
  - ip: 10.10.130.9
  - ip: 10.10.129.8
  ports:
  - name: metrics
    port: 9100
    protocol: TCP
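Assuming both manifests are saved locally (the filenames below are just placeholders), you can apply them and confirm that the Service picked up the manual Endpoints:

```shell
# Apply the Service and Endpoints manifests (filenames are assumptions)
kubectl apply -f external-application-service.yaml
kubectl apply -f external-application-endpoints.yaml

# The ENDPOINTS column should list the external IPs on port 9100
kubectl get endpoints external-application -n monitoring
```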
In this way we manually specify the IPs backing the Service; it's important that the Endpoints object uses the same name and port as the Service:
metadata:
  name: external-application
[...]
ports:
- name: metrics
  port: 9100
  protocol: TCP
ServiceMonitor
Now we can define the ServiceMonitor that will scrape the remote application through the Service defined above:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: external-application
  namespace: monitoring
  labels:
    app: external-application
spec:
  endpoints:
  - port: metrics
    interval: 3s
    path: /metrics
  selector:
    matchLabels:
      app: external-application
  namespaceSelector:
    matchNames:
    - monitoring
Make sure you're using the same port name, namespace, and labels as defined in the Service.
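To check that Prometheus actually discovered the new targets, you can port-forward the Prometheus service and query its targets API (the service name and namespace below are assumptions that depend on how your Prometheus stack was installed):

```shell
# Port-forward Prometheus locally (service name/namespace are assumptions)
kubectl port-forward svc/prometheus-operated 9090:9090 -n monitoring &

# The external IPs should show up as scrape URLs on port 9100
curl -s http://localhost:9090/api/v1/targets | grep -o '"scrapeUrl":"[^"]*9100[^"]*"'
```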
Bonus part: Install Node Exporter on a VM
As you may know, the above only works if the target application on the remote environment (whatever it is) exposes Prometheus-compatible metrics. In this scenario we are going to monitor a remote VM with Node Exporter.
First of all, download the binary on the target VM (grab the latest version):
wget https://github.com/prometheus/node_exporter/releases/download/v<VERSION>/node_exporter-<VERSION>.linux-amd64.tar.gz
tar xvfz node_exporter-*.linux-amd64.tar.gz
sudo mv node_exporter-*.linux-amd64/node_exporter /usr/local/bin/
Now we are going to create a service for it, so we need to:
- Create the user that will run the Node Exporter service
- Create the Systemd service file
- Restart the system daemon
- Test the /metrics endpoint
sudo useradd -rs /bin/false node_exporter
sudo tee /etc/systemd/system/node_exporter.service <<EOF
[Unit]
Description=Node Exporter
After=network.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl start node_exporter
sudo systemctl enable node_exporter
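After enabling the unit, you can confirm that it is running and will survive a reboot:

```shell
# Should print "active" and "enabled" respectively
systemctl is-active node_exporter
systemctl is-enabled node_exporter
```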
Now the Node Exporter should be running on port 9100; you can test it by hitting
http://<server-IP>:9100/metrics
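For a scriptable check (instead of the browser), you can curl the endpoint from the VM itself:

```shell
# Fetch the first few metric lines from the exporter
curl -s http://localhost:9100/metrics | head -n 5

# Or verify that a known metric family is present
curl -s http://localhost:9100/metrics | grep -c '^node_cpu_seconds_total'
```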
You may need to allow connection on that port from the outside by using firewall-cmd.
sudo firewall-cmd --zone=public --permanent --add-port=9100/tcp
sudo firewall-cmd --reload
Important: You are strongly advised to only allow access to those ports from trusted networks.
Now you may check your Prometheus UI for VM-related metrics such as
node_cpu_seconds_total
node_filesystem_avail_bytes
[...]
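For example, a common way to derive per-instance CPU utilisation from node_cpu_seconds_total is to take the fraction of time not spent idle:

```promql
# Percentage of CPU time not idle, averaged over 5 minutes, per instance
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))
```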
You may also like to use a community Grafana dashboard such as "Node Exporter Full" to visualize these metrics.