Integrate Prometheus with Azure Kubernetes Service (AKS)
How to enable managed Prometheus on Azure and integrate it with Azure Kubernetes Service (AKS)
The What
In this post I want to walk through the following:
- What is Azure Monitor Managed Service for Prometheus
- How to set up managed Prometheus in Azure to scrape your pods in AKS
- The true value-add of using Prometheus over other solutions when it comes to Kubernetes
The Why
One of the key benefits of leveraging Kubernetes is portability: the processes and tools we use to build, deploy, manage, and govern applications work the same way in any environment where we have a Kubernetes cluster deployed. This portability is standardized through the CNCF, which lets us across the industry work together and allows the community to make a safe assumption: you can build a solution on top of Kubernetes and give customers the option to deploy it on-premises or in any cloud provider of their choice, so long as a certified Kubernetes distribution exists as a home for the solution.
A similar promise has been made with Prometheus, an open source, graduated CNCF project that the community has aligned to. Specifically, when I say "aligned to," I mean that developers building solutions assume there is a Prometheus instance running somewhere in your environment, so to make it easier for you to monitor their solution, they have already built the necessary Prometheus integrations. This makes it easy for you to get metrics "out of the box" as opposed to having to figure out how you'll monitor each solution independently.
As an example of this, just take a look at any popular Helm chart. If we refer to the Bitnami Drupal Helm chart, you will find a whole section covering the configuration of Prometheus metrics exposed from the Drupal instance. This means you can configure the Prometheus metrics settings for Drupal in your values.yaml file and you'll be ready to get metrics from Drupal, no additional work required.
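For instance, enabling the chart's bundled exporter is typically a one-line switch in values.yaml. The exact keys live in the chart's own README, so treat this as a rough sketch of the pattern rather than the definitive configuration:

```yaml
# values.yaml for the Bitnami Drupal chart (illustrative) - enable the bundled Prometheus exporter
metrics:
  enabled: true
```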
With Azure Monitor Managed Service for Prometheus, we can now get a managed instance of Prometheus so that we can fulfill that portability promise on our end. With that, we've aligned to the community and can inherit the work others have done to make it easier to monitor whatever solutions we deploy in our environment.
The How
The Architecture
As a first step, let's review what Prometheus is and how it works generally, and then we can understand how it works as a managed service in Azure (and what "managed" means in this context).
If we look at the Prometheus architecture, we can think of the Prometheus service as a solution that pulls metrics from specified targets or endpoints, stores them in a time-series database, and enables clients to use the PromQL query language to analyze and review the metrics.
Let's now understand what Azure Managed Prometheus is giving us. First, a key piece that we are getting is the storage of the time-series metrics. When we want to enable Prometheus in Azure, the requirement is that we need to create an Azure Monitor workspace which is where Prometheus metrics are stored. This is a great benefit since we avoid having to stand up our own Prometheus Server and manage storage concerns.
Additionally, the managed offering gives us the agent that runs on AKS (or an Azure Arc-enabled Kubernetes cluster) and collects and sends data to the Azure Monitor workspace.
In terms of what we as the customer are responsible for: we mentioned that we need to create the workspace. We also need to tell the agent what to scrape (i.e., which targets/endpoints it should pull metrics from) and make sure that our apps actually expose a metrics endpoint for the agent to scrape.
In Prometheus, a scrape config is what tells Prometheus what endpoints to scrape. These can be defined either statically by explicitly providing the endpoints to scrape, or dynamically by using a supported service discovery mechanism. Kubernetes fortunately has the kubernetes_sd_config mechanism that Prometheus can use to dynamically determine what it needs to scrape, and this is supported by the agent Azure deploys for us within the cluster.
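To make that concrete, here is a minimal, illustrative scrape config contrasting the two approaches (the static target address is made up):

```yaml
scrape_configs:
  # Static: explicitly list the endpoints to scrape
  - job_name: static-example
    static_configs:
      - targets: ["10.0.0.4:9100"]
  # Dynamic: discover pods through the Kubernetes API at scrape time
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
```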
The architecture looks something like this if I were to visualize it. The orange boxes are what we get from the managed offering, while the pieces of the architecture marked with the user icon represent what we are responsible for as the customer.
Setting Up Prometheus with AKS
Deploying a Sample Application
Let's work through a demonstration of setting up Prometheus with AKS. To help with the demo, I'll deploy an ingress-nginx controller and a backend Apache web server to my cluster (keep in mind the cluster does not have Container Insights enabled at this point):
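The exact commands aren't reproduced here, but the setup looked roughly like this (release and namespace names match what's used later in the post; the Bitnami Apache chart is just a stand-in backend):

```bash
# Add the chart repositories (ingress-nginx is the official chart)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Ingress controller in its own namespace
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-basic --create-namespace

# Simple Apache backend to route traffic to
helm install apache bitnami/apache --namespace ingress-basic
```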
Once the ingress controller and Apache are deployed, apply the following yaml to create an ingress resource:
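An ingress resource along these lines does the job (the backend service name assumes the Bitnami Apache chart defaults; adjust to whatever your release created):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apache-ingress
  namespace: ingress-basic
spec:
  ingressClassName: nginx
  rules:
    # Note: no host rule here - this becomes relevant later when we look at per-host metrics
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: apache
                port:
                  number: 80
```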
Once that's deployed, you should be able to hit your app on the public IP used by the ingress controller (you can get the IP by running kubectl get svc -n ingress-basic):
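Something like:

```bash
# Grab the EXTERNAL-IP of the ingress controller's LoadBalancer service and hit it
kubectl get svc -n ingress-basic
curl http://<EXTERNAL-IP>/
```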
Creating the Azure Monitor Workspace and Deploying the Agents to AKS
Now that we have a sample application deployed, let's deploy the necessary resources on the Azure side.
First we deploy an Azure Monitor workspace with the defaults specified to get us started:
Once the resource is created, you can click into it, navigate to the Monitored clusters blade, and enable metrics collection for any clusters you have deployed. From there, you should see the agents get deployed to your instance:
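If you prefer scripting it over the portal, the equivalent can be done along these lines (names and resource groups are placeholders, and the exact flags may vary by CLI version, so treat this as a sketch); either way, you can then check that the agent pods landed in kube-system:

```bash
# Create the Azure Monitor workspace
az monitor account create --name my-monitor-workspace --resource-group my-rg

# Enable managed Prometheus metrics collection on an existing AKS cluster
az aks update --name my-aks --resource-group my-rg \
  --enable-azure-monitor-metrics \
  --azure-monitor-workspace-resource-id <workspace-resource-id>

# The collector agents show up as ama-metrics pods in kube-system
kubectl get pods -n kube-system | grep ama-metrics
```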
By default, the agents are actually already scraping metrics from AKS - but these are not the metrics exposed by your pods. They are platform metrics exposed natively by AKS, which the agents scrape out of the box. The full list is noted here.
Expose Metrics on Ingress-Nginx
So we need to do two things: expose metrics from our pods and services, and then tell the agents about the endpoints to scrape.
Fortunately, because of how the community has aligned to using Prometheus, open source solutions like the ingress-nginx controller have made it easy to expose metrics. We can simply update the Helm values we pass and upgrade the installation we currently have running. For context, I figured this out by reviewing this configuration on the docs page:
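In practice that meant adding something along these lines to the Helm values and upgrading the release (the keys follow the ingress-nginx monitoring docs; 10254 is the controller's default metrics port):

```yaml
# values.yaml for the ingress-nginx release - turn on the metrics endpoint and annotate the pods
controller:
  metrics:
    enabled: true
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
```

```bash
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-basic -f values.yaml
```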
We should be able to validate this worked by pinging that port with a kubectl port-forward command (note that you need to navigate to the /metrics path):
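For example (the workload name below assumes the default ingress-nginx release naming; adjust if yours differs):

```bash
# Forward the controller's metrics port locally (10254 is the default metrics port)
kubectl port-forward -n ingress-basic deploy/ingress-nginx-controller 10254:10254

# In a second terminal, confirm Prometheus-format metrics come back on /metrics
curl http://localhost:10254/metrics
```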
Configure Agent to Scrape Custom Pods
The last piece of this puzzle, and potentially the most complex if you're new to Prometheus, is how to tell the agent to scrape this pod at that endpoint. As mentioned earlier, with Kubernetes we can dynamically discover endpoints to scrape by using kubernetes_sd_config in the Prometheus configuration and pass that to our agent through a config map. The full process is documented here in the docs. Here is an example configuration that scrapes all pods in the cluster.
I first generated a Prometheus scrape config file and saved it locally as prometheus-config:
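My exact file isn't reproduced here, but a typical pod-discovery config looks roughly like the following. This variant keeps only pods carrying the prometheus.io/scrape annotation (which matches the annotations we set on ingress-nginx above); if you truly want every pod regardless of annotations, you would drop the keep rule:

```yaml
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that opt in via the prometheus.io/scrape annotation
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Honor a custom metrics path if the pod declares one
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      # Scrape on the port from the prometheus.io/port annotation
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      # Carry namespace and pod name through as labels
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
      - source_labels: [__meta_kubernetes_pod_name]
        target_label: pod
```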
Next, I deployed it as a config map to the kube-system namespace with the following command. Notice that I'm using the replica pod to scrape this as opposed to the DaemonSet pod (here's an overview of what I'm referring to - there are different config maps that can be used depending on what agent you want to use for scraping):
kubectl create configmap ama-metrics-prometheus-config --from-file=prometheus-config -n kube-system
As a few checks to see if this worked, let's first validate that the replica pods restart, which means the agent saw that we updated the configuration:
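A quick way to watch for this (the ama-metrics pod names are whatever the add-on created in your cluster):

```bash
# The replica pods (ama-metrics-*) should cycle shortly after the config map is created
kubectl get pods -n kube-system -w | grep ama-metrics
```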
Additionally, we can see in the logs that it picked up and merged the custom config map with the default configuration:
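Something along these lines surfaces the relevant log lines (the container layout inside the deployment may vary by agent version, hence --all-containers):

```bash
# Look for lines about the custom prometheus-config being validated and merged
kubectl logs -n kube-system deploy/ama-metrics --all-containers=true | grep -i config
```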
Finally, let's see if we can query these metrics. The easiest way to verify this worked is to link an Azure Managed Grafana workspace to the Azure Monitor workspace. In a separate post, I'll dive deeper into Grafana as a dashboarding solution, but the argument for why one would use Grafana is similar to the argument being made here for Prometheus.
Let's create a Grafana instance and link it to the workspace - you can leave all the defaults:
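If you'd rather script this part, the instance itself can also be created from the CLI (names are placeholders; I did the linking to the Azure Monitor workspace through the portal in the next step):

```bash
# Create an Azure Managed Grafana instance
az grafana create --name my-grafana --resource-group my-rg
```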
Once it deploys, you can return to your workspace and link the workspace with the Grafana instance:
Once it's fully linked, go ahead and click on the dashboard for Grafana:
After you click on Explore, you can create a query. First, make sure that the Grafana data source is set to your managed Prometheus workspace:
When you select Metrics from there, you should see many of the default platform metrics as well as the custom nginx metrics:
So it's safe to say that the metrics are being scraped! One cool thing we can do since we're using Grafana is pull in a pre-populated dashboard from the ingress-nginx docs to visualize these metrics:
Let's import this by returning to our Grafana instance and navigating to the import tab:
Once there, we want to import the following JSON from the link provided in the docs:
Once you select Load, make sure on the options that you choose the managed Prometheus data source:
Now at first it seems like everything is okay, but on deeper inspection you can see some of the visuals are not actually populating:
After some digging, I realized that the ingress resource we deployed is NOT using a hostname for routing, and the ingress-nginx docs call out that certain metrics will not be populated when that is the case (so in production, we likely do want to use a hostname):
It does mention we can get it working by passing the --metrics-per-host=false flag. I did some digging and found that we can pass this as an extraArgs flag with helm:
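Concretely, that meant extending the Helm values from earlier with an extraArgs entry (again a sketch based on the chart's extraArgs mechanism) and re-running the upgrade:

```yaml
# values.yaml for ingress-nginx - metrics settings from before, plus per-host metrics disabled
controller:
  metrics:
    enabled: true
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "10254"
  extraArgs:
    metrics-per-host: "false"
```

```bash
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-basic -f values.yaml
```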
You can confirm that your values.yaml is correct by running the following. Notice how the resulting file shows all the user-supplied configurations:
helm get values ingress-nginx --namespace ingress-basic
Once that's complete, you should now be able to see data across the charts:
As one last review, you can also make PromQL queries directly from the Azure Monitor workspace. For example, now that we know what metrics are exposed by the ingress controller, I can navigate to that workbook and make a query directly:
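For example, a query like the following (built on the request counter the controller exposes) shows request rates broken out by HTTP status code:

```promql
# Requests per second handled by the ingress controller, grouped by status code
sum(rate(nginx_ingress_controller_requests[5m])) by (status)
```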
Summary
I hope this post lays out the steps to enable and configure Azure Monitor managed service for Prometheus with AKS. Throughout the post, we leveraged the adoption of Prometheus (as well as Grafana) within the open source community to accelerate our ability to gain insights from our ingress controller, and that is really the value-add of using these tools. By aligning to community standards, you can move more quickly and inherit the integrations others have already developed.