Log Istio Requests with AKS and Azure Monitor
How to configure Istio to log all requests with AKS, Azure Monitor, and Container Insights
Building on some of my recent posts, I want to demonstrate how to customize Istio request logs and send them to Azure Monitor. We will capture the logs emitted at the ingress gateway as well as at the sidecar proxies.
A few prerequisites:
- AKS Cluster Deployed
- Log Analytics Workspace Deployed
- Istio - This one isn't strictly required; I deploy Istio below. If you already have Istio running, this post will show you the configuration to update the request logging
Step 1 - Deploy Container Insights to AKS
The easiest and quickest way to capture logs from your containers in AKS is Container Insights. Behind the scenes, Container Insights reads from stdout (equivalent to you running kubectl logs) and then sends those logs to your specified Log Analytics workspace.
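If the Container Insights add-on isn't already enabled on your cluster, the Azure CLI can turn it on; the cluster name, resource group, and workspace resource ID below are placeholders to substitute with your own:

```shell
# Enable the Container Insights (monitoring) add-on on an existing AKS cluster,
# pointing it at an existing Log Analytics workspace.
az aks enable-addons \
  --addons monitoring \
  --name <myCluster> \
  --resource-group <myResourceGroup> \
  --workspace-resource-id "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
```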
Step 2 - Apply MeshConfig Customization to IstioOperator
Before we deploy Istio to the cluster, we need to add the Envoy access log configuration to ensure access logging is enabled and to customize anything about the request log format.
The config to change the access log details is documented on the MeshConfig resource of the IstioOperator CR. Below I've captured the three properties you can customize: accessLogFile, accessLogEncoding, and accessLogFormat. At a minimum, the YAML shown below ensures you are logging the default request log format - you can simply copy the accessLog properties shown below into your own IstioOperator template:
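A minimal IstioOperator sketch along these lines - accessLogFile set to /dev/stdout is what routes the request logs to standard output, where Container Insights collects them; the encoding and format values shown are the defaults:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    # Write Envoy access logs to stdout so Container Insights can pick them up.
    accessLogFile: /dev/stdout
    # TEXT (the default) or JSON.
    accessLogEncoding: TEXT
    # Empty string means Istio's default request log format.
    accessLogFormat: ""
```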
We can apply this operator to the cluster and deploy a sample workload to see if we can capture the request logs. We'll come back to modifying the log format after we get the default working.
First, deploy Istio to the cluster with istioctl install -f <filename.yaml>, and then deploy the sample httpbin workload from the Istio repo. We will want to make sure we inject the sidecar into the httpbin workload and also hook it into the ingress gateway that was deployed when Istio was installed with the default profile:
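A sketch of those steps, assuming a namespace named httpbin and a local clone of the Istio release (the sample manifest paths follow the standard samples layout):

```shell
# Create a namespace with automatic sidecar injection enabled.
kubectl create namespace httpbin
kubectl label namespace httpbin istio-injection=enabled

# Deploy the httpbin sample plus its Gateway/VirtualService,
# run from the root of an Istio release clone.
kubectl apply -n httpbin -f samples/httpbin/httpbin.yaml
kubectl apply -n httpbin -f samples/httpbin/httpbin-gateway.yaml
```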
Once deployed, obtain the IP of your Ingress Gateway and run a few curl commands against the httpbin workload (assuming the ingress gateway was deployed with a LoadBalancer service type):
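For example (the service and namespace names assume the default Istio install):

```shell
# Grab the external IP of the ingress gateway's LoadBalancer service.
INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Send a few requests through the gateway to httpbin.
curl -s "http://$INGRESS_IP/get"
curl -s "http://$INGRESS_IP/status/200"
```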
Step 3 - Review Logs of Ingress Gateway and Sidecar Proxy
Now that we have a workload deployed, let's review the logs at both the ingress gateway and the sidecar proxy.
As a quick check, confirm the request logs appear by running kubectl logs against the ingress gateway:
Similarly, we should see request logs being emitted from the proxy running alongside the httpbin container. Notice that we specify the istio-proxy container running within the httpbin pod:
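Both checks can be run like this (deployment names assume the default profile and the httpbin sample):

```shell
# Request logs emitted by the ingress gateway.
kubectl logs -n istio-system deploy/istio-ingressgateway

# Request logs emitted by the sidecar proxy inside the httpbin pod;
# -c selects the istio-proxy container rather than httpbin itself.
kubectl logs -n httpbin deploy/httpbin -c istio-proxy
```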
For the real check, let's confirm that we see these logs in Azure Monitor. If we navigate to the Logs blade on the AKS Portal, we should be able to query the ContainerLogV2 table and see the same results:
// Example Query for httpbin Logs
ContainerLogV2
| where PodNamespace == 'httpbin' and ContainerName == 'istio-proxy'
| project TimeGenerated, PodName, LogMessage
| order by TimeGenerated desc
Step 4 - Apply Customization to Log Request Encoding
We are now capturing the request logs from the Istio proxy components (both the ingress gateway and sidecars). Now, if we want to customize the logs, we have two MeshConfig properties we can use: accessLogEncoding and accessLogFormat.
The accessLogEncoding property can take either TEXT or JSON, which changes how each log line is represented. The default is TEXT, so upon changing and reapplying the Istio mesh config with the value set to JSON, you can see that the difference is simply that each log line is emitted as a JSON object:
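For reference, the relevant portion of the IstioOperator spec is a one-line change:

```yaml
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    # Switch each access log line from plain text to a JSON object.
    accessLogEncoding: JSON
```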
The benefit of using JSON is that it may make it easier to query and parse the logs in Azure. For example, the query below is a nice way for me to capture all 200s produced by the ingress gateway:
// Example Query for Ingress Gateway Logs with 200
ContainerLogV2
| where PodNamespace == 'istio-system' and ContainerName == 'istio-proxy'
| project TimeGenerated, PodName, LogMessage
| extend request_message = parse_json(LogMessage)
| project TimeGenerated, PodName, LogMessage, response_code = request_message['response_code']
| where response_code == 200
Step 5 - Apply Customization to Log Request Format
Should we want to change what is actually logged, we can use the command operators specified in the Envoy access log documentation. For example, let's say we want to capture a unique header, which we'll call Test-Request. We will stick with the JSON encoding. Notice that I've changed the path I'm requesting on the httpbin pod to /headers, which echoes the request headers back in the response body:
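A sketch of the mesh config for this - the JSON keys here are illustrative labels of my choosing, while the values are real Envoy command operators; %REQ(TEST-REQUEST)% reads the Test-Request request header:

```yaml
spec:
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: JSON
    # Keys are arbitrary field names; values use Envoy command operators.
    accessLogFormat: |
      {
        "start_time": "%START_TIME%",
        "method": "%REQ(:METHOD)%",
        "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
        "response_code": "%RESPONSE_CODE%",
        "test_request_header": "%REQ(TEST-REQUEST)%"
      }
```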
Step 6 - Customizing the Ingress Gateway Request Logs and the Sidecar Request Logs Independently
Up to this point, any change we've made to the access logs has been applied mesh-wide to both the ingress gateway and sidecar proxies. However, what if we want to customize the ingress gateway and sidecars independently? This is where we need to use the concept of an Istio Extension Provider and the Telemetry API such that we can specify one format for the ingress gateway and another for the sidecars.
Let's revert our mesh config back to the default, since we may want to capture the usual request info on the ingress gateway, which may serve a number of different workloads. However, for the httpbin workload specifically, we care about the customized request header we used above. The IstioOperator YAML will need to look like the following, where we use the Envoy file access log provider:
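A sketch of that IstioOperator, registering an envoyFileAccessLog extension provider named sidecar-log (the provider name and the access_log_header label are illustrative choices):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    # Default mesh-wide access logging, which the ingress gateway keeps using.
    accessLogFile: /dev/stdout
    extensionProviders:
    - name: sidecar-log
      envoyFileAccessLog:
        path: /dev/stdout
        logFormat:
          # Using labels emits JSON; values are Envoy command operators.
          labels:
            response_code: "%RESPONSE_CODE%"
            access_log_header: "%REQ(TEST-REQUEST)%"
```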
At this point we still haven't told the sidecars to use the extension provider; all we've done is register the extension provider with Istio. Now we need to use the Telemetry API to designate the sidecar-log extension provider as the logging configuration for the httpbin workload. There are different scopes at which the Telemetry API can be applied to achieve different configurations - for example, applying it in the root configuration namespace (i.e. istio-system) would make the specified extension provider the default mesh-wide. In this case, let's apply it at the httpbin namespace scope so it only affects the httpbin pod:
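A sketch of that Telemetry resource, applied in the httpbin namespace and referencing the sidecar-log provider registered in the mesh config (the resource name is arbitrary):

```yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: httpbin-logging
  namespace: httpbin
spec:
  accessLogging:
  - providers:
    - name: sidecar-log
```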
Now, let's check the curl command output:
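For example, sending the custom header along with the request (the gateway IP retrieval assumes the default LoadBalancer service):

```shell
INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Include the custom Test-Request header and hit /headers,
# which echoes the request headers back in the response body.
curl -s -H "Test-Request: hello" "http://$INGRESS_IP/headers"
```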
If we look at the ingress gateway pod logs, we see the original default request log:
However, when we query against the httpbin sidecar, we can see our custom access_log_header property in the JSON, indicating that it took our Telemetry config:
The Telemetry API is the key to allowing for customized access log formats at different scopes within your mesh, enabling a variety of scenarios that may come up if you support multiple workloads with a single ingress gateway.
Summary
The goal of this blog is to help you customize request logs emitted by your Istio proxies (both ingress gateways and sidecars) and query them natively in Azure Monitor. From here, you can use dashboards, alerts, and all the other tools that Azure Monitor provides you to ensure your workloads are running smoothly.