Verifying mTLS in an Istio-enabled service mesh


Intro

During security audits it is often necessary to prove that workloads secure their network traffic with mTLS not only on paper, but also in practice. In this article we will use Istio's go-to setup for configuring mTLS [1] as a starting point and then monitor the traffic between workloads using Ksniff and Wireshark.

Infrastructure Setup

minikube

Throughout the article we will use minikube as a local Kubernetes cluster. Make sure to start minikube in multi-node mode (minikube start --nodes 2 -p multinode-demo) in order to avoid unwanted network address translation (NAT). When it was started in single-node mode, we (like others [2]) had issues with some calls being NAT'd.
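You can verify that both nodes are up before proceeding:

kubectl get nodes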

Istio 1.11.2

We installed Istio 1.11.2 using the default profile (istioctl install) and then followed Istio's mTLS migration guide [1] to set up the foo, bar and legacy namespaces with their corresponding pods. Also install Wireshark and Ksniff, as these will be used to intercept and analyze the network traffic.
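If you are following along, the namespace setup from the guide boils down to the commands below (a sketch based on [1]; the sample manifests ship with the Istio release archive):

kubectl create ns foo
kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n foo
kubectl create ns bar
kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n bar
kubectl apply -f <(istioctl kube-inject -f samples/sleep/sleep.yaml) -n bar
kubectl create ns legacy
kubectl apply -f samples/sleep/sleep.yaml -n legacy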

Intercepting plain HTTP traffic using Ksniff

If Istio's mTLS example is running properly in the cluster, the communication between the workloads of the foo and legacy namespaces can be visualized as below:

Workload to legacy communication

The current setup somewhat resembles a typical scenario, where an old legacy service running on a VM needs to communicate with a new application (microservice) that runs in a Kubernetes cluster. In this case the new containerized application is the httpbin container running in the foo namespace. Since it is part of our service mesh, the istio-proxy sidecar container was injected into its pod. Its job is to intercept all traffic going in and out of the httpbin container.
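You can confirm that the sidecar was injected by listing the pods in the foo namespace; the READY column should show 2/2 for the httpbin pod (the application container plus istio-proxy):

kubectl get pods -n foo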

Let's make contact from the sleep container to httpbin!

Get a shell to the running sleep container: kubectl exec -it -n legacy sleep-557747455f-xsr4v -- sh (Your pod name will be different; to find it, use kubectl get pods -n legacy.)

In another terminal window query the details of the httpbin service: kubectl get svc -n foo. We see that it listens on port 8000.

NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
httpbin   ClusterIP   10.109.97.12   <none>        8000/TCP   11m
sleep     ClusterIP   10.97.76.208   <none>        80/TCP     10m

From the sleep container execute the following command: curl httpbin.foo:8000/ip. The /ip endpoint of the httpbin service returns the requester's IP address. If you run kubectl get pods -n legacy -o wide, you will see that the IP of the legacy pod is not 127.0.0.6. Instead, httpbin reports 127.0.0.6 because, as of Istio 1.11.2 [3], the istio-proxy sidecar (built on Envoy) uses 127.0.0.6 as the source address when forwarding inbound traffic to the application container.

{
  "origin": "127.0.0.6"
}

Let's intercept the traffic on the httpbin pod: kubectl sniff httpbin-74fb669cc6-4chnj -n foo. (For this to work, make sure Wireshark is on your PATH. On macOS this typically means: export PATH=/Applications/Wireshark.app/Contents/MacOS:$PATH)
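Once the Wireshark window opens, it helps to narrow the capture down to the relevant packets with a display filter, for example:

http || tls

This hides the surrounding cluster chatter and leaves exactly the two kinds of traffic we want to compare: readable HTTP messages and opaque TLS records.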

Legacy to httpbin traffic sniffing using Wireshark and ksniff

Now run curl httpbin.foo:8000/ip from the shell attached to the legacy sleep container again and observe the traffic in Wireshark!

If we take a closer look at the intercepted traffic, we can identify the following parts:

Analyzing packets in unencrypted communication using Wireshark
  1. Three-way TCP handshake between the client (sleep) and the istio-proxy container of the server (httpbin-74fb669cc6-4chnj) pod.
  2. The client sends the /ip HTTP GET request, which is received and acknowledged by the istio-proxy sidecar.
  3. istio-proxy initiates a TCP connection to the httpbin container (another three-way TCP handshake).
  4. istio-proxy repeats the same /ip request to the httpbin container, which is then acknowledged.
  5. httpbin sends its response to istio-proxy. This is plain HTTP; the content can be read in Wireshark.
  6. istio-proxy wraps the content and sends the message back to the client. The message is not encrypted.
  7. The connection with the client is terminated.

The most important packet is the one where istio-proxy sends the unencrypted response back to the caller. In a properly secured mTLS setup this should never happen.

Raw http response

Intercepting traffic sent over mTLS using Ksniff

Next we repeat the same process, but this time we call httpbin from a sleep container that lives in the same namespace (foo).


The difference from the earlier case is that the client is now also part of the service mesh and has its own istio-proxy sidecar. This is the standard scenario for service-to-service communication in a service mesh.
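The steps mirror the earlier ones, only the client namespace changes. A quick way to get a shell without looking up the pod name (assuming the sleep sample's app=sleep label):

kubectl exec -it -n foo $(kubectl get pod -n foo -l app=sleep -o jsonpath='{.items[0].metadata.name}') -- sh

Then run curl httpbin.foo:8000/ip again while kubectl sniff is attached to the httpbin pod.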

If we take a closer look at the sniffed traffic we can identify the following differences:

Analyzing packets in encrypted communication using Wireshark
  1. When the request arrives at the pod, we are not able to see its content.
  2. We first see the content when istio-proxy decrypts the message and proxies it to the httpbin container (HTTP GET /ip).
  3. We see httpbin's response being sent to the istio-proxy sidecar, but once the message is wrapped and encrypted, the traffic leaving the pod can no longer be read.
mTLS TCP response

Wrapping up

In this article we have covered how to sniff and analyze traffic using Ksniff. We have seen that Istio by default uses mTLS between sidecar-enabled workloads, but as long as it is not explicitly prohibited, it also accepts plain HTTP traffic. This can be changed using a PeerAuthentication policy.
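A minimal sketch of such a policy, enforcing strict mTLS for the whole foo namespace (after applying it, the plain-text call from the legacy sleep pod should be rejected):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo   # applies to every workload in the foo namespace
spec:
  mtls:
    mode: STRICT   # plain-text connections are rejected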


  1. https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/
  2. https://rocky-chen.medium.com/minikube-nats-source-ip-address-of-pods-for-services-with-clusterip-e2b1a77f483a
  3. https://github.com/istio/istio/issues/29603#issuecomment-748266875