Service Mesh

What is a "Service Mesh", and how does it help when working with distributed (cloud-native) applications?

In this blog post, I will demonstrate how to use a Service Mesh, such as Istio, with a .NET Aspire-based sample application.

What is Service Mesh?

A service mesh is a decentralized application-networking infrastructure that allows applications to be secure, resilient, observable, and controllable.

What is Istio?

Istio is an open source service mesh that layers transparently onto existing distributed applications. Its features provide a uniform and efficient way to secure, connect, and monitor services, bringing load balancing, service-to-service authentication, and monitoring with few or no changes to service code. Istio extends Kubernetes to establish a programmable, application-aware network. Working with both Kubernetes and traditional workloads, it brings standard, universal traffic management, telemetry, and security to complex deployments.

How does it work?

Istio uses a proxy to intercept all of your network traffic, enabling a broad set of application-aware features based on the configuration you set.
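
Once istioctl is installed and Istio is running in the cluster (both are covered below), you can preview what this looks like by rendering a sidecar-injected manifest locally. This is only an illustration; the file path is an example based on the DEMO app layout.

# Preview the sidecar-injected version of a manifest; nothing is applied to the cluster
istioctl kube-inject -f ./apiservice/deployment.yml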

What are Service Mesh Addons?

The addons integrate with Istio to provide additional functionality, such as:

  • cert-manager
  • Grafana
  • Jaeger
  • Kiali
  • Prometheus
  • SPIRE
  • Apache SkyWalking
  • Zipkin

Prerequisites

For detailed instructions on creating the DEMO App and deploying it on a Kubernetes cluster, you can refer to my blog posts, .NET Aspire and Aspirate (Aspir8) and Demystifying Kubernetes for developers.

NOTE: I have used a KinD (Kubernetes in Docker) multi-node cluster. You can use any Kubernetes cluster, such as KinD, Docker Desktop, Rancher Desktop, K3s, or Minikube.
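
If you want to spin up a multi-node KinD cluster quickly, a minimal configuration looks roughly like the snippet below. The cluster name and node count are only examples; refer to the linked posts for the exact setup used with the DEMO app.

# Example only: create a 3-node KinD cluster (1 control-plane + 2 workers)
cat <<EOF | kind create cluster --name aspire-demo --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF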

Install istioctl CLI

istioctl is a command-line tool used for managing and interacting with Istio, a popular service mesh platform that provides a way to manage, secure, and observe microservices within a Kubernetes environment.

# Run this command to install istioctl
brew install istioctl
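
If you are not using Homebrew, Istio's official download script is an alternative way to get istioctl; it downloads the latest release into a local istio-<version> folder whose bin directory you then add to your PATH.

# Alternative: download the latest Istio release and put istioctl on the PATH
curl -L https://istio.io/downloadIstio | sh -
export PATH="$PWD/istio-1.22.3/bin:$PATH"   # adjust the folder name to the downloaded version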

Preparation of installation/un-installation scripts for Istio and Addons

Create a script file named 01-install-istio.sh with the following content. It is used to create a namespace and install Istio, including the Prometheus, Grafana, Kiali, and Jaeger addons.

ISTIO_VERSION="1.22"
 
kubectl create namespace istio-system
istioctl install --set profile=demo -y
 
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-$ISTIO_VERSION/samples/addons/prometheus.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-$ISTIO_VERSION/samples/addons/grafana.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-$ISTIO_VERSION/samples/addons/kiali.yaml
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-$ISTIO_VERSION/samples/addons/jaeger.yaml
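
Optionally, once the script has run, you can check that the installation and the addons came up cleanly before moving on:

# Optional: verify the Istio installation and wait for all istio-system pods to become ready
istioctl verify-install
kubectl wait --for=condition=Ready pods --all -n istio-system --timeout=300s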

Create a script named 02-uninstall-istio.sh with the following content. It is used to remove the Prometheus, Grafana, Kiali, and Jaeger addons, uninstall Istio, and delete the namespace.

ISTIO_VERSION="1.22"
 
kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-$ISTIO_VERSION/samples/addons/prometheus.yaml
kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-$ISTIO_VERSION/samples/addons/grafana.yaml
kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-$ISTIO_VERSION/samples/addons/kiali.yaml
kubectl delete -f https://raw.githubusercontent.com/istio/istio/release-$ISTIO_VERSION/samples/addons/jaeger.yaml
 
istioctl uninstall --purge -y
 
kubectl delete namespace istio-system

Preparation of start/stop scripts for dashboards

Create a script named 03-start-dashboards.sh with the following content. It is used to start K8S, Grafana, Kiali and Jaeger dashboards.

# K8S dashboard
kubectl proxy > /dev/null &
 
# Grafana dashboard
kubectl -n istio-system port-forward svc/grafana 3000 > /dev/null &
 
# Kiali dashboard
kubectl -n istio-system port-forward svc/kiali 20001 > /dev/null &
 
# Jaeger dashboard
podName=$(kubectl get pods -n istio-system --selector=app=jaeger -o=jsonpath='{.items..metadata.name}')
kubectl -n istio-system port-forward pod/$podName 16686 > /dev/null &
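
Since the dashboards are exposed through background kubectl processes, a quick way to confirm they are still running is to list them:

# List the kubectl proxy / port-forward processes started by the script above
ps aux | grep "[k]ubectl"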

Create a script named 04-stop-dashboards.sh with the following content. It is used to stop the K8S, Grafana, Kiali, and Jaeger dashboards by terminating the background kubectl processes started by 03-start-dashboards.sh.

# Function to stop a background dashboard process (started by 03-start-dashboards.sh) if it is running
stop_dashboard() {
    local name=$1
    local pattern=$2
    if pgrep -f "$pattern" > /dev/null; then
        echo "$name is running"
        pkill -f "$pattern"
        echo "$name stopped"
    else
        echo "$name is not running"
    fi
}
 
# Check and stop the K8S dashboard proxy
stop_dashboard "K8S dashboard" "kubectl proxy"
 
# Check and stop the Istio Grafana dashboard port-forward
stop_dashboard "Grafana dashboard" "port-forward svc/grafana"
 
# Check and stop the Istio Kiali dashboard port-forward
stop_dashboard "Kiali dashboard" "port-forward svc/kiali"
 
# Check and stop the Istio Jaeger dashboard port-forward
stop_dashboard "Jaeger dashboard" "port-forward pod/jaeger"

Preparation of start/stop scripts for load test

Create a script named 05-start-loadtest.ps1 with the following content. It is used to generate load using various URIs.

param (
    [Parameter(Mandatory=$true)][string]$jobName
)
 
$uris = Get-Content -Path loadtest-uris.txt
 
echo 'Starting load-test for:'
echo $uris
 
foreach ($uri in $uris) {
    Start-Job -Name $jobName -ArgumentList $uri -ScriptBlock {
        param($uri)
 
        for (;;) {
            # execute request
            echo "Execute request: $uri"
            $progressPreference = 'SilentlyContinue'
            Invoke-WebRequest -Headers @{"Cache-Control"="no-cache"} $uri -ErrorAction SilentlyContinue | Out-Null
            $progressPreference = 'Continue'
 
            # random delay
            $delay = Get-Random -Minimum 100 -Maximum 2000
            echo "Wait $delay ms. ..."
            Start-Sleep -Milliseconds $delay
        }
    }
}

Create a script named 06-stop-loadtest.ps1 with the following content. It is used to remove load generation jobs.

param (
    [Parameter(Mandatory=$true)][string]$jobName
)
 
Stop-Job -Name $jobName
Remove-Job -Name $jobName

Create a text file named loadtest-uris.txt with the following content. It is read by 05-start-loadtest.ps1 to get the URIs to generate load against.

http://localhost:8080
http://localhost:8080/counter
http://localhost:8080/weather
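
Once the port-forward to the webfrontend service is in place (set up later in the run sequence), a quick smoke test of one of these URIs could look like this; it should print 200 when the app is reachable:

# Smoke test: print only the HTTP status code for the weather page
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/weather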

Preparation of Service Account & Cluster Role Binding

Create a file named 07-dashboard-adminuser.yaml with the following content. It is used to create a ServiceAccount and a ClusterRoleBinding. It is only needed if you want to access the Kubernetes dashboard.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard

Preparation of Istio ConfigMap

Create a file named istio-configmap.yaml with the following content. It is used to enable tracing for Jaeger, which the default installation of the addons doesn't enable. You can export the current settings with the command below and then update the tracing section under mesh.defaultConfig as shown in the ConfigMap that follows.

# Export the current ConfigMap into istio-configmap.yaml for editing
kubectl get configmap istio -n istio-system -o yaml > istio-configmap.yaml

apiVersion: v1
data:
  mesh: |-
    accessLogFile: /dev/stdout
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
      tracing:
        sampling: 100.0
        zipkin:
          address: jaeger-collector.istio-system:9411      
    defaultProviders:
      metrics:
      - prometheus
    enablePrometheusMerge: true
    extensionProviders:
    - envoyOtelAls:
        port: 4317
        service: opentelemetry-collector.observability.svc.cluster.local
      name: otel
    - name: skywalking
      skywalking:
        port: 11800
        service: tracing.istio-system.svc.cluster.local
    - name: otel-tracing
      opentelemetry:
        port: 4317
        service: opentelemetry-collector.observability.svc.cluster.local
    rootNamespace: istio-system
    trustDomain: cluster.local
  meshNetworks: "networks: {}"
kind: ConfigMap
metadata:
  creationTimestamp: "2024-07-31T10:46:41Z"
  labels:
    install.operator.istio.io/owning-resource: installed-state
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio.io/rev: default
    operator.istio.io/component: Pilot
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.22.3
    release: istio
  name: istio
  namespace: istio-system
  resourceVersion: "20560"
  uid: 4ae7cd79-bb32-4ec7-8e85-ab645da9af2a
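
After the updated ConfigMap has been applied and istiod restarted (see the run sequence below), you can read the mesh configuration back to confirm that the tracing settings were picked up:

# Confirm the tracing/sampling settings are present in the mesh config
kubectl get configmap istio -n istio-system -o jsonpath='{.data.mesh}' | grep -A 3 "tracing:"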

Enabling Istio Injection

Create a file named default-namespace-istio.yaml with the following content. It is used to enable istio-injection for the default namespace.

apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled
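
Alternatively, the same result can be achieved with a single kubectl command instead of applying a manifest:

# Equivalent: label the default namespace for automatic sidecar injection
kubectl label namespace default istio-injection=enabled --overwrite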

DEMO App Code Structure

The images below illustrate the structure of the DEMO application, encompassing all scripts & projects.

Enable Sidecar Injection

You need to update deployment.yml for apiservice and webfrontend to enable sidecar injection by adding the sidecar.istio.io/inject: "true" annotation to the pod template, as shown below.

# apiservice->deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apiservice
spec:
  minReadySeconds: 60
  replicas: 1
  selector:
    matchLabels:
      app: apiservice
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: apiservice
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: apiservice
          image: localhost:5000/apiservice:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 8443
          envFrom:
            - configMapRef:
                name: apiservice-env
      terminationGracePeriodSeconds: 180
# webfrontend->deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webfrontend
spec:
  minReadySeconds: 60
  replicas: 1
  selector:
    matchLabels:
      app: webfrontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: webfrontend
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
        - name: webfrontend
          image: localhost:5000/webfrontend:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
            - containerPort: 8443
          envFrom:
            - configMapRef:
                name: webfrontend-env
      terminationGracePeriodSeconds: 180

Let's start the action

Until now, you have been preparing the environment: all the necessary scripts, the DEMO App, and the Kubernetes cluster. You can now run the scripts in the sequence below:

# Install Istio and Addons
./01-install-istio.sh
 
# Enable istio injection for the default namespace
kubectl apply -f default-namespace-istio.yaml
 
# Enable Jaeger tracing
kubectl apply -f istio-configmap.yaml
kubectl rollout restart deployment istio-ingressgateway -n istio-system
kubectl rollout restart deployment istiod -n istio-system
 
# Deploy apiservice & webfrontend
aspirate apply
 
# Port-forward to access the DEMO App
kubectl port-forward service/webfrontend 8080:8080
 
# Install the Kubernetes dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
 
# Create the admin-user ServiceAccount and its ClusterRoleBinding
kubectl apply -f 07-dashboard-adminuser.yaml
 
# Start Dashboards
./03-start-dashboards.sh
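
At this point everything should be deployed. A quick way to spot anything that has not reached a Running state yet (the header row will always be printed) is:

# List pods across all namespaces that are not Running or Completed
kubectl get pods -A | grep -vE "Running|Completed"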

Verify state of Kubernetes cluster

You can run the following commands to check the status of all pods, services, deployments, and replica sets.

kubectl get all -n istio-system
 
NAME                                       READY   STATUS    RESTARTS        AGE
pod/grafana-657df88ffd-b5r78               1/1     Running   1 (4m22s ago)   4h49m
pod/istio-egressgateway-5d57c5bd5d-6ft6r   1/1     Running   1 (4m22s ago)   4h49m
pod/istio-ingressgateway-c5db84cf7-t994v   1/1     Running   1 (4m22s ago)   4h39m
pod/istiod-c7d7bcf5f-fk4nc                 1/1     Running   1 (4m22s ago)   4h39m
pod/jaeger-697d898d6-zdgv2                 1/1     Running   2 (4m22s ago)   4h48m
pod/kiali-5446b88647-dsl5d                 1/1     Running   1               4h48m
pod/prometheus-777db476b6-7rbtk            2/2     Running   2 (4m22s ago)   4h49m
 
NAME                           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                      AGE
service/grafana                ClusterIP      10.96.132.44    <none>        3000/TCP                                                                     4h49m
service/istio-egressgateway    ClusterIP      10.96.161.152   <none>        80/TCP,443/TCP                                                               4h49m
service/istio-ingressgateway   LoadBalancer   10.96.139.246   <pending>     15021:30906/TCP,80:32103/TCP,443:31589/TCP,31400:30421/TCP,15443:31362/TCP   4h49m
service/istiod                 ClusterIP      10.96.175.40    <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP                                        4h49m
service/jaeger-collector       ClusterIP      10.96.179.117   <none>        14268/TCP,14250/TCP,9411/TCP,4317/TCP,4318/TCP                               4h48m
service/kiali                  ClusterIP      10.96.27.25     <none>        20001/TCP,9090/TCP                                                           4h49m
service/prometheus             ClusterIP      10.96.68.29     <none>        9090/TCP                                                                     4h49m
service/tracing                ClusterIP      10.96.154.67    <none>        80/TCP,16685/TCP                                                             4h48m
service/zipkin                 ClusterIP      10.96.76.170    <none>        9411/TCP                                                                     4h48m
 
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana                1/1     1            1           4h49m
deployment.apps/istio-egressgateway    1/1     1            1           4h49m
deployment.apps/istio-ingressgateway   1/1     1            1           4h49m
deployment.apps/istiod                 1/1     1            1           4h49m
deployment.apps/jaeger                 1/1     1            1           4h48m
deployment.apps/kiali                  1/1     1            1           4h48m
deployment.apps/prometheus             1/1     1            1           4h49m
 
NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-657df88ffd                1         1         1       4h49m
replicaset.apps/istio-egressgateway-5d57c5bd5d    1         1         1       4h49m
replicaset.apps/istio-ingressgateway-75fdb574c4   0         0         0       4h49m
replicaset.apps/istio-ingressgateway-c5db84cf7    1         1         1       4h39m
replicaset.apps/istiod-5875f7686d                 0         0         0       4h49m
replicaset.apps/istiod-c7d7bcf5f                  1         1         1       4h39m
replicaset.apps/jaeger-697d898d6                  1         1         1       4h48m
replicaset.apps/kiali-5446b88647                  1         1         1       4h48m
replicaset.apps/prometheus-777db476b6             1         1         1       4h49m
 
kubectl get all -n default
 
NAME                               READY   STATUS    RESTARTS        AGE
pod/apiservice-7486f48c57-8ngf5    2/2     Running   2 (5m38s ago)   4h39m
pod/webfrontend-799d98cfb9-7gt6w   2/2     Running   2 (5m38s ago)   4h39m
 
NAME                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
service/apiservice    ClusterIP   10.96.95.136   <none>        8080/TCP,8443/TCP   4h39m
service/kubernetes    ClusterIP   10.96.0.1      <none>        443/TCP             8h
service/webfrontend   ClusterIP   10.96.12.54    <none>        8080/TCP,8443/TCP   4h39m
 
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apiservice    1/1     1            1           4h39m
deployment.apps/webfrontend   1/1     1            1           4h39m
 
NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/apiservice-7486f48c57    1         1         1       4h39m
replicaset.apps/webfrontend-799d98cfb9   1         1         1       4h39m

NOTE: The apiservice and webfrontend pods show a READY status of 2/2 because the Istio sidecar (istio-proxy) container runs alongside the application container in each pod.
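
If you want to confirm what the second container is, you can list the container names in one of the pods; with injection enabled the output should include istio-proxy next to the application container:

# Print the container names of the webfrontend pod
kubectl get pods -n default -l app=webfrontend -o jsonpath='{.items[0].spec.containers[*].name}'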

kubectl get all -n kubernetes-dashboard
 
NAME                                             READY   STATUS    RESTARTS      AGE
pod/dashboard-metrics-scraper-795895d745-ll2mg   1/1     Running   1 (14m ago)   4h32m
pod/kubernetes-dashboard-56cf4b97c5-6bz6j        1/1     Running   1 (14m ago)   4h32m
 
NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/dashboard-metrics-scraper   ClusterIP   10.96.187.250   <none>        8000/TCP   4h32m
service/kubernetes-dashboard        ClusterIP   10.96.104.193   <none>        443/TCP    4h32m
 
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           4h32m
deployment.apps/kubernetes-dashboard        1/1     1            1           4h32m
 
NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-795895d745   1         1         1       4h32m
replicaset.apps/kubernetes-dashboard-56cf4b97c5        1         1         1       4h32m

Accessing Dashboards

You can access the Grafana dashboard at http://localhost:3000. It helps you monitor the health of Istio and of the applications within the service mesh.

You can access the Kiali dashboard at http://localhost:20001. It is an observability console for Istio with service mesh configuration and validation capabilities. It helps you understand the structure and health of your service mesh by monitoring traffic flow to infer the topology and report errors.

You can access the Jaeger dashboard at http://localhost:16686. It is an end-to-end distributed tracing system that allows users to monitor and troubleshoot transactions in complex distributed systems.

The Kubernetes dashboard is a general-purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself. You can access it at:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

# Use this command to create a token to log in to the dashboard
kubectl -n kubernetes-dashboard create token admin-user

Load Generation

You can use the commands mentioned below to start and stop generating load.

# start loadtest
./05-start-loadtest.ps1 load
 
# stop loadtest
./06-stop-loadtest.ps1 load

Clean-up

# stop all dashboards
./04-stop-dashboards.sh
 
# Remove the admin ServiceAccount and ClusterRoleBinding
kubectl -n kubernetes-dashboard delete serviceaccount admin-user
kubectl -n kubernetes-dashboard delete clusterrolebinding admin-user
 
# delete pods, svc & deployments
aspirate destroy
 
# uninstall istio and addons
./02-uninstall-istio.sh

Happy Coding & Learning...