How to deploy SQL Server containers to a Kubernetes cluster for high availability?

In this blog we will learn how to deploy a SQL Server container on Azure Kubernetes Service (AKS) with high availability. We will use the persistent storage feature of Kubernetes to add resiliency to the solution. In this scenario, if the SQL Server instance fails, Kubernetes automatically re-creates it in a new pod and attaches it to the persistent volume. It also provides protection from node failure by re-creating the pod on another node. If you are new to Kubernetes, we will start by understanding its basic terminology and architecture.

Here is the basic terminology used in Kubernetes.

Pools: Groups of nodes with identical configurations.
Nodes: Individual VMs running containerized applications.
Pods: A single instance of an application. A pod can contain multiple containers.
Deployment: One or more identical pods managed by Kubernetes.
Manifest: A YAML file describing a deployment.

This is depicted in the diagram below:

SQL Server deployment

We will use the following configuration file to deploy SQL Server.

# Create the SQL Server deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 10
      securityContext:
        fsGroup: 1000
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2019-latest
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_PID
          value: "Developer"
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql-secret
              key: SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: mssql-data

The file is largely self-explanatory: it creates a single-replica deployment named mssql-deployment, reads the sa password from the mssql-secret Kubernetes secret, and mounts the persistent volume claim mssql-data at /var/opt/mssql, which is where SQL Server on Linux stores its databases.

Deploy the Persistent Volume and Persistent Volume Claim

We will use this file to create the PV (persistent volume) and PVC (persistent volume claim):

# Create the storage class and persistent volume claim
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-disk
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data
  annotations:
    volume.beta.kubernetes.io/storage-class: azure-disk
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

The storage class and PVC definitions are self-explanatory: the claim is provisioned from an Azure managed disk and requests 8 Gi of storage.

Deploy the Load Balancer Service

We will deploy the load balancing service with this configuration.

# Create the load balancing service
apiVersion: v1
kind: Service
metadata:
  name: mssql-service
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: LoadBalancer

This is the equivalent of the SQL Server service running on a standalone SQL Server installation. SQL Server always listens on TCP port 1433, so we use the same port in the load balancer service.
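Once the service has an external IP, connectivity can be checked with a quick sqlcmd call. This is a sketch, not output from the walkthrough: the IP address and password below are placeholders, and sqlcmd must be installed on your client machine.

```shell
# Hypothetical example: connect through the load balancer with sqlcmd.
# Replace 20.30.40.50 with the EXTERNAL-IP reported by `kubectl get services`
# and the password with your own sa password.
sqlcmd -S 20.30.40.50,1433 -U sa -P 'Welcome@0001234567' -Q "SELECT @@SERVERNAME"
```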

Basic Architecture of the Kubernetes Deployment

All the apps connect to the load balancing service, and the service routes traffic to the pod. The pod connects to the PVC and PV, and the PV is backed by an Azure disk.

What happens when a container fails?

When a container fails, Kubernetes creates a new pod and connects it to the persistent volume storage. Since the previous container already saved its data to the persistent volume, the application continues seamlessly.

What happens when a node fails?

Now suppose a node fails. In that case Kubernetes re-creates the pod on a different node and connects it to the persistent volume storage.
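One way to rehearse a node failure without actually breaking a VM is to drain the node. This is a sketch under assumptions: the node name below is a placeholder, and you should substitute the real name from your own `kubectl get pods -o wide` output.

```shell
# Observe which node the mssql pod is currently scheduled on
kubectl get pods -o wide

# Simulate a node failure by cordoning and draining that node
# (substitute your actual node name)
kubectl drain aks-nodepool1-12345678-0 --ignore-daemonsets --delete-emptydir-data

# The pod is rescheduled on the surviving node and reattaches the persistent volume
kubectl get pods -o wide
```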

Azure CLI to create the AKS cluster and test the failure

You can use the following CLI code to create SQL Server on AKS and test it. I have provided extra commands at the end for troubleshooting. Copy this file into VS Code, save it with the .azcli extension, and run it in a bash shell line by line. Keep the three YAML files in the same folder; their content is provided above. The code is self-explanatory since I have provided comments throughout.

# Login to Azure
az login
# Create an Azure resource group
az group create --name SQL-RG --location westus
# Create a two node cluster

az aks create --resource-group SQL-RG --name SQLSVR --node-count 2 --generate-ssh-keys --node-vm-size=Standard_B4ms

# Get credentials for the cluster
az aks get-credentials --resource-group SQL-RG --name SQLSVR

# List nodes
kubectl get nodes

# Create the load balancing service
kubectl apply -f sqlloadbalancer.yaml --record

# Create external storage with PV and PVC
kubectl apply -f sqlstorage.yaml --record

# Display the persistent volume and claim
kubectl get pv
kubectl get pvc

# Optional: if you want to explore the different available storage classes, run this line; otherwise skip it
kubectl get storageclass

# Use a Kubernetes secret to store the required sa password for the SQL Server container. This is a best practice.
# If you want to delete a previously created secret, use this command; otherwise skip it and go to the next line
kubectl delete secret mssql-secret

# Use a complex password
kubectl create secret generic mssql-secret --from-literal=SA_PASSWORD="Welcome@0001234567"

# Deploy the SQL Server 2019 container
kubectl apply -f sqldeployment.yaml --record

# List the running pods and services
kubectl get pods
kubectl get services

# To fetch details about the pod
kubectl describe pod mssql

# Copy the sample database to the pod
# You can download the AdventureWorks2014.bak file from this URL

# Use the curl command to download the database if you are using Linux; otherwise use the direct download link
# curl -L -o AdventureWorks2014.bak ""

# Retrieve the pod name into a variable
podname=$(kubectl get pods | grep mssql | awk '{print $1}')
#Display the variable name
echo $podname
# Copy the backup file to the pod in AKS. On Linux, SQL Server stores data under /var/opt/mssql/data/,
# so we build the destination as <podname>:/var/opt/mssql/data/
fullpath=${podname}:/var/opt/mssql/data/AdventureWorks2014.bak
# Just to verify the path
echo $fullpath
# Echo what we are doing
echo Copying AdventureWorks2014 database to pod $podname

# Remember to specify the full path to the backup file if it is in a different directory; otherwise this works from the current directory
kubectl cp AdventureWorks2014.bak ${fullpath}

# Connect to the SQL Server pod with Azure Data Studio
# Retrieve the external IP address of the service
ip=$(kubectl get service mssql-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $ip
# Simulate a failure by killing the pod. Deleting the pod does exactly that.
kubectl delete pod ${podname}

# Wait three seconds
echo Waiting 3 seconds to show the newly started pod
sleep 3

# Now retrieve the running pods; the pod name is different because
# Kubernetes re-created the pod after we deleted the earlier one
echo Retrieving running pods
kubectl get pods

# Get all of the running components
kubectl get all

# For troubleshooting, you can use this command to view the pod's events

kubectl describe pod -l app=mssql

# Display the container logs
kubectl logs -l app=mssql

Failure testing

For failure testing, we first copy the database backup to the AKS cluster with the kubectl cp command and then restore it inside the container.
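The restore itself can be run with sqlcmd inside the pod. This is a minimal sketch under assumptions: the backup was copied to /var/opt/mssql/data/ as shown earlier, the sa password matches the secret we created, and the logical file names follow the AdventureWorks2014 defaults.

```shell
# Run the restore inside the SQL Server container ($podname was retrieved earlier)
kubectl exec -it $podname -- /opt/mssql-tools/bin/sqlcmd \
  -S localhost -U sa -P 'Welcome@0001234567' \
  -Q "RESTORE DATABASE AdventureWorks2014 \
      FROM DISK = '/var/opt/mssql/data/AdventureWorks2014.bak' \
      WITH MOVE 'AdventureWorks2014_Data' TO '/var/opt/mssql/data/AdventureWorks2014.mdf', \
           MOVE 'AdventureWorks2014_Log'  TO '/var/opt/mssql/data/AdventureWorks2014_log.ldf'"
```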

Now when we connect to SQL Server and run a query, it works fine.

The same query still works after we delete the pod, because Kubernetes re-creates a new pod and attaches it to the PVC.

I hope you enjoyed this blog and that it will be useful for you to implement in your environment.
