How to configure Azure Load Balancer with Azure CLI?


Recently I came across a situation where we needed to create an Azure load balancer for a customer as quickly as possible to test some Azure features. I developed an Azure CLI script for this so it can be reused in the future. Here we will go through the script. I have also created a video to showcase the Azure Load Balancer functionality.

What is Azure Load Balancer?

The Azure Load Balancer delivers high availability and network performance to your applications. The load balancer distributes inbound traffic to backend resources using load balancing rules and health probes.

The Azure load balancer has four components:

1. Frontend IP address: This is the entry point to the load balancer. Client applications access the load balancer through the frontend IP address.
2. Health probe rules: These rules are used to verify the health status of the VMs. If a VM is found to be in an unhealthy state, the load balancer stops routing traffic to it and sends requests to the healthy VMs instead.
3. Backend pool: This is the pool of virtual machines across which the load is balanced. If any machine becomes unhealthy or unavailable, the load is transferred to the other machines.
4. Load balancing rules: The configuration of the load balancer, specifying the external and internal ports, the backend pool, and the health probe to use.

There are two types of load balancers.

1. Public load balancer: A public load balancer maps the public IP address and port number of incoming traffic to the private IP address and port number of the VM, and vice versa for the response traffic from the VM. By applying load-balancing rules, you can distribute specific types of traffic across multiple VMs or services. For example, you can distribute the load of incoming web request traffic across multiple web servers.

2. Internal load balancer: An internal load balancer directs traffic only to resources that are inside a virtual network or that use a VPN to access Azure infrastructure. The frontend IP addresses and virtual networks are never directly exposed to an internet endpoint. Internal line-of-business applications run in Azure and are accessed from within Azure or from on-premises resources. For example, an internal load balancer could receive database requests that need to be distributed to backend SQL servers.

Here we will implement a public load balancer. Below are the step-by-step instructions for deploying it.

Step 1: Log in to the Azure environment. The Azure CLI needs to be logged in to your Azure environment before it can create any resources.

#Login to Azure Environment 
az login
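
If your account has access to more than one subscription, it can be worth confirming which subscription the resources will land in before creating anything. A small optional sketch (the subscription name below is only a placeholder):

#Optional: list your subscriptions and select the one to deploy into
az account list --output table
az account set --subscription "<your-subscription-name-or-id>"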

Step 2: Create a resource group. This resource group will host all the load balancer components.

#Create a resource group
az group create `
--name myResourceGroupSLB `
--location westeurope

Step 3: Create a frontend public IP address. This will be the entry point for applications trying to access the load balancer; all requests coming from outside will hit this IP address.

#Create a zone redundant public IP Standard
az network public-ip create `
--resource-group myResourceGroupSLB `
--name myPublicIP `
--sku Standard

Step 4: Create a load balancer. While creating the load balancer, specify the public IP address created in Step 3. This command also creates the frontend IP configuration and the backend pool.

#Create Azure Standard Load Balancer
az network lb create `
--resource-group myResourceGroupSLB `
--name myLoadBalancer `
--public-ip-address myPublicIP `
--frontend-ip-name myFrontEnd `
--backend-pool-name myBackEndPool `
--sku Standard
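
To confirm that the frontend IP configuration and the backend pool were created as expected, you can list them. This is an optional check and not part of the original script:

#Optional: verify the frontend IP configuration and backend pool
az network lb frontend-ip list `
--resource-group myResourceGroupSLB `
--lb-name myLoadBalancer `
--output table
az network lb address-pool list `
--resource-group myResourceGroupSLB `
--lb-name myLoadBalancer `
--output table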

Step 5: Create a health probe to check the health status of the VMs. In our environment we are probing port 80 on the VMs, and the incoming port from the internet is also port 80, but you can change this based on your configuration.

#Create health probe on port 80
az network lb probe create `
--resource-group myResourceGroupSLB `
--lb-name myLoadBalancer `
--name myHealthProbe `
--protocol tcp `
--port 80

Step 6: Create a load balancer rule. Specify the frontend and backend ports, and tie together the backend pool and the health probe.

#Create load balancer rule for port 80
az network lb rule create `
--resource-group myResourceGroupSLB `
--lb-name myLoadBalancer `
--name myLoadBalancerRuleWeb `
--protocol tcp `
--frontend-port 80 `
--backend-port 80 `
--frontend-ip-name myFrontEnd `
--backend-pool-name myBackEndPool `
--probe-name myHealthProbe
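
At this point the core load balancer configuration is in place. If you want to double-check the rule that was just created, you can list the rules on the load balancer (an optional check):

#Optional: list the load balancing rules
az network lb rule list `
--resource-group myResourceGroupSLB `
--lb-name myLoadBalancer `
--output table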

Step 7: Create a virtual network and add a subnet to it.

#Configure virtual network
az network vnet create `
--resource-group myResourceGroupSLB `
--location westeurope `
--name myVnet `
--subnet-name mySubnet
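
To verify that the subnet was created inside the virtual network, you can list the subnets (an optional check):

#Optional: list the subnets in the virtual network
az network vnet subnet list `
--resource-group myResourceGroupSLB `
--vnet-name myVnet `
--output table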

Step 8: Create a network security group so we can specify which ports should be opened.

#Create a network security group
az network nsg create `
--resource-group myResourceGroupSLB `
--name myNetworkSecurityGroup

Step 9: Open port 80 on the network security group so that traffic on port 80 is allowed.

#Create a network security group rule named myNetworkSecurityGroupRule for port 80 
az network nsg rule create `
--resource-group myResourceGroupSLB `
--nsg-name myNetworkSecurityGroup `
--name myNetworkSecurityGroupRule `
--protocol tcp `
--direction inbound `
--source-address-prefix '*' `
--source-port-range '*' `
--destination-address-prefix '*' `
--destination-port-range 80 `
--access allow `
--priority 200
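
To verify the rule was added to the network security group (an optional check):

#Optional: list the rules in the network security group
az network nsg rule list `
--resource-group myResourceGroupSLB `
--nsg-name myNetworkSecurityGroup `
--output table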

Step 10: Create three network interfaces (NICs), one for each VM, and add them to the backend pool of the load balancer.

#Create three NIC cards one for each VM
$no_Of_NIC = 1,2,3
foreach($i in $no_Of_NIC ){

    az network nic create `
        --resource-group myResourceGroupSLB `
        --name myNic$i `
        --vnet-name myVnet `
        --subnet mySubnet `
        --network-security-group myNetworkSecurityGroup `
        --lb-name myLoadBalancer `
        --lb-address-pools myBackEndPool
}
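
Since the NICs are created in a loop, it is worth confirming that all three exist before moving on (an optional check):

#Optional: confirm the three NICs were created
az network nic list `
--resource-group myResourceGroupSLB `
--query "[].name" `
--output tsv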

Step 11: Create three virtual machines and attach one of the NICs created in the previous step to each VM.

#Create Three Virtual Machines and attach the three NIC cards created in the previous step
$no_Of_VM = 1,2,3
foreach($i in $no_Of_VM ){

  az vm create `
    --resource-group myResourceGroupSLB `
    --name myVM$i `
    --nics myNic$i `
    --image UbuntuLTS `
    --generate-ssh-keys `
    --zone $i `
    --custom-data "C:\Demo\cloud-init.txt"
}
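
Creating the VMs takes a few minutes. Once the loop finishes, you can check that all three machines are up and running (an optional check):

#Optional: check that all three VMs are running
az vm list `
--resource-group myResourceGroupSLB `
--show-details `
--query "[].{Name:name, State:powerState}" `
--output table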

We have used the below cloud-init file to deploy the Nginx, Node.js, and npm packages while creating the VMs. If you look at the config file carefully, you will see that the small Express app listens on a port and returns the string "Hello World from host " plus the hostname. So in the case of myVM1 it will display "Hello World from host myVM1", and for myVM2 it will display "Hello World from host myVM2".

#cloud-config
package_upgrade: true
packages:
  - nginx
  - nodejs
  - npm
write_files:
  - owner: www-data:www-data
    path: /etc/nginx/sites-available/default
    content: |
      server {
        listen 80;
        location / {
          proxy_pass http://localhost:3000;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection keep-alive;
          proxy_set_header Host $host;
          proxy_cache_bypass $http_upgrade;
        }
      }
  - owner: azureuser:azureuser
    path: /home/azureuser/myapp/index.js
    content: |
      var express = require('express')
      var app = express()
      var os = require('os');
      app.get('/', function (req, res) {
        res.send('Hello World from host ' + os.hostname() + '!')
      })
      app.listen(3000, function () {
        console.log('Hello world app listening on port 3000!')
      })
runcmd:
  - service nginx restart
  - cd "/home/azureuser/myapp"
  - npm init
  - npm install express -y
  - nodejs index.js
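
Cloud-init needs a little time to install the packages and start the app on each VM. If you want to confirm that the app is answering locally on one of the VMs before testing through the load balancer, one optional way is to run a command on the VM from the CLI (shown here for myVM1):

#Optional: check that the app responds locally on myVM1
az vm run-command invoke `
--resource-group myResourceGroupSLB `
--name myVM1 `
--command-id RunShellScript `
--scripts "curl -s http://localhost"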

Step 12: Test the load balancer.

#Now test the load balancer by browsing to the public IP address it exposes
az network public-ip show `
    --resource-group myResourceGroupSLB `
    --name myPublicIP `
    --query [ipAddress] `
    --output tsv
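
The command above prints the public IP address of the load balancer. Open it in a browser and refresh a few times; the "Hello World from host ..." message should alternate between myVM1, myVM2, and myVM3 as the load balancer distributes the requests (you may need several refreshes to land on a different VM). If you prefer to test from the same PowerShell session, here is a small optional sketch:

#Optional: capture the IP address and request the page from PowerShell
$publicIp = az network public-ip show `
    --resource-group myResourceGroupSLB `
    --name myPublicIP `
    --query ipAddress `
    --output tsv
(Invoke-WebRequest -Uri "http://$publicIp" -UseBasicParsing).Content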

Here is the entire demo explained in my video.

Hope it was useful!!
