In this blog post we are going to see how easy it is to set up the inline load-balancing method in NSX-T. I am using the latest NSX-T 3.1 platform to configure the service.
Let’s look at the setup in my lab.
In this topology we are focusing on the left part, where LB-T1-GW is configured and the web-ls segment is attached to the same gateway. The client is configured on the external network, and client requests come in through T0-Prod-GW.
Load balancer T1 to be created : LB-T1-GW
Segment to be used for load-balancing : web-ls
web-ls segment has two VMs: webvm and webvm2 (these VMs act as the server pool targets)
Step-by-step process:
3.) Create a T1 gateway “LB-T1-GW” and then do these two things: (a) add an Edge cluster to this T1, since the load balancer is a stateful service and will run on a Service Router; (b) link it to the Tier-0 gateway where BGP is configured and our client resides on the external network.
4.) The next part is very important, as we have to enable route advertisement to the upstream routers. I have enabled all LB-related routes. Note that in an inline LB deployment, LB SNAT is not required because traffic between the client and the servers already passes through the load balancer. In this model, pool members can also identify clients by their source IP address.
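For reference, the same Tier-1 settings can be expressed through the NSX-T Policy API. This is an illustrative sketch, not the exact calls made by the UI; the policy IDs (LB-T1-GW, default) and the edge cluster ID are placeholders from my lab:

```
PATCH /policy/api/v1/infra/tier-1s/LB-T1-GW
{
  "display_name": "LB-T1-GW",
  "tier0_path": "/infra/tier-0s/T0-Prod-GW",
  "route_advertisement_types": ["TIER1_CONNECTED", "TIER1_LB_VIP", "TIER1_LB_SNAT"]
}

PATCH /policy/api/v1/infra/tier-1s/LB-T1-GW/locale-services/default
{
  "edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-id>"
}
```

The edge cluster is attached via the locale-services child object, and TIER1_LB_VIP is the advertisement type that makes the VIP reachable from the external network.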
5.) Save the Tier-1 configuration and go back to segments, where we will connect the web-ls segment to LB-T1-GW.
6.) So far we have set up our segments and Tier-1 gateway. Now we configure the load-balancer service under “Networking > Load Balancing”.
Let’s create the server pool. You can add members manually in the wizard, or use dynamically populated groups (NSGroups). I already had a group created in the NSX inventory, so I am using that.
Enter a name, choose an algorithm, and then click Add Members/Group. I have added the web-workload group, which is created in the NSX inventory and contains the two VMs webvm and webvm2. I have also added two active monitors, HTTP and ICMP, to continuously monitor the health of the servers.
I have left the rest of the settings at their defaults except TCP multiplexing, which keeps LB-to-server sessions open so they can be reused for different clients.
The two screenshots above show that the web-workload group contains the IPs of those two VMs, webvm and webvm2.
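In Policy API terms, the pool above roughly corresponds to a payload like the following sketch. The pool ID web-pool is a placeholder I chose; the group path and the default monitor profile paths match the UI choices described above:

```
PATCH /policy/api/v1/infra/lb-pools/web-pool
{
  "display_name": "web-pool",
  "algorithm": "ROUND_ROBIN",
  "member_group": { "group_path": "/infra/domains/default/groups/web-workload" },
  "active_monitor_paths": [
    "/infra/lb-monitor-profiles/default-http-lb-monitor",
    "/infra/lb-monitor-profiles/default-icmp-lb-monitor"
  ],
  "tcp_multiplexing_enabled": true
}
```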
7.) Click Add Load Balancer and enter the details. You can choose from different load-balancer sizes (small, medium, large), but make sure to check NSX Edge size supportability as well, since each LB size requires a minimum Edge form factor. Then attach this load balancer to LB-T1-GW.
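The LB service itself is a small object that binds a size to the Tier-1 gateway. A sketch of the equivalent Policy API payload, with web-lb as a placeholder ID:

```
PATCH /policy/api/v1/infra/lb-services/web-lb
{
  "display_name": "web-lb",
  "size": "SMALL",
  "connectivity_path": "/infra/tier-1s/LB-T1-GW",
  "enabled": true
}
```

The connectivity_path is what makes this an inline deployment: the LB runs on the Service Router of the Tier-1 that already sits in the traffic path.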
8.) Now go ahead and create a virtual server: specify a VIP and port (I have added an L7 HTTP service for demonstration), then attach the load balancer and the server pool to it. It is mandatory to attach at least one application profile, so I have chosen the default HTTP profile.
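The virtual server ties the VIP, pool, application profile, and LB service together. An illustrative Policy API sketch; web-vs, web-pool, and web-lb are placeholder IDs I chose for this example, while default-http-lb-app-profile is the built-in HTTP profile:

```
PATCH /policy/api/v1/infra/lb-virtual-servers/web-vs
{
  "display_name": "web-vs",
  "ip_address": "18.104.22.168",
  "ports": ["80"],
  "pool_path": "/infra/lb-pools/web-pool",
  "application_profile_path": "/infra/lb-app-profiles/default-http-lb-app-profile",
  "lb_service_path": "/infra/lb-services/web-lb",
  "enabled": true
}
```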
9.) Click Save, and the LB configuration is complete.
Now for the testing part. Recall that the LB service running on the Tier-1 (Edge cluster) is in the traffic path between the external client and the server pool.
Let’s first test client reachability to each server separately. From my browser I am able to reach 172.16.4.5 (webvm) and 172.16.4.6 (webvm2).
Now I access the load balancer VIP 18.104.22.168, and the traffic is load-balanced between webvm and webvm2 (round robin). So the LB configuration works in inline mode, where my client sits on the external network and reaches the NSX domain segment VMs.
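The alternating responses follow from the round-robin member selection the LB performs. A minimal sketch of that scheduling (member IPs from this lab; this simulates the selection logic only, not the actual data path):

```shell
#!/bin/sh
# Simulate round-robin selection across the pool members
# webvm (172.16.4.5) and webvm2 (172.16.4.6).
members="172.16.4.5 172.16.4.6"
count=$(echo "$members" | wc -w)

i=0
for req in 1 2 3 4; do
  # Pick member (i mod count); cut fields are 1-based.
  idx=$(( i % count + 1 ))
  member=$(echo "$members" | cut -d' ' -f"$idx")
  echo "request $req -> $member"
  i=$(( i + 1 ))
done
```

Running this prints requests 1 and 3 going to 172.16.4.5 and requests 2 and 4 going to 172.16.4.6, which matches the alternation seen in the browser.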
I logged into the active NSX Edge to verify the LB configuration.
Type the command get load-balancer, which returns the UUIDs and the basic configuration set in the UI. In the second screenshot I am checking the health tables, since I configured two active health monitors (ICMP and HTTP) to track server health.
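For readers who want to repeat the check, this is roughly what the nsxcli session on the edge node looks like (a sketch; the prompt name depends on your edge hostname):

```
nsx-edge> get logical-routers      # lists the DR/SR instances, including the Tier-1 Service Router
nsx-edge> get load-balancer        # lists LB UUIDs and the basic configuration set in the UI
```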
The last screenshot shows the Tier-1 Service Router deployed and running the LB service.
Thank you and happy learning.