Introduction
Microsoft offers a wide range of load-balancing services as part of its Azure portfolio of cloud services. The Azure Load Balancer acts as a single point of contact for software clients, and distributes new inbound flows that arrive at the Load Balancer's front-end to back-end pool instances, according to specified load balancing rules and health probes. The backend pool instances can be Azure Virtual Machines or instances in a virtual machine scale set. Azure Load Balancer supports both inbound and outbound scenarios, provides low latency and high throughput, and can scale up to millions of flows for all TCP and UDP applications.
When used with Nasuni Edge Appliance instances running in Microsoft Azure, the Azure Load Balancer can be configured to balance SMB (CIFS) client traffic to back-end Edge Appliance instances, allowing the operator to seamlessly handle client demand and better mitigate the impact of appliance outages.
Overview
Azure Load Balancer is a Layer 4 network load balancer that acts as a single point of contact for clients. The operator can configure load-balancing rules and inbound NAT rules to determine how incoming connections are routed to back-end resources. Although Azure Load Balancer acts as the endpoint for client communication, it does not directly interact with TCP, UDP, or the application layer: it does not terminate or originate flows, inspect traffic payloads, or provide any application layer gateway (ALG) functionality. Handshakes and client-server communication always occur directly between the client and a back-end pool virtual machine, and the original client source IP address is preserved when the flow arrives at the virtual machine.
This transparency allows Azure Load Balancer to work with any TCP or UDP application, but limits the control over connection routing available to the operator. Azure Load Balancer uses a hashing algorithm to distribute inbound flows, and rewrites the headers of traffic received by the load balancer to direct them to healthy back-end instances. By default, Azure Load Balancer uses a 5-tuple hash that includes source IP address, source port, destination IP address, destination port, and the IP protocol number to map flows to available back-end servers. This behavior can be overridden by configuring a 2-tuple (source IP address and destination IP address) or 3-tuple (source IP address, destination IP address, protocol type) hash to map traffic to available servers. For a more detailed description of the hashing algorithms available on Azure Load Balancer, see Configure the Distribution Mode for Azure Load Balancer.
An SMB session requires several connections to the back-end server over its lifetime. Nasuni recommends configuring all Azure Load Balancer load-balancing rules to use the 2-tuple session persistence model to ensure that clients remain connected to a single Edge Appliance for as long as it remains healthy. For detailed guidance, see Configuring Azure Load Balancer for Use with the Nasuni Edge Appliance.
How Azure Load Balancer is Used with Nasuni
Nasuni Edge Appliances running on Microsoft Azure can be used as back-end pool instances with Azure Load Balancer. Client SMB traffic can then be directed to the Azure Load Balancer to distribute client load and help mitigate Edge Appliance outages by redirecting client connections away from failed or offline instances. Azure Load Balancer load-balancing rules are defined based on the inbound port only, which increases transparency but limits the operator's control over how flows are routed.
Azure Load Balancer can increase Nasuni Edge Appliance availability to downstream clients for some use cases. For a detailed enumeration of supported configurations, see Best Practices.
Configuring Azure Load Balancer for Use with the Nasuni Edge Appliance
Prerequisites
This document assumes that the operator can access a Microsoft Azure account with permission to deploy and configure new resources. This document also assumes that an Azure Resource Group and Azure Virtual Network have been created, and at least two Nasuni Edge Appliances have been deployed into this environment. For detailed configuration guidance of these prerequisites, see Installing Nasuni on Microsoft Azure.
Nasuni Recommendations
Here is a summary of the settings required to use Azure Load Balancer with Nasuni Edge Appliance Azure instances. More detailed configuration steps for each of these settings can be found in the following section.
Load Balancing Rules
Protocol | Frontend port | Backend port | Session persistence | Idle timeout (minutes) | TCP reset
TCP | 111 | 111 | Client IP | 4 | Disabled |
TCP | 137 | 137 | Client IP | 4 | Disabled |
TCP | 138 | 138 | Client IP | 4 | Disabled |
TCP | 139 | 139 | Client IP | 4 | Disabled |
TCP | 445 | 445 | Client IP | 4 | Disabled |
TCP | 901 | 901 | Client IP | 4 | Disabled |
TCP | 2049 | 2049 | Client IP | 4 | Disabled |
UDP | 111 | 111 | Client IP | N/A | N/A |
UDP | 137 | 137 | Client IP | N/A | N/A |
UDP | 138 | 138 | Client IP | N/A | N/A |
UDP | 139 | 139 | Client IP | N/A | N/A |
UDP | 2049 | 2049 | Client IP | N/A | N/A |
Frontend IP Configuration
Subnet: The subnet into which your Nasuni Edge Appliance instances have been deployed.
Assignment: Static.
IP Address: Choose a free, valid IP address in the subnet range.
Health Probe
Protocol: TCP.
Port: 8443.
Interval (seconds): 5.
Unhealthy Threshold: 3.
Deploying the Azure Load Balancer
To deploy the Azure Load Balancer, follow these steps:
Log into the Microsoft Azure Portal at https://portal.azure.com.
Click Create a Resource at the top of the page.
Locate the Azure Load Balancer by either:
Typing “Load Balancer” into the Search the Marketplace search box, or
Clicking Networking on the left-hand side, then locating Load Balancer in the right-hand column.
The Create Load Balancer screen appears.
On the Basics tab, under Project details, complete the following:
Resource Group: Choose the resource group into which you deployed the back-end Nasuni Edge Appliances.
Instance Details
Name: Give the Azure Load Balancer a unique name.
Region: Select the region where you deployed the back-end Nasuni Edge Appliances.
Type: Internal.
SKU: Standard.
Configure Virtual Network
Virtual Network: Select the virtual network into which you deployed the back-end Nasuni Edge Appliances.
Subnet: Select the subnet into which you deployed the back-end Nasuni Edge Appliances.
IP Address Assignment: Static.
Private IP Address: Choose a free, valid IP address in the subnet range.
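For operators who prefer to script the deployment, the Basics and virtual network settings above roughly correspond to a single Azure CLI call. This is a minimal sketch: the resource group, network, frontend, pool names, and the IP address are all placeholders to substitute with your own values.

```shell
# Create an internal Standard-SKU load balancer with a static private
# frontend IP in the Edge Appliance subnet. Supplying a subnet (rather
# than a public IP) makes the load balancer internal; supplying
# --private-ip-address makes the frontend assignment static.
az network lb create \
  --resource-group nasuni-rg \
  --name nasuni-lb \
  --sku Standard \
  --vnet-name nasuni-vnet \
  --subnet nasuni-subnet \
  --private-ip-address 10.0.0.100 \
  --frontend-ip-name nasuni-frontend \
  --backend-pool-name nasuni-backend
```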
After the Azure Load Balancer resource has been created, locate it on the Azure Portal.
Under Settings, click Backend Pools, click Add in the center pane, complete the following information in the right-hand pane, then click Add.
Name: Give the Backend Pool a unique name.
IP Version: IPv4.
Virtual Machines: Select each of the back-end Nasuni Edge Appliances, then ensure that the ipconfig1 address for each is selected.
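If this step is scripted rather than performed in the portal, the same association can be made by adding each appliance NIC's ipconfig1 to the pool with the Azure CLI. The NIC names below are hypothetical.

```shell
# Add the ipconfig1 IP configuration of each Edge Appliance NIC to the
# backend pool created above. NIC, pool, and resource names are placeholders.
for nic in filer1-nic filer2-nic; do
  az network nic ip-config address-pool add \
    --resource-group nasuni-rg \
    --lb-name nasuni-lb \
    --address-pool nasuni-backend \
    --nic-name "$nic" \
    --ip-config-name ipconfig1
done
```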
Under Settings, click Health Probes, click Add in the center pane, complete the following information, then click OK.
Name: Give the Health Probe a unique name.
Protocol: TCP.
Port: 8443.
Interval: 5 seconds.
Unhealthy Threshold: 3 consecutive failures.
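The equivalent health probe can be created from the Azure CLI; the probe and resource names here are placeholders.

```shell
# TCP probe against the Edge Appliance management port, matching the
# recommended settings: 5-second interval, 3 consecutive failures.
az network lb probe create \
  --resource-group nasuni-rg \
  --lb-name nasuni-lb \
  --name nasuni-tcp-probe \
  --protocol Tcp \
  --port 8443 \
  --interval 5 \
  --threshold 3
```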
Under Settings, click Load Balancing Rules.
Add a rule for each of the protocol/port combinations in the table below, selecting the Frontend IP Address, Backend Pool, and Health Probe configured above for each.
Protocol | Frontend port | Backend port | Session persistence | Idle timeout (minutes) | TCP reset | Floating IP | Create implicit outbound rules
TCP | 137 | 137 | Client IP | 4 | Disabled | Disabled | No |
TCP | 138 | 138 | Client IP | 4 | Disabled | Disabled | No |
TCP | 139 | 139 | Client IP | 4 | Disabled | Disabled | No |
TCP | 445 | 445 | Client IP | 4 | Disabled | Disabled | No |
TCP | 901 | 901 | Client IP | 4 | Disabled | Disabled | No |
UDP | 137 | 137 | Client IP | N/A | N/A | Disabled | No |
UDP | 138 | 138 | Client IP | N/A | N/A | Disabled | No |
UDP | 139 | 139 | Client IP | N/A | N/A | Disabled | No |
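The rules in the table above can also be created in a loop with the Azure CLI. `--load-distribution SourceIP` corresponds to the portal's "Client IP" session persistence setting; all resource names are placeholders carried over from the earlier sketches.

```shell
# TCP rules with Client IP (SourceIP) session persistence.
for port in 137 138 139 445 901; do
  az network lb rule create \
    --resource-group nasuni-rg \
    --lb-name nasuni-lb \
    --name "tcp-$port" \
    --protocol Tcp \
    --frontend-port "$port" \
    --backend-port "$port" \
    --frontend-ip-name nasuni-frontend \
    --backend-pool-name nasuni-backend \
    --probe-name nasuni-tcp-probe \
    --load-distribution SourceIP \
    --idle-timeout 4 \
    --floating-ip false \
    --disable-outbound-snat true
done

# UDP rules; idle timeout and TCP reset do not apply to UDP.
for port in 137 138 139; do
  az network lb rule create \
    --resource-group nasuni-rg \
    --lb-name nasuni-lb \
    --name "udp-$port" \
    --protocol Udp \
    --frontend-port "$port" \
    --backend-port "$port" \
    --frontend-ip-name nasuni-frontend \
    --backend-pool-name nasuni-backend \
    --probe-name nasuni-tcp-probe \
    --load-distribution SourceIP \
    --floating-ip false \
    --disable-outbound-snat true
done
```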
Considerations
Volume Synchronization
In deployments where multiple users might access and edit files in a volume or directory simultaneously, special care must be taken to ensure that the volumes served by Azure Load Balancer are frequently synchronized and are protected by Nasuni Global File Lock. While volume synchronization and Nasuni Global File Lock generally help to facilitate collaboration between remote sites, both play a critical role in ensuring that the Nasuni Edge Appliances that make up the Azure Load Balancer's back-end pool remain up to date for collaborative use cases. For an active volume, even with extremely tight synchronization schedules, the cached data on one Edge Appliance in the back-end pool is unlikely to match that on another Edge Appliance in the pool. For more information and for detailed configuration instructions, see Volume Architecture and Data Propagation Best Practices.
Best Practices
Performance Considerations
Azure Load Balancer is described as a low-latency, high-throughput load-balancing technology. Nasuni testing confirms this, with no meaningful performance degradation observed while passing SMB traffic through the Azure Load Balancer, regardless of file size, access pattern, or I/O type.
Hashing Algorithms
By default, Azure Load Balancer utilizes a 5-tuple hashing algorithm (source IP address, source port, destination IP address, destination port, and the IP protocol number) to direct incoming flows to back-end pool instances. Because outbound client connections randomize the source port from which they originate, successive requests from a single client in this configuration might be directed to different back-end servers. For stateless or short-lived connections like HTTP requests, this mode of hashing can effectively balance incoming load across back-end servers. For longer-lived, stateful flows like SMB connections, these alternating back-end instance connections can cause unexpected behavior.
Nasuni recommends that all Azure Load Balancer load balancing rules be explicitly set to "Client IP," which uses a 2-tuple (source IP address and destination IP address) hashing algorithm to direct inbound flows to back-end instances and is subject to less inter-connection variability.
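The practical difference between the two modes can be illustrated with a toy model. This is not Azure's actual hash function, only a sketch of the principle: hashing the full 5-tuple spreads a client's successive connections across the pool, while hashing only the two IP addresses pins the client to one appliance. The backend names and addresses are hypothetical.

```python
import hashlib

BACKENDS = ["filer-a", "filer-b", "filer-c"]

def pick_backend(flow_tuple):
    """Map a flow tuple onto the back-end pool via a stable hash."""
    digest = hashlib.sha256("|".join(map(str, flow_tuple)).encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

client_ip, frontend_ip = "10.0.1.25", "10.0.0.100"
ephemeral_ports = range(49152, 49252)  # successive client connections

# 5-tuple: the randomized source port is part of the hash, so connections
# from the same client spread across the pool.
five_tuple = {pick_backend((client_ip, p, frontend_ip, 445, 6))
              for p in ephemeral_ports}

# 2-tuple ("Client IP"): only the IP addresses are hashed, so every
# connection from this client lands on the same Edge Appliance.
two_tuple = {pick_backend((client_ip, frontend_ip))
             for p in ephemeral_ports}

print(sorted(five_tuple), sorted(two_tuple))
```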
Load Balancer Health Probe
Azure Load Balancer Health Probes can utilize TCP, HTTP, or HTTPS to verify the health of back-end pool instances. Both HTTP and HTTPS health probes issue an HTTP GET request to the endpoint specified in the health probe configuration. If the endpoint returns a response code other than 200 OK, the health probe is marked as failed, and the back-end instance is removed from service. Requests to the root of a Nasuni Edge Appliance (https://<filer_ip>:8443/) first return a 302 Found redirect to https://<filer_ip>:8443/login?next=/, then return a 301 Moved Permanently to https://<filer_ip>:8443/login/?next=/.
These redirects mean that a health probe configured to HTTP GET the root of an Edge Appliance web server would permanently mark the instance as unhealthy, because the probe receives a redirect status rather than 200 OK. A valid HTTPS health probe can be configured to GET the /login/?next=/ path instead, but because this path could change in a future release, Nasuni recommends configuring the Azure Load Balancer Health Probe to connect to port 8443 via TCP, avoiding any dependence on the Edge Appliance httpd configuration.
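This behavior can be checked by hand against a running appliance. The commands below are illustrative only and require a live Edge Appliance; substitute the placeholder <filer_ip> with a real address.

```shell
# An HTTP(S) probe issues a GET against the configured path; against the
# appliance root it receives a redirect status rather than 200 OK:
curl -sk -o /dev/null -w '%{http_code}\n' "https://<filer_ip>:8443/"

# A TCP probe succeeds as long as the port accepts a connection:
nc -z -w 5 <filer_ip> 8443 && echo "port 8443 reachable"
```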
Supported Configurations
Low Change-Rate or Read-Heavy Volumes
Volumes that see little write activity are ideally suited to the protection that Azure Load Balancer can provide. In these use cases, synchronization and automatic caching can be tuned to the customer's requirements to balance cache space against immediate data availability. If a back-end pool instance fails, reads from impacted clients fail until the Azure Load Balancer Health Probe marks the instance as offline (15 seconds with the recommended probe settings of a 5-second interval and an unhealthy threshold of 3), after which they resume on a healthy node.
Non-Collaboration Use Cases
Deployments that do not require aggressive synchronization schedules and Nasuni Global File Lock can allow Azure Load Balancer to provide enhanced client uptime while minimizing the likelihood of the back-end pool Edge Appliance instances falling out of sync. In cases where the importance of overall system availability outweighs that of data being immediately accessible in cache, Azure Load Balancer might be a good fit.