Executive Summary
Amazon offers a wide range of load balancing services as part of its AWS portfolio, including fully managed Application, Network, and Classic Load Balancers. Amazon Elastic Load Balancing (ELB) acts as a single point of contact for clients and distributes new inbound flows arriving at the load balancer's front end to back-end pool instances according to the configured load balancing rules and health checks. The back-end pool instances can be discrete Amazon EC2 instances, Amazon EC2 Auto Scaling groups, or hybrid pools that combine cloud-based and on-premises nodes. Like other cloud load balancers, Amazon Elastic Load Balancing supports bidirectional traffic scenarios, provides low latency and high throughput, and scales to millions of flows for TCP or UDP applications.
When used with Nasuni Edge Appliance instances running in AWS EC2, on-premises, or a combination thereof, Amazon Elastic Load Balancing can be configured to dynamically route SMB client traffic to back-end Edge Appliance instances, allowing the operator to seamlessly handle client demand and better mitigate the impact of appliance outages.
Overview
Amazon Elastic Load Balancing can be deployed as either a Layer 7 Application Load Balancer or a Layer 4 Network Load Balancer that acts as a single point of contact for upstream clients. The operator configures load balancing and health check rules to determine how incoming connections are distributed to back-end resources. Amazon ELB can be configured to pass SSL/TLS traffic through to back-end pool instances, preserving the client source IP address for back-end application use, or to terminate SSL/TLS connections at the load balancer.
In Layer 4 mode, Amazon Elastic Load Balancing works with any TCP or UDP application, though explicit control of traffic flow to back-end pool nodes is limited. Amazon Elastic Load Balancing uses a hashing algorithm to distribute inbound flows and directs the traffic it receives to healthy back-end instances. By default, Amazon Elastic Load Balancing uses a 5-tuple hash that includes the source IP address, source port, destination IP address, destination port, and IP protocol number to map flows to available back-end servers. This configuration spreads inbound traffic evenly across the back-end pool.
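As an illustration only (not the actual ELB implementation), the following Python sketch shows how hashing a flow's 5-tuple deterministically maps each new flow to one member of a hypothetical back-end pool:

```python
import hashlib

# Hypothetical back-end pool of Edge Appliance addresses (illustrative placeholders).
backends = ["10.0.1.10", "10.0.1.11", "10.0.1.12"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol):
    """Conceptual 5-tuple flow hash; not the actual ELB algorithm."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{protocol}"
    digest = hashlib.sha256(flow.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# Two SMB connections from the same client that use different ephemeral
# source ports may hash to different back-end instances.
print(pick_backend("192.0.2.15", 50001, "10.0.0.5", 445, 6))
print(pick_backend("192.0.2.15", 50002, "10.0.0.5", 445, 6))
```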
Using Amazon Elastic Load Balancing with Nasuni
Nasuni Edge Appliances running on Amazon Web Services can be used as back-end pool instances with Amazon Elastic Load Balancing to distribute client load and help mitigate Edge Appliance outages by redirecting client connections away from failed or offline instances. When Amazon Elastic Load Balancing is used as a Layer 4 Network Load Balancer, rules are defined based on the inbound protocol and port only, which increases transparency but limits the operator's control over how flows are directed to back-end nodes.
Amazon Elastic Load Balancing can be used to increase Nasuni Edge Appliance availability to downstream clients for some use cases. See the Best Practices and Supported Configurations sections below for a detailed enumeration of supported configurations.
Configuring Amazon Elastic Load Balancing for Nasuni Edge Appliance
This section describes the prerequisites, recommended settings, and deployment steps for configuring Amazon Elastic Load Balancing in front of Nasuni Edge Appliance instances.
Prerequisites
This document assumes that the operator has access to an Amazon Web Services account with permissions to deploy and configure new resources. This document also assumes that a Virtual Private Cloud (VPC) exists in the operator’s tenant and (at least) two Nasuni Edge Appliances have been deployed into this environment. For more detailed configuration guidance, see Installing on Amazon EC2.
Nasuni Recommendations
The following is a brief summary of the settings required to use Amazon Elastic Load Balancing with Nasuni Edge Appliance EC2 instances. More detailed configuration steps for each of these settings can be found in the following sections, and a scripted example follows the tables below.
Load Balancer Type
Although Layer 7 Application Load Balancers offer increased control over traffic flow and are attractive from an operational perspective, they operate only on HTTP and HTTPS traffic and are therefore unsuitable for load balancing SMB client traffic. Nasuni recommends selecting the Network Load Balancer type when creating a new load balancer for use with Edge Appliances.
Load Balancing Rules
Protocol | Load Balancer Port |
---|---|
TCP_UDP | 137 |
TCP_UDP | 138 |
TCP_UDP | 139 |
TCP | 445 |
TCP | 901 |
Health Check
Setting | Value |
---|---|
Protocol | TCP |
Port | 8443 |
Healthy Threshold | 3 |
Unhealthy Threshold | 3 |
Timeout | 10 seconds |
Interval | 30 seconds |
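For operators who prefer to script their deployments, the recommended settings map directly onto the Elastic Load Balancing v2 API. The following boto3 sketch (the region, target group name, and VPC ID are placeholder assumptions, not values from this document) creates a target group for the Edge Appliances whose TCP health check against port 8443 mirrors the table above:

```python
import boto3

# Region and resource identifiers below are placeholders; substitute your own values.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

response = elbv2.create_target_group(
    Name="nasuni-edge-appliances",           # placeholder target group name
    Protocol="TCP_UDP",
    Port=137,
    VpcId="vpc-0123456789abcdef0",           # placeholder VPC ID
    TargetType="instance",
    HealthCheckProtocol="TCP",               # TCP health check...
    HealthCheckPort="8443",                  # ...against the appliance web/management port
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=3,
    HealthCheckIntervalSeconds=30,           # timeout defaults to 10 seconds for TCP checks
)
target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]
print(target_group_arn)
```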
Deploying Amazon Elastic Load Balancing
To deploy Amazon Elastic Load Balancing using the AWS Management Console, follow these steps (a scripted equivalent appears after the procedure):
Log in to the Amazon Web Services portal at https://aws.amazon.com.
In the center pane, under All Services, click EC2, or type ‘EC2’ into the Find Services bar at the top of the pane.
In the top-right corner of the screen, ensure that the region in which the Nasuni Edge Appliance instances are located is selected. If it is not, click the down arrow and select the appropriate region from the drop-down menu.
In the left-hand navigation pane, under Load Balancing, click Load Balancers.
At the top of the center pane, click Create Load Balancer.
On the Select Load Balancer Type screen, locate Network Load Balancer and click Create.
On the Step 1: Configure Load Balancer screen, complete the following:
Name: Enter a name for the load balancer.
Scheme: If the load balancer only serves clients in its own VPC, select internal. Otherwise, select internet-facing.
Listeners: Create the following listeners, clicking Add Listener when required.
Load Balancer Protocol | Load Balancer Port |
---|---|
TCP_UDP | 137 |
TCP_UDP | 138 |
TCP_UDP | 139 |
TCP_UDP | 662 (NFS workloads only; enable sticky sessions on the NLB) * |
TCP_UDP | 892 (NFS workloads only; enable sticky sessions on the NLB) * |
TCP_UDP | 2050 (NFS workloads only; enable sticky sessions on the NLB) * |
TCP | 445 |
TCP | 901 |
* When configuring the listeners, run the netstat -a command from the command line to monitor the active listener ports for custom applications.
In the Availability Zones section, complete the following:
VPC: Select the VPC into which the Nasuni Edge Appliances are deployed.
Availability Zones: Select appropriate availability zones and subnets/IPv4 addresses for your load balancer endpoint.
On the Step 2: Configure Security Settings screen, click Next.
On the Step 3: Configure Routing screen, complete the following:
In the Target Group section:
Target Group: Select New target group.
Name: Enter a name for the target group.
Target type: If the Nasuni Edge Appliances that serve this load balancer are EC2 instances, select Instance. Otherwise, select IP.
Protocol: Select TCP_UDP.
Port: Select 137.
In the Health checks section:
Protocol: Select TCP.
Expand Advanced Health Check Settings.
Port: Select Override and enter 8443.
Accept the default settings for the other parameters and click Next: Register Targets at the bottom of the screen.
On the Step 4: Register Targets screen, scroll to the Instances section, locate your Nasuni Edge Appliance instances, and click the check box next to each. Then click Add to registered at the top of the list.
After all instances have been added to the target group, click Next: Review.
Review the configuration displayed, then click Create.
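The following boto3 sketch is the scripted equivalent of the console procedure above. The subnet IDs, instance IDs, and target group ARN are placeholders (the ARN could come from the target group sketch in the Nasuni Recommendations section); listeners for the remaining ports in the listener table follow the same pattern as the one shown:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # placeholder region

# Create an internal Network Load Balancer in the subnets that host the
# Edge Appliances (placeholder subnet IDs).
lb = elbv2.create_load_balancer(
    Name="nasuni-edge-nlb",                                  # placeholder name
    Type="network",
    Scheme="internal",                                       # or "internet-facing"
    Subnets=["subnet-0aaa1111bbb2222cc", "subnet-0ddd3333eee4444ff"],
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# Register the Edge Appliance EC2 instances with the target group
# (placeholder ARN and instance IDs).
target_group_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/nasuni-edge-appliances/0123456789abcdef"
elbv2.register_targets(
    TargetGroupArn=target_group_arn,
    Targets=[{"Id": "i-0123456789abcdef0"}, {"Id": "i-0fedcba9876543210"}],
)

# Add a listener that forwards inbound flows to the target group; repeat for
# the other ports listed in the listener table above.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="TCP_UDP",
    Port=137,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": target_group_arn}],
)
```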
Considerations
Volume Synchronization
In deployments where multiple users might access and edit files in the same volume or directory at the same time, special care must be taken to ensure that the volumes served by Amazon Elastic Load Balancing are synchronized frequently and are protected by Nasuni Global File Lock. While volume synchronization and Nasuni Global File Lock generally help facilitate collaboration between remote sites, both play a critical role in keeping the Nasuni Edge Appliances that make up the Amazon Elastic Load Balancing back-end pool up to date for collaborative use cases. Even with extremely tight synchronization schedules, the cached data on one Edge Appliance in the pool is unlikely to match the cached data on another for an actively changing volume. For this reason, collaboration use cases are not recommended without careful attention to this consideration.
Best Practices
Hashing Algorithm
By default, Amazon Elastic Load Balancing uses a 5-tuple hashing algorithm (source IP address, source port, destination IP address, destination port, and IP protocol number) to direct incoming flows to back-end pool instances. Because client connections originate from randomized ephemeral source ports, successive connections from a single client in this configuration might be directed to different back-end servers. For stateless or short-lived connections such as HTTP requests, this mode of hashing effectively balances incoming load across back-end servers. However, for longer-lived stateful flows such as SMB connections, this alternation between back-end instances can cause unexpected behavior.
In practice, Amazon Elastic Load Balancing directs all packets in a given traffic flow to a single back-end node for the life of the connection, and no adverse effects of the 5-tuple hashing algorithm have been observed during internal testing.
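If all connections from a given client should consistently land on the same Edge Appliance (for example, for the NFS listeners noted earlier), source-IP stickiness can additionally be enabled as a target group attribute. A minimal boto3 sketch, assuming the placeholder target group ARN used above:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # placeholder region

# Route all flows from a given client IP to the same back-end Edge Appliance.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:...",   # placeholder target group ARN
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "source_ip"},
    ],
)
```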
Load Balancer Health Check
Amazon Elastic Load Balancing health checks can use TCP, HTTP, or HTTPS to verify the health of back-end pool instances. Both HTTP and HTTPS health checks issue an HTTP GET request to the endpoint specified in the health check configuration. If the endpoint returns a response code other than 200 OK, the health check fails and the back-end instance is removed from service. Requests to the root of a Nasuni Edge Appliance (https://<edge_appliance_ip>:8443/) first return a 302 Found redirect to https://<edge_appliance_ip>:8443/login?next=/, which in turn returns a 301 Moved Permanently to https://<edge_appliance_ip>:8443/login/?next=/.
These two redirects mean that health checks configured to issue an HTTP GET against the root of an Edge Appliance web server always mark the instance as unhealthy. A valid HTTPS health check could instead target the /login/?next=/ path, but because this path could change in a future release, Nasuni recommends configuring the Amazon Elastic Load Balancing health check to connect to port 8443 via TCP, avoiding potential complications from the Edge Appliance httpd configuration.
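This behavior can be confirmed from any host with network access to an Edge Appliance. The Python sketch below (the appliance address is a placeholder, and certificate verification is disabled on the assumption of a self-signed appliance certificate) shows that a plain TCP connection to port 8443 succeeds, which is all the recommended health check requires, while an HTTPS GET of the web root returns a redirect rather than 200 OK:

```python
import http.client
import socket
import ssl

APPLIANCE = "198.51.100.10"  # placeholder Edge Appliance address

# TCP-level check, equivalent to the recommended ELB health check: a successful
# connection to port 8443 is sufficient to consider the target healthy.
with socket.create_connection((APPLIANCE, 8443), timeout=10):
    print("TCP connect to 8443 succeeded")

# HTTP-level check of the web root: the appliance answers with a redirect,
# so an HTTP/HTTPS health check against "/" would never see 200 OK.
context = ssl._create_unverified_context()  # assumes a self-signed appliance certificate
conn = http.client.HTTPSConnection(APPLIANCE, 8443, context=context, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)     # expected: 302 Found
conn.close()
```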
Supported Configurations
Low Change-Rate or Read-Heavy Volumes
Volumes that see little write activity are ideally suited to the protection that Amazon Elastic Load Balancing can provide. In these use cases, synchronization and automatic caching can be tuned to the customer's requirements to balance cache space against immediate data availability. In the case of a back-end pool instance outage, reads from affected clients fail until the Amazon Elastic Load Balancing health check marks the back-end instance as unhealthy (with the recommended settings above, three failed checks at 30-second intervals, or roughly 90 seconds), after which they resume on a healthy node.
Non-Collaboration Use Cases
Deployments that do not require aggressive synchronization schedules and Nasuni Global File Lock can allow Amazon Elastic Load Balancing to provide enhanced client uptime while minimizing the likelihood of the back-end pool Edge Appliance instances falling out of sync. In cases where the importance of overall system availability outweighs that of data being immediately accessible in cache, Amazon Elastic Load Balancing might be a good fit.
Documentation
Nasuni Documentation
Category | Document Name | URL of Document |
---|---|---|
Installation | Installing on Amazon EC2 | |
Configuration | Data Propagation Considerations | |
Amazon Documentation
Category | Document Name | URL of Document |
---|---|---|
Overview | Elastic Load Balancing | |
Overview | Elastic Load Balancing Features | |
Configuration | Elastic Load Balancing Best Practices | https://aws.amazon.com/articles/best-practices-in-evaluating-elastic-load-balancing/ |