Classic Load Balancer: A Deep Dive in AWS Resources & Best Practices to Adopt
Modern applications serve millions of requests daily, handling everything from simple web pages to complex API calls that power mobile apps and enterprise systems. Behind every smooth user experience is a load balancer working tirelessly to distribute traffic across multiple servers, ensuring no single point of failure can bring down your application. Classic Load Balancers have been the foundation of AWS networking since 2009, quietly orchestrating traffic distribution for countless applications across the globe.
As organizations scale their infrastructure, the importance of reliable traffic distribution becomes paramount. Studies show that 88% of users are less likely to return to a site after a bad experience, and even a one-second delay in page load time can result in a 7% reduction in conversions. Classic Load Balancers address these challenges by providing automatic traffic distribution across multiple targets, health checking to ensure only healthy instances receive traffic, and built-in fault tolerance that keeps applications running even when individual servers fail.
This makes Classic Load Balancers particularly valuable for traditional web applications, legacy systems that require TCP/SSL load balancing, and scenarios where you need simple, reliable traffic distribution without the complexity of modern application-aware routing. While newer Application Load Balancers offer more advanced features, Classic Load Balancers remain the go-to choice for straightforward load balancing needs.
In this blog post we will learn what Classic Load Balancers are, how you can configure and work with them using Terraform, and the best practices to adopt for this service.
What is a Classic Load Balancer?
A Classic Load Balancer is a cloud resource provided by Amazon Web Services (AWS) that automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, in multiple Availability Zones. It increases the fault tolerance of your applications and improves the overall availability and scalability.
Classic Load Balancers operate at both the connection level (Layer 4) and application level (Layer 7), providing the flexibility to handle various types of traffic. They were the first load balancing solution introduced by AWS and continue to serve as a reliable foundation for many applications today. Unlike their more modern counterparts, Classic Load Balancers maintain a simpler architecture that makes them ideal for straightforward traffic distribution scenarios.
The service integrates seamlessly with other AWS services, automatically scaling to handle varying traffic loads while maintaining consistent performance. Classic Load Balancers monitor the health of registered instances and automatically route traffic only to healthy targets, ensuring your application remains available even when individual servers experience issues. This health checking capability extends beyond simple ping tests, allowing you to configure custom health check paths and parameters that better reflect your application's actual health.
The Technical Architecture
Classic Load Balancers use a distributed architecture that spans multiple Availability Zones, providing built-in redundancy and fault tolerance. When you create a Classic Load Balancer, AWS automatically provisions load balancer nodes in each specified Availability Zone. These nodes work together to distribute traffic across your registered instances, but each node operates independently to prevent single points of failure.
The load balancer maintains a pool of registered instances across multiple Availability Zones. When a request arrives, the load balancer uses its configured algorithm (typically round-robin by default) to select an appropriate instance from the healthy pool. This selection process considers both the health status of instances and the configured load balancing algorithm, ensuring optimal traffic distribution.
Traffic flow begins when a client makes a request to the load balancer's DNS name. The request is routed to one of the load balancer nodes, which then forwards it to a healthy instance from the pool of registered instances. The instance processes the request and returns the response through the same path. This architecture ensures that even if individual load balancer nodes fail, traffic continues to flow through the remaining nodes without interruption.
Classic Load Balancers support both internet-facing and internal configurations. Internet-facing load balancers have public IP addresses and can receive traffic from the internet, while internal load balancers use private IP addresses and serve traffic within your VPC. This flexibility allows you to design complex architectures with multiple tiers of load balancing, such as having an internet-facing load balancer distribute traffic to internal load balancers that serve different application tiers.
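In Terraform, the difference between the two comes down to the internal flag. A minimal sketch of an internal Classic Load Balancer, assuming existing private subnet IDs and a security group (the IDs below are placeholders), might look like this:
# Minimal internal Classic Load Balancer sketch (subnet and security group IDs are placeholders)
resource "aws_elb" "internal_app" {
  name     = "internal-app-elb"
  internal = true                                            # private IPs only, reachable from inside the VPC
  subnets         = ["subnet-aaaa1111", "subnet-bbbb2222"]   # assumed private subnets
  security_groups = ["sg-0123456789abcdef0"]                 # assumed security group

  listener {
    instance_port     = 8080
    instance_protocol = "HTTP"
    lb_port           = 80
    lb_protocol       = "HTTP"
  }
}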
Key Components and Configuration
Classic Load Balancers consist of several essential components that work together to provide traffic distribution and health monitoring. The primary component is the load balancer itself, which acts as the central point for receiving and distributing traffic. This component is configured with listeners that define the ports and protocols on which the load balancer accepts connections.
Listeners are crucial components that specify the port and protocol for front-end (client-to-load-balancer) and back-end (load-balancer-to-instance) connections. For example, you might configure a listener to accept HTTP traffic on port 80 and forward it to instances on port 8080. This flexibility allows you to standardize external interfaces while maintaining different internal configurations.
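In Terraform, that mapping is expressed as a listener block inside the aws_elb resource. A minimal sketch of the port 80 to port 8080 example described above:
# Accept HTTP from clients on port 80 and forward to port 8080 on the instances
listener {
  lb_port           = 80
  lb_protocol       = "HTTP"
  instance_port     = 8080
  instance_protocol = "HTTP"
}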
Health checks represent another vital component, continuously monitoring the health of registered instances. These checks can be configured with custom paths, timeout values, and failure thresholds. The health check configuration determines how quickly the load balancer can detect and respond to instance failures, directly impacting your application's availability.
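A corresponding health_check block might look like the following (illustrative values; tune them to your application's response characteristics):
health_check {
  target              = "HTTP:8080/healthz"   # custom path that exercises the application
  interval            = 15                     # seconds between checks
  timeout             = 5                      # seconds to wait for each response
  healthy_threshold   = 2                      # consecutive successes before marking healthy
  unhealthy_threshold = 3                      # consecutive failures before marking unhealthy
}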
Security groups provide network-level security for your load balancer, controlling which traffic can reach the load balancer and which traffic the load balancer can send to instances. Proper security group configuration is essential for maintaining both accessibility and security.
The load balancer also maintains connection logs and metrics, providing visibility into traffic patterns and performance. These logs can be stored in S3 buckets for analysis, helping you understand usage patterns and troubleshoot issues.
The Strategic Importance of Classic Load Balancers in Modern Infrastructure
Classic Load Balancers play a fundamental role in ensuring application availability and performance, particularly in environments where simplicity and reliability are paramount. Despite the introduction of more advanced load balancing options, Classic Load Balancers continue to serve critical functions in modern infrastructure architectures.
The strategic importance of Classic Load Balancers extends beyond simple traffic distribution. They provide the foundation for scalable architectures by enabling horizontal scaling of applications. When traffic increases, you can simply add more instances behind the load balancer without changing client configurations or managing complex routing rules. This scalability model has proven effective for countless organizations seeking to grow their infrastructure incrementally.
Consistency and Standardization
Classic Load Balancers provide consistent traffic distribution patterns that applications can rely on. Unlike more complex load balancing solutions that might implement sophisticated routing algorithms, Classic Load Balancers use predictable patterns that simplify application design and troubleshooting.
This consistency proves particularly valuable in environments with multiple development teams or legacy applications. Teams can standardize on Classic Load Balancer configurations, creating reusable patterns that reduce complexity and improve maintainability. The straightforward nature of Classic Load Balancers means that configuration errors are less likely, and when they do occur, they're easier to diagnose and fix.
The standardization benefits extend to operational procedures as well. Teams can develop consistent monitoring, alerting, and troubleshooting processes around Classic Load Balancers. This operational consistency reduces the learning curve for new team members and improves overall reliability by ensuring that best practices are consistently applied across all load balancers.
Cost Optimization
Classic Load Balancers offer a cost-effective solution for traffic distribution, particularly for applications with predictable traffic patterns. Their pricing model is straightforward, based on the number of load balancers and the amount of data processed, making it easy to predict and control costs.
The cost optimization benefits become more apparent when compared to more sophisticated load balancing solutions that might include features you don't need. For applications that simply need reliable traffic distribution without advanced routing capabilities, Classic Load Balancers provide the necessary functionality at a lower cost point.
Organizations can also optimize costs by using Classic Load Balancers in internal architectures where advanced features aren't required. For example, distributing traffic between internal application tiers or database read replicas can be handled effectively by Classic Load Balancers at a fraction of the cost of more complex solutions.
Security and Compliance
Classic Load Balancers provide several security benefits that make them attractive for compliance-sensitive environments. They act as a barrier between external traffic and your instances, providing a controlled entry point that can be monitored and secured.
The security benefits include SSL termination capabilities, which allow you to manage certificates centrally at the load balancer level rather than on individual instances. This centralized approach simplifies certificate management and ensures consistent security policies across your application tier.
Classic Load Balancers also support integration with AWS security services, allowing you to implement network-level security controls. Note that AWS WAF (Web Application Firewall) cannot be attached directly to a Classic Load Balancer; to gain WAF protection against common web exploits and attacks, you would typically place Amazon CloudFront (or an Application Load Balancer) in front of your application.
Key Features and Capabilities
Multi-Availability Zone Support
Classic Load Balancers provide automatic traffic distribution across multiple Availability Zones, ensuring high availability and fault tolerance. This capability is fundamental to building resilient applications that can withstand datacenter-level failures.
The multi-AZ support works by automatically detecting healthy instances in each configured Availability Zone and distributing traffic proportionally. If an entire Availability Zone becomes unavailable, the load balancer automatically routes traffic to healthy instances in the remaining zones. This automatic failover capability ensures that your application remains available even during significant infrastructure failures.
Health Monitoring and Automatic Failover
Classic Load Balancers continuously monitor the health of registered instances using configurable health checks. These health checks can be customized with specific paths, ports, and protocols to accurately reflect your application's health status.
The health monitoring system provides granular control over how quickly instances are marked as unhealthy and when they're brought back into service. This flexibility allows you to tune the health check parameters to match your application's characteristics, ensuring optimal balance between responsiveness and stability.
When instances fail health checks, they're automatically removed from the load balancer's target pool. Once they pass health checks again, they're automatically added back to the pool. This automatic lifecycle management reduces operational overhead and ensures that traffic is always directed to healthy instances.
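As a rough, illustrative calculation: with an interval of 10 seconds and an unhealthy_threshold of 2, a failing instance is removed from rotation after roughly 2 × 10 = 20 seconds (plus up to one timeout period), while a healthy_threshold of 3 means it takes around 30 seconds of consecutive passing checks before the instance starts receiving traffic again.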
SSL/TLS Termination
Classic Load Balancers support SSL/TLS termination, allowing you to manage certificates centrally and reduce the computational burden on your application instances. This capability is particularly valuable for applications serving HTTPS traffic, as it enables you to maintain security while optimizing performance.
The SSL termination feature supports various cipher suites and protocols, allowing you to configure security policies that meet your organization's requirements. You can choose from predefined security policies or create custom policies that specify exact cipher suites and protocol versions.
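One way to pin a Classic Load Balancer's HTTPS listener to a predefined security policy in Terraform is the aws_lb_ssl_negotiation_policy resource. A sketch, assuming a load balancer named aws_elb.web with an HTTPS listener on port 443:
# Apply AWS's predefined TLS 1.2 security policy to the HTTPS listener
resource "aws_lb_ssl_negotiation_policy" "tls12" {
  name          = "tls-1-2-policy"
  load_balancer = aws_elb.web.id
  lb_port       = 443

  attribute {
    name  = "Reference-Security-Policy"
    value = "ELBSecurityPolicy-TLS-1-2-2017-01"
  }
}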
Connection Draining
Classic Load Balancers support connection draining, which allows existing connections to complete gracefully when instances are being deregistered. This feature is essential for maintaining service quality during deployments or maintenance activities.
Connection draining can be configured with custom timeout values, allowing you to balance between graceful shutdown and deployment speed. This flexibility ensures that users aren't disrupted during routine maintenance while preventing deployments from being unnecessarily delayed.
Integration Ecosystem
Classic Load Balancers integrate seamlessly with the broader AWS ecosystem, providing connectivity and interoperability with numerous other AWS services. This integration capability makes them a central component in many AWS architectures, serving as the gateway between external traffic and your application infrastructure.
At the time of writing there are 15+ AWS services that integrate with Classic Load Balancers in some capacity. Key integrations include EC2 instances for target registration, Route 53 for DNS management, and Auto Scaling groups for automatic target registration.
Classic Load Balancers integrate naturally with EC2 Auto Scaling, automatically registering newly launched instances and deregistering terminated instances. This integration enables truly elastic architectures that can scale up during peak traffic periods and scale down during low-traffic periods without manual intervention.
Route 53 integration allows you to create alias records that map your domain names to load balancer DNS names. This integration provides better performance than CNAME records and supports health checks at the DNS level, creating additional layers of fault tolerance.
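A sketch of such an alias record, assuming a load balancer named aws_elb.web and a hosted zone looked up via a data source:
resource "aws_route53_record" "www" {
  zone_id = data.aws_route53_zone.primary.zone_id
  name    = "www.example.com"
  type    = "A"

  alias {
    name                   = aws_elb.web.dns_name
    zone_id                = aws_elb.web.zone_id
    evaluate_target_health = true   # adds DNS-level health checking on top of the ELB's own checks
  }
}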
CloudWatch integration provides comprehensive monitoring and alerting capabilities. You can monitor metrics such as request count, latency, and error rates, setting up alarms that trigger automated responses or notifications when thresholds are breached.
Pricing and Scale Considerations
Classic Load Balancers use a straightforward pricing model based on the number of load balancer hours and the amount of data processed. You pay for each hour (or partial hour) that a load balancer is running, plus charges for each GB of data processed through the load balancer.
The pricing structure includes two main components: a fixed hourly charge for running the load balancer and a variable charge based on the amount of data transferred. This model makes it easy to predict costs for applications with consistent traffic patterns, while still providing cost efficiency for applications with variable traffic.
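As an illustrative back-of-the-envelope calculation (using roughly $0.025 per load balancer hour and $0.008 per GB processed; check current AWS pricing for your region): a single Classic Load Balancer running for a 730-hour month and processing 1 TB of data would cost approximately 730 × $0.025 + 1,000 × $0.008 ≈ $18.25 + $8.00 ≈ $26.25.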
Scale Characteristics
Classic Load Balancers can handle significant traffic volumes, with the ability to scale automatically to meet demand. AWS manages the underlying infrastructure, automatically adding capacity as needed to handle increased traffic loads.
The scaling is designed to be transparent to your application, with AWS monitoring traffic patterns and preemptively scaling load balancer capacity. This automatic scaling ensures that your load balancer doesn't become a bottleneck as your application grows.
Performance characteristics include the ability to handle hundreds of thousands of requests per second, with latency typically measured in single-digit milliseconds. The actual performance depends on factors such as the complexity of your health checks, the number of registered instances, and the geographic distribution of your traffic.
Enterprise Considerations
For enterprise environments, Classic Load Balancers provide several features that support large-scale deployments. These include comprehensive logging capabilities, integration with enterprise monitoring systems, and support for complex network architectures.
Classic Load Balancers can be integrated with enterprise identity and access management systems through AWS IAM, allowing you to implement fine-grained access controls. This integration ensures that only authorized personnel can modify load balancer configurations, maintaining security and compliance requirements.
When compared to hardware load balancers or other cloud providers, Classic Load Balancers offer competitive pricing and performance characteristics. However, for infrastructure running on AWS this is the native solution that provides the best integration with other AWS services, simplified management, and automatic scaling capabilities.
The total cost of ownership typically favors Classic Load Balancers when you factor in the operational overhead of managing hardware load balancers, the complexity of configuring third-party solutions, and the ongoing maintenance requirements of alternative approaches.
Managing Classic Load Balancers using Terraform
Managing Classic Load Balancers with Terraform requires understanding both the basic resource configuration and the various dependencies that make a load balancer functional. The complexity goes beyond simply creating the load balancer resource itself – you need to properly configure health checks, security groups, and target registrations to create a fully functional load balancing solution.
Creating a Basic Classic Load Balancer
A common scenario involves setting up a Classic Load Balancer to distribute HTTP traffic across multiple web servers in different Availability Zones. This configuration provides high availability and fault tolerance for web applications that don't require advanced routing capabilities.
# Classic Load Balancer for web application
resource "aws_elb" "web_lb" {
name = "web-app-classic-lb"
# Specify availability zones for the load balancer
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
# Configure listeners for HTTP and HTTPS traffic
listener {
instance_port = 80
instance_protocol = "HTTP"
lb_port = 80
lb_protocol = "HTTP"
}
listener {
instance_port = 80
instance_protocol = "HTTP"
lb_port = 443
lb_protocol = "HTTPS"
ssl_certificate_id = aws_iam_server_certificate.web_cert.arn
}
# Configure health check settings
health_check {
healthy_threshold = 2
unhealthy_threshold = 3
timeout = 5
target = "HTTP:80/health"
interval = 30
}
# Attach EC2 instances to the load balancer
instances = [
aws_instance.web_server_1.id,
aws_instance.web_server_2.id,
aws_instance.web_server_3.id
]
# Enable connection draining
connection_draining = true
connection_draining_timeout = 300
# Configure security groups
security_groups = [aws_security_group.web_lb_sg.id]
# Enable access logs
access_logs {
bucket = aws_s3_bucket.lb_logs.bucket
bucket_prefix = "web-lb-logs"
enabled = true
}
tags = {
Name = "web-app-classic-lb"
Environment = "production"
Team = "web-team"
}
}
The availability_zones parameter specifies which Availability Zones the load balancer should operate in, ensuring that traffic can be distributed across multiple zones for high availability. The listener blocks define how the load balancer handles incoming connections, specifying both the external port clients connect to and the internal port where traffic is forwarded to instances. The health check configuration is crucial for detecting failed instances quickly and removing them from rotation before they affect users, while the connection draining and access log settings prepare the load balancer for production use.
Setting Up a Classic Load Balancer in a VPC
Classic Load Balancers represent AWS's original load balancing solution, and while they've been largely superseded by Application Load Balancers (ALBs) and Network Load Balancers (NLBs), they still have their place in certain architectures. Managing them with Terraform requires careful attention to instance registration, health checks, and security group configuration.
The next scenario places the load balancer inside an existing VPC, which requires proper subnet placement and security group configuration to ensure traffic flows correctly. The configuration below uses data sources to look up the VPC and its public subnets.
# Data source to get existing VPC
data "aws_vpc" "main" {
filter {
name = "tag:Name"
values = ["production-vpc"]
}
}
# Data source to get public subnets
data "aws_subnets" "public" {
filter {
name = "vpc-id"
values = [data.aws_vpc.main.id]
}
filter {
name = "tag:Type"
values = ["public"]
}
}
# Security group for the load balancer
resource "aws_security_group" "elb_sg" {
name = "classic-elb-sg"
description = "Security group for Classic Load Balancer"
vpc_id = data.aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "classic-elb-sg"
Environment = "production"
ManagedBy = "terraform"
}
}
# Classic Load Balancer
resource "aws_elb" "web" {
name = "production-web-elb"
# For a VPC-based load balancer, specify subnets (availability_zones applies only to EC2-Classic)
subnets = data.aws_subnets.public.ids
security_groups = [aws_security_group.elb_sg.id]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = aws_acm_certificate.web.arn
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTP:80/"
interval = 30
}
cross_zone_load_balancing = true
idle_timeout = 400
connection_draining = true
connection_draining_timeout = 400
tags = {
Name = "production-web-elb"
Environment = "production"
ManagedBy = "terraform"
}
}
The subnets parameter defines which subnets, and therefore which Availability Zones, the load balancer operates in. The cross_zone_load_balancing setting ensures traffic is distributed evenly across all registered instances regardless of their AZ, and the connection_draining configuration allows existing connections to complete before instances are removed during deployments.
This configuration creates a load balancer that can handle both HTTP and HTTPS traffic. The health check configuration ensures that only healthy instances receive traffic, with a 30-second interval between checks and a 3-second timeout. The load balancer will mark instances as unhealthy after 2 consecutive failed checks and healthy after 2 consecutive successful checks.
Advanced SSL/TLS Configuration with Multiple Certificates
For applications requiring SSL termination with multiple certificates or advanced SSL policies, Classic Load Balancers can be configured with specific SSL policies and multiple certificate support.
# ACM certificate for the primary domain
resource "aws_acm_certificate" "primary" {
domain_name = "example.com"
validation_method = "DNS"
subject_alternative_names = [
"*.example.com",
"api.example.com"
]
lifecycle {
create_before_destroy = true
}
tags = {
Name = "primary-domain-cert"
Environment = "production"
ManagedBy = "terraform"
}
}
# Hosted zone used for DNS validation (assumed to already exist)
data "aws_route53_zone" "primary" {
name = "example.com"
}
# Route 53 validation records
resource "aws_route53_record" "cert_validation" {
for_each = {
for dvo in aws_acm_certificate.primary.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
record = dvo.resource_record_value
type = dvo.resource_record_type
}
}
allow_overwrite = true
name = each.value.name
records = [each.value.record]
ttl = 60
type = each.value.type
zone_id = data.aws_route53_zone.primary.zone_id
}
# Certificate validation
resource "aws_acm_certificate_validation" "primary" {
certificate_arn = aws_acm_certificate.primary.arn
validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
}
# Classic Load Balancer with SSL configuration
resource "aws_elb" "ssl_web" {
name = "production-ssl-web-elb"
subnets = data.aws_subnets.public.ids
security_groups = [aws_security_group.elb_sg.id]
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 80
lb_protocol = "http"
}
listener {
instance_port = 80
instance_protocol = "http"
lb_port = 443
lb_protocol = "https"
ssl_certificate_id = aws_acm_certificate_validation.primary.certificate_arn
}
health_check {
healthy_threshold = 2
unhealthy_threshold = 3
timeout = 5
target = "HTTP:80/health"
interval = 30
}
# Access logging configuration
access_logs {
bucket = aws_s3_bucket.elb_logs.bucket
bucket_prefix = "elb-logs"
enabled = true
}
cross_zone_load_balancing = true
idle_timeout = 60
connection_draining = true
connection_draining_timeout = 300
tags = {
Name = "production-ssl-web-elb"
Environment = "production"
ManagedBy = "terraform"
SSL = "enabled"
}
}
# S3 bucket for access logs
resource "aws_s3_bucket" "elb_logs" {
bucket = "production-elb-access-logs-${random_string.suffix.result}"
tags = {
Name = "elb-access-logs"
Environment = "production"
Purpose = "elb-logging"
}
}
# Service account that Classic Load Balancers use to deliver access logs
data "aws_elb_service_account" "main" {}
resource "aws_s3_bucket_policy" "elb_logs" {
bucket = aws_s3_bucket.elb_logs.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "AWSConsoleAutoGen"
Effect = "Allow"
Principal = {
AWS = data.aws_elb_service_account.main.arn
}
Action = "s3:PutObject"
Resource = "${aws_s3_bucket.elb_logs.arn}/*"
},
{
Sid = "AWSLogDeliveryWrite"
Effect = "Allow"
Principal = {
Service = "delivery.logs.amazonaws.com"
}
Action = "s3:PutObject"
Resource = "${aws_s3_bucket.elb_logs.arn}/*"
Condition = {
StringEquals = {
"s3:x-amz-acl" = "bucket-owner-full-control"
}
}
}
]
})
}
resource "random_string" "suffix" {
length = 8
special = false
upper = false
}
This configuration demonstrates several advanced features: SSL certificate management through ACM with automatic DNS validation, access logging to S3, and the IAM permissions the ELB service needs to write logs. The ssl_certificate_id attribute references the validated certificate, and the health check is configured to use a custom health endpoint.
The access logging configuration stores detailed request logs in S3, which can be useful for troubleshooting and analytics. The S3 bucket policy grants the necessary permissions for the ELB service to write logs, following AWS security best practices.
Instance Registration and Auto Scaling Integration
Classic Load Balancers work seamlessly with Auto Scaling groups, but you can also manage instance registration manually. This approach gives you fine-grained control over which instances receive traffic.
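For the manual approach, the aws_elb_attachment resource registers an individual instance with the load balancer; a minimal sketch assuming an existing aws_elb.web and a standalone instance:
# Manually register a single instance with the Classic Load Balancer
resource "aws_elb_attachment" "standalone" {
  elb      = aws_elb.web.id
  instance = aws_instance.standalone.id
}
The Auto Scaling configuration below avoids this per-instance management entirely by passing the load balancer name to the group via the load_balancers argument.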
# Auto Scaling launch configuration
resource "aws_launch_configuration" "web" {
name_prefix = "web-server-"
image_id = data.aws_ami.amazon_linux.id
instance_type = "t3.medium"
security_groups = [aws_security_group.web_sg.id]
key_name = var.key_pair_name
iam_instance_profile = aws_iam_instance_profile.web.name
# user_data expects plain text; the provider handles base64 encoding automatically
user_data = templatefile("${path.module}/user-data.sh", {
environment = "production"
app_name = "web-app"
})
lifecycle {
create_before_destroy = true
}
root_block_device {
volume_type = "gp3"
volume_size = 20
delete_on_termination = true
encrypted = true
}
}
# Auto Scaling group with load balancer attachment
resource "aws_autoscaling_group" "web" {
name = "production-web-asg"
launch_configuration = aws_launch_configuration.web.name
min_size = 2
max_size = 10
desired_capacity = 3
vpc_zone_identifier = data.aws_subnets.private.ids
load_balancers = [aws_elb.web.name]
health_check_type = "ELB"
health_check_grace_period = 300
tag {
key = "Name"
value = "production-web-server"
propagate_at_launch = true
}
tag {
key = "Environment"
value = "production"
propagate_at_launch = true
}
tag {
key = "ManagedBy"
value = "terraform"
propagate_at_launch = true
}
instance_refresh {
strategy = "Rolling"
preferences {
min_healthy_percentage = 50
instance_warmup = 300
}
}
}
# Security group for web servers
resource "aws_security_group" "web_sg" {
name = "web-server-sg"
description = "Security group for web servers"
vpc_id = data.aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
security_groups = [aws_security_group.elb_sg.id]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [var.admin_cidr]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "web-server-sg"
Environment = "production"
ManagedBy = "terraform"
}
}
# CloudWatch alarms for auto scaling
resource "aws_cloudwatch_metric_alarm" "high_cpu" {
alarm_name = "production-web-high-cpu"
comparison_operator = "GreaterThanThreshold"
evaluation_periods = "2"
metric_name = "CPUUtilization"
namespace = "AWS/EC2"
period = "300"
statistic = "Average"
threshold = "80"
alarm_description = "This metric monitors ec2 cpu utilization"
alarm_actions = [aws_autoscaling_policy.scale_up.arn]
dimensions = {
AutoScalingGroupName = aws_autoscaling_group.web.name
}
tags = {
Name = "production-web-high-cpu"
Environment = "production"
ManagedBy = "terraform"
}
}
resource "aws_autoscaling_policy" "scale_up" {
name = "production-web-scale-up"
scaling_adjustment = 2
adjustment_type = "ChangeInCapacity"
cooldown = 300
autoscaling_group_name = aws_autoscaling_group.web.name
depends_on = [aws_autoscaling_group.web]
}
This configuration creates a complete auto-scaling setup with CloudWatch monitoring. The health_check_type = "ELB" setting ensures that Auto Scaling uses the load balancer's health checks to determine instance health, rather than just EC2 status checks. The instance_refresh configuration enables rolling updates when the launch configuration changes.
The security group configuration follows the principle of least privilege, allowing traffic from the load balancer security group on port 80 and SSH access from a specific CIDR block. The CloudWatch alarm triggers scaling actions based on CPU utilization, providing automatic capacity adjustment based on demand.
This setup provides a robust, scalable web application infrastructure that can handle varying traffic loads while maintaining high availability through the Classic Load Balancer's cross-zone load balancing capabilities.
Best practices for Classic Load Balancers
Classic Load Balancers remain an important part of AWS infrastructure for legacy applications and specific use cases. Following these best practices will help you maximize performance, security, and cost efficiency.
Enable Cross-Zone Load Balancing
Why it matters: Without cross-zone load balancing, Classic Load Balancers only distribute traffic evenly across Availability Zones, not across individual instances. This can lead to uneven load distribution when you have different numbers of instances in each zone.
Implementation:
Cross-zone load balancing ensures traffic is distributed evenly across all healthy instances regardless of their Availability Zone. This is particularly important for maintaining consistent performance and avoiding hotspots.
resource "aws_elb" "main" {
name = "my-classic-lb"
availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
cross_zone_load_balancing = true
listener {
instance_port = 80
instance_protocol = "HTTP"
lb_port = 80
lb_protocol = "HTTP"
}
}
Additional guidance: While cross-zone load balancing incurs additional data transfer charges, the improved traffic distribution typically provides better performance and more predictable behavior, especially in scenarios with varying instance counts across zones.
Implement Proper Health Checks
Why it matters: Ineffective health checks can lead to traffic being routed to unhealthy instances, causing application failures and poor user experience. They're your first line of defense against serving traffic to problematic instances.
Implementation:
Configure health checks that accurately reflect your application's health status. Use application-specific endpoints that verify not just that the server is running, but that it's capable of serving requests properly.
resource "aws_elb" "main" {
name = "my-classic-lb"
availability_zones = ["us-west-2a", "us-west-2b"]
health_check {
healthy_threshold = 2
unhealthy_threshold = 2
timeout = 3
target = "HTTP:80/health"
interval = 30
}
listener {
instance_port = 80
instance_protocol = "HTTP"
lb_port = 80
lb_protocol = "HTTP"
}
}
Additional guidance: Set your health check interval and timeout values based on your application's response characteristics. For critical applications, consider using custom health check endpoints that verify database connectivity and other dependencies. Avoid using the root path (/) for health checks unless it's specifically designed for that purpose.
Configure SSL/TLS Termination Properly
Why it matters: SSL termination at the load balancer level reduces CPU overhead on your backend instances and centralizes certificate management. Improper SSL configuration can expose your application to security vulnerabilities.
Implementation:
Use SSL termination to offload encryption/decryption processing from your application servers. Always use strong ciphers and keep certificates up to date.
resource "aws_elb" "main" {
name = "my-classic-lb"
availability_zones = ["us-west-2a", "us-west-2b"]
listener {
instance_port = 80
instance_protocol = "HTTP"
lb_port = 443
lb_protocol = "HTTPS"
ssl_certificate_id = aws_iam_server_certificate.main.arn
}
listener {
instance_port = 80
instance_protocol = "HTTP"
lb_port = 80
lb_protocol = "HTTP"
}
}
resource "aws_iam_server_certificate" "main" {
name = "my-cert"
certificate_body = file("${path.module}/cert.pem")
private_key = file("${path.module}/private_key.pem")
}
Additional guidance: Always redirect HTTP traffic to HTTPS so all communication is encrypted. Note that Classic Load Balancers cannot perform this redirect themselves, so it must be implemented on the backend instances (for example, in your web server configuration). Consider using AWS Certificate Manager (ACM) for automatic certificate renewal when possible.
Implement Connection Draining
Why it matters: Connection draining allows existing connections to complete before an instance is removed from the load balancer rotation, preventing abrupt connection termination that can disrupt user sessions.
Implementation:
Enable connection draining to gracefully handle instance removal during deployments or auto-scaling events.
resource "aws_elb" "main" {
name = "my-classic-lb"
availability_zones = ["us-west-2a", "us-west-2b"]
connection_draining = true
connection_draining_timeout = 400
listener {
instance_port = 80
instance_protocol = "HTTP"
lb_port = 80
lb_protocol = "HTTP"
}
}
Additional guidance: Set the connection draining timeout based on your application's typical session duration. For web applications, 300-400 seconds is usually sufficient. For long-running connections or file uploads, you may need longer timeout periods.
Configure Appropriate Security Groups
Why it matters: Security groups act as virtual firewalls for your Classic Load Balancer, controlling inbound and outbound traffic. Poorly configured security groups can either block legitimate traffic or expose your infrastructure to unnecessary risks.
Implementation:
Create dedicated security groups for your load balancer that only allow necessary traffic. Follow the principle of least privilege.
resource "aws_security_group" "elb" {
name = "classic-elb-sg"
description = "Security group for Classic Load Balancer"
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["10.0.0.0/8"]
}
}
resource "aws_elb" "main" {
name = "my-classic-lb"
availability_zones = ["us-west-2a", "us-west-2b"]
security_groups = [aws_security_group.elb.id]
listener {
instance_port = 80
instance_protocol = "HTTP"
lb_port = 80
lb_protocol = "HTTP"
}
}
Additional guidance: Regularly audit your security group rules and remove unnecessary access. Consider using separate security groups for different load balancer functions (public-facing vs. internal) to maintain clear security boundaries.
Monitor and Set Up Alerts
Why it matters: Proactive monitoring helps you identify performance issues, capacity problems, and security threats before they impact your users. Classic Load Balancers provide valuable metrics that should be actively monitored.
Implementation:
Set up CloudWatch alarms for key metrics like latency, error rates, and healthy host count to ensure early detection of issues.
resource "aws_cloudwatch_metric_alarm" "high_latency" {
alarm_name = "elb-high-latency"
comparison_operator = "GreaterThanThreshold"
evaluation_periods = "2"
metric_name = "Latency"
namespace = "AWS/ELB"
period = "300"
statistic = "Average"
threshold = "1"
alarm_description = "This metric monitors ELB latency"
dimensions = {
LoadBalancerName = aws_elb.main.name
}
}
resource "aws_cloudwatch_metric_alarm" "healthy_hosts" {
alarm_name = "elb-unhealthy-hosts"
comparison_operator = "LessThanThreshold"
evaluation_periods = "2"
metric_name = "HealthyHostCount"
namespace = "AWS/ELB"
period = "60"
statistic = "Average"
threshold = "1"
alarm_description = "This metric monitors ELB healthy host count"
dimensions = {
LoadBalancerName = aws_elb.main.name
}
}
Additional guidance: Monitor both technical metrics (latency, error rates) and business metrics (request volume, geographic distribution). Set up multiple thresholds - warning levels for early intervention and critical levels for immediate action.
Plan for Scaling and Capacity
Why it matters: Classic Load Balancers need time to scale up to handle traffic spikes. Pre-warming and proper capacity planning ensure your load balancer can handle expected traffic patterns without performance degradation.
Implementation:
For predictable traffic spikes, contact AWS support to pre-warm your load balancer. For general scaling, monitor your traffic patterns and request CloudWatch metrics.
# Monitor your load balancer metrics
aws cloudwatch get-metric-statistics \
  --namespace AWS/ELB \
  --metric-name RequestCount \
  --dimensions Name=LoadBalancerName,Value=my-classic-lb \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-02T00:00:00Z \
  --period 3600 \
  --statistics Sum
Additional guidance: Document your typical traffic patterns and identify peak usage times. For applications with significant traffic spikes (like e-commerce during sales), work with AWS support to ensure your load balancer is properly pre-warmed and configured for the expected load.
Terraform and Overmind for Classic Load Balancers
Overmind Integration
Classic Load Balancers are used in many places across AWS environments. When you run overmind terraform plan with Classic Load Balancer modifications, Overmind automatically identifies all resources that depend on your load balancer configuration, including:
- EC2 Instances registered with the load balancer for traffic distribution
- Security Groups controlling inbound and outbound traffic rules
- Route 53 DNS Records pointing to the load balancer's DNS name
- Auto Scaling Groups using the load balancer for health checks
This dependency mapping extends beyond direct relationships to include indirect dependencies that might not be immediately obvious, such as applications relying on specific load balancer endpoints or monitoring systems tracking load balancer metrics.
Risk Assessment
Overmind's risk analysis for Classic Load Balancer changes focuses on several critical areas:
High-Risk Scenarios:
- Load Balancer Deletion: Removing a load balancer that's actively serving traffic can cause immediate service disruption
- Security Group Changes: Modifying security groups attached to load balancers can block legitimate traffic or expose services
- Health Check Configuration: Changing health check parameters can cause healthy instances to be marked as unhealthy
Medium-Risk Scenarios:
- Instance Registration Changes: Adding or removing instances from the load balancer can affect traffic distribution patterns
- SSL Certificate Updates: Changing SSL certificates can cause brief service interruptions during the transition
Low-Risk Scenarios:
- Tag Modifications: Adding or updating tags on load balancers typically has no functional impact
- Connection Settings: Adjusting connection draining timeouts during maintenance windows
Use Cases
Web Application High Availability
A typical e-commerce platform uses Classic Load Balancers to distribute traffic across multiple web servers in different availability zones. When traffic spikes during promotional events, the load balancer ensures that no single server becomes overwhelmed, maintaining consistent response times for customers.
The business impact includes maintaining 99.9% uptime during critical sales periods and automatically handling traffic variations without manual intervention.
Legacy Application Modernization
Organizations running legacy applications often use Classic Load Balancers as a bridge during modernization efforts. The load balancer allows gradual migration from monolithic architectures to microservices by routing traffic between old and new application components.
This approach enables businesses to modernize incrementally while maintaining service availability, reducing the risk associated with big-bang migrations.
Multi-Tier Application Architecture
Development teams deploy Classic Load Balancers in front of application tiers to create clear separation between presentation, application, and data layers. This architecture pattern enables independent scaling of each tier based on demand.
The resulting architecture improves system resilience by isolating failures to specific tiers and enables more efficient resource utilization across the application stack.
Limitations
Feature Constraints
Classic Load Balancers operate at Layer 4 (TCP) and basic Layer 7 (HTTP/HTTPS), which limits their ability to perform advanced routing based on application-specific criteria. They cannot route traffic based on request headers, paths, or other HTTP attributes that modern applications often require.
Advanced features like WebSocket support, HTTP/2, and Server Name Indication (SNI) are not available with Classic Load Balancers, requiring migration to Application Load Balancers for these capabilities.
Scaling and Performance
Classic Load Balancers have pre-warming requirements for handling sudden traffic spikes. Unlike newer load balancer types that scale more dynamically, Classic Load Balancers may experience performance issues during rapid traffic increases without proper pre-warming.
The older architecture also means higher latency compared to Application Load Balancers, particularly for HTTP/HTTPS traffic that could benefit from more modern routing algorithms.
Management Complexity
Classic Load Balancers require more manual configuration compared to newer load balancer types. Health check configurations are less flexible, and while ACM certificates can be attached to HTTPS listeners, features such as SNI and multiple certificates per listener are not available, which limits how certificates can be managed.
The lack of target groups means instance management is more complex, requiring direct registration and deregistration of instances rather than the more flexible target group approach used by newer load balancer types.
Conclusions
Classic Load Balancers are a foundational AWS service that provides basic traffic distribution capabilities. They support essential load balancing functions with straightforward configuration options, making them suitable for simple web applications and legacy system integration scenarios.
The service integrates well with core AWS services like EC2, Route 53, and Auto Scaling Groups, providing a solid foundation for high availability architectures. However, their limitations in advanced routing capabilities and modern protocol support make them less suitable for complex, modern application architectures.
For infrastructure running on AWS, Classic Load Balancers offer proven reliability and simplicity, particularly when advanced routing features aren't required. However, using Classic Load Balancers in Terraform requires careful consideration of their dependencies and limitations, especially when planning migrations to more modern load balancer types.
Changes to Classic Load Balancer configurations can have significant blast radius effects, particularly in production environments where they serve as critical traffic distribution points. Understanding these dependencies through tools like Overmind becomes essential for maintaining system reliability during infrastructure changes.