
After jumping on the n8n train and loving every second of it, we decided to put together a module for deploying n8n to AWS using Terraform. It's a jumping-off point for anyone who is interested in hosting n8n in their own account!

An Overview

While n8n's documentation covers Docker Compose deployments, we needed:

  • High availability across multiple AWS availability zones
  • Auto-scaling database that costs less during off-hours
  • Zero-downtime deployments for seamless version upgrades
  • Production security with private subnets and encryption
  • Terraform-native infrastructure for our existing workflows

The official n8n documentation doesn't provide an AWS Fargate module. We built one.


Architecture

Our module deploys a production-ready n8n cluster on AWS Fargate with Aurora Serverless v2:

┌─────────────────────────────────────────────────────┐
│                    Internet                         │
└──────────────────────┬──────────────────────────────┘
                       │
                       ▼
┌─────────────────────────────────────────────────────┐
│          Application Load Balancer (ALB)            │
│              SSL/TLS Termination                    │
│           (Self-signed or ACM cert)                 │
└──────────────┬─────────────────┬────────────────────┘
               │                 │
               ▼                 ▼
    ┌──────────────────┐  ┌──────────────────┐
    │  ECS Fargate     │  │  ECS Fargate     │
    │  Task (AZ-1)     │  │  Task (AZ-2)     │
    │  n8n Container   │  │  n8n Container   │
    └────────┬─────────┘  └─────────┬────────┘
             │                      │
             │  Private Subnets     │
             └──────────┬───────────┘
                        │
                        ▼
            ┌───────────────────────┐
            │  Aurora Serverless v2 │
            │  PostgreSQL Database  │
            │  Auto-scales 0.5-128  │
            │  ACU (capacity units) │
            └───────────────────────┘
                        │
                   Private Subnet

Key Components

1. ECS Fargate Tasks

  • Run n8n containers without managing servers
  • Configurable CPU (256-4096) and memory (512-30720 MB)
  • Auto-restart on failure for resilience
  • Task count adjustable (1-10 for horizontal scaling)

2. Application Load Balancer

  • Distributes traffic across multiple availability zones
  • Health checks ensure traffic only goes to healthy tasks
  • HTTPS enabled with self-signed or ACM certificates
  • Sticky sessions for n8n UI consistency
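The sticky-session behavior above can be sketched in Terraform roughly like this (resource and attribute values are illustrative, not the module's exact identifiers):

```hcl
# Illustrative target group; keeps a user's browser pinned to one task
# so the n8n editor UI stays consistent across requests.
resource "aws_lb_target_group" "n8n" {
  name        = "n8n-tg"
  port        = 5678            # n8n's default container port
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id
  target_type = "ip"            # required for Fargate (awsvpc networking)

  stickiness {
    type            = "lb_cookie"
    cookie_duration = 86400     # pin sessions for 24 hours
    enabled         = true
  }
}
```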

3. Aurora Serverless v2

  • Auto-scales from 0.5 to 128 ACU based on demand
  • Saves money: Scales down to 0.5 ACU during off-hours (~$80/month savings)
  • High availability: Multi-AZ deployment with automatic failover
  • Encrypted at rest with AWS KMS
  • Automatic backups: 7-day retention by default

4. VPC & Networking

  • Private subnets for ECS tasks and database (no direct internet access)
  • Public subnets for load balancer only
  • NAT Gateway for outbound internet (n8n webhook calls, API integrations)
  • Security groups with least-privilege rules
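To make "least-privilege" concrete, here is a sketch of the kind of rules involved (resource names are illustrative): only the ALB may reach the n8n tasks, and only the tasks may reach Postgres.

```hcl
# ALB -> ECS tasks, on n8n's port only
resource "aws_security_group_rule" "alb_to_ecs" {
  type                     = "ingress"
  from_port                = 5678              # n8n's default port
  to_port                  = 5678
  protocol                 = "tcp"
  security_group_id        = aws_security_group.ecs_tasks.id
  source_security_group_id = aws_security_group.alb.id
}

# ECS tasks -> Aurora, on the PostgreSQL port only
resource "aws_security_group_rule" "ecs_to_db" {
  type                     = "ingress"
  from_port                = 5432              # PostgreSQL
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.database.id
  source_security_group_id = aws_security_group.ecs_tasks.id
}
```

Nothing in the private subnets accepts traffic from the internet directly; everything is referenced security-group-to-security-group rather than by CIDR.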

5. IAM & Security

  • Least-privilege IAM roles for ECS tasks
  • Secrets stored in AWS Secrets Manager (encrypted)
  • Encryption in transit (HTTPS/TLS) and at rest (KMS)
  • No hardcoded credentials

Cost Breakdown

One of the biggest questions we get: "How much does this actually cost?"

Here's our transparent monthly cost analysis (us-east-1 region):

Component                   Configuration                    Monthly Cost
─────────────────────────────────────────────────────────────────────────
ECS Fargate                 2 tasks × 0.5 vCPU, 1 GB RAM     ~$30
Aurora Serverless v2        0.5-2 ACU average                ~$50-100
Application Load Balancer   Standard ALB                     ~$20
NAT Gateway                 1 NAT Gateway                    ~$35
Data Transfer               Minimal (mostly internal)        ~$5
Secrets Manager             2 secrets                        ~$1
CloudWatch Logs             Standard retention               ~$5
VPC                         No additional charge             $0
S3 (backups)                Optional, minimal                ~$2
─────────────────────────────────────────────────────────────────────────
Total                       Base configuration               ~$160-230/month

Cost Optimization Tips

During business hours (8am-6pm):

  • Aurora scales up to 2-4 ACU for active workflows
  • Cost: ~$100/month for database

During off-hours (nights/weekends):

  • Aurora scales down to 0.5 ACU automatically
  • Cost: ~$20/month for database
  • Savings: $80/month without any manual intervention

Further optimizations:

  • Use Fargate Spot for 70% cost reduction (for non-critical workflows)
  • Single-AZ deployment for dev/staging (cuts costs in half)
  • Run a single NAT Gateway instead of one per AZ (~$15/month savings)
  • Use VPC endpoints for S3/Secrets Manager (eliminate NAT charges)
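The Fargate Spot tip above can be expressed with a capacity provider strategy on the ECS service. This is a sketch, not the module's actual configuration; the weights are illustrative:

```hcl
resource "aws_ecs_service" "n8n" {
  # ...cluster, task_definition, desired_count, network config...

  capacity_provider_strategy {
    capacity_provider = "FARGATE"
    base              = 1    # keep at least one task on on-demand Fargate
    weight            = 1
  }

  capacity_provider_strategy {
    capacity_provider = "FARGATE_SPOT"
    weight            = 3    # run extra capacity on Spot for ~70% savings
  }
}
```

Because Spot tasks can be interrupted with two minutes' notice, keep a `base` of on-demand capacity so at least one n8n instance is always guaranteed.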

Comparison to managed solutions:

  • Zapier Business Plan: ~$600/month × 12 = $7,200/year
  • Make.com Teams Plan: ~$300/month × 12 = $3,600/year
  • Our module: ~$160/month × 12 = $1,920/year (73% cheaper than Zapier)

High Availability & Scalability

Multi-AZ Deployment

By default, the module deploys across 2+ availability zones:

AZ us-east-1a          AZ us-east-1b
┌──────────────┐      ┌──────────────┐
│ ECS Task 1   │      │ ECS Task 2   │
│ n8n instance │      │ n8n instance │
└──────────────┘      └──────────────┘
       │                     │
       └──────────┬──────────┘
                  │
         ┌────────────────┐
         │ Aurora Primary │
         │   (AZ-1a)      │
         └────────┬───────┘
                  │
         ┌────────────────┐
         │ Aurora Replica │
         │   (AZ-1b)      │
         └────────────────┘

Benefits:

  • Zone failure tolerance: If AZ-1a goes down, traffic routes to AZ-1b automatically
  • Database failover: Aurora promotes replica to primary in ~30 seconds
  • No data loss: Aurora's storage layer keeps six copies of your data across three AZs
  • Health checks: ALB removes unhealthy tasks from rotation

Zero-Downtime Deployments

Updating n8n versions is seamless:

# Update n8n version in terraform.tfvars
n8n_version = "1.25.0"  # was 1.24.0

# Apply changes
terraform apply

What happens:

  1. Terraform creates new tasks with v1.25.0
  2. New tasks register with load balancer
  3. Health checks confirm new tasks are healthy
  4. Load balancer drains connections from old tasks
  5. Old tasks shut down gracefully
  6. Zero downtime for end users
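The rolling update described in those steps is typically driven by the ECS service's deployment settings. A sketch with illustrative values (not necessarily the module's exact defaults):

```hcl
resource "aws_ecs_service" "n8n" {
  # ...cluster, task_definition, load_balancer config...
  desired_count                      = 2
  deployment_minimum_healthy_percent = 100  # never drop below current capacity
  deployment_maximum_percent         = 200  # run old + new tasks side by side

  deployment_circuit_breaker {
    enable   = true
    rollback = true  # auto-roll back if new tasks fail ALB health checks
  }
}
```

With 100%/200%, ECS launches the v1.25.0 tasks alongside the v1.24.0 ones, waits for them to pass health checks, and only then drains and stops the old tasks.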

Scaling Options

Horizontal Scaling (more tasks):

desired_count = 4  # Run 4 n8n instances
  • Handles more concurrent workflows
  • Better fault tolerance
  • Load distributed across instances
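If you want the task count to track demand rather than setting it by hand, you could layer AWS Application Auto Scaling on top of the service. This is not part of the module's stated interface; the resource IDs below are hypothetical:

```hcl
# CPU-based target tracking for the ECS service (names are placeholders)
resource "aws_appautoscaling_target" "n8n" {
  service_namespace  = "ecs"
  resource_id        = "service/n8n-cluster/n8n-service"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 10
}

resource "aws_appautoscaling_policy" "n8n_cpu" {
  name               = "n8n-cpu-target-tracking"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.n8n.service_namespace
  resource_id        = aws_appautoscaling_target.n8n.resource_id
  scalable_dimension = aws_appautoscaling_target.n8n.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 60  # add tasks when average CPU exceeds ~60%
    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}
```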

Vertical Scaling (bigger tasks):

cpu    = 1024   # 1 vCPU (was 512)
memory = 2048   # 2 GB (was 1024)
  • Handles more complex workflows per instance
  • Better for CPU/memory intensive operations

Database Auto-Scaling:

min_capacity = 0.5   # Minimum ACU
max_capacity = 16    # Maximum ACU (auto-scales)
  • Aurora automatically scales based on:
    • CPU utilization
    • Database connections
    • Query throughput
  • No manual intervention required
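Under the hood, those two variables map onto Aurora Serverless v2's scaling configuration. Roughly (resource names are illustrative):

```hcl
resource "aws_rds_cluster" "n8n" {
  engine        = "aurora-postgresql"
  engine_mode   = "provisioned"   # Serverless v2 runs in provisioned mode
  database_name = "n8n"
  # ...credentials, networking, encryption...

  serverlessv2_scaling_configuration {
    min_capacity = 0.5   # floor during off-hours
    max_capacity = 16    # ceiling during peak load
  }
}

# Instances in a Serverless v2 cluster use the special serverless class
resource "aws_rds_cluster_instance" "n8n" {
  cluster_identifier = aws_rds_cluster.n8n.id
  instance_class     = "db.serverless"
  engine             = aws_rds_cluster.n8n.engine
}
```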

Getting Started

Ready to deploy your own production n8n cluster? Here's how:

Prerequisites

  • AWS account with appropriate permissions
  • Terraform 1.0+ installed
  • Basic familiarity with Terraform and AWS

Step 1: Get the Module

Want to try the module? Enter your email to get the GitHub repository URL.


Step 2: Configure the Module

Once you have access, configuration is straightforward:

Create terraform.tfvars:

# Basic Configuration
project_name    = "my-n8n-cluster"
environment     = "production"
aws_region      = "us-east-1"

# n8n Configuration
n8n_version     = "1.25.0"
n8n_encryption_key = "your-32-character-encryption-key-here"

# Database
db_master_password = "your-secure-database-password-here"

# Optional: SSL Certificate ARN (recommended for production)
# certificate_arn = "arn:aws:acm:us-east-1:123456789:certificate/abc-123"

That's it! The module handles:

  • VPC and subnet creation
  • Security group configuration
  • IAM role creation
  • Load balancer setup
  • ECS cluster and service
  • Aurora database cluster
  • Secrets management
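Wiring the tfvars above into the module looks roughly like this. The `source` URL is a placeholder; the real one comes with repository access:

```hcl
module "n8n" {
  source = "github.com/your-org/terraform-aws-n8n"  # placeholder

  project_name       = var.project_name
  environment        = var.environment
  n8n_version        = var.n8n_version
  n8n_encryption_key = var.n8n_encryption_key
  db_master_password = var.db_master_password
}
```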

Step 3: Deploy

# Initialize Terraform
terraform init

# Review the plan
terraform plan

# Deploy (takes ~10-15 minutes)
terraform apply

# Get your n8n URL
terraform output n8n_url
# Output: https://my-alb-1234567890.us-east-1.elb.amazonaws.com

Step 4: Access n8n

  1. Navigate to the URL from terraform output n8n_url
  2. Create your admin account on first login
  3. Start building workflows!

Step 5: (Optional) Add Custom Domain

# In terraform.tfvars
domain_name = "n8n.mycompany.com"

# Create Route53 record pointing to ALB
# Request ACM certificate for domain
# Update certificate_arn in tfvars
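The Route53 and ACM steps in those comments look roughly like this in Terraform (resource names are illustrative, and DNS validation records are omitted for brevity):

```hcl
resource "aws_acm_certificate" "n8n" {
  domain_name       = "n8n.mycompany.com"
  validation_method = "DNS"
}

# Alias record pointing the custom domain at the module's ALB
resource "aws_route53_record" "n8n" {
  zone_id = data.aws_route53_zone.main.zone_id
  name    = "n8n.mycompany.com"
  type    = "A"

  alias {
    name                   = aws_lb.n8n.dns_name
    zone_id                = aws_lb.n8n.zone_id
    evaluate_target_health = true
  }
}
```

Once the certificate is issued, pass its ARN as `certificate_arn` in terraform.tfvars and re-apply.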

Real-World Use Cases

Here's what we've built with our n8n cluster:

1. Customer Onboarding

Workflow: Stripe Payment → n8n → AWS/Email/Slack

  • Stripe webhook triggers on new subscription
  • Creates DynamoDB user record
  • Generates API key with Secrets Manager
  • Creates Cognito user account
  • Sends welcome email via SES
  • Notifies team in Slack

Result: 100% automated onboarding, zero manual steps

2. Infrastructure Provisioning

Workflow: GitHub Issue → n8n → Terraform Cloud → Slack

  • New GitHub issue with label "provision" triggers workflow
  • Validates request parameters
  • Triggers Terraform Cloud run via API
  • Monitors run status
  • Posts results to Slack with resource URLs

Result: Self-service infrastructure for engineering teams

3. Scheduled AWS Maintenance

Workflow: Cron Schedule → n8n → AWS APIs → S3 → Email

  • Runs every Sunday at 2am
  • Identifies unused EBS snapshots, AMIs, elastic IPs
  • Generates cost optimization report
  • Uploads to S3
  • Emails infrastructure team with recommendations

Result: Saved $500/month in AWS costs

4. Compliance Reporting

Workflow: Monthly Schedule → n8n → AWS Config/CloudTrail → PDF → S3

  • Queries AWS Config for compliance status
  • Pulls CloudTrail logs for audit events
  • Generates PDF report with charts
  • Stores in S3 with versioning
  • Emails compliance team

Result: Automated SOC 2 evidence collection


Logs & Debugging

All logs flow to CloudWatch Logs:

# View n8n logs
aws logs tail /ecs/n8n-production --follow

# Search for errors
aws logs filter-log-events \
  --log-group-name /ecs/n8n-production \
  --filter-pattern ERROR

# Export logs to S3 for long-term retention
aws logs create-export-task \
  --log-group-name /ecs/n8n-production \
  --from $(date -d '30 days ago' +%s)000 \
  --to $(date +%s)000 \
  --destination s3-bucket-name

Structured logging in n8n workflows:

// In n8n Function node
const logger = {
  info: (msg, data) => console.log(JSON.stringify({
    level: 'INFO',
    message: msg,
    timestamp: new Date().toISOString(),
    ...data
  }))
};

logger.info('Workflow started', {
  workflowId: $workflow.id,
  executionId: $execution.id
});

Health Checks

ALB Target Health:

  • Checks /healthz endpoint every 30 seconds
  • Unhealthy threshold: 2 consecutive failures
  • Healthy threshold: 2 consecutive successes
  • Removes unhealthy targets from rotation automatically
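Those thresholds correspond to the target group's health-check block, roughly as follows (resource name is illustrative):

```hcl
resource "aws_lb_target_group" "n8n" {
  # ...port, protocol, vpc_id, target_type...

  health_check {
    path                = "/healthz"  # n8n's built-in health endpoint
    interval            = 30          # seconds between checks
    unhealthy_threshold = 2           # consecutive failures before removal
    healthy_threshold   = 2           # consecutive successes before re-admission
    matcher             = "200"
  }
}
```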

Database Connection Monitoring:

-- n8n workflow to monitor DB health
-- Runs every 5 minutes via a cron trigger
SELECT
  state,
  COUNT(*) AS connection_count
FROM pg_stat_activity
WHERE datname = 'n8n'
GROUP BY state;

Alerts if connection count exceeds threshold → potential memory leak or scaling issue.


Contributing

We welcome contributions! Here's how you can help:

Bug Reports: Open an issue on GitHub with:

  • Terraform version
  • Module version
  • Error messages
  • Expected vs actual behavior

Feature Requests: Describe your use case and why the feature would help

Pull Requests:

  • Fork the repository
  • Create a feature branch
  • Write tests (Terratest)
  • Update documentation
  • Submit PR with clear description

Community & Support

Get the Module

Ready to deploy production-ready n8n on AWS?

Get the GitHub URL by entering your email above (we'll send you the link instantly).

Join the Conversation

  • GitHub Issues: Report bugs, request features
  • Discussions: Share your n8n workflows and infrastructure tips
  • Newsletter: Get updates on new modules and AWS automation best practices

Need Help?

  • Documentation: Full module documentation in the GitHub README
  • Email Support: info@aiopscrew.com (we respond within 24 hours)
  • Consulting: Need help with complex deployments? We offer consulting services

Conclusion

Building production-ready infrastructure shouldn't require weeks of research and trial-and-error. With this Terraform module, you can deploy a scalable, secure, highly available n8n cluster in under 15 minutes.

What's Your Use Case?

We'd love to hear how you're using the module! Drop a comment or email us at info@aiopscrew.com with:

  • Your workflows (CloudWatch automation, CI/CD, data pipelines?)
  • Infrastructure tweaks you made
  • Cost optimizations you discovered
  • Feature requests

About AI Ops Crew

We build production-ready Terraform modules for AWS operations. Our mission: make infrastructure automation accessible to every engineering team.

Our Modules:

  • CloudWatch AI Agent (Premium - $5/mo): AI-powered alarm investigation with real-time AWS analysis
  • n8n Fargate Cluster (Free): Workflow automation platform on AWS
  • More coming soon...

Follow our journey as we open-source more infrastructure tools. Subscribe to our newsletter for updates.


Want to deploy your n8n cluster right now? Get the module and be up and running in 15 minutes.

Happy automating! 🚀


Have questions about the module or want to share your implementation? Email us at info@aiopscrew.com or open a discussion on GitHub.