Subsections of Simplified FortiGate Autoscale Template
Introduction
Welcome
This documentation provides comprehensive guidance for deploying FortiGate autoscale groups in AWS using the FortiGate Autoscale Simplified Template. This template serves as an accessible wrapper around Fortinet’s enterprise-grade FortiGate Autoscale Templates, dramatically reducing deployment complexity while maintaining full architectural flexibility.
Purpose and Scope
The official FortiGate autoscale templates available in the terraform-aws-cloud-modules repository deliver powerful capabilities for deploying elastic, scalable security architectures in AWS. However, these templates require:
- Deep familiarity with complex Terraform variable structures
- Strict adherence to specific syntax requirements
- Extensive knowledge of AWS networking and FortiGate architectures
- Significant time investment to understand configuration dependencies
The Simplified Template addresses these challenges by:
- Abstracting complexity: Encapsulates intricate configuration patterns into intuitive boolean variables and straightforward parameters
- Accelerating deployment: Reduces configuration time from hours to minutes through common-use-case defaults
- Maintaining flexibility: Retains access to advanced features while providing sensible defaults for standard deployments
- Reducing errors: Minimizes misconfiguration risks through validated input patterns and clear parameter descriptions
What This Template Provides
The Simplified Template enables rapid deployment of FortiGate autoscale groups by simplifying configuration of:
Core Infrastructure
- Network architecture: VPC creation or integration with existing network resources
- Subnet design: Automated subnet allocation across multiple Availability Zones
- Transit Gateway integration: Optional connectivity to existing Transit Gateway hubs
- Load balancing: AWS Gateway Load Balancer (GWLB) configuration and target group management
Autoscale Configuration
- Capacity management: Minimum, maximum, and desired instance counts
- Scaling policies: CPU-based thresholds and CloudWatch alarm configuration
- Instance specifications: FortiGate version, instance type, and AMI selection
- High availability: Multi-AZ distribution and health check parameters
Licensing and Management
- Licensing flexibility: Support for BYOL, PAYG, and FortiFlex licensing models
- License automation: Automated license file distribution or token generation
- Hybrid licensing: Configuration for combining multiple license types
- FortiManager integration: Optional centralized management and policy orchestration
Security and Access
- Management access: Dedicated management interfaces or combined data/management design
- Key pair configuration: SSH access for administrative operations
- Security groups: Automated creation of appropriate ingress/egress rules
- IAM roles: Lambda function permissions for license and lifecycle management
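To make the IAM point concrete, the sketch below shows the kind of permissions a license-management Lambda typically needs. This is an illustrative assumption, not the template's actual policy; the bucket and table names are placeholders.

```hcl
# Illustrative minimum permissions for a license-management Lambda.
# Bucket and table names are placeholders -- the template generates its own.
data "aws_iam_policy_document" "license_lambda" {
  statement {
    actions = ["s3:GetObject", "s3:ListBucket"]
    resources = [
      "arn:aws:s3:::example-fgt-licenses",
      "arn:aws:s3:::example-fgt-licenses/*",
    ]
  }
  statement {
    actions   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:UpdateItem"]
    resources = ["arn:aws:dynamodb:*:*:table/example-fgt-license-tracking"]
  }
}
```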
Egress Strategies
- Elastic IP allocation: Per-instance EIP assignment for consistent source NAT
- NAT Gateway integration: Shared NAT Gateway configuration for cost optimization
- Route management: Automated routing table updates for egress traffic flows
Common Use Cases
This template is specifically designed for the most frequently deployed FortiGate autoscale architectures:
- Centralized Inspection with Transit Gateway: Single inspection VPC serving multiple spoke VPCs through Transit Gateway routing
- Dedicated Management VPC: Isolated management plane for FortiManager/FortiAnalyzer integration with production traffic inspection VPC
- Hybrid Licensing Architectures: Cost-optimized deployments combining BYOL/FortiFlex baseline capacity with PAYG burst capacity
- Existing Infrastructure Integration: Deployment into pre-existing VPCs, subnets, and Transit Gateway environments
How It Works
The Simplified Template approach:
- Variable Abstraction: Translates complex nested map structures into simple boolean flags and direct parameters
- Conditional Logic: Automatically enables or disables features based on use-case selection
- Default Values: Provides production-ready defaults for parameters not requiring customization
- Validation: Implements input validation to catch configuration errors before deployment
- Module Invocation: Dynamically constructs proper syntax for underlying enterprise templates
- Output Standardization: Presents consistent outputs regardless of architecture variation
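As a rough sketch of what variable abstraction means in practice, a simplified `terraform.tfvars` might read like the fragment below. The variable names here are illustrative assumptions; consult the template's variable reference for the authoritative names and defaults.

```hcl
# terraform.tfvars -- illustrative sketch; variable names are assumptions,
# not the template's authoritative inputs.
create_new_vpc         = true    # boolean flag instead of a nested VPC map
attach_to_existing_tgw = false   # enable Transit Gateway integration
license_type           = "payg"  # "byol", "payg", or "fortiflex"
asg_min_size           = 2
asg_max_size           = 6
```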
Prerequisites
Before using this template, ensure you have:
Required Knowledge
- Basic understanding of AWS networking concepts (VPCs, subnets, route tables)
- Familiarity with the Terraform workflow (`init`, `plan`, `apply`, `destroy`)
- General understanding of FortiGate firewall concepts
- AWS account with appropriate permissions for VPC, EC2, Lambda, and IAM resource creation
Required Tools
- Terraform: Version 1.0 or later (Download)
- AWS CLI: Configured with appropriate credentials (Installation Guide)
- Git: For cloning the repository
- Text Editor: For editing `terraform.tfvars` configuration files
AWS Resources
- AWS Account: With permissions to create VPCs, subnets, EC2 instances, Lambda functions, and IAM roles
- Service Quotas: Sufficient EC2 instance limits for desired autoscale group size
- S3 Bucket (for BYOL): Storage location for FortiGate license files
- Key Pair: Existing EC2 key pair for SSH access to FortiGate instances
Optional Resources
- FortiManager: For centralized management (if integration is desired)
- FortiAnalyzer: For centralized logging and reporting
- Transit Gateway: If integrating with existing hub-and-spoke architecture
- FortiFlex Account: If using FortiFlex licensing model
Documentation Structure
This guide is organized into the following sections:
- Introduction (this section): Overview, purpose, and prerequisites
- Overview: Architecture patterns, key benefits, and solution capabilities
- Licensing: Detailed comparison of BYOL, PAYG, and FortiFlex licensing options
- Solution Components: In-depth explanation of architectural elements and configuration options
- Templates: Step-by-step deployment procedures and configuration examples
Additional Resources
For comprehensive FortiGate and FortiOS documentation beyond the scope of this deployment guide, please reference:
- FortiGate Documentation Portal: docs.fortinet.com
- FortiGate AWS Deployment Guides: docs.fortinet.com/document/fortigate-public-cloud/
- AWS Marketplace FortiGate Listings: AWS Marketplace
- Fortinet Developer Network (FNDN): fndn.fortinet.net (requires registration)
- FortiGate Administration Guide: docs.fortinet.com/fortigate/admin-guide
- Terraform AWS Cloud Modules Repository: GitHub - fortinetdev/terraform-aws-cloud-modules
Support and Feedback
For technical support:
- Enterprise Template Issues: Report issues on the terraform-aws-cloud-modules GitHub repository
- FortiGate Technical Support: Open support tickets at FortiCare Support Portal
- AWS Infrastructure Issues: Contact AWS Support through your AWS account
For documentation feedback or Simplified Template enhancement requests, please reach out through your Fortinet account team or technical contacts.
Getting Started
Ready to deploy? Proceed to the Overview section to understand the architecture patterns available, or jump directly to the Templates section to begin configuration and deployment.
Overview
Introduction
FortiOS natively supports AWS Autoscaling capabilities, enabling dynamic horizontal scaling of FortiGate clusters within AWS environments. This solution leverages AWS Gateway Load Balancer (GWLB) to intelligently distribute traffic across FortiGate instances in the autoscale group. The cluster dynamically adjusts its capacity based on configurable thresholds—automatically launching new instances when the cluster size falls below the minimum threshold and terminating instances when capacity exceeds the maximum threshold. As instances are added or removed, they are seamlessly registered with or deregistered from associated GWLB target groups, ensuring continuous traffic inspection capabilities while maintaining optimal cluster performance and capacity.
Key Benefits
This autoscaling solution delivers several strategic advantages for AWS security architectures:
Elastic Scalability
- Horizontal scaling: Automatically scales FortiGate cluster capacity in response to traffic patterns and resource utilization
- Cost optimization: Scales down during low-traffic periods to reduce operational costs
- Performance assurance: Scales up during peak demand to maintain consistent security inspection throughput
Flexible Licensing Options
- Hybrid licensing model: Supports combination of BYOL (Bring Your Own License), FortiFlex usage-based licensing for baseline capacity, and AWS Marketplace PAYG (Pay-As-You-Go) for elastic burst capacity
- License optimization: Minimize costs by using BYOL/FortiFlex licenses for steady-state workloads and PAYG for temporary scale-out events
- Simplified license management: Automated license token injection during instance launch via Lambda functions
High Availability and Configuration Management
- Automated configuration synchronization: Primary FortiGate instance automatically synchronizes security policies and configuration to secondary instances using FortiOS native HA sync mechanisms
- FortiManager integration: Optional centralized management through FortiManager for policy orchestration, compliance monitoring, and operational visibility across the autoscale group
- Consistent security posture: Configuration drift prevention ensures all instances enforce identical security policies
Architectural Flexibility
- Centralized inspection architecture: Single inspection VPC model with Transit Gateway integration for hub-and-spoke topology
- Distributed inspection architecture: Multiple inspection points for geo-distributed workloads (coming soon)
- Deployment patterns: Support for single-arm (1-ENI) and dual-arm (2-ENI) FortiGate deployments
Internet Egress Options
- Elastic IP (EIP) NAT: Each FortiGate instance can leverage individual EIPs for source NAT, providing consistent egress IP addresses for allowlist scenarios
- NAT Gateway integration: Alternative architecture using shared NAT Gateways for cost-optimized egress traffic when static source IPs are not required
- Hybrid egress design: Combine EIP and NAT Gateway approaches based on application requirements
Architecture Considerations
This simplified template streamlines the deployment of FortiGate autoscale groups by abstracting infrastructure complexity while providing customization options for:
- VPC and subnet configuration
- Licensing strategy selection
- FortiManager/FortiAnalyzer integration
- Network interface design (dedicated management ENI options)
- Scaling policies and thresholds
- Transit Gateway attachment and routing
Additional Solutions
Fortinet offers several complementary AWS security architectures optimized for different use cases:
- FGCP HA (Single AZ): Active-passive high availability within a single Availability Zone for maximum configuration synchronization and stateful failover
- FGCP HA (Multi-AZ): Active-passive high availability across multiple Availability Zones for enhanced resilience
- Transit Gateway with FortiGate inspection: Centralized security inspection for multi-VPC environments
- Distributed Gateway Load Balancer architectures: Regional traffic inspection patterns
For comprehensive information on Fortinet’s AWS security portfolio, deployment guides, and architectural best practices, visit www.fortinet.com/aws.
Licensing
Overview
FortiGate autoscale deployments in AWS support three distinct licensing models, each optimized for different operational requirements, cost structures, and scaling behaviors. The choice of licensing strategy significantly impacts deployment complexity, operational costs, and the ability to dynamically scale capacity in response to demand.
This template supports all three licensing models and enables hybrid licensing configurations where multiple license types coexist within the same autoscale group, providing maximum flexibility for cost optimization and capacity management.
Licensing Options
AWS Marketplace Pay-As-You-Go (PAYG)
Best for: Proof of concepts, temporary workloads, elastic burst capacity
AWS Marketplace PAYG licensing offers the simplest deployment path with zero upfront licensing requirements. Instances are billed hourly through your AWS account based on instance type and included FortiGuard services.
Advantages
- Zero configuration: No license files, tokens, or registration required
- Instant deployment: Instances launch immediately without license provisioning delays
- Elastic scaling: Ideal for autoscale groups that frequently scale out and in
- No commitment: Pay only for actual runtime hours with no long-term contracts
- Consolidated billing: All costs appear on AWS invoices alongside infrastructure charges
Considerations
- Higher per-hour cost: Premium pricing compared to BYOL or FortiFlex over extended periods
- Service bundle locked: Cannot customize FortiGuard service subscriptions; you receive the bundle included with the marketplace offering
- Limited cost optimization: No volume discounts or prepaid savings
- Vendor lock-in: Cannot migrate licenses to on-premises or other cloud providers
When to Use
- Development, testing, and staging environments
- Proof-of-concept deployments with undefined timelines
- Burst capacity in hybrid licensing architectures (scale beyond BYOL/FortiFlex baseline)
- Short-term projects (< 6 months) where simplicity outweighs cost
- Disaster recovery standby capacity that remains dormant most of the time
Implementation Notes
- Select PAYG AMI from AWS Marketplace during launch template configuration
- No Lambda-based license management required
- Instances automatically activate upon boot
- FortiGuard services update immediately without additional registration
Bring Your Own License (BYOL)
Best for: Long-term production deployments with predictable capacity requirements
BYOL licensing leverages perpetual or term-based FortiGate-VM licenses purchased directly from Fortinet or authorized resellers. This model provides the lowest per-instance operating cost for sustained workloads but requires manual license file management.
Advantages
- Lowest operating cost: Significant savings (40-60%) compared to PAYG for long-term deployments
- Custom service bundles: Select specific FortiGuard subscriptions (UTP, ATP, Enterprise) based on security requirements
- Portable licenses: Migrate licenses between environments (AWS, Azure, on-premises) with proper licensing terms
- Volume discounts: Enterprise agreements provide additional cost reductions at scale
- Predictable budgeting: Fixed annual or multi-year costs independent of instance runtime
Considerations
- Manual license management: Requires obtaining, storing, and deploying license files for each instance
- Upfront capital expense: Purchase licenses before deployment
- Reduced flexibility: Fixed license count limits maximum autoscale capacity unless additional licenses are procured
- License tracking overhead: Must maintain inventory of assigned vs. available licenses
- Decommissioning process: Requires license recovery when scaling in or decommissioning environments
When to Use
- Production workloads with predictable, steady-state capacity requirements
- Long-term deployments (> 1 year) where cost savings justify management overhead
- Organizations with existing Fortinet licensing agreements or ELAs
- Environments requiring specific FortiGuard service combinations not available in marketplace offerings
- Hybrid licensing architectures as the baseline capacity tier
Implementation Notes
- Store license files in S3 bucket accessible by Lambda function
- Lambda function reads license files and applies them during instance boot
- Configure the `lic_folder_path` variable to point to the license file directory
- Naming convention: license files should match the pattern expected by the Lambda function (e.g., sequential numbering)
- DynamoDB table tracks license assignments to prevent duplicate usage
- Decommissioned instances return licenses to available pool for reuse
License File Requirements
```
licenses/
├── FGVM01-001.lic
├── FGVM01-002.lic
├── FGVM01-003.lic
└── FGVM01-004.lic
```

Critical: Ensure sufficient licenses exist for `asg_max_size`. If licenses are exhausted during scale-out, new instances will remain unlicensed and non-functional.
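A corresponding BYOL configuration fragment might look like the following. Only `lic_folder_path` is referenced elsewhere in this guide; the other names are illustrative placeholders.

```hcl
# BYOL licensing sketch -- lic_folder_path is the variable named in this
# guide; the remaining names are illustrative.
license_type    = "byol"
lic_folder_path = "./licenses"  # directory containing the .lic files above
asg_max_size    = 4             # must not exceed the number of license files
```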
FortiFlex (Usage-Based Licensing)
Best for: Dynamic workloads requiring flexibility with optimized costs for medium to long-term deployments
FortiFlex (formerly Flex-VM) is Fortinet’s consumption-based, points-driven licensing program that combines the flexibility of PAYG with cost structures approaching BYOL. Points are consumed daily based on FortiGate configuration (CPU count, service package), and licenses are dynamically provisioned via API tokens.
Advantages
- Flexible scaling: Provision and deprovision licenses on-demand through API integration
- Optimized costs: 20-40% savings compared to PAYG for sustained workloads
- Automated license lifecycle: Lambda function generates license tokens automatically during instance launch
- Right-sizing capability: Change CPU count or service packages dynamically; pay only for what you consume
- Simplified license management: No physical license files; tokens generated via API calls
- Point pooling: Share point allocations across multiple deployments and cloud providers
- Burst capacity support: Quickly provision additional licenses without procurement delays
Considerations
- Initial setup complexity: Requires FortiFlex program registration, configuration templates, and API integration
- Point management: Monitor point consumption to prevent negative balance or service interruption
- Active entitlement management: Must create/stop entitlements to control costs
- API dependency: Relies on connectivity to FortiFlex API endpoints during instance provisioning
- Grace period risks: Running negative balance triggers 90-day grace period; service stops if not resolved
- Minimum commitment: Some FortiFlex programs require minimum annual consumption
When to Use
- Production workloads with variable but predictable traffic patterns
- Multi-environment deployments (dev, staging, production) sharing point pools
- Organizations pursuing cloud-first strategies without legacy perpetual licenses
- Architectures requiring frequent right-sizing of FortiGate instances
- Deployments spanning multiple cloud providers or hybrid architectures
- Cost-conscious autoscale groups with moderate to high uptime requirements
Implementation Notes
- Register FortiFlex program and purchase point packs via FortiCare portal
- Create FortiGate-VM configurations in FortiFlex portal defining CPU count and service packages
- Generate API credentials through IAM portal with FortiFlex permissions
- Configure Lambda function environment variables with FortiFlex API credentials
- Lambda function creates entitlements and retrieves license tokens during instance launch
- Entitlements automatically STOP when instances terminate, halting point consumption
- Monitor point balance via FortiFlex portal or API to prevent service interruption
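Because the Lambda function needs the FortiFlex API credentials at launch time, storing them in AWS Secrets Manager rather than in plain-text variables is a reasonable pattern (and is recommended in the prerequisites below). The sketch is an assumption about how you might wire that up; secret and variable names are placeholders.

```hcl
# Assumption: keep FortiFlex API credentials in Secrets Manager so the
# Lambda function can read them at runtime. Names are placeholders.
resource "aws_secretsmanager_secret" "fortiflex_api" {
  name = "fortiflex/api-credentials"
}

resource "aws_secretsmanager_secret_version" "fortiflex_api" {
  secret_id = aws_secretsmanager_secret.fortiflex_api.id
  secret_string = jsonencode({
    username = var.fortiflex_api_username  # assumed input variables
    password = var.fortiflex_api_password
  })
}
```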
FortiFlex Prerequisites
FortiFlex Program Registration:
- Purchase program SKU: `FC-10-ELAVR-221-02-XX` (12, 36, or 60 months)
- Register the program in FortiCare at https://support.fortinet.com
- Wait up to 4 hours for program validation

Point Pack Purchase:
- Annual packs: `LIC-ELAVM-10K` (10,000 points, 1-year term with rollover)
- Multi-year packs: `LIC-ELAVMMY-50K-XX` (50,000 points, 3-5 year terms)
- Bulk packs: `LIC-ELAVMMY-BULK-SEAT` (100,000 points per seat, minimum 10 seats)
Configuration Creation:
- Define VM specifications (CPU count, service package, VDOMs)
- Example: 2-CPU FortiGate with UTP bundle = ~6.5 points/day
- Use the FortiFlex Calculator to estimate consumption: https://fndn.fortinet.net/index.php?/tools/fortiflex/
API Access Setup:
- Create IAM permission profile including FortiFlex portal
- Create API user and download credentials
- Obtain API token via authentication endpoint
- Store credentials securely (AWS Secrets Manager recommended)
Point Consumption Examples
| Configuration | Daily Points | Monthly Points (30 days) | Annual Points |
|---|---|---|---|
| 1 CPU, FortiCare Premium | 1.63 | 49 | 595 |
| 2 CPU, UTP Bundle | 6.52 | 196 | 2,380 |
| 4 CPU, ATP Bundle | 26.08 | 782 | 9,519 |
| 8 CPU, Enterprise Bundle | 104.32 | 3,130 | 38,077 |
Note: Actual consumption varies based on specific service selections and VDOM count. Always use the FortiFlex Calculator for accurate estimates.
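The table values are straight multiples of the daily rate, so a point budget can be sanity-checked directly in Terraform. Using the 2-CPU / UTP row as an example:

```hcl
# Point-budget check for a 4-instance baseline at 6.52 points/day
# (2 CPU, UTP bundle, per the table above).
locals {
  daily_points   = 6.52
  instance_count = 4
  annual_points  = local.daily_points * local.instance_count * 365  # 9,519.2
  points_on_hand = 10000  # e.g., one LIC-ELAVM-10K annual pack
  budget_ok      = local.points_on_hand >= local.annual_points
}
```

With these numbers, a single 10,000-point pack just covers the year with almost no buffer; the 20-30% buffer recommended in the best practices below would call for additional points.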
Hybrid Licensing Architecture
Overview
The autoscale template supports hybrid licensing configurations where multiple license types coexist within separate Auto Scaling Groups (ASGs). This architecture provides cost optimization by using BYOL or FortiFlex for baseline capacity and PAYG for elastic burst capacity.
Architecture Pattern
┌─────────────────────────────────────────────────────┐
│ GWLB Target Group │
│ (Unified) │
└────────┬────────────────────────────────┬───────────┘
│ │
▼ ▼
┌─────────────────┐ ┌─────────────────┐
│ BYOL/FortiFlex │ │ PAYG ASG │
│ ASG │ │ │
│ │ │ │
│ Min: 2 │ │ Min: 0 │
│ Max: 4 │ │ Max: 8 │
│ Desired: 2 │ │ Desired: 0 │
│ │ │ │
│ (Baseline) │ │ (Burst) │
└─────────────────┘       └─────────────────┘

Configuration Strategy
Primary ASG (BYOL or FortiFlex):
- Configure with minimum = desired capacity
- Sets baseline capacity for steady-state traffic
- Lower per-instance cost for sustained operation
- Example: `min_size = 2`, `max_size = 4`, `desired_capacity = 2`
Secondary ASG (PAYG):
- Configure with minimum = 0, desired = 0
- Remains dormant during normal operations
- Scales out only when primary ASG reaches maximum capacity
- Example: `min_size = 0`, `max_size = 8`, `desired_capacity = 0`
Scaling Coordination:
- Configure CloudWatch alarms with staggered thresholds
- Primary ASG scales at lower CPU threshold (e.g., 60%)
- Secondary ASG scales at higher CPU threshold (e.g., 75%)
- Provides buffer for primary ASG to stabilize before burst scaling
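The staggered thresholds described above map onto ordinary CloudWatch alarm resources. The sketch below is illustrative only; the ASG and scaling-policy names are placeholders, and the underlying template may wire these up differently.

```hcl
# Staggered scale-out alarms (resource and ASG names are placeholders).
resource "aws_cloudwatch_metric_alarm" "primary_cpu_high" {
  alarm_name          = "fgt-primary-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 60
  evaluation_periods  = 3
  comparison_operator = "GreaterThanThreshold"
  threshold           = 60  # primary (BYOL/FortiFlex) tier scales first
  dimensions          = { AutoScalingGroupName = "fgt-byol-asg" }
  alarm_actions       = [aws_autoscaling_policy.primary_scale_out.arn]
}

resource "aws_cloudwatch_metric_alarm" "burst_cpu_high" {
  alarm_name          = "fgt-burst-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 60
  evaluation_periods  = 3
  comparison_operator = "GreaterThanThreshold"
  threshold           = 75  # PAYG burst tier engages later
  dimensions          = { AutoScalingGroupName = "fgt-payg-asg" }
  alarm_actions       = [aws_autoscaling_policy.burst_scale_out.arn]
}
```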
Cost Optimization Example
Scenario: E-commerce application with baseline 4 Gbps throughput, occasional spikes to 12 Gbps
Hybrid Configuration:
Primary: 4x c6i.xlarge (4 vCPUs) with FortiFlex
- Daily points: 4 instances × 26.08 points = 104.32 points/day
- Monthly cost: ~$X (based on point pricing)
- Handles baseline traffic continuously
Secondary: 0-8x c6i.xlarge with PAYG
- Hourly cost: $Y per instance
- Scales only during traffic spikes (estimated 10% of time)
- Monthly cost: 8 instances × $Y/hour × 720 hours × 0.10 = $Z
Savings vs. Pure PAYG: Approximately 35-45% reduction for this traffic pattern
Implementation Notes
- Both ASGs register with same GWLB target group for unified traffic distribution
- Each ASG requires separate launch template with appropriate licensing configuration
- CloudWatch alarms must reference correct ASG names for scaling actions
- Lambda function handles license provisioning independently for each ASG
- Monitor scaling activities to validate primary ASG exhausts capacity before secondary ASG activates
License Selection Decision Tree
START: What is your deployment scenario?
│
├─ POC / Testing / Short-term project (< 6 months)
│ └─ Use: AWS Marketplace PAYG
│ └─ Rationale: Simplicity, no upfront investment, easy teardown
│
├─ Long-term production (> 12 months) with steady-state capacity
│ └─ Do you have existing Fortinet licenses or ELA?
│ ├─ YES → Use: BYOL
│ │ └─ Rationale: Lowest cost, leverage existing investment
│ └─ NO → Use: FortiFlex
│ └─ Rationale: Flexible, better cost than PAYG, no upfront licensing
│
├─ Production with variable traffic patterns
│ └─ Use: Hybrid (FortiFlex + PAYG)
│ └─ Rationale: Baseline cost optimization with elastic burst capacity
│
└─ Multi-environment deployment (dev/staging/prod)
└─ Use: FortiFlex
   └─ Rationale: Point pooling across environments, on-demand provisioning

Best Practices
General Recommendations
Calculate total cost of ownership (TCO):
- Project instance runtime hours over 12-36 months
- Factor in scaling frequency and burst capacity requirements
- Include license management overhead costs for BYOL
- Use FortiFlex Calculator for accurate point consumption estimates
Start with PAYG for prototyping:
- Validate architecture and sizing before committing to licenses
- Measure actual traffic patterns to inform license type selection
- Convert to BYOL or FortiFlex after requirements stabilize
Implement hybrid licensing for cost optimization:
- Use BYOL/FortiFlex for baseline capacity that runs 24/7
- Use PAYG for burst capacity that scales intermittently
- Monitor scaling patterns monthly and adjust ASG configurations
Automate license lifecycle management:
- Use Lambda functions for automated license provisioning
- Implement DynamoDB tracking for BYOL license assignments
- Enable CloudWatch alarms for FortiFlex point balance monitoring
- Store FortiFlex API credentials in AWS Secrets Manager
BYOL-Specific Best Practices
Maintain license inventory:
- Track assigned vs. available licenses in spreadsheet or CMDB
- Reserve a 10-20% buffer above `asg_max_size` for maintenance windows
- Implement automated alerts when available licenses fall below a threshold
Standardize license file naming:
- Use a consistent naming convention (e.g., `FGVMXX-001.lic`)
- Document the naming pattern in deployment runbooks
- Ensure the Lambda function's matching logic follows the same pattern
Test license recovery:
- Verify decommissioned instances return licenses to pool
- Validate DynamoDB table updates correctly
- Practice license recovery procedures before production incidents
FortiFlex-Specific Best Practices
Monitor point consumption actively:
- Review Point Usage reports weekly in FortiFlex portal
- Set up email notifications for low balance (90/60/30 day thresholds)
- Correlate point consumption with CloudWatch ASG metrics
Plan point pack purchases:
- Purchase points early in program year to maximize rollover (annual packs)
- Use multi-year packs for long-term stable deployments to avoid rollover complexity
- Maintain 20-30% buffer above projected consumption
Optimize entitlement lifecycle:
- STOP entitlements immediately after instance termination to halt point consumption
- Use Lambda automation to stop entitlements within minutes of scale-in events
- Review STOPPED entitlements weekly and delete if no longer needed
Right-size FortiGate configurations:
- Start with minimal CPU count and scale up as needed
- Use A La Carte service packages for cost optimization when not all services required
- Adjust configurations quarterly based on actual usage patterns
Troubleshooting
Common Licensing Issues
BYOL: Instances Launch Without License
Symptoms: FortiGate instance boots but no license is applied; limited functionality
Causes:
- License file not found in S3 bucket
- Incorrect `lic_folder_path` variable value
- Lambda function lacks S3 permissions
- License file naming doesn’t match Lambda logic
- All licenses already assigned (pool exhausted)
Resolution:
- Verify license files exist in the S3 bucket: `aws s3 ls s3://<bucket>/licenses/`
- Check Lambda CloudWatch logs for S3 access errors
- Validate that the IAM role attached to the Lambda function has `s3:GetObject` permission
- Confirm available licenses exist in the DynamoDB tracking table
- Manually apply a license via the FortiGate CLI: `execute restore config license.lic`
FortiFlex: License Token Generation Fails
Symptoms: Instance launches but does not activate; no serial number assigned
Causes:
- FortiFlex API credentials expired or invalid
- Insufficient points in FortiFlex account
- FortiFlex program expired
- Network connectivity issues to FortiFlex API
- Configuration ID not found or deactivated
Resolution:
- Check Lambda CloudWatch logs for API authentication errors
- Verify FortiFlex API credentials: test the authentication endpoint with `curl`
- Log into the FortiFlex portal and check the point balance
- Confirm program status and expiration date
- Verify configuration exists and is active in FortiFlex portal
- Test network connectivity from the Lambda function to `https://support.fortinet.com`
Hybrid Licensing: Secondary ASG Scales Before Primary Exhausted
Symptoms: PAYG instances launch while primary ASG has available capacity
Causes:
- CloudWatch alarm thresholds misconfigured
- Alarm evaluation periods too short
- ASG cooldown periods insufficient
- Stale CloudWatch metrics
Resolution:
- Review CloudWatch alarm configurations for both ASGs
- Verify the primary ASG alarm threshold is lower than the secondary's (e.g., 60% vs. 75%)
- Raise the secondary ASG alarm threshold (e.g., 75% → 80%) so burst capacity engages later
- Extend alarm evaluation periods to 3-5 minutes
- Implement alarm dependencies (secondary alarm checks primary ASG size)
License Not Applied After Instance Boot
Symptoms: Instance operational but running in limited mode or showing expired license
Causes:
- User-data script failed during execution
- License injection command syntax error
- Network connectivity issues during boot
- FortiGate version mismatch with license
Resolution:
- SSH to the FortiGate instance and check status: `get system status`
- Review user-data execution logs: `/var/log/cloud-init-output.log`
- Manually inject the license:
  - BYOL: `execute restore config tftp <license.lic> <tftp_server>`
  - FortiFlex: `execute vm-license <TOKEN>`
- Verify network connectivity: `execute ping fortiguard.com`
- Check FortiOS version compatibility with the license type
Additional Resources
Official Documentation
- FortiFlex Administration Guide: docs.fortinet.com (search “FortiFlex”)
- FortiGate-VM Licensing Guide: docs.fortinet.com/document/fortigate-vm/
- AWS Marketplace FortiGate Listings: AWS Marketplace
Tools & Calculators
- FortiFlex Points Calculator: fndn.fortinet.net/index.php?/tools/fortiflex/
- AWS Pricing Calculator: calculator.aws (for PAYG cost estimation)
Support Channels
- FortiCare Portal: support.fortinet.com
- FortiFlex Portal: FortiCare > Services > Assets & Accounts > FortiFlex
- Technical Support: Open support ticket for licensing issues
- Sales Team: Contact for enterprise licensing agreements or volume discounts
Summary
Choosing the appropriate licensing model for your FortiGate autoscale deployment requires careful evaluation of deployment duration, traffic patterns, operational complexity tolerance, and budget constraints. This template supports all licensing models and hybrid configurations, enabling you to optimize costs while maintaining the flexibility to adapt to changing requirements.
Quick Selection Guide:
- PAYG: Simplicity matters more than cost; short-term or highly variable workloads
- BYOL: Lowest cost for long-term, predictable capacity; you have existing licenses
- FortiFlex: Balance of flexibility and cost; dynamic workloads without upfront licenses
- Hybrid: Best cost optimization; combine baseline BYOL/FortiFlex with PAYG burst capacity
Solution Components
The FortiGate Autoscale Simplified Template abstracts complex architectural patterns into configurable components that can be enabled or customized through the terraform.tfvars file.
This section provides detailed explanations of each component, configuration options, and architectural considerations to help you design the optimal deployment for your requirements.
What You’ll Learn
This section covers the major architectural elements available in the template:
- Internet Egress Options: Choose between EIP or NAT Gateway architectures
- Firewall Architecture: Understand 1-ARM vs 2-ARM configurations
- Management Isolation: Configure dedicated management ENI and VPC options
- Licensing: Manage BYOL licenses and integrate FortiFlex API
- FortiManager Integration: Enable centralized management and policy orchestration
- Capacity Planning: Configure autoscale group sizing and scaling strategies
- Primary Protection: Implement scale-in protection for configuration stability
- Additional Options: Fine-tune instance specifications and advanced settings
Each component page includes:
- Configuration examples
- Architecture diagrams
- Best practices
- Troubleshooting guidance
- Use case recommendations
Select a component from the navigation menu to learn more about specific configuration options.
Subsections of Solution Components
Internet Egress Options
Overview
The FortiGate autoscale solution provides two distinct architectures for internet egress traffic, each optimized for different operational requirements and cost considerations.
Option 1: Elastic IP (EIP) per Instance
Each FortiGate instance in the autoscale group receives a dedicated Elastic IP address. All traffic destined for the public internet is source-NATed behind the instance’s assigned EIP.
Configuration
access_internet_mode = "eip"
Architecture Behavior
In EIP mode, the architecture routes all internet-bound traffic to port2 (the public interface). The route table for the public subnet directs traffic to the Internet Gateway (IGW), where automatic source NAT to the associated EIP occurs.
Advantages
- No NAT Gateway costs: Eliminates monthly NAT Gateway charges ($0.045/hour + data processing)
- Distributed egress: Each instance has independent internet connectivity
- Simplified troubleshooting: Per-instance source IP simplifies traffic flow analysis
- No single point of failure: Loss of one instance’s EIP doesn’t affect others
Considerations
- Unpredictable IP addresses: EIPs are allocated from AWS’s pool; you cannot predict or specify the assigned addresses
- Allowlist complexity: Destinations requiring IP allowlisting must accommodate a pool of EIPs (one per maximum autoscale capacity)
- IP churn during scaling: Scale-out events introduce new source IPs; scale-in events remove them
- Limited EIP quota: AWS accounts have default limits (5 EIPs per region, increased upon request)
Best Use Cases
- Cost-sensitive deployments where NAT Gateway charges exceed EIP allocation costs
- Environments where destination allowlisting is not required
- Architectures prioritizing distributed egress over consistent source IPs
- Development and testing environments with limited budget
Option 2: NAT Gateway
All FortiGate instances share one or more NAT Gateways deployed in public subnets. Traffic is source-NATed to the NAT Gateway’s static Elastic IP address.
Configuration
access_internet_mode = "nat_gw"
Architecture Behavior
NAT Gateway mode requires additional subnet and route table configuration. Internet-bound traffic is first routed to the NAT Gateway in the public subnet, which performs source NAT to its static EIP before forwarding to the IGW.
Advantages
- Predictable source IP: Single, stable public IP address for the lifetime of the NAT Gateway
- Simplified allowlisting: Destinations only need to allowlist one IP address (per Availability Zone)
- High throughput: NAT Gateway supports up to 45 Gbps per AZ
- Managed service: AWS handles NAT Gateway scaling and availability
- Independent of FortiGate scaling: Source IP remains constant during scale-in/scale-out events
Considerations
- Additional costs: $0.045/hour per NAT Gateway + $0.045 per GB data processed
- Per-AZ deployment: Multi-AZ architectures require NAT Gateway in each AZ for fault tolerance
- Additional subnet requirements: Requires dedicated NAT Gateway subnet in each AZ
- Route table complexity: Additional route tables needed for NAT Gateway routing
Cost Analysis Example
Scenario: 4 FortiGate instances processing 10 TB/month egress traffic
EIP Mode:
- 4 EIP allocations: ~$0 (an EIP attached to a running instance has historically incurred no hourly charge; verify current AWS public IPv4 pricing)
- Total monthly: ~$0 (minimal costs)
NAT Gateway Mode (2 AZs):
- 2 NAT Gateways: 2 × $0.045/hour × 730 hours = $65.70
- Data processing: 10,000 GB × $0.045 = $450.00
- Total monthly: $515.70
Decision Point: NAT Gateway makes sense when consistent source IP requirement justifies the additional cost.
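The arithmetic in this example can be reproduced with a short script (the $0.045 rates are the illustrative prices used above; actual NAT Gateway pricing varies by region):

```python
# NAT Gateway monthly cost model used in the example above.
# Rates are illustrative; check current AWS pricing for your region.
NAT_GW_HOURLY = 0.045      # $/hour per NAT Gateway
NAT_GW_PER_GB = 0.045      # $/GB of data processed
HOURS_PER_MONTH = 730

def nat_gateway_monthly_cost(num_gateways: int, egress_gb: float) -> float:
    hourly = num_gateways * NAT_GW_HOURLY * HOURS_PER_MONTH
    data = egress_gb * NAT_GW_PER_GB
    return round(hourly + data, 2)

# 2 AZs, 10 TB/month egress: $65.70 hourly + $450.00 data = $515.70
print(nat_gateway_monthly_cost(2, 10_000))
```

Comparing this figure against EIP mode's near-zero infrastructure cost gives the break-even point for your consistent-source-IP requirement.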
Best Use Cases
- Production environments requiring predictable source IPs
- Compliance scenarios where destination IP allowlisting is mandatory
- High-volume egress traffic to SaaS providers with IP allowlisting requirements
- Architectures where operational simplicity outweighs additional cost
Decision Matrix
| Factor | EIP Mode | NAT Gateway Mode |
|---|---|---|
| Monthly Cost | Minimal | $500+ (varies with traffic) |
| Source IP Predictability | Variable (changes with scaling) | Stable |
| Allowlisting Complexity | High (multiple IPs) | Low (single IP per AZ) |
| Throughput | Per-instance limit | Up to 45 Gbps per AZ |
| Operational Complexity | Low | Medium |
| Best For | Dev/test, cost-sensitive | Production, compliance-driven |
Next Steps
After selecting your internet egress option, proceed to Firewall Architecture to configure the FortiGate interface model.
Firewall Architecture
Overview
FortiGate instances can operate in single-arm (1-ARM) or dual-arm (2-ARM) network configurations, fundamentally changing traffic flow patterns through the firewall.
Configuration
firewall_policy_mode = "1-arm" # or "2-arm"
2-ARM Configuration (Recommended for Most Deployments)
Architecture Overview
The 2-ARM configuration deploys FortiGate instances with distinct “trusted” (private) and “untrusted” (public) interfaces, providing clear network segmentation.
Traffic Flow:
- Traffic arrives at GWLB Endpoints (GWLBe) in the inspection VPC
- GWLB load-balances traffic across healthy FortiGate instances
- Traffic encapsulated in Geneve tunnels arrives at FortiGate port1 (data plane)
- FortiGate inspects traffic and applies security policies
- Internet-bound traffic exits via port2 (public interface)
- Port2 traffic is source-NATed via EIP or NAT Gateway
- Return traffic follows reverse path back through Geneve tunnels
Interface Assignments
- port1: Data plane interface for GWLB connectivity (Geneve tunnel termination)
- port2: Public interface for internet egress (with optional dedicated management when enabled)
Network Interfaces Visualization
The FortiGate GUI displays both physical interfaces and logical Geneve tunnel interfaces. Traffic inspection occurs on the logical tunnel interfaces, while physical port2 handles egress.
Advantages
- Clear network segmentation: Separate trusted and untrusted zones
- Traditional firewall model: Familiar architecture for network security teams
- Simplified policy creation: North-South policies align with interface direction
- Better traffic visibility: Distinct ingress/egress paths ease troubleshooting
- Dedicated management option: Port2 can be isolated for management traffic
Best Use Cases
- Production deployments requiring clear network segmentation
- Environments with security policies mandating separate trusted/untrusted zones
- Architectures where dedicated management interface is required
- Standard north-south inspection use cases
1-ARM Configuration
Architecture Overview
The 1-ARM configuration uses a single interface (port1) for all data plane traffic, eliminating the need for a second network interface.
Traffic Flow:
- Traffic arrives at port1 encapsulated in Geneve tunnels from GWLB
- FortiGate inspects traffic and applies security policies
- Traffic is hairpinned back through the same Geneve tunnel it arrived on
- Traffic returns to originating distributed VPC through GWLB
- Distributed VPC uses its own internet egress path (IGW/NAT Gateway)
This “bump-in-the-wire” architecture is the typical 1-ARM pattern for distributed inspection, where the FortiGate provides security inspection but traffic egresses from the spoke VPC, not the inspection VPC.
Important Behavior: Stateful Load Balancing
GWLB Statefulness: The Gateway Load Balancer maintains connection state tables for traffic flows.
Primary Traffic Pattern (Distributed Architecture):
- ✅ Traffic enters via Geneve tunnel → FortiGate inspection → Hairpins back through same Geneve tunnel
- ✅ Distributed VPC handles actual internet egress via its own IGW/NAT Gateway
- ✅ This “bump-in-the-wire” model provides security inspection without routing traffic through inspection VPC
Key Requirement: Symmetric routing through the GWLB. Traffic must return via the same Geneve tunnel it arrived on to maintain proper state table entries.
Info
Centralized Egress Architecture (Transit Gateway Pattern)
In centralized egress deployments with Transit Gateway, the traffic flow is fundamentally different and represents the primary use case for internet egress through the inspection VPC:
Traffic Flow:
- Spoke VPC traffic routes to Transit Gateway
- TGW routes traffic to inspection VPC
- Traffic enters GWLBe (same AZ to avoid cross-AZ charges)
- GWLB forwards traffic through Geneve tunnel to FortiGate
- FortiGate inspects traffic and applies security policies
- Traffic exits port1 (1-ARM) or port2 (2-ARM) toward internet
- Egress via EIP or NAT Gateway in inspection VPC
- Response traffic returns via same interface to same Geneve tunnel
This is the standard architecture for centralized internet egress where:
- All spoke VPCs route internet-bound traffic through the inspection VPC
- FortiGate autoscale group provides centralized security inspection AND NAT
- Single egress point simplifies security policy management and reduces costs
- Requires careful route table configuration to maintain symmetric routing
When to use: Centralized egress architectures where spoke VPCs do NOT have their own internet gateways.
Note
Distributed Architecture - Alternative Pattern (Advanced Use Case)
In distributed architectures where spoke VPCs have their own internet egress, it is possible (but not typical) to configure traffic to exit through the inspection VPC instead of hairpinning:
- Traffic enters via Geneve tunnel → Exits port1 to internet → Response returns via port1 to same Geneve tunnel
This pattern requires:
- Careful route table configuration in the inspection VPC
- Specific firewall policies on the FortiGate
- Proper symmetric routing to maintain GWLB state tables
This is rarely used in distributed architectures since spoke VPCs typically handle their own egress. The standard bump-in-the-wire pattern (hairpin through same Geneve tunnel) is recommended when spoke VPCs have internet gateways.
Interface Assignments
- port1: Combined data plane (Geneve) and egress (internet) interface
Advantages
- Reduced complexity: Single interface simplifies routing and subnet allocation
- Lower costs: Fewer ENIs to manage and potential for smaller instance types
- Simplified subnet design: Only requires one data subnet per AZ
Considerations
- Hairpinning pattern: Traffic typically hairpins back through same Geneve tunnel
- Higher port1 bandwidth requirements: All traffic flows through single interface (both directions)
- Limited management options: Cannot enable dedicated management ENI in true 1-ARM mode
- Symmetric routing requirement: All traffic must egress and return via port1 for proper state table maintenance
Best Use Cases
- Cost-optimized deployments with lower throughput requirements
- Simple north-south inspection without management VPC integration
- Development and testing environments
- Architectures where simplified subnet design is prioritized
Comparison Matrix
| Factor | 1-ARM | 2-ARM |
|---|---|---|
| Interfaces Required | 1 (port1) | 2 (port1 + port2) |
| Network Complexity | Lower | Higher |
| Cost | Lower | Slightly higher |
| Management Isolation | Not available | Available |
| Traffic Pattern | Hairpin (distributed) or egress (centralized) | Clear ingress/egress separation |
| Best For | Simple deployments, cost optimization | Production, clear segmentation |
Next Steps
After selecting your firewall architecture, proceed to Management Isolation Options to configure management plane isolation.
Management Isolation Options
Overview
The FortiGate autoscale solution provides multiple approaches to isolating management traffic from data plane traffic, ranging from shared interfaces to complete physical network separation.
This page covers three progressive levels of management isolation, allowing you to choose the appropriate security posture for your deployment requirements.
Option 1: Combined Data + Management (Default)
Architecture Overview
In the default configuration, port2 serves dual purposes:
- Data plane: Internet egress for inspected traffic (in 2-ARM mode)
- Management plane: GUI, SSH, SNMP access
Configuration
enable_dedicated_management_eni = false
enable_dedicated_management_vpc = false
Characteristics
- Simplest configuration: No additional interfaces or VPCs required
- Lower cost: Minimal infrastructure overhead
- Shared security groups: Same rules govern data and management traffic
- Single failure domain: Management access tied to data plane availability
When to Use
- Development and testing environments
- Proof-of-concept deployments
- Budget-constrained projects
- Simple architectures without compliance requirements
Option 2: Dedicated Management ENI
Architecture Overview
Port2 is removed from the data plane and dedicated exclusively to management functions. FortiOS configures the interface with set dedicated-to management, placing it in an isolated VRF with independent routing.
Configuration
enable_dedicated_management_eni = true
How It Works
- Dedicated-to attribute: FortiOS configures port2 with set dedicated-to management
- Separate VRF: Port2 is placed in an isolated VRF with independent routing table
- Policy restrictions: FortiGate prevents creation of firewall policies using port2
- Management-only traffic: GUI, SSH, SNMP, and FortiManager/FortiAnalyzer connectivity
FortiOS Configuration Impact
The dedicated management ENI can be verified in the FortiGate GUI:
The interface shows the dedicated-to: management attribute and separate VRF assignment, preventing data plane traffic from using this interface.
Important Compatibility Notes
Warning
Critical Limitation: 2-ARM + NAT Gateway + Dedicated Management ENI
When combining:
firewall_policy_mode = "2-arm"
access_internet_mode = "nat_gw"
enable_dedicated_management_eni = true
Port2 will NOT receive an Elastic IP address. This is a valid configuration, but imposes connectivity restrictions:
- ❌ Cannot access FortiGate management from public internet
- ✅ Can access via private IP through AWS Direct Connect or VPN
- ✅ Can access via management VPC (see Option 3 below)
If you require public internet access to the FortiGate management interface with NAT Gateway egress, either:
- Use access_internet_mode = "eip" (assigns EIP to port2)
- Use dedicated management VPC with separate internet connectivity (Option 3)
- Implement AWS Systems Manager Session Manager for private connectivity
Characteristics
- Clear separation of concerns: Management traffic isolated from data plane
- Independent security policies: Separate security groups for management interface
- Enhanced security posture: Reduces attack surface on management plane
- Moderate complexity: Requires additional subnet and routing configuration
When to Use
- Production deployments requiring management isolation
- Security-conscious environments
- Architectures without dedicated management VPC
- Compliance requirements for management plane separation
Option 3: Dedicated Management VPC (Full Isolation)
Architecture Overview
The dedicated management VPC provides complete physical network separation by deploying FortiGate management interfaces in an entirely separate VPC from the data plane.
Configuration
enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "your-mgmt-vpc-tag"
dedicated_management_public_az1_subnet_tag = "your-az1-subnet-tag"
dedicated_management_public_az2_subnet_tag = "your-az2-subnet-tag"
Benefits
- Physical network separation: Management traffic never traverses inspection VPC
- Independent internet connectivity: Management VPC has dedicated IGW or VPN
- Centralized management infrastructure: FortiManager and FortiAnalyzer deployed in management VPC
- Separate security controls: Management VPC security groups independent of data plane
- Isolated failure domains: Management VPC issues don’t affect data plane
Management VPC Creation Options
Option A: Created by existing_vpc_resources Template (Recommended)
The existing_vpc_resources template creates the management VPC with standardized tags that the simplified template automatically discovers.
Advantages:
- Management VPC lifecycle independent of inspection VPC
- FortiManager/FortiAnalyzer persistence across inspection VPC redeployments
- Separation of concerns for infrastructure management
Default Tags (automatically created):
Configuration (terraform.tfvars):
enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "acme-test-management-vpc"
dedicated_management_public_az1_subnet_tag = "acme-test-management-public-az1-subnet"
dedicated_management_public_az2_subnet_tag = "acme-test-management-public-az2-subnet"
Option B: Use Existing Management VPC
If you have an existing management VPC with custom tags, configure the template to discover it:
Configuration:
enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "my-custom-mgmt-vpc-tag"
dedicated_management_public_az1_subnet_tag = "my-custom-mgmt-public-az1-tag"
dedicated_management_public_az2_subnet_tag = "my-custom-mgmt-public-az2-tag"
The template uses these tags to locate the management VPC and subnets via Terraform data sources.
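Conceptually, the tag-based discovery resembles the following Terraform data sources (a sketch only: the data source names and the tag:Name filter are assumptions, not the template's actual code):

```hcl
# Illustrative sketch of tag-based VPC/subnet discovery.
data "aws_vpc" "management" {
  filter {
    name   = "tag:Name"
    values = [var.dedicated_management_vpc_tag]
  }
}

data "aws_subnet" "management_public_az1" {
  vpc_id = data.aws_vpc.management.id
  filter {
    name   = "tag:Name"
    values = [var.dedicated_management_public_az1_subnet_tag]
  }
}
```

If the tags do not match exactly one VPC or subnet, the data source lookup fails at plan time, which is why consistent tag conventions matter.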
Behavior When Enabled
When enable_dedicated_management_vpc = true:
- Automatic ENI creation: Template creates dedicated management ENI (port2) in management VPC subnets
- Implies dedicated management ENI: Automatically sets enable_dedicated_management_eni = true
- VPC peering/TGW: Management VPC must have connectivity to inspection VPC for HA sync
- Security group creation: Appropriate security groups created for management traffic
Network Connectivity Requirements
Management VPC → Inspection VPC Connectivity:
- Required for FortiGate HA synchronization between instances
- Typically implemented via VPC peering or Transit Gateway attachment
- Must allow TCP port 443 (HA sync), TCP 22 (SSH), ICMP (health checks)
Management VPC → Internet Connectivity:
- Required for FortiGuard services (signature updates, licensing)
- Required for administrator access to FortiGate management interfaces
- Can be via Internet Gateway, NAT Gateway, or AWS Direct Connect
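As an illustration, the inspection-VPC-facing rules above could be expressed as a security group like the one below (a sketch only: the resource name, the variable management_vpc_id, and the 10.0.0.0/16 inspection-VPC CIDR are placeholders, not the template's actual resources):

```hcl
resource "aws_security_group" "fgt_mgmt_ha_sync" {
  name_prefix = "fgt-mgmt-ha-sync-"
  vpc_id      = var.management_vpc_id   # placeholder: management VPC ID

  ingress {
    description = "FortiGate HA sync"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]       # placeholder: inspection VPC CIDR
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  ingress {
    description = "ICMP health checks"
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["10.0.0.0/16"]
  }
}
```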
Characteristics
- Highest security posture: Complete physical isolation
- Greatest flexibility: Independent infrastructure lifecycle
- Higher complexity: Requires VPC peering or TGW configuration
- Additional cost: Separate VPC infrastructure and data transfer charges
When to Use
- Enterprise production deployments
- Strict compliance requirements (PCI-DSS, HIPAA, etc.)
- Multi-account AWS architectures
- Environments with dedicated management infrastructure
- Organizations with existing management VPCs for network security appliances
Comparison Matrix
| Factor | Combined (Default) | Dedicated ENI | Dedicated VPC |
|---|---|---|---|
| Security Isolation | Low | Medium | High |
| Complexity | Lowest | Medium | Highest |
| Cost | Lowest | Low | Medium |
| Management Access | Via data plane interface | Via dedicated interface | Via separate VPC |
| Failure Domain Isolation | No | Partial | Complete |
| VPC Peering Required | No | No | Yes |
| Compliance Suitability | Basic | Good | Excellent |
| Best For | Dev/test, simple deployments | Production, security-conscious | Enterprise, compliance-driven |
Decision Tree
Use this decision tree to select the appropriate management isolation level:
1. Is this a production deployment?
├─ No → Combined Data + Management (simplest)
└─ Yes → Continue to question 2
2. Do you have compliance requirements for management plane isolation?
├─ No → Dedicated Management ENI (good balance)
└─ Yes → Continue to question 3
3. Do you have existing management VPC infrastructure?
├─ Yes → Dedicated Management VPC (leverage existing)
└─ No → Evaluate cost/benefit:
├─ High security requirements → Dedicated Management VPC
└─ Moderate requirements → Dedicated Management ENI
Deployment Patterns
Pattern 1: Dedicated ENI + EIP Mode
firewall_policy_mode = "2-arm"
access_internet_mode = "eip"
enable_dedicated_management_eni = true
- Port2 receives EIP for public management access
- Suitable for environments without management VPC
- Simplified deployment with direct internet management access
Pattern 2: Dedicated ENI + Management VPC
firewall_policy_mode = "2-arm"
access_internet_mode = "nat_gw"
enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "my-mgmt-vpc"
- Port2 connects to separate management VPC
- Management VPC has dedicated internet gateway or VPN connectivity
- Preferred for production environments with strict network segmentation
Pattern 3: Combined Management (Default)
firewall_policy_mode = "2-arm"
access_internet_mode = "eip"
enable_dedicated_management_eni = false
- Port2 remains in data plane
- Management access shares public interface with egress traffic
- Simplest configuration but lacks management plane isolation
Best Practices
- Enable dedicated management ENI for production: Provides clear separation of concerns
- Use dedicated management VPC for enterprise deployments: Optimal security posture
- Document connectivity requirements: Ensure operations teams understand access paths
- Test connectivity before production: Verify alternative access methods work
- Plan for failure scenarios: Ensure backup access methods (SSM, VPN) are available
- Use existing_vpc_resources template for management VPC: Separates lifecycle management
- Document tag conventions: Ensure consistent tagging across environments
- Monitor management interface health: Set up CloudWatch alarms for management connectivity
Troubleshooting
Issue: Cannot access FortiGate management interface
Check:
- Security groups allow inbound traffic on management port (443, 22)
- Route tables provide path from your location to management interface
- If using dedicated management VPC, verify VPC peering or TGW is operational
- If using NAT Gateway mode, verify you have alternative access method (VPN, Direct Connect)
Issue: Management interface has no public IP
Cause: Using access_internet_mode = "nat_gw" with dedicated management ENI
Solutions:
- Switch to access_internet_mode = "eip" to receive a public IP on port2
- Enable enable_dedicated_management_vpc = true with separate internet connectivity
- Configure VPN or Direct Connect for private network access
Issue: HA sync not working with dedicated management VPC
Check:
- VPC peering or TGW attachment is configured between management and inspection VPCs
- Security groups allow TCP 443 between FortiGate instances
- Route tables in both VPCs have routes to each other’s subnets
- Network ACLs permit required traffic
Next Steps
After configuring management isolation, proceed to Licensing Options to choose between BYOL, FortiFlex, or PAYG.
Licensing Options
Overview
The FortiGate autoscale solution supports three distinct licensing models, each optimized for different use cases, cost structures, and operational requirements. You can use a single licensing model or combine them in hybrid configurations for optimal cost efficiency.
Licensing Model Comparison
| Factor | BYOL | FortiFlex | PAYG |
|---|---|---|---|
| Total Cost (12 months) | Lowest | Medium | Highest |
| Upfront Investment | High | Medium | None |
| License Management | Manual (files) | API-driven | None |
| Flexibility | Low | High | Highest |
| Capacity Constraints | Yes (license pool) | Soft (point balance) | None |
| Best For | Long-term, predictable | Variable, flexible | Short-term, simple |
| Setup Complexity | Medium | High | Lowest |
Option 1: BYOL (Bring Your Own License)
Overview
BYOL uses traditional FortiGate-VM license files that you purchase from Fortinet or resellers. The template automates license distribution through S3 bucket storage and Lambda-based assignment.
Configuration
asg_license_directory = "asg_license"
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4
Directory Structure Requirements
Place BYOL license files in the directory specified by asg_license_directory:
terraform/autoscale_template/
├── terraform.tfvars
├── asg_license/
│ ├── FGVM01-001.lic
│ ├── FGVM01-002.lic
│ ├── FGVM01-003.lic
│ └── FGVM01-004.lic
Automated License Assignment
- Terraform uploads .lic files to S3 during terraform apply
- DynamoDB tracks assignments to prevent duplicates
- Lambda injects license via user-data script
- Licenses return to pool when instances terminate
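Conceptually, the assignment flow above behaves like this minimal in-memory model (a sketch only; the actual template uses Lambda with DynamoDB conditional writes to make the claim atomic across concurrent launches):

```python
# Conceptual model of the BYOL license pool; not the template's code.
class LicensePool:
    def __init__(self, license_files):
        # None marks an unassigned license
        self.assignments = {lic: None for lic in license_files}

    def claim(self, instance_id):
        """Assign the first free license; return None if the pool is exhausted."""
        for lic, owner in self.assignments.items():
            if owner is None:
                self.assignments[lic] = instance_id
                return lic
        return None  # exhausted: the instance launches unlicensed

    def release(self, instance_id):
        """Return an instance's license to the pool on termination."""
        for lic, owner in self.assignments.items():
            if owner == instance_id:
                self.assignments[lic] = None

pool = LicensePool(["FGVM01-001.lic", "FGVM01-002.lic"])
first = pool.claim("i-aaa")
second = pool.claim("i-bbb")
overflow = pool.claim("i-ccc")   # pool exhausted
pool.release("i-aaa")
reused = pool.claim("i-ccc")     # freed license is reassigned
```

The `overflow = None` case is exactly the exhaustion scenario the warning below describes.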
Critical Capacity Planning
Warning
License Pool Exhaustion
Ensure your license directory contains at least as many licenses as asg_byol_asg_max_size.
What happens if licenses are exhausted:
- New BYOL instances launch but remain unlicensed
- Unlicensed instances operate at 1 Mbps throughput
- FortiGuard services will not activate
- If PAYG ASG is configured, scaling continues using on-demand instances
Recommended: Provision 20% more licenses than max_size
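The 20% head-room recommendation can be expressed as a small helper (a sketch, not part of the template):

```python
import math

def recommended_license_count(asg_max_size: int, buffer: float = 0.20) -> int:
    """Licenses to provision: the ASG maximum plus a safety buffer, rounded up."""
    return math.ceil(asg_max_size * (1 + buffer))

print(recommended_license_count(4))   # max_size of 4 -> provision 5 licenses
```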
Characteristics
- ✅ Lowest total cost: Best value for long-term (12+ months)
- ✅ Predictable costs: Fixed licensing regardless of usage
- ⚠️ License management: Requires managing physical files
- ⚠️ Upfront investment: Must purchase licenses in advance
When to Use
- Long-term production (12+ months)
- Predictable, steady-state workloads
- Existing FortiGate BYOL licenses
- Cost-conscious deployments
Option 2: FortiFlex (Usage-Based Licensing)
Overview
FortiFlex provides consumption-based, API-driven licensing. Points are consumed daily based on configuration, offering flexibility and cost optimization compared to PAYG.
Prerequisites
- Register FortiFlex Program via FortiCare
- Purchase Point Packs
- Create Configurations in FortiFlex portal
- Generate API Credentials via IAM
For detailed setup, see Licensing Section.
Configuration
fortiflex_username = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
fortiflex_password = "xxxxxxxxxxxxxxxxxxxxx"
fortiflex_sn_list = ["FGVMELTMxxxxxxxx"]
fortiflex_configid_list = ["My_4CPU_Config"]
Warning
FortiFlex Serial Number List - Optional
- If defined: Use entitlements from specific programs only
- If omitted: Use any available entitlements with matching configurations
Important: Entitlements must be created manually in FortiFlex portal before deployment.
Obtaining Required Values
1. API Username and Password:
- Navigate to Services > IAM in FortiCare
- Create permission profile with FortiFlex Read/Write access
- Create API user and download credentials
- Username is UUID in credentials file
2. Serial Number List:
- Navigate to Services > Assets & Accounts > FortiFlex
- View your FortiFlex programs
- Note serial numbers from program details
3. Configuration ID List:
- In FortiFlex portal, go to Configurations
- Configuration ID is the Name field you assigned
Match CPU counts:
fgt_instance_type = "c6i.xlarge" # 4 vCPUs
fortiflex_configid_list = ["My_4CPU_Config"] # Must match
Warning
Security Best Practice
Never commit FortiFlex credentials to version control. Use:
- Terraform Cloud sensitive variables
- AWS Secrets Manager
- Environment variables: TF_VAR_fortiflex_username
- HashiCorp Vault
Lambda Integration Behavior
At instance launch:
- Lambda authenticates to FortiFlex API
- Creates new entitlement under specified configuration
- Receives and injects license token
- Instance activates, point consumption begins
At instance termination:
- Lambda calls API to STOP entitlement
- Point consumption halts immediately
- Entitlement preserved for reactivation
Troubleshooting
Problem: Instances don’t activate license
- Check Lambda CloudWatch logs for API errors
- Verify FortiFlex portal for failed entitlements
- Confirm network connectivity to FortiFlex API
Problem: “Insufficient points” error
- Check point balance in FortiFlex portal
- Purchase additional point packs
- Verify configurations use expected CPU counts
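Because points are consumed daily while instances run, a simple runway estimate helps catch a shrinking balance before entitlements fail (the point figures below are placeholders; check your configuration's actual daily rate in the FortiFlex portal):

```python
def days_of_points_remaining(point_balance: int,
                             daily_points_per_instance: int,
                             instance_count: int) -> float:
    """Whole days the current balance covers at a steady instance count."""
    daily_burn = daily_points_per_instance * instance_count
    if daily_burn == 0:
        return float("inf")   # nothing running, balance never drains
    return point_balance // daily_burn

# Placeholder numbers: 10,000 points, 2 instances at 30 points/day each
print(days_of_points_remaining(10_000, 30, 2))
```

Alerting when the remaining days drop below your procurement lead time avoids the "insufficient points" failure mode above.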
Characteristics
- ✅ Flexible consumption: Pay only for what you use
- ✅ No license file management: API-driven automation
- ✅ Lower cost than PAYG: Typically 20-40% less
- ⚠️ Point-based: Requires monitoring consumption
- ⚠️ API credentials: Additional security considerations
When to Use
- Variable workloads with unpredictable scaling
- Development and testing
- Short to medium-term (3-12 months)
- Burst capacity in hybrid architectures
Option 3: PAYG (Pay-As-You-Go)
Overview
PAYG uses AWS Marketplace on-demand instances with licensing included in hourly EC2 charge.
Configuration
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 4
asg_ondemand_asg_desired_size = 0
How It Works
- Accept FortiGate-VM AWS Marketplace terms
- Lambda launches instances using Marketplace AMI
- FortiGate activates automatically via AWS
- Hourly licensing cost added to EC2 charge
Characteristics
- ✅ Simplest option: Zero license management
- ✅ No upfront commitment: Pay per running hour
- ✅ Instant availability: No license pool constraints
- ⚠️ Highest hourly cost: Premium pricing for convenience
When to Use
- Proof-of-concept and evaluation
- Very short-term (< 3 months)
- Burst capacity in hybrid architectures
- Zero license administration requirement
Cost Comparison Example
Scenario: 2 FortiGate-VM instances (c6i.xlarge, 4 vCPU, UTP) running 24/7
| Duration | BYOL | FortiFlex | PAYG | Winner |
|---|---|---|---|---|
| 1 month | $2,730 | $1,030 | $1,460 | FortiFlex |
| 3 months | $4,190 | $3,090 | $4,380 | FortiFlex |
| 12 months | $10,760 | $12,360 | $17,520 | BYOL |
| 24 months | $19,520 | $24,720 | $35,040 | BYOL |
Note: Illustrative costs. Actual pricing varies by term and bundle.
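The crossover points in the table can be checked by modeling it as linear monthly rates (assumptions back-derived from the illustrative figures, not Fortinet pricing: BYOL as roughly $2,000 of upfront licensing plus $730/month of EC2, FortiFlex at $1,030/month, PAYG at $1,460/month):

```python
# Linear cost model back-derived from the illustrative table; not quotes.
BYOL_UPFRONT = 2000
BYOL_MONTHLY = 730
FORTIFLEX_MONTHLY = 1030
PAYG_MONTHLY = 1460

def totals(months: int):
    """Cumulative cost of each model after the given number of months."""
    return (BYOL_UPFRONT + BYOL_MONTHLY * months,
            FORTIFLEX_MONTHLY * months,
            PAYG_MONTHLY * months)

def byol_breakeven(other_monthly: int) -> int:
    """First month in which cumulative BYOL cost drops below the alternative."""
    m = 1
    while BYOL_UPFRONT + BYOL_MONTHLY * m >= other_monthly * m:
        m += 1
    return m

print(totals(12))                         # matches the 12-month row above
print(byol_breakeven(PAYG_MONTHLY))       # month BYOL overtakes PAYG
print(byol_breakeven(FORTIFLEX_MONTHLY))  # month BYOL overtakes FortiFlex
```

Under these assumptions BYOL undercuts PAYG from month 3 and FortiFlex from month 7, consistent with the table's winners.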
Hybrid Licensing Strategies
Strategy 1: BYOL Baseline + PAYG Burst (Recommended)
# BYOL for baseline
asg_license_directory = "asg_license"
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4
# PAYG for burst
asg_ondemand_asg_max_size = 4
Best for: Production with occasional spikes
Strategy 2: FortiFlex Baseline + PAYG Burst
# FortiFlex for flexible baseline
fortiflex_configid_list = ["My_4CPU_Config"]
asg_byol_asg_max_size = 4
# PAYG for burst
asg_ondemand_asg_max_size = 4
Best for: Variable workloads with unpredictable spikes
Strategy 3: All BYOL (Cost-Optimized)
asg_license_directory = "asg_license"
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 6
asg_ondemand_asg_max_size = 0
Best for: Stable, predictable workloads
Strategy 4: All PAYG (Simplest)
asg_byol_asg_max_size = 0
asg_ondemand_asg_min_size = 2
asg_ondemand_asg_max_size = 8
Best for: POC, short-term, extreme variability
Decision Tree
1. Expected deployment duration?
├─ < 3 months → PAYG
├─ 3-12 months → FortiFlex or evaluate costs
└─ > 12 months → BYOL + PAYG burst
2. Workload predictable?
├─ Yes, stable → BYOL
└─ No, variable → FortiFlex or Hybrid
3. Want to manage license files?
├─ No → FortiFlex or PAYG
└─ Yes, for cost savings → BYOL
4. Tolerance for complexity?
├─ Low → PAYG
├─ Medium → FortiFlex
└─ High (cost focus) → BYOL
Best Practices
- Calculate TCO: Use comparison matrix for your scenario
- Start simple: Begin with PAYG for POC, optimize for production
- Monitor costs: Track consumption via CloudWatch and FortiFlex reports
- Provision buffer: 20% more licenses/entitlements than max_size
- Secure credentials: Never commit FortiFlex credentials to git
- Test assignment: Verify Lambda logs show successful injection
- Plan exhaustion: Configure PAYG burst as safety net
- Document strategy: Ensure ops team understands hybrid configs
Next Steps
After configuring licensing, proceed to FortiManager Integration for centralized management.
FortiManager Integration
Overview
The template supports optional integration with FortiManager for centralized management, policy orchestration, and configuration synchronization across the autoscale group.
Configuration
Enable FortiManager integration by setting the following variables in terraform.tfvars:
enable_fortimanager_integration = true
fortimanager_ip = "10.0.100.50"
fortimanager_sn = "FMGVM0000000001"
fortimanager_vrf_select = 1
Variable Definitions
| Variable | Type | Required | Description |
|---|---|---|---|
| enable_fortimanager_integration | boolean | Yes | Master switch to enable/disable FortiManager integration |
| fortimanager_ip | string | Yes | FortiManager IP address or FQDN accessible from FortiGate management interfaces |
| fortimanager_sn | string | Yes | FortiManager serial number for device registration |
| fortimanager_vrf_select | number | No | VRF ID for routing to FortiManager (default: 0, the global VRF) |
How FortiManager Integration Works
When enable_fortimanager_integration = true:
- Lambda generates FortiOS config: Lambda function creates the `config system central-management` stanza
- Primary instance registration: Only the primary FortiGate instance registers with FortiManager
- VDOM exception configured: Lambda adds `config system vdom-exception` to prevent the central-management config from syncing to secondaries
- Configuration synchronization: Primary instance syncs configuration to secondary instances via FortiGate-native HA sync
- Policy deployment: Policies deployed from FortiManager propagate through primary → secondary sync
Generated FortiOS Configuration
Lambda automatically generates the following configuration on the primary instance only:
config system vdom-exception
edit 0
set object system.central-management
next
end
config system central-management
set type fortimanager
set fmg 10.0.100.50
set serial-number FMGVM0000000001
set vrf-select 1
end
Secondary instances do not receive central-management configuration, preventing:
- Orphaned device entries on FortiManager during scale-in events
- Confusion about which instance is authoritative for policy
- Unnecessary FortiManager license consumption
Network Connectivity Requirements
FortiGate → FortiManager:
- TCP 541: FGFM protocol (management tunnel initiated by the FortiGate toward FortiManager)
- TCP 514 (optional): Syslog if logging to FortiManager
- HTTPS 443: FortiManager GUI access for administrators
Ensure:
- Security groups allow traffic from FortiGate management interfaces to FortiManager
- Route tables provide path to FortiManager IP
- Network ACLs permit required traffic
- VRF routing configured if using non-default VRF
VRF Selection
The fortimanager_vrf_select parameter specifies which VRF to use for FortiManager connectivity:
Common scenarios:
- 0 (default): Use the global VRF; FortiManager is reachable via the default routing table
- 1 or higher: Use a specific management VRF; FortiManager is reachable via a separate routing domain
When to use non-default VRF:
- FortiManager in separate management VPC requiring VPC peering or TGW
- Network segmentation requires management traffic in dedicated VRF
- Multiple VRFs configured and explicit path selection needed
FortiManager 7.6.3+ Critical Requirement
Warning
CRITICAL: FortiManager 7.6.3+ Requires VM Device Recognition
Starting with FortiManager version 7.6.3, VM serial numbers are not recognized by default for security purposes.
If you deploy FortiGate-VM instances with enable_fortimanager_integration = true to a FortiManager 7.6.3 or later WITHOUT enabling VM device recognition, instances will FAIL to register.
Required Configuration on FortiManager 7.6.3+:
Before deploying FortiGate instances, log into FortiManager CLI and enable VM device recognition:
config system global
set fgfm-allow-vm enable
end
Verify the setting:
show system global | grep fgfm-allow-vm
Important notes:
- This configuration must be completed BEFORE deploying FortiGate-VM instances
- When upgrading from FortiManager < 7.6.3, existing managed VM devices continue functioning, but new VM devices cannot be added until `fgfm-allow-vm` is enabled
- This setting is global and affects all ADOMs on the FortiManager
- This is a one-time configuration change per FortiManager instance
Verification after deployment:
- Navigate to Device Manager > Device & Groups in FortiManager GUI
- Confirm FortiGate-VM instances appear as unauthorized devices (not as errors)
- Authorize devices as normal
Troubleshooting if instances fail to register:
- Check FortiManager version: `get system status`
- If the version is 7.6.3 or later, verify `fgfm-allow-vm` is enabled
- If disabled, enable it and wait 1-5 minutes for FortiGate instances to retry registration
- Check FortiManager logs: `diagnose debug application fgfmd -1`
FortiManager Workflow
After deployment:
Verify device registration:
- Log into FortiManager GUI
- Navigate to Device Manager > Device & Groups
- Confirm primary FortiGate instance appears as unauthorized device
Authorize device:
- Right-click on unauthorized device
- Select Authorize
- Assign to appropriate ADOM and device group
Install policy package:
- Create or assign policy package to authorized device
- Click Install to push policies to FortiGate
Verify configuration sync:
- Make configuration change on FortiManager
- Install policy package to primary FortiGate
- Verify change appears on secondary FortiGate instances via HA sync
Best Practices
- Pre-configure FortiManager: Create ADOMs, device groups, and policy packages before deploying autoscale group
- Test in non-production: Validate FortiManager integration in dev/test environment first
- Monitor device status: Set up FortiManager alerts for device disconnections
- Document policy workflow: Ensure team understands FortiManager → Primary → Secondary sync pattern
- Plan for primary failover: If primary instance fails, new primary automatically registers with FortiManager
- Backup FortiManager regularly: Critical single point of management; ensure proper backup strategy
Reference Documentation
For complete FortiManager integration details, including User Managed Scaling (UMS) mode, see the project file: FortiManager Integration Configuration
Next Steps
After configuring FortiManager integration, proceed to Autoscale Group Capacity to configure instance counts and scaling behavior.
Autoscale Group Capacity
Overview
Configure the autoscale group size parameters to define minimum, maximum, and desired instance counts for both BYOL and on-demand (PAYG) autoscale groups.
Configuration
# BYOL ASG capacity
asg_byol_asg_min_size = 1
asg_byol_asg_max_size = 2
asg_byol_asg_desired_size = 1
# On-Demand (PAYG) ASG capacity
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 2
asg_ondemand_asg_desired_size = 0
Parameter Definitions
| Parameter | Description | Recommendations |
|---|---|---|
| min_size | Minimum number of instances the ASG maintains | Set to baseline capacity requirement |
| max_size | Maximum number of instances the ASG can scale to | Set based on peak traffic projections + 20% buffer |
| desired_size | Target number of instances the ASG attempts to maintain | Typically equals min_size for baseline capacity |
Capacity Planning Strategies
Strategy 1: BYOL Baseline with PAYG Burst (Recommended)
Objective: Optimize costs by using BYOL for steady-state traffic and PAYG for unpredictable spikes
# BYOL handles baseline 24/7 traffic
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4
asg_byol_asg_desired_size = 2
# PAYG handles burst traffic only
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 6
asg_ondemand_asg_desired_size = 0
Scaling behavior:
- Normal operations: 2 BYOL instances handle traffic
- Traffic increases: BYOL ASG scales up to 4 instances
- Traffic continues increasing: PAYG ASG scales from 0 → 6 instances
- Traffic decreases: PAYG ASG scales down to 0, then BYOL ASG scales down to 2
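The scaling behavior above can be sketched as a simple allocation function. This is illustrative only — in the deployed stack each ASG scales independently on CloudWatch alarms — and the function name and parameter defaults (mirroring this example's sizes) are hypothetical:

```python
def allocate_capacity(needed, byol_min=2, byol_max=4, paygo_max=6):
    """Illustrate BYOL-baseline-plus-PAYG-burst allocation.

    BYOL absorbs demand first (never dropping below its minimum);
    PAYG covers only the overflow beyond the BYOL maximum.
    """
    byol = min(max(needed, byol_min), byol_max)
    paygo = min(max(needed - byol_max, 0), paygo_max)
    return byol, paygo

# Normal operations: baseline handled entirely by BYOL
print(allocate_capacity(2))   # (2, 0)
# Heavy traffic: BYOL at max, PAYG bursts to cover the rest
print(allocate_capacity(8))   # (4, 4)
# Beyond combined capacity: both groups cap at their maximums
print(allocate_capacity(20))  # (4, 6)
```

The last case shows why the hybrid strategy still has a hard ceiling: once both groups hit max_size, further demand is not absorbed.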
Strategy 2: All PAYG (Simplest)
Objective: Maximum flexibility with zero license management overhead
# No BYOL instances
asg_byol_asg_min_size = 0
asg_byol_asg_max_size = 0
asg_byol_asg_desired_size = 0
# All capacity is PAYG
asg_ondemand_asg_min_size = 2
asg_ondemand_asg_max_size = 8
asg_ondemand_asg_desired_size = 2
Use cases:
- Proof of concept or testing
- Short-term projects (< 6 months)
- Extreme variability where license planning is impractical
Strategy 3: All BYOL (Lowest Cost)
Objective: Minimum operating costs for long-term, predictable workloads
# All capacity is BYOL
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 6
asg_byol_asg_desired_size = 2
# No PAYG instances
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 0
asg_ondemand_asg_desired_size = 0
Requirements:
- Sufficient BYOL licenses for `max_size` (6 in this example)
- Predictable traffic patterns that rarely exceed max capacity
- Willingness to accept a capacity ceiling (no burst beyond the BYOL max)
CloudWatch Alarm Integration
Autoscale group scaling is triggered by CloudWatch alarms monitoring CPU utilization:
Default thresholds (set in underlying module):
- Scale-out alarm: CPU > 70% for 2 consecutive periods (2 minutes)
- Scale-in alarm: CPU < 30% for 2 consecutive periods (2 minutes)
Customization (requires editing underlying module):
# Located in module: fortinetdev/cloud-modules/aws
scale_out_threshold = 80 # Higher threshold = scale out less eagerly (fewer instances, lower cost)
scale_in_threshold = 20 # Lower threshold = scale in more conservatively (instances retained longer)
Capacity Planning Calculator
Formula: Capacity Needed = (Peak Gbps Throughput) / (Per-Instance Gbps) × 1.2
Example:
- Peak throughput requirement: 8 Gbps
- c6i.xlarge (4 vCPU) with IPS enabled: ~2 Gbps per instance
- Calculation: 8 / 2 × 1.2 = 4.8 → round up to 5 instances
- Set `max_size = 5` or higher for a safety margin
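The calculation above can be wrapped in a small helper (a sketch; the 1.2 multiplier is the 20% buffer recommended earlier):

```python
import math

def instances_needed(peak_gbps, per_instance_gbps, buffer=1.2):
    """Capacity = (peak throughput / per-instance throughput) x buffer, rounded up."""
    return math.ceil(peak_gbps / per_instance_gbps * buffer)

# Worked example from above: 8 Gbps peak, ~2 Gbps per c6i.xlarge with IPS
print(instances_needed(8, 2))  # 8 / 2 * 1.2 = 4.8 -> 5 instances
```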
Important Considerations
Tip
Testing Capacity Settings
For initial deployments and testing:
- Start with min_size = 1 and max_size = 2 to verify traffic flows correctly
- Test scaling by generating load and monitoring ASG behavior
- Once validated, increase capacity to production values via AWS Console or Terraform update
- No need to destroy/recreate stack just to change capacity settings
Next Steps
After configuring capacity, proceed to Primary Scale-In Protection to protect the primary instance from being terminated during scale-in events.
Primary Scale-In Protection
Overview
Protect the primary FortiGate instance from scale-in events to maintain configuration synchronization stability and prevent unnecessary primary elections.
Configuration
primary_scalein_protection = true
Why Protect the Primary Instance?
In FortiGate autoscale architecture:
- Primary instance: Elected leader responsible for configuration management and HA sync
- Secondary instances: Receive configuration from primary via FortiGate-native HA synchronization
Without scale-in protection:
- AWS autoscaling may select primary instance for termination during scale-in
- Remaining instances must elect new primary
- Configuration may be temporarily unavailable during election
- Potential for configuration loss if primary was processing updates
With scale-in protection:
- AWS autoscaling only terminates secondary instances
- Primary instance remains stable unless it is the last instance
- Configuration synchronization continues uninterrupted
- Predictable autoscale group behavior
How It Works
The primary_scalein_protection variable is passed through to the underlying Terraform module (autoscale_group.tf), which applies instance scale-in protection to the primary instance. AWS autoscaling respects the protection attribute and never selects protected instances for scale-in events.
Verification
You can verify scale-in protection in the AWS Console:
- Navigate to EC2 > Auto Scaling Groups
- Select your autoscale group
- Click Instance management tab
- Look for Scale-in protection column showing “Protected” for primary instance
When Protection is Removed
Scale-in protection automatically removes when:
- Instance is the last remaining instance in the ASG (respecting `min_size`)
- Manual termination via AWS Console or API (protection can be overridden)
- Autoscale group is deleted
Best Practices
- Always enable for production: Set `primary_scalein_protection = true` for production deployments
- Consider disabling for dev/test: Development environments may not require protection
- Monitor primary health: Protected instances still fail health checks and can be replaced
- Document protection status: Ensure operations teams understand why primary instance is protected
AWS Documentation Reference
For more information, see the AWS documentation on Amazon EC2 Auto Scaling instance scale-in protection.
Next Steps
After configuring primary protection, review Additional Configuration Options for fine-tuning instance specifications and advanced settings.
Additional Configuration Options
Overview
This section covers additional configuration options for fine-tuning FortiGate instance specifications and advanced deployment settings.
FortiGate Instance Specifications
Instance Type Selection
fgt_instance_type = "c7gn.xlarge"
Instance type selection considerations:
- c6i/c7i series: Intel-based compute-optimized (best for x86 workloads)
- c6g/c7g/c7gn series: AWS Graviton (ARM-based, excellent performance)
- Sizing: Choose vCPU count matching expected throughput requirements
Common instance types for FortiGate:
| Instance Type | vCPUs | Memory | Network Performance | Best For |
|---|---|---|---|---|
| c6i.large | 2 | 4 GB | Up to 12.5 Gbps | Small deployments, dev/test |
| c6i.xlarge | 4 | 8 GB | Up to 12.5 Gbps | Standard production workloads |
| c6i.2xlarge | 8 | 16 GB | Up to 12.5 Gbps | High-throughput environments |
| c7gn.xlarge | 4 | 8 GB | Up to 30 Gbps | High-performance networking |
| c7gn.2xlarge | 8 | 16 GB | Up to 30 Gbps | Very high-performance networking |
FortiOS Version
fortios_version = "7.4.5"
Version specification options:
- Exact version (e.g., "7.4.5"): Pin to a specific version for consistency across environments
- Major version (e.g., "7.4"): Automatically use the latest minor version within the major release
- Latest: Omit or use "latest" to always deploy the newest available version
Recommendations:
- Production: Use exact version numbers to prevent unexpected changes
- Dev/Test: Use major version or latest to test new features and fixes
- Always test new FortiOS versions in non-production before upgrading production deployments
Version considerations:
- Newer versions may include critical security fixes
- Performance improvements and new features
- Potential breaking changes in configuration syntax
- Always review release notes before upgrading
FortiGate GUI Port
fortigate_gui_port = 443
Common options:
- 443 (default): Standard HTTPS port
- 8443: Alternate HTTPS port (some organizations prefer moving the GUI off the default port for security)
- 10443: Another common alternate port
When changing the GUI port:
- Update security group rules to allow traffic to new port
- Update documentation and runbooks with new port
- Existing sessions will be dropped when port changes
- Coordinate change with operations team
Gateway Load Balancer Cross-Zone Load Balancing
allow_cross_zone_load_balancing = true
Enabled (true) - Recommended for Production
- GWLB distributes traffic to healthy FortiGate instances in any Availability Zone
- Better utilization of capacity during partial AZ failures
- Improved overall availability and fault tolerance
- Traffic can flow to any healthy instance regardless of AZ
Disabled (false)
- GWLB only distributes traffic to instances in same AZ as GWLB endpoint
- Traffic remains within single AZ (lowest latency)
- Reduced capacity during AZ-specific health issues
- Must maintain sufficient capacity in each AZ independently
Decision Factors
Enable for:
- Production environments requiring maximum availability
- Multi-AZ deployments where instance distribution may be uneven
- Architectures where AZ-level failures must be transparent to applications
- Workloads where availability is prioritized over lowest latency
Disable for:
- Workloads with strict latency requirements
- Architectures with guaranteed even instance distribution across AZs
- Environments with predictable AZ-local traffic patterns
- Data residency requirements mandating AZ-local processing
Recommendation: Enable for production deployments to maximize availability and capacity utilization
SSH Key Pair
keypair_name = "my-fortigate-keypair"
Purpose: SSH key pair for emergency CLI access to FortiGate instances
Best practices:
- Create dedicated key pair for FortiGate instances (separate from application instances)
- Store private key securely in password manager or AWS Secrets Manager
- Rotate key pairs periodically (every 6-12 months)
- Document key pair name and location in runbooks
- Limit access to private key to authorized personnel only
Creating a key pair:
# Via AWS CLI
aws ec2 create-key-pair --key-name my-fortigate-keypair --query 'KeyMaterial' --output text > my-fortigate-keypair.pem
chmod 400 my-fortigate-keypair.pem
# Or via AWS Console: EC2 > Key Pairs > Create Key Pair
Resource Tagging
resource_tags = {
Environment = "Production"
Project = "FortiGate-Autoscale"
Owner = "security-team@example.com"
CostCenter = "CC-12345"
}
Common tags to include:
- Environment: Production, Development, Staging, Test
- Project: Project or application name
- Owner: Team or individual responsible for resources
- CostCenter: For cost allocation and chargeback
- ManagedBy: Terraform, CloudFormation, etc.
- CreatedDate: When resources were initially deployed
Benefits of comprehensive tagging:
- Cost allocation and reporting
- Resource organization and filtering
- Access control policies
- Automation and orchestration
- Compliance and governance
Summary Checklist
Before proceeding to deployment, verify you’ve configured:
- ✅ Internet Egress: EIP or NAT Gateway mode selected
- ✅ Firewall Architecture: 1-ARM or 2-ARM mode chosen
- ✅ Management Isolation: Dedicated ENI and/or VPC configured (if required)
- ✅ Licensing: BYOL directory populated or FortiFlex configured
- ✅ FortiManager: Integration enabled (if centralized management required)
- ✅ Capacity: ASG min/max/desired sizes set appropriately
- ✅ Primary Protection: Scale-in protection enabled for production
- ✅ Instance Specs: Instance type and FortiOS version selected
- ✅ Additional Options: GUI port, cross-zone LB, key pair, tags configured
Next Steps
You’re now ready to proceed to the Summary page for a complete overview of all solution components, or jump directly to Templates to begin deployment.
Solution Components Summary
Overview
This summary provides a comprehensive reference of all solution components covered in this section, with quick decision guides and configuration references.
Component Quick Reference
1. Internet Egress Options
| Option | Hourly Cost | Data Processing | Monthly Cost (2 AZs) | Source IP | Best For |
|---|---|---|---|---|---|
| EIP Mode | $0.005/IP | None | ~$7.20 | Variable | Cost-sensitive, dev/test |
| NAT Gateway | $0.045/NAT × 2 | $0.045/GB | ~$65 base + data† | Stable | Production, compliance |
† Data processing example: 1 TB/month = $45 additional cost
Total NAT Gateway cost estimate: $65 (base) + $45 (1TB data) = $110/month for 2 AZs with 1TB egress
access_internet_mode = "eip" # or "nat_gw"
Key Decision: Do you need predictable source IPs for allowlisting?
- Yes → NAT Gateway (stable IPs, higher cost)
- No → EIP (variable IPs, lower cost)
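The cost figures above can be reproduced with a quick estimate. This is a sketch using the table's illustrative rates — actual AWS pricing varies by region — and it assumes 730 hours per month:

```python
def nat_gateway_monthly_cost(az_count=2, data_gb=0, hourly_rate=0.045,
                             data_rate=0.045, hours_per_month=730):
    """Estimate monthly NAT Gateway cost: per-AZ hourly charge plus data processing."""
    base = az_count * hourly_rate * hours_per_month
    return round(base + data_gb * data_rate, 2)

# 2 AZs with 1 TB (1000 GB) of monthly egress: ~$65.70 base + $45.00 data
print(nat_gateway_monthly_cost(az_count=2, data_gb=1000))  # 110.7
```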
2. Firewall Architecture
| Mode | Interfaces | Complexity | Best For |
|---|---|---|---|
| 2-ARM | port1 + port2 | Higher | Production, clear segmentation |
| 1-ARM | port1 only | Lower | Simplified routing |
firewall_policy_mode = "2-arm" # or "1-arm"
3. Management Isolation
Three progressive levels:
- Combined (Default): Port2 serves data + management
- Dedicated ENI: Port2 dedicated to management only
- Dedicated VPC: Complete physical network separation
enable_dedicated_management_eni = true
enable_dedicated_management_vpc = true
4. Licensing Options
| Model | Best For | Cost (12 months) | Management |
|---|---|---|---|
| BYOL | Long-term, predictable | Lowest | License files |
| FortiFlex | Variable, flexible | Medium | API-driven |
| PAYG | Short-term, simple | Highest | None required |
Hybrid Strategy (Recommended): BYOL baseline + PAYG burst
5. FortiManager Integration
enable_fortimanager_integration = true
fortimanager_ip = "10.0.100.50"
fortimanager_sn = "FMGVM0000000001"
⚠️ Critical: FortiManager 7.6.3+ requires fgfm-allow-vm enabled before deployment
6. Autoscale Group Capacity
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4
asg_ondemand_asg_max_size = 4
Formula: Capacity = (Peak Gbps / Per-Instance Gbps) × 1.2
7. Primary Scale-In Protection
primary_scalein_protection = true
Always enable for production to prevent primary instance termination during scale-in.
8. Additional Configuration
fgt_instance_type = "c6i.xlarge"
fortios_version = "7.4.5"
fortigate_gui_port = 443
allow_cross_zone_load_balancing = true
keypair_name = "my-fortigate-keypair"
Common Deployment Patterns
Pattern 1: Production with Maximum Isolation
access_internet_mode = "nat_gw"
firewall_policy_mode = "2-arm"
enable_dedicated_management_eni = true
enable_dedicated_management_vpc = true
asg_license_directory = "asg_license"
enable_fortimanager_integration = true
primary_scalein_protection = true
Use case: Enterprise production, compliance-driven
Pattern 2: Development and Testing
access_internet_mode = "eip"
firewall_policy_mode = "1-arm"
asg_ondemand_asg_min_size = 1
asg_ondemand_asg_max_size = 2
enable_fortimanager_integration = false
Use case: Development, testing, POC
Pattern 3: Balanced Production
access_internet_mode = "nat_gw"
firewall_policy_mode = "2-arm"
enable_dedicated_management_eni = true
fortiflex_username = "your-api-username"
enable_fortimanager_integration = true
primary_scalein_protection = true
Use case: Standard production, flexible licensing
Decision Tree
1. Do you need predictable source IPs for allowlisting?
├─ Yes → NAT Gateway (~$110/month for 2 AZs + 1TB data)
└─ No → EIP (~$7/month)
2. Dedicated management interface?
├─ Yes → 2-ARM + Dedicated ENI
└─ No → 1-ARM
3. Complete management isolation?
├─ Yes → Dedicated Management VPC
└─ No → Dedicated ENI or skip
4. Licensing model?
├─ Long-term (12+ months) → BYOL
├─ Variable workload → FortiFlex
├─ Short-term (< 3 months) → PAYG
└─ Best optimization → BYOL + PAYG hybrid
5. Centralized policy management?
├─ Yes → Enable FortiManager
└─ No → Standalone
6. Production deployment?
├─ Yes → Enable primary scale-in protection
   └─ No → Optional
Pre-Deployment Checklist
Infrastructure:
- AWS account with permissions
- VPC architecture designed
- Subnet CIDR planning complete
- Transit Gateway configured (if needed)
Licensing:
- BYOL: License files ready (≥ max_size)
- FortiFlex: Program registered, API credentials
- PAYG: Marketplace subscription accepted
FortiManager (if applicable):
- FortiManager deployed and accessible
- FortiManager 7.6.3+: `fgfm-allow-vm` enabled
- ADOMs and device groups created
- Network connectivity verified
Configuration:
- `terraform.tfvars` populated
- SSH key pair created
- Resource tags defined
- Instance type selected
Troubleshooting Quick Reference
| Issue | Check |
|---|---|
| No internet connectivity | Route tables, IGW, NAT GW, EIP |
| Management inaccessible | Security groups, routing, EIP |
| License not activating | Lambda logs, S3, DynamoDB, FortiFlex API |
| FortiManager registration fails | fgfm-allow-vm, network, serial number |
| Scaling not working | CloudWatch alarms, ASG health checks |
| Primary terminated | Verify protection enabled |
Next Steps
Proceed to Templates for step-by-step deployment procedures.
Additional Resources
Templates
Deployment Templates
The FortiGate Autoscale Simplified Template provides modular Terraform templates for deploying autoscale architectures in AWS. This section covers both templates and their integration patterns.
Available Templates
Templates Overview
Understand the template architecture, choose deployment patterns, and learn how templates work together.
existing_vpc_resources Template (Optional)
Create supporting infrastructure for lab and test environments including management VPC, Transit Gateway, and spoke VPCs with traffic generators.
autoscale_template (Required)
Deploy the core FortiGate autoscale infrastructure including inspection VPC, Gateway Load Balancer, and FortiGate autoscale groups.
Quick Start Paths
For Lab/Test Environments
- Start with Templates Overview to understand architecture
- Deploy existing_vpc_resources for complete test environment
- Deploy autoscale_template connected to created resources
- Time: ~30-40 minutes
For Production Deployments
- Review Templates Overview for integration patterns
- Skip existing_vpc_resources template
- Deploy autoscale_template to existing infrastructure
- Time: ~15-20 minutes
Template Coordination
When using both templates together, ensure these variables match exactly:
- `aws_region`
- `availability_zone_1` and `availability_zone_2`
- `cp` (customer prefix)
- `env` (environment)
- `vpc_cidr_management`
- `vpc_cidr_spoke`
See Templates Overview for detailed coordination requirements.
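This matching requirement can be checked mechanically. The following is a hypothetical helper — the variable names come from the list above, but the `find_mismatches` function and the parsing of the two tfvars files are assumptions, not part of the templates:

```python
# Variables that must match between existing_vpc_resources and autoscale_template
SHARED_VARS = [
    "aws_region", "availability_zone_1", "availability_zone_2",
    "cp", "env", "vpc_cidr_management", "vpc_cidr_spoke",
]

def find_mismatches(vpc_vars, asg_vars):
    """Return shared variables whose values differ (or are missing) between the two tfvars sets."""
    return [v for v in SHARED_VARS if vpc_vars.get(v) != asg_vars.get(v)]

vpc = {"aws_region": "us-east-1", "cp": "acme", "env": "prod"}
asg = {"aws_region": "us-east-1", "cp": "acme", "env": "dev"}
print(find_mismatches(vpc, asg))  # ['env']
```

Running such a check before `terraform apply` on the second template catches drift that would otherwise break tag-based resource discovery.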
What’s Next?
- New to autoscale? Start with Templates Overview
- Need lab environment? Go to existing_vpc_resources
- Ready to deploy? Go to autoscale_template
- Need configuration details? See Solution Components
Subsections of Templates
Templates Overview
Introduction
The FortiGate Autoscale Simplified Template consists of two complementary Terraform templates that work together to deploy a complete FortiGate autoscale architecture in AWS:
- existing_vpc_resources (Required First): Creates the Inspection VPC and supporting infrastructure with `Fortinet-Role` tags for resource discovery
- autoscale_template (Required Second): Deploys the FortiGate autoscale group into the existing Inspection VPC
Warning
Important Workflow Change
The autoscale_template now deploys into existing VPCs rather than creating them. You must run existing_vpc_resources first to create the Inspection VPC with proper Fortinet-Role tags, then run autoscale_template to deploy the FortiGate autoscale group.
This modular approach allows you to:
- Separate VPC infrastructure from FortiGate deployment for better lifecycle management
- Use tag-based resource discovery for flexible integration
- Create a complete lab environment including management VPC, Transit Gateway, and spoke VPCs with traffic generators
- Mix and match components based on your specific requirements
Template Architecture
Component Relationships
┌─────────────────────────────────────────────────────────────────┐
│ existing_vpc_resources Template (Run First) │
│ │
│ ┌──────────────────┐ ┌─────────────────┐ │
│ │ Management VPC │ │ Transit Gateway │ │
│ │ - FortiManager │ │ - Spoke VPCs │ │
│ │ - FortiAnalyzer │ │ - Linux Instances │
│ │ - Jump Box │ │ - Test Traffic │ │
│ └──────────────────┘ └─────────────────┘ │
│ │ │ │
│ └───────────┬───────────┘ │
│ │ │
│ ┌───────────────────▼───────────────────┐ │
│ │ Inspection VPC (with Fortinet-Role │ │
│ │ tags for resource discovery) │ │
│ │ - Public/Private/GWLBE Subnets │ │
│ │ - Route Tables, IGW, NAT GW │ │
│ │ - TGW Attachment (optional) │ │
│ └───────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
│ (Fortinet-Role tag discovery)
┌──────────────────────┼──────────────────────────────────────────┐
│ autoscale_template (Run Second) │ │
│ │ │
│ ┌────────────────── ▼ ────────────────┐ │
│ │ Deploys INTO Inspection VPC │ │
│ │ - FortiGate Autoscale Group │ │
│ │ - Gateway Load Balancer │ │
│ │ - GWLB Endpoints │ │
│ │ - Lambda Functions │ │
│ │ - Route modifications │ │
│ └─────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
Fortinet-Role Tag Discovery
The autoscale_template discovers existing resources using Fortinet-Role tags. This tag-based approach provides:
- Decoupled lifecycle management: VPC infrastructure can persist while FortiGate deployments are updated
- Flexible integration: Works with any VPC that has the correct tags, not just those created by existing_vpc_resources
- Clear resource ownership: Tags explicitly identify resources intended for FortiGate integration
Quick Decision Tree
Use this decision tree to determine your deployment approach:
1. Do you have existing VPCs with Fortinet-Role tags?
├─ YES → Deploy autoscale_template only
│ (Resources discovered via Fortinet-Role tags)
│
└─ NO → Continue to question 2
2. Do you need a complete lab environment for testing?
├─ YES → Deploy existing_vpc_resources (all components)
│ Then deploy autoscale_template
│ See: Lab Environment Pattern
│
└─ NO → Continue to question 3
3. Do you need centralized management (FortiManager/FortiAnalyzer)?
├─ YES → Deploy existing_vpc_resources (with management VPC)
│ Then deploy autoscale_template
│ See: Management VPC Pattern
│
└─ NO → Deploy existing_vpc_resources (inspection VPC only)
Then deploy autoscale_template
          See: Minimal Deployment Pattern
Info
Key Point: The autoscale_template always requires an existing Inspection VPC with Fortinet-Role tags. Use existing_vpc_resources to create this infrastructure, or manually tag your existing VPCs.
Template Comparison
| Aspect | existing_vpc_resources | autoscale_template |
|---|---|---|
| Required? | Yes (creates Inspection VPC) | Yes (deploys FortiGate) |
| Run Order | First | Second |
| Purpose | VPC infrastructure with Fortinet-Role tags | FortiGate autoscale deployment |
| Creates | Inspection VPC, Management VPC, TGW, Spoke VPCs | FortiGate ASG, GWLB, Lambda, route modifications |
| Discovery | N/A (creates resources) | Uses Fortinet-Role tags |
| Cost | VPC infrastructure costs | FortiGate instance costs |
| Lifecycle | Persistent infrastructure | Can be redeployed independently |
| Production Use | Yes (or tag existing VPCs) | Always |
Common Integration Patterns
Pattern 1: Complete Lab Environment
Use case: Full-featured testing environment with management and traffic generation
Templates needed:
- ✅ existing_vpc_resources (with all components enabled including Inspection VPC)
- ✅ autoscale_template (deploys into Inspection VPC via Fortinet-Role tags)
What you get:
- Inspection VPC with Fortinet-Role tags for resource discovery
- Management VPC with FortiManager, FortiAnalyzer, and Jump Box
- Transit Gateway with spoke VPCs
- Linux instances for traffic generation
- FortiGate autoscale group with GWLB
- Complete end-to-end testing environment
Estimated cost: ~$300-400/month for complete lab
Deployment time: ~25-30 minutes
Next steps: Lab Environment Workflow
Pattern 2: Production Integration (Existing VPCs)
Use case: Deploy FortiGate inspection to existing production infrastructure
Templates needed:
- ⚠️ Manual tagging of existing VPCs with Fortinet-Role tags, OR
- ✅ existing_vpc_resources (inspection VPC only, to create properly tagged infrastructure)
- ✅ autoscale_template (discovers resources via Fortinet-Role tags)
Prerequisites:
- Existing VPCs must have `Fortinet-Role` tags (see Required Tags)
- OR use existing_vpc_resources to create a new Inspection VPC with the correct tags
- Network connectivity established
What you get:
- FortiGate autoscale group with GWLB deployed into existing/tagged VPC
- Integration with existing Transit Gateway
- Tag-based resource discovery for flexibility
Estimated cost: ~$150-250/month (FortiGates only, plus any new VPC infrastructure)
Deployment time: ~15-20 minutes (plus tagging time if manual)
Next steps: Production Integration Workflow
Pattern 3: Management VPC Only
Use case: Testing FortiManager/FortiAnalyzer integration without spoke VPCs
Templates needed:
- ✅ existing_vpc_resources (Inspection VPC + management VPC components)
- ✅ autoscale_template (with FortiManager integration enabled)
What you get:
- Inspection VPC with Fortinet-Role tags
- Dedicated management VPC with FortiManager and FortiAnalyzer
- FortiGate autoscale group managed by FortiManager
- No Transit Gateway or spoke VPCs
Estimated cost: ~$300/month
Deployment time: ~20-25 minutes
Next steps: Management VPC Workflow
Pattern 4: Minimal Inspection VPC Only
Use case: Simplest deployment for testing FortiGate autoscale
Templates needed:
- ✅ existing_vpc_resources (Inspection VPC only)
- ✅ autoscale_template (without TGW attachment)
Configuration:
# existing_vpc_resources
enable_build_inspection_vpc = true
enable_build_management_vpc = false
enable_build_existing_subnets = false
What you get:
- Inspection VPC with Fortinet-Role tags
- FortiGate autoscale group with GWLB
- No management infrastructure or spoke VPCs
Estimated cost: ~$150-200/month
Deployment time: ~15 minutes
Next steps: Minimal Deployment Workflow
Required Fortinet-Role Tags
The autoscale_template discovers existing resources using Fortinet-Role tags. These tags are automatically created by existing_vpc_resources, or you can manually apply them to existing VPCs.
Required Tags for Inspection VPC
| Resource Type | Fortinet-Role Tag Value | Required |
|---|---|---|
| VPC | {cp}-{env}-inspection-vpc | Yes |
| Internet Gateway | {cp}-{env}-inspection-igw | Yes |
| Public Subnet AZ1 | {cp}-{env}-inspection-public-az1 | Yes |
| Public Subnet AZ2 | {cp}-{env}-inspection-public-az2 | Yes |
| GWLBE Subnet AZ1 | {cp}-{env}-inspection-gwlbe-az1 | Yes |
| GWLBE Subnet AZ2 | {cp}-{env}-inspection-gwlbe-az2 | Yes |
| Private Subnet AZ1 | {cp}-{env}-inspection-private-az1 | Yes |
| Private Subnet AZ2 | {cp}-{env}-inspection-private-az2 | Yes |
| Public Route Table AZ1 | {cp}-{env}-inspection-public-rt-az1 | Yes |
| Public Route Table AZ2 | {cp}-{env}-inspection-public-rt-az2 | Yes |
| GWLBE Route Table AZ1 | {cp}-{env}-inspection-gwlbe-rt-az1 | Yes |
| GWLBE Route Table AZ2 | {cp}-{env}-inspection-gwlbe-rt-az2 | Yes |
| Private Route Table AZ1 | {cp}-{env}-inspection-private-rt-az1 | Yes |
| Private Route Table AZ2 | {cp}-{env}-inspection-private-rt-az2 | Yes |
| NAT Gateway AZ1 | {cp}-{env}-inspection-natgw-az1 | If nat_gw mode |
| NAT Gateway AZ2 | {cp}-{env}-inspection-natgw-az2 | If nat_gw mode |
| Mgmt Subnet AZ1 | {cp}-{env}-inspection-management-az1 | If dedicated mgmt ENI |
| Mgmt Subnet AZ2 | {cp}-{env}-inspection-management-az2 | If dedicated mgmt ENI |
| Mgmt Route Table AZ1 | {cp}-{env}-inspection-management-rt-az1 | If dedicated mgmt ENI |
| Mgmt Route Table AZ2 | {cp}-{env}-inspection-management-rt-az2 | If dedicated mgmt ENI |
| TGW Attachment | {cp}-{env}-inspection-tgw-attachment | If TGW enabled |
| TGW Route Table | {cp}-{env}-inspection-tgw-rtb | If TGW enabled |
Example: For cp="acme" and env="test", the VPC tag would be acme-test-inspection-vpc
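If you are tagging existing resources by hand, the tag value can be built and applied with the AWS CLI. A minimal sketch, assuming configured AWS credentials — the VPC ID below is a placeholder, so the actual `create-tags` call is shown commented:

```shell
# Build the tag value from your cp/env values (schema: {cp}-{env}-inspection-vpc)
CP="acme"
ENV="test"
TAG_VALUE="${CP}-${ENV}-inspection-vpc"
echo "Fortinet-Role tag value: ${TAG_VALUE}"

# Substitute your own VPC ID, then uncomment to apply the tag:
# aws ec2 create-tags --resources vpc-0123456789abcdef0 \
#     --tags "Key=Fortinet-Role,Value=${TAG_VALUE}"
```

Repeat the same pattern for each subnet, route table, and gateway listed in the table above, substituting the corresponding tag value.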
Optional Tags for Management VPC
| Resource Type | Fortinet-Role Tag Value | Required |
|---|---|---|
| VPC | {cp}-{env}-management-vpc | If dedicated mgmt VPC |
| Public Subnet AZ1 | {cp}-{env}-management-public-az1 | If dedicated mgmt VPC |
| Public Subnet AZ2 | {cp}-{env}-management-public-az2 | If dedicated mgmt VPC |
Deployment Workflows
Lab Environment Workflow
Objective: Create complete testing environment from scratch
# Step 1: Deploy existing_vpc_resources (creates Inspection VPC with Fortinet-Role tags)
cd terraform/existing_vpc_resources
cp terraform.tfvars.example terraform.tfvars
# Edit: Enable all components:
# enable_build_inspection_vpc = true
# enable_build_management_vpc = true
# enable_build_existing_subnets = true
# enable_fortimanager = true
# enable_fortianalyzer = true
terraform init && terraform apply
# Step 2: Note outputs (Fortinet-Role tags created automatically)
terraform output # Save TGW name and FortiManager IP
# Step 3: Deploy autoscale_template (discovers VPCs via Fortinet-Role tags)
cd ../autoscale_template
cp terraform.tfvars.example terraform.tfvars
# Edit: Use SAME cp and env values (critical for tag discovery)
# Set attach_to_tgw_name from Step 2 output
# Configure FortiManager integration
terraform init && terraform apply
# Step 4: Verify
ssh -i ~/.ssh/keypair.pem ec2-user@<jump-box-ip>
curl http://<linux-instance-ip> # Test connectivity
Time to complete: 30-40 minutes
Warning
Critical: The cp and env variables must match between both templates for Fortinet-Role tag discovery to work.
See detailed guide: existing_vpc_resources Template
Production Integration Workflow
Objective: Deploy FortiGate inspection into existing or new Inspection VPC
Option A: Tag Existing VPCs Manually
If you have existing VPCs you want to use:
- Apply Fortinet-Role tags to your existing VPC resources (see Required Tags)
- Deploy autoscale_template with matching cp and env values
Option B: Create New Inspection VPC (Recommended)
# Step 1: Deploy existing_vpc_resources (Inspection VPC only)
cd terraform/existing_vpc_resources
cp terraform.tfvars.example terraform.tfvars
# Edit:
# enable_build_inspection_vpc = true
# enable_build_management_vpc = false
# enable_build_existing_subnets = true # if TGW needed
# attach_to_tgw_name = "production-tgw" # existing TGW
terraform init && terraform apply
# Step 2: Deploy autoscale_template
cd ../autoscale_template
cp terraform.tfvars.example terraform.tfvars
# Edit: Use SAME cp and env values
# Set attach_to_tgw_name to production TGW
# Configure production-appropriate capacity
terraform init && terraform apply
# Step 3: Update TGW route tables (if needed)
# Route spoke VPC traffic (0.0.0.0/0) to inspection VPC attachment
# Step 4: Test and validate
# Verify traffic flows through FortiGate
Time to complete: 20-30 minutes
See detailed guide: autoscale_template
Management VPC Workflow
Objective: Deploy management infrastructure with FortiManager/FortiAnalyzer
# Step 1: Deploy existing_vpc_resources (Inspection + Management VPCs)
cd terraform/existing_vpc_resources
cp terraform.tfvars.example terraform.tfvars
# Edit:
# enable_build_inspection_vpc = true
# enable_build_management_vpc = true
# enable_fortimanager = true
# enable_fortianalyzer = true
# enable_build_existing_subnets = false
terraform init && terraform apply
# Step 2: Configure FortiManager
# Access FortiManager GUI: https://<fmgr-ip>
# Enable VM device recognition if FMG 7.6.3+
config system global
set fgfm-allow-vm enable
end
# Step 3: Deploy autoscale_template
cd ../autoscale_template
cp terraform.tfvars.example terraform.tfvars
# Edit: Use SAME cp and env values
# enable_fortimanager_integration = true
# fortimanager_ip = <from Step 1 output>
# enable_dedicated_management_vpc = true
terraform init && terraform apply
# Step 4: Authorize devices on FortiManager
# Device Manager > Device & Groups
# Right-click unauthorized device > Authorize
Time to complete: 25-35 minutes
Minimal Deployment Workflow
Objective: Deploy FortiGate with minimal infrastructure
# Step 1: Deploy existing_vpc_resources (Inspection VPC only)
cd terraform/existing_vpc_resources
cp terraform.tfvars.example terraform.tfvars
# Edit:
# enable_build_inspection_vpc = true
# enable_build_management_vpc = false
# enable_build_existing_subnets = false
# inspection_access_internet_mode = "eip" # simpler, lower cost
terraform init && terraform apply
# Step 2: Deploy autoscale_template
cd ../autoscale_template
cp terraform.tfvars.example terraform.tfvars
# Edit: Use SAME cp and env values
# enable_tgw_attachment = false
# access_internet_mode = "eip"
terraform init && terraform apply
# Step 3: Note GWLB endpoint IDs for spoke VPC integration
terraform output gwlb_endpoint_az1_id
terraform output gwlb_endpoint_az2_id
# Step 4: Integrate spoke VPCs
# Deploy GWLB endpoints in spoke VPCs
# Update spoke VPC route tables to point to GWLB endpoints
Time to complete: 20-25 minutes (plus spoke VPC endpoint deployment)
When to Use Each Template
existing_vpc_resources - Always Required First
The existing_vpc_resources template is required to create the Inspection VPC with proper Fortinet-Role tags. Use it when:
✅ Any new FortiGate autoscale deployment
- Creates Inspection VPC with all required subnets and tags
- Can optionally include Management VPC, TGW, and spoke VPCs
- Provides foundation for autoscale_template deployment
✅ Creating a lab or test environment
- Enable all components for complete testing environment
- Includes FortiManager/FortiAnalyzer for management testing
- Traffic generators in spoke VPCs for load testing
✅ Production deployments with new infrastructure
- Creates properly tagged VPCs for FortiGate deployment
- Can attach to existing Transit Gateway
- Separates VPC lifecycle from FortiGate lifecycle
Alternative to existing_vpc_resources:
⚠️ Manually tag existing VPCs (advanced users only)
- Apply Fortinet-Role tags to existing VPCs following the tag schema
- Ensure all required resources (subnets, route tables, IGW, etc.) are properly tagged
- Useful when you cannot create new VPCs
autoscale_template - Always Required Second
The autoscale_template deploys FortiGate into the existing Inspection VPC:
✅ All FortiGate autoscale deployments
- Discovers Inspection VPC via Fortinet-Role tags
- Deploys FortiGate ASG, GWLB, Lambda functions
- Modifies route tables to enable traffic inspection
✅ Can be redeployed independently
- Inspection VPC persists between FortiGate redeployments
- Allows FortiGate version upgrades without VPC changes
- Simplifies lifecycle management
Template Variable Coordination
When using both templates together, certain variables must match exactly for Fortinet-Role tag discovery to work:
Critical Variables for Tag Discovery
| Variable | Purpose | Impact if Mismatched |
|---|---|---|
| cp (customer prefix) | Fortinet-Role tag prefix | autoscale_template cannot find VPCs |
| env (environment) | Fortinet-Role tag prefix | autoscale_template cannot find VPCs |
| aws_region | AWS region | Resources in wrong region |
| availability_zone_1 | First AZ | Subnet discovery fails |
| availability_zone_2 | Second AZ | Subnet discovery fails |
Warning
Critical: The cp and env variables form the prefix for all Fortinet-Role tags. If these don’t match between templates, the autoscale_template will fail with “no matching VPC/Subnet found” errors.
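Before running autoscale_template, you can sanity-check that the tag value it will search for actually resolves to a VPC. A sketch, assuming configured AWS credentials — the `describe-vpcs` call is shown commented so the snippet runs without AWS access:

```shell
# Recreate the tag value autoscale_template will filter on
CP="acme"
ENV="test"
EXPECTED_TAG="${CP}-${ENV}-inspection-vpc"
echo "autoscale_template filters on tag:Fortinet-Role = ${EXPECTED_TAG}"

# Uncomment to confirm exactly one VPC carries the tag:
# aws ec2 describe-vpcs \
#     --filters "Name=tag:Fortinet-Role,Values=${EXPECTED_TAG}" \
#     --query 'Vpcs[].VpcId' --output text
```

If the query returns nothing, the cp/env values differ between templates; if it returns more than one ID, duplicate tags exist and the Terraform data source lookup will fail.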
Example Coordinated Configuration
existing_vpc_resources/terraform.tfvars:
aws_region = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"
cp = "acme" # Creates tags like "acme-test-inspection-vpc"
env = "test"
vpc_cidr_ns_inspection = "10.0.0.0/16"
vpc_cidr_management = "10.3.0.0/16"
autoscale_template/terraform.tfvars:
aws_region = "us-west-2" # MUST MATCH
availability_zone_1 = "a" # MUST MATCH
availability_zone_2 = "c" # MUST MATCH
cp = "acme" # MUST MATCH - used for tag lookup
env = "test" # MUST MATCH - used for tag lookup
vpc_cidr_inspection = "10.0.0.0/16" # Should match existing VPC CIDR
vpc_cidr_management = "10.3.0.0/16" # Should match if using management VPC
attach_to_tgw_name = "acme-test-tgw" # Matches cp-env naming convention
How Tag Discovery Works
When autoscale_template runs, it looks up resources like this:
# autoscale_template/vpc_inspection.tf
data "aws_vpc" "inspection" {
filter {
name = "tag:Fortinet-Role"
values = ["${var.cp}-${var.env}-inspection-vpc"] # e.g., "acme-test-inspection-vpc"
}
}
This is why matching cp and env values is essential.
Next Steps
Choose your deployment pattern and proceed to the appropriate template guide:
- Lab/Test Environment: Start with existing_vpc_resources Template
- Production Deployment: Go directly to autoscale_template
- Need to review components? See Solution Components
- Need licensing guidance? See Licensing Options
Summary
The FortiGate Autoscale Simplified Template uses a two-phase deployment approach with Fortinet-Role tag discovery:
| Template | Purpose | Run Order | Creates |
|---|---|---|---|
| existing_vpc_resources | VPC infrastructure | First | Inspection VPC, Management VPC, TGW, Spoke VPCs (with Fortinet-Role tags) |
| autoscale_template | FortiGate deployment | Second | FortiGate ASG, GWLB, Lambda (discovers VPCs via tags) |
Key Principles:
- Run existing_vpc_resources first - Creates Inspection VPC with Fortinet-Role tags
- Match cp and env values - Critical for tag discovery between templates
- autoscale_template deploys into existing VPCs - Does not create VPC infrastructure
Recommended Starting Point:
- First-time users: Deploy both templates for complete lab environment
- Production deployments: Use existing_vpc_resources for new Inspection VPC, or manually tag existing VPCs
- All deployments: Ensure cp and env values match between templates
existing_vpc_resources Template
Overview
The existing_vpc_resources template creates the Inspection VPC and supporting infrastructure required for the FortiGate autoscale deployment. All resources are tagged with Fortinet-Role tags that allow the autoscale_template to discover and deploy into them.
Warning
This template must be run BEFORE autoscale_template. The autoscale_template discovers VPCs using Fortinet-Role tags created by this template. If you skip this template, you must manually apply the required tags to your existing VPCs.
What It Creates
The template conditionally creates the following components based on boolean variables. All resources are tagged with Fortinet-Role tags for discovery by autoscale_template.
Component Overview
| Component | Purpose | Required | Typical Cost/Month |
|---|---|---|---|
| Inspection VPC | VPC for FortiGate autoscale deployment | Yes | ~$50 (VPC/networking) |
| Management VPC | Centralized management infrastructure | No | ~$50 (VPC/networking) |
| FortiManager | Policy management and orchestration | No | ~$73 (m5.large) |
| FortiAnalyzer | Logging and reporting | No | ~$73 (m5.large) |
| Jump Box | Bastion host for secure access | No | ~$7 (t3.micro) |
| Transit Gateway | Central hub for VPC interconnectivity | No | ~$36 + data transfer |
| Spoke VPCs (East/West) | Simulated workload VPCs | No | ~$50 (networking) |
| Linux Instances | HTTP servers and traffic generators | No | ~$14 (2x t3.micro) |
Total estimated cost for complete lab: ~$300-400/month
Component Details
0. Inspection VPC (Required)
Purpose: The VPC where FortiGate autoscale group will be deployed by autoscale_template
Configuration variable:
enable_build_inspection_vpc = true
What gets created:
Inspection VPC (10.0.0.0/16)
├── Public Subnet AZ1 (FortiGate login/management)
├── Public Subnet AZ2
├── GWLBE Subnet AZ1 (Gateway Load Balancer Endpoints)
├── GWLBE Subnet AZ2
├── Private Subnet AZ1 (TGW attachment)
├── Private Subnet AZ2
├── Management Subnet AZ1 (optional - dedicated management ENI)
├── Management Subnet AZ2 (optional)
├── Internet Gateway
├── NAT Gateways (optional - if nat_gw mode)
├── Route Tables (per subnet type and AZ)
└── TGW Attachment (optional - if TGW enabled)
Fortinet-Role tags applied (for autoscale_template discovery):
| Resource | Fortinet-Role Tag |
|---|---|
| VPC | {cp}-{env}-inspection-vpc |
| IGW | {cp}-{env}-inspection-igw |
| Public Subnet AZ1 | {cp}-{env}-inspection-public-az1 |
| Public Subnet AZ2 | {cp}-{env}-inspection-public-az2 |
| GWLBE Subnet AZ1 | {cp}-{env}-inspection-gwlbe-az1 |
| GWLBE Subnet AZ2 | {cp}-{env}-inspection-gwlbe-az2 |
| Private Subnet AZ1 | {cp}-{env}-inspection-private-az1 |
| Private Subnet AZ2 | {cp}-{env}-inspection-private-az2 |
| Public RT AZ1 | {cp}-{env}-inspection-public-rt-az1 |
| Public RT AZ2 | {cp}-{env}-inspection-public-rt-az2 |
| GWLBE RT AZ1 | {cp}-{env}-inspection-gwlbe-rt-az1 |
| GWLBE RT AZ2 | {cp}-{env}-inspection-gwlbe-rt-az2 |
| Private RT AZ1 | {cp}-{env}-inspection-private-rt-az1 |
| Private RT AZ2 | {cp}-{env}-inspection-private-rt-az2 |
| NAT GW AZ1 | {cp}-{env}-inspection-natgw-az1 (if nat_gw mode) |
| NAT GW AZ2 | {cp}-{env}-inspection-natgw-az2 (if nat_gw mode) |
| TGW Attachment | {cp}-{env}-inspection-tgw-attachment (if TGW enabled) |
| TGW Route Table | {cp}-{env}-inspection-tgw-rtb (if TGW enabled) |
Example: For cp="acme" and env="test", tags would be acme-test-inspection-vpc, acme-test-inspection-public-az1, etc.
Warning
Critical Variable Coordination
The cp and env values used here must match exactly in autoscale_template for tag discovery to work. Mismatched values will cause autoscale_template to fail with “no matching VPC found” errors.
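For reference, the tagging pattern in Terraform looks roughly like the following. This is an illustrative sketch only — the resource and variable names are assumptions, not the template's actual identifiers:

```hcl
# Illustrative only: how a Fortinet-Role tag derived from cp/env
# ends up on the inspection VPC for later discovery.
resource "aws_vpc" "inspection" {
  cidr_block = var.vpc_cidr_ns_inspection
  tags = {
    Name            = "${var.cp}-${var.env}-inspection-vpc"
    "Fortinet-Role" = "${var.cp}-${var.env}-inspection-vpc"
  }
}
```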
Inspection VPC Internet Mode
inspection_access_internet_mode = "nat_gw" # or "eip"
- nat_gw: Creates NAT Gateways for FortiGate internet access (recommended for production)
- eip: FortiGates use Elastic IPs directly (simpler, lower cost)
Inspection VPC Dedicated Management ENI
inspection_enable_dedicated_management_eni = true
Creates additional management subnets within the Inspection VPC for dedicated management interfaces on FortiGate instances.
1. Management VPC (Optional)
Purpose: Centralized management infrastructure isolated from production traffic
Components:
- Dedicated VPC with public and private subnets across two Availability Zones
- Internet Gateway for external connectivity
- Security groups for management traffic
- Fortinet-Role tags for discovery by autoscale_template
Configuration variable:
enable_build_management_vpc = true
What gets created:
Management VPC (10.3.0.0/16)
├── Public Subnet AZ1 (10.3.1.0/24)
├── Public Subnet AZ2 (10.3.2.0/24)
├── Internet Gateway
└── Route Tables
Fortinet-Role tags applied (for autoscale_template discovery):
| Resource | Fortinet-Role Tag |
|---|---|
| VPC | {cp}-{env}-management-vpc |
| Public Subnet AZ1 | {cp}-{env}-management-public-az1 |
| Public Subnet AZ2 | {cp}-{env}-management-public-az2 |
FortiManager (Optional within Management VPC)
Configuration:
enable_fortimanager = true
fortimanager_instance_type = "m5.large"
fortimanager_os_version = "7.4.5"
fortimanager_host_ip = "10" # Results in .3.0.10
Access:
- GUI: https://<FortiManager-Public-IP>
- SSH: ssh admin@<FortiManager-Public-IP>
- Default credentials: admin / <instance-id>
Use cases:
- Testing FortiManager integration with autoscale group
- Centralized policy management demonstrations
- Device orchestration testing
FortiAnalyzer (Optional within Management VPC)
Configuration:
enable_fortianalyzer = true
fortianalyzer_instance_type = "m5.large"
fortianalyzer_os_version = "7.4.5"
fortianalyzer_host_ip = "11" # Results in .3.0.11
Access:
- GUI: https://<FortiAnalyzer-Public-IP>
- SSH: ssh admin@<FortiAnalyzer-Public-IP>
- Default credentials: admin / <instance-id>
Use cases:
- Centralized logging for autoscale group
- Reporting and analytics demonstrations
- Log retention testing
Jump Box (Optional within Management VPC)
Configuration:
enable_jump_box = true
jump_box_instance_type = "t3.micro"
Access:
ssh -i ~/.ssh/keypair.pem ec2-user@<jump-box-public-ip>
Use cases:
- Secure access to spoke VPC instances
- Testing connectivity without FortiGate in path (via debug attachment)
- Management access to FortiGate private IPs
Management VPC TGW Attachment (Optional)
Configuration:
enable_mgmt_vpc_tgw_attachment = true
Purpose: Connects management VPC to Transit Gateway, allowing:
- Jump box access to spoke VPC Linux instances
- FortiManager/FortiAnalyzer access to FortiGate instances via TGW
- Alternative management access paths
Routing:
- Management VPC → TGW → Spoke VPCs
- Can be combined with enable_debug_tgw_attachment for bypass testing
2. Transit Gateway and Spoke VPCs (Optional)
Purpose: Simulates production multi-VPC environment for traffic generation and testing
Configuration variable:
enable_build_existing_subnets = true
What gets created:
Transit Gateway
├── East Spoke VPC (192.168.0.0/24)
│ ├── Public Subnet AZ1
│ ├── Private Subnet AZ1
│ ├── NAT Gateway (optional)
│ └── Linux Instance (optional)
│
├── West Spoke VPC (192.168.1.0/24)
│ ├── Public Subnet AZ1
│ ├── Private Subnet AZ1
│ ├── NAT Gateway (optional)
│ └── Linux Instance (optional)
│
└── TGW Route Tables
├── Spoke-to-Spoke (via inspection VPC)
└── Inspection-to-Internet
Transit Gateway
Configuration:
# Created automatically when enable_build_existing_subnets = true
# Named: {cp}-{env}-tgw
Purpose:
- Central hub for VPC interconnectivity
- Enables centralized egress architecture
- Allows east-west traffic inspection
Attachments:
- East Spoke VPC
- West Spoke VPC
- Inspection VPC (created by autoscale_template)
- Management VPC (if enable_mgmt_vpc_tgw_attachment = true)
- Debug attachment (if enable_debug_tgw_attachment = true)
Spoke VPCs (East and West)
Configuration:
vpc_cidr_east = "192.168.0.0/24"
vpc_cidr_west = "192.168.1.0/24"
vpc_cidr_spoke = "192.168.0.0/16" # Supernet
Components per spoke VPC:
- Public and private subnets
- NAT Gateway for internet egress
- Route tables for internet and TGW connectivity
- Security groups for instance access
Linux Instances (Traffic Generators)
Configuration:
enable_east_linux_instances = true
east_linux_instance_type = "t3.micro"
enable_west_linux_instances = true
west_linux_instance_type = "t3.micro"
What they provide:
- HTTP server on port 80 (for connectivity testing)
- Internet egress capability (for testing FortiGate inspection)
- East-West traffic generation between spoke VPCs
Testing with Linux instances:
# From jump box or another instance
curl http://<linux-instance-ip>
# Returns: "Hello from <hostname>"
# Generate internet egress traffic
ssh ec2-user@<linux-instance-ip>
curl http://www.google.com # Traffic goes through FortiGate
Debug TGW Attachment (Optional)
Configuration:
enable_debug_tgw_attachment = true
Purpose: Creates a bypass attachment from Management VPC directly to Transit Gateway, allowing traffic to flow:
Jump Box → TGW → Spoke VPC Linux Instances (bypassing FortiGate inspection)
Debug path use cases:
- Validate spoke VPC connectivity independent of FortiGate inspection
- Compare latency/throughput with and without inspection
- Troubleshoot routing issues by eliminating FortiGate as variable
- Generate baseline traffic patterns for capacity planning
Warning
Security Consideration
The debug attachment bypasses FortiGate inspection entirely. Do not enable in production environments. This is strictly for testing and validation purposes.
Configuration Scenarios
Scenario 1: Complete Lab Environment
Use case: Full-featured lab for testing all capabilities
# Inspection VPC (Required)
enable_build_inspection_vpc = true
inspection_access_internet_mode = "nat_gw"
inspection_enable_dedicated_management_eni = false
# Management VPC Components
enable_build_management_vpc = true
enable_fortimanager = true
enable_fortianalyzer = true
enable_jump_box = true
enable_mgmt_vpc_tgw_attachment = true
# Spoke VPC Components
enable_build_existing_subnets = true
enable_east_linux_instances = true
enable_west_linux_instances = true
enable_debug_tgw_attachment = true
What you get: Complete environment with inspection VPC (with Fortinet-Role tags), management, spoke VPCs, traffic generators, and debug path
Cost: ~$300-400/month
Best for: Training, demonstrations, comprehensive testing
Scenario 2: Inspection + Management VPC Only
Use case: Testing FortiManager/FortiAnalyzer integration without spoke VPCs
# Inspection VPC (Required)
enable_build_inspection_vpc = true
inspection_access_internet_mode = "eip"
# Management VPC Components
enable_build_management_vpc = true
enable_fortimanager = true
enable_fortianalyzer = true
enable_jump_box = false
enable_mgmt_vpc_tgw_attachment = false
# Spoke VPC Components
enable_build_existing_subnets = false
What you get: Inspection VPC (with Fortinet-Role tags) and Management VPC with FortiManager and FortiAnalyzer
Cost: ~$200/month
Best for: FortiManager/FortiAnalyzer integration testing, centralized management evaluation
Scenario 3: Inspection VPC + Traffic Generation
Use case: Testing autoscale with traffic generators, no management VPC
# Inspection VPC (Required)
enable_build_inspection_vpc = true
inspection_access_internet_mode = "nat_gw"
# Management VPC Components
enable_build_management_vpc = false
# Spoke VPC Components
enable_build_existing_subnets = true
enable_east_linux_instances = true
enable_west_linux_instances = true
enable_debug_tgw_attachment = false
What you get: Inspection VPC (with Fortinet-Role tags), Transit Gateway, and spoke VPCs with Linux instances
Cost: ~$100-150/month
Best for: Autoscale behavior testing, load testing, capacity planning
Scenario 4: Minimal Inspection VPC Only
Use case: Lowest cost configuration - Inspection VPC only
# Inspection VPC (Required)
enable_build_inspection_vpc = true
inspection_access_internet_mode = "eip" # Lower cost than nat_gw
# Management VPC Components
enable_build_management_vpc = false
# Spoke VPC Components
enable_build_existing_subnets = false
What you get: Inspection VPC with Fortinet-Role tags only - minimum required for autoscale_template
Cost: ~$30-50/month (VPC infrastructure only)
Best for: Minimal FortiGate testing, cost-sensitive environments, integration with existing TGW/spoke VPCs
Step-by-Step Deployment
Prerequisites
- AWS account with appropriate permissions
- Terraform 1.0 or later installed
- AWS CLI configured with credentials
- Git installed
- SSH keypair created in target AWS region
Step 1: Clone the Repository
Clone the repository containing both templates:
git clone https://github.com/FortinetCloudCSE/Autoscale-Simplified-Template.git
cd Autoscale-Simplified-Template/terraform/existing_vpc_resources
Step 2: Create terraform.tfvars
Copy the example file and customize:
cp terraform.tfvars.example terraform.tfvars
Step 3: Configure Core Variables
Region and Availability Zones
aws_region = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"
Tip
Availability Zone Selection
Choose AZs that:
- Support your desired instance types
- Have sufficient capacity
- Match your production environment (if testing for production)
Verify AZ availability:
aws ec2 describe-availability-zones --region us-west-2
Customer Prefix and Environment
These values are prepended to all resources for identification:
cp = "acme" # Customer prefix
env = "test" # Environment: prod, test, dev
Result: Resources named like acme-test-management-vpc, acme-test-tgw, etc.
Warning
Critical: Variable Coordination
These cp and env values must match between existing_vpc_resources and autoscale_template for proper resource discovery via tags.
Step 4: Configure Component Flags
Inspection VPC (Required)
The Inspection VPC is required and must be enabled. This creates the VPC where FortiGate autoscale group will be deployed.
enable_build_inspection_vpc = true
inspection_access_internet_mode = "nat_gw" # or "eip"
inspection_enable_dedicated_management_eni = false # or true for dedicated mgmt ENI
Info
Fortinet-Role Tags: All Inspection VPC resources are automatically tagged with Fortinet-Role tags using the pattern {cp}-{env}-inspection-*. These tags are used by autoscale_template to discover the VPC resources.
Management VPC (Optional)
enable_build_management_vpc = true
Spoke VPCs and Transit Gateway (Optional)
enable_build_existing_subnets = true
Step 5: Configure Optional Components
FortiManager and FortiAnalyzer
enable_fortimanager = true
fortimanager_instance_type = "m5.large"
fortimanager_os_version = "7.4.5"
fortimanager_host_ip = "10" # .3.0.10 within management VPC CIDR
enable_fortianalyzer = true
fortianalyzer_instance_type = "m5.large"
fortianalyzer_os_version = "7.4.5"
fortianalyzer_host_ip = "11" # .3.0.11 within management VPC CIDR
Info
Instance Sizing Recommendations
For testing/lab environments:
- FortiManager: m5.large (minimum)
- FortiAnalyzer: m5.large (minimum)
For heavier workloads or production evaluation:
- FortiManager: m5.xlarge or m5.2xlarge
- FortiAnalyzer: m5.xlarge or larger (depends on log volume)
Management VPC Transit Gateway Attachment
enable_mgmt_vpc_tgw_attachment = true
This allows jump box and management instances to reach spoke VPC Linux instances for testing.
Linux Traffic Generators
enable_jump_box = true
jump_box_instance_type = "t3.micro"
enable_east_linux_instances = true
east_linux_instance_type = "t3.micro"
enable_west_linux_instances = true
west_linux_instance_type = "t3.micro"
Debug TGW Attachment
enable_debug_tgw_attachment = true
Enables bypass path for connectivity testing without FortiGate inspection.
Step 6: Configure Network CIDRs
vpc_cidr_management = "10.3.0.0/16"
vpc_cidr_east = "192.168.0.0/24"
vpc_cidr_west = "192.168.1.0/24"
vpc_cidr_spoke = "192.168.0.0/16" # Supernet for all spoke VPCs
Warning
CIDR Planning
Ensure CIDRs:
- Don’t overlap with existing networks
- Match between existing_vpc_resources and autoscale_template
- Have sufficient address space for growth
- Align with corporate IP addressing standards
Step 7: Configure Security Variables
keypair = "my-aws-keypair" # Must exist in target region
my_ip = "203.0.113.10/32" # Your public IP for SSH access
Tip
Security Group Source IP
The my_ip variable restricts SSH and HTTPS access to management interfaces.
For dynamic IPs, consider:
- Using a CIDR range: "203.0.113.0/24"
- VPN endpoint IP if accessing via corporate VPN
- Multiple IPs: Configure directly in security groups after deployment
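One way to populate my_ip with your current public address is sketched below. This is an assumed workflow, not part of the template; checkip.amazonaws.com is an AWS-operated IP-echo service, and the snippet falls back to a placeholder if there is no outbound access:

```shell
# Query your current public IP; fall back to a placeholder on failure.
IP="$(curl -s --max-time 5 https://checkip.amazonaws.com || true)"
IP="${IP:-203.0.113.10}"

# Strip whitespace and append the /32 host mask expected by my_ip.
MY_IP="$(echo "${IP}" | tr -d '[:space:]')/32"
echo "my_ip = \"${MY_IP}\""
```

Paste the printed line into terraform.tfvars, or wire it in with `sed`/`envsubst` if you script your deployments.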
Step 8: Deploy the Template
Initialize Terraform:
terraform init
Review the execution plan:
terraform plan
Expected output will show resources to be created based on enabled flags.
Deploy the infrastructure:
terraform apply
Type yes when prompted to confirm.
Expected deployment time: 10-15 minutes
Deployment progress:
Apply complete! Resources: 47 added, 0 changed, 0 destroyed.
Outputs:
east_linux_instance_ip = "192.168.0.50"
fortianalyzer_public_ip = "52.10.20.30"
fortimanager_public_ip = "52.10.20.40"
jump_box_public_ip = "52.10.20.50"
management_vpc_id = "vpc-0123456789abcdef0"
tgw_id = "tgw-0123456789abcdef0"
tgw_name = "acme-test-tgw"
west_linux_instance_ip = "192.168.1.50"
Step 9: Verify Deployment
Verify Management VPC
aws ec2 describe-vpcs --filters "Name=tag:Name,Values=acme-test-management-vpc"
Expected: VPC ID and CIDR information
Access FortiManager (if enabled)
# Get public IP from outputs
terraform output fortimanager_public_ip
# Access GUI
open https://<FortiManager-Public-IP>
# Or SSH
ssh admin@<FortiManager-Public-IP>
# Default password: <instance-id>
First-time FortiManager setup:
- Login with admin / instance-id
- Change password when prompted
- Complete initial setup wizard
- Navigate to Device Manager > Device & Groups
Enable VM device recognition (FortiManager 7.6.3+):
config system global
set fgfm-allow-vm enable
end
Access FortiAnalyzer (if enabled)
# Get public IP from outputs
terraform output fortianalyzer_public_ip
# Access GUI
open https://<FortiAnalyzer-Public-IP>
# Or SSH
ssh admin@<FortiAnalyzer-Public-IP>
Verify Transit Gateway (if enabled)
aws ec2 describe-transit-gateways --filters "Name=tag:Name,Values=acme-test-tgw"
Expected: Transit Gateway in "available" state
Test Linux Instances (if enabled)
# Get instance IPs from outputs
terraform output east_linux_instance_ip
terraform output west_linux_instance_ip
# Test HTTP connectivity (if jump box enabled)
ssh -i ~/.ssh/keypair.pem ec2-user@<jump-box-ip>
curl http://<east-linux-ip>
# Expected: "Hello from ip-192-168-0-50"
Step 10: Save Outputs for autoscale_template
Save key outputs for use in autoscale_template configuration:
# Save all outputs
terraform output > ../outputs.txt
# Or save specific values
echo "tgw_name: $(terraform output -raw tgw_name)" >> ../autoscale_template/terraform.tfvars
echo "fortimanager_ip: $(terraform output -raw fortimanager_private_ip)" >> ../autoscale_template/terraform.tfvars
Outputs Reference
The template provides these outputs. Note that autoscale_template discovers resources via Fortinet-Role tags rather than using output values directly.
Inspection VPC Outputs
| Output | Description | Notes |
|---|---|---|
| inspection_vpc_id | ID of inspection VPC | Discovered by autoscale_template via Fortinet-Role tag |
| inspection_vpc_cidr | CIDR of inspection VPC | Used for route table configuration |
Management and Supporting Infrastructure Outputs
| Output | Description | Used By autoscale_template |
|---|---|---|
| management_vpc_id | ID of management VPC | VPC peering or TGW routing |
| management_vpc_cidr | CIDR of management VPC | Route table configuration |
| tgw_id | Transit Gateway ID | TGW attachment |
| tgw_name | Transit Gateway name tag | attach_to_tgw_name variable |
| fortimanager_private_ip | FortiManager private IP | fortimanager_ip variable |
| fortimanager_public_ip | FortiManager public IP | GUI/SSH access |
| fortianalyzer_private_ip | FortiAnalyzer private IP | FortiGate syslog configuration |
| fortianalyzer_public_ip | FortiAnalyzer public IP | GUI/SSH access |
| jump_box_public_ip | Jump box public IP | SSH bastion access |
| east_linux_instance_ip | East spoke instance IP | Connectivity testing |
| west_linux_instance_ip | West spoke instance IP | Connectivity testing |
Info
Tag-Based Discovery: The autoscale_template discovers Inspection VPC resources using Fortinet-Role tags rather than relying on output values. This allows the templates to be run independently as long as the cp and env values match.
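Subnet discovery follows the same pattern as the aws_vpc lookup shown earlier. A sketch — the data-source name here is an assumption, not the template's actual identifier:

```hcl
# Assumed sketch: each tagged subnet can be found by its Fortinet-Role value.
data "aws_subnet" "public_az1" {
  filter {
    name   = "tag:Fortinet-Role"
    values = ["${var.cp}-${var.env}-inspection-public-az1"]
  }
}
```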
Post-Deployment Configuration
Configure FortiManager for Integration
If you enabled FortiManager and plan to integrate with autoscale group:
Access FortiManager GUI:
https://<FortiManager-Public-IP>
Change default password:
- Login with admin/<instance-id>
- Follow password change prompts
Enable VM device recognition (7.6.3+):
config system global
set fgfm-allow-vm enable
end
Create ADOM for autoscale group (optional):
- Device Manager > ADOM
- Create ADOM for organizing autoscale FortiGates
Note FortiManager details for autoscale_template:
- Private IP: From outputs
- Serial number: Get from CLI:
get system status
Configure FortiAnalyzer for Logging
If you enabled FortiAnalyzer:
Access FortiAnalyzer GUI:
https://<FortiAnalyzer-Public-IP>
Change default password
Configure log settings:
- System Settings > Storage
- Configure log retention policies
- Enable features needed for testing
Note FortiAnalyzer private IP for FortiGate syslog configuration
Important Notes
Resource Lifecycle Considerations
Warning
Management Resource Persistence
If you deploy the existing_vpc_resources template:
- Management VPC and resources (FortiManager, FortiAnalyzer) will be destroyed when you run terraform destroy
- If you want management resources to persist across inspection VPC redeployments, consider:
- Deploying management VPC separately with different Terraform state
- Using existing management infrastructure instead of template-created resources
- Setting appropriate lifecycle rules in Terraform to prevent destruction
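For the lifecycle-rule approach, a minimal sketch (the resource name below is hypothetical; adjust it to the template's actual resource address):

```hcl
# Hypothetical resource address; match it to the module's actual FortiManager resource.
resource "aws_instance" "fortimanager" {
  # ... existing arguments ...

  lifecycle {
    prevent_destroy = true
  }
}
```

Note that with prevent_destroy set, terraform destroy errors out on this resource rather than silently skipping it; you must remove the setting (or remove the resource from state) to destroy it intentionally.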
Cost Optimization Tips
Info
Managing Lab Costs
The existing_vpc_resources template can create expensive resources:
- FortiManager m5.large: $0.10/hour (~$73/month)
- FortiAnalyzer m5.large: $0.10/hour (~$73/month)
- Transit Gateway: $0.05/hour (~$36/month) + data processing charges
- NAT Gateways: $0.045/hour each (~$33/month each)
Cost reduction strategies:
- Use smaller instance types (t3.micro, t3.small) where possible
- Disable FortiManager/FortiAnalyzer if not testing those features
- Destroy resources when not actively testing
- Use AWS Cost Explorer to monitor spend
- Consider AWS budgets and alerts
Example budget-conscious configuration:
enable_fortimanager = false # Save $73/month
enable_fortianalyzer = false # Save $73/month
jump_box_instance_type = "t3.micro" # Use smallest size
east_linux_instance_type = "t3.micro"
west_linux_instance_type = "t3.micro"
State File Management
Store Terraform state securely:
# backend.tf (optional - recommended for teams)
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "existing-vpc-resources/terraform.tfstate"
region = "us-west-2"
encrypt = true
dynamodb_table = "terraform-locks"
}
}
Troubleshooting
Issue: Terraform Fails with “Resource Already Exists”
Symptoms:
Error: Error creating VPC: VpcLimitExceeded
Solutions:
- Check VPC limits in your AWS account
- Clean up unused VPCs
- Request limit increase via AWS Support
Issue: Cannot Access FortiManager/FortiAnalyzer
Symptoms:
- Timeout when accessing GUI
- SSH connection refused
Solutions:
Verify security groups allow your IP:
aws ec2 describe-security-groups --group-ids <sg-id>
Check instance is running:
aws ec2 describe-instances --filters "Name=tag:Name,Values=*fortimanager*"
Verify my_ip variable matches your current public IP:
curl ifconfig.me
Check instance system log for boot issues:
aws ec2 get-console-output --instance-id <instance-id>
Issue: Transit Gateway Attachment Pending
Symptoms:
- TGW attachment stuck in “pending” state
- Spoke VPCs can’t communicate
Solutions:
- Wait 5-10 minutes for attachment to complete
- Check TGW route tables are configured
- Verify no CIDR overlaps between VPCs
- Check TGW attachment state:
aws ec2 describe-transit-gateway-attachments
Issue: Linux Instances Not Reachable
Symptoms:
- Cannot curl or SSH to Linux instances
Solutions:
- Verify you’re accessing from jump box (if not public)
- Check security groups allow port 80 and 22
- Verify NAT Gateway is functioning for internet access
- Check route tables in spoke VPCs
Issue: High Costs After Deployment
Symptoms:
- AWS bill higher than expected
Solutions:
Check what’s running:
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"
Identify expensive resources:
# Use AWS Cost Explorer in AWS Console
# Filter by resource tags: cp and env
Shut down unused components:
terraform destroy -target=module.fortimanager
terraform destroy -target=module.fortianalyzer
Or destroy entire deployment:
terraform destroy
Cleanup
Destroying Resources
To destroy the existing_vpc_resources infrastructure:
cd terraform/existing_vpc_resources
terraform destroy
Type yes when prompted.
Warning
Destroy Order is Critical
If you also deployed autoscale_template, destroy it FIRST before destroying existing_vpc_resources:
# Step 1: Destroy autoscale_template
cd terraform/autoscale_template
terraform destroy
# Step 2: Destroy existing_vpc_resources
cd ../existing_vpc_resources
terraform destroy
Why? The inspection VPC has a Transit Gateway attachment to the TGW created by existing_vpc_resources. Destroying the TGW first will cause the attachment deletion to fail.
Selective Cleanup
To destroy only specific components:
# Destroy only FortiManager
terraform destroy -target=module.fortimanager
# Destroy only spoke VPCs and TGW
terraform destroy -target=module.transit_gateway
terraform destroy -target=module.spoke_vpcs
# Destroy only management VPC
terraform destroy -target=module.management_vpc
Verify Complete Cleanup
After destroying, verify no resources remain:
# Check VPCs
aws ec2 describe-vpcs --filters "Name=tag:cp,Values=acme" "Name=tag:env,Values=test"
# Check Transit Gateways
aws ec2 describe-transit-gateways --filters "Name=tag:cp,Values=acme"
# Check running instances
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=tag:cp,Values=acme"
Next Steps
After deploying existing_vpc_resources, proceed to deploy the autoscale_template to create the FortiGate autoscale group and inspection VPC.
Key information to carry forward:
- Transit Gateway name (from outputs)
- FortiManager private IP (if enabled)
- FortiAnalyzer private IP (if enabled)
- Same cp and env values
Recommended next reading:
autoscale_template
Overview
The autoscale_template deploys the FortiGate autoscale group into an existing Inspection VPC. It discovers VPC resources using Fortinet-Role tags created by the existing_vpc_resources template.
Warning
Prerequisites: You must run existing_vpc_resources FIRST to create the Inspection VPC with proper Fortinet-Role tags. Alternatively, you can manually apply the required tags to existing VPCs.
Info
This template is required for all deployments. It deploys the FortiGate autoscale group, Gateway Load Balancer, Lambda functions, and configures routes for traffic inspection.
What It Creates
The autoscale_template discovers the existing Inspection VPC via Fortinet-Role tags and deploys FortiGate autoscale components into it:
Resource Discovery (via Fortinet-Role Tags)
| Resource | Tag Pattern | Purpose |
|---|---|---|
| Inspection VPC | {cp}-{env}-inspection-vpc | VPC for FortiGate deployment |
| Subnets | {cp}-{env}-inspection-{type}-{az} | Public, GWLBE, Private subnets |
| Route Tables | {cp}-{env}-inspection-{type}-rt-{az} | For route modifications |
| IGW | {cp}-{env}-inspection-igw | Internet connectivity |
| NAT Gateways | {cp}-{env}-inspection-natgw-{az} | If nat_gw mode |
| TGW Attachment | {cp}-{env}-inspection-tgw-attachment | If TGW enabled |
Components Created
| Component | Purpose | Always Created |
|---|---|---|
| FortiGate Autoscale Groups | BYOL and/or on-demand instance groups | ✅ Yes |
| Gateway Load Balancer | Distributes traffic across FortiGate instances | ✅ Yes |
| GWLB Endpoints | Connection points in each AZ | ✅ Yes |
| Lambda Functions | Lifecycle management and licensing automation | ✅ Yes |
| DynamoDB Table | License tracking and state management | ✅ Yes (if BYOL) |
| S3 Bucket | License file storage and Lambda code | ✅ Yes (if BYOL) |
| IAM Roles | Permissions for Lambda and EC2 instances | ✅ Yes |
| Security Groups | Network access control | ✅ Yes |
| CloudWatch Alarms | Autoscaling triggers | ✅ Yes |
| Route Modifications | Points private subnets to GWLB endpoints | ✅ Yes (if enabled) |
Optional Components
| Component | Purpose | Enabled By |
|---|---|---|
| Transit Gateway Attachment | Connection to TGW for centralized architecture | enable_tgw_attachment |
| Dedicated Management ENI | Isolated management interface | enable_dedicated_management_eni |
| Dedicated Management VPC Connection | Management in separate VPC | enable_dedicated_management_vpc |
| FortiManager Integration | Centralized policy management | enable_fortimanager_integration |
| East-West Inspection | Inter-spoke traffic inspection | enable_east_west_inspection |
Architecture Patterns
The autoscale_template supports multiple deployment patterns:
Pattern 1: Centralized Architecture with TGW
Configuration:
enable_tgw_attachment = true
attach_to_tgw_name = "production-tgw"
Traffic flow:
Spoke VPCs → TGW → Inspection VPC → FortiGate → GWLB → Internet
Use cases:
- Production centralized egress
- Multi-VPC environments
- East-west traffic inspection
Pattern 2: Distributed Architecture (No TGW)
Configuration:
enable_tgw_attachment = false
Traffic flow:
Spoke VPC → GWLB Endpoint → FortiGate → Internet Gateway
Use cases:
- Distributed security architecture
- Per-VPC inspection requirements
- Bump-in-the-wire deployments
Pattern 3: Hybrid with Management VPC
Configuration:
enable_tgw_attachment = true
enable_dedicated_management_vpc = true
enable_fortimanager_integration = true
Traffic flow:
Data: Spoke VPCs → TGW → FortiGate → Internet
Management: FortiGate → Management VPC → FortiManagerUse cases:
- Enterprise deployments
- Centralized management requirements
- Compliance-driven architectures
Integration Modes
Fortinet-Role Tag Discovery
The autoscale_template discovers all Inspection VPC resources using Fortinet-Role tags. This is how it finds the VPC, subnets, route tables, and other resources created by existing_vpc_resources.
How discovery works:
# autoscale_template looks up resources like this:
data "aws_vpc" "inspection" {
filter {
name = "tag:Fortinet-Role"
values = ["${var.cp}-${var.env}-inspection-vpc"]
}
}
data "aws_subnet" "inspection_public_az1" {
filter {
name = "tag:Fortinet-Role"
values = ["${var.cp}-${var.env}-inspection-public-az1"]
}
}
Warning
Critical: The cp and env variables must match exactly between existing_vpc_resources and autoscale_template for tag discovery to work.
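As a quick pre-flight check, you can derive the tag values the template will search for from your own cp and env settings (the values below are examples):

```shell
# Derive the Fortinet-Role tag values that tag discovery will look up.
# cp_val/env_val are example values; substitute your own cp and env.
cp_val="acme"
env_val="test"
for role in vpc igw public-az1 public-az2 gwlbe-az1 gwlbe-az2 private-az1 private-az2; do
  echo "${cp_val}-${env_val}-inspection-${role}"
done
```

Each printed value can then be checked with aws ec2 describe-vpcs (or describe-subnets) using a filter such as "Name=tag:Fortinet-Role,Values=acme-test-inspection-vpc"; if nothing matches, the apply will fail at the discovery step.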
Integration with existing_vpc_resources
When deploying after existing_vpc_resources:
Required variable coordination:
# MUST MATCH existing_vpc_resources values (for Fortinet-Role tag discovery)
aws_region = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"
cp = "acme" # MUST MATCH - used for tag lookup
env = "test" # MUST MATCH - used for tag lookup
# Connect to created TGW (if enabled in existing_vpc_resources)
enable_tgw_attachment = true
attach_to_tgw_name = "acme-test-tgw" # From existing_vpc_resources output
# Connect to management VPC (if created in existing_vpc_resources)
enable_dedicated_management_vpc = true
# Management VPC also discovered via Fortinet-Role tags
# FortiManager integration (if enabled in existing_vpc_resources)
enable_fortimanager_integration = true
fortimanager_ip = "10.3.0.10" # From existing_vpc_resources output
fortimanager_sn = "FMGVM0000000001"
Integration with Manually Tagged VPCs
If you have existing VPCs that you want to use instead of creating new ones with existing_vpc_resources, you must apply Fortinet-Role tags to all required resources:
Required tags (see Templates Overview for complete list):
- VPC: {cp}-{env}-inspection-vpc
- Subnets: {cp}-{env}-inspection-{public|gwlbe|private}-az{1|2}
- Route Tables: {cp}-{env}-inspection-{type}-rt-az{1|2}
- IGW: {cp}-{env}-inspection-igw
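If you are tagging resources by hand, here is a sketch that prints the aws ec2 create-tags commands for review before you run them (the resource IDs are placeholders; cp_val and env_val are example values):

```shell
# Print tagging commands for an existing VPC and IGW; review them, then run them.
cp_val="acme"
env_val="prod"
vpc_id="vpc-0123456789abcdef0"   # placeholder: your existing VPC ID
igw_id="igw-0123456789abcdef0"   # placeholder: its internet gateway ID
cat <<EOF
aws ec2 create-tags --resources ${vpc_id} --tags Key=Fortinet-Role,Value=${cp_val}-${env_val}-inspection-vpc
aws ec2 create-tags --resources ${igw_id} --tags Key=Fortinet-Role,Value=${cp_val}-${env_val}-inspection-igw
EOF
```

Repeat the same pattern for each subnet and route table, following the tag patterns listed above.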
Configuration:
# Match your tag prefix
cp = "acme"
env = "prod"
# Connect to existing production TGW
enable_tgw_attachment = true
attach_to_tgw_name = "production-tgw" # Your existing TGW
# Use existing management infrastructure
enable_fortimanager_integration = true
fortimanager_ip = "10.100.50.10" # Your existing FortiManager
fortimanager_sn = "FMGVM1234567890"
Step-by-Step Deployment
Prerequisites
- ✅ AWS account with appropriate permissions
- ✅ Terraform 1.0 or later installed
- ✅ AWS CLI configured with credentials
- ✅ SSH keypair created in target AWS region
- ✅ FortiGate licenses (if using BYOL) or FortiFlex account (if using FortiFlex)
- ✅ existing_vpc_resources deployed (creates Inspection VPC with Fortinet-Role tags)
- ✅ OR existing VPCs with Fortinet-Role tags applied manually
Warning
Required: The Inspection VPC must exist with proper Fortinet-Role tags before running this template. Run existing_vpc_resources first, or manually tag your existing VPCs.
Step 1: Navigate to Template Directory
cd Autoscale-Simplified-Template/terraform/autoscale_template
Step 2: Create terraform.tfvars
cp terraform.tfvars.example terraform.tfvars
Step 3: Configure Core Variables
Region and Availability Zones
aws_region = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"
Warning
Variable Coordination
If you deployed existing_vpc_resources, these values MUST MATCH exactly:
- aws_region
- availability_zone_1
- availability_zone_2
- cp (customer prefix)
- env (environment)
Mismatched values will cause resource discovery failures and deployment errors.
Customer Prefix and Environment
cp = "acme" # Customer prefix - MUST MATCH existing_vpc_resources
env = "test" # Environment - MUST MATCH existing_vpc_resources
Warning
Critical for Tag Discovery
These values form the prefix for Fortinet-Role tags used to discover the Inspection VPC. For example, with cp="acme" and env="test", the template looks for:
- VPC with tag Fortinet-Role = acme-test-inspection-vpc
- Subnets with tags like Fortinet-Role = acme-test-inspection-public-az1
If these don’t match the tags created by existing_vpc_resources, the template will fail with “no matching VPC found” errors.
Step 4: Configure Security Variables
keypair = "my-aws-keypair" # Must exist in target region
my_ip = "203.0.113.10/32" # Your public IP for management access
fortigate_asg_password = "SecurePassword123!" # Admin password for FortiGates
Warning
Password Requirements
The fortigate_asg_password must meet FortiOS password requirements:
- Minimum 8 characters
- At least one uppercase letter
- At least one lowercase letter
- At least one number
- No special characters that might cause shell escaping issues
Never commit passwords to version control. Consider using:
- Terraform variables marked as sensitive
- Environment variables: TF_VAR_fortigate_asg_password
- AWS Secrets Manager
- HashiCorp Vault
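For example, the environment-variable option looks like this (the password value is a placeholder; Terraform reads any TF_VAR_<name> environment variable automatically):

```shell
# Supply the admin password out-of-band instead of writing it to terraform.tfvars.
# The value below is a placeholder that happens to meet the stated requirements.
export TF_VAR_fortigate_asg_password='Example-Passw0rd'
# terraform plan / terraform apply will now populate var.fortigate_asg_password.
echo "password provided via env: ${TF_VAR_fortigate_asg_password:+yes}"
```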
Step 5: Configure Transit Gateway Integration
To connect to Transit Gateway:
enable_tgw_attachment = true
Specify TGW name:
# If using existing_vpc_resources template
attach_to_tgw_name = "acme-test-tgw" # Matches existing_vpc_resources output
# If using existing production TGW
attach_to_tgw_name = "production-tgw" # Your production TGW name
Tip
Finding Your Transit Gateway Name
If you don’t know your TGW name:
aws ec2 describe-transit-gateways \
--query 'TransitGateways[*].[Tags[?Key==`Name`].Value | [0], TransitGatewayId]' \
--output table
The attach_to_tgw_name should match the Name tag of your Transit Gateway.
To skip TGW attachment (distributed architecture):
enable_tgw_attachment = false
East-West Inspection (requires TGW attachment):
enable_east_west_inspection = true # Routes spoke-to-spoke traffic through FortiGate
Step 6: Configure Architecture Options
Firewall Mode
firewall_policy_mode = "2-arm" # or "1-arm"
Recommendations:
- 2-arm: Recommended for most deployments (better throughput)
- 1-arm: Use when simplified routing is required
See Firewall Architecture for detailed comparison.
Internet Egress Mode
access_internet_mode = "nat_gw" # or "eip"
Recommendations:
- nat_gw: Production deployments (higher availability)
- eip: Lower cost, simpler architecture
See Internet Egress for detailed comparison.
Step 7: Configure Management Options
Dedicated Management ENI
enable_dedicated_management_eni = true
Separates management traffic from the data plane. Recommended for production.
Dedicated Management VPC
enable_dedicated_management_vpc = true
# If using existing_vpc_resources with default tags:
dedicated_management_vpc_tag = "acme-test-management-vpc"
dedicated_management_public_az1_subnet_tag = "acme-test-management-public-az1-subnet"
dedicated_management_public_az2_subnet_tag = "acme-test-management-public-az2-subnet"
# If using existing management VPC with custom tags:
dedicated_management_vpc_tag = "my-custom-mgmt-vpc-tag"
dedicated_management_public_az1_subnet_tag = "my-custom-mgmt-az1-tag"
dedicated_management_public_az2_subnet_tag = "my-custom-mgmt-az2-tag"
See Management Isolation for options and recommendations.
Info
Automatic Implication
When enable_dedicated_management_vpc = true, the template automatically sets enable_dedicated_management_eni = true. You don’t need to configure both explicitly.
Step 8: Configure Licensing
The template supports three licensing models. Choose one or combine them for hybrid licensing.
Option 1: BYOL (Bring Your Own License)
asg_license_directory = "asg_license" # Directory containing .lic files
Prerequisites:
Create the license directory:
mkdir asg_license
Place license files in the directory:
terraform/autoscale_template/
├── terraform.tfvars
├── asg_license/
│   ├── FGVM01-001.lic
│   ├── FGVM01-002.lic
│   ├── FGVM01-003.lic
│   └── FGVM01-004.lic
Ensure you have at least as many licenses as asg_byol_asg_max_size
Warning
License Pool Exhaustion
If you run out of BYOL licenses:
- New BYOL instances launch but remain unlicensed
- Unlicensed instances operate at 1 Mbps throughput
- FortiGuard services will not activate
- If on-demand ASG is configured, scaling continues using PAYG instances
Recommended: Provision 20% more licenses than asg_byol_asg_max_size
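A small sketch to compare the license files on disk against your planned maximum (the max size of 4 below is an example value; adjust it to your configuration):

```shell
# Count .lic files and warn if the pool is smaller than the planned BYOL ASG max size.
asg_byol_asg_max_size=4
license_count=$(ls asg_license/*.lic 2>/dev/null | wc -l | tr -d ' ')
echo "licenses on disk: ${license_count} (ASG max: ${asg_byol_asg_max_size})"
if [ "${license_count}" -lt "${asg_byol_asg_max_size}" ]; then
  echo "WARNING: fewer licenses than asg_byol_asg_max_size"
fi
```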
Option 2: FortiFlex (API-Driven)
fortiflex_username = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # API username (UUID)
fortiflex_password = "xxxxxxxxxxxxxxxxxxxxx" # API password
fortiflex_sn_list = ["FGVMELTMxxxxxxxx"] # Optional: specific program serial numbers
fortiflex_configid_list = ["My_4CPU_Config"] # Configuration names (must match CPU count)
Prerequisites:
- Register FortiFlex program via FortiCare
- Purchase point packs
- Create configurations matching your instance types
- Generate API credentials via IAM portal
CPU count matching:
fgt_instance_type = "c6i.xlarge" # 4 vCPUs
fortiflex_configid_list = ["My_4CPU_Config"] # MUST have 4 CPUs configured
Warning
Security Best Practice
Never commit FortiFlex credentials to version control. Use:
- Terraform Cloud sensitive variables
- AWS Secrets Manager
- Environment variables: TF_VAR_fortiflex_username and TF_VAR_fortiflex_password
- HashiCorp Vault
Example using environment variables:
export TF_VAR_fortiflex_username="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export TF_VAR_fortiflex_password="xxxxxxxxxxxxxxxxxxxxx"
terraform apply
See FortiFlex Setup Guide for complete configuration details.
Option 3: PAYG (AWS Marketplace)
# No explicit configuration needed
# Just set on-demand ASG capacities
asg_byol_asg_min_size = 0
asg_byol_asg_max_size = 0
asg_ondemand_asg_min_size = 2
asg_ondemand_asg_max_size = 8Prerequisites:
- Accept FortiGate-VM terms in AWS Marketplace
- No license files or API credentials required
- Licensing cost included in hourly EC2 charge
Hybrid Licensing (Recommended for Production)
Combine licensing models for cost optimization:
# BYOL for baseline capacity (lowest cost)
asg_license_directory = "asg_license"
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4
# PAYG for burst capacity (highest flexibility)
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 4
See Licensing Options for detailed comparison and cost analysis.
Step 9: Configure Autoscale Group Capacity
# BYOL ASG
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4
asg_byol_asg_desired_size = 2
# On-Demand ASG
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 4
asg_ondemand_asg_desired_size = 0
# Primary scale-in protection
primary_scalein_protection = true
Capacity planning guidance:
| Deployment Type | Recommended Configuration |
|---|---|
| Development/Test | min=1, max=2, desired=1 |
| Small Production | min=2, max=4, desired=2 |
| Medium Production | min=2, max=8, desired=4 |
| Large Production | min=4, max=16, desired=6 |
Scaling behavior:
- BYOL instances scale first (up to asg_byol_asg_max_size)
- On-demand instances scale when BYOL capacity is exhausted
- CloudWatch alarms trigger scale-out at 80% CPU (default)
- Scale-in occurs at 30% CPU (default)
See Autoscale Group Capacity for detailed planning.
Step 10: Configure FortiGate Specifications
fgt_instance_type = "c7gn.xlarge"
fortios_version = "7.4.5"
fortigate_gui_port = 443
Instance type recommendations:
| Use Case | Recommended Type | vCPUs | Network Performance |
|---|---|---|---|
| Testing/Lab | t3.xlarge | 4 | Up to 5 Gbps |
| Small Production | c6i.xlarge | 4 | Up to 12.5 Gbps |
| Medium Production | c6i.2xlarge | 8 | Up to 12.5 Gbps |
| High Performance | c7gn.xlarge | 4 | Up to 25 Gbps |
| Very High Performance | c7gn.4xlarge | 16 | 50 Gbps |
FortiOS version selection:
- Use latest stable release for new deployments
- Test new versions in dev/test before production
- Check FortiOS Release Notes for compatibility
Step 11: Configure FortiManager Integration (Optional)
enable_fortimanager_integration = true
fortimanager_ip = "10.3.0.10" # FortiManager IP
fortimanager_sn = "FMGVM0000000001" # FortiManager serial number
fortimanager_vrf_select = 1 # VRF for management routing
Warning
FortiManager 7.6.3+ Configuration Required
If using FortiManager 7.6.3 or later, you must enable VM device recognition before deploying:
On FortiManager CLI:
config system global
set fgfm-allow-vm enable
end
Verify the setting:
show system global | grep fgfm-allow-vm
Without this configuration, FortiGate-VM instances will fail to register with FortiManager.
See FortiManager Integration for complete details.
FortiManager integration behavior:
- Lambda generates config system central-management on the primary FortiGate only
- VDOM exception prevents sync to secondary instances
- Configuration syncs from FortiManager → Primary → Secondaries
See FortiManager Integration Configuration for advanced options including UMS mode.
Step 12: Configure Network CIDRs
vpc_cidr_inspection = "10.0.0.0/16"
vpc_cidr_management = "10.3.0.0/16" # Must match existing_vpc_resources if used
vpc_cidr_spoke = "192.168.0.0/16" # Supernet for all spoke VPCs
vpc_cidr_east = "192.168.0.0/24"
vpc_cidr_west = "192.168.1.0/24"
subnet_bits = 8 # /16 + 8 = /24 subnets
Warning
CIDR Planning Considerations
Ensure:
- ✅ No overlap with existing networks
- ✅ Management VPC CIDR matches existing_vpc_resources if used
- ✅ Sufficient address space for growth
- ✅ Alignment with corporate IP addressing standards
Common mistakes:
- ❌ Overlapping inspection VPC with management VPC
- ❌ Spoke CIDR too small for number of VPCs
- ❌ Mismatched CIDRs between templates
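Assuming subnet_bits works as the inline comment above suggests (added to each VPC prefix length to size its subnets), the arithmetic can be sketched as:

```shell
# subnet_bits is added to the VPC prefix length to derive the subnet prefix.
vpc_prefix=16
subnet_bits=8
subnet_prefix=$((vpc_prefix + subnet_bits))
num_subnets=$((1 << subnet_bits))
echo "/${vpc_prefix} VPC + ${subnet_bits} bits = /${subnet_prefix} subnets (${num_subnets} possible)"
# prints: /16 VPC + 8 bits = /24 subnets (256 possible)
```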
Step 13: Configure GWLB Endpoint Names
endpoint_name_az1 = "asg-gwlbe_az1"
endpoint_name_az2 = "asg-gwlbe_az2"
These names are used for route table lookups when configuring TGW routing or spoke VPC routing.
Step 14: Configure Additional Options
FortiGate System Autoscale
enable_fgt_system_autoscale = true
Enables FortiGate-native HA synchronization between instances. Recommended to leave enabled.
CloudWatch Alarms
# Scale-out threshold (default: 80% CPU)
scale_out_threshold = 80
# Scale-in threshold (default: 30% CPU)
scale_in_threshold = 30
Adjust based on your traffic patterns and capacity requirements.
Step 15: Review Complete Configuration
Review your complete terraform.tfvars file before deployment. Here’s a complete example:
Click to expand complete example terraform.tfvars
#-----------------------------------------------------------------------
# Core Configuration
#-----------------------------------------------------------------------
aws_region = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"
cp = "acme"
env = "prod"
#-----------------------------------------------------------------------
# Security
#-----------------------------------------------------------------------
keypair = "acme-keypair"
my_ip = "203.0.113.10/32"
fortigate_asg_password = "SecurePassword123!"
#-----------------------------------------------------------------------
# Transit Gateway
#-----------------------------------------------------------------------
enable_tgw_attachment = true
attach_to_tgw_name = "acme-prod-tgw"
enable_east_west_inspection = true
#-----------------------------------------------------------------------
# Architecture Options
#-----------------------------------------------------------------------
firewall_policy_mode = "2-arm"
access_internet_mode = "nat_gw"
#-----------------------------------------------------------------------
# Management Options
#-----------------------------------------------------------------------
enable_dedicated_management_eni = true
enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "acme-prod-management-vpc"
dedicated_management_public_az1_subnet_tag = "acme-prod-management-public-az1-subnet"
dedicated_management_public_az2_subnet_tag = "acme-prod-management-public-az2-subnet"
#-----------------------------------------------------------------------
# FortiManager Integration
#-----------------------------------------------------------------------
enable_fortimanager_integration = true
fortimanager_ip = "10.3.0.10"
fortimanager_sn = "FMGVM0000000001"
fortimanager_vrf_select = 1
#-----------------------------------------------------------------------
# Licensing - Hybrid BYOL + PAYG
#-----------------------------------------------------------------------
asg_license_directory = "asg_license"
#-----------------------------------------------------------------------
# Autoscale Group Capacity
#-----------------------------------------------------------------------
# BYOL baseline
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4
asg_byol_asg_desired_size = 2
# PAYG burst
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 4
asg_ondemand_asg_desired_size = 0
# Scale-in protection
primary_scalein_protection = true
#-----------------------------------------------------------------------
# FortiGate Specifications
#-----------------------------------------------------------------------
fgt_instance_type = "c6i.xlarge"
fortios_version = "7.4.5"
fortigate_gui_port = 443
enable_fgt_system_autoscale = true
#-----------------------------------------------------------------------
# Network CIDRs
#-----------------------------------------------------------------------
vpc_cidr_inspection = "10.0.0.0/16"
vpc_cidr_management = "10.3.0.0/16"
vpc_cidr_spoke = "192.168.0.0/16"
vpc_cidr_east = "192.168.0.0/24"
vpc_cidr_west = "192.168.1.0/24"
subnet_bits = 8
#-----------------------------------------------------------------------
# GWLB Endpoints
#-----------------------------------------------------------------------
endpoint_name_az1 = "acme-prod-gwlbe-az1"
endpoint_name_az2 = "acme-prod-gwlbe-az2"
Step 16: Deploy the Template
Initialize Terraform:
terraform init
Review the execution plan:
terraform plan
Expected output will show ~40-60 resources to be created.
Deploy the infrastructure:
terraform apply
Type yes when prompted.
Expected deployment time: 15-20 minutes
Deployment progress indicators:
- VPC and networking: ~2 minutes
- Security groups and IAM: ~1 minute
- Lambda functions and DynamoDB: ~2 minutes
- GWLB and endpoints: ~5 minutes
- FortiGate instances launching: ~5-10 minutes
Step 17: Monitor Deployment
Watch CloudWatch logs for Lambda execution:
# Get Lambda function name from Terraform
terraform output lambda_function_name
# Stream logs
aws logs tail /aws/lambda/<function-name> --follow
Watch Auto Scaling Group activity:
# Get ASG name
aws autoscaling describe-auto-scaling-groups \
--query 'AutoScalingGroups[?contains(AutoScalingGroupName, `acme-prod`)].AutoScalingGroupName'
# Watch instance launches
aws autoscaling describe-scaling-activities \
--auto-scaling-group-name <asg-name> \
--max-records 10
Step 18: Verify Deployment
Check FortiGate Instances
# List running FortiGate instances
aws ec2 describe-instances \
--filters "Name=tag:cp,Values=acme" \
"Name=tag:env,Values=prod" \
"Name=instance-state-name,Values=running" \
--query 'Reservations[*].Instances[*].[InstanceId,PublicIpAddress,Tags[?Key==`Name`].Value|[0]]' \
--output table
Access FortiGate GUI
# Get FortiGate public IP
terraform output fortigate_instance_ips
# Access GUI
open https://<fortigate-public-ip>:443
Login credentials:
- Username: admin
- Password: Value from fortigate_asg_password variable
Verify License Assignment
For BYOL:
# SSH to FortiGate
ssh -i ~/.ssh/keypair.pem admin@<fortigate-ip>
# Check license status
get system status
# Look for:
# Serial-Number: FGVMxxxxxxxxxx (not FGVMEVXXXXXXXXX)
# License Status: Valid
For FortiFlex:
- Check Lambda CloudWatch logs for successful API calls
- Verify entitlements created in FortiFlex portal
- Check FortiGate shows licensed status
For PAYG:
- Instances automatically licensed via AWS
- Verify license status in FortiGate GUI
Verify Transit Gateway Attachment
aws ec2 describe-transit-gateway-attachments \
--filters "Name=state,Values=available" \
"Name=resource-type,Values=vpc" \
--query 'TransitGatewayAttachments[?contains(Tags[?Key==`Name`].Value|[0], `inspection`)]'
Verify FortiManager Registration
If FortiManager integration enabled:
- Access FortiManager GUI:
https://<fortimanager-ip> - Navigate to Device Manager > Device & Groups
- Look for unauthorized device with serial number matching primary FortiGate
- Right-click device and select Authorize
Test Traffic Flow
From jump box (if using existing_vpc_resources):
# SSH to jump box
ssh -i ~/.ssh/keypair.pem ec2-user@<jump-box-ip>
# Test internet connectivity (should go through FortiGate)
curl https://www.google.com
# Test spoke VPC connectivity
curl http://<linux-instance-ip>
On FortiGate:
# SSH to FortiGate
ssh -i ~/.ssh/keypair.pem admin@<fortigate-ip>
# Monitor real-time traffic
diagnose sniffer packet any 'host 192.168.0.50' 4
# Check firewall policies
get firewall policy
# View active sessions
diagnose sys session list
Post-Deployment Configuration
Configure TGW Route Tables
If you enabled enable_tgw_attachment = true, configure Transit Gateway route tables to route traffic through inspection VPC:
For Centralized Egress
Spoke VPC route table (route internet traffic to inspection VPC):
# Get inspection VPC TGW attachment ID
INSPECT_ATTACH_ID=$(aws ec2 describe-transit-gateway-attachments \
--filters "Name=resource-type,Values=vpc" \
"Name=tag:Name,Values=*inspection*" \
--query 'TransitGatewayAttachments[0].TransitGatewayAttachmentId' \
--output text)
# Add default route to spoke route table
aws ec2 create-transit-gateway-route \
--destination-cidr-block 0.0.0.0/0 \
--transit-gateway-route-table-id <spoke-rt-id> \
--transit-gateway-attachment-id $INSPECT_ATTACH_ID
Inspection VPC route table (route spoke traffic to internet):
# This is typically configured automatically by the template
# Verify it exists:
aws ec2 describe-transit-gateway-route-tables \
--transit-gateway-route-table-ids <inspection-rt-id>
For East-West Inspection
If you enabled enable_east_west_inspection = true:
Spoke-to-spoke traffic routes through inspection VPC automatically.
Verify routing:
# From east spoke instance
ssh ec2-user@<east-linux-ip>
ping <west-linux-ip> # Should succeed and be inspected by FortiGate
# Check FortiGate logs
diagnose debug flow trace start 10
diagnose debug enable
# Generate traffic and watch logs
Configure FortiGate Policies
Access FortiGate GUI and configure firewall policies:
Basic Internet Egress Policy
Policy & Objects > Firewall Policy > Create New
Name: Internet-Egress
Incoming Interface: port1 (or TGW interface)
Outgoing Interface: port2 (internet interface)
Source: all
Destination: all
Service: ALL
Action: ACCEPT
NAT: Enable
Logging: All Sessions
East-West Inspection Policy
Policy & Objects > Firewall Policy > Create New
Name: East-West-Inspection
Incoming Interface: port1 (TGW interface)
Outgoing Interface: port1 (TGW interface)
Source: 192.168.0.0/16
Destination: 192.168.0.0/16
Service: ALL
Action: ACCEPT
NAT: Disable
Logging: All Sessions
Security Profiles: Enable IPS, Application Control, etc.
Configure FortiManager (If Enabled)
Authorize FortiGate device:
- Device Manager > Device & Groups
- Right-click unauthorized device > Authorize
- Assign to ADOM
Create policy package:
- Policy & Objects > Policy Package
- Create new package
- Add firewall policies
Install policy:
- Select device
- Policy & Objects > Install
- Select package
- Click Install
Verify sync to secondary instances:
- Check secondary FortiGate instances
- Policies should appear automatically via HA sync
Monitoring and Operations
CloudWatch Metrics
Key metrics to monitor:
# CPU utilization (triggers autoscaling)
aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name CPUUtilization \
--dimensions Name=AutoScalingGroupName,Value=<asg-name> \
--start-time 2024-01-01T00:00:00Z \
--end-time 2024-01-02T00:00:00Z \
--period 3600 \
--statistics Average
# Network throughput
aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name NetworkIn \
--dimensions Name=AutoScalingGroupName,Value=<asg-name> \
--start-time 2024-01-01T00:00:00Z \
--end-time 2024-01-02T00:00:00Z \
--period 3600 \
--statistics Sum
Lambda Function Logs
Monitor license assignment and lifecycle events:
# Stream Lambda logs
aws logs tail /aws/lambda/<function-name> --follow
# Search for errors
aws logs filter-log-events \
--log-group-name /aws/lambda/<function-name> \
--filter-pattern "ERROR"
# Search for license assignments
aws logs filter-log-events \
--log-group-name /aws/lambda/<function-name> \
--filter-pattern "license"
Auto Scaling Group Activity
# View scaling activities
aws autoscaling describe-scaling-activities \
--auto-scaling-group-name <asg-name> \
--max-records 20
# View current capacity
aws autoscaling describe-auto-scaling-groups \
--auto-scaling-group-names <asg-name> \
--query 'AutoScalingGroups[0].[MinSize,DesiredCapacity,MaxSize]'
Troubleshooting
Issue: Instances Launch But Don’t Get Licensed
Symptoms:
- Instances running but showing unlicensed
- Throughput limited to 1 Mbps
- FortiGuard services not working
Causes and Solutions:
For BYOL:
Check license files exist in directory:
ls -la asg_license/
Check S3 bucket has licenses uploaded:
aws s3 ls s3://<bucket-name>/licenses/
Check Lambda CloudWatch logs for errors:
aws logs tail /aws/lambda/<function-name> --follow | grep -i error
Verify DynamoDB table has available licenses:
aws dynamodb scan --table-name <table-name>
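If you save the scan output to a file, a quick count of total versus checked-out entries can confirm whether the license pool is exhausted. This is only a sketch: the "license_id" and "checkout" attribute names below are hypothetical, so inspect one item from your actual table and adjust them to the real schema.

```shell
# Count total vs. checked-out licenses in a saved DynamoDB scan result.
# NOTE: "license_id" and "checkout" are assumed attribute names; match them
# to whatever the template's Lambda actually writes to the table.
count_free_licenses() {
  local scan_file="$1"   # e.g. output of: aws dynamodb scan ... > scan.json
  local total in_use
  total=$(grep -o '"license_id"' "$scan_file" | wc -l | tr -d ' ')
  in_use=$(grep -o '"checkout"' "$scan_file" | wc -l | tr -d ' ')
  echo "licenses: $total total, $in_use checked out, $((total - in_use)) free"
}
```

If the "free" count hits zero while the ASG still wants to scale out, new instances will launch unlicensed.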
For FortiFlex:
- Check Lambda CloudWatch logs for API errors
- Verify FortiFlex credentials are correct
- Check point balance in FortiFlex portal
- Verify configuration ID matches instance CPU count
- Check entitlements created in FortiFlex portal
For PAYG:
- Verify AWS Marketplace subscription is active
- Check instance profile has correct permissions
- Verify internet connectivity from FortiGate
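Across all three licensing models, a common root cause is simply running out of licenses as the group scales. For BYOL, a quick pre-flight check like the sketch below compares the number of license files on disk against the ASG maximum; the .lic extension and flat directory layout are assumptions, so adjust them to your setup.

```shell
# Warn before scaling if BYOL license files cannot cover the ASG maximum.
# Assumes one .lic file per instance in the license directory (adjust as needed).
check_byol_capacity() {
  local license_dir="$1" asg_max="$2"
  local count
  count=$(find "$license_dir" -maxdepth 1 -name '*.lic' | wc -l | tr -d ' ')
  if [ "$count" -lt "$asg_max" ]; then
    echo "WARNING: only $count license file(s) for an ASG maximum of $asg_max"
    return 1
  fi
  echo "OK: $count license file(s) cover an ASG maximum of $asg_max"
}
```

Run it against the asg_license directory before raising max capacity, e.g. check_byol_capacity asg_license 4.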
Issue: Cannot Access FortiGate GUI
Symptoms:
- Timeout when accessing FortiGate IP
- Connection refused
Solutions:
Verify instance is running:
aws ec2 describe-instances --instance-ids <instance-id>
Check security groups allow your IP:
aws ec2 describe-security-groups --group-ids <sg-id>
Verify you’re using correct port (default 443):
https://<fortigate-ip>:443
Try alternate access methods:
# SSH to check if instance is responsive
ssh -i ~/.ssh/keypair.pem admin@<fortigate-ip>
# Check system status
get system status
If using dedicated management VPC:
- Ensure you’re accessing via correct IP (management interface)
- Check VPC peering or TGW attachment is working
- Verify route tables allow return traffic
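When reviewing security groups, it helps to confirm that your source IP actually falls inside one of the allowed CIDR ranges rather than eyeballing it. The pure-bash IPv4 check below is a sketch for that step; the sample addresses are placeholders.

```shell
# Return success if an IPv4 address falls inside a CIDR block.
# Pure bash arithmetic, IPv4 only; example addresses are placeholders.
ip_in_cidr() {
  local ip="$1" cidr="$2"
  local net="${cidr%/*}" bits="${cidr#*/}"
  local IFS=.
  local -a a=($ip) b=($net)                       # split octets on '.'
  local ipn=$(( (a[0]<<24) | (a[1]<<16) | (a[2]<<8) | a[3] ))
  local netn=$(( (b[0]<<24) | (b[1]<<16) | (b[2]<<8) | b[3] ))
  local mask=$(( bits == 0 ? 0 : (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ipn & mask )) -eq $(( netn & mask )) ]    # compare network portions
}
```

For example, ip_in_cidr 203.0.113.7 203.0.113.0/24 succeeds, telling you a security group rule for 203.0.113.0/24 should already admit that client, so the problem lies elsewhere (routing, port, or instance state).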
Issue: Traffic Not Flowing Through FortiGate
Symptoms:
- No traffic visible in FortiGate logs
- Connectivity tests bypass FortiGate
- Sessions not appearing on FortiGate
Solutions:
Verify TGW routing (if using TGW):
# Check TGW route tables
aws ec2 describe-transit-gateway-route-tables \
  --transit-gateway-id <tgw-id>
# Verify routes point to inspection VPC attachment
aws ec2 search-transit-gateway-routes \
  --transit-gateway-route-table-id <spoke-rt-id> \
  --filters "Name=state,Values=active"
Check GWLB health checks:
aws elbv2 describe-target-health \
  --target-group-arn <gwlb-target-group-arn>
Verify FortiGate firewall policies:
# SSH to FortiGate
ssh admin@<fortigate-ip>
# Check policies
get firewall policy
# Enable debug
diagnose debug flow trace start 10
diagnose debug enable
# Generate traffic and watch logs
Check spoke VPC route tables (for distributed architecture):
# Verify routes point to GWLB endpoints
aws ec2 describe-route-tables \
  --filters "Name=vpc-id,Values=<spoke-vpc-id>"
Issue: Primary Election Issues
Symptoms:
- No primary instance elected
- Multiple instances think they’re primary
- HA sync not working
Solutions:
Check Lambda logs for election logic:
aws logs tail /aws/lambda/<function-name> --follow | grep -i primary
Verify enable_fgt_system_autoscale = true:
# On FortiGate
get system auto-scale
Check for network connectivity between instances:
# From one FortiGate, ping another
execute ping <other-fortigate-private-ip>
Manually verify auto-scale configuration:
# SSH to FortiGate
ssh admin@<fortigate-ip>
# Check auto-scale config
show system auto-scale
# Should show:
# set status enable
# set role primary (or secondary)
# set sync-interface "port1"
# set psksecret "..."
Issue: FortiManager Integration Not Working
Symptoms:
- FortiGate doesn’t appear in FortiManager device list
- Device shows as unauthorized but can’t authorize
- Connection errors in FortiManager
Solutions:
Verify FortiManager 7.6.3+ VM recognition enabled:
# On FortiManager CLI
show system global | grep fgfm-allow-vm
# Should show: set fgfm-allow-vm enable
Check network connectivity:
# From FortiGate
execute ping <fortimanager-ip>
# Check FortiManager reachability
diagnose debug application fgfmd -1
diagnose debug enable
Verify central-management config:
# On FortiGate
show system central-management
# Should show:
# set type fortimanager
# set fmg <fortimanager-ip>
# set serial-number <fmgr-sn>
Check FortiManager logs:
# On FortiManager CLI
diagnose debug application fgfmd -1
diagnose debug enable
# Watch for connection attempts from FortiGate
Verify only primary instance has central-management config:
# On primary: Should have config
show system central-management
# On secondary: Should NOT have config (or be blocked by vdom-exception)
show system vdom-exception
Outputs Reference
Important outputs from the template:
terraform output
| Output | Description | Use Case |
|---|---|---|
| inspection_vpc_id | ID of inspection VPC | VPC peering, routing configuration |
| inspection_vpc_cidr | CIDR of inspection VPC | Route table configuration |
| gwlb_arn | Gateway Load Balancer ARN | GWLB endpoint creation |
| gwlb_endpoint_az1_id | GWLB endpoint ID in AZ1 | Spoke VPC route tables |
| gwlb_endpoint_az2_id | GWLB endpoint ID in AZ2 | Spoke VPC route tables |
| fortigate_autoscale_group_name | BYOL ASG name | CloudWatch, monitoring |
| fortigate_ondemand_autoscale_group_name | PAYG ASG name | CloudWatch, monitoring |
| lambda_function_name | Lifecycle Lambda function name | CloudWatch logs, debugging |
| dynamodb_table_name | License tracking table name | License management |
| s3_bucket_name | License storage bucket name | License management |
| tgw_attachment_id | TGW attachment ID | TGW routing configuration |
Best Practices
Pre-Deployment
- Plan capacity thoroughly: Use Autoscale Group Capacity guidance
- Test in dev/test first: Validate configuration before production
- Document customizations: Maintain runbook of configuration decisions
- Review security groups: Ensure least-privilege access
- Coordinate with network team: Verify CIDR allocations don’t conflict
During Deployment
- Monitor Lambda logs: Watch for errors during instance launch
- Verify license assignments: Check first instance gets licensed before scaling
- Test connectivity incrementally: Validate routing at each step
- Document public IPs: Save instance IPs for troubleshooting access
Post-Deployment
- Configure firewall policies immediately: Don’t leave FortiGates in pass-through mode
- Enable security profiles: IPS, Application Control, Web Filtering
- Set up monitoring: CloudWatch alarms, FortiGate logging
- Test failover scenarios: Verify autoscaling behavior
- Document recovery procedures: Maintain runbook for common issues
Ongoing Operations
- Monitor autoscale events: Review CloudWatch metrics weekly
- Update FortiOS regularly: Test updates in dev first
- Review firewall logs: Look for blocked traffic patterns
- Optimize scaling thresholds: Adjust based on observed traffic
- Plan capacity additions: Add licenses/entitlements before needed
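To ground threshold tuning in data rather than guesswork, you can export the CPUUtilization datapoints from the get-metric-statistics call shown earlier (one average per line) and compare the mean against your scale-out threshold. The file format, the threshold value, and the 80%-headroom rule in this sketch are all illustrative assumptions.

```shell
# Compare observed average CPU against a scale-out threshold.
# Assumes the input file holds one CPUUtilization datapoint per line
# (an illustrative export format, not a CloudWatch-native one).
avg_vs_threshold() {
  local datafile="$1" threshold="$2"
  awk -v t="$threshold" '
    { sum += $1; n++ }
    END {
      if (n == 0) { print "no datapoints"; exit 1 }
      avg = sum / n
      printf "average=%.1f threshold=%s\n", avg, t
      # Illustrative rule: flag when the average sits within 80% of the threshold
      if (avg > t * 0.8) print "consider adding capacity or lowering the threshold"
    }' "$datafile"
}
```

If the average routinely approaches the threshold, the group is scaling reactively under load; raising desired capacity or lowering the threshold gives it headroom.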
Cleanup
Destroying the Deployment
To destroy the autoscale_template infrastructure:
cd terraform/autoscale_template
terraform destroy
Type yes when prompted.
Warning
Destroy Order is Critical
If you also deployed existing_vpc_resources, destroy in this order:
- First: Destroy autoscale_template (this template)
- Second: Destroy existing_vpc_resources
Why? The inspection VPC has a Transit Gateway attachment to the TGW created by existing_vpc_resources. Destroying the TGW first will cause the attachment deletion to fail.
# Correct order:
cd terraform/autoscale_template
terraform destroy
cd ../existing_vpc_resources
terraform destroy
Selective Cleanup
To destroy only specific components:
# Destroy only BYOL ASG
terraform destroy -target=module.fortigate_byol_asg
# Destroy only on-demand ASG
terraform destroy -target=module.fortigate_ondemand_asg
# Destroy only Lambda and DynamoDB
terraform destroy -target=module.lambda_functions
terraform destroy -target=module.dynamodb_table
Verify Complete Cleanup
After destroying, verify no resources remain:
# Check VPCs
aws ec2 describe-vpcs --filters "Name=tag:cp,Values=acme" "Name=tag:env,Values=prod"
# Check running instances
aws ec2 describe-instances \
--filters "Name=instance-state-name,Values=running" \
"Name=tag:cp,Values=acme"
# Check GWLB
aws elbv2 describe-load-balancers \
--query 'LoadBalancers[?contains(LoadBalancerName, `acme`)]'
# Check Lambda functions
aws lambda list-functions --query 'Functions[?contains(FunctionName, `acme`)]'
Summary
The autoscale_template deploys FortiGate autoscale into an existing Inspection VPC discovered via Fortinet-Role tags:
✅ Tag-based resource discovery: Finds Inspection VPC resources via Fortinet-Role tags
✅ Complete autoscale infrastructure: FortiGate ASG, GWLB, Lambda, IAM
✅ Flexible deployment options: Centralized, distributed, or hybrid architectures
✅ Multiple licensing models: BYOL, FortiFlex, PAYG, or hybrid
✅ Management options: Dedicated ENI, dedicated VPC, FortiManager integration
✅ Production-ready: High availability, autoscaling, lifecycle management
Key Requirements:
- Run existing_vpc_resources first to create the Inspection VPC with Fortinet-Role tags
- Ensure cp and env values match between both templates for tag discovery
Next Steps:
- Review Solution Components for configuration options
- See Licensing Options for cost optimization
- Check FortiManager Integration for centralized management
Document Version: 1.0
Last Updated: November 2025
Terraform Module Version: Compatible with terraform-aws-cloud-modules v1.0+