Using the Simplified Template to Deploy a FortiGate Autoscale Group

CloudCSE Version: v25.4.d
Revision:
Last updated: Thu, Feb 26, 2026 18:37:39 UTC
Copyright© 2026 Fortinet, Inc. All rights reserved. Fortinet®, FortiGate®, FortiCare® and FortiGuard®, and certain other marks are registered trademarks of Fortinet, Inc., and other Fortinet names herein may also be registered and/or common law trademarks of Fortinet. All other product or company names may be trademarks of their respective owners. Performance and other metrics contained herein were attained in internal lab tests under ideal conditions, and actual performance and other results may vary. Network variables, different network environments and other conditions may affect performance results. Nothing herein represents any binding commitment by Fortinet, and Fortinet disclaims all warranties, whether express or implied, except to the extent Fortinet enters a binding written contract, signed by Fortinet’s General Counsel, with a purchaser that expressly warrants that the identified product will perform according to certain expressly-identified performance metrics and, in such event, only the specific performance metrics expressly identified in such binding written contract shall be binding on Fortinet. For absolute clarity, any such warranty will be limited to performance in the same ideal conditions as in Fortinet’s internal lab tests. Fortinet disclaims in full any covenants, representations, and guarantees pursuant hereto, whether express or implied. Fortinet reserves the right to change, modify, transfer, or otherwise revise this publication without notice, and the most current version of the publication shall be applicable.


Introduction

Example Diagram

Welcome

This documentation provides comprehensive guidance for deploying FortiGate autoscale groups in AWS using the FortiGate Autoscale Simplified Template. This template serves as an accessible wrapper around Fortinet’s enterprise-grade FortiGate Autoscale Templates, dramatically reducing deployment complexity while maintaining full architectural flexibility.

Purpose and Scope

The official FortiGate autoscale templates available in the terraform-aws-cloud-modules repository deliver powerful capabilities for deploying elastic, scalable security architectures in AWS. However, these templates require:

  • Deep familiarity with complex Terraform variable structures
  • Strict adherence to specific syntax requirements
  • Extensive knowledge of AWS networking and FortiGate architectures
  • Significant time investment to understand configuration dependencies

The Simplified Template addresses these challenges by:

  • Abstracting complexity: Encapsulates intricate configuration patterns into intuitive boolean variables and straightforward parameters
  • Accelerating deployment: Reduces configuration time from hours to minutes through common-use-case defaults
  • Maintaining flexibility: Retains access to advanced features while providing sensible defaults for standard deployments
  • Reducing errors: Minimizes misconfiguration risks through validated input patterns and clear parameter descriptions

What This Template Provides

The Simplified Template enables rapid deployment of FortiGate autoscale groups by simplifying configuration of:

Core Infrastructure

  • Network architecture: VPC creation or integration with existing network resources
  • Subnet design: Automated subnet allocation across multiple Availability Zones
  • Transit Gateway integration: Optional connectivity to existing Transit Gateway hubs
  • Load balancing: AWS Gateway Load Balancer (GWLB) configuration and target group management

Autoscale Configuration

  • Capacity management: Minimum, maximum, and desired instance counts
  • Scaling policies: CPU-based thresholds and CloudWatch alarm configuration
  • Instance specifications: FortiGate version, instance type, and AMI selection
  • High availability: Multi-AZ distribution and health check parameters

Licensing and Management

  • Licensing flexibility: Support for BYOL, PAYG, and FortiFlex licensing models
  • License automation: Automated license file distribution or token generation
  • Hybrid licensing: Configuration for combining multiple license types
  • FortiManager integration: Optional centralized management and policy orchestration

Security and Access

  • Management access: Dedicated management interfaces or combined data/management design
  • Key pair configuration: SSH access for administrative operations
  • Security groups: Automated creation of appropriate ingress/egress rules
  • IAM roles: Lambda function permissions for license and lifecycle management

Egress Strategies

  • Elastic IP allocation: Per-instance EIP assignment for consistent source NAT
  • NAT Gateway integration: Shared NAT Gateway configuration for cost optimization
  • Route management: Automated routing table updates for egress traffic flows

Common Use Cases

This template is specifically designed for the most frequently deployed FortiGate autoscale architectures:

  1. Centralized Inspection with Transit Gateway: Single inspection VPC serving multiple spoke VPCs through Transit Gateway routing
  2. Dedicated Management VPC: Isolated management plane for FortiManager/FortiAnalyzer integration with production traffic inspection VPC
  3. Hybrid Licensing Architectures: Cost-optimized deployments combining BYOL/FortiFlex baseline capacity with PAYG burst capacity
  4. Existing Infrastructure Integration: Deployment into pre-existing VPCs, subnets, and Transit Gateway environments

How It Works

The Simplified Template approach:

  1. Variable Abstraction: Translates complex nested map structures into simple boolean flags and direct parameters
  2. Conditional Logic: Automatically enables or disables features based on use-case selection
  3. Default Values: Provides production-ready defaults for parameters not requiring customization
  4. Validation: Implements input validation to catch configuration errors before deployment
  5. Module Invocation: Dynamically constructs proper syntax for underlying enterprise templates
  6. Output Standardization: Presents consistent outputs regardless of architecture variation
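As a concrete illustration of steps 1 and 2, the sketch below shows how a wrapper might expand a few flat inputs into the nested structure an underlying module expects. The variable names (use_existing_vpc, enable_tgw_attachment, and so on) are hypothetical and do not reflect the template's actual interface.

```python
def expand_inputs(simple: dict) -> dict:
    """Expand flat, user-friendly inputs into the nested structure an
    underlying module might expect. All variable names here are
    hypothetical, not the Simplified Template's real interface."""
    nested = {
        "vpc": {
            "create": not simple.get("use_existing_vpc", False),
            "existing_id": simple.get("existing_vpc_id"),
        },
        "transit_gateway": None,
    }
    # Conditional logic: enable the TGW block only when the flag is set.
    if simple.get("enable_tgw_attachment", False):
        nested["transit_gateway"] = {"id": simple["existing_tgw_id"]}
    return nested

config = expand_inputs({"enable_tgw_attachment": True, "existing_tgw_id": "tgw-0abc"})
print(config["transit_gateway"])  # {'id': 'tgw-0abc'}
```

In the actual template this translation happens in Terraform locals and module arguments rather than Python; the point is only that a single boolean flag can drive an entire nested configuration block.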

Prerequisites

Before using this template, ensure you have:

Required Knowledge

  • Basic understanding of AWS networking concepts (VPCs, subnets, route tables)
  • Familiarity with Terraform workflow (init, plan, apply, destroy)
  • General understanding of FortiGate firewall concepts
  • AWS account with appropriate permissions for VPC, EC2, Lambda, and IAM resource creation

Required Tools

  • Terraform: Version 1.0 or later
  • AWS CLI: Configured with appropriate credentials
  • Git: For cloning the repository
  • Text Editor: For editing terraform.tfvars configuration files

AWS Resources

  • AWS Account: With permissions to create VPCs, subnets, EC2 instances, Lambda functions, and IAM roles
  • Service Quotas: Sufficient EC2 instance limits for desired autoscale group size
  • S3 Bucket (for BYOL): Storage location for FortiGate license files
  • Key Pair: Existing EC2 key pair for SSH access to FortiGate instances

Optional Resources

  • FortiManager: For centralized management (if integration is desired)
  • FortiAnalyzer: For centralized logging and reporting
  • Transit Gateway: If integrating with existing hub-and-spoke architecture
  • FortiFlex Account: If using FortiFlex licensing model

Documentation Structure

This guide is organized into the following sections:

  1. Introduction (this section): Overview, purpose, and prerequisites
  2. Overview: Architecture patterns, key benefits, and solution capabilities
  3. Licensing: Detailed comparison of BYOL, PAYG, and FortiFlex licensing options
  4. Solution Components: In-depth explanation of architectural elements and configuration options
  5. Templates: Step-by-step deployment procedures and configuration examples

Additional Resources

For comprehensive FortiGate and FortiOS documentation beyond the scope of this deployment guide, refer to Fortinet's official product documentation at docs.fortinet.com.

Support and Feedback

For technical support, open a ticket through the FortiCare portal at support.fortinet.com.

For documentation feedback or Simplified Template enhancement requests, please reach out through your Fortinet account team or technical contacts.

Getting Started

Ready to deploy? Proceed to the Overview section to understand the architecture patterns available, or jump directly to the Templates section to begin configuration and deployment.

Overview

Introduction

FortiOS natively supports AWS Autoscaling capabilities, enabling dynamic horizontal scaling of FortiGate clusters within AWS environments. This solution leverages AWS Gateway Load Balancer (GWLB) to intelligently distribute traffic across FortiGate instances in the autoscale group. The cluster dynamically adjusts its capacity based on configurable thresholds—automatically launching new instances when the cluster size falls below the minimum threshold and terminating instances when capacity exceeds the maximum threshold. As instances are added or removed, they are seamlessly registered with or deregistered from associated GWLB target groups, ensuring continuous traffic inspection capabilities while maintaining optimal cluster performance and capacity.

Key Benefits

This autoscaling solution delivers several strategic advantages for AWS security architectures:

Elastic Scalability

  • Horizontal scaling: Automatically scales FortiGate cluster capacity in response to traffic patterns and resource utilization
  • Cost optimization: Scales down during low-traffic periods to reduce operational costs
  • Performance assurance: Scales up during peak demand to maintain consistent security inspection throughput

Flexible Licensing Options

  • Hybrid licensing model: Supports combination of BYOL (Bring Your Own License), FortiFlex usage-based licensing for baseline capacity, and AWS Marketplace PAYG (Pay-As-You-Go) for elastic burst capacity
  • License optimization: Minimize costs by using BYOL/FortiFlex licenses for steady-state workloads and PAYG for temporary scale-out events
  • Simplified license management: Automated license token injection during instance launch via Lambda functions

High Availability and Configuration Management

  • Automated configuration synchronization: Primary FortiGate instance automatically synchronizes security policies and configuration to secondary instances using FortiOS native HA sync mechanisms
  • FortiManager integration: Optional centralized management through FortiManager for policy orchestration, compliance monitoring, and operational visibility across the autoscale group
  • Consistent security posture: Configuration drift prevention ensures all instances enforce identical security policies

Architectural Flexibility

  • Centralized inspection architecture: Single inspection VPC model with Transit Gateway integration for hub-and-spoke topology
  • Distributed inspection architecture: Multiple inspection points for geo-distributed workloads (coming soon)
  • Deployment patterns: Support for single-arm (1-ENI) and dual-arm (2-ENI) FortiGate deployments

Internet Egress Options

  • Elastic IP (EIP) NAT: Each FortiGate instance can leverage individual EIPs for source NAT, providing consistent egress IP addresses for allowlist scenarios
  • NAT Gateway integration: Alternative architecture using shared NAT Gateways for cost-optimized egress traffic when static source IPs are not required
  • Hybrid egress design: Combine EIP and NAT Gateway approaches based on application requirements

Architecture Considerations

This simplified template streamlines the deployment of FortiGate autoscale groups by abstracting infrastructure complexity while providing customization options for:

  • VPC and subnet configuration
  • Licensing strategy selection
  • FortiManager/FortiAnalyzer integration
  • Network interface design (dedicated management ENI options)
  • Scaling policies and thresholds
  • Transit Gateway attachment and routing

Additional Solutions

Fortinet offers several complementary AWS security architectures optimized for different use cases:

  • FGCP HA (Single AZ): Active-passive high availability within a single Availability Zone for maximum configuration synchronization and stateful failover
  • FGCP HA (Multi-AZ): Active-passive high availability across multiple Availability Zones for enhanced resilience
  • Transit Gateway with FortiGate inspection: Centralized security inspection for multi-VPC environments
  • Distributed Gateway Load Balancer architectures: Regional traffic inspection patterns

For comprehensive information on Fortinet’s AWS security portfolio, deployment guides, and architectural best practices, visit www.fortinet.com/aws.

Licensing

Overview

FortiGate autoscale deployments in AWS support three distinct licensing models, each optimized for different operational requirements, cost structures, and scaling behaviors. The choice of licensing strategy significantly impacts deployment complexity, operational costs, and the ability to dynamically scale capacity in response to demand.

This template supports all three licensing models and enables hybrid licensing configurations where multiple license types coexist within the same autoscale group, providing maximum flexibility for cost optimization and capacity management.


Licensing Options

AWS Marketplace Pay-As-You-Go (PAYG)

Best for: Proof of concepts, temporary workloads, elastic burst capacity

AWS Marketplace PAYG licensing offers the simplest deployment path with zero upfront licensing requirements. Instances are billed hourly through your AWS account based on instance type and included FortiGuard services.

Advantages

  • Zero configuration: No license files, tokens, or registration required
  • Instant deployment: Instances launch immediately without license provisioning delays
  • Elastic scaling: Ideal for autoscale groups that frequently scale out and in
  • No commitment: Pay only for actual runtime hours with no long-term contracts
  • Consolidated billing: All costs appear on AWS invoices alongside infrastructure charges

Considerations

  • Higher per-hour cost: Premium pricing compared to BYOL or FortiFlex over extended periods
  • Service bundle locked: Cannot customize FortiGuard service subscriptions; you receive the bundle included with the marketplace offering
  • Limited cost optimization: No volume discounts or prepaid savings
  • Vendor lock-in: Cannot migrate licenses to on-premises or other cloud providers

When to Use

  • Development, testing, and staging environments
  • Proof-of-concept deployments with undefined timelines
  • Burst capacity in hybrid licensing architectures (scale beyond BYOL/FortiFlex baseline)
  • Short-term projects (< 6 months) where simplicity outweighs cost
  • Disaster recovery standby capacity that remains dormant most of the time

Implementation Notes

  • Select PAYG AMI from AWS Marketplace during launch template configuration
  • No Lambda-based license management required
  • Instances automatically activate upon boot
  • FortiGuard services update immediately without additional registration

Bring Your Own License (BYOL)

Best for: Long-term production deployments with predictable capacity requirements

BYOL licensing leverages perpetual or term-based FortiGate-VM licenses purchased directly from Fortinet or authorized resellers. This model provides the lowest per-instance operating cost for sustained workloads but requires manual license file management.

Advantages

  • Lowest operating cost: Significant savings (40-60%) compared to PAYG for long-term deployments
  • Custom service bundles: Select specific FortiGuard subscriptions (UTP, ATP, Enterprise) based on security requirements
  • Portable licenses: Migrate licenses between environments (AWS, Azure, on-premises) with proper licensing terms
  • Volume discounts: Enterprise agreements provide additional cost reductions at scale
  • Predictable budgeting: Fixed annual or multi-year costs independent of instance runtime

Considerations

  • Manual license management: Requires obtaining, storing, and deploying license files for each instance
  • Upfront capital expense: Purchase licenses before deployment
  • Reduced flexibility: Fixed license count limits maximum autoscale capacity unless additional licenses are procured
  • License tracking overhead: Must maintain inventory of assigned vs. available licenses
  • Decommissioning process: Requires license recovery when scaling in or decommissioning environments

When to Use

  • Production workloads with predictable, steady-state capacity requirements
  • Long-term deployments (> 1 year) where cost savings justify management overhead
  • Organizations with existing Fortinet licensing agreements or ELAs
  • Environments requiring specific FortiGuard service combinations not available in marketplace offerings
  • Hybrid licensing architectures as the baseline capacity tier

Implementation Notes

  • Store license files in S3 bucket accessible by Lambda function
  • Lambda function reads license files and applies them during instance boot
  • Configure lic_folder_path variable to point to license file directory
  • Naming convention: License files must match the naming pattern expected by the Lambda function (e.g., sequential numbering)
  • DynamoDB table tracks license assignments to prevent duplicate usage
  • Decommissioned instances return licenses to available pool for reuse

License File Requirements

licenses/
├── FGVM01-001.lic
├── FGVM01-002.lic
├── FGVM01-003.lic
└── FGVM01-004.lic

Critical: Ensure sufficient licenses exist for asg_max_size. If licenses are exhausted during scale-out, new instances will remain unlicensed and non-functional.
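The assignment logic behind the license pool can be sketched as follows. This is a simplified stand-in: the actual templates track assignments in a DynamoDB table via a Lambda function, while here a plain in-memory dict plays that role.

```python
class LicensePool:
    """Simplified stand-in for the Lambda/DynamoDB license tracking the
    templates use: a dict maps each license file to its current holder."""

    def __init__(self, license_files):
        # None means unassigned; otherwise the EC2 instance ID holding it.
        self.assignments = {lic: None for lic in license_files}

    def assign(self, instance_id):
        for lic, holder in self.assignments.items():
            if holder is None:
                self.assignments[lic] = instance_id
                return lic
        raise RuntimeError("license pool exhausted; add licenses before "
                           "raising asg_max_size")

    def release(self, instance_id):
        # On scale-in, return the instance's license to the pool for reuse.
        for lic, holder in self.assignments.items():
            if holder == instance_id:
                self.assignments[lic] = None
                return lic
        return None

pool = LicensePool(["FGVM01-001.lic", "FGVM01-002.lic"])
print(pool.assign("i-0aaa"))  # FGVM01-001.lic
pool.release("i-0aaa")        # license returns to the available pool
```

The exhaustion error mirrors the critical note above: once every license is assigned, additional scale-out instances have nothing to draw from.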


FortiFlex (Usage-Based Licensing)

Best for: Dynamic workloads requiring flexibility with optimized costs for medium to long-term deployments

FortiFlex (formerly Flex-VM) is Fortinet’s consumption-based, points-driven licensing program that combines the flexibility of PAYG with cost structures approaching BYOL. Points are consumed daily based on FortiGate configuration (CPU count, service package), and licenses are dynamically provisioned via API tokens.

Advantages

  • Flexible scaling: Provision and deprovision licenses on-demand through API integration
  • Optimized costs: 20-40% savings compared to PAYG for sustained workloads
  • Automated license lifecycle: Lambda function generates license tokens automatically during instance launch
  • Right-sizing capability: Change CPU count or service packages dynamically; pay only for what you consume
  • Simplified license management: No physical license files; tokens generated via API calls
  • Point pooling: Share point allocations across multiple deployments and cloud providers
  • Burst capacity support: Quickly provision additional licenses without procurement delays

Considerations

  • Initial setup complexity: Requires FortiFlex program registration, configuration templates, and API integration
  • Point management: Monitor point consumption to prevent negative balance or service interruption
  • Active entitlement management: Must create/stop entitlements to control costs
  • API dependency: Relies on connectivity to FortiFlex API endpoints during instance provisioning
  • Grace period risks: Running a negative balance triggers a 90-day grace period; service stops if the balance is not restored
  • Minimum commitment: Some FortiFlex programs require minimum annual consumption

When to Use

  • Production workloads with variable but predictable traffic patterns
  • Multi-environment deployments (dev, staging, production) sharing point pools
  • Organizations pursuing cloud-first strategies without legacy perpetual licenses
  • Architectures requiring frequent right-sizing of FortiGate instances
  • Deployments spanning multiple cloud providers or hybrid architectures
  • Cost-conscious autoscale groups with moderate to high uptime requirements

Implementation Notes

  • Register FortiFlex program and purchase point packs via FortiCare portal
  • Create FortiGate-VM configurations in FortiFlex portal defining CPU count and service packages
  • Generate API credentials through IAM portal with FortiFlex permissions
  • Configure Lambda function environment variables with FortiFlex API credentials
  • Lambda function creates entitlements and retrieves license tokens during instance launch
  • Entitlements automatically STOP when instances terminate, halting point consumption
  • Monitor point balance via FortiFlex portal or API to prevent service interruption
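The entitlement lifecycle described above can be modeled in a few lines. This sketch tracks only state and point consumption; it does not reproduce the FortiFlex REST API, whose endpoints and payloads are documented in the FortiFlex portal.

```python
from dataclasses import dataclass

@dataclass
class Entitlement:
    """Models only the lifecycle state; the real FortiFlex REST API and
    its request payloads are not reproduced here."""
    config_id: str
    daily_points: float
    state: str = "ACTIVE"

    def stop(self):
        # Stopping an entitlement halts further point consumption.
        self.state = "STOPPED"

def daily_consumption(entitlements):
    """Points burned per day across all ACTIVE entitlements."""
    return sum(e.daily_points for e in entitlements if e.state == "ACTIVE")

fleet = [Entitlement("cfg-2cpu-utp", 6.52) for _ in range(3)]
print(round(daily_consumption(fleet), 2))  # 19.56
fleet[0].stop()  # instance terminated -> stop its entitlement promptly
print(round(daily_consumption(fleet), 2))  # 13.04
```

This is why the notes above stress stopping entitlements quickly after scale-in: every hour an entitlement stays ACTIVE after its instance terminates is pure point waste.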

FortiFlex Prerequisites

  1. FortiFlex Program Registration:

    • Purchase program SKU: FC-10-ELAVR-221-02-XX (12, 36, or 60 months)
    • Register program in FortiCare at https://support.fortinet.com
    • Wait up to 4 hours for program validation
  2. Point Pack Purchase:

    • Annual packs: LIC-ELAVM-10K (10,000 points, 1-year term with rollover)
    • Multi-year packs: LIC-ELAVMMY-50K-XX (50,000 points, 3-5 year terms)
    • Bulk packs: LIC-ELAVMMY-BULK-SEAT (100,000 points per seat, minimum 10 seats)
  3. Configuration Creation:

    • Define VM specifications (CPU count, service package, VDOMs)
    • Example: 2-CPU FortiGate with UTP bundle = ~6.5 points/day
    • Use FortiFlex Calculator to estimate consumption: https://fndn.fortinet.net/index.php?/tools/fortiflex/
  4. API Access Setup:

    • Create IAM permission profile including FortiFlex portal
    • Create API user and download credentials
    • Obtain API token via authentication endpoint
    • Store credentials securely (AWS Secrets Manager recommended)

Point Consumption Examples

Configuration              | Daily Points | Monthly Points (30 days) | Annual Points
---------------------------|--------------|--------------------------|--------------
1 CPU, FortiCare Premium   | 1.63         | 49                       | 595
2 CPU, UTP Bundle          | 6.52         | 196                      | 2,380
4 CPU, ATP Bundle          | 26.08        | 782                      | 9,519
8 CPU, Enterprise Bundle   | 104.32       | 3,130                    | 38,077

Note: Actual consumption varies based on specific service selections and VDOM count. Always use the FortiFlex Calculator for accurate estimates.
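The monthly and annual figures in the table are simple multiples of the daily rate, rounded to the nearest whole point, which you can verify directly:

```python
def estimate_points(daily: float, days: int) -> int:
    """Daily point rate multiplied out and rounded to the nearest point."""
    return round(daily * days)

# Daily rates from the table above.
for name, daily in [("1 CPU, FortiCare Premium", 1.63),
                    ("2 CPU, UTP Bundle", 6.52),
                    ("4 CPU, ATP Bundle", 26.08),
                    ("8 CPU, Enterprise Bundle", 104.32)]:
    print(f"{name}: {estimate_points(daily, 30)}/month, "
          f"{estimate_points(daily, 365)}/year")
```

For budgeting real deployments, use the FortiFlex Calculator rather than this arithmetic, since service selections and VDOM count change the daily rate.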


Hybrid Licensing Architecture

Overview

The autoscale template supports hybrid licensing configurations where multiple license types coexist within separate Auto Scaling Groups (ASGs). This architecture provides cost optimization by using BYOL or FortiFlex for baseline capacity and PAYG for elastic burst capacity.

Architecture Pattern

┌─────────────────────────────────────────────────────┐
│              GWLB Target Group                      │
│                  (Unified)                          │
└────────┬────────────────────────────────┬───────────┘
         │                                │
         ▼                                ▼
┌─────────────────┐              ┌─────────────────┐
│  BYOL/FortiFlex │              │   PAYG ASG      │
│       ASG       │              │                 │
│                 │              │                 │
│  Min: 2         │              │  Min: 0         │
│  Max: 4         │              │  Max: 8         │
│  Desired: 2     │              │  Desired: 0     │
│                 │              │                 │
│ (Baseline)      │              │ (Burst)         │
└─────────────────┘              └─────────────────┘

Configuration Strategy

  1. Primary ASG (BYOL or FortiFlex):

    • Configure with minimum = desired capacity
    • Sets baseline capacity for steady-state traffic
    • Lower per-instance cost for sustained operation
    • Example: min_size = 2, max_size = 4, desired_capacity = 2
  2. Secondary ASG (PAYG):

    • Configure with minimum = 0, desired = 0
    • Remains dormant during normal operations
    • Scales out only when primary ASG reaches maximum capacity
    • Example: min_size = 0, max_size = 8, desired_capacity = 0
  3. Scaling Coordination:

    • Configure CloudWatch alarms with staggered thresholds
    • Primary ASG scales at lower CPU threshold (e.g., 60%)
    • Secondary ASG scales at higher CPU threshold (e.g., 75%)
    • Provides buffer for primary ASG to stabilize before burst scaling
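The staggered-threshold ordering can be expressed as a small decision function. In a real deployment this logic lives in CloudWatch alarms and ASG scaling policies, not code; the sketch below only illustrates the intended behavior, using the example thresholds above.

```python
def scale_decision(avg_cpu, primary_size, primary_max,
                   primary_threshold=60, secondary_threshold=75):
    """Return which ASG (if any) should scale out, enforcing that the
    PAYG (secondary) tier only grows once the BYOL/FortiFlex (primary)
    tier has reached its maximum size."""
    if avg_cpu >= secondary_threshold and primary_size >= primary_max:
        return "scale-out-secondary"
    if avg_cpu >= primary_threshold and primary_size < primary_max:
        return "scale-out-primary"
    return "no-action"

print(scale_decision(65, primary_size=2, primary_max=4))  # scale-out-primary
print(scale_decision(80, primary_size=4, primary_max=4))  # scale-out-secondary
```

Note the guard on primary_size: even at high CPU, burst capacity stays dormant until the baseline tier is exhausted, which is the behavior the staggered alarms are designed to approximate.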

Cost Optimization Example

Scenario: E-commerce application with baseline 4 Gbps throughput, occasional spikes to 12 Gbps

Hybrid Configuration:

  • Primary: 4x c6i.xlarge (4 vCPUs) with FortiFlex

    • Daily points: 4 instances × 26.08 points = 104.32 points/day
    • Monthly cost: ~$X (based on point pricing)
    • Handles baseline traffic continuously
  • Secondary: 0-8x c6i.xlarge with PAYG

    • Hourly cost: $Y per instance
    • Scales only during traffic spikes (estimated 10% of time)
    • Monthly cost: 8 instances × $Y/hour × 720 hours × 0.10 = $Z

Savings vs. Pure PAYG: Approximately 35-45% reduction for this traffic pattern
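The burst-tier arithmetic from the example generalizes to a one-line formula. The hourly rate below is a deliberately hypothetical placeholder (the $Y in the text); substitute your actual marketplace price.

```python
def monthly_burst_cost(instances, hourly_rate, burst_fraction, hours=720):
    """Burst-tier arithmetic from the example above:
    instances x hourly rate x hours in a month x fraction of time active.
    hourly_rate is a placeholder; use your real marketplace price."""
    return instances * hourly_rate * hours * burst_fraction

# Hypothetical $1.00/hour rate, 8 instances bursting 10% of the time:
print(monthly_burst_cost(8, 1.00, 0.10))
```

Comparing this against the same fleet running PAYG full-time (burst_fraction = 1.0 on all instances) is what produces the 35-45% savings estimate for spiky traffic patterns.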

Implementation Notes

  • Both ASGs register with same GWLB target group for unified traffic distribution
  • Each ASG requires separate launch template with appropriate licensing configuration
  • CloudWatch alarms must reference correct ASG names for scaling actions
  • Lambda function handles license provisioning independently for each ASG
  • Monitor scaling activities to validate primary ASG exhausts capacity before secondary ASG activates

License Selection Decision Tree

START: What is your deployment scenario?

├─ POC / Testing / Short-term project (< 6 months)
  └─ Use: AWS Marketplace PAYG
     └─ Rationale: Simplicity, no upfront investment, easy teardown

├─ Long-term production (> 12 months) with steady-state capacity
  └─ Do you have existing Fortinet licenses or an ELA?
     ├─ YES → Use: BYOL
     │    └─ Rationale: Lowest cost, leverage existing investment
     └─ NO → Use: FortiFlex
          └─ Rationale: Flexible, better cost than PAYG, no upfront licensing

├─ Production with variable traffic patterns
  └─ Use: Hybrid (FortiFlex + PAYG)
     └─ Rationale: Baseline cost optimization with elastic burst capacity

└─ Multi-environment deployment (dev/staging/prod)
   └─ Use: FortiFlex
      └─ Rationale: Point pooling across environments, on-demand provisioning
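For teams that prefer it in executable form, the decision tree translates directly into a function. The inputs mirror the questions the tree asks; reducing them to booleans and a month count is our simplification.

```python
def recommend_license(months, steady_state, has_existing_licenses,
                      multi_environment=False):
    """Transcription of the decision tree above into code; the boolean
    inputs are our simplification of its questions."""
    if months < 6:
        return "PAYG"
    if multi_environment:
        return "FortiFlex"
    if months > 12 and steady_state:
        return "BYOL" if has_existing_licenses else "FortiFlex"
    # Longer horizon with variable traffic: baseline plus burst capacity.
    return "Hybrid (FortiFlex + PAYG)"

print(recommend_license(3, steady_state=False, has_existing_licenses=False))  # PAYG
print(recommend_license(24, steady_state=True, has_existing_licenses=True))   # BYOL
```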

Best Practices

General Recommendations

  1. Calculate total cost of ownership (TCO):

    • Project instance runtime hours over 12-36 months
    • Factor in scaling frequency and burst capacity requirements
    • Include license management overhead costs for BYOL
    • Use FortiFlex Calculator for accurate point consumption estimates
  2. Start with PAYG for prototyping:

    • Validate architecture and sizing before committing to licenses
    • Measure actual traffic patterns to inform license type selection
    • Convert to BYOL or FortiFlex after requirements stabilize
  3. Implement hybrid licensing for cost optimization:

    • Use BYOL/FortiFlex for baseline capacity that runs 24/7
    • Use PAYG for burst capacity that scales intermittently
    • Monitor scaling patterns monthly and adjust ASG configurations
  4. Automate license lifecycle management:

    • Use Lambda functions for automated license provisioning
    • Implement DynamoDB tracking for BYOL license assignments
    • Enable CloudWatch alarms for FortiFlex point balance monitoring
    • Store FortiFlex API credentials in AWS Secrets Manager

BYOL-Specific Best Practices

  1. Maintain license inventory:

    • Track assigned vs. available licenses in spreadsheet or CMDB
    • Reserve 10-20% buffer above asg_max_size for maintenance windows
    • Implement automated alerts when available licenses fall below threshold
  2. Standardize license file naming:

    • Use consistent naming convention (e.g., FGVMXX-001.lic)
    • Document naming pattern in deployment runbooks
    • Ensure Lambda function matches naming pattern logic
  3. Test license recovery:

    • Verify decommissioned instances return licenses to pool
    • Validate DynamoDB table updates correctly
    • Practice license recovery procedures before production incidents

FortiFlex-Specific Best Practices

  1. Monitor point consumption actively:

    • Review Point Usage reports weekly in FortiFlex portal
    • Set up email notifications for low balance (90/60/30 day thresholds)
    • Correlate point consumption with CloudWatch ASG metrics
  2. Plan point pack purchases:

    • Purchase points early in program year to maximize rollover (annual packs)
    • Use multi-year packs for long-term stable deployments to avoid rollover complexity
    • Maintain 20-30% buffer above projected consumption
  3. Optimize entitlement lifecycle:

    • STOP entitlements immediately after instance termination to halt point consumption
    • Use Lambda automation to stop entitlements within minutes of scale-in events
    • Review STOPPED entitlements weekly and delete if no longer needed
  4. Right-size FortiGate configurations:

    • Start with minimal CPU count and scale up as needed
    • Use A La Carte service packages for cost optimization when not all services required
    • Adjust configurations quarterly based on actual usage patterns

Troubleshooting

Common Licensing Issues

BYOL: Instances Launch Without License

Symptoms: FortiGate instance boots but no license is applied; limited functionality

Causes:

  • License file not found in S3 bucket
  • Incorrect lic_folder_path variable
  • Lambda function lacks S3 permissions
  • License file naming doesn’t match Lambda logic
  • All licenses already assigned (pool exhausted)

Resolution:

  1. Verify license files exist in S3 bucket: aws s3 ls s3://<bucket>/licenses/
  2. Check Lambda CloudWatch logs for S3 access errors
  3. Validate IAM role attached to Lambda has s3:GetObject permission
  4. Confirm available licenses exist in DynamoDB tracking table
  5. Manually apply the license via the FortiGate CLI: execute restore vmlicense tftp <license.lic> <tftp_server>

FortiFlex: License Token Generation Fails

Symptoms: Instance launches but does not activate; no serial number assigned

Causes:

  • FortiFlex API credentials expired or invalid
  • Insufficient points in FortiFlex account
  • FortiFlex program expired
  • Network connectivity issues to FortiFlex API
  • Configuration ID not found or deactivated

Resolution:

  1. Check Lambda CloudWatch logs for API authentication errors
  2. Verify FortiFlex API credentials: curl test authentication endpoint
  3. Log into FortiFlex portal and check point balance
  4. Confirm program status and expiration date
  5. Verify configuration exists and is active in FortiFlex portal
  6. Test network connectivity from Lambda to https://support.fortinet.com

Hybrid Licensing: Secondary ASG Scales Before Primary Exhausted

Symptoms: PAYG instances launch while primary ASG has available capacity

Causes:

  • CloudWatch alarm thresholds misconfigured
  • Alarm evaluation periods too short
  • ASG cooldown periods insufficient
  • Stale CloudWatch metrics

Resolution:

  1. Review CloudWatch alarm configurations for both ASGs
  2. Lower the primary ASG alarm threshold so baseline capacity scales out sooner (e.g., 60% → 50%)
  3. Raise the secondary ASG alarm threshold so burst capacity activates later (e.g., 75% → 80%)
  4. Extend alarm evaluation periods to 3-5 minutes
  5. Implement alarm dependencies (secondary alarm checks primary ASG size)

License Not Applied After Instance Boot

Symptoms: Instance operational but running in limited mode or showing expired license

Causes:

  • User-data script failed during execution
  • License injection command syntax error
  • Network connectivity issues during boot
  • FortiGate version mismatch with license

Resolution:

  1. SSH to FortiGate instance and check status: get system status
  2. Review bootstrap/user-data execution output on the FortiGate: diagnose debug cloudinit show
  3. Manually inject license:
    • BYOL: execute restore vmlicense tftp <license.lic> <tftp_server>
    • FortiFlex: execute vm-license <TOKEN>
  4. Verify network connectivity: execute ping fortiguard.com
  5. Check FortiOS version compatibility with license type

Additional Resources


Support Channels

  • FortiCare Portal: support.fortinet.com
  • FortiFlex Portal: FortiCare > Services > Assets & Accounts > FortiFlex
  • Technical Support: Open support ticket for licensing issues
  • Sales Team: Contact for enterprise licensing agreements or volume discounts

Summary

Choosing the appropriate licensing model for your FortiGate autoscale deployment requires careful evaluation of deployment duration, traffic patterns, operational complexity tolerance, and budget constraints. This template supports all licensing models and hybrid configurations, enabling you to optimize costs while maintaining the flexibility to adapt to changing requirements.

Quick Selection Guide:

  • PAYG: Simplicity matters more than cost; short-term or highly variable workloads
  • BYOL: Lowest cost for long-term, predictable capacity; you have existing licenses
  • FortiFlex: Balance of flexibility and cost; dynamic workloads without upfront licenses
  • Hybrid: Best cost optimization; combine baseline BYOL/FortiFlex with PAYG burst capacity

Solution Components

The FortiGate Autoscale Simplified Template abstracts complex architectural patterns into configurable components that can be enabled or customized through the terraform.tfvars file.

This section provides detailed explanations of each component, configuration options, and architectural considerations to help you design the optimal deployment for your requirements.

What You’ll Learn

This section covers the major architectural elements available in the template:

  • Internet Egress Options: Choose between EIP or NAT Gateway architectures
  • Firewall Architecture: Understand 1-ARM vs 2-ARM configurations
  • Management Isolation: Configure dedicated management ENI and VPC options
  • Licensing: Manage BYOL licenses and integrate FortiFlex API
  • FortiManager Integration: Enable centralized management and policy orchestration
  • Capacity Planning: Configure autoscale group sizing and scaling strategies
  • Primary Protection: Implement scale-in protection for configuration stability
  • Additional Options: Fine-tune instance specifications and advanced settings

Each component page includes:

  • Configuration examples
  • Architecture diagrams
  • Best practices
  • Troubleshooting guidance
  • Use case recommendations

Select a component from the navigation menu to learn more about specific configuration options.

Subsections of Solution Components

Internet Egress Options

Overview

The FortiGate autoscale solution provides two distinct architectures for internet egress traffic, each optimized for different operational requirements and cost considerations.


Option 1: Elastic IP (EIP) per Instance

Each FortiGate instance in the autoscale group receives a dedicated Elastic IP address. All traffic destined for the public internet is source-NATed behind the instance’s assigned EIP.

Configuration

access_internet_mode = "eip"

Architecture Behavior

In EIP mode, the architecture routes all internet-bound traffic to port2 (the public interface). The route table for the public subnet directs traffic to the Internet Gateway (IGW), where automatic source NAT to the associated EIP occurs.

EIP Diagram

Advantages

  • No NAT Gateway costs: Eliminates monthly NAT Gateway charges ($0.045/hour + data processing)
  • Distributed egress: Each instance has independent internet connectivity
  • Simplified troubleshooting: Per-instance source IP simplifies traffic flow analysis
  • No single point of failure: Loss of one instance’s EIP doesn’t affect others

Considerations

  • Unpredictable IP addresses: EIPs are allocated from AWS’s pool; you cannot predict or specify the assigned addresses
  • Allowlist complexity: Destinations requiring IP allowlisting must accommodate a pool of EIPs (one per maximum autoscale capacity)
  • IP churn during scaling: Scale-out events introduce new source IPs; scale-in events remove them
  • Limited EIP quota: AWS accounts have default limits (5 EIPs per region, increased upon request)
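If the autoscale group may exceed the default quota, the increase can be requested through Terraform as well as the console. The sketch below uses the AWS provider's aws_servicequotas_service_quota resource; the quota code shown is what AWS currently lists for "EC2-VPC Elastic IPs", but verify it in the Service Quotas console before relying on it.

```hcl
# Illustrative: request a higher EIP quota up front so scale-out events
# are not blocked by the per-region limit. Verify the quota code first.
resource "aws_servicequotas_service_quota" "eip_limit" {
  service_code = "ec2"
  quota_code   = "L-0263D0A3" # "EC2-VPC Elastic IPs" (confirm for your region)
  value        = 10           # at least the combined max size of both ASGs
}
```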

Best Use Cases

  • Cost-sensitive deployments where NAT Gateway charges exceed EIP allocation costs
  • Environments where destination allowlisting is not required
  • Architectures prioritizing distributed egress over consistent source IPs
  • Development and testing environments with limited budget

Option 2: NAT Gateway

All FortiGate instances share one or more NAT Gateways deployed in public subnets. Traffic is source-NATed to the NAT Gateway’s static Elastic IP address.

Configuration

access_internet_mode = "nat_gw"

Architecture Behavior

NAT Gateway mode requires additional subnet and route table configuration. Internet-bound traffic is first routed to the NAT Gateway in the public subnet, which performs source NAT to its static EIP before forwarding to the IGW.

NAT Gateway Diagram

Advantages

  • Predictable source IP: Single, stable public IP address for the lifetime of the NAT Gateway
  • Simplified allowlisting: Destinations only need to allowlist one IP address (per Availability Zone)
  • High throughput: NAT Gateway supports up to 45 Gbps per AZ
  • Managed service: AWS handles NAT Gateway scaling and availability
  • Independent of FortiGate scaling: Source IP remains constant during scale-in/scale-out events

Considerations

  • Additional costs: $0.045/hour per NAT Gateway + $0.045 per GB data processed
  • Per-AZ deployment: Multi-AZ architectures require NAT Gateway in each AZ for fault tolerance
  • Additional subnet requirements: Requires dedicated NAT Gateway subnet in each AZ
  • Route table complexity: Additional route tables needed for NAT Gateway routing

Cost Analysis Example

Scenario: 4 FortiGate instances processing 10 TB/month egress traffic

EIP Mode:

  • 4 EIPs: 4 × $0.005/hour × 730 hours = $14.60 (AWS now bills all public IPv4 addresses, including in-use EIPs)
  • Total monthly: ~$15

NAT Gateway Mode (2 AZs):

  • 2 NAT Gateways: 2 × $0.045/hour × 730 hours = $65.70
  • Data processing: 10,000 GB × $0.045 = $450.00
  • Total monthly: $515.70
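The NAT Gateway figures above can be reproduced with a small Terraform locals block. This is illustrative arithmetic only; substitute current AWS pricing for your region.

```hcl
# Cost model for the example scenario (2 AZs, 10 TB/month egress).
locals {
  nat_gw_hourly_usd   = 0.045
  nat_gw_data_per_gb  = 0.045
  hours_per_month     = 730
  nat_gw_count        = 2
  egress_gb_per_month = 10000

  nat_gw_fixed = local.nat_gw_count * local.nat_gw_hourly_usd * local.hours_per_month # $65.70
  nat_gw_data  = local.egress_gb_per_month * local.nat_gw_data_per_gb                 # $450.00
  nat_gw_total = local.nat_gw_fixed + local.nat_gw_data                               # $515.70
}
```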

Decision Point: NAT Gateway makes sense when consistent source IP requirement justifies the additional cost.

Best Use Cases

  • Production environments requiring predictable source IPs
  • Compliance scenarios where destination IP allowlisting is mandatory
  • High-volume egress traffic to SaaS providers with IP allowlisting requirements
  • Architectures where operational simplicity outweighs additional cost

Decision Matrix

| Factor | EIP Mode | NAT Gateway Mode |
|---|---|---|
| Monthly Cost | Minimal | $500+ (varies with traffic) |
| Source IP Predictability | Variable (changes with scaling) | Stable |
| Allowlisting Complexity | High (multiple IPs) | Low (single IP per AZ) |
| Throughput | Per-instance limit | Up to 45 Gbps per AZ |
| Operational Complexity | Low | Medium |
| Best For | Dev/test, cost-sensitive | Production, compliance-driven |

Next Steps

After selecting your internet egress option, proceed to Firewall Architecture to configure the FortiGate interface model.

Firewall Architecture

Overview

FortiGate instances can operate in single-arm (1-ARM) or dual-arm (2-ARM) network configurations, fundamentally changing traffic flow patterns through the firewall.

Configuration

firewall_policy_mode = "1-arm"  # or "2-arm"

Firewall Policy Mode


2-ARM Configuration

Architecture Overview

The 2-ARM configuration deploys FortiGate instances with distinct “trusted” (private) and “untrusted” (public) interfaces, providing clear network segmentation.

Traffic Flow:

  1. Traffic arrives at GWLB Endpoints (GWLBe) in the inspection VPC
  2. GWLB load-balances traffic across healthy FortiGate instances
  3. Traffic encapsulated in Geneve tunnels arrives at FortiGate port1 (data plane)
  4. FortiGate inspects traffic and applies security policies
  5. Internet-bound traffic exits via port2 (public interface)
  6. Port2 traffic is source-NATed via EIP or NAT Gateway
  7. Return traffic follows reverse path back through Geneve tunnels

Interface Assignments

  • port1: Data plane interface for GWLB connectivity (Geneve tunnel termination)
  • port2: Public interface for internet egress (or dedicated management, when the dedicated management ENI is enabled)

Network Interfaces Visualization

Network Interfaces

The FortiGate GUI displays both physical interfaces and logical Geneve tunnel interfaces. Traffic inspection occurs on the logical tunnel interfaces, while physical port2 handles egress.

Advantages

  • Clear network segmentation: Separate trusted and untrusted zones
  • Traditional firewall model: Familiar architecture for network security teams
  • Simplified policy creation: North-South policies align with interface direction
  • Better traffic visibility: Distinct ingress/egress paths ease troubleshooting
  • Dedicated management option: Port2 can be isolated for management traffic

Best Use Cases

  • Production deployments requiring clear network segmentation
  • Environments with security policies mandating separate trusted/untrusted zones
  • Architectures where dedicated management interface is required
  • Standard north-south inspection use cases

1-ARM Configuration

Architecture Overview

The 1-ARM configuration uses a single interface (port1) for all data plane traffic, eliminating the need for a second network interface.

Traffic Flow:

  1. Traffic arrives at port1 encapsulated in Geneve tunnels from GWLB
  2. FortiGate inspects traffic and applies security policies
  3. Traffic is hairpinned back through the same Geneve tunnel it arrived on
  4. Traffic returns to originating distributed VPC through GWLB
  5. Distributed VPC uses its own internet egress path (IGW/NAT Gateway)

This “bump-in-the-wire” architecture is the typical 1-ARM pattern for distributed inspection, where the FortiGate provides security inspection but traffic egresses from the spoke VPC, not the inspection VPC.

Important Behavior: Stateful Load Balancing

GWLB Statefulness: The Gateway Load Balancer maintains connection state tables for traffic flows.

Primary Traffic Pattern (Distributed Architecture):

  • ✅ Traffic enters via Geneve tunnel → FortiGate inspection → Hairpins back through same Geneve tunnel
  • ✅ Distributed VPC handles actual internet egress via its own IGW/NAT Gateway
  • ✅ This “bump-in-the-wire” model provides security inspection without routing traffic through inspection VPC

Key Requirement: Symmetric routing through the GWLB. Traffic must return via the same Geneve tunnel it arrived on to maintain proper state table entries.

Info

Centralized Egress Architecture (Transit Gateway Pattern)

In centralized egress deployments with Transit Gateway, the traffic flow is fundamentally different and represents the primary use case for internet egress through the inspection VPC:

Traffic Flow:

  1. Spoke VPC traffic routes to Transit Gateway
  2. TGW routes traffic to inspection VPC
  3. Traffic enters GWLBe (same AZ to avoid cross-AZ charges)
  4. GWLB forwards traffic through Geneve tunnel to FortiGate
  5. FortiGate inspects traffic and applies security policies
  6. Traffic exits port1 (1-ARM) or port2 (2-ARM) toward internet
  7. Egress via EIP or NAT Gateway in inspection VPC
  8. Response traffic returns via same interface to same Geneve tunnel

This is the standard architecture for centralized internet egress where:

  • All spoke VPCs route internet-bound traffic through the inspection VPC
  • FortiGate autoscale group provides centralized security inspection AND NAT
  • Single egress point simplifies security policy management and reduces costs
  • Requires careful route table configuration to maintain symmetric routing

When to use: Centralized egress architectures where spoke VPCs do NOT have their own internet gateways.

Note

Distributed Architecture - Alternative Pattern (Advanced Use Case)

In distributed architectures where spoke VPCs have their own internet egress, it is possible (but not typical) to configure traffic to exit through the inspection VPC instead of hairpinning:

  • Traffic enters via Geneve tunnel → Exits port1 to internet → Response returns via port1 to same Geneve tunnel

This pattern requires:

  • Careful route table configuration in the inspection VPC
  • Specific firewall policies on the FortiGate
  • Proper symmetric routing to maintain GWLB state tables

This is rarely used in distributed architectures since spoke VPCs typically handle their own egress. The standard bump-in-the-wire pattern (hairpin through same Geneve tunnel) is recommended when spoke VPCs have internet gateways.

Interface Assignments

  • port1: Combined data plane (Geneve) and egress (internet) interface

Advantages

  • Reduced complexity: Single interface simplifies routing and subnet allocation
  • Lower costs: Fewer ENIs to manage and potential for smaller instance types
  • Simplified subnet design: Only requires one data subnet per AZ

Considerations

  • Hairpinning pattern: Traffic typically hairpins back through same Geneve tunnel
  • Higher port1 bandwidth requirements: All traffic flows through single interface (both directions)
  • Limited management options: Cannot enable dedicated management ENI in true 1-ARM mode
  • Symmetric routing requirement: All traffic must egress and return via port1 for proper state table maintenance

Best Use Cases

  • Cost-optimized deployments with lower throughput requirements
  • Simple north-south inspection without management VPC integration
  • Development and testing environments
  • Architectures where simplified subnet design is prioritized

Comparison Matrix

| Factor | 1-ARM | 2-ARM |
|---|---|---|
| Interfaces Required | 1 (port1) | 2 (port1 + port2) |
| Network Complexity | Lower | Higher |
| Cost | Lower | Slightly higher |
| Management Isolation | Not available | Available |
| Traffic Pattern | Hairpin (distributed) or egress (centralized) | Clear ingress/egress separation |
| Best For | Simple deployments, cost optimization | Production, clear segmentation |

Next Steps

After selecting your firewall architecture, proceed to Dedicated Management ENI to learn about management plane isolation options.

Management Isolation Options

Overview

The FortiGate autoscale solution provides multiple approaches to isolating management traffic from data plane traffic, ranging from shared interfaces to complete physical network separation.

This page covers three progressive levels of management isolation, allowing you to choose the appropriate security posture for your deployment requirements.


Option 1: Combined Data + Management (Default)

Architecture Overview

In the default configuration, port2 serves dual purposes:

  • Data plane: Internet egress for inspected traffic (in 2-ARM mode)
  • Management plane: GUI, SSH, SNMP access

Configuration

enable_dedicated_management_eni = false
enable_dedicated_management_vpc = false

Characteristics

  • Simplest configuration: No additional interfaces or VPCs required
  • Lower cost: Minimal infrastructure overhead
  • Shared security groups: Same rules govern data and management traffic
  • Single failure domain: Management access tied to data plane availability

When to Use

  • Development and testing environments
  • Proof-of-concept deployments
  • Budget-constrained projects
  • Simple architectures without compliance requirements

Option 2: Dedicated Management ENI

Architecture Overview

Port2 is removed from the data plane and dedicated exclusively to management functions. FortiOS configures the interface with set dedicated-to management, placing it in an isolated VRF with independent routing.

Dedicated Management ENI

Configuration

enable_dedicated_management_eni = true

How It Works

  1. Dedicated-to attribute: FortiOS configures port2 with set dedicated-to management
  2. Separate VRF: Port2 is placed in an isolated VRF with independent routing table
  3. Policy restrictions: FortiGate prevents creation of firewall policies using port2
  4. Management-only traffic: GUI, SSH, SNMP, and FortiManager/FortiAnalyzer connectivity

FortiOS Configuration Impact

The dedicated management ENI can be verified in the FortiGate GUI:

GUI Dedicated Management ENI

The interface shows the dedicated-to: management attribute and separate VRF assignment, preventing data plane traffic from using this interface.

Important Compatibility Notes

Warning

Critical Limitation: 2-ARM + NAT Gateway + Dedicated Management ENI

When combining:

  • firewall_policy_mode = "2-arm"
  • access_internet_mode = "nat_gw"
  • enable_dedicated_management_eni = true

Port2 will NOT receive an Elastic IP address. This is a valid configuration, but imposes connectivity restrictions:

  • Cannot access FortiGate management from public internet
  • Can access via private IP through AWS Direct Connect or VPN
  • Can access via management VPC (see Option 3 below)

If you require public internet access to the FortiGate management interface with NAT Gateway egress, either:

  1. Use access_internet_mode = "eip" (assigns EIP to port2)
  2. Use dedicated management VPC with separate internet connectivity (Option 3)
  3. Implement AWS Systems Manager Session Manager for private connectivity

Characteristics

  • Clear separation of concerns: Management traffic isolated from data plane
  • Independent security policies: Separate security groups for management interface
  • Enhanced security posture: Reduces attack surface on management plane
  • Moderate complexity: Requires additional subnet and routing configuration

When to Use

  • Production deployments requiring management isolation
  • Security-conscious environments
  • Architectures without dedicated management VPC
  • Compliance requirements for management plane separation

Option 3: Dedicated Management VPC (Full Isolation)

Architecture Overview

The dedicated management VPC provides complete physical network separation by deploying FortiGate management interfaces in an entirely separate VPC from the data plane.

Dedicated Management VPC Architecture

Configuration

enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "your-mgmt-vpc-tag"
dedicated_management_public_az1_subnet_tag = "your-az1-subnet-tag"
dedicated_management_public_az2_subnet_tag = "your-az2-subnet-tag"

Benefits

  • Physical network separation: Management traffic never traverses inspection VPC
  • Independent internet connectivity: Management VPC has dedicated IGW or VPN
  • Centralized management infrastructure: FortiManager and FortiAnalyzer deployed in management VPC
  • Separate security controls: Management VPC security groups independent of data plane
  • Isolated failure domains: Management VPC issues don’t affect data plane

Management VPC Creation Options

Option A: Create with the existing_vpc_resources Template

The existing_vpc_resources template creates the management VPC with standardized tags that the simplified template automatically discovers.

Advantages:

  • Management VPC lifecycle independent of inspection VPC
  • FortiManager/FortiAnalyzer persistence across inspection VPC redeployments
  • Separation of concerns for infrastructure management

Default Tags (automatically created):

Default Tags Management VPC Default Tags Management Subnets

Configuration (terraform.tfvars):

enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "acme-test-management-vpc"
dedicated_management_public_az1_subnet_tag = "acme-test-management-public-az1-subnet"
dedicated_management_public_az2_subnet_tag = "acme-test-management-public-az2-subnet"

Option B: Use Existing Management VPC

If you have an existing management VPC with custom tags, configure the template to discover it:

Non-Default Tags Management

Configuration:

enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "my-custom-mgmt-vpc-tag"
dedicated_management_public_az1_subnet_tag = "my-custom-mgmt-public-az1-tag"
dedicated_management_public_az2_subnet_tag = "my-custom-mgmt-public-az2-tag"

The template uses these tags to locate the management VPC and subnets via Terraform data sources.

Behavior When Enabled

When enable_dedicated_management_vpc = true:

  1. Automatic ENI creation: Template creates dedicated management ENI (port2) in management VPC subnets
  2. Implies dedicated management ENI: Automatically sets enable_dedicated_management_eni = true
  3. VPC peering/TGW: Management VPC must have connectivity to inspection VPC for HA sync
  4. Security group creation: Appropriate security groups created for management traffic

Network Connectivity Requirements

Management VPC → Inspection VPC Connectivity:

  • Required for FortiGate HA synchronization between instances
  • Typically implemented via VPC peering or Transit Gateway attachment
  • Must allow TCP port 443 (HA sync), TCP 22 (SSH), ICMP (health checks)

Management VPC → Internet Connectivity:

  • Required for FortiGuard services (signature updates, licensing)
  • Required for administrator access to FortiGate management interfaces
  • Can be via Internet Gateway, NAT Gateway, or AWS Direct Connect
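A minimal sketch of the peering and security-group plumbing these requirements imply is shown below. All variable and resource names are hypothetical, and the template does not create this peering for you; treat it as a starting point, not the implementation.

```hcl
# Illustrative: peer the management VPC to the inspection VPC and allow
# HA sync (TCP 443) between FortiGate instances. Names are hypothetical.
resource "aws_vpc_peering_connection" "mgmt_to_inspection" {
  vpc_id      = var.management_vpc_id # hypothetical variable
  peer_vpc_id = var.inspection_vpc_id # hypothetical variable
  auto_accept = true                  # same-account, same-region peering
}

resource "aws_security_group_rule" "allow_ha_sync" {
  type              = "ingress"
  protocol          = "tcp"
  from_port         = 443
  to_port           = 443
  cidr_blocks       = [var.management_vpc_cidr] # hypothetical variable
  security_group_id = var.fgt_data_sg_id        # hypothetical variable
}
```

Routes for each VPC's subnets must also be added to the relevant route tables, and SSH/ICMP opened as noted above.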

Characteristics

  • Highest security posture: Complete physical isolation
  • Greatest flexibility: Independent infrastructure lifecycle
  • Higher complexity: Requires VPC peering or TGW configuration
  • Additional cost: Separate VPC infrastructure and data transfer charges

When to Use

  • Enterprise production deployments
  • Strict compliance requirements (PCI-DSS, HIPAA, etc.)
  • Multi-account AWS architectures
  • Environments with dedicated management infrastructure
  • Organizations with existing management VPCs for network security appliances

Comparison Matrix

| Factor | Combined (Default) | Dedicated ENI | Dedicated VPC |
|---|---|---|---|
| Security Isolation | Low | Medium | High |
| Complexity | Lowest | Medium | Highest |
| Cost | Lowest | Low | Medium |
| Management Access | Via data plane interface | Via dedicated interface | Via separate VPC |
| Failure Domain Isolation | No | Partial | Complete |
| VPC Peering Required | No | No | Yes |
| Compliance Suitability | Basic | Good | Excellent |
| Best For | Dev/test, simple deployments | Production, security-conscious | Enterprise, compliance-driven |

Decision Tree

Use this decision tree to select the appropriate management isolation level:

1. Is this a production deployment?
   ├─ No → Combined Data + Management (simplest)
   └─ Yes → Continue to question 2

2. Do you have compliance requirements for management plane isolation?
   ├─ No → Dedicated Management ENI (good balance)
   └─ Yes → Continue to question 3

3. Do you have existing management VPC infrastructure?
   ├─ Yes → Dedicated Management VPC (leverage existing)
   └─ No → Evaluate cost/benefit:
       ├─ High security requirements → Dedicated Management VPC
       └─ Moderate requirements → Dedicated Management ENI

Deployment Patterns

Pattern 1: Dedicated ENI + EIP Mode

firewall_policy_mode = "2-arm"
access_internet_mode = "eip"
enable_dedicated_management_eni = true

  • Port2 receives EIP for public management access
  • Suitable for environments without management VPC
  • Simplified deployment with direct internet management access

Pattern 2: Dedicated ENI + Management VPC

firewall_policy_mode = "2-arm"
access_internet_mode = "nat_gw"
enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "my-mgmt-vpc"

  • Port2 connects to separate management VPC
  • Management VPC has dedicated internet gateway or VPN connectivity
  • Preferred for production environments with strict network segmentation

Pattern 3: Combined Management (Default)

firewall_policy_mode = "2-arm"
access_internet_mode = "eip"
enable_dedicated_management_eni = false

  • Port2 remains in data plane
  • Management access shares public interface with egress traffic
  • Simplest configuration but lacks management plane isolation

Best Practices

  1. Enable dedicated management ENI for production: Provides clear separation of concerns
  2. Use dedicated management VPC for enterprise deployments: Optimal security posture
  3. Document connectivity requirements: Ensure operations teams understand access paths
  4. Test connectivity before production: Verify alternative access methods work
  5. Plan for failure scenarios: Ensure backup access methods (SSM, VPN) are available
  6. Use existing_vpc_resources template for management VPC: Separates lifecycle management
  7. Document tag conventions: Ensure consistent tagging across environments
  8. Monitor management interface health: Set up CloudWatch alarms for management connectivity

Troubleshooting

Issue: Cannot access FortiGate management interface

Check:

  1. Security groups allow inbound traffic on management port (443, 22)
  2. Route tables provide path from your location to management interface
  3. If using dedicated management VPC, verify VPC peering or TGW is operational
  4. If using NAT Gateway mode, verify you have alternative access method (VPN, Direct Connect)

Issue: Management interface has no public IP

Cause: Using access_internet_mode = "nat_gw" with dedicated management ENI

Solutions:

  1. Switch to access_internet_mode = "eip" to receive public IP on port2
  2. Enable enable_dedicated_management_vpc = true with separate internet connectivity
  3. Use AWS Systems Manager Session Manager for private access
  4. Configure VPN or Direct Connect for private network access

Issue: HA sync not working with dedicated management VPC

Check:

  1. VPC peering or TGW attachment is configured between management and inspection VPCs
  2. Security groups allow TCP 443 between FortiGate instances
  3. Route tables in both VPCs have routes to each other’s subnets
  4. Network ACLs permit required traffic

Next Steps

After configuring management isolation, proceed to Licensing Options to choose between BYOL, FortiFlex, or PAYG.

Licensing Options

Overview

The FortiGate autoscale solution supports three distinct licensing models, each optimized for different use cases, cost structures, and operational requirements. You can use a single licensing model or combine them in hybrid configurations for optimal cost efficiency.


Licensing Model Comparison

| Factor | BYOL | FortiFlex | PAYG |
|---|---|---|---|
| Total Cost (12 months) | Lowest | Medium | Highest |
| Upfront Investment | High | Medium | None |
| License Management | Manual (files) | API-driven | None |
| Flexibility | Low | High | Highest |
| Capacity Constraints | Yes (license pool) | Soft (point balance) | None |
| Best For | Long-term, predictable | Variable, flexible | Short-term, simple |
| Setup Complexity | Medium | High | Lowest |

Option 1: BYOL (Bring Your Own License)

Overview

BYOL uses traditional FortiGate-VM license files that you purchase from Fortinet or resellers. The template automates license distribution through S3 bucket storage and Lambda-based assignment.

License Directory Structure

Configuration

asg_license_directory = "asg_license"
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4

Directory Structure Requirements

Place BYOL license files in the directory specified by asg_license_directory:

terraform/autoscale_template/
├── terraform.tfvars
└── asg_license/
    ├── FGVM01-001.lic
    ├── FGVM01-002.lic
    ├── FGVM01-003.lic
    └── FGVM01-004.lic

Automated License Assignment

  1. Terraform uploads .lic files to S3 during terraform apply
  2. Lambda retrieves available licenses when instances launch
  3. DynamoDB tracks assignments to prevent duplicates
  4. Lambda injects license via user-data script
  5. Licenses return to pool when instances terminate
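Step 1 of this flow can be pictured as a for_each over the license directory. The following is a simplified sketch of the idea, not the template's actual code; the bucket and resource names are hypothetical.

```hcl
# Illustrative: push every .lic file in the license directory to S3.
resource "aws_s3_object" "byol_license" {
  for_each = fileset(var.asg_license_directory, "*.lic")

  bucket = aws_s3_bucket.license_store.id # hypothetical bucket resource
  key    = "licenses/${each.value}"
  source = "${var.asg_license_directory}/${each.value}"
  etag   = filemd5("${var.asg_license_directory}/${each.value}") # re-upload when a file changes
}
```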

Critical Capacity Planning

Warning

License Pool Exhaustion

Ensure your license directory contains at least as many licenses as asg_byol_asg_max_size.

What happens if licenses are exhausted:

  • New BYOL instances launch but remain unlicensed
  • Unlicensed instances operate at 1 Mbps throughput
  • FortiGuard services will not activate
  • If PAYG ASG is configured, scaling continues using on-demand instances

Recommended: Provision 20% more licenses than max_size

Characteristics

  • ✅ Lowest total cost: Best value for long-term (12+ months)
  • ✅ Predictable costs: Fixed licensing regardless of usage
  • ⚠️ License management: Requires managing license files manually
  • ⚠️ Upfront investment: Must purchase licenses in advance

When to Use

  • Long-term production (12+ months)
  • Predictable, steady-state workloads
  • Existing FortiGate BYOL licenses
  • Cost-conscious deployments

Option 2: FortiFlex (Usage-Based Licensing)

Overview

FortiFlex provides consumption-based, API-driven licensing. Points are consumed daily based on configuration, offering flexibility and cost optimization compared to PAYG.

Prerequisites

  1. Register FortiFlex Program via FortiCare
  2. Purchase Point Packs
  3. Create Configurations in FortiFlex portal
  4. Generate API Credentials via IAM

For detailed setup, see Licensing Section.

Configuration

fortiflex_username      = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
fortiflex_password      = "xxxxxxxxxxxxxxxxxxxxx"
fortiflex_sn_list       = ["FGVMELTMxxxxxxxx"]
fortiflex_configid_list = ["My_4CPU_Config"]

Warning

FortiFlex Serial Number List - Optional

  • If defined: Use entitlements from specific programs only
  • If omitted: Use any available entitlements with matching configurations

Important: Entitlements must be created manually in FortiFlex portal before deployment.

Obtaining Required Values

1. API Username and Password:

  • Navigate to Services > IAM in FortiCare
  • Create permission profile with FortiFlex Read/Write access
  • Create API user and download credentials
  • The username is the UUID shown in the downloaded credentials file

2. Serial Number List:

  • Navigate to Services > Assets & Accounts > FortiFlex
  • View your FortiFlex programs
  • Note serial numbers from program details

3. Configuration ID List:

  • In FortiFlex portal, go to Configurations
  • Configuration ID is the Name field you assigned

Match CPU counts:

fgt_instance_type = "c6i.xlarge"  # 4 vCPUs
fortiflex_configid_list = ["My_4CPU_Config"]  # Must match

Warning

Security Best Practice

Never commit FortiFlex credentials to version control. Use:

  • Terraform Cloud sensitive variables
  • AWS Secrets Manager
  • Environment variables: TF_VAR_fortiflex_username
  • HashiCorp Vault
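With the environment-variable approach, the credentials stay out of terraform.tfvars entirely. Declaring the variables as sensitive (sketched below; the template's actual declarations may differ) also keeps their values out of plan output.

```hcl
# Illustrative variable declarations; values are supplied at runtime via
# TF_VAR_fortiflex_username / TF_VAR_fortiflex_password, never committed.
variable "fortiflex_username" {
  type      = string
  sensitive = true
}

variable "fortiflex_password" {
  type      = string
  sensitive = true
}
```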

Lambda Integration Behavior

At instance launch:

  1. Lambda authenticates to FortiFlex API
  2. Creates new entitlement under specified configuration
  3. Receives and injects license token
  4. Instance activates, point consumption begins

At instance termination:

  1. Lambda calls API to STOP entitlement
  2. Point consumption halts immediately
  3. Entitlement preserved for reactivation

Troubleshooting

Problem: Instances don’t activate license

  • Check Lambda CloudWatch logs for API errors
  • Verify FortiFlex portal for failed entitlements
  • Confirm network connectivity to FortiFlex API

Problem: “Insufficient points” error

  • Check point balance in FortiFlex portal
  • Purchase additional point packs
  • Verify configurations use expected CPU counts

Characteristics

  • ✅ Flexible consumption: Pay only for what you use
  • ✅ No license file management: API-driven automation
  • ✅ Lower cost than PAYG: Typically 20-40% less
  • ⚠️ Point-based: Requires monitoring consumption
  • ⚠️ API credentials: Additional security considerations

When to Use

  • Variable workloads with unpredictable scaling
  • Development and testing
  • Short to medium-term (3-12 months)
  • Burst capacity in hybrid architectures

Option 3: PAYG (Pay-As-You-Go)

Overview

PAYG uses AWS Marketplace on-demand instances, with the license cost included in the hourly EC2 charge.

Configuration

asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 4
asg_ondemand_asg_desired_size = 0

How It Works

  1. Accept FortiGate-VM AWS Marketplace terms
  2. Lambda launches instances using Marketplace AMI
  3. FortiGate activates automatically via AWS
  4. Hourly licensing cost added to EC2 charge

Characteristics

  • ✅ Simplest option: Zero license management
  • ✅ No upfront commitment: Pay per running hour
  • ✅ Instant availability: No license pool constraints
  • ⚠️ Highest hourly cost: Premium pricing for convenience

When to Use

  • Proof-of-concept and evaluation
  • Very short-term (< 3 months)
  • Burst capacity in hybrid architectures
  • Zero license administration requirement

Cost Comparison Example

Scenario: 2 FortiGate-VM instances (c6i.xlarge, 4 vCPU, UTP) running 24/7

| Duration  | BYOL    | FortiFlex | PAYG    | Winner    |
|-----------|---------|-----------|---------|-----------|
| 1 month   | $2,730  | $1,030    | $1,460  | FortiFlex |
| 3 months  | $4,190  | $3,090    | $4,380  | FortiFlex |
| 12 months | $10,760 | $12,360   | $17,520 | BYOL      |
| 24 months | $19,520 | $24,720   | $35,040 | BYOL      |

Note: Illustrative costs. Actual pricing varies by term and bundle.
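The illustrative table is consistent with a simple decomposition: roughly $730/month of EC2 compute for every option, plus a ~$2,000 up-front BYOL term license, ~$300/month of FortiFlex points, or ~$730/month of Marketplace licensing. These rates are reverse-engineered from the example table, not published pricing; a quick sketch:

```python
# Illustrative only: rates derived from the example table above, not published pricing.
COMPUTE_PER_MONTH = 730   # EC2 cost for 2 x c6i.xlarge (assumed)
BYOL_UPFRONT = 2000       # up-front term license for 2 instances (assumed)
FORTIFLEX_PER_MONTH = 300 # monthly FortiFlex point consumption (assumed)
PAYG_PER_MONTH = 730      # monthly Marketplace licensing (assumed)

def total_cost(model: str, months: int) -> int:
    """Total cost of running the example 2-instance deployment for `months`."""
    compute = COMPUTE_PER_MONTH * months
    if model == "byol":
        return BYOL_UPFRONT + compute
    if model == "fortiflex":
        return compute + FORTIFLEX_PER_MONTH * months
    if model == "payg":
        return compute + PAYG_PER_MONTH * months
    raise ValueError(f"unknown model: {model}")

# BYOL overtakes FortiFlex once the up-front license amortizes (around 12 months).
print(total_cost("byol", 12), total_cost("fortiflex", 12), total_cost("payg", 12))  # → 10760 12360 17520
```

The crossover behavior in the table falls out of the model: FortiFlex wins while the BYOL up-front cost dominates, and BYOL wins once it amortizes.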


Hybrid Licensing Strategies

Strategy 1: BYOL Baseline + PAYG Burst

# BYOL for baseline
asg_license_directory = "asg_license"
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4

# PAYG for burst
asg_ondemand_asg_max_size = 4

Best for: Production with occasional spikes

Strategy 2: FortiFlex Baseline + PAYG Burst

# FortiFlex for flexible baseline
fortiflex_configid_list = ["My_4CPU_Config"]
asg_byol_asg_max_size = 4

# PAYG for burst
asg_ondemand_asg_max_size = 4

Best for: Variable workloads with unpredictable spikes

Strategy 3: All BYOL (Cost-Optimized)

asg_license_directory = "asg_license"
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 6
asg_ondemand_asg_max_size = 0

Best for: Stable, predictable workloads

Strategy 4: All PAYG (Simplest)

asg_byol_asg_max_size = 0
asg_ondemand_asg_min_size = 2
asg_ondemand_asg_max_size = 8

Best for: POC, short-term, extreme variability


Decision Tree

1. Expected deployment duration?
   ├─ < 3 months → PAYG
   ├─ 3-12 months → FortiFlex or evaluate costs
   └─ > 12 months → BYOL + PAYG burst

2. Workload predictable?
   ├─ Yes, stable → BYOL
   └─ No, variable → FortiFlex or Hybrid

3. Want to manage license files?
   ├─ No → FortiFlex or PAYG
   └─ Yes, for cost savings → BYOL

4. Tolerance for complexity?
   ├─ Low → PAYG
   ├─ Medium → FortiFlex
   └─ High (cost focus) → BYOL
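The duration branch of the tree above can be encoded as a small helper for runbooks or cost tooling (a sketch that mirrors the thresholds shown; the returned strings are illustrative labels):

```python
def recommend_by_duration(months: int) -> str:
    """Licensing recommendation from expected deployment duration
    (first branch of the decision tree)."""
    if months < 3:
        return "PAYG"
    if months <= 12:
        return "FortiFlex (or evaluate costs)"
    return "BYOL + PAYG burst"

for m in (2, 6, 24):
    print(m, "->", recommend_by_duration(m))
```

The remaining branches (predictability, license-file tolerance, complexity) act as tie-breakers applied after this first cut.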

Best Practices

  1. Calculate TCO: Use comparison matrix for your scenario
  2. Start simple: Begin with PAYG for POC, optimize for production
  3. Monitor costs: Track consumption via CloudWatch and FortiFlex reports
  4. Provision buffer: 20% more licenses/entitlements than max_size
  5. Secure credentials: Never commit FortiFlex credentials to git
  6. Test assignment: Verify Lambda logs show successful injection
  7. Plan exhaustion: Configure PAYG burst as safety net
  8. Document strategy: Ensure ops team understands hybrid configs

Next Steps

After configuring licensing, proceed to FortiManager Integration for centralized management.

FortiManager Integration

Overview

The template supports optional integration with FortiManager for centralized management, policy orchestration, and configuration synchronization across the autoscale group.

Configuration

Enable FortiManager integration by setting the following variables in terraform.tfvars:

enable_fortimanager_integration = true
fortimanager_ip                 = "10.0.100.50"
fortimanager_sn                 = "FMGVM0000000001"
fortimanager_vrf_select         = 1

Variable Definitions

| Variable                        | Type    | Required | Description                                                                      |
|---------------------------------|---------|----------|----------------------------------------------------------------------------------|
| enable_fortimanager_integration | boolean | Yes      | Master switch to enable/disable FortiManager integration                         |
| fortimanager_ip                 | string  | Yes      | FortiManager IP address or FQDN accessible from FortiGate management interfaces  |
| fortimanager_sn                 | string  | Yes      | FortiManager serial number for device registration                               |
| fortimanager_vrf_select         | number  | No       | VRF ID for routing to FortiManager (default: 0 for global VRF)                   |

How FortiManager Integration Works

When enable_fortimanager_integration = true:

  1. Lambda generates FortiOS config: Lambda function creates config system central-management stanza
  2. Primary instance registration: Only the primary FortiGate instance registers with FortiManager
  3. VDOM exception configured: Lambda adds config system vdom-exception to prevent central-management config from syncing to secondaries
  4. Configuration synchronization: Primary instance syncs configuration to secondary instances via FortiGate-native HA sync
  5. Policy deployment: Policies deployed from FortiManager propagate through primary → secondary sync

Generated FortiOS Configuration

Lambda automatically generates the following configuration on the primary instance only:

config system vdom-exception
    edit 0
        set object system.central-management
    next
end

config system central-management
    set type fortimanager
    set fmg 10.0.100.50
    set serial-number FMGVM0000000001
    set vrf-select 1
end

Secondary instances do not receive central-management configuration, preventing:

  • Orphaned device entries on FortiManager during scale-in events
  • Confusion about which instance is authoritative for policy
  • Unnecessary FortiManager license consumption

Network Connectivity Requirements

FortiGate → FortiManager:

  • TCP 541: FGFM protocol (FortiGate-initiated management tunnel to FortiManager)
  • TCP 514 (optional): Syslog, if logging to FortiManager

Administrator → FortiManager:

  • TCP 443: HTTPS access to the FortiManager GUI

Ensure:

  • Security groups allow traffic from FortiGate management interfaces to FortiManager
  • Route tables provide path to FortiManager IP
  • Network ACLs permit required traffic
  • VRF routing configured if using non-default VRF
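As a sketch, the FGFM egress rule could be expressed with a Terraform fragment like the following (the resource name, security-group reference, and FortiManager address are illustrative placeholders, not part of the template):

```hcl
# Illustrative only: allow FGFM (TCP 541) from the FortiGate management
# security group out to the FortiManager address.
resource "aws_security_group_rule" "fgt_to_fmg_fgfm" {
  type              = "egress"
  protocol          = "tcp"
  from_port         = 541
  to_port           = 541
  cidr_blocks       = ["10.0.100.50/32"]             # FortiManager IP from terraform.tfvars
  security_group_id = aws_security_group.fgt_mgmt.id # placeholder reference
}
```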

VRF Selection

The fortimanager_vrf_select parameter specifies which VRF to use for FortiManager connectivity:

Common scenarios:

  • 0 (default): Use global VRF; FortiManager accessible via default routing table
  • 1 or higher: Use specific management VRF; FortiManager accessible via separate routing domain

When to use non-default VRF:

  • FortiManager in separate management VPC requiring VPC peering or TGW
  • Network segmentation requires management traffic in dedicated VRF
  • Multiple VRFs configured and explicit path selection needed

FortiManager 7.6.3+ Critical Requirement

Warning

CRITICAL: FortiManager 7.6.3+ Requires VM Device Recognition

Starting with FortiManager version 7.6.3, VM serial numbers are not recognized by default for security purposes.

If you deploy FortiGate-VM instances with enable_fortimanager_integration = true to a FortiManager 7.6.3 or later WITHOUT enabling VM device recognition, instances will FAIL to register.

Required Configuration on FortiManager 7.6.3+:

Before deploying FortiGate instances, log into FortiManager CLI and enable VM device recognition:

config system global
    set fgfm-allow-vm enable
end

Verify the setting:

show system global | grep fgfm-allow-vm

Important notes:

  • This configuration must be completed BEFORE deploying FortiGate-VM instances
  • When upgrading from FortiManager < 7.6.3, existing managed VM devices continue functioning, but new VM devices cannot be added until fgfm-allow-vm is enabled
  • This setting is global and affects all ADOMs on the FortiManager
  • This is a one-time configuration change per FortiManager instance

Verification after deployment:

  1. Navigate to Device Manager > Device & Groups in FortiManager GUI
  2. Confirm FortiGate-VM instances appear as unauthorized devices (not as errors)
  3. Authorize devices as normal

Troubleshooting if instances fail to register:

  1. Check FortiManager version: get system status
  2. If version is 7.6.3 or later, verify fgfm-allow-vm is enabled
  3. If disabled, enable it and wait 1-5 minutes for FortiGate instances to retry registration
  4. Check FortiManager logs: diagnose debug application fgfmd -1

FortiManager Workflow

After deployment:

  1. Verify device registration:

    • Log into FortiManager GUI
    • Navigate to Device Manager > Device & Groups
    • Confirm primary FortiGate instance appears as unauthorized device
  2. Authorize device:

    • Right-click on unauthorized device
    • Select Authorize
    • Assign to appropriate ADOM and device group
  3. Install policy package:

    • Create or assign policy package to authorized device
    • Click Install to push policies to FortiGate
  4. Verify configuration sync:

    • Make configuration change on FortiManager
    • Install policy package to primary FortiGate
    • Verify change appears on secondary FortiGate instances via HA sync

Best Practices

  1. Pre-configure FortiManager: Create ADOMs, device groups, and policy packages before deploying autoscale group
  2. Test in non-production: Validate FortiManager integration in dev/test environment first
  3. Monitor device status: Set up FortiManager alerts for device disconnections
  4. Document policy workflow: Ensure team understands FortiManager → Primary → Secondary sync pattern
  5. Plan for primary failover: If primary instance fails, new primary automatically registers with FortiManager
  6. Backup FortiManager regularly: Critical single point of management; ensure proper backup strategy

Reference Documentation

For complete FortiManager integration details, including User Managed Scaling (UMS) mode, see the project file: FortiManager Integration Configuration


Next Steps

After configuring FortiManager integration, proceed to Autoscale Group Capacity to configure instance counts and scaling behavior.

Autoscale Group Capacity

Overview

Configure the autoscale group size parameters to define minimum, maximum, and desired instance counts for both BYOL and on-demand (PAYG) autoscale groups.

Configuration

(Figure: Autoscale Group Capacity)

# BYOL ASG capacity
asg_byol_asg_min_size         = 1
asg_byol_asg_max_size         = 2
asg_byol_asg_desired_size     = 1

# On-Demand (PAYG) ASG capacity
asg_ondemand_asg_min_size     = 0
asg_ondemand_asg_max_size     = 2
asg_ondemand_asg_desired_size = 0

Parameter Definitions

| Parameter    | Description                                              | Recommendations                                    |
|--------------|----------------------------------------------------------|----------------------------------------------------|
| min_size     | Minimum number of instances the ASG maintains            | Set to baseline capacity requirement               |
| max_size     | Maximum number of instances the ASG can scale to         | Set based on peak traffic projections + 20% buffer |
| desired_size | Target number of instances the ASG attempts to maintain  | Typically equals min_size for baseline capacity    |

Capacity Planning Strategies

Strategy 1: Hybrid BYOL Baseline + PAYG Burst

Objective: Optimize costs by using BYOL for steady-state traffic and PAYG for unpredictable spikes

# BYOL handles baseline 24/7 traffic
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4
asg_byol_asg_desired_size = 2

# PAYG handles burst traffic only
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 6
asg_ondemand_asg_desired_size = 0

Scaling behavior:

  1. Normal operations: 2 BYOL instances handle traffic
  2. Traffic increases: BYOL ASG scales up to 4 instances
  3. Traffic continues increasing: PAYG ASG scales from 0 → 6 instances
  4. Traffic decreases: PAYG ASG scales down to 0, then BYOL ASG scales down to 2

Strategy 2: All PAYG (Simplest)

Objective: Maximum flexibility with zero license management overhead

# No BYOL instances
asg_byol_asg_min_size = 0
asg_byol_asg_max_size = 0
asg_byol_asg_desired_size = 0

# All capacity is PAYG
asg_ondemand_asg_min_size = 2
asg_ondemand_asg_max_size = 8
asg_ondemand_asg_desired_size = 2

Use cases:

  • Proof of concept or testing
  • Short-term projects (< 6 months)
  • Extreme variability where license planning is impractical

Strategy 3: All BYOL (Lowest Cost)

Objective: Minimum operating costs for long-term, predictable workloads

# All capacity is BYOL
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 6
asg_byol_asg_desired_size = 2

# No PAYG instances
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 0
asg_ondemand_asg_desired_size = 0

Requirements:

  • Sufficient BYOL licenses for max_size (6 in this example)
  • Predictable traffic patterns that rarely exceed max capacity
  • Willingness to accept capacity ceiling (no burst beyond BYOL max)

CloudWatch Alarm Integration

Autoscale group scaling is triggered by CloudWatch alarms monitoring CPU utilization:

Default thresholds (set in underlying module):

  • Scale-out alarm: CPU > 70% for 2 consecutive periods (2 minutes)
  • Scale-in alarm: CPU < 30% for 2 consecutive periods (2 minutes)

Customization (requires editing underlying module):

# Located in module: fortinetdev/cloud-modules/aws
scale_out_threshold = 80  # Higher threshold = instances added later (fewer instances, lower cost)
scale_in_threshold  = 20  # Lower threshold = instances removed later (more capacity headroom, higher cost)

Capacity Planning Calculator

Formula: Capacity Needed = (Peak Gbps Throughput) / (Per-Instance Gbps) × 1.2

Example:

  • Peak throughput requirement: 8 Gbps
  • c6i.xlarge (4 vCPU) with IPS enabled: ~2 Gbps per instance
  • Calculation: 8 / 2 × 1.2 = 4.8 → round up to 5 instances
  • Set max_size = 5 or higher for safety margin
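The same calculation in Python, with the 20% buffer and the round-up applied (per-instance throughput figures are estimates; confirm against the FortiGate-VM datasheet for your instance type and enabled features):

```python
import math

def instances_needed(peak_gbps: float, per_instance_gbps: float, buffer: float = 1.2) -> int:
    """Capacity Needed = (Peak Gbps / Per-Instance Gbps) x buffer, rounded up."""
    return math.ceil(peak_gbps / per_instance_gbps * buffer)

# Worked example from above: 8 Gbps peak, ~2 Gbps per c6i.xlarge with IPS.
print(instances_needed(8, 2))  # → 5
```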

Important Considerations

Tip

Testing Capacity Settings

For initial deployments and testing:

  1. Start with min_size = 1 and max_size = 2 to verify traffic flows correctly
  2. Test scaling by generating load and monitoring ASG behavior
  3. Once validated, increase capacity to production values via AWS Console or Terraform update
  4. No need to destroy/recreate stack just to change capacity settings

Next Steps

After configuring capacity, proceed to Primary Scale-In Protection to protect the primary instance from being terminated during scale-in events.

Primary Scale-In Protection

Overview

Protect the primary FortiGate instance from scale-in events to maintain configuration synchronization stability and prevent unnecessary primary elections.

Configuration

(Figure: Scale-in Protection)

primary_scalein_protection = true

Why Protect the Primary Instance?

In FortiGate autoscale architecture:

  • Primary instance: Elected leader responsible for configuration management and HA sync
  • Secondary instances: Receive configuration from primary via FortiGate-native HA synchronization

Without scale-in protection:

  1. AWS autoscaling may select primary instance for termination during scale-in
  2. Remaining instances must elect new primary
  3. Configuration may be temporarily unavailable during election
  4. Potential for configuration loss if primary was processing updates

With scale-in protection:

  1. AWS autoscaling only terminates secondary instances
  2. Primary instance remains stable unless it is the last instance
  3. Configuration synchronization continues uninterrupted
  4. Predictable autoscale group behavior

How It Works

The primary_scalein_protection variable is passed through to the autoscale group configuration:

(Figure: Scale-in Passthru 1)

In the underlying Terraform module (autoscale_group.tf):

(Figure: Scale-in Passthru 2)

AWS autoscaling respects the protection attribute and never selects protected instances for scale-in events.


Verification

You can verify scale-in protection in the AWS Console:

  1. Navigate to EC2 > Auto Scaling Groups
  2. Select your autoscale group
  3. Click Instance management tab
  4. Look for Scale-in protection column showing “Protected” for primary instance

When Protection is Removed

Scale-in protection no longer applies when:

  • The instance is the last remaining instance in the ASG (respecting min_size)
  • The instance is manually terminated via AWS Console or API (protection only guards against scale-in selection)
  • The autoscale group is deleted

Best Practices

  1. Always enable for production: Set primary_scalein_protection = true for production deployments
  2. Consider disabling for dev/test: Development environments may not require protection
  3. Monitor primary health: Protected instances still fail health checks and can be replaced
  4. Document protection status: Ensure operations teams understand why primary instance is protected

AWS Documentation Reference

For more information on AWS autoscaling instance protection:


Next Steps

After configuring primary protection, review Additional Configuration Options for fine-tuning instance specifications and advanced settings.

Additional Configuration Options

Overview

This section covers additional configuration options for fine-tuning FortiGate instance specifications and advanced deployment settings.


FortiGate Instance Specifications

Instance Type Selection

fgt_instance_type = "c7gn.xlarge"

Instance type selection considerations:

  • c6i/c7i series: Intel-based compute-optimized (best for x86 workloads)
  • c6g/c7g/c7gn series: AWS Graviton (ARM-based, excellent performance)
  • Sizing: Choose vCPU count matching expected throughput requirements

Common instance types for FortiGate:

| Instance Type | vCPUs | Memory | Network Performance | Best For                         |
|---------------|-------|--------|---------------------|----------------------------------|
| c6i.large     | 2     | 4 GB   | Up to 12.5 Gbps     | Small deployments, dev/test      |
| c6i.xlarge    | 4     | 8 GB   | Up to 12.5 Gbps     | Standard production workloads    |
| c6i.2xlarge   | 8     | 16 GB  | Up to 12.5 Gbps     | High-throughput environments     |
| c7gn.xlarge   | 4     | 8 GB   | Up to 30 Gbps       | High-performance networking      |
| c7gn.2xlarge  | 8     | 16 GB  | Up to 30 Gbps       | Very high-performance networking |

FortiOS Version

fortios_version = "7.4.5"

Version specification options:

  • Exact version (e.g., "7.4.5"): Pin to specific version for consistency across environments
  • Major version (e.g., "7.4"): Automatically use latest minor version within major release
  • Latest: Omit or use "latest" to always deploy newest available version

Recommendations:

  • Production: Use exact version numbers to prevent unexpected changes
  • Dev/Test: Use major version or latest to test new features and fixes
  • Always test new FortiOS versions in non-production before upgrading production deployments

Version considerations:

  • Newer versions may include critical security fixes
  • Performance improvements and new features
  • Potential breaking changes in configuration syntax
  • Always review release notes before upgrading

FortiGate GUI Port

fortigate_gui_port = 443

Common options:

  • 443 (default): Standard HTTPS port
  • 8443: Alternate HTTPS port (some organizations prefer moving GUI off default port for security)
  • 10443: Another common alternate port

When changing the GUI port:

  • Update security group rules to allow traffic to new port
  • Update documentation and runbooks with new port
  • Existing GUI sessions will be dropped when the port changes
  • Coordinate change with operations team

Gateway Load Balancer Cross-Zone Load Balancing

allow_cross_zone_load_balancing = true

Enabled (true)

  • GWLB distributes traffic to healthy FortiGate instances in any Availability Zone
  • Better utilization of capacity during partial AZ failures
  • Improved overall availability and fault tolerance
  • Traffic can flow to any healthy instance regardless of AZ

Disabled (false)

  • GWLB only distributes traffic to instances in same AZ as GWLB endpoint
  • Traffic remains within single AZ (lowest latency)
  • Reduced capacity during AZ-specific health issues
  • Must maintain sufficient capacity in each AZ independently

Decision Factors

Enable for:

  • Production environments requiring maximum availability
  • Multi-AZ deployments where instance distribution may be uneven
  • Architectures where AZ-level failures must be transparent to applications
  • Workloads where availability is prioritized over lowest latency

Disable for:

  • Workloads with strict latency requirements
  • Architectures with guaranteed even instance distribution across AZs
  • Environments with predictable AZ-local traffic patterns
  • Data residency requirements mandating AZ-local processing

Recommendation: Enable for production deployments to maximize availability and capacity utilization


SSH Key Pair

keypair_name = "my-fortigate-keypair"

Purpose: SSH key pair for emergency CLI access to FortiGate instances

Best practices:

  • Create dedicated key pair for FortiGate instances (separate from application instances)
  • Store private key securely in password manager or AWS Secrets Manager
  • Rotate key pairs periodically (every 6-12 months)
  • Document key pair name and location in runbooks
  • Limit access to private key to authorized personnel only

Creating a key pair:

# Via AWS CLI
aws ec2 create-key-pair --key-name my-fortigate-keypair --query 'KeyMaterial' --output text > my-fortigate-keypair.pem
chmod 400 my-fortigate-keypair.pem

# Or via AWS Console: EC2 > Key Pairs > Create Key Pair

Resource Tagging

resource_tags = {
  Environment = "Production"
  Project     = "FortiGate-Autoscale"
  Owner       = "security-team@example.com"
  CostCenter  = "CC-12345"
}

Common tags to include:

  • Environment: Production, Development, Staging, Test
  • Project: Project or application name
  • Owner: Team or individual responsible for resources
  • CostCenter: For cost allocation and chargeback
  • ManagedBy: Terraform, CloudFormation, etc.
  • CreatedDate: When resources were initially deployed

Benefits of comprehensive tagging:

  • Cost allocation and reporting
  • Resource organization and filtering
  • Access control policies
  • Automation and orchestration
  • Compliance and governance

Summary Checklist

Before proceeding to deployment, verify you’ve configured:

  • Internet Egress: EIP or NAT Gateway mode selected
  • Firewall Architecture: 1-ARM or 2-ARM mode chosen
  • Management Isolation: Dedicated ENI and/or VPC configured (if required)
  • Licensing: BYOL directory populated or FortiFlex configured
  • FortiManager: Integration enabled (if centralized management required)
  • Capacity: ASG min/max/desired sizes set appropriately
  • Primary Protection: Scale-in protection enabled for production
  • Instance Specs: Instance type and FortiOS version selected
  • Additional Options: GUI port, cross-zone LB, key pair, tags configured

Next Steps

You’re now ready to proceed to the Summary page for a complete overview of all solution components, or jump directly to Templates to begin deployment.

Solution Components Summary

Overview

This summary provides a comprehensive reference of all solution components covered in this section, with quick decision guides and configuration references.


Component Quick Reference

1. Internet Egress Options

| Option      | Hourly Cost    | Data Processing | Monthly Cost (2 AZs) | Source IP | Best For                 |
|-------------|----------------|-----------------|----------------------|-----------|--------------------------|
| EIP Mode    | $0.005/IP      | None            | ~$7.20               | Variable  | Cost-sensitive, dev/test |
| NAT Gateway | $0.045/NAT × 2 | $0.045/GB       | ~$65 base + data†    | Stable    | Production, compliance   |

† Data processing example: 1 TB/month = $45 additional cost
Total NAT Gateway cost estimate: $65 (base) + $45 (1 TB data) = $110/month for 2 AZs with 1 TB egress

access_internet_mode = "eip"  # or "nat_gw"

Key Decision: Do you need predictable source IPs for allowlisting?

  • Yes → NAT Gateway (stable IPs, higher cost)
  • No → EIP (variable IPs, lower cost)

2. Firewall Architecture

| Mode  | Interfaces    | Complexity | Best For                       |
|-------|---------------|------------|--------------------------------|
| 2-ARM | port1 + port2 | Higher     | Production, clear segmentation |
| 1-ARM | port1 only    | Lower      | Simplified routing             |

firewall_policy_mode = "2-arm"  # or "1-arm"

3. Management Isolation

Three progressive levels:

  1. Combined (Default): Port2 serves data + management
  2. Dedicated ENI: Port2 dedicated to management only
  3. Dedicated VPC: Complete physical network separation

enable_dedicated_management_eni = true
enable_dedicated_management_vpc = true

4. Licensing Options

| Model     | Best For               | Cost (12 months) | Management    |
|-----------|------------------------|------------------|---------------|
| BYOL      | Long-term, predictable | Lowest           | License files |
| FortiFlex | Variable, flexible     | Medium           | API-driven    |
| PAYG      | Short-term, simple     | Highest          | None required |

Hybrid Strategy (Recommended): BYOL baseline + PAYG burst


5. FortiManager Integration

enable_fortimanager_integration = true
fortimanager_ip                 = "10.0.100.50"
fortimanager_sn                 = "FMGVM0000000001"

⚠️ Critical: FortiManager 7.6.3+ requires fgfm-allow-vm enabled before deployment


6. Autoscale Group Capacity

asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4
asg_ondemand_asg_max_size = 4

Formula: Capacity = (Peak Gbps / Per-Instance Gbps) × 1.2


7. Primary Scale-In Protection

primary_scalein_protection = true

Always enable for production to prevent primary instance termination during scale-in.


8. Additional Configuration

fgt_instance_type               = "c6i.xlarge"
fortios_version                 = "7.4.5"
fortigate_gui_port              = 443
allow_cross_zone_load_balancing = true
keypair_name                    = "my-fortigate-keypair"

Common Deployment Patterns

Pattern 1: Production with Maximum Isolation

access_internet_mode = "nat_gw"
firewall_policy_mode = "2-arm"
enable_dedicated_management_eni = true
enable_dedicated_management_vpc = true
asg_license_directory = "asg_license"
enable_fortimanager_integration = true
primary_scalein_protection = true

Use case: Enterprise production, compliance-driven


Pattern 2: Development and Testing

access_internet_mode = "eip"
firewall_policy_mode = "1-arm"
asg_ondemand_asg_min_size = 1
asg_ondemand_asg_max_size = 2
enable_fortimanager_integration = false

Use case: Development, testing, POC


Pattern 3: Balanced Production

access_internet_mode = "nat_gw"
firewall_policy_mode = "2-arm"
enable_dedicated_management_eni = true
fortiflex_username = "your-api-username"
enable_fortimanager_integration = true
primary_scalein_protection = true

Use case: Standard production, flexible licensing


Decision Tree

1. Do you need predictable source IPs for allowlisting?
   ├─ Yes → NAT Gateway (~$110/month for 2 AZs + 1TB data)
   └─ No → EIP (~$7/month)

2. Dedicated management interface?
   ├─ Yes → 2-ARM + Dedicated ENI
   └─ No → 1-ARM

3. Complete management isolation?
   ├─ Yes → Dedicated Management VPC
   └─ No → Dedicated ENI or skip

4. Licensing model?
   ├─ Long-term (12+ months) → BYOL
   ├─ Variable workload → FortiFlex
   ├─ Short-term (< 3 months) → PAYG
   └─ Best optimization → BYOL + PAYG hybrid

5. Centralized policy management?
   ├─ Yes → Enable FortiManager
   └─ No → Standalone

6. Production deployment?
   ├─ Yes → Enable primary scale-in protection
   └─ No → Optional

Pre-Deployment Checklist

Infrastructure:

  • AWS account with permissions
  • VPC architecture designed
  • Subnet CIDR planning complete
  • Transit Gateway configured (if needed)

Licensing:

  • BYOL: License files ready (≥ max_size)
  • FortiFlex: Program registered, API credentials
  • PAYG: Marketplace subscription accepted

FortiManager (if applicable):

  • FortiManager deployed and accessible
  • FortiManager 7.6.3+: fgfm-allow-vm enabled
  • ADOMs and device groups created
  • Network connectivity verified

Configuration:

  • terraform.tfvars populated
  • SSH key pair created
  • Resource tags defined
  • Instance type selected

Troubleshooting Quick Reference

| Issue                           | Check                                    |
|---------------------------------|------------------------------------------|
| No internet connectivity        | Route tables, IGW, NAT GW, EIP           |
| Management inaccessible         | Security groups, routing, EIP            |
| License not activating          | Lambda logs, S3, DynamoDB, FortiFlex API |
| FortiManager registration fails | fgfm-allow-vm, network, serial number    |
| Scaling not working             | CloudWatch alarms, ASG health checks     |
| Primary terminated              | Verify protection enabled                |

Next Steps

Proceed to Templates for step-by-step deployment procedures.


Additional Resources

Templates

Deployment Templates

The FortiGate Autoscale Simplified Template provides modular Terraform templates for deploying autoscale architectures in AWS. This section covers both templates and their integration patterns.

Available Templates

Templates Overview

Understand the template architecture, choose deployment patterns, and learn how templates work together.

existing_vpc_resources Template (Run First)

Create the Inspection VPC plus optional supporting infrastructure for lab and test environments, including a management VPC, Transit Gateway, and spoke VPCs with traffic generators.

autoscale_template (Required)

Deploy the core FortiGate autoscale infrastructure including inspection VPC, Gateway Load Balancer, and FortiGate autoscale groups.


Quick Start Paths

For Lab/Test Environments

  1. Start with Templates Overview to understand architecture
  2. Deploy existing_vpc_resources for complete test environment
  3. Deploy autoscale_template connected to created resources
  4. Time: ~30-40 minutes

For Production Deployments

  1. Review Templates Overview for integration patterns
  2. Deploy existing_vpc_resources with only the Inspection VPC enabled, or apply Fortinet-Role tags to your existing infrastructure
  3. Deploy autoscale_template to the tagged Inspection VPC
  4. Time: ~15-20 minutes

Template Coordination

When using both templates together, ensure these variables match exactly:

  • aws_region
  • availability_zone_1 and availability_zone_2
  • cp (customer prefix)
  • env (environment)
  • vpc_cidr_management
  • vpc_cidr_spoke

See Templates Overview for detailed coordination requirements.
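For example, the two templates' terraform.tfvars files could share an identical block like this (all values illustrative):

```hcl
# Must match exactly in existing_vpc_resources and autoscale_template tfvars
aws_region          = "us-west-2"
availability_zone_1 = "us-west-2a"
availability_zone_2 = "us-west-2b"
cp                  = "acme"
env                 = "prod"
vpc_cidr_management = "10.0.0.0/16"
vpc_cidr_spoke      = "10.1.0.0/16"
```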


What’s Next?

Subsections of Templates

Templates Overview

Introduction

The FortiGate Autoscale Simplified Template consists of two complementary Terraform templates that work together to deploy a complete FortiGate autoscale architecture in AWS:

  1. existing_vpc_resources (Required First): Creates the Inspection VPC and supporting infrastructure with Fortinet-Role tags for resource discovery
  2. autoscale_template (Required Second): Deploys the FortiGate autoscale group into the existing Inspection VPC

Warning

Important Workflow Change

The autoscale_template now deploys into existing VPCs rather than creating them. You must run existing_vpc_resources first to create the Inspection VPC with proper Fortinet-Role tags, then run autoscale_template to deploy the FortiGate autoscale group.

This modular approach allows you to:

  • Separate VPC infrastructure from FortiGate deployment for better lifecycle management
  • Use tag-based resource discovery for flexible integration
  • Create a complete lab environment including management VPC, Transit Gateway, and spoke VPCs with traffic generators
  • Mix and match components based on your specific requirements

Template Architecture

Component Relationships

┌─────────────────────────────────────────────────────────────────┐
│ existing_vpc_resources Template (Run First)                     │
│                                                                 │
│  ┌──────────────────┐    ┌──────────────────┐                   │
│  │ Management VPC   │    │ Transit Gateway  │                   │
│  │ - FortiManager   │    │ - Spoke VPCs     │                   │
│  │ - FortiAnalyzer  │    │ - Linux Instances│                   │
│  │ - Jump Box       │    │ - Test Traffic   │                   │
│  └──────────────────┘    └──────────────────┘                   │
│          │                       │                              │
│          └───────────┬───────────┘                              │
│                      │                                          │
│  ┌───────────────────▼───────────────────┐                      │
│  │ Inspection VPC (with Fortinet-Role    │                      │
│  │ tags for resource discovery)          │                      │
│  │ - Public/Private/GWLBE Subnets        │                      │
│  │ - Route Tables, IGW, NAT GW           │                      │
│  │ - TGW Attachment (optional)           │                      │
│  └───────────────────────────────────────┘                      │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
                       │ (Fortinet-Role tag discovery)
┌──────────────────────┼──────────────────────────────────────────┐
│ autoscale_template (Run Second)                                 │
│                      │                                          │
│  ┌───────────────────▼─────────────────┐                        │
│  │ Deploys INTO Inspection VPC         │                        │
│  │ - FortiGate Autoscale Group         │                        │
│  │ - Gateway Load Balancer             │                        │
│  │ - GWLB Endpoints                    │                        │
│  │ - Lambda Functions                  │                        │
│  │ - Route modifications               │                        │
│  └─────────────────────────────────────┘                        │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

Fortinet-Role Tag Discovery

The autoscale_template discovers existing resources using Fortinet-Role tags. This tag-based approach provides:

  • Decoupled lifecycle management: VPC infrastructure can persist while FortiGate deployments are updated
  • Flexible integration: Works with any VPC that has the correct tags, not just those created by existing_vpc_resources
  • Clear resource ownership: Tags explicitly identify resources intended for FortiGate integration

Quick Decision Tree

Use this decision tree to determine your deployment approach:

1. Do you have existing VPCs with Fortinet-Role tags?
   ├─ YES → Deploy autoscale_template only
   │         (Resources discovered via Fortinet-Role tags)
   └─ NO → Continue to question 2

2. Do you need a complete lab environment for testing?
   ├─ YES → Deploy existing_vpc_resources (all components)
   │         Then deploy autoscale_template
   │         See: Lab Environment Pattern
   └─ NO → Continue to question 3

3. Do you need centralized management (FortiManager/FortiAnalyzer)?
   ├─ YES → Deploy existing_vpc_resources (with management VPC)
   │         Then deploy autoscale_template
   │         See: Management VPC Pattern
   └─ NO → Deploy existing_vpc_resources (inspection VPC only)
           Then deploy autoscale_template
           See: Minimal Deployment Pattern

Info

Key Point: The autoscale_template always requires an existing Inspection VPC with Fortinet-Role tags. Use existing_vpc_resources to create this infrastructure, or manually tag your existing VPCs.


Template Comparison

| Aspect | existing_vpc_resources | autoscale_template |
|---|---|---|
| Required? | Yes (creates Inspection VPC) | Yes (deploys FortiGate) |
| Run Order | First | Second |
| Purpose | VPC infrastructure with Fortinet-Role tags | FortiGate autoscale deployment |
| Creates | Inspection VPC, Management VPC, TGW, Spoke VPCs | FortiGate ASG, GWLB, Lambda, route modifications |
| Discovery | N/A (creates resources) | Uses Fortinet-Role tags |
| Cost | VPC infrastructure costs | FortiGate instance costs |
| Lifecycle | Persistent infrastructure | Can be redeployed independently |
| Production Use | Yes (or tag existing VPCs) | Always |

Common Integration Patterns

Pattern 1: Complete Lab Environment

Use case: Full-featured testing environment with management and traffic generation

Templates needed:

  1. ✅ existing_vpc_resources (with all components enabled including Inspection VPC)
  2. ✅ autoscale_template (deploys into Inspection VPC via Fortinet-Role tags)

What you get:

  • Inspection VPC with Fortinet-Role tags for resource discovery
  • Management VPC with FortiManager, FortiAnalyzer, and Jump Box
  • Transit Gateway with spoke VPCs
  • Linux instances for traffic generation
  • FortiGate autoscale group with GWLB
  • Complete end-to-end testing environment

Estimated cost: ~$300-400/month for complete lab

Deployment time: ~25-30 minutes

Next steps: Lab Environment Workflow


Pattern 2: Production Integration (Existing VPCs)

Use case: Deploy FortiGate inspection to existing production infrastructure

Templates needed:

  1. ⚠️ Manual tagging of existing VPCs with Fortinet-Role tags, OR
  2. ✅ existing_vpc_resources (inspection VPC only, to create properly tagged infrastructure)
  3. ✅ autoscale_template (discovers resources via Fortinet-Role tags)

Prerequisites:

  • Existing VPCs must have Fortinet-Role tags (see Required Tags)
  • OR use existing_vpc_resources to create new Inspection VPC with correct tags
  • Network connectivity established

What you get:

  • FortiGate autoscale group with GWLB deployed into existing/tagged VPC
  • Integration with existing Transit Gateway
  • Tag-based resource discovery for flexibility

Estimated cost: ~$150-250/month (FortiGates only, plus any new VPC infrastructure)

Deployment time: ~15-20 minutes (plus tagging time if manual)

Next steps: Production Integration Workflow


Pattern 3: Management VPC Only

Use case: Testing FortiManager/FortiAnalyzer integration without spoke VPCs

Templates needed:

  1. ✅ existing_vpc_resources (Inspection VPC + management VPC components)
  2. ✅ autoscale_template (with FortiManager integration enabled)

What you get:

  • Inspection VPC with Fortinet-Role tags
  • Dedicated management VPC with FortiManager and FortiAnalyzer
  • FortiGate autoscale group managed by FortiManager
  • No Transit Gateway or spoke VPCs

Estimated cost: ~$300/month

Deployment time: ~20-25 minutes

Next steps: Management VPC Workflow


Pattern 4: Minimal Inspection VPC Only

Use case: Simplest deployment for testing FortiGate autoscale

Templates needed:

  1. ✅ existing_vpc_resources (Inspection VPC only)
  2. ✅ autoscale_template (without TGW attachment)

Configuration:

# existing_vpc_resources
enable_build_inspection_vpc   = true
enable_build_management_vpc   = false
enable_build_existing_subnets = false

What you get:

  • Inspection VPC with Fortinet-Role tags
  • FortiGate autoscale group with GWLB
  • No management infrastructure or spoke VPCs

Estimated cost: ~$150-200/month

Deployment time: ~15 minutes

Next steps: Minimal Deployment Workflow


Required Fortinet-Role Tags

The autoscale_template discovers existing resources using Fortinet-Role tags. These tags are automatically created by existing_vpc_resources, or you can manually apply them to existing VPCs.

Required Tags for Inspection VPC

| Resource Type | Fortinet-Role Tag Value | Required |
|---|---|---|
| VPC | {cp}-{env}-inspection-vpc | Yes |
| Internet Gateway | {cp}-{env}-inspection-igw | Yes |
| Public Subnet AZ1 | {cp}-{env}-inspection-public-az1 | Yes |
| Public Subnet AZ2 | {cp}-{env}-inspection-public-az2 | Yes |
| GWLBE Subnet AZ1 | {cp}-{env}-inspection-gwlbe-az1 | Yes |
| GWLBE Subnet AZ2 | {cp}-{env}-inspection-gwlbe-az2 | Yes |
| Private Subnet AZ1 | {cp}-{env}-inspection-private-az1 | Yes |
| Private Subnet AZ2 | {cp}-{env}-inspection-private-az2 | Yes |
| Public Route Table AZ1 | {cp}-{env}-inspection-public-rt-az1 | Yes |
| Public Route Table AZ2 | {cp}-{env}-inspection-public-rt-az2 | Yes |
| GWLBE Route Table AZ1 | {cp}-{env}-inspection-gwlbe-rt-az1 | Yes |
| GWLBE Route Table AZ2 | {cp}-{env}-inspection-gwlbe-rt-az2 | Yes |
| Private Route Table AZ1 | {cp}-{env}-inspection-private-rt-az1 | Yes |
| Private Route Table AZ2 | {cp}-{env}-inspection-private-rt-az2 | Yes |
| NAT Gateway AZ1 | {cp}-{env}-inspection-natgw-az1 | If nat_gw mode |
| NAT Gateway AZ2 | {cp}-{env}-inspection-natgw-az2 | If nat_gw mode |
| Mgmt Subnet AZ1 | {cp}-{env}-inspection-management-az1 | If dedicated mgmt ENI |
| Mgmt Subnet AZ2 | {cp}-{env}-inspection-management-az2 | If dedicated mgmt ENI |
| Mgmt Route Table AZ1 | {cp}-{env}-inspection-management-rt-az1 | If dedicated mgmt ENI |
| Mgmt Route Table AZ2 | {cp}-{env}-inspection-management-rt-az2 | If dedicated mgmt ENI |
| TGW Attachment | {cp}-{env}-inspection-tgw-attachment | If TGW enabled |
| TGW Route Table | {cp}-{env}-inspection-tgw-rtb | If TGW enabled |

Example: For cp="acme" and env="test", the VPC tag would be acme-test-inspection-vpc

Optional Tags for Management VPC

| Resource Type | Fortinet-Role Tag Value | Required |
|---|---|---|
| VPC | {cp}-{env}-management-vpc | If dedicated mgmt VPC |
| Public Subnet AZ1 | {cp}-{env}-management-public-az1 | If dedicated mgmt VPC |
| Public Subnet AZ2 | {cp}-{env}-management-public-az2 | If dedicated mgmt VPC |

Deployment Workflows

Lab Environment Workflow

Objective: Create complete testing environment from scratch

# Step 1: Deploy existing_vpc_resources (creates Inspection VPC with Fortinet-Role tags)
cd terraform/existing_vpc_resources
cp terraform.tfvars.example terraform.tfvars
# Edit: Enable all components:
#   enable_build_inspection_vpc   = true
#   enable_build_management_vpc   = true
#   enable_build_existing_subnets = true
#   enable_fortimanager           = true
#   enable_fortianalyzer          = true
terraform init && terraform apply

# Step 2: Note outputs (Fortinet-Role tags created automatically)
terraform output  # Save TGW name and FortiManager IP

# Step 3: Deploy autoscale_template (discovers VPCs via Fortinet-Role tags)
cd ../autoscale_template
cp terraform.tfvars.example terraform.tfvars
# Edit: Use SAME cp and env values (critical for tag discovery)
#       Set attach_to_tgw_name from Step 2 output
#       Configure FortiManager integration
terraform init && terraform apply

# Step 4: Verify
ssh -i ~/.ssh/keypair.pem ec2-user@<jump-box-ip>
curl http://<linux-instance-ip>  # Test connectivity

Time to complete: 30-40 minutes

Warning

Critical: The cp and env variables must match between both templates for Fortinet-Role tag discovery to work.

See detailed guide: existing_vpc_resources Template


Production Integration Workflow

Objective: Deploy FortiGate inspection into existing or new Inspection VPC

Option A: Tag Existing VPCs Manually

If you have existing VPCs you want to use:

  1. Apply Fortinet-Role tags to your existing VPC resources (see Required Tags)
  2. Deploy autoscale_template with matching cp and env values
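
If you prefer to manage the tags as code, step 1 can be sketched with the AWS provider's aws_ec2_tag resource. This is an illustration only; the VPC ID and resource name below are placeholders, and each required resource (subnets, route tables, IGW, etc.) needs its own tag:

```hcl
# Sketch: apply a Fortinet-Role tag to an existing VPC so autoscale_template
# can discover it. The resource ID is a placeholder -- substitute your own,
# and repeat for every resource listed in the Required Tags tables.
resource "aws_ec2_tag" "inspection_vpc_role" {
  resource_id = "vpc-0123456789abcdef0"     # existing Inspection VPC (placeholder)
  key         = "Fortinet-Role"
  value       = "acme-test-inspection-vpc"  # must follow "{cp}-{env}-inspection-vpc"
}
```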

Option B: Create New Inspection VPC (Recommended)

# Step 1: Deploy existing_vpc_resources (Inspection VPC only)
cd terraform/existing_vpc_resources
cp terraform.tfvars.example terraform.tfvars
# Edit:
#   enable_build_inspection_vpc   = true
#   enable_build_management_vpc   = false
#   enable_build_existing_subnets = true  # if TGW needed
#   attach_to_tgw_name            = "production-tgw"  # existing TGW
terraform init && terraform apply

# Step 2: Deploy autoscale_template
cd ../autoscale_template
cp terraform.tfvars.example terraform.tfvars
# Edit: Use SAME cp and env values
#       Set attach_to_tgw_name to production TGW
#       Configure production-appropriate capacity
terraform init && terraform apply

# Step 3: Update TGW route tables (if needed)
# Route spoke VPC traffic (0.0.0.0/0) to inspection VPC attachment

# Step 4: Test and validate
# Verify traffic flows through FortiGate
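
Step 3 can also be codified rather than done in the console. A hedged Terraform sketch (both IDs below are placeholders for your environment):

```hcl
# Sketch: default-route spoke traffic to the Inspection VPC attachment in a
# TGW spoke route table. Substitute real route table and attachment IDs.
resource "aws_ec2_transit_gateway_route" "spoke_to_inspection" {
  destination_cidr_block         = "0.0.0.0/0"
  transit_gateway_route_table_id = "tgw-rtb-0123456789abcdef0"    # spoke route table (placeholder)
  transit_gateway_attachment_id  = "tgw-attach-0123456789abcdef0" # inspection attachment (placeholder)
}
```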

Time to complete: 20-30 minutes

See detailed guide: autoscale_template


Management VPC Workflow

Objective: Deploy management infrastructure with FortiManager/FortiAnalyzer

# Step 1: Deploy existing_vpc_resources (Inspection + Management VPCs)
cd terraform/existing_vpc_resources
cp terraform.tfvars.example terraform.tfvars
# Edit:
#   enable_build_inspection_vpc   = true
#   enable_build_management_vpc   = true
#   enable_fortimanager           = true
#   enable_fortianalyzer          = true
#   enable_build_existing_subnets = false
terraform init && terraform apply

# Step 2: Configure FortiManager
# Access FortiManager GUI: https://<fmgr-ip>
# Enable VM device recognition if FMG 7.6.3+
config system global
    set fgfm-allow-vm enable
end

# Step 3: Deploy autoscale_template
cd ../autoscale_template
cp terraform.tfvars.example terraform.tfvars
# Edit: Use SAME cp and env values
#       enable_fortimanager_integration = true
#       fortimanager_ip = <from Step 1 output>
#       enable_dedicated_management_vpc = true
terraform init && terraform apply

# Step 4: Authorize devices on FortiManager
# Device Manager > Device & Groups
# Right-click unauthorized device > Authorize

Time to complete: 25-35 minutes


Minimal Deployment Workflow

Objective: Deploy FortiGate with minimal infrastructure

# Step 1: Deploy existing_vpc_resources (Inspection VPC only)
cd terraform/existing_vpc_resources
cp terraform.tfvars.example terraform.tfvars
# Edit:
#   enable_build_inspection_vpc   = true
#   enable_build_management_vpc   = false
#   enable_build_existing_subnets = false
#   inspection_access_internet_mode = "eip"  # simpler, lower cost
terraform init && terraform apply

# Step 2: Deploy autoscale_template
cd ../autoscale_template
cp terraform.tfvars.example terraform.tfvars
# Edit: Use SAME cp and env values
#       enable_tgw_attachment = false
#       access_internet_mode = "eip"
terraform init && terraform apply

# Step 3: Note GWLB endpoint IDs for spoke VPC integration
terraform output gwlb_endpoint_az1_id
terraform output gwlb_endpoint_az2_id

# Step 4: Integrate spoke VPCs
# Deploy GWLB endpoints in spoke VPCs
# Update spoke VPC route tables to point to GWLB endpoints
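
The Step 4 route change can be sketched in Terraform using aws_route with vpc_endpoint_id, which is how a route targets a Gateway Load Balancer endpoint (IDs below are placeholders):

```hcl
# Sketch: point a spoke subnet's default route at the GWLB endpoint deployed
# in that spoke VPC. Both IDs are placeholders.
resource "aws_route" "spoke_via_gwlbe" {
  route_table_id         = "rtb-0123456789abcdef0"  # spoke route table (placeholder)
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = "vpce-0123456789abcdef0" # GWLB endpoint (placeholder)
}
```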

Time to complete: 20-25 minutes (plus spoke VPC endpoint deployment)


When to Use Each Template

existing_vpc_resources - Always Required First

The existing_vpc_resources template is required to create the Inspection VPC with proper Fortinet-Role tags. Use it when:

Any new FortiGate autoscale deployment

  • Creates Inspection VPC with all required subnets and tags
  • Can optionally include Management VPC, TGW, and spoke VPCs
  • Provides foundation for autoscale_template deployment

Creating a lab or test environment

  • Enable all components for complete testing environment
  • Includes FortiManager/FortiAnalyzer for management testing
  • Traffic generators in spoke VPCs for load testing

Production deployments with new infrastructure

  • Creates properly tagged VPCs for FortiGate deployment
  • Can attach to existing Transit Gateway
  • Separates VPC lifecycle from FortiGate lifecycle

Alternative to existing_vpc_resources:

⚠️ Manually tag existing VPCs (advanced users only)

  • Apply Fortinet-Role tags to existing VPCs following the tag schema
  • Ensures all required resources (subnets, route tables, IGW, etc.) are properly tagged
  • Useful when you cannot create new VPCs

autoscale_template - Always Required Second

The autoscale_template deploys FortiGate into the existing Inspection VPC:

All FortiGate autoscale deployments

  • Discovers Inspection VPC via Fortinet-Role tags
  • Deploys FortiGate ASG, GWLB, Lambda functions
  • Modifies route tables to enable traffic inspection

Can be redeployed independently

  • Inspection VPC persists between FortiGate redeployments
  • Allows FortiGate version upgrades without VPC changes
  • Simplifies lifecycle management

Template Variable Coordination

When using both templates together, certain variables must match exactly for Fortinet-Role tag discovery to work:

Critical Variables for Tag Discovery

| Variable | Purpose | Impact if Mismatched |
|---|---|---|
| cp (customer prefix) | Fortinet-Role tag prefix | autoscale_template cannot find VPCs |
| env (environment) | Fortinet-Role tag prefix | autoscale_template cannot find VPCs |
| aws_region | AWS region | Resources in wrong region |
| availability_zone_1 | First AZ | Subnet discovery fails |
| availability_zone_2 | Second AZ | Subnet discovery fails |

Warning

Critical: The cp and env variables form the prefix for all Fortinet-Role tags. If these don’t match between templates, the autoscale_template will fail with “no matching VPC/Subnet found” errors.

Example Coordinated Configuration

existing_vpc_resources/terraform.tfvars:

aws_region          = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"
cp                  = "acme"      # Creates tags like "acme-test-inspection-vpc"
env                 = "test"
vpc_cidr_ns_inspection = "10.0.0.0/16"
vpc_cidr_management    = "10.3.0.0/16"

autoscale_template/terraform.tfvars:

aws_region          = "us-west-2"  # MUST MATCH
availability_zone_1 = "a"          # MUST MATCH
availability_zone_2 = "c"          # MUST MATCH
cp                  = "acme"       # MUST MATCH - used for tag lookup
env                 = "test"       # MUST MATCH - used for tag lookup
vpc_cidr_inspection = "10.0.0.0/16"  # Should match existing VPC CIDR
vpc_cidr_management = "10.3.0.0/16"  # Should match if using management VPC

attach_to_tgw_name = "acme-test-tgw"  # Matches cp-env naming convention

How Tag Discovery Works

When autoscale_template runs, it looks up resources like this:

# autoscale_template/vpc_inspection.tf
data "aws_vpc" "inspection" {
  filter {
    name   = "tag:Fortinet-Role"
    values = ["${var.cp}-${var.env}-inspection-vpc"]  # e.g., "acme-test-inspection-vpc"
  }
}

This is why matching cp and env values is essential.
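
Subnets and route tables are matched the same way. A representative lookup for the AZ1 public subnet might look like the following (a sketch modeled on the VPC lookup above; the template's actual data source names may differ):

```hcl
data "aws_subnet" "public_az1" {
  filter {
    name   = "tag:Fortinet-Role"
    values = ["${var.cp}-${var.env}-inspection-public-az1"]  # e.g., "acme-test-inspection-public-az1"
  }
}
```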


Next Steps

Choose your deployment pattern and proceed to the appropriate template guide:

  1. Lab/Test Environment: Start with existing_vpc_resources Template
  2. Production Deployment: Go directly to autoscale_template
  3. Need to review components?: See Solution Components
  4. Need licensing guidance?: See Licensing Options

Summary

The FortiGate Autoscale Simplified Template uses a two-phase deployment approach with Fortinet-Role tag discovery:

| Template | Purpose | Run Order | Creates |
|---|---|---|---|
| existing_vpc_resources | VPC infrastructure | First | Inspection VPC, Management VPC, TGW, Spoke VPCs (with Fortinet-Role tags) |
| autoscale_template | FortiGate deployment | Second | FortiGate ASG, GWLB, Lambda (discovers VPCs via tags) |

Key Principles:

  1. Run existing_vpc_resources first - Creates Inspection VPC with Fortinet-Role tags
  2. Match cp and env values - Critical for tag discovery between templates
  3. autoscale_template deploys into existing VPCs - Does not create VPC infrastructure

Recommended Starting Point:

  • First-time users: Deploy both templates for complete lab environment
  • Production deployments: Use existing_vpc_resources for new Inspection VPC, or manually tag existing VPCs
  • All deployments: Ensure cp and env match between templates

existing_vpc_resources Template

Overview

The existing_vpc_resources template creates the Inspection VPC and supporting infrastructure required for the FortiGate autoscale deployment. All resources are tagged with Fortinet-Role tags that allow the autoscale_template to discover and deploy into them.

Warning

This template must be run BEFORE autoscale_template. The autoscale_template discovers VPCs using Fortinet-Role tags created by this template. If you skip this template, you must manually apply the required tags to your existing VPCs.


What It Creates

Existing Resources Diagram

The template conditionally creates the following components based on boolean variables. All resources are tagged with Fortinet-Role tags for discovery by autoscale_template.

Component Overview

| Component | Purpose | Required | Typical Cost/Month |
|---|---|---|---|
| Inspection VPC | VPC for FortiGate autoscale deployment | Yes | ~$50 (VPC/networking) |
| Management VPC | Centralized management infrastructure | No | ~$50 (VPC/networking) |
| FortiManager | Policy management and orchestration | No | ~$73 (m5.large) |
| FortiAnalyzer | Logging and reporting | No | ~$73 (m5.large) |
| Jump Box | Bastion host for secure access | No | ~$7 (t3.micro) |
| Transit Gateway | Central hub for VPC interconnectivity | No | ~$36 + data transfer |
| Spoke VPCs (East/West) | Simulated workload VPCs | No | ~$50 (networking) |
| Linux Instances | HTTP servers and traffic generators | No | ~$14 (2x t3.micro) |

Total estimated cost for complete lab: ~$300-400/month


Component Details

0. Inspection VPC (Required)

Purpose: The VPC where the FortiGate autoscale group will be deployed by autoscale_template

Configuration variable:

enable_build_inspection_vpc = true

What gets created:

Inspection VPC (10.0.0.0/16)
├── Public Subnet AZ1 (FortiGate login/management)
├── Public Subnet AZ2
├── GWLBE Subnet AZ1 (Gateway Load Balancer Endpoints)
├── GWLBE Subnet AZ2
├── Private Subnet AZ1 (TGW attachment)
├── Private Subnet AZ2
├── Management Subnet AZ1 (optional - dedicated management ENI)
├── Management Subnet AZ2 (optional)
├── Internet Gateway
├── NAT Gateways (optional - if nat_gw mode)
├── Route Tables (per subnet type and AZ)
└── TGW Attachment (optional - if TGW enabled)

Fortinet-Role tags applied (for autoscale_template discovery):

| Resource | Fortinet-Role Tag |
|---|---|
| VPC | {cp}-{env}-inspection-vpc |
| IGW | {cp}-{env}-inspection-igw |
| Public Subnet AZ1 | {cp}-{env}-inspection-public-az1 |
| Public Subnet AZ2 | {cp}-{env}-inspection-public-az2 |
| GWLBE Subnet AZ1 | {cp}-{env}-inspection-gwlbe-az1 |
| GWLBE Subnet AZ2 | {cp}-{env}-inspection-gwlbe-az2 |
| Private Subnet AZ1 | {cp}-{env}-inspection-private-az1 |
| Private Subnet AZ2 | {cp}-{env}-inspection-private-az2 |
| Public RT AZ1 | {cp}-{env}-inspection-public-rt-az1 |
| Public RT AZ2 | {cp}-{env}-inspection-public-rt-az2 |
| GWLBE RT AZ1 | {cp}-{env}-inspection-gwlbe-rt-az1 |
| GWLBE RT AZ2 | {cp}-{env}-inspection-gwlbe-rt-az2 |
| Private RT AZ1 | {cp}-{env}-inspection-private-rt-az1 |
| Private RT AZ2 | {cp}-{env}-inspection-private-rt-az2 |
| NAT GW AZ1 | {cp}-{env}-inspection-natgw-az1 (if nat_gw mode) |
| NAT GW AZ2 | {cp}-{env}-inspection-natgw-az2 (if nat_gw mode) |
| TGW Attachment | {cp}-{env}-inspection-tgw-attachment (if TGW enabled) |
| TGW Route Table | {cp}-{env}-inspection-tgw-rtb (if TGW enabled) |

Example: For cp="acme" and env="test", tags would be acme-test-inspection-vpc, acme-test-inspection-public-az1, etc.

Warning

Critical Variable Coordination

The cp and env values used here must match exactly in autoscale_template for tag discovery to work. Mismatched values will cause autoscale_template to fail with “no matching VPC found” errors.

Inspection VPC Internet Mode

inspection_access_internet_mode = "nat_gw"  # or "eip"

  • nat_gw: Creates NAT Gateways for FortiGate internet access (recommended for production)
  • eip: FortiGates use Elastic IPs directly (simpler, lower cost)

Inspection VPC Dedicated Management ENI

inspection_enable_dedicated_management_eni = true

Creates additional management subnets within the Inspection VPC for dedicated management interfaces on FortiGate instances.


1. Management VPC (Optional)

Purpose: Centralized management infrastructure isolated from production traffic

Components:

  • Dedicated VPC with public and private subnets across two Availability Zones
  • Internet Gateway for external connectivity
  • Security groups for management traffic
  • Fortinet-Role tags for discovery by autoscale_template

Configuration variable:

enable_build_management_vpc = true

What gets created:

Management VPC (10.3.0.0/16)
├── Public Subnet AZ1 (10.3.1.0/24)
├── Public Subnet AZ2 (10.3.2.0/24)
├── Internet Gateway
└── Route Tables

Fortinet-Role tags applied (for autoscale_template discovery):

| Resource | Fortinet-Role Tag |
|---|---|
| VPC | {cp}-{env}-management-vpc |
| Public Subnet AZ1 | {cp}-{env}-management-public-az1 |
| Public Subnet AZ2 | {cp}-{env}-management-public-az2 |

FortiManager (Optional within Management VPC)

Configuration:

enable_fortimanager = true
fortimanager_instance_type = "m5.large"
fortimanager_os_version = "7.4.5"
fortimanager_host_ip = "10"  # Results in 10.3.0.10 with the default 10.3.0.0/16 management CIDR

Access:

  • GUI: https://<FortiManager-Public-IP>
  • SSH: ssh admin@<FortiManager-Public-IP>
  • Default credentials: admin / <instance-id>

Use cases:

  • Testing FortiManager integration with autoscale group
  • Centralized policy management demonstrations
  • Device orchestration testing

FortiAnalyzer (Optional within Management VPC)

Configuration:

enable_fortianalyzer = true
fortianalyzer_instance_type = "m5.large"
fortianalyzer_os_version = "7.4.5"
fortianalyzer_host_ip = "11"  # Results in 10.3.0.11 with the default 10.3.0.0/16 management CIDR

Access:

  • GUI: https://<FortiAnalyzer-Public-IP>
  • SSH: ssh admin@<FortiAnalyzer-Public-IP>
  • Default credentials: admin / <instance-id>

Use cases:

  • Centralized logging for autoscale group
  • Reporting and analytics demonstrations
  • Log retention testing

Jump Box (Optional within Management VPC)

Configuration:

enable_jump_box = true
jump_box_instance_type = "t3.micro"

Access:

ssh -i ~/.ssh/keypair.pem ec2-user@<jump-box-public-ip>

Use cases:

  • Secure access to spoke VPC instances
  • Testing connectivity without FortiGate in path (via debug attachment)
  • Management access to FortiGate private IPs

Management VPC TGW Attachment (Optional)

Configuration:

enable_mgmt_vpc_tgw_attachment = true

Purpose: Connects management VPC to Transit Gateway, allowing:

  • Jump box access to spoke VPC Linux instances
  • FortiManager/FortiAnalyzer access to FortiGate instances via TGW
  • Alternative management access paths

Routing:

  • Management VPC → TGW → Spoke VPCs
  • Can be combined with enable_debug_tgw_attachment for bypass testing

2. Transit Gateway and Spoke VPCs (Optional)

Purpose: Simulates production multi-VPC environment for traffic generation and testing

Configuration variable:

enable_build_existing_subnets = true

What gets created:

Transit Gateway
├── East Spoke VPC (192.168.0.0/24)
│   ├── Public Subnet AZ1
│   ├── Private Subnet AZ1
│   ├── NAT Gateway (optional)
│   └── Linux Instance (optional)
├── West Spoke VPC (192.168.1.0/24)
│   ├── Public Subnet AZ1
│   ├── Private Subnet AZ1
│   ├── NAT Gateway (optional)
│   └── Linux Instance (optional)
└── TGW Route Tables
    ├── Spoke-to-Spoke (via inspection VPC)
    └── Inspection-to-Internet

Transit Gateway

Configuration:

# Created automatically when enable_build_existing_subnets = true
# Named: {cp}-{env}-tgw

Purpose:

  • Central hub for VPC interconnectivity
  • Enables centralized egress architecture
  • Allows east-west traffic inspection

Attachments:

  • East Spoke VPC
  • West Spoke VPC
  • Inspection VPC (created by autoscale_template)
  • Management VPC (if enable_mgmt_vpc_tgw_attachment = true)
  • Debug attachment (if enable_debug_tgw_attachment = true)

Spoke VPCs (East and West)

Configuration:

vpc_cidr_east = "192.168.0.0/24"
vpc_cidr_west = "192.168.1.0/24"
vpc_cidr_spoke = "192.168.0.0/16"  # Supernet

Components per spoke VPC:

  • Public and private subnets
  • NAT Gateway for internet egress
  • Route tables for internet and TGW connectivity
  • Security groups for instance access

Linux Instances (Traffic Generators)

Configuration:

enable_east_linux_instances = true
east_linux_instance_type = "t3.micro"

enable_west_linux_instances = true
west_linux_instance_type = "t3.micro"

What they provide:

  • HTTP server on port 80 (for connectivity testing)
  • Internet egress capability (for testing FortiGate inspection)
  • East-West traffic generation between spoke VPCs

Testing with Linux instances:

# From jump box or another instance
curl http://<linux-instance-ip>
# Returns: "Hello from <hostname>"

# Generate internet egress traffic
ssh ec2-user@<linux-instance-ip>
curl http://www.google.com  # Traffic goes through FortiGate

Debug TGW Attachment (Optional)

Configuration:

enable_debug_tgw_attachment = true

Purpose: Creates a bypass attachment from Management VPC directly to Transit Gateway, allowing traffic to flow:

Jump Box → TGW → Spoke VPC Linux Instances (bypassing FortiGate inspection)

Debug path use cases:

  • Validate spoke VPC connectivity independent of FortiGate inspection
  • Compare latency/throughput with and without inspection
  • Troubleshoot routing issues by eliminating FortiGate as variable
  • Generate baseline traffic patterns for capacity planning

Warning

Security Consideration

The debug attachment bypasses FortiGate inspection entirely. Do not enable in production environments. This is strictly for testing and validation purposes.


Configuration Scenarios

Scenario 1: Complete Lab Environment

Use case: Full-featured lab for testing all capabilities

# Inspection VPC (Required)
enable_build_inspection_vpc            = true
inspection_access_internet_mode        = "nat_gw"
inspection_enable_dedicated_management_eni = false

# Management VPC Components
enable_build_management_vpc    = true
enable_fortimanager            = true
enable_fortianalyzer           = true
enable_jump_box                = true
enable_mgmt_vpc_tgw_attachment = true

# Spoke VPC Components
enable_build_existing_subnets  = true
enable_east_linux_instances    = true
enable_west_linux_instances    = true
enable_debug_tgw_attachment    = true

What you get: Complete environment with inspection VPC (with Fortinet-Role tags), management, spoke VPCs, traffic generators, and debug path

Cost: ~$300-400/month

Best for: Training, demonstrations, comprehensive testing


Scenario 2: Inspection + Management VPC Only

Use case: Testing FortiManager/FortiAnalyzer integration without spoke VPCs

# Inspection VPC (Required)
enable_build_inspection_vpc            = true
inspection_access_internet_mode        = "eip"

# Management VPC Components
enable_build_management_vpc    = true
enable_fortimanager            = true
enable_fortianalyzer           = true
enable_jump_box                = false
enable_mgmt_vpc_tgw_attachment = false

# Spoke VPC Components
enable_build_existing_subnets  = false

What you get: Inspection VPC (with Fortinet-Role tags) and Management VPC with FortiManager and FortiAnalyzer

Cost: ~$200/month

Best for: FortiManager/FortiAnalyzer integration testing, centralized management evaluation


Scenario 3: Inspection VPC + Traffic Generation

Use case: Testing autoscale with traffic generators, no management VPC

# Inspection VPC (Required)
enable_build_inspection_vpc            = true
inspection_access_internet_mode        = "nat_gw"

# Management VPC Components
enable_build_management_vpc    = false

# Spoke VPC Components
enable_build_existing_subnets  = true
enable_east_linux_instances    = true
enable_west_linux_instances    = true
enable_debug_tgw_attachment    = false

What you get: Inspection VPC (with Fortinet-Role tags), Transit Gateway, and spoke VPCs with Linux instances

Cost: ~$100-150/month

Best for: Autoscale behavior testing, load testing, capacity planning


Scenario 4: Minimal Inspection VPC Only

Use case: Lowest cost configuration - Inspection VPC only

# Inspection VPC (Required)
enable_build_inspection_vpc            = true
inspection_access_internet_mode        = "eip"  # Lower cost than nat_gw

# Management VPC Components
enable_build_management_vpc    = false

# Spoke VPC Components
enable_build_existing_subnets  = false

What you get: Inspection VPC with Fortinet-Role tags only - minimum required for autoscale_template

Cost: ~$30-50/month (VPC infrastructure only)

Best for: Minimal FortiGate testing, cost-sensitive environments, integration with existing TGW/spoke VPCs


Step-by-Step Deployment

Prerequisites

  • AWS account with appropriate permissions
  • Terraform 1.0 or later installed
  • AWS CLI configured with credentials
  • Git installed
  • SSH keypair created in target AWS region

Step 1: Clone the Repository

Clone the repository containing both templates:

git clone https://github.com/FortinetCloudCSE/Autoscale-Simplified-Template.git
cd Autoscale-Simplified-Template/terraform/existing_vpc_resources

Clone Repository

Step 2: Create terraform.tfvars

Copy the example file and customize:

cp terraform.tfvars.example terraform.tfvars

Step 3: Configure Core Variables

Region and Availability Zones

Region and AZ

aws_region         = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"

Tip

Availability Zone Selection

Choose AZs that:

  • Support your desired instance types
  • Have sufficient capacity
  • Match your production environment (if testing for production)

Verify AZ availability:

aws ec2 describe-availability-zones --region us-west-2

Customer Prefix and Environment

Customer Prefix and Environment

These values are prepended to all resources for identification:

cp  = "acme"    # Customer prefix
env = "test"    # Environment: prod, test, dev

Result: Resources named like acme-test-management-vpc, acme-test-tgw, etc.

Customer Prefix Example

Warning

Critical: Variable Coordination

These cp and env values must match between existing_vpc_resources and autoscale_template for proper resource discovery via tags.

Step 4: Configure Component Flags

Inspection VPC (Required)

The Inspection VPC is required and must be enabled. This creates the VPC where the FortiGate autoscale group will be deployed.

enable_build_inspection_vpc            = true
inspection_access_internet_mode        = "nat_gw"  # or "eip"
inspection_enable_dedicated_management_eni = false  # or true for dedicated mgmt ENI

Info

Fortinet-Role Tags: All Inspection VPC resources are automatically tagged with Fortinet-Role tags using the pattern {cp}-{env}-inspection-*. These tags are used by autoscale_template to discover the VPC resources.

Management VPC (Optional)

Build Management VPC

enable_build_management_vpc = true

Spoke VPCs and Transit Gateway (Optional)

Build Existing Subnets

enable_build_existing_subnets = true

Step 5: Configure Optional Components

FortiManager and FortiAnalyzer

FortiManager and FortiAnalyzer Options

enable_fortimanager  = true
fortimanager_instance_type = "m5.large"
fortimanager_os_version = "7.4.5"
fortimanager_host_ip = "10"  # 10.3.0.10 within the default management VPC CIDR

enable_fortianalyzer = true
fortianalyzer_instance_type = "m5.large"
fortianalyzer_os_version = "7.4.5"
fortianalyzer_host_ip = "11"  # 10.3.0.11 within the default management VPC CIDR

Info

Instance Sizing Recommendations

For testing/lab environments:

  • FortiManager: m5.large (minimum)
  • FortiAnalyzer: m5.large (minimum)

For heavier workloads or production evaluation:

  • FortiManager: m5.xlarge or m5.2xlarge
  • FortiAnalyzer: m5.xlarge or larger (depends on log volume)

Management VPC Transit Gateway Attachment

Management VPC TGW Attachment

enable_mgmt_vpc_tgw_attachment = true

This allows the jump box and management instances to reach the spoke VPC Linux instances for testing.

Linux Traffic Generators

Linux Instances

enable_jump_box = true
jump_box_instance_type = "t3.micro"

enable_east_linux_instances = true
east_linux_instance_type = "t3.micro"

enable_west_linux_instances = true
west_linux_instance_type = "t3.micro"

Debug TGW Attachment

enable_debug_tgw_attachment = true

Enables a bypass path for connectivity testing that skips FortiGate inspection.

Step 6: Configure Network CIDRs

Management and Spoke CIDRs

vpc_cidr_management = "10.3.0.0/16"
vpc_cidr_east       = "192.168.0.0/24"
vpc_cidr_west       = "192.168.1.0/24"
vpc_cidr_spoke      = "192.168.0.0/16"  # Supernet for all spoke VPCs
Warning

CIDR Planning

Ensure CIDRs:

  • Don’t overlap with existing networks
  • Match between existing_vpc_resources and autoscale_template
  • Have sufficient address space for growth
  • Align with corporate IP addressing standards
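The overlap and containment rules above can be checked before running Terraform. A minimal sketch using the CIDRs from this walkthrough:

```python
import ipaddress
from itertools import combinations

cidrs = {
    "management": "10.3.0.0/16",
    "east":       "192.168.0.0/24",
    "west":       "192.168.1.0/24",
}
spoke_supernet = ipaddress.ip_network("192.168.0.0/16")

nets = {name: ipaddress.ip_network(c) for name, c in cidrs.items()}

# Management VPC must not overlap any spoke VPC, and spokes must not overlap each other
for (a, na), (b, nb) in combinations(nets.items(), 2):
    if a != "management" and b != "management":
        assert not na.overlaps(nb), f"spoke overlap: {a} <-> {b}"
assert not nets["management"].overlaps(spoke_supernet), "management overlaps spoke supernet"

# Every spoke VPC must fall within the spoke supernet
for name in ("east", "west"):
    assert nets[name].subnet_of(spoke_supernet), f"{name} outside spoke supernet"

print("CIDR plan OK")
```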

Step 7: Configure Security Variables

keypair = "my-aws-keypair"  # Must exist in target region
my_ip   = "203.0.113.10/32" # Your public IP for SSH access
Tip

Security Group Source IP

The my_ip variable restricts SSH and HTTPS access to management interfaces.

For dynamic IPs, consider:

  • Using a CIDR range: "203.0.113.0/24"
  • VPN endpoint IP if accessing via corporate VPN
  • Multiple IPs: Configure directly in security groups after deployment

Step 8: Deploy the Template

Initialize Terraform:

terraform init

Review the execution plan:

terraform plan

The plan output lists the resources that will be created based on the enabled flags.

Deploy the infrastructure:

terraform apply

Type yes when prompted to confirm.

Expected deployment time: 10-15 minutes

Deployment progress:

Apply complete! Resources: 47 added, 0 changed, 0 destroyed.

Outputs:

east_linux_instance_ip = "192.168.0.50"
fortianalyzer_public_ip = "52.10.20.30"
fortimanager_public_ip = "52.10.20.40"
jump_box_public_ip = "52.10.20.50"
management_vpc_id = "vpc-0123456789abcdef0"
tgw_id = "tgw-0123456789abcdef0"
tgw_name = "acme-test-tgw"
west_linux_instance_ip = "192.168.1.50"

Step 9: Verify Deployment

Verify Management VPC

aws ec2 describe-vpcs --filters "Name=tag:Name,Values=acme-test-management-vpc"

Expected: VPC ID and CIDR information

Access FortiManager (if enabled)

# Get public IP from outputs
terraform output fortimanager_public_ip

# Access GUI
open https://<FortiManager-Public-IP>

# Or SSH
ssh admin@<FortiManager-Public-IP>
# Default password: <instance-id>

First-time FortiManager setup:

  1. Log in with admin / <instance-id>
  2. Change password when prompted
  3. Complete initial setup wizard
  4. Navigate to Device Manager > Device & Groups

Enable VM device recognition (FortiManager 7.6.3+):

config system global
    set fgfm-allow-vm enable
end

Access FortiAnalyzer (if enabled)

# Get public IP from outputs
terraform output fortianalyzer_public_ip

# Access GUI
open https://<FortiAnalyzer-Public-IP>

# Or SSH  
ssh admin@<FortiAnalyzer-Public-IP>

Verify Transit Gateway (if enabled)

aws ec2 describe-transit-gateways --filters "Name=tag:Name,Values=acme-test-tgw"

Expected: Transit Gateway in “available” state

Test Linux Instances (if enabled)

# Get instance IPs from outputs
terraform output east_linux_instance_ip
terraform output west_linux_instance_ip

# Test HTTP connectivity (if jump box enabled)
ssh -i ~/.ssh/keypair.pem ec2-user@<jump-box-ip>
curl http://<east-linux-ip>
# Expected: "Hello from ip-192-168-0-50"

Step 10: Save Outputs for autoscale_template

Save key outputs for use in autoscale_template configuration:

# Save all outputs
terraform output > ../outputs.txt

# Or save specific values
echo "tgw_name: $(terraform output -raw tgw_name)" >> ../autoscale_template/terraform.tfvars
echo "fortimanager_ip: $(terraform output -raw fortimanager_private_ip)" >> ../autoscale_template/terraform.tfvars

Outputs Reference

The template provides these outputs. Note that autoscale_template discovers resources via Fortinet-Role tags rather than using output values directly.

Inspection VPC Outputs

| Output | Description | Notes |
|---|---|---|
| inspection_vpc_id | ID of inspection VPC | Discovered by autoscale_template via Fortinet-Role tag |
| inspection_vpc_cidr | CIDR of inspection VPC | Used for route table configuration |

Management and Supporting Infrastructure Outputs

| Output | Description | Used By autoscale_template |
|---|---|---|
| management_vpc_id | ID of management VPC | VPC peering or TGW routing |
| management_vpc_cidr | CIDR of management VPC | Route table configuration |
| tgw_id | Transit Gateway ID | TGW attachment |
| tgw_name | Transit Gateway name tag | attach_to_tgw_name variable |
| fortimanager_private_ip | FortiManager private IP | fortimanager_ip variable |
| fortimanager_public_ip | FortiManager public IP | GUI/SSH access |
| fortianalyzer_private_ip | FortiAnalyzer private IP | FortiGate syslog configuration |
| fortianalyzer_public_ip | FortiAnalyzer public IP | GUI/SSH access |
| jump_box_public_ip | Jump box public IP | SSH bastion access |
| east_linux_instance_ip | East spoke instance IP | Connectivity testing |
| west_linux_instance_ip | West spoke instance IP | Connectivity testing |
Info

Tag-Based Discovery: The autoscale_template discovers Inspection VPC resources using Fortinet-Role tags rather than relying on output values. This allows the templates to be run independently as long as the cp and env values match.


Post-Deployment Configuration

Configure FortiManager for Integration

If you enabled FortiManager and plan to integrate it with the autoscale group:

  1. Access FortiManager GUI: https://<FortiManager-Public-IP>

  2. Change default password:

    • Login with admin / <instance-id>
    • Follow password change prompts
  3. Enable VM device recognition (7.6.3+):

    config system global
        set fgfm-allow-vm enable
    end
  4. Create ADOM for autoscale group (optional):

    • Device Manager > ADOM
    • Create ADOM for organizing autoscale FortiGates
  5. Note FortiManager details for autoscale_template:

    • Private IP: From outputs
    • Serial number: Get from CLI: get system status

Configure FortiAnalyzer for Logging

If you enabled FortiAnalyzer:

  1. Access FortiAnalyzer GUI: https://<FortiAnalyzer-Public-IP>

  2. Change default password

  3. Configure log settings:

    • System Settings > Storage
    • Configure log retention policies
    • Enable features needed for testing
  4. Note FortiAnalyzer private IP for FortiGate syslog configuration


Important Notes

Resource Lifecycle Considerations

Warning

Management Resource Persistence

If you deploy the existing_vpc_resources template:

  • Management VPC and resources (FortiManager, FortiAnalyzer) will be destroyed when you run terraform destroy
  • If you want management resources to persist across inspection VPC redeployments, consider:
    • Deploying management VPC separately with different Terraform state
    • Using existing management infrastructure instead of template-created resources
    • Setting appropriate lifecycle rules in Terraform to prevent destruction

Cost Optimization Tips

Info

Managing Lab Costs

The existing_vpc_resources template can create expensive resources:

  • FortiManager m5.large: $0.10/hour ($73/month)
  • FortiAnalyzer m5.large: $0.10/hour ($73/month)
  • Transit Gateway: $0.05/hour (~$36/month) + data processing charges
  • NAT Gateways: $0.045/hour each (~$33/month each)

Cost reduction strategies:

  • Use smaller instance types (t3.micro, t3.small) where possible
  • Disable FortiManager/FortiAnalyzer if not testing those features
  • Destroy resources when not actively testing
  • Use AWS Cost Explorer to monitor spend
  • Consider AWS budgets and alerts

Example budget-conscious configuration:

enable_fortimanager = false    # Save $73/month
enable_fortianalyzer = false   # Save $73/month
jump_box_instance_type = "t3.micro"  # Use smallest size
east_linux_instance_type = "t3.micro"
west_linux_instance_type = "t3.micro"
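A rough monthly estimate can be computed from the hourly rates listed above. These figures are illustrative only; actual AWS and Marketplace pricing varies by region, and the estimate excludes data processing and EBS charges:

```python
HOURS_PER_MONTH = 730  # common AWS billing approximation

# Approximate hourly EC2/service rates from the list above
rates = {
    "fortimanager_m5_large":  0.10,
    "fortianalyzer_m5_large": 0.10,
    "transit_gateway":        0.05,
    "nat_gateway":            0.045,
}

# Budget-conscious configuration: FortiManager/FortiAnalyzer disabled
enabled = ["transit_gateway", "nat_gateway"]
monthly = sum(rates[r] * HOURS_PER_MONTH for r in enabled)
print(f"Estimated fixed monthly cost: ${monthly:.2f}")
```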

State File Management

Store Terraform state securely:

# backend.tf (optional - recommended for teams)
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "existing-vpc-resources/terraform.tfstate"
    region = "us-west-2"
    encrypt = true
    dynamodb_table = "terraform-locks"
  }
}

Troubleshooting

Issue: Terraform Fails with “Resource Already Exists”

Symptoms:

Error: Error creating VPC: VpcLimitExceeded

Solutions:

  • Check VPC limits in your AWS account
  • Clean up unused VPCs
  • Request limit increase via AWS Support

Issue: Cannot Access FortiManager/FortiAnalyzer

Symptoms:

  • Timeout when accessing GUI
  • SSH connection refused

Solutions:

  1. Verify security groups allow your IP:

    aws ec2 describe-security-groups --group-ids <sg-id>
  2. Check instance is running:

    aws ec2 describe-instances --filters "Name=tag:Name,Values=*fortimanager*"
  3. Verify my_ip variable matches your current public IP:

    curl ifconfig.me
  4. Check instance system log for boot issues:

    aws ec2 get-console-output --instance-id <instance-id>

Issue: Transit Gateway Attachment Pending

Symptoms:

  • TGW attachment stuck in “pending” state
  • Spoke VPCs can’t communicate

Solutions:

  1. Wait 5-10 minutes for attachment to complete
  2. Check TGW route tables are configured
  3. Verify no CIDR overlaps between VPCs
  4. Check TGW attachment state:
    aws ec2 describe-transit-gateway-attachments

Issue: Linux Instances Not Reachable

Symptoms:

  • Cannot curl or SSH to Linux instances

Solutions:

  1. Verify you’re accessing from jump box (if not public)
  2. Check security groups allow port 80 and 22
  3. Verify NAT Gateway is functioning for internet access
  4. Check route tables in spoke VPCs

Issue: High Costs After Deployment

Symptoms:

  • AWS bill higher than expected

Solutions:

  1. Check what’s running:

    aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"
  2. Identify expensive resources:

    # Use AWS Cost Explorer in AWS Console
    # Filter by resource tags: cp and env
  3. Shut down unused components:

    terraform destroy -target=module.fortimanager
    terraform destroy -target=module.fortianalyzer
  4. Or destroy entire deployment:

    terraform destroy

Cleanup

Destroying Resources

To destroy the existing_vpc_resources infrastructure:

cd terraform/existing_vpc_resources
terraform destroy

Type yes when prompted.

Warning

Destroy Order is Critical

If you also deployed autoscale_template, destroy it FIRST before destroying existing_vpc_resources:

# Step 1: Destroy autoscale_template
cd terraform/autoscale_template
terraform destroy

# Step 2: Destroy existing_vpc_resources  
cd ../existing_vpc_resources
terraform destroy

Why? The inspection VPC has a Transit Gateway attachment to the TGW created by existing_vpc_resources. Attempting to destroy the TGW while that attachment still exists will cause the destroy to fail.

Selective Cleanup

To destroy only specific components:

# Destroy only FortiManager
terraform destroy -target=module.fortimanager

# Destroy only spoke VPCs and TGW
terraform destroy -target=module.transit_gateway
terraform destroy -target=module.spoke_vpcs

# Destroy only management VPC
terraform destroy -target=module.management_vpc

Verify Complete Cleanup

After destroying, verify no resources remain:

# Check VPCs
aws ec2 describe-vpcs --filters "Name=tag:cp,Values=acme" "Name=tag:env,Values=test"

# Check Transit Gateways
aws ec2 describe-transit-gateways --filters "Name=tag:cp,Values=acme"

# Check running instances
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=tag:cp,Values=acme"

Next Steps

After deploying existing_vpc_resources, proceed to deploy the autoscale_template to create the FortiGate autoscale group and inspection VPC.

Key information to carry forward:

  • Transit Gateway name (from outputs)
  • FortiManager private IP (if enabled)
  • FortiAnalyzer private IP (if enabled)
  • Same cp and env values

Recommended next reading:

autoscale_template

Overview

The autoscale_template deploys the FortiGate autoscale group into an existing Inspection VPC. It discovers VPC resources using Fortinet-Role tags created by the existing_vpc_resources template.

Warning

Prerequisites: You must run existing_vpc_resources FIRST to create the Inspection VPC with proper Fortinet-Role tags. Alternatively, you can manually apply the required tags to existing VPCs.

Info

This template is required for all deployments. It deploys the FortiGate autoscale group, Gateway Load Balancer, and Lambda functions, and configures routes for traffic inspection.


What It Creates

The autoscale_template discovers the existing Inspection VPC via Fortinet-Role tags and deploys FortiGate autoscale components into it:

Resource Discovery (via Fortinet-Role Tags)

| Resource | Tag Pattern | Purpose |
|---|---|---|
| Inspection VPC | {cp}-{env}-inspection-vpc | VPC for FortiGate deployment |
| Subnets | {cp}-{env}-inspection-{type}-{az} | Public, GWLBE, Private subnets |
| Route Tables | {cp}-{env}-inspection-{type}-rt-{az} | For route modifications |
| IGW | {cp}-{env}-inspection-igw | Internet connectivity |
| NAT Gateways | {cp}-{env}-inspection-natgw-{az} | If nat_gw mode |
| TGW Attachment | {cp}-{env}-inspection-tgw-attachment | If TGW enabled |

Components Created

| Component | Purpose | Always Created |
|---|---|---|
| FortiGate Autoscale Groups | BYOL and/or on-demand instance groups | ✅ Yes |
| Gateway Load Balancer | Distributes traffic across FortiGate instances | ✅ Yes |
| GWLB Endpoints | Connection points in each AZ | ✅ Yes |
| Lambda Functions | Lifecycle management and licensing automation | ✅ Yes |
| DynamoDB Table | License tracking and state management | ✅ Yes (if BYOL) |
| S3 Bucket | License file storage and Lambda code | ✅ Yes (if BYOL) |
| IAM Roles | Permissions for Lambda and EC2 instances | ✅ Yes |
| Security Groups | Network access control | ✅ Yes |
| CloudWatch Alarms | Autoscaling triggers | ✅ Yes |
| Route Modifications | Points private subnets to GWLB endpoints | ✅ Yes (if enabled) |

Optional Components

| Component | Purpose | Enabled By |
|---|---|---|
| Transit Gateway Attachment | Connection to TGW for centralized architecture | enable_tgw_attachment |
| Dedicated Management ENI | Isolated management interface | enable_dedicated_management_eni |
| Dedicated Management VPC Connection | Management in separate VPC | enable_dedicated_management_vpc |
| FortiManager Integration | Centralized policy management | enable_fortimanager_integration |
| East-West Inspection | Inter-spoke traffic inspection | enable_east_west_inspection |

Architecture Patterns

The autoscale_template supports multiple deployment patterns:

Pattern 1: Centralized Architecture with TGW

Configuration:

enable_tgw_attachment = true
attach_to_tgw_name = "production-tgw"

Traffic flow:

Spoke VPCs → TGW → Inspection VPC → FortiGate → GWLB → Internet

Use cases:

  • Production centralized egress
  • Multi-VPC environments
  • East-west traffic inspection

Pattern 2: Distributed Architecture (No TGW)

Configuration:

enable_tgw_attachment = false

Traffic flow:

Spoke VPC → GWLB Endpoint → FortiGate → Internet Gateway

Use cases:

  • Distributed security architecture
  • Per-VPC inspection requirements
  • Bump-in-the-wire deployments

Pattern 3: Hybrid with Management VPC

Configuration:

enable_tgw_attachment = true
enable_dedicated_management_vpc = true
enable_fortimanager_integration = true

Traffic flow:

Data: Spoke VPCs → TGW → FortiGate → Internet
Management: FortiGate → Management VPC → FortiManager

Use cases:

  • Enterprise deployments
  • Centralized management requirements
  • Compliance-driven architectures

Integration Modes

Fortinet-Role Tag Discovery

The autoscale_template discovers all Inspection VPC resources using Fortinet-Role tags. This is how it finds the VPC, subnets, route tables, and other resources created by existing_vpc_resources.

How discovery works:

# autoscale_template looks up resources like this:
data "aws_vpc" "inspection" {
  filter {
    name   = "tag:Fortinet-Role"
    values = ["${var.cp}-${var.env}-inspection-vpc"]
  }
}

data "aws_subnet" "inspection_public_az1" {
  filter {
    name   = "tag:Fortinet-Role"
    values = ["${var.cp}-${var.env}-inspection-public-az1"]
  }
}
Warning

Critical: The cp and env variables must match exactly between existing_vpc_resources and autoscale_template for tag discovery to work.

Integration with existing_vpc_resources

When deploying after existing_vpc_resources:

Required variable coordination:

# MUST MATCH existing_vpc_resources values (for Fortinet-Role tag discovery)
aws_region          = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"
cp                  = "acme"      # MUST MATCH - used for tag lookup
env                 = "test"      # MUST MATCH - used for tag lookup

# Connect to created TGW (if enabled in existing_vpc_resources)
enable_tgw_attachment = true
attach_to_tgw_name    = "acme-test-tgw"  # From existing_vpc_resources output

# Connect to management VPC (if created in existing_vpc_resources)
enable_dedicated_management_vpc = true
# Management VPC also discovered via Fortinet-Role tags

# FortiManager integration (if enabled in existing_vpc_resources)
enable_fortimanager_integration = true
fortimanager_ip = "10.3.0.10"  # From existing_vpc_resources output
fortimanager_sn = "FMGVM0000000001"

Integration with Manually Tagged VPCs

If you have existing VPCs that you want to use instead of creating new ones with existing_vpc_resources, you must apply Fortinet-Role tags to all required resources:

Required tags (see Templates Overview for complete list):

  • VPC: {cp}-{env}-inspection-vpc
  • Subnets: {cp}-{env}-inspection-{public|gwlbe|private}-az{1|2}
  • Route Tables: {cp}-{env}-inspection-{type}-rt-az{1|2}
  • IGW: {cp}-{env}-inspection-igw
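Before tagging by hand, it can help to generate the complete set of Fortinet-Role tag values the template will look up. This is a sketch assuming two AZs and the naming patterns listed above; the authoritative list is in the Templates Overview:

```python
def required_fortinet_role_tags(cp, env, azs=("az1", "az2")):
    """Build the Fortinet-Role tag values the autoscale_template looks up."""
    prefix = f"{cp}-{env}-inspection"
    tags = [f"{prefix}-vpc", f"{prefix}-igw"]
    for subnet_type in ("public", "gwlbe", "private"):
        for az in azs:
            tags.append(f"{prefix}-{subnet_type}-{az}")     # the subnet itself
            tags.append(f"{prefix}-{subnet_type}-rt-{az}")  # its route table
    return tags

for tag in required_fortinet_role_tags("acme", "prod"):
    print(tag)
```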

Configuration:

# Match your tag prefix
cp  = "acme"
env = "prod"

# Connect to existing production TGW
enable_tgw_attachment = true
attach_to_tgw_name = "production-tgw"  # Your existing TGW

# Use existing management infrastructure
enable_fortimanager_integration = true
fortimanager_ip = "10.100.50.10"  # Your existing FortiManager
fortimanager_sn = "FMGVM1234567890"

Step-by-Step Deployment

Prerequisites

  • ✅ AWS account with appropriate permissions
  • ✅ Terraform 1.0 or later installed
  • ✅ AWS CLI configured with credentials
  • ✅ SSH keypair created in target AWS region
  • ✅ FortiGate licenses (if using BYOL) or FortiFlex account (if using FortiFlex)
  • existing_vpc_resources deployed (creates Inspection VPC with Fortinet-Role tags)
  • OR existing VPCs with Fortinet-Role tags applied manually
Warning

Required: The Inspection VPC must exist with proper Fortinet-Role tags before running this template. Run existing_vpc_resources first, or manually tag your existing VPCs.

Step 1: Navigate to Template Directory

cd Autoscale-Simplified-Template/terraform/autoscale_template

Step 2: Create terraform.tfvars

cp terraform.tfvars.example terraform.tfvars

Step 3: Configure Core Variables

Region and Availability Zones

Region and AZ

aws_region         = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"
Warning

Variable Coordination

If you deployed existing_vpc_resources, these values MUST MATCH exactly:

  • aws_region
  • availability_zone_1
  • availability_zone_2
  • cp (customer prefix)
  • env (environment)

Mismatched values will cause resource discovery failures and deployment errors.

Customer Prefix and Environment

Customer Prefix and Environment

cp  = "acme"    # Customer prefix - MUST MATCH existing_vpc_resources
env = "test"    # Environment - MUST MATCH existing_vpc_resources
Warning

Critical for Tag Discovery

These values form the prefix for Fortinet-Role tags used to discover the Inspection VPC. For example, with cp="acme" and env="test", the template looks for:

  • VPC with tag Fortinet-Role = acme-test-inspection-vpc
  • Subnets with tags like Fortinet-Role = acme-test-inspection-public-az1

If these don’t match the tags created by existing_vpc_resources, the template will fail with “no matching VPC found” errors.

Step 4: Configure Security Variables

Security Variables

keypair                 = "my-aws-keypair"  # Must exist in target region
my_ip                   = "203.0.113.10/32" # Your public IP for management access
fortigate_asg_password  = "SecurePassword123!"  # Admin password for FortiGates
Warning

Password Requirements

The fortigate_asg_password must meet FortiOS password requirements:

  • Minimum 8 characters
  • At least one uppercase letter
  • At least one lowercase letter
  • At least one number
  • No special characters that might cause shell escaping issues

Never commit passwords to version control. Consider using:

  • Terraform variables marked as sensitive
  • Environment variables: TF_VAR_fortigate_asg_password
  • AWS Secrets Manager
  • HashiCorp Vault
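The requirements listed above can be encoded as a quick pre-flight check. This is a sketch of this document's stated policy, not of FortiOS's full password validation logic:

```python
import re

def valid_asg_password(pw):
    """Check fortigate_asg_password against the requirements listed above."""
    return (
        len(pw) >= 8
        and re.search(r"[A-Z]", pw) is not None        # at least one uppercase
        and re.search(r"[a-z]", pw) is not None        # at least one lowercase
        and re.search(r"[0-9]", pw) is not None        # at least one number
        # strict reading of "no special characters that might cause
        # shell escaping issues": allow alphanumerics only
        and re.fullmatch(r"[A-Za-z0-9]+", pw) is not None
    )

print(valid_asg_password("SecurePassword123"))  # True
print(valid_asg_password("weak"))               # False
```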

Step 5: Configure Transit Gateway Integration

TGW Attachment

To connect to Transit Gateway:

enable_tgw_attachment = true

TGW Name

Specify TGW name:

# If using existing_vpc_resources template
attach_to_tgw_name = "acme-test-tgw"  # Matches existing_vpc_resources output

# If using existing production TGW
attach_to_tgw_name = "production-tgw"  # Your production TGW name
Tip

Finding Your Transit Gateway Name

If you don’t know your TGW name:

aws ec2 describe-transit-gateways \
  --query 'TransitGateways[*].[Tags[?Key==`Name`].Value | [0], TransitGatewayId]' \
  --output table

The attach_to_tgw_name should match the Name tag of your Transit Gateway.

To skip TGW attachment (distributed architecture):

enable_tgw_attachment = false

East-West Inspection (requires TGW attachment):

enable_east_west_inspection = true  # Routes spoke-to-spoke traffic through FortiGate

Step 6: Configure Architecture Options

Firewall Mode

firewall_policy_mode = "2-arm"  # or "1-arm"

Recommendations:

  • 2-arm: Recommended for most deployments (better throughput)
  • 1-arm: Use when simplified routing is required

See Firewall Architecture for detailed comparison.

Internet Egress Mode

access_internet_mode = "nat_gw"  # or "eip"

Recommendations:

  • nat_gw: Production deployments (higher availability)
  • eip: Lower cost, simpler architecture

See Internet Egress for detailed comparison.

Step 7: Configure Management Options

Dedicated Management ENI

enable_dedicated_management_eni = true

Separates management traffic from data plane. Recommended for production.

Dedicated Management VPC

enable_dedicated_management_vpc = true

# If using existing_vpc_resources with default tags:
dedicated_management_vpc_tag = "acme-test-management-vpc"
dedicated_management_public_az1_subnet_tag = "acme-test-management-public-az1-subnet"
dedicated_management_public_az2_subnet_tag = "acme-test-management-public-az2-subnet"

# If using existing management VPC with custom tags:
dedicated_management_vpc_tag = "my-custom-mgmt-vpc-tag"
dedicated_management_public_az1_subnet_tag = "my-custom-mgmt-az1-tag"
dedicated_management_public_az2_subnet_tag = "my-custom-mgmt-az2-tag"

See Management Isolation for options and recommendations.

Info

Automatic Implication

When enable_dedicated_management_vpc = true, the template automatically sets enable_dedicated_management_eni = true. You don’t need to configure both explicitly.

Step 8: Configure Licensing

License Variables

The template supports three licensing models. Choose one or combine them for hybrid licensing.

Option 1: BYOL (Bring Your Own License)

asg_license_directory = "asg_license"  # Directory containing .lic files

Prerequisites:

  1. Create the license directory:

    mkdir asg_license
  2. Place license files in the directory:

    terraform/autoscale_template/
    ├── terraform.tfvars
    └── asg_license/
        ├── FGVM01-001.lic
        ├── FGVM01-002.lic
        ├── FGVM01-003.lic
        └── FGVM01-004.lic
  3. Ensure you have at least as many licenses as asg_byol_asg_max_size

Warning

License Pool Exhaustion

If you run out of BYOL licenses:

  • New BYOL instances launch but remain unlicensed
  • Unlicensed instances operate at 1 Mbps throughput
  • FortiGuard services will not activate
  • If on-demand ASG is configured, scaling continues using PAYG instances

Recommended: Provision 20% more licenses than asg_byol_asg_max_size

Option 2: FortiFlex (API-Driven)

fortiflex_username      = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # API username (UUID)
fortiflex_password      = "xxxxxxxxxxxxxxxxxxxxx"  # API password
fortiflex_sn_list       = ["FGVMELTMxxxxxxxx"]  # Optional: specific program serial numbers
fortiflex_configid_list = ["My_4CPU_Config"]  # Configuration names (must match CPU count)

Prerequisites:

  1. Register FortiFlex program via FortiCare
  2. Purchase point packs
  3. Create configurations matching your instance types
  4. Generate API credentials via IAM portal

CPU count matching:

fgt_instance_type = "c6i.xlarge"  # 4 vCPUs
fortiflex_configid_list = ["My_4CPU_Config"]  # MUST have 4 CPUs configured
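The CPU-matching rule can be expressed as a simple lookup check. The vCPU counts below come from AWS instance specifications, and the FortiFlex configuration mapping is hypothetical; substitute your own configuration names and licensed CPU counts:

```python
# vCPU counts per instance type (from AWS instance specifications)
INSTANCE_VCPUS = {
    "c6i.xlarge": 4, "c6i.2xlarge": 8,
    "c7gn.xlarge": 4, "c7gn.4xlarge": 16,
}

# Hypothetical FortiFlex configuration name -> licensed CPU count
FLEX_CONFIG_CPUS = {"My_4CPU_Config": 4}

def flex_config_matches(instance_type, config_id):
    """FortiFlex entitlement CPU count must equal the instance's vCPU count."""
    return INSTANCE_VCPUS[instance_type] == FLEX_CONFIG_CPUS[config_id]

print(flex_config_matches("c6i.xlarge", "My_4CPU_Config"))   # True
print(flex_config_matches("c6i.2xlarge", "My_4CPU_Config"))  # False: 8 vCPUs != 4
```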
Warning

Security Best Practice

Never commit FortiFlex credentials to version control. Use:

  • Terraform Cloud sensitive variables
  • AWS Secrets Manager
  • Environment variables: TF_VAR_fortiflex_username and TF_VAR_fortiflex_password
  • HashiCorp Vault

Example using environment variables:

export TF_VAR_fortiflex_username="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
export TF_VAR_fortiflex_password="xxxxxxxxxxxxxxxxxxxxx"
terraform apply

See FortiFlex Setup Guide for complete configuration details.

Option 3: PAYG (AWS Marketplace)

# No explicit configuration needed
# Just set on-demand ASG capacities

asg_byol_asg_min_size = 0
asg_byol_asg_max_size = 0

asg_ondemand_asg_min_size = 2
asg_ondemand_asg_max_size = 8

Prerequisites:

  • Accept FortiGate-VM terms in AWS Marketplace
  • No license files or API credentials required
  • Licensing cost included in hourly EC2 charge

Combine licensing models for cost optimization:

# BYOL for baseline capacity (lowest cost)
asg_license_directory = "asg_license"
asg_byol_asg_min_size = 2
asg_byol_asg_max_size = 4

# PAYG for burst capacity (highest flexibility)
asg_ondemand_asg_min_size = 0
asg_ondemand_asg_max_size = 4

See Licensing Options for detailed comparison and cost analysis.

Step 9: Configure Autoscale Group Capacity

# BYOL ASG
asg_byol_asg_min_size     = 2
asg_byol_asg_max_size     = 4
asg_byol_asg_desired_size = 2

# On-Demand ASG  
asg_ondemand_asg_min_size     = 0
asg_ondemand_asg_max_size     = 4
asg_ondemand_asg_desired_size = 0

# Primary scale-in protection
primary_scalein_protection = true

Capacity planning guidance:

| Deployment Type | Recommended Configuration |
|---|---|
| Development/Test | min=1, max=2, desired=1 |
| Small Production | min=2, max=4, desired=2 |
| Medium Production | min=2, max=8, desired=4 |
| Large Production | min=4, max=16, desired=6 |

Scaling behavior:

  • BYOL instances scale first (up to asg_byol_asg_max_size)
  • On-demand instances scale when BYOL capacity exhausted
  • CloudWatch alarms trigger scale-out at 80% CPU (default)
  • Scale-in occurs at 30% CPU (default)

See Autoscale Group Capacity for detailed planning.
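The default thresholds above amount to a simple hysteresis rule around average CPU utilization. This is a minimal sketch of the decision logic only; the actual CloudWatch alarm and Lambda implementation is not shown in this document:

```python
def scaling_action(avg_cpu, scale_out_threshold=80.0, scale_in_threshold=30.0):
    """Map average ASG CPU utilization (%) to a scaling decision."""
    if avg_cpu >= scale_out_threshold:
        return "scale_out"   # add an instance (BYOL first, then on-demand)
    if avg_cpu <= scale_in_threshold:
        return "scale_in"    # remove an instance (primary protected if enabled)
    return "no_change"       # within the hysteresis band: do nothing

print(scaling_action(85))  # scale_out
print(scaling_action(50))  # no_change
print(scaling_action(20))  # scale_in
```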

Step 10: Configure FortiGate Specifications

fgt_instance_type = "c7gn.xlarge"
fortios_version   = "7.4.5"
fortigate_gui_port = 443

Instance type recommendations:

| Use Case | Recommended Type | vCPUs | Network Performance |
|---|---|---|---|
| Testing/Lab | t3.xlarge | 4 | Up to 5 Gbps |
| Small Production | c6i.xlarge | 4 | Up to 12.5 Gbps |
| Medium Production | c6i.2xlarge | 8 | Up to 12.5 Gbps |
| High Performance | c7gn.xlarge | 4 | Up to 25 Gbps |
| Very High Performance | c7gn.4xlarge | 16 | 50 Gbps |

FortiOS version selection:

  • Use latest stable release for new deployments
  • Test new versions in dev/test before production
  • Check FortiOS Release Notes for compatibility

Step 11: Configure FortiManager Integration (Optional)

enable_fortimanager_integration = true
fortimanager_ip                 = "10.3.0.10"  # FortiManager IP
fortimanager_sn                 = "FMGVM0000000001"  # FortiManager serial number
fortimanager_vrf_select         = 1  # VRF for management routing
Warning

FortiManager 7.6.3+ Configuration Required

If using FortiManager 7.6.3 or later, you must enable VM device recognition before deploying:

On FortiManager CLI:

config system global
    set fgfm-allow-vm enable
end

Verify the setting:

show system global | grep fgfm-allow-vm

Without this configuration, FortiGate-VM instances will fail to register with FortiManager.

See FortiManager Integration for complete details.

FortiManager integration behavior:

  • Lambda generates config system central-management on primary FortiGate only
  • Primary FortiGate registers with FortiManager as unauthorized device
  • VDOM exception prevents sync to secondary instances
  • Configuration syncs from FortiManager → Primary → Secondaries

See FortiManager Integration Configuration for advanced options including UMS mode.

Step 12: Configure Network CIDRs

vpc_cidr_inspection = "10.0.0.0/16"
vpc_cidr_management = "10.3.0.0/16"  # Must match existing_vpc_resources if used
vpc_cidr_spoke      = "192.168.0.0/16"  # Supernet for all spoke VPCs
vpc_cidr_east       = "192.168.0.0/24"
vpc_cidr_west       = "192.168.1.0/24"

subnet_bits = 8  # /16 + 8 = /24 subnets
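The subnet_bits arithmetic can be verified with Python's `ipaddress` module: carving a /16 VPC with 8 extra bits yields /24 subnets. This sketches the sizing math only, not the template's actual subnet allocation order:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")  # vpc_cidr_inspection
subnet_bits = 8                            # /16 + 8 = /24

subnets = list(vpc.subnets(prefixlen_diff=subnet_bits))
print(subnets[0])    # 10.0.0.0/24
print(len(subnets))  # 256 possible /24 subnets in the VPC
```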
Warning

CIDR Planning Considerations

Ensure:

  • ✅ No overlap with existing networks
  • ✅ Management VPC CIDR matches existing_vpc_resources if used
  • ✅ Spoke supernet encompasses all individual spoke VPC CIDRs
  • ✅ Sufficient address space for growth
  • ✅ Alignment with corporate IP addressing standards

Common mistakes:

  • ❌ Overlapping inspection VPC with management VPC
  • ❌ Spoke CIDR too small for number of VPCs
  • ❌ Mismatched CIDRs between templates

Step 13: Configure GWLB Endpoint Names

endpoint_name_az1 = "asg-gwlbe_az1"
endpoint_name_az2 = "asg-gwlbe_az2"

These names are used for route table lookups when configuring TGW routing or spoke VPC routing.

Step 14: Configure Additional Options

FortiGate System Autoscale

enable_fgt_system_autoscale = true

Enables FortiGate-native HA synchronization between instances. Recommended to leave enabled.

CloudWatch Alarms

# Scale-out threshold (default: 80% CPU)
scale_out_threshold = 80

# Scale-in threshold (default: 30% CPU)
scale_in_threshold = 30

Adjust based on your traffic patterns and capacity requirements.

Step 15: Review Complete Configuration

Review your complete terraform.tfvars file before deployment. Here’s a complete example:

#-----------------------------------------------------------------------
# Core Configuration
#-----------------------------------------------------------------------
aws_region          = "us-west-2"
availability_zone_1 = "a"
availability_zone_2 = "c"
cp                  = "acme"
env                 = "prod"

#-----------------------------------------------------------------------
# Security
#-----------------------------------------------------------------------
keypair                = "acme-keypair"
my_ip                  = "203.0.113.10/32"
fortigate_asg_password = "SecurePassword123!"

#-----------------------------------------------------------------------
# Transit Gateway
#-----------------------------------------------------------------------
enable_tgw_attachment      = true
attach_to_tgw_name         = "acme-prod-tgw"
enable_east_west_inspection = true

#-----------------------------------------------------------------------
# Architecture Options
#-----------------------------------------------------------------------
firewall_policy_mode = "2-arm"
access_internet_mode = "nat_gw"

#-----------------------------------------------------------------------
# Management Options
#-----------------------------------------------------------------------
enable_dedicated_management_eni = true
enable_dedicated_management_vpc = true
dedicated_management_vpc_tag = "acme-prod-management-vpc"
dedicated_management_public_az1_subnet_tag = "acme-prod-management-public-az1-subnet"
dedicated_management_public_az2_subnet_tag = "acme-prod-management-public-az2-subnet"

#-----------------------------------------------------------------------
# FortiManager Integration
#-----------------------------------------------------------------------
enable_fortimanager_integration = true
fortimanager_ip                 = "10.3.0.10"
fortimanager_sn                 = "FMGVM0000000001"
fortimanager_vrf_select         = 1

#-----------------------------------------------------------------------
# Licensing - Hybrid BYOL + PAYG
#-----------------------------------------------------------------------
asg_license_directory = "asg_license"

#-----------------------------------------------------------------------
# Autoscale Group Capacity
#-----------------------------------------------------------------------
# BYOL baseline
asg_byol_asg_min_size     = 2
asg_byol_asg_max_size     = 4
asg_byol_asg_desired_size = 2

# PAYG burst
asg_ondemand_asg_min_size     = 0
asg_ondemand_asg_max_size     = 4
asg_ondemand_asg_desired_size = 0

# Scale-in protection
primary_scalein_protection = true

#-----------------------------------------------------------------------
# FortiGate Specifications
#-----------------------------------------------------------------------
fgt_instance_type       = "c6i.xlarge"
fortios_version         = "7.4.5"
fortigate_gui_port      = 443
enable_fgt_system_autoscale = true

#-----------------------------------------------------------------------
# Network CIDRs
#-----------------------------------------------------------------------
vpc_cidr_inspection = "10.0.0.0/16"
vpc_cidr_management = "10.3.0.0/16"
vpc_cidr_spoke      = "192.168.0.0/16"
vpc_cidr_east       = "192.168.0.0/24"
vpc_cidr_west       = "192.168.1.0/24"
subnet_bits         = 8

#-----------------------------------------------------------------------
# GWLB Endpoints
#-----------------------------------------------------------------------
endpoint_name_az1 = "acme-prod-gwlbe-az1"
endpoint_name_az2 = "acme-prod-gwlbe-az2"
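One value in the example worth double-checking is subnet_bits = 8: the template adds that many bits to each VPC prefix when carving subnets, so a /16 VPC yields /24 subnets (the same arithmetic as Terraform's cidrsubnet() function). A quick sanity check in Python:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")  # vpc_cidr_inspection
subnet_bits = 8                            # from terraform.tfvars

# Adding 8 prefix bits to a /16 produces /24 subnets, 256 of them,
# mirroring Terraform's cidrsubnet("10.0.0.0/16", 8, n).
subnets = list(vpc.subnets(prefixlen_diff=subnet_bits))
print(subnets[0].prefixlen, len(subnets))  # 24 256
print(subnets[0], subnets[1])              # 10.0.0.0/24 10.0.1.0/24
```

If your VPC CIDRs are smaller than /16, reduce subnet_bits accordingly so each subnet still has enough addresses for its interfaces and endpoints.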

Step 16: Deploy the Template

Initialize Terraform:

terraform init

Review the execution plan:

terraform plan

The plan should show roughly 40-60 resources to be created, depending on which options you enabled.

Deploy the infrastructure:

terraform apply

Type yes when prompted.

Expected deployment time: 15-20 minutes

Deployment progress indicators:

  • VPC and networking: ~2 minutes
  • Security groups and IAM: ~1 minute
  • Lambda functions and DynamoDB: ~2 minutes
  • GWLB and endpoints: ~5 minutes
  • FortiGate instances launching: ~5-10 minutes

Step 17: Monitor Deployment

Watch CloudWatch logs for Lambda execution:

# Get Lambda function name from Terraform
terraform output lambda_function_name

# Stream logs
aws logs tail /aws/lambda/<function-name> --follow

Watch Auto Scaling Group activity:

# Get ASG name
aws autoscaling describe-auto-scaling-groups \
  --query 'AutoScalingGroups[?contains(AutoScalingGroupName, `acme-prod`)].AutoScalingGroupName'

# Watch instance launches
aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name <asg-name> \
  --max-records 10

Step 18: Verify Deployment

Check FortiGate Instances

# List running FortiGate instances
aws ec2 describe-instances \
  --filters "Name=tag:cp,Values=acme" \
           "Name=tag:env,Values=prod" \
           "Name=instance-state-name,Values=running" \
  --query 'Reservations[*].Instances[*].[InstanceId,PublicIpAddress,Tags[?Key==`Name`].Value|[0]]' \
  --output table

Access FortiGate GUI

# Get FortiGate public IP
terraform output fortigate_instance_ips

# Access GUI
open https://<fortigate-public-ip>:443

Login credentials:

  • Username: admin
  • Password: Value from fortigate_asg_password variable

Verify License Assignment

For BYOL:

# SSH to FortiGate
ssh -i ~/.ssh/keypair.pem admin@<fortigate-ip>

# Check license status
get system status

# Look for:
# Serial-Number: FGVMxxxxxxxxxx (not FGVMEVxxxxxxxxxx)
# License Status: Valid

For FortiFlex:

  • Check Lambda CloudWatch logs for successful API calls
  • Verify entitlements created in FortiFlex portal
  • Check FortiGate shows licensed status

For PAYG:

  • Instances automatically licensed via AWS
  • Verify license status in FortiGate GUI

Verify Transit Gateway Attachment

aws ec2 describe-transit-gateway-attachments \
  --filters "Name=state,Values=available" \
           "Name=resource-type,Values=vpc" \
  --query 'TransitGatewayAttachments[?contains(Tags[?Key==`Name`].Value|[0], `inspection`)]'

Verify FortiManager Registration

If FortiManager integration is enabled:

  1. Access FortiManager GUI: https://<fortimanager-ip>
  2. Navigate to Device Manager > Device & Groups
  3. Look for unauthorized device with serial number matching primary FortiGate
  4. Right-click device and select Authorize

Test Traffic Flow

From jump box (if using existing_vpc_resources):

# SSH to jump box
ssh -i ~/.ssh/keypair.pem ec2-user@<jump-box-ip>

# Test internet connectivity (should go through FortiGate)
curl https://www.google.com

# Test spoke VPC connectivity
curl http://<linux-instance-ip>

On FortiGate:

# SSH to FortiGate
ssh -i ~/.ssh/keypair.pem admin@<fortigate-ip>

# Monitor real-time traffic
diagnose sniffer packet any 'host 192.168.0.50' 4

# Check firewall policies
show firewall policy

# View active sessions
diagnose sys session list

Post-Deployment Configuration

Configure TGW Route Tables

If you enabled enable_tgw_attachment = true, configure the Transit Gateway route tables to route traffic through the inspection VPC:

For Centralized Egress

Spoke VPC route table (route internet traffic to inspection VPC):

# Get inspection VPC TGW attachment ID
INSPECT_ATTACH_ID=$(aws ec2 describe-transit-gateway-attachments \
  --filters "Name=resource-type,Values=vpc" \
           "Name=tag:Name,Values=*inspection*" \
  --query 'TransitGatewayAttachments[0].TransitGatewayAttachmentId' \
  --output text)

# Add default route to spoke route table
aws ec2 create-transit-gateway-route \
  --destination-cidr-block 0.0.0.0/0 \
  --transit-gateway-route-table-id <spoke-rt-id> \
  --transit-gateway-attachment-id $INSPECT_ATTACH_ID

Inspection VPC route table (route spoke traffic to internet):

# This is typically configured automatically by the template
# Verify it exists:
aws ec2 describe-transit-gateway-route-tables \
  --transit-gateway-route-table-ids <inspection-rt-id>

For East-West Inspection

If you enabled enable_east_west_inspection = true:

Spoke-to-spoke traffic routes through the inspection VPC automatically.

Verify routing:

# From east spoke instance
ssh ec2-user@<east-linux-ip>
ping <west-linux-ip>  # Should succeed and be inspected by FortiGate

# On the FortiGate, trace the flow
diagnose debug flow trace start 10
diagnose debug enable
# Generate traffic from the spoke and watch the trace output

Configure FortiGate Policies

Access FortiGate GUI and configure firewall policies:

Basic Internet Egress Policy

Policy & Objects > Firewall Policy > Create New

Name: Internet-Egress
Incoming Interface: port1 (or TGW interface)
Outgoing Interface: port2 (internet interface)
Source: all
Destination: all
Service: ALL
Action: ACCEPT
NAT: Enable
Logging: All Sessions

East-West Inspection Policy

Policy & Objects > Firewall Policy > Create New

Name: East-West-Inspection
Incoming Interface: port1 (TGW interface)
Outgoing Interface: port1 (TGW interface)
Source: 192.168.0.0/16
Destination: 192.168.0.0/16
Service: ALL
Action: ACCEPT
NAT: Disable
Logging: All Sessions
Security Profiles: Enable IPS, Application Control, etc.
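The same policies can be created from the FortiGate CLI, which is useful for scripting or for verifying what the GUI produced. A hedged sketch of the egress policy (interface names assume port1 = TGW side and port2 = internet side, as in the GUI steps above; adjust to your deployment):

```
config firewall policy
    edit 0
        set name "Internet-Egress"
        set srcintf "port1"
        set dstintf "port2"
        set srcaddr "all"
        set dstaddr "all"
        set service "ALL"
        set action accept
        set schedule "always"
        set nat enable
        set logtraffic all
    next
end
```

With enable_fgt_system_autoscale = true, policies configured on the primary instance sync to the secondaries automatically.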

Configure FortiManager (If Enabled)

  1. Authorize FortiGate device:

    • Device Manager > Device & Groups
    • Right-click unauthorized device > Authorize
    • Assign to ADOM
  2. Create policy package:

    • Policy & Objects > Policy Package
    • Create new package
    • Add firewall policies
  3. Install policy:

    • Select device
    • Policy & Objects > Install
    • Select package
    • Click Install
  4. Verify sync to secondary instances:

    • Check secondary FortiGate instances
    • Policies should appear automatically via HA sync

Monitoring and Operations

CloudWatch Metrics

Key metrics to monitor:

# CPU utilization (triggers autoscaling)
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=<asg-name> \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-02T00:00:00Z \
  --period 3600 \
  --statistics Average

# Network throughput
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name NetworkIn \
  --dimensions Name=AutoScalingGroupName,Value=<asg-name> \
  --start-time 2024-01-01T00:00:00Z \
  --end-time 2024-01-02T00:00:00Z \
  --period 3600 \
  --statistics Sum

Lambda Function Logs

Monitor license assignment and lifecycle events:

# Stream Lambda logs
aws logs tail /aws/lambda/<function-name> --follow

# Search for errors
aws logs filter-log-events \
  --log-group-name /aws/lambda/<function-name> \
  --filter-pattern "ERROR"

# Search for license assignments
aws logs filter-log-events \
  --log-group-name /aws/lambda/<function-name> \
  --filter-pattern "license"

Auto Scaling Group Activity

# View scaling activities
aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name <asg-name> \
  --max-records 20

# View current capacity
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names <asg-name> \
  --query 'AutoScalingGroups[0].[MinSize,DesiredCapacity,MaxSize]'

Troubleshooting

Issue: Instances Launch But Don’t Get Licensed

Symptoms:

  • Instances running but showing unlicensed
  • Throughput limited to 1 Mbps
  • FortiGuard services not working

Causes and Solutions:

For BYOL:

  1. Check license files exist in directory:

    ls -la asg_license/
  2. Check S3 bucket has licenses uploaded:

    aws s3 ls s3://<bucket-name>/licenses/
  3. Check Lambda CloudWatch logs for errors:

    aws logs tail /aws/lambda/<function-name> --follow | grep -i error
  4. Verify DynamoDB table has available licenses:

    aws dynamodb scan --table-name <table-name>

For FortiFlex:

  1. Check Lambda CloudWatch logs for API errors
  2. Verify FortiFlex credentials are correct
  3. Check point balance in FortiFlex portal
  4. Verify configuration ID matches instance CPU count
  5. Check entitlements created in FortiFlex portal

For PAYG:

  1. Verify AWS Marketplace subscription is active
  2. Check instance profile has correct permissions
  3. Verify internet connectivity from FortiGate

Issue: Cannot Access FortiGate GUI

Symptoms:

  • Timeout when accessing FortiGate IP
  • Connection refused

Solutions:

  1. Verify instance is running:

    aws ec2 describe-instances --instance-ids <instance-id>
  2. Check security groups allow your IP:

    aws ec2 describe-security-groups --group-ids <sg-id>
  3. Verify you’re using correct port (default 443):

    https://<fortigate-ip>:443
  4. Try alternate access methods:

    # SSH to check if instance is responsive
    ssh -i ~/.ssh/keypair.pem admin@<fortigate-ip>
    
    # Check system status
    get system status
  5. If using dedicated management VPC:

    • Ensure you’re accessing via correct IP (management interface)
    • Check VPC peering or TGW attachment is working
    • Verify route tables allow return traffic

Issue: Traffic Not Flowing Through FortiGate

Symptoms:

  • No traffic visible in FortiGate logs
  • Connectivity tests bypass FortiGate
  • Sessions not appearing on FortiGate

Solutions:

  1. Verify TGW routing (if using TGW):

    # Check TGW route tables
    aws ec2 describe-transit-gateway-route-tables \
      --transit-gateway-id <tgw-id>
    
    # Verify routes point to inspection VPC attachment
    aws ec2 search-transit-gateway-routes \
      --transit-gateway-route-table-id <spoke-rt-id> \
      --filters "Name=state,Values=active"
  2. Check GWLB health checks:

    aws elbv2 describe-target-health \
      --target-group-arn <gwlb-target-group-arn>
  3. Verify FortiGate firewall policies:

    # SSH to FortiGate
    ssh admin@<fortigate-ip>
    
    # Check policies
    show firewall policy
    
    # Enable debug
    diagnose debug flow trace start 10
    diagnose debug enable
    # Generate traffic and watch logs
  4. Check spoke VPC route tables (for distributed architecture):

    # Verify routes point to GWLB endpoints
    aws ec2 describe-route-tables \
      --filters "Name=vpc-id,Values=<spoke-vpc-id>"

Issue: Primary Election Issues

Symptoms:

  • No primary instance elected
  • Multiple instances think they’re primary
  • HA sync not working

Solutions:

  1. Check Lambda logs for election logic:

    aws logs tail /aws/lambda/<function-name> --follow | grep -i primary
  2. Verify enable_fgt_system_autoscale = true:

    # On FortiGate
    get system auto-scale
  3. Check for network connectivity between instances:

    # From one FortiGate, ping another
    execute ping <other-fortigate-private-ip>
  4. Manually verify auto-scale configuration:

    # SSH to FortiGate
    ssh admin@<fortigate-ip>
    
    # Check auto-scale config
    show system auto-scale
    
    # Should show:
    # set status enable
    # set role primary (or secondary)
    # set sync-interface "port1"
    # set psksecret "..."

Issue: FortiManager Integration Not Working

Symptoms:

  • FortiGate doesn’t appear in FortiManager device list
  • Device shows as unauthorized but can’t authorize
  • Connection errors in FortiManager

Solutions:

  1. Verify that VM recognition is enabled (requires FortiManager 7.6.3 or later):

    # On FortiManager CLI
    show system global | grep fgfm-allow-vm
    # Should show: set fgfm-allow-vm enable
  2. Check network connectivity:

    # From FortiGate
    execute ping <fortimanager-ip>
    
    # Check FortiManager reachability
    diagnose debug application fgfmd -1
    diagnose debug enable
  3. Verify central-management config:

    # On FortiGate
    show system central-management
    
    # Should show:
    # set type fortimanager
    # set fmg <fortimanager-ip>
    # set serial-number <fmgr-sn>
  4. Check FortiManager logs:

    # On FortiManager CLI
    diagnose debug application fgfmd -1
    diagnose debug enable
    # Watch for connection attempts from FortiGate
  5. Verify only primary instance has central-management config:

    # On primary: Should have config
    show system central-management
    
    # On secondary: Should NOT have config (or be blocked by vdom-exception)
    show system vdom-exception

Outputs Reference

Important outputs from the template:

terraform output
Output | Description | Use Case
------ | ----------- | --------
inspection_vpc_id | ID of inspection VPC | VPC peering, routing configuration
inspection_vpc_cidr | CIDR of inspection VPC | Route table configuration
gwlb_arn | Gateway Load Balancer ARN | GWLB endpoint creation
gwlb_endpoint_az1_id | GWLB endpoint ID in AZ1 | Spoke VPC route tables
gwlb_endpoint_az2_id | GWLB endpoint ID in AZ2 | Spoke VPC route tables
fortigate_autoscale_group_name | BYOL ASG name | CloudWatch, monitoring
fortigate_ondemand_autoscale_group_name | PAYG ASG name | CloudWatch, monitoring
lambda_function_name | Lifecycle Lambda function name | CloudWatch logs, debugging
dynamodb_table_name | License tracking table name | License management
s3_bucket_name | License storage bucket name | License management
tgw_attachment_id | TGW attachment ID | TGW routing configuration
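All of these outputs can also be read programmatically with terraform output -json, which is handy when wiring spoke route tables or monitoring scripts. A small Python sketch (the JSON payload here is a hypothetical stand-in for the real command's output, which wraps each output in a {"type": ..., "value": ...} object):

```python
import json

# Stand-in for: subprocess.run(["terraform", "output", "-json"], ...)
raw = '''{
  "gwlb_endpoint_az1_id": {"type": "string", "value": "vpce-0aaa1111bbbb2222c"},
  "gwlb_endpoint_az2_id": {"type": "string", "value": "vpce-0ddd3333eeee4444f"}
}'''

# Unwrap each output to its bare value
outputs = {name: body["value"] for name, body in json.loads(raw).items()}
print(outputs["gwlb_endpoint_az1_id"])  # vpce-0aaa1111bbbb2222c
```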

Best Practices

Pre-Deployment

  1. Plan capacity thoroughly: Use Autoscale Group Capacity guidance
  2. Test in dev/test first: Validate configuration before production
  3. Document customizations: Maintain runbook of configuration decisions
  4. Review security groups: Ensure least-privilege access
  5. Coordinate with network team: Verify CIDR allocations don’t conflict

During Deployment

  1. Monitor Lambda logs: Watch for errors during instance launch
  2. Verify license assignments: Check first instance gets licensed before scaling
  3. Test connectivity incrementally: Validate routing at each step
  4. Document public IPs: Save instance IPs for troubleshooting access

Post-Deployment

  1. Configure firewall policies immediately: Don’t leave FortiGates in pass-through mode
  2. Enable security profiles: IPS, Application Control, Web Filtering
  3. Set up monitoring: CloudWatch alarms, FortiGate logging
  4. Test failover scenarios: Verify autoscaling behavior
  5. Document recovery procedures: Maintain runbook for common issues

Ongoing Operations

  1. Monitor autoscale events: Review CloudWatch metrics weekly
  2. Update FortiOS regularly: Test updates in dev first
  3. Review firewall logs: Look for blocked traffic patterns
  4. Optimize scaling thresholds: Adjust based on observed traffic
  5. Plan capacity additions: Add licenses/entitlements before needed

Cleanup

Destroying the Deployment

To destroy the autoscale_template infrastructure:

cd terraform/autoscale_template
terraform destroy

Type yes when prompted.

Warning

Destroy Order is Critical

If you also deployed existing_vpc_resources, destroy in this order:

  1. First: Destroy autoscale_template (this template)
  2. Second: Destroy existing_vpc_resources

Why? The inspection VPC has a Transit Gateway attachment to the TGW created by existing_vpc_resources. Destroying the TGW first will cause the attachment deletion to fail.

# Correct order:
cd terraform/autoscale_template
terraform destroy

cd ../existing_vpc_resources
terraform destroy

Selective Cleanup

To destroy only specific components:

# Destroy only BYOL ASG
terraform destroy -target=module.fortigate_byol_asg

# Destroy only on-demand ASG
terraform destroy -target=module.fortigate_ondemand_asg

# Destroy only Lambda and DynamoDB
terraform destroy -target=module.lambda_functions
terraform destroy -target=module.dynamodb_table

Verify Complete Cleanup

After destroying, verify no resources remain:

# Check VPCs
aws ec2 describe-vpcs --filters "Name=tag:cp,Values=acme" "Name=tag:env,Values=prod"

# Check running instances
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
           "Name=tag:cp,Values=acme"

# Check GWLB
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[?contains(LoadBalancerName, `acme`)]'

# Check Lambda functions
aws lambda list-functions --query 'Functions[?contains(FunctionName, `acme`)]'

Summary

The autoscale_template deploys FortiGate autoscale into an existing Inspection VPC discovered via Fortinet-Role tags:

✅ Tag-based resource discovery: Finds Inspection VPC resources via Fortinet-Role tags
✅ Complete autoscale infrastructure: FortiGate ASG, GWLB, Lambda, IAM
✅ Flexible deployment options: Centralized, distributed, or hybrid architectures
✅ Multiple licensing models: BYOL, FortiFlex, PAYG, or hybrid
✅ Management options: Dedicated ENI, dedicated VPC, FortiManager integration
✅ Production-ready: High availability, autoscaling, lifecycle management

Key Requirements:

  • Run existing_vpc_resources first to create Inspection VPC with Fortinet-Role tags
  • Ensure cp and env values match between both templates for tag discovery

Next Steps:


Document Version: 1.0
Last Updated: November 2025
Terraform Module Version: Compatible with terraform-aws-cloud-modules v1.0+