Terraform Security Best Practices: Infrastructure as Code Done Right
After managing Terraform deployments across hundreds of cloud environments, I've learned that Infrastructure as Code security isn't just about writing secure configurations—it's about building security into every layer of your IaC pipeline.
The Security-First Terraform Approach
Why Terraform Security Matters
When your infrastructure is defined in code, a single misconfiguration can expose entire environments. I've seen companies lose millions due to accidentally public S3 buckets, overly permissive security groups, and leaked credentials in state files.
Here's my systematic approach to securing Terraform from development to production.
State Management Security
Remote State with Proper Encryption
Never store Terraform state locally or in version control. Here's my secure backend configuration:
```hcl
# terraform/backend.tf
terraform {
  backend "s3" {
    bucket               = "mycompany-terraform-state-prod"
    key                  = "infrastructure/prod/terraform.tfstate"
    region               = "us-west-2"
    encrypt              = true
    kms_key_id           = "arn:aws:kms:us-west-2:123456789012:key/12345678-1234-1234-1234-123456789012"
    dynamodb_table       = "terraform-state-lock"
    workspace_key_prefix = "workspaces"

    # Note: versioning and prevent_destroy are not valid backend arguments.
    # They belong on the state bucket resource itself (see below).
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  required_version = ">= 1.0"
}
```
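One wrinkle worth knowing: the backend block is resolved before variables, so it can't reference `var.*` or locals. For per-environment values, a partial configuration file passed at init time works well. A minimal sketch (the file name is just a convention; `.tfbackend` is the suffix Terraform's docs suggest):

```hcl
# backend-prod.tfbackend
# Usage: terraform init -backend-config=backend-prod.tfbackend
bucket         = "mycompany-terraform-state-prod"
key            = "infrastructure/prod/terraform.tfstate"
region         = "us-west-2"
dynamodb_table = "terraform-state-lock"
```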
Secure State Bucket Configuration
```hcl
# terraform/state-bucket.tf
data "aws_caller_identity" "current" {}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "mycompany-terraform-state-${var.environment}"

  # Prevent accidental deletion of the state bucket
  lifecycle {
    prevent_destroy = true
  }

  tags = {
    Name        = "Terraform State"
    Environment = var.environment
    Purpose     = "Infrastructure State Storage"
  }
}

resource "aws_s3_bucket_public_access_block" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.terraform_state.arn
      sse_algorithm     = "aws:kms"
    }
    bucket_key_enabled = true
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "terraform_state" {
  depends_on = [aws_s3_bucket_versioning.terraform_state]
  bucket     = aws_s3_bucket.terraform_state.id

  rule {
    id     = "terraform_state_lifecycle"
    status = "Enabled"

    noncurrent_version_expiration {
      noncurrent_days = 90
    }

    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}

# KMS key for state encryption
resource "aws_kms_key" "terraform_state" {
  description             = "KMS key for Terraform state encryption"
  deletion_window_in_days = 7
  enable_key_rotation     = true

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "Enable IAM User Permissions"
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root"
        }
        Action   = "kms:*"
        Resource = "*"
      },
      {
        Sid    = "Allow Terraform access"
        Effect = "Allow"
        Principal = {
          # Assumes a terraform_execution role defined elsewhere in your IAM config
          AWS = aws_iam_role.terraform_execution.arn
        }
        Action = [
          "kms:Decrypt",
          "kms:DescribeKey",
          "kms:Encrypt",
          "kms:GenerateDataKey",
          "kms:ReEncrypt*"
        ]
        Resource = "*"
      }
    ]
  })

  tags = {
    Name = "terraform-state-key"
  }
}

resource "aws_kms_alias" "terraform_state" {
  name          = "alias/terraform-state"
  target_key_id = aws_kms_key.terraform_state.key_id
}

# State locking with DynamoDB
resource "aws_dynamodb_table" "terraform_state_lock" {
  name         = "terraform-state-lock"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }

  server_side_encryption {
    enabled     = true
    kms_key_arn = aws_kms_key.terraform_state.arn
  }

  point_in_time_recovery {
    enabled = true
  }

  tags = {
    Name = "terraform-state-lock"
  }
}
```
Secrets Management
Never Hardcode Secrets
```hcl
# ❌ NEVER DO THIS
resource "aws_db_instance" "bad_example" {
  allocated_storage = 20
  engine            = "mysql"
  engine_version    = "5.7"
  instance_class    = "db.t3.micro"

  # DON'T HARDCODE CREDENTIALS!
  username = "admin"
  password = "supersecret123" # This will be in the state file!
}

# ✅ DO THIS INSTEAD
resource "aws_db_instance" "good_example" {
  allocated_storage = 20
  engine            = "mysql"
  engine_version    = "8.0"
  instance_class    = "db.t3.micro"
  username          = var.db_username

  # Option 1: let AWS generate and manage the master password
  manage_master_user_password = true

  # Option 2: reference an external secrets manager instead
  # (mutually exclusive with manage_master_user_password)
  # password = data.aws_secretsmanager_secret_version.db_password.secret_string
}

# Lookups supporting option 2
data "aws_secretsmanager_secret" "db_password" {
  name = "prod/database/master-password"
}

data "aws_secretsmanager_secret_version" "db_password" {
  secret_id = data.aws_secretsmanager_secret.db_password.id
}
```
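If Terraform itself has to create the secret, the random provider plus Secrets Manager is a common pattern. A minimal sketch (resource names are illustrative); keep in mind the generated value still lands in state, which is one more reason the backend must be encrypted and access-controlled:

```hcl
resource "random_password" "db_master" {
  length  = 24
  special = true
}

resource "aws_secretsmanager_secret" "db_master" {
  # Encrypted with the AWS-managed key by default; pass kms_key_id to use a CMK
  name = "prod/database/master-password"
}

resource "aws_secretsmanager_secret_version" "db_master" {
  secret_id     = aws_secretsmanager_secret.db_master.id
  secret_string = random_password.db_master.result
}
```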
Secure Variable Handling
```hcl
# variables.tf
variable "db_username" {
  description = "Database master username"
  type        = string
  sensitive   = true
}

variable "environment" {
  description = "Environment name"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "Environment must be dev, staging, or prod."
  }
}

variable "allowed_cidr_blocks" {
  description = "CIDR blocks allowed to access resources"
  type        = list(string)

  validation {
    condition     = length(var.allowed_cidr_blocks) > 0
    error_message = "At least one CIDR block must be specified."
  }

  validation {
    condition = alltrue([
      for cidr in var.allowed_cidr_blocks :
      can(regex("^([0-9]{1,3}\\.){3}[0-9]{1,3}/[0-9]{1,2}$", cidr))
    ])
    error_message = "All CIDR blocks must be valid IPv4 CIDR notation."
  }
}
```
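One caveat worth spelling out: `sensitive = true` only redacts values from plan and apply output; the value is still stored in plaintext in the state file. Sensitivity also propagates, so Terraform will error if an output derived from a sensitive variable isn't itself marked sensitive:

```hcl
output "db_username" {
  description = "Master username (redacted in CLI output, still present in state)"
  value       = var.db_username
  sensitive   = true # required because the source variable is sensitive
}
```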
Network Security Patterns
VPC Security Best Practices
```hcl
# vpc.tf - Secure VPC configuration
resource "aws_vpc" "main" {
  cidr_block           = var.vpc_cidr
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "${var.project_name}-vpc-${var.environment}"
  }
}

# Private subnets for application tier
resource "aws_subnet" "private" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, count.index + 1)
  availability_zone = var.availability_zones[count.index]

  # Never auto-assign public IPs to private subnets
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.project_name}-private-${var.availability_zones[count.index]}"
    Type = "Private"
  }
}

# Public subnets only for load balancers
resource "aws_subnet" "public" {
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(var.vpc_cidr, 8, count.index + 101)
  availability_zone = var.availability_zones[count.index]

  # Only auto-assign public IPs in public subnets
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.project_name}-public-${var.availability_zones[count.index]}"
    Type = "Public"
  }
}

# Internet Gateway only for public subnets
resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "${var.project_name}-igw"
  }
}

# NAT Gateways for private subnet internet access
resource "aws_eip" "nat" {
  count      = length(aws_subnet.public)
  domain     = "vpc"
  depends_on = [aws_internet_gateway.main]

  tags = {
    Name = "${var.project_name}-nat-eip-${count.index + 1}"
  }
}

resource "aws_nat_gateway" "main" {
  count         = length(aws_subnet.public)
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id
  depends_on    = [aws_internet_gateway.main]

  tags = {
    Name = "${var.project_name}-nat-${count.index + 1}"
  }
}
```
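One thing the snippet above doesn't show is route tables: without them the NAT gateways exist but carry no traffic. A minimal sketch of the private-side wiring, assuming one NAT gateway per AZ as above (the public route table pointing at the internet gateway is analogous):

```hcl
resource "aws_route_table" "private" {
  count  = length(aws_subnet.private)
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "${var.project_name}-private-rt-${count.index + 1}"
  }
}

# Default route through the NAT gateway in the same AZ
resource "aws_route" "private_nat" {
  count                  = length(aws_route_table.private)
  route_table_id         = aws_route_table.private[count.index].id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.main[count.index].id
}

resource "aws_route_table_association" "private" {
  count          = length(aws_subnet.private)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}
```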
Security Groups with Least Privilege
```hcl
# security-groups.tf
# Web tier security group - only allow necessary ports
resource "aws_security_group" "web_tier" {
  name_prefix = "${var.project_name}-web-"
  vpc_id      = aws_vpc.main.id

  ingress {
    description     = "HTTPS from ALB"
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  ingress {
    description     = "HTTP from ALB"
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb.id]
  }

  # Outbound rules - be specific
  egress {
    description = "HTTPS to internet"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }

  tags = {
    Name = "${var.project_name}-web-sg"
  }
}

# Database security group - very restrictive
resource "aws_security_group" "database" {
  name_prefix = "${var.project_name}-db-"
  vpc_id      = aws_vpc.main.id

  # No outbound rules needed for RDS

  lifecycle {
    create_before_destroy = true
  }

  tags = {
    Name = "${var.project_name}-db-sg"
  }
}

# The web<->database rules live in standalone resources. Defining them
# inline would make web_tier and database reference each other and
# produce a Terraform dependency cycle.
resource "aws_security_group_rule" "web_to_db" {
  type                     = "egress"
  description              = "Database access"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = aws_security_group.web_tier.id
  source_security_group_id = aws_security_group.database.id
}

resource "aws_security_group_rule" "db_from_web" {
  type                     = "ingress"
  description              = "MySQL from web tier"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = aws_security_group.database.id
  source_security_group_id = aws_security_group.web_tier.id
}

# ALB security group
resource "aws_security_group" "alb" {
  name_prefix = "${var.project_name}-alb-"
  vpc_id      = aws_vpc.main.id

  ingress {
    description = "HTTPS from internet"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTP from internet (redirect to HTTPS)"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    description = "All outbound traffic"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }

  tags = {
    Name = "${var.project_name}-alb-sg"
  }
}
```
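If you're on AWS provider 5.x (as pinned earlier), the newer standalone rule resources are worth a look: they manage each rule as its own object and avoid inline-rule drift entirely. The database ingress above might look like this (a sketch, same semantics as the `aws_security_group_rule` version):

```hcl
resource "aws_vpc_security_group_ingress_rule" "db_from_web" {
  security_group_id            = aws_security_group.database.id
  referenced_security_group_id = aws_security_group.web_tier.id
  from_port                    = 3306
  to_port                      = 3306
  ip_protocol                  = "tcp"
  description                  = "MySQL from web tier"
}
```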
IAM Security Best Practices
Least Privilege IAM Policies
```hcl
# iam.tf
# Application execution role with minimal permissions
resource "aws_iam_role" "app_execution" {
  name = "${var.project_name}-app-execution-${var.environment}"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
        Condition = {
          StringEquals = {
            "aws:RequestedRegion" = var.aws_region
          }
        }
      }
    ]
  })

  tags = {
    Name = "${var.project_name}-app-execution-role"
  }
}

# Custom policy with specific permissions
resource "aws_iam_policy" "app_policy" {
  name = "${var.project_name}-app-policy-${var.environment}"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid    = "CloudWatchLogs"
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "arn:aws:logs:${var.aws_region}:${data.aws_caller_identity.current.account_id}:log-group:/aws/${var.project_name}/*"
      },
      {
        Sid    = "SecretManagerAccess"
        Effect = "Allow"
        Action = [
          "secretsmanager:GetSecretValue"
        ]
        Resource = [
          "arn:aws:secretsmanager:${var.aws_region}:${data.aws_caller_identity.current.account_id}:secret:${var.environment}/${var.project_name}/*"
        ]
      },
      {
        Sid    = "ParameterStoreAccess"
        Effect = "Allow"
        Action = [
          "ssm:GetParameter",
          "ssm:GetParameters"
        ]
        Resource = [
          "arn:aws:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/${var.environment}/${var.project_name}/*"
        ]
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "app_policy" {
  role       = aws_iam_role.app_execution.name
  policy_arn = aws_iam_policy.app_policy.arn
}

# Cross-account access with conditions
resource "aws_iam_role" "cross_account_read" {
  name = "${var.project_name}-cross-account-read"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Principal = {
          AWS = "arn:aws:iam::${var.trusted_account_id}:root"
        }
        Action = "sts:AssumeRole"
        Condition = {
          StringEquals = {
            "sts:ExternalId" = var.external_id
          }
          IpAddress = {
            "aws:SourceIp" = var.trusted_ip_ranges
          }
          DateGreaterThan = {
            "aws:CurrentTime" = "2024-01-01T00:00:00Z"
          }
          DateLessThan = {
            "aws:CurrentTime" = "2024-12-31T23:59:59Z"
          }
        }
      }
    ]
  })
}
```
Service-Specific IAM Patterns
```hcl
# Lambda execution role with specific permissions
resource "aws_iam_role" "lambda_execution" {
  name = "${var.project_name}-lambda-execution"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_policy" "lambda_policy" {
  name = "${var.project_name}-lambda-policy"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ]
        Resource = "arn:aws:logs:*:*:*"
      },
      {
        Effect = "Allow"
        Action = [
          "s3:GetObject"
        ]
        # Scope object access via resource ARNs; the s3:prefix condition
        # key only applies to ListBucket, not GetObject
        Resource = [
          "${aws_s3_bucket.app_bucket.arn}/uploads/*",
          "${aws_s3_bucket.app_bucket.arn}/public/*"
        ]
      }
    ]
  })
}

# ECS task execution role
resource "aws_iam_role" "ecs_task_execution" {
  name = "${var.project_name}-ecs-task-execution"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution" {
  role       = aws_iam_role.ecs_task_execution.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

# Custom ECS task role for application permissions
resource "aws_iam_role" "ecs_task" {
  name = "${var.project_name}-ecs-task"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ecs-tasks.amazonaws.com"
        }
      }
    ]
  })
}
```
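One gap in the snippet above: `lambda_policy` is defined but never attached to the execution role. The attachment is one more resource:

```hcl
resource "aws_iam_role_policy_attachment" "lambda_policy" {
  role       = aws_iam_role.lambda_execution.name
  policy_arn = aws_iam_policy.lambda_policy.arn
}
```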
Resource Security Configuration
S3 Bucket Security
```hcl
# s3.tf - Comprehensive S3 security
resource "aws_s3_bucket" "app_bucket" {
  bucket = "${var.project_name}-${var.environment}-${random_id.bucket_suffix.hex}"

  tags = {
    Name        = "${var.project_name}-app-bucket"
    Environment = var.environment
  }
}

resource "random_id" "bucket_suffix" {
  byte_length = 4
}

# Block all public access
resource "aws_s3_bucket_public_access_block" "app_bucket" {
  bucket = aws_s3_bucket.app_bucket.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Enable encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "app_bucket" {
  bucket = aws_s3_bucket.app_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.s3_key.arn
      sse_algorithm     = "aws:kms"
    }
    bucket_key_enabled = true
  }
}

# Enable versioning. MFA delete can only be toggled by the root user with
# an MFA device, so it is typically enabled out-of-band rather than via
# Terraform; there is no standalone aws_s3_bucket_mfa_delete resource.
resource "aws_s3_bucket_versioning" "app_bucket" {
  bucket = aws_s3_bucket.app_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Lifecycle policy
resource "aws_s3_bucket_lifecycle_configuration" "app_bucket" {
  depends_on = [aws_s3_bucket_versioning.app_bucket]
  bucket     = aws_s3_bucket.app_bucket.id

  rule {
    id     = "main"
    status = "Enabled"

    expiration {
      days = 365
    }

    noncurrent_version_expiration {
      noncurrent_days = 30
    }

    abort_incomplete_multipart_upload {
      days_after_initiation = 7
    }
  }
}

# Bucket policy with strict access controls
resource "aws_s3_bucket_policy" "app_bucket" {
  bucket = aws_s3_bucket.app_bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyInsecureConnections"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.app_bucket.arn,
          "${aws_s3_bucket.app_bucket.arn}/*"
        ]
        Condition = {
          Bool = {
            "aws:SecureTransport" = "false"
          }
        }
      },
      {
        Sid       = "DenyUnencryptedUploads"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:PutObject"
        Resource  = "${aws_s3_bucket.app_bucket.arn}/*"
        Condition = {
          StringNotEquals = {
            "s3:x-amz-server-side-encryption" = "aws:kms"
          }
        }
      }
    ]
  })
}

# Forward object-level events to EventBridge for security monitoring.
# (S3 notifications target EventBridge, SNS, SQS, or Lambda; there is
# no direct CloudWatch notification destination.)
resource "aws_s3_bucket_notification" "app_bucket" {
  bucket      = aws_s3_bucket.app_bucket.id
  eventbridge = true
}
```
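The `aws_kms_key.s3_key` referenced above isn't defined in the snippet. A minimal definition with rotation enabled (this sketch uses the default key policy, which may be broader than you want; the alias name is illustrative):

```hcl
resource "aws_kms_key" "s3_key" {
  description             = "KMS key for application S3 bucket encryption"
  deletion_window_in_days = 7
  enable_key_rotation     = true
}

resource "aws_kms_alias" "s3_key" {
  name          = "alias/${var.project_name}-s3"
  target_key_id = aws_kms_key.s3_key.key_id
}
```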
RDS Security Configuration
```hcl
# rds.tf - Secure RDS configuration
resource "aws_db_subnet_group" "main" {
  name       = "${var.project_name}-db-subnet-group"
  subnet_ids = aws_subnet.private[*].id

  tags = {
    Name = "${var.project_name}-db-subnet-group"
  }
}

resource "aws_db_parameter_group" "main" {
  family = "mysql8.0"
  name   = "${var.project_name}-db-params"

  # Security-focused parameters
  parameter {
    name  = "log_bin_trust_function_creators"
    value = "0"
  }

  parameter {
    name  = "general_log"
    value = "1"
  }

  parameter {
    name  = "slow_query_log"
    value = "1"
  }

  parameter {
    name  = "long_query_time"
    value = "2"
  }
}

resource "aws_db_instance" "main" {
  identifier = "${var.project_name}-${var.environment}"

  allocated_storage     = var.db_allocated_storage
  max_allocated_storage = var.db_max_allocated_storage
  storage_type          = "gp3"
  storage_encrypted     = true
  # rds_key follows the same pattern as the s3_key defined above
  kms_key_id = aws_kms_key.rds_key.arn

  engine         = "mysql"
  engine_version = "8.0.35"
  instance_class = var.db_instance_class

  # Use AWS managed password
  manage_master_user_password = true
  db_name                     = var.db_name
  username                    = var.db_username

  vpc_security_group_ids = [aws_security_group.database.id]
  db_subnet_group_name   = aws_db_subnet_group.main.name
  parameter_group_name   = aws_db_parameter_group.main.name

  # Security settings
  publicly_accessible     = false
  multi_az                = var.environment == "prod"
  backup_retention_period = var.environment == "prod" ? 30 : 7
  backup_window           = "03:00-04:00"
  maintenance_window      = "Sun:04:00-Sun:05:00"

  # Enable logging (the MySQL export name is "slowquery", not "slow_query")
  enabled_cloudwatch_logs_exports = ["error", "general", "slowquery"]

  # Enable automatic minor version upgrades
  auto_minor_version_upgrade = true

  # Performance Insights
  performance_insights_enabled          = true
  performance_insights_kms_key_id       = aws_kms_key.rds_key.arn
  performance_insights_retention_period = 7

  # Monitoring
  monitoring_interval = 60
  monitoring_role_arn = aws_iam_role.rds_enhanced_monitoring.arn

  deletion_protection = var.environment == "prod"

  tags = {
    Name = "${var.project_name}-database"
  }
}

# Enhanced monitoring role
resource "aws_iam_role" "rds_enhanced_monitoring" {
  name = "${var.project_name}-rds-enhanced-monitoring"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "monitoring.rds.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "rds_enhanced_monitoring" {
  role       = aws_iam_role.rds_enhanced_monitoring.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonRDSEnhancedMonitoringRole"
}
```
Security Scanning and Validation
Terraform Security Automation
```bash
#!/bin/bash
# security-scan.sh - Automated security scanning pipeline
set -e

echo "🔍 Starting Terraform Security Scan..."

# 1. Format check
echo "📝 Checking Terraform formatting..."
terraform fmt -check -recursive

# 2. Validate configuration
echo "✅ Validating Terraform configuration..."
terraform validate

# 3. Run tfsec for security issues
echo "🔒 Running tfsec security scan..."
tfsec . --format json --out tfsec-results.json
tfsec . --format checkstyle --out tfsec-results.xml

# 4. Run Checkov for compliance
echo "📋 Running Checkov compliance scan..."
checkov -d . --framework terraform --output json --output-file-path checkov-results
checkov -d . --framework terraform --output cli

# 5. Run Terrascan
echo "🎯 Running Terrascan..."
terrascan scan -i terraform -t aws -d . -o json > terrascan-results.json

# 6. Run custom security checks
echo "🔧 Running custom security checks..."
python3 custom-security-checks.py

# 7. Generate security report
echo "📊 Generating security report..."
python3 generate-security-report.py

echo "✅ Security scan completed. Check results in security-report.html"
```
Custom Security Validation Script
```python
#!/usr/bin/env python3
"""Custom Terraform Security Checks"""

import re
import sys
from pathlib import Path


class TerraformSecurityChecker:
    def __init__(self):
        self.issues = []
        self.warnings = []

    def check_hardcoded_secrets(self):
        """Check for hardcoded secrets in Terraform files"""
        secret_patterns = [
            r'password\s*=\s*["\'][^"\']{8,}["\']',
            r'secret\s*=\s*["\'][^"\']{16,}["\']',
            r'api_key\s*=\s*["\'][^"\']{20,}["\']',
            r'private_key\s*=\s*["\'].*BEGIN.*PRIVATE.*KEY["\']',
        ]
        for tf_file in Path('.').rglob('*.tf'):
            content = tf_file.read_text()
            for pattern in secret_patterns:
                if re.findall(pattern, content, re.IGNORECASE):
                    self.issues.append({
                        'file': str(tf_file),
                        'issue': 'Hardcoded secret detected',
                        'details': f'Pattern: {pattern}',
                        'severity': 'HIGH',
                    })

    def check_public_access(self):
        """Check for resources that might be publicly accessible"""
        public_patterns = [
            r'cidr_blocks\s*=\s*\[.*"0\.0\.0\.0/0".*\]',
            r'publicly_accessible\s*=\s*true',
            r'map_public_ip_on_launch\s*=\s*true',
        ]
        for tf_file in Path('.').rglob('*.tf'):
            content = tf_file.read_text()
            for pattern in public_patterns:
                if re.findall(pattern, content):
                    self.warnings.append({
                        'file': str(tf_file),
                        'issue': 'Potential public access',
                        'details': f'Pattern: {pattern}',
                        'severity': 'MEDIUM',
                    })

    def check_encryption_settings(self):
        """Check for missing encryption settings"""
        encryption_checks = [
            ('aws_s3_bucket', 'server_side_encryption_configuration'),
            ('aws_db_instance', 'storage_encrypted'),
            ('aws_ebs_volume', 'encrypted'),
            ('aws_rds_cluster', 'storage_encrypted'),
        ]
        for tf_file in Path('.').rglob('*.tf'):
            content = tf_file.read_text()
            for resource_type, encryption_attr in encryption_checks:
                # Find resource blocks (raw f-string so \s and \w stay regex escapes)
                resource_pattern = rf'resource\s+"{resource_type}"\s+"\w+"\s*\{{'
                resources = re.findall(resource_pattern, content)
                if resources and encryption_attr not in content:
                    self.issues.append({
                        'file': str(tf_file),
                        'issue': f'Missing encryption for {resource_type}',
                        'details': f'Missing {encryption_attr}',
                        'severity': 'HIGH',
                    })

    def check_default_security_groups(self):
        """Check for usage of default security groups"""
        for tf_file in Path('.').rglob('*.tf'):
            content = tf_file.read_text()
            if 'security_groups = ["default"]' in content:
                self.issues.append({
                    'file': str(tf_file),
                    'issue': 'Using default security group',
                    'details': 'Default security groups should not be used',
                    'severity': 'MEDIUM',
                })

    def generate_report(self):
        """Generate security report"""
        print("🔍 TERRAFORM SECURITY CHECK RESULTS")
        print("=" * 50)

        print(f"\n❌ CRITICAL ISSUES ({len(self.issues)}):")
        for issue in self.issues:
            print(f"  File: {issue['file']}")
            print(f"  Issue: {issue['issue']}")
            print(f"  Details: {issue['details']}")
            print(f"  Severity: {issue['severity']}\n")

        print(f"⚠️ WARNINGS ({len(self.warnings)}):")
        for warning in self.warnings:
            print(f"  File: {warning['file']}")
            print(f"  Issue: {warning['issue']}")
            print(f"  Details: {warning['details']}")
            print(f"  Severity: {warning['severity']}\n")

        # Return non-zero exit code if critical issues found
        if self.issues:
            print("❌ Security check failed due to critical issues")
            return 1
        print("✅ No critical security issues found")
        return 0


if __name__ == "__main__":
    checker = TerraformSecurityChecker()
    checker.check_hardcoded_secrets()
    checker.check_public_access()
    checker.check_encryption_settings()
    checker.check_default_security_groups()
    sys.exit(checker.generate_report())
```
CI/CD Pipeline Security
GitHub Actions Terraform Security Workflow
```yaml
# .github/workflows/terraform-security.yml
name: Terraform Security Scan

on:
  pull_request:
    paths:
      - 'terraform/**'
  push:
    branches:
      - main

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ~1.6.0

      - name: Terraform Format Check
        run: terraform fmt -check -recursive

      - name: Terraform Init
        run: terraform init -backend=false

      - name: Terraform Validate
        run: terraform validate

      - name: Run tfsec
        uses: aquasecurity/tfsec-action@v1.0.3
        with:
          soft_fail: false
          format: sarif
          additional_args: --out tfsec.sarif

      - name: Upload tfsec results to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: tfsec.sarif

      - name: Run Checkov
        uses: bridgecrewio/checkov-action@master
        with:
          directory: .
          framework: terraform
          output_format: sarif
          output_file_path: checkov.sarif

      - name: Upload Checkov results
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: checkov.sarif

      - name: Run custom security checks
        run: python3 scripts/custom-security-checks.py

      - name: Terraform Plan Security Review
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          terraform plan -out=tfplan
          terraform show -json tfplan > tfplan.json
          python3 scripts/analyze-terraform-plan.py tfplan.json
```
Plan Analysis Script
```python
#!/usr/bin/env python3
"""Analyze Terraform plan for security implications"""

import json
import sys


class TerraformPlanAnalyzer:
    def __init__(self, plan_file):
        with open(plan_file, 'r') as f:
            self.plan = json.load(f)
        self.security_concerns = []

    def analyze_resource_changes(self):
        """Analyze planned resource changes for security implications"""
        if 'resource_changes' not in self.plan:
            return

        for change in self.plan['resource_changes']:
            resource_type = change['type']
            change_action = change['change']['actions'][0]

            # Check for security-sensitive resources being created/modified
            if resource_type in ['aws_security_group', 'aws_iam_policy', 'aws_s3_bucket']:
                if change_action in ['create', 'update']:
                    self.analyze_security_resource(change)

    def analyze_security_resource(self, change):
        """Analyze specific security-sensitive resource changes"""
        resource_type = change['type']
        after_values = change['change'].get('after', {})

        if resource_type == 'aws_security_group':
            self.check_security_group_rules(change['name'], after_values)
        elif resource_type == 'aws_iam_policy':
            self.check_iam_policy(change['name'], after_values)
        elif resource_type == 'aws_s3_bucket':
            self.check_s3_bucket(change['name'], after_values)

    def check_security_group_rules(self, sg_name, values):
        """Check security group rules for overly permissive access"""
        for rule in values.get('ingress', []):
            cidr_blocks = rule.get('cidr_blocks', [])
            from_port = rule.get('from_port')
            to_port = rule.get('to_port')

            # Check for 0.0.0.0/0 access
            if '0.0.0.0/0' in cidr_blocks:
                # SSH access from anywhere
                if from_port == 22 or to_port == 22:
                    self.security_concerns.append({
                        'resource': sg_name,
                        'issue': 'SSH access from 0.0.0.0/0',
                        'severity': 'HIGH',
                        'recommendation': 'Restrict SSH access to specific IP ranges',
                    })
                # RDP access from anywhere
                if from_port == 3389 or to_port == 3389:
                    self.security_concerns.append({
                        'resource': sg_name,
                        'issue': 'RDP access from 0.0.0.0/0',
                        'severity': 'HIGH',
                        'recommendation': 'Restrict RDP access to specific IP ranges',
                    })

    def check_iam_policy(self, policy_name, values):
        """Check IAM policies for overly broad permissions"""
        policy_document = values.get('policy')
        if not policy_document:
            return
        try:
            policy = json.loads(policy_document)
        except json.JSONDecodeError:
            return

        for statement in policy.get('Statement', []):
            actions = statement.get('Action', [])
            resources = statement.get('Resource', [])

            # Check for wildcard permissions
            if '*' in actions and '*' in resources:
                self.security_concerns.append({
                    'resource': policy_name,
                    'issue': 'Overly broad IAM policy with * actions and * resources',
                    'severity': 'HIGH',
                    'recommendation': 'Use least privilege with specific actions and resources',
                })

    def check_s3_bucket(self, bucket_name, values):
        """Flag buckets planned with a public canned ACL"""
        acl = values.get('acl')
        if acl in ('public-read', 'public-read-write'):
            self.security_concerns.append({
                'resource': bucket_name,
                'issue': f'S3 bucket planned with public ACL "{acl}"',
                'severity': 'HIGH',
                'recommendation': 'Remove the public ACL and use a public access block',
            })

    def generate_report(self):
        """Generate security analysis report"""
        print("🔍 TERRAFORM PLAN SECURITY ANALYSIS")
        print("=" * 50)

        if not self.security_concerns:
            print("✅ No security concerns found in the Terraform plan")
            return 0

        print(f"⚠️ Found {len(self.security_concerns)} security concerns:\n")
        for concern in self.security_concerns:
            print(f"Resource: {concern['resource']}")
            print(f"Issue: {concern['issue']}")
            print(f"Severity: {concern['severity']}")
            print(f"Recommendation: {concern['recommendation']}\n")

        # Fail if high severity issues found
        high_severity = [c for c in self.security_concerns if c['severity'] == 'HIGH']
        if high_severity:
            print(f"❌ Found {len(high_severity)} HIGH severity security issues")
            return 1
        return 0


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python3 analyze-terraform-plan.py <plan.json>")
        sys.exit(1)

    analyzer = TerraformPlanAnalyzer(sys.argv[1])
    analyzer.analyze_resource_changes()
    sys.exit(analyzer.generate_report())
```
Key Takeaways
Securing Terraform requires a layered approach:
Development Phase
- Never hardcode secrets - use external secret managers
- Validate inputs - use variable validation rules
- Follow least privilege - especially for IAM policies
- Enable encryption - for all data at rest and in transit
Pipeline Phase
- Automated scanning - integrate multiple security tools
- Plan analysis - review changes before deployment
- Security gates - fail builds on critical issues
- State protection - secure backend configuration
Runtime Phase
- Continuous monitoring - watch for configuration drift
- Regular audits - review and update security posture
- Incident response - have playbooks for security events
Remember: Infrastructure as Code amplifies both good and bad practices. A single misconfigured resource can create massive security exposure, but proper IaC security practices create a strong foundation for your entire cloud environment.
Want to stay current on cloud security and Infrastructure as Code best practices? Subscribe to my newsletter for weekly insights and real-world examples.