PathShield Team · Tutorials · 14 min read

AWS Config Rules for Security Compliance Automation - Complete 2025 Guide

Automate security compliance with AWS Config Rules. Get 50+ production-ready rules, custom scripts, and real-world examples for SOC 2, PCI, and HIPAA compliance.

Your auditor just asked for proof that your S3 buckets have always been encrypted and never publicly accessible. You have 10,000 buckets across 50 accounts. Manual checking would take weeks. AWS Config Rules solve this in minutes, but most teams only scratch the surface of what’s possible.

Why AWS Config Rules Are Essential for Modern Compliance

Manual compliance checking problems:

  • Takes forever and is error-prone
  • Only shows current state, not historical compliance
  • Can’t prove continuous compliance to auditors
  • Reactive instead of proactive

AWS Config Rules solve this by:

  • Continuously monitoring all resource configurations
  • Maintaining complete compliance history
  • Automatically detecting violations in real-time
  • Providing audit-ready compliance reports
  • Enabling automatic remediation

Understanding AWS Config Architecture

AWS Config works in three layers:

  1. Configuration Items (CIs): Snapshots of resource configurations
  2. Configuration History: Timeline of all configuration changes
  3. Config Rules: Policies that evaluate CIs for compliance

# Config Rule evaluation flow (conceptual pseudocode; the helper
# functions are illustrative, not real APIs)
def evaluate_compliance():
    """How Config Rules work under the hood"""
    
    # 1. Resource changes trigger Config
    resource_change = detect_s3_bucket_change()
    
    # 2. Config captures configuration snapshot
    configuration_item = capture_configuration_snapshot(resource_change)
    
    # 3. Config Rules evaluate the snapshot
    compliance_result = evaluate_rules(configuration_item)
    
    # 4. Results stored and actions triggered
    store_compliance_result(compliance_result)
    trigger_remediation_if_needed(compliance_result)
    
    return compliance_result
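The configuration-history layer can also be queried directly, which is exactly what you need when an auditor asks what a resource looked like at a point in time. A minimal sketch of reading a bucket's snapshot history via `get_resource_config_history` (the client is passed in so it can be stubbed; in practice pass `boto3.client('config')`):

```python
def get_bucket_history(config_client, bucket_name, limit=10):
    """Return the most recent configuration snapshots for an S3 bucket."""
    response = config_client.get_resource_config_history(
        resourceType='AWS::S3::Bucket',
        resourceId=bucket_name,  # for S3, the resource ID is the bucket name
        limit=limit
    )
    # Each item records when the snapshot was captured and its status
    return [
        (item['configurationItemCaptureTime'], item['configurationItemStatus'])
        for item in response['configurationItems']
    ]
```

Walking further back in time is a matter of adding the `earlierTime`/`laterTime` parameters the same API accepts.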

Essential Security Config Rules Every Startup Needs

1. S3 Security Rules (Stop the Most Common Breach Vector)

{
  "ConfigRuleName": "s3-bucket-public-access-prohibited",
  "Description": "Checks if S3 buckets are publicly accessible",
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "S3_BUCKET_PUBLIC_ACCESS_PROHIBITED"
  },
  "Scope": {
    "ComplianceResourceTypes": ["AWS::S3::Bucket"]
  }
}

Complete S3 security rule set:

# Essential S3 Config Rules
s3_security_rules:
  - s3-bucket-public-access-prohibited
  - s3-bucket-server-side-encryption-enabled
  - s3-bucket-ssl-requests-only
  - s3-bucket-logging-enabled
  - s3-bucket-versioning-enabled
  - s3-bucket-default-lock-enabled
  - s3-bucket-policy-grantee-check

Custom S3 Rule for Sensitive Data:

# Lambda function for custom S3 compliance rule
import json
import boto3

def lambda_handler(event, context):
    """
    Custom Config Rule: S3 buckets with 'sensitive' tag must have:
    - Encryption enabled
    - Access logging enabled
    - Versioning enabled
    - No public access
    """
    
    config = boto3.client('config')
    s3 = boto3.client('s3')
    
    # The configuration item arrives JSON-encoded in the invokingEvent field
    invoking_event = json.loads(event['invokingEvent'])
    configuration_item = invoking_event['configurationItem']
    bucket_name = configuration_item['resourceName']
    
    compliance_type = 'COMPLIANT'
    annotation = ''
    
    try:
        # Check if bucket has 'sensitive' tag
        tags_response = s3.get_bucket_tagging(Bucket=bucket_name)
        tags = {tag['Key']: tag['Value'] for tag in tags_response['TagSet']}
        
        if tags.get('DataClassification', '').lower() == 'sensitive':
            # Enhanced checks for sensitive buckets
            violations = []
            
            # Check encryption
            try:
                s3.get_bucket_encryption(Bucket=bucket_name)
            except s3.exceptions.ClientError:
                violations.append('Missing encryption')
            
            # Check logging (get_bucket_logging succeeds either way;
            # logging is enabled only if 'LoggingEnabled' is present)
            logging_config = s3.get_bucket_logging(Bucket=bucket_name)
            if 'LoggingEnabled' not in logging_config:
                violations.append('Missing access logging')
            
            # Check versioning
            versioning = s3.get_bucket_versioning(Bucket=bucket_name)
            if versioning.get('Status') != 'Enabled':
                violations.append('Versioning not enabled')
            
            # Check public access block
            try:
                pab = s3.get_public_access_block(Bucket=bucket_name)['PublicAccessBlockConfiguration']
                if not all([pab['BlockPublicAcls'], pab['IgnorePublicAcls'], 
                          pab['BlockPublicPolicy'], pab['RestrictPublicBuckets']]):
                    violations.append('Public access not fully blocked')
            except s3.exceptions.ClientError:
                violations.append('No public access block configured')
            
            if violations:
                compliance_type = 'NON_COMPLIANT'
                annotation = f"Sensitive bucket violations: {', '.join(violations)}"
    
    except s3.exceptions.ClientError as e:
        if e.response['Error']['Code'] == 'NoSuchTagSet':
            # No tags, not sensitive, compliant
            pass
        else:
            compliance_type = 'NOT_APPLICABLE'
            annotation = f"Error checking bucket: {str(e)}"
    
    # Return compliance evaluation
    evaluation = {
        'ComplianceResourceType': configuration_item['resourceType'],
        'ComplianceResourceId': configuration_item['resourceId'],
        'ComplianceType': compliance_type,
        'Annotation': annotation,
        'OrderingTimestamp': configuration_item['configurationItemCaptureTime']
    }
    
    config.put_evaluations(
        Evaluations=[evaluation],
        ResultToken=event['resultToken']
    )
    
    return {
        'statusCode': 200,
        'body': json.dumps('Evaluation complete')
    }

2. IAM Security Rules (Prevent Privilege Escalation)

# Critical IAM Config Rules
iam_security_rules:
  - iam-user-mfa-enabled
  - iam-root-access-key-check
  - iam-user-unused-credentials-check
  - iam-policy-no-statements-with-admin-access
  - iam-user-no-policies-check
  - access-keys-rotated
  - iam-password-policy

Custom IAM Rule for Cross-Account Trust:

# Custom rule to check dangerous cross-account trust relationships
import json
import boto3

def evaluate_iam_role_trust(event, context):
    """Check IAM roles for risky cross-account trust policies"""
    
    invoking_event = json.loads(event['invokingEvent'])
    configuration_item = invoking_event['configurationItem']
    role_name = configuration_item['resourceName']
    
    iam = boto3.client('iam')
    
    try:
        role = iam.get_role(RoleName=role_name)
        assume_role_policy = role['Role']['AssumeRolePolicyDocument']
        
        violations = []
        
        for statement in assume_role_policy.get('Statement', []):
            if statement.get('Effect') != 'Allow':
                continue
                
            principal = statement.get('Principal', {})
            
            # Check for wildcard principal
            if principal == '*' or (isinstance(principal, dict) and principal.get('AWS') == '*'):
                violations.append('Allows any AWS principal (*)')
            
            # Check for external account access
            if isinstance(principal, dict) and 'AWS' in principal:
                aws_principals = principal['AWS']
                if isinstance(aws_principals, str):
                    aws_principals = [aws_principals]
                
                current_account = boto3.client('sts').get_caller_identity()['Account']
                
                for aws_principal in aws_principals:
                    if ':root' in aws_principal and not aws_principal.startswith(f'arn:aws:iam::{current_account}:root'):
                        external_account = aws_principal.split(':')[4]
                        violations.append(f'Trusts external account: {external_account}')
        
        compliance_type = 'NON_COMPLIANT' if violations else 'COMPLIANT'
        annotation = '; '.join(violations) if violations else 'No dangerous trust relationships found'
        
    except Exception as e:
        compliance_type = 'NOT_APPLICABLE'
        annotation = f'Error evaluating role: {str(e)}'
    
    return submit_evaluation(configuration_item, compliance_type, annotation)
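The `submit_evaluation` helper used above isn't shown in the article; a minimal sketch of what it would need to do. The evaluation dict matches the shape `put_evaluations` expects; unlike the call above, the Config client and result token are passed explicitly here so the functions can be tested without AWS access:

```python
def build_evaluation(configuration_item, compliance_type, annotation):
    """Build the evaluation dict that put_evaluations expects."""
    return {
        'ComplianceResourceType': configuration_item['resourceType'],
        'ComplianceResourceId': configuration_item['resourceId'],
        'ComplianceType': compliance_type,
        'Annotation': annotation[:256],  # Config caps annotations at 256 chars
        'OrderingTimestamp': configuration_item['configurationItemCaptureTime']
    }

def submit_evaluation(config_client, result_token, configuration_item,
                      compliance_type, annotation):
    """Report a single compliance result back to AWS Config."""
    config_client.put_evaluations(
        Evaluations=[build_evaluation(configuration_item, compliance_type, annotation)],
        ResultToken=result_token
    )
```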

3. Network Security Rules (Secure Your Perimeter)

# Network security Config Rules
network_security_rules:
  - vpc-default-security-group-closed
  - vpc-sg-open-only-to-authorized-ports
  - incoming-ssh-disabled
  - restricted-rdp
  - ec2-security-group-attached-to-eni
  - subnet-auto-assign-public-ip-disabled

Custom Network Security Rule:

import json
import boto3

def evaluate_security_group_compliance(event, context):
    """Custom rule for security group compliance with company policies"""
    
    invoking_event = json.loads(event['invokingEvent'])
    configuration_item = invoking_event['configurationItem']
    sg_id = configuration_item['resourceId']
    
    ec2 = boto3.client('ec2')
    
    try:
        response = ec2.describe_security_groups(GroupIds=[sg_id])
        sg = response['SecurityGroups'][0]
        
        violations = []
        
        # Check ingress rules
        for rule in sg.get('IpPermissions', []):
            from_port = rule.get('FromPort', 0)
            to_port = rule.get('ToPort', 65535)
            
            # Check for dangerous port ranges
            dangerous_ports = {22: 'SSH', 3389: 'RDP', 3306: 'MySQL', 5432: 'PostgreSQL'}
            
            for ip_range in rule.get('IpRanges', []):
                if ip_range.get('CidrIp') == '0.0.0.0/0':
                    if from_port in dangerous_ports:
                        violations.append(f'{dangerous_ports[from_port]} ({from_port}) open to internet')
                    elif from_port == 0 and to_port == 65535:
                        violations.append('All ports open to internet')
            
            # Check for overly broad internal access
            for ip_range in rule.get('IpRanges', []):
                cidr = ip_range.get('CidrIp', '')
                if cidr.endswith('/8') or cidr.endswith('/16'):
                    violations.append(f'Overly broad CIDR: {cidr}')
        
        # Check if security group has description
        description = sg.get('Description', '')
        if not description or description == 'default VPC security group':
            violations.append('Missing meaningful description')
        
        compliance_type = 'NON_COMPLIANT' if violations else 'COMPLIANT'
        annotation = '; '.join(violations) if violations else 'Security group follows best practices'
        
    except Exception as e:
        compliance_type = 'NOT_APPLICABLE'
        annotation = f'Error evaluating security group: {str(e)}'
    
    return submit_evaluation(configuration_item, compliance_type, annotation)
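The suffix-based CIDR check above misses ranges like /9 through /15. The stdlib `ipaddress` module gives a more robust test; a sketch, with the /16 threshold as an assumed policy:

```python
import ipaddress

def is_overly_broad(cidr, max_prefix=16):
    """True if the CIDR covers more addresses than policy allows
    (smaller prefix length = broader range)."""
    try:
        network = ipaddress.ip_network(cidr, strict=False)
    except ValueError:
        return False  # not a valid CIDR; let other checks flag it
    return network.prefixlen <= max_prefix
```

Note that 0.0.0.0/0 also matches here, but the rule above already flags it separately as "open to the internet".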

4. Encryption and Data Protection Rules

# Encryption Config Rules
encryption_rules:
  - s3-bucket-server-side-encryption-enabled
  - rds-storage-encrypted
  - ec2-ebs-encryption-by-default
  - encrypted-volumes
  - elasticsearch-encrypted-at-rest
  - redshift-cluster-configuration-check
  - dynamodb-table-encryption-enabled

Compliance Framework Implementation

SOC 2 Type II Config Rules Setup

#!/usr/bin/env python3
"""
SOC 2 Type II compliance using AWS Config Rules
"""

import boto3
import json
from datetime import datetime

class SOC2ConfigSetup:
    def __init__(self):
        self.config = boto3.client('config')
        self.iam = boto3.client('iam')
        
    def deploy_soc2_rules(self):
        """Deploy all Config Rules required for SOC 2 Type II"""
        
        soc2_rules = {
            # Security (CC6)
            'access-control': [
                'iam-user-mfa-enabled',
                'iam-root-access-key-check',
                'iam-user-unused-credentials-check'
            ],
            'logical-access': [
                'incoming-ssh-disabled',
                'restricted-rdp',
                'ec2-instances-in-vpc'
            ],
            'data-protection': [
                's3-bucket-server-side-encryption-enabled',
                'rds-storage-encrypted',
                'encrypted-volumes'
            ],
            # Availability (CC7)
            'backup-recovery': [
                'db-instance-backup-enabled',
                's3-bucket-versioning-enabled'
            ],
            # Processing Integrity (CC8)
            'change-management': [
                'cloudtrail-enabled',
                's3-bucket-logging-enabled'
            ]
        }
        
        deployed_rules = []
        
        for category, rules in soc2_rules.items():
            print(f"\nDeploying {category} rules...")
            
            for rule_name in rules:
                try:
                    rule_config = self.get_managed_rule_config(rule_name)
                    if not rule_config:
                        print(f"  ⚠️ No managed rule mapping for {rule_name}, skipping")
                        continue
                    
                    self.config.put_config_rule(
                        ConfigRule={
                            'ConfigRuleName': f'soc2-{rule_name}',
                            'Description': f'SOC 2 compliance rule for {category}',
                            'Source': rule_config['Source'],
                            'Scope': rule_config.get('Scope', {}),
                            'InputParameters': rule_config.get('InputParameters', '{}')
                        }
                    )
                    
                    deployed_rules.append(f'soc2-{rule_name}')
                    print(f"  ✅ Deployed: soc2-{rule_name}")
                    
                except Exception as e:
                    print(f"  ❌ Failed to deploy {rule_name}: {e}")
        
        return deployed_rules
    
    def get_managed_rule_config(self, rule_name):
        """Get configuration for AWS managed rules"""
        
        managed_rules = {
            'iam-user-mfa-enabled': {
                'Source': {
                    'Owner': 'AWS',
                    'SourceIdentifier': 'IAM_USER_MFA_ENABLED'
                }
            },
            'iam-root-access-key-check': {
                'Source': {
                    'Owner': 'AWS',
                    'SourceIdentifier': 'IAM_ROOT_ACCESS_KEY_CHECK'
                }
            },
            's3-bucket-server-side-encryption-enabled': {
                'Source': {
                    'Owner': 'AWS',
                    'SourceIdentifier': 'S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED'
                },
                'Scope': {
                    'ComplianceResourceTypes': ['AWS::S3::Bucket']
                }
            },
            'cloudtrail-enabled': {
                'Source': {
                    'Owner': 'AWS',
                    'SourceIdentifier': 'CLOUD_TRAIL_ENABLED'
                }
            }
            # Add more rules as needed
        }
        
        return managed_rules.get(rule_name, {})
    
    def create_soc2_compliance_report(self):
        """Generate SOC 2 compliance report"""
        
        # Get all SOC 2 rules
        rules = self.config.describe_config_rules()['ConfigRules']
        soc2_rules = [rule for rule in rules if rule['ConfigRuleName'].startswith('soc2-')]
        
        compliance_summary = {
            'compliant': 0,
            'non_compliant': 0,
            'not_applicable': 0,
            'insufficient_data': 0
        }
        
        detailed_results = []
        
        for rule in soc2_rules:
            rule_name = rule['ConfigRuleName']
            
            # Get compliance details
            compliance = self.config.get_compliance_details_by_config_rule(
                ConfigRuleName=rule_name
            )
            
            for result in compliance['EvaluationResults']:
                compliance_type = result['ComplianceType']
                compliance_summary[compliance_type.lower()] += 1
                
                detailed_results.append({
                    'rule': rule_name,
                    'resource': result['EvaluationResultIdentifier']['EvaluationResultQualifier']['ResourceId'],
                    'compliance': compliance_type,
                    'annotation': result.get('Annotation', ''),
                    'result_recorded_time': result['ResultRecordedTime'].isoformat()
                })
        
        # Generate report
        report = {
            'report_date': datetime.now().isoformat(),
            'compliance_summary': compliance_summary,
            'compliance_percentage': (compliance_summary['compliant'] /
                                      max(sum(compliance_summary.values()), 1) * 100),
            'detailed_results': detailed_results
        }
        
        # Save report
        with open(f'soc2_compliance_report_{datetime.now().strftime("%Y%m%d")}.json', 'w') as f:
            json.dump(report, f, indent=2)
        
        print(f"\n📊 SOC 2 Compliance Summary:")
        print(f"  Compliant: {compliance_summary['compliant']}")
        print(f"  Non-compliant: {compliance_summary['non_compliant']}")
        print(f"  Overall compliance: {report['compliance_percentage']:.1f}%")
        
        return report

# Usage
soc2_setup = SOC2ConfigSetup()
deployed_rules = soc2_setup.deploy_soc2_rules()
compliance_report = soc2_setup.create_soc2_compliance_report()

PCI DSS Compliance Rules

def deploy_pci_dss_rules():
    """Deploy Config Rules for PCI DSS compliance"""
    
    pci_requirements = {
        # Requirement 1: Firewall configuration
        'firewall': [
            'vpc-default-security-group-closed',
            'vpc-sg-open-only-to-authorized-ports',
            'incoming-ssh-disabled'
        ],
        # Requirement 2: Default passwords and security parameters
        'security_params': [
            'iam-password-policy',
            'ec2-instance-managed-by-systems-manager'
        ],
        # Requirement 3: Protect stored cardholder data
        'data_protection': [
            's3-bucket-server-side-encryption-enabled',
            'rds-storage-encrypted',
            'ebs-snapshot-public-read-prohibited'
        ],
        # Requirement 4: Encrypt transmission of cardholder data
        'encryption_transit': [
            's3-bucket-ssl-requests-only',
            'elb-predefined-security-policy-ssl-check'
        ],
        # Requirement 8: Identify and authenticate access
        'access_control': [
            'iam-user-mfa-enabled',
            'access-keys-rotated'
        ],
        # Requirement 10: Track and monitor all access
        'monitoring': [
            'cloudtrail-enabled',
            's3-bucket-logging-enabled',
            'vpc-flow-logs-enabled'
        ]
    }
    
    for requirement, rules in pci_requirements.items():
        print(f"Deploying PCI DSS {requirement} rules...")
        for rule in rules:
            deploy_config_rule(f'pci-{rule}', rule)
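The `deploy_config_rule` helper used above isn't defined in the article; a minimal sketch of one plausible shape. It takes the client explicitly (so it can be stubbed) and assumes the usual managed-rule naming convention, so verify identifiers before relying on it:

```python
def deploy_config_rule(config_client, rule_name, managed_rule):
    """Deploy an AWS managed rule under a framework-prefixed name."""
    config_rule = {
        'ConfigRuleName': rule_name,
        'Description': f'Deployed from managed rule {managed_rule}',
        'Source': {
            'Owner': 'AWS',
            # assumes the common name -> identifier convention
            'SourceIdentifier': managed_rule.upper().replace('-', '_')
        }
    }
    config_client.put_config_rule(ConfigRule=config_rule)
    return config_rule
```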

Advanced Config Rules Automation

Automated Remediation with Config Rules

#!/usr/bin/env python3
"""
Automated remediation for Config Rule violations
"""

import boto3
import json
from datetime import datetime

class ConfigRuleRemediation:
    def __init__(self):
        self.config = boto3.client('config')
        self.s3 = boto3.client('s3')
        self.ec2 = boto3.client('ec2')
        self.ssm = boto3.client('ssm')
        
    def setup_auto_remediation(self):
        """Set up automatic remediation for common violations"""
        
        # Each rule needs its own SSM remediation document and parameters;
        # the S3 entry below is complete. Add further entries with the
        # remediation document and parameters that match each rule.
        remediation_configs = [
            {
                'rule_name': 's3-bucket-public-access-prohibited',
                'resource_type': 'AWS::S3::Bucket',
                'target_id': 'AWSConfigRemediation-RemovePublicAccessFromS3Bucket'
            }
        ]
        
        for config in remediation_configs:
            self.create_remediation_configuration(config)
    
    def create_remediation_configuration(self, config):
        """Create remediation configuration for a Config Rule"""
        
        try:
            self.config.put_remediation_configurations(
                RemediationConfigurations=[{
                    'ConfigRuleName': config['rule_name'],
                    'TargetType': 'SSM_DOCUMENT',
                    'TargetId': config['target_id'],
                    'TargetVersion': '1',
                    'Parameters': {
                        'AutomationAssumeRole': {
                            'StaticValue': {
                                'Values': ['arn:aws:iam::ACCOUNT:role/ConfigRemediationRole']
                            }
                        },
                        'BucketName': {
                            'ResourceValue': {
                                'Value': 'RESOURCE_ID'
                            }
                        }
                    },
                    'ResourceType': config['resource_type'],
                    'Automatic': True,
                    'ExecutionControls': {
                        'SsmControls': {
                            'ConcurrentExecutionRatePercentage': 10,
                            'ErrorPercentage': 10
                        }
                    }
                }]
            )
            
            print(f"✅ Remediation configured for {config['rule_name']}")
            
        except Exception as e:
            print(f"❌ Failed to configure remediation for {config['rule_name']}: {e}")
    
    def custom_s3_remediation(self, event, context):
        """Custom Lambda function for S3 bucket remediation"""
        
        # Parse the compliance-change notification delivered via SNS
        message = json.loads(event['Records'][0]['Sns']['Message'])
        qualifier = message['newEvaluationResult']['evaluationResultIdentifier']['evaluationResultQualifier']
        bucket_name = qualifier['resourceId']  # for S3, the resource ID is the bucket name
        compliance_type = message['newEvaluationResult']['complianceType']
        
        if compliance_type == 'NON_COMPLIANT':
            try:
                # Fix S3 bucket public access
                self.s3.put_public_access_block(
                    Bucket=bucket_name,
                    PublicAccessBlockConfiguration={
                        'BlockPublicAcls': True,
                        'IgnorePublicAcls': True,
                        'BlockPublicPolicy': True,
                        'RestrictPublicBuckets': True
                    }
                )
                
                # Log remediation action
                print(f"🔧 Remediated public access for bucket: {bucket_name}")
                
                # Send notification
                self.send_remediation_notification(bucket_name, 'S3 Public Access Fixed')
                
            except Exception as e:
                print(f"❌ Failed to remediate bucket {bucket_name}: {e}")
    
    def send_remediation_notification(self, resource_name, action):
        """Send notification about remediation action"""
        
        sns = boto3.client('sns')
        
        message = {
            'resource': resource_name,
            'action': action,
            'timestamp': datetime.now().isoformat(),
            'automated': True
        }
        
        sns.publish(
            TopicArn='arn:aws:sns:region:account:config-remediation',
            Subject=f'Config Rule Remediation: {resource_name}',
            Message=json.dumps(message, indent=2)
        )

Multi-Account Config Rules Deployment

#!/usr/bin/env python3
"""
Deploy Config Rules across multiple AWS accounts
"""

import boto3
from concurrent.futures import ThreadPoolExecutor

class MultiAccountConfigDeployment:
    def __init__(self, org_role_name='OrganizationAccountAccessRole'):
        self.org_client = boto3.client('organizations')
        self.org_role_name = org_role_name
        
    def get_all_accounts(self):
        """Get all AWS accounts in the organization"""
        
        accounts = []
        paginator = self.org_client.get_paginator('list_accounts')
        
        for page in paginator.paginate():
            for account in page['Accounts']:
                if account['Status'] == 'ACTIVE':
                    accounts.append({
                        'Id': account['Id'],
                        'Name': account['Name'],
                        'Email': account['Email']
                    })
        
        return accounts
    
    def assume_role_in_account(self, account_id):
        """Assume role in target account"""
        
        sts = boto3.client('sts')
        
        role_arn = f'arn:aws:iam::{account_id}:role/{self.org_role_name}'
        
        response = sts.assume_role(
            RoleArn=role_arn,
            RoleSessionName=f'ConfigDeployment-{account_id}'
        )
        
        credentials = response['Credentials']
        
        return boto3.client('config',
            aws_access_key_id=credentials['AccessKeyId'],
            aws_secret_access_key=credentials['SecretAccessKey'],
            aws_session_token=credentials['SessionToken']
        )
    
    def deploy_rule_to_account(self, account_id, rule_config):
        """Deploy Config Rule to specific account"""
        
        try:
            config_client = self.assume_role_in_account(account_id)
            
            config_client.put_config_rule(ConfigRule=rule_config)
            
            return {
                'account_id': account_id,
                'rule_name': rule_config['ConfigRuleName'],
                'status': 'success'
            }
            
        except Exception as e:
            return {
                'account_id': account_id,
                'rule_name': rule_config['ConfigRuleName'],
                'status': 'error',
                'error': str(e)
            }
    
    def deploy_rules_to_all_accounts(self, rules):
        """Deploy Config Rules to all accounts in parallel"""
        
        accounts = self.get_all_accounts()
        results = []
        
        with ThreadPoolExecutor(max_workers=10) as executor:
            futures = []
            
            for account in accounts:
                for rule in rules:
                    future = executor.submit(
                        self.deploy_rule_to_account,
                        account['Id'],
                        rule
                    )
                    futures.append(future)
            
            for future in futures:
                results.append(future.result())
        
        # Report results
        successful = [r for r in results if r['status'] == 'success']
        failed = [r for r in results if r['status'] == 'error']
        
        print(f"\n📊 Multi-Account Deployment Results:")
        print(f"  ✅ Successful: {len(successful)}")
        print(f"  ❌ Failed: {len(failed)}")
        
        if failed:
            print(f"\n❌ Failed Deployments:")
            for failure in failed:
                print(f"  Account {failure['account_id']}: {failure['error']}")
        
        return results

# Usage
org_deployer = MultiAccountConfigDeployment()

# Define rules to deploy
essential_rules = [
    {
        'ConfigRuleName': 'org-s3-bucket-public-access-prohibited',
        'Source': {
            'Owner': 'AWS',
            'SourceIdentifier': 'S3_BUCKET_PUBLIC_ACCESS_PROHIBITED'
        }
    },
    {
        'ConfigRuleName': 'org-iam-user-mfa-enabled',
        'Source': {
            'Owner': 'AWS',
            'SourceIdentifier': 'IAM_USER_MFA_ENABLED'
        }
    }
]

deployment_results = org_deployer.deploy_rules_to_all_accounts(essential_rules)

Config Rules Monitoring and Reporting

Real-time Compliance Dashboard

#!/usr/bin/env python3
"""
Real-time compliance dashboard using Config Rules
"""

import boto3
import json
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
import seaborn as sns

class ComplianceDashboard:
    def __init__(self):
        self.config = boto3.client('config')
        self.cloudwatch = boto3.client('cloudwatch')
        
    def get_compliance_summary(self):
        """Get overall compliance summary"""
        
        summary = self.config.get_compliance_summary_by_config_rule()['ComplianceSummary']
        
        compliant = summary['CompliantResourceCount']['CappedCount']
        non_compliant = summary['NonCompliantResourceCount']['CappedCount']
        
        return {
            'compliant': compliant,
            'non_compliant': non_compliant,
            'total_rules': compliant + non_compliant
        }
    
    def get_compliance_by_resource_type(self):
        """Get compliance breakdown by resource type"""
        
        resource_compliance = {}
        
        # Get all Config Rules
        rules = self.config.describe_config_rules()['ConfigRules']
        
        for rule in rules:
            rule_name = rule['ConfigRuleName']
            
            # Get compliance details for this rule
            try:
                compliance_details = self.config.get_compliance_details_by_config_rule(
                    ConfigRuleName=rule_name,
                    Limit=100
                )
                
                for result in compliance_details['EvaluationResults']:
                    resource_type = result['EvaluationResultIdentifier']['EvaluationResultQualifier']['ResourceType']
                    compliance_type = result['ComplianceType']
                    
                    if resource_type not in resource_compliance:
                        resource_compliance[resource_type] = {
                            'COMPLIANT': 0,
                            'NON_COMPLIANT': 0,
                            'NOT_APPLICABLE': 0,
                            'INSUFFICIENT_DATA': 0
                        }
                    
                    resource_compliance[resource_type][compliance_type] += 1
                    
            except Exception as e:
                print(f"Error getting compliance for rule {rule_name}: {e}")
        
        return resource_compliance
    
    def get_trending_compliance(self, days=30):
        """Get compliance trends over time"""
        
        end_time = datetime.now()
        start_time = end_time - timedelta(days=days)
        
        # Query CloudWatch for Config compliance metrics
        try:
            response = self.cloudwatch.get_metric_statistics(
                Namespace='AWS/Config',
                MetricName='ComplianceByConfigRule',
                Dimensions=[
                    {
                        'Name': 'RuleName',
                        'Value': 'ComplianceByConfigRule'
                    }
                ],
                StartTime=start_time,
                EndTime=end_time,
                Period=86400,  # Daily
                Statistics=['Average']
            )
            
            return sorted(response['Datapoints'], key=lambda x: x['Timestamp'])
            
        except Exception as e:
            print(f"Error getting compliance trends: {e}")
            return []
    
    def generate_compliance_report(self):
        """Generate comprehensive compliance report"""
        
        report = {
            'generated_at': datetime.now().isoformat(),
            'summary': self.get_compliance_summary(),
            'by_resource_type': self.get_compliance_by_resource_type(),
            'trending': self.get_trending_compliance(),
            'top_violations': self.get_top_violations(),
            'remediation_recommendations': self.get_remediation_recommendations()
        }
        
        # Calculate compliance percentage
        summary = report['summary']
        total_evaluations = summary['compliant'] + summary['non_compliant']
        compliance_percentage = (summary['compliant'] / total_evaluations * 100) if total_evaluations > 0 else 0
        
        report['compliance_percentage'] = compliance_percentage
        
        # Save report
        filename = f'compliance_report_{datetime.now().strftime("%Y%m%d_%H%M%S")}.json'
        with open(filename, 'w') as f:
            json.dump(report, f, indent=2, default=str)
        
        # Print summary
        print(f"\n📊 AWS Config Compliance Report")
        print(f"{'='*50}")
        print(f"Overall Compliance: {compliance_percentage:.1f}%")
        print(f"Compliant Resources: {summary['compliant']}")
        print(f"Non-Compliant Resources: {summary['non_compliant']}")
        print(f"Total Config Rules: {summary['total_rules']}")
        
        # Top resource types by violations
        resource_violations = {}
        for resource_type, compliance in report['by_resource_type'].items():
            resource_violations[resource_type] = compliance['NON_COMPLIANT']
        
        top_violators = sorted(resource_violations.items(), key=lambda x: x[1], reverse=True)[:5]
        
        print(f"\n🚨 Top Resource Types with Violations:")
        for resource_type, violations in top_violators:
            print(f"  {resource_type}: {violations} violations")
        
        return report
    
    def get_top_violations(self):
        """Get most common Config Rule violations"""
        
        violations = {}
        
        rules = self.config.describe_config_rules()['ConfigRules']
        
        for rule in rules:
            rule_name = rule['ConfigRuleName']
            
            try:
                compliance_details = self.config.get_compliance_details_by_config_rule(
                    ConfigRuleName=rule_name,
                    ComplianceTypes=['NON_COMPLIANT'],
                    Limit=50  # counts are capped at Limit; paginate via NextToken for exact totals
                )
                
                violation_count = len(compliance_details['EvaluationResults'])
                if violation_count > 0:
                    violations[rule_name] = violation_count
                    
            except Exception:
                # Skip rules that can't be read (e.g. still evaluating or access denied)
                continue
        
        # Return top 10 violations
        return dict(sorted(violations.items(), key=lambda x: x[1], reverse=True)[:10])
    
    def get_remediation_recommendations(self):
        """Get remediation recommendations for violations"""
        
        recommendations = {
            's3-bucket-public-access-prohibited': {
                'priority': 'CRITICAL',
                'action': 'Enable S3 Block Public Access',
                'automation': 'Available via Config Remediation',
                'cost_impact': 'None'
            },
            'iam-user-mfa-enabled': {
                'priority': 'HIGH',
                'action': 'Enable MFA for all IAM users',
                'automation': 'Manual process required',
                'cost_impact': 'None'
            },
            'rds-storage-encrypted': {
                'priority': 'HIGH',
                'action': 'Enable encryption for RDS instances',
                'automation': 'Requires instance recreation',
                'cost_impact': 'Minimal (< 5% performance impact)'
            },
            'incoming-ssh-disabled': {
                'priority': 'HIGH',
                'action': 'Remove SSH access from security groups',
                'automation': 'Available via Config Remediation',
                'cost_impact': 'None'
            }
        }
        
        return recommendations

# Usage
dashboard = ComplianceDashboard()
compliance_report = dashboard.generate_compliance_report()
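
The report generator above is most valuable when it runs on a schedule rather than ad hoc. One way to do that is an EventBridge rule triggering a Lambda that wraps generate_compliance_report(). The sketch below only builds the put_rule/put_targets payloads; the rule name, cron expression, and Lambda ARN are placeholder assumptions, and deploy_schedule performs the actual calls so it is shown but not invoked:

```python
def build_schedule_rule(name, cron_expression):
    """EventBridge rule payload: fire the report on a fixed schedule."""
    return {'Name': name, 'ScheduleExpression': cron_expression, 'State': 'ENABLED'}

def build_schedule_target(rule_name, lambda_arn):
    """Target payload wiring the schedule to the reporting Lambda."""
    return {'Rule': rule_name,
            'Targets': [{'Id': 'compliance-report', 'Arn': lambda_arn}]}

def deploy_schedule(lambda_arn):
    """Apply both payloads (requires AWS credentials; not executed here)."""
    import boto3
    events = boto3.client('events')
    rule = build_schedule_rule('daily-config-compliance-report',
                               'cron(0 6 * * ? *)')  # 06:00 UTC daily
    events.put_rule(**rule)
    events.put_targets(**build_schedule_target(rule['Name'], lambda_arn))
```

Landing the report in a versioned S3 bucket instead of local disk also gives auditors a tamper-evident history.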

Cost Optimization Through Config Rules

Config Rules don’t just improve security—they can save significant money:

def calculate_config_roi():
    """Calculate ROI of Config Rules implementation"""
    
    # Typical costs avoided by Config Rules
    cost_savings = {
        'prevented_breaches': {
            'annual_savings': 500000,  # Assumed breach cost (illustrative; adjust for your org)
            'probability_reduction': 0.7  # 70% reduction in breach probability
        },
        'audit_efficiency': {
            'hours_saved_per_audit': 200,
            'audits_per_year': 2,
            'hourly_rate': 150
        },
        'compliance_automation': {
            'manual_hours_saved_monthly': 40,
            'hourly_rate': 100
        },
        'faster_remediation': {
            'incidents_per_year': 12,
            'hours_saved_per_incident': 8,
            'hourly_rate': 150
        }
    }
    
    annual_savings = (
        cost_savings['prevented_breaches']['annual_savings'] * 
        cost_savings['prevented_breaches']['probability_reduction'] +
        cost_savings['audit_efficiency']['hours_saved_per_audit'] * 
        cost_savings['audit_efficiency']['audits_per_year'] * 
        cost_savings['audit_efficiency']['hourly_rate'] +
        cost_savings['compliance_automation']['manual_hours_saved_monthly'] * 12 * 
        cost_savings['compliance_automation']['hourly_rate'] +
        cost_savings['faster_remediation']['incidents_per_year'] * 
        cost_savings['faster_remediation']['hours_saved_per_incident'] * 
        cost_savings['faster_remediation']['hourly_rate']
    )
    
    # Config costs
    config_costs = {
        'configuration_items': 1000000,  # 1M CIs
        'cost_per_ci': 0.003,  # $0.003 per CI
        'rule_evaluations': 500000,  # 500K evaluations
        'cost_per_evaluation': 0.001  # $0.001 per evaluation
    }
    
    annual_config_cost = (
        config_costs['configuration_items'] * config_costs['cost_per_ci'] +
        config_costs['rule_evaluations'] * config_costs['cost_per_evaluation']
    )
    
    roi = ((annual_savings - annual_config_cost) / annual_config_cost) * 100
    
    print(f"\n💰 Config Rules ROI Analysis:")
    print(f"Annual Savings: ${annual_savings:,.2f}")
    print(f"Annual Config Cost: ${annual_config_cost:,.2f}")
    print(f"Net Savings: ${annual_savings - annual_config_cost:,.2f}")
    print(f"ROI: {roi:.0f}%")
    
    return roi

# Calculate ROI
roi = calculate_config_roi()

Implementation Roadmap

Week 1: Foundation Setup

  1. Enable AWS Config in all regions and accounts
  2. Deploy essential security rules (S3, IAM, network)
  3. Set up Config data delivery to S3 bucket
  4. Create basic compliance dashboard
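
Step 1 can be scripted with boto3's Config API. A minimal sketch, assuming a Config service role and delivery bucket already exist (the role ARN and bucket name below are placeholders); enable_config performs the actual AWS calls and is not run here:

```python
def build_recorder(role_arn, name='default'):
    """Configuration recorder payload: record all supported resource types,
    including global ones like IAM, in this region."""
    return {
        'name': name,
        'roleARN': role_arn,
        'recordingGroup': {'allSupported': True, 'includeGlobalResourceTypes': True}
    }

def build_delivery_channel(bucket, name='default'):
    """Delivery channel payload: where Config writes snapshots and history."""
    return {'name': name, 's3BucketName': bucket}

def enable_config(role_arn, bucket):
    """Apply the payloads (requires credentials; shown for completeness)."""
    import boto3
    config = boto3.client('config')
    config.put_configuration_recorder(ConfigurationRecorder=build_recorder(role_arn))
    config.put_delivery_channel(DeliveryChannel=build_delivery_channel(bucket))
    config.start_configuration_recorder(ConfigurationRecorderName='default')
```

Run this per region (or use a CloudFormation StackSet) since recorders are regional.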

Week 2: Rule Expansion

  1. Add compliance-specific rules (SOC 2, PCI, HIPAA)
  2. Implement custom rules for your specific requirements
  3. Set up SNS notifications for violations
  4. Test manual remediation processes
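
Managed rules from step 1 deploy with a single put_config_rule call each. The sketch below builds that payload; S3_BUCKET_PUBLIC_READ_PROHIBITED and IAM_PASSWORD_POLICY are AWS-managed rule identifiers, while the rule names themselves are arbitrary labels:

```python
import json

def build_managed_rule(rule_name, source_identifier, input_parameters=None):
    """Payload for config.put_config_rule using an AWS-managed rule."""
    rule = {
        'ConfigRuleName': rule_name,
        'Source': {'Owner': 'AWS', 'SourceIdentifier': source_identifier}
    }
    if input_parameters:
        # Managed rules take their parameters as a JSON-encoded string
        rule['InputParameters'] = json.dumps(input_parameters)
    return rule

# Example: flag S3 buckets that allow public reads
public_read_rule = build_managed_rule(
    's3-bucket-public-read-prohibited',
    'S3_BUCKET_PUBLIC_READ_PROHIBITED'
)
```

Deploying is then `boto3.client('config').put_config_rule(ConfigRule=public_read_rule)`.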

Week 3: Automation

  1. Deploy automatic remediation for low-risk violations
  2. Create multi-account deployment scripts
  3. Set up compliance reporting automation
  4. Implement trending and analytics
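
Step 1 of this week maps to Config's remediation API. The sketch below builds a put_remediation_configurations entry; AWS-DisableS3BucketPublicReadWrite is an AWS-owned SSM automation document, and the attempt/retry numbers are illustrative defaults, not requirements:

```python
def build_auto_remediation(rule_name, ssm_document, resource_param):
    """Entry for config.put_remediation_configurations: run an SSM automation
    document automatically whenever the rule flags a resource."""
    return {
        'ConfigRuleName': rule_name,
        'TargetType': 'SSM_DOCUMENT',
        'TargetId': ssm_document,
        'Parameters': {
            # RESOURCE_ID is substituted by Config with the offending resource
            resource_param: {'ResourceValue': {'Value': 'RESOURCE_ID'}}
        },
        'Automatic': True,
        'MaximumAutomaticAttempts': 3,
        'RetryAttemptSeconds': 60
    }

# Example: automatically close public S3 buckets flagged by the public-read rule
remediation = build_auto_remediation(
    's3-bucket-public-read-prohibited',
    'AWS-DisableS3BucketPublicReadWrite',
    'S3BucketName'
)
```

Start with `'Automatic': False` in production so each remediation requires a one-click approval until you trust the rule's accuracy.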

Week 4: Optimization

  1. Fine-tune rule parameters based on false positives
  2. Optimize costs by adjusting evaluation frequency
  3. Create custom compliance reports for stakeholders
  4. Train team on Config Rules management
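
For step 2, periodic rules accept a MaximumExecutionFrequency that directly controls how often (and therefore how much) they evaluate. A small helper, assuming the rule payload shape used by put_config_rule:

```python
def set_periodic_frequency(rule, frequency='TwentyFour_Hours'):
    """Return a copy of a periodic rule payload with a new evaluation cadence.
    Valid values: One_Hour, Three_Hours, Six_Hours, Twelve_Hours, TwentyFour_Hours."""
    updated = dict(rule)
    updated['MaximumExecutionFrequency'] = frequency
    return updated

# Dropping an hourly rule to daily cuts its periodic evaluations ~24x
daily = set_periodic_frequency({'ConfigRuleName': 'iam-password-policy',
                                'MaximumExecutionFrequency': 'One_Hour'})
```

Note that this setting only affects periodic evaluations; change-triggered rules still fire on every relevant configuration change.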

Conclusion

AWS Config Rules transform compliance from a manual, error-prone process into an automated, continuous practice. They provide the evidence auditors need, catch violations before they become breaches, and enable automated remediation that keeps your environment secure 24/7.

The ROI is compelling: most organizations see 10-20x return on their Config investment through prevented breaches, audit efficiency, and automation savings. More importantly, Config Rules give you confidence that your AWS environment maintains security posture continuously, not just during snapshot assessments.

Start implementing Config Rules today:

  1. Begin with the essential security rules in this guide
  2. Expand to compliance-specific rules for your industry
  3. Implement automated remediation for common violations
  4. Build continuous compliance into your culture

Want Config Rules implemented without the complexity? Tools like PathShield provide pre-configured rule sets, automated remediation, and continuous compliance monitoring—giving you enterprise-grade governance without the enterprise overhead.
