PathShield Team · Tutorials · 9 min read
Top 7 AWS Security Mistakes Startups Make (And How to Fix Them)
Learn the most common AWS security mistakes that cost startups millions. From exposed S3 buckets to weak IAM policies, discover practical fixes you can implement today.
Every week, another startup makes headlines for exposing customer data through preventable AWS misconfigurations. These aren’t sophisticated attacks—they’re basic security mistakes that take minutes to fix but cost millions in damages. Here are the seven most dangerous AWS security mistakes startups make and exactly how to fix them.
Mistake #1: Public S3 Buckets with Sensitive Data
The Problem
Making S3 buckets public used to be AWS's biggest security footgun. AWS now blocks public access on new buckets by default, but developers still manually open buckets up for "testing" and forget to lock them down. The result: your customer data, indexed by Google.
How Startups Get Burned
A health tech startup stored patient records in an S3 bucket. A developer made it public to debug a mobile app issue. Three months later, a security researcher found 50,000 medical records publicly accessible. Cost: $2.3M in HIPAA fines plus complete loss of customer trust.
The Fix
Step 1: Block all public access at the account level
aws s3control put-public-access-block \
--account-id 123456789012 \
--public-access-block-configuration \
"BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
Step 2: Audit existing buckets
# List all buckets
# List all buckets (text output is tab-separated, so split onto lines first)
aws s3api list-buckets --query 'Buckets[*].Name' --output text | tr '\t' '\n' | \
while read bucket; do
  echo "Checking: $bucket"
  aws s3api get-bucket-acl --bucket "$bucket"
  aws s3api get-public-access-block --bucket "$bucket" 2>/dev/null || echo "No public access block"
done
Step 3: Enable S3 access logging
aws s3api put-bucket-logging --bucket my-important-bucket \
--bucket-logging-status file://logging.json
Where logging.json contains:
{
  "LoggingEnabled": {
    "TargetBucket": "my-logging-bucket",
    "TargetPrefix": "s3-access-logs/"
  }
}
Prevention Checklist
- Enable account-level S3 Block Public Access
- Use bucket policies instead of ACLs
- Enable S3 access logging
- Set up AWS Config rules for S3 compliance (see the sketch after this list)
- Use CloudFront for public content delivery
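For the Config item above, a minimal boto3 sketch that enables the managed rule flagging publicly readable buckets (assumes AWS Config is already recording in the account):
import boto3
config = boto3.client('config')
# Managed rule that flags S3 buckets allowing public read access
config.put_config_rule(
    ConfigRule={
        'ConfigRuleName': 's3-bucket-public-read-prohibited',
        'Source': {
            'Owner': 'AWS',
            'SourceIdentifier': 'S3_BUCKET_PUBLIC_READ_PROHIBITED'
        },
        'Scope': {
            'ComplianceResourceTypes': ['AWS::S3::Bucket']
        }
    }
)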
Mistake #2: Hardcoded AWS Credentials in Code
The Problem
Developers embed AWS access keys directly in application code for convenience. These keys get committed to GitHub, deployed to production, or shared in Slack. One leaked key can compromise your entire AWS account.
How Startups Get Burned
A fintech startup’s developer pushed AWS credentials to a public GitHub repo. Within 6 minutes, automated bots had discovered the keys and spun up $65,000 worth of crypto mining instances. The startup’s runway was cut by 3 months overnight.
The Fix
Step 1: Use IAM roles instead of access keys
# Bad: Hardcoded credentials
import boto3
s3 = boto3.client(
    's3',
    aws_access_key_id='AKIAIOSFODNN7EXAMPLE',
    aws_secret_access_key='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
)
# Good: Use IAM role
import boto3
s3 = boto3.client('s3') # Automatically uses EC2/Lambda role
Step 2: Scan for exposed credentials
# Install and run git-secrets
brew install git-secrets
git secrets --install
git secrets --register-aws
git secrets --scan
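Note that git secrets --scan only checks the current working tree. Run git secrets --scan-history as well, because a key removed in a later commit is still recoverable from your repo history.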
Step 3: Rotate compromised keys immediately
# List all access keys
aws iam list-access-keys --user-name developer-user
# Delete compromised key
aws iam delete-access-key --access-key-id AKIAIOSFODNN7EXAMPLE --user-name developer-user
# Create new key (only if absolutely necessary)
aws iam create-access-key --user-name developer-user
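Before deleting a key, it's worth checking whether it's still in use so you know what might break. A small boto3 sketch (the key ID is AWS's documentation placeholder):
import boto3
iam = boto3.client('iam')
# Shows the last-used date, service, and region for the key
last_used = iam.get_access_key_last_used(AccessKeyId='AKIAIOSFODNN7EXAMPLE')
print(last_used['AccessKeyLastUsed'])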
Prevention Checklist
- Never create IAM user access keys unless absolutely necessary
- Use IAM roles for EC2, Lambda, and ECS
- Install pre-commit hooks to catch secrets
- Use AWS Secrets Manager for application secrets
- Enable AWS CloudTrail to monitor key usage
Mistake #3: Overly Permissive IAM Policies
The Problem
Startups grant AdministratorAccess or use * wildcards in IAM policies because it's faster than figuring out specific permissions. This violates the principle of least privilege and creates massive security holes.
How Startups Get Burned
A SaaS startup gave all developers AdministratorAccess. A junior developer's laptop was compromised, giving attackers full AWS access. They deleted the production database, modified billing settings, and launched expensive instances. Recovery took 72 hours and cost the company 40% of its customers.
The Fix
Step 1: Start with zero permissions and add incrementally
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::my-app-bucket/user-uploads/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Users"
    }
  ]
}
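Before attaching a policy like this, you can dry-run it with the IAM policy simulator. A sketch (bucket and object names are illustrative):
import json
import boto3
iam = boto3.client('iam')
# The S3 half of the policy above, as a JSON string for the simulator
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-app-bucket/user-uploads/*"
    }]
})
# Expect 'allowed' for GetObject and 'implicitDeny' for DeleteObject
result = iam.simulate_custom_policy(
    PolicyInputList=[policy],
    ActionNames=['s3:GetObject', 's3:DeleteObject'],
    ResourceArns=['arn:aws:s3:::my-app-bucket/user-uploads/avatar.png']
)
for r in result['EvaluationResults']:
    print(r['EvalActionName'], r['EvalDecision'])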
Step 2: Use AWS Access Analyzer to find overly permissive policies
# Enable Access Analyzer
aws accessanalyzer create-analyzer \
--analyzer-name production-analyzer \
--type ACCOUNT
# List findings
aws accessanalyzer list-findings \
--analyzer-arn arn:aws:access-analyzer:us-east-1:123456789012:analyzer/production-analyzer
Step 3: Implement permission boundaries
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": [
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:DeleteUser",
        "iam:DeleteUserPolicy",
        "ec2:TerminateInstances"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": "us-east-1"
        }
      }
    }
  ]
}
Prevention Checklist
- Review and remove all * permissions
- Use AWS managed policies as starting points
- Implement permission boundaries
- Enable Access Analyzer
- Regular IAM policy audits (monthly)
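For that monthly audit, a useful first question is who holds AdministratorAccess at all. A boto3 sketch:
import boto3
iam = boto3.client('iam')
# List every user, role, and group with the AdministratorAccess managed policy
paginator = iam.get_paginator('list_entities_for_policy')
for page in paginator.paginate(PolicyArn='arn:aws:iam::aws:policy/AdministratorAccess'):
    for user in page['PolicyUsers']:
        print('user:', user['UserName'])
    for role in page['PolicyRoles']:
        print('role:', role['RoleName'])
    for group in page['PolicyGroups']:
        print('group:', group['GroupName'])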
Mistake #4: Exposed RDS Databases to the Internet
The Problem
Developers configure RDS instances with 0.0.0.0/0 security group rules for convenience. This exposes databases directly to the internet, making them targets for automated attacks.
How Startups Get Burned
An e-commerce startup exposed their RDS MySQL database to the internet with a weak password. Attackers brute-forced the password and stole 100,000 customer records including credit card details. PCI compliance violations led to $500K in fines and loss of payment processing.
The Fix
Step 1: Move RDS to private subnets
# Create private subnet group
aws rds create-db-subnet-group \
--db-subnet-group-name private-subnet-group \
--db-subnet-group-description "Private subnets for RDS" \
--subnet-ids subnet-12345 subnet-67890
Step 2: Restrict security groups
# Create restricted security group
aws ec2 create-security-group \
--group-name rds-private-sg \
--description "Security group for private RDS access"
# Allow access only from application security group
aws ec2 authorize-security-group-ingress \
--group-id sg-123456 \
--protocol tcp \
--port 3306 \
--source-group sg-789012
Step 3: Enable encryption and backups
# Enable automated backups on an existing instance
aws rds modify-db-instance \
  --db-instance-identifier my-database \
  --backup-retention-period 7 \
  --preferred-backup-window "03:00-04:00"
# Note: encryption at rest cannot be switched on in place. To encrypt an
# existing instance, snapshot it, copy the snapshot with encryption enabled,
# and restore from the encrypted copy.
Prevention Checklist
- Place RDS in private subnets only
- Use security groups, not IP whitelisting
- Enable encryption at rest
- Enable automated backups
- Use strong passwords or IAM database authentication
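For the IAM database authentication option above, the idea is that your app requests a short-lived token instead of storing a static password. A sketch (hostname and user are placeholders; the instance and DB user must have IAM auth enabled):
import boto3
rds = boto3.client('rds')
# Generate a 15-minute auth token to use in place of a password
token = rds.generate_db_auth_token(
    DBHostname='mydb.123456.us-east-1.rds.amazonaws.com',
    Port=3306,
    DBUsername='app_user'
)
# Pass `token` as the password when connecting to MySQL over SSL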
Mistake #5: No Security Group Hygiene
The Problem
Security groups accumulate rules over time. Developers add “temporary” rules that become permanent. Ports stay open long after they’re needed, creating unnecessary attack surface.
How Startups Get Burned
A startup left port 9200 (Elasticsearch) open to the world after a debugging session. Attackers found it, deleted all indices, and left a ransom note. The startup had no backups and lost 18 months of user analytics data.
The Fix
Step 1: Audit all security groups
import boto3

ec2 = boto3.client('ec2')

def audit_security_groups():
    # Flag any ingress rule that allows traffic from anywhere on the internet
    response = ec2.describe_security_groups()
    for sg in response['SecurityGroups']:
        for rule in sg['IpPermissions']:
            for ip_range in rule.get('IpRanges', []):
                if ip_range.get('CidrIp') == '0.0.0.0/0':
                    print(f"WARNING: {sg['GroupId']} allows {rule.get('FromPort')}-{rule.get('ToPort')} from anywhere")

audit_security_groups()
Step 2: Implement least-privilege security groups
# Create application-specific security groups
aws ec2 create-security-group \
--group-name web-servers \
--description "Security group for web servers"
# Allow only HTTPS from CloudFront's origin-facing managed prefix list
# (look up the pl- ID for com.amazonaws.global.cloudfront.origin-facing in your region)
aws ec2 authorize-security-group-ingress \
  --group-id sg-web \
  --ip-permissions 'IpProtocol=tcp,FromPort=443,ToPort=443,PrefixListIds=[{PrefixListId=pl-cloudfront}]'
Step 3: Use AWS Config for compliance
# AWS Config rule to detect open security groups
import json
import boto3

config_client = boto3.client('config')

config_rule = {
    "ConfigRuleName": "restricted-common-ports",
    "Source": {
        "Owner": "AWS",
        "SourceIdentifier": "RESTRICTED_INCOMING_TRAFFIC"
    },
    "Scope": {
        "ComplianceResourceTypes": [
            "AWS::EC2::SecurityGroup"
        ]
    },
    "InputParameters": json.dumps({
        "blockedPort1": "22",
        "blockedPort2": "3389",
        "blockedPort3": "3306",
        "blockedPort4": "5432"
    })
}

config_client.put_config_rule(ConfigRule=config_rule)
Prevention Checklist
- Review security groups weekly
- Document why each rule exists
- Use descriptive security group names
- Implement AWS Config rules
- Set up alerts for security group changes
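For the alerting item above, one approach is an EventBridge rule that fires on security group API calls recorded by CloudTrail. A sketch (the SNS topic ARN is a placeholder):
import json
import boto3
events = boto3.client('events')
# Match security group rule changes captured by CloudTrail
events.put_rule(
    Name='security-group-changes',
    EventPattern=json.dumps({
        'source': ['aws.ec2'],
        'detail-type': ['AWS API Call via CloudTrail'],
        'detail': {
            'eventName': [
                'AuthorizeSecurityGroupIngress',
                'AuthorizeSecurityGroupEgress',
                'RevokeSecurityGroupIngress',
                'RevokeSecurityGroupEgress'
            ]
        }
    })
)
# Send matching events to an SNS topic your team watches
events.put_targets(
    Rule='security-group-changes',
    Targets=[{'Id': 'notify', 'Arn': 'arn:aws:sns:us-east-1:123456789012:security-alerts'}]
)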
Mistake #6: Ignoring CloudTrail Logs
The Problem
Startups either don’t enable CloudTrail or enable it but never look at the logs. When a security incident occurs, they have no audit trail to understand what happened.
How Startups Get Burned
A startup discovered unauthorized charges of $30,000 on their AWS bill. Without CloudTrail logs, they couldn’t determine how attackers gained access, what resources were compromised, or what data was accessed. They had to assume total compromise and rebuild from scratch.
The Fix
Step 1: Enable CloudTrail for all regions
aws cloudtrail create-trail \
  --name all-regions-trail \
  --s3-bucket-name my-cloudtrail-bucket \
  --is-multi-region-trail \
  --enable-log-file-validation
# create-trail does not start recording; turn logging on explicitly
aws cloudtrail start-logging --name all-regions-trail
Step 2: Set up automated alerting
import json
import boto3

def lambda_handler(event, context):
    # Parse CloudTrail events delivered via SNS
    for record in event['Records']:
        message = json.loads(record['Sns']['Message'])
        for event_record in message.get('Records', []):
            event_name = event_record.get('eventName')
            user_identity = event_record.get('userIdentity', {})
            # Alert on suspicious events
            if event_name in ['DeleteBucket', 'DeleteDBInstance', 'DeleteUser']:
                send_alert(f"CRITICAL: {event_name} by {user_identity.get('userName')}")
            # Alert on root account usage
            if user_identity.get('type') == 'Root':
                send_alert("CRITICAL: Root account activity detected!")

# send_alert is your own notification helper (SNS publish, Slack webhook, etc.)
Step 3: Query the logs with Athena
-- Find recent API calls that failed for lack of permissions
SELECT eventTime, userIdentity.userName, errorCode, errorMessage
FROM cloudtrail_logs
WHERE errorCode IN ('UnauthorizedOperation', 'AccessDenied')
ORDER BY eventTime DESC
LIMIT 100
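If you don't have an Athena table for CloudTrail yet, the CloudTrail console can generate one for you against the trail's S3 bucket, which is faster than writing the table DDL by hand.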
Prevention Checklist
- Enable CloudTrail in all regions
- Enable log file validation
- Set up real-time alerts for critical events
- Store logs in a separate account (see the sketch after this list)
- Review logs weekly
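For the separate-account item above, the receiving bucket needs a policy that lets CloudTrail deliver logs across accounts. A sketch, reusing the bucket and account ID placeholders from earlier:
import json
import boto3
s3 = boto3.client('s3')
# Standard CloudTrail delivery policy: ACL check plus write access scoped
# to this account's log prefix
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": "arn:aws:s3:::my-cloudtrail-bucket"
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-cloudtrail-bucket/AWSLogs/123456789012/*",
            "Condition": {"StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}}
        }
    ]
}
s3.put_bucket_policy(Bucket='my-cloudtrail-bucket', Policy=json.dumps(policy))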
Mistake #7: No Secrets Management Strategy
The Problem
Startups store API keys, database passwords, and other secrets in plain text—in environment variables, configuration files, or worse, in code. One compromised instance exposes everything.
How Startups Get Burned
A startup stored all production secrets in environment variables on EC2 instances. An SSRF vulnerability in their app allowed attackers to read the instance metadata and environment variables. They got database passwords, API keys, and encryption keys—full compromise in minutes.
The Fix
Step 1: Move secrets to AWS Secrets Manager
import boto3
import json

secrets_client = boto3.client('secretsmanager')

# Store a secret
secrets_client.create_secret(
    Name='prod/myapp/database',
    SecretString=json.dumps({
        'username': 'admin',
        'password': 'GeneratedPassword123!',
        'host': 'mydb.123456.us-east-1.rds.amazonaws.com',
        'port': 3306
    })
)

# Retrieve a secret in your application
def get_secret(secret_name):
    response = secrets_client.get_secret_value(SecretId=secret_name)
    return json.loads(response['SecretString'])

# Use in application
db_config = get_secret('prod/myapp/database')
Step 2: Use IAM roles for secret access
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/myapp/*"
    }
  ]
}
Step 3: Enable automatic rotation
# Lambda function for password rotation (simplified)
import json
import boto3

def rotate_secret(event, context):
    service_client = boto3.client('secretsmanager')
    # Get the secret's metadata
    metadata = service_client.describe_secret(SecretId=event['SecretId'])
    # Create new password (generate_password is your own helper)
    new_password = generate_password()
    # Update the database password (update_database_password is your own helper)
    update_database_password(new_password)
    # Store the new version
    service_client.put_secret_value(
        SecretId=event['SecretId'],
        SecretString=json.dumps({'password': new_password}),
        VersionStages=['AWSPENDING']
    )
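Keep in mind this is a simplified sketch: Secrets Manager actually invokes your rotation function once per step (createSecret, setSecret, testSecret, finishSecret), and the function has to handle each one. For production, start from the rotation function templates AWS publishes for RDS and other engines rather than writing one from scratch.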
Prevention Checklist
- Never store secrets in code or config files
- Use Secrets Manager or Parameter Store
- Enable automatic rotation
- Audit secret access via CloudTrail
- Use different secrets per environment
Quick Wins: Your First 24 Hours
If you’re reading this with a sinking feeling, here’s what to fix RIGHT NOW:
Hour 1-2: Enable S3 Block Public Access
aws s3control put-public-access-block --account-id $(aws sts get-caller-identity --query Account --output text) --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
Hour 3-4: Audit and remove 0.0.0.0/0 security group rules
aws ec2 describe-security-groups --filters Name=ip-permission.cidr,Values='0.0.0.0/0' --query 'SecurityGroups[*].[GroupId,GroupName]' --output table
Hour 5-8: Enable CloudTrail and set up alerts
Hour 9-16: Remove hardcoded credentials and switch to IAM roles
Hour 17-24: Move secrets to Secrets Manager
The Cost of Prevention vs. The Cost of a Breach
- Prevention cost: ~$50/month in AWS services + 2 days of engineering time
- Average breach cost for startups: $500,000 + 35% customer churn + possible shutdown
The math is simple. These seven mistakes are entirely preventable with basic AWS security hygiene. Don’t become another cautionary tale.
Your Next Steps
- Run our security audit script (link in resources)
- Fix the critical issues (public S3, open databases, hardcoded credentials)
- Implement monitoring (CloudTrail, Config, GuardDuty)
- Schedule monthly reviews (security groups, IAM policies, access keys)
Remember: Perfect security is impossible, but these seven fixes will prevent 95% of breaches. Your customers trust you with their data. Don’t let preventable mistakes destroy that trust.
Need help visualizing your AWS security posture? Tools like PathShield can automatically scan for these misconfigurations and show you exactly what needs fixing—before attackers find them first.