Compliance & Audit
SOC 2 Type II
SOC 2 (System and Organization Controls 2) is an auditing standard developed by the AICPA that evaluates how service organizations manage customer data. Type II reports cover the operational effectiveness of controls over a period (typically 6–12 months), not just their design.
Five Trust Service Criteria
1. Security (Common Criteria — required)
Protection of the system against unauthorized access. This is the only mandatory criterion. Covers: logical and physical access controls, encryption, vulnerability management, incident response, change management.
2. Availability
System availability per SLA commitments. Covers: uptime monitoring, redundancy, disaster recovery, capacity planning, performance monitoring.
3. Processing Integrity
System processing is complete, valid, accurate, timely, and authorized. Covers: input/output validation, error handling, processing monitoring, data quality controls.
4. Confidentiality
Information designated as confidential is protected. Covers: data classification, encryption, access controls on sensitive data, NDAs, data retention/disposal.
5. Privacy
Personal information is collected, used, retained, disclosed, and disposed per the privacy notice. Covers: consent management, data subject rights, privacy by design, DPAs.
What Auditors Look For (Technical Controls)
| Control Area | Evidence Auditors Require | Implementation |
|---|---|---|
| Access Control | User access reviews (quarterly), MFA enabled for all users, least privilege policy | IAM audit reports, access review documentation, MFA enforcement policies |
| Monitoring | SIEM alerts, 24/7 monitoring, log retention (1+ year) | CloudTrail + S3 with 365-day retention, alerting runbooks |
| Incident Response | IR plan, tabletop exercises (annual), incident post-mortems | Documented IR playbooks, exercise records, ticket history |
| Change Management | All changes reviewed and approved before deployment | PR approvals, CI/CD audit logs, change request tickets |
| Encryption | Data encrypted at rest and in transit, key management documented | KMS key policies, TLS 1.2+ enforcement, S3/RDS encryption enabled |
| Vulnerability Management | Regular scans, SLA for remediation (critical: 24h, high: 7d) | Trivy/Inspector scan reports, patch management records |
| Vendor Risk | Third-party risk assessments, SOC 2 reports from vendors | Vendor questionnaires, vendor SOC 2 reports, DPAs |
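Some of the access-control evidence above can be generated rather than assembled by hand. As a minimal sketch, the script below parses an IAM credential report (as exported with `aws iam get-credential-report`; the `password_enabled` and `mfa_active` columns are part of AWS's documented report format) and flags console users without MFA:

```python
import csv
import io

def users_without_mfa(report_csv: str) -> list[str]:
    """Return console-enabled IAM users from a credential report
    that do not have MFA active — evidence for access reviews."""
    reader = csv.DictReader(io.StringIO(report_csv))
    flagged = []
    for row in reader:
        # Only console users (password_enabled) are in scope for MFA here
        if row["password_enabled"] == "true" and row["mfa_active"] == "false":
            flagged.append(row["user"])
    return flagged

# Trimmed sample report (real reports carry many more columns)
sample = (
    "user,password_enabled,mfa_active\n"
    "alice,true,true\n"
    "bob,true,false\n"
    "ci-bot,false,false\n"
)
print(users_without_mfa(sample))  # ['bob']
```

Run on a schedule, the output doubles as the quarterly access-review artifact auditors ask for.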
ISO 27001
ISO/IEC 27001 is the international standard specifying requirements for an Information Security Management System (ISMS). It takes a risk-based approach — organizations identify risks, implement controls to mitigate them, and continuously improve.
ISMS Implementation Roadmap
Phase 1: Gap Analysis (Weeks 1–4)
- Assess current security posture against ISO 27001 Annex A controls
- Identify gaps and document findings in a gap analysis report
- Define scope of the ISMS (which systems, locations, processes)
- Secure executive sponsorship and budget
Phase 2: Risk Treatment (Weeks 5–10)
- Conduct formal risk assessment (identify assets, threats, vulnerabilities, likelihood, impact)
- Create Risk Register with risk ratings and treatment decisions (Accept, Mitigate, Transfer, Avoid)
- Develop Statement of Applicability (SoA) — document which Annex A controls apply and why
- Create Risk Treatment Plan with owners, timelines, and success criteria
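The risk-rating and treatment-decision step above can be made concrete with a small scoring model. This is a sketch only: the 5×5 likelihood/impact scale and the appetite/avoid thresholds are illustrative assumptions, not ISO 27001 requirements, and Transfer (e.g. insurance) is omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str
    threat: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def rating(self) -> int:
        # Simple multiplicative rating for the Risk Register
        return self.likelihood * self.impact

def treatment(risk: Risk, appetite: int = 6) -> str:
    """Suggest a treatment decision based on rating vs. risk appetite."""
    if risk.rating <= appetite:
        return "Accept"
    if risk.rating >= 20:
        return "Avoid"    # near-maximum rating: redesign or stop the activity
    return "Mitigate"     # implement Annex A controls to reduce the rating

r = Risk("customer-db", "ransomware", likelihood=3, impact=5)
print(r.rating, treatment(r))  # 15 Mitigate
```

Each register entry then carries its rating, decision, owner, and timeline into the Risk Treatment Plan.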
Phase 3: Controls Implementation (Months 3–9)
- Implement selected controls from Annex A (all 93 controls should be considered)
- Develop required policies: IS Policy, Acceptable Use, Access Control, Incident Response, BCDR
- Train all staff on information security awareness
- Implement technical controls: MFA, encryption, SIEM, patch management, vulnerability scanning
- Establish supplier management and third-party risk processes
Phase 4: Internal Audit & Certification (Months 10–12)
- Conduct internal audit against all ISO 27001 requirements
- Management review meeting — review ISMS performance, risks, objectives
- Stage 1 audit: External auditor reviews documentation
- Stage 2 audit: External auditor tests control effectiveness on-site
- Achieve certification (valid 3 years with annual surveillance audits)
CIS Benchmarks
CIS Controls v8 — 18 Controls Overview
Note: Implementation Groups (IG1–IG3) are cumulative and assigned per safeguard, not per control; the IG column below is a simplification indicating where each control's safeguards are concentrated.
| # | Control | Implementation Group |
|---|---|---|
| 1 | Inventory and Control of Enterprise Assets | IG1 |
| 2 | Inventory and Control of Software Assets | IG1 |
| 3 | Data Protection | IG1 |
| 4 | Secure Configuration of Enterprise Assets and Software | IG1 |
| 5 | Account Management | IG1 |
| 6 | Access Control Management | IG1 |
| 7 | Continuous Vulnerability Management | IG2 |
| 8 | Audit Log Management | IG2 |
| 9 | Email and Web Browser Protections | IG2 |
| 10 | Malware Defenses | IG2 |
| 11 | Data Recovery | IG2 |
| 12 | Network Infrastructure Management | IG2 |
| 13 | Network Monitoring and Defense | IG2 |
| 14 | Security Awareness and Skills Training | IG2 |
| 15 | Service Provider Management | IG2 |
| 16 | Application Software Security | IG2 |
| 17 | Incident Response Management | IG3 |
| 18 | Penetration Testing | IG3 |
CIS Kubernetes Benchmark — kube-bench
```shell
# Run kube-bench on a node (host config mounted read-only)
docker run --pid=host -v /etc:/etc:ro -v /var:/var:ro \
  -t aquasec/kube-bench:latest \
  --version 1.27
```
```yaml
# Run kube-bench as a Kubernetes Job
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
  namespace: kube-system
spec:
  template:
    spec:
      hostPID: true
      containers:
        - name: kube-bench
          image: aquasec/kube-bench:latest
          command: ["kube-bench"]
          args: ["--json", "--outputfile", "/output/report.json"]
          volumeMounts:
            - name: var-lib-etcd
              mountPath: /var/lib/etcd
              readOnly: true
            - name: var-lib-kubelet
              mountPath: /var/lib/kubelet
              readOnly: true
            - name: etc-systemd
              mountPath: /etc/systemd
              readOnly: true
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
            - name: output
              mountPath: /output
      restartPolicy: Never
      volumes:
        - name: var-lib-etcd
          hostPath: { path: "/var/lib/etcd" }
        - name: var-lib-kubelet
          hostPath: { path: "/var/lib/kubelet" }
        - name: etc-systemd
          hostPath: { path: "/etc/systemd" }
        - name: etc-kubernetes
          hostPath: { path: "/etc/kubernetes" }
        - name: output
          emptyDir: {}
```
Sample kube-bench Output
```
[INFO] 1 Control Plane Security Configuration
[INFO] 1.2 API Server
[PASS] 1.2.1 Ensure that the --anonymous-auth argument is set to false
[PASS] 1.2.2 Ensure that the --token-auth-file parameter is not set
[FAIL] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate
[PASS] 1.2.7 Ensure that the --authorization-mode argument is not set to AlwaysAllow
[WARN] 1.2.10 Ensure that the admission control plugin EventRateLimit is set
[INFO] 4 Worker Node Security Configuration
[PASS] 4.1.1 Ensure that the kubelet service file permissions are set to 644 or more restrictive
[FAIL] 4.2.1 Ensure that the anonymous-auth argument is set to false
[PASS] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true

== Summary ==
42 checks PASS
8 checks FAIL
10 checks WARN
0 checks INFO
```
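The JSON report written by the Job above can gate CI on failed checks. A sketch of a parser follows; the nested `Controls → tests → results` layout with a `status` field matches kube-bench's JSON output in recent versions, but verify the shape against the release you run:

```python
from collections import Counter

def summarize(report: dict) -> Counter:
    """Count PASS/FAIL/WARN results in a kube-bench JSON report."""
    counts: Counter = Counter()
    for control in report.get("Controls", []):
        for test in control.get("tests", []):
            for result in test.get("results", []):
                counts[result["status"]] += 1
    return counts

def gate(report: dict) -> bool:
    """Return True if the run is acceptable (no FAIL findings)."""
    return summarize(report)["FAIL"] == 0

# Trimmed sample mirroring the output above
sample = {"Controls": [{"tests": [{"results": [
    {"test_number": "1.2.1", "status": "PASS"},
    {"test_number": "1.2.6", "status": "FAIL"},
    {"test_number": "1.2.10", "status": "WARN"},
]}]}]}
print(dict(summarize(sample)), gate(sample))
```

Exiting non-zero when `gate()` is false turns the benchmark into a blocking pipeline step.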
CIS AWS Foundations Benchmark — Security Hub
```shell
# Enable AWS Security Hub with default standards
aws securityhub enable-security-hub \
  --enable-default-standards \
  --region ap-southeast-1

# Enable the CIS AWS Foundations Benchmark v1.4.0 standard
aws securityhub batch-enable-standards \
  --standards-subscription-requests \
  StandardsArn=arn:aws:securityhub:ap-southeast-1::standards/cis-aws-foundations-benchmark/v/1.4.0

# Get a compliance summary of active, failed findings
aws securityhub get-findings \
  --filters '{"ComplianceStatus":[{"Value":"FAILED","Comparison":"EQUALS"}],"RecordState":[{"Value":"ACTIVE","Comparison":"EQUALS"}]}' \
  --query 'Findings[*].[Title,Severity.Label,Compliance.Status]' \
  --output table
```
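The `get-findings` output can also be rolled up for reporting. A sketch that groups active, failed findings by severity (the `RecordState`, `Compliance.Status`, and `Severity.Label` fields follow the AWS Security Finding Format):

```python
from collections import Counter

def severity_summary(findings: list[dict]) -> Counter:
    """Group failed, active Security Hub findings by severity label."""
    counts: Counter = Counter()
    for f in findings:
        if f.get("RecordState") != "ACTIVE":
            continue
        if f.get("Compliance", {}).get("Status") != "FAILED":
            continue
        counts[f.get("Severity", {}).get("Label", "UNKNOWN")] += 1
    return counts

# Trimmed sample findings
findings = [
    {"Title": "CIS 1.1", "RecordState": "ACTIVE",
     "Compliance": {"Status": "FAILED"}, "Severity": {"Label": "CRITICAL"}},
    {"Title": "CIS 2.3", "RecordState": "ACTIVE",
     "Compliance": {"Status": "PASSED"}, "Severity": {"Label": "LOW"}},
]
print(dict(severity_summary(findings)))  # {'CRITICAL': 1}
```

Feeding the JSON from `aws securityhub get-findings --output json` into this gives a one-line posture summary per account.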
PCI DSS
The Payment Card Industry Data Security Standard (PCI DSS) v4.0 applies to any organization that stores, processes, or transmits cardholder data. It has 12 main requirements organized into 6 goals.
12 PCI DSS Requirements Overview
| Goal | Requirement | Summary |
|---|---|---|
| Build & Maintain Secure Network and Systems | 1 | Install and maintain network security controls |
| | 2 | Apply secure configurations to all system components |
| Protect Account Data | 3 | Protect stored account data (truncate, mask, encrypt PANs) |
| | 4 | Protect cardholder data with strong cryptography during transmission |
| Maintain Vulnerability Management Program | 5 | Protect all systems and networks against malicious software |
| | 6 | Develop and maintain secure systems and software |
| Implement Strong Access Control Measures | 7 | Restrict access to cardholder data by business need to know |
| | 8 | Identify users and authenticate access to system components (MFA required) |
| | 9 | Restrict physical access to cardholder data |
| Regularly Monitor & Test Networks | 10 | Log and monitor all access to system components and cardholder data |
| | 11 | Test security of systems and networks regularly (annual penetration test) |
| Maintain Information Security Policy | 12 | Support information security with organizational policies and programs |
Network Segmentation for CDE
The Cardholder Data Environment (CDE) must be isolated from all other networks. Proper segmentation can significantly reduce the scope of PCI DSS assessment.
```hcl
# AWS VPC segmentation for the CDE
# CDE VPC — strictly isolated
resource "aws_vpc" "cde" {
  cidr_block           = "10.10.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name        = "cde-vpc"
    Environment = "production"
    Compliance  = "PCI-DSS"
  }
}

# No VPC peering to non-CDE environments.
# All egress through a NAT Gateway with an allowlist.
resource "aws_security_group" "cde_app" {
  name   = "cde-application"
  vpc_id = aws_vpc.cde.id

  # Only allow HTTPS from payment processor IPs
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["192.0.2.0/24"] # Payment processor IP range
  }

  # Egress: only HTTPS to the payment processor
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["192.0.2.0/24"]
  }
}
```
Cloud Compliance Tools
AWS Config Rules
```shell
# Enable AWS Config (a delivery channel to S3 is also required,
# and the recorder must be started after creation)
aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN=arn:aws:iam::123456789012:role/config-role \
  --recording-group allSupported=true,includeGlobalResourceTypes=true
aws configservice start-configuration-recorder \
  --configuration-recorder-name default
```

```hcl
# Deploy managed Config rules via Terraform
resource "aws_config_config_rule" "s3_bucket_public_read" {
  name = "s3-bucket-public-read-prohibited"
  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_PUBLIC_READ_PROHIBITED"
  }
  tags = { Compliance = "CIS-AWS" }
}

resource "aws_config_config_rule" "root_mfa" {
  name = "root-account-mfa-enabled"
  source {
    owner             = "AWS"
    source_identifier = "ROOT_ACCOUNT_MFA_ENABLED"
  }
}

resource "aws_config_config_rule" "encrypted_volumes" {
  name = "encrypted-volumes"
  source {
    owner             = "AWS"
    source_identifier = "ENCRYPTED_VOLUMES"
  }
}

# Managed Config rule: ensure all EC2 instances require IMDSv2
resource "aws_config_config_rule" "ec2_imdsv2" {
  name = "ec2-imdsv2-required"
  source {
    owner             = "AWS"
    source_identifier = "EC2_IMDSV2_CHECK"
  }
}
```
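Where no managed rule fits, a custom rule is backed by a Lambda function. The compliance decision itself can be kept as a pure, unit-testable function; the sketch below checks IMDSv2 enforcement on an EC2 configuration item (the `metadataOptions.httpTokens` path matches the EC2 configuration schema, while the boto3 call to `config:PutEvaluations` that would report the verdict is omitted):

```python
def evaluate_imdsv2(configuration_item: dict) -> str:
    """Return a Config compliance verdict for one configuration item."""
    if configuration_item.get("resourceType") != "AWS::EC2::Instance":
        return "NOT_APPLICABLE"
    options = configuration_item.get("configuration", {}).get("metadataOptions", {})
    # IMDSv2 is enforced when session tokens are required
    if options.get("httpTokens") == "required":
        return "COMPLIANT"
    return "NON_COMPLIANT"

item = {"resourceType": "AWS::EC2::Instance",
        "configuration": {"metadataOptions": {"httpTokens": "optional"}}}
print(evaluate_imdsv2(item))  # NON_COMPLIANT
```

Keeping the verdict logic separate from the Lambda/boto3 plumbing makes the rule trivially testable in CI.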
GCP Security Command Center
```shell
# Enable Security Command Center asset discovery (Premium tier;
# command availability varies across gcloud releases)
gcloud scc settings update \
  --organization=123456789 \
  --enable-asset-discovery

# List active critical findings
gcloud scc findings list \
  --organization=123456789 \
  --filter='state="ACTIVE" AND severity="CRITICAL"' \
  --format="table(name,category,resourceName,severity)"

# Export findings to BigQuery for analysis
# (bq expects newline-delimited JSON, so flatten the array with jq)
gcloud scc findings list \
  --organization=123456789 \
  --format=json | jq -c '.[]' | bq load \
  --source_format=NEWLINE_DELIMITED_JSON \
  my-project:security.scc_findings \
  -

# Create a notification config for real-time alerts
gcloud scc notifications create critical-findings \
  --organization=123456789 \
  --description="Notify on critical findings" \
  --pubsub-topic=projects/my-project/topics/scc-alerts \
  --filter='severity="CRITICAL" OR severity="HIGH"'
```
Audit Logging
What to Log
Critical Events That Must Be Logged
- API calls: Every API call to cloud control plane (AWS CloudTrail, GCP Cloud Audit Logs)
- Authentication events: Login success/failure, MFA usage, token issuance
- Authorization failures: Access denied events — critical for detecting probing/attacks
- Data access: Who accessed what data and when (S3 access logs, database query logs)
- Configuration changes: IAM changes, security group modifications, resource creation/deletion
- Privilege escalation: Role assumption, sudo usage, kubectl exec into pods
- Network flows: VPC Flow Logs, firewall allow/deny decisions
- Secret access: Who retrieved which secret from Vault/Secrets Manager
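For application-level events that the cloud provider's audit trail does not capture, emit structured records carrying the same fields auditors expect from the list above. A minimal sketch (the exact field set is an assumption modeled on that list, not a standard):

```python
import json
from datetime import datetime, timezone

def audit_event(actor: str, action: str, resource: str,
                outcome: str, **context) -> str:
    """Serialize one audit record as a single JSON log line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,        # who performed the action
        "action": action,      # what was attempted
        "resource": resource,  # what it targeted
        "outcome": outcome,    # allowed / denied / error
        **context,             # e.g. source_ip, request_id
    }
    return json.dumps(record, sort_keys=True)

line = audit_event("alice@example.com", "secrets.read",
                   "vault:kv/prod/db-password", "denied",
                   source_ip="203.0.113.7")
print(line)
```

One JSON object per line keeps the records trivially ingestible by CloudWatch, Cloud Logging, or a SIEM.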
AWS CloudTrail — Production Configuration
```hcl
# Terraform: CloudTrail with KMS encryption and integrity validation
resource "aws_cloudtrail" "main" {
  name                          = "org-cloudtrail"
  s3_bucket_name                = aws_s3_bucket.cloudtrail.id
  include_global_service_events = true
  is_multi_region_trail         = true
  is_organization_trail         = true
  enable_log_file_validation    = true # SHA-256 integrity hashing
  kms_key_id                    = aws_kms_key.cloudtrail.arn

  event_selector {
    read_write_type           = "All"
    include_management_events = true

    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3"] # all S3 object-level events
    }
    data_resource {
      type   = "AWS::Lambda::Function"
      values = ["arn:aws:lambda"] # all Lambda invocations
    }
  }

  insight_selector {
    insight_type = "ApiCallRateInsight"
  }

  tags = {
    Environment = "production"
    Compliance  = "SOC2,PCI-DSS"
  }
}

# CloudTrail bucket lifecycle: Standard-IA after 90 days, Glacier after
# 1 year, expire after ~7 years (PCI DSS requires at least 12 months;
# longer retention here reflects broader legal-hold policy)
resource "aws_s3_bucket_lifecycle_configuration" "cloudtrail" {
  bucket = aws_s3_bucket.cloudtrail.id

  rule {
    id     = "archive-logs"
    status = "Enabled"

    transition {
      days          = 90
      storage_class = "STANDARD_IA"
    }
    transition {
      days          = 365
      storage_class = "GLACIER"
    }
    expiration {
      days = 2557 # ~7 years
    }
  }
}
```
GCP Cloud Audit Logs
```shell
# Audit log configuration is part of the organization's IAM policy.
# Fetch the current policy, add the auditConfigs block below, re-apply:
gcloud organizations get-iam-policy 123456789 --format=yaml > policy.yaml
gcloud organizations set-iam-policy 123456789 policy.yaml
```

```yaml
# In policy.yaml — enable DATA_READ and DATA_WRITE for all services
auditConfigs:
  - service: allServices
    auditLogConfigs:
      - logType: ADMIN_READ
      - logType: DATA_READ
      - logType: DATA_WRITE
```

```shell
# Query audit logs in Cloud Logging
gcloud logging read \
  'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity" AND severity>=WARNING' \
  --limit=50 \
  --format=json | jq '.[] | {time: .timestamp, user: .protoPayload.authenticationInfo.principalEmail, action: .protoPayload.methodName, resource: .resource.labels}'
```
Kubernetes Audit Policy
```yaml
# k8s-audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log all requests to secrets and configmaps (sensitive resources)
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  # Log pod exec/attach (privileged operations)
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods/exec", "pods/attach", "pods/portforward"]
  # Log RBAC changes
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
        resources: ["roles", "clusterroles", "rolebindings", "clusterrolebindings"]
  # Log unauthenticated requests at Metadata level
  - level: Metadata
    omitStages:
      - RequestReceived
    userGroups: ["system:unauthenticated"]
  # Reduced logging for read-only operations on non-sensitive resources
  - level: Metadata
    verbs: ["get", "list", "watch"]
    resources:
      - group: ""
        resources: ["pods", "services", "endpoints"]
      - group: "apps"
        resources: ["deployments", "replicasets"]
  # Log all other requests at Request level
  - level: Request
    omitStages:
      - RequestReceived
```
Compliance as Code
OPA for Compliance Enforcement
```rego
# Rego policies evaluated (e.g. with conftest) against the JSON form
# of a Terraform configuration
package cloud.compliance.aws

import future.keywords.in

# Enforce server-side encryption for all S3 buckets
deny[msg] {
    input.resource.aws_s3_bucket[name]
    not bucket_has_encryption(name)
    msg := sprintf("S3 bucket '%v' must have server-side encryption enabled", [name])
}

bucket_has_encryption(bucket_name) {
    encryption := input.resource.aws_s3_bucket_server_side_encryption_configuration[_]
    # bucket attribute references aws_s3_bucket.<name>.id in the config
    contains(encryption.bucket, bucket_name)
}

# No public S3 buckets
deny[msg] {
    bucket := input.resource.aws_s3_bucket[name]
    acl := bucket.acl
    acl in ["public-read", "public-read-write", "authenticated-read"]
    msg := sprintf("S3 bucket '%v' must not have public ACL: %v", [name, acl])
}

# All RDS instances must be encrypted
deny[msg] {
    db := input.resource.aws_db_instance[name]
    not db.storage_encrypted
    msg := sprintf("RDS instance '%v' must have storage_encrypted = true", [name])
}
```
Terraform Compliance Checks
terraform-compliance enables BDD-style compliance testing against a saved plan:

```gherkin
# features/s3_security.feature
Feature: S3 Security Controls

  Scenario: All S3 buckets must have versioning enabled
    Given I have aws_s3_bucket_versioning defined
    When its status is defined
    Then its status must match the "Enabled" regex

  Scenario: S3 buckets must not allow public access
    Given I have aws_s3_bucket_public_access_block defined
    Then its block_public_acls must be true
    And its block_public_policy must be true
    And its ignore_public_acls must be true
    And its restrict_public_buckets must be true

  Scenario: All EC2 instances must use approved AMIs
    Given I have aws_instance defined
    When its ami is defined
    Then its ami must match the "ami-0[a-z0-9]{16}" regex
```

```shell
# Run terraform-compliance against a JSON plan
terraform-compliance \
  --features ./features/ \
  --planfile ./tfplan.json
```
GDPR Basics for Cloud Infrastructure
Key GDPR Requirements for Cloud Teams
- Data Residency: Personal data of EU residents must remain within the EU/EEA unless adequate safeguards exist (SCCs, adequacy decisions). Use AWS/GCP region constraints via SCP/org policies to enforce data residency.
- Right to Erasure: Systems must be capable of deleting all personal data for a given individual. Design data stores with deletion in mind; avoid data duplication across systems without tracking.
- Data Processing Agreements (DPAs): Any cloud provider processing personal data must have a DPA. AWS, GCP, and Azure all offer DPAs — ensure they are in place before processing EU data.
- Data Minimization: Only collect and process personal data that is strictly necessary for the stated purpose.
- Breach Notification: Report data breaches to the supervisory authority within 72 hours. Affected individuals must be notified if high risk. Requires robust monitoring and incident response.
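Alongside a preventive SCP, residency can also be verified detectively during inventory audits. A sketch under stated assumptions: the EU region allowlist below is illustrative, and resource records are assumed to carry `region` and `personal_data` fields from your inventory tooling:

```python
EU_REGIONS = {"eu-west-1", "eu-west-2", "eu-west-3",
              "eu-central-1", "eu-north-1", "eu-south-1"}

def residency_violations(resources: list[dict]) -> list[str]:
    """Flag resources holding personal data outside approved EU regions."""
    return [r["arn"] for r in resources
            if r.get("personal_data") and r["region"] not in EU_REGIONS]

# Hypothetical inventory export
inventory = [
    {"arn": "arn:aws:s3:::users-eu", "region": "eu-west-1", "personal_data": True},
    {"arn": "arn:aws:s3:::users-us", "region": "us-east-1", "personal_data": True},
    {"arn": "arn:aws:s3:::assets",   "region": "us-east-1", "personal_data": False},
]
print(residency_violations(inventory))  # ['arn:aws:s3:::users-us']
```

The detective check catches resources created before the SCP existed or via exempted principals.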
Enforce EU data residency with an AWS Service Control Policy (SCP):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceEUDataResidency",
      "Effect": "Deny",
      "Action": [
        "s3:CreateBucket",
        "rds:CreateDBInstance",
        "dynamodb:CreateTable",
        "es:CreateDomain"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": [
            "eu-west-1",
            "eu-west-2",
            "eu-west-3",
            "eu-central-1",
            "eu-north-1",
            "eu-south-1"
          ]
        }
      }
    }
  ]
}
```
Compliance Readiness Checklist
- Identity: MFA enforced for all users, no shared accounts, root/owner accounts locked down, access reviews quarterly
- Encryption: All data encrypted at rest (KMS-managed keys), TLS 1.2+ in transit, no plaintext secrets in code or configs
- Logging: CloudTrail/Cloud Audit Logs enabled org-wide, logs retained 1+ year (7 years for PCI), log integrity validation enabled
- Network: Private subnets for workloads, VPC Flow Logs enabled, no 0.0.0.0/0 ingress rules except for known load balancers
- Vulnerability Management: Container and host scanning in CI/CD, critical CVEs remediated within 24h, high within 7 days
- Secrets: No hardcoded secrets, secrets manager in use, automatic rotation enabled, K8s secrets encrypted at rest
- Change Management: All infrastructure changes via IaC, PR-based approval, CI/CD audit trail, no manual changes to production
- Incident Response: IR plan documented and tested, SIEM alerts configured, on-call rotation in place, post-mortems conducted
- Backup: Automated backups enabled, backup encryption enabled, restore tested quarterly
- Compliance Scanning: CSPM tool active, CIS benchmarks scanned weekly, findings tracked in ticketing system with SLA