Hybrid Connectivity

Scope: Hybrid connectivity covers the network paths between on-premises infrastructure and cloud environments, and between multiple cloud providers. This page details configuration for AWS and GCP connectivity options, cross-cloud BGP routing, DNS resolution, and extended compute (Outposts, Anthos).

Connectivity Options Comparison

Option | Bandwidth | Latency | SLA | Cost Model | Best For
IPsec VPN | Up to ~1.25 Gbps per tunnel | Variable (internet-dependent); typically 10–50 ms regional | Best-effort (internet path) | Low fixed cost; pay per GB egress | Dev/test connectivity; backup path; SMB with low bandwidth needs
AWS Direct Connect | 1 Gbps, 10 Gbps, or 100 Gbps dedicated ports | Consistent, sub-millisecond to a few ms (private network) | 99.99% uptime SLA with redundant connections | Monthly port fee + data transfer; reduced data transfer out rates vs. internet egress | Production workloads; large data transfers; latency-sensitive apps
GCP Cloud Interconnect | 10 Gbps or 100 Gbps (Dedicated); 50 Mbps–10 Gbps (Partner) | Consistent private path; 1–5 ms for regional peering | 99.99% SLA with four circuits across two metros | Monthly circuit fee; reduced egress pricing for Interconnect traffic | Production workloads; GCP-heavy architectures; BigQuery bulk loads
SD-WAN | Aggregated over multiple underlay links (MPLS + internet) | Variable; intelligent path selection can optimize | Depends on underlay; SD-WAN adds reliability via path failover | License per device/site + underlay circuit costs | Branch office connectivity; replacing MPLS; multi-cloud with on-prem

AWS Site-to-Site VPN

AWS Site-to-Site VPN establishes an IPsec tunnel between a customer gateway (on-premises VPN device) and a virtual private gateway (or Transit Gateway) in AWS. Each connection creates two redundant tunnels for high availability.

BGP Setup and Redundant Tunnels

Redundancy model: AWS creates two tunnel endpoints per VPN connection, terminating in different AWS edge locations. For full redundancy, provision two VPN connections from two physically separate customer gateway devices (different ISP links). This achieves Active-Active or Active-Passive BGP failover at both the device and ISP level.
# Terraform: AWS Site-to-Site VPN with BGP via Transit Gateway

# Customer Gateway: your on-premises VPN device
resource "aws_customer_gateway" "on_prem_primary" {
  bgp_asn    = 65000          # Your on-premises ASN
  ip_address = "203.0.113.10" # Public IP of on-prem VPN router (primary)
  type       = "ipsec.1"
  tags       = { Name = "cgw-onprem-primary" }
}

resource "aws_customer_gateway" "on_prem_secondary" {
  bgp_asn    = 65000
  ip_address = "203.0.113.20" # Second router / second ISP link
  type       = "ipsec.1"
  tags       = { Name = "cgw-onprem-secondary" }
}

# VPN Connection 1: primary device → Transit Gateway
resource "aws_vpn_connection" "primary" {
  customer_gateway_id = aws_customer_gateway.on_prem_primary.id
  transit_gateway_id  = var.transit_gateway_id
  type                = "ipsec.1"

  # BGP inside tunnel addresses: /30s from 169.254.0.0/16 (AWS assigns them if you omit these)
  tunnel1_inside_cidr = "169.254.10.0/30"
  tunnel2_inside_cidr = "169.254.10.4/30"

  # IKEv2 with strong ciphers
  tunnel1_ike_versions                 = ["ikev2"]
  tunnel1_phase1_encryption_algorithms = ["AES256"]
  tunnel1_phase1_integrity_algorithms  = ["SHA2-256"]
  tunnel1_phase1_dh_group_numbers      = [14, 20]
  tunnel1_phase2_encryption_algorithms = ["AES256-GCM-16"]
  tunnel1_phase2_dh_group_numbers      = [14, 20]

  tunnel1_preshared_key = var.vpn_psk_1   # Store in AWS Secrets Manager
  tunnel2_preshared_key = var.vpn_psk_2

  tags = { Name = "vpn-onprem-primary" }
}

# VPN Connection 2: secondary device (for full redundancy)
resource "aws_vpn_connection" "secondary" {
  customer_gateway_id = aws_customer_gateway.on_prem_secondary.id
  transit_gateway_id  = var.transit_gateway_id
  type                = "ipsec.1"
  tunnel1_inside_cidr = "169.254.10.8/30"
  tunnel2_inside_cidr = "169.254.10.12/30"
  tunnel1_preshared_key = var.vpn_psk_3
  tunnel2_preshared_key = var.vpn_psk_4
  tags = { Name = "vpn-onprem-secondary" }
}

# On-premises router BGP config (Cisco IOS-XE example, primary device):
# router bgp 65000
#   neighbor 169.254.10.1 remote-as 64512       ! AWS side of tunnel 1 (Transit Gateway ASN)
#   neighbor 169.254.10.1 timers 10 30
#   network 10.0.0.0 mask 255.255.0.0           ! Advertise on-prem prefix
#
# On the secondary device, prepend the AS path so AWS prefers the primary device's tunnels:
# route-map BACKUP-OUT permit 10
#   set as-path prepend 65000 65000             ! Longer AS path = less preferred
#   (apply with: neighbor 169.254.10.9 route-map BACKUP-OUT out)

AWS Direct Connect

AWS Direct Connect provides a dedicated private network connection from your data center to an AWS Direct Connect location. Traffic never traverses the public internet, giving consistent throughput and low, predictable latency.

Physical Setup and Virtual Interfaces

Physical Path

You order a cross-connect at a Direct Connect location (colocation facility). Your router or WAN carrier connects to the AWS cage at that facility. AWS provisions a dedicated port (1G, 10G, or 100G) on their device. The physical link carries 802.1Q VLAN-tagged traffic, which is divided into Virtual Interfaces (VIFs).

VIF Type | Purpose | VLAN Tag | BGP Peer
Private VIF | Access resources inside a specific VPC | Unique per VIF; you assign | BGP session to Virtual Private Gateway or Direct Connect Gateway
Public VIF | Access AWS public endpoints (S3, DynamoDB, SQS) without traversing the internet | Unique per VIF | BGP session to AWS public IP range; AWS advertises public prefixes
Transit VIF | Connect to a Direct Connect Gateway, which routes to multiple VPCs via Transit Gateway across regions | Unique per VIF | BGP session to Direct Connect Gateway (ASN 64512 by default)
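
For comparison with the Transit VIF example later in this section, the following is a minimal sketch of a Private VIF that terminates on a Virtual Private Gateway. Resource and variable names (var.dx_connection_id, var.app_vpc_vgw_id) are illustrative assumptions, not values from this environment.

# Hypothetical sketch: Private VIF terminating on a Virtual Private Gateway
resource "aws_dx_private_virtual_interface" "app_vpc" {
  connection_id    = var.dx_connection_id    # Physical DX connection (assumed variable)
  name             = "private-vif-app-vpc"
  vlan             = 101                     # 802.1Q tag; unique per VIF on the connection
  address_family   = "ipv4"
  bgp_asn          = 65000                   # Your on-prem ASN
  amazon_address   = "169.254.201.1/30"
  customer_address = "169.254.201.2/30"
  vpn_gateway_id   = var.app_vpc_vgw_id      # VGW attached to the target VPC (assumed variable)
  tags             = { Name = "private-vif-app-vpc" }
}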

Resiliency Models

# Maximum Resilience (recommended for production):
#
# Data Center A               AWS Region
# [Router A] ─── DX Location 1 ─── [DX Connection 1] ─── Transit Gateway
# [Router A] ─── DX Location 2 ─── [DX Connection 2] ─┘
#
# Data Center B
# [Router B] ─── DX Location 1 ─── [DX Connection 3] ─── Transit Gateway
# [Router B] ─── DX Location 2 ─── [DX Connection 4] ─┘
#
# 4 connections, 2 DX locations, 2 physical data centers = no single point of failure
# Tolerates: DX location failure, data center failure, router failure

# High Resilience (cost-balanced):
# Single data center, two DX connections to two separate DX locations
# Tolerates: DX location failure, single connection failure

# Development / Non-Critical:
# Single DX connection + Site-to-Site VPN as backup
# AWS prefers Direct Connect routes over VPN routes for the same prefix, so the
# VPN tunnels carry traffic only when the DX BGP session is down

# Terraform: Direct Connect Transit VIF → Direct Connect Gateway → Transit Gateway

resource "aws_dx_gateway" "main" {
  name            = "dxgw-main"
  amazon_side_asn = "64513"   # Must differ from the Transit Gateway's ASN (64512 is the TGW default)
}

resource "aws_dx_transit_virtual_interface" "primary" {
  connection_id    = var.dx_connection_id   # Physical DX connection ID from AWS console
  name             = "transit-vif-primary"
  vlan             = 100
  address_family   = "ipv4"
  bgp_asn          = 65000                  # Your on-prem ASN
  amazon_address   = "169.254.200.1/30"     # AWS side of BGP session
  customer_address = "169.254.200.2/30"     # Your router side
  bgp_auth_key     = var.bgp_auth_key       # MD5 auth for BGP session
  dx_gateway_id    = aws_dx_gateway.main.id
}

resource "aws_dx_gateway_association" "tgw" {
  dx_gateway_id         = aws_dx_gateway.main.id
  associated_gateway_id = var.transit_gateway_id
  allowed_prefixes      = ["10.0.0.0/8", "172.16.0.0/12"]  # Prefixes advertised from AWS to on-premises over DX
}

GCP Cloud Interconnect

Cloud Interconnect provides high-bandwidth, low-latency connections between your on-premises network and Google's network. Dedicated Interconnect requires a physical colocation at a Google facility; Partner Interconnect uses a service provider.

Dedicated vs Partner Interconnect

Feature | Dedicated Interconnect | Partner Interconnect
Bandwidth per circuit | 10 Gbps or 100 Gbps | 50 Mbps to 10 Gbps (flexible)
Physical requirement | Your router must be colocated at a Google colocation facility | Connect via a Google-approved service provider (e.g., Equinix, Tata)
SLA | 99.99% with 4 circuits (2 metros × 2 per metro) | 99.99% with redundant provider connections
Provisioning time | Weeks (physical cross-connect required) | Days to weeks (provider-dependent)
Use case | Large enterprise with colocation presence or high bandwidth needs | Enterprise without direct colo presence; lower bandwidth requirements
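
For the Partner path, a hedged sketch is shown below: a Partner attachment is created without referencing a physical circuit, and GCP returns a pairing key that you hand to the provider to complete provisioning. The attachment name, region, and the reference to the Cloud Router defined in the next subsection are assumptions.

# Hypothetical sketch: Partner Interconnect VLAN attachment
resource "google_compute_interconnect_attachment" "partner_primary" {
  name                     = "partner-attach-primary"
  region                   = "asia-southeast1"
  router                   = google_compute_router.interconnect_router.id
  type                     = "PARTNER"
  edge_availability_domain = "AVAILABILITY_DOMAIN_1"   # Use different domains per attachment for redundancy
  admin_enabled            = true
}

# The pairing key is given to the partner (e.g., Equinix) to provision the circuit
output "partner_pairing_key" {
  value = google_compute_interconnect_attachment.partner_primary.pairing_key
}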

VLAN Attachments and Cloud Router BGP

# Terraform: GCP Dedicated Interconnect with Cloud Router and BGP

# Cloud Router: manages BGP sessions for the Interconnect
resource "google_compute_router" "interconnect_router" {
  name    = "router-interconnect-asia-se1"
  network = var.shared_vpc_network
  region  = "asia-southeast1"

  bgp {
    asn               = 64515          # Cloud Router ASN: use a private ASN (16550 is reserved for Partner Interconnect)
    advertise_mode    = "CUSTOM"
    advertised_groups = ["ALL_SUBNETS"]   # Advertise all VPC subnets to on-prem

    # Custom ranges are advertised to BGP peers in addition to the VPC subnets
    # (for example, a services range not covered by ALL_SUBNETS)
    advertised_ip_ranges {
      range       = "10.200.0.0/16"
      description = "Additional GCP-side aggregate advertised to on-prem"
    }
  }
}

# Interconnect attachment (VLAN) — one per circuit for redundancy
resource "google_compute_interconnect_attachment" "primary" {
  name                     = "interconnect-attach-primary"
  region                   = "asia-southeast1"
  router                   = google_compute_router.interconnect_router.id
  interconnect             = var.interconnect_id_primary   # Physical circuit ID
  type                     = "DEDICATED"
  bandwidth                = "BPS_10G"
  vlan_tag8021q            = 100          # VLAN tag on the physical circuit
  admin_enabled            = true
  candidate_subnets        = ["169.254.10.0/29"]   # BGP link-local range
}

resource "google_compute_interconnect_attachment" "secondary" {
  name          = "interconnect-attach-secondary"
  region        = "asia-southeast1"
  router        = google_compute_router.interconnect_router.id
  interconnect  = var.interconnect_id_secondary
  type          = "DEDICATED"
  bandwidth     = "BPS_10G"
  vlan_tag8021q = 200
  admin_enabled = true
  candidate_subnets = ["169.254.10.8/29"]
}

# Router interfaces bind the Cloud Router to each VLAN attachment
resource "google_compute_router_interface" "primary" {
  name                    = "if-interconnect-primary"
  router                  = google_compute_router.interconnect_router.name
  region                  = "asia-southeast1"
  interconnect_attachment = google_compute_interconnect_attachment.primary.name
  ip_range                = "169.254.10.1/29"    # Cloud Router side of the BGP link
}

resource "google_compute_router_interface" "secondary" {
  name                    = "if-interconnect-secondary"
  router                  = google_compute_router.interconnect_router.name
  region                  = "asia-southeast1"
  interconnect_attachment = google_compute_interconnect_attachment.secondary.name
  ip_range                = "169.254.10.9/29"
}

# BGP peer for the primary attachment
resource "google_compute_router_peer" "primary_peer" {
  name                      = "bgp-peer-primary"
  router                    = google_compute_router.interconnect_router.name
  region                    = "asia-southeast1"
  interface                 = google_compute_router_interface.primary.name
  peer_ip_address           = "169.254.10.2"     # Your router's BGP IP
  peer_asn                  = 65000              # Your on-prem ASN
  advertised_route_priority = 100                # Lower MED = preferred path (primary)
}

resource "google_compute_router_peer" "secondary_peer" {
  name                      = "bgp-peer-secondary"
  router                    = google_compute_router.interconnect_router.name
  region                    = "asia-southeast1"
  interface                 = google_compute_router_interface.secondary.name
  peer_ip_address           = "169.254.10.10"
  peer_asn                  = 65000
  advertised_route_priority = 200               # Higher value = secondary/backup path
}

GCP Cloud VPN — HA VPN with BGP

HA VPN provides a 99.99% SLA by creating two VPN tunnels across two GCP-managed gateway interfaces. Combined with a Cloud Router, it runs BGP for dynamic route exchange with the remote peer.

# Terraform: GCP HA VPN with Cloud Router and BGP

resource "google_compute_ha_vpn_gateway" "ha_vpn" {
  name    = "ha-vpn-gateway-asia-se1"
  network = var.shared_vpc_network
  region  = "asia-southeast1"
}

resource "google_compute_router" "vpn_router" {
  name    = "router-vpn-asia-se1"
  network = var.shared_vpc_network
  region  = "asia-southeast1"
  bgp {
    asn = 64514   # GCP Cloud Router ASN (choose a private ASN)
  }
}

# External (on-premises) VPN gateway — registers your peer device
resource "google_compute_external_vpn_gateway" "on_prem" {
  name            = "ext-vpn-on-prem"
  redundancy_type = "TWO_IPS_REDUNDANCY"   # Two on-prem routers
  interface {
    id         = 0
    ip_address = "203.0.113.10"   # Primary on-prem router public IP
  }
  interface {
    id         = 1
    ip_address = "203.0.113.20"   # Secondary on-prem router public IP
  }
}

# Tunnel 1: GCP interface 0 → on-prem interface 0
resource "google_compute_vpn_tunnel" "tunnel1" {
  name                            = "vpn-tunnel-1"
  region                          = "asia-southeast1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.ha_vpn.id
  vpn_gateway_interface           = 0
  peer_external_gateway           = google_compute_external_vpn_gateway.on_prem.id
  peer_external_gateway_interface = 0
  router                          = google_compute_router.vpn_router.id
  shared_secret                   = var.vpn_shared_secret_1
  ike_version                     = 2
}

# Tunnel 2: GCP interface 1 → on-prem interface 1
resource "google_compute_vpn_tunnel" "tunnel2" {
  name                            = "vpn-tunnel-2"
  region                          = "asia-southeast1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.ha_vpn.id
  vpn_gateway_interface           = 1
  peer_external_gateway           = google_compute_external_vpn_gateway.on_prem.id
  peer_external_gateway_interface = 1
  router                          = google_compute_router.vpn_router.id
  shared_secret                   = var.vpn_shared_secret_2
  ike_version                     = 2
}

# Cloud Router interface and BGP peer for tunnel 1 (repeat the same pattern for tunnel 2)
resource "google_compute_router_interface" "if1" {
  name       = "router-if-1"
  router     = google_compute_router.vpn_router.name
  region     = "asia-southeast1"
  vpn_tunnel = google_compute_vpn_tunnel.tunnel1.name
  ip_range   = "169.254.20.1/30"
}

resource "google_compute_router_peer" "peer1" {
  name            = "bgp-peer-1"
  router          = google_compute_router.vpn_router.name
  region          = "asia-southeast1"
  interface       = google_compute_router_interface.if1.name
  peer_ip_address = "169.254.20.2"
  peer_asn        = 65000
  advertised_route_priority = 100
}

Cross-Cloud Connectivity: AWS to GCP via VPN

When workloads span AWS and GCP, establish an encrypted path between the two environments using IPsec VPN with BGP; the tunnels transit the public internet, but traffic is encrypted and routes are exchanged dynamically. The AWS virtual private gateway (VGW) and the GCP Cloud Router establish BGP sessions through the VPN tunnels.

Architecture

# Cross-cloud connectivity diagram

AWS VPC (10.10.0.0/16)
  └── Virtual Private Gateway (VGW)  ASN: 64512
        └── VPN Connection (IPsec)
              └── [Internet / AWS-Google peering fabric]
                    └── GCP Cloud VPN Gateway
                          └── Cloud Router  ASN: 64514
                                └── GCP VPC (10.20.0.0/16)

# BGP route exchange:
# AWS VGW advertises: 10.10.0.0/16 → GCP Cloud Router
# GCP Cloud Router advertises: 10.20.0.0/16 → AWS VGW
# After BGP convergence, instances on both clouds can communicate privately

Terraform: AWS Side

# AWS: VGW and VPN connection to GCP

resource "aws_vpn_gateway" "gcp_peer" {
  vpc_id          = var.aws_vpc_id
  amazon_side_asn = 64512
  tags            = { Name = "vgw-to-gcp" }
}

resource "aws_customer_gateway" "gcp" {
  bgp_asn    = 64514            # GCP Cloud Router ASN
  ip_address = var.gcp_vpn_ip1  # GCP HA VPN gateway interface 0 public IP
  type       = "ipsec.1"
  tags       = { Name = "cgw-gcp-tunnel1" }
}

resource "aws_vpn_connection" "to_gcp" {
  customer_gateway_id = aws_customer_gateway.gcp.id
  vpn_gateway_id      = aws_vpn_gateway.gcp_peer.id
  type                = "ipsec.1"

  # Keep false for dynamic routing via BGP; set true only if the peer cannot run BGP
  static_routes_only = false

  tunnel1_inside_cidr = "169.254.100.0/30"
  tunnel2_inside_cidr = "169.254.100.4/30"
  tunnel1_preshared_key = var.xcloud_psk_1
  tunnel2_preshared_key = var.xcloud_psk_2

  tags = { Name = "vpn-aws-to-gcp" }
}

# Route table update: route GCP VPC traffic via VGW
resource "aws_vpn_gateway_route_propagation" "gcp_routes" {
  vpn_gateway_id = aws_vpn_gateway.gcp_peer.id
  route_table_id = var.aws_private_route_table_id
}

Terraform: GCP Side

# GCP: HA VPN to AWS VGW

resource "google_compute_external_vpn_gateway" "aws_peer" {
  name            = "ext-vpn-aws"
  redundancy_type = "TWO_IPS_REDUNDANCY"   # One AWS VPN connection = two tunnel endpoints; use FOUR_IPS_REDUNDANCY with two connections
  interface {
    id         = 0
    ip_address = aws_vpn_connection.to_gcp.tunnel1_address
  }
  interface {
    id         = 1
    ip_address = aws_vpn_connection.to_gcp.tunnel2_address
  }
}

resource "google_compute_vpn_tunnel" "to_aws_t1" {
  name                            = "tunnel-to-aws-1"
  region                          = "asia-southeast1"
  vpn_gateway                     = google_compute_ha_vpn_gateway.ha_vpn.id
  vpn_gateway_interface           = 0
  peer_external_gateway           = google_compute_external_vpn_gateway.aws_peer.id
  peer_external_gateway_interface = 0
  router                          = google_compute_router.vpn_router.id
  shared_secret                   = var.xcloud_psk_1
  ike_version                     = 2
}

resource "google_compute_router_peer" "aws_peer" {
  name            = "bgp-peer-aws"
  router          = google_compute_router.vpn_router.name
  region          = "asia-southeast1"
  interface       = google_compute_router_interface.to_aws.name
  peer_ip_address = "169.254.100.1"    # AWS tunnel1 inside IP (AWS side)
  peer_asn        = 64512              # AWS VGW ASN
}

DNS Resolution in Hybrid Environments

Consistent DNS resolution is critical in hybrid and multi-cloud environments. Workloads in cloud must resolve on-premises internal names, and on-premises hosts must resolve cloud-internal names.

AWS: Route 53 Resolver

# Terraform: Route 53 Resolver for hybrid DNS

# Inbound Resolver Endpoint: accept DNS queries FROM on-premises
# On-prem DNS forwards cloud-hosted zones (e.g., *.aws.internal) to these endpoint IPs
# (private IPs allocated in your VPC subnets)
resource "aws_route53_resolver_endpoint" "inbound" {
  name      = "r53-resolver-inbound"
  direction = "INBOUND"
  security_group_ids = [aws_security_group.r53_resolver.id]

  ip_address {
    subnet_id = var.resolver_subnet_az1
  }
  ip_address {
    subnet_id = var.resolver_subnet_az2
  }
}

# Outbound Resolver Endpoint: send queries FROM AWS to on-premises DNS
resource "aws_route53_resolver_endpoint" "outbound" {
  name      = "r53-resolver-outbound"
  direction = "OUTBOUND"
  security_group_ids = [aws_security_group.r53_resolver.id]

  ip_address {
    subnet_id = var.resolver_subnet_az1
  }
  ip_address {
    subnet_id = var.resolver_subnet_az2
  }
}

# Forwarding rule: queries for corp.example.com → on-premises DNS
resource "aws_route53_resolver_rule" "forward_corp" {
  domain_name          = "corp.example.com"
  rule_type            = "FORWARD"
  resolver_endpoint_id = aws_route53_resolver_endpoint.outbound.id

  target_ip {
    ip   = "10.0.0.53"    # Primary on-prem DNS server
    port = 53
  }
  target_ip {
    ip   = "10.0.0.54"    # Secondary on-prem DNS server
    port = 53
  }
}

# Associate the forwarding rule with a workload VPC so its queries use the rule
# (to share the rule with other accounts in the Organization, use AWS RAM — see the sketch below)
resource "aws_route53_resolver_rule_association" "vpc_association" {
  resolver_rule_id = aws_route53_resolver_rule.forward_corp.id
  vpc_id           = var.workload_vpc_id
}
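
One way to share the forwarding rule Organization-wide, as mentioned above, is an AWS RAM resource share. This is a minimal sketch; the var.organization_arn variable is an assumption.

# Hypothetical sketch: share the resolver rule org-wide via AWS RAM
resource "aws_ram_resource_share" "resolver_rules" {
  name                      = "shared-resolver-rules"
  allow_external_principals = false
}

resource "aws_ram_resource_association" "forward_corp" {
  resource_arn       = aws_route53_resolver_rule.forward_corp.arn
  resource_share_arn = aws_ram_resource_share.resolver_rules.arn
}

resource "aws_ram_principal_association" "org" {
  principal          = var.organization_arn   # AWS Organization ARN (assumed variable)
  resource_share_arn = aws_ram_resource_share.resolver_rules.arn
}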

GCP: Cloud DNS Forwarding Zones

# Terraform: GCP DNS forwarding for hybrid DNS

# Private DNS zone for GCP-internal names (resolved within VPC only)
resource "google_dns_managed_zone" "private_gcp" {
  name        = "private-gcp-zone"
  dns_name    = "gcp.internal."
  description = "Private zone for GCP internal hostnames"
  visibility  = "private"

  private_visibility_config {
    networks {
      network_url = var.shared_vpc_network_url
    }
  }
}

# Forwarding zone: resolve on-prem names by forwarding queries to on-prem DNS
resource "google_dns_managed_zone" "forward_to_onprem" {
  name        = "forward-corp-zone"
  dns_name    = "corp.example.com."
  description = "Forward corp.example.com queries to on-prem DNS"
  visibility  = "private"

  private_visibility_config {
    networks {
      network_url = var.shared_vpc_network_url
    }
  }

  forwarding_config {
    target_name_servers {
      ipv4_address    = "10.0.0.53"    # On-prem DNS via Interconnect/VPN
      forwarding_path = "private"      # Use private path (Interconnect/VPN), not internet
    }
    target_name_servers {
      ipv4_address    = "10.0.0.54"
      forwarding_path = "private"
    }
  }
}

# Outbound DNS policy: route all non-local queries through on-prem DNS
# (useful when on-prem DNS is authoritative for all internal names)
resource "google_dns_policy" "outbound_forwarding" {
  name                      = "outbound-to-onprem"
  enable_inbound_forwarding = true    # Allow on-prem to query GCP DNS

  alternative_name_server_config {
    target_name_servers {
      ipv4_address    = "10.0.0.53"
      forwarding_path = "private"
    }
  }

  networks {
    network_url = var.shared_vpc_network_url
  }
}

Anthos / Google Distributed Cloud

Anthos (now part of Google Distributed Cloud) extends GKE's management plane and tooling to clusters running outside of GCP — on-premises, on AWS, on Azure, or at the edge — providing a single operational model across all environments.

Anthos Attached Clusters

Register any CNCF-conformant Kubernetes cluster (EKS, AKS, on-prem RKE2, k3s) with the GKE Fleet. The cluster becomes visible in Google Cloud Console, and you can apply Fleet-wide policies, Config Sync, and Policy Controller without re-provisioning.

Use cases: Unified RBAC policy across AWS EKS and GKE; Config Sync (GitOps) across all clusters; single-pane security posture.

Google Distributed Cloud (Hosted)

Google-managed hardware (nodes) deployed in your data center and operated by Google. You get GKE Autopilot-equivalent experience on-premises. Google manages the hardware lifecycle, OS updates, and the Kubernetes control plane. Connectivity back to GCP is mandatory for control plane communication.

Use cases: Data sovereignty (data stays on-prem); ultra-low latency edge; air-gapped environments with periodic sync.

# Register an on-premises Kubernetes cluster as an Anthos Attached Cluster

# Step 1: Register the cluster and install the Connect Agent (run from a workstation with kubectl access to the cluster)
# gcloud container fleet memberships register my-onprem-cluster \
#   --context=kubernetes-admin@my-onprem-cluster \
#   --service-account-key-file=connect-agent-sa-key.json \
#   --project=my-gcp-project \
#   --location=global

# Step 2: Terraform — manage the Fleet membership declaratively (a membership created in Step 1 can be imported into this resource)
resource "google_gke_hub_membership" "on_prem" {
  membership_id = "on-prem-datacenter-cluster"
  project       = var.gcp_project_id

  endpoint {
    on_prem_cluster {
      resource_link = "//container.googleapis.com/projects/${var.gcp_project_id}/locations/global/memberships/on-prem-datacenter-cluster"
    }
  }
}

# Step 3: Enable Config Sync Fleet feature (applies GitOps to all member clusters)
resource "google_gke_hub_feature" "config_sync" {
  name     = "configmanagement"
  project  = var.gcp_project_id
  location = "global"
}

resource "google_gke_hub_feature_membership" "on_prem_config_sync" {
  project    = var.gcp_project_id
  location   = "global"
  feature    = google_gke_hub_feature.config_sync.name
  membership = google_gke_hub_membership.on_prem.membership_id

  configmanagement {
    config_sync {
      git {
        sync_repo   = "https://github.com/org/fleet-config"
        sync_branch = "main"
        policy_dir  = "clusters/on-prem-datacenter"
        secret_type = "token"
      }
    }
    policy_controller {
      enabled                    = true
      template_library_installed = true
    }
  }
}

AWS Outposts

AWS Outposts brings the same AWS hardware, APIs, and services to your on-premises data center. Workloads on Outposts use the same EC2, EKS, RDS, and ELB APIs as in the cloud — an Outpost is an extension of an Availability Zone in its parent AWS Region.

Outposts Rack

A full 42U rack of AWS hardware delivered and installed by AWS. Provides compute (EC2 instances), storage (EBS, S3 on Outposts), managed databases (RDS), and EKS nodes. Available in fixed configurations from 1 to 96 EC2 hosts. Requires a network uplink (the service link) back to the parent AWS Region, via Direct Connect or the public internet, for control plane communication.

Use cases: Large on-premises workloads requiring AWS APIs; local data processing before cloud upload; latency-sensitive manufacturing or trading systems; compliance requiring physical location control.

Outposts Server

A 1U or 2U rack-mount server for deployments where a full rack is impractical (branch office, factory floor, store). Supports EC2 instances and ECS containers. More limited instance types than the full rack, but provides consistent AWS APIs in space-constrained locations. Supports disconnected mode for brief periods if the uplink to AWS fails.

Use cases: Retail point-of-presence with local compute; remote office running AWS workloads; edge inference with SageMaker Edge Manager; factory automation with IoT data local processing.

Outposts networking requirement: An Outpost must maintain reliable connectivity to its parent AWS Region. The service link carries control plane traffic and traffic between the Outpost and the parent VPC, and can run over the public internet or Direct Connect. The local gateway (LGW) handles local traffic between Outpost subnets and on-premises networks.
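
To make the AZ-extension model concrete, the sketch below creates a subnet pinned to an Outpost and associates the VPC with the Outpost's local gateway route table so on-premises traffic can use the LGW. It assumes an existing Outpost and local gateway route table; var.outpost_arn, var.lgw_route_table_id, and the AZ are hypothetical values.

# Hypothetical sketch: subnet on an Outpost plus local gateway association
resource "aws_subnet" "outpost" {
  vpc_id            = var.vpc_id
  cidr_block        = "10.10.50.0/24"
  availability_zone = "ap-southeast-1a"        # AZ the Outpost is anchored to (assumed)
  outpost_arn       = var.outpost_arn          # ARN of the installed Outpost (assumed variable)
  tags              = { Name = "subnet-outpost-app" }
}

# Associate the VPC with the Outpost's local gateway route table
# so Outpost subnets can reach on-premises networks via the LGW
resource "aws_ec2_local_gateway_route_table_vpc_association" "outpost" {
  local_gateway_route_table_id = var.lgw_route_table_id   # Assumed variable
  vpc_id                       = var.vpc_id
}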

Network Security in Hybrid Environments

Extending the network perimeter to on-premises and across cloud providers introduces security challenges that require layered controls at each boundary.

Firewall Rules and Micro-Segmentation

# AWS: Security Groups for hybrid workloads
# Best practice: reference on-prem CIDR explicitly; avoid 0.0.0.0/0 for any port

resource "aws_security_group" "app_tier" {
  name        = "sg-app-tier-${var.env}"
  description = "App tier: allow from on-prem and load balancer only"
  vpc_id      = var.vpc_id

  # Allow HTTPS from on-premises (via Direct Connect / VPN)
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]   # On-prem supernet
    description = "HTTPS from on-premises network"
  }

  # Allow app traffic from load balancer security group only (not CIDR)
  ingress {
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [var.alb_security_group_id]
    description     = "App port from ALB only"
  }

  # Allow egress to on-prem database only
  egress {
    from_port   = 1521
    to_port     = 1521
    protocol    = "tcp"
    cidr_blocks = ["10.0.10.0/24"]   # On-prem Oracle subnet
    description = "Oracle DB to on-premises"
  }

  # No other egress rules are defined. Terraform removes AWS's default allow-all
  # egress rule on managed security groups, so only the egress declared here is allowed.

  tags = { Name = "sg-app-tier-${var.env}", ManagedBy = "terraform" }
}

# GCP: VPC Firewall Rules with target tags (micro-segmentation)
resource "google_compute_firewall" "allow_app_from_onprem" {
  name    = "allow-app-from-onprem"
  network = var.shared_vpc_network

  allow {
    protocol = "tcp"
    ports    = ["443", "8080"]
  }

  source_ranges = ["10.0.0.0/8"]     # On-prem via Interconnect
  target_tags   = ["app-tier"]       # Only VMs tagged "app-tier" receive this rule
  priority      = 1000
  description   = "Allow app traffic from on-premises via Interconnect"
}

resource "google_compute_firewall" "deny_all_ingress" {
  name     = "deny-all-ingress-baseline"
  network  = var.shared_vpc_network
  priority = 65534                   # Numerically highest = lowest priority; evaluated after specific allow rules

  deny {
    protocol = "all"
  }

  source_ranges = ["0.0.0.0/0"]
  description   = "Baseline deny-all ingress; explicit allow rules override this"
}

Shared VPC Security Boundaries

# GCP Shared VPC security model:
#
# Host Project (owns VPC, subnets, firewall rules)
#   └── Firewall rules applied at VPC level — apply to ALL service projects
#       Service projects cannot override host project firewall rules
#       Service project workloads can only use subnets they are granted access to
#
# This enforces network security centrally:
# - The Network/Security team controls firewall rules in the host project
# - Product teams cannot create permissive rules in their service projects
# - VPC Service Controls add a second layer: even if network access exists,
#   data API calls outside the perimeter are denied

# AWS: equivalent model with Network Firewall in hub account
resource "aws_networkfirewall_firewall" "hub" {
  name                = "network-firewall-hub"
  firewall_policy_arn = aws_networkfirewall_firewall_policy.hub.arn
  vpc_id              = var.hub_vpc_id

  subnet_mapping {
    subnet_id = var.firewall_subnet_az1
  }
  subnet_mapping {
    subnet_id = var.firewall_subnet_az2
  }
}

resource "aws_networkfirewall_firewall_policy" "hub" {
  name = "hub-firewall-policy"

  firewall_policy {
    stateless_default_actions          = ["aws:forward_to_sfe"]
    stateless_fragment_default_actions = ["aws:forward_to_sfe"]

    stateful_rule_group_reference {
      resource_arn = aws_networkfirewall_rule_group.block_malicious.arn
    }
  }
}

resource "aws_networkfirewall_rule_group" "block_malicious" {
  name     = "block-malicious-domains"
  type     = "STATEFUL"
  capacity = 100

  rule_group {
    rules_source {
      rules_source_list {
        generated_rules_type = "DENYLIST"
        target_types         = ["HTTP_HOST", "TLS_SNI"]
        targets              = [
          ".malware-domain.com",
          ".phishing-domain.net"
        ]
      }
    }
  }
}

Zero Trust in Hybrid Networks: Perimeter firewalls alone are insufficient. Complement network controls with workload identity (mTLS via a service mesh, AWS IAM-based authentication), continuous authorization (BeyondCorp / AWS Verified Access), and runtime threat detection (GuardDuty, Security Command Center), so that traffic is verified at the application layer even after it has crossed the network perimeter.
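
As one concrete example of the runtime-detection layer, the sketch below enables a GuardDuty detector with automatic enrollment of new Organization accounts. It assumes it runs in the GuardDuty delegated administrator account; names and the publishing frequency are illustrative.

# Hypothetical sketch: GuardDuty detector with org-wide auto-enable
resource "aws_guardduty_detector" "main" {
  enable                       = true
  finding_publishing_frequency = "FIFTEEN_MINUTES"
}

resource "aws_guardduty_organization_configuration" "all_accounts" {
  detector_id                      = aws_guardduty_detector.main.id
  auto_enable_organization_members = "ALL"   # New member accounts are enrolled automatically
}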