Cloudflare → GCP → GKE Ingress Architecture
This document describes how ingress traffic flows from the public internet through Cloudflare into our GCP-based Kubernetes environments. It details the components, their responsibilities, and how they are connected in practice, and reflects the architecture as it works today.
All external traffic to customer applications follows a single, consistent path:
Client → Cloudflare
Cloudflare provides DNS, CDN, TLS termination, and the first security layer.
The client connects securely to the Cloudflare edge using certificates issued by Google Trust Services.
Cloudflare → GCP Network Load Balancer (NLB)
Cloudflare opens a new, fully encrypted HTTPS connection to the origin.
TLS mode is Full (Strict), so Cloudflare validates the origin certificate.
GCP NLB → GKE Nodes (NodePort)
NLB performs pure L4 forwarding (TCP/443) with no TLS termination.
Forwarding rules point to GKE node instance groups.
NodePort → nginx-ingress Controller
nginx-ingress pods listen behind a NodePort and receive traffic forwarded by the NLB.
nginx-ingress terminates TLS using Let’s Encrypt certificates.
nginx-ingress → Backend Services
Based on hostname and path rules, requests are routed to the appropriate Kubernetes Services and then on to the Pods.
This architecture ensures TLS is present on both legs:
Client → Cloudflare (edge)
Cloudflare → nginx-ingress (origin)
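As a concrete illustration of this path, the minimal Python sketch below sends one request through the edge; the hostname api.customer.com comes from the routing examples later in this document, the /healthz path is an illustrative assumption, and the cf-ray and server headers are ones Cloudflare typically adds to proxied responses.

```python
# Minimal end-to-end probe of the path above: a request to a proxied hostname
# should carry Cloudflare headers on the edge leg while the application answers
# from the origin. The hostname and /healthz path are illustrative placeholders.
import requests

resp = requests.get("https://api.customer.com/healthz", timeout=10)

print("status :", resp.status_code)
# cf-ray is added by Cloudflare to proxied responses; if it is missing, the
# record is probably not orange-clouded.
print("cf-ray :", resp.headers.get("cf-ray"))
print("server :", resp.headers.get("server"))  # typically "cloudflare" on the edge leg
```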
Cloudflare is the public entry point for all environments.
DNS: All domains are hosted on Cloudflare; A/AAAA records point to Cloudflare proxies.
Proxying: Orange-cloud (proxied) mode ensures traffic always passes through Cloudflare.
TLS Termination: Public TLS is terminated at Cloudflare using Cloudflare-managed edge certificates.
Re-Encryption: Cloudflare re-establishes TLS towards the GCP NLB endpoint.
Security Controls: WAF, bot filtering, geo-blocking, caching rules, rate-limiting (described in separate WAF doc).
Cloudflare edge certificates use Google Trust Services as CA.
Terraform provisions origin certificates (Let’s Encrypt) inside Kubernetes; Cloudflare validates these in Full Strict mode.
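For illustration, here is a hedged sketch of how the proxied (orange-cloud) status of the zone's DNS records could be checked through the public Cloudflare v4 API; the zone ID and API token are placeholders supplied via environment variables, not values from our setup.

```python
# Sketch: list the zone's DNS records through the public Cloudflare v4 API and
# flag anything that is not proxied (grey-clouded), since proxying is what keeps
# traffic on the Cloudflare edge. CF_ZONE_ID and CF_API_TOKEN are placeholders
# read from the environment.
import os

import requests

zone_id = os.environ["CF_ZONE_ID"]
headers = {"Authorization": f"Bearer {os.environ['CF_API_TOKEN']}"}

resp = requests.get(
    f"https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records",
    headers=headers,
    params={"type": "A", "per_page": 100},
    timeout=10,
)
resp.raise_for_status()

for record in resp.json()["result"]:
    status = "proxied" if record["proxied"] else "DNS only (bypasses Cloudflare!)"
    print(f"{record['name']:<40} {record['content']:<20} {status}")
```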
The GCP NLB is responsible for forwarding encrypted traffic from Cloudflare to the Kubernetes cluster.
Type: External TCP Network Load Balancer
TLS: No termination; carries encrypted traffic end-to-end.
Forwarding Rule: TCP/443
Backend: GKE node instance groups
Health Checks: Performed at TCP level to verify node availability
This design fits well here because it:
Works with Cloudflare proxying over plain TCP.
Supports a NodePort Service backend for ingress controllers.
Keeps complexity minimal (L4 only) and remains reliable for encrypted pass-through.
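As a rough illustration of the TCP-level health check mentioned above, the sketch below performs a plain TCP connect to a node's NodePort, much like the NLB does; the node IP and port are placeholders, not real values from our environment.

```python
# Rough equivalent of the NLB's TCP health check: a plain TCP connect to a
# node's NodePort. No TLS handshake is attempted, because the NLB never
# terminates TLS. NODE_IP and NODE_PORT are placeholders, not real values.
import socket

NODE_IP = "10.128.0.12"  # placeholder GKE node address
NODE_PORT = 30443        # placeholder NodePort of the ingress controller

try:
    with socket.create_connection((NODE_IP, NODE_PORT), timeout=3):
        print(f"{NODE_IP}:{NODE_PORT} accepts TCP connections (node would be marked healthy)")
except OSError as exc:
    print(f"{NODE_IP}:{NODE_PORT} unreachable: {exc}")
```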
Each customer operates in a dedicated GKE cluster.
All product workloads reside in a shared namespace.
Cluster resources include deployments, services, ingress objects, configmaps, secrets, cronjobs, etc.
Ingress routing is fully centralized through the nginx-ingress controller.
Nodes run a CNI plugin supporting standard Kubernetes networking.
The NodePort range is used by nginx-ingress to expose its listeners to the NLB.
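For reference, a small sketch using the Kubernetes Python client to read the controller Service and print the NodePorts the NLB forwards to; the ingress-nginx namespace and Service name are assumed installation defaults, not confirmed values from our clusters.

```python
# Sketch: read the nginx-ingress controller Service and print the NodePorts the
# NLB forwards to. The "ingress-nginx" namespace and Service name are assumed
# defaults and depend on how the controller was installed.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside the cluster
core = client.CoreV1Api()

svc = core.read_namespaced_service(
    name="ingress-nginx-controller",  # assumed install name
    namespace="ingress-nginx",        # assumed namespace
)

for port in svc.spec.ports:
    print(f"{port.name}: port={port.port} targetPort={port.target_port} nodePort={port.node_port}")
```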
The nginx-ingress controller is the main routing component inside the cluster.
Origin TLS termination: nginx-ingress holds Let’s Encrypt certificates via cert-manager.
Routing: Based on:
Hostnames (e.g. api.customer.com, app.customer.com)
Path prefixes (/api, /admin, etc.)
Header propagation: Injects X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Host for applications (illustrated in the sketch below).
Proxying: Balances traffic across backend pods.
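To make the header propagation concrete, here is a minimal, purely illustrative backend that reads the forwarded headers; it is not one of our services, just standard-library Python, and the port is arbitrary.

```python
# Minimal, purely illustrative backend: read the headers nginx-ingress injects
# so the application sees the original client IP, scheme, and host instead of
# the ingress pod's. Standard library only; port 8080 is arbitrary.
from http.server import BaseHTTPRequestHandler, HTTPServer


class ForwardedHeaderEcho(BaseHTTPRequestHandler):
    def do_GET(self):
        client_ip = self.headers.get("X-Forwarded-For", self.client_address[0])
        scheme = self.headers.get("X-Forwarded-Proto", "http")
        host = self.headers.get("X-Forwarded-Host", self.headers.get("Host", ""))

        body = f"client={client_ip} scheme={scheme} host={host}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ForwardedHeaderEcho).serve_forever()
```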
cert-manager retrieves Let’s Encrypt certificates via HTTP-01 or DNS-01 (depending on setup).
Certificates stored as Kubernetes Secrets.
nginx-ingress automatically reloads configuration when Secrets change.
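As a hedged illustration of this flow, the sketch below creates a cert-manager Certificate object (cert-manager.io/v1) through the Kubernetes Python client; the issuer name, namespace, hostname, and Secret name are assumptions, not our actual values.

```python
# Sketch: request an origin certificate by creating a cert-manager Certificate
# (cert-manager.io/v1). cert-manager completes the ACME challenge and writes the
# signed certificate into the named Secret, which nginx-ingress then serves.
# Issuer name, namespace, hostname, and Secret name are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

certificate = {
    "apiVersion": "cert-manager.io/v1",
    "kind": "Certificate",
    "metadata": {"name": "api-customer-com", "namespace": "default"},
    "spec": {
        "secretName": "api-customer-com-tls",  # Secret later referenced by the Ingress
        "dnsNames": ["api.customer.com"],
        "issuerRef": {"name": "letsencrypt-prod", "kind": "ClusterIssuer"},  # assumed issuer
    },
}

custom.create_namespaced_custom_object(
    group="cert-manager.io",
    version="v1",
    namespace="default",
    plural="certificates",
    body=certificate,
)
```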
nginx-ingress also manages request-level behavior (see the annotation sketch after this list):
Request size limits
Timeouts
Redirects and rewrites
CORS settings
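A sketch of how such settings are commonly expressed as nginx-ingress annotations; the keys are standard nginx.ingress.kubernetes.io annotations, but the values shown are examples rather than our production configuration.

```python
# Illustrative nginx-ingress annotations covering the settings above. The keys
# are standard nginx.ingress.kubernetes.io annotations; the values are examples,
# not our production configuration.
ingress_annotations = {
    # request size limit
    "nginx.ingress.kubernetes.io/proxy-body-size": "16m",
    # upstream timeouts (seconds)
    "nginx.ingress.kubernetes.io/proxy-read-timeout": "60",
    "nginx.ingress.kubernetes.io/proxy-send-timeout": "60",
    # redirects and rewrites
    "nginx.ingress.kubernetes.io/ssl-redirect": "true",
    "nginx.ingress.kubernetes.io/rewrite-target": "/",
    # CORS
    "nginx.ingress.kubernetes.io/enable-cors": "true",
    "nginx.ingress.kubernetes.io/cors-allow-origin": "https://app.customer.com",
}
```

In practice these keys live under metadata.annotations of the relevant Ingress object, such as the one sketched later in this document.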
TLS is implemented in two layers:
Client → Cloudflare (edge):
Certificate belongs to Cloudflare (Google Trust Services CA).
Modern TLS, fast termination across global edge locations.
Cloudflare → nginx-ingress (origin):
TLS fully re-encrypted.
nginx-ingress uses Let's Encrypt certificates.
Cloudflare validates these certificates because Full (Strict) mode is enabled.
Result: encryption is maintained end-to-end without exposing direct node IPs.
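A hedged way to see both layers at once is to compare the certificate presented on each leg. The sketch below assumes the NLB address accepts direct connections from your network; the hostname and NLB IP are placeholders.

```python
# Sketch: compare the certificate presented on each leg. Resolving the public
# hostname goes through Cloudflare and shows the edge certificate; connecting
# straight to the NLB address with the same SNI shows the certificate served by
# nginx-ingress. HOSTNAME and NLB_IP are placeholders.
import socket
import ssl

HOSTNAME = "api.customer.com"  # illustrative hostname
NLB_IP = "203.0.113.10"        # placeholder NLB address (documentation range)


def issuer_org(address: str, server_hostname: str) -> str:
    """Return the organizationName of the issuer of the served certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((address, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=server_hostname) as tls:
            cert = tls.getpeercert()
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName", "unknown")


print("edge leg  :", issuer_org(HOSTNAME, HOSTNAME))  # expected: Google Trust Services
print("origin leg:", issuer_org(NLB_IP, HOSTNAME))    # expected: Let's Encrypt
```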
Routing is based on Kubernetes Ingress resources.
Hostname-based: Each application uses its own domain or subdomain.
Path-based: Internal APIs sharing the same domain are separated by path.
api.customer.com → / → api-service
app.customer.com → / → frontend-service
backend.customer.com → /api → backend-service
nginx-ingress reads all Ingress resources.
It generates an aggregated NGINX configuration.
Any update to an Ingress resource triggers a configuration reload.
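For illustration, here is a sketch of the kind of Ingress resource that produces the first routing-table entry above, written with the Kubernetes Python client; the namespace, Service port, TLS Secret name, and issuer annotation are assumptions rather than our actual values.

```python
# Sketch: an Ingress matching the first routing-table entry above
# (api.customer.com, "/", api-service). The namespace, Service port, TLS Secret
# name, and issuer annotation are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="api",
        namespace="default",  # assumed shared namespace name
        annotations={"cert-manager.io/cluster-issuer": "letsencrypt-prod"},
    ),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",
        tls=[client.V1IngressTLS(hosts=["api.customer.com"],
                                 secret_name="api-customer-com-tls")],
        rules=[client.V1IngressRule(
            host="api.customer.com",
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/",
                    path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="api-service",
                            port=client.V1ServiceBackendPort(number=80),
                        )
                    ),
                ),
            ]),
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```

Creating or updating such a resource is exactly the kind of change that triggers the configuration reload described above.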
The security posture is layered and defensive.
Cloudflare:
Primary protection layer
TLS termination
WAF & bot protection (separate doc)
TLS Re-Encryption to Origin:
Full (Strict) mode ensures trusted certificates
GCP NLB:
L4 forwarding only (no attack surface at L7)
nginx-ingress:
Proper header sanitization
Enforces application routing rules
Kubernetes Network Controls:
Standard CNI-level isolation
Each customer gets its own GKE cluster.
This simplifies responsibility boundaries and reduces cross-customer risk.
A shared namespace is currently used for simplicity.
Per-service namespace separation can be introduced if needed.
Terraform provisions infrastructure.
Kubernetes manifests deployed via CI/CD or GitOps (depending on customer).
Cloudflare settings managed programmatically or manually, depending on domain.
This document intentionally excludes:
Detailed WAF configuration (refer to Cloudflare WAF & Edge Security Architecture)
Rate-limiting and caching policies
Bot mitigation configuration
Incident response procedures
These topics are handled in dedicated documentation.