How to configure networking in Google Kubernetes Engine?
Sunday, 16 March 2025
Google Kubernetes Engine (GKE) provides a robust and flexible networking infrastructure to support diverse application needs. This guide provides a comprehensive overview of configuring networking within GKE, covering everything from basic cluster setup to advanced security and traffic management scenarios.
1. Understanding GKE Networking Fundamentals
At its core, GKE networking relies on Google Cloud's Virtual Private Cloud (VPC) network. Each GKE cluster resides within a VPC network, benefiting from its security, isolation, and connectivity features. Understanding these building blocks is essential:
- Virtual Private Cloud (VPC): A logically isolated section of the Google Cloud network where your GKE cluster and related resources reside. VPCs provide global, regional, and zonal resources and services, with private networking across those zones.
- VPC Subnets: Divisions within your VPC network, defining IP address ranges for resources within specific regions. GKE clusters require at least one subnet for node IP addresses, plus secondary ranges for pod IP addresses (assigned via the Container Network Interface, CNI) and service IP addresses.
- Routes: Define the paths network traffic takes within your VPC network and between your VPC network and external networks. GKE automatically manages routes for internal cluster communication.
- Firewall Rules: Control network traffic based on source, destination, protocol, and port. GKE automatically manages certain firewall rules for cluster health and functionality, but you'll often define additional rules for application security.
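As a hedged illustration of an application-level rule you might add yourself, the following allows inbound HTTPS to nodes carrying a particular network tag. The network name, tag, and source range are placeholders, not values from this guide:

```shell
# Hypothetical example: allow inbound HTTPS to GKE nodes tagged
# "gke-app-nodes" on a VPC network named "my-vpc".
# All names and ranges are placeholders for illustration.
gcloud compute firewall-rules create allow-https-to-gke-nodes \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=gke-app-nodes
```

In practice you would narrow --source-ranges to known client networks rather than 0.0.0.0/0.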
2. GKE Cluster Networking Models
When creating a GKE cluster, you choose a networking model that determines how pods and services communicate within the cluster.
2.1. VPC-native Clusters (Recommended)
VPC-native clusters are the recommended networking model for most GKE deployments. In this model, pods receive IP addresses directly from a VPC subnet. Key benefits include:
- Simplified Networking: Pod IPs are routable within the VPC without NAT, simplifying communication with other VPC resources and on-premises networks.
- Scalability: Larger pod IP address space is available since you're utilizing a full VPC subnet range.
- Improved Security: VPC Firewall Rules can be applied directly to pod IPs, enhancing security control.
- Enhanced Observability: Easier troubleshooting and monitoring of pod-to-pod and pod-to-service traffic due to the routable IPs.
Creating a VPC-native cluster:
# --enable-ip-alias makes the cluster VPC-native
gcloud container clusters create CLUSTER_NAME \
--network=VPC_NETWORK \
--subnetwork=SUBNETWORK \
--cluster-ipv4-cidr=CLUSTER_CIDR \
--services-ipv4-cidr=SERVICES_CIDR \
--enable-ip-alias \
--zone=ZONE
Replace placeholders (CLUSTER_NAME, VPC_NETWORK, SUBNETWORK, CLUSTER_CIDR, SERVICES_CIDR, ZONE) with your desired values.
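To confirm the cluster came up VPC-native, you can inspect its IP allocation policy. This is a sketch: the field path is assumed from the GKE API, so verify it against current gcloud output:

```shell
# Should print "True" for a VPC-native cluster; field path assumed from the GKE API.
gcloud container clusters describe CLUSTER_NAME \
    --zone=ZONE \
    --format="value(ipAllocationPolicy.useIpAliases)"
```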
2.2. Routes-based Clusters (Legacy)
Routes-based clusters were the original GKE networking model. In this model, pods receive IP addresses from a cluster-internal IP range. Google Cloud routes are created to forward traffic to these pod IPs via the GKE nodes.
While still supported, routes-based clusters are generally discouraged in favor of VPC-native clusters due to their limitations in scalability, security, and troubleshooting. They may still be used in some legacy scenarios or when migrating existing clusters.
3. Network Policies
Network Policies provide fine-grained control over traffic between pods within your cluster. They allow you to define rules that restrict communication based on pod labels, namespaces, or even IP addresses (when using Cilium or Calico CNI implementations). This enables you to create a "zero-trust" network within your cluster, enhancing security and preventing unauthorized access between services.
Enabling Network Policy:
Network policy enforcement depends on the dataplane your cluster uses. GKE supports the following:
- Calico: GKE's legacy dataplane enforces network policies via Calico when you enable network policy on the cluster.
- Cilium: GKE Dataplane V2 is built on Cilium and enforces network policies natively, with no separate add-on to install.
Note that NetworkPolicy objects have no effect unless one of these enforcement options is enabled.
For example, to enable Calico-based network policy enforcement during cluster creation:
gcloud container clusters create CLUSTER_NAME \
--enable-network-policy \
... other parameters ...
Example Network Policy Manifest (YAML):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
This example policy allows pods labeled app: frontend to communicate with pods labeled app: backend within the default namespace.
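Allow-style policies like this are commonly paired with a default-deny rule, so that only explicitly permitted traffic flows. A minimal sketch using standard Kubernetes NetworkPolicy semantics (applied per namespace):

```yaml
# Deny all ingress to every pod in the namespace; allow-style policies
# then whitelist specific traffic on top of this baseline.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}   # empty selector matches all pods in the namespace
  policyTypes:
  - Ingress         # no ingress rules listed, so all ingress is denied
```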
4. Kubernetes Services
Kubernetes Services abstract the underlying pods, providing stable network endpoints for groups of pods within your cluster.
4.1. Service Types
- ClusterIP: (Default) Exposes the service on a cluster-internal IP. Only reachable from within the cluster.
- NodePort: Exposes the service on each Node's IP at a static port (the NodePort). Accessible from outside the cluster, typically combined with a LoadBalancer for production environments.
- LoadBalancer: Provisions a Google Cloud Load Balancer to expose the service externally. Automatically creates the necessary resources (e.g., forwarding rules, firewall rules) for external access.
- ExternalName: Maps the service to an external DNS name. Useful for accessing services outside the cluster.
Example Service Manifest (YAML - LoadBalancer):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer
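Assuming the manifest above is saved as my-service.yaml (the filename is illustrative), it can be applied and observed like this; the external IP typically shows as pending until Google Cloud finishes provisioning the load balancer:

```shell
kubectl apply -f my-service.yaml
# Watch until EXTERNAL-IP changes from <pending> to a real address.
kubectl get service my-service --watch
```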
5. Ingress
Ingress provides HTTP(S) routing from outside the cluster to Services within it. It acts as a reverse proxy, routing traffic to the appropriate Service based on hostname and path.
5.1. GKE Ingress Controller
GKE provides an integrated Ingress controller based on the Google Cloud Load Balancer. This controller automatically configures a load balancer based on your Ingress resources, simplifying external access to your applications.
5.2. Example Ingress Manifest (YAML):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "my-static-ip" # optional; requires a reserved static IP in your project
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
This example Ingress routes traffic for myapp.example.com to the service named my-service on port 80.
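The static-IP annotation in the example assumes an address already reserved in your project. One way to reserve it, with the name matching the annotation value:

```shell
# Reserve a global static IP named "my-static-ip"; GKE's external
# HTTP(S) load balancer for Ingress requires a global address.
gcloud compute addresses create my-static-ip --global
```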
6. DNS Configuration
GKE provides internal DNS resolution through kube-dns by default, with Cloud DNS for GKE available as a managed alternative. Cluster DNS is configured automatically when you create a cluster, resolving Kubernetes Service names for services and pods to their associated ClusterIPs.
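Cluster DNS names follow the standard Kubernetes pattern SERVICE.NAMESPACE.svc.cluster.local. As a quick check from inside the cluster (the pod name and image are illustrative):

```shell
# Launch a throwaway pod and resolve a Service by its cluster DNS name.
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
    nslookup my-service.default.svc.cluster.local
```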
7. Advanced Networking Configuration
7.1. Shared VPC
Shared VPC allows multiple GKE clusters to share the same VPC network. This provides centralized network administration, reduces complexity, and ensures consistent network policies across multiple teams or applications.
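When attaching a cluster to a Shared VPC, the network and subnetwork are referenced by their full resource paths in the host project. A sketch with placeholder project, network, and subnet names:

```shell
# HOST_PROJECT owns the Shared VPC; the cluster itself is created
# in a service project attached to that host project.
gcloud container clusters create CLUSTER_NAME \
    --enable-ip-alias \
    --network=projects/HOST_PROJECT/global/networks/SHARED_NET \
    --subnetwork=projects/HOST_PROJECT/regions/REGION/subnetworks/SHARED_SUBNET \
    --zone=ZONE
```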
7.2. Private Service Connect
Private Service Connect (PSC) allows your GKE clusters to access services exposed through PSC, without exposing those services to the public internet. This enhances security and simplifies network management when integrating with services across different projects or organizations.
7.3. Container Network Interface (CNI) Plugins
While GKE's default networking generally suffices, custom CNI plugins, like Cilium, Calico or Antrea offer added functionalities, particularly enhanced network policy enforcement, network observability, and integration with advanced network features.
Switching to a custom CNI often requires re-creating your cluster. Ensure careful planning and testing when opting for this route.
7.4. VPC Service Controls (VPC-SC)
VPC Service Controls further restrict access to Google Cloud services from your GKE cluster based on the originating project or VPC network. This provides a perimeter-based defense, preventing data exfiltration and mitigating risks associated with insider threats or compromised credentials. With VPC-SC, you can prevent unintended calls to Google Cloud services from projects outside the defined perimeter.
8. Monitoring and Troubleshooting
Effective networking relies on consistent monitoring and proper troubleshooting:
- Cloud Logging: Review Kubernetes events and audit logs to identify potential issues.
- Cloud Monitoring: Monitor network traffic metrics (e.g., latency, packet loss) to identify performance bottlenecks.
- kubectl commands: Utilize commands like kubectl get services, kubectl describe service, kubectl get pods, and kubectl logs for debugging.
- tcpdump: Capture traffic inside pods to troubleshoot connectivity.
- ping and traceroute: Validate reachability between endpoints.
9. Security Best Practices
- Apply Network Policies: Implement network policies to restrict pod-to-pod communication.
- Use VPC Firewall Rules: Control ingress and egress traffic to your cluster.
- Enable VPC Service Controls: Protect your GKE environment from data exfiltration and unauthorized access.
- Regularly Audit Your Configuration: Periodically review and update your network configurations to ensure they are aligned with your security requirements.
- Principle of Least Privilege: For firewall rules and IAM policies, grant only the minimum access required, reducing the impact of accidental or malicious misconfiguration of services.
Conclusion
Configuring networking effectively in GKE is critical for building secure, scalable, and resilient applications. By understanding the core concepts, choosing the appropriate networking model, and leveraging features like Network Policies, Ingress, and VPC Service Controls, you can optimize your GKE environment for your specific needs. Keep learning, and test configuration changes frequently in non-production environments first. Regularly consulting the latest Google Cloud documentation is highly recommended to stay up to date with recent GKE updates and changes.