Automatically created firewall rules


This page describes the ingress allow VPC firewall rules that Google Kubernetes Engine (GKE) creates automatically in Google Cloud.

Applicable firewall rules and egress firewall rules

Ingress allow firewall rules created by GKE aren't the only firewall rules that apply to nodes in a cluster. The complete set of applicable ingress and egress firewall rules is defined by rules in hierarchical firewall policies, global network firewall policies, regional network firewall policies, and other VPC firewall rules.

Best practice:

Plan and design the configuration for your cluster, workloads, and Services with your organization's network administrators and security engineers, and understand the firewall policy and rule evaluation order so that you know which firewall rules take precedence.

GKE creates only ingress VPC firewall rules because it relies on the implied allow egress firewall rule, which has the lowest priority.

If you've configured egress deny firewall rules in your cluster's VPC network, you might have to create egress allow rules to permit communication between nodes, Pods, and the cluster's control plane. For example, if you've created an egress deny firewall rule for all protocols and ports and all destination IP addresses, you must create egress allow firewall rules in addition to the ingress rules that GKE creates automatically. Connectivity to control plane endpoints always uses TCP destination port 443, but connectivity among nodes and Pods of the cluster can use any protocol and destination port.
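For example, a minimal sketch of such egress allow rules with the gcloud CLI might look like the following. The network name, control plane range, node CIDR, and Pod CIDR are placeholder assumptions; substitute your cluster's values, and give these rules a higher priority (a lower priority number) than your egress deny rule:

    # Allow egress to the cluster control plane (always TCP destination port 443).
    gcloud compute firewall-rules create allow-egress-control-plane \
        --network=NETWORK_NAME \
        --direction=EGRESS \
        --action=ALLOW \
        --rules=tcp:443 \
        --destination-ranges=CONTROL_PLANE_RANGE \
        --priority=900

    # Allow egress among nodes and Pods (any protocol and destination port).
    gcloud compute firewall-rules create allow-egress-intra-cluster \
        --network=NETWORK_NAME \
        --direction=EGRESS \
        --action=ALLOW \
        --rules=all \
        --destination-ranges=NODE_CIDR,POD_CIDR \
        --priority=900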

The following tools are useful for determining which firewall rules allow or deny traffic:

  • Connectivity Tests
  • Firewall Rules Logging
  • Firewall Insights
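You can also list the firewall rules that GKE created for a cluster directly with the gcloud CLI. The following sketch assumes the naming conventions described in the next section; CLUSTER_NAME is a placeholder, and Service rules use the k8s- and k8s2- prefixes instead of gke-:

    # List GKE-created cluster rules by name prefix.
    gcloud compute firewall-rules list \
        --filter="name~^gke-CLUSTER_NAME" \
        --format="table(name, direction, priority, sourceRanges.list(), targetTags.list())"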

Firewall rules

GKE creates firewall rules automatically when creating the following resources:

  • GKE clusters
  • GKE Services
  • GKE Gateways and HTTPRoutes
  • GKE Ingresses

Unless otherwise specified, the priority of all automatically created firewall rules is 1000, the default value for firewall rules. If you want more control over firewall behavior, you can create firewall rules with a higher priority, which means a lower priority number. Firewall rules with a higher priority are applied before the automatically created firewall rules.
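For example, the following sketch (with a placeholder rule name, network, and node tag) creates a deny rule that is evaluated before the automatically created rules:

    # Priority 900 is a lower number, and therefore a higher priority,
    # than the default priority 1000 of the automatically created rules.
    gcloud compute firewall-rules create example-override-rule \
        --network=NETWORK_NAME \
        --direction=INGRESS \
        --action=DENY \
        --rules=tcp:22 \
        --source-ranges=0.0.0.0/0 \
        --target-tags=NODE_TAG \
        --priority=900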

GKE cluster firewall rules

GKE creates the following ingress firewall rules when creating a cluster:

gke-[cluster-name]-[cluster-hash]-master
  • Purpose: For Autopilot and Standard clusters that rely on VPC Network Peering for control plane private endpoint connectivity. Permits the control plane to access the kubelet and metrics-server on cluster nodes.
  • Source: Control plane IP address range (/28)
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP: 443 (metrics-server) and TCP: 10250 (kubelet)
  • Priority: 1000

gke-[cluster-name]-[cluster-hash]-vms
  • Purpose: Used for intra-cluster communication required by the Kubernetes networking model. Allows software running on nodes to send packets, with sources matching node IP addresses, to destination Pod IP and node IP addresses in the cluster. For example, traffic allowed by this rule includes:
      • Packets sent from system daemons, such as the kubelet, to node and Pod IP address destinations of the cluster.
      • Packets sent from software running in Pods with hostNetwork: true to node and Pod IP address destinations of the cluster.
  • Source: The node IP address range or a superset of it:
      • For auto mode VPC networks, GKE uses the 10.128.0.0/9 CIDR because that range includes all current and future primary IPv4 address ranges of the automatically created subnets.
      • For custom mode VPC networks, GKE uses the primary IPv4 address range of the cluster's subnet.
    GKE does not update the source IPv4 range of this firewall rule if you expand the primary IPv4 range of the cluster's subnet. You must create the necessary ingress firewall rule manually, as described in "Required firewall rule for expanded subnet".
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP: 1-65535, UDP: 1-65535, ICMP
  • Priority: 1000

gke-[cluster-name]-[cluster-hash]-all
  • Purpose: Permits traffic between all Pods on a cluster, as required by the Kubernetes networking model.
  • Source: Pod CIDR. For clusters with discontiguous multi-Pod CIDR enabled, all Pod CIDR blocks used by the cluster.
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP, UDP, SCTP, ICMP, ESP, AH
  • Priority: 1000

gke-[cluster-hash]-ipv6-all
  • Purpose: For dual-stack network clusters only. Permits traffic between nodes and Pods on a cluster.
  • Source: The same IPv6 address range allocated in subnetIpv6CidrBlock.
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP, UDP, SCTP, ICMP for IPv6, ESP, AH
  • Priority: 1000

gke-[cluster-name]-[cluster-hash]-inkubelet
  • Purpose: Allows access to port 10255 (the kubelet read-only port) from internal Pod CIDRs and node CIDRs in new GKE clusters running version 1.23.6 or later. Clusters running versions later than 1.26.4-gke.500 use the authenticated kubelet port (10250) instead. Do not add firewall rules that block port 10250 within the cluster.
  • Source: Internal Pod CIDRs and node CIDRs
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP: 10255
  • Priority: 999

gke-[cluster-name]-[cluster-hash]-exkubelet
  • Purpose: Denies public access to port 10255 in new GKE clusters running version 1.23.6 or later.
  • Source: 0.0.0.0/0
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP: 10255
  • Priority: 1000
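To inspect the exact values that GKE populated in any of these rules, describe the rule by name. The cluster name and hash below are placeholders; you can look up the real rule names with the list command shown earlier:

    gcloud compute firewall-rules describe gke-CLUSTER_NAME-CLUSTER_HASH-vms \
        --format="yaml(name, priority, sourceRanges, targetTags, allowed)"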

GKE Service firewall rules

GKE creates the following ingress firewall rules when creating a Service:

k8s-fw-[loadbalancer-hash]
  • Purpose: Permits ingress traffic to reach a Service.
  • Source: Comes from spec.loadBalancerSourceRanges. Defaults to 0.0.0.0/0 if spec.loadBalancerSourceRanges is omitted. For more details, see Firewall rules and source IP address allowlist.
  • Target (defines the destination): LoadBalancer virtual IP address
  • Protocol and ports: TCP and UDP on the ports specified in the Service manifest.

k8s-[cluster-id]-node-http-hc
  • Purpose: Permits health checks of an external passthrough Network Load Balancer Service when externalTrafficPolicy is set to Cluster.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target (defines the destination): LoadBalancer virtual IP address
  • Protocol and ports: TCP: 10256

k8s-[loadbalancer-hash]-http-hc
  • Purpose: Permits health checks of an external passthrough Network Load Balancer Service when externalTrafficPolicy is set to Local.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP port defined by spec.healthCheckNodePort. Defaults to TCP port 10256 if spec.healthCheckNodePort is omitted. For more details, see Health check port.

k8s-[cluster-id]-node-hc
  • Purpose: Permits health checks of an internal passthrough Network Load Balancer Service when externalTrafficPolicy is set to Cluster.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP: 10256

[loadbalancer-hash]-hc
  • Purpose: Permits health checks of an internal passthrough Network Load Balancer Service when externalTrafficPolicy is set to Local.
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP port defined by spec.healthCheckNodePort. Defaults to TCP port 10256 if spec.healthCheckNodePort is omitted. For more details, see Health check port.

k8s2-[cluster-id]-[namespace]-[service-name]-[suffixhash]
  • Purpose: Permits ingress traffic to reach a Service when one of the following is enabled:
      • GKE subsetting
      • Backend service-based external passthrough Network Load Balancer
  • Source: Comes from spec.loadBalancerSourceRanges. Defaults to 0.0.0.0/0 if spec.loadBalancerSourceRanges is omitted. For more details, see Firewall rules and source IP address allowlist.
  • Target (defines the destination): LoadBalancer virtual IP address
  • Protocol and ports: TCP and UDP on the ports specified in the Service manifest.

k8s2-[cluster-id]-[namespace]-[service-name]-[suffixhash]-fw
  • Purpose: Permits health checks of the Service when externalTrafficPolicy is set to Local and any of the following is enabled:
      • GKE subsetting
      • Backend service-based external passthrough Network Load Balancer
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target (defines the destination): LoadBalancer virtual IP address
  • Protocol and ports: TCP port defined by spec.healthCheckNodePort. Defaults to TCP port 10256 if spec.healthCheckNodePort is omitted. For more details, see Health check port.

k8s2-[cluster-id]-l4-shared-hc-fw
  • Purpose: Permits health checks of the Service when externalTrafficPolicy is set to Cluster and any of the following is enabled:
      • GKE subsetting
      • Backend service-based external passthrough Network Load Balancer
  • Source: 130.211.0.0/22, 35.191.0.0/16, 209.85.152.0/22, 209.85.204.0/22
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP: 10256

gke-[cluster-name]-[cluster-hash]-mcsd
  • Purpose: Permits the control plane to access the kubelet and metrics-server on cluster nodes for Multi-cluster Services.
  • Source: Health check IP addresses
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP, UDP, SCTP, ICMP, ESP, AH
  • Priority: 900
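To see how the Service fields referenced in this table drive the automatically created rules, consider the following minimal sketch. The Service name, selector, and CIDR are hypothetical; with this manifest, GKE restricts the allow rule's source to 203.0.113.0/24 and creates a health check rule for the node port that Kubernetes assigns:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: example-lb          # hypothetical name
    spec:
      type: LoadBalancer
      # With Local, GKE creates a health check rule on spec.healthCheckNodePort
      # (or on a port that Kubernetes assigns if the field is omitted).
      externalTrafficPolicy: Local
      # Becomes the source of the automatically created allow rule;
      # defaults to 0.0.0.0/0 when omitted.
      loadBalancerSourceRanges:
      - 203.0.113.0/24
      selector:
        app: example            # hypothetical selector
      ports:
      - protocol: TCP
        port: 80
        targetPort: 8080
    EOF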

GKE Gateway firewall rules

GKE creates the following ingress firewall rules when creating Gateway and HTTPRoute resources:

gkegw1-l7-[network]-[region/global], gkemcg1-l7-[network]-[region/global]
  • Purpose: Permits health checks of a network endpoint group (NEG). The Gateway controller creates this rule when the first Gateway resource is created, and can update it if more Gateway resources are created.
  • Source: Google Cloud health check ranges: 130.211.0.0/22 and 35.191.0.0/16
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP: all container target ports (for NEGs)
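To confirm which health check rule the Gateway controller created for your network, you can list rules by name prefix; NETWORK_NAME is a placeholder:

    gcloud compute firewall-rules list \
        --filter="name~^(gkegw1|gkemcg1)-l7-NETWORK_NAME"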

GKE Ingress firewall rules

GKE creates the following ingress firewall rules when creating an Ingress resource:

k8s-fw-l7-[random-hash]
  • Purpose: Permits health checks of a NodePort Service or network endpoint group (NEG). The Ingress controller creates this rule when the first Ingress resource is created, and can update it if more Ingress resources are created.
  • Source: For GKE v1.17.13-gke.2600 or later:
      • 130.211.0.0/22
      • 35.191.0.0/16
      • User-defined proxy-only subnet ranges (for internal Application Load Balancers)
  • Target (defines the destination): Node tag
  • Protocol and ports: TCP: 30000-32767, TCP: 80 (for internal Application Load Balancers), TCP: all container target ports (for NEGs)

Shared VPC

When a cluster located in a service project uses a Shared VPC network in a host project, the Ingress controller can't use the service project's GKE service account to create and update ingress allow firewall rules in the host project. You can grant the GKE service account in a service project permissions to create and manage the firewall resources. For more information, see Shared VPC.
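For example, one way to grant these permissions is to give the service project's GKE service account the Compute Security Admin role on the host project, as sketched below. HOST_PROJECT_ID and SERVICE_PROJECT_NUMBER are placeholders, and your organization might prefer a narrower custom role; see Shared VPC for the supported approaches:

    # The GKE service account has the form
    # service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com.
    gcloud projects add-iam-policy-binding HOST_PROJECT_ID \
        --member="serviceAccount:service-SERVICE_PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com" \
        --role="roles/compute.securityAdmin"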

Required firewall rule for expanded subnet

If you expand the primary IPv4 range of the cluster's subnet, GKE does not automatically update the source range of the gke-[cluster-name]-[cluster-hash]-vms firewall rule. Because nodes in the cluster can receive IPv4 addresses from the expanded portion of the subnet's primary IPv4 range, you must manually create a firewall rule to allow communication between nodes of the cluster.

The ingress firewall rule that you create must allow TCP and ICMP packets from the expanded primary subnet IPv4 range, and it must apply to at least all nodes in the cluster.

To create an ingress firewall rule that only applies to the cluster's nodes, set the firewall rule's target to the same target tag used by your cluster's automatically created gke-[cluster-name]-[cluster-hash]-vms firewall rule.
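The following sketch shows one way to create such a rule with the gcloud CLI. The rule name, network, expanded range, and node tag are placeholders; copy the actual target tag from the gke-[cluster-name]-[cluster-hash]-vms rule:

    # Mirrors the protocols of the automatically created -vms rule.
    gcloud compute firewall-rules create allow-expanded-subnet \
        --network=NETWORK_NAME \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:1-65535,udp:1-65535,icmp \
        --source-ranges=EXPANDED_PRIMARY_RANGE \
        --target-tags=NODE_TAG \
        --priority=1000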
