Fri. Aug 7th, 2020

CVE-2020-8558: Kubernetes Local Host Boundary Bypass Vulnerability Alert

Kubernetes has officially released a security advisory: Kubernetes node settings allow adjacent hosts to bypass the localhost boundary. The vulnerability is tracked as CVE-2020-8558 and is rated medium severity.

A security researcher discovered a flaw in kube-proxy. An attacker on the same LAN as a cluster node, or in a container running on a node, may be able to reach TCP/UDP services bound to 127.0.0.1 on a neighboring node within the same Layer 2 domain. If a service bound to such a port does not require authentication, it is exposed to attack.

You may be vulnerable if:

  • You are running a vulnerable version (see below)
  • Your cluster nodes run in an environment where untrusted hosts share the same layer 2 domain (i.e. same LAN) as nodes
  • Your cluster allows untrusted pods to run containers with CAP_NET_RAW (the Kubernetes default is to allow this capability).
  • Your nodes (or hostNetwork pods) run any localhost-only services which do not require any further authentication. To list potentially affected services, run the following commands on nodes:
    – lsof +c 15 -P -n -i4TCP@127.0.0.1 -sTCP:LISTEN
    – lsof +c 15 -P -n -i4UDP@127.0.0.1

    On a master node, an lsof entry like the following indicates that the API server may be listening on its insecure port:

    COMMAND        PID  USER FD   TYPE DEVICE SIZE/OFF NODE NAME
    kube-apiserver 123  root  7u  IPv4  26799      0t0  TCP 127.0.0.1:8080 (LISTEN)
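
The lsof output above can also be filtered programmatically. The following is a minimal sketch, not part of any Kubernetes tooling: it parses lsof's default column layout (which may vary between lsof versions) and reports processes listening on 127.0.0.1. The sample line is the one shown above.

```python
def localhost_listeners(lsof_output):
    """Return (command, address) pairs for processes bound to 127.0.0.1.

    Assumes lsof's default column layout: COMMAND PID USER FD TYPE
    DEVICE SIZE/OFF NODE NAME. Column widths may differ across lsof
    versions, so treat this as a review aid, not an authoritative check.
    """
    results = []
    for line in lsof_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 9:
            continue
        command, name = fields[0], fields[8]  # NAME holds addr:port
        if name.startswith("127.0.0.1:"):
            results.append((command, name))
    return results

# Sample taken from the advisory output above.
sample = """COMMAND        PID  USER FD   TYPE DEVICE SIZE/OFF NODE NAME
kube-apiserver 123  root  7u  IPv4  26799      0t0  TCP 127.0.0.1:8080 (LISTEN)"""
```

Feed it the output of the lsof commands listed earlier to get a quick shortlist of localhost-only services to audit.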


Affected versions

  • kubelet/kube-proxy : v1.18.0-1.18.3
  • kubelet/kube-proxy : v1.17.0-1.17.6
  • kubelet/kube-proxy : <=1.16.10
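
The ranges above can be checked mechanically. Below is a minimal sketch (a hypothetical helper, not official tooling); it assumes plain `vX.Y.Z` version strings with no pre-release or build suffixes.

```python
def is_affected(version):
    """Return True if a kubelet/kube-proxy version falls inside the
    vulnerable ranges from the advisory:
    v1.18.0-1.18.3, v1.17.0-1.17.6, and <= v1.16.10.
    Assumes a plain vX.Y.Z version string (no pre-release suffixes).
    """
    major, minor, patch = map(int, version.lstrip("v").split("."))
    if (major, minor) == (1, 18):
        return patch <= 3
    if (major, minor) == (1, 17):
        return patch <= 6
    # All releases at or below v1.16.10 are affected.
    return (major, minor, patch) <= (1, 16, 10)
```

Check the version reported by `kubelet --version` or `kube-proxy --version` on each node against these ranges.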

Solution

The following versions contain the fix:

  • kubelet/kube-proxy master – fixed by #91569
  • kubelet/kube-proxy v1.18.4+ – fixed by #92038
  • kubelet/kube-proxy v1.17.7+ – fixed by #92039
  • kubelet/kube-proxy v1.16.11+ – fixed by #92040

Temporary mitigation suggestions:

Prior to upgrading, this vulnerability can be mitigated by manually adding an iptables rule on nodes. This rule will reject traffic to 127.0.0.1 which does not originate on the node.

 iptables -I INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
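
To make the rule easier to review before running it as root on a node, the sketch below assembles it piece by piece with each component annotated (the long options are GNU-style double-hyphen flags). This only builds the command string; it does not apply anything.

```python
# Sketch: the mitigation rule above, assembled component by component.
rule = [
    "iptables", "-I", "INPUT",   # insert at the top of the INPUT chain
    "--dst", "127.0.0.0/8",      # traffic addressed to loopback...
    "!", "--src", "127.0.0.0/8", # ...that did not originate locally
    "-m", "conntrack",           # match on connection-tracking state
    "!", "--ctstate", "RELATED,ESTABLISHED,DNAT",  # spare replies and DNAT'd flows
    "-j", "DROP",                # drop everything else
]
command = " ".join(rule)
```

Note that iptables rules added this way do not survive a reboot; persist the rule with your distribution's usual mechanism (e.g. an iptables-persistence service) until the nodes are upgraded.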

Additionally, if your cluster does not already have the API server's insecure port disabled, we strongly suggest that you disable it. Add the following flag to your Kubernetes API server command line: --insecure-port=0
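
To verify on a node whether something is still listening on a localhost port (for example the API server's insecure port, which defaults to 8080), a simple TCP connection attempt suffices. The helper below is a hypothetical sketch, not part of Kubernetes; run it on the node itself.

```python
import socket

def localhost_port_open(port, timeout=1.0):
    """Attempt a TCP connection to 127.0.0.1:<port>.

    Returns True if something accepts the connection, False otherwise.
    Run on the node being checked; a True result for the API server's
    insecure port (default 8080) means the flag above has not taken effect.
    """
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except OSError:
        return False
```

After restarting the API server with `--insecure-port=0`, this check against port 8080 should return False.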