Trying to understand Kubernetes networking
I previously built a single node Kubernetes cluster as a test environment to learn more about it. The first thing I want to try to understand is its networking. In particular the IP addresses that are listed are all 10.*, while my host’s network is a 192.168/24. I understand each pod gets its own virtual ethernet interface and associated IP address, and these are generally private within the cluster (and firewalled off other than for exposed services). What does that actually look like?
$ ip route
default via 192.168.53.1 dev enx00e04c6851de
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.0.0/24 dev weave proto kernel scope link src 192.168.0.1
192.168.53.0/24 dev enx00e04c6851de proto kernel scope link src 192.168.53.147
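Given that table, which route would the service IP take? A quick sanity check with plain shell arithmetic (just a sketch; `ip route get 10.107.66.138` would ask the kernel directly) shows it falls inside none of the connected prefixes, so by the routing table alone it would simply head out the default gateway:

```shell
# Check whether an IPv4 address falls inside a CIDR prefix, using only
# shell arithmetic. The prefixes below are copied from the route table.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_cidr() {   # in_cidr ADDR NET/PREFIXLEN -> success if ADDR is inside
  local net=${2%/*} bits=${2#*/}
  local mask=$(( (0xffffffff << (32 - bits)) & 0xffffffff ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

for route in 172.17.0.0/16 192.168.0.0/24 192.168.53.0/24; do
  if in_cidr 10.107.66.138 "$route"; then
    echo "10.107.66.138 is within $route"
  else
    echo "10.107.66.138 is not within $route"
  fi
done
```

All three prefixes miss, yet the service IP is clearly reachable from the host, so something must be intercepting those packets before plain routing gets a say.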
Huh. No sign of any way to get to 10.107.66.138 (the IP my echoserver from the previous post is reachable on directly from the host). What about network interfaces? (under the cut because it’s lengthy)
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: enx00e04c6851de: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:e0:4c:68:51:de brd ff:ff:ff:ff:ff:ff
inet 192.168.53.147/24 brd 192.168.53.255 scope global dynamic enx00e04c6851de
valid_lft 41571sec preferred_lft 41571sec
3: wlp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 74:d8:3e:70:3b:18 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:18:04:9e:08 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
5: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether d2:5a:fd:c1:56:23 brd ff:ff:ff:ff:ff:ff
7: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
link/ether 12:82:8f:ed:c7:bf brd ff:ff:ff:ff:ff:ff
inet 192.168.0.1/24 brd 192.168.0.255 scope global weave
valid_lft forever preferred_lft forever
9: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default
link/ether b6:49:88:d6:6d:84 brd ff:ff:ff:ff:ff:ff
10: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
link/ether 6e:6c:03:1d:e5:0e brd ff:ff:ff:ff:ff:ff
11: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
link/ether 9a:af:c5:0a:b3:fd brd ff:ff:ff:ff:ff:ff
13: vethwepl534c0a6@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
link/ether 1e:ac:f1:85:61:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
15: vethwepl9ffd6b6@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
link/ether 56:ca:71:2a:ab:39 brd ff:ff:ff:ff:ff:ff link-netnsid 1
17: vethwepl62b369d@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
link/ether e2:a0:bb:ee:fc:73 brd ff:ff:ff:ff:ff:ff link-netnsid 2
23: vethwepl6669168@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
link/ether f2:e7:e6:95:e0:61 brd ff:ff:ff:ff:ff:ff link-netnsid 3
That looks like a collection of virtual ethernet devices that are being managed by the weave networking plugin, and presumably partnered inside each pod. They’re bridged to the weave interface (the master weave bit). Still no clues about the 10.* range. What about ARP?
$ ip neigh
192.168.53.1 dev enx00e04c6851de lladdr e4:8d:8c:35:98:d5 DELAY
192.168.0.4 dev datapath lladdr da:22:06:96:50:cb STALE
192.168.0.2 dev weave lladdr 66:eb:ce:16:3c:62 REACHABLE
192.168.53.136 dev enx00e04c6851de lladdr 00:e0:4c:39:f2:54 REACHABLE
192.168.0.6 dev weave lladdr 56:a9:f0:d2:9e:f3 STALE
192.168.0.3 dev datapath lladdr f2:42:c9:c3:08:71 STALE
192.168.0.3 dev weave lladdr f2:42:c9:c3:08:71 REACHABLE
192.168.0.2 dev datapath lladdr 66:eb:ce:16:3c:62 STALE
192.168.0.6 dev datapath lladdr 56:a9:f0:d2:9e:f3 STALE
192.168.0.4 dev weave lladdr da:22:06:96:50:cb STALE
192.168.0.5 dev datapath lladdr fe:6f:1b:14:56:5a STALE
192.168.0.5 dev weave lladdr fe:6f:1b:14:56:5a REACHABLE
Nope. That just looks like addresses on the weave-managed bridge. Alright. What about firewalling?
$ nft list ruleset
table ip nat {
chain DOCKER {
iifname "docker0" counter packets 0 bytes 0 return
}
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
counter packets 531750 bytes 31913539 jump KUBE-POSTROUTING
oifname != "docker0" ip saddr 172.17.0.0/16 counter packets 1 bytes 84 masquerade
counter packets 525600 bytes 31544134 jump WEAVE
}
chain PREROUTING {
type nat hook prerouting priority dstnat; policy accept;
counter packets 180 bytes 12525 jump KUBE-SERVICES
fib daddr type local counter packets 23 bytes 1380 jump DOCKER
}
chain OUTPUT {
type nat hook output priority -100; policy accept;
counter packets 527005 bytes 31628455 jump KUBE-SERVICES
ip daddr != 127.0.0.0/8 fib daddr type local counter packets 285425 bytes 17125524 jump DOCKER
}
chain KUBE-MARK-DROP {
counter packets 0 bytes 0 meta mark set mark or 0x8000
}
chain KUBE-MARK-MASQ {
counter packets 0 bytes 0 meta mark set mark or 0x4000
}
chain KUBE-POSTROUTING {
mark and 0x4000 != 0x4000 counter packets 4622 bytes 277720 return
counter packets 0 bytes 0 meta mark set mark xor 0x4000
counter packets 0 bytes 0 masquerade
}
chain KUBE-KUBELET-CANARY {
}
chain INPUT {
type nat hook input priority 100; policy accept;
}
chain KUBE-PROXY-CANARY {
}
chain KUBE-SERVICES {
meta l4proto tcp ip daddr 10.96.0.10 tcp dport 9153 counter packets 0 bytes 0 jump KUBE-SVC-JD5MR3NA4I4DYORP
meta l4proto tcp ip daddr 10.107.66.138 tcp dport 8080 counter packets 1 bytes 60 jump KUBE-SVC-666FUMINWJLRRQPD
meta l4proto tcp ip daddr 10.111.16.129 tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-EZYNCFY2F7N6OQA2
meta l4proto tcp ip daddr 10.96.9.41 tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-EDNDUDH2C75GIR6O
meta l4proto tcp ip daddr 192.168.53.147 tcp dport 443 counter packets 0 bytes 0 jump KUBE-XLB-EDNDUDH2C75GIR6O
meta l4proto tcp ip daddr 10.96.9.41 tcp dport 80 counter packets 0 bytes 0 jump KUBE-SVC-CG5I4G2RS3ZVWGLK
meta l4proto tcp ip daddr 192.168.53.147 tcp dport 80 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
meta l4proto tcp ip daddr 10.96.0.1 tcp dport 443 counter packets 0 bytes 0 jump KUBE-SVC-NPX46M4PTMTKRN6Y
meta l4proto udp ip daddr 10.96.0.10 udp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-TCOU7JCQXEZGVUNU
meta l4proto tcp ip daddr 10.96.0.10 tcp dport 53 counter packets 0 bytes 0 jump KUBE-SVC-ERIFXISQEP7F7OF4
fib daddr type local counter packets 3312 bytes 198720 jump KUBE-NODEPORTS
}
chain KUBE-NODEPORTS {
meta l4proto tcp tcp dport 31529 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp tcp dport 31529 counter packets 0 bytes 0 jump KUBE-SVC-666FUMINWJLRRQPD
meta l4proto tcp ip saddr 127.0.0.0/8 tcp dport 30894 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp tcp dport 30894 counter packets 0 bytes 0 jump KUBE-XLB-EDNDUDH2C75GIR6O
meta l4proto tcp ip saddr 127.0.0.0/8 tcp dport 32740 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp tcp dport 32740 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
}
chain KUBE-SVC-NPX46M4PTMTKRN6Y {
counter packets 0 bytes 0 jump KUBE-SEP-Y6PHKONXBG3JINP2
}
chain KUBE-SEP-Y6PHKONXBG3JINP2 {
ip saddr 192.168.53.147 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.53.147:6443
}
chain WEAVE {
# match-set weaver-no-masq-local dst counter packets 135966 bytes 8160820 return
ip saddr 192.168.0.0/24 ip daddr 224.0.0.0/4 counter packets 0 bytes 0 return
ip saddr != 192.168.0.0/24 ip daddr 192.168.0.0/24 counter packets 0 bytes 0 masquerade
ip saddr 192.168.0.0/24 ip daddr != 192.168.0.0/24 counter packets 33 bytes 2941 masquerade
}
chain WEAVE-CANARY {
}
chain KUBE-SVC-JD5MR3NA4I4DYORP {
counter packets 0 bytes 0 jump KUBE-SEP-6JI23ZDEH4VLR5EN
counter packets 0 bytes 0 jump KUBE-SEP-FATPLMAF37ZNQP5P
}
chain KUBE-SEP-6JI23ZDEH4VLR5EN {
ip saddr 192.168.0.2 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.2:9153
}
chain KUBE-SVC-TCOU7JCQXEZGVUNU {
counter packets 0 bytes 0 jump KUBE-SEP-JTN4UBVS7OG5RONX
counter packets 0 bytes 0 jump KUBE-SEP-4TCKAEJ6POVEFPVW
}
chain KUBE-SEP-JTN4UBVS7OG5RONX {
ip saddr 192.168.0.2 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto udp counter packets 0 bytes 0 dnat to 192.168.0.2:53
}
chain KUBE-SVC-ERIFXISQEP7F7OF4 {
counter packets 0 bytes 0 jump KUBE-SEP-UPZX2EM3TRFH2ASL
counter packets 0 bytes 0 jump KUBE-SEP-KPHYKKPVMB473Z76
}
chain KUBE-SEP-UPZX2EM3TRFH2ASL {
ip saddr 192.168.0.2 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.2:53
}
chain KUBE-SEP-4TCKAEJ6POVEFPVW {
ip saddr 192.168.0.3 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto udp counter packets 0 bytes 0 dnat to 192.168.0.3:53
}
chain KUBE-SEP-KPHYKKPVMB473Z76 {
ip saddr 192.168.0.3 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.3:53
}
chain KUBE-SEP-FATPLMAF37ZNQP5P {
ip saddr 192.168.0.3 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.3:9153
}
chain KUBE-SVC-666FUMINWJLRRQPD {
counter packets 1 bytes 60 jump KUBE-SEP-LYLDBZYLHY4MT3AQ
}
chain KUBE-SEP-LYLDBZYLHY4MT3AQ {
ip saddr 192.168.0.4 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp counter packets 1 bytes 60 dnat to 192.168.0.4:8080
}
chain KUBE-XLB-EDNDUDH2C75GIR6O {
fib saddr type local counter packets 0 bytes 0 jump KUBE-MARK-MASQ
fib saddr type local counter packets 0 bytes 0 jump KUBE-SVC-EDNDUDH2C75GIR6O
counter packets 0 bytes 0 jump KUBE-SEP-BLQHCYCSXY3NRKLC
}
chain KUBE-XLB-CG5I4G2RS3ZVWGLK {
fib saddr type local counter packets 0 bytes 0 jump KUBE-MARK-MASQ
fib saddr type local counter packets 0 bytes 0 jump KUBE-SVC-CG5I4G2RS3ZVWGLK
counter packets 0 bytes 0 jump KUBE-SEP-5XVRKWM672JGTWXH
}
chain KUBE-SVC-EDNDUDH2C75GIR6O {
counter packets 0 bytes 0 jump KUBE-SEP-BLQHCYCSXY3NRKLC
}
chain KUBE-SEP-BLQHCYCSXY3NRKLC {
ip saddr 192.168.0.5 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.5:443
}
chain KUBE-SVC-CG5I4G2RS3ZVWGLK {
counter packets 0 bytes 0 jump KUBE-SEP-5XVRKWM672JGTWXH
}
chain KUBE-SEP-5XVRKWM672JGTWXH {
ip saddr 192.168.0.5 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.5:80
}
chain KUBE-SVC-EZYNCFY2F7N6OQA2 {
counter packets 0 bytes 0 jump KUBE-SEP-JYW326XAJ4KK7QPG
}
chain KUBE-SEP-JYW326XAJ4KK7QPG {
ip saddr 192.168.0.5 counter packets 0 bytes 0 jump KUBE-MARK-MASQ
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.5:8443
}
}
table ip filter {
chain DOCKER {
}
chain DOCKER-ISOLATION-STAGE-1 {
iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-2
counter packets 0 bytes 0 return
}
chain DOCKER-ISOLATION-STAGE-2 {
oifname "docker0" counter packets 0 bytes 0 drop
counter packets 0 bytes 0 return
}
chain FORWARD {
type filter hook forward priority filter; policy drop;
iifname "weave" counter packets 213 bytes 54014 jump WEAVE-NPC-EGRESS
oifname "weave" counter packets 150 bytes 30038 jump WEAVE-NPC
oifname "weave" ct state new counter packets 0 bytes 0 log group 86
oifname "weave" counter packets 0 bytes 0 drop
iifname "weave" oifname != "weave" counter packets 33 bytes 2941 accept
oifname "weave" ct state related,established counter packets 0 bytes 0 accept
counter packets 0 bytes 0 jump KUBE-FORWARD
ct state new counter packets 0 bytes 0 jump KUBE-SERVICES
ct state new counter packets 0 bytes 0 jump KUBE-EXTERNAL-SERVICES
counter packets 0 bytes 0 jump DOCKER-USER
counter packets 0 bytes 0 jump DOCKER-ISOLATION-STAGE-1
oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
oifname "docker0" counter packets 0 bytes 0 jump DOCKER
iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
}
chain DOCKER-USER {
counter packets 0 bytes 0 return
}
chain KUBE-FIREWALL {
mark and 0x8000 == 0x8000 counter packets 0 bytes 0 drop
ip saddr != 127.0.0.0/8 ip daddr 127.0.0.0/8 ct status dnat counter packets 0 bytes 0 drop
}
chain OUTPUT {
type filter hook output priority filter; policy accept;
ct state new counter packets 527014 bytes 31628984 jump KUBE-SERVICES
counter packets 36324809 bytes 6021214027 jump KUBE-FIREWALL
meta l4proto != esp mark and 0x20000 == 0x20000 counter packets 0 bytes 0 drop
}
chain INPUT {
type filter hook input priority filter; policy accept;
counter packets 35869492 bytes 5971008896 jump KUBE-NODEPORTS
ct state new counter packets 390938 bytes 23457377 jump KUBE-EXTERNAL-SERVICES
counter packets 36249774 bytes 6030068622 jump KUBE-FIREWALL
meta l4proto tcp ip daddr 127.0.0.1 tcp dport 6784 fib saddr type != local ct state != related,established counter packets 0 bytes 0 drop
iifname "weave" counter packets 907273 bytes 88697229 jump WEAVE-NPC-EGRESS
counter packets 34809601 bytes 5818213726 jump WEAVE-IPSEC-IN
}
chain KUBE-KUBELET-CANARY {
}
chain KUBE-PROXY-CANARY {
}
chain KUBE-EXTERNAL-SERVICES {
}
chain KUBE-NODEPORTS {
meta l4proto tcp tcp dport 32196 counter packets 0 bytes 0 accept
meta l4proto tcp tcp dport 32196 counter packets 0 bytes 0 accept
}
chain KUBE-SERVICES {
}
chain KUBE-FORWARD {
ct state invalid counter packets 0 bytes 0 drop
mark and 0x4000 == 0x4000 counter packets 0 bytes 0 accept
ct state related,established counter packets 0 bytes 0 accept
ct state related,established counter packets 0 bytes 0 accept
}
chain WEAVE-NPC-INGRESS {
}
chain WEAVE-NPC-DEFAULT {
# match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst counter packets 14 bytes 840 accept
# match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst counter packets 0 bytes 0 accept
# match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst counter packets 0 bytes 0 accept
# match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst counter packets 0 bytes 0 accept
# match-set weave-iLgO^}{o=U/*%KE[@=W:l~|9T dst counter packets 9 bytes 540 accept
}
chain WEAVE-NPC {
ct state related,established counter packets 124 bytes 28478 accept
ip daddr 224.0.0.0/4 counter packets 0 bytes 0 accept
# PHYSDEV match --physdev-out vethwe-bridge --physdev-is-bridged counter packets 3 bytes 180 accept
ct state new counter packets 23 bytes 1380 jump WEAVE-NPC-DEFAULT
ct state new counter packets 0 bytes 0 jump WEAVE-NPC-INGRESS
}
chain WEAVE-NPC-EGRESS-ACCEPT {
counter packets 48 bytes 3769 meta mark set mark or 0x40000
}
chain WEAVE-NPC-EGRESS-CUSTOM {
}
chain WEAVE-NPC-EGRESS-DEFAULT {
# match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
# match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src counter packets 0 bytes 0 return
# match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src counter packets 31 bytes 2749 jump WEAVE-NPC-EGRESS-ACCEPT
# match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src counter packets 31 bytes 2749 return
# match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
# match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src counter packets 0 bytes 0 return
# match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src counter packets 0 bytes 0 jump WEAVE-NPC-EGRESS-ACCEPT
# match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src counter packets 0 bytes 0 return
# match-set weave-nmMUaDKV*YkQcP5s?Q[R54Ep3 src counter packets 17 bytes 1020 jump WEAVE-NPC-EGRESS-ACCEPT
# match-set weave-nmMUaDKV*YkQcP5s?Q[R54Ep3 src counter packets 17 bytes 1020 return
}
chain WEAVE-NPC-EGRESS {
ct state related,established counter packets 907425 bytes 88746642 accept
# PHYSDEV match --physdev-in vethwe-bridge --physdev-is-bridged counter packets 0 bytes 0 return
fib daddr type local counter packets 11 bytes 640 return
ip daddr 224.0.0.0/4 counter packets 0 bytes 0 return
ct state new counter packets 50 bytes 3961 jump WEAVE-NPC-EGRESS-DEFAULT
ct state new mark and 0x40000 != 0x40000 counter packets 2 bytes 192 jump WEAVE-NPC-EGRESS-CUSTOM
}
chain WEAVE-IPSEC-IN {
}
chain WEAVE-CANARY {
}
}
table ip mangle {
chain KUBE-KUBELET-CANARY {
}
chain PREROUTING {
type filter hook prerouting priority mangle; policy accept;
}
chain INPUT {
type filter hook input priority mangle; policy accept;
counter packets 35716863 bytes 5906910315 jump WEAVE-IPSEC-IN
}
chain FORWARD {
type filter hook forward priority mangle; policy accept;
}
chain OUTPUT {
type route hook output priority mangle; policy accept;
counter packets 35804064 bytes 5938944956 jump WEAVE-IPSEC-OUT
}
chain POSTROUTING {
type filter hook postrouting priority mangle; policy accept;
}
chain KUBE-PROXY-CANARY {
}
chain WEAVE-IPSEC-IN {
}
chain WEAVE-IPSEC-IN-MARK {
counter packets 0 bytes 0 meta mark set mark or 0x20000
}
chain WEAVE-IPSEC-OUT {
}
chain WEAVE-IPSEC-OUT-MARK {
counter packets 0 bytes 0 meta mark set mark or 0x20000
}
chain WEAVE-CANARY {
}
}
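(One recurring idiom in that wall of rules is worth decoding: KUBE-MARK-MASQ doesn’t rewrite any addresses itself. It just ors 0x4000 into the packet mark, and KUBE-POSTROUTING masquerades only packets carrying that bit, returning early otherwise and xoring the bit away before the masquerade. The bit manipulation, sketched in shell arithmetic:)

```shell
mark=0                       # mark on a fresh packet
mark=$(( mark | 0x4000 ))    # KUBE-MARK-MASQ: "this one needs SNAT"
if [ $(( mark & 0x4000 )) -ne 0 ]; then
  echo "KUBE-POSTROUTING: masquerade"
fi
mark=$(( mark ^ 0x4000 ))    # the "mark set mark xor 0x4000" line clears it
echo "final mark: $mark"
```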
Wow. That’s a lot of nftables entries, but it explains what’s going on. We have a nat entry for:
meta l4proto tcp ip daddr 10.107.66.138 tcp dport 8080 counter packets 1 bytes 60 jump KUBE-SVC-666FUMINWJLRRQPD
which ends up going to KUBE-SEP-LYLDBZYLHY4MT3AQ and:
meta l4proto tcp counter packets 1 bytes 60 dnat to 192.168.0.4:8080
So packets headed for our echoserver are eventually ending up in a container that has a local IP address of 192.168.0.4, which we can see in our routing table via the weave interface. Mystery explained. We can see the ingress for the externally visible HTTP service as well:
meta l4proto tcp ip daddr 192.168.53.147 tcp dport 80 counter packets 0 bytes 0 jump KUBE-XLB-CG5I4G2RS3ZVWGLK
which ends up redirected to:
meta l4proto tcp counter packets 0 bytes 0 dnat to 192.168.0.5:80
So from that we’d expect the IP inside the echoserver pod to be 192.168.0.4 and the IP inside our nginx ingress pod to be 192.168.0.5. Let’s look:
root@udon:/# docker ps | grep echoserver
7cbb177bee18 k8s.gcr.io/echoserver "/usr/local/bin/run.…" 3 days ago Up 3 days k8s_echoserver_hello-node-59bffcc9fd-8hkgb_default_c7111c9e-7131-40e0-876d-be89d5ca1812_0
root@udon:/# docker exec -it 7cbb177bee18 /bin/bash
root@hello-node-59bffcc9fd-8hkgb:/# awk '/32 host/ { print f } {f=$2}' <<< "$(</proc/net/fib_trie)" | sort -u
127.0.0.1
192.168.0.4
It’s a slightly awkward method of determining the local IP addresses, forced by the stripped-down nature of the container, but it clearly shows the expected 192.168.0.4 address.
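The awk one-liner works because of how /proc/net/fib_trie is laid out: each address appears on one line, with its /32 host qualifier on the following line, so the script remembers the previous line’s second field and prints it whenever it sees “32 host”. A self-contained illustration against a canned snippet (the layout mimics the real file, with addresses copied from above):

```shell
# fib_trie puts the "/32 host" marker on the line AFTER the address,
# so keep the previous line's second field and print it on a match.
sample='Main:
  +-- 0.0.0.0/0 3 0 5
     |-- 127.0.0.1
        /32 host LOCAL
     |-- 192.168.0.4
        /32 host LOCAL'

awk '/32 host/ { print f } { f = $2 }' <<< "$sample" | sort -u
```

which prints 127.0.0.1 and 192.168.0.4, matching what we got inside the container.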
I’ve touched here upon the ability to actually enter a container and poke around its running environment by using docker directly. The next step is to use that to investigate what containers have actually been spun up and what they’re doing. I’ll also revisit networking when I get to the point of building a multi-node cluster, to examine how the bridging between different hosts is done.