[Docker Tutorial] Docker and Linux Network Namespaces

Introduction

By now we have covered Docker concepts such as images and containers. We can run one or more containers on a single host, or spread containers across different hosts. So how do these containers communicate with each other? This article introduces Linux network namespaces to help you better understand how Docker containers talk to one another.

Communication between containers (on the same host)

By default, containers running on the same host can reach each other, even though each container's network namespace is isolated from the host's network namespace and from the other containers'.
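
As a quick check (assuming the containers are started on Docker's default bridge network), you can list Docker's networks and inspect the default bridge they attach to:

$ docker network ls
$ docker network inspect bridge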

View the host's network namespace

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:ad:3b:43 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 85639sec preferred_lft 85639sec
    inet6 fe80::5054:ff:fead:3b43/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:7e:86:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.215.20/24 brd 192.168.215.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7e:868c/64 scope link
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:79:71:de:73 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Create two containers running in the background

$ docker run -d --name demo1 busybox /bin/sh -c "while true;do sleep 3600;done"

$ docker run -d --name demo2 busybox /bin/sh -c "while true;do sleep 3600;done"

View the two containers' network namespaces

$ docker exec demo1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

$ docker exec demo2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
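
Incidentally, each container's eth0 above (eth0@if6 and eth0@if8) is one end of a veth pair; the other end lives on the host and is plugged into the docker0 bridge. You should be able to spot the host-side ends like this (interface names and numbers will differ on your machine):

# Host-side veth ends attached to the docker0 bridge
$ ip link | grep docker0
$ bridge link show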

Here we can see that the host, demo1, and demo2 each have their own independent network namespace. Now let's test whether demo1 and demo2 can reach each other.

# Inside demo1, ping demo2's IP address
$ docker exec demo1 ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.086 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.104 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.081 ms

# Inside demo2, ping demo1's IP address
$ docker exec demo2 ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.079 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.102 ms
64 bytes from 172.17.0.2: seq=2 ttl=64 time=0.101 ms

From the output above we can see that demo1 and demo2 can reach each other.

To better understand how containers on the same host communicate, let's explain it from the perspective of Linux network namespaces.

Before that, let's stop the demo1 and demo2 containers we started earlier.

$ sudo docker stop demo1 demo2

Linux network namespaces

Next we'll create two network namespaces on the host, test1 and test2, then create a veth pair and assign one end to test1 and the other to test2. This connects test1 and test2 so that they can reach each other.

Create the network namespaces
$ sudo ip netns add test1

$ sudo ip netns add test2
List the network namespaces
$ sudo ip netns list
test2
test1

Just as we can inspect a Docker container's network namespace, we can also inspect the details of the network namespaces we just created:

$ sudo ip netns exec test1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

$ sudo ip netns exec test2 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

We can see that the two namespaces we created, test1 and test2, each have only a loopback interface, which has no local IP such as 127.0.0.1 assigned and is in the DOWN state.

View the host's links with ip link

$ sudo ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 52:54:00:ad:3b:43 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 08:00:27:7e:86:8c brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 02:42:79:71:de:73 brd ff:ff:ff:ff:ff:ff

Right now the host has only these four links. Next, let's add a veth pair (veth-test1 and veth-test2) on the host.

$ sudo ip link add veth-test1 type veth peer name veth-test2

Now let's look at the host's links again:

$ sudo ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 52:54:00:ad:3b:43 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 08:00:27:7e:86:8c brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
    link/ether 02:42:79:71:de:73 brd ff:ff:ff:ff:ff:ff
9: veth-test2@veth-test1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 9e:44:34:0b:41:39 brd ff:ff:ff:ff:ff:ff
10: veth-test1@veth-test2: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 5a:ed:e6:53:00:fb brd ff:ff:ff:ff:ff:ff

Entries 9 (veth-test2@veth-test1) and 10 (veth-test1@veth-test2) are the pair of links we just added.

Move the veth-test1 interface into the test1 network namespace:

$ sudo ip link set veth-test1 netns test1

At this point we'll find that link 10 has disappeared from the host's ip link list, while the test1 network namespace we created has gained veth-test1. Let's look at test1's links:

$ sudo ip netns exec test1 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth-test1@if9: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 5a:ed:e6:53:00:fb brd ff:ff:ff:ff:ff:ff link-netnsid 0
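
Back on the host, veth-test1 no longer shows up in the ip link list; querying it directly should report that the device no longer exists:

$ ip link show veth-test1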

Likewise, move the veth-test2 interface into the test2 network namespace:

$ sudo ip link set veth-test2 netns test2

Now veth-test1 and veth-test2 have been placed into the test1 and test2 network namespaces respectively (at this point neither interface has an IP address, and both are still in the DOWN state).

# test1
$ sudo ip netns exec test1 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth-test1@if9: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 5a:ed:e6:53:00:fb brd ff:ff:ff:ff:ff:ff link-netnsid 0

# test2
$ sudo ip netns exec test2 ip link
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
9: veth-test2@if10: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT qlen 1000
    link/ether 9e:44:34:0b:41:39 brd ff:ff:ff:ff:ff:ff link-netnsid 0

Assign an IP address to each of the two interfaces, veth-test1 and veth-test2, then bring them from DOWN to UP (we can also bring each namespace's loopback interface lo up at the same time, otherwise pinging your own address later will fail).

# Assign IP addresses to veth-test1 and veth-test2
$ sudo ip netns exec test1 ip addr add 192.168.1.1/24 dev veth-test1
$ sudo ip netns exec test2 ip addr add 192.168.1.2/24 dev veth-test2

# Bring veth-test1 and veth-test2 from DOWN to UP
$ sudo ip netns exec test1 ip link set dev veth-test1 up
$ sudo ip netns exec test2 ip link set dev veth-test2 up

# Bring the loopback interface lo in test1 and test2 from DOWN to UP
$ sudo ip netns exec test1 ip link set dev lo up
$ sudo ip netns exec test2 ip link set dev lo up

Now we can see that veth-test1 and veth-test2, in the test1 and test2 namespaces respectively, have their IP addresses assigned and are in the UP state.

# test1
$ sudo ip netns exec test1 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
10: veth-test1@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 5a:ed:e6:53:00:fb brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet 192.168.1.1/24 scope global veth-test1
       valid_lft forever preferred_lft forever
    inet6 fe80::58ed:e6ff:fe53:fb/64 scope link
       valid_lft forever preferred_lft forever

# test2
$ sudo ip netns exec test2 ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
9: veth-test2@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether 9e:44:34:0b:41:39 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.1.2/24 scope global veth-test2
       valid_lft forever preferred_lft forever
    inet6 fe80::9c44:34ff:fe0b:4139/64 scope link
       valid_lft forever preferred_lft forever

At this point we have successfully set up the test1 and test2 network namespaces ourselves, and they can reach each other through veth-test1 and veth-test2.

# From test1, ping test2's IP address
$ sudo ip netns exec test1 ping 192.168.1.2
PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
64 bytes from 192.168.1.2: icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=0.067 ms

# From test2, ping test1's IP address
$ sudo ip netns exec test2 ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_seq=1 ttl=64 time=0.048 ms
64 bytes from 192.168.1.1: icmp_seq=2 ttl=64 time=0.052 ms
64 bytes from 192.168.1.1: icmp_seq=3 ttl=64 time=0.079 ms
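
When you are done experimenting, the two namespaces can be cleaned up; deleting a namespace also removes the veth end living inside it, which destroys the whole pair:

$ sudo ip netns del test1
$ sudo ip netns del test2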

Multiple Docker containers on the same host each have their own network namespace, and the principle behind their communication is similar to how the test1 and test2 namespaces we created communicate with each other.
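
If you want to see this parallel for yourself, one approach (a quick sketch, assuming demo1 is running again and that nsenter is available on the host) is to enter the container's network namespace from the host and list its interfaces there:

# Look up the PID of demo1's main process, then run ip a inside its network namespace
$ PID=$(docker inspect -f '{{.State.Pid}}' demo1)
$ sudo nsenter -t "$PID" -n ip a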