Troubleshooting a network problem on Linux: HTTP services running in Docker containers on the server are reachable from other machines, but an HTTP service running directly on the host is not. What is going on?

For reference, restarting a container currently fails with an iptables error:

```
╰─➤ docker restart rabbitmq3-management                                2 ↵
Error response from daemon: Cannot restart container rabbitmq3-management: driver failed programming external connectivity on endpoint rabbitmq3-management (f6bf8d5245c463e0ccdbfb5340e09d460dea3925124be09c92612a5ee5823c8e): (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 15692 -j DNAT --to-destination 172.21.2.2:15692 ! -i br-ea23e34daef4: iptables: No chain/target/match by that name. (exit status 1))
```

Some background: one of the server's memory sticks failed, so I forced the machine to skip the memory self-test and managed to reboot it. The server is currently running in that degraded state; the faulty stick has not been replaced yet.

After the reboot I noticed the problem. I run a FastAPI service on this server, launched like this:

```python
import uvicorn

if __name__ == "__main__":
    uvicorn.run(
        app='api:app',
        host="0.0.0.0",
        port=9600,
        workers=1,
    )
```
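(The real `api` module is not shown in this post; for completeness, here is a minimal hypothetical stand-in that produces exactly the response seen in the transcripts below. The module and route names are my assumptions.)

```python
# api.py (hypothetical): the smallest app consistent with the observed responses
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def root() -> dict:
    # Matches the 25-byte JSON body returned in the HTTPie transcripts.
    return {"message": "Hello World"}
```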
Accessing it from the server itself works:

```
─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/2.6.0


HTTP/1.1 200 OK
content-length: 25
content-type: application/json
date: Thu, 01 Feb 2024 06:56:05 GMT
server: uvicorn

{
    "message": "Hello World"
}
```

But accessing the same FastAPI on port 9600 from another machine does not:

```
─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/3.2.2


HTTP/1.1 503 Service Unavailable
Connection: close
Content-Length: 0
Proxy-Connection: close
```

Meanwhile, the HTTP services this machine runs in Docker are all reachable from other machines. For example, it runs a RabbitMQ server in Docker, and port 15672 of that RabbitMQ server can be reached from other machines just fine:

```
─➤ http -v http://192.168.38.223:15672
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.223:15672
User-Agent: HTTPie/3.2.2


HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 3056
Content-Security-Policy: script-src 'self' 'unsafe-eval' 'unsafe-inline'; object-src 'self'
Content-Type: text/html
Date: Thu, 01 Feb 2024 06:57:12 GMT
Etag: "3550788022"
Keep-Alive: timeout=4
Last-Modified: Thu, 24 Aug 2023 17:56:19 GMT
Proxy-Connection: keep-alive
Server: Cowboy
Vary: origin
```

Checking with netstat, port 9600 on 192.168.38.223 really is being listened on:

```
╰─➤ netstat -tulnp                                1 ↵
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:19530           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:5432            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:5672            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:2224            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:3000            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15692           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:15672           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:8929            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:9200            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:9091            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:9002            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:9000            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:9300            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:9600            0.0.0.0:*               LISTEN      1636021/python
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:36672           0.0.0.0:*               LISTEN      -
tcp        0      0 127.0.0.1:44127         0.0.0.0:*               LISTEN      1598742/node
tcp        0      0 127.0.0.1:44359         0.0.0.0:*               LISTEN      1598878/code-8b3775
tcp        0      0 127.0.0.1:41939         0.0.0.0:*               LISTEN      1598538/node
tcp6       0      0 :::19530                :::*                    LISTEN      -
tcp6       0      0 :::5601                 :::*                    LISTEN      -
tcp6       0      0 :::5432                 :::*                    LISTEN      -
tcp6       0      0 :::5672                 :::*                    LISTEN      -
tcp6       0      0 :::6379                 :::*                    LISTEN      -
tcp6       0      0 :::7891                 :::*                    LISTEN      1646/clash
tcp6       0      0 :::7890                 :::*                    LISTEN      1646/clash
tcp6       0      0 :::8000                 :::*                    LISTEN      -
tcp6       0      0 :::22                   :::*                    LISTEN      -
tcp6       0      0 :::2224                 :::*                    LISTEN      -
tcp6       0      0 :::3306                 :::*                    LISTEN      -
tcp6       0      0 :::15692                :::*                    LISTEN      -
tcp6       0      0 :::15672                :::*                    LISTEN      -
tcp6       0      0 :::8929                 :::*                    LISTEN      -
tcp6       0      0 :::9200                 :::*                    LISTEN      -
tcp6       0      0 :::9091                 :::*                    LISTEN      -
tcp6       0      0 :::9090                 :::*                    LISTEN      1646/clash
tcp6       0      0 :::9002                 :::*                    LISTEN      -
tcp6       0      0 :::9000                 :::*                    LISTEN      -
tcp6       0      0 :::9300                 :::*                    LISTEN      -
udp        0      0 127.0.0.53:53           0.0.0.0:*                           -
udp6       0      0 :::7891                 :::*                                1646/clash
```

The network configuration of my machine (192.168.38.223) is as follows:

```
─➤ ip --color a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 90:8d:6e:c2:5d:24 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f0
    inet 192.168.38.223/24 brd 192.168.38.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::928d:6eff:fec2:5d24/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 90:8d:6e:c2:5d:25 brd ff:ff:ff:ff:ff:ff
    altname enp24s0f1
4: eno3: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 90:8d:6e:c2:5d:26 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f0
5: eno4: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 90:8d:6e:c2:5d:27 brd ff:ff:ff:ff:ff:ff
    altname enp25s0f1
6: br-7abdd021226c: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:20:78:a1:26 brd ff:ff:ff:ff:ff:ff
    inet 172.21.7.1/24 brd 172.21.7.255 scope global br-7abdd021226c
       valid_lft forever preferred_lft forever
8: br-fae6ff4cbfe5: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:a3:e3:7b:47 brd ff:ff:ff:ff:ff:ff
    inet 172.21.8.1/24 brd 172.21.8.255 scope global br-fae6ff4cbfe5
       valid_lft forever preferred_lft forever
    inet6 fe80::42:a3ff:fee3:7b47/64 scope link
       valid_lft forever preferred_lft forever
9: br-1ad62c94cb59: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:e0:b5:64:9f brd ff:ff:ff:ff:ff:ff
    inet 172.21.4.1/24 brd 172.21.4.255 scope global br-1ad62c94cb59
       valid_lft forever preferred_lft forever
10: br-72097f53c6c8: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:2d:88:79:b3 brd ff:ff:ff:ff:ff:ff
    inet 172.21.5.1/24 brd 172.21.5.255 scope global br-72097f53c6c8
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2dff:fe88:79b3/64 scope link
       valid_lft forever preferred_lft forever
11: br-2c578316f047: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:f5:72:f5:5c brd ff:ff:ff:ff:ff:ff
    inet 172.21.1.1/24 brd 172.21.1.255 scope global br-2c578316f047
       valid_lft forever preferred_lft forever
12: br-33e0a46249f7: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:b6:a2:c1:e3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.49.1/24 brd 192.168.49.255 scope global br-33e0a46249f7
       valid_lft forever preferred_lft forever
13: br-7c40d6bf640c: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:e7:a5:67:9c brd ff:ff:ff:ff:ff:ff
    inet 172.21.3.1/24 brd 172.21.3.255 scope global br-7c40d6bf640c
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e7ff:fea5:679c/64 scope link
       valid_lft forever preferred_lft forever
14: br-ae3a1dd6e320: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:41:e9:55:06 brd ff:ff:ff:ff:ff:ff
    inet 172.21.0.1/24 brd 172.21.0.255 scope global br-ae3a1dd6e320
       valid_lft forever preferred_lft forever
    inet6 fe80::42:41ff:fee9:5506/64 scope link
       valid_lft forever preferred_lft forever
15: br-ea23e34daef4: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:77:fc:27:bf brd ff:ff:ff:ff:ff:ff
    inet 172.21.2.1/24 brd 172.21.2.255 scope global br-ea23e34daef4
       valid_lft forever preferred_lft forever
    inet6 fe80::42:77ff:fefc:27bf/64 scope link
       valid_lft forever preferred_lft forever
16: br-eb248bb5b3fa: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:49:87:4d:ff brd ff:ff:ff:ff:ff:ff
    inet 172.21.15.1/24 brd 172.21.15.255 scope global br-eb248bb5b3fa
       valid_lft forever preferred_lft forever
    inet6 fe80::42:49ff:fe87:4dff/64 scope link
       valid_lft forever preferred_lft forever
17: br-0cbe1b0ddf78: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:fc:d6:05:b2 brd ff:ff:ff:ff:ff:ff
    inet 172.21.9.1/24 brd 172.21.9.255 scope global br-0cbe1b0ddf78
       valid_lft forever preferred_lft forever
    inet6 fe80::42:fcff:fed6:5b2/64 scope link
       valid_lft forever preferred_lft forever
18: br-298fd4684d8e: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:7e:14:43:4b brd ff:ff:ff:ff:ff:ff
    inet 172.21.17.1/24 brd 172.21.17.255 scope global br-298fd4684d8e
       valid_lft forever preferred_lft forever
19: br-3fa489a3f1b3: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:37:b1:67:2f brd ff:ff:ff:ff:ff:ff
    inet 172.21.10.1/24 brd 172.21.10.255 scope global br-3fa489a3f1b3
       valid_lft forever preferred_lft forever
20: br-bff545d104b6: mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ee:12:b1:2e brd ff:ff:ff:ff:ff:ff
    inet 172.21.19.1/24 brd 172.21.19.255 scope global br-bff545d104b6
       valid_lft forever preferred_lft forever
    inet6 fe80::42:eeff:fe12:b12e/64 scope link
       valid_lft forever preferred_lft forever
21: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:16:5c:70:8e brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
23: vethc4971ff@if22: mtu 1500 qdisc noqueue master br-0cbe1b0ddf78 state UP group default
    link/ether 6e:1b:be:ce:63:4f brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::6c1b:beff:fece:634f/64 scope link
       valid_lft forever preferred_lft forever
25: vethbb38cd9@if24: mtu 1500 qdisc noqueue master br-72097f53c6c8 state UP group default
    link/ether 46:af:51:eb:82:5a brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::44af:51ff:feeb:825a/64 scope link
       valid_lft forever preferred_lft forever
27: vetha994484@if26: mtu 1500 qdisc noqueue master br-ea23e34daef4 state UP group default
    link/ether 2e:62:df:af:e7:77 brd ff:ff:ff:ff:ff:ff link-netnsid 10
    inet6 fe80::2c62:dfff:feaf:e777/64 scope link
       valid_lft forever preferred_lft forever
29: vetha936228@if28: mtu 1500 qdisc noqueue master br-fae6ff4cbfe5 state UP group default
    link/ether ea:9a:37:c2:7a:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 9
    inet6 fe80::e89a:37ff:fec2:7af9/64 scope link
       valid_lft forever preferred_lft forever
31: veth903d616@if30: mtu 1500 qdisc noqueue master br-7c40d6bf640c state UP group default
    link/ether fe:4f:15:d0:24:bb brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::fc4f:15ff:fed0:24bb/64 scope link
       valid_lft forever preferred_lft forever
33: veth0fb5941@if32: mtu 1500 qdisc noqueue master br-ae3a1dd6e320 state UP group default
    link/ether da:81:51:b4:6e:ff brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::d881:51ff:feb4:6eff/64 scope link
       valid_lft forever preferred_lft forever
35: veth03a943c@if34: mtu 1500 qdisc noqueue master br-bff545d104b6 state UP group default
    link/ether d6:0c:97:ce:c1:73 brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet6 fe80::d40c:97ff:fece:c173/64 scope link
       valid_lft forever preferred_lft forever
39: veth3051cb6@if38: mtu 1500 qdisc noqueue master br-0cbe1b0ddf78 state UP group default
    link/ether a2:31:f3:14:e4:42 brd ff:ff:ff:ff:ff:ff link-netnsid 11
    inet6 fe80::a031:f3ff:fe14:e442/64 scope link
       valid_lft forever preferred_lft forever
41: veth90b7282@if40: mtu 1500 qdisc noqueue master br-0cbe1b0ddf78 state UP group default
    link/ether 5e:b6:3c:e7:8e:52 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::5cb6:3cff:fee7:8e52/64 scope link
       valid_lft forever preferred_lft forever
43: vethb1255cd@if42: mtu 1500 qdisc noqueue master br-fae6ff4cbfe5 state UP group default
    link/ether 66:81:8d:a6:b2:54 brd ff:ff:ff:ff:ff:ff link-netnsid 8
    inet6 fe80::6481:8dff:fea6:b254/64 scope link
       valid_lft forever preferred_lft forever
45: veth08c2693@if44: mtu 1500 qdisc noqueue master br-0cbe1b0ddf78 state UP group default
    link/ether c6:a5:cb:0e:0f:2a brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::c4a5:cbff:fe0e:f2a/64 scope link
       valid_lft forever preferred_lft forever
6217: vethe2ecf76@if6216: mtu 1500 qdisc noqueue master br-eb248bb5b3fa state UP group default
    link/ether 16:6f:0a:c6:7c:f2 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::146f:aff:fec6:7cf2/64 scope link
       valid_lft forever preferred_lft forever
```

What should I do here? Any troubleshooting ideas or directions? Everything was reachable before; the problem appeared only after the memory failure and the reboot.

It is not just port 9600 either: whichever port I move the FastAPI app to is unreachable from other machines. I even stopped the Dockerized RabbitMQ to free port 15672 and bound the FastAPI app to 15672 instead, and then other machines could not reach 15672 either (while the same port 15672 was perfectly reachable from other machines when the Dockerized RabbitMQ was serving it).

* * *

Using nc on my Mac to check whether the server's (192.168.38.223) port is reachable returns "connection refused":

```
╰─➤ nc -zv 192.168.38.223 9600                                130 ↵
nc: connectx to 192.168.38.223 port 9600 (tcp) failed: Connection refused
```

But with HTTPie, the response is still a 503:

```
╰─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/3.2.2


HTTP/1.1 503 Service Unavailable
Connection: close
Content-Length: 0
Proxy-Connection: close
```
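Those two results do not quite line up: nc reports "Connection refused", yet HTTPie receives a 503, and that 503 carries a `Proxy-Connection` header, which usually means the reply was generated by an HTTP proxy on the path rather than by uvicorn itself. To take proxy settings out of the picture, the check can be repeated from the Mac with a bare TCP socket (HTTPie honors `HTTP_PROXY`-style environment variables; a plain socket does not). A minimal sketch, standard library only, with hosts and ports taken from the transcripts above:

```python
# probe.py: raw TCP connect test that ignores any configured HTTP proxy
import socket


def probe(host: str, port: int, timeout: float = 3.0) -> None:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            print(f"{host}:{port} reachable (TCP handshake completed)")
    except OSError as exc:  # ConnectionRefusedError, TimeoutError, ...
        print(f"{host}:{port} NOT reachable: {exc}")


if __name__ == "__main__":
    probe("192.168.38.223", 9600)   # FastAPI on the host: the failing case
    probe("192.168.38.223", 15672)  # Docker-published RabbitMQ: the working case
```

If the bare connect to 9600 also fails while 15672 succeeds, the proxy can be ruled out and the problem really is between the client and the host-run service.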
On the server itself, however, both checks succeed:

```
╭─pon@T4GPU ~
╰─➤ nc -zv 192.168.38.223 9600
Connection to 192.168.38.223 9600 port [tcp/*] succeeded!
╭─pon@T4GPU ~
╰─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/2.6.0


HTTP/1.1 200 OK
content-length: 25
content-type: application/json
date: Fri, 02 Feb 2024 01:39:17 GMT
server: uvicorn

{
    "message": "Hello World"
}
```

* * *

Turning the firewall off does not help either:

```
╭─pon@T4GPU ~
╰─➤ sudo iptables -P INPUT ACCEPT
    sudo iptables -P FORWARD ACCEPT
    sudo iptables -P OUTPUT ACCEPT
    sudo iptables -F
╭─pon@T4GPU ~
╰─➤ exit
Connection to 192.168.38.223 closed.
╭─ponponon@MBP13ARM ~
╰─➤ nc -zv 192.168.38.223 9600
nc: connectx to 192.168.38.223 port 9600 (tcp) failed: Connection refused
```

* * *

From the same Mac (192.168.35.150), accessing the FastAPI on another server (192.168.38.191) works fine:

```
╭─ponponon@MBP13ARM ~
╰─➤ nc -zv 192.168.38.191 9901                                1 ↵
Connection to 192.168.38.191 port 9901 [tcp/*] succeeded!
╭─ponponon@MBP13ARM ~
╰─➤ http -v http://192.168.38.191:9901
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.191:9901
User-Agent: HTTPie/3.2.2


HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 25
Content-Type: application/json
Date: Fri, 02 Feb 2024 01:50:07 GMT
Keep-Alive: timeout=4
Proxy-Connection: keep-alive
Server: uvicorn

{
    "message": "hello world"
}
```

So the external network itself should not be the problem.

* * *

It all worked before; now it looks like this:

(screenshot omitted)

* * *

Update: the routing tables on 192.168.38.223:

```
(vtboss-plugin-3DGTRD6U) ╭─pon@T4GPU ~/code/work/vobile/vt/vtboss-plugin ‹master*›
╰─➤ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.38.1    0.0.0.0         UG    0      0        0 eno1
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.21.0.0      0.0.0.0         255.255.255.0   U     0      0        0 br-ae3a1dd6e320
172.21.1.0      0.0.0.0         255.255.255.0   U     0      0        0 br-2c578316f047
172.21.2.0      0.0.0.0         255.255.255.0   U     0      0        0 br-ea23e34daef4
172.21.3.0      0.0.0.0         255.255.255.0   U     0      0        0 br-7c40d6bf640c
172.21.4.0      0.0.0.0         255.255.255.0   U     0      0        0 br-1ad62c94cb59
172.21.5.0      0.0.0.0         255.255.255.0   U     0      0        0 br-72097f53c6c8
172.21.7.0      0.0.0.0         255.255.255.0   U     0      0        0 br-7abdd021226c
172.21.8.0      0.0.0.0         255.255.255.0   U     0      0        0 br-fae6ff4cbfe5
172.21.9.0      0.0.0.0         255.255.255.0   U     0      0        0 br-0cbe1b0ddf78
172.21.10.0     0.0.0.0         255.255.255.0   U     0      0        0 br-3fa489a3f1b3
172.21.15.0     0.0.0.0         255.255.255.0   U     0      0        0 br-eb248bb5b3fa
172.21.17.0     0.0.0.0         255.255.255.0   U     0      0        0 br-298fd4684d8e
172.21.19.0     0.0.0.0         255.255.255.0   U     0      0        0 br-bff545d104b6
192.168.38.0    0.0.0.0         255.255.255.0   U     0      0        0 eno1
192.168.49.0    0.0.0.0         255.255.255.0   U     0      0        0 br-33e0a46249f7

(vtboss-plugin-3DGTRD6U) ╭─pon@T4GPU ~/code/work/vobile/vt/vtboss-plugin ‹master*›
╰─➤ netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
default         localhost       0.0.0.0         UG        0 0          0 eno1
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
172.21.0.0      0.0.0.0         255.255.255.0   U         0 0          0 br-ae3a1dd6e320
172.21.1.0      0.0.0.0         255.255.255.0   U         0 0          0 br-2c578316f047
172.21.2.0      0.0.0.0         255.255.255.0   U         0 0          0 br-ea23e34daef4
172.21.3.0      0.0.0.0         255.255.255.0   U         0 0          0 br-7c40d6bf640c
172.21.4.0      0.0.0.0         255.255.255.0   U         0 0          0 br-1ad62c94cb59
172.21.5.0      0.0.0.0         255.255.255.0   U         0 0          0 br-72097f53c6c8
172.21.7.0      0.0.0.0         255.255.255.0   U         0 0          0 br-7abdd021226c
172.21.8.0      0.0.0.0         255.255.255.0   U         0 0          0 br-fae6ff4cbfe5
172.21.9.0      0.0.0.0         255.255.255.0   U         0 0          0 br-0cbe1b0ddf78
172.21.10.0     0.0.0.0         255.255.255.0   U         0 0          0 br-3fa489a3f1b3
172.21.15.0     0.0.0.0         255.255.255.0   U         0 0          0 br-eb248bb5b3fa
172.21.17.0     0.0.0.0         255.255.255.0   U         0 0          0 br-298fd4684d8e
172.21.19.0     0.0.0.0         255.255.255.0   U         0 0          0 br-bff545d104b6
192.168.38.0    0.0.0.0         255.255.255.0   U         0 0          0 eno1
192.168.49.0    0.0.0.0         255.255.255.0   U         0 0          0 br-33e0a46249f7

(vtboss-plugin-3DGTRD6U) ╭─pon@T4GPU ~/code/work/vobile/vt/vtboss-plugin ‹master*›
╰─➤ ip -s route show
default via 192.168.38.1 dev eno1 proto static
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.21.0.0/24 dev br-ae3a1dd6e320 proto kernel scope link src 172.21.0.1
172.21.1.0/24 dev br-2c578316f047 proto kernel scope link src 172.21.1.1 linkdown
172.21.2.0/24 dev br-ea23e34daef4 proto kernel scope link src 172.21.2.1
172.21.3.0/24 dev br-7c40d6bf640c proto kernel scope link src 172.21.3.1
172.21.4.0/24 dev br-1ad62c94cb59 proto kernel scope link src 172.21.4.1 linkdown
172.21.5.0/24 dev br-72097f53c6c8 proto kernel scope link src 172.21.5.1
172.21.7.0/24 dev br-7abdd021226c proto kernel scope link src 172.21.7.1 linkdown
172.21.8.0/24 dev br-fae6ff4cbfe5 proto kernel scope link src 172.21.8.1
172.21.9.0/24 dev br-0cbe1b0ddf78 proto kernel scope link src 172.21.9.1
172.21.10.0/24 dev br-3fa489a3f1b3 proto kernel scope link src 172.21.10.1 linkdown
172.21.15.0/24 dev br-eb248bb5b3fa proto kernel scope link src 172.21.15.1
172.21.17.0/24 dev br-298fd4684d8e proto kernel scope link src 172.21.17.1 linkdown
172.21.19.0/24 dev br-bff545d104b6 proto kernel scope link src 172.21.19.1
192.168.38.0/24 dev eno1 proto kernel scope link src 192.168.38.223
192.168.49.0/24 dev br-33e0a46249f7 proto kernel scope link src 192.168.49.1 linkdown
```

Then I captured packets on the problem machine:

```
(vtboss-plugin-3DGTRD6U) ╭─pon@T4GPU ~/code/work/vobile/vt/vtboss-plugin ‹master*›
╰─➤ sudo tcpdump -i eno1 port 9600 -n -vvv -w test.cap                                130 ↵
tcpdump: listening on eno1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
```

Then I opened the server's capture file on my Mac; the result is as follows:

(screenshot omitted)

* * *

I also tried capturing directly with Wireshark on the Mac while making the request, and it turned into this:

```
╰─➤ http -v http://192.168.38.223:9600
GET / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Host: 192.168.38.223:9600
User-Agent: HTTPie/3.2.2


HTTP/1.1 503 Service Unavailable
Connection: close
Content-Length: 0
Proxy-Connection: close
```

(screenshot omitted)
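The `test.cap` file written above can also be inspected without a GUI, using the third-party scapy package (`pip install scapy`); printing the TCP flags per packet is enough to see whether the Mac's SYNs reach eno1 at all and whether anything answers with SYN-ACK or RST. A minimal sketch:

```python
# inspect_cap.py: print one line per TCP packet in the tcpdump capture
from scapy.all import IP, TCP, rdpcap

packets = rdpcap("test.cap")  # file written by the tcpdump command above
for pkt in packets:
    if IP in pkt and TCP in pkt:
        ip, tcp = pkt[IP], pkt[TCP]
        # flags renders as "S" (SYN), "SA" (SYN-ACK), "R" (RST), etc.
        print(f"{ip.src}:{tcp.sport} -> {ip.dst}:{tcp.dport} flags={tcp.flags}")
```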
* * *

The listening port itself is fine:

```
╰─➤ netstat -tulnp | grep 2320406
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:9600            0.0.0.0:*               LISTEN      2320406/python
```

* * *

Update: the output of `ifconfig eno1`:

```
─➤ ifconfig eno1
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.38.223  netmask 255.255.255.0  broadcast 192.168.38.255
        inet6 fe80::928d:6eff:fec2:5d24  prefixlen 64  scopeid 0x20<link>
        ether 90:8d:6e:c2:5d:24  txqueuelen 1000  (Ethernet)
        RX packets 1912389  bytes 541910038 (541.9 MB)
        RX errors 0  dropped 48496  overruns 0  frame 0
        TX packets 1097342  bytes 510909874 (510.9 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 35
```

* * *

Update: the output of `ethtool -S eno1`:

```
╰─➤ ethtool -S eno1
NIC statistics:
     rx_octets: 541948234
     rx_fragments: 0
     rx_ucast_packets: 995624
     rx_mcast_packets: 677808
     rx_bcast_packets: 239396
     rx_fcs_errors: 0
     rx_align_errors: 0
     rx_xon_pause_rcvd: 0
     rx_xoff_pause_rcvd: 0
     rx_mac_ctrl_rcvd: 0
     rx_xoff_entered: 0
     rx_frame_too_long_errors: 0
     rx_jabbers: 0
     rx_undersize_packets: 0
     rx_in_length_errors: 0
     rx_out_length_errors: 0
     rx_64_or_less_octet_packets: 0
     rx_65_to_127_octet_packets: 0
     rx_128_to_255_octet_packets: 0
     rx_256_to_511_octet_packets: 0
     rx_512_to_1023_octet_packets: 0
     rx_1024_to_1522_octet_packets: 0
     rx_1523_to_2047_octet_packets: 0
     rx_2048_to_4095_octet_packets: 0
     rx_4096_to_8191_octet_packets: 0
     rx_8192_to_9022_octet_packets: 0
     tx_octets: 511129734
     tx_collisions: 0
     tx_xon_sent: 0
     tx_xoff_sent: 0
     tx_flow_control: 0
     tx_mac_errors: 0
     tx_single_collisions: 0
     tx_mult_collisions: 0
     tx_deferred: 0
     tx_excessive_collisions: 0
     tx_late_collisions: 0
     tx_collide_2times: 0
     tx_collide_3times: 0
     tx_collide_4times: 0
     tx_collide_5times: 0
     tx_collide_6times: 0
     tx_collide_7times: 0
     tx_collide_8times: 0
     tx_collide_9times: 0
     tx_collide_10times: 0
     tx_collide_11times: 0
     tx_collide_12times: 0
     tx_collide_13times: 0
     tx_collide_14times: 0
     tx_collide_15times: 0
     tx_ucast_packets: 1097937
     tx_mcast_packets: 83
     tx_bcast_packets: 9
     tx_carrier_sense_errors: 0
     tx_discards: 0
     tx_errors: 0
     dma_writeq_full: 0
     dma_write_prioq_full: 0
     rxbds_empty: 0
     rx_discards: 0
     rx_errors: 0
     rx_threshold_hit: 0
     dma_readq_full: 0
     dma_read_prioq_full: 0
     tx_comp_queue_full: 0
     ring_set_send_prod_index: 0
     ring_status_update: 0
     nic_irqs: 0
     nic_avoided_irqs: 0
     nic_tx_threshold_hit: 0
     mbuf_lwm_thresh_hit: 0
```
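One number stands out above: `ifconfig` reports `dropped 48496` on the RX side, while the NIC-level counters from `ethtool -S` (`rx_discards`, `rx_errors`) are all zero, which suggests the drops are being counted in the kernel rather than in the hardware. Whether that counter keeps climbing while the failure is reproduced would be worth knowing. A minimal sketch that polls the standard sysfs statistics path (interface name assumed from the output above):

```python
# watch_drops.py: watch eno1's kernel rx_dropped counter; stop with Ctrl-C
import time
from pathlib import Path

COUNTER = Path("/sys/class/net/eno1/statistics/rx_dropped")

last = int(COUNTER.read_text())
while True:
    time.sleep(5)
    now = int(COUNTER.read_text())
    if now != last:
        print(f"rx_dropped grew by {now - last} in the last 5 s (total {now})")
    last = now
```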
* * *

Here is my CPU information:

```
─➤ lscpu
Architecture:             x86_64
  CPU op-mode(s):         32-bit, 64-bit
  Address sizes:          46 bits physical, 48 bits virtual
  Byte Order:             Little Endian
CPU(s):                   32
  On-line CPU(s) list:    0-31
Vendor ID:                GenuineIntel
  Model name:             Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
    CPU family:           6
    Model:                85
    Thread(s) per core:   1
    Core(s) per socket:   16
    Socket(s):            2
    Stepping:             7
    CPU max MHz:          3900.0000
    CPU min MHz:          1000.0000
    BogoMIPS:             4600.00
    Flags:                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization features:
  Virtualization:         VT-x
Caches (sum of all):
  L1d:                    1 MiB (32 instances)
  L1i:                    1 MiB (32 instances)
  L2:                     32 MiB (32 instances)
  L3:                     44 MiB (2 instances)
NUMA:
  NUMA node(s):           2
  NUMA node0 CPU(s):      0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
  NUMA node1 CPU(s):      1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Vulnerabilities:
  Gather data sampling:   Mitigation; Microcode
  Itlb multihit:          KVM: Mitigation: VMX disabled
  L1tf:                   Not affected
  Mds:                    Not affected
  Meltdown:               Not affected
  Mmio stale data:        Mitigation; Clear CPU buffers; SMT disabled
  Retbleed:               Mitigation; Enhanced IBRS
  Spec rstack overflow:   Not affected
  Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl and seccomp
  Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:             Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
  Srbds:                  Not affected
  Tsx async abort:        Mitigation; TSX disabled
```

Memory should be sufficient as well; there is over 100 GB available:

```
(poster_keyword_search-vs4TvrqN) ╭─pon@T4GPU ~/code/work/vobile/vt/poster_keyword_search ‹master›
╰─➤ free -h                                2 ↵
               total        used        free      shared  buff/cache   available
Mem:           125Gi        17Gi       102Gi       130Mi       5.9Gi       107Gi
Swap:          8.0Gi          0B       8.0Gi
```