
WireGuard VPN Troubleshooting

  sonic0002        2021-12-14 09:17:46

When you work as the network/cluster admin who configures WireGuard VPN for your team, you may run into some of the problems below. Here are solutions you can try for each of them.

-----------------------------------------------------------------------------------------------------------

Problem [1]: After you configure multiple layers of routing, the WireGuard handshake request sent from the client is received by the server, but the server's response is blocked somewhere along the path, so the client never receives it.

Test network config (WireGuard set up on an ESXi VM):

Solution: When you add multi-layer routing and a server peer sits behind a firewall, that peer may wish to receive incoming packets even when it is not sending any. Because stateful firewalls keep track of connection state, a peer behind one-to-many NAT or a firewall must keep the NAT/firewall mapping valid by periodically sending keepalive packets. Enabling one-to-many NAT on all of your firewalls, combined with WireGuard's persistent keepalive, fixes this problem.
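The keepalive can also be enabled at runtime with the wg tool, without editing the config file. A minimal sketch, assuming the interface is named wg0 (it mirrors the PersistentKeepalive = 15 line in the client config shown later):

# Run on the peer behind NAT: send a keepalive every 15 seconds
# so the NAT/firewall mapping stays valid
wg set wg0 peer SERVER_PUBLIC_KEY persistent-keepalive 15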

------------------------------------------------------------------------------------------------------------

Problem [2]: Your VPN throughput/speed test shows a high data transfer rate and all other network usage is fine, but only the RTSP video stream is delayed.

Solution: RTSP uses UDP to transfer video data, and this problem is caused by a difference between the MTU (maximum transmission unit) configured on your RTSP source server and WireGuard's default setting.

Run the command below in Cmd to check your computer's MTU configuration (Windows):

netsh interface ipv4 show subinterfaces
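On Linux, the same information is in the interface summary (assuming the interface is named eth0):

# The mtu value is printed on the first line of output
ip link show eth0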

Run Wireshark to check your RTSP server/client maximum packet size:
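If a full Wireshark session is overkill, tshark (Wireshark's command-line tool) can print packet sizes directly. A sketch, assuming capture interface eth0 and the standard RTSP control port 554 (the RTP media itself may use other UDP ports):

# Print the length of every captured frame on the RTSP port
tshark -i eth0 -f "port 554" -T fields -e frame.len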

From these checks we can find:

Server sending MTU config: 1500 Bytes

WireGuard UDP MTU default: 1420 Bytes

RTSP UDP config: 1414 Bytes

The default MTU of WireGuard is 1420 bytes, compared with other devices where the usual size is 1492 or 1500. This causes any device that thinks it is sending a full packet to WireGuard to actually generate more than one WireGuard packet, because the packet is split in two, the second one almost empty. Since the dominant cost in TCP/IP is the number of packets (each requires synchronization and acknowledgement), this slows down all communication.

The solution is to set WireGuard to an MTU that matches the rest of the network. On PPPoE connections the maximum MTU is generally 1492 instead of the widely used 1500, so WireGuard's default MTU of 1420 needs to be corrected; in this setup it is raised to 1500+.
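One way to verify which packet sizes actually traverse the tunnel unfragmented is a ping probe with the don't-fragment flag. A Linux sketch, assuming 10.88.88.1 is the server's tunnel address (1392 bytes of payload + 20-byte IP header + 8-byte ICMP header = the 1420-byte default MTU):

# Should succeed at the default WireGuard MTU; raise -s until it fails
ping -M do -s 1392 -c 3 10.88.88.1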

Adding the iptables rule below dramatically improved performance, from unusable for web browsing to smooth streaming video:

iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu 

Add it on PostUp in the client configuration to fix the remaining issues. With it, the client clamps the TCP MSS to the path MTU, so the server uses the correct packet size when sending to it.
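Once the tunnel is up you can confirm the clamping rule was installed; the TCPMSS target should appear in the FORWARD chain:

# List FORWARD rules with packet counters and filter for the clamp
sudo iptables -L FORWARD -n -v | grep TCPMSS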

The WireGuard client config file in detail (we use MTU = 1512 bytes):

[Interface]
PrivateKey = CLIENT_PRIVATE_KEY
Address = 10.88.88.2/24
MTU = 1512
# Route to the server outside the tunnel, accept return traffic, and clamp TCP MSS
PostUp = ip route add SERVER_PUBLIC_IP/32 via 192.168.1.1 dev eth0; iptables -A FORWARD -i wg0 -m state --state RELATED,ESTABLISHED -j ACCEPT; iptables -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
# Undo the route and firewall rules when the tunnel goes down
PostDown = ip route del SERVER_PUBLIC_IP/32 via 192.168.1.1 dev eth0; iptables -D FORWARD -i wg0 -m state --state RELATED,ESTABLISHED -j ACCEPT; iptables -D FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

[Peer]
PublicKey = SERVER_PUBLIC_KEY
PresharedKey = PRESHARED_KEY
Endpoint = SERVER_PUBLIC_IP:51820
AllowedIPs = 0.0.0.0/0
# Keep the NAT/firewall mapping alive (see Problem [1])
PersistentKeepalive = 15

Then the video delay problem will be fixed.
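To apply the config, assuming it is saved as /etc/wireguard/wg0.conf:

# Bring the tunnel up, check handshakes and transfer counters, bring it down
sudo wg-quick up wg0
sudo wg show wg0
sudo wg-quick down wg0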

Note: This post is republished here with the authorization of Yuancheng Liu, Senior Security Development Engineer at Trustwave. The original post is here.
