Tunnel Ethernet Traffic Over NDN

[This post is a repost from https://yoursunny.com/t/2017/tunnel-Ethernet-over-NDN/ written by NDN developer Junxiao Shi]

Named Data Networking (NDN) is a common network protocol for all applications and network environments. NDN’s network layer protocol runs on top of a best-effort packet delivery service, which includes physical channels such as Ethernet wires, and logical connections such as UDP or TCP tunnels over the existing Internet. Using this underlying connectivity, NDN provides a content retrieval service, which allows applications to fetch uniquely named “Data packets”, each carrying a piece of data. The “data” could be practically anything: text file chunks, video frames, temperature sensor readings … they are all data. Likewise, a packet of a lower-layer network protocol, such as an Ethernet frame, is also a piece of data. Therefore, it should be possible to encapsulate Ethernet traffic into NDN Data packets, and to establish a Virtual Private Network (VPN) through NDN communication. This post describes the architecture of a proof-of-concept Ethernet-over-NDN tunneling program, and shows a simple performance benchmark over the real-world Internet.

The Program

tap-tunnel creates an Ethernet tunnel between two nodes using NDN communication. Each node runs an instance of tap-tunnel.
This program collects packets sent into a TAP interface and turns them into NDN packets. It gains NDN connectivity by connecting to the local NDN Forwarding Daemon (NFD). The diagram below shows the overall architecture:

IP app                                              IP app
  |                                                    |
IP stack    tap-tunnel              tap-tunnel    IP stack
  |         /        \              /        \         |
TAP interface       NFD            NFD       TAP interface
                     |              |
                    UDP            UDP
                     |              |
                    global NDN testbed

tap-tunnel requires existing NDN reachability between the two nodes. Each node must have a globally routable NDN prefix, so that a tap-tunnel instance can send an Interest to the other instance. This reachability can be established with auto prefix propagation.

The tap-tunnel program contains four components, which fit together as sketched after this list:

  • A TAP adapter sends and receives Ethernet frames to and from the TAP interface.
  • A payload queue buffers Ethernet frames received from the TAP interface, to be sent as NDN packets.
  • A consumer sends NDN Interests.
  • A producer sends NDN Data in reply to Interests.
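
To make the data flow concrete, here is a minimal sketch of how these components could be wired together. It is written in Go purely for illustration (it is not tap-tunnel’s actual code, and tap-tunnel’s implementation language may differ); all names are hypothetical, and the real TAP and NDN I/O is replaced by plain channels.

    // Illustrative sketch only; not tap-tunnel's actual code.
    package main

    import "fmt"

    type Frame []byte // one Ethernet frame

    func main() {
        fromTap := make(chan Frame)          // TAP adapter -> payload queue
        payloadQueue := make(chan Frame, 64) // frames buffered for NDN transmission

        // TAP adapter, read side: frames written into tap0 by the kernel are
        // collected and buffered in the payload queue.
        go func() {
            for frame := range fromTap {
                payloadQueue <- frame
            }
        }()

        // The consumer and producer (sketched in the following paragraphs) drain
        // the payload queue into Interests or Data, and inject frames received
        // from the remote node back into the TAP interface.
        fromTap <- make(Frame, 98)
        fmt.Printf("queued %d-octet frame for NDN transmission\n", len(<-payloadQueue))
    }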

Most Ethernet frames collected from the TAP interface are sent as the payload of NDN Data packets. Ethernet frames smaller than 100 octets can also be piggybacked on NDN Interest packets in the Exclude field; this is effective for transmitting an Ethernet frame containing a TCP acknowledgement. In either case, the recipient injects the received Ethernet frame into the TAP interface at its end of the tunnel.
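
The size-based choice between the two carriers can be summarized in a few lines. The sketch below is illustrative Go, not tap-tunnel’s code; the 100-octet threshold comes from the description above, and the actual encoding into the Exclude field or the Data payload is elided.

    // Illustrative sketch only; the encoding details are elided.
    package main

    import "fmt"

    const piggybackThreshold = 100 // octets; smaller frames ride on Interests

    // classify reports how an outgoing Ethernet frame would be carried.
    func classify(frame []byte) string {
        if len(frame) < piggybackThreshold {
            return "piggyback in the Exclude field of an outgoing Interest"
        }
        return "send as the payload of a Data answering a pending Interest"
    }

    func main() {
        tcpAck := make([]byte, 54)      // a bare TCP acknowledgement is tiny
        fullFrame := make([]byte, 1514) // a full-size Ethernet frame is not
        fmt.Println(classify(tcpAck))
        fmt.Println(classify(fullFrame))
    }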

Interest names are simply the routable prefix of the remote node, followed by an increasing sequence number. The consumer keeps 30 Interests outstanding.
Whenever an Interest is satisfied, Nacked, or timed out, a new Interest is sent to replace it. Having multiple outstanding Interests allows the producer to send back many Ethernet frames at once.
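
Here is a minimal sketch of this window logic, again in illustrative Go rather than tap-tunnel’s code: express() is a hypothetical stand-in for sending an Interest such as /vpn-server/42 and waiting for Data, a Nack, or a timeout, and the demo is bounded so that it terminates.

    // Illustrative sketch only; express() is a hypothetical stand-in.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    const window = 30 // number of outstanding Interests

    // express simulates sending /vpn-server/<seq> and waiting for Data,
    // a Nack, or a timeout; the channel reports which Interest completed.
    func express(seq uint64, done chan<- uint64) {
        time.Sleep(time.Duration(rand.Intn(400)) * time.Millisecond)
        done <- seq
    }

    func main() {
        done := make(chan uint64)
        var seq uint64
        for ; seq < window; seq++ { // fill the initial window
            go express(seq, done)
        }
        for i := 0; i < 100; i++ { // each completion triggers a replacement
            finished := <-done
            fmt.Printf("/vpn-server/%d completed; expressing /vpn-server/%d\n", finished, seq)
            go express(seq, done)
            seq++
        }
    }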

When the producer receives an Interest, if the payload queue is not empty, it replies with a Data carrying an Ethernet frame right away. Otherwise, the Interest is kept in a queue. If a new Ethernet frame is collected within the next 2 seconds, a Data is generated in reply to a queued Interest. Queuing incoming Interests allows an Ethernet frame to be sent immediately, without waiting for a new incoming Interest.

If there is nothing to send within the 2-second period, the producer responds with an empty Data. I chose to send an empty Data, instead of sending a network Nack or not responding at all, because the empty Data informs the network that the producer is still alive. If a network Nack were sent or the Interest were left unanswered, forwarding strategies on network nodes might treat this as a signal of link failure and start exploring alternate paths, leading to higher overhead and worse performance.
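
The producer’s behavior can be sketched with two queues and a deadline. As before, this is illustrative Go rather than tap-tunnel’s code: reply() is a hypothetical stand-in for sending a Data packet, and channels play the roles of the Interest queue and the payload queue.

    // Illustrative sketch only; reply() is a hypothetical stand-in.
    package main

    import (
        "fmt"
        "time"
    )

    const holdTime = 2 * time.Second // how long to hold an unanswered Interest

    // reply stands in for sending a Data packet; a nil frame means empty Data.
    func reply(interest string, frame []byte) {
        if frame == nil {
            fmt.Println(interest, "-> empty Data (producer is alive)")
            return
        }
        fmt.Printf("%s -> Data carrying a %d-octet frame\n", interest, len(frame))
    }

    // producer answers each Interest from a queued frame if one is waiting,
    // and otherwise holds the Interest for up to holdTime.
    func producer(interests <-chan string, frames <-chan []byte) {
        for interest := range interests {
            select {
            case frame := <-frames: // a frame is already queued: answer now
                reply(interest, frame)
            default: // nothing queued: hold the Interest
                select {
                case frame := <-frames:
                    reply(interest, frame)
                case <-time.After(holdTime):
                    reply(interest, nil) // deadline passed: send empty Data
                }
            }
        }
    }

    func main() {
        interests := make(chan string)
        frames := make(chan []byte, 64)
        go producer(interests, frames)

        frames <- make([]byte, 1200) // a frame is waiting: answered immediately
        interests <- "/vpn-client/0"
        interests <- "/vpn-client/1" // nothing queued: empty Data after 2 seconds
        time.Sleep(3 * time.Second)
    }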

Make it a VPN

An Ethernet tunnel over NDN is just like any other logical Ethernet link. It is possible to make a VPN out of it by configuring IP forwarding.

To make a VPN server, I need to enable IP forwarding, and create an iptables rule to provide NAT for traffic received from the VPN client:

    sudo ip tuntap add dev tap0 mode tap user $(id -u)
    sudo ip link set tap0 up
    sudo ip addr add 192.168.41.0/31 dev tap0
    tap-tunnel -l /vpn-server -r /vpn-client -i tap0 --outstandings 30 --lifetime 6000 --payloads 24 --ansdlr 2000

    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -s 192.168.41.0/31 -j SNAT --to-source 10.0.2.15

To make a VPN client, I need to add a default route toward the VPN server via the TAP interface, but ensure that the UDP tunnel toward the NDN testbed router (219.223.222.5) still goes through the original default gateway (10.0.2.2):

    sudo ip tuntap add dev tap0 mode tap user $(id -u)
    sudo ip link set tap0 up
    sudo ip addr add 192.168.41.1/31 dev tap0
    ./tap-tunnel -l /vpn-client -r /vpn-server -i tap0 --outstandings 30 --lifetime 6000 --payloads 24 --ansdlr 2000

    sudo ip route add 219.223.222.5/32 via 10.0.2.2
    sudo ip route add 0.0.0.0/0 via 192.168.41.0

Performance Benchmark

I tested the program over the global NDN testbed. The VPN server is an Ubuntu 14.04 virtual machine that connects to a router in Arizona, USA; it resides in the same building as the router. The VPN client is an Ubuntu 16.04 virtual machine on a laptop located in Shanghai, China; it uses a wired network and connects to a router in Shenzhen, China. The tests were performed on Sep 05, 2017, around 8AM UTC.

The first table shows ping round-trip times in milliseconds. I compared direct IP ping, IP ping tunneled over NDN, and ndnping. The command line for IP ping was ping -i 0.2 -c 100 -s 8 co1.securedragon.net (8-octet payload) or ping -i 0.2 -c 100 -s 1000 co1.securedragon.net (1000-octet payload); the destination host, co1.securedragon.net, is about 32ms away from the VPN server. The command line for ndnping was ndnping -i 200 -c 100 /vpn-server; the destination is the VPN server itself.

scenario                        trial-0  trial-1  trial-2  trial-3  trial-4      avg  stdev
IP, 8-octet payload             209.650  208.953  209.168  209.293  209.024  209.218  0.275
tap-tunnel, 8-octet payload     404.242  404.756  404.931  403.498  405.155  404.516  0.661
IP, 1000-octet payload          228.566  229.045  228.864  229.207  229.233  228.983  0.276
tap-tunnel, 1000-octet payload  410.837  416.393  417.800  412.144  411.818  413.798  3.089
ndnping                         346.223  346.527  346.274  346.142  346.810  346.395  0.273

From this table, I can see that while IP ping over tap-tunnel has a much larger round-trip time than direct IP ping, most of the increase can be attributed to delay in the NDN network itself, as observed with ndnping, and the additional delay caused by tap-tunnel is less than 35ms. For the 8-octet payload, for example, subtracting the 346.4ms ndnping round-trip time and the roughly 32ms between the VPN server and the destination from the 404.5ms tap-tunnel round-trip time leaves about 26ms of tunnel overhead.

The number of packets lost, out of 100 pings, is shown in the second table:

scenario                        trial-0  trial-1  trial-2  trial-3  trial-4
IP, 8-octet payload                   0        0        0        0        0
tap-tunnel, 8-octet payload           1        1        0        1        2
IP, 1000-octet payload                0        1        0        0        0
tap-tunnel, 1000-octet payload        7        7        4        4        3
ndnping                               0        2        3        3        4

This table illustrates a drawback of Ethernet-over-NDN tunneling: higher packet loss. Most notably, the packet loss of tap-tunnel with 1000-octet payload is much higher than in the other tests. One probable cause is that large ICMP packets cannot be piggybacked onto NDN Interests, so each must be carried in its own Data packet.

I also tested HTTP download from the same Colorado server, co1.securedragon.net. The third table reports completion times, in seconds, for downloading a 1MB file, as measured by the command curl -w '%{time_total}' -r 0-1048576 http://co1.securedragon.net/100MB.test.

scenario     trial-0  trial-1  trial-2  trial-3  trial-4     avg  stdev
IP             3.594    3.180    4.568    4.094    5.420   4.171  0.872
tap-tunnel    75.931   70.461   77.243   76.018   95.893  79.109  9.742

I can see that downloading over tap-tunnel is about 19 times slower than a direct download. This is most likely caused by packet losses, which make TCP slow down the data transfer significantly.

Conclusion

This post presents tap-tunnel, a proof-of-concept implementation of tunneling Ethernet traffic over NDN Interests and Data. Combined with IP routing setup, tap-tunnel can be used to establish VPN-like connectivity. Benchmarks on the global NDN testbed and the real-world Internet indicate that tap-tunnel achieves fairly good ping round-trip times, but TCP applications may suffer from packet loss and perform poorly.

–Junxiao Shi
