TUNNELING APPLETALK THROUGH IP

Why tunnel AppleTalk?

On February 19, 2002, IT at Northwestern University stopped routing AppleTalk across subnets. Our departmental network is spread over five subnets (165.124.[102|221|222|225|253].0), with Macs and other devices that use AppleTalk in all five subnets. NU IT has recommended moving all AppleTalk services to TCP/IP. To aid this process, Julian Koh has created a PowerPoint presentation that describes migration strategies, along with their limitations. For convenience, an HTML version of Julian's presentation is also available.

Many of these strategies are difficult to implement in the context of the Microbiology-Immunology department for the following reasons:

Given these limitations, Ryan Kappes and I decided that one solution would be to tunnel AppleTalk through TCP/IP, and thereby route AppleTalk between the five subnets our department spans. This is how Cayman GatorBoxes routed AppleTalk between networks; unfortunately, Netopia dropped the GatorBox line when it bought Cayman. The closest equivalent product Netopia sells is the R9100 DSL router with an AppleTalk kit upgrade. It retails for about $550, can only connect to one other network, and is therefore inappropriate for our needs.

There is also a Macintosh-based approach to tunneling AppleTalk between networks, using MacUAR (Macintosh Unix AppleTalk Router), a product from the University of Melbourne. MacUAR and UAR both tunnel AppleTalk through UDP. We did test UAR; although it is slow and somewhat unreliable, it does work. However, neither MacUAR nor UAR is still being developed or supported, and my attempts to contact the author at the University of Melbourne have not succeeded.

So what is described below is a home-grown AppleTalk tunneling solution that I put together largely from existing software. Ryan and I implemented it together, and Ryan maintains it. I have described our setup below, as it may be useful for anyone else who needs to tunnel AppleTalk between networks.

My solution uses Linux, although it should work with any modern Unix-like OS (FreeBSD, NetBSD, OpenBSD, Solaris, Mac OS X). I picked Linux simply because of my familiarity with it, and because it runs on very cheap hardware. For hardware, we purchased five identical, headless, off-the-shelf 733 MHz Pentium III computers (Dream NetServer IDE), with 256 MB of RAM and 3Com 3c905 Tornado NICs, for about $400 apiece. Slackware Linux 8.0 was loaded as the operating system on one of them. Several updates were applied to meet our requirements, including a new kernel, 2.4.18-ac3. This kernel was compiled to support the TUN/TAP interface as well as AppleTalk; the former was compiled as a module, while AppleTalk was built into the kernel. When we were satisfied with the Slackware installation on the first PC, Ryan used pcopy to make four copies of the hard drive, one for each of the other four PCs. Once the PCs were ready, we proceeded to set up the tunneling solution described below.

Part I: Setting up the Linux TUN/TAP interface and Vtund

Linux kernels 2.2.x and 2.4.x support a universal TUN/TAP device driver that is also available under Solaris, FreeBSD, and Mac OS X. TUN is a virtual point-to-point network device that can be used to route IP. TAP is a virtual ethernet network device that provides ethernet frame reception and transmission for user-space programs. Many user-space programs can use the TUN or TAP devices to create virtual private networks, including vtund, tincd, OpenVPN, and yavipind. Of these, yavipind is probably the most secure, but vtund is the easiest to set up, so that's what I used. The basic idea of tunneling is that ethernet frames received by the virtual tapX devices are encapsulated within TCP/IP packets and transmitted via the physical ethX device. When combined with atalkd from the Netatalk package, it is relatively straightforward to build a reliable, encrypted UDP or TCP tunnel between two or more networks. Generally speaking, it is advisable to use a UDP tunnel rather than a TCP tunnel when carrying ethernet frames or UDP packets, for reasons described by Olaf Titz. The Datagram Delivery Protocol (DDP) used by AppleTalk is conceptually similar to UDP, so it is probably better to tunnel DDP via UDP, especially considering that the network connections between our five Linux PCs are very reliable. Because DDP has no retransmission timers to conflict with TCP's, TCP tunnels should work as well, albeit with slightly greater overhead.

To get support for /dev/tunXX and /dev/tapXX, compile a kernel with TUN/TAP support. The TUN/TAP driver can be built as a kernel module. If you do so, add the line:

alias char-major-10-200 tun

to /etc/modules.conf, and run `depmod -a`. This allows the module tun.o to be loaded automatically when vtund needs it. We use vtund to create tap0, tap1, tap2, and tap3 interfaces on one PC (Linux-PC1), while only a tap0 interface is created on each of the other four PCs. On the first PC (which forms the center of this routing hub), create the devices /dev/tap0, /dev/tap1, /dev/tap2, /dev/tap3, and /dev/net/tun, using these commands:

mknod -m 660 /dev/tap0 c 36 16
mknod -m 660 /dev/tap1 c 36 17
mknod -m 660 /dev/tap2 c 36 18
mknod -m 660 /dev/tap3 c 36 19
mknod -m 660 /dev/net/tun c 10 200

On the other four PCs, only the devices /dev/tap0 and /dev/net/tun need to be created. With the default Slackware Linux 8.0 installation, these devices should already be present.

Compile and install vtund. There is extensive documentation at the vtund site, as well as linked sites that explain the fundamentals of virtual tunneling. I built vtund with support for encryption using OpenSSL and compression using the Lempel-Ziv-Oberhumer (LZO) library.
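
For reference, a sketch of the build, assuming a standard vtun source tarball; the version number and configure switches below are illustrative, not taken from our installation, so check ./configure --help for the options your release actually supports:

```shell
# Build sketch only -- the configure switch names are assumptions.
tar xzf vtun-2.5.tar.gz
cd vtun-2.5
./configure --with-ssl --with-lzo   # OpenSSL encryption + LZO compression
make
make install                        # installs vtund under /usr/local by default
```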

After installing vtund, virtual tunnels were established between the five Linux PCs, using the vtund configuration files (/usr/local/etc/vtund.conf) shown below. The two commented sections, "options" and "default", are common to all five vtund configuration files.

Common section:

options {
 port           5000;               # bind port 5000
 timeout        60;                 # timeout 1 minute
 ifconfig       /sbin/ifconfig;     # path to ifconfig
}

default {
 proto          udp;        # use UDP
 encrypt        no;         # no encryption
 compress       lzo:2;      # level 2 compression using LZO
 stat           yes;        # record stats to /usr/local/var/log/vtund/tunnel_name
 keepalive      yes;        # enable connection keepalive
 multi          killold;    # kill an old connection before allowing a new one
 speed          0;          # disable traffic shaping
}

Configuration sections specific to each of the Linux PCs are shown below. A total of four tunnels were established; Linux-PC1 functions as a server for Linux-PC2, PC3, PC4, and PC5. The relationships are delineated more clearly below:

Tunnel 1: Linux-PC1 (server)    Linux-PC2 (client)
Tunnel 2: Linux-PC1 (server)    Linux-PC3 (client)
Tunnel 3: Linux-PC1 (server)    Linux-PC4 (client)
Tunnel 4: Linux-PC1 (server)    Linux-PC5 (client)

Linux-PC1 specific:

# An MTU of 650 is more than sufficient for DDP packets
# Server for PC2
t1-2 {
 password secret;
 type ether;
 device tap0;
 persist yes;
 up {
        ifconfig "%% 192.168.0.221 pointopoint 192.168.0.253 mtu 650";
    };
 down {
        ifconfig "%% down";
    };
}
#
# Server for PC3
t1-3 {
 password secret;
 type ether;
 device tap1;
 persist yes;
 up { 
        ifconfig "%% 192.168.1.222 pointopoint 192.168.1.253 mtu 650";
    };
 down {
        ifconfig "%% down";
    };
}
#
# Server for PC4
t1-4 {
 password secret;
 type ether;
 device tap2;
 persist yes;
 up {
        ifconfig "%% 192.168.2.102 pointopoint 192.168.2.253 mtu 650";
    };
 down {
        ifconfig "%% down";
    };
}
#
# Server for PC5
t1-5 {
 password secret;
 type ether;
 device tap3;
 persist yes;
 up {
        ifconfig "%% 192.168.3.225 pointopoint 192.168.3.253 mtu 650";
    };
 down {
        ifconfig "%% down";
    };
}

Linux-PC2 specific:

# client PC2 connecting to server PC1
t1-2 {
 password secret;
 type ether;
 persist yes;
 up {
        ifconfig "%% 192.168.0.253 pointopoint 192.168.0.221 mtu 650";
 };
 down {
        ifconfig "%% down";
 };
}

Linux-PC3 specific:

# client PC3 connecting to server PC1
t1-3 {
 password secret;
 type ether;
 persist yes;
 up {
        ifconfig "%% 192.168.1.253 pointopoint 192.168.1.222 mtu 650"; 
 };
 down {
        ifconfig "%% down";
 };
}

Linux-PC4 specific:

# client PC4 connecting to server PC1
t1-4 {
 password secret;
 type ether;
 persist yes;
 up {
        ifconfig "%% 192.168.2.253 pointopoint 192.168.2.102 mtu 650"; 
 };
 down {
        ifconfig "%% down";
 };
}

Linux-PC5 specific:

# client PC5 connecting to server PC1
t1-5 {
 password secret;
 type ether;
 persist yes;
 up {
        ifconfig "%% 192.168.3.253 pointopoint 192.168.3.225 mtu 650"; 
 };
 down {
        ifconfig "%% down";
 };
}

vtund is invoked differently on the five PCs. On PC1, it is invoked solely as a server. On PC2, PC3, PC4, and PC5 it is invoked solely as a client, as detailed below:

Linux-PC1:

        vtund -s

Linux-PC2:

        vtund t1-2 real-ip-address-of-PC1 (or FQDN)

Linux-PC3:

        vtund t1-3 real-ip-address-of-PC1 (or FQDN)

Linux-PC4:

        vtund t1-4 real-ip-address-of-PC1 (or FQDN)

Linux-PC5:

        vtund t1-5 real-ip-address-of-PC1 (or FQDN)

Because vtund is invoked as a server on PC1, configure the firewall on this PC so that only the four IP addresses of Linux-PC2, Linux-PC3, Linux-PC4, and Linux-PC5 have access to TCP and UDP port 5000.
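
With the 2.4 kernel's netfilter, such a restriction can be sketched along these lines (the four peer addresses are placeholders, not our real ones):

```shell
# Placeholder addresses -- substitute the real IPs of Linux-PC2 through PC5.
for peer in 10.0.2.2 10.0.3.2 10.0.4.2 10.0.5.2; do
    iptables -A INPUT -p tcp -s "$peer" --dport 5000 -j ACCEPT
    iptables -A INPUT -p udp -s "$peer" --dport 5000 -j ACCEPT
done
# Refuse port 5000 to everyone else
iptables -A INPUT -p tcp --dport 5000 -j DROP
iptables -A INPUT -p udp --dport 5000 -j DROP
```

Both protocols are opened because vtund negotiates the connection over TCP even when the tunnel itself carries data over UDP.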

Once this is done, ifconfig can be used to verify that each PC has the real interface eth0, and that the virtual interfaces are configured correctly. PC1 will have four virtual interfaces (tap0, tap1, tap2, tap3), while the other four PCs will each have a single virtual interface (tap0). Ping can be used to check that all the interfaces are alive and routing IP packets (well, ICMP anyway) correctly.

Part-II: Routing AppleTalk over the virtual tunnels

Perhaps the simplest way to "route" AppleTalk would be to bridge the eth0, tap0, tap1, tap2, and tap3 interfaces on Linux-PC1, and the eth0 and tap0 interfaces on the other four PCs. Linux kernels 2.2 and 2.4 support bridging, and bridging utilities can be downloaded from the Linux Ethernet Bridging site. While bridging will work, promiscuously passing all ethernet frames between five subnets would likely disable portions of the network. As a possible workaround, bridging filters can be used to pass only Phase 2 EtherTalk frames.

A safer and more reasonable approach is to use atalkd to route AppleTalk over the TAP tunnels set up by vtund. To do so, compile and install netatalk. At the time of writing, the most recent release is 1.5.3.1, available from SourceForge. When running the configure script in the top-level directory of the netatalk source tree, be sure not to use the option "--disable-ddp". By default, all netatalk configuration files are installed in /usr/local/etc/netatalk. The file "atalkd.conf" in this directory was edited on each of PC1, PC2, PC3, PC4, and PC5.
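
The netatalk build itself is unremarkable; a sketch, assuming the default installation prefix:

```shell
# Defaults put binaries under /usr/local and configuration files in
# /usr/local/etc/netatalk.
tar xzf netatalk-1.5.3.1.tar.gz
cd netatalk-1.5.3.1
./configure     # leave DDP enabled: do NOT pass --disable-ddp
make
make install
```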

atalkd.conf for PC1:

eth0 -seed -phase 2 -net 2253 -addr 2253.102 -zone "Microbio-Immun"
tap0 -seed -phase 2 -net 60000 -addr 60000.253 -zone "Microbio-Immun"
tap1 -seed -phase 2 -net 60001 -addr 60001.253 -zone "Microbio-Immun"
tap2 -seed -phase 2 -net 60002 -addr 60002.253 -zone "Microbio-Immun"
tap3 -seed -phase 2 -net 60003 -addr 60003.253 -zone "Microbio-Immun"

atalkd.conf for PC2:

eth0 -seed -phase 2 -net 2221 -addr 2221.102 -zone "Microbio-Immun"
tap0 -phase 2 -zone "Microbio-Immun"

atalkd.conf for PC3:

eth0 -seed -phase 2 -net 2222 -addr 2222.102 -zone "Microbio-Immun"
tap0 -phase 2 -zone "Microbio-Immun"

atalkd.conf for PC4:

eth0 -seed -phase 2 -net 2102 -addr 2102.102 -zone "Microbio-Immun"
tap0 -phase 2 -zone "Microbio-Immun"

atalkd.conf for PC5:

eth0 -seed -phase 2 -net 2225 -addr 2225.102 -zone "Microbio-Immun"
tap0 -phase 2 -zone "Microbio-Immun"

Note that atalkd is configured as an AppleTalk seed router on the eth0 interface of all five PCs. This ensures that other AppleTalk devices on those networks (2253, 2221, 2222, 2102, 2225) will have a network number that can be routed through the tunnel. On PC1, atalkd is configured as a seed router on the tap0, tap1, tap2, and tap3 interfaces as well. Any network segment needs only a single seed router; thus, on PC2, PC3, PC4, and PC5, atalkd is not configured as a seed router on the tap0 interface. The atalkd.conf files were made read-only (chmod 400) to prevent atalkd from re-writing them!

Finally, atalkd was started in the same order that the virtual tunnels were started, i.e. beginning with PC1 and ending with PC5. Atalkd is started with the command:

/usr/local/sbin/atalkd -f /usr/local/etc/netatalk/atalkd.conf 

After a minute or two, atalkd was started on PC2, PC3, PC4, and finally PC5, in sequence. Once atalkd was running on all five PCs, ifconfig was used to ensure that the correct EtherTalk (DDP) address was assigned to the eth0 and tapX interfaces on each PC. The aecho utility was used to verify that AppleTalk packets were being routed appropriately.

In the atalkd setup described above, it is important that the vtun tunnels be started in the appropriate sequence, and that atalkd also be started in the correct sequence on each PC. I spent some time working through this, and came up with a simple solution that makes use of expect and ssh. Briefly, here is what I do. The computers can be booted in any order. A script run from rc.local on PC1 uses ping to determine when PC2, PC3, PC4, and PC5 have booted. It then uses expect to control ssh, running scripts remotely on PC2, PC3, PC4, and PC5 that start vtun tunnels between each of these computers and PC1. If the tunnels start successfully, the script starts atalkd on PC1, and then uses expect+ssh to start atalkd on PC2, PC3, PC4, and PC5. While this works, letting expect control ssh is not very secure. In my defence, these five PCs have no users other than root, the only daemons running are sshd and vtund, and all five use very restrictive netfilter firewall rules.
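
That sequence can be outlined in plain shell as follows. This is a simplified sketch, not the actual script: the hostnames, paths, function names, and the PC1 address (pc1.example.edu) are hypothetical stand-ins, and the real setup drives ssh through expect to supply credentials.

```shell
#!/bin/sh
# Simplified outline of the PC1 boot-time sequence; hostnames, paths, and
# pc1.example.edu are hypothetical stand-ins.
CLIENTS="linux-pc2 linux-pc3 linux-pc4 linux-pc5"
VTUND=/usr/local/sbin/vtund
ATALKD=/usr/local/sbin/atalkd
CONF=/usr/local/etc/netatalk/atalkd.conf
SETTLE=120      # seconds to let atalkd settle on PC1 before the clients start

wait_for_host() {
    # Poll with ping until the host has finished booting
    until ping -c 1 "$1" >/dev/null 2>&1; do
        sleep 10
    done
}

start_all() {
    "$VTUND" -s                                     # vtund server on PC1
    for host in $CLIENTS; do
        wait_for_host "$host"
        # linux-pc2 runs "vtund t1-2 pc1.example.edu", and so on
        ssh "root@$host" "$VTUND t1-${host#linux-pc} pc1.example.edu" || return 1
    done
    "$ATALKD" -f "$CONF"                            # seed router on PC1 first
    sleep "$SETTLE"
    for host in $CLIENTS; do
        ssh "root@$host" "$ATALKD -f $CONF" || return 1
    done
}
```

start_all would be invoked from rc.local; it returns non-zero if any remote step fails, so the caller can log the failure or retry.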

Another problem is that a transient loss of network connectivity causes the vtun tunnels to go down. If we were routing IP, this would be easily remedied, because vtund can be configured to run ifconfig to re-establish the appropriate routes. It is not so simple with atalkd -- I don't know of any way to add an interface and a network to an already-running atalkd. So, another hack: a cron script on PC1 checks every so often that the AppleTalk networks 60000, 60001, 60002, and 60003 are alive. If any of them are dead, the script shuts down atalkd and vtund on PC1, PC2, PC3, PC4, and PC5. It then restarts the vtun tunnels between PC2, PC3, PC4, PC5 and PC1, and reloads atalkd in the sequence described above.
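
That cron check can be sketched like this. The restart routine is reduced to a stub, and the function names are my own, not the actual script's; the .253 node numbers come from the atalkd.conf for PC1 shown earlier.

```shell
#!/bin/sh
# Periodic health check for the tunneled AppleTalk networks (sketch).
TAP_NETS="60000 60001 60002 60003"

net_alive() {
    # PC1 holds node .253 on each tap network; probe it with AppleTalk echo
    aecho -c 2 "$1.253" >/dev/null 2>&1
}

restart_everything() {
    # Stub: the real script shuts down atalkd and vtund on PC1-PC5, restarts
    # the vtun tunnels, and reloads atalkd in the usual order.
    echo "AppleTalk net $1 is down; restarting tunnels and atalkd" >&2
}

check_nets() {
    for net in $TAP_NETS; do
        if ! net_alive "$net"; then
            restart_everything "$net"
            return 1
        fi
    done
    return 0
}
```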

A Kludge: Getting an ASIP server to appear in any zone

This section has nothing to do with tunneling, but describes a little hack that can be used to make an ASIP server appear in a desired zone. NU Academic Technologies distributes software updates through a server "Plato" that appears in the "Plato" zone. Plato runs AppleShare IP version 5.0 (or later), and therefore can be reached via TCP/IP even after AppleTalk routing is terminated. That said, it would still be more convenient if Plato could be reached via the Chooser. Here's a way to make that happen, assuming a Linux PC running netatalk is available: use the "-proxy" and "-ipaddr" directives in afpd.conf. In this particular example, plato.at.northwestern.edu has the IP address 129.105.110.15. An appropriate configuration in afpd.conf would be:

"Plato (NU AT)" -transall -proxy -uamlist "" -ipaddr 129.105.110.15

As many afpd proxy servers as desired can be set up using this method. I had a much more complicated way of doing this, using a TCP redirector, until Thomas Kaiser pointed out the much simpler solution described above.

Acknowledgements, Final Comments and Useful Links:

Johnn Nekman's suggestions were helpful in getting me started with vtund. Julian Koh's suggestions on AppleTalk routing were insightful. I thank Thomas Kaiser for his suggestions on using an afpd proxy. On the evening of February 6, 2002, Julian turned off AppleTalk routing between our subnets between 6:30 pm and 8:30 pm. Ryan Kappes had previously set up and configured the hardware; between 6:30 and 7:45 he and I tested this approach to routing AppleTalk between our five subnets. Network performance was good, but slower than native AppleTalk. This is only to be expected, given the overhead associated with tunneling.

To measure the overhead created by tunneling, I created a 100 MB file on my G4 running OS X, then mounted a DDP-only share from a Linux PC running netatalk in the same subnet as my G4. I timed copying the file from my G4 to the server using time. The average throughput was 750 KB/sec. When the server was moved to any of the other four subnets, the average throughput was 660 KB/sec, so tunneling adds about 15% overhead. This may be an overestimate, because I created the file on my G4 from completely random data (dd if=/dev/random of=testfile bs=4096 count=25600). Completely random data cannot be compressed; in fact, compressing random data will increase its size. Therefore the true overhead is likely much less than 15%. Tests with HELIOS LanTest 2.5.2 indicated an average throughput of 725 KB/sec for native AppleTalk, and 680 KB/sec with tunneled AppleTalk -- about a 6.5% overhead.

vtund is not easy to compromise, but compromise becomes easier if the authentication secret shared by the client and server is weak. Here is one way to generate passwords that are random and difficult to crack:

dd if=/dev/urandom bs=1024 count=1 2>/dev/null | md5sum | cut -d' ' -f1

External links:

Still have questions? Try Google, or comp.protocols.appletalk before emailing me.


Ashok Aiyar
Last modified: Sun Jul 28 18:40:25 CDT 2002