As a Unix admin who deals mostly with Linux, here are the reasons I'd rather use the FreeBSD network stack:

- Primarily, not dealing with iptables

- Relatedly, I love working with ipfw[1] or pf[2]

- Interface names are based on the network driver, which is more consistent and useful (have a question about interface em0? man em[3])

- A sane ifconfig[4] which is, you know, still updated

- Relatedly, no need for a bunch of different/new/inconsistent config programs (ip vs. ifconfig vs. iwconfig)

- CARP[5] is pretty amazing for redundancy and is stupid-simple to set up in FreeBSD 10 (a sample config sketch follows this list)

- Both the OS and the network stack have been tested for IPv6-only operation
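To give a sense of how simple that CARP setup is, here's a minimal sketch of a two-node failover pair on FreeBSD 10. The interface name, VHID, password, and addresses are all illustrative placeholders, not anything from the post:

    # /boot/loader.conf on both nodes: load the CARP kernel module
    carp_load="YES"

    # /etc/rc.conf on the master node (em0, vhid 1, and the addresses
    # are example values)
    ifconfig_em0="inet 192.0.2.11/24"
    ifconfig_em0_alias0="inet vhid 1 pass examplepass alias 192.0.2.1/32"

    # /etc/rc.conf on the backup node: same vhid and password, plus a
    # higher advskew so it loses the master election while the master
    # is alive
    ifconfig_em0="inet 192.0.2.12/24"
    ifconfig_em0_alias0="inet vhid 1 advskew 100 pass examplepass alias 192.0.2.1/32"

Both nodes then answer for the shared address 192.0.2.1, and the backup takes over automatically if the master's advertisements stop.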
I love this. I'll refer back to my post from several months ago about the poor performance of the Linux network stack (https://news.ycombinator.com/item?id=7286584):

> 1. Data Plane Development Kit, which lets you skip the kernel IP stack (which takes thousands of CPU cycles to process) and do packet processing in userland taking just tens to hundreds of cycles per packet. http://dpdk.org/

I wish OS developers saw this as a problem. There is no reason kernel stacks should be so slow for tasks where all processing is done in the kernel. (For packets destined to userspace, you've got the syscall overhead to deal with.)

I recently tested the Linux network stack's PPS performance with an Intel X520 10GbE NIC. I used Debian testing with the 3.12 kernel. My destination machine was an i7-3930K at stock speed. I wrote a simple kernel module adding an NF_IP_PRE_ROUTING hook returning NF_DROP with no processing, which would be the simplest possible code path (a sketch of such a module appears at the end of this comment). For a packet generator, I used another, older machine with another X520, using the "pfsend" tool included with PF_RING, with the card in PF_RING DNA mode. That was easily able to saturate the link at line rate (14.8M PPS).

The result: the kernel was only able to sustain about 2.8M PPS.

I then loaded the DNA driver on the destination machine and used the included "pfcount" tool: no packet drops; it was receiving the full 14.8M PPS.

I tested DPDK recently and had similar results (a minimal receive-and-drop sketch is also at the end of this comment).

I also modified the Linux ixgbe driver's ixgbe_clean_rx_irq() function, adding a step between the "fetch packets from RX ring" and "put packet in an SKB and send to network stack" steps. Even when I added a bunch of useless comparisons for each packet, I was able to get ~12-13M PPS. I could get line rate by just dropping and not doing any processing.

Sidenote (not included in the post above): I also tested FreeBSD. I'm just going off memory here, but it did sustain higher PPS than Linux (same hardware), though not anywhere near line rate. So netmap, PF_RING, DPDK, and others are still the way to go if you want to do line-rate processing.
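For concreteness, here's a minimal sketch of the kind of drop-everything module described above. It's written against the 3.12-era netfilter API (the hook function signature and registration call changed in later kernels), and it's an illustration, not the exact module used for the test:

    /* Minimal "drop everything at PRE_ROUTING" module, sketched for a
     * 3.12-era kernel. Later kernels changed the hook signature and
     * replaced nf_register_hook() with nf_register_net_hook(). */
    #include <linux/module.h>
    #include <linux/netfilter.h>
    #include <linux/netfilter_ipv4.h>

    static unsigned int drop_all(unsigned int hooknum,
                                 struct sk_buff *skb,
                                 const struct net_device *in,
                                 const struct net_device *out,
                                 int (*okfn)(struct sk_buff *))
    {
            return NF_DROP;  /* drop every packet, no processing at all */
    }

    static struct nf_hook_ops drop_ops = {
            .hook     = drop_all,
            .pf       = NFPROTO_IPV4,
            .hooknum  = NF_INET_PRE_ROUTING, /* NF_IP_PRE_ROUTING in old headers */
            .priority = NF_IP_PRI_FIRST,     /* run before any other hook */
    };

    static int __init drop_init(void)
    {
            return nf_register_hook(&drop_ops);
    }

    static void __exit drop_exit(void)
    {
            nf_unregister_hook(&drop_ops);
    }

    module_init(drop_init);
    module_exit(drop_exit);
    MODULE_LICENSE("GPL");

Since the hook drops everything before routing, this measures the cost of just getting a packet from the driver into the stack's entry point.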
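And here's a rough sketch of the DPDK side of a receive-and-drop test. Port and queue setup (rte_eth_dev_configure(), rte_eth_rx_queue_setup(), rte_eth_dev_start()) is elided, and "port 0" is an assumption; the point is just that the RX loop polls the NIC from userspace, with no interrupts, syscalls, or per-packet skb allocation:

    /* Minimal DPDK receive-and-drop loop. Real programs need EAL
     * arguments, mempool creation, and port/queue setup beyond what
     * is shown here; this only sketches the hot loop. */
    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST 32

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* Assumes port 0 has already been configured and started with
         * rte_eth_dev_configure()/rte_eth_rx_queue_setup()/
         * rte_eth_dev_start(); omitted for brevity. */
        struct rte_mbuf *pkts[BURST];

        for (;;) {
            /* Pull a burst of packets straight off the NIC RX ring. */
            uint16_t n = rte_eth_rx_burst(0, 0, pkts, BURST);
            for (uint16_t i = 0; i < n; i++)
                rte_pktmbuf_free(pkts[i]); /* "drop": return mbuf to pool */
        }
        return 0;
    }

This is essentially what pfcount does on the PF_RING side: burst-poll the ring and immediately recycle the buffers, which is why both can count line-rate traffic that the in-kernel path drops.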