For me, it came down to taking the rest of the system as seriously as the kernel. I first installed Linux back in the 0.9 days and it was interesting, but no more so than 386BSD or TSX-32: boot the kernel and spend your time trying to get applications to compile. Fast-forward a year or two to Slackware, where packages installed in a few seconds rather than significant fractions of an hour (yay, 20MHz 386!) because they had binary packaging, and updates were relatively simple, freeing time to write code in this hot new Perl language rather than trying to compile it.

I tried FreeBSD & OpenBSD repeatedly over the years, even running a key company server on OpenBSD for a while in the late 90s / early 2000s - the security was compelling - but I noticed two things:

1. BSD users treated updates like going to the dentist and put them off until forced - not without cause, as ports frequently either broke things or simply spent hours rebuilding the world - whereas Linux users generally spent that time working on their actual job rather than on impromptu sysadmin work. "apt-get update && apt-get upgrade" had by then an established track record of Just Working, and fresh-install time for a complex system was measured in minutes for Debian (IOPS-limited) but in days for FreeBSD, even as late as 2004 or so when we ditched the platform, and even when performed by our resident FreeBSD advocate. I'm sure there are ways to automate it, but while that's routine in the Linux world, I've never met a BSD user in person who actually did it.

2. The *BSD systems were simply less stable, often dramatically so, because the parts were never tested together: you had the kernel, which is stable and deserves significant respect, but everything else was a random hodgepodge of whatever versions happened to be installed the last time someone ran ports. Unlike, say, Debian or Red Hat, there was no culture of testing complete systems, so a few months after a new release you'd often hit the kind of "foo needs libbar 1.2.3 but baaz needs 1.1.9" dependency mess that required time spent troubleshooting and tinkering - a class of problem which simply did not exist at the system level for most of the Linux world. It wasn't as bad as Solaris, but the overall impression was more similar than I'd like.

One other observation: during years of running Linux, FreeBSD, OpenBSD, Solaris / Nexenta / etc. on a number of systems (the most I've managed personally at any one time was around 100), there were almost no cases where the actual kernel mattered significantly in a positive direction. Benchmarks on our servers and cluster compute nodes showed no significant difference, so we went with easier management. On the desktop, again, no significant performance difference, so we went with easier management and better video driver support (eventually the reason many desktop users moved to OS X - no more GL wars). There was a period where a more stable NFS client might have been compelling, but the *BSD and Linux NFS clients both sucked in similar ways (deadlocking most times a packet dropped), the Linux client got better faster, and we ended up automating dead-mount detection with lazy unmounts to reduce the user-visible damage.
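
For the curious, that dead-mount handling was nothing fancy - the sketch below shows the general shape, written in Python for illustration. The mount parsing, timeout value, and cron-style invocation are my assumptions here, not the exact script we ran: probe each NFS mount with an external stat under a timeout, and lazy-unmount anything that hangs.

    #!/usr/bin/env python3
    """Hypothetical sketch: detect hung NFS mounts and lazy-unmount them.

    Assumes Linux /proc/mounts and a periodic (cron-style) invocation.
    """
    import subprocess

    PROBE_TIMEOUT = 5  # seconds before we assume the server is unreachable

    def nfs_mounts():
        """Yield mount points whose filesystem type is NFS."""
        with open("/proc/mounts") as f:
            for line in f:
                device, mountpoint, fstype, *_ = line.split()
                if fstype.startswith("nfs"):
                    yield mountpoint

    def is_hung(mountpoint):
        """Probe the mount with an external stat so a hang can't block us.

        Note: a probe stuck in uninterruptible sleep may outlive the kill;
        this is a sketch, not a hardened implementation.
        """
        try:
            subprocess.run(["stat", "-f", mountpoint],
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL,
                           timeout=PROBE_TIMEOUT)
            return False
        except subprocess.TimeoutExpired:
            return True

    if __name__ == "__main__":
        for mp in nfs_mounts():
            if is_hung(mp):
                # Lazy unmount: detach now, clean up once nothing references
                # the mount, so processes stop blocking on the dead server.
                subprocess.run(["umount", "-l", mp])

The point of the lazy unmount ("umount -l") is that it detaches the mount immediately and defers cleanup until nothing references it, so hung processes stop piling up against a dead server instead of blocking forever.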