For me, it came down to taking the rest of the system as seriously as the kernel. I first installed Linux back in the 0.9 days and it was interesting but no more so than 386BSD or TSX-32: boot the kernel and spend time trying to get applications to compile. Fast-forward a year or two to Slackware, where packages installed in a few seconds rather than significant fractions of an hour (yay, 20MHz 386!) because they had binary packaging and updates were relatively simple, freeing time to write code in this hot new Perl language rather than trying to compile it.
I tried FreeBSD & OpenBSD repeatedly over the years, even running a key company server on OpenBSD for a while in the late 90s / early 2000s - security was compelling - but I noticed two things:
1. BSD users treated updates like going to the dentist and put them off until forced - not without cause, as ports frequently either broke things or simply spent hours rebuilding the world - whereas Linux users generally spent time working on their actual job rather than doing impromptu sysadmin work. "apt-get update && apt-get upgrade" had by then an established track record of Just Working, and fresh install time for a complex system was measured in minutes for Debian (iops-limited) and, even as late as 2004 or so when we ditched the platform, days for FreeBSD even when performed by our resident FreeBSD advocate. I'm sure there are ways to automate it, but while routine in the Linux world, I've never met a BSD user in person who actually did this.
2. The BSD systems were simply less stable, often dramatically so, because the parts were never tested together: you had the kernel, which is stable and deserves significant respect, but everything else was a random hodgepodge of whatever versions happened to be installed the last time someone ran ports. Unlike, say, Debian or Red Hat, there was no culture of testing complete systems, so a few months after a new release you'd often encounter the kind of "foo needs libbar 1.2.3 but baaz needs 1.1.9" dependency mess which required you to spend time troubleshooting and tinkering - a class of problem which simply did not exist at the system level for most of the Linux world. It wasn't as bad as Solaris but the overall impression was more similar than I'd like.
One other observation: during years of using Linux, FreeBSD, OpenBSD, Solaris / Nexenta / etc. on a number of systems (the most I've managed personally at any point in time was around 100) there were almost no times where the actual kernel mattered significantly in a positive direction. Performing benchmarks on our servers or cluster compute nodes showed no significant difference, so we went with easier management. On the desktop, again no significant performance difference, so we went with easier management and better video driver support (eventually why many desktop users moved to OS X - no more GL wars). There was a period where more stable NFS might have been compelling but the BSD and Linux NFS clients both sucked in similar ways (deadlocking most times a packet dropped), the Linux client got better faster, and we ended up automating dead mount detection with lazy-unmounts to reduce the user-visible damage.
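The dead-mount automation mentioned above can be sketched roughly as follows - this is a minimal reconstruction, not the actual script, and the timeout value and function names are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of dead-NFS-mount cleanup: probe each NFS mount with a timeout;
# if the stat hangs past the deadline, assume the server is dead and
# lazy-unmount so user processes stop blocking on it.

TIMEOUT=5  # seconds before a mount is declared dead (illustrative value)

check_mount() {
    # Returns 0 if the path answers a stat within $TIMEOUT seconds.
    timeout "$TIMEOUT" stat -t "$1" >/dev/null 2>&1
}

# Walk the NFS entries in /proc/mounts (add "nfs4" as needed).
for mnt in $(awk '$3 == "nfs" {print $2}' /proc/mounts 2>/dev/null); do
    if ! check_mount "$mnt"; then
        echo "NFS mount $mnt unresponsive; lazy-unmounting" >&2
        umount -l "$mnt"   # detach now, clean up when last reference drops
    fi
done
```

`umount -l` (lazy unmount) detaches the filesystem immediately while deferring cleanup, which is what keeps a hung server from wedging every process that touches the mount point.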
> BSD users treated updates like going to the dentist and put them off until forced
Not typical of any BSD sysadmin I've known, ever. OTOH if your "BSD users" were not sysadmins that would explain it.
> install time ... days for FreeBSD even when performed by our resident FreeBSD advocate
Clearly your "FreeBSD advocate" did not know what he/she was doing. Without provisioning a FreeBSD install should not differ from an Ubuntu or RH install by more than a few minutes (in either direction).
But in a datacenter provisioning is critical, and FreeBSD ceded that space to RH's kickstart long ago. BSD also tied on a ball and chain by deprecating multi-partition default root volumes (when 1G drives became commonplace).
But more than anything else FreeBSD lost this race by not developing user-friendly installation software (GUI or CLI).
But for those who do know how to install BSD and the difference between things as fundamental as portinstall vs makeworld, the ongoing time spent updating and upgrading will pay that back many times over. This is because: A) the ports system works across _all_ versions and architectures, i.e., you don't have to upgrade the entire OS simply to upgrade, say, mysql to 5.5, B) OS upgrades break far fewer apps in BSD than any version of Linux, C) kernel vulnerabilities average once every few years in BSD vs every few weeks (sometimes days) in Linux, and D) backwards compatibility is far better (recall when RH tried to "deprecate" nslookup, for example).
> The BSD systems were simply less stable
This would be due to your user-sysadmin's skillset as everyone else's experience is the opposite (as noted in the OP).
>> BSD users treated updates like going to the dentist and put them off until forced
> Not typical of any BSD sysadmin I've known, ever. OTOH if your "BSD users" were not sysadmins that would explain it.
Professional sysadmins - but obviously more conservative than the ones you know. That time covered the base install, a ton of ports, and mucking around getting video drivers, etc. installed. This was a fully-configured scientific workstation, not just a bare OS install, so there were a couple hundred packages installed by the time you include all of the various dependencies.
I'm certain that binary packages would have shaved a lot of time off of this (that's what I used to use on OpenBSD a decade ago) but that wasn't exactly strongly recommended at the time, which is why I mentioned culture - I'm certain you can manage FreeBSD better, but in practice I have yet to meet anyone in person who actually does this. Small sample size and all, but this is untrue of any Linux user I've met other than the Gentoo fanatics who consider tweaking CFLAGS a source of entertainment.
> A) the ports system works across _all_ versions and architectures i.e., you don't have to upgrade the entire OS simply to upgrade say mysql to 5.5, B) OS upgrades break far fewer apps in BSD than any version of Linux,
A) is comically untrue: not only can you easily compile newer packages but extensive backports repositories exist to make it easy and safe to install a new version of something important while keeping the rest of the system stable. For many distributions this exists as a vendor-provided service and there are others (e.g. IUSCommunity.org) which serve particular markets and a fair number of OSS projects maintain repos for the Debian & Red Hat worlds.
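As a concrete sketch of the backports workflow described above - the suite and package names here are illustrative assumptions, not something from the thread:

```shell
# Add a backports suite alongside stable, e.g. in
# /etc/apt/sources.list.d/backports.list (suite name illustrative):
#   deb http://deb.debian.org/debian bullseye-backports main
#
# Then pull just the one package from backports, leaving the rest of
# the system on the stable suite:
apt-get update
apt-get -t bullseye-backports install mysql-server
```

Packages from backports are only installed when explicitly requested with `-t`, which is what makes it safe to track one newer package while keeping everything else pinned to stable.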
B) may be true in your experience but it's radically unlike mine. I've run many versions of Linux on many systems over the years and upgrades have been quite smooth - the Debian / Ubuntu world is the most stable but Red Hat isn't far behind. The key again is binary packages - moving from a known set of packages to a known set of newer packages makes it a lot easier to test an upgrade.
> C) kernel vulnerabilities average once every few years in BSD vs every few weeks (sometimes days) in Linux
Highly debatable, but Linux or BSD kernel vulnerabilities are rare enough not to be worth arguing over: most of the threat is in userland, which is a large part of why package management is so important. Far more systems are compromised by lax updates than by zero-days.