The best one-word answer is, simply, "tradition".<p>However, it's tradition that's backed up by a huge raft of prior work in the data transmission and information theory sciences. When you break all of that down, you're left facing questions like "how can I design a protocol so that, over a lossy channel, I can unambiguously ascertain the correct transmission and reception of a given piece of information?" The natural unit which falls out of this is the indivisible bit, zero or one, the presence or absence of a signal, over a certain measured interval of time.<p>The theoreticians and engineers who produced the first systems for digital data transmission were all informed by this work, and their fundamental challenge was to produce systems capable of reliable transmission and reception of streams of bits -- not necessarily 8-bit bytes, that being a higher-order concern left to higher-order devices in the network, for the sake of modularity. (For a bewildering example of why telecom techs wanted^H^H^H^H^H^Hstill want to leave this kind of thing to others to puzzle out, see, e.g., <a href="https://en.wikipedia.org/wiki/36-bit" rel="nofollow">https://en.wikipedia.org/wiki/36-bit</a>)<p>As a result, today, network engineers pay first attention to the bit-rate rather than the byte-rate capabilities of their equipment, and this is reflected in everything from the names of low-level protocols to ways of talking about circuits to the specifications of concrete product implementations, from the data-carrying capacities of network interfaces to the speeds offered to consumers.<p>I come from the network engineering world, as you might have guessed :) So, on the opposite side of your question, I find it terribly distressing when software like my Bittorrent client reports speeds to me in megabytes-per-second. I don't have an intuitive <i>feel</i> for what a megabyte per second is, but I can take that figure and multiply by eight and say "aha!"
Roughly the same as old-school 10Mbps Ethernet.
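<p>That multiply-by-eight conversion can be sketched as a toy helper (my own illustration, not anything a real client ships; note the caveat that some software actually reports MiB/s, i.e. 2^20 bytes, rather than decimal megabytes):

```python
def mbytes_to_mbits(mb_per_s: float) -> float:
    """Convert megabytes per second to megabits per second.

    Assumes decimal megabytes (10^6 bytes), so one byte = 8 bits is the
    only factor involved. A client reporting MiB/s (2^20 bytes) would
    need an extra factor of 1.048576 on top of this.
    """
    return mb_per_s * 8

# A client showing 1.25 MB/s is moving 10 Mbits every second --
# i.e. saturating an old-school 10Mbps Ethernet link.
print(mbytes_to_mbits(1.25))  # 10.0
```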