Utilities are sized by capacity, not by the use of that capacity. The local water works doesn't bill based on mugs of coffee.<p>How many bytes go over the network is a function of the protocol used to send bits as bytes. If some bits are used for error detection, error correction, or handshaking, they are like the water used to rinse the mug and wash the coffee pot, left in the sink or turned into steam. In other words, what is or isn't a byte is subject to debate, but thanks to Shannon, bits are objectively measurable.
My guess is marketing reasons. If you use a smaller unit, you get a bigger number.<p>EDIT: Why the downvote? It's a perfectly logical explanation.
For years we measured speed in bits: 300 bps, 1200 bps.<p>It is confusing, especially because a B per second is 8 times faster than a b per second.<p>Also, what happens if you're using 7E2 or 8N1 framing or whatever? What counts as a byte then?
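To make the 7E2/8N1 point concrete: on an async serial line each character is framed by a start bit and one or more stop bits, so the usable byte rate is lower than bps/8. A minimal sketch (the helper name is mine, not any standard API):

```python
def payload_bytes_per_sec(line_bps, data_bits, parity_bits, stop_bits):
    # Every character on an async serial line costs:
    # 1 start bit + data bits + parity bit(s) + stop bit(s).
    bits_per_char = 1 + data_bits + parity_bits + stop_bits
    chars_per_sec = line_bps / bits_per_char
    return chars_per_sec * data_bits / 8  # payload bits -> bytes

print(payload_bytes_per_sec(9600, 8, 0, 1))  # 8N1: 960.0 B/s, not 1200
print(payload_bytes_per_sec(9600, 7, 1, 2))  # 7E2: ~763.6 B/s
```

With 7E2 there isn't even an 8-bit payload per character, which is exactly why "bytes per second" is ambiguous on such a link.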
The best one-word answer is, simply, "tradition".<p>However, it's a tradition that's backed up by a huge raft of prior work in the data transmission and information theory sciences. When you break all of that stuff down, you're left facing questions like "how can I design a protocol so that over a lossy channel I can unambiguously ascertain the correct transmission and reception of a given piece of information?" The natural unit which falls out of this is the indivisible bit, zero or one, the presence or absence of a signal, over a certain measured interval of time.<p>The theoreticians and engineers who produced the first systems for digital data transmission were all informed by this work, and their fundamental challenge was to produce systems capable of reliable transmission and reception of streams of bits -- not necessarily 8-bit bytes, that being a higher-order concern left to higher-order devices in the network, for the sake of modularity. (For a bewildering example of why telecom techs wanted^H^H^H^H^H^Hstill want to leave this kind of thing to others to puzzle out, see, e.g., <a href="https://en.wikipedia.org/wiki/36-bit" rel="nofollow">https://en.wikipedia.org/wiki/36-bit</a>)<p>As a result, today, network engineers pay attention first to the bit-rate rather than the byte-rate capabilities of their equipment, and this is reflected in everything from the names of low-level protocols to ways of talking about circuits to the specifications of concrete product implementations, from the data-carrying capacities of network interfaces to the speeds offered to consumers.<p>I come from the network engineering world, as you might have guessed :) So, on the opposite side of your question, I find it terribly distressing when software like my BitTorrent client reports speeds to me in megabytes per second. I don't have an intuitive <i>feel</i> for what a megabyte per second is, but I can take that figure, multiply by eight, and say "aha!"
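That multiply-by-eight reflex is trivial to mechanize; a hypothetical one-liner (not any client's real API):

```python
def megabits_per_sec(megabytes_per_sec):
    # Convert a client's MB/s readout into the Mbit/s figure a
    # network engineer has an intuitive feel for: 1 byte = 8 bits.
    return megabytes_per_sec * 8

print(megabits_per_sec(1.25))  # 1.25 MB/s -> 10.0 Mbit/s
```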
Roughly the same as old-school 10Mbps Ethernet.
My networking professor worked for AT&T for years, before and after its breakup and into the internet age. His explanation was that the phone companies (more like phone company) had existed since the late 1800s, before computers, and even when they built and bought the first computers and computer networks, bytes were not a unit of measurement. So: the phone company's network, the phone company's rules.<p>Edit: I'd like to add that the first networks had dumb devices on the periphery, aka your phones, and smart devices in the core, aka AT&T's computer-controlled routers. There was no concept of bytes in that network.
I think it has to do with avoiding ambiguity.<p>Granted, the byte hasn't been redefined, but given how every architecture and language in the past had a different standard for each ADS, one might want to be proactive.<p>Also see the mess SI vs. binary prefixes created.<p>As a sidenote, early modem producers used the baud (<a href="http://en.wikipedia.org/wiki/Baud" rel="nofollow">http://en.wikipedia.org/wiki/Baud</a>) which coincides with bps only in some cases.
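To illustrate the baud/bps distinction in the last point: baud counts symbols per second, and a symbol can carry more than one bit. A sketch (the 4-bits-per-symbol case assumes a 16-point constellation, as in QAM-16):

```python
def bits_per_sec(baud, bits_per_symbol):
    # baud = symbols per second; it coincides with bps only
    # when each symbol carries exactly one bit.
    return baud * bits_per_symbol

print(bits_per_sec(300, 1))   # early 300-baud modems: also 300 bps
print(bits_per_sec(2400, 4))  # 2400 baud x 4 bits/symbol = 9600 bps
```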
One of the reasons for it is actually marketing. If I were to provide an 8 Mbps connection, an average person would not know that 8 b = 1 B. From an ISP's view, 8 Mbps sounds like a better speed than 1 MBps to an average non-tech customer.
> It is very confusing, because for everything else i.e file sizes the standard is bytes.<p>No, this isn't true. It was true once, but things have changed. Now there are two standards, one based on powers of two, the other based on powers of ten:<p><a href="http://en.wikipedia.org/wiki/Mebibit" rel="nofollow">http://en.wikipedia.org/wiki/Mebibit</a><p>And, more topically,<p><a href="http://en.wikipedia.org/wiki/Data_rate_units" rel="nofollow">http://en.wikipedia.org/wiki/Data_rate_units</a><p>1 kibibit = 1.024 kilobits<p>I'm not saying this is why ISPs use bits per second, but it would certainly serve as an explanation in the absence of a better one. A bit per second means the same thing in all current schemes for describing a quantity of data or its velocity.
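The two prefix schemes diverge more as the prefixes grow; a quick check of the gap:

```python
KILO, KIBI = 1000, 1024

print(KIBI / KILO)        # kibibit / kilobit = 1.024
print(KIBI**2 / KILO**2)  # mebibit / megabit = 1.048576
print(KIBI**3 / KILO**3)  # gibibit / gigabit ~ 1.0737
```

A bare bit, by contrast, is the same in both schemes, which is the point above.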
For computer storage, the unit is bytes; for transmission bandwidth, the unit is bits.<p>I think this is mostly because on the network, transmission actually happens a bit at a time, whereas in a computer system, data is addressed at the byte level (Wikipedia says this evolved because, in the early days, a byte was used to store each character of text).<p>Hence files have sizes in MB etc., while ISPs quote speeds in Mbps.
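This unit split is why download-time estimates need a factor of eight; a sketch, ignoring the SI-vs-binary wrinkle and protocol overhead:

```python
def download_seconds(file_megabytes, link_megabits_per_sec):
    # File sizes are quoted in bytes, link speeds in bits: convert once.
    return file_megabytes * 8 / link_megabits_per_sec

print(download_seconds(100, 50))  # 100 MB over a 50 Mbps link -> 16.0 s
```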