From what I recall, the 1500-byte figure was chosen to limit the time wasted when a collision occurred. Collisions could happen on the original links because, instead of the point-to-point links we have today, every node shared the same channel. (This was in the days of hubs, before switches existed.) The CSMA/CD (Carrier Sense Multiple Access with Collision Detection) algorithm would defer while another node was using the channel, then wait a random back-off interval before transmitting. If two nodes began transmitting at the same time anyway, a collision occurred and both Ethernet frames were lost. Error detection and retransmission (if used) happened at higher layers. So a collision on a 1500-byte frame at 10 Mbps wasted 1.2 ms of channel time. IIRC, the 1500-byte MTU was selected empirically.<p>Another reason for the short MTU was the limited accuracy of the crystal oscillator on each Ethernet card. I believe 10 Mbps (10BASE) Ethernet used bi-phase/Manchester encoding, but instead of recovering the clock from the data and using it to shift the bits into the receiver, the cheap hardware would just assume that everything stayed in sync. So if any crystal oscillator was off by more than 0.12%, the final bits of the frame would be corrupted.<p>I've actually encountered Ethernet cards that had this problem. They would work fine with short frames, but then hang forever once a long one came across the wire. The first one I saw was a pain to troubleshoot: I could telnet into a host without trouble, but as soon as I typed 'ls', the session would hang. The ARP, SYN/ACK, and Telnet login exchanges were all small frames, but as soon as I requested the directory listing, the remote end sent frames at the full MTU and they were never received. They would be perpetually retried and perpetually fail because of the frequency error of the oscillator in the (3C509) NIC.
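<p>For concreteness, the arithmetic behind that 1.2 ms figure is just serialization time, ignoring the preamble and interframe gap (a quick sketch, not anything from the spec):

```python
# Time lost when a max-size frame is destroyed by a collision,
# using the numbers above: 1500-byte MTU on 10 Mbps shared Ethernet.
MTU_BYTES = 1500
LINK_BPS = 10_000_000  # 10 Mbps

wasted_s = MTU_BYTES * 8 / LINK_BPS  # bits on the wire / bit rate
print(f"{wasted_s * 1000:.1f} ms")   # -> 1.2 ms
```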
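<p>And a rough model of the oscillator problem. This is purely my own assumption: the receiver free-runs after syncing on the preamble, and decoding fails once the cumulative phase slip reaches half a bit time. The real threshold depends on how much the hardware resynchronized mid-frame (Manchester gives a transition every bit), which is probably why the 0.12% I quoted above doesn't match this naive bound:

```python
# Toy model: free-running receiver clock, no mid-frame clock recovery.
# Assumption (mine): decoding fails once cumulative slip reaches half a bit.

def max_freq_error(frame_bytes: int) -> float:
    """Largest fractional frequency error a frame of this size tolerates."""
    bits = frame_bytes * 8
    return 0.5 / bits  # half a bit time of slip over the whole frame

# A max-MTU frame tolerates far less clock error than a minimum-size one,
# which matches the symptom: short frames fine, long frames corrupted.
print(f"{max_freq_error(1500):.4%}")  # 1500-byte frame -> 0.0042%
print(f"{max_freq_error(64):.4%}")    # 64-byte minimum frame -> 0.0977%
```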