The state of the art in other technologies is extremely fast: 10 Gbps for Wi-Fi 6/6E, 10 Gbps for 5G, 48 Gbps for HDMI 2.1 (not networking, but common). Meanwhile, consumer Ethernet is still predominantly 1 Gbps. Why hasn't this changed over the past twenty years?
10GBASE-T was actually standardised in 2006. PHY chips were available that year, but reaching 100m was very difficult on unshielded cables: the chips were large, power-hungry and expensive to make, 100m was only guaranteed if users upgraded to Cat6A cabling, and the solutions had relatively high latency due to the need for powerful error correction. After PHY vendors had 100m working - an enormous technical challenge - they were not inclined to release an 'easier'/cheaper 10m/30m version, since that would have enabled more competitors, reduced revenues and partitioned the market. But hardly anyone was deploying 10GBASE-T anywhere anyway. So the technology got stuck with low revenues and high prices supporting 100m reach.<p>Broadcom were making $$ from 1000BASE-T and were slow to develop/release 10GBASE-T; they held the market back and encouraged SFP+, since they did sell those chips. Intel/Cisco were inclined to wait for BRCM chips. The startups that developed 10GBASE-T (Solarflare initially led, then Teranetics; Aquantia emerged later) were not successful quickly. Eventually Aquantia managed to partner with Intel and survived. Both Solarflare's PHY technology and Aquantia ended up being acquired by Marvell, Aquantia for significant $$ last year. Teranetics circuitously ended up as part of Broadcom. The rest of Solarflare was acquired by Xilinx.<p>10GBASE-T wasn't a good fit for data centers due to the high power/latency. So those customers went direct attach / SFP+ / optical, which drove those prices down and made them more attractive, further delaying 10GBASE-T volumes. Data centers got used to expensive cables and (relatively) cheap, simple, low-power, low-latency transceivers. 10GBASE-T was a solution looking for a problem.<p>Eventually 2.5G/5G Ethernet for Wi-Fi backhaul opened the copper market up - those technologies reuse almost everything from 10G. Automotive Ethernet helped too.
Chip/power scaling and increasing volumes from Wi-Fi/datacenter deployments have eventually driven down the cost to the point where 10GBASE-T is becoming more widespread and attractive.<p>The initial sales/marketing strategy for 10GBASE-T failed, sadly, and it has taken a long time to recover.
> 10 Gbps for Wi-Fi 6/6E, 10 Gbps for 5G<p>These are theoretical peak speeds that don't account for modulation/protocol overhead or interference. With wireless, the usual rule of thumb is to expect about a third of the advertised bandwidth.
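The "expect a third" rule above can be sketched numerically. The advertised figures are the headline numbers quoted in this thread; the one-third factor is just the rule of thumb, not a measurement:

```python
# Apply the rough "about a third of advertised" rule of thumb to the
# peak rates quoted in the thread (Gbps). These are illustrative
# estimates, not measured throughput.
advertised = {"Wi-Fi 6/6E": 9.6, "5G": 10.0}
realistic = {name: round(rate / 3, 1) for name, rate in advertised.items()}

print(realistic)  # → {'Wi-Fi 6/6E': 3.2, '5G': 3.3}
```

Even the discounted numbers beat gigabit Ethernet on paper, which is the point of the original question.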
Because the cabling is expensive and hard to run at high speed.<p>HDMI max length is about 3m; Thunderbolt goes longer but requires an expensive active cable, or a really expensive fiber-optic cable for longer runs.
I think it's because 1GbE is still enough, and the price difference to faster options is too high. Even as a power user with many servers and PCs at home I don't saturate my gigabit network - except between my two NASes, but I run Fibre Channel point-to-point there, which is also cheap (because there is next to zero demand for Fibre Channel cards on the used market).<p>So for now, gigabit plus some dedicated faster links where needed is plenty, and cheap.
Not only is it expensive, but 10GbE devices need real active cooling, plus special adapters and cables.<p>It's not just a simple cable and plug to connect something at 10 Gbps: you need SFP+, then these extra adapters, then the cables.<p>I see 2.5GbE and 5GbE as the upgrade path for people at home. I'm certainly going to upgrade to 2.5GbE in the coming months.
If you actually <i>achieve</i> 1Gbps, you're laughing. That's pretty obscenely fast. It means you can download a gigabyte in around ten seconds.
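The arithmetic behind that, assuming a decimal gigabyte and (my assumption, not the commenter's figure) roughly 94% of line rate surviving Ethernet/IP/TCP framing overhead:

```python
# Back-of-envelope: how long does 1 GB take over a 1 Gbps link?
LINE_RATE_BPS = 1_000_000_000   # 1 Gbps line rate
EFFICIENCY = 0.94               # assumed goodput fraction after Ethernet/IP/TCP overhead
FILE_BITS = 1_000_000_000 * 8   # 1 GB (decimal) expressed in bits

goodput = LINE_RATE_BPS * EFFICIENCY  # ~940 Mbps of useful throughput
seconds = FILE_BITS / goodput

print(f"{seconds:.1f} s")  # → 8.5 s
```

So "around ten seconds" holds up: the ideal figure is 8 seconds flat, and protocol overhead plus real-world slowdowns nudge it toward ten.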