I'm not buying the argument.
First, I don't think that designing competitive AI accelerators requires the kind of resources that would restrict the market to a few high-end competitors.
From what I understand, they are a sea of low-precision DSPs specialized for convolution, connected by a high-speed interconnect.
While I would hesitate to claim that they are simple to design, they don't necessarily carry the staggering design complexity of a modern CPU or GPU.

Second, for CPUs I don't think a complete clean-sheet design makes sense for a lot of companies. Yes, I get it, Apple has taken the world by storm - but let's not forget that Apple gets a lot of mileage out of very few designs; probably the same CPU core ends up in the iPhone, iPad, Apple TV and now the MacBook, with slight modifications.
The same goes for Samsung, ARM, Qualcomm, Intel, etc. - most of their CPU designs can be aimed at the market for servers, game consoles, smart devices, PCs, notebooks, and so on.
If AWS designs a server core, their entire market is probably AWS instances - not a small one by any means, but smaller than a dedicated CPU manufacturer's - so the sensible amount of investment they can make into CPU design is lower. I'm not saying it doesn't make sense for them - it probably does - but it's far too early to predict the demise of CPU companies.