I wonder if we'll eventually realize, much as with Solow's productivity paradox, that whatever efficiency "gains" we get from AI are simply cancelled out by the increased need for fact-checking, a higher incidence of major errors being blindly trusted, and suboptimal outcomes (e.g. being bamboozled by polished copy or fake reviews into paying for an inferior product). All this on top of the opportunity cost of the brainpower and energy currently being poured into multiple largely comparable models.