I’m confused by the evidence they use:<p>> to justify those costs, the technology must be able to solve complex problems, which it isn't designed to do<p>Planning and reasoning are the two biggest areas of AI research right now, with an OOM more researchers devoted to them than there were to the first generation of generative-AI architectures.<p>> In our experience, even basic summarization tasks often yield illegible and nonsensical results<p>Summarization with current-generation models is excellent. I can get a summary of a several-hour-long call with better recall than I would have had myself, for under $2 in inference costs.<p>> even if costs decline, they would have to do so dramatically to make automating tasks with AI affordable<p>We’ve seen a literal 10x decrease in cost from gpt-4-32k to gpt-4o <i>in a single year</i> of AI development (at a 3:1 input:output cost blend). And that ignores that sonnet-3.5 is ~50x cheaper than gpt-4-32k while scoring better on pretty much every benchmark.<p>> the human brain is 10,000x more effective per unit of power in performing cognitive tasks vs. generative AI<p>Patently false: we’re not untethered brains floating around; we need shelter, food, and a ton of other energy-intensive inputs to stay alive, and an AI system can perform a task it’s designed for easily 10-20x faster than a human could.<p>If anything this makes me more bullish on AI systems having a positive ROI; the criticisms here rest on extraordinarily (if not nefariously) dumb assumptions.
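<p>For what it's worth, the 10x figure above is easy to sanity-check. This is a rough sketch assuming the public list prices at each model's launch (gpt-4-32k at $60/$120 per 1M input/output tokens, gpt-4o at $5/$15) and a 3:1 input:output token blend:

```python
def blended_cost(input_price: float, output_price: float, ratio: int = 3) -> float:
    """Blended $/1M tokens, weighting input tokens `ratio`:1 over output tokens."""
    return (ratio * input_price + output_price) / (ratio + 1)

# Assumed launch list prices, $ per 1M tokens (input, output).
gpt4_32k = blended_cost(60, 120)  # 75.0
gpt4o = blended_cost(5, 15)       # 7.5

print(gpt4_32k / gpt4o)           # 10.0
```

Swap in whatever blend ratio matches your workload; summarization-heavy traffic is even more input-skewed, which makes the drop look larger still.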