I've been doing some futures thinking about generative AI, especially its use in academia.

I'm curious what you think is guaranteed to come true about how these tools are used, and what is still open to question.
Ads will appear. People will pay money to have their company/product injected into AI responses for relevant prompts. There will be controversy over how obviously (or not) the ads are labeled as such.
Models get smaller and perform better, requiring less GPU hardware.

The "hallucination" problem persists.

More prompt-programming tools are created for AutoGPT-style task automation.

Lots of companies spend lots of money rolling out LLM-based apps, but most fail because of poor user adoption and rejection driven by weak performance and hallucinations.

The successful solutions will be around assisting content creators and programmers (Copilot, journalism, graphic design, etc.).

They will be built into internet search (Google/Bing) and office tools (Word, Excel) but won't be the primary way work gets done.
They will be overused and overvalued. Vast swaths of people will believe they are infallible oracles. They will create higher-quality spam for everyone.