For a pizza recipe I currently go to a website. I typically go with a domain name that I know and trust, say bonappetit.com or epicurious.com, and then filter according to the ingredients I have and what sounds good.

If I'm to use ChatGPT or this proposed Web 3.0 AI interface, I'm trusting the LLM to give me a good recipe based on scraping the internet, which for sure includes bonappetit.com, but which can also include every single bullshit SEO factory set up to make money off of Google's Display Network ads. Something as benign as a pizza might be fine, but placing trust in other things can obviously be catastrophic given enough time.

The corporate internet is made up of middlemen, but it's always been a somewhat symbiotic relationship. Meta provides the protocol and moderation for people to communicate with each other. Google (used to) provide the links to what you're looking for, acting as the middleman to help you find the information. Airbnb lets you list a property and find renters easily, etc. The bigger they are, the shittier they get, but there's still a symbiotic tradeoff here.

With AI and LLMs there's no more middleman. It's about ingesting anything public on the internet and republishing it as their own while also charging a fee. I get why there's so much enthusiasm about this technology. It's fucking magic compared to five years ago, and hey, you can build a company off of an API and become your own middleman in front of the AI company, the last stop.

But speedrun what Google has become and you'll see what's going to happen if these LLMs become the source of everything. At some point pizza recipe sites will cease to exist and you'll be subject to using ChatGPT to get one. Choice matters.
Great post and very thoughtful ideas. I'm really curious how these would come along; I've been watching kids apply prompting and how they anticipate the kind of results they want to see.
Very cool idea: essentially, your search engine would also be a software engine, creating the "right" experience for each search term. It would interact, I presume, with the design and product aspects of websites the way ChatGPT interacts with their data: identifying existing patterns and developing the ability to regurgitate them in an uncannily responsive way?
I think this is an interesting idea, but it will be more interesting when implemented in "traditional" UI. For example: self-organizing navigation menus optimized for your work habits within a web app, or button actions created for you automatically that perform common sets of work you do in an app.
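To make the first example concrete, here's a minimal sketch of what a usage-driven menu could look like. The item names, click log, and in-memory storage are all invented for illustration; a real app would persist per-user counts and probably weight them by recency.

```typescript
// Hypothetical sketch: reorder a nav menu by how often the user actually uses each item.
interface NavItem {
  id: string;
  label: string;
}

const defaultMenu: NavItem[] = [
  { id: "dashboard", label: "Dashboard" },
  { id: "reports", label: "Reports" },
  { id: "billing", label: "Billing" },
  { id: "settings", label: "Settings" },
];

// Count clicks per item (would normally be persisted per user).
const usageCounts = new Map<string, number>();

function recordClick(id: string): void {
  usageCounts.set(id, (usageCounts.get(id) ?? 0) + 1);
}

// Most-used items float to the top; ties keep the original order.
function organizedMenu(): NavItem[] {
  return [...defaultMenu].sort(
    (a, b) => (usageCounts.get(b.id) ?? 0) - (usageCounts.get(a.id) ?? 0)
  );
}

// A user who lives in Reports sees it promoted.
recordClick("reports");
recordClick("reports");
recordClick("billing");
console.log(organizedMenu().map((i) => i.label)); // ["Reports", "Billing", "Dashboard", "Settings"]
```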
I think it's a good article with some food for thought.

TL;DR: Future websites will mostly be APIs interfacing with large language models (LLMs). This will enable user-driven tasks where agents determine the right sources and interfaces, rendering locally and connecting to the necessary APIs.
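For what that "agent picks the source, then renders locally" loop might look like, here's a rough sketch. The source registry, endpoint URLs, and response shape are assumptions made up for illustration, and the "agent" is just a keyword match standing in for an LLM.

```typescript
// Hypothetical sketch of the "websites as APIs + local agent" idea:
// pick a source for the user's task, fetch structured data, render a local view.
interface Source {
  name: string;
  topics: string[];
  endpoint: string; // assumed to return JSON like { title, ingredients }
}

const registry: Source[] = [
  { name: "example-recipes", topics: ["recipe", "cooking"], endpoint: "https://api.example.com/recipes" },
  { name: "example-flights", topics: ["travel", "flights"], endpoint: "https://api.example.com/flights" },
];

// Naive "agent": pick the first source whose topics overlap with the task text.
function chooseSource(task: string): Source | undefined {
  return registry.find((s) => s.topics.some((t) => task.toLowerCase().includes(t)));
}

// Fetch from the chosen API and render a minimal local interface (here, plain text).
async function runTask(task: string): Promise<string> {
  const source = chooseSource(task);
  if (!source) return "No suitable source found.";
  const res = await fetch(`${source.endpoint}?q=${encodeURIComponent(task)}`);
  const data = (await res.json()) as { title: string; ingredients: string[] };
  return `${data.title}\n- ${data.ingredients.join("\n- ")}`;
}

runTask("pizza recipe with what's in my fridge").then(console.log);
```

The point of the sketch is only the division of labor: the site exposes data over an API, and the choice of source plus the presentation happens on the user's side.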