Currently we are living in a thin-client epoch where much of the important compute is done via SaaS (Google, Amazon, Wikipedia, StackOverflow, DALL-E, ...), but thanks to hardware and AI advancements it might become possible to compress much of the stuff that matters into a reasonable size (Stable Diffusion is around 4 GB, and mobile phones might have 2 TB of storage in the next 10 years).

For example, important stuff like:

- getting images (via iterations of Stable Diffusion; see the rough sketch below)
- deriving facts (via GPT iterations and Wikipedia)
- mapping (via just compressing OpenStreetMap into ~100 GB)
- math (via open-source Jupyter iterations)

What do you think?
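To make the image case concrete, here is a minimal sketch of generating an image entirely on local hardware. It assumes the Hugging Face diffusers library and a downloaded checkpoint (the model name "runwayml/stable-diffusion-v1-5" is just an example, not something from the post):

    # Minimal sketch: local image generation with no SaaS call,
    # assuming `diffusers`, `torch`, and a ~4 GB checkpoint on disk.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example checkpoint name
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # runs on a local GPU

    image = pipe("a watercolor map of a small coastal town").images[0]
    image.save("local_generation.png")

The point being: once the weights are on the device, the whole "service" fits in a few gigabytes and needs no network round trip.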
(I think I'm understanding you correctly; if not, please let me know.)

20 years is a long enough time that anything could happen, but I don't personally see it. The main thing is not size, it's keeping things updated.

You give Wikipedia as an example: the issue is not how big it is currently, but that it's constantly updating and evolving, so you either eat the cost of updating every time you open the application, or you eat it when you load specific pages (and if you do that, you may as well just have a web browser).

And that same scenario applies to most of the examples: Amazon with products/pricing, StackOverflow with questions/answers, OSM with routes/businesses. I just don't see why it would be better to store these things locally when, in Amazon's case, I may only ever want to see a certain number of products.

My personal expectation is that hardware will get so tremendously good that software developers are able to be even lazier, meaning software still feels as slow as it always has.