I think the next step in evolution will be when we lose the notion of smart devices and local data, and just think in terms of screens and input devices. The closest thing we have right now is Google's Stadia: your games run in the cloud. You can use your TV (with the help of Chromecast) as a screen and you have a smart WiFi-controller that can also connect to Stadia, and together you have an experience. Or you use your smart phone and its touchscreen, and get the same content. I think this will be the model of the future experience. However, the screen content needs to adapt to the screen size and type, and to the input devices that are available. Stadia can't really do this yet. And it should even work with audio-only devices like Echos and Nest speakers.<p>You will certainly still have a phone in your pocket, but you can treat it more like a portable screen with touchscreen for input. There will be no "apps on your phone" or "data on your phone", but your phone provides that for you, just like all devices give you access to them. The big question will be where the code runs. I wouldn't count on a complete cloud solution like Stadia, because you can't guarantee a permanent network connection on your phone. My guess would be that the data and code will be synced with the cloud, but code can still run on those devices that are powerful enough. For all others, it runs in the cloud.<p>I also think that the app model will sooner or later cease to exist, because it limits what an AI can do. If you have separate apps for spreadsheets and music, the AI can't complete a seemingly trivial task like 'take the list of songs in that spreadsheet and play them'. There will be something more like recipe, to explain the AI how to read a spreadsheet and how to find and play a particular song on a music service, but all the logic that is now hidden in apps must be available to an AI to allow it to combine it.