Recently I started experimenting with GPT-4 Vision, and I've found that drawing my thoughts is often easier than describing them in text. For example, I often need to do ad-hoc visualizations of data. Instead of describing what kind of diagram I want, I can just sketch it, and this works really well in many cases. The main reason I don't draw more often is that it's clumsy without a digital pen or a large touchscreen. Still, I wonder whether we will be using sketches much more in the future as part of our interfaces. What would input devices and applications that support this way of working look like? Do you think sketching in interfaces might become a thing in the future, or is this just a niche topic that will never see broad adoption?