I like a lot of the concepts in this article, and if someone were to build this I'd be very happy to use it, but the ideas feel more incremental than game-changing: very tactical and utilitarian, like taking patterns from modern websites (drawers, tags, cards, filters, notifications) and applying them to the desktop. There were some more advanced things like the eye sensor, touch, and audio, but I was expecting something groundbreaking, since your original premise was that desktops haven't changed in the last 30 years.

A lot of sci-fi movies stretch the imagination and think outside the box; I was expecting something in that vein, like the hologram displays in Ghost in the Shell, or displays embedded directly in the retina.

Ultimately it comes down to convenience: finding ways to meld the computer and the physical world so that the lines blur, letting the computer become an extension of oneself, the way a car is for a driver. Typing, for example, is a very unnatural thing, and there's a lot of churn translating from one's brain to the keyboard and then to the computer. Voice would be a more natural form of input, but imagine trying to write code, or this article (gasp), using voice alone; my throat would be dry after the first function. So voice isn't a full replacement, but I think tapping things in a context-sensitive way would help. Borrowing from web design, the fewer clicks the better: imagine writing code by tapping and choosing functions instead of having to write a for loop by hand every single time, check for errors, and so on.

My feedback would be to take several steps back, look holistically at the computer, and dream about how humans could interact with it better, more efficiently, and more naturally. To me that's the crux of what a desktop represents: the interface between human and computer/machine. Some of the solutions you dream up could very well be based on patterns found on mobile phones or web pages, but don't let that limit you. The ultimate goal is to make the computer an extension of one's hand, brain, and so on, just like a well-built car is an extension of one's foot and hand.

What I'm thinking is that we could be walking around with supplemental displays that can be toggled on and off. I don't really like smart glasses (they're bulky, cumbersome, and generally stupid looking), but something like AR, layered on top of reality, where I can see certain statistics or relevant information followed by actions, absorbing input from multiple sensors to supplement my view.

To me, that's where innovation needs to take place. Quite honestly, Mac OS X and Windows both have voice assistants, and so does Amazon Echo, but I don't rely on these things much; they aren't very usable at the moment and feel more like toys. Visual technology isn't very usable yet either; with AR glasses and holographic displays we still have a long way to go.

Ultimately, the computer needs to get smarter about understanding our needs. Machine learning is a general step in the right direction, but I'm talking about a system that can learn, adapt, and tie lots of things together to make decisions or recommendations without anyone having to massage data, create data models, or pick specific algorithms.