This reminds me of Jef Raskin’s humane interface: <a href="https://en.wikipedia.org/wiki/The_Humane_Interface" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/The_Humane_Interface</a><p>Edit: more specifically, Archy: <a href="https://en.wikipedia.org/wiki/Archy_(software)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Archy_(software)</a> which has its roots in the “Zoomable user interfaces in scaleable vector graphics” paper (which I can’t find a working link for right now)
I have a touchscreen laptop. I love it, but UIs like these would make me love it even more. Having information detail literally at your fingertips sounds amazing.<p>This seems like such a sensible idea to me that it makes me wonder why it isn’t commonplace yet. I hope it will be!
Digital timelines are typically zoomable interfaces, though usually only in one dimension. I built one that spans 300+ orders of magnitude: <a href="http://www.timepasses.net/" rel="nofollow noreferrer">http://www.timepasses.net/</a>
"Semantic zoom" is a strange phrase because neither of those words describe what is happening (in this article). Note that the demos mostly involve clicking rather than wheel/zoom gestures.<p>The semantics of regular zoom are clear: scale+translate the viewport.<p>The semantics of "semantic" "zoom" are undefined. It's really just a smooth UI transition between different views. Those views could be anything, and the transition could be anything. So it's not really clear where "zoom" comes into this, aren't we just talking broadly about nice interactive UI transitions?<p>Sure, perhaps there is latent design space in using the mouse wheel / zoom gesture to navigate flat GUIs, but what is the accessibility story there?
I've often thought it was time to pick up Ben Bederson's work on Pad++ again, now that we have such great UI and hardware for zooming in interfaces. Here's a late-90s video of it in action: <a href="https://www.youtube.com/watch?v=BlIRYTuSv0Q">https://www.youtube.com/watch?v=BlIRYTuSv0Q</a>
Anyone remember Eagle Mode? <a href="https://sourceforge.net/projects/eaglemode/" rel="nofollow noreferrer">https://sourceforge.net/projects/eaglemode/</a> I remember playing with it back in the Windows XP days (perhaps even earlier). I think a lot of people assumed the window paradigm would eventually evolve, but it hasn't really.
I'm trying to do something very similar (if not exactly the same) to semantic zoom with architecture diagrams [0]. It's of course <i>much</i> simpler when the components are view-only and have a fixed UI. An entire design system for any kind of desktop/tablet app would be amazing.<p>[0] <a href="https://twitter.com/ilographs/status/1651011570330206209?t=a-htIY_b28zNOCtbL8qLqQ&s=19" rel="nofollow noreferrer">https://twitter.com/ilographs/status/1651011570330206209?t=a...</a>
Smalltalk solved this problem and many others a very long time ago, but we are stuck with Unix and its variations. A live computational environment that can be molded by the individual for their own use cases was not commercially successful, and so Smalltalk is now relegated to the dustbin of computing history instead of being the main paradigm for computer interaction[1].<p>1: <a href="https://lively-kernel.org/" rel="nofollow noreferrer">https://lively-kernel.org/</a>
Anyone remember the photo viewer from 10+ years ago that started with a tiled view of every photo and you zoomed in kind of organically to what you wanted? At the time it was revolutionary to me. Kind of in the Picasa era or earlier. I believe it was shareware or freeware and might have been Java-based.<p>And I'd love help with a search query. Can't figure out the right words to avoid links about restoring old photos, or details of various MS Photos applications over the years.
The general aesthetic reminds me of the design concept "Mercury OS".<p><a href="https://uxdesign.cc/introducing-mercury-os-f4de45a04289" rel="nofollow noreferrer">https://uxdesign.cc/introducing-mercury-os-f4de45a04289</a>
I think you need a <i>grammar</i> of gestures. Without that you can't really replace the CLI or GUI (or the new talking interfaces).<p>Sign language is a language.
This is hard to get right, but awesomely powerful when it works. The only collaborative whiteboard I've seen do this well is Plectica[0]. It has its other quirks but nails this part nicely.<p>[0]: <a href="https://beta.plectica.com" rel="nofollow noreferrer">https://beta.plectica.com</a>
I really just want a 200ish dpi monitor that’s the size of a full wall whiteboard. Say something like 10 feet tall and 20 feet long. Touchscreen of course, but with alternate inputs too.
The examples on this blog are really quite neat. Can I download the software being shown, or are these just recordings of the author's private experiments?
This always looks amazing in the showcase video where the user does the expected gestures. But when you use it in real life, you want to kill yourself.
This looks good but will turn out horribly in practice:<p>1. It requires so much thought to implement that I don't think the average UX team can pull it off. A half-assed implementation of this will probably be worse than a simple page with hyperlinks.<p>2. It requires so much thought that each implementation will naturally diverge, so users will have to deal with hundreds of different, uniquely horrible UXs.<p>From the example, when the app zooms into the "pace" page, how do I go back to the summary?