The OS for Apple Vision Pro brings a new design language focused on depth and on using space beyond a flat, rectangular screen.<p>If IDEs weren't confined to a 2D window, what would they look like? Are there any features you can think of that would make AR coding more productive than simply coding on a monitor?
Can I throw my hat in the ring? I’ve been working on this for… a long time.<p>An AR/VR iOS and macOS app for arbitrary code rendering in 3D space. Terminal-like rendering, glyph by glyph, means perfect control over every mesh and texture.<p>The iOS demo is fun. You can walk around your code like an art museum, draw lines for executed traces, and perform visual hierarchical search.<p><a href="https://github.com/tikimcfee/LookAtThat">https://github.com/tikimcfee/LookAtThat</a>
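For a sense of what "glyph by glyph" can mean in practice, here is a minimal SceneKit sketch (not LookAtThat's actual code) that gives each character of a source line its own node, so every glyph has a mesh and material you can manipulate independently:

```swift
import AppKit
import SceneKit

// Sketch only: one SCNText per character keeps it simple; a real renderer
// would more likely atlas glyph textures onto planes for performance.
func makeGlyphNodes(for line: String, lineIndex: Int) -> [SCNNode] {
    var nodes: [SCNNode] = []
    for (column, character) in line.enumerated() {
        let geometry = SCNText(string: String(character), extrusionDepth: 0.5)
        geometry.font = NSFont.monospacedSystemFont(ofSize: 4, weight: .regular)
        geometry.firstMaterial?.diffuse.contents = NSColor.white

        let node = SCNNode(geometry: geometry)
        // Lay glyphs out on a fixed monospace grid; depth (z) stays free
        // for things like highlighting executed traces.
        let x = Float(column) * 3.0
        let y = Float(-lineIndex) * 6.0
        node.position = SCNVector3(x, y, 0)
        nodes.append(node)
    }
    return nodes
}
```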
Most programming work is still text editing, with varying amounts of debugging, refactoring, and autocompletion.<p>It is not obvious to me how an AR interface would make a difference other than more virtual screen real estate. You would still need a way to enter text, build code, run tests, etc. This means a keyboard and pointing device (mouse, trackpad, etc.) are still needed unless something else can do the job better.<p>Granted, for the same cost as the Vision Pro you could get several large, high-resolution monitors and have lots of screen to work on.
I would expect more of the meta-logic to be extracted via GPT, with relevant panes shown dynamically.<p>If you get an error, automatically search for the answer and propose the change.<p>If you add a new flow uncovered by tests, propose the test.<p>Generally, have panes that are dynamic to what you are doing, and tightly couple them.<p>I could imagine looking at different zoom levels of a code file, folder, or architecture, and working primarily on abstractions, approving or rejecting the resulting proposed edits.<p>Strategic coding, more akin to a game like Supreme Commander or Planetary Annihilation.
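As a rough illustration of "panes that are dynamic to what you are doing", here is a hypothetical Swift sketch; the event and pane names are invented for this example and don't correspond to any real API:

```swift
// Hypothetical sketch: editor events drive which panes get surfaced
// next to the code you are currently working on.
enum EditorEvent {
    case buildError(message: String)
    case newUncoveredFlow(symbol: String)
}

enum Pane {
    case proposedFix(summary: String)
    case proposedTest(target: String)
}

func panes(for event: EditorEvent) -> [Pane] {
    switch event {
    case .buildError(let message):
        // On an error: search for the answer and surface a proposed change.
        return [.proposedFix(summary: "Suggested change for: \(message)")]
    case .newUncoveredFlow(let symbol):
        // On a new, untested code path: propose the missing test.
        return [.proposedTest(target: symbol)]
    }
}
```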
I think it might make large-scale code visualization, along the lines of what SourceTrail does, more feasible: <a href="https://github.com/CoatiSoftware/Sourcetrail">https://github.com/CoatiSoftware/Sourcetrail</a>
Thinking about depth:<p>Using it to present stacks of information (version history, undo/redo chain)<p>Using it to render background information that doesn't need to be swapped into the foreground to be useful - the architecture/module that the code you're working in serves, the remote services that fulfill certain commands, the test coverage available to you in this module.
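A minimal sketch of that depth-stack idea, assuming RealityKit entities stand in for each history snapshot (the function name and spacing are invented for illustration):

```swift
import RealityKit

// Sketch (hypothetical): stack undo/redo or version-history snapshots
// along the z-axis, so older states sit literally behind the current one.
func layoutHistory(snapshots: [Entity], spacing: Float = 0.05) {
    for (index, snapshot) in snapshots.enumerated() {
        // Index 0 is the current state; each older snapshot recedes in depth.
        snapshot.position = SIMD3<Float>(0, 0, -Float(index) * spacing)
        // Fade older states so background information stays unobtrusive.
        let opacity = max(0.2, 1.0 - Float(index) * 0.15)
        snapshot.components.set(OpacityComponent(opacity: opacity))
    }
}
```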
I'd want a 3D node editor interface much like Blender's texture or geometry creation interface. The nodes would be 3D objects (cubes, cylinders, etc.) and nested groups of objects w/ physical ports that could be wired together.<p>Nodes would include class/object/method nodes w/ code blocks. So an important AR/VR UX feature would be the ability to collapse/dive into nodes & groups of nodes, much like code outliners in 2D IDEs.<p>Another awesome feature would be the ability to affix dials/gauges and other displays to the outside of nodes & node groups to indicate the unit's state: how "full" a collection node is, how often this node was invoked, the health of the node (errors, exceptions, slow execution times), etc.
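A hypothetical Swift data model for those nodes; the types and fields are invented for illustration and aren't part of any existing framework:

```swift
// Code units as collapsible 3D nodes, with gauges affixed to report state.
struct Gauge {
    let label: String          // e.g. "invocations/sec", "error rate", "fill level"
    var value: Double
}

final class CodeNode {
    let name: String
    var children: [CodeNode] = []   // nested groups, like a code outliner
    var isCollapsed = true          // dive into a node to expand it
    var gauges: [Gauge] = []        // health, call frequency, fullness

    init(name: String) { self.name = name }

    // Expanding a node reveals its children, as collapsing an outline
    // section does in a 2D IDE, but in space rather than in a column.
    func toggle() { isCollapsed.toggle() }
}
```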
It would look like a 3D world, like this: <a href="https://www.youtube.com/watch?v=z4FGzE4endQ">https://www.youtube.com/watch?v=z4FGzE4endQ</a>
I think robust eye-tracking functionality for code editors has a ton of potential on the Vision Pro. A lot of the barriers and benefits of vim/emacs-style navigation could be replaced or augmented by smart, interpretive eye tracking. It feels like a step toward a future where the machine starts to read our minds, and an intimate integration of where you look is a big step.
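On visionOS, apps don't receive raw gaze data; where you look is surfaced indirectly through hover effects. A small SwiftUI sketch of gaze-assisted navigation under that assumption (the jumpToDefinition hook is hypothetical):

```swift
import SwiftUI

// Looking at a symbol highlights it via the system hover effect;
// a tap (pinch) then jumps to its definition.
struct SymbolToken: View {
    let name: String
    let jumpToDefinition: (String) -> Void   // hypothetical navigation hook

    var body: some View {
        Text(name)
            .font(.system(.body, design: .monospaced))
            .padding(4)
            .contentShape(RoundedRectangle(cornerRadius: 4))
            .hoverEffect(.highlight)          // lights up when you look at it
            .onTapGesture { jumpToDefinition(name) }
    }
}
```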
More open windows.<p>But in all seriousness, you could have a tree of small code windows connected by their dependencies. That would make it much easier to see a piece of code alongside its uses/definitions, and thus understand a codebase's architecture.
I think it will be a natural, holistic interface, so I imagine some form of LLM will be included that can actually make the interface elements work and connect them to the outside world, while you design and modify the interface you're building with your hands and fingers (think Interface Builder, but spatial).<p>If they manage to pull something like this off and are first to market, I'd guess Magic Leap, HoloLens, and whatever Meta is cooking up (if they're still doing anything in that space) will very likely be pretty much dead, by the way.
I've heard about people using Unity for other AR/VR work; not sure if they'll expand to support this. I've played around with Unity a little bit. I guess we'll have to see whether Apple uses its own proprietary stuff (as usual), moves closer to industry standards, or at least provides support for including its language in existing IDEs.
> would make AR coding more productive than simply on a monitor<p>One obvious thing is much more space. I.e. unlimited number of extra monitors, or entities which work as such.
Just look at what IDEs that run exclusively on iOS are like: crap.<p>Nobody invests in a closed platform. I expect JetBrains to come up with something marvelous for AR/VR, but it will run on the upcoming Microsoft or HP glasses; you know, the only ones today that work just like an external monitor, without a locked-in ecosystem like Apple's or Facebook's.<p>The silly apps and games and such will net millions, though.