Curious to hear from the HN community - what is the impact of ChatGPT and LLM applications on how we need to build user interfaces for future applications?
My first proper use case is to ban any kind of "lorem ipsum" from my designs. I can quickly populate every field with relevant data and content, which helps communication.

But I haven't seen really impactful tools yet. Some UI generators can at best replicate UI kits you can buy for $20. Maybe with time you'll be able to get good kits for a fraction of that? It won't disrupt the field, though.

The real value of UX/UI work happens before the generation, when you gather requirements and lay out your flows. A copilot-style assistant might make sense there, providing you with context, references and relevant data.
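A rough TypeScript sketch of the "no more lorem ipsum" idea, assuming an OpenAI-compatible chat completions endpoint; the product context and field names are invented for illustration:

    // Ask an LLM for realistic placeholder copy instead of lorem ipsum.
    // Assumes an API key in OPENAI_API_KEY; field names are whatever your mockup needs.

    type FieldCopy = Record<string, string>;

    async function generatePlaceholderCopy(
      productContext: string,
      fieldNames: string[]
    ): Promise<FieldCopy> {
      const prompt = `You are filling in a UI mockup for: ${productContext}.
    Return a JSON object with a short, realistic value for each of these fields:
    ${fieldNames.join(", ")}. Respond with JSON only.`;

      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o-mini",
          messages: [{ role: "user", content: prompt }],
        }),
      });

      const data = await res.json();
      // The reply is expected to be a JSON object keyed by field name.
      return JSON.parse(data.choices[0].message.content) as FieldCopy;
    }

    // Usage: fill a pricing-card mockup with plausible content.
    generatePlaceholderCopy("a B2B invoicing app", ["headline", "plan_name", "cta_label"])
      .then((copy) => console.log(copy));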
You know all those beautiful touch screens we see in sci-fi and games:

https://i.imgur.com/NbAZrg7.jpeg

---

Make the AI understand which human inputs are required and with what interaction modality - what each interaction represents, as well as what important feedback from your system should be shown - and have it arrange interfaces like this that are actually meaningful and work. If the user doesn't like the result, they tell the AI to remix it, or they customize it by picking the control they want and drawing it on the screen in whatever ergonomics they like... and the AI does all the node-wiring in the background.

So all the CGI we see in games and movies on these beautiful panel interfaces can become a reality, and be as flexible as could be.

Talk to it and tell the AI to do your bidding, but with added intelligence in the elements so they actually work - make the AI carry the load.
I wonder whether you can replace config UI elements with micro text input areas (one to two sentences) and abstract a more complex configuration process behind text interpretation? Then just show the parts that are missing or unclear from the input text.
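A minimal TypeScript sketch of that idea, assuming a hypothetical backup-settings schema and a `callLLM` helper wrapping an OpenAI-compatible chat completions endpoint (repeated here so the snippet stands alone):

    // Interpret a one-sentence config description, then surface only the
    // settings that are still missing as conventional form fields.

    interface BackupConfig {
      frequency: string | null;    // e.g. "daily", "hourly"
      retentionDays: number | null;
      encrypt: boolean | null;
    }

    async function callLLM(prompt: string): Promise<string> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o-mini",
          messages: [{ role: "user", content: prompt }],
        }),
      });
      return (await res.json()).choices[0].message.content;
    }

    async function interpretConfigText(userText: string): Promise<{
      config: BackupConfig;
      missing: string[];
    }> {
      const prompt = `Extract backup settings from this description: "${userText}".
    Return JSON with keys frequency, retentionDays, encrypt.
    Use null for anything not clearly specified. JSON only.`;

      const config = JSON.parse(await callLLM(prompt)) as BackupConfig;

      // Whatever the model could not infer is what the UI still has to ask for.
      const missing = Object.entries(config)
        .filter(([, value]) => value === null)
        .map(([key]) => key);

      return { config, missing };
    }

    // "Back everything up every night and keep it for a month" might yield
    // { frequency: "daily", retentionDays: 30, encrypt: null }, missing: ["encrypt"]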
I think a good UI reflects a simple internal model, the one the user holds in their head, and exposes just the controls needed to affect that model. Building such a model for an application is a tricky problem: it should be neither too simple nor too complex, close to obvious to learn so that little documentation is needed, with convenient controls to work with - not too many of them, not too few. The "beginner vs. expert" conflict of assumptions is part of it. AI could help with making a good approximation of such a model and its controls.
I think the UI will get a lot more dynamic, with the app being able to better understand what the user currently wants to do and adapt to it.

It could also largely transform into:

- Prompt -> the user asks for what they want to see (e.g. my bills from last month)

- Response -> a dynamically generated UI that shows only the relevant information (rough sketch below)
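One way this prompt-to-view loop could look, as a hedged TypeScript sketch: the model returns a constrained UI spec rather than arbitrary markup, and the app renders it from a whitelist of components. The spec shape, component names and `callLLM` helper are invented for illustration, assuming an OpenAI-compatible chat completions endpoint:

    // Turn a user prompt into a small declarative UI spec the app knows how to render.

    type UiNode =
      | { type: "heading"; text: string }
      | { type: "table"; columns: string[]; dataQuery: string }
      | { type: "chart"; kind: "bar" | "line"; dataQuery: string };

    async function callLLM(prompt: string): Promise<string> {
      const res = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o-mini",
          messages: [{ role: "user", content: prompt }],
        }),
      });
      return (await res.json()).choices[0].message.content;
    }

    async function generateView(userRequest: string): Promise<UiNode[]> {
      const prompt = `The user asked: "${userRequest}".
    Compose a screen as a JSON array of nodes. Allowed node types:
    heading {text}, table {columns, dataQuery}, chart {kind, dataQuery}.
    dataQuery is a short description of the data to fetch. JSON only.`;

      // The app, not the model, decides how each node type is rendered,
      // so the generated UI stays within known, testable components.
      return JSON.parse(await callLLM(prompt)) as UiNode[];
    }

    // generateView("show my bills from last month") might yield a heading,
    // a table of bills, and a bar chart of spend per vendor.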
They help put manual work on autopilot. Some tools have added AI for generating full-blown landing page UI (Framer) or coded components (our tool, UXPin). This gives greater consistency and frees the designer's mind from making the same UI design decisions over and over again.