Thank you for sharing this. It is somewhat staggering to come across a paper that so perfectly captures the problems one encounters, and to discover that it was written nearly two decades earlier.<p>This is a comprehensive review of human factors affecting interaction with nearly any "formal" system, where "formal" means anything from a color wheel to a Prolog database.<p>I have repeatedly encountered these issues while working on a formal language for specifying scientific protocols. I knew I had my work cut out for me in finding a way to create an interface that non-expert users could work with, but this makes it clear just how critical it is for the tools not to distract the user from their expert thinking.<p>This poses an incredible design challenge, and that is without any discussion of variability between users!
Is this still true? I guess the question is when an interface concept counts as "cognitive overload" - a bad formalism - versus when it serves as a useful abstraction. Say that's grouping attributes into a named cluster, or collecting actions into a function (see the sketch below). For programmers, great. For people scheduling appointments, maybe that's fine in some workflow-automation setting (which is not unlike programming), but otherwise, not so great.<p>What predicts the aptness or ineptness of a given formalism? It has to begin with an idea of what the user already knows, and with the fitness of the formalism to the user's concept of the job to be done. Perhaps for the majority of the HN readership: "Informality Considered Harmful".
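Here is a minimal, hypothetical sketch (in Python) of the "collecting actions into a function" formalism applied to the appointment-scheduling example above; every name in it is invented for illustration and is not drawn from any real scheduling tool.

    # Hypothetical sketch: bundling individual scheduling actions under one name.
    # All classes and helpers here are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Calendar:
        booked: list = field(default_factory=list)

        def find_free_slot(self, duration_minutes: int) -> str:
            # Stand-in for a real availability search.
            return f"Tuesday 10:00 ({duration_minutes} min)"

        def reserve(self, slot: str) -> None:
            self.booked.append(slot)

    def schedule_appointment(calendar: Calendar, attendee: str, duration_minutes: int) -> str:
        """Collect the separate actions (find, reserve, notify) into one named abstraction."""
        slot = calendar.find_free_slot(duration_minutes)
        calendar.reserve(slot)
        print(f"Notified {attendee} about {slot}")
        return slot

    schedule_appointment(Calendar(), "Alice", 30)

To a programmer, the named function is a helpful abstraction that hides the individual steps behind one concept; to someone who just wants to book a meeting, the same indirection may be exactly the kind of formalism that distracts from the task itself.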