The purpose is more obvious if you understand what makes \r and \n different, and why both existed. But I guess that's becoming lost knowledge now.<p>Ultimately, many "character sets" combined printable characters, cursor/head control, and record boundaries into one serialized byte stream that could be used by terminals, printers, and programs for all sorts of purposes.
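The distinction is easy to demonstrate even today: carriage return (\r) moves back to column 0 without advancing a line, while line feed (\n) advances. A minimal terminal sketch in Python (the loop and messages are purely illustrative):

    import sys
    import time

    # CR ("\r") returns the cursor to column 0 without advancing a line,
    # which is why it still works for in-place updates like progress
    # counters, and, historically, for overprinting a line on a printer.
    for i in range(5):
        sys.stdout.write(f"\rprocessed {i + 1} records")
        sys.stdout.flush()
        time.sleep(0.2)
    sys.stdout.write("\n")  # LF ("\n") finally moves on to the next line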
It was a common technique back in the day, with both dot matrix and "letter quality" printers, to print a line and then go back and print it again: either to get a bold effect by printing the same characters twice, or to overlay one character on top of another. If the spacing was right you could draw accented characters that way.
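Roughly what the printer saw, assuming a device that treats a bare CR as "return to column 0 without a line feed" (the function names are just for illustration):

    # Double-strike for a bold effect: send the line, a bare CR, then the
    # same line again so every character is struck twice.
    def double_strike(line: str) -> bytes:
        return (line + "\r" + line + "\n").encode("ascii")

    # Overlaying a second, different line works the same way; an accent
    # mark aligned over a base letter gives a crude accented character.
    def overlay(line: str, second_pass: str) -> bytes:
        return (line + "\r" + second_pass + "\n").encode("ascii")

    print(double_strike("INVOICE TOTAL"))
    print(overlay("resume", "     '"))  # strikes ' over the final "e"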
Neither on SE nor here could I find a mention of dead keys.<p><a href="https://en.wikipedia.org/wiki/Dead_key" rel="nofollow">https://en.wikipedia.org/wiki/Dead_key</a><p>I can't claim to know exactly how they relate to the accented characters being
encoded in character sets, but they seem to be at least a historical influence. Pressing a dead key, which doesn't advance the cursor, and then striking the base character over it is certainly faster than using backspace (and also cheaper, if you think about per-character pricing).<p>That the ECMA specs only talk about using BACKSPACE is surprising. At least the OSes I used only supported the dead-key approach, but of course that was decades after the specs were written.
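In modern software terms a dead key is just a tiny state machine: the accent key emits nothing and is remembered, and the next base key is composed with it. A toy sketch (the mapping table and names are illustrative, not taken from any real keyboard driver):

    import unicodedata

    # Dead keys map to combining marks; a pressed dead key is held until
    # the next base character arrives, then the two are composed.
    DEAD_KEYS = {"¨": "\u0308", "´": "\u0301", "`": "\u0300", "^": "\u0302"}

    def make_keystroke_handler():
        pending = []  # holds a combining mark while waiting for the base key

        def on_key(key: str) -> str:
            if key in DEAD_KEYS:
                pending.append(DEAD_KEYS[key])
                return ""  # nothing emitted, cursor does not advance
            if pending:
                return unicodedata.normalize("NFC", key + pending.pop())
            return key

        return on_key

    handle = make_keystroke_handler()
    print(handle("¨") + handle("e"))  # -> "ë"
    print(handle("a"))                # -> "a"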
I sometimes use these in Windows when I expand characters with FormD[0] as part of username validation.<p>If the expanded character count doesn't match the original, a diacritic might be present.<p>[0]
<a href="https://learn.microsoft.com/en-us/dotnet/api/system.text.normalizationform?view=net-8.0#fields" rel="nofollow">https://learn.microsoft.com/en-us/dotnet/api/system.text.nor...</a>
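The same check in Python, using NFD as a rough stand-in for .NET's FormD (the function name is just for illustration):

    import unicodedata

    # NFD splits "é" into "e" plus a combining acute, so a length change
    # after decomposition suggests a diacritic is present.
    def has_diacritic(text: str) -> bool:
        return len(unicodedata.normalize("NFD", text)) != len(text)

    print(has_diacritic("Jose"))  # False
    print(has_diacritic("José"))  # True

A sturdier variant would test unicodedata.combining() per character, since input that is already decomposed won't change length.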
They exist so that typists can insert random characters that look similar to what they actually meant to type. This has a nice income-generating effect for developers who know how to handle incorrectly encoded data. I make good money fixing data processing pipelines written to expect utf-8 only to be given something else.
My younger self would have used the umlaut character for 'ditto'.<p>To some extent the character set was still evolving; for example, the Euro sign was not around until decades later, and that would need to be bolted together with backspace characters or escape codes, maybe even downloaded characters, with the printer-specific manual (Epson) studied at great length.<p>In the DOS era (and before, with home micros that were programmed in BASIC) it was quite normal to compose things for the printer that you had no expectation of seeing on screen; not that anyone read much on screen (as everyone had vast piles of paper on their desk).<p>Until quite recently some POS systems were very much tied to a very specific printer; at least these character sets were a step forward from hard-coding a BASIC program to an exact make and model of printer.
> <i>they're all pretty much useless on their own for anything besides ASCII art.</i><p>The asker completely ignores that asking questions about accent marks, like they themselves are doing in that very post, would be a lot more annoying without being able to write said accent marks.
I'm surprised it wasn't mentioned, but they were also used for text entry in some text editing applications.<p>For example, one could type ë by entering ¨ and then following it with e. The ¨ would be displayed at the position where the combined character would go while waiting for the second character to be entered. Once the second character is entered, the display would be updated with the correct combined character.
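A toy sketch of that editor behaviour, assuming the spacing accent is swapped for its combining form once the base letter arrives (the table and names are illustrative):

    import unicodedata

    SPACING_TO_COMBINING = {"¨": "\u0308", "´": "\u0301", "^": "\u0302"}
    buffer = []

    def type_char(ch: str) -> str:
        # A pending accent sits in the buffer and is displayed as-is;
        # the next base letter replaces it with the composed character.
        if buffer and buffer[-1] in SPACING_TO_COMBINING and ch not in SPACING_TO_COMBINING:
            accent = SPACING_TO_COMBINING[buffer.pop()]
            buffer.append(unicodedata.normalize("NFC", ch + accent))
        else:
            buffer.append(ch)
        return "".join(buffer)

    for key in "na¨ive":
        print(type_char(key))  # shows "na¨" until the i arrives, then "naï"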
> Having an OS drawing characters on a bitmap display, a prerequisite to composing, is a very new development, way more recent than the character definitions leading to above encoding.<p>New? Some computers of the 1980s could already do this. At least the 16-bit home computers had bitmap-drawn characters on the screen.<p>Edit: Looks like somebody doesn't believe that computers in the 80s had such a thing.<p>> On the Amiga, rendering text is similar to rendering lines and shapes. The Amiga graphics library provides text functions based around the RastPort structure, which makes it easy to intermix graphics and text.<p>> In order to render text, the Amiga needs to have a graphical representation for each symbol or text character. These individual images are known as glyphs. The Amiga gets each glyph from a font in the system font list. At present, the fonts in the system list contain a bitmap of a specific point size for all the characters and symbols of the font.<p><a href="https://wiki.amigaos.net/wiki/Graphics_Library_and_Text" rel="nofollow">https://wiki.amigaos.net/wiki/Graphics_Library_and_Text</a>
On the ADM-3 (or some such) there was only one backspace+overstrike character, and it was the underscore. So Ä and Ö were marked thus.<p>Otherwise HYVÄÄ YÖTÄ was HYV{{ Y|T{, which was only a little miserable.<p>But if you changed the ROM to a Swedish ROM, {a|b} became äaöbå, which was basically unreadable.
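That swap is the 7-bit national-variant trick (ISO 646): the Finnish/Swedish positions reuse the code points of [ \ ] { | } for Ä Ö Å ä ö å. A small translation table reproduces the effect (assuming the standard FI/SE positions; the actual ROM may have differed slightly):

    # ISO 646 FI/SE reuses the bracket/brace code points for the
    # Scandinavian letters, so plain ASCII text "mutates" on such a ROM.
    FI_SE = str.maketrans("[\\]{|}", "ÄÖÅäöå")

    print("{a|b}".translate(FI_SE))        # -> "äaöbå", as described above
    print("HYV[[ Y\\T[".translate(FI_SE))  # -> "HYVÄÄ YÖTÄ"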
Yup, ASCII was a multi-byte character set, using overstrike with BS (backspace). Little-known fact, that. There's still a holdover of this in terminal apps, which use it for underlining and bolding.
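That holdover is the nroff-style output you still get from piped man pages: bold is encoded as "X&lt;BS&gt;X" and underline as "_&lt;BS&gt;X", which pagers like less reinterpret and col -b strips. A minimal Python sketch of both sides:

    # Encode nroff-style overstrikes ...
    def overstrike_bold(text: str) -> str:
        return "".join(ch + "\b" + ch for ch in text)

    def overstrike_underline(text: str) -> str:
        return "".join("_\b" + ch for ch in text)

    # ... and resolve them the way a dumb device would: the character
    # before each BS is simply overwritten by the one after it.
    def strip_overstrikes(text: str) -> str:
        out = []
        for ch in text:
            if ch == "\b" and out:
                out.pop()
            else:
                out.append(ch)
        return "".join(out)

    s = overstrike_bold("NAME") + "  " + overstrike_underline("ls")
    print(strip_overstrikes(s))  # -> "NAME  ls"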
TL;DR: you get accented characters by sending the base letter, a backspace, and then the bare accent character to your output device (probably a printer), so both strike the same position.
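Something like this on the wire (a hedged sketch; some setups sent the accent first and the letter second, the order varied by convention):

    # Base letter, BACKSPACE (0x08), then the accent mark, so the printer
    # strikes both characters in the same position.
    def overstrike_accent(letter: str, accent: str) -> bytes:
        return (letter + "\b" + accent).encode("ascii")

    print(overstrike_accent("e", "'"))  # b"e\x08'" -> e with an acute-ish accent
    print(overstrike_accent("a", "`"))  # b"a\x08`" -> a with a grave accent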