> rendering subtitles at the output resolution is better than rendering them at the video resolution<p>I would like to know what's wrong with this approach. I watch a lot of commentated speed-run videos: that's often something like ~244p video, plus soft subtitles. The subtitles get rendered at the source resolution (presumably, into the video framebuffer) and then upscaled along with the image, forcing them to be a tiny blurry mess instead of the crisp, readable text they could be.
The original one ( <a href="http://www.kalzumeus.com/2010/06/17/falsehoods-programmers-believe-about-names/" rel="nofollow">http://www.kalzumeus.com/2010/06/17/falsehoods-programmers-b...</a> ) left me baffled. Then I realized you have to strike a balance; otherwise you cannot deal with names at all. Where to draw the line depends on your industry/customers, but I'd safely say most software is too restrictive nowadays, so these lists are somewhat useful, and of course they're interesting.
- "all subtitle files are UTF-8 encoded"<p>Hah, this strikes really close to home. I've had to work with so, so many subtitle files in Eastern European and Turkish Windows codepages, mostly but not entirely compatible with Win-1252. There's no way to tell them apart programmatically, so you check that the extended characters make sense. It's a bit of a nightmare.
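For the curious: "check that the extended characters make sense" can be automated as a scoring heuristic. A rough sketch of the idea — the candidate codepages, the per-language letter sets, and the function name are all illustrative assumptions, not a production detector (real ones, like chardet, use full character-frequency models):

```python
# Single-byte codepages can't be distinguished by byte values alone:
# the same bytes decode *somehow* under each one. So decode under every
# candidate and score how plausible the extended characters look.
EXPECTED = {
    "cp1252": set("àâçéèêëîïôùûü"),  # Western European (illustrative letters)
    "cp1250": set("ąćęłńóśźżšč"),    # Central/Eastern European
    "cp1254": set("çğıİöşü"),        # Turkish
}

def guess_codepage(data: bytes) -> str:
    def score(encoding: str) -> int:
        text = data.decode(encoding, errors="replace")
        # Count decoded characters that belong to the letters we'd expect
        # for that codepage's languages; gibberish scores nothing.
        return sum(1 for ch in text if ch in EXPECTED[encoding])
    return max(EXPECTED, key=score)
```

The letter sets do the heavy lifting: a Turkish "ş" (0xFE in cp1254) decodes to "þ" under cp1252 and "ţ" under cp1250, neither of which a Turkish subtitle would plausibly contain.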
> my hardware contexts will survive the user’s coffee break<p>hell, they don't survive alt-tabbing into a game that has a different resolution than the monitor
From the article:<p>> I can exclusively use the video clock for timing<p>Heh. I just finished writing up a design doc to address problems I had with this, and I referenced "Falsehoods programmers believe about time". Then I opened Hacker News and saw this article. So this is very timely for me.<p>(My doc: <a href="https://github.com/scottlamb/moonfire-nvr/blob/new-schema/design/time.md" rel="nofollow">https://github.com/scottlamb/moonfire-nvr/blob/new-schema/de...</a>)
it is true, video is a nightmare mess littered with weird functionality nobody needs. (limited range only just disappeared in Rec. 2100, optionally??? really??? i'm not worried about my electron gun in my CRT from 1975 these days... nor do i want to know what a Y or a Cb or a Cr means, because everything is RGB and B&W TV is long dead... and 4:2:2 is not exactly compression so much as computational overhead, etc., etc.)<p>it's a nightmare, but the reason for these observations is precisely that it shouldn't be a nightmare. this area of programming is a wasteland... nobody that good wants to solve these trivial problems :/
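For anyone wondering what the "limited range" grumble is about: in BT.601/709 8-bit video, luma nominally occupies only 16–235 ("studio swing") instead of the full 0–255, so a decoder has to expand it before display. A minimal sketch of that mapping (the function name is mine; real pipelines also handle chroma, which spans 16–240, and higher bit depths):

```python
def limited_to_full_8bit(y: int) -> int:
    # BT.601/709 "studio swing": 8-bit luma nominally spans 16..235.
    # Linearly expand to full range 0..255; out-of-range ("super-black"
    # and "super-white") values are clipped.
    full = round((y - 16) * 255 / 219)
    return max(0, min(255, full))
```

Skip this step (or apply it twice) and you get the classic washed-out or crushed-blacks look — one more way naive video code goes wrong.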
There is a lot of potential information in such a list. But in this form it's quite a "trust me" thing that doesn't really add to the reader's knowledge.
> a H.264 hardware decoder can decode all H.264 files<p>and<p>> video decoding is easily parallelizable<p>At a previous job, I don't know if it was just the field I was in or just bad luck, but having to explain this over and over again was kind of a personal nightmare.<p>That being said, this is an excellent list!
I don't think programmers believe any of the video decoding falsehoods; not because they know any better, but because they know they don't know.<p>Also, none of these unfounded preconceptions make intuitive sense, so I don't see why people would believe them.
> <i>interlaced video files no longer exist</i><p>Interlaced video files should no longer exist.<p>Seriously, f<i></i>k interlaced video.<p>> <i>upscaling algorithms can invent information that doesn’t exist in the image</i><p>That's not a falsehood. Upscaling <i>does</i> invent information that doesn't exist in the image.