Xerox copier flaw changes numbers in scanned docs (2013)

36 points by naetius, over 2 years ago

5 comments

qikInNdOut, over 2 years ago
That bug ruined quite a few archives and tainted every scanned legal document before it as evidence. It's the sort of clusterfuck so big that it just gets buried by omertà. Want out of an old contract? Ask for it to be presented.

Then ask "was it scanned and archived digitally?", and if yes, claim that the document should be invalidated on the basis of the scanning station altering it.
quintussss, over 2 years ago
There is a great talk about this from the guy who found it:

https://www.youtube.com/watch?v=7FeqF1-Z1g0

It's in German, though...
d4rkp4ttern, over 2 years ago
Interestingly, this problem is mentioned in the opening paragraph of this article by Ted Chiang, one of the best takes on ChatGPT:

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

> In 2013, workers at a German construction company noticed something odd about their Xerox photocopier: when they made a copy of the floor plan of a house, the copy differed from the original in a subtle but significant way. In the original floor plan, each of the house's three rooms was accompanied by a rectangle specifying its area: the rooms were 14.13, 21.11, and 17.42 square metres, respectively. However, in the photocopy, all three rooms were labelled as being 14.13 square metres in size.
Brian_K_White, over 2 years ago
It's like a single-glyph version of what's going on with AI assistants: confidently wrong, but it looks good, so you trust it.

When a normal old codec is used at too-low quality levels, the output *looks* low quality and you do not trust the data.

Maybe the codec needs to include a disclaimer watermark that it injects into the output, stating that the image was processed by JBIG2 with the aggressive option and all text is not to be trusted.
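To make the failure mode above concrete, here is a minimal toy sketch of JBIG2-style symbol matching; it is not the actual codec, and the glyph bitmaps, function names, and threshold are invented for illustration. An encoder that reuses a stored symbol whenever a scanned glyph is "close enough" can silently emit the wrong digit while the page still looks crisp.

```python
# Toy sketch of aggressive symbol-matching compression (not real JBIG2).
# Digits are 3x5 bitmaps stored as rows of bits; "6" and "8" differ by one pixel here.
GLYPHS = {
    "6": [0b111, 0b100, 0b111, 0b101, 0b111],
    "8": [0b111, 0b101, 0b111, 0b101, 0b111],
}

def hamming(a, b):
    """Count the pixels that differ between two bitmaps."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def match_symbol(scanned, dictionary, threshold=2):
    """Reuse the closest stored glyph whenever it is within the lenient threshold."""
    symbol, ref = min(dictionary.items(), key=lambda kv: hamming(kv[1], scanned))
    if hamming(ref, scanned) <= threshold:
        return symbol   # stored glyph is substituted -- it may be the wrong digit
    return None         # otherwise a real encoder would store the scan as a new symbol

# A scan of "8" with one pixel dropped now matches the stored "6",
# so the "compressed" page confidently shows a 6 where the original had an 8.
noisy_eight = [0b111, 0b100, 0b111, 0b101, 0b111]
print(match_symbol(noisy_eight, GLYPHS))   # -> "6"
```

The point of the toy is that the substituted glyph is pixel-perfect, so nothing in the output signals that a lossy, possibly wrong match was made, which is exactly why a visible disclaimer or watermark would help.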
ale42, over 2 years ago
Previous discussion about the same issue (but a different article): https://news.ycombinator.com/item?id=32537073