
Ask HN: Why are all OCR outputs so raw?

7 points by james-revisoai over 1 year ago
I am tired of postprocessing OCR.

I have used many OCR solutions: Tesseract (4 and 5), EasyOCR, TrOCR (not document-level), DocTR, PaddlePaddle (self-hostable on GPUs), and lastly Textract (the best).

Some are just about fast enough to be useful in production for long documents, but they all have one thing in common: you need to preprocess so much!

Why, in this day and age, do they all tend to output lines or words of text, completely leaving out things like sorting which text goes in which column or which bullet point starts a new sentence?

I know solutions like GROBID solve this by correctly processing columns etc. for papers, but for general documents it seems so unsolved.

Are there good, maintained solutions to this? On a team I am on, we spent a long time on an internal solution, which works well, and the performance difference from raw processing to proper processing (formatting text and other improvements) has been night and day.

So why don't providers or producers add steps to tidy up generic formats?

PS: I haven't found GPT APIs to be great for this, because the location and size of text is often crucial for columns and subheaders.
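To make the complaint concrete, here is a minimal sketch (not from the original post) of the kind of positional postprocessing being described: taking raw word boxes from Tesseract via pytesseract and reassembling a crude two-column reading order. The midpoint split and file name are deliberate simplifications; real layout handling needs proper clustering.

```python
# Minimal sketch (assumption: Tesseract via pytesseract, Pillow installed) of
# the positional postprocessing described above: group raw word boxes into
# lines, then sort lines into a naive two-column reading order.
import pytesseract
from pytesseract import Output
from PIL import Image


def lines_in_reading_order(image_path: str) -> list[str]:
    img = Image.open(image_path)
    data = pytesseract.image_to_data(img, output_type=Output.DICT)

    # Group recognized words by (block, paragraph, line); keep each line's
    # leftmost x and top y so the lines can be sorted spatially afterwards.
    lines = {}
    for i, word in enumerate(data["text"]):
        if not word.strip():
            continue
        key = (data["block_num"][i], data["par_num"][i], data["line_num"][i])
        entry = lines.setdefault(
            key, {"words": [], "left": data["left"][i], "top": data["top"][i]}
        )
        entry["words"].append(word)
        entry["left"] = min(entry["left"], data["left"][i])

    # Naive column assignment: anything starting left of the page midpoint is
    # column 0, the rest column 1; within a column, read top to bottom.
    midpoint = img.width / 2
    ordered = sorted(
        lines.values(),
        key=lambda e: (0 if e["left"] < midpoint else 1, e["top"]),
    )
    return [" ".join(e["words"]) for e in ordered]


print("\n".join(lines_in_reading_order("page.png")))
```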

6 comments

vivegi over 1 year ago
Layout analysis is the key. Quite a bit of work has been going on recently in this area.

Some papers of relevance:

- Xu Zhong, Jianbin Tang, Antonio Jimeno Yepes. "PubLayNet: largest dataset ever for document layout analysis," Aug 2019. Preprint: https://arxiv.org/abs/1908.07836 ; Code/Data: https://github.com/ibm-aur-nlp/PubLayNet

- B. Pfitzmann, C. Auer, M. Dolfi, A. S. Nassar and P. Staar, "DocLayNet: a large human-annotated dataset for document-layout analysis," 13 August 2022. Available: https://developer.ibm.com/exchanges/data/all/doclaynet/

- S. Appalaraju, B. Jasani, B. U. Kota, Y. Xie and R. Manmatha, "DocFormer: End-to-end transformer for document understanding," in The International Conference on Computer Vision (ICCV 2021), 2021.

The first one is for publications. From the abstract: "...the PubLayNet dataset for document layout analysis by automatically matching the XML representations and the content of over 1 million PDF articles that are publicly available on PubMed Central. The size of the dataset is comparable to established computer vision datasets, containing over 360 thousand document images, where typical document layout elements are annotated".

The second is for documents. It contains 80K manually annotated pages from diverse data sources to represent a wide variability in layouts. For each PDF page, the layout annotations provide labelled bounding-boxes with a choice of 11 distinct classes. DocLayNet also provides a subset of double- and triple-annotated pages to determine the inter-annotator agreement.
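As an illustration of putting a PubLayNet-trained layout model in front of OCR, here is a rough sketch using the layoutparser library. The model path, threshold, label map, and file name follow layoutparser's published examples and are assumptions to verify against current documentation, not something from this comment.

```python
# Rough sketch (assumptions: layoutparser with its Detectron2 backend and
# OpenCV installed; model path and label map taken from layoutparser's
# published PubLayNet examples) of tagging page regions before OCR.
import cv2
import layoutparser as lp

image = cv2.imread("page.png")[..., ::-1]  # BGR -> RGB

model = lp.Detectron2LayoutModel(
    "lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
    extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
)

layout = model.detect(image)

# Keep only body-text regions and sort them top-to-bottom as a crude reading
# order; each block carries pixel coordinates that can be cropped and OCR'd.
text_blocks = [b for b in layout if b.type == "Text"]
for block in sorted(text_blocks, key=lambda b: b.coordinates[1]):
    print(block.type, block.coordinates)
```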
sandreas over 1 year ago
This. I stumbled over the same problem and did not find the preprocessing too hard.

I achieved pretty good results with a few simple steps before running Tesseract:

- Sauvola adaptive thresholding (today there are many better algorithms, but Sauvola is still pretty good)

- Creating histogram-based patches to analyse which parts are text and which parts are images (similar to JBIG2)

I even once found a paper with an algorithm for detecting text-line slopes on geographical maps that was simple, fast and pure genius for curved text lines, and then implemented a pixel mapper to correct those curved lines. Unfortunately the whole project got lost somewhere on the NAS. Maybe I still have it somewhere, but Java was not the best language to implement this :-)

However, even though I found a simple solution for some of my use cases, the whole OCR topic is pretty hard to generalize. Algorithms that work for specific use cases in specific countries don't work for others. And it is lots of hard work to capture all the fonts, typography, edge cases and performance problems in one piece of software.
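For reference, a minimal sketch of the Sauvola binarization step mentioned above, using scikit-image; window_size and k are common starting values rather than tuned parameters, and the file names are placeholders.

```python
# Minimal sketch of Sauvola adaptive thresholding with scikit-image.
# window_size and k are typical starting values, not tuned for any document.
from skimage import io
from skimage.filters import threshold_sauvola

page = io.imread("scan.png", as_gray=True)
thresh = threshold_sauvola(page, window_size=25, k=0.2)

# Pixels darker than the local threshold are treated as ink (foreground);
# write ink as black on white, a clean input for Tesseract.
binary = page < thresh
io.imsave("scan_binarized.png", ((~binary) * 255).astype("uint8"))
```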
solardev over 1 year ago
From my (several years out of date) experience, commercial OCR software like ABBYY FineReader tends to be a lot better at dealing with layout than the FOSS stuff. They have a GUI layer that lets you draw areas to define columns, etc.

These days it looks like ABBYY has pivoted towards cloud services and SDKs, though, with the standalone software (now called FineReader PDF) de-emphasized. I am not sure if the new versions and services still offer column separation.
quickthrower2 over 1 year ago
Sounds like you have the seed for a startup!
8organicbits over 1 year ago
I'm not sure why, but I'll agree with the frustration; better tooling is needed.
billconan over 1 year ago
Maybe this is better? https://github.com/clovaai/donut

I'm not sure.
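Donut is an OCR-free document understanding model; the sketch below shows roughly how it can be run through Hugging Face transformers, following the pattern from the public model cards. The checkpoint name, task prompt, and input file are assumptions picked for illustration (a receipt-parsing fine-tune), not something from this thread.

```python
# Rough sketch (assumptions: transformers, torch and Pillow installed; the
# CORD receipt checkpoint and its task prompt are chosen only as an example).
import re

from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

checkpoint = "naver-clova-ix/donut-base-finetuned-cord-v2"
processor = DonutProcessor.from_pretrained(checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)

image = Image.open("receipt.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut is prompted with a task start token and decodes structured output
# directly from pixels, with no separate OCR step.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values, decoder_input_ids=decoder_input_ids, max_length=512
)
sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
sequence = re.sub(r"<.*?>", "", sequence, count=1)  # strip the task start token
print(processor.token2json(sequence))
```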