It appears images and PDFs are the currently supported document types. If so, is there an opportunity to support web pages? We have quite a few verbose legal documents consisting of auto-generated HTML that average 100 pages. A tool like this would be helpful for automatically dividing them by their section headings. It would also be beneficial to detect and remove extra info such as page numbers. Thanks for posting this!
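For context, the kind of splitting I have in mind is roughly this. It's only a minimal sketch using BeautifulSoup; the heading tags, the block-level tags, and the page-number pattern are assumptions about how our auto-generated HTML happens to be laid out, not anything this project provides:

    import re
    from bs4 import BeautifulSoup

    # Lines that are nothing but a (possibly "Page"-prefixed) number.
    PAGE_NUMBER = re.compile(r"^\s*(page\s+)?\d+\s*$", re.IGNORECASE)

    def split_by_headings(html, heading_tags=("h1", "h2", "h3")):
        # Walk block-level elements in document order; start a new section
        # at each heading and drop text that is just a page number.
        soup = BeautifulSoup(html, "html.parser")
        sections, title, body = [], None, []
        for el in soup.find_all(list(heading_tags) + ["p", "li", "td"]):
            text = el.get_text(" ", strip=True)
            if el.name in heading_tags:
                if title is not None or body:
                    sections.append((title, "\n".join(body)))
                title, body = text, []
            elif text and not PAGE_NUMBER.match(text):
                body.append(text)
        if title is not None or body:
            sections.append((title, "\n".join(body)))
        return sections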
This seems like a super useful way to package and deliver this kind of tool chain! I've been looking at PDF parsers on and off for the last couple of years, and have found it challenging to get most tools set up for data extraction and analysis.

This one packages an off-the-shelf version into a Docker container and starts a GUI website locally. Looking forward to using this more!
Hmmm, looks useful. The list of dependencies is basically a who's who of document-parsing tools. Is this basically just a unified interface that wraps them all up into an API?
I was really excited to try this until I saw that the only extraction methods are pdfminer, FineReader, and Tesseract. I was hoping there was something you rolled on your own. I've been trying for a long time to parse tables (and nested tables), but the available extractors seem to only work on really simple, idealized tables with virtually no skew or warping. The best I've found so far is Amazon's Textract, but it's not that great either. Alas, every attempt I've ever made at generalized table extraction has quickly regressed to templates.
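For reference, this is roughly how I've been pulling cell text out of Textract. It's a rough sketch with boto3; the synchronous analyze_document call and the TABLE/CELL/WORD block relationships are the parts I'm relying on, the rest is illustrative:

    import boto3

    textract = boto3.client("textract")

    def extract_tables(document_bytes):
        # Synchronous analysis of a single-page image/PDF with table detection.
        resp = textract.analyze_document(
            Document={"Bytes": document_bytes},
            FeatureTypes=["TABLES"],
        )
        blocks = {b["Id"]: b for b in resp["Blocks"]}
        tables = []
        for block in resp["Blocks"]:
            if block["BlockType"] != "TABLE":
                continue
            cells = {}
            for rel in block.get("Relationships", []):
                if rel["Type"] != "CHILD":
                    continue
                for cell_id in rel["Ids"]:
                    cell = blocks[cell_id]
                    if cell["BlockType"] != "CELL":
                        continue
                    words = [
                        blocks[w]["Text"]
                        for r in cell.get("Relationships", [])
                        if r["Type"] == "CHILD"
                        for w in r["Ids"]
                        if blocks[w]["BlockType"] == "WORD"
                    ]
                    # Keyed by (row, column) so the grid can be rebuilt later.
                    cells[(cell["RowIndex"], cell["ColumnIndex"])] = " ".join(words)
            tables.append(cells)
        return tables

It works well enough on clean scans, but skewed or warped tables still come out scrambled, which is what keeps pushing me back to templates.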
That looks like a great, comprehensive toolkit for data extraction. I understand the bundle is licensed under Apache; I'm curious what requirements and rules apply to including a service like Abbyy.

We at extracttable.com (extracting tabular data from images and PDFs over an API) are interested in contributing and integrating our service into the bundle.