What Is The Best OCR Extraction Method On Printed Text?
by Amit Jnagal, on March 15, 2017 11:00:00 AM PDT
I spotted another interesting question on Quora about machine learning and OCR technology; here's my answer:
I will give you a consultant’s answer - you may not like it but here goes - “It depends”.
The ‘best’ OCR extraction method depends on the context of what you are trying to extract. My guess is that you are not asking about the OCR process itself, but rather how to extract features from the text that OCR spits out. There are two broad approaches to extraction, depending on whether you know beforehand the kind of data you are dealing with (invoices, tax docs, grocery labels, etc.) or you do not:
DOMAIN-BASED OCR EXTRACTION
This approach helps when you know beforehand the kind of data extraction you are after. Let’s say you were trying to extract features of wines from a set of wine ratings and notes that you have OCR-ed. Before you do the feature extraction, you might consider running topic modeling algorithms on a large collection of existing wine notes to figure out trends and topics. Once you build a learning model out of that, you can deploy it on top of the OCR-extracted data. This will not only help you extract features but will also help you automatically fix the parts of the text that the OCR engine read incorrectly.
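To make the last point concrete, here is a minimal sketch of using a domain vocabulary to repair OCR misreads. It stands in a crude word-frequency list for a real topic model, and the wine-notes corpus and misread tokens are invented for illustration:

```python
from collections import Counter
import difflib

# Hypothetical corpus of existing wine notes (a stand-in for a large
# collection you would really run topic modeling on).
corpus = [
    "bold tannins with notes of blackberry and oak",
    "crisp acidity, citrus and green apple on the nose",
    "soft tannins, ripe cherry and vanilla oak finish",
]

# A crude "domain model": the vocabulary ranked by frequency.
vocab = Counter(word for doc in corpus for word in doc.split())
domain_words = [w for w, _ in vocab.most_common()]

def correct_token(token: str) -> str:
    """Snap an OCR token to the closest in-domain word, if one is near enough."""
    matches = difflib.get_close_matches(token, domain_words, n=1, cutoff=0.8)
    return matches[0] if matches else token

# Suppose OCR misread "tannins" as "tarnins" and "blackberry" as "blackherry".
ocr_line = "bold tarnins with notes of blackherry"
fixed = " ".join(correct_token(t) for t in ocr_line.split())
```

A real pipeline would replace the frequency list with a proper topic model (LDA or similar) so that corrections are weighted by which topic the document belongs to, but the fix-against-known-vocabulary idea is the same.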
DATA-BASED OCR EXTRACTION
If your extraction needs are generic and you are unlikely to know in advance what kind of data you will need to extract, then domain-based extraction does not work. The data could be an invoice or a scanned page of a book. In this case, you need to build an unsupervised learning system and run a large volume of data through it. The system would need to use a number of signals (the source of the data, words in the OCR output, meta tags on the file, geographical location, etc.) to take a first best guess at categorizing the data into different buckets.
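A toy version of that unsupervised bucketing pass might look like the following. Everything here is illustrative: the signal names, the greedy single-pass clustering, and the Jaccard threshold are assumptions, not a prescribed implementation:

```python
# Combine several signals (OCR words, source, meta tags) into one feature set,
# then greedily cluster documents by feature overlap.

def signals(doc: dict) -> set:
    """Reduce a document to a set of signal features."""
    feats = set(doc["ocr_text"].lower().split())
    feats.add("source:" + doc["source"])
    feats.update("tag:" + t for t in doc.get("meta_tags", []))
    return feats

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def bucketize(docs: list, threshold: float = 0.3) -> list:
    """Greedy single-pass clustering: join the closest bucket or open a new one."""
    buckets = []  # each bucket: {"centroid": set, "docs": [...]}
    for doc in docs:
        feats = signals(doc)
        best = max(buckets, key=lambda b: jaccard(feats, b["centroid"]), default=None)
        if best and jaccard(feats, best["centroid"]) >= threshold:
            best["docs"].append(doc)
            best["centroid"] |= feats  # grow the bucket's vocabulary
        else:
            buckets.append({"centroid": set(feats), "docs": [doc]})
    return buckets

docs = [
    {"ocr_text": "invoice total due amount tax", "source": "scanner", "meta_tags": ["pdf"]},
    {"ocr_text": "invoice amount due vendor tax", "source": "scanner", "meta_tags": ["pdf"]},
    {"ocr_text": "chapter one it was a dark night", "source": "book-scan", "meta_tags": []},
]
buckets = bucketize(docs)  # two invoices group together; the book page stands alone
```

At scale you would swap the greedy pass for proper clustering (k-means, HDBSCAN, etc.) over real feature vectors, but the shape of the system is the same: signals in, buckets out.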
You should then build extraction models on top of each of these buckets. When a new document is OCR-ed, you try to categorize it into an existing classification bucket based on matches. Once that classification guess is made, you run the extraction algorithms for that bucket. If it does not match any bucket, you create a new bucket and do only the base extraction. Rinse and repeat. Over time, the new bucket will also fill up with enough data, and then you can run domain-based extraction on top of it.
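The classify-or-create step above can be sketched like this. The bucket names, keyword sets, and match threshold are all hypothetical placeholders for whatever models your buckets actually hold:

```python
# Existing buckets, each keyed by a representative keyword set.
buckets = {
    "invoices": {"invoice", "total", "due", "tax", "vendor"},
    "book_pages": {"chapter", "page", "paragraph"},
}

def route(ocr_words: set, threshold: float = 0.25) -> str:
    """Return the bucket for a new document, creating a fresh one if nothing matches."""
    scores = {name: len(ocr_words & kw) / len(ocr_words | kw)
              for name, kw in buckets.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= threshold:
        return name  # run this bucket's extraction model
    # No match: open a new bucket and fall back to base extraction only.
    new_name = f"bucket_{len(buckets)}"
    buckets[new_name] = set(ocr_words)
    return new_name

hit = route({"invoice", "total", "tax", "amount"})      # matches "invoices"
miss = route({"prescription", "dosage", "refill"})       # opens a new bucket
```

Once a freshly created bucket accumulates enough documents, it graduates to the domain-based treatment described in the first section.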
Hope this helps, have fun!