What is OCR?
Optical character recognition (OCR) is the process of reading printed or handwritten text and converting it into machine-encoded text. OCR draws mainly on the fields of artificial intelligence, pattern recognition, and computer vision.
So how does it work? In simple terms, to a computer an image is nothing but a collection of pixels. In OCR processing, the image is scanned for light and dark areas to identify each character.
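The light/dark scanning mentioned above can be sketched as a simple binarization step. This is a toy illustration only (not a real OCR engine), assuming a tiny grayscale image represented as a list of pixel rows:

```python
# Toy illustration of the first OCR step: binarize a grayscale image so
# that dark strokes (potential character pixels) stand out from the
# light background. Pixel values range from 0 (black) to 255 (white).

def binarize(pixels, threshold=128):
    """Map each pixel to 1 (dark / ink) or 0 (light / background)."""
    return [[1 if p < threshold else 0 for p in row] for row in pixels]

# A tiny 3x3 "image": a dark vertical stroke on a light background.
image = [
    [250, 10, 245],
    [240, 15, 250],
    [255, 12, 248],
]

print(binarize(image))  # the middle column stands out as "ink"
```

A real OCR pipeline then matches such patterns of dark pixels against known character shapes.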
Emanuel Goldberg, an Israeli physicist and inventor, developed a machine in 1914 that could read characters and convert them into standard telegraph code. Around the same time, in 1913, Edmund Fournier d’Albe invented the optophone, a device used mainly by blind people to scan text: it produced time-varying chords of tones to identify each letter. That was the beginning of OCR. With the advent of computers and the internet, OCR is now available for free through products like Adobe Acrobat, Google Drive, etc.
Where is OCR used?
OCR is used in places like:
- Automatic data entry for checks, passports, invoices, bank statements, etc.
- Automatic number plate recognition
- Scanning text and reading it aloud for blind people
- Extracting business card information and storing it in a contact list, etc.
OCR in Android devices:
In this blog, we will learn how to implement OCR in Android applications. To do so, we will use the Mobile Vision Text API, which provides an easy way to integrate OCR on almost all Android devices.
We have previously explored how Face Detection works (check details here). Text detection is similar to face detection. You can pull the code directly from GitHub (link) and run it using Android Studio.
- Create a project on Android Studio with one blank Activity. Add the Google Play services dependency to it:
[gist https://gist.github.com/nandantal/31ab11c248f0ff7aa476b3a0f53db6ff ]
- Add permission for the camera in the manifest file:
[gist https://gist.github.com/nandantal/c4f29aea8cc5ad7659dab75ffa85fd2a ]
- Our main and only Activity is MainActivity.java, and the layout XML file is activity_main.xml. In activity_main.xml:
We have one SurfaceView to show the camera view and one TextView to show the detected text.
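A minimal sketch of such a layout could look like the following (the view IDs `surface_view` and `text_view` are illustrative assumptions; the actual file is in the linked repository):

```xml
<!-- Minimal sketch of activity_main.xml: a full-screen camera preview
     with a TextView overlay for the detected text. IDs are illustrative. -->
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <SurfaceView
        android:id="@+id/surface_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <TextView
        android:id="@+id/text_view"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:textColor="@android:color/white" />
</RelativeLayout>
```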
- In MainActivity, check whether the camera permission has been granted. If not, request it.
- On receiving the permission, create a TextRecognizer object.
- Create a CameraSource object to start the camera.
- Set a processor on the TextRecognizer to detect whether any text is visible on the camera screen. We will receive a callback and update the TextView overlaid on the camera screen. The code for starting the camera source looks like: [gist https://gist.github.com/nandantal/876db0ab5b1192a5b5acef9c14a4e61a]
- And to start the text-recognition processor: [gist https://gist.github.com/nandantal/c720cd03822a415d69cb68f2817e092f]
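Putting the permission check, recognizer, and camera source steps together, the core of MainActivity might look roughly like this. This is a hedged sketch against the Mobile Vision API (`surfaceView`, `TAG`, and `REQUEST_CAMERA_PERMISSION` are assumed fields; the exact code is in the linked repository):

```java
// Sketch of the detector and camera wiring inside MainActivity.
TextRecognizer textRecognizer =
        new TextRecognizer.Builder(getApplicationContext()).build();

if (!textRecognizer.isOperational()) {
    // The detector's native dependencies are not yet available on this device.
    Log.w(TAG, "Detector dependencies are not yet available");
} else {
    // CameraSource feeds preview frames from the camera into the detector.
    CameraSource cameraSource =
            new CameraSource.Builder(getApplicationContext(), textRecognizer)
                    .setFacing(CameraSource.CAMERA_FACING_BACK)
                    .setRequestedPreviewSize(1280, 1024)
                    .setAutoFocusEnabled(true)
                    .build();

    // Start the camera only after the runtime permission has been granted.
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            == PackageManager.PERMISSION_GRANTED) {
        try {
            cameraSource.start(surfaceView.getHolder());
        } catch (IOException e) {
            Log.e(TAG, "Could not start camera source", e);
        }
    } else {
        ActivityCompat.requestPermissions(this,
                new String[]{Manifest.permission.CAMERA},
                REQUEST_CAMERA_PERMISSION);
    }
}
```

`isOperational()` is worth checking because the Text API downloads its detector dependencies via Google Play services on first use.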
How to run the project:
- Use Android Studio 3.0+.
- Import the project.
- Run it on a phone.
Ensure that Google Play services is installed on the phone and that the phone is connected to the internet.
You will get an output similar to the following image after executing the project:
Using the Google Mobile Vision API, we can easily integrate face detection, text detection, or barcode detection on any Android device. And not only on Android: Google has introduced the same features for iOS devices as well. If you want to learn more about the Mobile Vision API, you can check the reference doc here.