
Tesseract OCR Tutorial

Want to implement OCR in your iOS app? This tutorial will walk you through how to integrate the Tesseract OCR framework into an app!

This is a companion discussion topic for the original entry at

Hi there, thanks for this tutorial, it has been a lot of fun so far.
I was wondering why libstdc++.dylib shows up as libstdc++.tbd for me in Xcode 7? Is that the same thing?
Thanks for your help in advance.

Just for future reference: Xcode was complaining about the image picker delegate. I had to change the method argument to didFinishPickingMediaWithInfo info: [String : AnyObject] instead of [NSObject : AnyObject].
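For anyone hitting the same warning, this is roughly what the corrected delegate method looks like in the Swift 2 / Xcode 7 era the thread is discussing (the body here is just a sketch of handing the picked image off, not the tutorial's exact code):

```swift
import UIKit

// UIImagePickerControllerDelegate — the info dictionary is
// [String : AnyObject] in newer SDKs, not [NSObject : AnyObject].
extension ViewController {
    func imagePickerController(picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [String : AnyObject]) {
        if let image = info[UIImagePickerControllerOriginalImage] as? UIImage {
            // hand the picked image to Tesseract here
        }
        picker.dismissViewControllerAnimated(true, completion: nil)
    }
}
```

If the signature doesn't match what UIKit expects, the method is silently never called, which is why Xcode flags the near-miss.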

Have a good day,

I always get this error: “error: linker command failed with exit code 1 (use -v to see invocation)”, even with Bitcode set to NO.

Hi @nhimkova, I made that update in the final project, but forgot to make the update in the actual tutorial. Thanks for pointing that out.

As for the difference between libstdc++.dylib and libstdc++.tbd, I don’t know off-hand, but either seems to work.

Hard to say what’s causing your error without more info, but are you sure you included all the necessary frameworks?


First, for the .tbd: it’s a compact text-based stub that Apple introduced as a stand-in for the dylib at link time, to reduce the size of the SDK.

Now, a question about Tesseract:
The tutorial shows how to handle an image, but is there a way to do the same with a video stream? What I want to do is something like Apple’s iTunes card reader: recognize text or characters in real time from video.

I wonder if it’s possible and if you can hint at a way to do it.
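One possible approach (not covered by the tutorial) is to capture frames with AVFoundation and run OCR on a throttled subset of them. A rough Swift 3 sketch, where `runTesseract(on:)` is a placeholder for however you invoke Tesseract in your project:

```swift
import AVFoundation
import UIKit

// Sketch: feed camera frames to an OCR pass on a background queue.
class LiveOCRController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private var isRecognizing = false

    func start() throws {
        guard let camera = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo) else { return }
        session.addInput(try AVCaptureDeviceInput(device: camera))

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "ocr.frames"))
        session.addOutput(output)
        session.startRunning()
    }

    // Called for every frame; OCR is slow, so drop frames while a pass runs.
    func captureOutput(_ captureOutput: AVCaptureOutput!,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                       from connection: AVCaptureConnection!) {
        guard !isRecognizing else { return }
        isRecognizing = true
        // 1. Convert sampleBuffer to a UIImage (via CVPixelBuffer/CIImage).
        // 2. runTesseract(on: image)   // placeholder for your OCR call
        // 3. Clear the flag when the pass finishes so the next frame runs.
        isRecognizing = false
    }
}
```

Real-time card readers typically also crop to a small on-screen window and only accept a result once several consecutive frames agree, since per-frame OCR is noisy.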

@tuzzo77 I was facing the same issue yesterday, but you’ll need to share the additional details shown in the error. Mine was related to the Test folder created in the project, so after disabling the tests it started working.

Hello! I have a question about the iPhone app “Google Translate” by Google. Do you think it is built on top of Tesseract? That application is AMAZING! Also super fast. That’s why I’m asking. Or do you think they actually leverage some of Google’s computing power in the cloud?
And lastly, do you know of any open source library that helps you select text/sections of an image by swiping your finger over it, like the Google Translate app?
Thank you very much!

Thanks for a great tutorial. You mentioned “Tesseract is unable to recognize handwriting”. What iPhone OCR SDK would you recommend for text drawn directly on the iPad Pro using the stylus? Something that might scan the input a character at a time?



Can we detect a particular pattern from an image, like a phone number or email ID?
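Tesseract itself only returns text; pattern extraction is a separate step you run on its output. One way, once you have the recognized string, is Foundation's NSDataDetector (the sample text below is made up for illustration):

```swift
import Foundation

// Run structured-data detection over OCR output.
let recognized = "Call 555-123-4567 or write to help@example.com"

let types: NSTextCheckingResult.CheckingType = [.phoneNumber, .link]
let detector = try! NSDataDetector(types: types.rawValue)
let range = NSRange(location: 0, length: (recognized as NSString).length)

for match in detector.matches(in: recognized, options: [], range: range) {
    if match.resultType == .phoneNumber, let phone = match.phoneNumber {
        print("phone:", phone)
    } else if match.resultType == .link, let url = match.url {
        print("link:", url)  // email addresses come back as mailto: URLs
    }
}
```

If OCR noise breaks the detector, a looser NSRegularExpression over the raw text is a fallback.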

Thanks for a great tutorial.

By the way, if you simply put your form in a tableView, you won’t have to deal with all the moving the view up and down for the keyboard. It will handle all of that for you automatically.

Any idea why the number is not being recognized in this image? I get a blank result. (fyi, the actual image is not rotated like that)

How can I scan only these 2 lines? (Model/Serial)

If you want to scan only an area of your image, why not try zonal OCR? That lets you restrict scanning and recognition to a specific region of the image. And if you need to recognize multiple zones on your images, that’s possible too.
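In practice, a simple form of zonal OCR is to crop the image to the region of interest before handing it to Tesseract. A minimal sketch, where the rect values are placeholders you'd measure for your own image:

```swift
import UIKit

// Crop a UIImage to a zone (in pixel coordinates of the underlying CGImage),
// preserving the original scale and orientation.
func crop(_ image: UIImage, to zone: CGRect) -> UIImage? {
    guard let cg = image.cgImage?.cropping(to: zone) else { return nil }
    return UIImage(cgImage: cg, scale: image.scale, orientation: image.imageOrientation)
}

// e.g. let label = crop(photo, to: CGRect(x: 40, y: 300, width: 520, height: 90))
// then pass `label` to Tesseract instead of the full photo.
```

Besides restricting what gets recognized, cropping also tends to speed up OCR and cut down on spurious matches from the rest of the image.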

Hi Steve,

I’m looking for something similar - were you able to track down a tool to recognize handwriting?

How can I activate the Tesseract OCR feature at the touch of a certain tab at the bottom of my app?

I want the options “Choose Existing” and “Take Photo” to appear when I tap the tab button to go to my second view.
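One common way to get that behavior is an action sheet presented from the view controller behind the tab. A sketch (class and outlet names are made up, not from the tutorial):

```swift
import UIKit

class ScanViewController: UIViewController,
        UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    // Call this from viewDidAppear (fires when the tab is selected)
    // or wire it to a button in the tab's view.
    @IBAction func scanTapped(_ sender: Any) {
        let sheet = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)
        sheet.addAction(UIAlertAction(title: "Take Photo", style: .default) { _ in
            self.presentPicker(source: .camera)
        })
        sheet.addAction(UIAlertAction(title: "Choose Existing", style: .default) { _ in
            self.presentPicker(source: .photoLibrary)
        })
        sheet.addAction(UIAlertAction(title: "Cancel", style: .cancel, handler: nil))
        present(sheet, animated: true, completion: nil)
    }

    private func presentPicker(source: UIImagePickerControllerSourceType) {
        guard UIImagePickerController.isSourceTypeAvailable(source) else { return }
        let picker = UIImagePickerController()
        picker.delegate = self
        picker.sourceType = source
        present(picker, animated: true, completion: nil)
    }
}
```

If you need to intercept the tab press itself rather than the view appearing, UITabBarControllerDelegate's shouldSelect method is another hook.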

FYI, the project could stand to be updated for Swift 3. Thanks!

Agree that this should be updated. The final project runs into issues on a real iPad.

I am having trouble adding a new language. For now I want to add Korean; please provide a link to the trained language data that matches the SDK used in this project. When I download the trained data from the link provided and use it in this project, it crashes! Fingers crossed,