Core ML and Vision Tutorial: On-device training on iOS | raywenderlich.com

This tutorial introduces you to Core ML and Vision, two cutting-edge iOS frameworks, and how to fine-tune a model on the device.


This is a companion discussion topic for the original entry at https://www.raywenderlich.com/7960296-core-ml-and-vision-tutorial-on-device-training-on-ios

Hello, thanks for the tutorial.
Is it better to use VNCoreMLRequest compared to VNRecognizeTextRequest to recognize the text?

VNRecognizeTextRequest only recognizes printed or handwritten text. Since the tutorial classifies drawings with a custom Core ML model rather than recognizing text, VNCoreMLRequest is the right choice here.
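To illustrate the difference, here is a minimal sketch of building a VNCoreMLRequest around a custom classifier. The model class name `UpdatableDrawingClassifier` is an assumption standing in for whatever compiled model the project uses; the completion handler and crop option are typical usage, not the tutorial's exact code.

```swift
import Vision
import CoreML

// Sketch: wrap a custom Core ML classifier in a Vision request.
// `UpdatableDrawingClassifier` is a placeholder for the project's model class.
func makeClassificationRequest() throws -> VNCoreMLRequest {
    let coreMLModel = try UpdatableDrawingClassifier(
        configuration: MLModelConfiguration()
    ).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Predicted: \(top.identifier), confidence: \(top.confidence)")
    }
    // Center-crop and scale the drawing to the model's expected input size.
    request.imageCropAndScaleOption = .centerCrop
    return request
}
```

You would then pass this request to a `VNImageRequestHandler` created from the drawing's image, just as you would with a VNRecognizeTextRequest.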

Thanks for this tutorial! I tried running the final version of the app but am not able to draw on the three canvases in the Add Shortcut part of the app. I’ve tried both in the simulator and on device.

Is there something that must be done in advance to enable drawing?

Adding canvasView.drawingPolicy = .anyInput in the function setupPencilKitCanvas() solved the issue for me. Note that drawingPolicy is only available on iOS 14 and later, so wrap the statement in an availability check like so:

if #available(iOS 14, *) {
    canvasView.drawingPolicy = .anyInput
}

You can find it in DrawingView in the Views folder.


This was a fun tutorial, but I was also frustrated for a bit when I couldn’t draw on the canvases. My fix was to add

canvasView.allowsFingerDrawing = true 

in setupPencilKitCanvas() in DrawingView.swift

I note that @ryanneilstroud has also offered a solution to this, and theirs may be more future-proof, but this worked for me.
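Combining the two suggestions above, a version of setupPencilKitCanvas() that works on both older and newer iOS versions might look like this. This is a sketch, not the tutorial's actual code; it assumes the method has access to the view's PKCanvasView.

```swift
import PencilKit

// Sketch: enable finger drawing on the canvas across iOS versions.
// On iOS 14+, drawingPolicy replaces the deprecated allowsFingerDrawing flag.
func setupPencilKitCanvas(_ canvasView: PKCanvasView) {
    if #available(iOS 14, *) {
        canvasView.drawingPolicy = .anyInput
    } else {
        canvasView.allowsFingerDrawing = true
    }
}
```

With this in place, the three canvases in the Add Shortcut screen accept both finger and Apple Pencil input in the simulator and on device.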