I have made a Stack Overflow post about this, and you can see it here:
It has some extra detail, such as code and output from Instruments. The problem is that I have multiple layers stacked on top of each other, and each layer contains an irregularly shaped UIImage.
I need the user to be able to select a specific image and drag / rotate it with their finger.
Touches must not be detected in the transparent areas of an image, so I found a solution that uses Core Graphics to work out the alpha of the pixel at the touch point.
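The approach I'm using is along these lines (a simplified sketch, not my exact code, which is in the linked post; it assumes a UIImageView subclass, an image stretched to fill the view's bounds, and illustrative names like alphaThreshold):

```swift
import UIKit

/// Hypothetical UIImageView subclass: ignores touches that land on
/// (nearly) transparent pixels of its image.
class AlphaHitImageView: UIImageView {

    /// Minimum alpha a pixel needs for the touch to count as "inside".
    var alphaThreshold: CGFloat = 0.1

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard super.point(inside: point, with: event) else { return false }
        return alpha(at: point) >= alphaThreshold
    }

    /// Draws the single pixel under `point` into a 1x1 RGBA context and
    /// reads back its alpha. This re-renders on every hit test, which is
    /// the expensive part.
    private func alpha(at point: CGPoint) -> CGFloat {
        guard let cgImage = image?.cgImage else { return 1 }

        // Map the view point into image pixel coordinates (assumes the
        // image fills the bounds; contentMode handling omitted).
        let imagePoint = CGPoint(x: point.x * CGFloat(cgImage.width) / bounds.width,
                                 y: point.y * CGFloat(cgImage.height) / bounds.height)

        var pixel: [UInt8] = [0, 0, 0, 0]
        return pixel.withUnsafeMutableBytes { buffer -> CGFloat in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: 1, height: 1,
                                          bitsPerComponent: 8, bytesPerRow: 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
            else { return 1 }

            // Shift the image so the pixel under the touch lands at (0, 0)
            // in this 1x1 context, then draw and read its alpha byte.
            context.translateBy(x: -imagePoint.x,
                                y: imagePoint.y - CGFloat(cgImage.height))
            context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                             width: CGFloat(cgImage.width),
                                             height: CGFloat(cgImage.height)))
            return CGFloat(buffer[3]) / 255.0
        }
    }
}
```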
This works, but it's massively expensive: there is roughly a one-second lag (on a 12.9" iPad Pro) between the user starting to move their finger and the view starting to move with it.
My app has no control over the hit test per se: I'm not triggering it, and I can't immediately stop it from testing further down the view stack once it has found an acceptable view for the user to touch.
I do, however, have to run this alpha test every single time the user puts a finger down on the screen.
Can someone please help me with this, and perhaps suggest a more efficient mechanism?
I find it crazy that Apple hasn't already done this work for us.