Core Graphics: Optimal method for preventing touches on transparent areas

I have made a Stack Overflow post about this; you can see it here:

https://stackoverflow.com/questions/56943255/ios-most-optimal-means-to-hit-test-for-transparency

It has some extra detail, like code and output from Instruments. The problem is that I have multiple layers stacked on top of each other, and each layer contains an irregularly shaped UIImage.

I need the user to be able to select a specific image and drag / rotate it with their finger.

I can’t allow touches to be detected in the transparent areas, so I found a solution that uses Core Graphics to work out the alpha of the pixel at the touch point.

This works, but it’s massively expensive, causing roughly a one-second lag (on a 12.9" iPad Pro) between the user starting to move their finger and the view starting to follow it.
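For reference, the technique boils down to something like the sketch below (a simplified illustration, not my exact code; the real code and the Instruments output are in the linked post). The class name and the alpha threshold are placeholders, and it assumes the image is scaled to fill the view’s bounds:

```swift
import UIKit

// Illustrative sketch: a UIImageView subclass that rejects touches on transparent
// pixels by rendering only the touched pixel into a 1x1 bitmap context.
class AlphaHitImageView: UIImageView {

    // Alpha values at or below this threshold count as "transparent" (placeholder value).
    var alphaThreshold: UInt8 = 10

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard bounds.contains(point), let cgImage = image?.cgImage else { return false }

        // Map the touch from view coordinates to pixel coordinates
        // (assumes the image fills the view's bounds).
        let pixelX = point.x * CGFloat(cgImage.width) / bounds.width
        let pixelY = point.y * CGFloat(cgImage.height) / bounds.height

        // Render just the touched pixel into a 1x1 RGBA buffer.
        var pixel = [UInt8](repeating: 0, count: 4)
        pixel.withUnsafeMutableBytes { buffer in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: 1,
                                          height: 1,
                                          bitsPerComponent: 8,
                                          bytesPerRow: 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
                                              | CGBitmapInfo.byteOrder32Big.rawValue)
            else { return }

            context.setBlendMode(.copy)
            // Shift the image so the pixel of interest lands on the context's single pixel
            // (Core Graphics uses a bottom-left origin, hence the flipped y term).
            context.translateBy(x: -pixelX, y: pixelY - CGFloat(cgImage.height))
            context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                             width: cgImage.width, height: cgImage.height))
        }

        return pixel[3] > alphaThreshold   // RGBA layout: alpha is the last byte
    }
}
```

Even though only one pixel is read back, the whole image still has to be decoded for every touch, which I suspect is where the cost in the profile comes from.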

My app has no control over the hit test per se: I’m not the one triggering it, and I can’t immediately stop it from testing further and deeper into the view stack once it has found an acceptable view for the user to touch.

I do, however, have to run this alpha test every single time the user puts a finger down on the screen.

Can someone please help me with this, and perhaps suggest a more efficient mechanism?

I find it crazy Apple hasn’t already done this work for us.

Looking at the profiler screenshots at that Stack Overflow link, I noticed that at the very bottom there is an inflate call in libz.1 that is taking a lot of time. I think that means your images are compressed PNGs and are getting decompressed (inflated) over and over. That should only have to happen once, when each image is first loaded. Maybe there is a way to load the PNGs into a UIImage once and then use that to load the views? Not really sure…
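One way to do that, roughly (untested sketch, and the type/method names are made up): render each image’s alpha channel into a plain byte array once at load time, and answer every later hit test with a simple array lookup.

```swift
import UIKit

// Rough sketch: decode the image once and keep only its alpha channel,
// so later hit tests are a cheap array lookup instead of another PNG inflate.
struct AlphaMask {
    let width: Int
    let height: Int
    private let alphas: [UInt8]

    init?(image: UIImage) {
        guard let cgImage = image.cgImage else { return nil }
        let width = cgImage.width
        let height = cgImage.height

        // Draw the image into an RGBA buffer exactly once (this is where the decode happens).
        var rgba = [UInt8](repeating: 0, count: width * height * 4)
        let drawn = rgba.withUnsafeMutableBytes { buffer -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: width * 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
                                              | CGBitmapInfo.byteOrder32Big.rawValue)
            else { return false }
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
            return true
        }
        guard drawn else { return nil }

        // Keep only the alpha bytes (every 4th byte), so the cache is one byte per pixel.
        self.width = width
        self.height = height
        self.alphas = stride(from: 3, to: rgba.count, by: 4).map { rgba[$0] }
    }

    // `point` is in pixel coordinates with a top-left origin, like UIKit.
    func isOpaque(at point: CGPoint, threshold: UInt8 = 10) -> Bool {
        let x = Int(point.x), y = Int(point.y)
        guard x >= 0, x < width, y >= 0, y < height else { return false }
        return alphas[y * width + x] > threshold
    }
}
```

You’d build one of these per image when it loads (one byte per pixel, so a few MB for a large image), keep it next to the view, and have your hit test scale the touch point into pixel coordinates and call isOpaque(at:).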

You may really want to bite the bullet and figure out how to make some CGPaths that outline your images, even though they are irregular. That would make the inside-versus-outside test much more efficient. There are probably graphics programs that can generate a vector outline of an image for you.
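Once you have an outline, the hit test is basically free; something like this sketch (it assumes the path is supplied in the view’s own coordinate space):

```swift
import UIKit

// Sketch: if the opaque region of each image is described by a vector outline,
// hit testing is a cheap path-containment check instead of any pixel work.
class OutlineHitImageView: UIImageView {

    // The opaque region of the image, expressed in this view's coordinate space.
    var outline: UIBezierPath?

    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        guard let outline = outline else {
            return super.point(inside: point, with: event)
        }
        return outline.contains(point)
    }
}
```

UIKit’s hit testing skips any view whose point(inside:with:) returns false, so the touch should fall through to the next image view in the stack, which I think is the behaviour you want.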

Or you might look at SpriteKit for working with a stack of images. It has tools for designating touchable areas and such. It is a fairly elaborate system, so it might take a while to work out how to use it for your purpose. Part of the answer to “why hasn’t Apple done this?” is that they have done quite a bit of it in SpriteKit.
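As a rough, untested starting point (the node names and layer images are just placeholders): give each sprite a physics body derived from its texture, then ask the physics world which body is under the touch instead of relying on node bounding boxes.

```swift
import SpriteKit

// Untested sketch: stack the images as SKSpriteNodes, give each a physics body
// derived from its texture (so it follows the opaque shape), and hit test by
// asking the physics world which body is under the touch.
class ImageStackScene: SKScene {

    override func didMove(to view: SKView) {
        physicsWorld.gravity = .zero   // bodies are only used for hit testing here

        for (index, name) in ["layer1", "layer2", "layer3"].enumerated() {  // placeholder names
            let texture = SKTexture(imageNamed: name)
            let sprite = SKSpriteNode(texture: texture)
            sprite.name = name
            sprite.position = CGPoint(x: frame.midX, y: frame.midY)
            sprite.zPosition = CGFloat(index)

            // The body's shape comes from the texture's non-transparent pixels.
            let body = SKPhysicsBody(texture: texture, size: sprite.size)
            body.isDynamic = false     // don't let the simulation move anything
            sprite.physicsBody = body

            addChild(sprite)
        }
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let location = touches.first?.location(in: self) else { return }

        // nodes(at:) / atPoint(_:) test rectangular bounding boxes, so query the physics
        // world instead; with overlapping layers, keep the topmost (highest zPosition) hit.
        var topmost: SKNode?
        physicsWorld.enumerateBodies(at: location) { body, _ in
            if let node = body.node,
               node.zPosition >= (topmost?.zPosition ?? -CGFloat.greatestFiniteMagnitude) {
                topmost = node
            }
        }
        if let touchedNode = topmost {
            print("touched node:", touchedNode.name ?? "unnamed")
            // ...begin dragging / rotating touchedNode here
        }
    }
}
```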

Hi @annemarie1185,
If I were doing what you’re doing, I’d create a mask or a polygon for each image; that way you can check whether the point is inside the polygon, which is much faster than calculating the alpha.
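For example, with the outline stored as an array of vertices, the containment check is just the standard ray-casting test (sketch; polygonContains is a made-up helper name):

```swift
import CoreGraphics

// Standard ray-casting (even-odd) point-in-polygon test.
// `vertices` is the image's outline in the same coordinate space as `point`.
func polygonContains(_ vertices: [CGPoint], point: CGPoint) -> Bool {
    guard vertices.count >= 3 else { return false }
    var inside = false
    var j = vertices.count - 1
    for i in 0..<vertices.count {
        let a = vertices[i], b = vertices[j]
        // Toggle for every polygon edge a horizontal ray from `point` crosses.
        if (a.y > point.y) != (b.y > point.y),
           point.x < (b.x - a.x) * (point.y - a.y) / (b.y - a.y) + a.x {
            inside.toggle()
        }
        j = i
    }
    return inside
}
```

(You could equally build a CGPath from the vertices and use its contains(_:) check, as suggested above.)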

cheers,

When I saved the PNGs from Photoshop, I saved them with zero compression. They are large (2048×2048), and they are indeed loaded into UIImageViews. There are 7 UIImageViews stacked on top of each other.

I’ve looked at SpriteKit; I saw a Stack Overflow post about assigning a physics body based on the texture’s alpha to the SKNode.

I did all of that today, and the nodes still aren’t detected on touch. I’m getting really strange and inconsistent responses, but no real touch detection.

I admit everything I know about SpriteKit I learned today, but I’m utterly blocked and lost on what’s going on and what to do next… :frowning:

Very disheartening.
