In this tutorial, you’ll use Metal and the Vision framework to remove moving objects from pictures in iOS. You’ll learn how to stack, align and process multiple images so that any moving object disappears.
Did anyone else lose color in their final image? I’m only getting red and green, and the result looks washed out. I’m pretty sure I followed everything correctly.
You multiply the stack count by the current average to get the sum of the pixels already seen. You then add the pixel from the new image and divide everything by the stack count + 1 (because the new image increases the stack count by one).
This should ensure that the average you see is a balanced average of all images, provided the correct stack count is passed in with each filter operation.
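The update described above can be sketched in plain Swift (the function and variable names here are illustrative, not taken from the tutorial’s shader code):

```swift
// Incremental (running) average: given the current average of `stackCount`
// values, fold in one new value without storing the whole stack.
// newAverage = (currentAverage * stackCount + newValue) / (stackCount + 1)
func runningAverage(currentAverage: Double, stackCount: Int, newValue: Double) -> Double {
    // Recover the sum of the pixels already seen.
    let sumSoFar = currentAverage * Double(stackCount)
    return (sumSoFar + newValue) / Double(stackCount + 1)
}

// Folding in values one at a time matches the plain mean of all values.
let pixels: [Double] = [0.2, 0.4, 0.9, 0.5]
var average = 0.0
for (index, pixel) in pixels.enumerated() {
    average = runningAverage(currentAverage: average, stackCount: index, newValue: pixel)
}
// average == (0.2 + 0.4 + 0.9 + 0.5) / 4 == 0.5
```

As long as the stack count passed to each filter pass matches the number of images already folded in, the result is identical to averaging the whole stack at once.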
Back in Swift land, you ensure that the stack count is correct by updating it inside the for loop that processes each image.
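That loop might look something like the following hedged sketch, where an "image" is reduced to an array of pixel values and `applyAverageFilter` stands in for the tutorial’s Metal filter call (both names are placeholders, not the tutorial’s API):

```swift
// Hypothetical stand-in for the filter pass: fold one "image" into the
// running average, weighted by how many images the average already holds.
func applyAverageFilter(average: inout [Double], newImage: [Double], stackCount: Int) {
    for i in average.indices {
        average[i] = (average[i] * Double(stackCount) + newImage[i]) / Double(stackCount + 1)
    }
}

let images: [[Double]] = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
// The average starts as the first image, so the stack count starts at 1.
var average = images[0]
for (offset, image) in images.dropFirst().enumerated() {
    applyAverageFilter(average: &average, newImage: image, stackCount: offset + 1)
}
// average is now the element-wise mean of all three images: [0.5, 0.5]
```

Passing a stale or constant stack count here would over- or under-weight later images, which is one way the averaged output can end up looking wrong.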
When the photos are aligned, the transparent areas turn dark once the average calculation completes. How could the calculation be changed so that averaging a transparent pixel with an opaque one doesn’t produce dark lines? Thanks for the tutorial.
I guess it depends on the color of the transparent pixels. Since I wrote this tutorial using output from the camera, I didn’t take transparent pixels into account. Off the top of my head, you could include the fourth (alpha) channel in the average calculation, or weight each color value by its alpha percentage.
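The alpha-weighting idea could be sketched like this (an assumption about a possible fix, not code from the tutorial): accumulate each color weighted by its alpha and divide by the total alpha, so fully transparent pixels contribute nothing to the average.

```swift
// Hedged sketch of alpha-weighted averaging for a single color channel:
// each sample's color is weighted by its alpha, so transparent pixels
// (alpha == 0) don't darken the result.
struct Sample {
    let color: Double
    let alpha: Double
}

func alphaWeightedAverage(_ samples: [Sample]) -> Double {
    let totalAlpha = samples.reduce(0) { $0 + $1.alpha }
    // If every sample is transparent, there is no color to recover.
    guard totalAlpha > 0 else { return 0 }
    let weightedSum = samples.reduce(0) { $0 + $1.color * $1.alpha }
    return weightedSum / totalAlpha
}

let samples = [
    Sample(color: 0.8, alpha: 1.0), // opaque pixel
    Sample(color: 0.0, alpha: 0.0), // fully transparent; ignored
]
// A plain average would give 0.4 (a dark line); the weighted version gives 0.8.
```

The same weighting would need to be applied per channel in the shader, but the principle is identical.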