And here is the equivalent in Swift: https://github.com/gsabran/HelloOpenGL_Swift (based on Part 1, done by drouck: https://github.com/drouck/HelloOpenGL_Swift)
Let me know if you have any comments/suggestions!
Thanks for the great tutorial! Here is the project for Objective-C, built with Xcode 8.2.1: https://gitlab.com/kirstone/OpenGL-ES-Tutorial-RayWenderlich
Great tutorials. I do have a follow-up question – if I wanted to take a 2D CGPath and turn it into vertex coordinates, should I just use the x, y values from my path and 0 for the z coordinate? Or is there something else (math?) that I need to be thinking about? My end goal is to be able to apply textures to regions defined by VNFaceObservations. I’m already able to get landmark points from the Vision API, but I’m struggling a bit with how to use those points as the basis for vertex coordinates… maybe I’m overthinking it? I guess my concern with just zeroing out the z axis is that the final texture will feel flat rather than feeling as if it has depth… Any resources/links/suggestions appreciated.
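For what it’s worth, zeroing out z is a reasonable starting point. A minimal sketch of that mapping might look like the following (hedged: the `Vertex` struct and function names here are hypothetical, not from the tutorial; it assumes the landmark points are normalized to 0...1, as the Vision API returns them, and maps them into OpenGL clip space -1...1):

```swift
import Foundation

// Hypothetical vertex layout; real code would match the tutorial's
// vertex struct and attribute bindings.
struct Vertex {
    var x: Float
    var y: Float
    var z: Float
}

// Map a normalized landmark point (0...1) into clip space (-1...1).
// z is simply 0, which produces a flat mesh; perceived depth would
// need per-point depth estimates or lighting/normal-map tricks instead.
func vertex(fromNormalized point: CGPoint) -> Vertex {
    return Vertex(x: Float(point.x) * 2 - 1,
                  y: Float(point.y) * 2 - 1,
                  z: 0)
}

// Convert a whole landmark region's points into a vertex array.
func vertices(fromNormalized points: [CGPoint]) -> [Vertex] {
    return points.map(vertex(fromNormalized:))
}
```

The flat-feeling concern is real: with z = 0 everywhere the mesh is a plane, so any depth impression has to come from the texture itself or from shading, not from the geometry.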
This tutorial is more than six months old, so questions regarding it are no longer supported at the moment. We will update it as soon as possible. Thank you! :]