3D Graphics with Metal | raywenderlich.com

In this course you'll get an introduction to computer graphics using Metal on the GPU. You'll render 3D models and even write a simple game using your very own game engine.


This is a companion discussion topic for the original entry at https://www.raywenderlich.com/1258241-3d-graphics-with-metal

What a brilliant course! Just finished it. I learned a lot, and it was
fun.

Excellent presentation - clear, friendly, and enthusiastic.

The material is well thought out. I like how you started out pretty
basic, then revisited and refactored several times as we learned more.

Finishing off with a playable game that used the engine we developed
really tied everything together.

One little improvement could be to add more challenges. Having done
other courses, I found that the challenges are where you really
learn the material.

Other than that - great! Thanks!

@m_a_c thanks for your feedback! I'll certainly keep that in mind when/if the course gets updated.


Hi Caroline, thanks a lot for this wonderful class. I've learned a lot from it. I am also reading the book Metal By Tutorials.

I have a question related to 3D models and animation. Is it possible to rotate/scale/transform a single submesh from keyframe values that are defined in the code, as opposed to rotating/scaling/transforming submeshes from animations defined by joint movement animation clips created by Blender?

For example, I created a 3D model of a face using Blender. I can rotate it using the model matrix. But now I want to rotate just the eyeball submeshes. And I don't want to use the predefined animation clips that are created using Blender.

If it is possible, is it recommended?

Hi!

If you can identify the submesh, you can certainly send a different model matrix to the shader function. In train.obj, each material (submesh) has a unique name, e.g. “Chassis”, “Wheel”. In Submesh init(), you can verify this by printing out mdlSubmesh.material?.name.

Maybe you could always send a matrix to the shader as a parameter (identity by default), and only for that particular submesh add the extra rotation. It's not standard to use submeshes in this way, however.
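
As a very rough sketch of what that could look like when drawing each submesh (the "Eyeball" name, eyeAngle and buffer index 25 are placeholders of mine, not names from the course):

for submesh in mesh.submeshes {
  // Identity for every submesh except the one you want to rotate.
  var extraMatrix = matrix_identity_float4x4
  if submesh.mtkSubmesh.name == "Eyeball" {
    extraMatrix = float4x4(simd_quatf(angle: eyeAngle, axis: [0, 1, 0]))
  }
  commandEncoder.setVertexBytes(&extraMatrix,
                                length: MemoryLayout<float4x4>.stride,
                                index: 25)
  commandEncoder.drawIndexedPrimitives(
    type: .triangle,
    indexCount: submesh.mtkSubmesh.indexCount,
    indexType: submesh.mtkSubmesh.indexType,
    indexBuffer: submesh.mtkSubmesh.indexBuffer.buffer,
    indexBufferOffset: submesh.mtkSubmesh.indexBuffer.offset)
}

The vertex function would then multiply the position by that extra matrix as well as the usual model matrix.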

Alternatively, you could split the eyes out into their own object, load the two models (head and eyes), and parent the eyes to the head in the scene. That would make rotation even easier.
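
In matrix terms, parenting just means the eyes' model matrix is built on top of the head's, so rotating the head carries the eyes along, while an extra eye rotation affects only the eyes. A tiny standalone illustration (the angles are made up):

import simd

let headAngle: Float = .pi / 6
let eyeAngle: Float = .pi / 12

let headRotation = float4x4(simd_quatf(angle: headAngle, axis: [0, 1, 0]))
let eyeLocalRotation = float4x4(simd_quatf(angle: eyeAngle, axis: [0, 1, 0]))

// The head uses its own matrix; the eyes inherit it and add their local rotation on top.
let headModelMatrix = headRotation
let eyeModelMatrix = headModelMatrix * eyeLocalRotation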

It's standard to make some kind of rig though. If you use USD, you can access the name of the transform component. If you download the robot from Apple's gallery (Quick Look Gallery - Augmented Reality - Apple Developer), and drag it into Xcode, you'll be able to see how the model is broken down. This isn't a skeleton - it uses transform components. The skeleton USD model supplied with the book uses a skeleton, and when you click on the model in Xcode you can compare its hierarchy with the hierarchy of the robot.

I just realised that this question is for the video and not the book :woman_facepalming:!

As you have the book, Chapter 8, Character Animation, may help explain this further.

Hi Caroline, thanks for the answers, they are super helpful!

I see that the robot with the transform component has structured Xform divisions in the usda file. Some Xform divisions have their own transforms. That's very interesting. I just installed USD on my Mac, which was a very cumbersome process. Is the robot with the transform component made with USD?

Apple provide compiled USD tools at the bottom of Quick Look Gallery - Augmented Reality - Apple Developer. If you download usdPython, you have various USD tools available to you. USD.command opens up Terminal and you can use usdcat to see what the file consists of.
I didn't have much luck with USDPython 0.63, but I didn't spend a lot of time with it, as 0.62 worked for me. 0.63 installs to /Applications, whereas 0.62 is just archived directories.

When you say made with USD, I'm not sure what you mean. USD is a file format with various supporting apps such as usdcat to pretty print the file.

If you've read through chapter 8, you can take the final code and import the robot to see it animating. (I'm referring to the book Metal by Tutorials here, not the videos!)

In Renderer, I changed the skeleton model loading to:

let skeleton = Model(name: "toy_robot_vintage.usdz")
skeleton.rotation = [0, .pi, 0]
skeleton.scale = [0.1, 0.1, 0.1]

In Mesh.swift, in Mesh init(), skeleton will be nil for the robot, as it has no skeleton. You can verify this with print("skeleton: ", skeleton).

However, each mesh will have a TransformComponent. You can verify this with print("Transform: ", mdlMesh.name, transform) - that's also in Mesh's init().

The robot is split up into several meshes, and Model will iterate through each mesh in render(renderEncoder:uniforms:fragmentUniforms).

Compare this with skeleton.usda, which has a skeleton rig with three joints, but only one mesh.

P.S. I made a bad choice of file name for the skeleton model! Please don't confuse skeleton.usda, which is the model (and could have been called anything), with the Skeleton struct, which holds the joints from any loaded Model. In Renderer, skeleton refers to the model skeleton.usda, whereas in Mesh, skeleton refers to any Model's joint hierarchy.

Hi Caroline,
I've got something interesting to share:

I tried to transform a single submesh, but it wasn't successful. Not only did the submesh that was supposed to transform do so, but all the other submeshes transformed in the same way as well.
I used the method you mentioned: identify the submesh that needs to be transformed by checking submesh.name, then send the transform matrix through setVertexBuffer at index 21; if it is not the submesh that needs to be transformed, send an identity matrix instead.

Here is my code in render(commandEncoder: MTLRenderCommandEncoder, submesh: Submesh):

    var submeshPointer = submeshesBuffer.contents().bindMemory(to: Submeshes.self, capacity: instanceCount)
    
    for submeshTransform in submeshesTransforms {
        if mtkSubmesh.name == "eyelid" {
            submeshPointer.pointee.modelMatrix = submeshTransform.matrix
            print("šŸ¹ \(mtkSubmesh.name), submeshTransform: \(submeshPointer.pointee.modelMatrix)")
        } else {
            submeshPointer.pointee.modelMatrix = .identity()
            print("šŸ¦Š \(mtkSubmesh.name), submeshTransform: \(submeshPointer.pointee.modelMatrix)")
        }
        submeshPointer = submeshPointer.advanced(by: 1)
    }

    commandEncoder.setVertexBuffer(submeshesBuffer, offset: 0, index: 21)

I printed out submeshPointer.pointee.modelMatrix; the results are below (because there are two eyes, I used the Instance class, so there are two eyeballs/eyelids):

:fox_face: eyeball, submeshTransform: simd_float4x4([
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]])
:fox_face: eyeball, submeshTransform: simd_float4x4([
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0]])
:hamster: eyelid, submeshTransform: simd_float4x4([
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 1.0, 0.0, 1.0]])
:hamster: eyelid, submeshTransform: simd_float4x4([
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 1.0, 0.0, 1.0]])

You can see that the eyelid's transform matrix is different from the eyeball's, as it should be, at least at this point. But after I ran the app, things changed. Both the eyeball and the eyelid are transformed. I checked the value of the vertexBuffer at index 21 in the GPU debugger and found that it is the same for the eyeball and the eyelid. Here are the screenshots:

vertexBuffer at index 21 for the eyeball submesh:

vertexBuffer at index 21 for the eyelid submesh:

So even if I send different matrices for different submeshes through the vertexBuffer at index 21, they still receive the same value. I don't understand why this happens at all.

Hi Caroline:
Thank you for the info on the USD file and how the toy drummer model rotates its parts by transforming meshes. I realized that the toy drummer doesn't animate by transforming certain submeshes, but rather by transforming certain meshes.

Remember in the first part of chapter 4 of Metal by Tutorials, where you set the matrix for the points, and you had to create a second buffer to hold the data for the second draw call?

It looks like you are overwriting the matrix on the second draw call before doing commit. setVertexBuffer doesn't copy the data; every draw call that uses the buffer sees whatever it contains when the command buffer actually executes on the GPU.

You are setting up an array of submesh matrices, though. Are you accessing the correct array element in the shader?

Hi Caroline,
thanks for the reply.
I think I might be able to make it clearer by using the code we wrote in the 3D Graphics with Metal video tutorial.
Remember we drew 100 trees using the Instance class? I tried to move the leaves submesh up by 2 units.
Here's how I did it and what result I came up with:

  1. I created var transformsLeaves: [Transform] in the Instance class to store all the transform matrices for the leaves submesh.

  2. I assigned values to transformsLeaves in the GameScene class:

     for i in 0..<100 {
         trees.transforms[i].position.x = Float(i) - 50
         trees.transforms[i].position.z = 2
         for mesh in trees.meshes {
             if mesh.mtkMesh.name == "Cylinder.001_Cylinder.006_Leaves" {
                 for submesh in mesh.submeshes {
                     if submesh.mtkSubmesh.name == "Cylinder.001_Cylinder.006_Leaves" {
                         trees.transformsLeaves[i].position.y = 2
                     }
                 }
             }
         }
     }
    
  3. I passed the transformsLeaves to the shader function through a vertex buffer at index 22

     var leafPointer = leafBuffer.contents().bindMemory(to: LeafInstances.self, capacity: instanceCount)
     for transform in transformsLeaves {
         leafPointer.pointee.modelMatrix = transform.matrix
         leafPointer = leafPointer.advanced(by: 1)
     }
     commandEncoder.setVertexBuffer(leafBuffer, offset: 0, index: 22)
    
  4. In the shader function, I multiply the vertex position by the model matrix, just like we did with the instance model matrix.

    Instances instance = instances[instanceID];
    LeafInstances leaf = leaves[instanceID];
    VertexOut out {
        .position = uniforms.projectionMatrix * uniforms.viewMatrix * uniforms.modelMatrix * instance.modelMatrix * leaf.modelMatrix * vertexBuffer.position,
        .worldNormal = (uniforms.modelMatrix * instance.modelMatrix * float4(vertexBuffer.normal, 0)).xyz,
        .worldPosition = (uniforms.modelMatrix * instance.modelMatrix * vertexBuffer.position).xyz,
        .uv = vertexBuffer.uv
    };
    

The result is that the whole tree moved up by 2 units, not just the leaves submesh.

What result were you expecting?

The vertex function seems to be multiplying all vertices by that y = 2 that you set in step 2.

You're going through setting each [i] for position, and then you iterate through each submesh. But every tree has a leaves submesh, so you're setting y = 2 for every [i].

In the vertex function you are multiplying every vertex by that y = 2, no matter what submesh it is in.

Or did I miss something? Are you able to zip up a project for me?

Hi Caroline,
I gave you the wrong example. So sorry about that. You are right, every tree has a leaves submesh so every [i] has y = 2.

I have another example, also using the Instance class from the video tutorial.
In the Instance class, in render(commandEncoder: MTLRenderCommandEncoder, submesh: Submesh), I replaced these lines:

    for transform in transforms {
        pointer.pointee.modelMatrix = transform.matrix
        pointer = pointer.advanced(by: 1)
    }

with these lines:

    for transform in transforms {
        if mtkSubmesh.name == "Cylinder.001_Cylinder.006_Leaves" {
            var transformCopy = transform
            transformCopy.position.y = 2
            pointer.pointee.modelMatrix = transformCopy.matrix
            print("🐷 pointer.pointee.modelMatrix: \(pointer.pointee.modelMatrix)")
            pointer = pointer.advanced(by: 1)
        } else {
            pointer.pointee.modelMatrix = transform.matrix
            print("🦊 pointer.pointee.modelMatrix: \(pointer.pointee.modelMatrix)")
            pointer = pointer.advanced(by: 1)
        }
    }

I expected the leaves submesh to move up the y-axis by 2 units while the trunk submesh stayed in the same position. But the trees didn't move at all. Interestingly, since I printed out pointer.pointee.modelMatrix, which I guess slows things down, I saw two rows of trees appearing, at y = 0 and at y = 2. I included a video in the zip file to show you what I mean.

MyMetalRenderer.zip (2.7 MB)

As I said previously (about overwriting the matrix before committing), this is the same sort of thing.

In Instance, set up two instance buffers, one for each submesh.
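
A rough sketch of that setup, assuming the Instances struct and instanceCount from the course (instanceBuffer1 is just the name I use in the code below):

let length = MemoryLayout<Instances>.stride * instanceCount
instanceBuffer = device.makeBuffer(length: length, options: [])!
// A second buffer reserved for the leaves submesh.
instanceBuffer1 = device.makeBuffer(length: length, options: [])!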

At the start of render, determine which submesh you are rendering:

let mtkSubmesh = submesh.mtkSubmesh
let buffer: MTLBuffer
if mtkSubmesh.name == "Cylinder.001_Cylinder.006_Leaves" {
  buffer = instanceBuffer1
} else {
  buffer = instanceBuffer
}
var pointer = buffer.contents().bindMemory(to: Instances.self, capacity: instanceCount)

After the loop, set the buffer:

commandEncoder.setVertexBuffer(buffer, offset: 0, index: 20)

That worked for me.

Hi Caroline:
I see. I could have tried this earlier but when I went back to read Chapter 4 again, I thought my case was different. Anyway, thank you so much for answering my questions!


I'm very glad you raised this issue and made me think about it :slight_smile:

Hi Caroline,
:grin: I appreciate it!

I have another question; it's a simple one. I know that in animation workflows, shape keys are used frequently, and joints are used to control the shape keys. In Chapter 8, Animation, in the Metal by Tutorials book, I learned how to implement animations with joints and animation clips. I am curious: is it possible to use shape keys for animations with Metal?

Chapter 13, Instancing and Procedural Generation has a section on Morphing.

Basically you hold a buffer of vertex positions in one state and a second buffer of positions in the morphed state. To animate to the morphed state, a vertex shader function takes in both buffers and interpolates between the two vertex positions depending on time.
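
A minimal shader-side sketch of that idea (the buffer indices, struct and function names here are mine, not the book's):

#include <metal_stdlib>
using namespace metal;

struct MorphUniforms {
  float weight;   // 0 = base shape, 1 = morphed shape, driven by time on the CPU
};

struct MorphOut {
  float4 position [[position]];
};

vertex MorphOut vertex_morph(uint vertexID [[vertex_id]],
                             constant float3 *basePositions [[buffer(0)]],
                             constant float3 *morphPositions [[buffer(1)]],
                             constant MorphUniforms &morph [[buffer(2)]],
                             constant float4x4 &mvpMatrix [[buffer(3)]]) {
  // Blend the two shapes, then project as usual.
  float3 position = mix(basePositions[vertexID], morphPositions[vertexID], morph.weight);
  MorphOut out { .position = mvpMatrix * float4(position, 1) };
  return out;
}

On the CPU you would update weight every frame, for example with something like sin(time) * 0.5 + 0.5, to animate back and forth between the two shapes.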

Remember that Metal is just an API to set up interactions with the GPU. As long as you create a pipeline state and vertex and fragment (or kernel) functions, you can do anything :slight_smile: