
Beginning Metal - Part 9: Model I/O | Ray Wenderlich


In this beginning Metal tutorial video, you’ll learn how to import Blender and other 3D models into your game scenes.

This is a companion discussion topic for the original entry at


I only watched the WWDC sessions very roughly, since I haven’t been learning graphics for very long.

But then I realized Metal and MetalKit are different things :joy:

And then I found how easy it is to use Model I/O and SceneKit compared with Metal :sob:
But I assume it’s still very cool and useful to know the details, since then we can have more control over the render engine (I hope so).


MetalKit was brought out in 2015 to assist in writing Metal code. We used to have to set up a CALayer, but now we have a MTKView with a delegate. It also provides a MTKMesh that can interface with the ModelIO mesh import. It’s a lot easier to import .obj files now.
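To make that concrete, the MTKView-plus-delegate setup can be sketched like this (a minimal outline only, with error handling trimmed and the class name illustrative):

```swift
import MetalKit

final class Renderer: NSObject, MTKViewDelegate {
  let device: MTLDevice
  let commandQueue: MTLCommandQueue

  init?(view: MTKView) {
    guard let device = MTLCreateSystemDefaultDevice(),
          let queue = device.makeCommandQueue() else { return nil }
    self.device = device
    self.commandQueue = queue
    view.device = device
    super.init()
    view.delegate = self
  }

  func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) { }

  func draw(in view: MTKView) {
    // MTKView hands us the render pass descriptor and drawable each frame,
    // which is the setup we used to do by hand with a CALayer.
    guard let descriptor = view.currentRenderPassDescriptor,
          let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)
    else { return }
    // ... issue draw calls on `encoder` here ...
    encoder.endEncoding()
    commandBuffer.present(drawable)
    commandBuffer.commit()
  }
}
```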

The benefit of learning Metal, aside from the fact that you might want to use it one day :grinning:, is the same as learning how a car’s engine works. You don’t need to know it to be able to drive, but when you want to drive with good performance and fix it up yourself, then you need to know the innards.

Writing shaders is a good skill to have, as all game engines, including SceneKit, SpriteKit, and Unity, use them.


Hi Caroline, thanks for your reply.

While I’m still a little confused, yesterday I found that I can load an .obj file directly using MDLAsset, get the MDLMesh, and add that to the screen using SceneKit, etc…

// Load the .obj with Model I/O and grab the first object as a mesh
guard let url = Bundle.main.url(forResource: "xxx", withExtension: "obj") else { fatalError() }
let asset = MDLAsset(url: url)
let object = asset.object(at: 0) as! MDLMesh

// then it can be added to the Scene

In the course, we have to write a vertexDescriptor and specify how to load the data. Is that because we have to pass everything to the shader ourselves, so we can’t use this shortcut?
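(For context, the vertexDescriptor in the course tells Model I/O how to lay out the loaded data so it matches what the vertex shader reads. A sketch, assuming an interleaved position/normal/uv layout; `device` and `url` come from your own setup:)

```swift
import MetalKit
import ModelIO

// Position (float3), normal (float3), and uv (float2) interleaved in buffer 0.
let vertexDescriptor = MDLVertexDescriptor()
vertexDescriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                    format: .float3, offset: 0, bufferIndex: 0)
vertexDescriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeNormal,
                                                    format: .float3, offset: 12, bufferIndex: 0)
vertexDescriptor.attributes[2] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                    format: .float2, offset: 24, bufferIndex: 0)
vertexDescriptor.layouts[0] = MDLVertexBufferLayout(stride: 32)

// Handing the descriptor to MDLAsset makes Model I/O reorder the file's data
// into this exact layout, ready to copy into a Metal vertex buffer.
let allocator = MTKMeshBufferAllocator(device: device)
let asset = MDLAsset(url: url, vertexDescriptor: vertexDescriptor, bufferAllocator: allocator)
```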

Second, right now we let the object rotate according to time. What if I want to move the camera, the way some games do? I’ve learnt about the model, view, and projection matrices: as long as we change the viewMatrix, we change the view we see. So if I want several sliders that control the movement and rotation of the camera, I should keep those parameters (camera location, camera coordinate system) as variables and then compute the camera’s viewMatrix on the fly.

I find it’s a little hard in two aspects:

// update

  1. Calculating the math. However, I found a WWDC sample that has a look_at_matrix method; combined with the MatrixMath you provided, the math problem should be solved.

  2. I have to link the slider values to the camera variables, so that when the NSSlider changes I can recompute the viewMatrix on the fly. But how? The slider values are available in the ViewController, while the camera instance is defined in Scene. How do I pass data from the ViewController to the camera? I don’t want to change a lot of your code, since the structure is well defined, and I’m not sure how to pass data in this situation.
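(On the first point, a look-at view matrix like the WWDC sample’s can be written with simd as follows. This is the standard right-handed formulation, not code from the course:)

```swift
import simd

// Standard right-handed look-at: build the camera's basis vectors,
// then combine the rotation with the translation to the eye position.
func lookAt(eye: SIMD3<Float>, center: SIMD3<Float>, up: SIMD3<Float>) -> float4x4 {
  let z = normalize(eye - center)   // forward axis (camera looks down -z)
  let x = normalize(cross(up, z))   // right axis
  let y = cross(z, x)               // true up axis
  // Column-major: rows of the rotation become the first three columns,
  // and the last column translates the world so the eye sits at the origin.
  return float4x4(columns: (
    SIMD4<Float>(x.x, y.x, z.x, 0),
    SIMD4<Float>(x.y, y.y, z.y, 0),
    SIMD4<Float>(x.z, y.z, z.z, 0),
    SIMD4<Float>(-dot(x, eye), -dot(y, eye), -dot(z, eye), 1)
  ))
}
```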

Thanks and sorry for the long questions.


@xueyu - you can do a lot of shortcuts with SceneKit. If you’re writing a game, my recommendation would be to do as much in SceneKit as you can, and then drop down to Metal. If you know Metal, it makes it easier to drop down.

As for walking through the scene, when you get to lighting, you’ll be doing a touch rotation.

You’ll have this code:

mushroom.rotation.x += Float(delta.y) * sensitivity
mushroom.rotation.y += Float(delta.x) * sensitivity

You could change the camera z position instead with this code:

camera.position.z -= Float(delta.y) * sensitivity
camera.position.x += Float(delta.x) * sensitivity

And that would zoom in and out of the scene with a vertical drag, and pan sideways with a horizontal one.

  1. I didn’t implement look_at. You can find information on the various ways of calculating camera matrices here: towards the end of the article is an interactive section where you can experiment with the different approaches.

  2. Have a look at how I implemented touch. The touch is passed from ViewController to Scene. That’s how I would link slider variables.
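(Following that approach, the slider wiring could be as small as this; `scene` and `camera` are illustrative names for however the ViewController holds the Scene:)

```swift
// In the macOS ViewController, mirroring how touch deltas are forwarded to the Scene.
@IBAction func cameraZSliderChanged(_ sender: NSSlider) {
  scene?.camera.position.z = sender.floatValue
  // The viewMatrix is then rebuilt from camera.position on the next frame.
}
```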

Good on you for extending the course and exploring :+1:



How can I add more than one texture? For example, let’s say that I have the base color, but I also have specular, emission, roughness, metal, etc…



@anibalrodriguez - in Model: Renderable, you have:

    if texture != nil {
      commandEncoder.setFragmentTexture(texture, at: 0)
    }

You can load up other textures using setTexture(device:imageName:) and send them to the fragment function in the same way, e.g. (note that the API call has since changed from at: to index:):

commandEncoder.setFragmentTexture(roughnessTexture, index: 1)
commandEncoder.setFragmentTexture(metalnessTexture, index: 2)

And in the fragment function where you have the parameter:

texture2d<float> texture [[ texture(0)]],

you’d also have

texture2d<float> roughnessTexture [[ texture(1) ]],
texture2d<float> metalnessTexture [[ texture(2) ]],

making sure that the index numbers in the fragment function match the index numbers that the render command encoder used.

Your fragment function would then sample the textures in the same way that it sampled the color texture, and do the appropriate color math.
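Putting that together, the fragment function might look roughly like this (VertexOut and its field names are assumptions based on the course code):

```metal
fragment float4 textured_fragment(VertexOut in [[ stage_in ]],
                                  texture2d<float> texture [[ texture(0) ]],
                                  texture2d<float> roughnessTexture [[ texture(1) ]],
                                  texture2d<float> metalnessTexture [[ texture(2) ]]) {
  constexpr sampler s(filter::linear);
  float4 baseColor = texture.sample(s, in.textureCoordinates);
  float roughness = roughnessTexture.sample(s, in.textureCoordinates).r;
  float metalness = metalnessTexture.sample(s, in.textureCoordinates).r;
  // ... feed roughness/metalness into your lighting math ...
  return baseColor;
}
```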


Thanks @caroline

It worked, I have loaded the textures now. However, I have a new question: as I understand it, the fragment function always returns an RGBA color. How can I apply the roughness and metallic maps (and other kinds of textures)?


You apply the roughness and metallic values by using the appropriate math. I can’t give you the exact math, because it’s quite complicated and extensive.

In the videos, we used Phong shading. This is quite old-fashioned now, as people have moved onto Physically Based Rendering (PBR), which uses base-colour, roughness, metallic textures, as you have indicated that you are using.

Phong shading is a great introductory lighting scheme which tells you how to compute the appropriate color in the fragment function. For example, if your normal is facing away from a light, you’ll make the color darker in the fragment function.

There are multiple parts to lighting - calculating the diffuse is quite easy using the Lambertian algorithm that we use in the videos. Calculating the specular part is more involved and there are many many algorithms.
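(For reference, the Lambertian diffuse part is only a few lines of shader math; the variable names here are illustrative:)

```metal
// Diffuse intensity is the cosine of the angle between the surface
// normal and the direction towards the light, clamped to [0, 1].
float3 n = normalize(in.normal);
float3 l = normalize(lightPosition - in.worldPosition);
float diffuseIntensity = saturate(dot(n, l));
float3 diffuse = lightColor * baseColor.rgb * diffuseIntensity;
```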

Apple have written a really nice renderer here:

It has metal, roughness and other maps and also demonstrates function constants, which, if you are using various textures, are also necessary, as your shader will require conditional code depending on whether the texture exists or not.

The specular shading algorithm they’re using here is Cook-Torrance which has Geometry, Distribution and Fresnel parts to it. Even these parts have various algorithms too.

There are entire books written about real-time rendering and PBR.

Here’s one good link to the theory behind it, but if you google, you’ll find many more.

Oh - and Allegorithmic have a couple of great pdfs too:


I really appreciate your help. seems to be complicated but I’ll try to understand it and do it. Thanks.


Hey @Caroline thanks for the great tutorials!

I’ve found some cool models online, e.g. this one I’ve brought into my tutorials repo: GitHub

Unlike in our examples, the textures for this model are spread across multiple files. I’ve tried to implement support for multiple textures here: ModelWithMultipleTextures.swift

…which didn’t work, as the displayed result doesn’t look quite right.

As an afterthought, I’m thinking maybe the texture gets overwritten before the GPU actually starts drawing, and I probably need to set all the textures at the same time. If that’s the case, I’d probably need to modify the Vertex struct to include the name of the texture that should be used for it. Is there any simple way to get that from MDLVertexDescriptor?

Thanks again for your work!


@caroline Can you please help with this when you get a chance? Thank you - much appreciated! :]


Hi @bpashch - multiple textures is no problem as long as you have the .mtl file that goes along with the .obj file. R2D2 doesn’t have his .mtl file, so I wasn’t able to prove that it will work.

I believe you said you are starting the book Metal by Tutorials soon, and that will explain how to use the .mtl file.

In brief, meshes are broken up into submeshes (groups). Each submesh can be colored and/or textured in a different way. For example, on a car you might have a car body submesh and a tires submesh. One will be textured with car paint, and the other with rubber. In the book’s app, the submesh class holds the texture, so that when the submesh is rendered, it passes along the correct texture. It doesn’t matter whether the car paint and rubber are on one texture or two, as the texture is held along with the submesh.

(This isn’t optimal either, as multiple submeshes may be holding the same texture, but you could have a texture controller with a dictionary as you have in your GitHub project to avoid loading the same texture twice. Sometimes we have to make things simpler for the sake of teaching concepts :slight_smile: )

When Model I/O reads in the .obj file, each group of vertices is named, and it looks at the .mtl file for the corresponding group name to see how each submesh is to be colored/textured.

In this brief beginning Metal course, it was too complex to show how to do this, but the book goes into reading each submesh in.
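(As a rough sketch of what the book covers: with the .mtl present, Model I/O attaches an MDLMaterial to each submesh, and you can read, for example, the base colour entry. `asset` is an MDLAsset loaded as above; the property type can vary between files:)

```swift
import ModelIO

for case let mesh as MDLMesh in asset.childObjects(of: MDLMesh.self) {
  let submeshes = mesh.submeshes as? [MDLSubmesh] ?? []
  for submesh in submeshes {
    // The .mtl entry matched to this submesh's group name, if any.
    if let property = submesh.material?.property(with: .baseColor),
       property.type == .string, let filename = property.stringValue {
      print("\(submesh.name) uses texture \(filename)")
    }
  }
}
```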

Your GitHub project makes a valiant attempt, but the .obj file doesn’t have the textures registered to each submesh, so only one texture, axe_bras, is ever sent to the GPU.


@Caroline thank you, appreciate the explanation, I’ll wait till I get to the chapter in the book then :slight_smile: