Normals from procedural MDLMesh for Raytracing

Hi,

I am working with the Raytracing example in Chapter 21, which has a nice loadAsset() function to create the mesh via MDLAsset. When I want to do the same with a mesh from MDLMesh (like a sphere or cube), it crashes in

let normalsBuffer = mesh.vertexBuffers[1].buffer

because there is only one vertexBuffer. How can I use the loadAsset() routine with procedurally generated MDLMesh meshes as well?

btw; will the book be updated for the new Raytracing APIs etc.?

Thanks

You can try:

https://developer.apple.com/documentation/modelio/mdlmesh/1391284-addnormals

That calculates normals from the mesh data.
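Roughly, the call looks like this; a sketch only, and note that it regenerates normals in place from the geometry rather than adding a second vertex buffer, so it may not change the buffer layout:

```swift
import MetalKit
import ModelIO

// Sketch: recompute smooth normals on a procedural MDLMesh.
// creaseThreshold (0...1) controls how sharp edges are preserved.
let device = MTLCreateSystemDefaultDevice()!
let allocator = MTKMeshBufferAllocator(device: device)
let mdlMesh = MDLMesh(sphereWithExtent: [1, 1, 1],
                      segments: [50, 50],
                      inwardNormals: false,
                      geometryType: .triangles,
                      allocator: allocator)
mdlMesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal,
                   creaseThreshold: 0.5)
```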

We haven’t decided on an exact outline for the next edition yet, but I would hope that we would be able to update ray tracing in some form :slight_smile: . But I can’t say for definite that it will happen, or give you a time frame I’m afraid.

2 Likes

No, the addNormals call does not help. There is still only one vertexBuffer.

Fair enough on the ray tracing APIs. Book is already amazing value anyway.

@markusm - First I have to give you thanks for finding a stupid bug :slight_smile:

In the definition of vertexDescriptor in Renderer.swift, the attribute format of the normal is .float2 and of course it should be .float3. This makes the final render much nicer!

vertexDescriptor.attributes[1] =
  MDLVertexAttribute(name: MDLVertexAttributeNormal,
                     format: .float3,
                     offset: 0, bufferIndex: 1)
1 Like

Next, I had time to look at the code today. The reason why the MDL primitive doesn’t load is because of the vertex descriptor.

This is vertexDescriptor (with the float2 bug, and this screen shot is how I picked it up):

This is the vertex descriptor of the MDL primitive sphere:

Whereas I created vertexDescriptor with two separate buffers, Model I/O interleaves position, normal and uv in one buffer.

So I’ve created a new loading method for primitives. Add it in RenderExtension.swift:

  func loadPrimitive(name: String, position: float3, scale: Float) {
    let allocator = MTKMeshBufferAllocator(device: device)
    let mesh: MDLMesh?
    switch name {
    case "cube":
      mesh = MDLMesh(boxWithExtent: [1, 1, 1],
                     segments: [1, 1, 1],
                     inwardNormals: false,
                     geometryType: .triangles,
                     allocator: allocator)
    case "sphere":
      mesh = MDLMesh(sphereWithExtent: [1, 1, 1],
                     segments: [50, 50],
                     inwardNormals: false,
                     geometryType: .triangles,
                     allocator: allocator)
    case "plane":
      mesh = MDLMesh(planeWithExtent: [1, 1, 1],
                     segments: [1, 1],
                     geometryType: .triangles,
                     allocator: allocator)
    default:
      mesh = nil
    }
    guard let mdlMesh = mesh else { return }

    let positionData = mdlMesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributePosition, as: .float3)!
    let normalsData = mdlMesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributeNormal, as: .float3)!
    let positionPtr = positionData.dataStart.bindMemory(to: float3.self, capacity: mdlMesh.vertexCount)
    let normalsPtr = normalsData.dataStart.bindMemory(to: float3.self, capacity: mdlMesh.vertexCount)
    guard let submeshes = mdlMesh.submeshes as? [MDLSubmesh] else { return }
    for submesh in submeshes {
      var indices = submesh.indexBuffer.map().bytes.bindMemory(to: UInt16.self, capacity: submesh.indexCount)
      for _ in 0..<submesh.indexCount {
        let index = Int(indices.pointee)

        vertices.append(positionPtr[index * 2] * scale + position)
        normals.append(normalsPtr[index * 2])
        indices = indices.advanced(by: 1)
        let color: float3
        if let baseColor = submesh.material?.property(with: .baseColor),
           baseColor.type == .float3 {
          color = baseColor.float3Value
        } else {
          color = [1, 1, 0]
        }
        colors.append(color)
      }
    }
  }

You use it in the same way as loadAsset(name:position:scale:):

  func createScene() {
    loadAsset(name: "train", position: [-0.3, 0, 0.4], scale: 0.5)
    loadAsset(name: "treefir", position: [0.5, 0, -0.2], scale: 0.7)
    loadAsset(name: "plane", position: [0, 0, 0], scale: 10)
//    loadAsset(name: "sphere", position: [-1.9, 0.0, 0.3], scale: 1)
//    loadAsset(name: "sphere", position: [2.9, 0.0, -0.5], scale: 2)
    loadPrimitive(name: "sphere", position: [-1.9, 0.0, 0.3], scale: 1)
    loadPrimitive(name: "sphere", position: [2.9, 0.0, -0.5], scale: 2)
    loadPrimitive(name: "cube", position: [0, 0.0, 0], scale: 1)
    loadAsset(name: "plane-back", position: [0, 0, -1.5], scale: 10)
  }

I believe there is a bug that I will have to report.

I was expecting:

 let positionData = mdlMesh.vertexAttributeData(forAttributeNamed: MDLVertexAttributePosition, as: .float3)!

to return only position data from buffer 0. But it returned position / normal(?) / position / normal(?) / etc.

Which is why I did the index * 2 in the for loop.

2 Likes

Final render of train, fir, two Model I/O spheres and a Model I/O cube:

The lighting is much nicer.

1 Like

Wow, Caroline, thanks so much for all this!

I ported this very nice and comprehensible Principled BSDF path tracer (GitHub - knightcrawler25/GLSL-PathTracer: A GLSL Path Tracer) to the MPS framework for a GPL-based app, and now I can test it with at least the primitives.

The next step would be to be able to load complex 3D models via Model I/O. However, models can be very complex, with any number of textures which may need to be extracted and associated with the right tex coordinates, etc. Do you know of any reference source which handles this? It seems like a major piece of work. If not, I will just release the tracer as a PBSDF example with the primitives (for doing reference images like: GLSL-PathTracer/hyperion.jpg at master · knightcrawler25/GLSL-PathTracer · GitHub).

Thanks again.

1 Like

I :heart: pretty pictures like that!

I don’t even know what a ModelIO model looks like. Do you really mean ModelIO?

Sorry, I corrected my post, “to load complex 3D models via Model I/O”, i.e. .usd and so on.

btw; for me the primitive code still crashes when accessing the normal pointer. Could you update the Git repo with the source?

I’m not able to do that, but here’s the project:

<download removed because it’s incorrect!>

I don’t know any other sources for loading files through Model I/O except for Metal by Tutorials which loads usd and obj files.

Your project also crashes for me on the first access to the normals pointer. Strange. I am running the latest 11.3 beta, maybe that is the problem? Maybe I should file a bug report?

I just made a request to update the repo :woman_facepalming: - so I should probably hold off on that one then :smiley:

I’m having another look at vertexAttributeData later today or tomorrow because it doesn’t work as I think it ought to, so I’ll try it out on my other computer (I’m on M1 Silicon, Big Sur, Xcode 12.4).

[It probably needs the stride taken into account. And I did wonder why index * 2 works when there’s uvs in the buffer as well.]

In the meantime, if you don’t mind your lighting being wrong for the moment, you could replace that line with normals.append([0, 1, 0]).

Ok, great, thanks. I am on Xcode 12.4 too.

The normal pointer is just invalid, it crashes when the index is 0.

btw; one minor thing. The Raytracing project is dependent on files from the Bloom project (the Storyboard file). So when you copy the project out of the folder, it won't compile. Not a big thing.

Thanks. I’ll make a note to update the file links. Might not make it until the next version though.

My bad. vertexAttributeData(forAttributeNamed:as:) works perfectly if you read the documentation :woman_facepalming:.

Raytracing.zip (163.4 KB)

This does not work on non-Silicon Macs. See following post.

Hopefully you will get this render:

Using the method in loadPrimitive(...), you should be able to convert any Model I/O loaded file to a format that your app can deal with. This only uses position and normals, but you could get uvs and any other attributes that your vertex descriptor includes.
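For anyone reading along, the documented behavior is that MDLVertexAttributeData gives you a dataStart pointer plus a stride, and you have to advance by that stride rather than assume the data is tightly packed. A rough sketch, assuming the mdlMesh and float3 typealias from the sample code:

```swift
// Sketch: read one attribute via vertexAttributeData, honouring stride.
// Consecutive vertices are `stride` bytes apart in the interleaved buffer.
let positionData = mdlMesh.vertexAttributeData(
  forAttributeNamed: MDLVertexAttributePosition,
  as: .float3)!
var pointer = positionData.dataStart
for _ in 0..<mdlMesh.vertexCount {
  let position = pointer.assumingMemoryBound(to: float3.self).pointee
  // use `position`...
  pointer += positionData.stride
}
```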

1 Like

I’m unable to explain the not-working on Catalina, except by yet again accusing Apple of a now-fixed bug :smiley:

It appears that Catalina doesn’t like a float3 in:

let newNormal = normalPtr.assumingMemoryBound(to: float3.self).pointee

For Catalina, you can replace that line in loadPrimitive(...) with:

let f1 = normalPtr.assumingMemoryBound(to: Float.self).pointee
let f2 = (normalPtr + MemoryLayout<Float>.stride).assumingMemoryBound(to: Float.self).pointee
let f3 = (normalPtr + MemoryLayout<Float>.stride * 2).assumingMemoryBound(to: Float.self).pointee
let newNormal: float3 = [f1, f2, f3]

That loads three Floats and then puts them into a float3.

Works on my Catalina MacBook Pro Xcode 12.2 (beta 4). (Yes, I need to update Xcode :smiley: )

1 Like

Yay, it works! Thanks so much!

But I am on Big Sur (just on an Intel platform) and it also crashes without your float3 fix. You should report this to Apple…

Thanks again.

I’ve just been talking with Warren Moore, ex-Apple, and he says that I should be loading Floats and not float3, as it’s best not to rely on buffer alignment.

So the Catalina code is more robust than the Big Sur code, and I should replace position data with the same.

If it still crashes on Big Sur, then the Silicon Macs treat the buffers differently.

From Warren:

The SIMD generic types are intended to be 1:1 with the extended vector types in the “old-school” simd framework, which means that Clang wants to be able to assume that SIMD3 and SIMD4 always start at 16-byte aligned addresses, so it can emit the most efficient simd instructions to operate on them

Packed model data often doesn’t follow that rule, and alignment doesn’t matter quite as much to the shaders, since modern GPUs are almost universally scalar these days
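Warren's point is easy to see in plain Swift: a SIMD3&lt;Float&gt; is padded out to 16 bytes so the compiler can emit aligned SIMD instructions, while three packed Floats occupy only 12, so reading packed model data as float3 can overrun the element or trap on alignment:

```swift
// SIMD3<Float> is padded to 16 bytes for SIMD alignment;
// three tightly packed Floats only occupy 12 bytes.
print(MemoryLayout<SIMD3<Float>>.stride)    // 16
print(MemoryLayout<SIMD3<Float>>.alignment) // 16
print(MemoryLayout<Float>.stride * 3)       // 12
```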

Ah, that makes sense. But I think we should still report this, as it is a pitfall many more developers will experience. Or maybe they fixed this intentionally for the M1, and the fix for Intel would have been too much work / testing.