Setting the IndexType for draw_indexed_primitives encoded on the GPU

I am extending the challenge in Chapter 13 (the grass scene) to use GPU-driven rendering, as shown in Chapter 15.

The Skybox model is created with an MDLMesh and converted to an MTKMesh, but that produces a mesh.submeshes[0].indexType of UInt16, while the imported models use UInt32.
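For reference, this is roughly how the skybox mesh is created (a sketch from memory; device is assumed to exist, and the exact box parameters may differ):

import MetalKit
import ModelIO

// A generated box mesh comes back with 16-bit indices,
// while the imported models in the project use 32-bit ones.
let allocator = MTKMeshBufferAllocator(device: device)
let mdlMesh = MDLMesh(
  boxWithExtent: [1, 1, 1],
  segments: [1, 1, 1],
  inwardNormals: true,
  geometryType: .triangles,
  allocator: allocator)
let mesh = try MTKMesh(mesh: mdlMesh, device: device)
assert(mesh.submeshes[0].indexType == .uint16)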

When calling cmd.draw_indexed_primitives in the kernel, there is no way to pass the correct IndexType, and it just assumes UInt32.

It is strange, because MTLIndirectRenderCommand on the CPU side does provide a way to pass the IndexType.
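On the CPU side the index type is an explicit parameter. Something like this (indirectCommandBuffer and submesh are placeholder names):

// CPU-side encoding states the index type explicitly:
let command = indirectCommandBuffer.indirectRenderCommandAt(0)
command.drawIndexedPrimitives(
  .triangle,
  indexCount: submesh.indexCount,
  indexType: submesh.indexType,  // .uint16 or .uint32
  indexBuffer: submesh.indexBuffer.buffer,
  indexBufferOffset: submesh.indexBuffer.offset,
  instanceCount: 1,
  baseVertex: 0,
  baseInstance: 0)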

My solution was to create a new index buffer with double the size and copy all the indices from the original buffer into the new one, casting the values from UInt16 to UInt32.
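This is roughly what I did (a sketch; device and submesh are assumed to be in scope, and the buffer uses shared storage so contents() is accessible):

// Widen a 16-bit index buffer into a new 32-bit one:
let indexCount = submesh.indexCount
let newBuffer = device.makeBuffer(
  length: indexCount * MemoryLayout<UInt32>.stride,
  options: .storageModeShared)!
let src = submesh.indexBuffer.buffer.contents()
  .advanced(by: submesh.indexBuffer.offset)
  .bindMemory(to: UInt16.self, capacity: indexCount)
let dst = newBuffer.contents()
  .bindMemory(to: UInt32.self, capacity: indexCount)
for i in 0..<indexCount {
  dst[i] = UInt32(src[i])  // every 16-bit value fits losslessly
}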

Is there a better way to do it?

The Metal Shading Language function is:

void draw_indexed_primitives(
  primitive_type type,
  uint index_count,
  device/constant ushort/uint *index_buffer,
  uint instance_count,
  uint base_vertex,
  uint base_instance);

This is from the Metal Shading Language specification: https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf

I haven't tried changing this, but it suggests that the declared type of index_buffer is what determines whether ushort or uint indices are used.

struct Model declares constant uint *indexBuffer; I think that's why UInt32 is used on the GPU side.

You could duplicate the ICB.metal code, change the indexBuffer to a ushort pointer, and check whether that works.
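Something like this, sketched from memory (the struct name is made up, and the other members are omitted):

// Hypothetical 16-bit variant of the Model struct in ICB.metal:
struct SkyboxModel {
  constant ushort *indexBuffer;  // was: constant uint *indexBuffer
  // ...remaining members unchanged
};

The element type of that pointer is what selects between the ushort and uint overloads of draw_indexed_primitives.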

Oh, and welcome to the forums :grin:. Great first question!


Thank you @Caroline

It worked! :grin:

To do a quick test, I changed the argument to:

drawArguments.indexStart + (constant ushort*)model.indexBuffer
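For anyone who finds this later: that expression goes in the index_buffer slot of the encoded draw. A sketch, with the other argument names assumed to match MTLDrawIndexedPrimitivesIndirectArguments as in the chapter's kernel:

cmd.draw_indexed_primitives(
  primitive_type::triangle,
  drawArguments.indexCount,
  drawArguments.indexStart + (constant ushort*)model.indexBuffer,
  drawArguments.instanceCount,
  drawArguments.baseVertex,
  drawArguments.baseInstance);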