Navigation chapter and simple mesh

Hi @caroline

one more thing… based on my previous questions, I am trying to modify the navigation chapter sample to render a simple mesh (the quad from chapter 4) instead of the full model, but it currently fails.

renderEncoder.drawPrimitives(
        type: .triangle,
        vertexStart: 0,
        vertexCount: quad.indices.count)

is my render call. I have already added the uniforms and params buffers, but with type .triangle I get only one triangle, with .line just a single line, and .lineStrip gives me two lines.

The vertex buffer has the optimized version with 4 vertices, and the proper indices are being sent.

vertex VertexOut vertex_main(
  const VertexIn in [[stage_in]],
  constant Uniforms &uniforms [[buffer(UniformsBuffer)]])
{
  // Transform the vertex position from model space to clip space
  float4 position =
    uniforms.projectionMatrix * uniforms.viewMatrix
    * uniforms.modelMatrix * in.position;
  // Pass the normal and uv through unchanged
  float3 normal = in.normal;
  VertexOut out {
    .position = position,
    .normal = normal,
    .uv = in.uv
  };
  return out;
}

I know this has something to do with the vertex function in the .metal shader file, but how do I combine the two: applying matrices AND using indexed primitives from buffers?

thx a lot

Does your vertex descriptor match the actual data format?

Have you inspected the geometry buffers on the GPU?

I did… and indeed, it does not match at all. I will look more into the descriptor chapter.

  Row  Offset  0          1         2         3         4         5         6          7
  0    0x0     -0.800000  0.800000  0.000000  0.800000  0.800000  0.000000  -0.800000  -0.800000

This is how my vertex buffer looks in the frame capture.

What's weird is that I have only the numbers 1 or 0 in the Quad code. However, if I compare the code, I see no differences between the web version and my code.

Probably there is also an error in how the data for the vertex buffer is displayed. Should I set the type to float3? float4? packed_float? Currently I already have both triangles displayed, but displaced.

I find it very difficult to debug code with just snippets :upside_down_face:.

When I debug, I examine the MTLBuffer and vertex descriptor on the CPU side as well as looking at the frame capture. The problem could be anything, but I would guess that you are scaling your quad on the CPU side, so you are sending 0.8 to the GPU, and that your vertex descriptor doesn’t match the data.
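For example, you can dump a buffer's contents on the CPU side before committing the command buffer. This is just a sketch, assuming your Quad exposes a vertexBuffer of tightly packed Floats:

  // Reinterpret the raw buffer as Floats and print them, so you can
  // compare against what the GPU frame capture shows.
  let floatCount = quad.vertexBuffer.length / MemoryLayout<Float>.stride
  let pointer = quad.vertexBuffer.contents()
    .bindMemory(to: Float.self, capacity: floatCount)
  for i in 0..<floatCount {
    print("float \(i): \(pointer[i])")
  }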


Navigation.zip (2.6 MB)
Attached is my current project.

Hi again

Attached is an image of my debugger. I have already found out that the vertices have to correspond to the vertex descriptor… so 4 floats for position, 3 for normals, 2 for UV. In the bottom left corner of the screenshot, the contents of ‘in’ are visible, but… it says “out of range” on the right side of the screen, and I cannot understand this.

any hints here?
thx a lot

Screenshot 2022-09-23 at 11.42.04

  1. Your quad mesh consists of Position: Float x 3. However, your vertex descriptor and vertex shader function have position, normal and uv, so they are never going to match up.

  2. You don’t need to explicitly send the index buffer to the GPU in a buffer, as the indexed draw call references the index buffer, so it will automatically go to the GPU (see the sketch after this list).

  3. Your vertices array consists of packed floats, which the vertex descriptor can’t handle. You could send them to the GPU without using a vertex descriptor, and take the packed float array directly into the vertex shader without using stage_in. Or you can define your array as [float3].
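For point 2, an indexed draw call looks something like this sketch (assuming your Quad exposes indexBuffer and its indices are UInt16):

  // The index buffer is a parameter of the draw call itself,
  // so there is no separate setVertexBuffer for it.
  renderEncoder.drawIndexedPrimitives(
    type: .triangle,
    indexCount: quad.indices.count,
    indexType: .uint16,
    indexBuffer: quad.indexBuffer,
    indexBufferOffset: 0)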

Navigation.zip (2.6 MB)

The attached fixes all these issues.

  1. VertexDescriptor.swift now defines a vertex descriptor for quadLayout with just a float3 (see the sketch after this list), and Renderer uses this vertex descriptor when creating the pipeline state object. I’ve commented out the normal and uv in Shaders.metal, as you’re not using those.

  2. I’ve commented out the setVertexBuffer for the index buffer.

  3. In Quad.swift, I’ve set up a float3 array for `vertices` and changed the length of the vertex buffer.
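The position-only vertex descriptor is along these lines. This is a sketch rather than the exact code in the attachment, with hard-coded indices where the book uses named constants:

  var quadLayout: MTLVertexDescriptor {
    let vertexDescriptor = MTLVertexDescriptor()
    // A single position attribute: one float3 per vertex in buffer 0.
    vertexDescriptor.attributes[0].format = .float3
    vertexDescriptor.attributes[0].offset = 0
    vertexDescriptor.attributes[0].bufferIndex = 0
    // float3 is SIMD3<Float>, so the stride is 16 bytes, not 12.
    vertexDescriptor.layouts[0].stride = MemoryLayout<float3>.stride
    return vertexDescriptor
  }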

thx a lot for the fix, it has definitely helped.

May I ask what changes would be needed in the code in order to also include normals? I plan to use them for lighting purposes.

I have made several attempts, but all of them failed.

As for those “packed” floats… so using the Float data type means the data will be packed? And float3(0, 0, 0) will not be packed, so the shader can work well with it? This has surprised me a bit, although I have read it in the book.

many thanks for guidance in advance

Marek

To add normals, you’d calculate them, add them to an array as float3 just like the position vertices, and add them both to the vertex descriptor and to the structure in the shader. You’d pass the vertex buffer before the draw call, just like the position vertex buffer.
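Here's a rough sketch of the pieces involved, with normals in their own buffer. The buffer index and the +z normals are assumptions for illustration, not the chapter's exact code, and device / renderEncoder / vertexDescriptor are assumed to be in scope:

  // 1. One normal per vertex; a quad facing the camera along z
  //    uses the same normal for all four vertices.
  let normals: [float3] = [
    float3(0, 0, 1), float3(0, 0, 1),
    float3(0, 0, 1), float3(0, 0, 1)
  ]
  let normalBuffer = device.makeBuffer(
    bytes: normals,
    length: MemoryLayout<float3>.stride * normals.count)

  // 2. A second attribute in the vertex descriptor, reading from buffer 1.
  vertexDescriptor.attributes[1].format = .float3
  vertexDescriptor.attributes[1].offset = 0
  vertexDescriptor.attributes[1].bufferIndex = 1
  vertexDescriptor.layouts[1].stride = MemoryLayout<float3>.stride

  // 3. Bind the normal buffer before the draw call, just like positions.
  renderEncoder.setVertexBuffer(normalBuffer, offset: 0, index: 1)

The VertexIn structure in the shader then gains a matching member, e.g. float3 normal [[attribute(1)]];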

Have a read of this answer to see if it helps with float3 vs packed floats.
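The short version: Swift's float3 is a typealias for SIMD3<Float>, which is padded out to 16 bytes, while three plain Floats in a row (“packed”) only take 12. You can verify this in a playground:

  import simd

  typealias float3 = SIMD3<Float>

  print(MemoryLayout<Float>.stride * 3)  // 12 – three packed floats
  print(MemoryLayout<float3>.stride)     // 16 – SIMD3<Float> is padded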

So it’s another buffer? I thought it could be combined in the quad vertices variable… but then I’d have to have a tuple of (float3, float3), and that approach does not seem to work.
So… if the normals are just another buffer, and they are only combined in the shader… I'm still a bit confused.

It doesn’t have to be another buffer. If you lay out the vertex descriptor properly, and you define your vertices array with a structure that contains position and normal, then you can interleave position / normal / position / normal in the same buffer.

defaultVertexDescriptor shows both approaches: it has position and normal interleaved in buffer 0, and uvs in buffer 1.
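As a sketch of that layout, using Model I/O the same way the book does (the attribute indices here are hard-coded where the book uses named constants):

  import ModelIO

  let vertexDescriptor = MDLVertexDescriptor()
  var offset = 0
  // Buffer 0: position and normal interleaved.
  vertexDescriptor.attributes[0] = MDLVertexAttribute(
    name: MDLVertexAttributePosition,
    format: .float3,
    offset: offset,
    bufferIndex: 0)
  offset += MemoryLayout<float3>.stride
  vertexDescriptor.attributes[1] = MDLVertexAttribute(
    name: MDLVertexAttributeNormal,
    format: .float3,
    offset: offset,
    bufferIndex: 0)
  offset += MemoryLayout<float3>.stride
  vertexDescriptor.layouts[0] = MDLVertexBufferLayout(stride: offset)
  // Buffer 1: uvs on their own.
  vertexDescriptor.attributes[2] = MDLVertexAttribute(
    name: MDLVertexAttributeTextureCoordinate,
    format: .float2,
    offset: 0,
    bufferIndex: 1)
  vertexDescriptor.layouts[1] = MDLVertexBufferLayout(
    stride: MemoryLayout<SIMD2<Float>>.stride)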

… wait. So it’s still only an array of float3, but I have to interleave them? So…
instead of

  var vertices2: [float3] = [
    float3(-1, 1, 0),
    float3(1, 1, 0),
    float3(-1, -1, 0),
    float3(1, -1, 0),
    float3(-1, -1, -1),
    float3(1, -1, -1)
  ]

I’d have

  var vertices2: [float3] = [
    float3(-1, 1, 0),   // vertex
    float3(1, 1, 0),    // normal
    float3(-1, -1, 0),  // vertex
    float3(1, -1, 0),   // normal
    float3(-1, -1, -1), // vertex
    float3(1, -1, -1)   // normal
  ]

… and so on?

It’s better to use

  struct Vertex {
    var position: float3
    var normal: float3
  }

as in this example:

Navigation.zip (2.6 MB)

(that’s with [0,0,0] in the normals, but the positions are correct)
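A sketch of how that structure would feed the buffer (here with +z normals purely for illustration, rather than the zeros in the attachment):

  // Using the Vertex struct above; float3 is SIMD3<Float>.
  let vertices: [Vertex] = [
    Vertex(position: float3(-1,  1, 0), normal: float3(0, 0, 1)),
    Vertex(position: float3( 1,  1, 0), normal: float3(0, 0, 1)),
    Vertex(position: float3(-1, -1, 0), normal: float3(0, 0, 1)),
    Vertex(position: float3( 1, -1, 0), normal: float3(0, 0, 1))
  ]
  let vertexBuffer = device.makeBuffer(
    bytes: vertices,
    length: MemoryLayout<Vertex>.stride * vertices.count)

  // The normal attribute's offset in the vertex descriptor must be
  // where normal actually lives inside Vertex:
  let normalOffset = MemoryLayout<Vertex>.offset(of: \.normal)! // 16, not 12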

thx a lot!! now I start to understand this. slowly… :smiley:


P.S. Irrelevant if you’re using my latest example, but I notice now that you had another error when you were using defaultLayout (which my latest example doesn’t use).

    vertexDescriptor.attributes[Normal.index] = MDLVertexAttribute(
      name: MDLVertexAttributeNormal,
      format: .float3,
      offset: offset,
      bufferIndex: VertexBuffer.index)
    offset += MemoryLayout<Float>.stride

That offset is using the stride of Float and not float3, so it’s not allocating enough space.
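The fix is to use the stride of the attribute's actual type:

    offset += MemoryLayout<float3>.stride // 16 bytes for SIMD3<Float>, not 4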

It’s a good idea to spend a lot of time on buffers, vertex descriptors and checking the GPU frame capture to see what goes onto the GPU. In my opinion, those and matrices are the hardest things about computer graphics with Metal.

Yes… I have tried a lot now to look into the GPU capture… starting to understand it more and more. Which timezone do you live in, btw? I also realized that for my purposes (a wireframe of something resembling Minecraft), I can't share vertices, as the normals used for lighting cannot be shared. It would mess up the lighting then.

I’m on Brisbane Australia time, but I do keep odd hours sometimes :yawning_face: