Face normal vs vertex normal

No. Afraid not.
Think I’ve heard of them.

Just to backtrack a little so that I can catch up :smiley:. I don’t feel that quote is quite right.

Simplistically, the GPU takes in a stream of vertices into a vertex function. That vertex function outputs a position. That’s the only really important thing about the vertex function.

The rasteriser takes those positions and fills out triangles in 2D. If you think of that 2D space as a grid, then conceptually the triangles cover squares (fragments) in that grid.

The fragment function takes in each fragment and assigns a color to it.

So vertex function is for position, fragment function is for color. Anything else is extra.

You might use normal values to help calculate the color, for example, if a face points a certain way, then darken it.
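
For example, here’s a minimal sketch of a fragment function that darkens faces pointing away from a light. The VertexOut struct and the light direction are illustrative, not from any particular project:

#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float3 normal;
};

fragment float4 fragment_main(VertexOut in [[stage_in]]) {
    // Direction towards a light in front of the scene. The more the
    // normal faces away from it, the darker the fragment.
    float3 lightDirection = float3(0, 0, 1);
    float intensity = saturate(dot(normalize(in.normal), lightDirection));
    return float4(float3(intensity), 1);
}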

That’s all on the GPU side.

Metal is an API that lets you decide what the GPU will receive and change certain state properties on the GPU. If you use Metal vertex descriptors, then yes, there are certain properties that Metal ‘knows about’, such as normals and colors. But you can send the GPU any property that you care about.
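
Here’s a minimal sketch of a vertex descriptor describing a position and a normal interleaved in one buffer (the attribute indices, offsets and layout are illustrative):

let vertexDescriptor = MTLVertexDescriptor()
// Attribute 0: position, a float3 at the start of buffer 0.
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
// Attribute 1: normal, a float3 immediately after the position.
vertexDescriptor.attributes[1].format = .float3
vertexDescriptor.attributes[1].offset = MemoryLayout<SIMD3<Float>>.stride
vertexDescriptor.attributes[1].bufferIndex = 0
// One interleaved layout: position + normal per vertex.
vertexDescriptor.layouts[0].stride = MemoryLayout<SIMD3<Float>>.stride * 2

As far as Metal is concerned, attribute 1 is just some bytes at an offset - it’s you who decides it means ‘normal’.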

I am slowly working on a parametric shader function where you calculate everything inside the function as @ericmock was describing up there. I wrote a tessellated version a while ago, but it is not quite right yet.

I agree that the quote is not quite right, as I pretty much contradicted myself later in the reply. Lol. Really, Metal only knows about what you tell it. I’ve been looking through Apple’s sample code in the DynamicTerrainWithArgumentBuffers project. While I’m still very much overwhelmed by it, one thing is certain: they did A LOT of things on the GPU.

That is their best sample in my opinion. It takes a long time to tear it apart and I haven’t completely succeeded yet. But it answers a lot of questions. And raises more :grimacing:.

Well I hope you can forgive the poor artwork - I’ll improve it if I’m able. But here is a textured version of my “planet”:

Planet

The code is inelegant but I’ve now realised how to do it much more nicely.

Well done :clap:

I’m afraid I’ve been a bit preoccupied :grimacing:. Have you resolved all your questions?

I’ll keep working on my parametric example, but hopefully you’re all beyond that now.

That looks as if it might be a prime candidate for argument buffers. The sample Eric mentioned - DynamicTerrainWithArgumentBuffers - and the accompanying WWDC video do much the same thing.

You could assign a custom attribute to each hexagon and, depending on the attribute, use that particular texture on the hexagon. Something like the sketch below.
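
This is only a hedged sketch of the shader side - the struct, the [[id]] slot and the textureIndex attribute are all hypothetical, and the real setup also needs an MTLArgumentEncoder on the CPU side:

#include <metal_stdlib>
using namespace metal;

// An argument buffer holding one texture per hexagon type.
struct HexTextures {
    array<texture2d<float>, 8> maps [[id(0)]];
};

struct VertexOut {
    float4 position [[position]];
    float2 uv;
    uint textureIndex [[flat]];    // the custom per-hexagon attribute
};

fragment float4 fragment_hex(VertexOut in [[stage_in]],
                             constant HexTextures &textures [[buffer(0)]]) {
    constexpr sampler s(filter::linear);
    return textures.maps[in.textureIndex].sample(s, in.uv);
}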

I’m not sure what argument buffers are, but they sound like they might describe my next solution. I’ve designed it but haven’t implemented it yet. In fact, right from the start I assumed that this was the right approach, but I was so feeble at shaders that I couldn’t do it. Now I think I know how. I can’t find a good tutorial or book on shaders. They all want me to copy what they do, but nobody goes under the covers and explains.

Will revert in a few days with a full explanation which might help people who struggled like me.

Most people do most of their work in fragment shaders. That’s where you set the final color of each fragment, so that’s where you check your normals (which you’ve either calculated in the vertex shader or precalculated) and light the fragment accordingly.

This is an excellent site for fragment shaders, with many examples: https://thebookofshaders.com

There are two main sites where people share their shaders:

Vertex: https://www.vertexshaderart.com
Fragment: https://www.shadertoy.com

Both of these have tutorials. They use GLSL rather than Metal, but conceptually it’s much the same.

I’ll take a look. I’ve implemented my “better” version and all my hexes have become near white!!

I’m sure the shader is responsible :slight_smile:

Very nice site!!

I’ve done something that I really don’t understand. I have a working vertex shader that takes a vertex in [[buffer(0)]], some uniforms and [[vertex_id]] and it does exactly what I wanted it to do.

Now I am changing my code to provide a simpler vertex (no uvs) and a table of uvs which can be looked up based on calculations involving some new uniforms. In order to avoid breaking my working model, I left all the parts in place and just added the new parameters (and extended the uniforms) to the function signature as below:

vertex VertexOut
vertex_main(constant VertexIn *vertices [[buffer(0)]],
            constant VertexUniforms &uniforms [[buffer(1)]],
            constant float2 *localUVs [[buffer(2)]],          // NEW STUFF (7 values)
            constant VertexIn *newVertices [[buffer(3)]],     // NEW STUFF
            uint id [[vertex_id]]) {

Compiling and running with no new code, everything works as before. The new parameters have no effect, as planned.

AND THEN I STARTED TO THINK … WHAT DOES THIS MEAN?

I realised that I have no idea where [[vertex_id]] comes from. Actually, [[buffer(0)]] and [[buffer(3)]] are different sizes, the former being a sub-mesh and the latter the entire mesh. Which element of these buffers would be selected by [[vertex_id]] if I included code? Clearly there is no one-to-one relation between their elements!!!

Naturally I don’t actually intend to use both buffers - this is just a careful build. But my question is: What is [[vertex_id]] and how does it know which buffer I mean?

You mentioned argument buffers in a recent post. My [[buffer(2)]] contains 7 uv values (representing each vertex on a hexagon) and I will use localUVs[uniforms.index] to select the appropriate one, allowing me to make six draw calls in my loop and build the hexagons - each triangle needs 3 of those 7 uvs. (In my new Vertex, each vertex knows which of the three it needs based on which of the 6 we’re drawing.)

uniforms.index is a uint16_t. I hope that’s right.

Sorry to keep bending your ear like this, Caroline. I hope seeing my questions is valuable for you, at least.

The Metal Shading Language Spec document is here: https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf

vertex_id is defined as:
The per-vertex identifier, which includes the base vertex value if one is specified

Which isn’t a great explanation :slight_smile: .

When you do a draw call, you specify the number of vertices to be drawn. This will set up the necessary shader cores on the GPU to execute the vertex function in parallel. For example:

renderEncoder.drawPrimitives(type: .point, 
                             vertexStart: 0,
                             vertexCount: 1000)

Here I’m drawing 1000 vertices, starting at vertex 0. Each shader being executed in parallel will receive a value from 0 to 999 in vertex_id.

So in your case the index will be on the entire mesh rather than the submesh. But vertex_id doesn’t itself know about any buffers - it’s just how you apply it in your code.
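
So a typical pattern is simply to use it as an index into whichever buffer matches your draw call. A minimal sketch (VertexIn here is illustrative):

#include <metal_stdlib>
using namespace metal;

struct VertexIn {
    float3 position;
};

vertex float4 vertex_main(constant VertexIn *vertices [[buffer(0)]],
                          uint id [[vertex_id]]) {
    // id runs from vertexStart to vertexStart + vertexCount - 1.
    // Indexing a buffer with it is purely a convention in your code -
    // nothing ties vertex_id to buffer 0 rather than buffer 3.
    return float4(vertices[id].position, 1);
}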

The book has a section on argument buffers in Chapter 15, GPU-Driven Rendering. I’d wait and get it working before trying to optimise.

No worries - all questions are valuable, as they help me question my world view too :slight_smile: . (Especially as I won’t know all the answers :stuck_out_tongue: )

Just saw your answer as I loaded the following. I’ll take a look. I did jump forward to debugging in chapter 23 but figured I’m not quite ready for it yet. :slight_smile:

I’ve been looking at Uniforms. Maybe chapter 15 will help?

I’ve been trying to apply them to improving my model. Actually, since every element is a hex made of 6 triangles, always the same shape and using a set of 8 textures all the same size, I realised I could limit my data to (i) vertices, (ii) indices and (iii) some lookup tables to figure out the uv coordinates. So I use one texture, a uvOffset variable to point to the appropriate section of the texture, and a uvLocal table to hold the (only) seven different points needed in a texture to produce the triangles. Yayyy! Now I need (iv) a pointer (index) to choose with. Actually it’s slightly more complicated, but I don’t want to bore you.

Using the above procedure with colours instead of the uv table, I have no problem. It works. But when I apply the uv table, I get a torus that’s all white. My diagnosis is that my calculations are producing colours > 1.0 but I can’t see why.

So I tried this trick: as well as the uv, I provided the vertex shader with a further parameter (pointing to a zero-initialised buffer [[4]] that I set up on the CPU).

vertex VertexOut vertex_main(constant VertexIn *vertices [[buffer(0)]],
                             constant VertexUniforms &uniforms [[buffer(1)]],
                             constant float2 *localUVs [[buffer(2)]],      // contains 7 uv pairs
                             constant ushort *indexTable [[buffer(3)]],    // contains 18 values (0...6)
                             device Output *output [[buffer(4)]],
                             uint id [[vertex_id]]) {

I understand that device means read/write. So in the vertex shader I record all my inputs and outputs in this buffer and send it back to my app, where I read it back into a struct as follows.

      // after   commandBuffer.commit()
      var array = [Output]()
      array.reserveCapacity(mesh.vertexCount)
      for i in 0..<mesh.vertexCount {
         array.append(outputBuffer.contents().load(fromByteOffset: MemoryLayout<Output>.stride * i, as: Output.self))
      }
      for output in array { print("\(output)") }

To my surprise, the buffer remains (mostly) zero-initialised. Do I misunderstand this device attribute? Or does the GPU work too rapidly for me to update and print the buffer synchronously like this?

I also went into the debugger (pressing the camera icon). The triangle indices and buffers [[0]] and [[1]] make sense (unlike [[0]], [[1]] doesn’t seem to know it has a mixture of float2 and short values). Buffers [[2]] and [[3]] make no sense whatever way I look at them, float or integer.

Chapter 15 won’t help with Uniforms.

Uniforms is a buffer that contains constants that are the same across all shaders, such as a view matrix.
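
Typically you define it once in a header shared between Swift (via the bridging header) and your .metal files. Something like this - the exact fields are up to you:

#include <simd/simd.h>

typedef struct {
    matrix_float4x4 modelMatrix;
    matrix_float4x4 viewMatrix;
    matrix_float4x4 projectionMatrix;
} Uniforms;

Every vertex in a draw call then sees the same Uniforms values, unlike per-vertex data.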

Didn’t you get a warning about writing to a resource in a non-void vertex function? If you want to write to a buffer, you should either use a vertex function with a void return (a non-rasterising vertex function) or, preferably, a compute function: Metal write to buffer from vertex function - Stack Overflow

Here’s an example of a simple compute function just updating a single buffer and reading it back: Compute sum of array values in parallel with metal swift - Stack Overflow

Compute is discussed in chapter 16 onwards.
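
The shader side of that can be tiny. A minimal sketch of a compute kernel that writes to a buffer (unlike a non-void vertex function, a kernel is allowed to write to device memory):

#include <metal_stdlib>
using namespace metal;

kernel void fill_buffer(device float *output [[buffer(0)]],
                        uint id [[thread_position_in_grid]]) {
    // Each thread writes one element; read it back on the CPU
    // after the command buffer completes.
    output[id] = float(id);
}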

However, rather than leaping to compute, it sounds as if you were just writing to a buffer to see what was in it? You should be able to see the values in the GPU debugger if the buffers are defined properly. I’d have to see more of your project to be able to understand what’s going on.

I also don’t know why your buffers don’t make sense in the GPU debugger. They ought to :slight_smile:.

Another way of debugging, if you’re having trouble with the GPU debugger, is to put that value in your VertexOut structure, send it to the fragment shader, and return it as the fragment color (that works only between zero and 1 of course). But you should also be able to see the value in the GPU debugger if it’s in the VertexOut struct.
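
For example, assuming you add a debugValue float to VertexOut, the fragment function could be as simple as:

fragment float4 fragment_main(VertexOut in [[stage_in]]) {
    // Grayscale visualisation; values outside 0...1 clamp to black or white.
    return float4(float3(in.debugValue), 1);
}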

I did :slight_smile: but it was “only a warning” so I thought I could finesse it.

I understand that, but in effect it seems to me that where I’m expecting my value to be a colour, it’s showing up white and therefore > 1.

Curiously, in respect of the fragment shader, the debugger shows only the texture, the sampler and the code. Not VertexOut. That’s why I created Output.

Here’s a cut-down version of my code with a simplified torus with no transforms or lighting. The only parts that seem relevant to my problem are the shaders and the draw(in:) function, and I’ve isolated the important part into a sub-function.

TestMyTorus.zip (1.7 MB)

If you run it as is, you’ll see an oval shape (edit: actually not oval, ’cos cull mode is set to .back), which is the torus head-on with a few obvious coloured hexes. It’s correct! Change the last line of the fragment shader (commented) and you’ll get the version which doesn’t work. It’s still a torus, but white. The console displays the values of the output buffer. Very few, if any, are updated from the initialised values. Of course, that non-void warning is still there.

I can see the indices and the vertices with no problem. Clearly the debugger knows exactly what they are. When it comes to the uniforms, which contain a mixture of data types, it presents them all as floats, but I can see that from offset 0x18 it holds a sensible short value. It’s just strange that the debugger doesn’t reflect this. As for the tables in the other buffers, I can convert them to any type I want and they make no sense.

Just doing a quick overview of where to start debugging. (I make observations before diving into actual code.)

In the debugger, if you double click Geometry, you’ll get all those output values. So instead of creating an Output buffer, you can put the values in VertexOut and look at Geometry.

Doing that, I see that localUVs are always zero, indexTable is either 0 or 769 and index is either 3 or 9 or 15. I gather index is supposed to be [0…17]?

In the fragment shader, if I return uv.x as the color, there are different values. If I return uv.y as the color, they are all the same value.

I’ll get to actual debugging shortly :slight_smile:

Edit: If you’re only using one texture, you don’t really need to have multiple submeshes. You can just index into the coordinates of that texture. For example, Chapter 11, Tessellations, renders a mountain, and the challenge project uses three different textures. Depending on the height and slope of the fragment, it uses one of those three textures to get cliff, snow or grass.
(It didn’t have to be three different textures, it could have been one and calculate the texture position from the uvs, but that’s extra complexity)

Edit2: Forget that previous edit. I guess you’d have to hold a color attribute for each hexagon to access the correct texture. Instead you’re doing four draw calls. I’m not sure which is faster.

If you’re using setVertexBytes, don’t create an MTLBuffer. The whole point of setVertexBytes is to send the data directly. (As long as the data is less than 4KB.)
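
For instance, assuming localUVs is a small [SIMD2<Float>] array, this is all you’d need - no MTLBuffer at all:

renderEncoder.setVertexBytes(localUVs,
                             length: MemoryLayout<SIMD2<Float>>.stride * localUVs.count,
                             index: 2)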

Removing the MTLBuffer, I get the localUVs showing up correctly on the GPU.

I did the same with indexTable and flattened the multi-dimensional array. This is a better result. Not sure what you’re actually going for!

TestMyTorus VC.01.zip (3.5 MB)

Edit: btw - I changed colour to color. I’m British, so it goes against the grain, but it does save heartache in the long run.

Yes, that change is sufficient to get a picture.

You’re brilliant. I should have seen that I didn’t need a buffer. I knew that and I still did it :frowning:

Nevertheless, I don’t understand why a buffer doesn’t work!!! Though I don’t need one, surely I could use one? I suspect my use of ushort for UInt16 may be wrong.

Anyhow, thank you so much. Lots of the numbers make sense now and I’ll check the others tomorrow. You’ll have noticed we’re getting a kaleidoscope rather than a texture but I’ll get on it.

I’m Irish. Color is just too much to stomach :grin:

setVertexBytes expects an UnsafeRawPointer.

If you pass &mtlbuffer, then you are sending a pointer to the MTLBuffer object itself, not to its contents.

If you pass mtlbuffer.contents(), which returns an UnsafeMutableRawPointer, then setting a buffer works. (But please don’t :laughing:!)
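
To spell that out, with a hypothetical uvBuffer (an MTLBuffer) and dataLength (its size in bytes):

// Wrong: this sends a pointer to the Swift reference itself,
// not to the uv data stored inside the buffer.
renderEncoder.setVertexBytes(&uvBuffer, length: dataLength, index: 2)

// Works, because contents() points at the actual data - but it copies
// the data again, which defeats the point of having a buffer.
renderEncoder.setVertexBytes(uvBuffer.contents(), length: dataLength, index: 2)

// If the data already lives in an MTLBuffer, bind the buffer instead.
renderEncoder.setVertexBuffer(uvBuffer, offset: 0, index: 2)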