Camera and moving scene object

Hi again, Caroline,

After implementing part of the code for Chapter 9, where we introduce the camera, and after looking heavily into the previous chapters, I have one question: what needs to be done, and how, so that I have the option of either moving the camera, OR moving an object within my scene (or applying any possible transform to an object, vertex, quad… whatever)?

If I understand the code correctly… let's just take a simple quad: starting from its own object space, the coordinates of its 4 vertices get transformed by all 3 matrices, including the camera's view transform, so in the end I am rendering the quad AS SEEN by the camera.

In uniforms… this transformation matrix is somehow stored and applied to ALL vertices at once (in the Shaders.metal file), but with the changes in Chapter 9 it seems to me that the option has been lost to apply a transform (move, scale, rotate) to only a SINGLE object in the scene.

Could you please give me some insight into this?

Many thanks in advance,

Marek

I've attached a project from the chapter, just before adding the orthographic camera.

MarekNavigation.zip (2.6 MB)

  1. Each Model has a Transform which contains position, rotation and scale.
  2. Renderer, in draw(in:), renders each model in turn. It calls model.render(encoder:uniforms:params:).
  3. This model method uses uniforms, which contains the same projection and view matrices for all models. So the camera position is the same when rendering all the models. It’s only the model’s transform that differs between models.
  4. In this model method, you update uniforms with the modelMatrix. The model matrix is built from the model's transform (position, rotation, scale). You then draw the single model with encoder.drawIndexedPrimitives..., so each draw uses that model's individual matrix. (There's a short sketch of this flow just after this list.)
  5. If you move the camera using the mouse, you update the camera’s position and rotation. The viewMatrix is created from the camera’s position and rotation, and applied to each model matrix in turn while rendering each model.
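In Swift, that flow looks roughly like this. It's only a minimal sketch: the names are modeled on the book's code rather than being the exact implementation, and the model matrix handles translation only, to keep it short:

import simd

// Step 1: each model owns a Transform (names assumed).
struct Transform {
    var position: SIMD3<Float> = .zero
    var rotation: SIMD3<Float> = .zero   // Euler angles in radians
    var scale: Float = 1

    // Translation only, to keep the sketch short; the book's version
    // also folds rotation and scale into this matrix.
    var modelMatrix: float4x4 {
        var m = matrix_identity_float4x4
        m.columns.3 = SIMD4<Float>(position.x, position.y, position.z, 1)
        return m
    }
}

// Steps 3–4: projection and view are shared; only modelMatrix differs.
struct Uniforms {
    var modelMatrix = matrix_identity_float4x4
    var viewMatrix = matrix_identity_float4x4        // from the camera, shared
    var projectionMatrix = matrix_identity_float4x4  // shared by every model
}

var uniforms = Uniforms()
var house = Transform()
house.position.x = 10                      // move the house 10 units right
uniforms.modelMatrix = house.modelMatrix   // stamped in just before its draw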

More random explanations, as I'm not sure which piece of information is missing :slight_smile::

  • position is in world space. If you change house.position.x to 10, the house moves 10 units to the right.

  • When you render, it's convenient, but not necessary, to render through a camera. In Shaders.metal's vertex function, you can remove * uniforms.viewMatrix and your house will still render at the centre of the scene, because the house's position is [0, 0, 0]. But you won't be able to move around the scene. (The sketch below spells out the same matrix math.)
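To see what removing * uniforms.viewMatrix does to the maths, here's the vertex function's chain of multiplications sketched in Swift simd, with identity matrices standing in for the real ones (a hypothetical snippet, not the book's code):

import simd

let projectionMatrix = matrix_identity_float4x4
let viewMatrix = matrix_identity_float4x4
let modelMatrix = matrix_identity_float4x4
let position = SIMD4<Float>(0, 0, 0, 1)   // a house vertex at the origin

// Full chain: model space → world space → camera space → clip space.
let withCamera = projectionMatrix * viewMatrix * modelMatrix * position
// Drop the view matrix and the house still renders at the origin,
// but there's no longer a movable viewpoint.
let withoutCamera = projectionMatrix * modelMatrix * position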

  • The draw call encoder.drawIndexedPrimitives... is executed once for each model.

  • Renderer, in draw(in:), sets up the render command encoder within the command buffer. The command buffer holds the list of encoded commands for the GPU to run. In the case of this example, this is the list of commands in the command buffer:

[Screenshot 2022-09-18 at 12.43.57 pm: GPU frame capture command list]

That is a screenshot of the GPU frame capture. It may be the missing link for you. The vertices aren't all rendered at exactly the same time using the same buffers. There are multiple commands that set those different buffers at different times. Each of these commands is run sequentially on the GPU. The commands set the state of the GPU, so that the draw can run in parallel for all vertices using the same state. The draw is indeed parallel for all vertices, but only for the vertices in the current vertex buffer.

You can see that there are two draw calls (at 11 and 17): one for the ground, and one for the house. The uniform buffer, containing the projection, view and model matrices, is set before drawing each model (at 6 and 12). The vertex buffer for each model's positions is also set before the draw call (at 8 and 14).

During each draw call, the vertex function runs simultaneously on every vertex: first the ground's vertices, then the house's. Before each model's vertices are drawn, the uniform buffer is set at buffer index 11, so each model ends up with a different final matrix.
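Here's a hypothetical sketch of how that command list could be encoded, reusing the Transform and Uniforms sketches from above. The Model type and the buffer indices (0 for positions, 11 for uniforms) are assumptions based on this capture:

import MetalKit

// A Model here just bundles what the encoding loop needs (assumed shape).
struct Model {
    var transform: Transform
    var vertexBuffer: MTLBuffer
    var indexBuffer: MTLBuffer
    var indexCount: Int
}

func encode(models: [Model],              // [ground, house]
            sharedUniforms: Uniforms,
            encoder: MTLRenderCommandEncoder) {
    var uniforms = sharedUniforms
    for model in models {
        // Set state first: this model's matrix, then its vertex buffer.
        uniforms.modelMatrix = model.transform.modelMatrix
        encoder.setVertexBytes(&uniforms,
                               length: MemoryLayout<Uniforms>.stride,
                               index: 11)
        encoder.setVertexBuffer(model.vertexBuffer, offset: 0, index: 0)
        // One draw per model: the vertex function now runs in parallel
        // over just this model's vertices, using the state set above.
        encoder.drawIndexedPrimitives(type: .triangle,
                                      indexCount: model.indexCount,
                                      indexType: .uint16,
                                      indexBuffer: model.indexBuffer,
                                      indexBufferOffset: 0)
    }
}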

  • The aim of having these different spaces and matrices is simply to make rendering easy. (Edit: easy is relative! Rendering is hard! And matrices are harder!)

  • Having a model matrix for each model makes it easy to move a model into different positions in the scene. Having a view matrix for the camera makes it easy to move the viewpoint of the scene. And having a projection matrix controls the 3D projection, so that your scene looks 3D.

  • If you simply have one quad, you don't need any of these matrices. The final result of the matrices multiplied by each vertex position is a clip space position, which the GPU converts into Normalised Device Coordinates. So if you manually assign each vertex a position that's already correct in NDC, you don't need the matrices at all.
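For example, a quad specified directly in NDC needs no matrices at all. This is a hypothetical snippet rather than the book's code:

import simd

// A quad as two triangles, with positions already in NDC (in Metal,
// x and y run from -1 to 1, z from 0 to 1). The vertex function can
// pass these straight through, untransformed.
let quadVertices: [SIMD4<Float>] = [
    SIMD4<Float>(-0.5, -0.5, 0, 1),
    SIMD4<Float>( 0.5, -0.5, 0, 1),
    SIMD4<Float>( 0.5,  0.5, 0, 1),
    SIMD4<Float>(-0.5, -0.5, 0, 1),
    SIMD4<Float>( 0.5,  0.5, 0, 1),
    SIMD4<Float>(-0.5,  0.5, 0, 1)
]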

Add this to GameScene.update(deltaTime:):

house.position.y += 0.001   // the house slowly rises each frame
ground.rotation.y += 0.001  // the ground spins around the y axis
print(camera.position, camera.rotation)   // only changes when you move the mouse

This code shows that each of the objects has its own Transform.

When you run it, you’ll see the house slowly rise into the air, while the ground rotates around the y axis. The camera transform prints out in the debug console and doesn’t change until you use your mouse to move it.


Thanks a lot for the answer. As always, spot on. That random piece of explanation was exactly what I was missing.

I have to say… it really is a perfect book. It just takes time to digest it all.

KR, Marek


I’m glad you’re enjoying it :blush:. And well done for pushing through!

My main advice would be to use GPU frame capture often to see what’s actually happening on the GPU. There’s so much information there.

Yes… GPU frame capture is a very interesting part; it's just not easy to learn, and to learn how to use effectively. Is there any good tutorial on capture?

Chapter 31, Performance Optimization, goes into frame capture a little more, but beyond that there are really only Apple's videos.

To be able to understand Apple’s Metal videos, you do need some background in Metal. I would guess after Chapter 16, GPU Compute, when you’ve been introduced to TBDR, you’ll be better equipped. But each time you watch them, you’ll understand a little bit more :slight_smile: