Masking Two Quads

Actually, the scenario is this:
Position coordinates come from the server, so for the first quad I draw the vertices according to those coordinates.
Now there is a mask image, as shown in the picture. I have to mask the previous quad according to the mask colors from the image. For example: multiply the corresponding mask color onto the previous quad.

As I understand it:

  1. You draw the quad coming from the server.
  2. You draw another quad with mask texture on top of the first quad.

The result should be that the quad underneath is blended with the second quad?

Is that correct?

If so, it sounds as if you:

  1. Draw the first quad to a full screen texture.
  2. Draw the second quad to a full screen texture.
  3. Combine the two textures to create a third texture.
  4. Render this texture onto a full screen quad.

Yes, you got it right.
But can I use two textures in the same quad? The first texture is the image placed at the server coordinates, and the second texture is the mask image, which covers the whole view.
Currently the mask image is also being set to the server coordinates.

If I have that right, you could possibly work out the UV coordinates of the second image from the vertices of the quad. If your full-screen texture is the full size of the screen, then you can send it to the fragment shader along with the first texture, and use get_width() and get_height() on that texture to work out where the quad vertices sit inside it.

Thanks, I will try.
So for the mask image, I should compute the texture coordinates in the fragment shader, right?

The fragment shader gives you the current interpolated xy position in screen space, e.g. x: 500, y: 100. That’s for the rendered quad.

The full-size texture might be 1000 x 1000, the same size as the window.

The UV coordinate to read the full-size texture would then be u: 500 / 1000, v: 100 / 1000, or [0.5, 0.1].
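That mapping is just the fragment's pixel position divided by the texture size. Here is a minimal Swift sketch of the same arithmetic; the helper name maskUV is mine, not from the project's code, and in the actual fragment shader you would use the interpolated position and the texture's get_width()/get_height() instead:

```swift
/// Maps a fragment's interpolated screen-space position (in pixels) to the
/// UV coordinate used to sample a full-screen texture of the given size.
/// Plain-Swift mirror of the arithmetic the fragment shader would perform.
func maskUV(x: Float, y: Float,
            textureWidth: Float, textureHeight: Float) -> (u: Float, v: Float) {
    // u and v are normalized to the 0...1 range across the full texture.
    return (x / textureWidth, y / textureHeight)
}
```

For the example above, maskUV(x: 500, y: 100, textureWidth: 1000, textureHeight: 1000) gives (0.5, 0.1).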

I think :smiley:

So I did it like this:

    func prepareVerticesData(from coordinates: SlideShowTemplateConfig.CoordinatesRow,
                             renderSize: CGSize) {
        let width: CGFloat = renderSize.width
        let height: CGFloat = renderSize.height

        let topLeft = convertToMetalCoordinates(point: CGPoint(x: coordinates.topLeft.x * width, y: coordinates.topLeft.y * height), viewSize: renderSize)
        let topRight = convertToMetalCoordinates(point: CGPoint(x: coordinates.topRight.x * width, y: coordinates.topRight.y * height), viewSize: renderSize)
        let bottomLeft = convertToMetalCoordinates(point: CGPoint(x: coordinates.bottomLeft.x * width, y: coordinates.bottomLeft.y * height), viewSize: renderSize)
        let bottomRight = convertToMetalCoordinates(point: CGPoint(x: coordinates.bottomRight.x * width, y: coordinates.bottomRight.y * height), viewSize: renderSize)

        vertices = [
            Vertex(position: SIMD3(Float(topLeft.x), Float(topLeft.y), 0), color: SIMD4(1, 0, 0, 1), texture: SIMD2(0, 1)),
            Vertex(position: SIMD3(Float(bottomLeft.x), Float(bottomLeft.y), 0), color: SIMD4(0, 1, 0, 1), texture: SIMD2(0, 0)),
            Vertex(position: SIMD3(Float(bottomRight.x), Float(bottomRight.y), 0), color: SIMD4(0, 0, 1, 1), texture: SIMD2(1, 0)),
            Vertex(position: SIMD3(Float(topRight.x), Float(topRight.y), 0), color: SIMD4(1, 0, 1, 1), texture: SIMD2(1, 1))
        ]
    }
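The thread doesn't show convertToMetalCoordinates itself, but for the vertex positions above to land correctly it needs to map view-space points (origin at the top left, y increasing downwards, in pixels) into Metal NDC. This is a plausible reconstruction, not the project's actual code, using plain Doubles instead of CGPoint so it stands alone:

```swift
/// Converts a point in view space (origin top-left, y down, in pixels) into
/// Metal normalized device coordinates, where x and y both run from -1 to 1
/// and y increases upwards. Hypothetical sketch of the function called above.
func convertToMetalCoordinates(x: Double, y: Double,
                               viewWidth: Double, viewHeight: Double) -> (x: Double, y: Double) {
    let ndcX = (x / viewWidth) * 2.0 - 1.0   // 0...width  -> -1...1
    let ndcY = 1.0 - (y / viewHeight) * 2.0  // 0...height ->  1...-1 (flip y)
    return (ndcX, ndcY)
}
```

With an 800 x 800 view, the top-left corner (0, 0) maps to (-1, 1), the center (400, 400) to (0, 0), and the bottom-right corner (800, 800) to (1, -1).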

Now if I create the mask texture, it takes the same vertex coordinates.

Did that work? Depending on what your vertex function is doing, and any matrix calculations, the position result of the vertex function needs to be in NDC, between -1 and 1 in x and y.

The fragment function’s in.position will be in screen space rather than NDC.

Yes, my image texture is working fine; it is placed at the correct coordinates.
But if I apply the mask texture, it takes the same vertex coordinates.
What I want is for the mask texture to use the full-screen coordinates, not the coordinates coming from the server.

Did you try sending the full-screen texture to the fragment shader and doing the calculation of the UV coordinates there?

I don’t know how I can do this.
And how should I create the full-screen texture?
Archive.zip (3.6 KB)
You can check the two files I am using.

I don’t know how you would want to resize the mask image to full screen. Are you using a fixed size for the screen? Will the aspect ratio change?

You could experiment with Metal Performance Shaders MPSImageLanczosScale or this looks promising: GitHub - imxieyi/MetalResize: Fast image interpolation using Metal

Once you have a texture the same size as the screen, you can send that texture to the fragment shader with the quad, and work out the UV coordinates to read from that texture using the calculation I suggested earlier: Masking Two Quads - #10 by caroline

This is a simple version of working out texture coordinates from a 400x400 image, when the screen size is 800x800 pixels.

Shaders.zip (321.5 KB)

Without the texture, the quad renders:

Combined, it renders:

You’ll need Xcode 13 to run it. I hope Monterey is not a requirement, but not sure.

Okay, I will try that.

It doesn’t do any image resizing - I matched the image to suit the screen size, but first steps first :smiley:

Actually, what I have to do is:

  1. Place the image at the coordinates coming from the server.
  2. Then mask that image with the RGB-channel mask image.

I have done the first part; I am stuck on the second part,
because the mask texture takes the same size as the previous quad.
I am trying to use the image texture and the mask texture in the same quad.

Give me 5 mins, I am attaching screenshots of what I want. From that, it will be clear to you.

How about this one? Where the 400 x 400 image is stretched over the quad?


Here’s the code for that. Just simple UVs in that case.
Oops, wrong one.
Edit: try this one:

Shaders.zip (325.5 KB)


Any solution for how I can change the vertex data for the second texture?
For example: the first texture should use one set of vertices and the second texture a different set.
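One way to do that, if you'd rather not compute the mask UVs in the fragment shader, is to carry two sets of texture coordinates in each vertex: one for the image and one for the mask. A sketch, where the struct and field names are illustrative and not from the project:

```swift
/// A vertex carrying separate texture coordinates for each texture, so the
/// image and the mask can be sampled with different UVs from the same quad.
struct DualUVVertex {
    var position: SIMD3<Float>  // quad corner in NDC (from the server coordinates)
    var imageUV: SIMD2<Float>   // UV into the image texture
    var maskUV: SIMD2<Float>    // UV into the mask texture, spanning the full screen
}

// Example: the top-left corner of a quad covering only part of the screen.
// The image UV follows the quad, while the mask UV would be derived from the
// corner's position within the full screen (values here are illustrative).
let topLeft = DualUVVertex(position: SIMD3<Float>(-0.5, 0.5, 0),
                           imageUV: SIMD2<Float>(0, 1),
                           maskUV: SIMD2<Float>(0.25, 0.75))
```

The vertex function would then pass both UV sets through to the fragment shader, which samples each texture with its own coordinates.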