Chapter 6: Confused about sRGB color space and gamma correction in Metal

First of all, thank you for the great book!

I am still not sure why the house looks darker rather than brighter. I think this could be resolved by answering two questions:

  1. How does the Metal texture loader deal with a texture's color space?

I am wondering: if we set MTKTextureLoader.Option.SRGB to true, does this mean Metal will automatically linearize the pixel data for us? My assumption is YES.

  2. After the fragment shader runs, does Metal automatically apply gamma correction (^2.2)?

My assumption is NO. We need to do the gamma correction manually in the shader.

With these assumptions,

  • if .SRGB is set to true (by default), baseColor is in linear space, so it looks darker, since the display will lower the value again. By doing sRGBColor = pow(linearColor, 1.0/2.2); (manual correction) before output, it looks normal.
  • if .SRGB is set to false, baseColor is actually still in sRGB space, so we don't need to correct it before it is sent to the display.

If I understand this right, my concern is that when we set .SRGB to false, we're dealing with baseColor in non-linear space, which I think shouldn't work in lighting calculations, especially for PBR in Chapter 7 (Maps & Materials) ><
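A quick numerical sketch of the manual correction described above, using the approximate 2.2 gamma rather than the exact piecewise sRGB curve:

```swift
import Foundation

// Approximate gamma-2.2 encode/decode (the exact sRGB curve is piecewise,
// but 2.2 is the common approximation used in this discussion).
func linearToSRGB(_ c: Double) -> Double { pow(c, 1.0 / 2.2) }
func srgbToLinear(_ c: Double) -> Double { pow(c, 2.2) }

// A mid-grey in linear space...
let linear = 0.5
// ...sent to the display uncorrected gets decoded once more and darkens:
let shownUncorrected = srgbToLinear(linear)              // ≈ 0.22, too dark
// With the manual pow(1/2.2) correction before output, it round-trips:
let shownCorrected = srgbToLinear(linearToSRGB(linear))  // back to 0.5

print(shownUncorrected, shownCorrected)
```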

sRGB handling also depends upon the MTKView's colorPixelFormat. You can change this if you want to support wide color.

The default is bgra8Unorm.

To test things out, start with the sample final code.

In fragment_main, after reading baseColor at the top of the function, add this:

return float4(baseColor, 1);

That’s so all the lighting stuff doesn’t interfere.

The two textures in the asset catalog are set to Data and not Colors, which means they will load in linear space.

With both the MTKView’s color pixel format and the pipeline’s color pixel format being bgra8Unorm, and textures also having the format bgra8Unorm, they all have the same color pixel format.

In the asset catalog, change barn and grass to use Colors and not Data. This changes them back to sRGB textures.

When the GPU samples from an sRGB texture, it performs a conversion:

(https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf)

Because of this conversion, the colors are too dark.
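The conversion the spec describes is the piecewise sRGB curve, not a plain 2.2 power. Sketched in Swift:

```swift
import Foundation

// sRGB -> linear decode, applied when the GPU samples an sRGB texture,
// per the Metal Shading Language Specification's sRGB conversion rules.
func srgbToLinear(_ s: Double) -> Double {
    s <= 0.04045 ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4)
}

// The reverse encode, applied when writing to an sRGB render target.
func linearToSRGB(_ l: Double) -> Double {
    l <= 0.0031308 ? l * 12.92 : 1.055 * pow(l, 1.0 / 2.4) - 0.055
}

// Sampling lowers the raw value (0.5 becomes ~0.21), which is why the
// scene looks too dark when only the sampling side of the conversion runs:
print(srgbToLinear(0.5))
```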

To fix this, in Renderer's init(metalView:), add this line after setting metalView.device:

metalView.colorPixelFormat = .bgra8Unorm_srgb

And in Model.swift, in buildPipelineState(), change:

pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm

to

pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm_srgb

When you run, all color spaces match, so the colors look correct again.

And, welcome to the forums! I’m so glad you’re enjoying the book :blush:

Hi Caroline, thank you for the swift reply. It makes sense to me and I learned more from your answer!

I still have one concern as I try this out. :face_with_head_bandage: If we select the Data option, the texture loads in linear space; if we also keep the color format as .bgra8Unorm, the display result is correct. That's because the data is always in sRGB space and no conversion happens at all. So we are still dealing with baseColor in non-linear space, right? (Loading a texture "in linear space" doesn't change what space the original data is actually in?)

To validate this, I kept Data and changed .bgra8Unorm to .bgra8Unorm_srgb, and the result looks brighter. For example, suppose the input data is 0.5 ^ (1 / 2.2) = 0.73. Since the only automatic conversion happens when writing to the sRGB texture, the value in the drawable becomes 0.73 ^ (1 / 2.2) = 0.87, which the display then raises to the power 2.2.
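The arithmetic in that example checks out (again using the 2.2 approximation):

```swift
import Foundation

// Reproducing the double-encode example: sRGB image data written
// to an sRGB drawable gets encoded a second time.
let original = 0.5
let stored = pow(original, 1.0 / 2.2)    // ≈ 0.73, sRGB-encoded image data
let inDrawable = pow(stored, 1.0 / 2.2)  // ≈ 0.87, encoded again on write
let displayed = pow(inDrawable, 2.2)     // the display decodes exactly once

// displayed lands back at ~0.73, brighter than the original 0.5.
print(stored, inDrawable, displayed)
```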

So, I think that only by using Colors (or .SRGB = true) together with .bgra8Unorm_srgb do we get baseColor in linear space. This relies on the assumption that the original image data is in sRGB space.

:face_with_head_bandage: - me too :smiley:

I think that this is how it works.

Shader colors are always linear.

If you have your textures in sRGB, they will be sampled automatically by the GPU and converted into linear space. When writing to textures that are sRGB, the reverse conversion will take place.

So if the view’s pixel format is .bgra8Unorm_srgb, and the textures are the same, then when the shader samples the textures, they will convert to linear. The fragment shader will write the linear result, and as it’s going into an sRGB drawable, that linear result will convert to sRGB.

If the view’s pixel format is .bgra8Unorm and the textures are the same, then no conversion will be done by the GPU.

(This bit is conjecture.) However, presumably, because the display wants sRGB ultimately, then there will be a conversion of the .bgra8Unorm drawable texture to sRGB for the display.
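Putting those rules together, here's a sketch that simulates the sample → shade → write path for the two matching-format cases (the function names and the pass-through "shader" are made up for illustration):

```swift
import Foundation

// Piecewise sRGB conversions, per the Metal Shading Language Specification.
func srgbToLinear(_ s: Double) -> Double {
    s <= 0.04045 ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4)
}
func linearToSRGB(_ l: Double) -> Double {
    l <= 0.0031308 ? l * 12.92 : 1.055 * pow(l, 1.0 / 2.4) - 0.055
}

// Simulate: sample a texel, run a pass-through shader, write to the drawable.
// `isSRGB` stands in for both the texture's and the drawable's pixel format.
func pipeline(texel: Double, isSRGB: Bool) -> Double {
    let sampled = isSRGB ? srgbToLinear(texel) : texel  // GPU decodes on sample
    let shaded = sampled                                // shader works in linear
    return isSRGB ? linearToSRGB(shaded) : shaded       // GPU encodes on write
}

// Both matching cases round-trip the texel unchanged, which is why the
// colors look correct whenever all the pixel formats agree:
print(pipeline(texel: 0.73, isSRGB: true))   // ≈ 0.73
print(pipeline(texel: 0.73, isSRGB: false))  // 0.73
```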

Thanks so much for the information! :laughing:
