So I’m not a colour professional or anything, just a geek who wants to calibrate his monitors. I still want to understand what I’m doing, though, and I’m struggling to get to the bottom of the relationship between the calibration curve and the tone response curve you get at the end of a calibration. Let me elaborate:
If I set a high gamma or apply an ambient light level adjustment, the calibration curve is broadly the shape you’d expect, but the tone response curve goes the other way, in something like a mirror image about the diagonal, though not an exact mirror image, and the R, G and B lines sometimes separate in a completely different order from the calibration curve. If it were exactly a mirror image it would make some sense, but it doesn’t seem to follow any logic I can see.
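To make my confusion concrete, here’s a toy sketch of how I *think* the two curves relate. All the numbers are made up, and I’m pretending the panel and the curves are pure power-law gammas, which I know real calibrations aren’t:

```python
# Toy model: panel with a native gamma of 2.2, calibrated to a target of 2.4.
# (Both numbers are assumptions purely for illustration.)
NATIVE_GAMMA = 2.2
TARGET_GAMMA = 2.4

def calibration_lut(x):
    """The calibration curve (1D per-channel LUT loaded into the GPU)."""
    return x ** (TARGET_GAMMA / NATIVE_GAMMA)

def panel(x):
    """Native response of the uncalibrated panel."""
    return x ** NATIVE_GAMMA

def calibrated_response(x):
    """What an instrument measures after calibration: panel(LUT(x))."""
    return panel(calibration_lut(x))

for x in (0.25, 0.5, 0.75):
    print(f"input {x:.2f} -> LUT {calibration_lut(x):.3f} "
          f"-> screen {calibrated_response(x):.3f} "
          f"(target {x ** TARGET_GAMMA:.3f})")
```

If this model is right, the tone response curve isn’t the calibration curve mirrored at all; it’s the *composition* of the calibration curve with the panel’s native response, which would explain why the two graphs don’t track each other in any obvious way. Is that the right way to think about it?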
This seems like a fundamental concept that everybody else just takes for granted as obvious, but I can’t find any explanations at all online. What I’ve inferred from research is that it’s the calibration curve that gets loaded into the GPU as a LUT, so it takes the RGB output from a program, modifies it, then sends that to the monitor, and that the tone response curve is maybe the part that gets stored in the ICC profile for colour-aware programs to use (but different from a 3D LUT)? So does the software use the tone response curve to convert a colour to a reference standard, before it can be modified again to represent a different colour space, and then go to the GPU to be changed once more by the calibration curve?
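Here’s my current mental model of the pipeline order as a sketch, so someone can tell me exactly where I’m wrong. The gamma-only maths and all the numbers are my assumptions, not anything I’ve read:

```python
# My guess at the pipeline (please correct me!):
#   app value -> colour management (uses the ICC profile's TRC)
#             -> GPU calibration LUT -> panel's native response

def colour_manage(x, display_gamma=2.4):
    """A colour-aware app encodes a linear-light value for the display,
    using the inverse of the tone response curve from the ICC profile."""
    return x ** (1.0 / display_gamma)

def gpu_lut(x):
    """The per-channel calibration curve sitting in the video card,
    correcting an assumed native gamma 2.2 up to the 2.4 target."""
    return x ** (2.4 / 2.2)

def panel(x):
    """Native panel response (assumed gamma 2.2)."""
    return x ** 2.2

linear = 0.5  # a linear-light value the application wants on screen
on_screen = panel(gpu_lut(colour_manage(linear)))
print(on_screen)  # round-trips back to 0.5 in this toy model
```

If that’s roughly right, the two curves never fight each other: the ICC profile describes the *calibrated* display so apps can encode for it, and the GPU LUT is what makes the display match that description in the first place. But I’d love confirmation.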
I’m going to stop rambling… if someone could please explain exactly what each curve is and where it comes in the graphics pipeline, that would be brill!