The complication of bit depth mismatch with Nvidia RTX and a 10 bit panel


    #39164

    Blowi

    So I have this Nvidia RTX card supporting 10 bit output and a wide gamut display also supporting 10 bit input (whether FRC or true 10 bit doesn't seem to matter here).

    The background is that in the full 8 bit workflow, if any correction is applied in the GPU LUT before the signal is sent to the monitor, there can be banding issues, since the correction reduces the effective number of output entries below 256: different input values have to share the same output, and input values differing by 1 can end up differing by 2 in the output.
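    To make this concrete, here is a rough sketch with a made-up correction curve (my own toy numbers, not taken from any real profile) showing how an 8 bit LUT both collapses and skips output codes:

    # Toy example: push 256 input codes through a mild, hypothetical 8-bit
    # correction curve and count where output codes collapse or skip.
    lut = [round(255 * (i / 255) ** 1.05) for i in range(256)]

    shared = [(i - 1, i) for i in range(1, 256) if lut[i] == lut[i - 1]]
    jumps = [(i - 1, i) for i in range(1, 256) if lut[i] - lut[i - 1] >= 2]

    print("distinct output codes:", len(set(lut)))   # fewer than 256
    print("adjacent inputs sharing one output:", shared[:5], "...")
    print("adjacent inputs whose outputs differ by 2:", jumps[:5], "...")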

    But it looks like even in the 10 bit workflow, the transitions can still suffer from the LUT correction. It's like "soft banding", where no abrupt color edges can be seen but the brightening of the shades is not uniformly smooth. The bit depth setting (12/14/16 bit, etc.) in DisplayCAL has an effect on where the unnatural transitions happen, but they still exist nonetheless.

    I found this when testing the "RTINGS.com" banding 16-bit TIFF in Photoshop, after confirming that 10 bit is enabled. The edges between adjacent values, which are very noticeable in an 8 bit system, are all smoothed out. But the "soft banding" can be seen in a few locations when scrolling the picture left to right. If I set the Nvidia driver back to 8 bit processing and disable 10 bit in Photoshop, these "soft bandings" happen exactly where two adjacent 8 bit input values are corrected to the same visual color. In my case, for green, it's 16&17, 35&36, 52&53, 68&69, etc.
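    (If you want to reproduce this without the RTINGS download, here is a quick sketch for generating a similar 16 bit green ramp, assuming numpy and the third-party tifffile package are available:)

    import numpy as np
    import tifffile  # third-party: pip install tifffile

    # A 16-bit, green-only horizontal ramp, similar in spirit to the RTINGS strip.
    w, h = 4096, 256
    img = np.zeros((h, w, 3), dtype=np.uint16)
    img[:, :, 1] = np.linspace(0, 65535, w).astype(np.uint16)  # green channel only
    tifffile.imwrite("green_ramp_16bit.tif", img)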

    I have verified this has nothing to do with the "colorspace conversion" part of the color management. It's likely the outcome of the LUT correction alone, since if I reset the video card gamma table, the "soft banding" is gone.

    How should I explain this? Is Nvidia ever able to improve the LUT correction artifacts in a 10 bit system? It looks to me like Nvidia is still using an 8 bit LUT correction table in a 10 bit input, 10 bit output system. My assumption is that the 8 bit LUT is interpolated by a factor of 4 to yield an effective 10 bit LUT with 1024 entries, whose outputs are then sent to the display. In this way, two identical 8 bit LUT entries become a flatter slope than the surroundings in the 10 bit LUT, and an entry jumping by 2 in the 8 bit LUT becomes a steeper slope in the 10 bit LUT. That's why they appear as softer unnatural transitions instead of sharp banding edges in the 10 bit workflow. See my attached figure.
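    This interpolation hypothesis is easy to simulate (again with a made-up curve, purely to illustrate the flat and steep stretches I am describing):

    import numpy as np

    # Hypothetical 8-bit correction LUT (same toy curve as above).
    lut8 = np.round(255 * (np.arange(256) / 255) ** 1.05)

    # If the driver builds its 10-bit ramp by linearly interpolating the 8-bit one:
    x8 = np.arange(256) * 4                  # 8-bit entry positions on a 10-bit axis
    lut10 = np.interp(np.arange(1024), x8, lut8 * 4)

    slopes = np.diff(lut10)
    print("flat stretches (slope 0):", np.where(slopes == 0)[0][:8])
    print("steep stretches (slope ~2):", np.where(slopes > 1.5)[0][:8])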

    The compressed 8 bit test image is also attached, but you should download the original 16 bit version from RTINGS.

    Is there a way in Windows to measure exactly what the display pixel value is after the video card LUT correction? I am using the color picker from PowerToys, and apparently it gives me the value before the LUT correction but after the color space conversion, and it is always in (0-255) even when the display is running at 10 bit.
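    One idea would be to read back the gamma ramp that Windows exposes through GDI and index it with the pre-LUT value; a ctypes sketch follows, though this only shows the 256-entry, 16-bit ramp the OS knows about, not whatever higher-precision table the driver may keep internally:

    import ctypes

    gdi32 = ctypes.windll.gdi32
    user32 = ctypes.windll.user32

    hdc = user32.GetDC(None)                       # device context for the whole screen
    ramp = (ctypes.c_ushort * 256 * 3)()           # 3 channels x 256 entries, 16-bit values
    if not gdi32.GetDeviceGammaRamp(hdc, ctypes.byref(ramp)):
        raise OSError("GetDeviceGammaRamp failed")
    user32.ReleaseDC(None, hdc)

    red, green, blue = ramp
    for v in (16, 17, 35, 36):                     # the pairs listed above
        print(v, "->", green[v] >> 8)              # approximate 8-bit post-LUT output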

    Or maybe I should try looking into the raw vcgt to find out the exact input-output associations? What is a proper tool to decode the VCGT?

    And then, how does the DisplayCAL LUT depth kick in here? By changing it from 8 bit to 16 bit I do notice real visual changes, but none of them help reduce the "soft banding". The part I never understood is that if Nvidia is bottlenecking the correction at 8 bit, how would any higher-than-8-bit preprocessing from DisplayCAL matter at all? (I know that Nvidia itself handles the 16 bit to 8 bit conversion improperly by simply truncating the last 8 bits, while it should actually be done by dividing by 2^8+1 = 257 and rounding, but where does this step occur in the whole workflow?)
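    For reference, the arithmetic difference between the two conversions (toy values, just to show where they disagree):

    def truncate(v16):          # suspected driver behaviour: drop the low byte
        return v16 >> 8

    def rescale(v16):           # the "proper" way: divide by 257 and round
        return round(v16 / 257)

    for v16 in (255, 32768, 65280, 65535):
        print(f"{v16:5d}  truncate={truncate(v16):3d}  rescale={rescale(v16):3d}")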

    Attachments:
    #39167

    Vincent

    Is there a way in Windows to measure exactly what the display pixel value is after the video card LUT correction? I am using the color picker from PowerToys, and apparently it gives me the value before the LUT correction but after the color space conversion, and it is always in (0-255) even when the display is running at 10 bit.

    ArgyllCMS (or DisplayCAL) uncalibrated display report, but this will be a yes/no test: 8 bit or "likely to be more than 8 bit".

    Or maybe I should try looking into the raw vcgt to find out the exact input-output associations? What is a proper tool to decode the VCGT?

    iccdump.exe -v 3 -t vcgt ICCprofile.icc

    or ICC Profile Inspector from the ICC website.
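    If you prefer to read it programmatically, here is a minimal Python sketch for a table-type vcgt tag (not a full ICC parser; it assumes the usual layout of 4-byte signature, 4 reserved bytes, a uint32 type, then channel/entry counts and the raw table):

    import struct

    def read_vcgt(path):
        data = open(path, "rb").read()
        (count,) = struct.unpack_from(">I", data, 128)        # tag count follows the 128-byte header
        for i in range(count):
            sig, offset, size = struct.unpack_from(">4sII", data, 132 + 12 * i)
            if sig == b"vcgt":
                break
        else:
            raise ValueError("no vcgt tag in this profile")

        gamma_type = struct.unpack_from(">I", data, offset + 8)[0]
        if gamma_type != 0:
            raise ValueError("formula-type vcgt, not a table")

        channels, entries, entry_size = struct.unpack_from(">HHH", data, offset + 12)
        fmt = ">%d%s" % (channels * entries, "H" if entry_size == 2 else "B")
        values = struct.unpack_from(fmt, data, offset + 18)
        return [values[c * entries:(c + 1) * entries] for c in range(channels)]

    red, green, blue = read_vcgt("ICCprofile.icc")
    print(len(green), "entries; green[16] =", green[16], ", green[17] =", green[17])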

    The part I never understood is that if Nvidia is bottlenecking the correction at 8 bit, how would any higher-than-8-bit preprocessing from DisplayCAL matter at all? (I know that Nvidia itself handles the 16 bit to 8 bit conversion improperly by simply truncating the last 8 bits, while it should actually be done by dividing by 2^8+1 = 257 and rounding, but where does this step occur in the whole workflow?)

    There can be some banding even when truncating 16 bit VCGT data to more than 8 bit. Nvidia lacks dithering, but some users have tried to enable it in the Windows registry. There was a thread about it.

    Remember that once the Windows LUT loader kicks in (e.g. after waking the display from standby), DisplayCAL can do nothing to load untruncated VCGT data into the GPU LUTs unless you reboot.



