What exactly is the issue with Dithering on Windows 10 + Nvidia GPU?

  • #28392

    Questions9000
    Participant
    • Offline

    I have been trying to find information on this topic for weeks and it unfortunately just confuses me to no end. Can someone explain exactly what the issue is with Dithering on Windows 10 + Nvidia GPUs? I suppose my questions are below. If someone could help answer at least some of them, that would be great.

    Specs: Using an ICC color profile in Windows 10 with an Nvidia GTX 1080. 1440p 8-bit IPS panel. I am using the DisplayCAL Profile Loader.

    1. What exactly is the issue with dithering on Windows 10 when using an Nvidia GPU?
    2. What is the difference between the driver's dithering and my panel's dithering?
    3. My panel is an 8-bit IPS monitor. I'm not really seeing any differences between banding tests with the ICC profile activated vs. deactivated. Is this an issue that mostly affects 6-bit + FRC panels? I think I just don't understand what the problem is.
    4. I have a TV that is 8-bit + FRC. If I hooked up my PC to that TV, would the GPU only send an 8-bit signal and not dither it because of these issues?
    5. Does the DisplayCAL Profile Loader do anything to fix this dithering issue? Does this problem happen regardless of whether or not I use the DisplayCAL Profile Loader?
    6. When using the DisplayCAL Profile Loader, does selecting different bit depths do anything? I read that I should set it to 8 bit with an Nvidia GPU?

    Thank you!

    • This topic was modified 3 years, 2 months ago by Questions9000.
    #28399

    Vincent
    Participant
    • Offline

    It has nothing to do with your display panel's bit depth. It is about the GPU-to-monitor link.

    DisplayCAL computes a 16-bit correction for each grey: 8 bits plus "8 bits of decimal places". A calibration loader loads these data into the GPU LUT; DisplayCAL does this properly (others don't).

    Some GPU LUTs cannot hold this precision, so the values are truncated in a bad way; or they can hold it, but when the LUT output is sent over the actual GPU-to-monitor link it is truncated to the link bit depth in a bad way.

    Dithering prevents this last error. It has to be done by the GPU hardware (plus driver), so it is outside the scope of DisplayCAL or any other calibration tool.

    - Intel iGPUs cannot hold >8-bit data in a LUT entry (or the driver can't), or have no dithering for the truncation => Intel's fault, nothing you can do.

    - Nvidia GPUs cannot hold >8-bit data in a LUT entry (old ones),
    or truncate the LUT output to the link bpc (for example with displays that do not accept 10-bit input, or over DVI links),
    or have no dithering
    => Nvidia's fault, nothing can be done on the DisplayCAL side. That is why users try to activate dithering in the GPU driver, so that this last truncation is done in the proper way.
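
    Below is a minimal numerical sketch of what "truncated in a bad way" versus dithered means, assuming NumPy and a made-up 1.05-power example correction curve (not a real profile, and not NVIDIA driver code):

    import numpy as np

    greys = np.arange(256)                            # 8-bit framebuffer grey levels
    target = 255.0 * (greys / 255.0) ** 1.05          # example high-precision correction (fractional values)

    # No dithering: the fractional LUT output is simply cut down to the 8-bit link.
    # Some neighbouring greys collapse onto the same code and some codes are skipped,
    # which is the visible banding.
    truncated = np.floor(target).astype(np.uint8)
    print("distinct greys after plain truncation:", len(np.unique(truncated)))

    # Dithering: add +/-0.5 LSB of noise before rounding, once per simulated frame.
    # Each frame is still 8-bit, but the average over frames/pixels recovers the
    # fractional target, which is what hides the banding on screen.
    rng = np.random.default_rng(0)
    frames = [np.clip(np.rint(target + rng.uniform(-0.5, 0.5, 256)), 0, 255) for _ in range(200)]
    dithered_avg = np.mean(frames, axis=0)

    print("max error vs target, truncated   :", np.max(np.abs(truncated - target)))
    print("max error vs target, dithered avg:", np.max(np.abs(dithered_avg - target)))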

    • This reply was modified 3 years, 2 months ago by Vincent.
    #28412

    Questions9000
    Participant
    • Offline

    Oh ok, gotcha. So if, for example, I hooked up my 8-bit + FRC TV to my PC, would my Nvidia GPU only send over an 8-bit signal since it can't do the FRC dithering part? Or is this a totally different issue? My apologies if this is obvious; I'm still learning.

    EDIT: GPU is a GTX 1080 so relatively new.

    • This reply was modified 3 years, 2 months ago by Questions9000.
    #28419

    Vincent
    Participant
    • Offline

    It is not about the TV panel's bit depth; it is about the input bit depth of the whole display. Your TV can have an 8-bit panel but accept a 10-bit signal, processed through an internal 16-bit LUT whose output goes to the panel at 8 bits with dithering. For the GPU, that display is 10-bit: a black box with a 10-bit input.
    Likewise, an 8+FRC or 10-bit panel with an internal 16-bit LUT connected through DVI is, for the GPU, an 8-bit display: a black box with an 8-bit input.

    Banding on iGPUs and some Nvidias is caused by the link between the GPU and the display input, the whole display being a black box.
    Link set to 8 bit, without >8-bit LUTs and without dithering = banding, unless the calibration loaded into the GPU is linear (input = output) or almost linear.

    If the display accepts 10 bit, set bpc to 10 bit in the Nvidia control panel, use 16-bit DisplayCAL ICC profiles and the DisplayCAL loader; that TV + GTX 1080 should show no banding, or only mild banding. None if you enable the dither registry hack.

    (Edit: of course your TV can have banding of its own, caused by its electronics or bad configuration.)
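
    A small follow-on sketch under the same assumptions as before (NumPy, the same made-up 1.05-power example curve) showing in numbers why the link bit depth, not the panel bit depth, decides whether the calibration bands:

    import numpy as np

    greys = np.arange(256)
    target = 255.0 * (greys / 255.0) ** 1.05      # example fractional calibration ramp

    def distinct_codes(link_bits):
        # Quantise the LUT output to whatever the GPU-to-display link carries;
        # what the display does behind that input is its own business (black box).
        steps = 2 ** link_bits - 1
        return len(np.unique(np.rint(target / 255.0 * steps)))

    print("8-bit link :", distinct_codes(8), "distinct grey steps out of 256")
    print("10-bit link:", distinct_codes(10), "distinct grey steps out of 256")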

    • This reply was modified 3 years, 2 months ago by Vincent.
    #28455

    Questions9000
    Participant
    • Offline

    Thank you! It's still a bit confusing, but I will have to do more research, so I don't want to bother you with explaining further. I have one final question, though, if that's okay! If I'm just using the GTX 1080 with my 8-bit IPS monitor, do I want to set 8 or 16 bit depth in DisplayCAL? I'm using an ICC profile.

    #28457

    Vincent
    Participant
    • Offline

    Regarding banding caused by GPU calibration, it does not matter whether your monitor's IPS panel is 8-bit, 10-bit or 8+2. What matters is which signal the display accepts as input.

    8-bit input + Nvidia with no dither = bad news. Regarding the truncation, I don't know which setting looks better; it would be banding either way.
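
    For what it is worth, a rough sketch (same made-up example curve and NumPy as above) of why the choice hardly matters on an 8-bit link without dithering: truncating and rounding the fractional LUT values both lose grey steps.

    import numpy as np

    greys = np.arange(256)
    target = 255.0 * (greys / 255.0) ** 1.05      # example fractional calibration ramp

    truncated = np.floor(target)                  # cut the fractions off
    rounded = np.rint(target)                     # round to the nearest code

    print("distinct greys, truncated:", len(np.unique(truncated)))
    print("distinct greys, rounded  :", len(np.unique(rounded)))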

    #28463

    Questions9000
    Participant
    • Offline

    By truncation, do you mean the bit depth? So you're saying you're not sure whether to select 8 or 16 bit depth with my 8-bit IPS panel connected to the GTX 1080? I've been trying to follow the threads on how to force driver-side dithering, but it's confusing as hell and it seems like it doesn't work. I tried some calibration tool that was supposed to force it with Nvidia but noticed zero difference.

    #28469

    Vincent
    Participant
    • Offline

    Regarding the truncation, I don't know which looks better; it will be banding either way if you can only choose 8 bit on the connection but cannot use dither.

    • This reply was modified 3 years, 2 months ago by Vincent.