Graphics card LUT bit depth

  • #36452

    Egor S.

Hi all,

When opening a profile with the DisplayCAL profile information app, a bit depth of 16 bits is indicated under the graphics card gamma table entry. Moreover, DisplayCAL's Profile Loader is also set to 16 bits by default.

Does this really mean that my graphics card has a 16 bit LUT? I know that graphics card LUTs used to have a bit depth of 8 bits (many years ago). That is why hardware-calibrated monitors with an internal LUT of 10 or more bits were beneficial.

If the graphics card's LUT actually had a bit depth of 16 bits, there would be no advantage to hardware calibration, would there?

Actually, I don't think the graphics card's LUT really has a bit depth of 16 bits, because the number of unique tone values indicated for each of the three channels is less than 256 (i.e. less than 8 bits) whenever the calibration curves are not linear.
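A minimal numpy sketch of the effect I mean (the gamma of 1.1 is just an arbitrary stand-in for a real calibration curve, not taken from my profile):

```python
import numpy as np

# 8-bit grey ramp, normalized to 0..1
ramp = np.arange(256) / 255.0

# Stand-in for a non-linear grey calibration curve (arbitrary gamma tweak,
# NOT the actual vcgt of any profile)
calibrated = ramp ** 1.1

# Quantize back to 8 bit, as an 8-bit LUT / 8-bit output would do
out_8bit = np.round(calibrated * 255).astype(np.uint8)

print("unique output levels:", np.unique(out_8bit).size)  # fewer than 256
```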

My NEC has an internal LUT of 10 bits. I always calibrate first with a program that supports hardware calibration and then create a new profile with DisplayCAL without doing the calibration again. Of course, it would be nice if I could skip this cumbersome procedure and instead just do a software calibration with DisplayCAL, without losing tone values.

    #36464

    Vincent

The ICC profile contains the grey calibration as a 1D LUT at 8 or 16 bit. This data can be loaded through the GPU API at different precisions; it is up to the driver and the hardware how to deal with it.

Some GPUs (like AMDs) have a 10-12 bit LUT and quantize to that with dithering, then output at whatever bit depth the actual link to the monitor uses (8-10 bit). Others truncate in an ugly way, hence the banding you get from GPU calibration.

Also, GPU 1D calibration is limited to grey. If you have a wide-gamut display, your hardware can do a LUT-matrix-LUT to simulate smaller colorspaces, or even a LUT3D. Grey calibration in ICC profiles cannot do that… but some GPUs have dedicated hardware for it (and software such as novideo_srgb or the AMD control panel can use it), and all of them can run a LUT3D in shaders with dithering, like DWMLUT does.
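A rough sketch of the LUT-matrix-LUT idea (the 3x3 matrix and the gamma values below are made-up placeholders, not the coefficients of any real display):

```python
import numpy as np

# Placeholder matrix mapping a hypothetical wide-gamut display's linear RGB
# to a smaller target colorspace -- illustrative values only
M = np.array([[0.80, 0.15, 0.05],
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])

def lut_matrix_lut(rgb, src_gamma=2.2, dst_gamma=2.2):
    """1D LUT (decode) -> 3x3 matrix in linear light -> 1D LUT (encode)."""
    linear = np.clip(rgb, 0.0, 1.0) ** src_gamma   # first 1D LUT: linearize
    mapped = np.clip(linear @ M.T, 0.0, 1.0)       # matrix: gamut mapping
    return mapped ** (1.0 / dst_gamma)             # second 1D LUT: re-encode

print(lut_matrix_lut(np.array([1.0, 0.0, 0.0])))   # pure red ends up less saturated
```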

Usually you want HW calibration if it is working properly, because Windows may truncate to 8 bit if another app uses the 8-bit API, there are DWMLUT incompatibilities, and so on. When HW calibration is not working properly (BenQ, LG, Asus and Dell on some models) then you want DisplayCAL, but only if validation testing with DisplayCAL after the calibration shows issues.

    #36470

    Egor S.

Hello Vincent, thank you very much for your help!

When I talk about the GPU / display LUT, I mean exclusively the 1D LUT for gray balance adjustment. My NEC P221W does not have a LUT3D for color space emulation.
So I just want to understand whether a HW calibration (using the monitor's 10 bit 1D LUT) offers a theoretical advantage over software calibration with DisplayCAL, given that the DisplayCAL profile information app indicates a bit depth of 16 bits in the graphics card gamma table entry.
The monitor is connected via DVI, so 8 bit.

You say that the ICC profile supports gray calibration with 8 or 16 bits. I guess you mean the entries in the vcgt tag, right? Assuming the vcgt tag really is stored with 16 bit precision, i.e. 65536 possible values per channel, I would be interested to know how many of these values are actually stored in the GPU LUT at computer startup. Some (possibly many) years ago it was said that only 8 bits are ever stored in the GPU LUT. Now you say it can be 10 – 12 bits. Are there GPUs with even 16 bit 1D LUTs? And can it really be true that my very old AMD Radeon HD6570 has a 16 bit 1D LUT, as stated in the profile information app? If not, am I misunderstanding what is indicated in the DisplayCAL profile information app? I mean the entries that are marked in blue in the attached screenshot. Sorry, it's in German, but I'm sure you know what I mean.

Assuming my GPU has a 1D LUT with at least 10 bit depth, a HW calibration using my display's 10 bit 1D LUT should then theoretically not offer any advantage over a SW calibration with DisplayCAL, should it? Even with slightly curved gamma curves in the GPU LUT, all 256 individual tone values (i.e. 8 bits) should in the end be sent to the GPU output, i.e. just as many dedicated tone values as would be sent in the case of a HW calibration with its linear tone curve in the vcgt tag (and GPU LUT) and a non-linear tone curve in the display's 1D LUT.
But if the number of tone values transmitted from GPU to monitor is the same, regardless of whether it's a HW calibration or a SW calibration, I don't understand why DisplayCAL's profile information app indicates that the number of dedicated tone values is less than 256 in the case of a SW calibration with non-linear calibration curves in the vcgt tag.

P.S. I'm not sure what DWMLUT is, but it has something to do with LUT3D and color space emulation, doesn't it?

    #36480

    Vincent

     

You say that the ICC profile supports gray calibration with 8 or 16 bits. I guess you mean the entries in the vcgt tag, right?

    yes

Assuming the vcgt tag really is stored with 16 bit precision, i.e. 65536 possible values per channel, I would be interested to know how many of these values are actually stored in the GPU LUT at computer startup. Some (possibly many) years ago it was said that only 8 bits are ever stored in the GPU LUT. Now you say it can be 10 – 12 bits. Are there GPUs with even 16 bit 1D LUTs?

I don't know, but we can say "it is more than 8" using the ArgyllCMS uncalibrated display report, because it tests for truncation.
Under the hood it can be 10, 12, 14 or 16 bit plus dithering… so who cares? It's the dithering that allows a band-free GPU calibration, because the effective number of unique levels AFTER the calibration is applied stays >= 256, thanks to the dithering.

Your monitor is doing the same: a 6-8 bit panel with a 10+ bit LUT. What gives you smooth gradients after HW calibration is not the LUT itself but the dithered output to the actual panel input bit depth.
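A quick sketch of why the dithering matters (uniform noise before rounding is used here as a stand-in for whatever the driver actually does, and the curve is just an example, not a real vcgt):

```python
import numpy as np

rng = np.random.default_rng(0)
ramp = np.arange(256) / 255.0
curve = ramp ** 1.1                 # example calibration curve, for illustration only

def to_8bit(values, dither):
    """Send each grey level as a 64-pixel patch, quantized to 8 bit."""
    patch = np.repeat(values[:, None], 64, axis=1) * 255
    if dither:
        patch = patch + rng.uniform(-0.5, 0.5, patch.shape)  # +/- 0.5 LSB noise
    return np.clip(np.round(patch), 0, 255)

truncated = to_8bit(curve, dither=False)
dithered = to_8bit(curve, dither=True)

# Per-patch average as a crude proxy for the perceived level of a uniform area
print("unique codes, truncated:        ", np.unique(truncated).size)              # < 256
print("distinct patch means, dithered: ", np.unique(dithered.mean(axis=1)).size)  # ~256
```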

And can it really be true that my very old AMD Radeon HD6570 has a 16 bit 1D LUT, as stated in the profile information app? If not, am I misunderstanding what is indicated in the DisplayCAL profile information app? I mean the entries that are marked in blue in the attached screenshot. Sorry, it's in German, but I'm sure you know what I mean.

It should be 10 bit + dithering, hence visually equivalent to HW calibration for grey… as long as the proprietary AMD drivers are used, the DisplayCAL 16 bit loader is used… and, the tricky part, NO other app tries to use the MS API to load 8 bit data into the LUT.

Usually this last issue is caused by other, very limited calibration software like X-Rite's or basICColor's, but sometimes it is caused by Windows due to some energy-saving behaviour after standby. AFAIK these are the known culprits with AMD.
With Nvidia there is no dithering by default, hence it is prone to banding because it cannot keep 256 unique levels, unless you have a newer card and an actual link to the display at more than 8 bit. Dithering can be enabled with a registry hack, but sometimes it switches itself off, so it is not as reliable as AMD (actually the ATI AVIVO engine, going back to 2005 or some date close to that).

Assuming my GPU has a 1D LUT with at least 10 bit depth, a HW calibration using my display's 10 bit 1D LUT should then theoretically not offer any advantage over a SW calibration with DisplayCAL, should it? Even with slightly curved gamma curves in the GPU LUT, all 256 individual tone values (i.e. 8 bits) should in the end be sent to the GPU output, i.e. just as many dedicated tone values as would be sent in the case of a HW calibration with its linear tone curve in the vcgt tag (and GPU LUT) and a non-linear tone curve in the display's 1D LUT.
But if the number of tone values transmitted from GPU to monitor is the same, regardless of whether it's a HW calibration or a SW calibration, I don't understand why DisplayCAL's profile information app indicates that the number of dedicated tone values is less than 256 in the case of a SW calibration with non-linear calibration curves in the vcgt tag.

You get 256 unique visual levels as long as dithering is used together with a high bit depth (>8) LUT, even if you have an 8 bit DVI connection.
But if the LUT is truncated or dithering is disabled (explained above), the "banding" induced by the calibration comes back, because now you won't have 256 unique visual levels in the greyscale.

So if SpectraView II is working and your display's backlight is supported (not the PA271Q or PA311D), it is more reliable to have the calibration in the monitor.

P.S. I'm not sure what DWMLUT is, but it has something to do with LUT3D and color space emulation, doesn't it?

Yes, but you can also simulate your own display's idealized colorspace (defined by 3 primaries and a gamma; create it with the synthetic profile editor), so that GPU shaders emulate it, including the VCGT, using general-purpose shaders to do the dithering.
This is equivalent to a HW calibration to the "native display colorspace". It works well, but DWMLUT has 65 entries per axis (65x65x65), so if the display had a very weird response within a thin range of about 4 bits it might go unnoticed.
The VCGT has 256 entries, and with ArgyllCMS it can be computed by extrapolating from 96 measured points on the grey scale (the slowest calibration setting), so in a worst-case scenario VCGT data in the GPU of an AMD card may perform better. I have not seen such a scenario, and a DWMLUT simulation of the display's own idealized colorspace works well. It is useful on wide-gamut monitors for working with PS and the like if you have a laptop with an undithered 8 bit LUT (Intel iGPU).
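To make the 65-vs-256 entry argument concrete, a small sketch (the narrow bump in the response is invented purely to illustrate the sampling resolution, it is not a measured display):

```python
import numpy as np

x = np.arange(256) / 255.0

# Hypothetical display response: a smooth gamma plus a bump only ~16 codes wide
response = x ** 2.2
response[100:116] += 0.02

def resample(entries):
    """Sample the response at 'entries' points and interpolate back to 256 codes."""
    grid = np.linspace(0.0, 1.0, entries)
    return np.interp(x, grid, np.interp(grid, x, response))

print("max error, 65 entries (DWMLUT-like): %.4f" % np.max(np.abs(resample(65) - response)))
print("max error, 256 entries (vcgt-like):  %.4f" % np.max(np.abs(resample(256) - response)))
```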

    #36513

    Egor S.

    Thanks Vincent, I had to think twice about it, but I think I got the point.

Before your explanations, I didn't take the 8 bit DVI data transfer into account. Now I understand that without dithering it doesn't make any difference how many bits the GPU LUT has, as the 8 bit GPU output will limit the number of unique levels after SW calibration. And I see that in this case only dithering can help.

Just to be sure that I understood the situation, I would like to discuss the following theoretical case:

Assume no dithering is used and the GPU has a 16 bit LUT, still with an 8 bit DVI connection, but a high-end display with a 16 bit internal LUT as well as a >=12 bit panel (it does not matter whether such hardware exists).

After SW calibration, fewer than 256 unique levels would always be transmitted over the DVI link. Banding would be the result, as the display would receive a reduced set of values straight from the GPU's DVI port.

However, after HW calibration all 256 unique levels would still be transmitted to the display over DVI, as the GPU LUT is not affected. After HW calibration the calibration curves would not have any negative visual effect (with respect to banding), as they are applied at a higher bit depth (16 bit display LUT and >= 12 bit panel). So no banding occurs.
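In code form, the comparison I have in mind would be something like this (the calibration curve is again just an assumed gamma tweak, and the GPU LUT is assumed to be left linear in the HW case):

```python
import numpy as np

levels = np.arange(256) / 255.0            # 8-bit grey ramp from the framebuffer

def vcgt(v):
    """Assumed calibration curve, for illustration only."""
    return v ** 1.15

# SW calibration: curve applied in the GPU LUT, then truncated to the 8-bit DVI link
sw_link = np.round(vcgt(levels) * 255).astype(int)
print("SW cal, unique levels on the DVI link:", np.unique(sw_link).size)    # < 256

# HW calibration: GPU LUT stays linear, all 256 codes reach the monitor, which
# applies the same curve in its high bit depth internal LUT feeding a 12-bit panel
hw_link = np.round(levels * 255).astype(int)
hw_panel = np.round(vcgt(hw_link / 255.0) * 4095).astype(int)
print("HW cal, unique levels on the DVI link:", np.unique(hw_link).size)    # 256
print("HW cal, unique levels at the panel:   ", np.unique(hw_panel).size)   # still 256
```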

    Would you agree?

    #36518

    Vincent

     

However, after HW calibration all 256 unique levels would still be transmitted to the display over DVI, as the GPU LUT is not affected. After HW calibration the calibration curves would not have any negative visual effect (with respect to banding), as they are applied at a higher bit depth (16 bit display LUT and >= 12 bit panel). So no banding occurs.

    No, unless you clean (reset to linear, input=output) the GPU LUT, or unless actual VCGT values are very very very close to linear.

If you had such a monitor you would have to load no data (or linear data) into the GPU LUT and use only the HW calibration.

    #36549

    Egor S.

    No, unless you clean (reset to linear, input=output) the GPU LUT, or unless actual VCGT values are very very very close to linear.

    But this is exactly what happens with HW calibration. VCGT values are set to input = output, as calibration curves are stored in the monitor LUT.

    #36551

    Vincent

But you can apply both on top of a failing HW calibration (poor grey range a*b*: Dell, BenQ…), and you are more likely to get away with applying both if you have an AMD card.

    #36563

    Egor S.

    OK, I didn’t know there was software that combined HW and SW calibration. However, this makes no sense to me.

So thank you for your explanations! It was instructive as always! 🙂

    #36570

    Vincent

    OK, I didn’t know there was software that combined HW and SW calibration.

    There is no such software

    However, this makes no sense to me.

Say you buy a Dell, BenQ or LG wide-gamut display with HW calibration and use the HW calibration to simulate Rec. 709, for example for a non-color-managed environment.
Then validate the grey range in DisplayCAL. If the grey range a*b* is bad (not that strange), you'll have to use DisplayCAL to correct grey in the GPU (if you have an AMD card).

    #36620

    Egor S.

    OK, I see. Thanks!
