Hi, I could use a bit of help understanding something in my log file.

  • #21021

    Darkmatter
    Participant

    Hi Florian, I was trying to diagnose whether the calibrator was part of the problem. I have changed my drivers to an Nvidia Studio Driver with a version number that predates the bug you mentioned to me earlier, and DCal says it can’t determine the bit depth most of the time, but I’ve also gotten these readings when I make changes to my monitor’s settings.

    Here is a set of uncalibrated reports and how the changes I made to the monitor changed the report. I’ve highlighted the bit depth part. Now, before someone jumps in and says this, I will say that I do know that this is talking about the “Video LUT”, so I am assuming that it is referring to the actual video card, not the monitor. Obviously, with the workflow going from the card to the monitor to the calibrator, the monitor’s changes could affect things like brightness and colour temperature, but I didn’t expect it to affect what bit depth it thought the driver was working at.

    The only thing I can think of is that it is a “pseudo-error” caused by clipping, making the sensor think the bit depth is lower than what it is actually set to (full 0-255 RGB, 10bit, in the Nvidia Control Panel). I call it that because the reading is sort of wrong, but only because the user pushed the monitor past its limits.

    I also wanted to ask you what it is telling me when it shows R– G+ B++ in the log (just an example). Does that mean that red is too low, or that you need to lower it?

    I’m also including some oddities that only show up in the darkest patches of the calibration phase. I didn’t understand why DCal lists a measurement as a failure when above it says “Repeat threshold of 0.9 DE”, but in that same iteration it says “Patch 31 of 32 DE 0.053471, W.DE 0.053471, W.peqDE 0.072188, Fail ( > 0.066333)”, which I don’t understand because that is well below 0.9 DE.

    Never mind, I was going to paste it all in here, but I think I’ll add an attachment instead. The part that shows the DE changes is in the attachment, along with the uncalibrated reports, since those are a bit longer, as is the calibration part of the log. Sorry, I shrunk the text size a bit to avoid the problem of having one line break and wrap onto a second line.

    Thanks!

    Attachments:
    #21030

    Vincent
    Participant

    I do know that this is talking about the “Video LUT”, so I am assuming that it is referring to the actual video card, not the monitor. Obviously, with the workflow going from the card to the monitor to the calibrator, the monitor’s changes could affect things like brightness and colour temperature, but I didn’t expect it to affect what bit depth it thought the driver was working at.

    Video LUT is a test of how changes at the video LUT “input” translate to measured output. If it reports 8bit, it is likely that “something”, like the driver itself or Windows issues (caused by Windows itself, like the examples provided in other threads), is messing with the LUT, or limiting what you can inject into the video LUT.

    You can have a display with 10bit input, internal electronics with 10+bit to the panel, 10bit output from GPU to monitor and 10bit from application to driver… but the LUT can still suffer from truncation errors. The “standby” issues described in other threads, or another app than DisplayCAL having loaded LUT contents to the GPU = truncated until you reboot.
    In that 10bit pipeline the output at the LUT may be truncated by those bugs (Nvidia’s or Windows’).

    The only thing I can think of is that it is a “pseudo-error” caused by clipping, making the sensor think the bit depth is lower than what it is actually set to (full 0-255 RGB, 10bit, in the Nvidia Control Panel). I call it that because the reading is sort of wrong, but only because the user pushed the monitor past its limits.

    I would say that in most situations it is not a DisplayCAL pseudo-error if it reports “8bit”; I would say that what you read is true = 8bit, and that there is some malfunction in that pipeline that truncates LUT contents to 8bit (zero-filling the 2 LSBs in that 10bit pipeline).
    For example: wake from standby, i1Profiler/basiccolor LUT loading, etc., with DisplayCAL then reporting an 8bit video LUT ARE NOT pseudo-bugs, with 99.999% certainty. These two examples are Windows (or Nvidia… even AMD) bugs… “old friends” to anyone that has been using ArgyllCMS for a while.
    Of course you may have found an ArgyllCMS bug, and it will be very interesting if it is corrected.
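
    To picture what that zero-fill truncation does, here is a tiny runnable sketch (my own illustration, not DisplayCAL or ArgyllCMS code) of a 10bit LUT entry losing its two least significant bits:

        # Toy illustration of the "zero fill 2 LSB" truncation described above:
        # a buggy path keeps only the top 8 of the 10 bits, so neighbouring
        # 10bit LUT entries collapse onto the same output value.
        def truncate_10_to_8(v: int) -> int:
            """Keep the top 8 bits of a 10bit code value, zero-filling the 2 LSBs."""
            return (v >> 2) << 2

        for v in (512, 513, 514, 515, 516):
            print(v, "->", truncate_10_to_8(v))
        # 512, 513, 514 and 515 all map to 512, so changes in the bottom two
        # bits can never produce a measurable difference on screen.

    Once those lowest bits are zero-filled, no instrument can see them change, which is exactly the situation an “8bit” verdict points at.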

    It is not a DisplayCAL test, it’s an ArgyllCMS test. Mr. Gill (the ArgyllCMS developer) can help you, or may ask you to run some commands with a more verbose option in each of your control panel / display OSD configurations.

    I also wanted to ask you what it is telling me when it shows R– G+ B++ in the log (just an example). Does that mean that red is too low, or that you need to lower it?

    That is ArgyllCMS command line naming convention; DisplayCAL acts as a front end.

    #21034

    Darkmatter
    Participant

    Thanks for the reply. I was aware that ArgyllCMS was the backend, command-line based (as I recall) “engine” behind the DisplayCAL GUI that Florian made. I’ll ask about the logs on the ArgyllCMS forums, if there is one, which I assume there probably is. 🙂

    My one question out of what you said is: why does it vary from 8bit, to 9bit, to “Cannot determine….”, which Florian said is a good hint that it’s reading 10bit? Of course, just because it hints at it being 10bit doesn’t mean it has to be 10bit; it could be a bug in the pipeline, as you said, that is confusing ArgyllCMS/DCal.

    Thanks!

    #21044

    Vincent
    Participant

    How would you test whether a LUT is working as intended? A 1D LUT is a per-channel table: data in -> data out.

    If you change the data stored in the LUT by varying the least significant bits and measure the output, you get a hint about how it is working.
    An accurate device should be able to measure color differences from 9bit steps onwards, perhaps finer if the gamut is big enough. That is what I assume ArgyllCMS does (see the toy sketch after the list below).
    So:
    -8bit means truncation.
    -More than 8bit means the LUT is working as a high bit depth LUT (perhaps with dithering), so even if you have an 8bit display it can render smooth gradients even with GPU LUT calibration.
    -“Unable to determine…” means, according to Florian, that the point where a one-bit change stops being detectable lies beyond the noise of the measurement device = good: every bit change from the MSB down to the least significant bits it can probe gets noticed, right up to the device’s noise/repeatability limit.
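
    As a rough sketch of that probing idea, here is a toy simulation (my own illustration under the assumptions above, not the actual ArgyllCMS code; the instrument noise comparison is reduced to a simple inequality):

        # Toy simulation of the bit-depth probe described above: load two LUT
        # values that differ by one step at the probed depth and check whether
        # the (simulated) pipeline still produces two distinct outputs. A real
        # test compares measured patches against the instrument's noise instead.
        def pipeline_out(code16: int, lut_bits: int) -> int:
            """Quantise a 16bit LUT entry to the depth the (possibly buggy) path keeps."""
            return code16 >> (16 - lut_bits)      # keep only the top `lut_bits` bits

        def step_survives(probe_bits: int, lut_bits: int) -> bool:
            """Does a one-step change at `probe_bits` depth still change the output?"""
            base = 0x8000                         # mid grey in a 16bit LUT encoding
            step = 1 << (16 - probe_bits)         # one code value at the probed depth
            return pipeline_out(base, lut_bits) != pipeline_out(base + step, lut_bits)

        for probe in (8, 9, 10, 11):
            print(probe, "bit step | healthy 10bit LUT:", step_survives(probe, 10),
                  "| truncated to 8bit:", step_survives(probe, 8))
        # The healthy path keeps steps detectable down to 10bit; the truncated
        # one only down to 8bit, which is what an "8 bit" verdict reflects.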

    So 9bit in the LUT test does not mean “9bit”; it means more than 8 = the LUT is working OK = if you modify one or more bits below the 8th bit, it gets noticed. Same for 10bit, or 11. Dithered outputs can visually encode 9-10-11 bits over an 8bit GPU-display link and an 8bit display/panel. The test may say 10bit on an 8bit DVI link… and it means that GPU calibration will render smooth gradients as if that 8bit monitor had HW calibration.
    High end monitors with reliable HW calibration play the same trick. Their LUT can be 12-16bit, but the panel has 8, 8+2, or 10 bits… that translation can be done without losing visual “steps”.
    All white point correction, or factory gamma correction, inside the monitor is done with a LUT (an internal one), so even in a fully uninterrupted 10bit pipeline from app to monitor, changes in the 10th (least significant) bit may not be measurable because of the loss of dynamic range, the narrowing of the contrast window caused by that internal LUT.
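
    A minimal sketch of the dithering trick mentioned above (my own illustration of the general frame-rate-control idea, not how any particular driver or monitor implements it): a 10bit target level that has no exact 8bit code can still be reproduced on average by alternating between the two nearest 8bit codes.

        # Temporal dithering illustration: carry the quantisation error from
        # frame to frame so the time-average of 8bit codes approaches a 10bit
        # target level that has no exact 8bit representation.
        target = 514 / 1023                      # 10bit level, ~0.5024 of full scale
        frames, err = [], 0.0
        for _ in range(16):                      # dither over 16 refresh cycles
            want = target * 255 + err            # desired 8bit value plus carried error
            code = round(want)
            err = want - code
            frames.append(code)

        print(frames)                            # a mix of 128s and 129s
        print(sum(frames) / len(frames) / 255)   # time-average ~0.5025, close to the target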

    So you cannot infer that a 10bit video LUT test result means a full 10bit pipeline end to end. It just means that the video LUT is operating as intended = no truncation = no banding in non color managed apps. Same for 9, or 11. That’s all.
    If you see 8bit with a modern device like your SpyderX, or a better device like an i1d3, or an even better device like a Klein or a reference spectrophotometer… it is very likely that something is wrong in that LUT, and we know a few situations in Windows that cause that, like the examples above.



