A few questions and a potential feature request


Viewing 12 posts - 1 through 12 (of 12 total)
  • #12940

    Kamikaze Ice
    Participant

    Florian,

    As per the thread title, I have a few questions that I think might be best addressed with a new feature (for stabilization and/or improving measurements for some unstable display types).

    First, let me apologize as I struggle to put my thoughts into words (OCD+Pedantic). Please let me know if you would like me to rephrase/explain/detail anything below.

    I have an LG 2016 OLED (E6). As many have discovered and discussed at avsforum, they are quite unstable. Back then everyone was calibrating by running a “sweep” and then adjusting display controls. I immediately noticed a discrepancy (luminance difference) between running sweeps vs manually measuring one by one (~20 nit difference at 100% white with the same settings).
    I have a few posts on their forum from a few years ago where I started to investigate this behavior and my conclusion was image persistence/retention which is VERY easy to check if white is 150+ nits (blatantly so with HDR and small pattern windows). I tried to discuss my findings at the time, but it wasn’t very constructive as I couldn’t put my thoughts into words.

    However, Ted has a very nice post HERE which basically covers what I was trying to say.

    To summarize this behavior, OLEDs have a small degree of persistence that decays over time. The length of this persistence varies depending on the strength of pixel voltage; near blacks (IRE 0.5-6%) and anything at 100+ nits both require a little more power.

    This behavior can cause shifts in measurements, impacting them or not, depending on:
    a) What pattern was shown previously.
    b) How long a pattern was on screen (bad for slow instruments)

    This behavior can easily be observed even by your eyes with HDR (or very high nit SDR).
    Simply show a 300 nit window for 30 seconds, then switch to a full screen field of 35,35,35 RGB. You should see a glow where that windowed pattern was. This goes away in ~3-5 minutes. Black frame insertion helps minimize this, but for HDR I’ve found a 30,30,30 frame works better than 0,0,0 (pixel components can discharge faster if not turned “off”, part of the display’s built-in panel “noise” cleaning process as far as I can tell).

    Now to the point of this post.
    I’ve been creating patch sets by hand to force DisplayCAL to show a BLACK screen for a set period of time to stabilize the display. On patches that would be 90+ nits I add two to three black ones back to back (more luminance = faster retention buildup).
    Not only is doing this VERY tedious, it also greatly increases profiling time, since for darker patterns I don’t need the full length of time it takes for my meter to “read” black (ColorMunki Display).
    I cannot do this for “calibration”. Or at least I don’t know how to edit the “calibration” pattern set.
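    For what it’s worth, the manual interleaving described above can be scripted. A minimal sketch, assuming the simple r,g,b-per-line .txt patch-list format mentioned later in this thread; the 230 threshold standing in for “90+ nits” is an illustrative assumption, not DisplayCAL behavior:

    ```python
    def insert_black_frames(patches, bright_threshold=230):
        """Insert a 0,0,0 patch after every patch; two after bright ones.

        `patches` is a list of "r,g,b" strings. Patches whose brightest
        channel reaches `bright_threshold` get two black patches, mirroring
        the "more luminance = faster retention buildup" rule of thumb.
        """
        out = []
        for line in patches:
            line = line.strip()
            if not line:
                continue
            out.append(line)
            r, g, b = (int(v) for v in line.split(","))
            out.append("0,0,0")  # settle frame after every patch
            if max(r, g, b) >= bright_threshold:
                out.append("0,0,0")  # extra settle frame for bright patches
        return out

    print(insert_black_frames(["255,255,255", "128,128,128", "16,16,16"]))
    ```

    The output can be written back to a .txt file and dragged into the testchart editor.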

    I’ve attached my most recent profile, if you’re curious.
    IREs ~5% and lower have a massive red push caused by the flashing lengthy red/grey meter placement that happens when calibration/profiling begins. My meter can’t measure below 3% (it reads down to ~0.013 nits via HCFR set to measure grayscale in 100 individual steps).
    I manually corrected this range visually with 2-point low controls before doing anything with DisplayCAL (factory settings have a strange and aggressive red spike for 7,7,7 and 8,8,8 RGB values).

    Now, questions 🙂

    Can you please consider adding a “black frame” insertion option to have each measurement show a black screen in between each pattern, for the reasons above? Preferably with options to customize the frame to any RGB value, the type of frame (full-screen field, windowed, or inverted window, which lets pixels outside the pattern window get usage, unlike fields), and a way to control how long the frame will be shown. This is something Zoyd finally added to HCFR, too.
    This is absolutely needed for HDR, otherwise retention WILL be affecting measurements.

    Is there a way to modify the “calibration” patch set? Unless/until you implement a black frame insertion feature, I’d like to continue manually inserting a black pattern.

    How does DisplayCAL deal with multiple measurements of the same patch? I’ve got one patch set with an extreme number of duplicates of near blacks, thinking that it will improve results for those colors. The attached set took about 13 hours to run, and my new one has ~1000 points (excluding ~1500 black patterns I’ve added manually) in the IRE 0-25% range, with the hope that results will improve/smooth quantization to be similar to or better than what I get by manual calibration via internal controls.

    Is there a way to skip the flashing red “place meter here” and the rest of the start up procedure and go straight to measuring? I’d like to not show this on my OLED displays as per reasons above, if at all possible.

    Is there a way to generate patterns similar to the “single channel patches”, but for secondaries and/or transitions (red>yellow>green>cyan>blue>magenta>red)? Basically for all the edges of a LUT cube? How about for a single “face” of said cube?
    I’d like to check and visualize the linearity of my display’s gamut to better help me understand how internal controls change behavior, without needing to measure points I don’t really care about (points well inside the cube). Very situational, I know.

    Is it possible to “update” a portion of an existing profile? Like re-measure a particular pattern because Windows decided to interrupt the last 5 minutes of a 12 hour profile to see if you were ready to update…? 😛
    If not possible now, could it be at some point in the future? I’m strictly referring to remeasuring something that was already measured when the profile was generated; I don’t mean adding a new point of data. Not knowing the inner workings of DisplayCAL and/or ArgyllCMS, I think this might be possible unless the initial raw data is deleted once the profile is created. In my head I picture all the pattern values and their measurements in a spreadsheet, referenced by a coordinate transform formula/function, and think all that needs to be done is simply changing the cell that contains the matching pattern’s measurement data. I hope you get what I’m trying to say here.

    Attachments:
    You must be logged in to view attached files.


    #12997

    Florian Höch
    Administrator

    Can you please consider adding a “black frame” insertion option to have each measurement show a black screen in between each pattern for the reasons above?

    That would need to be added in ArgyllCMS.

    Is there a way to modify the “calibration” patch set?

    No, because it’s not a patch set, it’s dynamic.

    How does DisplayCAL deal with multiple measurements of the same patch?

    I think ArgyllCMS averages them.

    Is there a way to skip the flashing red “place meter here” and the rest of the start up procedure and go straight to measuring?

    Only if madTPG is not set to fullscreen I think.

    Is there a way to generate patterns similar to the “single channel patches”, but for secondaries and/or transitions (red>yellow>green>cyan>blue>magenta>red)? Basically for all the edges of a LUT cube?

    That should be covered by the multidimensional steps, except of course those fill the whole cube, not just the edges.

    How about for a single “face” of said cube?

    Currently this is only possible by specifying the patches manually. Copy & paste from an external application (e.g. spreadsheet) is possible though (or dragging a .txt file).
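    As a sketch of specifying such patches programmatically, here is one way to generate the red>yellow>green>cyan>blue>magenta>red transition (i.e. the hue edges of the cube) as an r,g,b-per-line .txt file suitable for drag & drop into the testchart editor. The step count and output file name are arbitrary choices, not anything DisplayCAL prescribes:

    ```python
    def hue_edge_patches(steps=16):
        """Linear ramps along the saturated hue edges of the RGB cube."""
        corners = [
            (255, 0, 0), (255, 255, 0), (0, 255, 0),
            (0, 255, 255), (0, 0, 255), (255, 0, 255), (255, 0, 0),
        ]
        patches = []
        for (r0, g0, b0), (r1, g1, b1) in zip(corners, corners[1:]):
            for i in range(steps):  # omit each segment's endpoint to avoid duplicates
                t = i / steps
                patches.append((
                    round(r0 + (r1 - r0) * t),
                    round(g0 + (g1 - g0) * t),
                    round(b0 + (b1 - b0) * t),
                ))
        return patches

    with open("cube_edges.txt", "w") as f:
        for r, g, b in hue_edge_patches():
            f.write(f"{r},{g},{b}\n")
    ```

    The same pattern generalizes to the black/white-to-primary edges, or a single cube face, by swapping in the appropriate corner list.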

    Is it possible to “update” a portion of an existing profile?

    You could manually edit out the patches from the .ti3 file.
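    Editing a .ti3 (a plain-text CGATS file) can also be scripted. A hedged sketch that drops data lines matching a predicate and rewrites NUMBER_OF_SETS; note that the column order in a real file is defined by its BEGIN_DATA_FORMAT line, so the field index used in any predicate has to be adapted to the actual file:

    ```python
    def filter_ti3(text, keep):
        """Return .ti3 text with data lines failing keep(fields) removed."""
        head, data, tail = [], [], []
        section = "head"
        for line in text.splitlines():
            s = line.strip()
            if section == "head":
                head.append(line)
                if s == "BEGIN_DATA":
                    section = "data"
            elif section == "data":
                if s == "END_DATA":
                    tail.append(line)
                    section = "tail"
                elif keep(s.split()):
                    data.append(line)
            else:
                tail.append(line)
        # Rewrite the set count to match the surviving data lines.
        head = ["NUMBER_OF_SETS %d" % len(data)
                if l.strip().startswith("NUMBER_OF_SETS") else l
                for l in head]
        return "\n".join(head + data + tail)
    ```

    For example, with a DATA_FORMAT of `SAMPLE_ID RGB_R RGB_G RGB_B XYZ_X XYZ_Y XYZ_Z`, a predicate like `lambda f: float(f[5]) > 0 or float(f[1]) == 0` would drop non-black patches that read as zero luminance.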

    #13003

    Kamikaze Ice
    Participant

    Can you please consider adding a “black frame” insertion option to have each measurement show a black screen in between each pattern for the reasons above?

    That would need to be added in ArgyllCMS.

    Really? I was assuming this was something simple, like automatically adding a 0,0,0 pattern in between whatever patch set is being measured (auto/preset or manual) and pausing to make the meter stop “reading” for a custom wait time (like 500-2000ms). This is far from being an elegant solution though.
    As an alternative (workaround/band-aid) for OLEDs, what about having an option to simply add a #,#,# (user defined 0-255) pattern in between each pattern in the user selected test chart?
    It takes a while to manually add them to a large patch set, but it’s worth it for stable measurements.

    I’ll make a note to inquire with Graeme about this at some point in the near future.

    Is there a way to modify the “calibration” patch set?

    No, because it’s not a patch set, it’s dynamic.

    Does Black level drift compensation affect this “Calibration” sequence? If yes, are there any variables or option flags that can be changed/set to increase the frequency of when this happens?

    If there was an option to insert a user defined pattern, as described above, could this be applied to automatic sets (calibration’s “dynamic” set and/or “auto-optimized” for Profiling)?

    Currently this is only possible by specifying the patches manually. Copy & paste from an external application (e.g. spreadsheet) is possible though (or dragging a .txt file).

    Yeah, I recently found that out (drag & drop CSV (r,g,b) .txt files)… 😛 I had a good laugh at myself when it happened.

    Being honest here, I only discovered that drag & drop FOR NON-IMAGES was a thing by complete accident. I was so confused for the longest time as to why the test chart editor had an “export” button; I always (wrongly) assumed the “add reference patches…” button was actually “import” with a fancy name, and wondered why importing a file that was exported from this same editor would always fail, lol, and thus never tried to drag & drop them in. I admit I never read that section of the documentation either, for the same reason (wrong assumption). Previously I was using GIMP to make single color picture files and adding them that way, which was quite tedious.

    Is it possible to “update” a portion of an existing profile?

    You could manually edit out the patches from the .ti3 file.

    Ah, nice. I’m assuming the column format is the same (left to right = R’G’B’ <-> XYZ?) as shown in the window for Tools > Advanced… > Check Measurement File…
    Are these measurements with or without calibration?
    After manually editing, I assume I should generate a new profile from this modified .ti3 file (via File > Create profile from measurement data…). Does the resulting profile use whatever profiling settings are selected for <Current> settings? What I mean is, could this single set of measurements (.ti3) be used to generate multiple profiles, without remeasuring, for, say, comparing various advanced gamut mapping options and/or black point compensation?
    Additionally, can a calibration be added to or removed from a “Settings” profile, as in an entry in the “Settings” drop down list? What I’m trying to ask is for this kind of scenario:
    Originally I did Calibration+Profile+3D LUT, but the results had some bogus readings. I cleaned up manually (edited the .ti3), then made a new profile from said .ti3. With this new settings profile active, opening the original .cal seems to make a new entry in the Settings drop down list, and I’m assuming it was not added to the new settings profile.

    Final question. If all I want is a 3D LUT for, say, an eeColor, everything I’ve read recommends basically using display controls to adjust white point (100% white) as desired and setting Calibration to “As measured”. However, with my OLED I’m using a custom white point of x=0.3039 y=0.3214, provided by Dwayne (aka D-Nice at avsforum) based on his experiences with 2016 OLEDs, for the purpose of addressing metamerism failure on C6/E6/G6 models. This (2-point high controls) was set with HCFR before doing anything with DisplayCAL.
    I’m doing more measurements to verify this behavior, but the 3D LUTs I made without calibration (i.e. set to “As measured”) seem to not adjust grayscale balance. I did try specifying the white point in Calibration but it didn’t seem to make a difference.
    Should I be making a synthetic Rec. 709 with my desired white point in this situation?

    Cheers for your time

    #13007

    Florian Höch
    Administrator

    Does Black level drift compensation affect this “Calibration” sequence?

    Yes.

    If yes, are there any variables or option flags that can be changed/set to increase the frequency of when this happens?

    No.

    Does the resulting profile use whatever profiling settings are selected for <Current> settings?

    Yes.

    With this new settings profile active, opening the original .cal seems to make a new entry in the Settings drop down list and I’m assuming it was not added to the new settings profile.

    If you didn’t edit out the CAL section from the profile, the calibration will be embedded in the profile.

    Should I be making a synthetic rec709 with my desired white point in this situation?

    Use relative colorimetric rendering intent if you want to keep the current whitepoint of the TV.

    #13021

    Kamikaze Ice
    Participant

    With this new settings profile active, opening the original .cal seems to make a new entry in the Settings drop down list and I’m assuming it was not added to the new settings profile.

    If you didn’t edit out the CAL section from the profile, the calibration will be embedded in the profile.

    Just to make sure I’m understanding, here’s what I was doing. I was copying one of my .ti3 files into a new folder, changing the name of both the folder and the .ti3 before creating a new profile from said .ti3. I wasn’t copying anything else at that point. I assumed the “calibration” measurements were stored separately as the .cal file, but I didn’t know if they would get carried over, as I didn’t initially have it there, and if not, I was wondering what the process would be to do so.

    Should I be making a synthetic rec709 with my desired white point in this situation?

    Use relative colorimetric rendering intent if you want to keep the current whitepoint of the TV.

    Sorry, I originally had written more in-depth on this, but deleted it as I thought I was rambling (unhappy with my wording). I forgot to state again that I was using relative colorimetric, as described in your documentation. I think it would be better if I provide you with the context surrounding my question, because my results were not expected.

    I’ve attached my most recent calibration+profile if you’re curious about anything (also chose to not include the 3d lut as per “create compressed archive” button), including my bad measurements (I have a copy with them removed). This was a stable ~24 hour session, and by that I mean there was no drifting caused by image retention/persistence/burn-in (detailed below).

    I don’t know what happened when generating my .3dlut yesterday from this file. I generated around 5 of them yesterday. The initial one used low quality PCS-to-device tables, which gave a pastel pink tint to everything EXCEPT white, which mostly remained where I had adjusted it with display controls prior to DisplayCAL.
    I used patterns on Ted’s calibration disc to verify visually. I use HCFR because, honestly, I simply fail to understand how to verify via DisplayCAL and simply avoid it (the terminology and intent simply do not “click” with me no matter how many times I read it and the responses to other posts on this). The problem lies between my keyboard and my monitor, lol.
    Anyways, the first one was bad, so I made one without vcgt, and some using various combinations of advanced gamut options, building new profiles for each, all from the original file. Some of them had the same white point and tint below it, at least one had a notably cooler color temperature, and at least one tried to go back to an absolute/D65 white point (I never used any settings that should change the white point as described in the documentation).

    The last one I made yesterday was back with the original settings and had the same results. Then today I remade the .3dlut because I forgot whether I did this yesterday (lol)… and now it’s fine and looks like I expected it to the first time. And my meter has not moved at all (it’s been securely stationary for the past month while observing my E6’s HDR behavior for different firmware, Icon, and game modes in a light and temperature controlled man cave).

    Totally baffled, and none of the logs had anything that stood out. The only thing I can think of that might affect results is my CPU/RAM overclock, although that’s extremely unlikely (the PC is stable and runs 24/7/365).

    Somewhat related:
    I have a ColorMunki Display. With HCFR I can get “negative” measurements for at most two of R/G/B (the measurement metric, like xyY/xyz/XYZ). I’d like to ignore these measurements. I think this is probably a question for Graeme, but does DisplayCAL drop such measurements, or have a way to view patterns/measurements outside of R’G’B’, XYZ, or a verification report?
    Here is how I evaluate this kind of stability on my E6:
    Prior to starting any profile session I make a few quick measurements using HCFR: IREs 5/45/95, three separate times, with the 2nd and 3rd taken with the display’s “color filter” set to red and then blue (I do this to make sure the white subpixel is off, for more accuracy). The results for this session were fantastic; max DE76 (balance+gamma for 130 nits) was <1.0 (overall avg was ~0.65).

    I also visually check the screen for burn-in and image retention/persistence using full screen fields or the Windows background set to 30,30,30 RGB, as only a portion of the screen is active for so many hours. This session was fine, but the previous one failed (image retention/persistence, temporary) and required me to run “clean panel noise”. The session I attached in my original post took ~12 hours and had more luminance/green drift.

    Sorry, had another idea 🙂
    Regarding the Tools > Advanced > Check Measurement File… window, since we can manually edit bad measurements here would it be possible to integrate spotread or otherwise to remeasure and update the listed XYZ?

    Attachments:
    You must be logged in to view attached files.
    #13081

    Florian Höch
    Administrator

    I have a ColorMunki Display. With HCFR I can get “negative” measurements for at most two of R/G/B (the measurement metric, like xyY/xyz/XYZ). I’d like to ignore these measurements. I think this is probably a question for Graeme, but does DisplayCAL drop such measurements

    Negative XYZ should not be possible from an instrument. I don’t think Argyll allows it either.

    Regarding the Tools > Advanced > Check Measurement File… window, since we can manually edit bad measurements here would it be possible to integrate spotread or otherwise to remeasure and update the listed XYZ?

    You could pull out any bad reads into a new TI3 which then could be re-measured, but ultimately the better approach is to not have them in the first place.

    #13085

    Kamikaze Ice
    Participant

    I have a ColorMunki Display. With HCFR I can get “negative” measurements for at most two of R/G/B (the measurement metric, like xyY/xyz/XYZ). I’d like to ignore these measurements. I think this is probably a question for Graeme, but does DisplayCAL drop such measurements

    Negative XYZ should not be possible from an instrument. I don’t think Argyll allows it either.

    Sorry, I don’t think that came across right. What I was trying to say was that I can get negative R/G/B measurements in HCFR (using the RGB view and not the xyY/xyz/XYZ views; the radio button options for HCFR under the “display” section directly above the measure buttons).
    These “negative” measurements only happen when my meter fails to take a measurement correctly with only one or two of the three sensors inside. I don’t know how the internals of my meter work, and I’m not about to open the sealed environment to find out, so I’ve been removing any patterns with at least one RGB channel between 1-7, but only for the IRE 0-15% range where the display is very sensitive to changes (this also applies to the display’s own internal calibration controls). I’ve not noticed them being an issue outside this range, but going forward I will remove them for all ranges just so I can rule it out entirely.
    Originally what I was trying to ask was how DisplayCAL/ArgyllCMS handles measurements like this, and if it’s possible to configure the software to ignore/drop them. I mean, if there is an option I’m unaware of, I’d rather use it instead of manually editing the .ti3.

    Regarding the Tools > Advanced > Check Measurement File… window, since we can manually edit bad measurements here would it be possible to integrate spotread or otherwise to remeasure and update the listed XYZ?

    You could pull out any bad reads into a new TI3 which then could be re-measured, but ultimately the better approach is to not have them in the first place.

    I agree. I’ve been tediously working on improving the pattern and measurement sequence for my profiling chart, but OLED behavior puts up a good fight. Especially near black, where these OLEDs have poor quantization and my meter has a forced speed limit for darks by design (which is actually useful for HDR, giving more time for the panel to settle after manually adding black measurements after every pattern).

    Visually, calibration results at any speed are bad enough that I will not make one for OLEDs. Without inserting a black frame in between measurements, grayscale colorization can be very obvious and there can be luminance fluctuations (visible in secondary color ramps and non-live-action content like animations or CGI, where gradients are more prominent and often more saturated).

    I’ve made about 40 profiles in the last few months on this OLED. I’ve taken results from an average delta error of 1.5-2.0 down to 0.1-0.2, and a max average of 6-7 down to 2-3, simply by making iterative test charts based on a prior run, but I’m still struggling with reducing grayscale colorization and near black quantization. And by struggle I mean I know there is still room for improvement, as I made a nice 3D LUT for my eeColor two years ago but lost the display settings and the DisplayCAL profile/settings.

    All I can do is keep profiling and changing the pattern sets until I’m satisfied with the result. I’m trying to keep the measurement time below 3 hours, ideally less than 2 hours, so a profile will finish before the 4 hour automatic “clear noise” process happens (after 4 hours it starts, then repeats every hour until powered off). This changes peak white balance enough to add RGB balance error in IREs 75-100 (+1-2 delta error). But having to add so many black patterns limits me to ~500 iterative points, ~121 neutral and ~5 multidimensional, then using those results for another ~500 iterative of that set, and repeating.
    I know Lightspace has features to modify a profile/lut, but I’d rather address the problem(s) and not put makeup on it lol.

    In theory, could I “create” a calibration by manually modifying the data in an existing “bad” .cal based on some .ti3 data (grayscale+primaries) with the same tone response (BT.1886, absolute, with true “0” black measured)? If possible I’d make a new profile based on it, but I’m not quite sure how to make a new profile using the calibration of another profile (both with the same display settings, just different profile patch sets), as a few I tried kept their previous calibration.
    In HCFR: for SDR I use a black (0,0,0 RGB) frame inserted after every pattern for 1500ms, and for HDR I use a dark grey (30,30,30) frame (a full screen field, not a “window”), except I manually time it for 2 minutes (extremely important for HDR; anything 200+ nits shown for more than 5 seconds will start to cause image retention, which causes the following measurements to have more green/luma. This can snowball quickly, causing even more green/luma drift).
    This is easily seen even without meters by simply checking a full screen field for a glow where the measurement patterns were, or even where the display’s own UI menu was (I actually have red burn-in from spending so much time observing how all the controls interact and change my display’s behavior).
    As to why I use a 30,30,30 field for HDR (or 200+ nit SDR): it keeps a small level of image retention, so Y measurements should be a better approximation of panel behavior while viewing actual content. This is not an issue for typical SDR conditions.

    #13090

    Florian Höch
    Administrator

    What I was trying to say was that I can get negative R/G/B measurements in HCFR

    This is dependent on the target set in HCFR (usually Rec. 709). It is not necessarily a problem, just means the read is not on target.

    Originally what I was trying to ask was how does DisplayCAL/ArgyllCMS handle measurements like this and if possible configure software to ignore/drop them

    When profiling a display, there is no target.

    In theory, could I “create” a calibration by manually modifying the data in an existing “bad” .cal based on some .ti3 data (grayscale+primaries) with the same tone response (bt.1886, absolute with true “0” black measured)?

    No, just skip the 1D calibration.

    If possible I’d make a new profile based on it but I’m not quite sure how to make a new profile using the calibration of another profile

    Load the *.cal file under “Settings”, then set everything on the calibration tab to “As measured”.

    #13114

    Kamikaze Ice
    Participant

    What I was trying to say was that I can get negative R/G/B measurements in HCFR

    This is dependent on the target set in HCFR (usually Rec. 709). It is not necessarily a problem, just means the read is not on target.

    Originally what I was trying to ask was how does DisplayCAL/ArgyllCMS handle measurements like this and if possible configure software to ignore/drop them

    When profiling a display, there is no target.

    Please bear with me, I think I’m not quite saying what I’m trying to ask.

    When the meter fails to read, let’s say IRE 2% (white @ 100 nits), the measurement results are simply 0. But if I use internal display controls to raise one of the triplets enough for the meter to read it, let’s say red, the other two (green/blue) now measure as a negative number instead of 0. If I change the negative channels to 0, I get a different xyY coordinate along with different calculations. I’m concerned that these kinds of readings are causing issues.

    I guess what I’m trying to ask is, can DisplayCAL drop these kinds of measurements (read: not included in calculations) or be configured to only clip/truncate/??? any “negative” channels to 0 (included in calculations)?

    So far I’ve simply been manually removing any test chart patterns where even a single channel of the triplet is 8 or less. This has helped prevent some grayscale colorization in IREs 0-10% (it’s less likely than before, when I left them in).

    Out of curiosity I took the best profile I’ve made so far (~4000 points; the smoothest and least colorized grayscale, with only a small but noticeable red spike on a grayscale ramp), modified that profile’s .ti3, and made a new profile+3D LUT with it.
    I used HCFR as a spreadsheet to convert XYZ to RGB, then added to red and converted back to XYZ. The self check failed with a huge delta error, but visually the red was almost neutralized. I saw no visible problems with Ted’s calibration disc, all his gradient and chroma-gradient patterns, and HCFR measurements seemed fine.
    As good as the results seem to be from this limited testing, I would rather get everything in order so DisplayCAL/ArgyllCMS can produce results like this (smooth grayscale without colorization) on its own.
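    For reference, the XYZ↔RGB leg of that spreadsheet round trip can be done directly. A minimal sketch using the standard published linear Rec. 709/sRGB matrices with D65 white (linear light only; gamma/EOTF handling is deliberately left out):

    ```python
    # Linear RGB (Rec. 709 primaries, D65) -> CIE XYZ, and its inverse.
    RGB_TO_XYZ = [
        [0.4124, 0.3576, 0.1805],
        [0.2126, 0.7152, 0.0722],
        [0.0193, 0.1192, 0.9505],
    ]
    XYZ_TO_RGB = [
        [3.2406, -1.5372, -0.4986],
        [-0.9689, 1.8758, 0.0415],
        [0.0557, -0.2040, 1.0570],
    ]

    def _mul(m, v):
        """3x3 matrix times 3-vector."""
        return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

    def rgb_to_xyz(rgb):
        return _mul(RGB_TO_XYZ, rgb)

    def xyz_to_rgb(xyz):
        return _mul(XYZ_TO_RGB, xyz)
    ```

    Nudging a channel in RGB and converting back to XYZ, as described above, is then just `xyz_to_rgb`, adjust, `rgb_to_xyz`.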

    I’ve attached both profiles if you’re curious (I ONLY modified XYZ of line 38, nothing else).

    I’m now going to try pre-calibrating grayscale, despite advice from everyone at avsforum saying this is not needed except for 2-point high controls to dial in 100% white balance.
    I’m not quite sure why DisplayCAL/ArgyllCMS 3D LUTs generated from a profile seem to have a hard time keeping a neutral and/or smooth grayscale, sometimes a little worse than no 3D LUT at all. Wait, that sounds like I’m saying the problem is the software, which is not what I mean. I don’t know how to word it, but I hope you can figure out what I’m trying to say. 🙂

    So far for me, all of this seems to be determined by the measurement pattern order on my OLED, and manually editing the order has produced better results than “automatic” or sorting via response/lightness/luma/RGB#.
    I think keeping an average lightness/luma for 90% of the total measurement time would help, and more RGB variance between higher green and blue patches (i.e. so greens, blues and greens+blues aren’t measured close together) might as well. The white subpixel seems to be tied more closely to blue, so giving it more variation over time appears to help minimize drift during profiling.

    In the HCFR thread at avsforum, user Webdove posted THIS video of a moving+pulsating near black circle gradient (I’ve downloaded it for use in MPC+MadVR). This is just one of the files I’ve been using to check the results of DisplayCALs profile and the resultant 3D LUT.

    In theory, could I “create” a calibration by manually modifying the data in an existing “bad” .cal based on some .ti3 data (grayscale+primaries) with the same tone response (bt.1886, absolute with true “0” black measured)?

    No, just skip the 1D calibration.

    This is what I’m doing, but I think my question didn’t come out right.

    Here is why I wanted a calibration:
    Exclusively used for the desktop, absolutely nothing else would use it.
    Since it’s not actually color correction, I just wanted to use one to slightly compensate for the oversaturated/bright primaries caused by the WIDE gamut, to help prevent the image retention/burn-in that can happen.

    To rephrase my question: since a profile contains data on the display’s current state, why couldn’t a calibration be made from this data? It would make that particular profile unusable with it, sure, but I was curious if it was possible in theory.

    If possible I’d make a new profile based on it but I’m not quite sure how to make a new profile using the calibration of another profile

    Load the *.cal file under “Settings”, then set everything on the calibration tab to “As measured”.

    Just to clarify if I understood you:
    Let’s say I have TWO separate Calibration+Profiles, one for Mancave mode (dark) and one for casual time (bright), both made with a different pattern set.
    You’re saying I can load the .cal of one of them, set calibration to “As measured”, use a different pattern set, and create a NEW profile that would include the .cal file I loaded?

    If yes, I never would have thought to try that. I thought the video card gamma table (calibration) was reset when calibrating+profiling.

    Is it possible to combine a previous calibration (.cal) with an existing measurements file (.ti3) that was made with a linear calibration?
    Er… not quite satisfied with my phrasing of what I’m trying to ask again.
    Basically I’m just wanting to understand how the calibration data at the end of a .ti3 file relates to the RGB/XYZ data at the beginning. Are the listed XYZ values WITH or WITHOUT the calibration?

    Sorry for the unusual and confusing questions that read completely differently from what I’m trying to ask, lol.

    Attachments:
    You must be logged in to view attached files.
    #13263

    Florian Höch
    Administrator

    When the meter fails to read, let’s say IRE 2% (white@100 nits), the measurement results are simply 0. But if I use internal display controls to raise one of the triplets enough for the meter to read it, let’s say red, the other two (green/blue) now measure as a negative number instead of 0

    Yes, this is exactly what I was talking about. It depends on the target set in HCFR.

    I guess what I’m trying to ask is, can DisplayCAL drop these kinds of measurements (read: not included in calculations) or be configured to only clip/truncate/??? any “negative” channels to 0 (included in calculations)?

    Check the TI3. There won’t be any negative values for XYZ.

    To rephrase my question: Since a profile contains data of the display’s current state, why couldn’t a calibration be made from this data?

    You could. Just look up neutral XYZ (or L*a*b*, which is easier as only L* has to change) inverse-forward through the profile. This is the same as what a 3D LUT does, btw.
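    To illustrate the idea in one dimension (this is not the actual ICC profile lookup, just the inversion principle): given a measured equal-RGB neutral ramp, each calibration entry is the input level whose measured response lands on the target curve. A sketch assuming a monotonic ramp, a hypothetical gamma 2.2 target, and plain linear inverse interpolation:

    ```python
    def inv_interp(xs, ys, y):
        """Find x where the piecewise-linear curve (xs, ys) equals y."""
        for x0, y0, x1, y1 in zip(xs, ys, xs[1:], ys[1:]):
            if y0 <= y <= y1:
                return x0 if y1 == y0 else x0 + (x1 - x0) * (y - y0) / (y1 - y0)
        return xs[-1]  # clamp above the measured range

    def build_cal(xs, measured_y, target_gamma=2.2, size=9):
        """Input levels that make the measured ramp hit the target curve."""
        return [inv_interp(xs, measured_y, (i / (size - 1)) ** target_gamma)
                for i in range(size)]

    # Example: a display whose neutral ramp measures as pure gamma 2.0.
    xs = [i / 100 for i in range(101)]
    measured = [x ** 2.0 for x in xs]
    cal = build_cal(xs, measured)
    ```

    A real calibration works per channel against XYZ/L* targets, but the inverse lookup step is the same shape.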

    You’re saying I can load the .cal of one of them, set calibration to “as measured” and use a different pattern set and create a NEW profile that would include the .cal file I loaded?

    Yes (it’s also mentioned in the documentation I think).

    Is it possible to combine a previous calibration (.cal) with existing measurements file (.ti3) that was made with a linear calibration?

    Argyll’s applycal can apply a calibration to a profile.

    Are the listed XYZ values WITH or WITHOUT the calibration?

    With.

    #14027

    Florian Höch
    Administrator

    Just as a heads-up, full field pattern insertion (not measured) is now a feature in DisplayCAL 3.7.

    #14138

    Kamikaze Ice
    Participant

    Thanks for the update, should help when I feel like tackling HDR on this OLED.

    Cheers!



Display Calibration and Characterization powered by ArgyllCMS