Indie Filmmaker Color Grading Suite Setup


    #4225

    Shane Taylor
    Participant

    Hi,

    I’m in the process of setting up a color grading suite for a short film I’m in post on, and am struggling a bit, so I’m hoping to find some help here. First, some details of the hardware and software I’m using and what I’ve done to date, followed by a few questions to (hopefully) get me started down the right path.

    HP Z820, 32 GB RAM, Windows 7 Ultimate x64, NVIDIA Quadro 4000 GPU, Dell U3011 CCFL IPS wide-gamut 10-bit-capable LCD display connected via DisplayPort, Adobe Speedgrade CS6, After Effects CS6, and Premiere Pro CS6.

    My goal is to create a good calibration (a global tone curve loaded into the 10-bit vcgt on the Quadro), along with proper/optimum adjustment of the monitor’s OSD; an ICC profile that I can use for all my color-managed apps (which Premiere Pro and Speedgrade are NOT); and finally a 3D LUT that I can use as a calibration LUT in Speedgrade CS6 while grading my footage (as well as in After Effects; I don’t believe Premiere Pro CS6 has LUT capability). I will be working in Rec709 space (unsure of gamma at this time: 2.2-2.4, or BT.1886?), but ultimately will be outputting to DCP (DCI-P3) for the festival circuit (hopefully).

    I have an (older) i1 Pro Rev B spectrophotometer and an (older, but never used, just now out of the box) i1 Display 2, both of which are recognized and usable by the latest installed version of DisplayCAL. I have successfully created a correction matrix for the colorimeter using the spectrophotometer. I have also done a verification of my uncalibrated Dell from the Verification tab, as Florian described in another post. I have my room very dim (dark), windows blacked out, a neutral gray background behind my screen (SAVAGE Thunder Gray seamless paper), and some bias lighting behind the monitor (currently, unfortunately, a 5000K Ott-Lite, but that should change).

    Now, my Dell U3011 has an extensive set of OSD controls (described in some detail here), which is giving me some pause as to what to change and what to leave alone. Along with the typical Brightness and Contrast controls, there are multiple presets (Standard, Multimedia, Game, Movie, Warm, Cool, Adobe RGB, sRGB, and Custom Color), but more on that later. Brightness and Contrast range from 0 to 100, with the factory default set at 50 for both (which is about 180 cd/m2). I have two choices for Gamma: PC and Mac (which I assume to be 2.2 and 1.8 respectively). In previous calibration/profiling sessions using X-Rite’s profiling software, I used the interactive mode to modify the RGB gains in the Custom Color mode to set the white point (along with Brightness and Contrast to set the luminance level). I’ve played with that a bit in DisplayCAL as well. I also have options in the Custom Color preset mode to alter Gain (RGB), Offset (RGB), Hue (RGBCMY), and Saturation (RGBCMY).

    But I also see that the different color modes clearly create very different viewing conditions, even though there is no documentation on what white point / gamma each of them targets. From my reading, I suspect that one of them is a better starting point than the others for an accurate calibration and profile. First, though, I needed to create a correction for my i1 Display 2. That led to my first dilemma. I first ran a correction using my then-current OSD settings, which were factory default except for Contrast = 39, Brightness = 0. I then thought, “I’m pretty sure that how a monitor is set up determines the overall accuracy of the display,” so I reset my monitor to factory defaults (Contrast = 50, Brightness = 50) and ran the correction again. I got different numbers. I’m not sure how different they are, but they are not the same, and I don’t know if the difference is significant in the profiling process. I’ve attached them both below. So, my first question is: Do monitor settings affect the efficacy of the correction file?

    Next, I ran several Verifications of the uncalibrated display against Rec709 in a combination of 4 different states to see what changed (and it was significant; in all cases, Gamma was set to “PC”):

    1. Standard Color Preset, Brightness = 50, Contrast = 50 (full factory default)
    2. Standard Color Preset, Brightness = 0, Contrast = 39 (my previous ‘preferred’ dim-surround viewing setting)
    3. sRGB Color Preset, Brightness = 50, Contrast = 50
    4. sRGB Color Preset, Brightness = 0, Contrast = 39

    I’ve also attached all four HTML reports to this post in a ZIP file; the names should be clear. So, my second question is this: “What does all this mean?” (Either my Dell is BAD out of the box, or one or more of my measurement devices is.)

    What I can gather is the following:

    1. The Standard Color Preset’s white point varies from 6200K at factory settings to 6700K at the lower brightness setting, and color accuracy is unacceptable. Gamma was about 2.2 except at the extremes.
    2. The sRGB Color Preset’s white point varies from 5500K at factory Bri/Con settings to 5600K at the lower brightness setting, and color accuracy is much better, but still unacceptable. Gamma was about 2.2, but varied more wildly in the midtones.

    I didn’t try any of the other Presets. There is no documentation as to what the ‘native’ settings are, but I have to assume Standard is native, since that is what a factory reset selects.

    Finally, I’ve attached a Report on Uncalibrated Display Device for the monitor in the factory reset condition.

    I need help understanding what to do next to reach my goal, i.e., how best to adjust my monitor using the available controls to get the best calibration, profile, and 3D LUT possible. Once I know where to start, I may have more questions regarding some of the settings in some of the tabs to best fit my goal. I’ve read several reviews of the U3011, and they vary from OK to glowing regarding out-of-the-box color accuracy and profilability (not $10,000-professional-broadcast-monitor glowing, but usable).

    So far, I’ve had absolutely no problems with DisplayCAL. It has run flawlessly, and much quicker since I brought out my colorimeter (instead of using the i1 Pro for measurements). Any and all help/advice would be appreciated. Let me know if I can provide any more information or files, or run any tests. I’m kind of stuck in my post process until I can have confidence in my display system.

    Thanks!

    Shane

    #4236

    Florian Höch
    Administrator

    Hi,

    Now, my Dell U3011 has an extensive set of OSD controls (described in some detail here), which is giving me some pause as to what to change and what to leave alone.

    In terms of brightness/contrast/color controls I would only alter brightness and RGB gain and leave the other controls (contrast, hue/saturation) alone. This probably requires using the monitor’s “Custom Color” mode, which should also give you the widest possible gamut and thus the most “breathing room” for the 3D LUT.

    So, my first question is: Do monitor settings affect the efficacy of the correction file?

    As the panel’s primaries and backlight are static, the actual monitor settings won’t affect the correction much. For most confidence in the results, you could do the monitor’s whitepoint and brightness adjustment with the spectro, and once that’s done, cancel out, create the correction, and do the rest with the colorimeter.

    Next, I ran several Verifications of the uncalibrated display against Rec709 in a combination of 4 different states to see what changed (and it was significant) […] So, my second question is this: “What does all this mean?”

    Looking at the primaries only, the monitor’s “sRGB” mode seems to hit them pretty well. Note that you verified against Rec. 709 with the Rec. 709 tone curve (“unmodified”), which is technically an encoding-only curve and usually not used for monitor calibration. You could use the synthetic ICC profile creator to create a Rec. 709 profile with a different curve (gamma 2.2 or BT.1886) for checking the uncalibrated monitor, but for the purpose of creating an accurate profile and 3D LUT, these tests are not very meaningful; the important thing is that the monitor gamut is large enough to encompass the desired Rec. 709 space (which is the case in the U3011’s Standard and Custom Color modes, it being a wide-gamut monitor).

    #4293

    Shane Taylor
    Participant

    Thanks so much for your help, Florian. I will try setting white point and brightness with the spectro, then create the colorimeter corrections as you suggest.

    BTW, my RGB Gain settings default to 100%, so I can only lower them to affect color. Is that typical behavior? I assume that lowering each by the same amount is quite different from lowering Brightness. I understand that Brightness controls the backlight, but what exactly does the Gain do? Gain should have no effect on the backlight, since there is only one fixed-color backlight, so it must be attenuating the signal path for each channel’s values in some way, right? A built-in 1D LUT for each color channel’s adjustment setting?
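
    In my head it works something like the sketch below (purely my own mental model, in Python, not anything documented by Dell):

        # Simplified mental model: Brightness scales the backlight (all
        # channels equally, in linear light), while each Gain attenuates
        # one channel's drive before the panel. Not Dell's actual firmware.
        def apply_controls(rgb, gains, backlight):
            """rgb, gains and backlight are all normalized to 0.0-1.0."""
            return tuple(channel * gain * backlight
                         for channel, gain in zip(rgb, gains))

        # Lowering only the red gain pulls the white point away from red
        # without dimming the backlight:
        print(apply_controls((1.0, 1.0, 1.0), (0.92, 1.0, 1.0), 1.0))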

    Finally, given that my grey ramp appears to shift color along its range, would it be advantageous (if possible to do with DisplayCAL) to also correct any black-point color cast using the monitor’s OSD ‘offset’ controls? I assume that is what these controls are meant to do, in contrast to the gains controlling the highlights. However, as you’ve pointed out before, there is no guarantee that they only affect the shadows. I’m just wondering if that is something I should try to improve my chances of a good grey ramp.

    Although, having written all of that, maybe I’m overthinking this. I suppose that making any changes to the controls on the monitor beyond brightness and gain is just creating a Dell-manufactured LUT/correction in firmware that won’t be as good as letting DisplayCAL do it by itself.

    Thanks again for your help, insight, and excellent interface to Argyll.

    #4294

    Shane Taylor
    Participant

    Hi Florian, as suggested I’ve modified my monitor settings to match the targets in interactive mode using the i1 Pro. I was able to get 0.6 dE on the color and meet my 80-nit luminance target (it seems bright compared to when I also had the contrast set at 39, which had me at about 55 nits, I think). I also created a new colorimeter correction at this setting.

    That done, I’m on to profiling and creating a 3D LUT. Given my ‘goals’ as stated earlier, can you please look over my settings in the attached PDF (multi-page) and let me know if you would recommend changes anywhere? In particular, I wasn’t sure about the CIECAM02 gamut mapping settings or the Rendering Intent and LUT size settings on the 3D LUT tab. I have read that anything over 17×17×17 is overkill, but I’m not sure if that applies here, nor am I sure what Speedgrade will handle on the high end.

    Any advice would be greatly appreciated. Thank you. Shane.

    #4298

    Shane Taylor
    Participant

    OK, profiling completed. I’ve attached a PDF with the Profile Information report spread over three pages; each page shows more of the right-hand-side data. On the last page, I changed the last dropdown under the chart to the other setting, which changed the chart, just for your reference. My first question, then (from my previous post): were my settings on each tab correct/optimal for my purposes and, if not, what should I change? I’m not sure what this chart shows me. I’m assuming that the colored line is my monitor’s gamut, compared to the reference Rec 709. Does anything on this tell me how well Rec 709 data will be represented within the capabilities of my monitor, or is that the purpose of the Verification?

    If my creation looks even close to reasonable, my next step is to verify it, but I’m not clear on how to do that. My questions are as follows: Based upon my reading of the manual, do I verify the profile (ICC plus video card 1D LUT/vcgt) and the 3D LUT separately, since they have different purposes? If I want to see how well my profile (ICC) and video card corrections work for color-managed apps (like Photoshop), do I just set the ICM in the Settings dropdown, choose either the Verification or Extended Testchart, and leave Simulation Profile OFF? If I want to see how well the 3D LUT works to convert Rec 709 video to my newly profiled monitor, do I do the same, except choose the respective (video) testchart, or do I need to engage the Simulation Profile? My instinct is to choose the Rec 709 Simulation Profile and select the Rec 1886 Tone Curve. I’m still confused about what I’m actually verifying.

    I’m assuming the 3D LUT I created is NOT a DeviceLink, since (as I understand it) a DeviceLink converts one device profile into another device profile, skipping the PCS. In my case, what I created was a conversion from standard Rec709 video from my camera to the characteristics of my display, so that my Rec 709 video displays properly (in this case, using the Rec 1886 Tone Curve), to be used as a calibration LUT in Speedgrade. Is my thinking correct?

    One final question is this: When using my 3D LUT in Speedgrade, do I need to unload the calibration and reset the gamma in the video card through the loader? In other words, does the 3D LUT include all the necessary corrections applied by my profile/calibration already, or does it assume that the video card always contains the profile data? I don’t want to double-compensate for something.

    My understanding is improving, but I’m still too unsure about many of the settings and tests, as well as the usage of the products, to be confident moving forward with grading.

    As always, tremendous thanks for your help.

    #4309

    Florian Höch
    Administrator

    Given my ‘goals’ as stated earlier, can you please look over my settings in the attached PDF (multi-page) and let me know if you would recommend changes anywhere? In particular, I wasn’t sure about the CIECAM02 gamut mapping settings or the Rendering Intent and LUT size settings on the 3D LUT tab. I have read that anything over 17×17×17 is overkill, but I’m not sure if that applies here, nor am I sure what Speedgrade will handle on the high end.

    CIECAM02 is not necessary in this case, so it can be turned off (it’s not a problem that you already created a profile with it enabled, though; just mentioning it for future reference). The 3D LUT rendering intent should be “Absolute colorimetric with white point scaling”. I’d go with the largest available LUT size for best accuracy (65x65x65), if Speedgrade can support it. You can just re-create the 3D LUT, with altered settings, from the existing profile by removing the check from “Create 3D LUT after profiling”.

    I’m not sure what this chart shows me. I’m assuming that the colored line is my monitor’s gamut, compared to the reference Rec 709

    Correct.

    Does anything on this tell me how well Rec 709 data will be represented within the capabilities of my monitor, or is that the purpose of the Verification?

    It does tell you a little about how much of Rec. 709 is covered, but yes, verification is the way to check how well it actually does in terms of accuracy, using the 3D LUT.

    If I want to see how well my profile (ICC) and video card corrections work for color-managed apps (like Photoshop), do I just set the ICM in the Settings dropdown, choose either the Verification or Extended Testchart, and leave Simulation Profile OFF?

    Yes.

    If I want to see how well the 3D LUT works to convert Rec 709 video to my newly profiled monitor, do I do the same, except choose the respective (video) testchart, or do I need to engage the Simulation Profile? My instinct is to choose the Rec 709 Simulation Profile and select the Rec 1886 Tone Curve. I’m still confused about what I’m actually verifying.

    To verify the 3D LUT, the simulation profile needs to be enabled and used as target (checkbox). Basically, the same settings need to be used as on the 3D LUT tab for source colorspace and tone curve. When the 3D LUT is applied in software, as in your case, the device link profile option needs to be enabled and set to the profile that accompanies the 3D LUT .cube file (same basename).
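
    For illustration, a .cube file is just a plain-text grid of output RGB triplets that the application samples. A minimal Python sketch (nearest-neighbor lookup for brevity, where real applications interpolate; the filename is only an example):

        # Minimal .cube reader plus nearest-neighbor lookup. Real software
        # uses trilinear/tetrahedral interpolation; this only shows the layout.
        def load_cube(path):
            size, table = 0, []
            with open(path) as f:
                for line in f:
                    parts = line.split()
                    if not parts or parts[0].startswith("#"):
                        continue
                    if parts[0] == "LUT_3D_SIZE":
                        size = int(parts[1])
                    elif len(parts) == 3:
                        try:
                            table.append(tuple(float(v) for v in parts))
                        except ValueError:
                            pass  # other keywords (TITLE, DOMAIN_MIN, ...)
            return size, table

        def lookup(size, table, r, g, b):
            """r, g, b in 0.0-1.0; the red index varies fastest in the file."""
            i = lambda v: min(int(v * (size - 1) + 0.5), size - 1)
            return table[i(r) + i(g) * size + i(b) * size * size]

        size, table = load_cube("mymonitor.cube")  # example filename
        print(lookup(size, table, 0.5, 0.5, 0.5))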

    I’m assuming the 3D LUT I created is NOT a DeviceLink, since (as I understand it)

    A device link profile is, for all intents and purposes, a standardized 3D LUT format.

    When using my 3D LUT in Speedgrade, do I need to unload the calibration and reset the gamma in the video card through the loader?

    Either that, or (re-)create the 3D LUT without calibration applied (from the existing profile, see above), and keep the video card gamma tables. I think the latter may be preferable as you then don’t need to remember to turn off/on the calibration.

    In other words, does the 3D LUT include all the necessary corrections applied by my profile/calibration already, or does it assume that the video card always contains the profile data?

    It depends on whether the calibration was applied to the 3D LUT or not (checkbox on the 3D LUT tab).

    #4340

    Shane Taylor
    Participant

    Florian,

    Thanks for the details. I too prefer the option to create the LUT without the calibration applied. That way I can keep the calibration active in the video card at all times, my color-managed apps can use the profile as needed, and I can use the LUT in Speedgrade correctly (since it ignores the Windows CMS completely anyway, but will see the effect of the LUT + vcgt + monitor OSD settings). Is my thinking correct?

    Cheers,

    Shane

    #4342

    Shane Taylor
    Participant

    Florian,

    I posted this on LiftGammaGain, but wanted to post it here as well to get your thoughts, as it applies directly to this thread. I queried Nikon as to the exact format of the video files from their Nikon D810 (which I used for my current short film). I did not use an external recorder, so I shot flat (low contrast) and have H.264, 4:2:0 data. Not the best, I know, but it’s what I have. Exposure is good. The film is set in the woods at dusk, so I don’t need a lot of shadow detail; day for night, basically. I’ve included output from MediaInfo (link below) on one of my files that shows much data about the file format. Some confusion arose as to what I actually have, hence the questions to Nikon. I’ve also pasted below my questions and Nikon’s responses.

    http://imageshack.com/f/pnJtYV3nj

    ==================

    Hello Mr. Taylor,

    I have received some feedback to your questions. I hope this proves to be useful for you.

    (1) Is the actual color data in the video in the YUV color space format as indicated, or is it really RGB?
    A: The color space of D810’s video is YUV space. Its color gamut is equal to sRGB.

    (2) Rec709 (BT.709) is listed as the Color Primaries, yet Rec 601 is listed as the Matrix Coefficients. I don’t quite understand this difference. Which is it?
    A: As above #1, Color Primaries, Matrix coefficients and Transfer characteristic have different meaning respectively. That means, those don’t have to have the same value. In order for any 3rd application to display Nikon’s video correctly, those applications should read these tags to process the Nikon’s video with correct color reproduction. However Nikon cannot force 3rd party application to do it because it’s up to 3rd party. Please note that D810’s video has full range value in the data, 0-255, any 3rd application should read the data as full range value. If the data is read as limited range, then the gradation will be lost.

    (3) In addition, the “Transfer Characteristic Original” is different, in being “BT.470 System M”. Why the difference?
    A: Color primaries, value of “B.T.709” is color space. Transfer characteristics, value of “B.T.470 system M” is gamma. Matrix coefficients, value of “B.T.601” is matrix for RGB <-> YUV conversion.

    (4) When pulling my video from my D810 into a color grading application, do I assume that the data is in Rec709 format, or do I need to assume a different color space before the transform to XYZ?
    A: Color primaries, value of “B.T.709” is color space.

    Kind regards,
    Dave

    ======================

    What I’m trying to determine is whether, even before the LUT is applied in Sg, Sg is interpreting the footage from the camera correctly (as described in #2), given the color space information provided by Nikon in #3. Or, if not, whether there is something I could do in creating the LUT that would accommodate the correct conversion. As the product manager for Speedgrade has pointed out to me, Speedgrade is Rec 709 through and through: it expects 709 as input and outputs what is to be interpreted as 709 (unless a calibration LUT is applied to the output). Internally, it’s all just 32-bit floating point math. What I’m trying to determine is whether what I’m feeding Sg is what it is expecting, or whether I need to make some adjustments to the footage either prior to, or (if possible) within, the calibration LUT.

    For example, the “BT.470 System M” they use lists a gamma of 2.2, as shown in

    https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.470-6-199811-S!!PDF-E.pdf  (you need to Copy & Paste this link since the forum chokes on the !!)

    Also of particular interest are the chromaticity coordinates and Illuminant C listed in Table 2, as well as the first 5 points in Annex 2. Not what I’d expect. It also looks like they use a 7.5 IRE black-level offset (if I’m interpreting the difference between black and blanking level correctly, although in #2 Nikon does say the data is full range). I also don’t understand the use of the BT 601 RGB<->YUV conversion they list in answer #3 either: where it is used, or if it is even important to me (or Speedgrade).

    One thing that is encouraging is that when I pull up the footage in both Speedgrade (unmodified) and in Nikon’s ViewNX application (which shows the video unmodified), the two images are indistinguishable on the same screen. However, when I open the same footage in Windows Media Player, I see an image with slightly higher gamma, i.e., highlights are brighter and shadows are darker (greater contrast, more dynamic range), while a narrow range of midtones seems to be an exact match. And although the colors look the same, they seem a little more saturated in WMP. What, if anything, might I infer from this? Also, the color of the shirt worn on-screen doesn’t quite match the actual shirt (in hand), but that could be a white balance issue in the original footage, which I might be able to fix in the grade.

    If you have any insight into any of this, suggestions or recommendations would be greatly appreciated. Otherwise, I must just assume that the footage is what Speedgrade is expecting and hope for the best.

    Thanks,

    Shane

    #4349

    Florian Höch
    Administrator

    I too prefer the option to create the LUT without the calibration applied. That way I can keep the calibration active in the video card at all times, my color-managed apps can use the profile as needed, and I can use the LUT in Speedgrade correctly (since it ignores the Windows CMS completely anyway, but will see the effect of the LUT + vcgt + monitor OSD settings). Is my thinking correct?

    Yes.

    I also don’t understand the use of the BT 601 RGB<->YUV conversion they list in answer #3 either: where it is used, or if it is even important to me (or Speedgrade).

    This concerns the video codec. If it’s working correctly, it should use the defined matrix.
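
    For reference, the two matrices differ only in the luma weights. A small Python sketch of the full-range YCbCr-to-RGB conversion a correctly behaving decoder performs (input values are illustrative):

        # Full-range YCbCr -> RGB using the flagged matrix coefficients.
        # y is 0..1; cb and cr are centered on 0 (i.e. -0.5..0.5).
        def ycbcr_to_rgb(y, cb, cr, kr, kb):
            r = y + 2 * (1 - kr) * cr
            b = y + 2 * (1 - kb) * cb
            g = (y - kr * r - kb * b) / (1 - kr - kb)
            return r, g, b

        # BT.601 (what the D810 flags):          kr = 0.299,  kb = 0.114
        # BT.709 (often assumed for HD footage): kr = 0.2126, kb = 0.0722
        print(ycbcr_to_rgb(0.5, 0.1, -0.1, 0.299, 0.114))
        print(ycbcr_to_rgb(0.5, 0.1, -0.1, 0.2126, 0.0722))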

    However, when I open the same footage in Windows Media Player, I see an image with slightly higher gamma, i.e., highlights are brighter and shadows are darker (greater contrast, more dynamic range)

    Hmm. It may be that WMP (incorrectly) interprets the file as having video levels instead of full range.
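
    That mismatch is easy to model: a player that assumes limited-range (16-235) input applies the expansion below, and on data that is actually full range this crushes shadows, brightens highlights, clips both ends, and leaves the midtones nearly unchanged, much like what you describe:

        # Expansion a player applies when it assumes limited-range input.
        # On data that is actually full range, values below 16 crush to
        # black and values above 235 clip to white.
        def expand(code):
            return max(0, min(255, round((code - 16) * 255 / 219)))

        for v in (0, 16, 64, 128, 200, 235, 255):
            print(v, "->", expand(v))
        # 64 -> 56, 128 -> 130 (midtones barely move), 235 -> 255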

    If you have any insight into any of this, suggestions or recommendations would be greatly appreciated.

    You could use the combination of MPC-HC and madVR to check if the decoding matches expectations. Pressing CTRL+J in MPC-HC during playback with madVR will bring up information about the primaries and matrix used for decoding (among other things).

    #4353

    Shane Taylor
    Participant

    OK, thanks! I have MPC-HC installed and madVR downloaded; I just don’t know how to enable madVR in it. I will research it and check as you suggest. Calibrating my monitor is quickly becoming the most time-intensive part of my entire post effort, with so much to learn just to make sure that what I see on my screen when I grade is in fact what it should look like! Now on to MPC-HC and madVR. I could, of course, hire a professional, but I wouldn’t really be any more certain that it was right then, either.

    Hmm. It may be that WMP (incorrectly) interprets the file as having video levels instead of full range.

    Interestingly, in MPC-HC, changing the output range from full to limited makes no difference. Enabling/disabling color management does, however, even when I have no ICC profiles loaded in the Windows CMS and the video card gamma table has been reset with the loader. I’m not sure what kind of color management it is doing.

    #4355

    Shane Taylor
    Participant

    Florian, so in terms of the format my Nikon footage is in, and the 3D LUT I create in DisplayCAL to use in Sg so that I see Rec 709 data correctly on my monitor, my confusion centers on the exact settings I need to use in DisplayCAL when creating that LUT.

    For example, I’ve calibrated my monitor and created a 3D LUT for my display in DisplayCAL using the Rec 709 colorspace and the BT.1886 Tone Curve, with a gamma of 2.4. I did this based upon prior reading which indicated that, for viewing content destined for theatrical exhibition in a dim environment like my room, a gamma of 2.35-2.4 might be more appropriate. However, Nikon’s response indicates that the footage is encoded with a gamma of 2.2, per BT.470 System M. But in my 3D LUT settings, I used (again) Rec 709 and BT.1886 with a gamma of 2.4. To me, that seems to be a mismatch. Aren’t I telling DisplayCAL to create a correction for my monitor to show footage encoded with 2.4 gamma on my newly calibrated 2.4-gamma monitor, when the footage is actually encoded with 2.2 gamma? If this is correct, then that is why I’m trying to understand what Nikon has told me, so that I can create the appropriate LUT. Am I just way off base and missing the point completely? I think you’ve already told me that for a LUT used for calibration, all source and target settings should be the same. I believe, in my current case, they are not. Am I in error using 2.4 (BT.1886) in my calibration and LUT creation?

    #4360

    Florian Höch
    Administrator

    However, Nikon’s response indicates that the footage is encoded with a gamma of 2.2, per BT.470 System M.

    Even if that’s the case, the way the footage then looks through the 3D LUT is what counts, so the end result will be correct for the given target (possibly after adjusting the grade).

    #4361

    Shane Taylor
    Participant

    So, it appears that I can independently create a display 3D LUT that establishes my preferred viewing conditions (e.g., gamma 2.4) for ANY Rec 709 color space data, and since Speedgrade assumes all footage it holds internally is Rec 709, I’ll see correctly what those numbers represent as Rec 709, even if the image happens to appear flat, as in the case of loading unconverted log footage. Even if my Nikon footage was not strictly Rec 709, I think I’m hearing that it doesn’t matter since it would be like grading unconverted log footage directly to bring it into ‘linear’ space.

    Have I arrived? 😉

    A couple of final questions before I run a Verify regarding calibration and creation of the LUT in DisplayCAL. On the calibration tab, should I stick with my preferred 1886 for Tone Curve, or should I use gamma 2.2? This tab is establishing my preferred viewing conditions, right? Similarly, since we now know the Nikon footage is gamma 2.2 (my source), should I use that on the LUT tab for source Tone Curve or stick with 1886? This tab specifies the source of the data that is to be translated to my preferred viewing conditions established on the calibration tab, right?

    #4362

    Florian Höch
    Administrator

    Even if my Nikon footage was not strictly Rec 709, I think I’m hearing that it doesn’t matter since it would be like grading unconverted log footage directly to bring it into ‘linear’ space.

    Sounds about right (although in the case of log footage I’d assume there will be some input LUT to bring it into linear space).

    On the calibration tab, should I stick with my preferred 1886 for Tone Curve, or should I use gamma 2.2? This tab is establishing my preferred viewing conditions, right? Similarly, since we now know the Nikon footage is gamma 2.2 (my source), should I use that on the LUT tab for source Tone Curve or stick with 1886?

    Depends on what you’re grading for. Gamma 2.2 is still being used, but slowly being replaced by BT.1886.
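
    For reference, BT.1886 is essentially a gamma 2.4 power curve with an offset derived from the display’s measured white and black luminance. A minimal Python sketch (the 80 and 0.1 cd/m2 values are just example measurements):

        # BT.1886 EOTF per Rec. ITU-R BT.1886 Annex 1.
        def bt1886(V, Lw=80.0, Lb=0.1, gamma=2.4):
            a = (Lw ** (1 / gamma) - Lb ** (1 / gamma)) ** gamma
            b = Lb ** (1 / gamma) / (Lw ** (1 / gamma) - Lb ** (1 / gamma))
            return a * max(V + b, 0.0) ** gamma

        print(bt1886(0.5))        # mid-gray with a 0.1 cd/m2 black level
        print(80.0 * 0.5 ** 2.4)  # plain gamma 2.4 for comparison

    The deeper the display’s black level, the closer BT.1886 tracks a plain 2.4 power curve; with a higher black level it lifts the shadows.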

    This tab specifies the source of the data that is to be translated to my preferred viewing conditions established on the calibration tab, right?

    The 1D calibration only influences non-color managed applications in terms of tone curve, so has no impact on what you get when (e.g.) the 3D LUT is used.
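
    For illustration, the vcgt is nothing more than three per-channel ramps that the video card applies at scanout. A minimal Python sketch of one ramp that would shift a hypothetical pure 2.4-gamma display toward 2.2:

        # One channel of a vcgt: 256 entries of 16-bit output values. The
        # display applies ^2.4 after this ramp, so the net response is ^2.2.
        ramp = [round(65535 * (i / 255) ** (2.2 / 2.4)) for i in range(256)]
        print(ramp[0], ramp[128], ramp[255])  # 0 ... 65535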

    #4363

    Shane Taylor
    Participant

    me – …should I use that on the LUT tab for source Tone Curve or stick with 1886?
    Depends on what you’re grading for. Gamma 2.2 is still being used, but slowly being replaced by BT.1886.

    Right, OK, so does that mean the Tone Curve on the 3D LUT tab is not meant to be the ‘source’ gamma, but the gamma you want to apply to the source?

    By ‘what you’re grading for’, let’s say I grade two versions of my film to look exactly the same through a 2.2-gamma calibration LUT and a 2.4-gamma calibration LUT, and output both to DCP for display in a theater at DCI’s 2.6 gamma. Which version will turn out darker, the 2.2 or the 2.4?
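
    Here is my back-of-envelope attempt at that question, as a toy power-law model in Python; it ignores the DCP’s XYZ encoding and any gamut mapping, so take it loosely:

        # Grade a patch until it *looks* like luminance L on a monitor of
        # gamma g, then show that encoded value on a gamma 2.6 projector.
        L = 0.18  # the perceived gray level I grade to
        for g in (2.2, 2.4):
            encoded = L ** (1 / g)    # code value that hits L on my monitor
            theater = encoded ** 2.6  # what the 2.6-gamma projector shows
            print(f"graded at {g}: encoded {encoded:.3f}, theater {theater:.3f}")
        # graded at 2.2 -> theater 0.132; graded at 2.4 -> theater 0.156

    If that toy model holds, the 2.2-graded version lands darker in the theater, but I’d love a sanity check on that reasoning.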

    The 1D calibration only influences non-color managed applications in terms of tone curve, so has no impact on what you get when (e.g.) the 3D LUT is used.

    This last response is confusing. As we’ve discussed previously, the 1D calibration (vcgt) is global and affects all output equally. Also, won’t the 3D LUT be affected as well, depending on whether calibration was applied when creating it, and whether or not the gamma table has been loaded by the loader?

    I really appreciate your patience.



