LG C8 Lut


Viewing 15 posts - 46 through 60 (of 202 total)
  • #21089

    Florian Höch
    Administrator

    *.3dl = original(?) Autodesk format (fastest to slowest incrementing channel order B-G-R, with input intervals in header)
    *.dcl = similar to Autodesk *.3dl, but DeviceControl specific 3D LUT format (fastest to slowest incrementing channel order R-G-B, no input intervals in header)

    Due to the different channel order, the formats are not interchangeable.
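    Since the two formats differ only in which channel increments fastest, converting between them amounts to an axis swap. A minimal sketch (illustrative only, not DisplayCAL's actual code):

```python
import numpy as np

def bgr_fastest_to_rgb_fastest(flat, size):
    """Reorder a flat 3D LUT from B-fastest entry order (Autodesk .3dl
    style) to R-fastest entry order (DeviceControl .dcl style)."""
    # B-fastest means the flat index is r*size^2 + g*size + b, so the
    # grid axes run (R, G, B) from slowest to fastest.
    lut = np.asarray(flat).reshape(size, size, size, 3)
    # Swapping the R and B axes makes R the fastest-incrementing channel.
    return lut.transpose(2, 1, 0, 3).reshape(-1, 3)

# Tiny 2x2x2 identity LUT written out in B-fastest order:
size = 2
entries = [[r, g, b] for r in range(size)
                     for g in range(size)
                     for b in range(size)]
reordered = bgr_fastest_to_rgb_fastest(entries, size)
print(reordered[:2].tolist())  # [[0, 0, 0], [1, 0, 0]]
```

    The same transpose converts in the other direction too, since swapping the outermost and innermost axes is its own inverse.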

    #21092

    János Tóth F.
    Participant

    *.3dl = original(?) Autodesk format (fastest to slowest incrementing channel order B-G-R, with input intervals in header)
    *.dcl = similar to Autodesk *.3dl, but DeviceControl specific 3D LUT format (fastest to slowest incrementing channel order R-G-B, no input intervals in header)

    Do we know which one is the hardware format (RGB or BGR)? Maybe the current format should be labeled “LG (LS)” if it’s not the native hardware format, and “LG (native, LS)” if it is. Or the name could be kept generic and a separate file generated with the correct intervals (yet another drop-down list item for LG would feel redundant), or even with different, CalMAN-compatible LUT data (in case CalMAN only accepts 16-255 data and always converts it before upload regardless of the interval line, or rejects 0-255 interval lines…), depending on what CalMAN actually does with that interval data and what data it accepts.

    In any case, profiling using full RGB works fine with DisplayCAL while having the LG Black Level control set to High; there’s no need to calibrate using 16-235 or 16-255 with the LG Black Level control set to Low, which is the only way CalMAN is able to work. DisplayCAL is only able to generate the 3D LUT, but leaving the 1D LUT untouched seems to be the recommended way to minimize banding and posterization anyway, according to posts in the LightSpace threads on avsforum.

    Are you sure? I think with HDMI Black set to High, the default Contrast = 85 maps an incoming 1023 (10-bit RGB) to the same 940 output as an incoming 940 with HDMI Black set to Low. This is how you get the same measured luminance for video input 100% and PC input 100% (and not 109%, which is brighter because WTW is not clipped). So, if you go with High, you should set Contrast to 100 (no WTW for video range). Which would be fine (and even looks better to the human eye, in sync with the HDR mode…), except the ABL is more aggressive with higher Contrast, and that can easily come into play when you enable BFI (which cuts the brightness in half, so OLED Light needs to be raised by ~100%). I tried this with CalMAN’s 16-235 pattern set and didn’t like it.

    It’s fairly easy to test this: raise Contrast from 85 to 100 (in your current, calibrated picture mode) and check whether the luminance of the 100% white patch increases significantly (meaning the previously unused WTW range is now utilized, e.g. 100% full RGB now maps to 109% video instead of 100% video) or not (clipping occurs, which should be immediate, since the current settings should have no headroom).

    I am not concerned about keeping WTW, but as I said, this affects the ABL, and not in a good way (it kicks in sooner rather than later).


    Also, it turns out the HDR10/HLG mode uses the same 33^3 3DLUT as the SDR mode. I wonder if it would be better to leave the HDR10 1DLUT at neutral and build a Rec2020+gamma2.2 3DLUT from a static shaper+matrix profile (something simple, like 45 gray patches + RGB patches somewhere below the R+G+B limit, not above where the W “boosts” the brightness). When the panel is so infamously unstable, less is probably more…

    #21095

    Florian Höch
    Administrator

    Do we know which one is the hardware format (RGB or BGR)?

    No, and it is a bit moot as long as there is no direct access to the hardware LUT (without using 3rd party software). The BGR ordering is definitely more common because it is the most “natural” way in which to increment the channels in nested loops.

    or even different LUT data which is CalMAN compatible (in case CalMAN only accept 16-255 data and always converts it before upload regardless of the interval line or rejects 0-255 interval lines…)

    I would be surprised if CalMAN expected anything different than the 3DL format specified in [1].

    [1] http://download.autodesk.com/us/systemdocs/pdf/lustre_color_management_user_guide.pdf

    #21096

    János Tóth F.
    Participant

    I would be surprised if CalMAN expected anything different than the 3DL format specified in [1].

    [1] http://download.autodesk.com/us/systemdocs/pdf/lustre_color_management_user_guide.pdf

    As I indicated earlier, CalMAN refuses to use the standard .3dl format as well. It loaded the file when I copy-pasted the 64-1023 header from a CalMAN file, but the DisplayCAL file starts with 0, so it’s not the same. The question is whether this data matters at all (is it a placeholder to satisfy the format template, or important data for later processing before the upload?).

    #21097

    Florian Höch
    Administrator

    Technically, the actual input intervals do not matter, but if (as indicated) CalMAN encodes the input levels (video vs full range) in the first (and last) interval value, it may matter (while it is absolutely possible to create a 3D LUT that maps input values in video range, it would waste LUT points for values that will ultimately be clipped anyway, e.g. anything below video black).
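    To put a rough number on the wasted points: a sketch (assuming an evenly spaced grid over the 8-bit full range and 16-235 video levels) that counts grid points per axis falling outside video range:

```python
def out_of_video_range_points(n, black=16, white=235):
    """Count grid points per axis of an n-point full-range (0-255) grid
    that fall below video black or above video white."""
    steps = [round(i * 255 / (n - 1)) for i in range(n)]
    below = sum(1 for v in steps if v < black)
    above = sum(1 for v in steps if v > white)
    return below, above

print(out_of_video_range_points(33))  # (2, 3)
```

    For a 33-point grid, 5 of the 33 points per axis sit outside video range, so a noticeable fraction of the cube would be spent on values that end up clipped anyway.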

    #21098

    János Tóth F.
    Participant

    Well, the CalMAN GUI DDC window shows me a 3DLUT table explorer with 64 as the starting point at 0% (the values are in percentages), then 96 at >0%, etc. (as the header goes).

    I remember reading somewhere that 64 is mapped to 0, but that comment (at least what I recall of it) didn’t explain what that means (a conversion, or simply that “64 means 0 and we are just showing you 64 to confuse you”).

    It certainly sounds stupid to create and save a 3DLUT with video levels and then expand it to full range just before upload (without saving the final data) but it wouldn’t be the craziest thing about this software.

    I guess I can get a relatively good answer by some experimentation. Or I will be begging for a LightSpace Device Control file. 😀

    #21100

    stama
    Participant

    @janos: well, I did the measurements you asked for:

    • graphics card: full range, TV black level = high, Contrast = 85 -> 255 patch measures 104 cd
    • graphics card: full range, TV black level = high, Contrast = 100 -> 255 patch measures 131 cd

    And I also did this:

    • graphics card: full range, TV black level = low, Contrast = 85 -> 255 patch measures 131 cd
    • graphics card: full range, TV black level = low, Contrast = 100 -> 255 patch measures 131 cd (same as above)

    I attached the 3dl file too, so you can have a look at the data inside it.

    I don’t understand the purpose of these measurements, but here they are! 🙂

    I use the TV as a display monitor, so calibrating full range makes sense. If I want to use the HTPC for movie playback, then I have two choices:

    • either let the video renderer do the limited -> full range conversion, and have WTW clipped indeed
    • or configure the video renderer to output limited range, and just switch the TV black level control to “low”

    To be honest, I chose option 1 most of the time.

    Regarding HDR/DV, they use different LUTs than the SDR modes, according to what Ted said on avsforum. But the TV needs to be in HDR or DV mode when sending the LUT data, in order for it to be written to the HDR or DV LUT.

    #21103

    János Tóth F.
    Participant

    @stama – Ah, sorry. I forgot that I failed with HDMI Black = High because the “expand built-in patterns to PC levels” option works with the built-in pattern window (as I later found out) but not with madTPG (and I used the latter when I gave up on the full range setup).

    I assumed HDR10 uses a different 3DLUT (an actual 3×3 matrix), but it was Ted who wrote (in answer to my direct question) that it’s a 33^3 3DLUT. However, CalMAN’s LUT engine is unable to create a useful 3DLUT from multi-point measurements with volumetric correction methods due to the WRGB panel’s “W boost” (that “dome” on top of the HDR-mode gamut where RGB is already maxed out but W can still go higher and is allowed to do so in HDR mode), and that’s why CalMAN forces the “matrix method”. So it creates a 3×3 matrix but then fills in a 33^3 3DLUT for upload.
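    The “matrix method” upload step (a 3×3 matrix expanded into a full 33^3 3DLUT table) can be sketched like this; a hypothetical illustration, not CalMAN’s actual implementation:

```python
import numpy as np

def lut_from_matrix(matrix, size=33):
    """Expand a 3x3 color matrix into a size^3 3D LUT table
    (R-fastest entry order, values clipped to 0..1)."""
    axis = np.linspace(0.0, 1.0, size)
    # Build the grid as (B, G, R) so that flattening yields R fastest.
    b, g, r = np.meshgrid(axis, axis, axis, indexing="ij")
    rgb = np.stack([r, g, b], axis=-1).reshape(-1, 3)
    out = rgb @ np.asarray(matrix).T
    return np.clip(out, 0.0, 1.0)

lut = lut_from_matrix(np.eye(3))  # identity matrix -> identity LUT
print(lut.shape)  # (35937, 3)
```

    The clipping step is also where the matrix approach sidesteps the “W boost” problem: out-of-range results are simply clamped instead of being fitted volumetrically.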

    DolbyVision, on the other hand, has no active programmable 3DLUT (well, at least we can’t access it; maybe Dolby forbade LG from keeping it in the chain, but maybe a neutral 3DLUT is technically still there). All the mapping is done by the Dolby CMU (rather than LG’s processor, but that doesn’t mean we couldn’t use it to calibrate the panel response the same way as if it were a 1DLUT). The 1DLUT is there, though. For that reason it might be better to keep HDR10 and DV in close sync. But I currently see no way of creating a 1DLUT from a static profile (other than buying LightSpace).

    #21104

    János Tóth F.
    Participant

    I did a quick test: created a 3DLUT from the measurements of the SDR Game mode with Rec2020 and gamma 2.2 as targets and uploaded it to the HDR Game mode with CalMAN. Aside from some obvious black-crush it seems mostly fine. I think this could actually work. (But I would obviously need to measure the HDR mode and sort this limited/full range thing out.)

    And the black crush goes away when I create the 3DLUT with full range input, limited range output (no clipping) in DisplayCAL. Crazy crazy CalMAN. 🙁

    #21106

    János Tóth F.
    Participant

    I wrote down the steps of experimental LG HDR10 hardware calibration with DisplayCAL here: https://www.avsforum.com/forum/139-display-calibration/2962814-2018-lg-oled-calibration-user-settings-no-price-talk-91.html#post58839958

    There is no way of validating the 3DLUT alone in HDR mode after the hardware upload (because the math-based PQ -> gamma 2.2 tone mapping and the 3DLUT-based gamut mapping can only be enabled or disabled together, not separately).

    After this I used the same workflow (save for the HDR10 metadata injection) for an SDR mode and got fairly bad validation results (about dE=7 max). I guess ArgyllCMS doesn’t like having R, G, B measurements at 50% only. (But as I remember from earlier experiments, it would hate it much more to have RGB at 100% after all components got clipped at their own distinct <100% levels, except for W in HDR.)

    #21123

    chros
    Participant

    *.3dl = original(?) Autodesk format (fastest to slowest incrementing channel order B-G-R, with input intervals in header)
    *.dcl = similar to Autodesk *.3dl, but DeviceControl specific 3D LUT format (fastest to slowest incrementing channel order R-G-B, no input intervals in header)

    Sorry for asking again, but do you plan to add support for the LG B8’s 17-point cube? Thanks!

    #21197

    János Tóth F.
    Participant

    Ah, I finally managed to figure out how to use CalMAN for DisplayCAL-generated 3DLUT upload: what it expects in the .3dl (header and data) depends on its own pattern generator setting, which is “TV 16-235” by default (restored every time it’s re-launched or a template is re-loaded). After setting that to “PC 0-255”, it’s happy with 0-255 -> 0-255 Kodak .3dl files and their original headers as they come from DisplayCAL. 🙂
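    For reference, the interval headers being discussed are just evenly spaced input breakpoints; a hypothetical helper (the exact point counts and rounding that CalMAN and DisplayCAL use may differ):

```python
def intervals(points, lo, hi):
    """Evenly spaced input breakpoints for a .3dl-style header line."""
    return " ".join(str(round(lo + i * (hi - lo) / (points - 1)))
                    for i in range(points))

print(intervals(17, 0, 255))     # full range:  0 ... 255
print(intervals(17, 64, 1023))   # video range: 64 ... 1023
```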

    #21190

    Josh Bendavid
    Participant

    Hi,

    FYI I’m working on adding support for the LG calibration functions (LUT upload, etc) to the open source pylgtv library (https://github.com/TheRealLink/pylgtv/tree/master/pylgtv)

    WIP branch is here https://github.com/bendavid/pylgtv/tree/calibration/pylgtv

    e.g. https://github.com/bendavid/pylgtv/blob/936486d1c4fb4d4b6b0e9d93ab38fdb931a6054b/pylgtv/webos_client.py#L837-L885

    The intention is to use this together with DisplayCal for a full FOSS solution.

    The internal data format used by the python library here is numpy arrays.  It should be straightforward to add support for reading the appropriate file formats, and if there is interest one may even think about direct integration in DisplayCal.

    The protocol is websockets/JSON based and is closely related to the existing remote control protocol; the data exchange uses a simple base64 encoding scheme to send the calibration data to the TV.
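    That data flow could be sketched roughly like this (the field names are placeholders, not the real webOS protocol keys, and the uint16 dtype is only an assumption):

```python
import base64
import json
import numpy as np

# A tiny stand-in for real LUT data; the dtype is an assumption.
lut = np.zeros((2, 3), dtype=np.uint16)

# Flatten to raw bytes, base64-encode, and wrap in a JSON message.
payload = base64.b64encode(lut.tobytes()).decode("ascii")
message = json.dumps({"command": "upload_lut", "data": payload})

# The receiving side reverses the steps.
decoded = np.frombuffer(
    base64.b64decode(json.loads(message)["data"]), dtype=np.uint16)
print(decoded.tolist())  # [0, 0, 0, 0, 0, 0]
```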

    Currently only the 2018 alpha9 (i.e. C8) functions are supported, but if someone with an alpha7 or a 2019 model is willing to help, I can also add support for e.g. 17^3 3D LUTs, or the custom tone mapping functions for the 2019 sets.

    (Dolby vision config upload is not present yet, but I know how the protocol works for that as well and will add it soon)

    I should also say that I do not know yet the purpose or meaning of the 9 floating point numbers which are sent to the TV as part of the start calibration and end calibration commands (currently I send by default the same values used by existing commercial tools).

    https://github.com/bendavid/pylgtv/blob/936486d1c4fb4d4b6b0e9d93ab38fdb931a6054b/pylgtv/constants.py#L3

    #21209

    chros
    Participant

    Ah, I finally managed to figure out how to use CalMAN for DisplayCAL generated 3DLUT upload:  …  it’s happy with 0-255->0-255 Kodak .3dl files …

    Good job!

    FYI I’m working on adding support for the LG calibration functions (LUT upload, etc) to the open source pylgtv library … branch is here https://github.com/bendavid/pylgtv/tree/calibration/pylgtv

    Amazing, thanks!

    The internal data format used by the python library here is numpy arrays.  It should be straightforward to add support for reading the appropriate file formats

    Errr, I thought those utils (CalMAN, DeviceControl) only upload the created 3DLUT; do they convert it on the fly during upload?

    Currently only the 2018 alpha9 (ie C8) functions are supported, but if someone with alpha7 or 2019 model is willing to help I can also add support for e.g. 17^3 3d luts, or the custom tone mapping functions for the 2019 sets.

    I’m in! I have a 65B8 (alpha7 gen2). DisplayCAL doesn’t support the 17-point cube with DeviceControl 3DLUTs yet, but it’s there with the Kodak 3DLUT format (supported by CalMAN). So, if the latter could be used, that would be amazing.

    Enable the Issue tracker in your github repo, and we can continue the discussion there.

    (Dolby vision config upload is not present yet, but I know how the protocol works for that as well and will add it soon)

    Wow! 🙂

    #21210

    Josh Bendavid
    Participant

    Yes, CalMAN and DeviceControl read the created 3D LUT and encode it appropriately to send to the TV. When I say “the internal data format used by the python library here is numpy arrays”, this is just a choice I made for the implementation. The actual ordering of the elements in the arrays is chosen to be the same as what CalMAN or DeviceControl sends to the TV. I did not actually look closely at the file formats yet, since I haven’t implemented any file input, but any variation of element order should be straightforward to handle.

    The data flow in the python library is numpy array (with correct ordering) -> flattened binary representation -> base64 encoded string -> json message -> websocket communication to TV

    What remains to be implemented is file input in a convenient format -> numpy array -> numpy array with desired element ordering.  This is extremely straightforward for any delimited text type format, and any reordering can be handled in a straightforward way by manipulating the numpy arrays.  Open to suggestions for which file format(s) are the most useful to support.
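    The “delimited text -> numpy array -> reordered numpy array” chain could look roughly like this (assuming whitespace-separated output triplets, one per line; not the library’s actual code):

```python
import io
import numpy as np

# Eight "R G B" output lines of a 2x2x2 LUT, written in R-slowest /
# B-fastest order (the Autodesk .3dl convention).
text = ("0 0 0\n0 0 255\n0 255 0\n0 255 255\n"
        "255 0 0\n255 0 255\n255 255 0\n255 255 255\n")
lut = np.loadtxt(io.StringIO(text)).reshape(2, 2, 2, 3)  # axes (R, G, B)
# Transposing the grid axes changes which channel increments fastest.
reordered = lut.transpose(2, 1, 0, 3).reshape(-1, 3)
print(reordered[1].tolist())  # [255.0, 0.0, 0.0]
```

    In a real reader, only the grid size and the source ordering would change; the reshape/transpose pattern stays the same.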

    Issues are enabled for the github repo now.



Display Calibration and Characterization powered by ArgyllCMS