3x 1D LUT vs 3D LUT

#22347

    Wire
    Participant
    • Offline

    (I’m ready to be disabused if my whole way of thinking is distorted)

I’ve been reading up on old threads discussing the relationship between ICC profiles, VCGT, 3x 1D LUTs, video-industry 3D LUTs, devicelink profiles, gamut emulation, etc.

I’m struggling to grasp the implications of the in-display 3D LUT approach as compared to a 3x 1D LUT, and moreover why more isn’t done with VCGT.

I see that from the OS graphics POV, the gamut of the device has typically been invisible (in spite of EDID’s potential), and that you want all the scarce bits from the graphics card applied to the display’s regime, subject to its limits. And I see that it’s traditionally been fine to commandeer some of those bits to set the basic tonal response of the display: temperature and TRC, since you want the display response normalized for the generic device-like data you may throw at it, and the alignment has to be done somewhere in the pipeline, so why not there where it helps everything. IOW I see why there are some kinds of data massaging you won’t try to do with VCGT, like gamut mapping, because that’s the wrong place in the system to do it; that pipeline was not designed for it.

Am I off the mark on the above? Because my question is somewhat informed by it:

    Once you have programmable LUTs in the display, what’s the practical difference between a 3x 1D LUT and a 3D LUT?

I see the data representation diffs, and these point to significantly different levels of abstraction. Is it that a 3D LUT is a more general and display-agnostic approach to giving a display a personality? Or are the capabilities of 3x 1D and 3D radically different due to an aspect I can’t see? Isn’t a 3x 1D LUT one form of a 3D LUT, just not a “3D LUT” as regarded by the conventions of the industry?

    (Maybe you can see I am trying to work backwards from terminology into mechanics. But I’ve been reading up on this and it’s not made clear.)

    IOW what aspects of display personality can you do with 3D LUT that you can’t do with display 3x 1D LUT?

Beyond this, is the principle of operation of the ICC profile and CMM pretty much the same sort of technique, but everything gets input/output referred to the PCS and CIE model, whereas “3D LUTs” are only device-local transformations (however abstract the device may be)?

I notice the ICC profile spec keeps referring to key color data structures as “a set of non-interdependent per-channel tone reproduction curves”. Does this mean that the functions of a 3D LUT can only occur in the CMM from an input or working profile to an output profile; is that what defines the dependency that allows a mapping? This would appear to be the rationale for devicelink as a way to sidestep a lot of expensive processing…

    TIA

    #22364

    Florian Höch
    Administrator
    • Offline

Or are the capabilities of 3x 1D and 3D radically different due to an aspect I can’t see?

Sets of 1D LUTs are independent from one another, meaning one channel cannot affect the output of the others. A 3D LUT can map an input triplet to any arbitrary output triplet. E.g. with a 3x 1D LUT that maps channel R input 64 to output 128, any combination where input R is 64 will always map output R to 128, irrespective of the other two channels. With a 3D LUT, input 64 64 64 could map to 128 128 128, but input 64 128 128 could map to 128 64 64 etc.
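To make the difference concrete, a rough Python/NumPy sketch (the table values are invented, and a full-resolution 256³ cube is used only to keep the indexing obvious; real 3D LUTs use a sparse node grid plus interpolation, as discussed further down):

import numpy as np

# 3x 1D LUT: one independent table per channel; output R depends only on input R.
lut_r = np.arange(256, dtype=np.uint8)   # identity placeholders
lut_g = np.arange(256, dtype=np.uint8)
lut_b = np.arange(256, dtype=np.uint8)
lut_r[64] = 128                          # R in 64 -> R out 128, whatever G and B are

def apply_3x1d(r, g, b):
    return lut_r[r], lut_g[g], lut_b[b]

# 3D LUT: one table indexed by the whole triplet, so any input combination
# can map to any arbitrary output triplet.
lut3d = np.zeros((256, 256, 256, 3), dtype=np.uint8)
lut3d[64, 64, 64] = (128, 128, 128)
lut3d[64, 128, 128] = (128, 64, 64)      # same R in, different R out

def apply_3d(r, g, b):
    return tuple(lut3d[r, g, b])

print(apply_3x1d(64, 64, 64), apply_3x1d(64, 128, 128))   # R out is 128 both times
print(apply_3d(64, 64, 64), apply_3d(64, 128, 128))       # (128, 128, 128) vs (128, 64, 64)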

Beyond this, is the principle of operation of the ICC profile and CMM pretty much the same sort of technique, but everything gets input/output referred to the PCS and CIE model, whereas “3D LUTs” are only device-local transformations

    In ICC CM, on-the-fly linking (and creation of color transforms) allows for different source spaces. The term “3D LUT” as used by the video industry is a “baked” transform (aka device link) and so limited to one specific source space.

    #22368

    Wire
    Participant
    • Offline

    OK yes! What you just said makes sense according to writeups.

    I can see why (to use an example from another article) if you wanted to build Photoshop’s channel-mixer, or a sweet grading utility that let you do amazing things artistically, 3D LUT is an efficient way to go.

But in the practical sense of device gamut emulation, given that the target space must (in the ordinary case) fit within the display space, and RGB displays are pretty well behaved (:0), when is such arbitrary mapping helpful?

IOW, I get Resolve users wanting 3D LUTs for grading… But what’s the application for video renderers, or display-internal support? My guess is that if you don’t have a fully color-managed pipeline, you offload the CMM just to build the 3D LUT, incorporating whatever display calibration you see fit, and use relatively fast, cheap LUT logic to run the display personality.

Where it gets confusing to me is that HTPC tools like madVR, Kodi, etc., or black-box renderers, take 3D LUTs, but they should have plenty of power to do a full CMM. Is it licensing costs, or….? Somewhere behind the black box is a full ICC system. As to logic cost, can’t a Raspberry Pi be modded to work as a full CMM? I think I’m getting closer to answering my own question… HT calibrators are a business that may not see the value of making this easy or general, and/or geeks are nerding out with edge projects.

You can see how out of touch I am with the way things are actually done, but I’m sort of OK with that, because it seems like certain approaches take on a life of their own that further tech then obviates. I merely want to understand the reasoning so I can understand the tools at hand.

I got here by reading a discussion elsewhere on these forums of someone wishing Dell UltraSharps could be programmed by Argyll.

Also, is a DisplayCAL XYZLUT ICC profile technically a 3D LUT profile, or is the “3D” term reserved for devicelink-style approaches? The terminology is maddening.

    #22375

    Vincent
    Participant
    • Offline

I got here by reading a discussion elsewhere on these forums of someone wishing Dell UltraSharps could be programmed by Argyll.

Also, is a DisplayCAL XYZLUT ICC profile technically a 3D LUT profile, or is the “3D” term reserved for devicelink-style approaches? The terminology is maddening.

A 1D LUT (or 3x 1D LUT) just fixes grey/white in the way Florian explained. It cannot fix/correct/emulate gamuts.
In HW these are 3 1D tables loaded from the profile’s VCGT tag.
Each possible value in the range (0-255) will have its calculated correction. You do not measure all 256 steps of the grey ramp to compute those entries; you just measure a few of them (12/24/48/96) and interpolate the correction for all entries.
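A rough sketch of that interpolation step in Python/NumPy (the measured numbers are invented for illustration):

import numpy as np

# A handful of measured grey-ramp corrections (e.g. from 12/24/48/96 patches);
# the values here are made up.
measured_in = np.array([0, 32, 64, 128, 192, 255])
measured_out = np.array([0, 30, 61, 124, 190, 255])

# Interpolate to one entry per possible input value (0-255): the full
# per-channel table that ends up in the VCGT / HW 1D LUT.
lut_1d = np.interp(np.arange(256), measured_in, measured_out).round().astype(np.uint8)

print(lut_1d[64], lut_1d[100])   # every level has a correction, measured or interpolated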

    ***

Those Dells, or Eizo CS, some Zx from HP, or other models have a “lut matrix” which can emulate smaller colorspaces like sRGB.
The pipeline works as follows:
RGB data -> 3x 1D LUT (undo gamma encoding, “prelut”) -> 3×3 matrix mixing the primaries to emulate the gamut -> 3x 1D LUT (re-do gamma encoding and fix white/grey, “postlut”) -> RGB data to panel
Each possible value in the range (usually 0-1023) will have its calculated correction. As with the 1D LUT in VCGT you do not measure all of them, just a few (10/20), TOO FEW unfortunately for the current QC of some display vendors. A hypothetical ArgyllCMS 3 which could take 48/96 measurements in a 0-255/0-1023 ramp could solve some of the calibration issues associated with HW calibration on those poor-QC/cheap displays.
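A skeleton of that pipeline as a Python/NumPy sketch (10-bit tables, a plain gamma 2.2 stand-in for the real encoding, and an identity matrix where a real emulation matrix would go):

import numpy as np

LEVELS = 1024                              # 10-bit processing, 0-1023
ramp = np.arange(LEVELS) / (LEVELS - 1)

prelut = ramp ** 2.2                       # "prelut": undo gamma encoding -> linear light
postlut = ramp ** (1 / 2.2)                # "postlut": re-encode gamma (and, in a real unit, fix grey/white)
emulation_matrix = np.eye(3)               # identity = native gamut; a real matrix mixes the primaries

def display_pipeline(rgb_in):
    # rgb_in: integer triplet 0..1023 from the input; returns panel drive values 0..1023
    linear = prelut[np.asarray(rgb_in)]                        # 3x 1D prelut
    mixed = np.clip(emulation_matrix @ linear, 0.0, 1.0)       # 3x3 primaries mixing in linear light
    idx = np.round(mixed * (LEVELS - 1)).astype(int)
    return np.round(postlut[idx] * (LEVELS - 1)).astype(int)   # 3x 1D postlut

print(display_pipeline((512, 512, 512)))   # identity matrix + matched pre/post curves round-trips the input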

The VCGT tag in profiles cannot store that info. Some GPUs actually have that kind of prelut-matrix-postlut (for example AMD’s AVIVO engine & EDID sRGB gamut emulation).
IDK if there is a shared API between GPU vendors to use this structure in the GPU, so that it could be used in a VCGT2 tag in the future.

    ***

Then you have a LUT3D, which is a set of isolated nodes in a cube (each possible input does not have a precalculated correction; it is interpolated on the fly).
In a HW LUT3D you usually have a 17x17x17 node cube, hence it is expected that the underlying display does not have very bad uncalibrated behaviour; otherwise, with just 17 values in the 0-255/0-1023 ramp, some deviations cannot be corrected.
It is not uncommon that such displays have some full-entry 1D LUTs, then a LUT3D with an N-node-per-side cube.

An XYZLUT display profile taken with N measurement patches could be transformed into an M-nodes-per-side LUT3D trying to emulate a transformation between an arbitrary ICC profile (source colorspace) and that display profile. 17 nodes per side is somehow equivalent to a profile with about 4-5k measurements equally spaced.
LUT3D interpolation (HW or software) has to fill the gaps. The color management engine has to fill the gaps in an ICC profile.
The cool thing is that 1 XYZLUT profile can generate an arbitrary number of LUT3Ds by just varying the source colorspace. You can see a LUT3D as a particular transformation with source and destination colorspaces fixed.

A “software LUT3D” like Resolve/madVR uses “general purpose” computing units in the graphics card to store and interpolate RGB outputs for an arbitrary RGB input in an N-node-per-side LUT3D.
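A minimal sketch of that on-the-fly lookup, assuming a 17x17x17 cube (an identity cube here so the result is easy to check):

import numpy as np

N = 17
grid = np.linspace(0.0, 1.0, N)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
cube = np.stack([r, g, b], axis=-1)            # (17, 17, 17, 3) node table, identity placeholder

def lut3d_lookup(rgb):
    # rgb: float triplet 0..1; blends the 8 surrounding nodes (trilinear interpolation)
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (N - 1)
    i0 = np.floor(pos).astype(int)
    i1 = np.minimum(i0 + 1, N - 1)
    f = pos - i0                               # fractional position inside the cell
    out = np.zeros(3)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = (f[0] if dr else 1 - f[0]) * \
                    (f[1] if dg else 1 - f[1]) * \
                    (f[2] if db else 1 - f[2])
                out += w * cube[i1[0] if dr else i0[0],
                                i1[1] if dg else i0[1],
                                i1[2] if db else i0[2]]
    return out

print(lut3d_lookup((0.5, 0.25, 0.8)))          # any input between nodes is interpolated

With only 17 nodes per side, anything the display does wrong between two adjacent nodes cannot be corrected, which is why a reasonably well-behaved (or pre-linearized) panel is assumed.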

    • This reply was modified 4 years, 3 months ago by Vincent.
    #22378

    Wire
    Participant
    • Offline

I got here by reading a discussion elsewhere on these forums of someone wishing Dell UltraSharps could be programmed by Argyll.

Also, is a DisplayCAL XYZLUT ICC profile technically a 3D LUT profile, or is the “3D” term reserved for devicelink-style approaches? The terminology is maddening.

Those Dells, or Eizo CS, some Zx from HP, or other models have a “lut matrix” which can emulate smaller colorspaces like sRGB.
The pipeline works as follows:
RGB data -> 3x 1D LUT (undo gamma encoding, “prelut”) -> 3×3 matrix mixing the primaries to emulate the gamut -> 3x 1D LUT (re-do gamma encoding and fix white/grey, “postlut”) -> RGB data to panel

Thanks Vincent

In a world of my design 🙂 I would not extend GPU VCGT to V2; I would put that logic in the display, as is being done. Give the display a desired personality that covers its use cases. Central to my way of thinking is that powerful microprocessors and fast communications are dirt cheap, so old ways of partitioning the system can be left behind in favor of more modular architectures. That said, I’m keenly aware of the tensions in progress.

I need to go back and re-peruse the vast amount of documentation in the FAQ and user’s guide here before spending much more of your time, but I’ll ramble on here anyway with more questions / observations…

    The essence of my post is trying to gain an understanding why devices have the designs they do, and sorting out the terminology. Thanks for helping me…

When I read the ICC profile spec, it talks about 3x non-interdependent 8/16-bit LUTs. There are three, they are independent, sounds like a 3x 1D LUT. Yet ICC tools can do the gamut bending, no? So it occurs to me they have to refer to something else, the PCS, and work in input/output pairs? From what you describe about “pre/post” LUTs in displays, it seems like the same idea.

So I think what you mean by “undo the gamma” is some kind of normalizing to a device-local PCS (or not)? How can the display know what the input gamma is unless it’s told, so this suggests that to do its emulation the display must present itself to the source as a well-defined color space (say a reference like Adobe RGB). The tension is that when you do this you lose degrees of freedom in OS-based color management.

    My thinking still has a gap here that maybe you can help

It seems like the interactions of device LUTs and OS custom device profiles become somewhat mutually exclusive, or they will end up working against each other, the best case of the downside being the addition of quantization noise in the form of odd contours when data that are already coded for perceptual efficiency are decoded and recoded.

And this may interact with other display features connected to the display personality logic, such as the interplay between Dell Uniformity Correction (UC) and the color personality LUTs described in another thread.

So as to HW calibration options, it seems you want to either give the device a reference personality (e.g. Adobe RGB) and configure the system to run it as such, or run it native and use a custom OS profile.

For a WCG Dell covering DCI and Adobe, if you want the benefits of UC you have to use at least a 6500K reference mode. The UP2516D menu “UC” setting calls it “Calibrated”, suggesting interplay between color and uniformity logic. If you want a non-6500K white you are forced to turn UC off.

Dell TRC is selectable as “PC” or “Mac” (my god) but you are locked at 6500K. The good news is I think this leaves the gamut close to native. Then profile this, leaving the cal temperature “as measured” and your choice of TRC. I chose sRGB to get towards what you nicely refer to as “sRGB-2020”. This gives the benefits of UC with close to the native gamut range, and is close to your ideal of “you can have it all!” When handed un-managed images, it degrades gently, with good tonal response and somewhat too much saturation, which many people like, however wrong that may be.

    It seems like Dell DUCCS can be a way to pre-cal, but prolly doesn’t add much here as the lifting is done by OS.

As to using DUCCS to modularize display personality, I’m not sure what range of personalities it can do… I still don’t have an i1D so I can’t explore the solution. The SW comes from X-Rite and is long in the tooth. I assume it gets you good conformance to common refs like DCI-P3, Adobe etc., where a corp deployment can just load a stock working space and be good to go? I also imagine that with DUCCS-style personalities the thinking truly crosses over to the needs of video, where OS CM is not assumed and you tell the display “be the best 709 you can be..”

    IDK—Struggling to make sense of all this

Each possible value in the range (usually 0-1023) will have its calculated correction. As with the 1D LUT in VCGT you do not measure all of them, just a few (10/20), TOO FEW unfortunately for the current QC of some display vendors. A hypothetical ArgyllCMS 3 which could take 48/96 measurements in a 0-255/0-1023 ramp could solve some of the calibration issues associated with HW calibration on those poor-QC/cheap displays.

Yes, thank you for clarifying this. A 3D LUT has a numbers problem: increasing the precision grows the node count (and table size) with the cube of the nodes per side.

The VCGT tag in profiles cannot store that info. Some GPUs actually have that kind of prelut-matrix-postlut (for example AMD’s AVIVO engine & EDID sRGB gamut emulation).
IDK if there is a shared API between GPU vendors to use this structure in the GPU, so that it could be used in a VCGT2 tag in the future.

    I follow you, per my thoughts above about modularity.

I don’t grok the term “EDID sRGB gamut emulation” the way you are using it here. My reference to EDID was a reach in context for the idea that EDID could make a statement to a CM about the device’s gamut limits. …Maybe nvm about this, my thoughts didn’t gel.

    ***

Then you have a LUT3D, which is a set of isolated nodes in a cube (each possible input does not have a precalculated correction; it is interpolated on the fly).
In a HW LUT3D you usually have a 17x17x17 node cube, hence it is expected that the underlying display does not have very bad uncalibrated behaviour; otherwise, with just 17 values in the 0-255/0-1023 ramp, some deviations cannot be corrected.
It is not uncommon that such displays have some full-entry 1D LUTs, then a LUT3D with an N-node-per-side cube.

Ah ha! So it’s about the details of various display design approaches. Given the nature of pro video equipment history, it’s common to have a TV or especially a projector with a pretty odd response. So you use the GPU to linearize, then use a renderer with a LUT3D to account for its weirdness. In a pathological manifestation this leads to things like Darby-vision HDMI dongles 🙂 I’m joking! omg

An XYZLUT display profile taken with N measurement patches could be transformed into an M-nodes-per-side LUT3D trying to emulate a transformation between an arbitrary ICC profile (source colorspace) and that display profile. 17 nodes per side is somehow equivalent to a profile with about 4-5k measurements equally spaced. LUT3D interpolation (HW or software) has to fill the gaps. The color management engine has to fill the gaps in an ICC profile. The cool thing is that 1 XYZLUT profile can generate an arbitrary number of LUT3Ds by just varying the source colorspace. You can see a LUT3D as a particular transformation with source and destination colorspaces fixed.

You mean “somehow” as in “it’s amazing”, or as in “idk, a miracle occurs”?

    OK yes, this last sentence is what I mean by “pushing the PCS back”

A “software LUT3D” like Resolve/madVR uses “general purpose” computing units in the graphics card to store and interpolate RGB outputs for an arbitrary RGB input in an N-node-per-side LUT3D.

If I grok it all, LUT3D avoids the complexity of OS color (WCS) and skips the idea of a CMM. For high-res video, to get the most from gaming graphics, it cuts to the chase. Conventions of this tech percolated to the HTPC community, which has always been Windows and skips ICC, which Microsoft never made a point of involving users in because Apple had already staked out a lot of branding turf with ColorSync and MSFT’s go-to tactic of Embrace, Extend, Extinguish could not hit paydirt. Win ICC features were supported (WCS) but never pushed by marketing. Gaming always says “just give me the HW”. It’s been a tussle. Strange that in video support Apple hasn’t done much better than MSFT, what with QuickTime and in spite of FCP. Confusing. Now even Apple doesn’t really care about ColorSync, per the lameness with XYZLUT display profiles. gar.

Re the XYZLUT profile bug, I asked for help at colorsync-users and after a lot of give and take, John Gnaegy from Apple heard my complaint and promised to get the bug report into the tracker at Apple. His last words were, “I think this bug is known…” Oh well.

    I’m super glad for the help. Thank you.

    #22380

    Vincent
    Participant
    • Offline

I got here by reading a discussion elsewhere on these forums of someone wishing Dell UltraSharps could be programmed by Argyll.

Also, is a DisplayCAL XYZLUT ICC profile technically a 3D LUT profile, or is the “3D” term reserved for devicelink-style approaches? The terminology is maddening.

Those Dells, or Eizo CS, some Zx from HP, or other models have a “lut matrix” which can emulate smaller colorspaces like sRGB.
The pipeline works as follows:
RGB data -> 3x 1D LUT (undo gamma encoding, “prelut”) -> 3×3 matrix mixing the primaries to emulate the gamut -> 3x 1D LUT (re-do gamma encoding and fix white/grey, “postlut”) -> RGB data to panel

Thanks Vincent

In a world of my design 🙂 I would not extend GPU VCGT to V2; I would put that logic in the display, as is being done. Give the display a desired personality that covers its use cases. Central to my way of thinking is that powerful microprocessors and fast communications are dirt cheap, so old ways of partitioning the system can be left behind in favor of more modular architectures. That said, I’m keenly aware of the tensions in progress.

I think that it will be easier for GPU vendors to fix a standard than for display vendors to stop relying on obscure proprietary libraries for LUT uploading, but I understand your point.

When I read the ICC profile spec, it talks about 3x non-interdependent 8/16-bit LUTs. There are three, they are independent, sounds like a 3x 1D LUT. Yet ICC tools can do the gamut bending, no? So it occurs to me they have to refer to something else, the PCS, and work in input/output pairs? From what you describe about “pre/post” LUTs in displays, it seems like the same idea.

So I think what you mean by “undo the gamma” is some kind of normalizing to a device-local PCS (or not)? How can the display know what the input gamma is unless it’s told, so this suggests that to do its emulation the display must present itself to the source as a well-defined color space (say a reference like Adobe RGB). The tension is that when you do this you lose degrees of freedom in OS-based color management.

You undo the gamma encoding because you need to have gamma = 1 (linear) values when adding the RGB primaries into the mix in the 3×3 matrix.

The matrix stores the “emulated primaries” composition. Native gamut is the identity matrix. For example, AdobeRGB gamut emulation should have near-identity primary emulation for G & B, but R is going to have a mix of the three.

The postlut re-encodes gamma as if the display had TRC curve “X” (the calibration target).
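One way such a matrix can be computed, as a sketch (the AdobeRGB numbers are the standard published chromaticities; the “native” primaries below are an assumed wide-gamut example whose green/blue happen to sit near AdobeRGB’s, chosen purely to illustrate the point above):

import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    # Build the linear-RGB -> XYZ matrix from (x, y) chromaticities of R, G, B and white.
    xyz = lambda c: np.array([c[0] / c[1], 1.0, (1 - c[0] - c[1]) / c[1]])
    P = np.column_stack([xyz(p) for p in primaries])
    return P * np.linalg.solve(P, xyz(white))   # scale the primaries so they sum to the white point

D65 = (0.3127, 0.3290)
# Source space to emulate: AdobeRGB (published primaries).
M_adobe = rgb_to_xyz_matrix([(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)], D65)
# Assumed native panel: green/blue close to AdobeRGB, red deeper (made-up wide-gamut example).
M_native = rgb_to_xyz_matrix([(0.68, 0.32), (0.21, 0.71), (0.15, 0.06)], D65)

# 3x3 matrix applied between prelut and postlut: linear AdobeRGB -> linear native RGB.
emulation = np.linalg.inv(M_native) @ M_adobe
print(np.round(emulation, 3))   # G and B columns near identity; the R column mixes all three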

Should DUCCS fix the prelut so values are actually gamma 1 when they get into the matrix, instead of doing all of it in the postlut? You can question it. I just described what it does; it behaves that way.
Complain to X-Rite.

For a WCG Dell covering DCI and Adobe, if you want the benefits of UC you have to use at least a 6500K reference mode. The UP2516D menu “UC” setting calls it “Calibrated”, suggesting interplay between color and uniformity logic. If you want a non-6500K white you are forced to turn UC off.

Dell TRC is selectable as “PC” or “Mac” (my god) but you are locked at 6500K. The good news is I think this leaves the gamut close to native. Then profile this, leaving the cal temperature “as measured” and your choice of TRC. I chose sRGB to get towards what you nicely refer to as “sRGB-2020”. This gives the benefits of UC with close to the native gamut range, and is close to your ideal of “you can have it all!” When handed un-managed images, it degrades gently, with good tonal response and somewhat too much saturation, which many people like, however wrong that may be.

UC limitations and OSD blocking are MODEL specific.
Dell 2016 models behave in a different way than 2013 models. Older models lock brightness but let you fine-tune RGB gains, so it is possible to get P2 conditions (D50, 160 cd/m2) even if brightness is locked to a high value for everyday use at D65.

    And those are different between manufacturers. AFAIK in Eizo/NEC you can have full display configuration regardless of UC ON/OFF.

    It seems like Dell DUCCS can be a way to pre-cal, but prolly doesn’t add much here as the lifting is done by OS.

    Which lifting?

1. Use DUCCS, PME, or Colorbration on a cheap WG monitor with HW cal. Then measure it with DisplayCAL.
2. Grey/white/gamut emulation is OK? Done.
3. Not OK? Run DisplayCAL on top of it. If your GPU is good for the calibration task, gradients are neutral & smooth. Done.
4. Banding because of the GPU? Bad luck; add a HW upgrade to the TODO list (better monitor or GPU).

It does not matter if you choose native gamut or sRGB in DUCCS. This works.

As to using DUCCS to modularize display personality, I’m not sure what range of personalities it can do… I still don’t have an i1D so I can’t explore the solution. The SW comes from X-Rite and is long in the tooth. I assume it gets you good conformance to common refs like DCI-P3, Adobe etc., where a corp deployment can just load a stock working space and be good to go? I also imagine that with DUCCS-style personalities the thinking truly crosses over to the needs of video, where OS CM is not assumed and you tell the display “be the best 709 you can be..”

    IDK—Struggling to make sense of all this

DUCCS & PME & Colorbration are… not fitted to the current quality control of the native uncalibrated response of their displays.

If the native uncalibrated display has a neutral grey, you can expect good results, with some uncertainty in the achieved white point and typically a little lower contrast than expected.

If the native uncalibrated display is not so well behaved… well… grey results can be improved (see previous reply).

    It is not a sensible choice to buy a WG display without means of measuring it.

The VCGT tag in profiles cannot store that info. Some GPUs actually have that kind of prelut-matrix-postlut (for example AMD’s AVIVO engine & EDID sRGB gamut emulation).
IDK if there is a shared API between GPU vendors to use this structure in the GPU, so that it could be used in a VCGT2 tag in the future.

    I follow you, per my thoughts above about modularity.

I don’t grok the term “EDID sRGB gamut emulation” the way you are using it here. My reference to EDID was a reach in context for the idea that EDID could make a statement to a CM about the device’s gamut limits. …Maybe nvm about this, my thoughts didn’t gel.

The driver/AMD apps ask the display for its EDID data and “believe” the EDID data to be accurate (native gamut is as the EDID says, white is perfect D65, etc.).
Then the app computes a lut-matrix-lut which results in sRGB emulation, on the fly, on the GPU, system-wide, regardless of color profiles. A WG display will behave as if it were an sRGB display.

Unfortunately, AFAIK it is not customizable. Otherwise every monitor could have the kind of calibration that the Eizo CS (or your Dell) have, all in the GPU.
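Roughly what that GPU-side emulation amounts to, as a sketch (the chromaticities stand in for already-decoded EDID values; the wide-gamut numbers are just an example of what a panel might report, and the piecewise curves are the standard sRGB transfer function):

import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    # Linear-RGB -> XYZ matrix from (x, y) chromaticities of R, G, B and white.
    xyz = lambda c: np.array([c[0] / c[1], 1.0, (1 - c[0] - c[1]) / c[1]])
    P = np.column_stack([xyz(p) for p in primaries])
    return P * np.linalg.solve(P, xyz(white))

D65 = (0.3127, 0.3290)
native = rgb_to_xyz_matrix([(0.68, 0.32), (0.265, 0.69), (0.15, 0.06)], D65)  # example EDID-reported primaries
srgb = rgb_to_xyz_matrix([(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)], D65)
matrix = np.linalg.inv(native) @ srgb            # linear sRGB -> linear native RGB

def srgb_decode(v):                              # "prelut": encoded sRGB -> linear
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

def srgb_encode(v):                              # "postlut": linear -> encoded
    return np.where(v <= 0.0031308, v * 12.92, 1.055 * np.power(v, 1 / 2.4) - 0.055)

def emulate_srgb(rgb):
    # Framebuffer sRGB triplet (0..1) -> drive values for the wide-gamut panel.
    linear = srgb_decode(np.asarray(rgb, dtype=float))
    return np.clip(srgb_encode(np.clip(matrix @ linear, 0.0, 1.0)), 0.0, 1.0)

print(np.round(emulate_srgb((1.0, 0.0, 0.0)), 3))   # pure sRGB red gets some native G (and a touch of B) mixed in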

An XYZLUT display profile taken with N measurement patches could be transformed into an M-nodes-per-side LUT3D trying to emulate a transformation between an arbitrary ICC profile (source colorspace) and that display profile. 17 nodes per side is somehow equivalent to a profile with about 4-5k measurements equally spaced. LUT3D interpolation (HW or software) has to fill the gaps. The color management engine has to fill the gaps in an ICC profile. The cool thing is that 1 XYZLUT profile can generate an arbitrary number of LUT3Ds by just varying the source colorspace. You can see a LUT3D as a particular transformation with source and destination colorspaces fixed.

You mean “somehow” as in “it’s amazing”, or as in “idk, a miracle occurs”?

I mean “as GIMP/Photoshop do when they use profiles”. A LUT3D is a particular fixed color transform between 2 profiles / colorspaces. It is commonly used in video because they mostly use 2/3 source colorspaces. “Profile”, i.e. measure how your display behaves, get a LUT3D for your working colorspace… you are done.
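A sketch of that “baking” step, with two hypothetical callables standing in for what the CMM derives from the source profile and from the display’s XYZLUT profile:

import numpy as np
from itertools import product

def bake_lut3d(source_rgb_to_pcs, display_pcs_to_rgb, nodes=17):
    # source_rgb_to_pcs: hypothetical callable, source RGB (0..1) -> PCS (e.g. XYZ)
    # display_pcs_to_rgb: hypothetical callable, PCS -> display RGB (0..1), i.e. the inverse
    #                     lookup the CMM builds from the display's XYZLUT profile
    grid = np.linspace(0.0, 1.0, nodes)
    lut = np.empty((nodes, nodes, nodes, 3))
    for i, j, k in product(range(nodes), repeat=3):
        lut[i, j, k] = display_pcs_to_rgb(source_rgb_to_pcs((grid[i], grid[j], grid[k])))
    return lut   # written out as e.g. a .cube file, or uploaded to the display/renderer

# Re-baking with a different source transform (Rec.709, AdobeRGB, ...) against the same
# display profile yields a different LUT3D from the same single set of display measurements.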

In photo you may want to use more colorspaces: sRGB, P3-D65, AdobeRGB, eciRGBv2, ProPhotoRGB, some external printer service’s 8-bit exchange colorspace, variants of these (like ProStar, a ProPhoto with L*). Also you want to preview (softproof) how things look in a myriad of device-specific colorspaces (printer profiles for each printer-paper combination)… hence the ICC approach is better, because you (I, we) want it to be flexible.

    • This reply was modified 4 years, 3 months ago by Vincent.
    #22388

    Wire
    Participant
    • Offline

    Thanks for more perspective V

By (heavy) “lifting” I meant that if you consider OS + GPU vs. display-internal color management just for the job of display alignment, it becomes apparent that these need to have supporting roles, and friction between them can create noise. The assumption of OS ICC for DTP is almost total. But as even cheap displays become excellent, we are inexorably drawn to simplify and modularize.

I see this in the concept of the display pre/post LUT based transform example you offered: in order to “undo” gamma to run the emulation, the display has to know what to undo. The only way it can know is by assuming a personality, and a system overlord (admin) ensures the OS+GPU config matches the display config. The personality will naturally end up being more than gamma; it will likely be a common abstract device (working) space (tho of course not necessarily).

    Once you can do the display personality very well you may lose interest in ICC CM for common cases, because the display is an acceptable  expression of the medium’s ideal device—eg television. Yes, yes DTP is more complex; print has this little bugaboo called ink and paper.

Naturally such refinement of modularity leads to less emphasis on tools/techniques that bridge gaps, and I can begin to see why CM bugs in the macOS GUI get triaged to “never fix”. It’s actually a sign of progress. Lulz!

As a user I struggle to gain a sense of the partitioning of fidelity concerns. The wisdom to keep it straight is not obvious, but hard won.

I’ll go a bridge too far and say: if stuff like DUCCS works really well, you either want DisplayCAL/Argyll to embrace and help define/refine the display API and improve control of display HW, or you tend to lose interest in DisplayCAL because the display personalities will become canned and we’ll discover that RCA had figured all this out by 1953 with a little end-to-end architecture called TV.

Ok, strike that! I’ll narrow my point to high-five your suggestion that DUCCS (if it works reliably) is excellent for precal and makes the most of the scarce bits in the pipeline.

BTW I stand by my comment that 8bpc was a good design cut but not quite enough 🙂 The fact that high-bit is deep-faked (haha) using gimmicks like FRC shows the industry got up against a wall on this stuff. YCbCr-style chroma subsampling is even worse; yikes, nightmare stuff that we may hopefully move beyond.

    The difference in thinking between DTP and TV gurus is perpetually interesting to me as I observe the tech progress crossing back and forth in industry.

Your last points confirm what I was suspecting about videophiles: as I read up on the way vid nerds (high-end video consumers) think about “cube LUTs”, they have a poor man’s ICC point of view. There’s a way of talking about it that is like ICC without the CIE. While some were trying to get a PC involved in tweaking their component-video DVD setup, along the way the PC replaced their disc player and their TV. Most want to do display alignment for two reasons: 1) make rich people’s home theaters sexy, 2) get skin tones to finally look right. I am not making this up!

    Sometimes the small matter of viewing environment adaptation is brought up, but quickly forgotten because if you need it your theater is not spec.

It occurs to me that cube LUT display tech maybe has its biggest value in perceptual adaptation to the environment: that’s where it can really help. Along the lines of ambient bias lighting tech, etc.

I also see how the cube LUT approach to display alignment ended up on HTPCs, as video data rates are hard work and an expensive GPU adds value, tho if they can put a complete networked PC that can decode H.265 in a $1000 TV, I’m at a loss to comprehend what about home theater color control requires a full gaming rig. I still have to figure that out ;/

Thanks again for engaging on this topic, it’s really helping me sort out vocabulary and thinking.
