Question about sRGBtoY inverse companding #53
-
Hi Dustin @dustinwilson Thank you for commenting. In three years I think you are the first person to ask me about this, and I have written about it in some of the docs (somewhere...), but let me explain in depth.

**Background**
**TL;DR Part I**
The point of most of the above conversation is that whether to use the piecewise curve, the simple gamma, or just some LUT is up to your needs and the specific workflow. In the case of APCA, it's the "end of the road": it's for predicting perceptual lightness contrast, Lc. In the early versions of SAPC and APCA, I was using 2.213 as the exponent, as that had unity at code value #777. Last year I shifted to 2.4, along with shifting several other major constants in APCA, to fit the empirical data of the continuing studies of users viewing self-illuminated displays. ^2.4 matches the actual output of monitors (some even higher: 2.5, in extreme cases 3.0), and 2.4 has some other benefits in terms of the relationship between green and the red and blue primaries, in a way that is helpful to color-insensitive vision on sRGB monitors (though a different approach is required for Rec2020, to be sure).

**Why Y?**
Y is luminance from CIEXYZ. It is linear, as light is in the real world. If you are doing ray-tracing, or compositing, you'll probably want to be in linear — but not for modeling human perception, which is not at all linear. (This is the key reason that WCAG_2 contrast does not work as intended — it does not follow perception.) So APCA takes the linear Y and curves it to model perception. You may be familiar with L* from CIELAB; APCA does something similar, but where L* is tuned to surface colors (Munsell), the APCA curve is tuned specifically for self-illuminated monitors, and then specifically for readability.

When it comes to how we perceive contrast: in the retina, the cones receive photons, and their electrical output goes to the ganglion cells, which essentially data-compress what we see (opponent process) so that it fits thru our optic nerve, and then heads off through the thalamus to V1 of the visual cortex. V1 filters based on the luminance information in the visual stimuli. After filtering in V1, the filtered visual information is sent off to other areas of the brain for processing. V4, for instance, is known to be very active in color processing. And depending, the filtered visual data is sent to the visual word form area (VWFA), which leads to lexical processing on the left side of the brain. The VWFA is where whole words and letter pairs are filtered/recognized. And the key to this? Good luminance contrast. Not chroma or hue, but specifically luminance. Luminance is three times higher resolution than chroma, and luminance is where the details are, and that's critical for reading small/thin fonts.

Rather than recite the rest of this part of the theory, here's a link to a white paper in progress: https://www.w3.org/WAI/GL/task-forces/silver/wiki/Visual_Contrast_of_Text_Subgroup/Whitepaper

**Other Questions**
You mentioned:
```js
function sRGBtoY (sRGBcolor) {
            // send 8 bit-per-channel integer sRGB (0xFFFFFF)
  let r = (sRGBcolor & 0xFF0000) >> 16,
      g = (sRGBcolor & 0x00FF00) >> 8,
      b = (sRGBcolor & 0x0000FF);
  // ...
```
EDIT: except no, as JavaScript stores everything as floats, and said bit shifts can have unexpected results when encountering the new CSS4 "intfloats" like 255.763 -- like I said, all deprecated. Note, however, that sRGBtoY in all latest and future versions takes a simple rgba array.
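As a rough sketch, the current array-based form plus the first APCA stage (the soft black clamp I mention below) looks like this. Constant names and values follow the published apca-w3 source, but treat them as illustrative; the canonical values live in that repo:

```js
// Sketch only: canonical constants live in the apca-w3 repo.
const mainTRC = 2.4;                 // simple exponent, matching typical monitor output
const sRco = 0.2126729,              // sRGB (Rec. 709) luminance coefficients
      sGco = 0.7151522,
      sBco = 0.0721750;

// Estimated screen luminance Ys from an [r, g, b] array of 0-255 integers:
function sRGBtoY ([r, g, b]) {
  const simpleExp = (chan) => Math.pow(chan / 255.0, mainTRC);
  return sRco * simpleExp(r) + sGco * simpleExp(g) + sBco * simpleExp(b);
}

// First APCA stage: the soft black clamp, which lifts very dark values
// to account for flare and perceptual behavior near black:
const blkThrs = 0.022,
      blkClmp = 1.414;
const softClamp = (Ys) => (Ys > blkThrs) ? Ys : Ys + Math.pow(blkThrs - Ys, blkClmp);
```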
Yes, and not specifically relevant to our use case, as I mentioned or implied above. You'll notice for instance that the first stage in APCA is a soft black clamp (sketched above), which does something similar to the sRGB piecewise linearization, but is specifically tuned for the contrast prediction. The specific relative luminance is not something that will be used outside of the function — and both text and background are processed identically, and this processing is part of the entire chain that gets us to the perceptual contrast prediction.

**TL;DR Part II**
In short, we are doing that conversion intentionally, based on extensive work here in the lab, and other studies. It is part of the total processing chain for perceptual contrast, and is not intended for any other use.

I hope that answers your question? I may be a bit sleep deprived, so feel free to ask further if I wasn't clear on anything.

Thank you! Andy
-
Wow. No need to lose sleep over my questions haha. Thanks for your detailed answer, and woo, I'm first! You mentioned that the latest version accepts an array instead of an integer for the input. HEAD on this repo still uses the integer. Digging around I found https://github.com/Myndex/apca-w3/blob/master/src/apca-w3-v.0.0.98g-4g.4.js. Is that what you were referring to? I noticed something interesting when looking through the code there, though. Your readme mentions that the current release only applies to sRGB, but that one there has a conversion to Y from Display P3. Display P3 uses D65, but say there were, theoretically, a function for converting ProPhoto RGB to Y for the purposes of calculating APCA contrast (I know the coefficients would be different there, of course), would there need to be chromatic adaptation from D50 to D65? If this is not appropriate to ask here I can move it to that repo's issues.
-
Hi Dustin @dustinwilson You're welcome; by providing a detailed answer I also create material I can use in the FAQ, so it's helpful (I hope). And yes, sorry, I am trying to get things more organized: the published npm code lives at the apca-w3 and bridge-pca repos. That code is preferred; the code here may have experimental or legacy elements I have not had time to comb out as yet. Some of the code here is for the live tool sites also, which again you're welcome to use, but the canonical code is apca-w3.
Issues and discussions are best kept here, so there are no issues or discussions tabs open at those repos. I will, however, move this thread to the discussions tab at this repo, just as an FYI, as it's a FAQ-type thread that should remain open for comment.
Yea, thank you; again, I need to comb through things as it all develops. I literally JUST added the displayP3 transform function, but I consider it experimental pending further validation. I do expect it to be suitable. The color spaces of greatest concern are Rec2020 and Rec2100, as the spectral red primary is outside the M cone response, to the point that someone with protanopia will not see it, whereas they do still see the sRGB red primary. And HDR displays add an entirely different element regarding contrast prediction. We have technology for these in the lab in the experimental stage.
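For reference, that experimental path looks roughly like this. A sketch only: the P3 luminance coefficients below are the usual Display P3 (D65) values, pending the validation I mentioned:

```js
// Sketch of a displayP3toY-style conversion (experimental, values illustrative).
// Same simple-exponent linearization as sRGBtoY, but with Display P3 (D65)
// luminance coefficients in place of the sRGB/Rec. 709 ones.
const p3TRC = 2.4;
const p3Rco = 0.2289746,
      p3Gco = 0.6917385,
      p3Bco = 0.0792869;

function displayP3toY ([r, g, b]) {
  const simpleExp = (chan) => Math.pow(chan / 255.0, p3TRC);
  return p3Rco * simpleExp(r) + p3Gco * simpleExp(g) + p3Bco * simpleExp(b);
}
```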
**Short Answer**
Different primaries always require different transform matrices with different coefficients. P3 is also different.

**Longer, more rantful answer**
ProPhoto is a useful profile as a working space for pre-press. That said, I have long objected to it for use in web content, because it cannot be a display profile: ProPhoto uses imaginary primaries that cannot exist as a real color. I do like using it in intermediate production steps, but I would never use it as a delivery format. An ideal media-delivery colorspace is identical to the space of the display device, and there can never be a ProPhoto display because it uses imaginary primaries.

**Andy Rant**
All common displays and devices are D65. The few notable exceptions are also in closed ecosystems of a dedicated or specific purpose, not for general use, and are still close to D65, such as DCI-P3 at 6300K (not on the Planckian locus), a value chosen for best efficiency with xenon-bulb theatrical projectors; it is specific to the closed DCDM/DCI-P3 pipeline. ACES is ~6000K, for similar reasons specific to the film industry, and ACES spaces are not intended for end-user delivery. Like ProPhoto, ACES spaces are intended for intermediate or archival use, also with imaginary primaries. Some print facilities may have systems calibrated at D50, but again, that's inside their closed production ecosystem.

As far as web content is concerned, it is an open ecosystem, where commonality is most important for content distribution. Live content, meaning a web page with CSS that is dynamic, is never going to be displayed on a display that cannot exist. So what use case is there for a D50 profile with imaginary primaries on a web page? Color-managed browsers will already transform the embedded ICC profile, so having that as a page space merely means choosing CSS colors that "match" a given ProPhoto image. But those CSS colors... are they still 8 bit? Should a designer be able to choose CSS color values that cannot be displayed? Because they can in a ProPhoto space. It means that greater ambiguity is added to a chain of distribution, where an important goal is removing ambiguity.

Sooooo... regarding colorspace and predicting contrast: as mentioned in my earlier reply, APCA predicts contrast in part considering the characteristics of a self-illuminated display. To do this for ProPhoto means first transforming the color to the destination display space: so yes, a D50 to D65 Bradford transform, followed by (and this is important) the SAME gamut mapping that will be used to display that ProPhoto color at the user's device. Last I checked, gamut mapping is not well defined for CSS, though I should see what the new developments are over there.

**TL;DR**
What the CURRENT apca-w3 engine needs to see is the simple relative luminance, relative to the end-user display. As mentioned previously, we're designating this as Ys, for screen luminance. And I've abstracted the layers exactly this way for this reason: the transform to Ys is going to be unique for each color space, with some unique issues for each. displayP3 and AdobeRGB not so much, but increasingly so with Rec2020, and immensely so with Rec2100. And the non-D65 imaginary spaces like ProPhoto need even further consideration.

Thank you! Andy
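P.S. For anyone implementing the ProPhoto case: the adaptation step by itself is just a small matrix multiply. A sketch, using the standard precomputed Bradford D50→D65 matrix (gamut mapping, as noted above, is the harder and entirely separate problem):

```js
// Sketch: chromatically adapt a D50 XYZ color to D65 using the Bradford CAT.
// Matrix values are the standard precomputed Bradford D50-to-D65 matrix.
const BRADFORD_D50_TO_D65 = [
  [ 0.9555766, -0.0230393,  0.0631636],
  [-0.0282895,  1.0099416,  0.0210077],
  [ 0.0122982, -0.0204830,  1.3299098],
];

function adaptD50toD65 ([X, Y, Z]) {
  return BRADFORD_D50_TO_D65.map(
    ([m0, m1, m2]) => m0 * X + m1 * Y + m2 * Z
  );
}

// The adapted Y is still only an intermediate value: gamut mapping to the
// destination display space has to happen before it is meaningful as Ys.
```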
-
I just used ProPhoto as an example that was D50 off the top of my head, but yeah, colors could be selected from that gamut that can't be displayed. I wouldn't exactly know what the user's destination display space would be. For many that would definitely be sRGB (or, truth be told, some horrible approximation of it), but the display I'm viewing this on at present has its own calibrated color profile that is more or less around Adobe RGB. I know that presently the concept of this is strictly for screens on the Web, but I've run into contrast issues when printing, too. Other illuminants based upon physical lighting would come into play there. Again, I know you're focused on self-illuminated screens with this, though. What I think I'll do is just have mine convert to sRGB regardless of what the color began with, and then to Ys, until you all have worked out what to do for different profiles. It's only preliminary anyway, so I can play around with it with some color palettes I have in my scripts already. Thanks for your help. I appreciate it.
-
And therein lies the rub...
I have several wide gamut monitors too... if the peak luminance is under 200-250 cd/m², the display is reasonably calibrated, and the correct coefficients are used, the Ys should be reasonably accurate. sRGB is the current web standard, though.
Well, the project is expanding in scope, just FYI. I do pre-press work too, so it is in mind, along with a lot of other things...
I should create an
Anytime! I'm going to move this thread to the discussions area, in case you come looking for it in the future. Thank you, Andy
-
I am trying to implement the APCA contrast algorithm in my PHP color library, and am trying to figure out what you're doing in your `sRGBtoY` function.

What it looks like you're doing is converting from sRGB to CIEXYZ D65 (because you're not doing chromatic adaptation, and the sRGB input color is D65), but only using the Y lightness channel. However, when you do inverse companding, you're not using the sRGB inverse companding algorithm but something like what other RGB profiles use (though you don't clamp negatives to 0 or values above 1 down to 1), except with sRGB's 2.4 gamma (the exponent inside the companding formula) instead of 2.2. For instance, if I feed your function `0x662d91` I get `0.05333370870294055` back for Y, whereas if it used sRGB's inverse companding it'd be `0.06746021917230773`.
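To make the comparison concrete, here is a sketch of the two linearizations side by side (the piecewise constants are the standard IEC 61966-2-1 values, and the luminance coefficients are from Rec. 709; I'm paraphrasing your code, so the exact names differ):

```js
const coefR = 0.2126729, coefG = 0.7151522, coefB = 0.0721750;

// What sRGBtoY does now: a simple 2.4 exponent on the encoded values.
const simple = (c8) => Math.pow(c8 / 255, 2.4);

// Standard sRGB inverse companding (IEC 61966-2-1 piecewise curve).
const piecewise = (c8) => {
  const c = c8 / 255;
  return (c <= 0.04045) ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
};

function toY (hex, linearize) {
  const r = (hex & 0xFF0000) >> 16,
        g = (hex & 0x00FF00) >> 8,
        b =  hex & 0x0000FF;
  return coefR * linearize(r) + coefG * linearize(g) + coefB * linearize(b);
}

toY(0x662d91, simple);    // ≈ 0.053334 (what I get from your function)
toY(0x662d91, piecewise); // ≈ 0.067460 (standard sRGB relative luminance)
```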
Is there any particular reason for this?