If we're only encoding one blue deviation and one red deviation, would the green deviation just be the remaining color deviation from gray? Does that mean that "negative" Cr and Cb would be "allowing" more room for green, which is what appears to be happening in the slide?
@jzhanson Right; it’s just a 2D coordinate system for chroma, with colors laid out as above. I wouldn’t read too much into positive/negative values; after all, you could just shift the coordinate system so the whole region is positive. Putting grey at the origin is just nice psychologically, i.e., it’s easy to think about the origin as the place where the chroma is neutral.
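To make the "gray at the origin" point concrete, here is a minimal sketch of the luma/chroma split. The weights are the standard BT.601 ones (an assumption; the slide may use a different variant), but the key behavior holds for any of them: for a gray input (R=G=B), both chroma components vanish, and pure green drives both Cb and Cr negative.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert gamma-encoded R'G'B' in [0,1] to (Y', Cb, Cr), chroma centered at 0."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: weighted sum of the channels
    cb = 0.564 * (b - y)                   # scaled blue deviation from luma
    cr = 0.713 * (r - y)                   # scaled red deviation from luma
    return y, cb, cr

print(rgb_to_ycbcr(0.5, 0.5, 0.5))  # gray: chroma lands at the origin (0, 0)
print(rgb_to_ycbcr(0.0, 1.0, 0.0))  # pure green: both Cb and Cr go negative
```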
I'm curious whether linear interpolation works between two colors represented in Y'CbCr. For example, if we linearly interpolate between points in the first and third quadrants of the chart above, we should find some value between green and purple, but the interpolation ends up at gray. Is this expected behavior, or should we switch to a color representation such as RGB before performing the interpolation, then switch back?
@acpatel Great question: what's the "right" way to do color interpolation (such as alpha blending between colors during image compositing)? Ideally, perhaps, you would interpolate between (emission or absorption) spectra, which capture complete information about color. Unfortunately, there is more than one spectrum that corresponds to any given color encoded in, say, RGB. In other words, there are "metamers." So if you don't already have spectral data, you simply can't compute the objectively correct answer.

One thing you can do, however (and this is discussed a lot in image compositing), is convert to a "linear" color space, i.e., one where brightness or intensity is a linear function of the numerical values you've stored. Recall, for instance, that sRGB has a nonlinear profile so that a larger range of values is used for brighter colors. Also, as you point out, a space like (linear) RGB may make more sense for linear interpolation than a perceptual space like Y'CbCr, since adding RGB values has at least some kind of physical interpretation: light is additive. For instance, turning on two light bulbs and taking a photo should give you the same image as turning on each individually and adding the images.
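Here is a minimal sketch of that "convert to linear, interpolate, convert back" recipe. The piecewise transfer function is the standard sRGB one; the example colors (black and white) are arbitrary. Note how the naive midpoint in gamma-encoded values comes out darker than the midpoint computed in linear light.

```python
def srgb_to_linear(c):
    """Decode one gamma-encoded sRGB channel in [0,1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel in [0,1] back to sRGB."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def lerp_linear(a, b, t):
    """Interpolate two sRGB triples in linear light; return an sRGB triple."""
    return tuple(
        linear_to_srgb((1 - t) * srgb_to_linear(x) + t * srgb_to_linear(y))
        for x, y in zip(a, b)
    )

black, white = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
naive = tuple(0.5 * x + 0.5 * y for x, y in zip(black, white))
print(naive)                          # (0.5, 0.5, 0.5): a darker-looking gray
print(lerp_linear(black, white, 0.5))  # ≈ (0.735, 0.735, 0.735)
```

The difference matters most for blending and compositing: averaging gamma-encoded values underweights the brighter color, which is exactly the kind of artifact linearizing first avoids.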
Is there a particular reason why we use Y'CbCr for videos?