10  Constant luminance

The term luminance is widely misused in video. See Relative luminance, on page 258, and Appendix A, YUV and luminance considered harmful, on page 567.

Video systems convey colour image data using one component that approximates lightness, and two other components that represent colour, absent lightness. In Colour science for video, on page 287, I will detail how relative luminance can be formed as a weighted sum of linear RGB values, each of which is proportional to optical power. Transmitting a single component from which relative luminance can be reconstructed is the principle of constant luminance; a colour scientist uses the term constant luminance to refer to strict adherence to that principle. Preferably, a nonlinear transfer function acts on that component to impose perceptually uniform coding.

Standard video systems do not strictly adhere to the principle; instead, they implement an engineering approximation. The colour scientist's weighted sum of linear RGB is not computed. Instead, a nonlinear transfer function is applied to each linear-light RGB component individually; then a weighted sum of the nonlinear, gamma-corrected R'G'B' components forms what I call luma. (Many video engineers carelessly call this quantity luminance.) In standard video systems, luma is formed using the theoretical RGB weighting coefficients of colour science, but in a block diagram different from the one a colour scientist would expect: in video, gamma correction is applied before the matrix, instead of after, as a colour scientist would prefer.

Historically, transmission of a single component representative of greyscale enabled compatibility with “black-and-white” television. Human vision has poor acuity for colour compared to luminance. Placing “black-and-white” information into one component enables chroma subsampling to exploit vision’s low acuity for chroma, reducing data rate (historically, bandwidth) in the two other components. In colour imaging, it is sensible to code a “black-and-white” component even if “black-and-white” compatibility isn’t required (for example, in JPEG).

The term “monochrome” is sometimes used instead of “greyscale.” However, in classic computer graphics terminology monochrome refers to bilevel (1-bit) images or display systems, so I avoid that term.

I’ve been placing “black-and-white” in quotes. At the invention of television, the transmitted signal represented greyscale, not just black and white: then, and now, greyscale would be a better term. Historical video literature refers to the “signal representing luminance,” the “luminance signal,” or the “luminance component.” All of these terms were once justified; however, they are now dangerous: to use the term “luminance” suggests that relative luminance (Y) can be decoded from that component. However, without strict adherence to the principle of constant luminance, luminance cannot be decoded from the greyscale component alone: two other components (typically CB and CR) are necessary.

In this chapter, I will explain why and how all current video systems depart from the principle of constant luminance. If you are willing to accept this departure from theory as a fact, then you may safely skip this chapter and proceed to Introduction to luma and chroma, on page 121, where I will introduce how the luma and colour difference signals are formed and subsampled.

The principle of constant luminance

Applebaum, Sidney (1952), “Gamma correction in constant luminance color television systems,” in Proc. IRE, 40 (11): 1185–1195 (Oct.).

Ideally, the so-called monochrome component in colour video would mimic a greyscale system: Relative luminance would be computed as a properly weighted sum of (linear-light) R, G, and B tristimulus values, according to the principles of colour science that are explained in Transformations between RGB and CIE XYZ, on page 307. At the decoder, the inverse matrix would reconstruct linear R, G, and B tristimulus values:

Figure 10.1  Formation of relative luminance. Linear R, G, and B are matrixed [P] to form relative luminance Y, which requires about 11 bits; at the decoder, the inverse matrix [P⁻¹] reconstructs linear R, G, and B.
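As a concrete sketch of Figure 10.1, the following Python fragment forms relative luminance as a weighted sum of linear-light R, G, and B. The weights shown are the BT.709 values, assumed here for illustration; the weights appropriate to a particular set of primaries are derived in Transformations between RGB and CIE XYZ, on page 307.

```python
# Sketch of Figure 10.1: relative luminance as a weighted sum of
# linear-light R, G, B (weights assume BT.709 primaries and white;
# other primaries yield other weights).

def relative_luminance(r, g, b):
    """Relative luminance Y from linear-light R, G, B in [0, 1]."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(relative_luminance(1.0, 1.0, 1.0))  # 1.0: reference white
print(relative_luminance(0.0, 1.0, 0.0))  # 0.7152: green dominates luminance
print(relative_luminance(0.0, 0.0, 1.0))  # 0.0722: blue contributes little
```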

Two colour difference (chroma) components would be computed, to enable chroma subsampling; these would be conveyed to the decoder through separate channels:

Figure 10.2  Hypothetical chroma components (linear-light). Relative luminance Y and two colour difference components are matrixed [P] from linear R, G, and B; at the decoder, the inverse matrix [P⁻¹] reconstructs R, G, and B.

Set aside the chroma components for now: No matter how they are handled, in a true constant luminance system all of the relative luminance is recoverable from the greyscale component alone.

If relative luminance were conveyed directly, 11 bits or more would be necessary. Eight bits barely suffice if we use nonlinear image coding, introduced on page 31, to impose perceptual uniformity: We could subject relative luminance to a nonlinear transfer function that mimics vision’s lightness sensitivity. Lightness can be approximated as CIE L* (to be detailed on page 259); L* is roughly the 0.42-power of relative luminance.

Figure 10.3  Encoding nonlinearly coded relative luminance. Relative luminance Y, formed by the matrix [P], is subjected to a power function with exponent γE = 0.42 to produce an approximately perceptually uniform, L*-like signal requiring only 8 bits.

The decoder would apply the inverse transfer function:

Figure 10.4  Decoding nonlinearly coded relative luminance. The decoder applies a power function with exponent γD = 2.4 to the L*-like signal to recover relative luminance Y, then the inverse matrix [P⁻¹] to reconstruct linear R, G, and B.
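The following sketch puts numbers to Figures 10.3 and 10.4. The exponents 0.42 and 2.4, taken from the figures, are approximate inverses (0.42 × 2.4 ≈ 1.008), so apart from quantization the round trip very nearly closes.

```python
# Sketch of Figures 10.3 and 10.4: nonlinearly coded relative luminance.
# Encode with a 0.42-power (roughly CIE L*), quantize to 8 bits,
# decode with a 2.4-power.

GAMMA_E = 0.42   # encoding exponent (approximates lightness)
GAMMA_D = 2.4    # decoding exponent (approximate inverse of 0.42)

def encode(y):
    """8-bit, approximately perceptually uniform code from relative luminance."""
    return round(255 * y ** GAMMA_E)

def decode(code):
    """Relative luminance recovered from an 8-bit code."""
    return (code / 255) ** GAMMA_D

y = 0.18                    # an 18% grey card
print(encode(y))            # 124
print(decode(encode(y)))    # about 0.177

# Why 8 bits suffice only with nonlinear coding: near black, an 8-bit
# *linear* code would jump from Y = 1/255 to Y = 2/255, a 100% relative
# step, whereas the 0.42-power coding places those luminances 8 codes apart.
print(encode(1 / 255), encode(2 / 255))   # 25 33
```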

If a video system were to operate in this manner, it would conform to the principle of constant luminance: All of the relative luminance would be present in, and recoverable from, the greyscale component.

Compensating for the CRT

Unfortunately for the theoretical block diagram – but fortunately for video, as you will see in a moment – the electron gun of a historical CRT display introduces a power function having an exponent of about 2.4:

Figure 10.5  The CRT transfer function. The chain of Figures 10.3 and 10.4 feeds a CRT, whose electron gun imposes a power function with an exponent of about 2.4 between signal value and light output.

In a constant luminance system, the decoder would have to invert the display’s power function. This would require insertion of a compensating transfer function – roughly a 1⁄2.4-power function – in front of the CRT:

Figure 10.6  Compensating the CRT transfer function. A 1⁄2.4-power function is inserted in front of the CRT; the decoder now contains a 2.4-power function (inverting the perceptually uniform coding) and a 1⁄2.4-power function (compensating the CRT), separated by the inverse matrix [P⁻¹].

The decoder would now include two power functions: an inverse L* function with an exponent close to 2.4 to invert the perceptually uniform coding, and a power function with an exponent of 1⁄2.4 – that is, about 0.42 – to compensate for the CRT’s nonlinearity. Figure 10.6 represents the block diagram of an idealized, true constant luminance video system.

Departure from constant luminance

Having two nonlinear transfer functions at every decoder was historically expensive and impractical. Notice that the exponents of the power functions are 2.4 and 1⁄2.4 – the functions are inverses! To avoid the complexity of incorporating two power functions into a decoder’s electronics, we begin by rearranging the block diagram, interchanging the “order of operations” of the matrix and the CRT compensation:

Figure 10.7  Rearranged decoder. The inverse matrix [P⁻¹] and the 1⁄2.4-power CRT compensation are interchanged, so that the 2.4 and 1⁄2.4 power functions become adjacent.

Upon rearrangement, the two power functions are adjacent. Since the functions are effectively inverses, the combination of the two has no net effect. Both functions can be dropped from the decoder:

Figure 10.8  Simplified decoder. The adjacent 2.4 and 1⁄2.4 power functions cancel and are removed; the decoder comprises just the inverse matrix [P⁻¹], followed by the display’s intrinsic 2.4-power function.

Decoder signal processing simply inverts the encoder matrix. The 2.4-power function is intrinsic to a CRT display; alternative display technologies exhibit a comparable mapping from signal value to tristimulus. Rearranging the decoder requires that the encoder also be rearranged, so as to mirror the decoder and achieve correct end-to-end reproduction of the original RGB tristimulus values:

Figure 10.9  Rearranged encoder. Each of linear R, G, and B is subjected to a 0.42-power function to form R', G', and B'; the matrix [P] then forms luma Y'. The decoder applies the inverse matrix [P⁻¹], and the display imposes its 2.4-power function.
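A numerical sketch of the rearranged scheme – the encoder of Figure 10.9 followed by the simplified decoder of Figure 10.8 – shows that the original tristimulus values are reproduced at the display. The luma weights and colour difference scale factors below are BT.601-style SD values, assumed here for illustration; the encoding exponent is taken as exactly 1⁄2.4 so that the round trip closes exactly.

```python
# Sketch of Figure 10.9 (rearranged encoder) feeding the simplified
# decoder of Figure 10.8: gamma correction precedes the matrix, the
# decoder merely inverts the matrix, and the display's 2.4-power
# function completes the chain.

DISPLAY_GAMMA = 2.4
ENCODE_GAMMA = 1 / DISPLAY_GAMMA   # exactly 1/2.4 here, so the loop closes

def encode(r, g, b):
    """Linear-light RGB -> (Y', CB, CR); BT.601-style weights (assumed)."""
    rp, gp, bp = (c ** ENCODE_GAMMA for c in (r, g, b))   # gamma-correct first
    y = 0.299 * rp + 0.587 * gp + 0.114 * bp              # luma, not luminance
    cb = (bp - y) / 1.772
    cr = (rp - y) / 1.402
    return y, cb, cr

def decode_and_display(y, cb, cr):
    """(Y', CB, CR) -> linear-light RGB produced at the display."""
    rp = y + 1.402 * cr                                    # invert the matrix...
    bp = y + 1.772 * cb
    gp = (y - 0.299 * rp - 0.114 * bp) / 0.587
    return tuple(c ** DISPLAY_GAMMA for c in (rp, gp, bp)) # ...then the display

rgb = (0.2, 0.6, 0.1)
print(decode_and_display(*encode(*rgb)))   # (0.2, 0.6, 0.1), to rounding error
```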

Figure 10.9 represents the basic signal flow for all video systems; it will be elaborated in later chapters.

Luma

Television engineers who are uneducated in colour science often mistakenly call luma (Y’) by the name luminance and denote it by the unprimed symbol Y. This leads to great confusion, as I explain in Appendix A, on page 567.

The rearranged flow diagram of Figure 10.9 is not mathematically equivalent to the arrangement of Figures 10.1 through 10.4! In Figure 10.9, the encoder’s matrix does not operate on (linear-light) tristimulus signals, and relative luminance is not computed. Instead, a nonlinear quantity – denoted luma and symbolized Y' – is computed and transmitted. Luma involves an engineering approximation: the system no longer adheres strictly to the principle of constant luminance (though it is often mistakenly claimed to do so).

In the rearranged encoder, we no longer use CIE L* to optimize for perceptual uniformity; instead, we use the inverse of the CRT’s inherent transfer function. A 0.42-power function accomplishes approximately perceptually uniform coding, and reproduces tristimulus values proportional to those in the original scene. The following chapter, Picture rendering, explains that for a normal scene the 0.42 value must be altered to about 0.5 to accommodate a perceptual effect. The alteration depends upon artistic intent, and upon display and viewing conditions. Ideally, display systems should have adjustments for picture rendering depending upon display and viewing conditions, but they rarely do!

“Leakage” of luminance into chroma

Until now, we have neglected the colour difference components. In the rearranged block diagram of Figure 10.9, colour difference components are “matrixed” from nonlinear (gamma-corrected) R'G'B':

Figure 10.10  Chroma components. Each of linear R, G, and B is subjected to a (roughly 0.5-power) transfer function to form R', G', and B'; the matrix [P] forms Y', CB, and CR. The decoder applies the inverse matrix [P⁻¹], and the display imposes its 2.4-power function.

In a true constant luminance system, no matter how the colour difference signals are handled, all of the relative luminance is carried by the greyscale component. In the rearranged system, most of the relative luminance is conveyed through the Y' channel; however, to the extent that Y' isn’t equal to Y, some relative luminance can be thought of as “leaking” into the colour difference components. If the colour difference components were not subsampled – for example, in a Y'CBCR 4:4:4 system – this leakage would be inconsequential. However, the colour difference components are formed precisely to enable subsampling! So, we now turn our attention to subsampling. Figure 10.11 shows Figure 10.10’s practical block diagram augmented with subsampling filters in the chroma paths.

Figure 10.11  Subsampled chroma components. The block diagram of Figure 10.10, augmented with subsampling filters in the CB and CR paths.
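To put a number on the leakage just described, here is a hypothetical calculation for 100% saturated green, again assuming BT.601-style weights and a 1⁄2.4-power transfer function. Decoding the luma component alone, as if it were nonlinearly coded relative luminance, would return less than half of the colour’s true relative luminance; the remainder reaches the display only by way of CB and CR.

```python
# "Leakage" of relative luminance into the colour difference components,
# illustrated for 100% green (BT.601-style weights and a 1/2.4-power
# transfer function are assumptions for illustration).

GAMMA = 2.4
WR, WG, WB = 0.299, 0.587, 0.114

r, g, b = 0.0, 1.0, 0.0                # linear-light saturated green
y_true = WR * r + WG * g + WB * b      # relative luminance: 0.587

rp, gp, bp = (c ** (1 / GAMMA) for c in (r, g, b))
y_luma = WR * rp + WG * gp + WB * bp   # luma Y': also 0.587 for this colour

# Treating luma as if it were nonlinearly coded luminance, and decoding
# it alone, yields far too little light:
print(y_true)              # 0.587
print(y_luma ** GAMMA)     # about 0.278, less than half the true luminance

# The missing luminance is restored only because CB and CR also reach the
# decoder: it has "leaked" into the colour difference components.
```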

Figure 10.12  Y' and CB/CR waveforms at the green-magenta transition of SD colourbars, following idealized 4:2:2 chroma subsampling. The luma waveform is plotted in grey; CB and CR share the same waveform, plotted in magenta. The transition rate (rise time) of the CB and CR components is half that of luma.

Figure 10.13  Luminance waveform at the green-magenta transition of colourbars, shown in the solid line. The dashed line reflects luminance in a hypothetical true constant luminance system.

Figure 10.14  Failure to adhere to constant luminance is evident in the dark band in the green-magenta transition of colourbars. The dark band is found upon displaying any colourbar signal that has been subject to chroma subsampling.

With conventional coding, some of the relative luminance traverses the chroma pathways. Figure 10.12 shows the idealized Y'CBCR waveforms at the green-magenta transition of colourbars, with 4:2:2 chroma subsampling. Figure 10.13 shows, in the solid line, the luminance that results after conventional decoding. Subsampling not only removes detail from the colour components; it removes detail from the “leaked” relative luminance. We have to ask, “What’s lost?”

Livingston, Donald C. (1954), “Reproduction of luminance detail by NTSC color television systems,” in Proc. IRE 42 (1): 228–234.

The departure from theory is apparent in the dark band appearing between the green and magenta colour bars of the standard video test pattern, depicted in Figure 10.14. With conventional video coding, in areas where luminance detail is present in saturated colours, relative luminance is incorrectly reproduced: it is reproduced too dark, and saturation is reduced. This inaccurate conveyance of high-frequency luminance is the price that must be paid for lack of strict adherence to the principle of constant luminance. Such “Livingston” errors are perceptible by experts, but they are very rarely noticeable – let alone objectionable – in normal imagery.
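A greatly simplified calculation shows where the dark band comes from. At the green-magenta boundary, the chroma filter averages the CB and CR of the two bars; at a boundary pixel whose luma is also midway, the decoded result is a grey whose relative luminance falls far below that of either bar. (100% bars, BT.601-style weights, and an ideal 2.4-power display are assumed; real filters and bar levels differ, but the dip remains.)

```python
# Why a dark band appears at the green-magenta transition (Figure 10.14).
# 100% bars, BT.601-style weights and a 2.4-power display are assumed;
# chroma subsampling is modelled crudely as averaging the two bars' CB, CR.

GAMMA = 2.4
WR, WG, WB = 0.299, 0.587, 0.114

def encode(r, g, b):
    """Linear-light RGB -> (Y', CB, CR), as in the earlier sketches."""
    rp, gp, bp = (c ** (1 / GAMMA) for c in (r, g, b))
    y = WR * rp + WG * gp + WB * bp
    return y, (bp - y) / 1.772, (rp - y) / 1.402

def displayed_luminance(y, cb, cr):
    """Decode, apply the display's power function, weigh linearly."""
    rp = y + 1.402 * cr
    bp = y + 1.772 * cb
    gp = (y - WR * rp - WB * bp) / WG
    r, g, b = (max(c, 0.0) ** GAMMA for c in (rp, gp, bp))  # clip FP residue
    return WR * r + WG * g + WB * b

green   = encode(0.0, 1.0, 0.0)   # luma 0.587
magenta = encode(1.0, 0.0, 1.0)   # luma 0.413

# A boundary pixel: luma midway, chroma averaged by the subsampling filter.
y_mid  = (green[0] + magenta[0]) / 2   # 0.5
cb_mid = (green[1] + magenta[1]) / 2   # about 0 (the two bars' CB cancel)
cr_mid = (green[2] + magenta[2]) / 2   # about 0

print(displayed_luminance(*green))                   # about 0.587
print(displayed_luminance(*magenta))                 # about 0.413
print(displayed_luminance(y_mid, cb_mid, cr_mid))    # about 0.19: the dark band
```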

To summarize signal encoding in video systems: First, a nonlinear transfer function – gamma correction, comparable to a square root – is applied to each of the linear R, G, and B tristimulus values to form R', G', and B'. Then, a suitably weighted sum of the nonlinear components is computed to form the luma signal (Y'), which approximates the lightness response of vision. Colour difference components blue minus luma (B'-Y') and red minus luma (R'-Y') are formed. (Luma, B'-Y', and R'-Y' can be computed from R', G', and B' simultaneously, through a 3×3 matrix.) The colour difference components are then subsampled (filtered), using one of several schemes – including 4:2:2, 4:1:1, and 4:2:0 – to be described starting on page 124.

This chapter has outlined how, in the development of NTSC, an engineering approximation to constant luminance was adopted rather than “true” constant luminance. This engineering decision has served spectacularly well, and has been carried into component video systems (SD and HD), and into modern compression systems such as JPEG, MPEG, and H.264.

Since about 2000, the majority of television receivers have incorporated digital signal processing that obviates the engineering argument made in 1950: the two nonlinear functions of Figure 10.6 could today easily be implemented by lookup tables. Some purists believe that in the modern age we should abolish the approximation and adopt the correct theoretical approach. However, the video infrastructure of SD and HD is built on Figure 10.9 (or, with chroma subsampling, Figure 10.11). It seems unreasonable to change the block diagram of video, and impose a huge conversion burden, unless substantial benefit can be shown. I appreciate the theoretical argument; however, I am unaware of any significant benefit that would result from such a change, so I argue that we should not change the block diagram of video.
