Gamma: how it affects image quality
Many devices used in computer imaging and video have a non-linear relationship between pixel values or voltage and physical light. Most of these devices can be characterized by an equation of the form
L = (V + d)^gamma .
That single gamma number describes how to convert pixel values or voltage to light intensity (usually measured in watts per square meter) or the reverse. The 'd' parameter is an offset, or black point, which is frequently left out of calculations (but should always be taken into account for CRT-like devices!).
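The transfer function above can be sketched in a few lines. This is a minimal illustration, not a device model; the default gamma of 2.5 and the zero black point are assumptions chosen for the example.

```python
def to_light(v, gamma=2.5, d=0.0):
    """Convert a normalized pixel value or voltage V (0.0-1.0) to
    relative light intensity using L = (V + d)^gamma.
    gamma and the black-point offset d are device-dependent; the
    defaults here are illustrative, not universal."""
    return (v + d) ** gamma

def to_value(light, gamma=2.5, d=0.0):
    """Inverse transform: recover V from relative light intensity L."""
    return light ** (1.0 / gamma) - d

# A mid-scale input produces far less than half the light on a
# gamma-2.5 device (0.5 ** 2.5 is roughly 0.18):
mid = to_light(0.5)
```

Note how strongly nonlinear the curve is: halving the input value cuts the light output to well under a quarter.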
Unfortunately, many people don't understand gamma, or think they understand it and propagate mistakes that lead to poor image quality.
Some people assume that a gamma function is used in encoding images only to compensate for the transfer function of their CRT. This is wrong. Gamma encoding exploits the eye's known sensitivity to store the data in a way that minimizes the visible difference between the reproduced and original scenes with a limited number of values (bits). In other words, the image is stored in a way that is perceptually uniform -- an equal number of pixel values is devoted to each equal step of perceived lightness. Human lightness sensitivity follows a gamma somewhere between 2.0 and 3.0, depending on the viewing conditions. Typical CRT gamma is between 2.0 and 2.7 (due to the physical characteristics of the electron guns). These numbers were not designed to be close; it just worked out that way, and the fact that they are similar simplified the development of early television and computer graphics. Even when we use a different display device (LCD, plasma, printer, etc.) we still want to encode our images with a gamma around 2.0-2.5 to maintain perceptual uniformity.
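Perceptual uniformity can be made concrete with a rough experiment. Using the simple model that perceived lightness goes as L^(1/gamma_eye) -- with gamma_eye = 2.4 picked arbitrarily from the 2.0-3.0 range quoted above -- we can compare how big a perceptual jump one code step makes at the dark end of the scale under linear versus gamma-2.2 encoding:

```python
def perceived_lightness(linear_light, eye_gamma=2.4):
    # Crude model from the text: perceived lightness ~ L^(1/eye_gamma).
    # eye_gamma = 2.4 is an arbitrary value inside the quoted
    # 2.0-3.0 range, used only for illustration.
    return linear_light ** (1.0 / eye_gamma)

def step(code, encoding_gamma, levels=256):
    """Perceived-lightness jump between adjacent 8-bit codes when the
    codes are decoded with the given encoding gamma."""
    lo = perceived_lightness((code / (levels - 1)) ** encoding_gamma)
    hi = perceived_lightness(((code + 1) / (levels - 1)) ** encoding_gamma)
    return hi - lo

# Near black, adjacent linear codes are several times farther apart
# perceptually than adjacent gamma-2.2 codes:
linear_jump = step(1, encoding_gamma=1.0)
gamma_jump = step(1, encoding_gamma=2.2)
```

The large dark-end jumps under linear encoding are exactly the visible banding in shadows that the next paragraph describes.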
Using an incorrect gamma encoding, or applying it incorrectly, can lead to lower quality images. For example, some scanners (especially the cheaper ones) return pixel values that have no gamma encoding -- the pixels correspond directly to linear light measurement (gamma 1.0). But this means that the values don't correspond to the way WE see the image. In this case, many of the pixel values are wasted on highlights (bright areas) that we cannot distinguish, while almost no pixel values are used for the shadows (darker areas) or midtones (everything near 50% gray). Even worse, when this data is converted to a perceptually uniform encoding, you typically lose many of the values to quantization (that is, you have a lot of information but only a few bins, or bits, to put it in). With 8 bit pixels, converting from a gamma of 1.0 to a gamma of 2.2 means that you've gone from 256 values down to 192 values (losing one quarter of the values!). You would have to scan the image at 12 to 14 bits in a gamma 1.0 space to match the quality you get from scanning 8 bits in a gamma 2.2 space. And, as usual, the more you convert the image among various gamma spaces, the lower the image quality gets.
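The code loss from requantizing is easy to verify by brute force: decode each of the 256 source codes to linear light, re-encode with the destination gamma, round back to 8 bits, and count the distinct results. A sketch (the exact count depends on the rounding scheme, so it may differ slightly from the figure quoted above):

```python
def surviving_codes(src_gamma, dst_gamma, levels=256):
    """Count the distinct output codes left after requantizing 8-bit
    data from one gamma encoding to another. The exact total depends
    on rounding details, but it is always well below 256 when going
    from gamma 1.0 to gamma 2.2."""
    out = set()
    for v in range(levels):
        light = (v / (levels - 1)) ** src_gamma
        out.add(round(light ** (1.0 / dst_gamma) * (levels - 1)))
    return len(out)

# Requantizing gamma 1.0 -> 2.2 collapses a sizeable fraction of the
# 256 codes; re-encoding into the same gamma loses nothing:
lost_to_2_2 = 256 - surviving_codes(1.0, 2.2)
```

The collisions all happen in the highlights, where the gamma-1.0 data spends codes faster than the eye can tell them apart.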
When an image is viewed on a CRT, it must be converted to a gamma of about 2.5 at some point. If it gets converted from a gamma of 1.0 to 2.5 for display, you're going to see a lot of banding (because with 8 bit converters there are only 173 levels left available to each color) and not much detail. If the image was converted when it was created, then the video card doesn't have to do much additional conversion, and you see all the details in the image (because close to 256 levels are available to each color). There is usually some conversion because the video card LUT is set to correct for your monitor's deviation from an ideal device. The same sort of conversion takes place when printing -- and again, if the image starts in a gamma 1.0 space, the resulting print will have fewer levels of gray or color available and you will see banding.
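The video card LUT mentioned above can be sketched as a simple per-channel table. The gammas here are assumptions for illustration: content encoded at 2.2, a monitor whose native response is 2.5. Because the correction is mild, the table stays close to the identity and most code values survive:

```python
def correction_lut(content_gamma=2.2, monitor_gamma=2.5, levels=256):
    """Build the per-channel LUT a video card could load so that a
    monitor with a different native gamma still reproduces the
    intended light levels. The net exponent is the ratio of the two
    gammas; both values are illustrative assumptions."""
    exp = content_gamma / monitor_gamma
    return [round((v / (levels - 1)) ** exp * (levels - 1))
            for v in range(levels)]

lut = correction_lut()
# A mild correction keeps most of the 256 codes distinct -- compare
# that with the heavy losses of a full 1.0 -> 2.5 conversion.
distinct = len(set(lut))
```

This is why pre-encoding images matters: the LUT only has to absorb the small deviation between the encoding gamma and the monitor, not the whole gamma curve.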