It is only natural to switch to γ = 2.2, but mainly to agree with the rest of the world. Neither is particularly “better” for image viewing – the difference between the two is trivial and irrelevant, except insofar as it makes images look different on two different machines.
γ is mainly a way to align the metric in our color space with human perception: humans adapt to varying light levels, even in small areas of the visual field, and our judgment of color differences is relative (i.e. relative to magnitude, logarithmic) rather than absolute (linear).
Any operation that interpolates between arbitrary colors (e.g. image rotation, scaling, or compositing with transparency) introduces errors under any γ ≠ 1.0. Careful image processing applications should therefore convert to γ = 1.0, perform the operation, and convert back; in practice none of them do, because with enough pixels, especially mostly opaque ones, users don't know they should care.
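To make the decode-operate-encode workflow concrete, here's a minimal sketch in plain Python. It assumes a simple 2.2 power-law gamma rather than the exact piecewise sRGB transfer curve, and blends a single channel; the function names are just illustrative.

```python
GAMMA = 2.2

def decode(v, gamma=GAMMA):
    """Encoded value in [0, 1] -> linear light (simple power-law gamma)."""
    return v ** gamma

def encode(v, gamma=GAMMA):
    """Linear light -> encoded value (inverse of decode)."""
    return v ** (1.0 / gamma)

# 50/50 blend of black (0.0) and white (1.0):
naive = (0.0 + 1.0) / 2                             # blend in encoded space
correct = encode((decode(0.0) + decode(1.0)) / 2)   # decode, blend, re-encode

print(round(naive, 2))    # 0.5  -- displays darker than half brightness
print(round(correct, 2))  # 0.73 -- physically correct average of the two lights
```

The naive blend lands at encoded 0.5, which a γ = 2.2 display renders as only about 22% of the light; blending in linear space gives roughly 0.73 encoded, i.e. a true 50% light mix.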
But storing colors in 8-bit color with γ = 1.0 would be a disaster, because we’d be devoting half of our storage space to the very bright colors, while leaving the dark colors only a few levels of distinction: the darks would end up extremely noisy.
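You can count the waste directly. The sketch below (again assuming a plain 2.2 power law) counts how many of the 256 codes land at or below perceptual mid-gray, which corresponds to roughly 0.218 in linear light, under each encoding:

```python
GAMMA = 2.2
# Linear light corresponding to perceptual mid-gray under a 2.2 power law.
mid_linear = 0.5 ** GAMMA   # ~0.218

# 8-bit linear encoding: code n represents linear light n/255.
linear_codes = sum(1 for n in range(256) if n / 255 <= mid_linear)

# 8-bit gamma encoding: code n represents linear light (n/255) ** GAMMA.
gamma_codes = sum(1 for n in range(256) if (n / 255) ** GAMMA <= mid_linear)

print(linear_codes, gamma_codes)  # 56 128
```

Linear encoding leaves only 56 codes for the entire darker perceptual half of the range, while gamma encoding splits the codes evenly, 128 each side of mid-gray.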
Incidentally, this last is the reason that digital cameras, even though they collect 10 or 12 bits per channel of data, end up with noise in the blacks: cameras gather photons and store them linearly, meaning that the light parts of the image are silky smooth because we can make many fine color distinctions, but not the darks. Hence the general advice to “expose to the right.”
Soon enough, we’ll just be storing and manipulating images in γ = 1.0 floating point, and thereby get the best of both worlds: operations are technically linear, but the encoding space is assigned logarithmically.
Much of what you have stated here is incorrect. For example, the encoding of a photo affects only quantization noise, and has nothing to do with the noise in the dark pixels (due to shot noise).
There are plenty of sites where you can learn more about camera noise.
No, he's spot on about the advantages of 'linear light' processing. It even makes a big difference to bilinear interpolation when you're doing texture mapping.
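For the texture-mapping case, a tiny sketch of bilinear filtering on a black/white checkerboard neighborhood shows the same effect (hypothetical single-channel values, simple 2.2 power-law gamma):

```python
GAMMA = 2.2

def bilerp(c00, c10, c01, c11, fx, fy):
    """Standard bilinear interpolation of four corner samples."""
    top = c00 * (1 - fx) + c10 * fx
    bot = c01 * (1 - fx) + c11 * fx
    return top * (1 - fy) + bot * fy

# A 2x2 checkerboard of texels, as encoded (gamma-space) values.
c00, c10, c01, c11 = 0.0, 1.0, 1.0, 0.0

# Sample the exact center of the quad, both ways:
encoded_space = bilerp(c00, c10, c01, c11, 0.5, 0.5)
linear_space = bilerp(c00**GAMMA, c10**GAMMA, c01**GAMMA,
                      c11**GAMMA, 0.5, 0.5) ** (1 / GAMMA)

print(round(encoded_space, 2))  # 0.5  -- checkerboard filters down too dark
print(round(linear_space, 2))   # 0.73 -- correct average brightness
```

Filtering in encoded space darkens the minified checkerboard; filtering in linear light preserves its average brightness, which is why GPU sRGB texture formats decode before filtering.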
This is an incredibly confusing subject, thanks to the overlap of gamma as an encoding technique with all the other issues around color management. Poynton's Gamma FAQ is one of my favorite resources ( http://www.poynton.com/GammaFAQ.html ), but even after five years working on image processing software I still have to make an effort to wrap my head around it all.