As mentioned in the previous post, in an ideal world the first level of pixel brightness should coincide with the brightness control's black level cut-off point. The first attachment shows this ideal case; the lower part of the attachment shows the video data after gamma encoding.
Run the Monitor Black Point Check from the following link with no gamma correction applied and brightness set to maximum:
http://www.drycreekphoto.com/Learn/C...itor_black.htm
It is common to see no activity until the pixel value reaches 8 or more, which equates to just over 3% of full scale (8/255 ≈ 3.1%). So in the real world the first pixel levels sit below the CRT black level cut-off and will therefore never be seen.
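To make the arithmetic concrete, here is a minimal Python sketch that converts 8-bit pixel values to a percentage of full scale and flags those falling below an assumed black-level cut-off (the 3.1% cut-off is an illustrative value chosen to match the observation above, not a measured figure):

CUTOFF = 0.031  # hypothetical display black point, ~3.1% of full scale

for level in range(12):
    fraction = level / 255  # 8-bit pixel scale runs 0..255
    state = "visible" if fraction > CUTOFF else "crushed to black"
    print(f"level {level:3d} = {fraction:5.1%}  {state}")

With this assumed cut-off, levels 1 through 7 are all reported as crushed to black, matching the behaviour described above.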
Now, if we were to use excessive gamma encoding to make the 1% box in the following link visible, we could end up with a host of problems:
http://www.pbase.com/image/67769565
The second attachment illustrates the effect of applying excessive gamma encoding to bring out shadow detail in these dark areas.
In the last attachment I have plotted the shapes of different gamma encoding curves:
Gamma 1.0: a straight line
Gamma 1.8: the outdated Apple Mac standard
Gamma 2.2: now accepted as the de facto PC standard
Gamma 2.6: included just to illustrate the effect of increasing the value
Note the steepness of the initial part of the last three curves.
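To make the curve shapes concrete, here is a minimal Python sketch that tabulates the usual encoding power law V_out = V_in^(1/gamma) for the four values above (the sample input levels are my own choice for illustration); the first rows show how steeply the higher-gamma curves rise near black:

gammas = (1.0, 1.8, 2.2, 2.6)

# Header row, one column per gamma value
print("input" + "".join(f"   g={g:<4}" for g in gammas))
for pct in (1, 2, 5, 10, 25, 50, 75, 100):
    v_in = pct / 100
    row = "".join(f"  {v_in ** (1 / g):7.3f}" for g in gammas)
    print(f"{pct:4d}%{row}")

For example, a 1% input is left at 0.010 by gamma 1.0 but lifted to roughly 0.123 by gamma 2.2 and 0.170 by gamma 2.6.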
The difference between gamma 2.2 and gamma 2.6 is a 24% gain in output brightness for an input pixel brightness of 5%; mid grey (50% input) would be about 3.5% brighter. An image viewed with excessive gamma correction will exhibit some of the following symptoms: posterisation, banding and increased noise artifacts in the shadow detail. The image will also appear too bright and lacking in contrast, with an over-sensitivity to small brightness edits.
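As a rough check on these figures, here is a minimal Python sketch, assuming the same V_out = V_in^(1/gamma) power law as above:

for v_in in (0.05, 0.50):
    out_22 = v_in ** (1 / 2.2)  # gamma 2.2 encoded output
    out_26 = v_in ** (1 / 2.6)  # gamma 2.6 encoded output
    print(f"input {v_in:.0%}: gamma 2.2 -> {out_22:.1%}, "
          f"gamma 2.6 -> {out_26:.1%}, "
          f"relative gain {out_26 / out_22 - 1:.1%}, "
          f"absolute gain {out_26 - out_22:.1%}")

Under these assumptions the 5% input comes out about 23% brighter in relative terms under gamma 2.6, and mid grey gains roughly 3.7 percentage points, close to the figures quoted above.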
Part 3 will study the workings of the Adobe Gamma method.