Musings on Dynamic Range, Bits and Stops

by Dave Wilson on Dec. 28, 2009, under Computer, Photography

I got into a conversation with Guy Tal, Jim Goldstein and Pete Carr on Twitter this afternoon, and it quickly became clear that it wasn’t going to work in 140-character chunks, so I reckoned a blog post would be in order. This is something I’ve been meaning to write about for a while, so I’m happy I finally have an excuse.

The issue under discussion was the relationship between the number of bits used to store a High Dynamic Range image and the range of light levels that the image can represent. An HDR image is typically thought of as being represented with 32 bits per colour component (32 bits of red, 32 bits of green and 32 bits of blue for every pixel). Some of the most common HDR file formats actually use fewer than 32 bits per component but, regardless, the question comes up of what each of those bits represents and how the number of bits dictates whether an image is “High Dynamic Range” or not.

I had always assumed that the dynamic range of an image depended to some extent on the absolute maximum light level that could be recorded. On reading Christian Bloch’s rather good “HDRI Handbook” last year, I was surprised, however, to read that the dynamic range is defined instead as the ratio of the number of discrete values a measurement can represent (2**32, for example, if we’re talking about a 32-bit number) to the smallest measurable difference (the change in the signal represented by one least significant bit). Nowhere in this calculation does any absolute value appear: the dynamic range is a ratio of the largest and smallest values that can be represented, but it says nothing about the actual quantities that those values represent.
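As a rough sketch of that definition (the function name here is mine, purely for illustration), notice that the ratio depends only on the number of bits and never on an absolute light level:

```python
# Illustrative only: dynamic range as a ratio of the number of representable
# values to the smallest step (one least significant bit). No absolute light
# level appears anywhere in the calculation.
def dynamic_range(bits):
    levels = 2 ** bits      # number of discrete values the measurement can take
    smallest_step = 1       # one LSB
    return levels // smallest_step

print(dynamic_range(12))    # 4096       -> a 12-bit value: 4096:1
print(dynamic_range(32))    # 4294967296 -> a 32-bit value: roughly 4.3 billion:1
```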

This didn’t seem intuitive to me. Where was the reference? How would you know how bright a given pixel would be? After thinking about it for a while, however, it became clear that I was forgetting to take into account the camera as a whole rather than merely the sensor.

The camera’s sensor has a fixed maximum signal that it can record and a certain number of bits of resolution. For example, a 12-bit sensor can differentiate between 2**12 (or 4096) different levels of red, green and blue light. At some absolute level of light, the sensor saturates and outputs its maximum value. If you increase the light level falling on the sensor above this amount, either by exposing the sensor for longer or by increasing the brightness of the light, you get no new information and the signal stays saturated.
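A tiny sketch of that clipping behaviour (the names and numbers are just illustrative):

```python
# A 12-bit sensor records values from 0 to 4095; anything brighter is clipped.
MAX_LEVEL = 2 ** 12 - 1

def record(light_level):
    return min(int(light_level), MAX_LEVEL)

print(record(1000))     # 1000 -> recorded faithfully
print(record(5000))     # 4095 -> saturated: the highlight is clipped
print(record(50000))    # 4095 -> still 4095; the extra light adds no information
```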

Thinking about this from a photography point of view, however, this is exactly what you would expect: it corresponds to overexposure. In this case, we close the aperture, reducing the brightness falling on the sensor, or speed up the shutter to reduce the time the sensor is exposed to the light. Either change reduces the total amount of light falling on the sensor and allows us to take another, hopefully correctly exposed, image. Although the maximum light level the sensor sees has dropped and it no longer saturates, it still records 4096 different levels. The recorded dynamic range is the same, but we’ve shifted the recorded values so that all of the actual image brightness levels fall within the recording capabilities of the sensor.

This is exactly as you would expect with a film camera too: if the film is overexposed, it “saturates” to opaque and can’t store any more information (there is the complication of logarithmic vs. linear response here, but let’s gloss over that for now since it’s not really relevant to this discussion). In that case, you reduce your exposure so that the amount of light hitting the film no longer saturates the medium.

Considering things this way, it is now clear (to me at least) that the absolute value of light represented by the sensor’s (or film’s) maximum output value is irrelevant. The photographer adjusts the exposure to ensure that the brightest highlight in the image is at or just below the sensor’s saturation point and, hey presto, you end up taking maximum advantage of the sensor’s dynamic range. The absolute maximum light level that will cause the sensor to saturate is, therefore, related to the sensitivity and not the dynamic range. If the sensor saturates with very little light hitting it, we end up with a high-sensitivity (or high-ISO) sensor that allows us to record images in lower light than one which saturates at higher light levels.

So how does this tie back in to the normal photographic system of defining exposure in terms of stops or EVs? For every bit you add to a sensor, the number of values it can represent doubles and, as a result, its theoretical dynamic range also doubles (ignoring noise, which reduces this somewhat). Thinking about exposure calculations, you know that increasing your exposure by a stop also doubles the amount of light hitting the sensor. There is, therefore, a direct correspondence between one stop and one bit. Reducing your exposure by a stop divides the sensor output in two, or shifts the value one bit to the right. Adding a stop to your exposure does the opposite, doubling the sensor output or shifting the value one place to the left.
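In code, that stop-to-bit correspondence is just a shift (the sensor reading here is a made-up value for illustration):

```python
# One stop = a factor of two = one bit shift.
value = 1024                    # some hypothetical sensor reading

plus_one_stop = value << 1      # 2048: one stop more light doubles the value
minus_one_stop = value >> 1     # 512:  one stop less light halves the value

print(plus_one_stop, minus_one_stop)
```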

Using the bit-shifting idea, we can, therefore, get some idea of how much dynamic range an HDR image may have based upon how we recorded it. I typically use 3 images bracketed 2 stops apart when shooting HDRs. If the original scene contains very bright highlights or lots of dark shadow areas, I will use more brackets, but 3 is usually enough.

My Nikon D90 has a 12-bit sensor and, at low ISO values, has close to 12 bits or 4096 levels of dynamic range. Adding +2 stops to the exposure has the effect of multiplying the sensor output values by 2**2 or 4, or shifting them 2 bits to the left. Taking away 2 stops has the opposite effect, dividing the output values by 4 or shifting them 2 bits to the right. When I combine all three exposures (-2 stops, 0 stops, +2 stops) into an HDR image, therefore, the maximum dynamic range I could possibly record would be 12 + 2 + 2 bits or 16 bits of information. Even though I may save this image in a 32-bit file format, I’m not actually storing pixel values that extend from the lowest to the highest possible value in the file format. Regardless of the fact that I’m not using the full dynamic range of the recording system (the 32-bit pixel component representation), I still have 16 bits of information per colour component, which is twice the number of bits I would have had if I had saved in JPEG and 4 bits (or 16 times) more than I would have got from a single RAW file from the camera. The dynamic range of my image is something like 65536:1 versus 256:1 for JPEG or 4096:1 for my 12-bit RAW format.
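Here’s that arithmetic as a quick sketch (the variable names are mine, and this is just the bookkeeping from the paragraph above, not actual HDR merging):

```python
# Bookkeeping for a 3-frame bracket at +/-2 stops on a 12-bit sensor.
sensor_bits = 12      # Nikon D90 RAW
bracket_step = 2      # stops between frames
frames = 3            # -2, 0 and +2 stops

# Each extra frame extends the range by bracket_step stops (= bits):
# 12 + 2 + 2 = 16 bits of information in the combined image.
total_bits = sensor_bits + bracket_step * (frames - 1)

print(total_bits)                              # 16
print(f"{2 ** total_bits}:1")                  # 65536:1
print(f"JPEG: {2 ** 8}:1, RAW: {2 ** 12}:1")   # 256:1 and 4096:1 for comparison
```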

Confused? If so, leave a comment and I’ll try to clarify this a bit!
