In a previous blog post, where I delved into the question of over- and underexposure with CinemaDNG RAW, I stumbled upon the seeming similarities between the flat image of RAW and the Log (logarithmic) profile curve, whose exposure f-stop range emulates the iris of the human eye. Let me reiterate what I wrote previously on this topic, namely what Kurt Lancaster states in his Cinema Raw: Shooting and Color Grading with the Ikonoskop, Digital Bolex and Blackmagic Cinema Cameras: that the BMCC and BMPCC convert 16-bit linear to 12-bit log RAW. The image coming from both cameras looks a lot like the flat Log profile and quite different from the ungraded RAW of the Blackmagic URSA, which provides more saturated colours and lacks the typical flat image of its predecessors. I concluded that the worlds of RAW and logarithmic processing seem akin in the early Blackmagic Design cameras. BMCuser Eddie Barton (who also happens to be Digital Bolex’s colour engineer) commented on my conclusion, and with his permission I will quote him in full (with slight typo corrections and additions for clarification), as his words are very enlightening and informative.
“Raw” as a term has come to mean many different things and now lacks specificity. In general, when people refer to “raw” or “camera raw” they’re talking about the data stored in the specific raw container (DNG, R3D, ARI, etc). This data is usually a Bayer pattern (a form of CFA mosaic) and has not had any color transforms performed on it. Now, this does not mean that a transfer function (a.k.a. gamma curve) can’t be applied to it. In the case of BMD cameras, they all perform a logarithmic transform on 16-b[it] linear data to save space in 12-b[it] DNGs. The purpose of this curve is to reduce the amount of space taken up by the raw data and to retain details in the mids and shadows through compression.
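As an illustration of what Barton describes, here is a minimal sketch in Python. The curve used is invented for the example, not Blackmagic’s actual transfer function; it only shows the general idea that a log encode spends a disproportionate share of the 12-bit code values on shadows and midtones.

```python
import numpy as np

# Invented log curve for illustration -- NOT Blackmagic's actual transfer
# function. It maps 16-bit linear sensor values into 12 bits, spending more
# code values on shadows and midtones than a linear quantization would.
def log_encode_16_to_12(linear16):
    x = linear16.astype(np.float64) / 65535.0        # normalize 16-bit linear
    y = np.log2(1.0 + 1023.0 * x) / 10.0             # log2(1024) = 10, so y is 0..1
    return np.round(y * 4095.0).astype(np.uint16)    # quantize to 12 bits

levels = np.array([0, 655, 6553, 32767, 65535])      # 0%, 1%, 10%, 50%, 100% of linear
print(log_encode_16_to_12(levels))
```

With this toy curve, 1% of linear light already lands roughly a third of the way up the 12-bit scale, which is exactly the “retain details in the mids and shadows” effect Barton mentions.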
This data is still considered raw, and the general pipeline is as follows. This logarithmic raw data goes through decompression and linearization behind the scenes. Processing programs/pipelines prefer to work with linear data as it is easier to manipulate with simple operations. So the raw files contain the LUT to linearize the logarithmic data. After linearization, the Bayer pattern is demosaiced (a.k.a. debayering) to create an RGB image that still does not have any color transforms done. Many people mistake the debayering process for part of the color transform process. It has nothing to do with it. Its sole purpose is to create an RGB image from CFA mosaiced data. The color transform process is where you would choose the output/working color space for the image (BMD Film, Rec 709, ACES, etc). The color space is made up of the color gamut and transfer function. Usually, for convenience, the transfer function and color gamut are named the same as the color space, but that is not always the case.
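The pipeline Barton outlines can be sketched end to end. Everything below is illustrative: the LUT, the crude demosaic, and the identity colour matrix are stand-ins I made up to show the order of operations, not Blackmagic’s actual implementations.

```python
import numpy as np

# Illustrative pipeline only: linearize (LUT lookup) -> demosaic -> color transform.
# The LUT, demosaic and matrix are stand-ins, not Blackmagic's real ones.

def linearize(log12, lut):
    # The raw file carries a LUT mapping 12-bit log code values back to
    # linear light; linearization is just a table lookup.
    return lut[log12]

def demosaic_nearest(bayer):
    # Crude demosaic of an RGGB mosaic: each 2x2 tile becomes one RGB pixel.
    # Real debayering interpolates to full resolution; this shows the idea only.
    rgb = np.zeros((bayer.shape[0] // 2, bayer.shape[1] // 2, 3))
    rgb[..., 0] = bayer[0::2, 0::2]                            # R sites
    rgb[..., 1] = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2  # average of G sites
    rgb[..., 2] = bayer[1::2, 1::2]                            # B sites
    return rgb

def color_transform(rgb, matrix):
    # Per-pixel 3x3 matrix: camera RGB -> chosen working colour space.
    return rgb @ matrix.T

lut = (np.arange(4096) / 4095.0) ** 2.2           # stand-in linearization LUT
mosaic = np.random.randint(0, 4096, size=(4, 4))  # fake 12-bit Bayer data
image = color_transform(demosaic_nearest(linearize(mosaic, lut)), np.eye(3))
print(image.shape)  # (2, 2, 3)
```

Note that the colour matrix is applied only after debayering has produced an RGB image, which is the separation Barton insists on.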
So yes, raw data can be logarithmic or linear or encoded with any other curve you can think of. It’s just a matter of choosing the right curve. In addition, raw does not have to be mosaiced. An example would be a three-sensor camera that records all three channels at every pixel. The raw file wouldn’t need to be demosaiced, but it would still need color transforms performed on it. Another example is actually the [Canon] C500 2K raw. It supersamples and creates 2K RGB raw from a 4K sensor.
Now as for over or underexposing, it depends on where you want your dynamic range to lie. In the case of the original Cinema Camera and the Pocket, you’ll get the following.
What the chart shows is how the dynamic range is shifted with exposure. Exposing for ISO involves changing the physical amount of incoming light (changing the aperture or shutter speed or any other physical factor) so that [18%] middle gray is recorded at the correct value in camera. When you expose for 800, the BM recommended ISO, you get +5/-8 EV distribution. Now overexposing would be moving to the left [on the chart]. Exposing for 400 is the same as overexposing 800 by one stop. As you can see the range shifts down. This means you get cleaner shadows, but you will clip the highlights faster. In the opposite direction, exposing for 1600 is the same as underexposing 800 by one stop. This will give you an extra stop of range in the highlights, but means you will descend into noise faster.
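The shift Barton describes comes down to simple arithmetic on the +5/−8 EV split he quotes for the native ISO 800: halving the ISO moves the range one stop toward the shadows, doubling it moves one stop toward the highlights.

```python
import math

# Illustrative arithmetic based on the +5/-8 EV split quoted above for the
# BMCC/Pocket at the native ISO 800.
def ev_split(iso, base_iso=800, above=5, below=8):
    shift = round(math.log2(iso / base_iso))  # whole stops away from base ISO
    return above + shift, below - shift

print(ev_split(400))   # (4, 9): overexposed one stop -> cleaner shadows, clips sooner
print(ev_split(800))   # (5, 8): the BM-recommended distribution
print(ev_split(1600))  # (6, 7): underexposed one stop -> extra highlight stop, more noise
```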
I made [the chart] based on a combination of information from the BMD Film curve LUTs provided by Blackmagic Design in DaVinci Resolve, DNG metadata, and independent tests performed by Corey Robson and Ryan E Walters. The independent tests were just used as supporting evidence. I wanted to be as objective as possible in making these charts. A lot of the time, there is still valid data in the noise floor that can’t be viewed on a scope, but is still visible in the image. The data is there at the bottom and you’d be able to see it in a log image; whether it’s considered usable or not is up to the user.
Here is the data for all of the curves on a 10-b[it] scale:
1% Black: 36
18% Gray: 392
BMD Film 4K:
1% Black: 36
18% Gray: 392
BMD Film 4.6K:
1% Black: 76
18% Gray: 420
So, if I have understood the Blackmagic Design workflow correctly, the original 16-bit linear RAW signal is converted to a 12-bit logarithmic curve just as it leaves the sensor, still in its original Bayer pattern, as a means to compress / reduce the size of the signal. This losslessly compressed signal is then packed into CinemaDNG files in camera and stored on the SDXC cards for download. The logarithmic RAW data is then decompressed in DaVinci Resolve and linearized before the Bayer pattern is demosaiced / debayered to create a 12-bit RGB image, which finally is taken through a colour transformation process to arrive at an output / working colour space such as the flat log BMD Film curve. This explains why Blackmagic cameras have a large latitude in the highlights compared to the Ikonoskop A-Cam dII and the Digital Bolex, which are considered more sensitive to highlight clipping. In closing, I will quote another BMCuser regarding the preference for overexposure:
To answer the question [if it is better to overexpose or underexpose], provided you’re not clipping important data, it’s better to overexpose RAW because it gives you a better signal to noise ratio. Obviously, everyone has different tolerances when it comes to noise, and sometimes noise is desirable, but assuming a normal use case, overexposure is preferred. I typically just shoot at 400 ISO/ASA, which overexposes by a stop. If you have time for noise reduction or want a little grittier image, 800 ISO has the most dynamic range and is the proper exposure for the camera. 1600 is underexposed by a stop and is too noisy for me.
Technically, if you’re shooting RAW, ISO is just metadata, so no ISO has an advantage in dynamic range. Also, I think 1600 ISO actually has the greatest dynamic range in ProRes, and it’s the only one that exceeds 100% on a waveform. But as I said, it’s a little too noisy for me. I should also clarify that I’m basing this on the Pocket, which I’m told is more or less the same as the BMCC. The takeaway is that you’ll have less noise if you overexpose RAW and then bring the exposure back down in the RAW converter (which, in the case of the BMPCC, happens automatically when you set the camera at an ISO lower than the native 800 and expose for that setting).
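The signal-to-noise argument can be shown with a toy simulation. This models read noise only (real sensors also add photon shot noise, which grows with the signal), so it is a simplification of the physics, not a sensor model, but it captures why pulling an overexposed image back down helps.

```python
import numpy as np

# Toy model with constant read noise only. Doubling the light doubles the
# signal while read noise stays fixed, so dividing the exposure back down
# in the raw converter halves the noise relative to the same signal level.
rng = np.random.default_rng(0)
read_noise = 4.0
signal = 100.0
n = 100_000

normal = signal + rng.normal(0.0, read_noise, n)
overexposed = 2.0 * signal + rng.normal(0.0, read_noise, n)
pulled_down = overexposed / 2.0   # "bring the exposure back down" afterwards

print(round(float(np.std(normal)), 1))       # ~4.0
print(round(float(np.std(pulled_down)), 1))  # ~2.0 -> same signal, half the noise
```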