
Subject: Re: [OM] ETTR, was: MooseRant on Low Light Shoot-Out
From: Chuck Norcutt <chucknorcutt@xxxxxxxxxxxxxxxx>
Date: Wed, 29 Jan 2014 19:57:54 -0500
OK, I'll accept that I'm 100% wrong if you'll accept my personal 
statement that all of the wrong crap I'm doing still just happens to 
work for me.  :-)

Chuck Norcutt


On 1/29/2014 6:16 PM, Ken Norton wrote:
>> But I do have to take AG to task for what I think is a totally erroneous
>> statement below:
>> ---------------------------------------------------------------
>> The in-camera histogram is generated off of an in-camera
>> JPEG and also shows only the primary (RGB) colors, not the derived
>> colors (CMY). You can have the yellows clip and never know it.
>> ---------------------------------------------------------------
>> I think AG should certainly realize that the camera never creates
>> "derived" colors.  The camera and its displays and our monitors only
>> have red, green and blue pixels.  "derived" colors (like yellow) are
>> only created in our brain.  When we see yellow, the only thing we are
>> seeing is red and green pixels (with maybe a hint of blue mixed in).
>> Pixels are not additive in brightness.  It's impossible for the
>> brain-derived "yellow" to be blown if its component R, G and B pixels
>> are not.
>
>
> Ah, but you fail to take into account the capture side of things.
> You've got several steps involved. You have the actual A-D conversion
> where we are stuffing the analog light into a three-color sensor array
> of individual photosites. Then this three-color information is
> recombined to provide an RGB value for each individual pixel. The
> pixels are then used for the RGB histogram displayed on your camera.
> Even a histogram that looks at the "RAW" file may still only see the
> recombined value, not the photosite value.
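>
> A rough sketch of that chain (toy values and a naive 2x2 recombination
> of my own invention, not any camera's actual firmware): the histogram
> only ever sees the recombined, rendered value, so a photosite that hit
> full well can vanish from it entirely.
>
> import numpy as np
>
> # Toy pipeline only -- a 12-bit RGGB mosaic, a naive 2x2 recombination
> # and an 8-bit conversion.  No real firmware is this simple.
> FULL_WELL = 4095
>
> def recombine_2x2(mosaic):
>     """One RGB pixel per 2x2 RGGB block; the two greens get averaged."""
>     r  = mosaic[0::2, 0::2]
>     g1 = mosaic[0::2, 1::2]
>     g2 = mosaic[1::2, 0::2]
>     b  = mosaic[1::2, 1::2]
>     return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)
>
> def histogram_8bit(rgb12):
>     """Per-channel 256-bin histograms of the 8-bit rendered values."""
>     rgb8 = np.clip(rgb12 / FULL_WELL * 255.0, 0, 255).round()
>     return [np.histogram(rgb8[..., c], bins=256, range=(0, 256))[0]
>             for c in range(3)]
>
> # A flat patch where one of the two green photosites per block is
> # hard-clipped at the sensor and the other is not.
> mosaic = np.full((8, 8), 2200.0)
> mosaic[0::2, 1::2] = FULL_WELL                # g1 photosites saturated
>
> hist_g = histogram_8bit(recombine_2x2(mosaic))[1]
> print("green photosites at full well:", np.any(mosaic[0::2, 1::2] >= FULL_WELL))
> print("green histogram hits 255:     ", hist_g[255] > 0)  # False: clip hidden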
>
> Where things can get even more muddy is the algorithm used for
> combining pixels. Is it a three-pixel combination? A four-pixel
> combination? Which three? Which four? Does it step every other pixel,
> or advance one pixel at a time and back-fill with an average of the
> two? Lots of things are going on there. Be VERY careful about assuming
> that it's a four-pixel merge (two greens, one red, one blue), because
> it is usually just a three-pixel merge (one green, one red, one blue).
> Is the method the in-camera processor uses the same as the one in your
> Adobe-provided raw converter? Which Adobe raw converter are you using?
> Which CAMERA are you using? Are there two different spectral responses
> for the greens? What is the spectral cutoff for each color of photosite
> on the sensor?
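>
> For example, here is a toy comparison (both schemes are made up for
> illustration, not pulled from any converter) of how much that one
> choice matters when one of the two greens is sitting at full well:
>
> # Same RGGB quad, two hypothetical merge schemes.  The choice alone
> # changes whether the green channel still looks clipped afterwards.
>
> def merge_four(r, g1, g2, b):
>     """Use all four photosites: average the two greens."""
>     return (r, (g1 + g2) / 2.0, b)
>
> def merge_three(r, g1, g2, b):
>     """Use only one of the greens; drop g2 entirely."""
>     return (r, g1, b)
>
> quad = (1800.0, 4095.0, 2600.0, 1500.0)   # 12-bit values, g1 at full well
> print("four-site merge :", merge_four(*quad))   # green = 3347.5, clip hidden
> print("three-site merge:", merge_three(*quad))  # green = 4095.0, clip visible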
>
> Before you launch into an "AG doesn't know S***" response, you really
> do need to look at this seriously, because when you are up near
> saturation there is a lot of ugliness going on that very few people
> are aware of. I mention "highlight recovery" because that is a tool
> which really shows that things aren't what you expect them to be. But
> let me illustrate:
>
> Let's take a real-life yellow, orange or red flower. Is a red flower
> 100% red? Rarely. To keep red from oversaturating, we have to pull the
> exposure back a bit. Whenever we have a bright color which matches up
> to one of the RGB primary colors, we are going to have to cut the
> exposure back a ways from the peak in order to keep any form of
> usability in the final file. Why? Shouldn't it be just right? No.
>
> The red photosites are capturing more than just red. The blue
> photosites are capturing more than just blue and the green photosites
> are capturing more than just green. There is overlap in all three
> colors. The spectral cut filters are not precise. So, when you
> photograph that red flower that you would think is showing up in just
> the red photosites, it's also showing up in the green and blue
> photosites. If the in-camera raw converter is using an averaging
> method for combining the photosites and if, worse yet, it is heavily
> weighting the green photosites for luminance, you are not going to get
> an accurate representation of the actual clip points of the RGB
> photosites in your histograms. This color clipping is common with
> yellows and oranges. If you do this with any camera with two different
> green sensels, you're going to end up with some very nasty artifacts
> with all but one or two converters.
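>
> A toy model of that overlap (the crosstalk numbers and the luminance
> weights are invented for illustration, not measured from any sensor):
> a deeply "red" scene starts clipping the red photosites long before a
> green-heavy signal gets anywhere near the top.
>
> import numpy as np
>
> # Rows are the photosite types; columns are how much of the scene's
> # red, green and blue energy each type actually picks up.
> CROSSTALK = np.array([[0.80, 0.15, 0.05],   # "red" photosites
>                       [0.25, 0.65, 0.10],   # "green" photosites
>                       [0.05, 0.15, 0.80]])  # "blue" photosites
> FULL_WELL = 4095
>
> def photosites(scene_rgb, exposure):
>     """Linear response with a hard clip at full well."""
>     return np.minimum(CROSSTALK @ (scene_rgb * exposure), FULL_WELL)
>
> scene = np.array([1.00, 0.10, 0.05])        # a deeply saturated red flower
>
> for exposure in (3000, 6000, 9000):
>     r, g, b = photosites(scene, exposure)
>     luma = 0.2 * r + 0.7 * g + 0.1 * b      # a green-heavy weighting
>     print(f"exp {exposure}: R={r:6.0f} G={g:6.0f} B={b:6.0f} "
>           f"luma={luma:6.0f} red clipped={r >= FULL_WELL}")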
>
> I've done a lot of controlled testing of this using the Kodak color
> target and a number of different cameras (not just Olympus stuff) and
> films. I would absolutely agree with the general sentiment that it is
> far better to pull back an exposure with most Canon camera files than
> to boost in post. The Olympus E-1 was a camera best served directly at
> the proper exposure or with up to a stop boost in post. The E-3 will
> fight you no matter what. The E-M5 is a lot like the original E-1 and
> takes underexposure well. But all this testing did show that, with the
> exception of the Canon 5D, all tested cameras corrupted the colors in
> the top half stop. The Nikons tested were the worst.
>
> One way this manifests itself is in skin tones. Again, this gets
> into the spectral response of the photosites. It is very easy to clip
> human skin even though none of the RGB histograms are clipping.
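>
> One of several ways that can happen, as a toy example (the white
> balance gains here are made up, not any camera's actual numbers):
> under warm light the red channel gets scaled *down* in rendering, so
> a red photosite that hit full well on a skin highlight never shows up
> at the right edge of the histogram.
>
> import numpy as np
>
> FULL_WELL = 4095
> WB_GAINS = np.array([0.60, 1.00, 1.90])     # hypothetical tungsten gains
>
> # A warm skin highlight whose red photosites hit full well in the raw.
> raw = np.array([4095.0, 3000.0, 1400.0])
>
> rendered = np.round(np.minimum(raw * WB_GAINS, FULL_WELL) / FULL_WELL * 255)
> print("red clipped in the raw data:", raw[0] >= FULL_WELL)     # True
> print("rendered 8-bit values:      ", rendered)                # ~[153 187 166]
> print("any channel at 255:         ", np.any(rendered >= 255)) # False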
>
> The point is that you really don't know what is going on unless you
> can see the exact histogram of the photosites themselves AND have a
> precise and complete knowledge of your raw converter. If you've ever
> scratched your head and wondered why you bracket an exposure and end
> up choosing the one to convert that has more histogram headroom, this
> is likely why.
>
> So, when I talk about "derived colors" I'm talking not just about the
> visual interpretation of RGB, but about the original parsing of the
> real-world colors into the RGB color-array sensor. I've said it before
> and I'll say it again: the next frontier in sensor development will
> be the RGBCMYW array, where the sensor sees red, green, blue, cyan,
> magenta and yellow, and may also include a luminance (white) channel.
> When this happens, it will be a true eye-opener as to just how limiting
> the RGB filter array really is. The RAW converter will bring the
> resulting file into a standard 48-bit RGB structure, but getting there
> will give us a massive increase in color gamut during capture. That is
> where these stupid-high pixel densities will earn their keep.
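>
> On the output side there is at least a crude post-hoc check you can
> run on a rendered file (a sketch of my own, not anything a camera does
> in firmware): derive naive complement CMY channels and look at their
> top ends too. A hard, fully saturated yellow piles up at the top of
> the derived Y channel -- equivalently, blue piles up at zero -- which
> the usual right-edge highlight display never flags.
>
> import numpy as np
>
> def pileup(channel, level=255, frac=0.01):
>     """True if more than `frac` of the pixels sit exactly at `level`."""
>     return float(np.mean(channel == level)) > frac
>
> rgb = np.zeros((100, 100, 3), dtype=np.uint8)
> rgb[..., 0] = 250          # strong red
> rgb[..., 1] = 245          # strong green
> rgb[..., 2] = 0            # no blue at all: a hard yellow
>
> cmy = 255 - rgb            # naive derived C, M, Y
> print("R/G/B pile up at 255:", [pileup(rgb[..., c]) for c in range(3)])
> print("C/M/Y pile up at 255:", [pileup(cmy[..., c]) for c in range(3)])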
>
> But without a controlled test with a calibrated color target, you
> really have no clue what's going on. Assuming that the RGB histogram
> will always show clipped conditions is foolish and 100% wrong.
>
>
-- 
_________________________________________________________________
Options: http://lists.thomasclausen.net/mailman/listinfo/olympus
Archives: http://lists.thomasclausen.net/mailman/private/olympus/
Themed Olympus Photo Exhibition: http://www.tope.nl/
