My first posting about lens differences might have been a bit strident. The
design of experiments, the statistical significance of results, and what
can legitimately be inferred from them are a passion of mine. Professionally
I've seen shaky claims made in light of the experimental design and the
data gathered.
At 07:34 1/8/01, Rich wrote:
> In most precision manufacturing, constant "batching" is done to check that
> dimensions, etc., fall within acceptable tolerances. When things start to
> drift too far from "nominal", adjustments are made. Statistical process
> control, etc.
Whether it's SPC or some other method, that's true, and the important part
is how much deviation from nominal is allowed. For the resolving power of
new lenses, if parts and processes are held within control limits, I see
two absolute boundaries:
1. Upper bound defined by the design's inherent capability (everything
nominal)
2. Lower bound defined by a "worst-case stackup" at the edges of part
tolerances and process control limits. This should be what is considered
minimum acceptable performance . . . _maybe_ with some "guard band" to take
into account modest degradation from normal use.
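The "worst-case stackup" idea can be sketched numerically. The dimensions
and tolerances below are invented purely for illustration (a real lens
design would have many more, and some would stack in opposing directions):

```python
# Hypothetical tolerance stackup for a few element spacings in a lens
# assembly. All numbers are made up for illustration only.
nominals = [10.00, 5.00, 2.50]      # nominal dimensions (mm)
tolerances = [0.02, 0.01, 0.005]    # +/- tolerance on each dimension (mm)

# Upper bound on performance: everything exactly nominal.
nominal_total = sum(nominals)

# Worst-case stackup: every dimension at the unfavorable edge of its
# tolerance band at the same time.
worst_case = sum(n + t for n, t in zip(nominals, tolerances))

print(f"nominal total: {nominal_total:.3f} mm, worst case: {worst_case:.3f} mm")
```

In practice a simultaneous worst-case stackup is rare, which is exactly why
most production units land somewhere between the two boundaries.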
For used lenses, the lower boundary moves downward with age/usage
degradation (sometimes outright abuse).
In real life, very nearly all examples will fall into some statistical
distribution between those two boundaries, which may or may not be a
"normal" one (bell curve). If the distributions for two different lens
formulations are wide enough and their means close enough, some examples
of the weaker design will outperform some examples of the stronger one,
even though the majority of the populations rank the other way . . .
without even considering degradation.
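A quick simulation makes the overlap argument concrete. The means and
spreads below are invented for illustration, not measured from any real
lens:

```python
import random

random.seed(1)

# Hypothetical resolving-power distributions (lp/mm) for two lens designs.
# Design A has the higher mean, but both have the same spread.
design_a = [random.gauss(60, 5) for _ in range(10_000)]
design_b = [random.gauss(55, 5) for _ in range(10_000)]

# How often does a randomly drawn example of B beat a randomly drawn A?
b_wins = sum(b > a for a, b in zip(design_a, design_b))
print(f"B beats A in {100 * b_wins / len(design_a):.1f}% of pairings")
```

With these made-up numbers, roughly a quarter of pairings have the "worse"
design winning, even though the population means differ clearly.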
> The Japanese have a well-deserved and hard-won reputation for
> manufacturing excellence.
I would not dispute this. Control and incremental continuous improvement
do reach a point of diminishing returns unless radical changes are
made. There is a cost factor here too. Ultimately, accepting some level of
warranty cost, and some risk of a unit below specification escaping, costs
less than preventing escapes entirely. This is a pure business decision. At
that point, one starts looking for radical changes that can also withstand
scrutiny in a business case (i.e., the gains exceed the costs).
> My "hunch" is that there are a BUNCH of other factors which affect things
> more than manufacturing variations, when it comes to evaluating/comparing
> lenses, esp. just one sample vs. just one other sample. Unless someone is
> testing in a truly scientific way, holding constant any and every factor
> that may affect lens performance, and using representative samples from
> large "populations" of lenses, results can be all over the map. Can't they?
Yes and no. Yes, if not enough control is exerted on other variables. No,
for very small sample sizes of very large populations. First, there are
special distributions, such as Student's t, designed for inference from
very small samples; they have heavier tails (greater spread) than the
normal distribution. Second, even without a large sample, one can (under
the proper circumstances) make inferences about how close a sample mean
and standard deviation might be to those of the entire population . . . but
only to some level of confidence (probability). Sample means and standard
deviations have probability distributions of their own; take enough
samples and these distributions emerge. For a given sample size, the
higher the confidence level demanded, the wider the margin of error that
must be allowed between sample and population. The larger the sample
size, the smaller that margin becomes for the same confidence level.
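Both effects can be seen in a small simulation (population parameters
invented for illustration): the sample mean has a distribution of its own,
and its spread shrinks roughly as 1/sqrt(n):

```python
import random
import statistics

random.seed(2)

# Hypothetical population of lens resolving power (lp/mm); numbers are
# illustrative only.
POP_MEAN, POP_SD = 60.0, 5.0

def sample_mean_sd(n, trials=2000):
    """Empirical standard deviation of the sample mean for samples of size n."""
    means = [statistics.fmean(random.gauss(POP_MEAN, POP_SD) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

# Quadrupling the sample size roughly halves the spread of the sample mean.
for n in (2, 8, 32):
    print(n, round(sample_mean_sd(n), 2))
```

This is why a single sample of each of two lens types tells you very
little about how the two populations compare.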
Arrrrgh! This posting is turning into a statistics dissertation. Time to
end it. This is a little more about why it pains me to see people pay an
outrageously (irrationally ??) high premium for a used lens with only some
probability they're getting a lens better than the mean. Whatever premium
there is should be commensurate with the risks. As for the all-time
recorded eBay high mentioned by Gary Reese, the winning bidder could have
gotten a much newer EX++ or LN- condition 50/1.2 for the same price, and
had a much higher probability of buying a lens with resolving power
noticeably beyond the average 50/1.4 MC (with projected transparencies).
One additional remark about Gary Reese's tests. He includes a very helpful
condition grading so results can be evaluated in light of visual condition
inspection. In several cases I strongly suspect (as likely did Gary) that
the performance differences between two examples of the same lens type are
a result of differences in their condition. Unlike the popular-journal
lens test results some people live and die by in the 35mm Holy Wars (read
the USENET newsgroups for a while), Gary provides much more information
about what he did, how he did it, and other factors that may have affected
his results, and I'm grateful for it.
-- John
< This message was delivered via the Olympus Mailing List >
< For questions, mailto:owner-olympus@xxxxxxxxxxxxxxx >
< Web Page: http://Zuiko.sls.bc.ca/swright/olympuslist.html >