Why isn't Canon working on DSLRs with higher dynamic range?

If Sony had come out with a sensor with on-die 16-bit ADCs, that would have been far, far bigger news than the fact that it can do ISO 409k. No one really cares about ISO 409k. The noise levels at that ISO are a simple matter of physics when it comes to stills.

When it comes to the A7s' video performance, their DSP, BIONZ X, is the bigger news, since it's doing a significant amount of processing on the RAW signal to reduce noise at ultra-high ISO settings. The BIONZ X image processor does 16-bit IMAGE PROCESSING; however, the sensor output is 14-bit, and the output of the image processing is ALSO 14-bit. There is a page on Sony's site somewhere that describes this; as soon as I find it, I'll link it.

The BIONZ X processor is the same basic thing as Canon's DIGIC and Nikon's EXPEED. It's the in-camera DSP. Canon's DIGIC 6 has a lot of similar capabilities to Sony's BIONZ X. They both do advanced noise reduction for very clean high-ISO JPEG and video output. They both do high quality detail enhancement as well. I don't believe Canon's DIGIC 6 does 16-bit processing; it's still 14-bit as far as I know. The use of 16-bit processing can help maintain precision throughout the processing pipeline; however, since the sensor output is 14-bit, you can never actually increase the quality of the information you start with. That would be like saying that when you upscale an image in Photoshop, you "extracted" more detail out of the original image. No, you don't extract detail when you upscale...you FABRICATE more information when you upscale.

Same deal with BIONZ X...during processing, having a higher bit depth reduces the impact of errors (especially if any of that processing is floating point); however, it cannot create information you didn't have to start with. That is evident from the fact that Sony is still outputting a 14-bit RAW image, instead of a 16-bit RAW image, from their BIONZ X processor.
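To make that concrete, here's a quick sketch (hypothetical Python, not Sony's actual pipeline, all numbers invented for illustration) showing that promoting 14-bit ADC output into a 16-bit working space never increases the number of distinct tonal levels:

```python
import numpy as np

# Hypothetical illustration: a "scene" with lots of fine tonal detail
scene = np.linspace(0.0, 1.0, 100_000)

# 14-bit ADC: at most 2**14 = 16384 distinct codes can survive this step
adc_14bit = np.round(scene * (2**14 - 1)).astype(np.uint16)

# Promote to a 16-bit working space and apply a non-linear tone curve
working_16bit = adc_14bit.astype(np.uint32) * 4      # scale into 16-bit range
curved = np.sqrt(working_16bit / 65535.0) * 65535.0  # stand-in "processing"

# The number of distinct tonal levels never exceeds what the ADC captured
print(len(np.unique(adc_14bit)))   # 16384
print(len(np.unique(curved)))      # still 16384: nothing was "extracted"
```

The wider working space changes where the levels sit, but the count of distinct levels is fixed at the ADC; anything beyond that is fabricated, exactly like upscaling.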

UPDATE:

From the horse's mouth: http://discover.store.sony.com/sony-technology-services-apps-NFC/tech_imaging.html#BIONZ

16-bit image processing and 14-bit RAW output
16-bit image processing and 14-bit RAW output help preserve maximum detail and produce images of the highest quality with rich tonal gradations. The 14-bit RAW (Sony ARW) format ensures optimal quality for later image adjustment (via Image Data Converter or other software).

e1-6col1-imaging-chart04-desktop.png

Higher precision processing, but still 14-bit RAW. The fact that the raw sensor output is 14-bit means that the dynamic range of the system cannot exceed 14 bits. The use of 16 bits during processing increases the working space, so when Sony generates a JPEG or video, it can lift shadows and compress highlights with more precision and less error. I suspect their "15.3 stops of dynamic range" claim really refers to the useful working space within the 16-bit processing space of BIONZ X. The simple fact of the matter, though, is that when it comes to RAW...it's RAW. Your dynamic range is limited by the bit depth of the ADC. Since the ADC is still 14-bit, and ADC occurs on the CMOS image sensor PRIOR to processing by BIONZ X, any processing Sony does in-camera can do no more, really, than what you could do with Lightroom yourself.
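Here's a sketch of why the wider working space still helps (hypothetical numbers, not Sony's actual math): rounding back to integer codes after every intermediate step accumulates error that a wider working space avoids, even though no new dynamic range appears.

```python
import numpy as np

# Hypothetical sketch: lift deep shadows by ~3 stops in two steps of
# sqrt(8) ~ 2.83x each, operating on 14-bit integer codes.
shadow_codes = np.arange(0, 64, dtype=np.uint16)  # deep-shadow ADC codes

# Narrow working space: round back to integer codes after EVERY step,
# accumulating quantization error...
step1 = np.round(shadow_codes * 2.83)
low_precision = np.round(step1 * 2.83)

# ...wide working space: carry full precision through, round only once
high_precision = np.round(shadow_codes * 2.83 * 2.83)

# The two results disagree; the disagreement is pure rounding error.
print(np.max(np.abs(low_precision - high_precision)))
```

The maximum disagreement is non-zero: the 16-bit (or float) working space doesn't add range, it just keeps the rounding error out of the output.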
 
Upvote 0
Being in my late teens in the '80s and witnessing how the DACs in CD players went from 16 to 18 to 20 to 24 bits etc. every 2-3 years, then the sampling frequency even faster than that....

Apologies for the stupid question in advance:

What's the huge deal with this 14-bit wall in digital photography? Why can't they make a 16-bit processor and give us 16 stops of DR?
 
Upvote 0
dilbert said:
I'm still waiting for your reply to me asking for a reference (you know, a URL) to something that supports your claim of the Sony a7s only having a 14bit ADC ...

dilbert said:
That's the bit depth of the file, not the width of the ADC.

Let's turn this around. Can you provide a reference showing the a7S has a 16-bit ADC? As far as I know, there have not yet been any dSLRs with greater than a 14-bit ADC. If Sony were the first to release a true 16-bit camera, one would think they wouldn't be shy about it...it should be childishly simple for you to provide many such references.

Since they are claiming >15 stops of DR, it's certainly in their best interest to not make much of the fact that they're using a 14-bit ADC which cannot deliver the actual DR they claim, meaning they're merely cooking the RAW file to include fabricated data.

Speaking of Sony lying about their RAW data, perhaps you could also provide evidence that the a7S outputs a real 14-bit (or higher) RAW file, instead of using the lossy 11+7-bit delta compressed RAW format used by the a7 and a7R.
 
Upvote 0
dilbert said:
jrista said:
B&H Photo's product page, under specifications:

http://www.bhphotovideo.com/c/product/1044728-REG/sony_ilce7s_b_alpha_a7s_mirrorless_digital.html

It took me about 3 seconds to find that. I just searched for "Sony A7s Bit Depth", and that was one of the first five links (the rest, for some reason, were all about the A77...)

That's the bit depth of the file, not the width of the ADC.

Why would they use a 16-bit ADC, which generates a 16-bit data set, then downsample it back to 14 bits? Technically you are correct that they're not the same, but it would be monumentally stupid for Sony to throw away data and incur additional processing overhead. It's therefore reasonable to assume they are the same.
 
Upvote 0
My comments are based on my (imperfect) memory of science podcasts and other science journalism I've encountered in the last few years. If you have contradictory info I'd love to see a reference.

jrista said:
Regarding eye-witness accounts...the reason they are unreliable is people are unobservant. There are some individuals who are exceptionally observant, and can recall a scene, such as a crime, in extensive detail.

My understanding is that new research has shown this to be wrong. There are a few "savant" types who have very precise/correct memory function, but for "neurotypical" (i.e. "normal") people, this is not so.

I believe the brain only fills in information if it isn't readily accessible. I do believe that for the most part, when we see something, the entirety of what we see is recorded.

Again, my understanding is that recent research shows that the adage "seeing is believing" has it backwards: it should be "believing is seeing." The brain does not record raw image info at all, but constructs a reality that incorporates visual data with existing beliefs and expectations. It's that highly-processed "reality" that's recorded. As an example, back in 2004 there was that videotape of a purported Ivory-Billed Woodpecker. Subsequent analysis showed that it was almost certainly the rather common pileated woodpecker. The "eyewitnesses," however, recall seeing detail that would clearly distinguish it as an IBW. Even if it was a pileated, those witnesses may truthfully and genuinely believe they saw those distinguishing characteristics.
 
Upvote 0
dilbert said:
Nope. The specs for the a7s only quote the bit depth for the image files, not the ADC.
So, you are suggesting, or at least implying, that they are using an ADC with more than 14 bits, but you have no evidence to back that up. Simple logic would say that if Sony had indeed released a 16-bit camera, they would promote that fact and not throw away those extra bits. Conclusion: the a7S uses a 14-bit ADC.


dilbert said:
Or maybe they're using a spreading function (i.e applying a curve to the sensor feed) rather than doing a linear conversion?
Whether they are interpolating or extrapolating is irrelevant - the former is fabricating data inside the existing range, the latter is fabricating data outside the existing range, but either way the data are being fabricated. Conclusion: Sony continues to lie about their RAW image data.


dilbert said:
Note that at present the claim for 15.3 stops of DR comes from a 3rd party ... even I'm dubious on that. I'll wait and see what Sony says and more importantly, what DxO can measure.
It's a little sad when your factual errors lead you to qualify and hedge your own statements. The screenshot below is from Sony's a7S page. Conclusion: dilbert doesn't bother to check his facts.
 

Attachments

  • Sony a7S 15+ stops of DR.png
    Sony a7S 15+ stops of DR.png
    93.5 KB · Views: 2,009
Upvote 0
100 said:
Sporgon said:
The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure, the Exmor is better.
However, filter manufacturers sell loads of 1-3 stop GND filters, so a couple more stops of dynamic range is useful to a lot of people as well, even people who understand exposure, or should I say, especially to people who understand exposure.

Those people will know that by using a 1-3 stop GND you are able to get more light to the non-ND part of the frame, which, depending upon what you are shooting, results in improved data from dark areas whether you are using a 12- or 14-stop DR capable camera. So you probably have as many Sonikon photographers buying them as Canon.
 
Upvote 0
dilbert said:
neuroanatomist said:
Let's turn this around. Can you provide a reference showing the a7S has a 16-bit ADC?

Nope. The specs for the a7s only quote the bit depth for the image files, not the ADC.

Since they are claiming >15 stops of DR

Note that at present the claim for 15.3 stops of DR comes from a 3rd party ... even I'm dubious on that. I'll wait and see what Sony says and more importantly, what DxO can measure.

it's certainly in their best interest to not make much of the fact that they're using a 14-bit ADC which cannot deliver the actual DR they claim, meaning they're merely cooking the RAW file to include fabricated data.

Or maybe they're using a spreading function (i.e applying a curve to the sensor feed) rather than doing a linear conversion?

Did you completely miss the post where I linked directly to Sony's site that SHOWS the output of the sensor (which CONTAINS the ADC) is 14-bit? How convenient...that you read my first post, and just magically didn't happen to see my second post. Sony's OWN SITE says the sensor output is 14-bit. The sensor is an Exmor. Exmor uses CP-ADC ON-DIE. The last output of the sensor is FROM the ADC.

Therefore...the A7s IS 14-BIT! I love how you DEMAND I PROVE things to you, then simply ignore the FACTS when I smack you upside the face with them.

There is absolutely ZERO question about it. The facts are the facts. The A7s is still "just" an Exmor, and Exmors use 14-bit ADCs. Here, I'll smack you upside the face with them again:

From the horse's mouth: http://discover.store.sony.com/sony-technology-services-apps-NFC/tech_imaging.html#BIONZ

16-bit image processing and 14-bit RAW output
16-bit image processing and 14-bit RAW output help preserve maximum detail and produce images of the highest quality with rich tonal gradations. The 14-bit RAW (Sony ARW) format ensures optimal quality for later image adjustment (via Image Data Converter or other software).

e1-6col1-imaging-chart04-desktop.png
 
Upvote 0
Orangutan said:
My comments are based on my (imperfect) memory of science podcasts and other science journalism I've encountered in the last few years. If you have contradictory info I'd love to see a reference.

jrista said:
Regarding eye-witness accounts...the reason they are unreliable is people are unobservant. There are some individuals who are exceptionally observant, and can recall a scene, such as a crime, in extensive detail.

My understanding is that new research has shown this to be wrong. There are a few "savant" types who have very precise/correct memory function, but for "neurotypical" (i.e. "normal") people, this is not so.

I'm not saying everyone can be a savant. I'm saying everyone can learn how to WORK their memory to improve it. I did it...I used to have the same poor memory everyone has; I forgot stuff all the time and couldn't remember accurately. By thinking about, exercising, and processing sensory input more actively, I can intentionally bring up other memories that I want associated with the new ones I'm creating. Purposely recalling memories in certain ways, and reviewing them after creating them, has helped me strengthen those memories, improving my ability to accurately recall the original event, be it sight, sound, smell, touch, taste, or all of the above.

Whatever current research shows, memory is NOT simply some passive process we have absolutely no control over. It's also an active process that we CAN control, and we can improve our memory if we choose to...either only for specific events of importance, or we can train ourselves to process input in a certain way such that most input is more adequately remembered and strongly associated.

Orangutan said:
I believe the brain only fills in information if it isn't readily accessible. I do believe that for the most part, when we see something, the entirety of what we see is recorded.

Again, my understanding is that recent research shows that the adage "seeing is believing" has it backwards: it should be "believing is seeing." The brain does not record raw image info at all, but constructs a reality that incorporates visual data with existing beliefs and expectations. It's that highly-processed "reality" that's recorded. As an example, back in 2004 there was that videotape of a purported Ivory-Billed Woodpecker. Subsequent analysis showed that it was almost certainly the rather common pileated woodpecker. The "eyewitnesses," however, recall seeing detail that would clearly distinguish it as an IBW. Even if it was a pileated, those witnesses may truthfully and genuinely believe they saw those distinguishing characteristics.

I don't think any of that contradicts the notion that our brains store much or most of everything that goes into them. I don't deny that our beliefs and desires can color HOW we remember...as they could control what we recall. Remember, memory is often about association. If the guy watching the woodpecker was vividly remembering an IBW at the time (it would have been an amazing find, for sure! I really hope they aren't extinct, but... :'(), that wouldn't necessarily change the new memories being created, but it could overpower the new memories with the associations to old memories of IBWs. Upon recall...you aren't just recalling the new memories, but things associated with them as well. What you finally "remember" could certainly be colored by your desires, causing you to misremember. Good memory is not necessarily good recall, and it certainly doesn't overpower an individual's desire for something to be true. All of that gets into a level of complexity about how our brains work that goes well beyond any courses on the subject I've ever taken.

BTW, I am not talking about savants who have perfect memory. Eidetic memories, or whatever you want to call them, are a different thing than what I'm talking about. Eidetic memories are automatic; it's more how those individuals' brains work, maybe a higher and more cohesive level of processing than in normal individuals. That doesn't change the fact that you CAN actively work with your memory to improve it, considerably. I'm not as good at it these days as I used to be...severe chronic insomnia has stolen a lot of abilities like that from me, but when I was younger, I used to have an exceptional memory. I remembered small details about everything because I was always working and reviewing the information going in. Before I took that class, my memory was pretty average; after, and still largely since, it's been better than average to truly excellent.

That has to do with memory creation itself, though...it doesn't mean my memories can't be colored by prior experiences or desires. I think it lessens the chance of improper recall, but it's still possible to overpower a new memory with associations to old ones, and over time, what is recalled may not be 100% accurate (again, not talking about eidetic memories here, still just normal memory). There have been cases of obsessive-compulsive individuals having particularly exceptional memory, on the level of supposed eidetics and in some respects better. For the very, very rare individual, memories become their obsession, and because it's an obsession, every memory is fully explored, strengthened, and associated to a degree well beyond normal. Recall is very fast, and the details can be very vivid. It isn't just image-based either; all sensory input can be remembered this way (sounds, smells, etc.). With such strong associations and synaptic strengthening, such an individual's memories are effectively permanent as well. The difference would be that the obsessive-compulsive chooses which memories to obsess over...so their recall isn't necessarily as complete as an eidetic's (whose memory for imagery is more automatic).
 
Upvote 0
Sporgon said:
100 said:
Sporgon said:
The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure, the Exmor is better.
However, filter manufacturers sell loads of 1-3 stop GND filters, so a couple more stops of dynamic range is useful to a lot of people as well, even people who understand exposure, or should I say, especially to people who understand exposure.

Which is fine for photographs with a split horizon, but what of trees extending into the darker, graduated portion of the frame? I used to be a big GND filter user, but now I prefer using a combination of shots to balance exposures across a frame. I don't like GNDs because I can usually see the GND graduation in the frame, which points to a poor methodology for controlling the contrast in the first place.

Those people will know that by using a 1-3 stop GND you are able to get more light to the non-ND part of the frame, which, depending upon what you are shooting, results in improved data from dark areas whether you are using a 12- or 14-stop DR capable camera. So you probably have as many Sonikon photographers buying them as Canon.
 
Upvote 0
GMCPhotographics said:
Sporgon said:
100 said:
Sporgon said:
The Sony sensor is very good, but if you expose the Canon optimally the difference is generally academic in the vast majority of circumstances. However, if you have no understanding of exposure, the Exmor is better.
However, filter manufacturers sell loads of 1-3 stop GND filters, so a couple more stops of dynamic range is useful to a lot of people as well, even people who understand exposure, or should I say, especially to people who understand exposure.

Which is fine for photographs with a split horizon, but what of trees extending into the darker, graduated portion of the frame? I used to be a big GND filter user, but now I prefer using a combination of shots to balance exposures across a frame. I don't like GNDs because I can usually see the GND graduation in the frame, which points to a poor methodology for controlling the contrast in the first place.

Those people will know that by using a 1-3 stop GND you are able to get more light to the non-ND part of the frame, which, depending upon what you are shooting, results in improved data from dark areas whether you are using a 12- or 14-stop DR capable camera. So you probably have as many Sonikon photographers buying them as Canon.

I agree with you; I haven't used GND filters at all since digital came of age - i.e. for about the last ten years.

100 seemed to me to be suggesting that the continued production of GND filters is to support those poor souls who still use Canon for landscape photography.

Most of my panoramics are shot in the way you describe, but often I don't need to do this. There is more latitude in these modern Canon sensors than some people give them credit for. Here's a shot out of a window at Bolton Castle in the English Yorkshire Dales, where Mary Queen of Scots was imprisoned by Elizabeth I. It's actually taken from a 'garderobe' (toilet) L-shaped passage, and the only light coming in is from this window. It was taken at midday with the sun out.

I've shown the original JPEG from RAW with the JPEG picture style applied. Then I've shown the finished picture, and finally, for those that like absurdity, I've lightened the shadows and brought the sky down to silly levels. Anything blown? No. FPN? Only in the few areas where the sensor recorded zero light, and even then it's not bad.

And this was taken on the 'old' 5DII, which is nothing like as good as the 6D in its latitude and data manipulation.
 

Attachments

  • IMG_8770740.jpg
    IMG_8770740.jpg
    38.4 KB · Views: 673
  • Wensleydale.jpg
    Wensleydale.jpg
    63.2 KB · Views: 650
  • IMG_8770B.jpg
    IMG_8770B.jpg
    71.7 KB · Views: 677
Upvote 0
dilbert said:
K-amps said:
Being in my late teens in the '80s and witnessing how the DACs in CD players went from 16 to 18 to 20 to 24 bits etc. every 2-3 years, then the sampling frequency even faster than that....

Apologies for the stupid question in advance:

What's the huge deal with this 14-bit wall in digital photography? Why can't they make a 16-bit processor and give us 16 stops of DR?



Beyond a certain point, it makes no sense - for example, measurements and analysis by various folks on the web put the capacity of the 5D Mark II's sensor at about 59,000 electrons (ISO 100). That's fewer than the maximum number of different positions achievable with 16 bits. If you can't record 16 bits worth of different electron values, why would you use 20?

Thanks. Again, I am no expert, but perhaps they would use 16 or 20 bits following similar logic to the CD player manufacturers, i.e. larger bit-depth converters are more linear at a given bit depth...i.e. offset the data stream from the sensor by 2 bits and then encode/decode. If nothing else, it might be less noisy. I don't know...just thinking out loud. I would think visual acuity is more sensitive than auditory...so why doesn't this have the (marketing) traction that audio component manufacturers had? In the end it was Sony's idea of using a direct stream (as opposed to a ladder process) that satisfied the most die-hard audiophiles...

How do 59k electrons convert to 14 bits, and what influences the number of electrons? Do Exmors do more than 59k electrons?
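Rough arithmetic on that question (a sketch using only the 59,000-electron figure quoted above; real engineering DR is log2(full_well / read_noise), so with any read noise at all these numbers are upper bounds):

```python
import math

# Quoted full-well capacity for the 5D Mark II at ISO 100 (electrons).
# The ceiling on stops of DR, ignoring read noise, is log2(full_well).
full_well = 59_000
print(math.log2(full_well))      # ~15.85 stops of theoretical ceiling

# A 14-bit ADC has 16384 codes and a 16-bit ADC has 65536, so 59k
# distinct electron counts overflow 14 bits but fit within 16.
print(2**14, 2**16)              # 16384 65536
```

So a 14-bit ADC cannot give each possible electron count its own code; it buckets roughly 3.6 electrons per code, which is one reason a deeper ADC only pays off once read noise is low enough to matter.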
 
Upvote 0
Has any manufacturer implemented a dual scan, or dual-pixel single scan, of a read whereby one scan reads the scene (as an example) 6 stops over and the second scan reads 6 stops under, and these scans are then merged to output a file with 12 more stops of DR than a single read?

If the dual scan would cause blur for fast-moving objects...perhaps have a dual-pixel readout where each partner pixel offsets the recording of the image by +/- 6 stops, and either yield two RAW files for manual processing, or do in-camera processing to compress the file into visible DR for output purposes...
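As far as I know no manufacturer ships exactly this, but the merge step being described can be sketched like so (hypothetical function and numbers, ignoring alignment, motion, and noise weighting entirely):

```python
import numpy as np

def merge_dual_read(dark_read, bright_read, stops_apart=12.0, clip=16383):
    """Merge two 14-bit reads exposed `stops_apart` stops apart into one
    linear high-DR array. Hypothetical sketch, not any real camera's method."""
    scale = 2.0 ** stops_apart
    dark = dark_read.astype(np.float64)      # under-exposed: keeps highlights
    bright = bright_read.astype(np.float64)  # over-exposed: keeps shadows
    # Where the bright read clipped, trust the scaled dark read instead
    return np.where(bright >= clip, dark * scale, bright)

# Toy usage: one mid-tone pixel, one highlight that clips the bright read
bright = np.array([100, 16383], dtype=np.uint16)  # second pixel clipped
dark   = np.array([0, 50], dtype=np.uint16)       # same scene, 12 stops down
print(merge_dual_read(dark, bright))              # [100.0, 204800.0]
```

The merged array needs far more than 14 bits of numeric space, which is exactly why such a result could not be stored in a standard 14-bit RAW container without tone-mapping it back down.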
 
Upvote 0
I think a more pertinent question would be why isn't everyone bored of this discussion, since there have been near on a billion threads about it, all running on endlessly.... Yes, I know I exaggerated the billion threads part...but since everything else in these threads is exaggerated to the point of absurdity, why not? :P
 
Upvote 0
dilbert said:
neuroanatomist said:
dilbert said:
Or maybe they're using a spreading function (i.e applying a curve to the sensor feed) rather than doing a linear conversion?
Whether they are interpolating or extrapolating is irrelevant - the former is fabricating data inside the existing range, the latter is fabricating data outside the existing range, but either way the data are being fabricated.

They don't need to be interpolating or extrapolating, just not using a 1:1 matching for raw data.

So...Sony's RAW files, much like your arguments, are half-baked. No surprise, really.
 
Upvote 0
dilbert said:
Sony's OWN SITE says the Sensor output is 14-bit. The sensor is an Exmor. Exmor uses CP-ADC ON-DIE. The last output of the sensor is FROM the ADC.

Therefore...the A7s IS 14-BIT!
...

Yup. As you say, Sony show they've got a 14-bit sensor output delivering 15.3 stops of DR. Interesting.

I can see you and neuro and a whole host of other web folks getting ready to point out how it is impossible, and when DxO measure it and say they've got 15.3 stops of DR, DxO will be telling lies and faking it, and then someone will go out and do real-world tests that also corroborate it, and you'll all still be saying that it is a lie.

I want to see what they've got to show before I denounce them. After all, they're already getting more than 14 stops of DR (DxO says 14.8?) with the Nikon D800, so why should another 0.5 stop of DR be unreasonable? It would be interesting to see what a real-world test of the DR of those cameras turned up, kind of aligned with what the Zacutto(?) folks did with various cameras for video.

Obviously it isn't a linear conversion (even now) but I'm kind of curious to know what's going on.

A curve of some kind is pretty obvious, whether it is a gamma curve or something else ...

Well, it's no surprise that you buy into DXO's bull. There are two values on DXO's site for DR. One is a measure, as in something actually MEASURED from a REAL RAW file. The other is an EXTRAPOLATION. It isn't even a real extrapolation, it is just a number spit out by a simple mathematical formula...they don't actually even do what they say they are doing.

The first of these is Screen DR. Screen DR is the ONLY actual "measure" of dynamic range that DXO does. It is the SINGLE and SOLE value for DR that is actually based on the actual RAW data. In the case of the D800....do you know what Screen DR is? (My guess is not.)

The other of these is Print DR. Print DR is supposedly the dynamic range "taken" from a downsampled image. The image size is an 8x12 "print", or so DXO's charts say. As it actually happens, and this is even according to DXO themselves...Print DR is not a measure at all. It isn't a measurement taken from an actually downsampled image. You know what it is? It is an extremely simple MATHEMATICAL EXTRAPOLATION based on...what? Oh, yup...the only actual TRUE MEASURE of dynamic range that DXO has: Screen DR. Print DR is simply the formula ScreenDR + log2(sqrt(N/N0)), where N is the actual image size and N0 is the supposed downsampled size. The formula is rigged to guarantee that "Print DR" is higher than Screen DR...not even equal to, always higher. And, as it so happens, it is potentially 100% unrelated to reality, since it is not actually measured.

DXO doesn't even have the GUTS to ACTUALLY downsample real images and actually measure the dynamic range from those downsampled images. They just run a mathematical formula against Screen DR and ASSUME that the dynamic range of an image, IF they had downsampled it, would be the same as what that mathematical value says it should be.

Print DR is about as bogus as "camera measurement 'science'" can possibly get. It's a joke. It's a lie. It's bullshit. The D800 does not have 14.4 stops of DR, as DXO's Print DR would indicate. The Screen DR measure of the D800? Oh, yeah...it's LESS than 14 stops, as one would expect with a 14-bit output. It's 13.2 stops, over ONE FULL STOP less than Print DR. The D600? Print DR says 14.2, but Screen DR is 13.4. D610? Print DR 14.36, but Screen DR 13.55. D5300? Print DR 13.8, but Screen DR 13. A7? Print DR 14, but Screen DR 13.2. A7s? Print DR 14, but Screen DR 13.

NOT ONE SINGLE SENSOR with 14-bit ADC output has EVER actually MEASURED more than 14 stops of dynamic range. That's because it's impossible for a 14-bit ADC to put out enough information to allow for more than 14 stops of dynamic range. There simply isn't enough room in the bit space to contain enough information to allow for more than 14 stops...not even 0.1 more stops. Every stop is a doubling, just as every bit is a doubling. Bits and stops, in this context, are interchangeable terms. In the first bit you have two values. With the second bit, your "dynamic range" of number space doubles...you now have FOUR values. Third bit, eight values. Fourth bit, sixteen values. Fifth bit, thirty-two values. To begin using numeric space beyond what the 14th bit allows, which would be necessary to start using up some of the 15th stop of dynamic range, you need at least 15 bits of information. It's theoretically, technologically, and logically impossible for any camera that uses a 14-bit ADC to have more than 14 stops of dynamic range.
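For what it's worth, the Print DR normalization described above is easy to reproduce (using the D800 Screen DR value quoted in this post; the ~8.6 MP reference is the 8x12" at 300 ppi figure, so treat the exact constant as an assumption):

```python
import math

def print_dr(screen_dr, n_pixels, n0_pixels=8_640_000):
    """DXO's 'Print DR' normalization as described above:
    ScreenDR + log2(sqrt(N / N0)), where N is the sensor's pixel count
    and N0 is the ~8.6 MP (3600x2400) reference 'print' size."""
    return screen_dr + math.log2(math.sqrt(n_pixels / n0_pixels))

# D800: 36.3 MP sensor, Screen DR of 13.2 stops (figures quoted above)
print(round(print_dr(13.2, 36_300_000), 2))  # 14.24, near the published 14.4
```

Note the formula can only ever add to Screen DR whenever N > N0, which is the "rigged to always be higher" point made above.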

Here is another fact about dynamic range. Dynamic range, as most photographers think about it these days, is the number of stops of editing latitude you have. While it also has connotations to the amount of noise in an image, the biggest thing that photographers think about when it comes to dynamic range is: How many stops can I lift this image? We get editing latitude by editing RAW images. RAW. Not downsampled TIFFs or JPEGs or any other format. RAW images. How do we edit RAW images? Well...as RAW images. There IS NO DOWNSAMPLING when we edit a RAW image. Even if there was...who says that we are all going to downsample our images to an 8x12" print size (3600x2400 pixels, or 8.6mp)? We edit RAW images at full size. It's the only possible way to edit a RAW image...otherwise, it simply wouldn't be RAW, it would be the output of downsampling a RAW to a smaller file size...which probably means TIFF. Have you ever tried to push the exposure of a TIFF image around the same way you push a RAW file around? You don't even get remotely close to the kind of shadow lifting or highlight recovery capabilities editing a TIFF as you do a RAW. Not even remotely close. And the editing latitude of JPEG? HAH! Don't even make me say it.

Therefore, the ONLY valid measure of dynamic range is the DIRECT measure, the measure from a RAW file itself, at original size, in the exact same form that photographers are going to be editing themselves. Screen DR is the sole valid measure of dynamic range from DXO. Print DR is 100% bogus, misleading, fake.

It doesn't matter what Sony does in their BIONZ X chip. The sensor output is 14-bit RAW. The only thing their BIONZ chip can do is...the same thing YOU do. They can lift shadows and compress highlights. They can shift exposure around and reduce noise by applying detail-softening noise reduction algorithms. But then you don't actually have a RAW file anymore. You have a camera-modified file. With Sony's propensity for using a lossy compression algorithm in their RAWs, you don't even get full 14-bit precision data per pixel, and that fact has shown up in many cases when people go to edit their Sony RAWs in post. The compression artifacts can be extreme. I find it simply pathetic that Sony, with all this horsepower under their thumb, would completely undermine it all by storing their RAW images in a lossy compressed format. It completely invalidates the power of their sensors, and speaks to the fact that Sony is probably just as schizophrenic internally as Nikon is. That will lead to inconsistent products and product lines, poor product cohesion, lackluster design for OTHER aspects of their cameras beyond the sensor, etc. We're already seeing many of these problems with Sony cameras. Their sensors may be good, but how Sony themselves are using their sensors is crap.
 
Upvote 0
dilbert said:
jrista said:
...
The first of these is Screen DR. Screen DR is the ONLY actual "measure" of dynamic range that DXO does. It is the SINGLE and SOLE value for DR that is actually based on the actual RAW data. In the case of the D800....do you know what Screen DR is? (My guess is not.)

Curious then that someone else has slurped up DxO's data and says that the DR for the D800 is 14.0.

That's because it's impossible for a 14-bit ADC to put out enough information to allow for more than 14 stops of dynamic range. There simply isn't enough room in the bit space to contain enough information to allow for more than 14 stops...not even 0.1 more stops. Every stop is a doubling. Just as every bit is a doubling.

Assuming the relationship is linear.

It doesn't matter what Sony does in their BONZ X chip. The sensor output is 14-bit RAW. The only thing their BIONZ chip can do is...the same thing YOU do.

Unless they've come up with a new algorithm to map sensor output.

Or maybe they've decided that DR is like ISO and can be used for marketing joy. At this point it's all speculation because nobody has one of their cameras to test.

It doesn't matter what they do in the middle. The FILE you WORK WITH is 14-bit. All BIONZ does is some post-processing. That doesn't change the dynamic range; it only changes the contrast within the original dynamic range. That's it. There is no magic here, nothing special. A 14-bit file has enough numeric space for 14 stops of dynamic range. Anything you do to the data, such as applying a non-linear curve, simply COMPRESSES the information contained within those original 14 stops to SOMETHING LESS!! There is no ex nihilo here.

It's no different than, say, upconverting from sRGB to AdobeRGB. There is no value in doing that, since you already lost the original color fidelity when you converted to sRGB in the first place. You cannot restore colors in the AdobeRGB space once they are lost...they are gone forever. It's the same with bit depth and stops of dynamic range.

The only way Sony can literally achieve 15.3 stops of dynamic range is if their sensor's ADC units are 16-bit, the processing pipeline is 16-bit, AND the output RAW file is 16-bit. And I mean a FULL 16 bits, not some half-witted 11+7-bit lossy compressed RAW file, as that would just decimate the full tonal range. I mean a full, uncompressed (or at least lossLESSly compressed) 16-bit RAW file. There is no other way to achieve more than 15 stops of dynamic range...you have to have the numeric space to represent those stops.
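The numeric-space argument is simple to check: every extra stop doubles the number of levels a linear container must hold, so 15.3 stops cannot fit in 14 bits no matter what curve you apply afterward.

```python
# A k-bit linear container holds 2**k distinct levels, i.e. at most
# k stops of doubling range.
for bits in (14, 15, 16):
    print(bits, 2**bits)         # 14 -> 16384, 15 -> 32768, 16 -> 65536

# 15.3 stops would need 2**15.3 distinct linear levels...
print(round(2**15.3))            # 40342: overflows 14 bits, fits in 16
```

A non-linear encoding can squeeze a wider scene range into fewer codes, but only by discarding tonal resolution somewhere, which is the "compresses to something less" point above.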
 
Upvote 0