Multilayer Sensors are Coming From Canon [CR2]

We also shouldn't forget that Fujifilm and Panasonic are collaborating on organic sensors with BSI.

http://www.fujifilm.com/news/n130611.html

http://www.fujirumors.com/updated-organic-sensor-patent/

And also Sony.

http://thenewcamera.com/sony-hybrid-organic-inorganic-sensor-patent/

All due in cameras in 2015.
 
Upvote 0
Lee Jay said:
A problem with Foveon sensors is lousy color separation. It's not a red, blue and green layer, it's three white layers with a little bit of bias on each one. This is why they have lousy, inaccurate colors with lots of color artifacts like purple and green splotches all over the place.

I hope Canon has a way to dramatically improve on Foveon sensors before they'd release this into the wild. Foveons have lousy DR, lousy high-ISO performance, lousy colors, and the lack of an AA filter means a ton of aliasing artifacts.

Agreed, they'd have to have had mega tech breakthroughs. That seems unlikely. However, I don't know that there is anything in basic physics that instantly jumps out at you and says that such a breakthrough would be impossible in this case.
 
Upvote 0
BozillaNZ said:
jrista said:
Think about it. Say read noise is 35e- and the maximum signal strength is 68,000e-. If you're doing 16-bit conversion, then your gain is 1.0376e-/ADU. That's almost unity gain...at ISO 100! Unity gain is what you want. With 14-bit conversion, your gain is 4.15e-/ADU. So, with 14-bit, your read noise turns into 8-9 tonal levels. With 16-bit, your read noise turns into 33-34 tonal levels. That's the bottom of the signal, though. For 16-bit, you still have 65502 levels for all the signal detail above the noise floor. Any gradients in the image at tones 35 through 65535 are going to be smoother with 16-bit conversion than with 14-bit conversion.

The noise floor does not 'exist' only at the lower end of the signal range; it exists throughout the range, all the way up to the clip point.

Let's do a math exercise:

Here's a stream of signal:
0 10 100 1000 10000 100000

If my sensor has a noise floor of 3 bits (0-8), then even if I sample at full precision, I get this:
3 12 107 1004 10002 100005; my lowest 3 bits are drowned in noise, even in the highlights.

If I sample it at 1/8 precision (chopping off the lower 3 bits), I get:
0 1 13 125 1250 12500

Then I recreate the signal by multiplying the sampled values by 8:
0 8 104 1000 10000 100000

And mix in 3 bits from a random number generator:
5 10 109 1001 10007 100004

The results of high- and low-precision sampling fluctuate only within the noise floor, so they are essentially the same.

The conclusion? If you sample with more precision than your SNR supports, you are just sampling the noise more precisely, which is still noise, and it's the same as sampling less precisely and adding noise in post.

Applying this to your example: with a 35e- noise floor, your tonal range is not 65535 - 35 = 65500 levels, but rather 65535 / 35 = 1872 levels (~10.8 stops), because the bottom 5 (!) bits are unstable noise.
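
To make the point concrete, here is a minimal numpy sketch of the same exercise, using jrista's 68,000e- full well and 35e- read noise figures (the Gaussian noise model and the clean ramp signal are just illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

full_well = 68_000   # e-, jrista's example figure
read_noise = 35      # e- RMS, jrista's example figure

signal = np.linspace(0, full_well, 1_000_000)            # clean ramp
noisy = signal + rng.normal(0, read_noise, signal.size)  # add read noise

def quantize(x, bits):
    """Round to the ADC step implied by spreading full well over 2**bits codes."""
    step = full_well / (2**bits - 1)   # e- per ADU
    return np.round(np.clip(x, 0, full_well) / step) * step

err14 = quantize(noisy, 14) - signal
err16 = quantize(noisy, 16) - signal

# Both errors are dominated by the 35 e- read noise; the extra two
# bits barely change the total error, exactly as argued above.
print(f"RMS error at 14 bits: {err14.std():.2f} e-")
print(f"RMS error at 16 bits: {err16.std():.2f} e-")
```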

+1
 
Upvote 0
V8Beast said:
jrista said:
I'd say that most FF cameras could benefit from a 16-bit ADC. Even Canon cameras, which still have high read noise, can benefit. You won't see an editing-latitude increase on a Canon camera (not with current read noise levels, anyway), however overall, you should still see improved tonal grading. Convert 65, 80, 150 thousand electrons into 16k digital units, and you're needlessly limiting your tonal range. Convert 65, 80, 150 thousand electrons into 65k digital units, and you greatly expand your tonal range...that should mean smoother gradients, softer shadow falloff (until you hit the read noise floor), etc.

Very interesting info. Other than higher bit ADC, what other methods can be used to improve tonal range?

I'm often shooting product images of engine or suspension parts that are all slightly different shades of gray, white, or black. Since I have 100 percent control over the lighting, in these scenarios I'm far more interested in improvements in tonal range than in dynamic range, although dynamic range is still a nice convenience.

Similar product shots I've seen captured with medium format gear absolutely kick the $hit out of anything captured on 35mm sensors in terms of super-fine tonal gradations. It makes me envious, but I don't shoot nearly enough product gigs like this to warrant investing in medium format.

Improving SNR/DR so you can make use of more captured bits per channel, and tuning the color filter array more tightly (and likely making it a lot less color-blind than CFAs have become, although then you run into decreasing SNR, so it's a balance; since noise is not so bad in the upper tones at low ISO, I think you'd do better with a more tightly refined color filter array). You may also want a display with 10 bits or more per channel.
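
For the tonal-range question specifically, here is a quick back-of-the-envelope using the full-well and read-noise figures quoted earlier in the thread (illustrative numbers, not measurements of any specific camera):

```python
import math

full_well = 68_000   # e-, example figure from earlier in the thread
read_noise = 35      # e- RMS

# Distinguishable tonal levels are bounded by noise, not by ADC codes:
levels = full_well / read_noise
print(f"~{levels:.0f} usable levels (~{math.log2(levels):.1f} stops)")

# ADC resolution at each bit depth, for comparison:
for bits in (14, 16):
    step = full_well / (2**bits - 1)
    print(f"{bits}-bit: {2**bits} codes, {step:.2f} e-/ADU")
```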
 
Upvote 0
I wonder if it would be helpful to have a quasi-Bayeresque multi-layer sensor.

So instead of every pixel being built with what I assume would be a B/G/R stack, they could have some interspersed pixels with other channel arrangements (e.g., R/G/B).
 
Upvote 0
Woody said:
Completely off-track here, but I have a question for Fujifilm users. Doesn't Adobe still struggle with Fujifilm RAW files? I know there have been improvements, but they are still not entirely artifact-free. So, how does one cope with that?

No, starting with Lightroom 5.4 and the associated Camera Raw release, it's perfectly fine. They also added the Fuji camera profiles (Provia, Astia, etc.). The camera JPEGs and other RAW converters (especially PhotoNinja) can pull a bit more detail out than Lightroom can, but I only worry about that for a large print or a tightly cropped image. LR is great 95% of the time. Here is a comparison. I took a screenshot while zoomed to 100% in Lightroom. RAF is left, JPEG is right. I chose an area with green because that is where detail is hardest to come by. Edit: the forum recompresses the upload when shown here in the post. You can right-click it and choose "Open image in new tab" in the Chrome web browser to see the full-size screenshot. Other browsers are similar.

Also here is a link to view and download the camera JPG in Google Drive: https://drive.google.com/file/d/0Bxu8IhRmJPZ2Rmc2c1BwUmNwd1E/view?usp=sharing
 

Attachments

  • Screen Shot 2014-10-09 at 1.55.45 PM.png (2 MB)
Upvote 0
jrista said:
StudentOfLight said:
I wonder if it would be helpful to have a quasi-Bayeresque multi-layer sensor.

So instead of every pixel being built with what I assume would be a B/G/R stack, they could have some interspersed pixels with other channel arrangements (e.g., R/G/B).

Personally, I like Panasonic's color-splitting technique. It still preserves all of the light, but it has W-R and W+R pixels (white minus/plus red, which end up being "bluish" and "reddish" pixels in the end). It doesn't filter at all; it just splits the incoming light and directs one component of it (red) to different pixels. It's supposedly 100% transmission (probably not exactly, 99.something%), and 2-3 times the sensitivity of Bayer sensors (which should mean it's far more sensitive than a layered sensor, with high dynamic range and color fidelity):

http://image-sensors-world.blogspot.com/2013/02/panasonic-develops-micro-color-splitters.html

Dichroic and trichroic filters have been used before, most notably in 3-CCD camcorders.
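
As a quick sanity check on the arithmetic (an idealized sketch only; the actual splitter optics and reconstruction are more involved than this), the W+R / W-R pair does let you recover both white and red by sum and difference:

```python
# Idealized recovery from Panasonic-style splitter pixels:
# a "W+R" pixel and a neighboring "W-R" pixel (lossless split assumed).
def recover(w_plus_r, w_minus_r):
    white = (w_plus_r + w_minus_r) / 2   # total luminance
    red = (w_plus_r - w_minus_r) / 2     # red component
    return white, red

print(recover(w_plus_r=150.0, w_minus_r=90.0))  # -> (120.0, 30.0)
```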
 
Upvote 0
jrista said:
Lee Jay said:
jrista said:
StudentOfLight said:
I wonder if it would be helpful to have a quasi-Bayeresque multi-layer sensor.

So instead of every pixel being built with what I assume would be a B/G/R stack, they could have some interspersed pixels with other channel arrangements (e.g., R/G/B).

Personally, I like Panasonic's color-splitting technique. It still preserves all of the light, but it has W-R and W+R pixels (white minus/plus red, which end up being "bluish" and "reddish" pixels in the end). It doesn't filter at all; it just splits the incoming light and directs one component of it (red) to different pixels. It's supposedly 100% transmission (probably not exactly, 99.something%), and 2-3 times the sensitivity of Bayer sensors (which should mean it's far more sensitive than a layered sensor, with high dynamic range and color fidelity):

http://image-sensors-world.blogspot.com/2013/02/panasonic-develops-micro-color-splitters.html

Dichroic and trichroic filters have been used before, most notably in 3-CCD camcorders.

Sure, but those were pretty bulky, with three full-sized sensors. This is a fully integrated per-pixel solution.

You saw this?

http://www.dpreview.com/articles/6555348105/nikonimagesensor
 
Upvote 0
Yep. That Panasonic idea is quite interesting, and I'm hearing about it here for the first time. I seem to have missed it.

About that 16-bit talk a few posts earlier... I wonder about one alternative (aside from sports photography, maybe): Canon could develop an image processor that uses stacked/HDR-like photos to produce a 32-bit TIFF or even a RAW (which is similar), perhaps combined with that Dual ISO exploit from Magic Lantern (whose mechanics I honestly don't remember very well). I quite enjoy the Photoshop feature for 32-bit TIFFs, but native is always better. I admit 10 fps might not be possible then, but who knows... Anyway, the tech is here: hardly much R&D is required for this kind of feature compared to the 16-bit sensor option. Although, having in mind jrista's data on the wells... who knows. But 32 is always bigger than 16 ;-)
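
For what it's worth, the core of that stack-to-32-bit idea is simple enough to sketch (a hypothetical merge, not Canon's or Adobe's actual pipeline; the clip threshold and the averaging are invented for illustration):

```python
import numpy as np

def merge_to_32bit(frames, exposures):
    """Merge bracketed linear frames into one 32-bit float radiance map.

    frames: list of same-shape arrays (same scene, same ISO)
    exposures: relative exposure times, e.g. [1, 4, 16]
    """
    acc = np.zeros(frames[0].shape, dtype=np.float32)
    weight = np.zeros_like(acc)
    for frame, t in zip(frames, exposures):
        valid = frame < 0.98 * frame.max()      # skip clipped highlights
        acc[valid] += frame[valid].astype(np.float32) / t
        weight[valid] += 1
    return acc / np.maximum(weight, 1)          # average the valid reads
```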


jrista, did you get your numbers from http://sensorgen.info/? BTW, their domain has expired... Bah! The Google cache is still there :-)
 
Upvote 0
jrista said:
Canon, while a highly innovative company, doesn't seem to be all that innovative when it comes to sensors. (Which is REALLY weird to me...given that they are an imaging company...pretty much everything they do revolves around digital image sensor technology.) Canon just doesn't seem to be in the sensor innovation game right now...

And this is with Aptina's current HDR technology...I can only imagine what they can do with multi-bucket technology.

Canon did change the sensor game when they first started with CMOS technology instead of the widely accepted CCD stuff. But after Sony embarked on their own CMOS sensors (first shown in D300/D3), Canon basically stagnated. Sigh... I am still waiting for them to leapfrog Sony... not sure if the day will ever come...
 
Upvote 0
jrista said:
Yeah, I'm bummed the sensorgen.info domain expired. Those guys do seem to update the site with recent cameras...maybe it will come back when they notice the domain expired.
I will try to ask around about it :-(

jrista said:
Regarding the whole stacking/in-sensor HDR idea. Other companies are working on that. There are patents that support just that very thing. Some companies are even researching how to use dual-gain (basically the same thing as Magic Lantern's Dual ISO) to improve dynamic range WELL beyond 14 stops (20, 22 stops maybe more).
Are you sure about that? However, simple stacking of 3/5/9 frames at different stops, embedded in one 32-bit image, is best even without the multi-bucketing. The latter would only be useful for sports and high-fps photography.

However, if you have some time, could I kindly ask for a link or two to the patents? They would certainly be of high interest to me, and you definitely read patents better than I do. :-[
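
On the dual-gain/Dual ISO point, the combining step is conceptually just this (a sketch of the general idea only; the names, the threshold, and the scaling are illustrative and not Magic Lantern's actual code):

```python
import numpy as np

def combine_dual_gain(low_gain, high_gain, gain_ratio, clip_level):
    """Use the high-gain read for shadows (better SNR) and fall back to
    the scaled low-gain read where the high-gain read has clipped."""
    out = low_gain.astype(np.float32) * gain_ratio   # bring to high-gain scale
    ok = high_gain < clip_level                      # high-gain not clipped
    out[ok] = high_gain[ok].astype(np.float32)
    return out
```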

jrista said:
This is why I'm frustrated with Canon, and now open to alternative brands. Canon, while a highly innovative company, doesn't seem to be all that innovative when it comes to sensors. (Which is REALLY weird to me...given that they are an imaging company...pretty much everything they do revolves around digital image sensor technology.) Canon just doesn't seem to be in the sensor innovation game right now. They have had a couple of innovations, but none of them (at least so far...maybe their layered sensors will change things) have been very groundbreaking. DPAF is pretty awesome for video...but even that was an evolution of an idea Fuji originally implemented (and I think Fuji got it from a much older paper.)
Yeah! Tell me about it.

On at least one thing, that Neuro impostor was IMO right:

"the real problems are the managers at canon
they are lazy and have made their fortune.
hell is there even one under 60?
"

Or maybe Canon was just more conservative on R&D spending (which I believe is not the case, since, if I recall correctly, they have kept investing about 2% of their profit).

jrista said:
Back to the HDR sensor topic. I think the most viable technology at the moment is multi-bucket CCD-backed pixels. This is an Aptina innovation...they took the basic single-CCD backing buffer or "memory" for global shutter, and expanded it to four CCDs per pixel (it's a CCD, or charge-coupled device, as that's a very simple and efficient way to move charge from one place to another). With four memories per pixel, the charge in the pixel can be moved to memory four times.
Patent link, patent link PLZZZzzzzz

As for the global shutter - don't even remind me about that. Everybody is slowly getting on that ship too. If I remember correctly, Canon has no such feature in any of its CMOS sensors in any pro-level camera.

And that is a good trend to follow at least.
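
To illustrate the multi-bucket idea jrista describes above (a toy numeric model with invented numbers; the real pixel design is in the paper linked a couple of posts below), four sub-exposures of different lengths can cover a huge scene range, with each pixel reconstructed from its longest unclipped bucket:

```python
import numpy as np

rng = np.random.default_rng(1)

flux = rng.uniform(1, 1000, (3, 3))   # scene flux in e-/ms (toy values)
bucket_times = [1, 4, 16, 64]         # ms, one sub-exposure per bucket
full_well = 10_000                    # e- per bucket (assumed)

buckets = [np.minimum(flux * t, full_well) for t in bucket_times]

# Reconstruct each pixel from its longest unclipped bucket:
est = np.zeros_like(flux)
for b, t in zip(buckets, bucket_times):
    ok = b < full_well
    est[ok] = b[ok] / t               # longer exposures overwrite shorter

print(np.allclose(est, flux))         # True: full range recovered
```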
 
Upvote 0
jrista said:
Not the patent, not sure where that is. Here is a paper on the topic, though:

https://graphics.stanford.edu/papers/gordon-multibucket-jssc12.pdf

Mostly by Aptina engineers.
Nice one, but the main goal should be to get all 32 bits of information, not some discrete auto-HDR image.

I love to walk around with a brush or any other PS tool in order to get what I want.
 
Upvote 0
jrista said:
Jon_D said:
Will Canon use more than 3 layers?

What do the patents say?

From what I can gather from the poor Japanese translation, the UV and IR layers are only for blemish removal. They are not used for output pixel generation...that still uses only the RGB layers...however, those layers are modified by the blemish-removal logic before demosaicing (at least as I understood it...and I'm honestly not sure whether that applies only to JPEG or includes RAW...I kind of think it's JPEG only.)

I would expect the extra information to be included in RAW files as a separate data layer, so that post-processing tools could benefit from it in the same way. That would also have the rather sizable advantage of making it possible to do IR photography with an unmodified camera just by changing the way that you combine the various layers in post-processing, which would be really cool.

Incidentally, there shouldn't be any demosaicing with a multilayer sensor. That's the whole point of having spatially coincident subpixels.
 
Upvote 0
jrista said:
It would also be a tremendous amount of data, and a lot more data to be factored into image processing. Five layers at 25 megapixels is 125 million photodiodes. At 14 bits, that's around 235-245 megabytes per image. RAW editors would also have to add the right kind of support to utilize those extra layers.

Even three layers would be unworkable uncompressed at 25 megapixels per layer. It's hard enough to deal with 25–30 megabyte image files, much less four times that. They're clearly going to have to come up with a good lossless compression algorithm. A lossless scheme similar to PNG should get you about 2.7:1 compression, which means about 81 MB with all five layers included, or 49 MB with only three layers. But I think it is possible to do better than 2.7:1. After all, the high order bits of nearby pixels are likely to be fairly similar except near high-contrast edges, and the more bit depth you have, the more identical bits you'll probably have.
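
The arithmetic behind those figures, for anyone who wants to play with the assumptions (the packed-14-bit vs. 16-bit-padded comparison is my addition; the 2.7:1 ratio is the PNG-like estimate above):

```python
def raw_size_mb(megapixels, layers, bits, compression=1.0):
    """Back-of-the-envelope RAW size in megabytes."""
    return megapixels * 1e6 * layers * bits / 8 / compression / 1e6

print(raw_size_mb(25, 5, 14))                   # ~219 MB, packed 14-bit
print(raw_size_mb(25, 5, 16))                   # 250 MB if padded to 16 bits
print(raw_size_mb(25, 5, 14, compression=2.7))  # ~81 MB, five layers
print(raw_size_mb(25, 3, 14, compression=2.7))  # ~49 MB, three layers
```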
 
Upvote 0
jrista said:
dgatwood said:
jrista said:
It would also be a tremendous amount of data, and a lot more data to be factored into image processing. Five layers at 25 megapixels is 125 million photodiodes. At 14 bits, that's around 235-245 megabytes per image. RAW editors would also have to add the right kind of support to utilize those extra layers.

Even three layers would be unworkable uncompressed at 25 megapixels per layer. It's hard enough to deal with 25–30 megabyte image files, much less four times that. They're clearly going to have to come up with a good lossless compression algorithm. A lossless scheme similar to PNG should get you about 2.7:1 compression, which means about 81 MB with all five layers included, or 49 MB with only three layers. But I think it is possible to do better than 2.7:1. After all, the high order bits of nearby pixels are likely to be fairly similar except near high-contrast edges, and the more bit depth you have, the more identical bits you'll probably have.

Storage space probably isn't nearly as big a concern, as yes, you can compress the files. However, when you're working on them, you need the full pixel data. It's like opening a large 16-bit or 32-bit TIFF in Photoshop...if you look at the memory usage, it is usually several hundred megs.

So what? When you're working in Lightroom or Camera Raw, you're working on demosaiced data anyway, at 16 bits per channel for four channels. The size is 8 bytes * pixel count.
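
Plugging in pixel counts shows how that adds up (simple arithmetic on the 8-bytes-per-pixel figure above):

```python
# 16 bits/channel x 4 channels = 8 bytes per demosaiced pixel:
for mp in (25, 50):
    print(f"{mp} MP -> {mp * 8} MB held in memory")
```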
 
Upvote 0
I agree, it would be pretty cool to have an IR layer that was usable, but I don't think it'll happen immediately. I just don't see image files that large being...viable.

I don't agree at all. It might not be viable on some systems, but much larger files are quite viable on today's pro systems. I regularly work with stitched air photos that can be tens or hundreds of gigs in size.

20 years ago, a Photoshop filter could slow a Mac Pro down.
Now they can apply filters in real time to 4K video. For still photos, we've had grossly excessive computing power available for a long time.

A tablet might be overwhelmed, but there are certainly computers you can buy that wouldn't break a sweat over a 5-layer Phase One file.
 
Upvote 0
jrista said:
You're rendering demosaiced data, which doesn't necessarily require the constant memory space.

Yes, it does. When you're in the Develop module of Lightroom or using Camera Raw, the entire 64 bit per-pixel image is in memory. The rendered view is on top of that.
 
Upvote 0
A few questions pertaining to the usefulness of capturing IR data in a separate channel:
1) Can humans see infrared?
2) How much of the IR spectrum can be transmitted through DSLR lenses?
3) Can you gain added colour accuracy by sampling additional channels that overlap with wavelengths outside human visual perception?
4) For a given ISO and aperture, what is the difference in exposure time needed to create an IR image vs. a visible-light image?
 
Upvote 0
StudentOfLight said:
A few questions pertaining to the usefulness of capturing IR data in a separate channel:
1) Can humans see infrared?
No.
2) How much of the IR spectrum can be transmitted through DSLR lenses?
All of the near-IR spectrum. But little gets through the sensor's IR filter.
3) Can you gain added colour accuracy by sampling additional channels that overlap with wavelengths outside human visual perception?
I doubt it.
4) For a given ISO and aperture, what is the difference in exposure time needed to create an IR image vs. a visible-light image?
With the IR filter, orders of magnitude.
 
Upvote 0