
Author Topic: Canon Dual-Scale Column-Parallel ADC Patent  (Read 6910 times)

CarlTN

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 2227
    • View Profile
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #15 on: December 15, 2013, 02:29:53 AM »
Also, Canon has a prior Foveon-like patent, and now this patent.  While there is certainly a large (sometimes insurmountable) gap between patent and product, these patents belie the statements of those who suggest Canon is failing to innovate in the area of sensor design (as do prototypes like the 120 MP APS-H sensor).

But was that 120MP APS-H sensor ever tested?  What process was used to produce it?  All I've seen on here is how Canon's process hasn't gotten small enough, but something must have been small enough to make that sensor.

As you might know, I'm a bit of a fan of the Foveon technique, so if Canon actually produces one for sale, maybe it will perform really well.  It does seem almost plausible to me that the rumored 60 or 75MP sensors would use the technique...because at this point, is a Bayer RGB array with that many photosites really viable on a 36mm-wide sensor?  It seems like that would descend into the noise levels of compact point-and-shoot sensors...



jrista

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 3741
  • POTATO
    • View Profile
    • Nature Photography
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #16 on: December 15, 2013, 11:13:41 AM »
Quote from: CarlTN on December 15, 2013, 02:29:53 AM

It was most certainly tested. That was the entire point of the press release so many years ago...that they had successfully fabricated AND tested a 120MP APS-H sensor capable of 9.5 frames per second. It was an amazing feat. As for process, it would have had to be done on their small-form-factor fab, as the pixels would have been only about 2µm in size (too small for a 500nm process to effectively create, especially with the added logic for column-parallel readout, which the press release did mention). Canon does have the capacity to fabricate larger sensors in multiple exposures on the fab dedicated to their smaller parts. It isn't particularly efficient, but that doesn't matter when you are only creating a few prototype parts for testing.
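For a rough sanity check on that ~2µm figure (the APS-H dimensions below are approximate, and 120MP is treated as a round number):

```python
import math

# Back-of-the-envelope pixel pitch for a 120 MP APS-H sensor.
# APS-H dimensions are approximate (~28.1 x 18.7 mm).
width_mm, height_mm = 28.1, 18.7
pixels = 120e6

area_per_pixel_mm2 = (width_mm * height_mm) / pixels
pitch_um = math.sqrt(area_per_pixel_mm2) * 1000  # mm -> µm

print(f"approximate pixel pitch: {pitch_um:.2f} µm")
```

That lands right around 2.1µm, consistent with the claim that a 500nm process couldn't practically build it.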
My Photography
Current Gear: Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: Canon 5D III/7D II | Canon EF 300mm f/2.8 L II

jrista

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 3741
  • POTATO
    • View Profile
    • Nature Photography
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #17 on: December 15, 2013, 11:17:18 AM »
Very interesting, but recently you had said that if Canon relies on dual ISO, that's only a bandaid, and might not yield enough of a DR increase, at least with the combined benefit of a lower noise floor.  Obviously you meant more akin to what ML did, rather than starting from quasi-scratch, as this link hints at.

Using the existing downstream amplifier on half the pixels, which is what ML is doing, is a bandaid (and not ideal, as it costs you in resolution). What Canon has patented here is MUCH better...the way I would expect it to be done. Since they are reading the sensor with two different gain levels, I really don't see why there would be any reasonable limits on DR for the foreseeable future...ML is only limited to 14 stops because the ADC is 14-bit. Technically, the potential for very scalable DR is there in Canon's patent (assuming I've understood it correctly, that is.)

It seems to me there will be a lot of lossless compression necessary for the large RAW files (and a lot of processing power).  Also though, does this not make it likely, that the 2014 1-series camera, assuming it's in the 40MP range, may not use the above process?  If so, it might just "only" have 14 bit RAW capability.  I too was hoping it was actually going to be 16 bit, whether it actually got much over 14 stops of "real" DR or not.  That would really be something, if Canon just suddenly introduced a camera that could actually do 16 stops.

Are you planning on buying the new camera, early on?

Agreed, normally a RAW file will have lossless compression. Still, a gigabit of information is a lot...you can't compress the read stream, really...you have to process it all in order to compress the output file. So, while from a storage space standpoint it wouldn't be all that bad, from an image processing standpoint...you would need much faster processors.

Canon, or someone, mentioned around a year ago, maybe not quite that long, that Canon might push a bit depth increase with the Big MP camera. Who knows if that is the case (it was only a CR1), but interesting nevertheless. I can't imagine anyone pushing bit depth until there is a definitive reason to do so. For all of DXO's claims about the Nikon D800 and D600 offering more than 14 stops of DR, they are talking about downscaled output images. The native DR of the hardware itself is still less than 14 stops...13.2 for the D800, IIRC.

That's with 3e- of read noise...which is INSANELY LOW (usually you don't see that kind of read noise until you start Peltier-cooling sensors to sub-freezing temperatures). There are a few new ideas floating around for reducing read noise further. There have been a number of patents and other things floating around lately about "black silicon", a structural modification of silicon that gives it extremely low reflectivity, supports a natural read noise level of around 2e-, and offers some of the best low-light sensitivity known; it is being researched for use in extreme low-light security cameras that can see by starlight (which blows my mind). Theoretically, this could greatly improve DR at what would be high ISO settings.
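To put those read-noise numbers in perspective: engineering dynamic range in stops is log2(full well / read noise). The 30,000e- full well below is an assumed round number, purely illustrative, not a measured value for any camera:

```python
import math

# DR in stops = log2(full_well / read_noise).
# The full-well capacity here is an assumption for illustration.
full_well_e = 30_000
dr_at_3e = math.log2(full_well_e / 3.0)  # ~13.3 stops
dr_at_2e = math.log2(full_well_e / 2.0)  # ~13.9 stops

print(f"3 e- read noise: {dr_at_3e:.1f} stops")
print(f"2 e- read noise: {dr_at_2e:.1f} stops (+{dr_at_2e - dr_at_3e:.2f})")
```

So going from 3e- to 2e- buys a bit over half a stop, all else being equal.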

Canon's approach with dual scaling is potentially another way to get a lot more average dynamic range at low or high ISO out of a single read by using two separate signals with different gain and sampling (I guess) to effectively do a low ISO and high ISO read at the same time for each pixel, and blend the results together using on-die CP-ADC.
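A toy sketch of that idea (my reading of the patent, not Canon's actual circuit): sample each pixel at two gains, trust the high-gain sample wherever it hasn't clipped, and fall back to the low-gain sample in the highlights:

```python
import numpy as np

rng = np.random.default_rng(0)
scene_e = rng.uniform(1, 60_000, size=1000)  # true photoelectron counts

def read(signal_e, gain, read_noise_e=3.0, bits=14):
    """One simulated analog read: add read noise, apply gain, quantize to 14 bits."""
    noisy = signal_e + rng.normal(0.0, read_noise_e, signal_e.shape)
    dn = np.clip(noisy * gain, 0, 2**bits - 1)
    return np.round(dn)

low = read(scene_e, gain=0.25)   # low gain: the full well fits in 14 bits
high = read(scene_e, gain=4.0)   # high gain: clean shadows, clipped highlights

# Blend: use the high-gain sample where it didn't clip, low-gain elsewhere.
merged_e = np.where(high < 2**14 - 1, high / 4.0, low / 0.25)
```

Each individual read is still only 14-bit, but the merged result spans the combined range of both gain settings, which is where the "scalable DR" would come from.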

As for new cameras...all that is on hold until I can get my business started and start making some money again. I don't have any plans to purchase anything at the moment, outside of possibly a 5D III if the price is right. I certainly won't be buying a 1D MPM (megapixel monster) any time soon if it hits with a price over $5k. Besides, I like to wait and see how things settle first...I am still interested in the 7D II, and want to wait for both cameras to hit the street and demonstrate their real-world performance before I make a decision.

Very informative points, thank you.  And I think it was you who first mentioned "black silicon" on here earlier this year.  I recall trying to read more about it, probably a link you posted.  I think I read something on Wikipedia about it as well, for what little that is worth.

Thanks for pointing out that the compression would be useless during the read and processing stage.  I knew that but hadn't even considered it...I was just thinking of the large files being written to a storage media of some kind.  It almost seems like the high processing power is more achievable than the speed required to write and store the files, say while at 5 frames a second or more.  You would need large internal buffer capacity.  I suppose some kind of wireless technique could be used to write very large files quickly to an external computer, or watch phone or something...haha!  I guess it would all get designed to work, if the need for really large files came to the fore...or rather when it does.

When it comes to the processing power required to process the image on the sensor, it has to be uncompressed data. But not only that, it has to be uncompressed data PLUS overhead...there is always a certain amount of overhead: additional data, additional processing to combat one problem or another, etc. So while the data size may be a gigaBIT (about 125MB per image), the actual total amount of data read is going to be larger, maybe closer to 160MB per image. If one wanted a high readout rate...say 9.5 fps, then the total throughput would need to be around 1.5 gigaBYTES per second! :P
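Running those numbers (the 160MB-with-overhead figure is the assumption from above):

```python
frame_bits = 1_000_000_000        # ~1 gigabit of raw data per image
frame_mb = frame_bits / 8 / 1e6   # 125 MB before overhead
overhead_mb = 160                 # assumed per-frame total including overhead
fps = 9.5

gb_per_s = overhead_mb * fps / 1000
print(f"{frame_mb:.0f} MB/frame raw, ~{gb_per_s:.2f} GB/s sustained at {fps} fps")
```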

CarlTN

Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #18 on: December 16, 2013, 04:37:18 PM »
Quote from: jrista on December 15, 2013, 11:17:18 AM

And what type of device or computer is currently capable of that kind of throughput?

jrista

Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #19 on: December 16, 2013, 07:49:58 PM »
Quote from: CarlTN on December 16, 2013, 04:37:18 PM

And what type of device or computer is currently capable of that kind of throughput?

The original SATA standard was capable of 1.5Gbit/s, SATA2 of 3.0Gbit/s, and SATA3 is currently capable of 6.0Gbit/s. That is one of the SLOWEST data transfer rates in modern computing devices. A modern CPU is capable of around 192Gbit/s of data throughput on the chip itself and along its primary buses. A modern GPU is capable of even higher transfer rates, in order to render graphics up to 144 times per second (on 144Hz computer screens)...that means pushing several hundred million pixels per second at least, to the tune of trillions of operations per second and data throughputs of hundreds of billions of bits.

In order to handle 120 or 144 frames per second on modern high-framerate gaming and 3D screens at 2560x1600 or even 3840x2160 (4k) with 10-bit precision, you would need at least an 11,943,936,000 bit/s (~11.9Gbit/s) throughput rate from video card to screen. (This is, BTW, the next generation of hardware, already trickling onto the market...high-end gaming and graphics hardware, running on next-generation GPUs and early 4k SuperHD screens, using interfaces like Thunderbolt, which happens to operate at 10Gbit/s per channel for v1, and 20Gbit/s for v2 via "aggregate" channels.)
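That figure checks out arithmetically (counting 10 bits per pixel the way the post does, i.e. per channel rather than x3 for RGB):

```python
# Display link bandwidth for 4k at 144 Hz, 10 bits per pixel.
w, h, bits_per_px, hz = 3840, 2160, 10, 144
link_bits_per_s = w * h * bits_per_px * hz
print(f"{link_bits_per_s:,} bit/s = {link_bits_per_s / 1e9:.1f} Gbit/s")
```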

General computing is currently capable of very significant data throughput. With the next generation of GPUs paving the way for high-performance 4k 3D gaming (and even multi-screen 3D gaming, at that!), the average desktop of 2015 and beyond should be able to handle 150MP image files as easily as desktops handle the 20/30/50/80MP image files from DSLR and MFD cameras today.

Assume a 120MP camera operating at 9.5fps with 16-bit image frames (just speculating), since Canon has at least demonstrated a sensor like that. The raw per-frame size at 16 bits is 1,920,000,000 bits (1.92 gigabits); divide by 8 for bytes, and that comes to 240,000,000 bytes (240 megabytes). Multiply by 9.5 frames per second, and you have a total data throughput of 2.28GB (2.28 gigabytes) per second, or about 18.2Gbit/s. A single Thunderbolt v2 aggregate channel would be sufficient to handle that kind of throughput, and capable of transferring a full 120MP RAW image onto a computer in around a second...assuming you had comparable memory card technology that could keep up (which certainly doesn't seem unlikely given the rate at which memory card speed is improving).
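The same arithmetic as code (120MP, 16-bit, and 9.5fps are the speculative inputs from the paragraph above):

```python
pixels = 120_000_000
bits_per_px = 16
fps = 9.5

frame_bits = pixels * bits_per_px    # 1,920,000,000 bits (1.92 Gbit)
frame_bytes = frame_bits // 8        # 240,000,000 bytes (240 MB)
gb_per_s = frame_bytes * fps / 1e9   # 2.28 GB/s
gbit_per_s = gb_per_s * 8            # 18.24 Gbit/s

print(f"{frame_bytes / 1e6:.0f} MB/frame -> {gb_per_s:.2f} GB/s ({gbit_per_s:.1f} Gbit/s)")
```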

The real question is whether onboard graphics processors and DSPs (or rather computing packages, as they are today...usually a DSP stacked with a general-purpose processor like an ARM core and a specialized ultra-high-speed memory buffer) will be able to reach the necessary data throughput rates. As a matter of fact, they already operate at fairly decent speeds. A single 120MP frame is 240MB. With a pair of DIGIC5+ chips, you would be able to process two 120MP frames per second. The DIGIC5+ was about seven times faster than its predecessor, the DIGIC4. If we assume a similar jump for the next DIGIC part, it would be capable of processing 3.36GB/s, more than the necessary 2.28GB/s to process 120MP at 9.5 frames per second, and quite probably enough to handle around 11 frames per second (and still have room for the necessary overhead).
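The DIGIC extrapolation, spelled out. The 7x generational jump is the post's assumption, and the ~25% processing overhead factor is my own assumption, chosen to show how 14 fps of raw throughput shrinks to roughly the 11 fps figure:

```python
frame_mb = 240                        # one 120 MP, 16-bit frame
digic5_pair_mb_s = 2 * frame_mb       # post's figure: 2 frames/s -> 480 MB/s
next_gen_mb_s = digic5_pair_mb_s * 7  # assumed 7x jump -> 3360 MB/s (3.36 GB/s)

raw_fps = next_gen_mb_s / frame_mb                     # 14.0 fps with zero overhead
fps_with_overhead = next_gen_mb_s / (frame_mb * 1.25)  # ~11.2 fps at ~25% overhead
print(f"{next_gen_mb_s / 1000:.2f} GB/s -> ~{fps_with_overhead:.0f} fps")
```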

Given that release cycles for interchangeable-lens cameras are usually on the order of several years, we probably wouldn't see next-generation memory card performance until the 1D X Next and 5D Next, ca. 2016 or 2017. Sadly, at least historically, DSLRs have lagged even farther behind in data transfer standards support, so it could very well be that we don't see comparable interface support in DSLRs and other interchangeable-lens parts until 2019/2020. :\ Which means, instead of being able to transfer our giant 100MP+ images in about one second each, we will still have to slog through imports at about a quarter of the speed our desktop computer technology is capable of...but we're all used to that already. ;P (Which, BTW, is one of the key reasons I believe desktop computers are a LONG way from being dead...they are still the pinnacle of computing technology, and no matter how popular ultra-portable tablets and convertibles are, I think most people still have and use a desktop computer with a trusty old keyboard and mouse for their truly critical work. Tablets, convertibles, phablets, and phones simply augment our computing repertoire.)

Don Haines

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 2807
  • Posting cat pictures on the internet since 1986
    • View Profile
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #20 on: December 16, 2013, 08:11:00 PM »
Quote from: jrista on December 16, 2013, 07:49:58 PM

Could not agree more....

One of the packages I run at home is AutoPano Giga, and it allows you to enable GPU computing to speed up rendering. Think of it as 1000 1GHz cores, as opposed to four 3.4GHz i7 cores... My desktop renders a large panorama 20-30 times faster than my laptop...
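The naive core arithmetic behind that comparison (real speedups come in lower because not all of the stitching work parallelizes):

```python
# Aggregate clock-rate comparison: many slow GPU cores vs few fast CPU cores.
gpu_core_ghz = 1000 * 1.0  # ~1000 cores at ~1 GHz
cpu_core_ghz = 4 * 3.4     # four 3.4 GHz i7 cores
ratio = gpu_core_ghz / cpu_core_ghz
print(f"theoretical ratio ~{ratio:.0f}x (observed: 20-30x)")
```

The gap between the ~74x theoretical ratio and the observed 20-30x is the serial fraction of the pipeline (Amdahl's law at work).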
The best camera is the one in your hands

jrista

Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #21 on: December 16, 2013, 08:21:21 PM »
Quote from: Don Haines on December 16, 2013, 08:11:00 PM

Yeah, the PC rules. :D This is also why I think Microsoft will be a very successful company in the long term. They may not still be a "WOW" company like Apple or Facebook or Google or Twitter, but they have it where it counts. Windows 8, for all that people complain about it, offers the best of all worlds in a single, unified platform experience...WITHOUT forgetting about the desktop, keyboard, and mouse. That will be the key thread of their success five, ten, twenty years out.


unfocused

  • 1D X
  • *******
  • Posts: 1919
    • View Profile
    • Unfocused: A photo website
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #22 on: December 16, 2013, 11:04:35 PM »
Quote from: jrista on December 16, 2013, 08:21:21 PM

This is interesting to me because I think it has parallels to DSLRs.

A few years ago, laptops were the wave of the future. It seemed like everyone under 30 (which leaves me way, way out) was buying a laptop and wouldn't even consider having a clunky old desktop, even though they were paying a huge premium in price and performance for that portability.

The tech gurus were predicting the death of the desktop.

Then the next wave hit: netbooks, tablets, e-readers and smartphones. The under-25 crowd looked on the laptop in much the same way their older siblings had looked on desktops. Who needs all that computing power when you just want to surf the web and post to social media? Why would you carry around some big old laptop?

Suddenly it was the laptop, not the desktop that was endangered. Those who wanted and needed real computing power found the desktop form factor much more practical (larger screen or dual screen, more memory and hard drive space, much more suited for programs that require real computing power). Laptops have become a niche market.

Today, in the photo world the tech gurus are predicting the death of the DSLR and saying the future will be mirrorless interchangeable-lens cameras. But, to me, these seem like laptops: too big to be truly portable, overpriced, and with too many compromises to fully replace a DSLR.

I strongly suspect that in five years, the tech gurus will have moved on to the next big thing. Mirrorless will have run its course and the DSLR will still be plugging away because the form factor that has worked for 75 years remains the best form factor for its purpose.
pictures sharp. life not so much. www.unfocusedmg.com

jrista

Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #23 on: December 17, 2013, 05:42:26 PM »
(Which, BTW, is one of the key reasons I believe desktop computers are a LONG way from being dead...they are still the pinnacle of computing technology, and no matter how popular ultra-portable tablets and convertibles are, I think most people still have and use a desktop computer with a trusty old keyboard and mouse for their truly critical work. Tablets and convertibles and phablets and phones simply augment our computing repertoire.)

Could not agree more....

One of the packages I run at home is Autopano Giga, and it allows you to enable GPU computing to speed up rendering. Think of it as 1000 1GHz cores... as opposed to four 3.4GHz i7 cores... My desktop renders a large panorama 20-30 times faster than my laptop....

Yeah, the PC rules. :D This is also why I think Microsoft will be a very successful company in the long term. They may not still be a "WOW" company like Apple or Facebook or Google or Twitter, but they have it where it counts. Windows 8, for all that people complain about it, offers the best of all worlds in a single, unified platform experience...WITHOUT forgetting about the desktop, keyboard, and mouse. That will be the key thread of their success five, ten, twenty years out.

This is interesting to me because I think it has parallels to DSLRs.

A few years ago, laptops were the wave of the future. It seemed like everyone under 30 (which leaves me way, way out) was buying a laptop and wouldn't even consider having a clunky old desktop, even though they were paying a huge premium in price and performance for that portability.

The tech gurus were predicting the death of desktop.

Then the next wave hit. Netbooks, tablets, e-readers and smart phones and the under 25 crowd looked on the laptop in much the same way their older siblings had looked on desktops. Who needs all that computing power when you just want to surf the web and post to social media? Why would you carry around some big old laptop?

Suddenly it was the laptop, not the desktop that was endangered. Those who wanted and needed real computing power found the desktop form factor much more practical (larger screen or dual screen, more memory and hard drive space, much more suited for programs that require real computing power). Laptops have become a niche market.

Today, in the photo world the tech gurus are predicting the death of the DSLR and saying the future will be mirrorless interchangeable lens cameras. But, to me, these seem like laptops. Too big to be truly portable, overpriced and with too many compromises to truly replace a DSLR.

I strongly suspect that in five years, the tech gurus will have moved on to the next big thing. Mirrorless will have run its course and the DSLR will still be plugging away because the form factor that has worked for 75 years remains the best form factor for its purpose.

Very insightful! I totally agree. The parallel you've drawn is pretty intriguing, and while the desktop computer hasn't been around for 75 years, its life cycles have also run about four times faster than cameras', so I think the parallel scales well.

I think there will always be a market for smaller and lighter, for sure. While I think everyone ultimately turns back to their desktop for important and critical work, I do think that tablets, convertibles, phablets, and even laptops are here to stay. It's just that they will never actually topple the vaunted desktop...at least, not until computing becomes so ubiquitous and so omnipresent that we could literally call up a virtual keyboard and mouse on any flat surface, turn on a holographic display, and crank away wherever, whenever.

(I don't really foresee that kind of thing happening, despite the fact that it seems to be every key tech company's goal for the future...too many technologies not only have to be perfected, but seamlessly intertwined, and capable of presenting a universal, omnipresent, ubiquitous computing interface. Achieving the tech companies' ubiquitous-computing vision of the future would basically mean scrapping and replacing ALL major power and control infrastructures everywhere, with hooks and plugins for every person and every kind of device. It MAY happen in some multi-million-dollar homes, where the cost of setting up a centralized, wireless, omnipresent computing system is still just a drop in the bucket... But overall, I think the desktop is here to stay, even if sales of desktop computers drop. That drop is expected given the strained economic times: as people work more hours for less pay, face costs rising at the mandate of governments, and otherwise have their disposable income soaked up in the name of status equality, it's no surprise that discretionary spending on moderately big-ticket items like desktop computers has waned.)
My Photography
Current Gear: Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: Canon 5D III/7D II | Canon EF 300mm f/2.8 L II

CarlTN

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 2227
    • View Profile
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #24 on: December 18, 2013, 02:34:12 AM »
Very interesting, but recently you had said that if Canon relies on dual ISO, that's only a bandaid, and might not yield enough of a DR increase, at least with the combined benefit of a lower noise floor.  Obviously you meant more akin to what ML did, rather than starting from quasi-scratch, as this link hints at.

Using the existing downstream amplifier on half the pixels, which is what ML is doing, is a bandaid (and not ideal, as it costs you in resolution). What Canon has patented here is MUCH better...the way I would expect it to be done. Since they are reading the sensor with two different gain levels, I really don't see why there would be any reasonable limits on DR for the foreseeable future...ML is only limited to 14 stops because the ADC is 14-bit. Technically, the potential for very scalable DR is there in Canon's patent (assuming I've understood it correctly, that is.)

It seems to me there will be a lot of lossless compression necessary for the large RAW files (and a lot of processing power).  Also though, does this not make it likely, that the 2014 1-series camera, assuming it's in the 40MP range, may not use the above process?  If so, it might just "only" have 14 bit RAW capability.  I too was hoping it was actually going to be 16 bit, whether it actually got much over 14 stops of "real" DR or not.  That would really be something, if Canon just suddenly introduced a camera that could actually do 16 stops.

Are you planning on buying the new camera, early on?

Agreed, normally a RAW file will have lossless compression. Still, a gigabit of information is a lot...you can't compress the read stream, really...you have to process it all in order to compress the output file. So, while from a storage space standpoint it wouldn't be all that bad, from an image processing standpoint...you would need much faster processors.

Canon, or someone, mentioned around a year ago, maybe not quite that long, that Canon might push a bit depth increase with the Big MP camera. Who knows if that is the case (it was only a CR1), but it's interesting nevertheless. I can't imagine anyone pushing bit depth until there is a definitive reason to do so. For all of DXO's claims about the Nikon D800 and D600 offering more than 14 stops of DR, they are talking about downscaled output images. The native DR of the hardware itself is still less than 14 stops...13.2 for the D800 IIRC.

That's with 3e- of read noise...which is INSANELY LOW (usually you don't see that kind of read noise until you start Peltier-cooling sensors to sub-freezing temperatures). There are a few new ideas floating about regarding how to reduce read noise. There have been a number of patents and other things floating around lately about "black silicon", a structural modification of silicon that gives it extremely low reflectivity, supports a natural read noise level of around 2e-, and delivers some of the best low-light sensitivity known; it is being researched for use in extreme low-light security cameras that can see by starlight (which blows my mind). Theoretically, this can greatly improve DR at what would be high ISO settings.
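To put rough numbers on that, engineering dynamic range is just the ratio of full-well capacity to read noise, expressed in stops. A minimal sketch in Python (the ~45,000 e- full-well figure is an illustrative ballpark I'm assuming, not a measured spec):

```python
import math

def dr_stops(full_well_e, read_noise_e):
    """Engineering dynamic range in stops: log2(full well / read noise)."""
    return math.log2(full_well_e / read_noise_e)

# At 3e- read noise, DR lands just under 14 stops; dropping to
# black-silicon-class 2e- buys roughly another half stop.
print(round(dr_stops(45000, 3), 1))  # 13.9
print(round(dr_stops(45000, 2), 1))  # 14.5
```

That lines up with why sub-14-stop native DR is the norm today: you would need either a deeper full well or meaningfully lower read noise to push past it.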

Canon's approach with dual scaling is potentially another way to get a lot more average dynamic range at low or high ISO out of a single read by using two separate signals with different gain and sampling (I guess) to effectively do a low ISO and high ISO read at the same time for each pixel, and blend the results together using on-die CP-ADC.
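As an illustration of that blending idea (my own sketch of the general dual-gain technique, not the patent's actual circuit; gains, bit depth, and noise values are made up):

```python
import numpy as np

def dual_gain_read(signal_e, low_gain=0.25, high_gain=4.0,
                   adc_bits=14, read_noise_e=3.0):
    """Digitize one pixel's signal at two gains and blend the samples.
    All parameter values here are illustrative, not from the patent."""
    full_scale = 2 ** adc_bits - 1
    noisy = signal_e + np.random.normal(0.0, read_noise_e)
    low = min(noisy * low_gain, full_scale)    # low gain survives bright highlights
    high = min(noisy * high_gain, full_scale)  # high gain lifts shadows above ADC noise
    # Trust the high-gain sample unless it clipped; fall back to low gain
    return high / high_gain if high < full_scale else low / low_gain
```

The ratio between the two gains (16x here, i.e. four stops) is extra range on top of whatever the ADC's bit depth provides, which is why a dual-scale read could scale DR well beyond the 14-bit ceiling that ML's approach runs into.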

As for new cameras...all that is on hold until I can get my business started and start making some money again. I don't have any plans to purchase anything at the moment, outside of possibly a 5D III if the price is right. I certainly won't be buying a 1D MPM (megapixel monster) any time soon if it hits with a price over $5k. Besides, I like to wait and see how things settle first...I am still interested in the 7D II, and want to wait for both cameras to hit the street and demonstrate their real-world performance before I make a decision.

Very informative points, thank you.  And I think it was you who first mentioned "black silicon" on here earlier this year.  I recall trying to read more about it, probably a link you posted.  I think I read something on Wikipedia about it as well, for what little that is worth.

Thanks for pointing out that the compression would be useless during the read and processing stage.  I knew that but hadn't even considered it...I was just thinking of the large files being written to a storage media of some kind.  It almost seems like the high processing power is more achievable than the speed required to write and store the files, say while at 5 frames a second or more.  You would need large internal buffer capacity.  I suppose some kind of wireless technique could be used to write very large files quickly to an external computer, or watch phone or something...haha!  I guess it would all get designed to work, if the need for really large files came to the fore...or rather when it does.

When it comes to the processing power required to process the image on the sensor, it has to be uncompressed data. But not only that, it has to be uncompressed data PLUS overhead...there is always a certain amount of overhead: additional data, additional processing to combat one problem or another, etc. So while the data size may be a gigaBIT (about 125MB per image), the actual total amount of data read is going to be larger, maybe closer to 160MB per image. If one wanted a high readout rate...say 9.5 fps, then the total throughput rate would need to be about 1.5 gigaBYTES per second! :P
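Checking that arithmetic in Python (the 160MB-with-overhead figure is my rough assumption, as above):

```python
frame_bits = 1_000_000_000        # ~1 gigabit of raw sensor data per image
frame_mb = frame_bits / 8 / 1e6   # a gigabit works out to 125 MB
overhead_mb = 160                 # assumed per-frame size including overhead
fps = 9.5
gb_per_s = overhead_mb * fps / 1000   # sustained throughput required
print(frame_mb, gb_per_s)         # 125.0 MB/frame, 1.52 GB/s
```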

And what type of device or computer is currently capable of that kind of throughput?

The original SATA standard was capable of 1.5Gbit/s, SATA2 of 3.0Gbit/s, and SATA3 is currently capable of 6.0Gbit/s. That is among the SLOWEST data transfer rates of modern computing devices. A modern CPU is capable of around 192Gbit/s data throughput on the chip itself and along its primary buses. A modern GPU is capable of even higher transfer rates in order to redraw the screen up to 144 times per second (on 144Hz monitors): several hundred million pixels per second at least, to the tune of trillions of operations per second, requiring data throughputs of hundreds of gigabits.

In order to handle 120 or 144 frames per second on modern high-framerate gaming and 3D screens at 2560x1600 or even 3840x2160 (4k) with 10-bit-per-channel precision, you would need at least 35,831,808,000 bit/s (3840×2160 pixels × 30 bits per pixel × 144Hz) from video card to screen. (This is, BTW, the next generation of hardware, already trickling onto the market...high-end gaming and graphics computing hardware, running on next-generation GPUs and on early 4k SuperHD screens, using interfaces like Thunderbolt, which happens to operate via a single channel at 10Gbit/s for v1, and 20Gbit/s for v2 via "aggregate" channels.)
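For what it's worth, here is that back-of-envelope link bandwidth, counting 10 bits per channel across R, G, and B (which is how "10-bit" panels are usually specified):

```python
def link_bandwidth_bps(width, height, bits_per_channel, channels, refresh_hz):
    """Uncompressed video link bandwidth in bits per second."""
    return width * height * bits_per_channel * channels * refresh_hz

# 4k at 144Hz with 10-bit RGB: ~35.8 Gbit/s, beyond even a single
# Thunderbolt v2 aggregate channel at 20 Gbit/s
print(link_bandwidth_bps(3840, 2160, 10, 3, 144))  # 35831808000
```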

General computing is currently capable of very significant data throughput. With the next generation of GPUs paving the way for high-performance 4k 3D gaming (and even multi-screen 3D gaming, at that!), the average desktop of 2015 and beyond should be able to handle 150mp image files as easily as it handles the 20/30/50/80mp image files from DSLR and MFD cameras today.

Assume a 120mp camera operating at 9.5fps with 16-bit image frames (just speculating, though Canon has at least demonstrated a sensor like that). The raw per-frame size at 16-bit is 1,920,000,000 bits (1.92 gigabit); divide by 8 for bytes, which comes to 240,000,000 bytes (240 megabytes). Multiply by 9.5 frames per second, and you have a total data throughput of 2.28GB (gigabytes) per second, or 18.24Gbit/s. A single Thunderbolt v2 aggregate channel would be sufficient to handle that kind of data throughput, and be capable of transferring a full 120mp RAW image (~240MB) onto a computer in around one second...assuming you had comparable memory card technology that could keep up (which certainly doesn't seem unlikely given the rate at which memory card speed is improving.)
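The same math as a script (the camera specs are pure speculation, as above):

```python
mp = 120_000_000                      # hypothetical 120mp sensor
frame_bits = mp * 16                  # 16-bit samples: 1.92 gigabit per frame
frame_bytes = frame_bits // 8         # 240,000,000 bytes = 240 MB
fps = 9.5
gbyte_per_s = frame_bytes * fps / 1e9     # 2.28 GB/s total throughput
gbit_per_s = gbyte_per_s * 8              # 18.24 Gbit/s, within Thunderbolt v2
print(frame_bytes, gbyte_per_s, gbit_per_s)
```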

The real question is whether onboard image processors and DSPs (or rather computing packages, as they are today...usually a DSP stacked with a general-purpose processor like an ARM core and a specialized ultra-high-speed memory buffer) will be able to reach the necessary data throughput rates. As a matter of fact, they already operate at fairly decent speeds. A single 120mp frame is 240MB. With a pair of DIGIC5+ chips, you would be able to process two 120mp frames per second. The DIGIC5+ chip was about seven times faster than its predecessor, DIGIC4. If we assume a similar jump for the next DIGIC part, it would be capable of processing 3.36GB/s, more than the 2.28GB/s needed to process 120mp at 9.5 frames per second, and quite probably enough to handle around 11 frames per second (while still leaving room for the necessary overhead.)
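Sketching that scaling estimate (the 240MB/s per-DIGIC5+ figure is inferred from the "two frames per second per pair" estimate above, not a published spec):

```python
frame_mb = 240                     # one 120mp, 16-bit frame
digic5plus_mb_s = 240              # assumed: a pair does 2 frames/s, so 240 MB/s each
pair_mb_s = 2 * digic5plus_mb_s    # 480 MB/s for today's pair
next_gen_gb_s = pair_mb_s * 7 / 1000       # 3.36 GB/s if the ~7x DIGIC4->5+ jump repeats
peak_fps = next_gen_gb_s * 1000 / frame_mb # 14 fps peak, so ~11 fps with overhead
print(next_gen_gb_s, peak_fps)
```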

Given that release cycles for interchangeable-lens cameras are usually on the order of several years, we probably wouldn't see next-generation memory card performance until a 1D X Next and 5D Next ca. 2016 or 2017. Sadly, at least historically, DSLRs have lagged even farther behind in data transfer standards support, so it could very well be that we don't see comparable interface support in DSLRs and other interchangeable-lens parts until 2019/2020. :\ Which means, instead of being able to transfer our giant 100mp+ images in about one second each, we will still have to slog through imports at about a quarter of the speed our desktop computer technology is capable of...but we're all used to that already. ;P (Which, BTW, is one of the key reasons I believe desktop computers are a LONG way from being dead...they are still the pinnacle of computing technology, and no matter how popular ultra-portable tablets and convertibles are, I think most people still have and use a desktop computer with a trusty old keyboard and mouse for their truly critical work. Tablets and convertibles and phablets and phones simply augment our computing repertoire.)

Very interesting speculation and statistics.  The key word is "work".  That's the problem with the mobile world.  It's more about leisure than work...but in reality it's really just a toy, a slave obsession that robs people of living in the moment, and doing actual physical things, and interacting socially in person with people...replacing it with texting or playing games.

Speaking of games, why on earth does anyone need 4K 3D for gaming?  It's all just computer generated cartoons anyway, and the senses can only take in so much detail.  In real life your attention is on a narrow part of your field of view.  So in a 4K 3D game, most of that pixel information is unnecessary detail, it seems to me.  If this weren't so, then you should be able to read an entire large screen of text that fills your field of view, without ever moving your eyes...but you can't.  Or at least I can't.  I read a book on how to "speed read", but I never got very far.

But then I'm not a "gamer".  I would prefer to watch "How the West was Won", "To Catch a Thief", "The Empire Strikes Back", and "Raiders of the Lost Ark" (in that order) remastered in 4K 2D, than to play a game. 

CarlTN

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 2227
    • View Profile
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #25 on: December 18, 2013, 02:37:00 AM »

Today, in the photo world the tech gurus are predicting the death of the DSLR and saying the future will be mirrorless interchangeable lens cameras. But, to me, these seem like laptops. Too big to be truly portable, overpriced and with too many compromises to truly replace a DSLR.

I strongly suspect that in five years, the tech gurus will have moved on to the next big thing. Mirrorless will have run its course and the DSLR will still be plugging away because the form factor that has worked for 75 years remains the best form factor for its purpose.

Agree.

danski0224

  • 6D
  • *****
  • Posts: 510
    • View Profile
    • Some of my Work in Progress
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #26 on: December 18, 2013, 06:16:39 PM »
Yeah, the PC rules. :D This is also why I think Microsoft will be a very successful company in the long term. They may not still be a "WOW" company like Apple or Facebook or Google or Twitter, but they have it where it counts. Windows 8, for all that people complain about it, offers the best of all worlds in a single, unified platform experience...WITHOUT forgetting about the desktop, keyboard, and mouse. That will be the key thread of their success five, ten, twenty years out.

One thing that amazes me about the Surface (Pro 2 specifically) is I can use my finger (or two), the keyboard touchpad, the stylus or the wireless mouse at any given point and it works. I can switch from any one to the other seamlessly. I've never had an iPad, so maybe it isn't a big deal for some, but prior iterations of touchscreen on Windows have kinda sucked...

The wireless keyboard adapter is pretty cool, too.

My only beef is I would actually prefer the iPad 4:3 form factor.

Windows 8 on a touchscreen is pretty awesome.

Big touch monitors are still kinda pricey, but there are some touch mice for desktops.
Some of my Work in Progress..... www.dftimages.com

jrista

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 3741
  • POTATO
    • View Profile
    • Nature Photography
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #27 on: December 19, 2013, 10:07:38 AM »
Yeah, the PC rules. :D This is also why I think Microsoft will be a very successful company in the long term. They may not still be a "WOW" company like Apple or Facebook or Google or Twitter, but they have it where it counts. Windows 8, for all that people complain about it, offers the best of all worlds in a single, unified platform experience...WITHOUT forgetting about the desktop, keyboard, and mouse. That will be the key thread of their success five, ten, twenty years out.

One thing that amazes me about the Surface (Pro 2 specifically) is I can use my finger (or two), the keyboard touchpad, the stylus or the wireless mouse at any given point and it works. I can switch from any one to the other seamlessly. I've never had an iPad, so maybe it isn't a big deal for some, but prior iterations of touchscreen on Windows have kinda sucked...

The wireless keyboard adapter is pretty cool, too.

My only beef is I would actually prefer the iPad 4:3 form factor.

Windows 8 on a touchscreen is pretty awesome.

Big touch monitors are still kinda pricey, but there are some touch mice for desktops.

My Surface Pro (original) works the same way. When it comes to touch, I think Microsoft has always done well. I remember my old Windows XP tablet which supported touch and stylus allowed pretty seamless switching between input modes. My Windows Phone 7 device had incredibly responsive and fluid touch support, as does my Lumia 920. When it comes to touch and voice control, Microsoft devices are actually pretty good, and have been for at least four years or so. The voice recognition capabilities of the Lumia 920 are pretty amazing...it rarely misses a beat, even while I'm driving, where noise levels are high.

Microsoft may be slow to the market, but what they do put out works exceptionally well. I honestly cannot say the same for Android. Every time I've used Android devices, the newest features always seem to lack polish, have numerous quirks, don't seem to always work with every device, etc. It took several iterations before Android's touch was consistent and fluid, and yet even today it just doesn't seem to have the responsiveness of either WP8 or iOS.
My Photography
Current Gear: Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: Canon 5D III/7D II | Canon EF 300mm f/2.8 L II


danski0224

  • 6D
  • *****
  • Posts: 510
    • View Profile
    • Some of my Work in Progress
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #28 on: December 20, 2013, 09:29:26 AM »
My first tablet laptop, an HP TX series, couldn't even do pinch-to-zoom. There were two different screens available, and not knowing the differences, I chose poorly. I could switch between input devices without issues, but the rest of the "touch experience" wasn't there yet, at least for me. Vista only added to the problems.

I tried the early Windows smartphones, a HTC TouchPro2 and a HTC HD2, and I found both to fall short of expectations. When these devices were new, Microsoft didn't impress with upgrades and continued support. The HD2 still lives on if you like to mess with phones.

I went Android after that. Got burned with the HTC Amaze bluetooth issue, so that has soured my HTC experience.

I messed around a bit with Cyanogen on the HD2 through the SD card, and I actually prefer the "plain" Android interface over the skins that almost everyone else uses. Unfortunately, the phone OEMs make it increasingly difficult to root your device. I'd be happy with less bloatware.

When it came time to shop for a new phone, I would have gone with the Nokia 1020, but it isn't available on my carrier natively and I'm not moving to AT&T. So I stuck with Android, but no longer HTC.

A unifying experience across platforms (computer, tablet, phone) has appeal and that is lacking in Android.

Some of my Work in Progress..... www.dftimages.com

jrista

  • Canon EF 300mm f/2.8L IS II
  • *******
  • Posts: 3741
  • POTATO
    • View Profile
    • Nature Photography
Re: Canon Dual-Scale Column-Parallel ADC Patent
« Reply #29 on: December 20, 2013, 11:36:54 AM »
My first tablet laptop, an HP TX series, couldn't even do pinch-to-zoom. There were two different screens available, and not knowing the differences, I chose poorly. I could switch between input devices without issues, but the rest of the "touch experience" wasn't there yet, at least for me. Vista only added to the problems.

I tried the early Windows smartphones, a HTC TouchPro2 and a HTC HD2, and I found both to fall short of expectations. When these devices were new, Microsoft didn't impress with upgrades and continued support. The HD2 still lives on if you like to mess with phones.

I went Android after that. Got burned with the HTC Amaze bluetooth issue, so that has soured my HTC experience.

I messed around a bit with Cyanogen on the HD2 through the SD card, and I actually prefer the "plain" Android interface over the skins that almost everyone else uses. Unfortunately, the phone OEM's make it increasingly difficult to root your device. I'd be happy with less bloatware.

When it came time to shop for a new phone, I would have gone with the Nokia 1020, but it isn't available on my carrier natively and I'm not moving to AT&T. So I stuck with Android, but no longer HTC.

A unifying experience across platforms (computer, tablet, phone) has appeal and that is lacking in Android.

HTC doesn't make a particularly great phone. I've had a few of them, one for WP7 and two for Android...none of them offered particularly great quality. Even the first round of HTC WP8 phones was lackluster compared to the competition. I am not sure whether HTC has anything really good out these days, but I've pretty much given up on them.

When it comes to Windows phones, Nokia really does well. Their Lumia line is excellent, and in areas where the HTC performs poorly, Lumia excels. So, when it comes to touch, I think issues are more hardware related than OS related. I have also been impressed with Samsung phones and tablets...but their products feel like plastic toys. People complain about the weight of the Lumia line of devices...personally, that heft to me is the mark of a solid, durable product. ;)
My Photography
Current Gear: Canon 7D | Canon EF 600mm f/4 L IS II | EF 100-400mm f/4.5-5.6 L IS | EF 16-35mm f/2.8 L | EF 100mm f/2.8 Macro | 50mm f/1.4
New Gear List: Canon 5D III/7D II | Canon EF 300mm f/2.8 L II
