Multilayer Sensors are Coming From Canon [CR2]

Lee Jay said:
StudentOfLight said:
A few questions pertaining to the usefulness of capturing IR data in a separate channel:
1) Can humans see infrared?
No.
2) How much of the IR spectrum can be transmitted through DSLR lenses?
All of the near IR spectrum. But little gets through the sensor's IR filter.
3) Can you gain added colour accuracy by sampling additional channels which overlap with wavelengths outside human visual perception?
I doubt it.
4) For a given ISO and Aperture, what is the difference in exposure time needed to create an IR image vs a visible light image?
With the IR filter, orders of magnitude longer.
WRT 4): I meant with an IR-modified camera, or for example with the 60Da.
 
Upvote 0
"- 5 layers will generate so much data to process that it will be very slow camera, max 7-8 FPS (5 layers are used to run potential patent lawsuit from Sigma, but it is uncertain because Sigma can't patent physical properties of silicon which are used in multilayered sensors, but only methods of image data processing.)"

But remember, for some of us the issues you cite aren't drawbacks.

I mostly shoot landscape and architecture, almost always shoot on a tripod, and rarely go over ISO 100 (in film days I almost never bought anything other than ISO 25 and 64).

ISO and FPS just don't matter to people like me.

I'd jump to buy a camera that had a max ISO of 100 and one frame per 10 seconds if it offered better resolution and DR.

Actually, I'm looking hard at a used last-generation 80 MP Phase One back that is quite limited in ISO and fps.

(The prices of the last generation of MF backs seem to be crashing now that the Sony MF sensors are out, which offer much improved high ISO performance and live view.)

However, this rumor has me thinking I'll put that purchase off in the hope that the new Canon camera can meet my needs just as well.
 
Upvote 0
modeleste said:
I'd jump to buy a camera that had a max ISO of 100 and one frame per 10 seconds if it offered better resolution and DR.

Get a 7D Mark II and a Gigapan. In 10 seconds, you can shoot something like a 9-shot panorama with a 5-shot bracket at each spot. That should get you 16+ stops of DR and 115 megapixels.
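For what it's worth, the arithmetic roughly checks out. Here's a quick Python sketch, with the overlap, base DR, and bracket spacing all assumed for illustration (they are not Gigapan or 7D Mark II specs):

```python
# Back-of-envelope check of the "16+ stops and 115 MP" claim above.
# All inputs are assumptions for illustration, not measured values.

base_mp = 20.2          # 7D Mark II sensor resolution, megapixels
grid = 3                # 3x3 panorama = 9 shots
overlap = 0.30          # assumed linear frame overlap in each direction

linear_cover = 1 + (grid - 1) * (1 - overlap)   # frames minus overlapped strips
effective_mp = base_mp * linear_cover ** 2
print(f"effective resolution: {effective_mp:.0f} MP")      # ~116 MP

base_dr = 11            # assumed single-shot DR in stops at ISO 100
bracket_shots = 5
bracket_step = 1.5      # assumed EV spacing between bracketed frames
blended_dr = base_dr + (bracket_shots - 1) * bracket_step
print(f"blended dynamic range: ~{blended_dr:.0f} stops")   # ~17 stops
```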
 
Upvote 0
Lee Jay said:
modeleste said:
I'd jump to buy a camera that had a max ISO of 100 and one frame per 10 seconds if it offered better resolution and DR.

Get a 7D Mark II and a Gigapan. In 10 seconds, you can shoot something like a 9-shot panorama with a 5-shot bracket at each spot. That should get you 16+ stops of DR and 115 megapixels.
I would go with the 6D instead ;-)
 
Upvote 0
modeleste said:
"- 5 layers will generate so much data to process that it will be very slow camera, max 7-8 FPS (5 layers are used to run potential patent lawsuit from Sigma, but it is uncertain because Sigma can't patent physical properties of silicon which are used in multilayered sensors, but only methods of image data processing.)"

But remember, for some of us the issues you cite aren't drawbacks.

I mostly shoot landscape and architecture, almost always shoot on a tripod, and rarely go over ISO 100 (in film days I almost never bought anything other than ISO 25 and 64).

ISO and FPS just don't matter to people like me.

I'd jump to buy a camera that had a max ISO of 100 and one frame per 10 seconds if it offered better resolution and DR.

Actually, I'm looking hard at a used last-generation 80 MP Phase One back that is quite limited in ISO and fps.

(The prices of the last generation of MF backs seem to be crashing now that the Sony MF sensors are out, which offer much improved high ISO performance and live view.)

However, this rumor has me thinking I'll put that purchase off in the hope that the new Canon camera can meet my needs just as well.

Hi Modeleste,

I didn't say those were drawbacks in my opinion; I was just talking about the technical limits of a potential new product, drawing the line above which there is no reason to go, because beyond it lies an impossible dream for now. People here tend to run ahead without realistic knowledge of this technology.

I use Sigma cameras for their special low-ISO color rendition and the micro-contrast of detail in their images, which I can't find in other cameras in that price range. The slowness of Sigma cameras was never a problem for me at all, but for many people from the Canon camp it could be, because they are accustomed to high-speed operation.

MF cameras (Phase One, Mamiya Leaf, HB) are another league of IQ, but if you plan to go that road, you know the camera is not the only cost you will be obliged to pay. So I would advise you to wait a little and see what Canon may show in Q1 of 2015. Even if a new Canon multilayer-sensor camera is expensive at first, it would be much cheaper to buy the lenses you need than to go with an MF system.

Most importantly, it will show you whether other manufacturers follow the same path of multilayer sensors, and it will help you plan future investments, so that you don't find yourself wanting to change systems again in the near future. Lots of Sigma users are counting on Canon, because Canon could spend ten times more money than Sigma on R&D of this technology and speed up the development and market adoption of this type of sensor.

Potentially, now is the right moment to take a step ahead, just as it was with CCD vs. CMOS sensors. There is also a new window of possibilities for working around today's drawbacks of multilayer sensors: new materials for sensors are being tested, such as graphene and, much easier to introduce into fab production, molybdenum disulfide and molybdenum diselenide on a silicon base.

The latter two materials can now be mass produced, as a scalable industrial method for their production was developed last year. More importantly, these materials can be used in existing fabs with only minor changes to production lines, because silicon remains the wafer base, so they could potentially enter the sensor materials market very quickly.



But the most important news should be that sensors made with these materials already exist and have been in testing for three years, so this isn't a dream but a reality not yet widely known, as these projects exist only in high-tech university labs and the labs of sensor producers.

[Image: an early molybdenite image sensor prototype from the EPFL labs]


This is one of the first versions of the molybdenite sensors used for testing in the EPFL labs, and the test results are amazing: the sensor needs five times less light to trigger a charge transfer than the silicon-based sensors currently available, meaning it is five times more sensitive to light. In practice this will reduce the need for flash in many situations, since the sensor is so much more sensitive to the light already in the scene. There are other advantages too: molybdenite has a 4:1 signal-to-noise ratio, so the noise level will be very low, and once you factor the material's light sensitivity into the equation, noise will only be an issue in astrophotography, which will benefit hugely from these sensors.
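For scale, a five-times sensitivity gain works out to about 2.3 photographic stops (simple arithmetic, not an EPFL figure):

```python
import math

# A 5x gain in light sensitivity, expressed in photographic stops
# (each stop is a doubling of light).
print(f"{math.log2(5):.2f} stops")   # ~2.32
```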

Another advantage of molybdenite is its large electron band gap, which makes graphene the much harder material to use (graphene has no band gap, and a 1:1 signal-to-noise ratio). From a production point of view, molybdenite will therefore be introduced into sensors much more quickly than graphene, on which so many high-tech equipment producers are counting.

I can add at the end that molybdenite multilayer sensors are also being tested now, and not only by one company: Samsung and Sony are deep into this as well, so you can imagine that Canon will not miss this opportunity either.

If someone wants to read more about this, just look here:

http://lanes.epfl.ch/publications
http://phys.org/search/?search=molybdenum
http://phys.org/search/?search=molybdenite
 
Upvote 0
Lee Jay said:
jrista said:
dgatwood said:
jrista said:
It would also be a tremendous amount of data, and a lot more data to be factored into image processing. Five layers at 25 megapixels is 125 million photodiodes. At 14-bit, that's around 220 megabytes per image. RAW editors would also have to add the right kind of support to utilize those extra layers.

Even three layers would be unworkable uncompressed at 25 megapixels per layer. It's hard enough to deal with 25–30 megabyte image files, much less four times that. They're clearly going to have to come up with a good lossless compression algorithm. A lossless scheme similar to PNG should get you about 2.7:1 compression, which means about 81 MB with all five layers included, or 49 MB with only three layers. But I think it is possible to do better than 2.7:1. After all, the high order bits of nearby pixels are likely to be fairly similar except near high-contrast edges, and the more bit depth you have, the more identical bits you'll probably have.
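As a sanity check on those figures, here's the arithmetic in a few lines of Python, assuming packed 14-bit samples and no container overhead:

```python
# Rough file-size arithmetic for a 25 MP, 5-layer, 14-bit sensor,
# using the ~2.7:1 lossless ratio suggested above. Illustrative only.

megapixels = 25e6
bits_per_sample = 14

def raw_megabytes(layers):
    # packed samples, no metadata or container overhead
    return megapixels * layers * bits_per_sample / 8 / 1e6

for layers in (5, 3):
    raw = raw_megabytes(layers)
    print(f"{layers} layers: {raw:.0f} MB raw, "
          f"~{raw / 2.7:.0f} MB at 2.7:1 lossless")
# 5 layers: 219 MB raw, ~81 MB compressed
# 3 layers: 131 MB raw, ~49 MB compressed
```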

Storage space probably isn't nearly as big a concern, as yes, you can compress the files. However, when you're working on them, you need the full pixel data. It's like opening a large 16-bit or 32-bit TIFF in Photoshop...if you look at the memory usage, it is usually several hundred megs.

So what? When you're working in Lightroom or Camera Raw you're working on demosaiced data anyway, at 16 bits per channel for four channels. The size is 8 bytes * pixel count.

I doubt they maintain an alpha channel during processing; it would always be 1.0f/65535.

Either way, jrista is correct that when you process the data, you'll need more working space, because every time you edit the IR/UV handling, you'd have to redo the computation where you collapse the five channels into three. (I'm not going to call it demosaicing because it isn't mosaiced in the first place.) With that said, outside of cell phones, the difference between 50 megabytes (25 megapixels at two bytes each) and 250 megabytes is IMO mostly noise compared with all the other memory usage in these sorts of apps.

It also requires more CPU power to read three or five 16-bit values than one; effectively, each destination pixel in a traditional debayer algorithm requires reading on average one new subpixel value that hasn't been read before, so assuming your algorithm achieves maximum reuse of values (which it won't), a multilayer sensor would be 3–5 times as CPU-intensive. In practice, it is probably closer to a factor of two, though, and I'm pretty sure the debayer algorithm is a small percentage of the total processing, so I doubt this will be a serious problem.

Basically, if your computer is barely tolerable now, it might be intolerable with a multilayer sensor. In practice, though, the apps will probably evolve to take better advantage of multiple cores, and this will probably make the difference moot.
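To make the "collapse five channels into three" step concrete, here's a minimal numpy sketch. The 3x5 mixing matrix is invented for illustration; nothing like it is specified anywhere in this thread:

```python
import numpy as np

# Scaled-down sketch of folding five channels (R, G, B, UV, IR) into
# three. Re-running this multiply over every pixel is the work that
# must be redone whenever the user changes the UV/IR weighting.
h, w = 1024, 1536
rng = np.random.default_rng(0)
img5 = rng.integers(0, 65536, size=(h, w, 5), dtype=np.uint16)

mix = np.array([
    [1.0, 0.0, 0.0, 0.05, 0.10],   # R, with a little UV and IR mixed in
    [0.0, 1.0, 0.0, 0.10, 0.02],   # G
    [0.0, 0.0, 1.0, 0.15, 0.01],   # B
], dtype=np.float32)

rgb = np.clip(img5.astype(np.float32) @ mix.T, 0, 65535).astype(np.uint16)
print(rgb.shape)   # (1024, 1536, 3)
```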
 
Upvote 0
dgatwood said:
Basically, if your computer is barely tolerable now, it might be intolerable with a multilayer sensor. In practice, though, the apps will probably evolve to take better advantage of multiple cores, and this will probably make the difference moot.


They should really be evolving to take advantage of GPUs. GPUs were designed to do this kind of stuff, and do it wicked-fast. They also have gobs of their own memory, so you wouldn't necessarily have to waste as much system memory on image rendering. Just about every computer has a GPU of some kind these days...either integrated into the CPU, or as an add-on card. Even many laptops have dedicated GPUs.


I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.
 
Upvote 0
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.
 
Upvote 0
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.
Agreed!
When AutoPano (panorama rendering) added GPU rendering the time to render large panoramas dropped from hours to minutes. My video card has 512 CUDA cores running at a gigahertz.... WAY!!!! more computing power than a quad core Pentium....
 
Upvote 0
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.

The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.
 
Upvote 0
Lee Jay said:
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.

The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.


I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU. CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add things people have been asking for since v2 was released...like debanding support, or AF point overlays, etc.
 
Upvote 0
jrista said:
Lee Jay said:
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.

The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.


I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU. CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add things people have been asking for since v2 was released...like debanding support, or AF point overlays, etc.

So, you know more than the guy that's writing the code? Kind of arrogant, don't you think?
 
Upvote 0
jrista said:
Lee Jay said:
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.

The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.


I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU. CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add things people have been asking for since v2 was released...like debanding support, or AF point overlays, etc.

For creating a RAW file in the camera, it is doubtful that GPUs would accelerate the process. Creating the RAW file is a read/dump process with very little (if any) processing being done. It is basically: read from the sensor as fast as you can and dump to the buffer....

Creating a Jpg out of the RAW file is a completely different story... Processing that RAW file is a massively parallel operation... the image is typically broken up into 8x8 blocks and run through the jpg compression engine... then groups of blocks are run through the compression engine... and so on until the whole image is done. The 18 Mpixel sensor makes a 5184x3456 image... and that makes 279,936 blocks to compress on the first pass, 4374 blocks on the second pass, and 68 blocks to finish off on the third pass..... Since it is essentially the same sequence of operations on each block, parallel cores on a GPU can speed things up by well over an order of magnitude....

Same thing holds true for rendering images in software to display on the screen or to create print files...
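The block counts are easy to verify. This little loop reproduces the figures above; it illustrates the arithmetic only, not the actual structure of a JPEG encoder:

```python
# Count 8x8 pixel blocks in an 18 MP image, then group 64 results
# per pass, matching the figures quoted above (integer division).
width, height = 5184, 3456
blocks = (width // 8) * (height // 8)   # 279,936 blocks on pass one

passes = []
while blocks >= 1:
    passes.append(blocks)
    if blocks == 1:
        break
    blocks //= 64                       # each pass groups 64 results

print(passes)   # [279936, 4374, 68, 1]
```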
 
Upvote 0
Don Haines said:
jrista said:
Lee Jay said:
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.

The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.


I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU. CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add things people have been asking for since v2 was released...like debanding support, or AF point overlays, etc.

For creating a RAW file in the camera, it is doubtful that GPUs would accelerate the process. Creating the RAW file is a read/dump process with very little (if any) processing being done. It is basically: read from the sensor as fast as you can and dump to the buffer....

Creating a Jpg out of the RAW file is a completely different story... Processing that RAW file is a massively parallel operation... the image is typically broken up into 8x8 blocks and run through the jpg compression engine... then groups of blocks are run through the compression engine... and so on until the whole image is done. The 18 Mpixel sensor makes a 5184x3456 image... and that makes 279,936 blocks to compress on the first pass, 4374 blocks on the second pass, and 68 blocks to finish off on the third pass..... Since it is essentially the same sequence of operations on each block, parallel cores on a GPU can speed things up by well over an order of magnitude....

Same thing holds true for rendering images in software to display on the screen or to create print files...

The CR Pipeline doesn't create 8x8 blocks of compressed data. It creates uncompressed raster data that's highly interdependent (think about applying gradient filters, healing spot corrections, brushed adjustments, etc.).
 
Upvote 0
Lee Jay said:
jrista said:
Lee Jay said:
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.

The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.


I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU. CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add things people have been asking for since v2 was released...like debanding support, or AF point overlays, etc.

So, you know more than the guy that's writing the code? Kind of arrogant, don't you think?


I write heavily parallelized and highly threaded code for a living. I have been for nearly two decades. I think I have the background knowledge to know.


Will you guys knock it off with this crap? I've had enough.
 
Upvote 0
jrista said:
Lee Jay said:
jrista said:
Lee Jay said:
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.

The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.


I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU. CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add things people have been asking for since v2 was released...like debanding support, or AF point overlays, etc.

So, you know more than the guy that's writing the code? Kind of arrogant, don't you think?


I write heavily parallelized and highly threaded code for a living. I have been for nearly two decades. I think I have the background knowledge to know.


Will you guys knock it off with this crap? I've had enough.

The CR Pipeline is not very parallelizable, according to the guy that writes it.
 
Upvote 0
Don Haines said:
jrista said:
Lee Jay said:
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.

The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.


I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU. CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add things people have been asking for since v2 was released...like debanding support, or AF point overlays, etc.

For creating a RAW file in the camera, it is doubtful that GPUs would accelerate the process. Creating the RAW file is a read/dump process with very little (if any) processing being done. It is basically: read from the sensor as fast as you can and dump to the buffer....


I wasn't talking about creating RAW images in the camera. I was talking about rendering RAW images on a computer.


That said, CP-ADC is effectively a means of hyperparallelizing the most critical processing done in-camera. Sony has one ADC per pixel column, vs. Canon's 8 or 16 ADCs per output channel. A patent linked here recently described a means of integrating one ADC per 2x2 pixel group, with 4 processing channels per ADC for what was effectively per-pixel ADC.


Move the DSP either onto the sensor die, or at least as part of a system-on-chip package, make parts of it column-parallel, and you can gain even more parallelism.
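To get a rough sense of why per-column and per-pixel ADCs matter, here's an illustrative comparison. The conversion time is an assumed figure, not a published spec, and the per-channel case ignores multiple simultaneous output channels for simplicity:

```python
# Illustrative readout-time comparison for different ADC layouts.
# t_convert is an assumption; real sensors pipeline and overlap
# conversions, so these are order-of-magnitude numbers only.

width, height = 5184, 3456          # 18 MP sensor, as earlier in the thread
pixels = width * height
t_convert = 1e-6                    # assume 1 microsecond per conversion

layouts = {
    "16 ADCs (per output channel)": 16,
    "column-parallel (one per column)": width,
    "per-pixel ADC": pixels,
}
for name, adcs in layouts.items():
    ms = pixels / adcs * t_convert * 1e3
    print(f"{name:35s} {ms:10.3f} ms")
# ~1120 ms, ~3.5 ms, ~0.001 ms respectively
```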


Don Haines said:
Creating a Jpg out of the RAW file is a completely different story... Processing that RAW file is a massively parallel operation... the image is typically broken up into 8x8 blocks and run through the jpg compression engine... then groups of blocks are run through the compression engine... and so on until the whole image is done. The 18 Mpixel sensor makes a 5184x3456 image... and that makes 279,936 blocks to compress on the first pass, 4374 blocks on the second pass, and 68 blocks to finish off on the third pass..... Since it is essentially the same sequence of operations on each block, parallel cores on a GPU can speed things up by well over an order of magnitude....


Same thing holds true for rendering images in software to display on the screen or to create print files...


Aye. It wouldn't matter if you were rendering to JPEG or simply rendering to some kind of viewport buffer. Each pixel can be independently processed. Since you have millions of pixels, and each one is processed the same, you can write very little code, and run it on a GPU which is explicitly designed to hyperparallelize pixel processing. You would simply be executing pixel shaders instead of standard CPU code. With the modern architectures of GPUs, you can make highly efficient use of the resources available.
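As a concrete stand-in for a pixel shader, here's a per-pixel tone curve in numpy: every output pixel depends only on its own input pixel, which is exactly the access pattern that parallelizes trivially, whether across GPU shader cores or SIMD lanes:

```python
import numpy as np

# A per-pixel tone curve: each output value depends only on the
# corresponding input value, so the whole image can be processed
# in parallel with no inter-pixel dependencies.
rng = np.random.default_rng(0)
img = rng.integers(0, 65536, size=(3456, 5184), dtype=np.uint16)

x = img.astype(np.float32) / 65535.0
curve = np.power(x, 1 / 2.2)                 # simple gamma adjustment
out = (curve * 65535.0).astype(np.uint16)    # ~18M pixels, one vectorized op
print(out.shape)
```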
 
Upvote 0
Lee Jay said:
Don Haines said:
jrista said:
Lee Jay said:
neuroanatomist said:
jrista said:
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards...if games can render vastly more complex scenes 60 to 120 times per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a bayer RAW now.

Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.

The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.


I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU. CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add things people have been asking for since v2 was released...like debanding support, or AF point overlays, etc.

For creating a RAW file in the camera, it is doubtful that GPUs would accelerate the process. Creating the RAW file is a read/dump process with very little (if any) processing being done. It is basically: read from the sensor as fast as you can and dump to the buffer....

Creating a Jpg out of the RAW file is a completely different story... Processing that RAW file is a massively parallel operation... the image is typically broken up into 8x8 blocks and run through the jpg compression engine... then groups of blocks are run through the compression engine... and so on until the whole image is done. The 18 Mpixel sensor makes a 5184x3456 image... and that makes 279,936 blocks to compress on the first pass, 4374 blocks on the second pass, and 68 blocks to finish off on the third pass..... Since it is essentially the same sequence of operations on each block, parallel cores on a GPU can speed things up by well over an order of magnitude....

Same thing holds true for rendering images in software to display on the screen or to create print files...

The CR Pipeline doesn't create 8x8 blocks of compressed data. It creates uncompressed raster data that's highly interdependent (think about applying gradient filters, healing spot corrections, brushed adjustments, etc.).
Different words, but close to what I was saying.... (RAW data is NOT highly interdependent)
Sensor to RAW - serial process.... all you need is a single core to read the sensor quickly and dump to memory. There is no way to "parallelize" the process unless you redesign the sensor and A/D to dump out enough bits at a time to make it worthwhile... in other words, instead of reading one pixel at a time, read multiple pixels at a time.... perhaps it will be done that way in the future, but as things stand now with Canon sensors and A/D you get a byte at a time and any attempts to throw multiple cores at that process would probably slow it down.

RAW to JPG - parallel process. The more cores the better.

Since Neuro's and jrista's comments were about rendering RAW (or other format) files on a computer, the argument about creating RAW files in-camera is a tangent from the discussion at hand.

In theory, using a GPU with multiple cores (there are Nvidia chips with 512 CUDA cores) will speed up rendering of images. THAT IS THE REASON THE CHIPS WERE CREATED!!!!! You can plop 3 cards with dual chips into a computer for 3072 cores.... if you so choose. BTW, Cray made a supercomputer out of Nvidia graphics chips....

In practice, on my home system, rendering a panorama from 324 images took 2 1/2 hours with the GPU disabled and 11 minutes with it enabled.... about a 14 times increase in speed.

EDIT:
I was wrong about the GPU specs.... the Nvidia 980 cards have 2048 cores running at 1.2 GHz and render 144 billion points per second. I could fit 3 into my chassis at home for 6144 cores... that's 7.3 teraflops! Over 12 times the GPU power I have now......
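That 7.3 figure is just cores times clock, counting one operation per core per cycle; marketing FLOPS ratings usually count two per cycle for fused multiply-adds, so take it as a rough scale:

```python
# Cores x clock, one op per core per cycle (not a marketing FLOPS figure).
cores_per_card, cards = 2048, 3
clock_hz = 1.2e9
print(f"{cores_per_card * cards * clock_hz / 1e12:.2f} tera-ops/s")  # ~7.37
```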
 
Upvote 0