Multilayer Sensors are Coming From Canon [CR2]

jrista said:
Don Haines said:
Creating a JPEG out of the RAW file is a completely different story... Processing that RAW file is a massively parallel operation... the image is typically broken up into 8x8 blocks and run through the JPEG compression engine... then groups of blocks are run through the compression engine... and so on until the whole image is done. The 18-megapixel sensor makes a 5184x3456 image... and that makes 279,936 blocks to compress on the first pass, 4,374 blocks on the second pass, and 68 blocks to finish off on the third pass..... Since it is essentially the same sequence of operations on each block, parallel cores on a GPU can speed things up by well over an order of magnitude....


Same thing holds true for rendering images in software to display on the screen or to create print files...


Aye. It wouldn't matter if you were rendering to JPEG or simply rendering to some kind of viewport buffer. Each pixel can be independently processed. Since you have millions of pixels, and each one is processed the same, you can write very little code, and run it on a GPU which is explicitly designed to hyperparallelize pixel processing. You would simply be executing pixel shaders instead of standard CPU code. With the modern architectures of GPUs, you can make highly efficient use of the resources available.

YES!
GPUs are far more efficient than general-purpose CPUs for running shaders and the like.... as mentioned above, that's what the chip was designed for!
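The block arithmetic quoted above can be checked with a short sketch. The pass structure (8x8 pixel blocks, then regrouping 64 at a time on each later pass) is the hypothetical scheme from the post, not a spec of any real JPEG encoder; the point is just that every block sees the same operations, so each pass is trivially parallel:

```python
# Block counts for per-block compression of an 18 MP (5184x3456) frame,
# assuming the post's pass structure: 8x8 pixel blocks first, then
# groups of 64 blocks on each subsequent pass.
width, height = 5184, 3456
first = (width // 8) * (height // 8)   # pass 1: independent 8x8 blocks
second = first // 64                   # pass 2: groups of 64 blocks
third = second // 64                   # pass 3: 68 full groups remain
print(first, second, third)            # 279936 4374 68
```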
 
Upvote 0
Don Haines said:
Different words, but close to what I was saying.... (RAW data is NOT highly interdependent)

...

RAW to JPG - parallel process. The more cores the better.

...

In theory, using a GPU with multiple cores (There are NVidia chips with 512 CUDA cores) will speed up rendering of images.

The raw data is interdependent, but demosaicing the images isn't the holdup in Lightroom for just about anything. Raw to JPEG is largely irrelevant, since turning a raster image into a JPEG is only done for previews and exports and takes very little time. Turning demosaiced data into raster data *with all your corrections applied* takes some time and is not highly parallelizable.

Of all the things that are inefficient in LR, the raw processing pipeline is the least. It's actually pretty efficient. Now, handling huge numbers of previews and putting them up in a grid, resizing and scrolling them, adding the metadata and other badges, interfacing with the database, saving metadata to files and to the database, updating previews and preview thumbnails, handling the user interface, and so on: those are things that LR could do a lot better. The Camera Raw pipeline is already pretty good and is largely not a holdup for most things.
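The interdependence point can be shown with a minimal sketch: in a Bayer mosaic, each output pixel's missing color channels are interpolated from *neighboring* photosites, so blocks can't be processed in total isolation. This is a toy bilinear interpolation of the green channel, assuming an RGGB layout; it's illustrative only, not Lightroom's actual demosaic:

```python
# Toy demosaic: fill in green at non-green photosites by averaging the
# four cross-neighbors (bilinear), assuming an RGGB Bayer pattern where
# green sits at positions with (x + y) odd.
import numpy as np

raw = np.arange(16.0).reshape(4, 4)          # fake 4x4 Bayer mosaic
green = np.zeros_like(raw)
for y in range(4):
    for x in range(4):
        if (x + y) % 2 == 1:                 # green photosite: copy
            green[y, x] = raw[y, x]
        else:                                # interpolate from neighbors
            nbrs = [raw[y + dy, x + dx]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < 4 and 0 <= x + dx < 4]
            green[y, x] = sum(nbrs) / len(nbrs)
```

Each output value depends on its neighbors' raw values, which is exactly the (mild) interdependence being discussed; it's still parallelizable per pixel, just not per isolated pixel.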
 
Upvote 0
Lee Jay said:
Don Haines said:
Different words, but close to what I was saying.... (RAW data is NOT highly interdependent)

...

RAW to JPG - parallel process. The more cores the better.

...

In theory, using a GPU with multiple cores (There are NVidia chips with 512 CUDA cores) will speed up rendering of images.

The raw data is interdependent, but demosaicing the images isn't the holdup in Lightroom for just about anything. Raw to JPEG is largely irrelevant, since turning a raster image into a JPEG is only done for previews and exports and takes very little time. Turning demosaiced data into raster data *with all your corrections applied* takes some time and is not highly parallelizable.

Of all the things that are inefficient in LR, the raw processing pipeline is the least. It's actually pretty efficient. Now, handling huge numbers of previews and putting them up in a grid, resizing and scrolling them, adding the metadata and other badges, interfacing with the database, saving metadata to files and to the database, updating previews and preview thumbnails, handling the user interface, and so on: those are things that LR could do a lot better. The Camera Raw pipeline is already pretty good and is largely not a holdup for most things.


It's possible to put more than simple pixel processing onto GPUs these days. That's where the term GPGPU came from: General-Purpose GPU. That's why the supercomputers of today are really just massive numbers of GPUs configured in parallel, to hyperparallelize the hyperparallelism. It's possible to rewrite Lightroom to operate primarily off the GPU. You could solve all the performance problems. Most GPUs have at least a gig of memory these days, and even midrange ones have as much as three gigs. That much memory could cache a lot of previews. There is a direct, ultra-high-speed pipeline between GPU memory and system memory, allowing massive amounts of information to be paged in on demand... and if that information is images, all the better, as the hardware is optimized for exactly that.


Processing a RAW...all of it, not just the demosaicing but the entire render pipeline, can easily be handled by pixel shaders. There is plenty of lag in Lightroom's develop module when I run it full screen on my 30" CinemaDisplay. I have an extremely powerful system: an overclocked i7 4930K with 16GB of high-speed, low-latency RAM, and a pair of 4GB GTX 760s running in SLI. That's a massive amount of computing power. LR should be able to render a full-screen, full-detail image off a RAW at 30fps... it can barely manage 12fps (and that's with a 5D III 22.3MP RAW). A GPU would make at least 30fps a no-brainer.
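The "per-pixel, shader-shaped" nature of develop corrections can be sketched in a few lines. Exposure and white balance here are toy formulas (each output pixel depends only on its own input pixel), not Lightroom's actual pipeline, but this is exactly the kind of work a pixel shader parallelizes:

```python
# Per-pixel develop corrections: every output pixel is a pure function
# of its own input pixel, so the whole image can be processed in
# parallel. Values and formulas are illustrative stand-ins.
import numpy as np

def develop(rgb, exposure_ev=0.5, wb=(1.1, 1.0, 0.9)):
    out = rgb * (2.0 ** exposure_ev)   # exposure: uniform gain in stops
    out = out * np.array(wb)           # white balance: per-channel gain
    return np.clip(out, 0.0, 1.0)      # clamp to display range

img = np.full((4, 6, 3), 0.25)         # small stand-in linear image
result = develop(img)
```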


As I said before, it would probably take a rewrite of ACR. I don't doubt the ACR author's point that ACR, as it is currently written, couldn't benefit from a GPU. They would have to redesign it to take advantage of a GPU's parallelism. I don't think that's just a patch... it would be a massive overhaul at the very least, if not a total rewrite. I still think it's not only valuable... it'll probably be necessary in the future if pixel counts keep increasing. General-purpose CPUs aren't good at massively parallel processing. They have some parallelism, but it pales in comparison to what GPUs can do (especially when you use two or three or four of them together).


And with that, I'm out.
 
Upvote 0
jrista said:
It's possible to rewrite Lightroom to operate primarily off the GPU. You could solve all the performance problems.

Keep in mind that something like 90% of all new machines use the on-CPU (embedded) GPU. You have to be able to support those who use those as well. The machine I use at work for running LR has no separate GPU, and buying a machine with a separate GPU isn't really allowed where I work.
 
Upvote 0
Lee Jay said:
jrista said:
It's possible to rewrite Lightroom to operate primarily off the GPU. You could solve all the performance problems.

Keep in mind that something like 90% of all new machines use the on-CPU (embedded) GPU. You have to be able to support those who use those as well. The machine I use at work for running LR has no separate GPU, and buying a machine with a separate GPU isn't really allowed where I work.
But when we buy a computer, we buy it for the task at hand...

For example, when I built my computer for home, I wanted something for image processing and my software supported GPUs with CUDA cores.... So I got a solid state "scratch" drive (on a card, NOT one of the slow SATA drives) and a video card with 1024 CUDA cores....

If they rewrite Lightroom, they are going to look to the future, not the past. This is why software packages have "recommended hardware". This is why my panorama software lists a decent Nvidia card as "recommended hardware", yet still runs without one... just a LOT more slowly.
 
Upvote 0
Don Haines said:
Lee Jay said:
jrista said:
It's possible to rewrite Lightroom to operate primarily off the GPU. You could solve all the performance problems.

Keep in mind that something like 90% of all new machines use the on-CPU (embedded) GPU. You have to be able to support those who use those as well. The machine I use at work for running LR has no separate GPU, and buying a machine with a separate GPU isn't really allowed where I work.
But when we buy a computer, we buy it for the task at hand...

For example, when I built my computer for home, I wanted something for image processing and my software supported GPUs with CUDA cores.... So I got a solid state "scratch" drive (on a card, NOT one of the slow SATA drives) and a video card with 1024 CUDA cores....

If they re-write Lightroom, they are going to look to the future, not the past. This is why software packages have "recommended hardware"

In the past, all GPUs were off-CPU. Now they are 90% on-CPU. In the past, 100% of computers were desktops. Now, they are more than 70% laptops.

The last time I bought a desktop was 2004. If they are looking to the future, they are looking to smaller devices that are not going to include board-based SSDs and huge off-CPU GPU cards, since both are going away as fast as CRT televisions.
 
Upvote 0
Lee Jay said:
In the past, all GPUs were off-CPU. Now they are 90% on-CPU. In the past, 100% of computers were desktops. Now, they are more than 70% laptops.

Lightroom and Photoshop have mobile versions for the iPad. From a processing standpoint, that's about doing more with less hardware, not better leveraging the fastest hardware.
 
Upvote 0
Lee Jay said:
Don Haines said:
Lee Jay said:
jrista said:
It's possible to rewrite Lightroom to operate primarily off the GPU. You could solve all the performance problems.

Keep in mind that something like 90% of all new machines use the on-CPU (embedded) GPU. You have to be able to support those who use those as well. The machine I use at work for running LR has no separate GPU, and buying a machine with a separate GPU isn't really allowed where I work.
But when we buy a computer, we buy it for the task at hand...

For example, when I built my computer for home, I wanted something for image processing and my software supported GPUs with CUDA cores.... So I got a solid state "scratch" drive (on a card, NOT one of the slow SATA drives) and a video card with 1024 CUDA cores....

If they re-write Lightroom, they are going to look to the future, not the past. This is why software packages have "recommended hardware"

In the past, all GPUs were off-CPU. Now they are 90% on-CPU. In the past, 100% of computers were desktops. Now, they are more than 70% laptops.

The last time I bought a desktop was 2004. If they are looking to the future, they are looking to smaller devices that are not going to include board-based SSDs and huge off-CPU GPU cards, since both are going away as fast as CRT televisions.
I guess we had better tell the gamers about that...

Yes, the bulk of the market is now tablets and laptops, but there is a very vibrant market for "power systems". If you want something with decent power, that's the way you go.
 
Upvote 0
Don Haines said:
Yes, the bulk of the market is now tablets and laptops, but there is a very vibrant market for "power systems". If you want something with decent power, that's the way you go.

Adobe likely doesn't want to exclude 90% of their market by making products that only work properly on very high-performance machines.

By the way, my current laptop is about 100 times faster than my previous desktop - a desktop on which I ran the first versions of Lightroom.
 
Upvote 0
Lee Jay said:
Don Haines said:
Yes, the bulk of the market is now tablets and laptops, but there is a very vibrant market for "power systems". If you want something with decent power, that's the way you go.

Adobe likely doesn't want to exclude 90% of their market by making products that only work properly on very high-performance machines.

By the way, my current laptop is about 100 times faster than my previous desktop - a desktop on which I ran the first versions of Lightroom.
It doesn't exclude anyone. You write the software to take advantage of a GPU. If you don't have one, the software works perfectly. If you have one, it is faster. The user can run it on their tablet, their laptop, or their desktop ( rack mount for me :) ). If you want more performance, buy better hardware. It is better to have that option than no option at all.
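That graceful-fallback pattern looks something like this in practice: try the accelerated path, and fall back to a plain-CPU implementation when no GPU (or GPU library) is available. CuPy is a real CUDA array library with a NumPy-compatible API, but its use here is just one example backend, not how Adobe would do it:

```python
# Fallback pattern: same code path, optionally accelerated.
import numpy as np

try:
    import cupy as xp      # GPU arrays if CUDA + CuPy are present
    xp.zeros(1)            # probe: raises if no usable CUDA device
    HAS_GPU = True
except Exception:
    xp = np                # NumPy-compatible CPU fallback
    HAS_GPU = False

def brighten(pixels, gain=1.2):
    """Simple per-pixel gain, clamped to [0, 1], on whichever backend."""
    arr = xp.asarray(pixels, dtype=float)
    return xp.clip(arr * gain, 0.0, 1.0)
```

Users with a GPU get the fast path; everyone else runs the identical code on the CPU, which is exactly the "nobody is excluded" point.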
 
Upvote 0
Don Haines said:
It doesn't exclude anyone. You write the software to take advantage of a GPU. If you don't have one, the software works perfectly. If you have one, it is faster. The user can run it on their tablet, their laptop, or their desktop ( rack mount for me :) ). If you want more performance, buy better hardware. It is better to have that option than no option at all.

And sooner or later the naysayers will realize that your average tablet also has a GPU that can do those calculations faster while using less energy per operation. Their cores are actually very closely related to their desktop/console counterparts, much more so than the main CPUs are.
They already see quite a lot of action in the image manipulation done for GUI rendering, so using them to manipulate images isn't exactly terra incognita.
 
Upvote 0