« on: October 15, 2014, 09:36:26 PM »
I've been curious for some time why Lightroom doesn't make extensive use of the capabilities of my video cards... if games can render vastly more complex scenes at 60 to 120 frames per second using a GPU, Lightroom should be able to do what it does on a 5-layer RAW quicker than it renders a Bayer RAW now.
Agreed. DxO Optics Pro used to be rather slow at displaying images at 100% on my Mac, and even filmstrip thumbnails weren't very fast. A version back (IIRC), they added GPU acceleration and it sped the rendering up significantly.
The guy that writes the Camera Raw code says GPU acceleration would help very little with the Camera Raw pipeline.
I honestly have a very hard time believing that. There is no way the current code is as parallel as it could be when run on a GPU; CPUs simply cannot achieve that kind of parallelism. I wouldn't be surprised if they had to completely rewrite the ACR pipeline to properly take advantage of GPU power, but I think they should do that anyway, and build in support for pipeline-level plugins so third parties could add features people have been asking for since v2 was released... like debanding support, or AF point overlays, etc.
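To illustrate the kind of parallelism being argued about here: many raw-pipeline steps (tone curves, white balance, per-pixel color math) are pure per-pixel maps with no cross-pixel dependencies, which is exactly the shape GPUs handle well. Here's a minimal sketch in Python with made-up values; the `tone_curve` function is hypothetical and this is not Adobe's actual pipeline code:

```python
from concurrent.futures import ThreadPoolExecutor

def tone_curve(v):
    # hypothetical gamma-style curve applied to one 8-bit value
    return round((v / 255) ** 0.45 * 255)

pixels = list(range(256))  # stand-in for a scanline of raw values

# Each output depends only on one input, so the map can be split
# across any number of workers (or GPU threads) with no coordination.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(tone_curve, pixels))

serial = [tone_curve(v) for v in pixels]
assert parallel == serial  # same result regardless of worker count
```

(Python threads here only demonstrate the data-parallel structure, not GPU-level speedup; on a GPU each pixel would map to its own thread. Steps with neighborhood dependencies, like demosaicing, need more care but are still local and tile-friendly.)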
So, you know more than the guy that's writing the code? Kind of arrogant, don't you think?
I write heavily parallelized and highly threaded code for a living. I have been for nearly two decades. I think I have the background knowledge to know.
Will you guys knock it off with this crap? I've had enough.
The CR pipeline is not very parallelizable, according to the guy who writes it.