« on: January 23, 2015, 05:41:07 AM »
...when it is possible to utilize more than 1500 processors on the latest NVIDIA cards for processing, instead of just 4 or 8 cores on the main CPU.
Just to level expectations: the fact that modern GPUs have 1500+ cores doesn't mean you'll see a 375x performance increase over a quad-core machine by utilising them.
GPU cores are highly specialised, and only certain kinds of work play to their strengths. On top of that, there's quite a large overhead in getting your data into the GPU to begin with, and then getting it back out the other side. It's not just a case of enabling CUDA or OpenCL in your app and watching the numbers fly.
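To put a rough shape on that overhead, here's a minimal CUDA sketch (my own toy example, nothing to do with Adobe's code) that times the upload, a trivial per-pixel kernel, and the download separately. For a kernel this simple, the two copies across the bus can easily dominate the actual compute, which is exactly why "just turn on CUDA" doesn't work:

[code]
// Toy benchmark: how much of the "GPU win" is eaten by transfers?
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Trivial per-pixel adjustment: a brightness offset on a float buffer.
__global__ void brighten(float *px, int n, float offset) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] += offset;
}

int main() {
    const int n = 24 * 1000 * 1000;          // ~24 MP image, one channel
    size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 0.5f;

    float *dev;
    cudaMalloc(&dev, bytes);

    cudaEvent_t t0, t1, t2, t3;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventCreate(&t2); cudaEventCreate(&t3);

    cudaEventRecord(t0);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // data in...
    cudaEventRecord(t1);
    brighten<<<(n + 255) / 256, 256>>>(dev, n, 0.1f);      // the actual work
    cudaEventRecord(t2);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // ...and out again
    cudaEventRecord(t3);
    cudaEventSynchronize(t3);

    float msIn, msKernel, msOut;
    cudaEventElapsedTime(&msIn, t0, t1);
    cudaEventElapsedTime(&msKernel, t1, t2);
    cudaEventElapsedTime(&msOut, t2, t3);
    printf("upload %.2f ms, kernel %.2f ms, download %.2f ms\n",
           msIn, msKernel, msOut);

    cudaFree(dev);
    free(host);
    return 0;
}
[/code]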
Of course, Adobe can and should be using these technologies, for both RAW decoding and the entire pipeline. Remember, RAW decoding is only part of it: they decode the RAW, then individually apply every edit you've made to produce the final output. That likely means paging data in and out of the GPU multiple times, and you have to do a lot of work to make sure you're taking the most efficient path. Perhaps one adjustment is already really fast on the CPU, and the overhead of moving everything onto the GPU just isn't worth it in that instance.
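For what "not paging in and out per edit" might look like, here's another toy sketch of my own: the image is uploaded once, a stack of invented per-pixel edit kernels runs entirely on the device-resident buffer, and the result comes back in a single copy. None of the kernel names correspond to Lightroom's or Aperture's actual adjustments, and real edits (local brushes, lens corrections) are far more involved:

[code]
// Toy edit stack kept resident on the GPU between adjustments.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Three invented per-pixel "edits" operating in place.
__global__ void exposure(float *px, int n, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] *= gain;
}
__global__ void contrast(float *px, int n, float c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] = (px[i] - 0.5f) * c + 0.5f;
}
__global__ void blackPoint(float *px, int n, float b) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] = px[i] < b ? 0.0f : px[i];
}

int main() {
    const int n = 24 * 1000 * 1000;          // ~24 MP, one channel
    size_t bytes = n * sizeof(float);
    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 0.5f;

    float *dev;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  // pay the bus once

    int threads = 256, blocks = (n + threads - 1) / threads;
    // The whole edit stack runs on the device-resident buffer;
    // nothing crosses the PCIe bus between steps.
    exposure<<<blocks, threads>>>(dev, n, 1.3f);
    contrast<<<blocks, threads>>>(dev, n, 1.1f);
    blackPoint<<<blocks, threads>>>(dev, n, 0.02f);

    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);  // pay the bus once
    printf("first pixel after edit stack: %f\n", host[0]);

    cudaFree(dev);
    free(host);
    return 0;
}
[/code]

The CPU-vs-GPU question the post raises would sit on top of this: for each adjustment, a well-engineered pipeline has to decide whether running it on the device is worth keeping (or moving) the data there at all.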
LR does a good job of faking speed by creating previews, but the main performance problems show up while you're modifying edits. Doing this well is a big task, and I really hope the delays are because they're taking the time to do it right. Bash Apple's Aperture all you want, but its imaging pipeline absolutely screams, because it's built on a mature GPU-accelerated framework.