
Author Topic: Do You Wish Lightroom Was Quicker? Adobe Does Too  (Read 10662 times)

Mikehit

  • 1D X Mark II
  • *******
  • Posts: 1653
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #90 on: July 17, 2017, 11:53:03 AM »

Quote from: AvTvM
My explanation: Adobe has been resting on their laurels

Quote
It's highly probable - as long as a product sells well enough, there is little commercial reason to invest a lot in modifying it deeply, because there is a risk of introducing new bugs to chase and fix, and stability issues (just look at GPU support...). Look at how Canon itself is often conservative with new models - it's the same approach.


Is it the same approach? AvTvM is talking (as he usually does) about how the company is complacent and does not need to develop, so it just sits there raking the money in. You seem to be talking about the same thing I was: how Adobe (and Canon) choose to apply their R&D budget - and because that does not chime with how AvTvM wants them to spend it, he takes it as laziness.


LDS

  • 1D Mark IV
  • ******
  • Posts: 887
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #91 on: July 17, 2017, 12:52:51 PM »
Is it the same approach? AvTvM is talking (as he usually does) about how the company is complacent and does not need to develop, so it just sits there raking the money in. You seem to be talking about the same thing I was: how Adobe (and Canon) choose to apply their R&D budget - and because that does not chime with how AvTvM wants them to spend it, he takes it as laziness.

It is also true that commercial entities like easy money - they won't please users if they don't see much more profit in it, and since the mantra "maximize shareholder value" was hammered into managers' heads in MBA programs, they need really good reasons to increase R&D beyond what they believe is adequate to maintain market share when they're already the leader.

And since often the "stakeholders" are the CEOs and other board people, they will happily funnel money into dividends and buybacks instead of R&D. Some understand how to balance that well enough; others don't, and their companies may go down the sink.

EdelweissPirate

  • SX60 HS
  • **
  • Posts: 5
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #92 on: July 17, 2017, 05:55:11 PM »
Quote from: AvTvM
@edelweiss: i thought i had made it clear that i dont like [Lightroom] using a grossly underperforming implementation of a database.

Yeah...this is why I suspect you don't understand databases in general and SQLite in particular. SQLite is a stripped-down, speedy database. I understand that you think a database is unnecessary, but this is one of the fastest Adobe could possibly have used. SQLite is open-source; Adobe didn't implement it—they just used it. There's a big difference.

I'm not trying to make fun of you for not knowing these things. It's just that in order for your point to be valid, it requires a falsehood (that SQLite is an inherently slow database) to be true.

But let's put that aside for now. What operation do you think Lightroom is doing via its database that would be so much faster without one? I'm not asking what you find slow about Lightroom, though there are many aspects that could be faster. I'm asking what low-level operation you imagine is slowed down by Lightroom's use of SQLite.
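To put some numbers on it, here's a throwaway sketch using Python's built-in sqlite3 module. The table layout is invented purely for illustration - I have no idea what Lightroom's real schema looks like - but it shows the scale of a keyed catalog-style lookup:

Code:
import sqlite3, time

# Invented catalog-ish table -- NOT Lightroom's real schema -- populated
# with 100,000 fake images, each with a 4 KB settings blob.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, path TEXT, settings BLOB)")
db.executemany("INSERT INTO images VALUES (?, ?, ?)",
               [(i, f"IMG_{i:05d}.CR2", b"x" * 4096) for i in range(100_000)])
db.commit()

t0 = time.perf_counter()
for i in range(1000):
    db.execute("SELECT settings FROM images WHERE id = ?", (i * 97,)).fetchone()
print(f"{(time.perf_counter() - t0) / 1000 * 1e6:.1f} us per lookup")

That kind of keyed lookup typically lands in the microsecond range - orders of magnitude below anything you could perceive in the UI.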

I agree with LDS that the fact that the UI is written largely in Lua might have something to do with it. In most cases, Lua code is run in a user-transparent virtual machine, and that may be responsible for a lot of the UI lag. But I don't know enough about how Lightroom uses Lua to say for sure...Adobe could be using some fancy Lua-to-compiled-code maneuver (not standard compiled Lua bytecode) that's beyond my ken.


Quote from: AvTvM
i think it is proven
I don't think that word means what you think it means.

Quote
beyond a reasonable doubt, that LR performance problems - even on powerful hardware - are rooted in its database, since other raw converters/image editors - anything from canon DPP to Capture 1 pro to Adobe Bridge - does NOT have those performance problems.

This is a straightforward post hoc fallacy. Again, I'm not mocking you here; I'm just trying to point out that Lightroom's speed problem (which is absolutely real) isn't intrinsic to its use of a database.

Quote from: AvTvM
so for me the preferred solution would be a well performing "LR lite" with identical raw converter and image editor functionality but without database.
Then why not just use ACR plus Photoshop?


Quote from: AvTvM
i have no problem to find image files in windows thanks to my well thought out and disciplined file and folder naming scheme and structure.

Whether you understand it or not, your system is a database (albeit a comparatively slow one requiring a meatspace interpreter).

Quote from: AvTvM
furthermore there are apps available to conduct AI-based content specific search for images. no manual image tagging needed. no lightroom database needed.

As others have pointed out, all of these apps have their own databases. You've taken the database out of Lightroom, but you've still got the database in your workflow.

Valvebounce

  • Canon EF 400mm f/2.8L IS II
  • *********
  • Posts: 3123
  • Doing my best to get all of this to work together.
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #93 on: July 17, 2017, 07:40:08 PM »
Hi EdelweissPirate.
Funniest thing I have read today.  ;D ;D ;D

Cheers, Graham.

Quote
"Whether you understand it or not, your system is a database (albeit a comparatively slow one requiring a meatspace interpreter)."
End quote.
7DII+Grip, 1DsIII, 7D+Grip, 40D+Grip, EF 24-105 f4L EF-S 17-85, EF-S 10-22, EF 70-200 f2.8 L IS II, EF 2x III, EF 100-400 f/4.5-5.6l IS II, Σ17-70 f2.8-4 C, EF 50mm f1.8, YN600EX-RT, YN-E3-RT, Filters, Remotes, Macro tubes, Tripods, heads etc!

1DsIII, 20D, 24-105, 17-85, Nifty 50 pre owned.

haversian

  • SX60 HS
  • **
  • Posts: 3
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #94 on: July 17, 2017, 11:41:43 PM »
[I switched to computer engineering about 3/4 of the way through my CS degree, so while I have an adequate grasp of algorithmic complexity, my professional experience is all in hardware design, not software.]

Lightroom's scaling suggests to me that Adobe did the 'hard' work of making a multi-threaded RAW processor, but not the comparatively 'easy' work of spawning multiple instances of it to achieve near-linear speed-up on embarrassingly parallel tasks.  Granted, there would be some overhead for inter-process communication and they might have to serialize database access, but the heavy lifting is all in the image processing.  And they don't appear to have done any work to predict users' future actions and prepare for them in advance.
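Conceptually, the orchestration is small. Here's a toy sketch in Python (Adobe's code is obviously not Python, and render_raw here is just a stand-in for the real pipeline) of what near-linear export scaling looks like:

Code:
from multiprocessing import Pool
from pathlib import Path

def render_raw(path: Path) -> Path:
    """Stand-in for the real CPU-bound RAW-to-JPEG render."""
    out = path.with_suffix(".jpg")
    # ... demosaic, apply the edit stack, encode ...
    return out

if __name__ == "__main__":
    raws = sorted(Path("shoot").glob("*.CR2"))
    # One worker process per core. Each image is independent, so speed-up
    # should stay near-linear until disk or memory bandwidth intervenes.
    with Pool() as pool:
        for done in pool.imap_unordered(render_raw, raws):
            print("exported", done)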

Export to Disk is the only test that Puget did which exhibits reasonably linear scaling, and even then only out to 8-10 cores.  I would have liked to see Puget re-run their test with two simultaneous exports of 40 images each, rather than one export of 80.  Does that improve the scaling, perhaps by forcing LR to spawn a second worker thread?  I haven't been able to get good data running such tests myself, but I only have 4 physical (+4 logical) cores to play with, so the fact that LR export scales reasonably linearly to that point would tend to obscure any advantage I might see from the hypothetical second worker thread.

Convert to DNG should be just as embarrassingly parallel as Export, but its scaling behavior is quite different.  Speed-up from the second core is roughly linear, but the third and fourth core do very little, and beyond that there's negligible additional scaling.  Conceptually, Convert is the same render-to-bitmap operation as Export, followed by encode-to-DNG rather than encode-to-JPG.  However, as we can see from the fact that Convert is ~3x as fast on 1-2 cores as Export, the algorithm appears to be avoiding a lot of the work of manipulating image data that Export does.  Since the disk bandwidth is quite low (tens of MB / sec maximum) in both cases, either LR is phenomenally inefficient at file access (this seems very unlikely since there's no obvious reason they wouldn't load an entire file into RAM and then flush an entire file to disk when done), or the bottleneck is elsewhere.  On a system with 20MB of cache, nearly able to cache an entire RAW image from Puget's test, it seems unlikely that memory bandwidth is the limiting factor.
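That plateau is the classic Amdahl's law shape. If only a fraction p of the Convert path is parallelized, the best speed-up on n cores is

S(n) = 1 / ((1 - p) + p/n)

Purely as an illustration (the 0.7 is a number that fits the shape of the curve, not anything I've measured): with p = 0.7 you get S(2) ≈ 1.5, S(4) ≈ 2.1, and a hard ceiling of 1/0.3 ≈ 3.3 no matter how many cores you add - which looks a lot like the Convert curve Puget published.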

Generate 1:1 Previews and Generate Smart Previews both have similar scaling limits to Convert: peak performance is reached at about 4 cores, with best performance being only 2-3x as high as single-core performance.

There aren't any obvious resource limits standing in the way of better performance: the tasks are not CPU bound, not disk I/O bound, probably not memory bandwidth or latency bound, and don't have clear inter-process communication or coordination limits.  All of which leads me to believe that LR was architected to minimize latency (it tries to process a single image as fast as it can, via a multithreaded rendering process) rather than maximize throughput (operations per hour).  Though it's pure speculation on my part, I would guess that rendering is probably pipelined (eg demosaic -> user edits -> noise reduction -> sharpen - and yes, I realize that's not the order LR does those steps: this is an example of pipelining) and scaling is limited by the slowest pipeline stage and the total number of pipeline stages.
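If that's right, the arithmetic is simple: a pipeline with stage times t_1 ... t_k has steady-state throughput 1/max(t_i), so its speed-up over serial execution is (t_1 + ... + t_k) / max(t_i), which can never exceed k, the number of stages. For example, if the slowest of four stages takes 40% of the total time, you cap out at 1/0.4 = 2.5x regardless of core count - again in the 2-3x range the benchmarks show.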

(It will be interesting when AMD's Threadripper CPUs get into users' hands to see how LR scaling changes in response to the different hardware behavior (such as markedly lower single-threaded memory bandwidth, but much higher multithreaded bandwidth; or the very different cache organization and performance characteristics) that the new platform brings.)

But despite this hypothetical focus on latency minimization, there seems to have been little to no effort at latency hiding.  For example, when I'm viewing a photo in the library module, LR doesn't seem to pre-render the next and previous photos so I can 'instantly' move forward or back in the film strip.  It doesn't pre-load everything it would need if I were to flip to the develop module to make some quick edits.  All three of those operations are high-probability guesses as to what my next action will be, and are good targets for trading power efficiency for higher productivity.
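The logic for that kind of latency hiding is almost trivial. Here's a toy Python sketch (the names are mine, nothing here is Lightroom code): while you stare at photo i, idle cores warm a cache for the neighbors.

Code:
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)   # spare cores do the speculative work
cache = {}

def render(photo):
    # Stand-in for the expensive part: demosaic + edit stack + resize to screen.
    return f"bitmap({photo})"

def show(photos, i):
    if i not in cache:                     # miss: render in the foreground
        cache[i] = render(photos[i])
    print("displaying", cache[i])          # hit: effectively instant
    for j in (i + 1, i - 1):               # high-probability next views
        if 0 <= j < len(photos) and j not in cache:
            pool.submit(lambda j=j: cache.setdefault(j, render(photos[j])))

show(["001.CR2", "002.CR2", "003.CR2"], 1)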

I would like to see Lightroom make use of idle compute power so that repetitive, predictable operations are fast because LR has already anticipated and completed the thing I'm about to ask it to do.

LDS

  • 1D Mark IV
  • ******
  • Posts: 887
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #95 on: July 18, 2017, 04:23:18 AM »
On a system with 20MB of cache, nearly able to cache an entire RAW image from Puget's test, it seems unlikely

What 20MB cache are you referring to? The CPU one?

AFAIK, when a RAW is loaded into memory it takes far more RAM than its size on disk. The Adobe document I posted above says that a 40Mpx image can take 240MB of RAM, and between 0.5 and 1GB with edits applied. Even halved for a 20Mpx image, that is still far more than 20MB.

Anyway, the CPU L3 cache is shared across all cores and across all the applications running on the PC (including the OS itself and the many background services). Lightroom has very little control over what is in the cache at any given time - all it can do is optimize the code for cache-friendly access patterns (I'll avoid going too technical in this forum), and hope :)

Actually, more parallel executions could mean more CPU cache contention - depending on what the processes are doing.

Increasing the Camera Raw cache - disk based, but under full LR control - could improve performance, because LR can skip some processing stages if the image is cached.

EdelweissPirate

  • SX60 HS
  • **
  • Posts: 5
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #96 on: July 18, 2017, 07:05:43 AM »
Cheers, Graham...I'm glad you liked the turn of phrase.

LDS, you're right that Canon raw files are compressed. I have a 7D (MK I) and its ~18 MP images should decompress from raw to about 31.3 MB in memory:

5184 * 3456 pixels * 14 bits per pixel = 250822656 bits ~ 31.3 MB

The 5D MK IV has these specs:

6720 * 4480 pixels * 14 BPP = 421478400 bits ~ 52.7 MB

Many editing operations are straight-up transforms of the raw-image matrix, so you could cache the results of each editing step with an uncompressed image roughly the same size as the original raw file. If I'm right about that, then to a first approximation, a 5D MK IV raw image with about 15 cached, rendered edits should take about 1 GB of RAM to store when using the editing module.

That jibes with the Adobe document you posted. CPUs don't have nearly that much cache on silicon, of course. But many serious LR users have 32-64 GB of RAM, and in such cases LR would have enough resources to store quite a few pre-rendered images in RAM.
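For anyone who wants to check my arithmetic, a throwaway Python snippet (decimal megabytes):

Code:
# Uncompressed raw data sizes for the two bodies discussed above.
for name, w, h in [("7D", 5184, 3456), ("5D IV", 6720, 4480)]:
    bits = w * h * 14                      # 14 bits per photosite
    print(f"{name}: {bits} bits ~ {bits / 8 / 1e6:.2f} MB")
# 7D:    250822656 bits ~ 31.35 MB
# 5D IV: 421478400 bits ~ 52.68 MB

# Original plus ~15 cached edit renders of similar size:
print(f"~{16 * 52.68e6 / 1e9:.2f} GB")     # ~0.84 GB, i.e. roughly 1 GB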

Haversian, thanks for the informed commentary. I think you're right that LR does zero (or nearly zero) speculative rendering to hide latency. It seems like such low-hanging fruit for the developers that I can't help wondering whether it was a design decision to prevent LR from seeming to "churn" in the background while apparently doing nothing. Being more aggressive about speculative rendering would certainly make better use of 6+ physical cores than LR does now.


Khalai

  • Canon 7D MK II
  • *****
  • Posts: 574
  • In the absence of light, darknoise prevails...
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #97 on: July 18, 2017, 08:28:12 AM »
Increasing the Camera Raw cache - disk based, but under full LR control - could improve performance, because LR can skip some processing stages if the image is cached.

I have allocated 30 GB of my Samsung 950 Pro NVMe PCIe drive for caching, while having 32 GB RAM as well. Doesn't help much really...
6D | Zeiss 21/2.8 ZE | 24-70/2.8L II | 35/1.4L | 50/1.2L | Zeiss 85/1.4 ZE | 100/2.8L Macro | 70-200/2.8L II

LDS

  • 1D Mark IV
  • ******
  • Posts: 887
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #98 on: July 18, 2017, 09:17:04 AM »
LDS, you're right that Canon raw files are compressed. I have a 7D (MK I) and its ~18 MP images should decompress from raw to about 31.3 MB in memory:

5184 * 3456 pixels * 14 bits per pixel = 250822656 bits ~ 31.3 MB

You forgot the color planes :) After demosaicing you get an image with more than the 14 bits per pixel of the bare sensor data.

Let's remember LR uses a slightly modified version of the ProPhoto RGB color space, which needs more than the 24 bits per pixel of sRGB (and 24 would be rounded up to 32 anyway because of the way CPUs work - using fractions of a machine word would make computations much more complex).

To preserve the full range of each color, LR will need at least 16 bits per color per pixel, which means 48 bits per pixel - and I guess that will be rounded up to 64, depending on how LR encodes the data in memory; again, CPU instructions are usually optimized to work on 32/64/128 bit values.

So your 7D image would be about 68MB at 32 bits per pixel, and 136MB at 64. That becomes 114/229MB for the 5D4, which is in line with what the Adobe presentation says, and makes me think LR uses 64 bits per pixel.
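For the record, the arithmetic (a quick Python check; here "MB" means 2^20 bytes, and 32/64 bits per pixel become 4/8 bytes):

Code:
MB = 2 ** 20
for name, w, h in [("7D", 5184, 3456), ("5D4", 6720, 4480)]:
    px = w * h
    print(f"{name}: {px * 4 // MB} MB at 32 bpp, {px * 8 // MB} MB at 64 bpp")
# 7D:  68 MB at 32 bpp, 136 MB at 64 bpp
# 5D4: 114 MB at 32 bpp, 229 MB at 64 bpp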

That also raises the question of how good LR is at exploiting the advanced CPU instruction sets (the various versions of MMX, SSE and AVX) designed to improve the performance of exactly this kind of processing (even without using the GPU) - although, again, it has to cope with the fact that not all supported CPUs may have the latest ones.

The fact that the system requirements (https://helpx.adobe.com/lightroom/system-requirements.html) don't specify much about the processor makes me think LR only targets the lowest common denominator.
« Last Edit: July 18, 2017, 10:34:53 AM by LDS »

EdelweissPirate

  • SX60 HS
  • **
  • Posts: 5
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #99 on: July 18, 2017, 12:15:02 PM »
Thanks, LDS, for expanding on my post and filling in the stuff I missed. I'm interested in these things, but my field is mechanical engineering, not computer science or software engineering.

A quick Google search implies that Lightroom is only compiled against SSE2 and nothing later. On the other hand, I think the real question is: what instruction sets is ACR compiled against? I'd expect it's the same, but maybe others here have better information.

Thanks again for correcting my post.

LDS

  • 1D Mark IV
  • ******
  • Posts: 887
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #100 on: July 18, 2017, 02:20:34 PM »
A quick Google search implies that Lightroom is only compiled against SSE2 and nothing later. On the other hand, I think the real question is: what instruction sets is ACR compiled against? I'd expect it's the same, but maybe others here have better information.

AFAIK LR and ACR share the same code for the features they have in common. Then on top of that LR uses the Lua engine, and it would be interesting to know whether, and how well, that too can take advantage of more powerful instructions when available.

It looks like some other Adobe products can use more advanced instruction sets, but those are the ones aimed at more professional users, so raising the hardware requirements is less of an issue.

IMHO LR has room for big improvements, but the price may be dropping support for some older processors, and it may require some deep code changes.

haversian

  • SX60 HS
  • **
  • Posts: 3
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #101 on: July 19, 2017, 09:36:40 AM »
What 20MB cache are you referring to? The CPU one?

Yes.  But you're right, LR almost certainly uncompresses the image to 12-24 bytes per pixel (4-8 bytes per color channel) in RAM before doing any work on it, so my reference to the RAW size was at best misleading.  An uncompressed image would get you a larger working set, but more regular memory accesses since it's a simple 2D array, which memory prefetchers are well optimized to handle.  Either way, that's slower than my RAW size reference would imply.

And that's a good comment about cache contention.  I'm in the wrong field to usefully speculate about whether LR's image processing algorithms look more like a streaming application, or more like something that would provoke fights over the cache.  Surely Adobe has thoroughly profiled the code, but of course they're not going to share their results with us.  I'll have to do some digging to see if anyone else has done that research.

LDS

  • 1D Mark IV
  • ******
  • Posts: 887
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #102 on: July 19, 2017, 11:17:44 AM »
In this thread:

https://feedback.photoshop.com/photoshop_family/topics/lightroom-clone-and-brush-tool-can-not-stress-the-cpu-is-slow-only-on-cpu-with-xeon-architectures-can-confirm?topic-reply-list%5Bsettings%5D%5Bfilter_by%5D=all&topic-reply-list%5Bsettings%5D%5Bpage%5D=2#topic-reply-list

Simon Chen (one of the key people in LR development) offers a tweak to change how LR uses CPUs, and explains the design trade-offs Adobe made in developing the import stage.


Diko

  • Canon 6D
  • *****
  • Posts: 372
  • 7 fps...
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #103 on: July 20, 2017, 09:11:55 AM »
Quote
The association between Lightroom's database and its performance issues exists only in your imagination. It's a mistake to claim that SQLite is an a priori cause of Lightroom's performance issues.

Do me a favour and try putting 20 healing spots on an image. Do that for the next 10 out of 50. Let me know if it gets sluggish when you move on to the next image. Also move around among the images.

And if you ask me what that has to do with SQLite, I think our debate would end here ;-)

Now, I have read the posts carefully and tend to believe that most people here don't realize that there IS a scenario in which all the performance issues are not related - not caused by one and the same bottleneck.

Also, all the more knowledgeable IT guys who have posted seem to omit the possibility that everything is a cause:

- SQLite
- Lua (I forgot to rant about it in my previous posts)
- Cocoa and Silver

Each of the above has its own merits and drawbacks. If I recall correctly, the memory leak issues began after the transition to Lua, which was done to provide a better API for users to create custom plugins (only a presumption) and also, as stated in the presentation mentioned earlier, to make core API calls easier to reach. But as noted, it is an interpreted language: it needs a virtual machine to run, and it is more of a scripting language than a compiled one.

Each one of us has experienced one issue or another. Adobe should aim at updating its core engines to use more than SSE2 (2001) - something more advanced like AVX2 (2013). That's still 4 years old, which means that if you edit photos on a computer older than that, you either need an upgrade or don't really need such professional software.


And all that being said, it is not true that ONLY SSE2 is being used. Check the same link with Simon Chen:

Camera Raw SIMD optimization: SSE2, AVX, AVX2. But this is ONLY one of the tools in LR!

I find this little config.lua tweak a great start in troubleshooting the performance issues :-)


Thank you Adobe!

Luck is what happens when preparation meets opportunity.

TomDibble

  • SX60 HS
  • **
  • Posts: 3
Re: Do You Wish Lightroom Was Quicker? Adobe Does Too
« Reply #104 on: Today at 06:37:01 PM »
Quote from: Diko
Quote
The association between Lightroom's database and its performance issues exists only in your imagination. It's a mistake to claim that SQLite is an a priori cause of Lightroom's performance issues.

Do me a favour and try putting 20 healing spots on an image. Do that for the next 10 out of 50. Let me know if it gets sluggish when you move on to the next image. Also move around among the images.

And if you ask me what that has to do with SQLite, I think our debate would end here ;-)

Umm, okay. Would the debate end because you don't understand what would be in the database in such a case?

This is exactly the sort of thing that is not exercising the database (unless Adobe's engineers are grossly incompetent, but if you believe that then why are you even giving their products a second glance?)

There are several databases in Lightroom:
  • The catalog, which stores references to all the images on disk, as well as a cache of the modification instructions for those images.
  • The "Previews" (1:1 and Standard) cache, which stores rendered 1:1 and "standard" size versions of the final images.
  • The "Smart Previews" cache, which stores the original raw data in compressed form so that basic changes can be made in the UI before the original RAW file is pulled up from disk, or when the original RAW file is missing entirely.

There might be more in some circumstances, but those are the biggies. Also note that almost everything above is categorized as a "cache". That means the actual source of record for that information is elsewhere - for 1:1 previews, that is the original RAW file plus the list of instructions applied to it, as described in the sidecar file. These databases exist precisely because they fixed performance problems in the non-DB-based early versions of Lightroom.

I'm assuming you are trying to say that the first database (the catalog) is the issue here, as it is what contains the stack of changes applied to each image pulled from disk (where each healing brush source and destination is, and the geometry of the spots). That is a fairly small amount of data to store (the biggest piece is the mask of the healing image, which should be an 8-bit-per-pixel RLE-compressed bitmap unless, again, Adobe's engineers are wholly incompetent), and it is stored alongside the rest of the Develop instruction stack for each image.

Let's look at what happens when you go from image to image in the Develop module:

  • The Smart Preview is pulled up, if in cache, which is by all accounts instantaneous
  • The RAW file is pulled into memory if available, replacing the Smart Preview. This may take some time as it is a disk access.
  • The list of changes is pulled out of the Catalog database
  • Each change is applied to the image in turn, using the rendering engine appropriate for the change (e.g. rotate and crop, spot removal, exposure, USM for sharpening, etc.).
  • The rendered image is displayed on screen

If the third step above were a problem, it would be a problem no matter what is in that list. For instance, rotate all your images slightly, adjust the exposure, add a contrast curve, etc. From a database perspective, 99% of the cost of step 3 is the lookup - finding the row in the database - while the rest of the cost is pulling the data out. I'm not sure about the specific RDBMS table structure in Lightroom's database, but assuming the developers know what they are doing, there are only a few possibilities that really make sense. I'd guess the list of changes to apply is likely kept in a child table related to the parent "Photo" table.
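Something like this, purely as a sketch of the idea (this is NOT Lightroom's actual catalog schema - I'm inventing table and column names to show why step 3 is one cheap indexed query no matter what kind of edits sit in the stack):

Code:
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE photo (id INTEGER PRIMARY KEY, path TEXT);
CREATE TABLE develop_step (
    photo_id INTEGER REFERENCES photo(id),
    seq      INTEGER,   -- position in the edit stack
    kind     TEXT,      -- 'crop', 'exposure', 'heal', ...
    params   BLOB       -- serialized settings (e.g. an RLE-compressed mask)
);
CREATE INDEX step_by_photo ON develop_step (photo_id, seq);
""")

# Step 3 from the list above: fetch the whole edit stack for one photo.
steps = db.execute(
    "SELECT kind, params FROM develop_step WHERE photo_id = ? ORDER BY seq",
    (42,)).fetchall()

The query costs the same whether those rows describe crops or twenty healing spots; the expensive part is executing them in step 4.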

But we don't see such a drag from just "any" set of changes to the images. We need (relatively) computationally intense changes to produce a measurable slowdown.

This is exactly what we would expect if #4 above is the bottleneck, because that is the first point in the whole process where an image with ten crop/rotates and ten exposure adjustments differs from one with twenty healing brush adjustments.

Quote
Now, I have read the posts carefully and tend to believe that most people here don't realize that there IS a scenario in which all the performance issues are not related - not caused by one and the same bottleneck.

Also, all the more knowledgeable IT guys who have posted seem to omit the possibility that everything is a cause:

- SQLite
- Lua (I forgot to rant about it in my previous posts)
- Cocoa and Silver

Each of the above has its own merits and drawbacks. If I recall correctly, the memory leak issues began after the transition to Lua, which was done to provide a better API for users to create custom plugins (only a presumption) and also, as stated in the presentation mentioned earlier, to make core API calls easier to reach. But as noted, it is an interpreted language: it needs a virtual machine to run, and it is more of a scripting language than a compiled one.

I've been involved with enough refactors and language rewrites to know that the language chosen can make a big difference in performance, but "interpreted" languages are not necessarily worse for a task. If you are dealing with evolving algorithms, in fact, moving to a higher-level, "less efficient" language will often allow for significant performance boosts at the algorithm level which would have been impractical with the "more efficient" language. We have a school scheduling engine which I've rewritten a few times now, and as one example moving from C++ to Java with the original algorithm intact cost us about 20% performance (this was back in Java 4 days, without the great JIT compilers we enjoy now), but allowed us to put a much more elegant algorithm in place which yielded gains at the 10,000% level and in some cases better (a build on track for 15 years to complete in the old codebase completing in 200 milliseconds).

Now, I can't speak to Adobe's use of Lua. But Lua can be compiled all the way to machine code, and it can be run in a JIT-supporting bytecode interpreter, although the quality of the Lua JITs might not be anywhere near those found in .Net or Java VMs. I also don't know how "deep" Adobe's use of Lua is - whether it is just at the UI and API levels or whether it actually runs any of the image manipulation algorithms. I would generally not expect it to be used for the latter.

As for any of these being the primary bottleneck in the "moving from image to image" performance problem, I'd say Lua is much more likely to be the problem than SQLite. Not sure what Cocoa has to do with it (this is a problem on Windows too, right?) or what you mean by "Silver" (Silverlight? Wouldn't Adobe more likely have Flash in there, if anything?)

Quote
Each one of us has experienced one issue or another. Adobe should aim at updating its core engines to use more than SSE2 (2001) - something more advanced like AVX2 (2013). That's still 4 years old, which means that if you edit photos on a computer older than that, you either need an upgrade or don't really need such professional software.

Agreed that the use of SSE2 only (and none of the expanded newer instruction sets) is, if true, definitely going to be a performance problem, and will show up exactly where we see it: applying adjustments is just plain slow. Adobe is likely using a modular architecture which would allow them to compile key image-processing modules with several levels (SSE2 at one end and the latest at the other end, perhaps nothing-newer-than-five-years in the middle) and pull in the dylib/DLL appropriate to the host machine's architecture at runtime. This is well-trodden territory, and would not require Adobe to "write off" even customers on the oldest hardware (although at a cost of potentially having to code these low-level optimizations three times instead of once). But, as you point out later, it seems like "SSE2 only" is another one of those myths, a tidy little story people tell themselves to explain why Lightroom's performance sucks in several situations.
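As a sketch of that dispatch pattern (toy Python, not Adobe's code - a real implementation would load a per-instruction-set dylib/DLL, and the CPUID probe here is a stub):

Code:
def unsharp_mask_sse2(img):
    return img          # baseline build: runs on anything from 2001 onward

def unsharp_mask_avx2(img):
    return img          # same kernel, compiled for wide SIMD

def cpu_has(feature):
    # Stand-in for a real CPUID query done once at startup.
    return feature in {"sse2"}          # pretend we're on an old CPU

# Pick the best available build of each hot kernel, newest first.
KERNELS = [("avx2", unsharp_mask_avx2), ("sse2", unsharp_mask_sse2)]
unsharp_mask = next(fn for feat, fn in KERNELS if cpu_has(feat))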

IMHO, the issue is more likely not with the database, or with the use of Lua, or even with the raw performance of the image manipulation steps (I suspect those are tightly tuned to do what they do without quality issues). More likely the root issue is that Lightroom constantly throws its work away instead of storing or caching it, while at the same time sitting nearly 100% idle much of the time, wasting potential CPU cycles.

There is absolutely NO reason why, if I apply a bunch of changes to one image in Develop (which it renders on screen), click on another image in the filmstrip, then click back, there should be ANY delay in showing me the fully rendered image with all its changes. It just did the full render. But it threw everything away when I went to look at another image in the filmstrip for reference.
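All it would take, conceptually, is a small cache of finished renders keyed by the image plus a hash of its edit stack - a toy sketch (my names, not Adobe's):

Code:
from collections import OrderedDict

class RenderCache:
    """Keep the last few fully rendered results; evict least-recently used."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = OrderedDict()

    def _key(self, image_id, edit_stack):
        # edit_stack: any sequence of hashable edit descriptions
        return (image_id, hash(tuple(edit_stack)))

    def get(self, image_id, edit_stack):
        key = self._key(image_id, edit_stack)
        if key in self.entries:
            self.entries.move_to_end(key)      # refresh LRU position
            return self.entries[key]           # hit: no re-render on click-back
        return None

    def put(self, image_id, edit_stack, bitmap):
        self.entries[self._key(image_id, edit_stack)] = bitmap
        while len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict the oldest entry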

There is also absolutely NO reason why, if I am flipping through images in the Library or Develop module and pause for a second on one image, I shouldn't be able to flip to the next two or three without any rendering delay. Instead, while I am looking at image 27/300, Lightroom is twiddling its thumbs and dreaming of unicorns. It should be anticipating my next move (the guy just went from image 1 to 2, then 2 to 3, and so on up to 26 to 27; he is likely going to want 28, 29, and 30 next).

The problem with the "the issue is the database" myth is that making either of the above real, actual improvements to Lightroom performance means putting more cached, processed data into a cache database (which, ultimately, is pretty much what all of the Lightroom databases are, apart from the catalog information itself). And to do that, Adobe needs to improve its underlying database management: there is no reason why Lightroom can't automatically detect changes on disk without us having to tell it to rebuild the cache via "Synchronize Folder", and there is no reason why we shouldn't have significantly better control over how long 1:1 previews and the like hang around in the caches, even to the point of being able to remove a particular image from them. The issue isn't "too much database"; it is "too much calculated on-the-fly / too little database".
