Do You Wish Lightroom Was Quicker? Adobe Does Too

LDS
EdelweissPirate said:
A quick Google search implies that Lightroom is only compiled against SSE2 and nothing later. On the other hand, I think the real question is: what instruction sets is ACR compiled against? I'd expect it's the same, but maybe others here have better information.

AFAIK LR and ACR share the same code for the features they have in common. On top of that, LR uses the Lua engine, and it would be interesting to know whether, and how well, that layer can take advantage of more powerful instructions when available.

It looks like some other Adobe products can use more advanced instruction sets, but those are the ones aimed at more professional users, for whom raising the hardware requirements is less of an issue.

IMHO LR has room for big improvements, but the price may be dropping support for some older processors, and it may require some deep code changes.
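That said, supporting newer instruction sets doesn't have to mean dropping older processors: the usual pattern is to detect CPU features at runtime and branch to the best available code path. A minimal sketch (GCC/Clang builtins, purely illustrative - not Adobe's code):

[code]
// Runtime CPU-feature detection (GCC/Clang x86 builtins). An app can keep
// an SSE2 baseline while opting into newer instruction sets when present.
#include <cstdio>

int main() {
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2"))     std::puts("dispatch: AVX2 path");
    else if (__builtin_cpu_supports("avx")) std::puts("dispatch: AVX path");
    else                                    std::puts("dispatch: SSE2 baseline");
    return 0;
}
[/code]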
 
LDS said:
What 20MB cache are you referring to? The CPU one?

Yes. But you're right: LR almost certainly decompresses the image to 12-24 bytes per pixel (4-8 bytes per color channel) in RAM before doing any work on it, so my reference to the RAW size was at best misleading. An uncompressed image gives you a larger working set, but more regular memory accesses, since it's a simple 2D array that memory prefetchers are well optimized to handle. Either way, it's slower than my RAW size reference would imply.
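To put rough numbers on it (purely back-of-the-envelope, using the figures above):

[code]
// Working-set estimate: a 24-megapixel image, 3 channels, 4-8 bytes each.
constexpr long long pixels = 24LL * 1000 * 1000;
constexpr long long lo = pixels * 3 * 4;  // 4 bytes/channel: ~288 MB
constexpr long long hi = pixels * 3 * 8;  // 8 bytes/channel: ~576 MB
// Either figure dwarfs a ~20 MB CPU cache, so the working set lives in
// RAM and throughput leans heavily on prefetch-friendly sequential access.
[/code]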

And that's a good comment about cache contention. I'm in the wrong field to usefully speculate about whether LR's image-processing algorithms look more like a streaming application, or more like something that would provoke fights over the cache. Surely Adobe has thoroughly profiled the code, but of course they're not going to share their results with us. I'll have to do some digging to see if anyone else has done that research.
 

LDS
In this thread:

https://feedback.photoshop.com/photoshop_family/topics/lightroom-clone-and-brush-tool-can-not-stress-the-cpu-is-slow-only-on-cpu-with-xeon-architectures-can-confirm?topic-reply-list%5Bsettings%5D%5Bfilter_by%5D=all&topic-reply-list%5Bsettings%5D%5Bpage%5D=2#topic-reply-list

Simon Chen (one of the key people in LR development) offers a tweak to change how LR uses CPUs, and explains the design trade-off Adobe made in developing the import stage.
 

Diko
EdelweissPirate said:
The association between Lightroom's database and its performance issues exists only in your imagination. It's a mistake to claim that SQLite is an a priori cause of Lightroom's performance issues.
Do me a favour: try putting 20 healing spots on an image, then do the same on the next 10 images out of 50. Let me know whether it gets sluggish when you move on to the next image. Also try moving back and forth between images.

And if you ask me what that has to do with SQLite, I think our debate would end here ;-)

Having read the posts carefully, I tend to believe that most people here don't realize there IS a scenario in which the performance issues are not all related, i.e. not caused by one and the same bottleneck.

Also, the more knowledgeable IT guys who have posted seem to overlook the possibility that everything together is the cause:

- SQLite
- Lua (I forgot to rant about it in my previous posts)
- Cocoa and Silver

Each of the above has its own merits and drawbacks. If I recall correctly, the memory-leak issues began after the transition to Lua, which was adopted to give users a better API for creating custom plugins (only a presumption) and also, as stated in the presentation mentioned earlier, to make core API calls easier to reach. But as noted, it is a metalanguage: it needs an interpreter (a virtual machine) to run, and it is more a scripting language than a true compiled language.

Each one of us has experienced one issue or another. Adobe should aim to update its core engines to support more than SSE2 (2001), moving to something more advanced like AVX2 (2013), which is four years old now. If you edit photos on a computer older than that, you either need an upgrade or don't really need such professional software.


All that being said, it is not true that ONLY SSE2 is being used. Check the same link with Simon Chen's comments:

Camera Raw SIMD optimization: SSE2, AVX, AVX2. But that is ONLY one of the tools in LR!

I find this little config.lua tweak a great start in troubleshooting the performance issues :)


Thank you Adobe!
 
TomDibble
Diko said:
EdelweissPirate said:
The association between Lightroom's database and its performance issues exists only in your imagination. It's a mistake to claim that SQLite is an a priori cause of Lightroom's performance issues.
Do me a favour: try putting 20 healing spots on an image, then do the same on the next 10 images out of 50. Let me know whether it gets sluggish when you move on to the next image. Also try moving back and forth between images.

And if you ask me what that has to do with SQLite, I think our debate would end here ;-)

Umm, okay. Would the debate end because you don't understand what would be in the database in such a case?

This is exactly the sort of thing that does not exercise the database (unless Adobe's engineers are grossly incompetent - but if you believe that, why are you even giving their products a second glance?).

There are several databases in Lightroom:
[list type=decimal]
[*]The catalog, which stores references to all the images on disk, as well as a cache of the modification instructions for those images.
[*]The "Previews" (1:1 and Standard) cache, which caches rendered 1:1 and "standard" size renders of the final images
[*]The "Smart Previews" cache, which caches the original raw data compressed so that basic changes can be made in the UI before the original RAW file is pulled up from disk and if the original RAW file is missing
[/list]

There might be more in some circumstances, but those are the biggies. Also note that almost everything above is categorized as "cache". That means the actual source of record for that information is elsewhere - for 1:1 previews, that is the original RAW file plus the list of instructions applied to it, as described in the sidecar file. These databases exist precisely because they fixed performance problems in the non-DB-based early versions of Lightroom.

I'm assuming you are trying to say that the first database (the catalog) is the issue here, as it is what contains the stack of changes per image pulled from disk (where those healing-brush sources and destinations are, and the geometry of the spots). That is a fairly small amount of data to store (the mask of the healing image, which should be an 8-bit-per-pixel RLE-compressed bitmap unless, again, Adobe's engineers are wholly incompetent, is the biggest bit), and it is stored alongside the rest of the Develop instruction stack for each image.

Let's look at what happens when you go from image to image in the Develop module:

[list type=decimal]
[*]The Smart Preview is pulled up, if in cache, which is by all accounts instantaneous
[*]The RAW file is pulled into memory if available, replacing the Smart Preview. This may take some time as it is a disk access.
[*]The list of changes is pulled out of the Catalog database
[*]Each change is applied to the image in turn, using the rendering engine appropriate for the change (e.g., rotate and crop, spot removal, exposure, USM for sharpening, etc.)
[*]The rendered image is displayed on screen
[/list]

If the third step above were a problem, it would be a problem no matter what is in that list. For instance, rotate all your images slightly, adjust the exposure, add a contrast curve, etc. From a database perspective, 99% of the cost of step 3 is the lookup - finding the row in the database - while the rest of the cost is pulling the data out. I'm not sure about the specific RDBMS table structure in Lightroom's database, but assuming the developers know what they are doing, there are only a few possibilities that really make sense. I'd guess that likely the list of changes to apply is kept in a child table related to the parent "Photo" table.
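To make the "cheap lookup" point concrete, here is a toy version of that parent/child layout using the SQLite C API. Table and column names are entirely invented - this is NOT Lightroom's actual schema:

[code]
// Toy parent/child layout of the kind described above (invented names).
// With an index on the child's foreign key, fetching one photo's edit
// stack is a single B-tree lookup plus a short scan - cheap regardless
// of what the edits themselves are.
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3 *db;
    sqlite3_open(":memory:", &db);
    sqlite3_exec(db,
        "CREATE TABLE photo (id INTEGER PRIMARY KEY, path TEXT);"
        "CREATE TABLE develop_step (id INTEGER PRIMARY KEY,"
        " photo_id INTEGER REFERENCES photo(id),"
        " seq INTEGER, params TEXT);"
        "CREATE INDEX step_by_photo ON develop_step(photo_id, seq);",
        nullptr, nullptr, nullptr);

    // Conceptually, the per-image query behind step 3:
    sqlite3_stmt *stmt;
    sqlite3_prepare_v2(db,
        "SELECT params FROM develop_step WHERE photo_id = ? ORDER BY seq;",
        -1, &stmt, nullptr);
    sqlite3_bind_int(stmt, 1, 42);
    while (sqlite3_step(stmt) == SQLITE_ROW)
        std::printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
    sqlite3_finalize(stmt);
    sqlite3_close(db);
}
[/code]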

But we don't see such a drag just by making "any" set of changes to the images. We need to make (relatively) computationally intense changes to produce a measurable slowdown.

This is exactly what we would expect if #4 above is the bottleneck, because that is the first point in the whole process where an image that has ten crop/rotates and ten exposure adjustments looks different from one which has twenty healing brush adjustments.

Having read the posts carefully, I tend to believe that most people here don't realize there IS a scenario in which the performance issues are not all related, i.e. not caused by one and the same bottleneck.

Also, the more knowledgeable IT guys who have posted seem to overlook the possibility that everything together is the cause:

- SQLite
- Lua (I forgot to rant about it in my previous posts)
- Cocoa and Silver

Each of the above has its own merits and drawbacks. If I recall correctly, the memory-leak issues began after the transition to Lua, which was adopted to give users a better API for creating custom plugins (only a presumption) and also, as stated in the presentation mentioned earlier, to make core API calls easier to reach. But as noted, it is a metalanguage: it needs an interpreter (a virtual machine) to run, and it is more a scripting language than a true compiled language.

I've been involved with enough refactors and language rewrites to know that the language chosen can make a big difference in performance, but "interpreted" languages are not necessarily worse for a task. If you are dealing with evolving algorithms, in fact, moving to a higher-level, "less efficient" language will often allow significant performance gains at the algorithm level which would have been impractical in the "more efficient" language. We have a school scheduling engine I've rewritten a few times now; as one example, moving from C++ to Java with the original algorithm intact cost us about 20% in performance (this was back in the Java 1.4 days, without the great JIT compilers we enjoy now), but allowed us to put in place a much more elegant algorithm that yielded gains at the 10,000% level and in some cases better (a build that was on track to take 15 years in the old codebase completed in 200 milliseconds).

Now, I can't speak to Adobe's use of Lua. But Lua can be compiled to bytecode, and there are JIT-supporting VMs (LuaJIT) that compile it all the way to machine code, although the quality of the Lua JITs may not be anywhere near those found in .NET or Java VMs. I also don't know how "deep" Adobe's use of Lua is - whether it is just at the UI and API levels, or whether it actually runs any of the image-manipulation algorithms. I would generally not expect it to be used for the latter.
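For reference, the embedding mechanism itself is the standard Lua C API (how deep Adobe's own embedding goes, I don't know - this only shows the mechanism). A minimal host looks like this:

[code]
// Minimal Lua embedding via the standard C API. LuaJIT is a drop-in
// replacement exposing this same API.
extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

int main() {
    lua_State *L = luaL_newstate();  // one interpreter (VM) instance
    luaL_openlibs(L);                // load the standard libraries
    // UI/plugin logic can live in scripts; the heavy pixel work would
    // stay in native code registered with the VM as C functions.
    luaL_dostring(L, "print('hello from the scripting layer')");
    lua_close(L);
    return 0;
}
[/code]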

As for any of these being the primary bottleneck in the "moving from image to image" performance problem, I'd say Lua is much more likely to be the problem than SQLite. Not sure what Cocoa has to do with it (this is a problem on Windows too, right?) or what you mean by "Silver" (Silverlight? Wouldn't Adobe more likely have Flash in there, if anything?).

Each one of us has experienced one issue or another. Adobe should aim to update its core engines to support more than SSE2 (2001), moving to something more advanced like AVX2 (2013), which is four years old now. If you edit photos on a computer older than that, you either need an upgrade or don't really need such professional software.

Agreed that the use of SSE2 only (and none of the newer, expanded instruction sets) is, if true, definitely going to be a performance problem, and will show up exactly where we see it: applying adjustments is just plain slow. Adobe is likely using a modular architecture, which would allow them to compile key image-processing modules at several levels (SSE2 at one end, the latest at the other, perhaps nothing-newer-than-five-years in the middle) and pull in the dylib/DLL appropriate to the host machine's architecture at runtime. This is well-trodden territory, and would not require Adobe to "write off" even customers on the oldest hardware (although at the cost of potentially having to code these low-level optimizations three times instead of once). But, as you point out, it seems like "SSE2 only" is another one of those myths - a tidy little story people tell themselves to explain why Lightroom's performance sucks in several situations.
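A sketch of that dispatch pattern - one entry point, several builds of the hot kernel, selected once at startup using the same feature-detection builtins shown earlier (illustrative only, not Adobe's architecture):

[code]
// Illustrative dispatch table: the same kernel built per ISA level,
// chosen once at startup. In a shipping app each variant would live in
// its own translation unit or dylib/DLL, compiled with different flags;
// stub bodies stand in for them here.
#include <cstddef>

void sharpen_sse2(float *, size_t) { /* baseline build (-msse2) */ }
void sharpen_avx (float *, size_t) { /* -mavx build */ }
void sharpen_avx2(float *, size_t) { /* -mavx2 build */ }

using Kernel = void (*)(float *, size_t);

Kernel pick_sharpen() {
    __builtin_cpu_init();  // GCC/Clang x86 builtins
    if (__builtin_cpu_supports("avx2")) return sharpen_avx2;
    if (__builtin_cpu_supports("avx"))  return sharpen_avx;
    return sharpen_sse2;   // oldest supported hardware still works
}

// Resolved once; afterwards every call pays only an indirect jump:
static const Kernel sharpen = pick_sharpen();
[/code]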

IMHO, more likely the issue is not with the database, or with the use of Lua, or even with the raw performance of the image manipulation steps (because I suspect they are tightly tuned to do what they do without having quality issues). More likely the root issue is that Lightroom constantly throws its work away instead of storing it or caching it. At the same time, it sits nearly 100% idle much of the time, wasting potential CPU cycles.

There is absolutely NO reason why, if I apply a bunch of changes to one image in Develop (which it renders on screen), click on another image in the filmstrip, then click back, there should be ANY delay in showing me the fully rendered image with all its changes. It just did the full render! But it threw everything away when I went to look at another image in the filmstrip for reference.

There is also absolutely NO reason why, if I am flipping through images in the Library or Develop module and pause for a second on one image, I shouldn't be able to flip to the next two or three without any rendering delay. Instead, while I am looking at image 27/300, Lightroom is twiddling its thumbs and dreaming of unicorns. It should be anticipating my next move: the guy just went from image 1 to 2, then 2 to 3, and so on up to 26 to 27; he is likely going to want 28, 29, and 30 next.
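The kind of look-ahead I mean is not exotic. A sketch, with invented names and nothing to do with LR's actual internals:

[code]
// Filmstrip look-ahead sketch: landing on image N warms the cache for
// N+1..N+k on background threads during otherwise-idle time.
#include <future>
#include <map>

struct Rendered { /* final pixels for display */ };

// Trivial stand-in for the expensive full develop pipeline:
Rendered render(int imageId) { (void)imageId; return {}; }

std::map<int, std::shared_future<Rendered>> cache;

void on_image_selected(int n, int lookahead = 3) {
    for (int id = n; id <= n + lookahead; ++id)
        if (!cache.count(id))
            cache.emplace(id, std::async(std::launch::async, render, id).share());
    cache.at(n).wait();  // blocks only if the user outruns the prefetcher
}
[/code]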

The problem with the "the issue is the database" myth is that making either of the above real, actual improvements to Lightroom's performance means putting more cached processed data into a cache database (which, ultimately, is pretty much what all of the Lightroom databases are, other than the specific catalog information). And to do that, Adobe needs to improve its underlying database-management procedures: there is no reason why Lightroom can't automatically detect changes on disk without us having to tell it to rebuild the cache via "Synchronize Folder", and there is no reason why we shouldn't have significantly better control over how long 1:1 previews and the like hang around in the caches, even to the point of being able to remove a particular image from them. The issue isn't "too much database"; it is "too much calculated on-the-fly / too little database".
 

LDS
"Silver" is the name given by Adobe to the LR user interface, which is not the OS native one, and aims to be the same on both Windows and macOS. There's a JIT for Lua, but does LR use it?
Pre-rendering other images may have drawbacks as well. It takes resources, which becomes not available for the foreground task, and it's wasted work if the user doesn't work sequentially - I often move much more 'randomly' after the initial culling.
Probably LR should offer the user options to select what behavior they prefer, depending on how they use LR and how powerful their computer is. It makes the application more complex, but a single compromise may not fit everybody.
 
TomDibble
Agreed; options are good, and not everyone's workflows are the same. LR should be able to learn from what you do, though, and work ahead accordingly. Simple predictive workflows are not that hard to implement, especially in Lightroom's modal flow (you have one set of heuristics for the Library module, another for Develop, etc.), and if they do it right, the only options needed would be how much background processing time to allow, how quickly tasks get canceled when something else needs the CPU, and so on.

Never knew that LR's UI was called "Silver". I'm not a fan of Adobe's need to make a nonstandard UI for all of their products, but in the list of gripes against LR I rate it fairly low. Still, the non-standard UI might well be a performance issue (much better to leave that kind of thing to the OS developers!).
 

LDS
TomDibble said:
Still, the non-standard UI might well be a performance issue (much better to leave that kind of thing to the OS developers!)

Again, Adobe needs to support both Windows and macOS, and ensure LR works exactly the same on both, including plug-ins. Usually, it's much easier to make an application faster when there are no cross-platform requirements and layers.

Also, think what would happen to all those people writing and publishing books and offering courses if LR had two different UIs, one for macOS and one for Windows... :D

Moreover, all the cross-platform UIs I've seen so far are no better than Silver. I'll take a look at how Affinity handled this issue.
 

Diko
TomDibble said:
Never knew that LR's UI was called "Silver"...
Guess the name of the custom-tailored color space ;-)

Melissa RGB: it uses the ProPhoto RGB chromaticities, but with a gamma of 1.0 instead of 1.8. Meanwhile, the Lightroom viewing space uses the same ProPhoto RGB chromaticities but with an sRGB tone response curve. Melissa Gaul, who was the QE manager for Lightroom, suggested this space should be called Melissa RGB, since all RGB spaces to date had been named after men!

I find it to be a smart move.
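For the curious: the difference between "gamma 1.0" and "an sRGB tone response curve" is just the encoding function applied on top of the same primaries. The standard sRGB transfer function, for reference:

[code]
// Standard sRGB transfer function (IEC 61966-2-1). "Melissa RGB" keeps
// the ProPhoto primaries with gamma 1.0 (i.e., encode(x) = x); the LR
// viewing space applies this curve instead.
#include <cmath>

double srgb_encode(double linear) {  // linear light in [0, 1]
    return linear <= 0.0031308
        ? 12.92 * linear
        : 1.055 * std::pow(linear, 1.0 / 2.4) - 0.055;
}
[/code]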
 

Diko
TomDibble said:
Umm, okay. Would the debate end because you don't understand what would be in the database in such a case?
....
Let's look at what happens when you go from image to image in the Develop module:
..
3. The list of changes is pulled out of the Catalog database
...

If the third step above were a problem, it would be a problem no matter what is in that list. For instance, rotate all your images slightly, adjust the exposure, add a contrast curve, etc. From a database perspective, 99% of the cost of step 3 is the lookup - finding the row in the database - while the rest of the cost is pulling the data out. I'm not sure about the specific RDBMS table structure in Lightroom's database, but assuming the developers know what they are doing, there are only a few possibilities that really make sense. I'd guess that likely the list of changes to apply is kept in a child table related to the parent "Photo" table.

Really?!? After all that has been said, you still leave out the addressing of local changes?!? Crop, curves and all the rest are significantly smaller records compared to local changes. In my case I have seen this behaviour at 10, 30 and 50 megapixels; the sluggishness correlates with pixel count. I don't even need to mention 100 MP, because at that size I don't even try to use local adjustments, for exactly that reason :)

Even if the addressing is relative rather than absolute, the quantity of local-adjustment records still depends on the number of corrections, whether spot heals or brush strokes. And you really haven't tried what I asked you to, have you? ;-)

As for the rest, we seem to hold the same or similar opinions. My take, as I mentioned, is to the biggest extent presumption based on observation and knowledge.

On Lua, however, I have no idea. In my experience interpreters tend to slow things down... but I am not as proficient in the field as you seem to be.

However my main point is still valid:

1/ Different people complain about different issues. E.g., I have never experienced the GPU-related slowdowns so many others suffer from. Ergo, it can't be ONLY ONE bottleneck or bug. Additionally, people differ not just in hardware but also in software (the OS) and, most importantly, in workflows. I don't like stains, pimples, overly dark circles under the eyes, etc., hence my heavy use of local adjustments and healing spots, whereas I rarely stray from the automatic exposure (applied on import), apart from tiny crop corrections now and then.

IMHO all that makes it hard to solve all the mysteries surrounding the epic slowness of LR. It is simply different issues for different people.

TomDibble said:
The problem with the "the issue is the database" myth is that to do either of the above real, actual improvements to Lightroom performance, means putting more cached processed data into a cache database (which, ultimately, is pretty much what all of the Lightroom databases are, other than the specific catalog information).

Some versions ago they had a bug with a cache... overflow? or leak... (I don't know how to put it in English). One of LR's caches was growing way too big, cluttering users' hard-drive space... and then they simply "fixed" it. I guess they tried to overcome the slowness by utilizing caches, but somehow it didn't work right for everyone. And... yeah... someone noticed the non-stop disk writes and worried they would wear out their SSD.

In the end, I also tend to agree about behavioural-pattern recognition for predictive resource utilization... but I am also wary of the kind Microsoft tried to pull off for so many years. It may eventually succeed, but at the cost of our own suffering for at least two major LR releases.
 

LDS
Diko said:
Really?!? After all that has been said, you still leave out the addressing of local changes?!? Crop, curves and all the rest are significantly smaller records compared to local changes. In my case I have seen this behaviour at 10, 30 and 50 megapixels; the sluggishness correlates with pixel count. I don't even need to mention 100 MP, because at that size I don't even try to use local adjustments, for exactly that reason :)

Even if the addressing is relative rather than absolute, the quantity of local-adjustment records still depends on the number of corrections, whether spot heals or brush strokes. And you really haven't tried what I asked you to, have you? ;-)

Look at the sidecar XMP files: they store the same information the database does. It's no surprise that more megapixels mean slower performance - bigger images are more computationally intensive. The same data is stored in the database or the XMP file either way, but more processing is needed to apply the same edits to a bigger image.

For example, here is the data for a heal brush stroke:

[code]
<crs:RetouchAreas>
 <rdf:Seq>
  <rdf:li>
   <rdf:Description
    crs:SpotType="heal"
    crs:SourceState="sourceAutoComputed"
    crs:Method="gaussian"
    crs:SourceX="0.646875"
    crs:OffsetY="0.421875"
    crs:Opacity="1.000000"
    crs:Feather="0.000000"
    crs:Seed="+2">
    <crs:Masks>
     <rdf:Seq>
      <rdf:li
       crs:What="Mask/Ellipse"
       crs:MaskValue="1.000000"
       crs:X="0.642014"
       crs:Y="0.405729"
       crs:SizeX="0.005382"
       crs:SizeY="0.005382"
       crs:Alpha="0.000000"
       crs:CenterValue="1.000000"
       crs:perimeterValue="0.000000"/>
     </rdf:Seq>
    </crs:Masks>
   </rdf:Description>
  </rdf:li>
 </rdf:Seq>
</crs:RetouchAreas>
[/code]

The data doesn't depend on the image pixel count, nor does it store pixel data - it just stores the parameters required to recompute the stroke. This is the same data used by ACR; the engine is the same. It's just that most people using ACR will probably make local changes in Photoshop, which works differently.
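To see why, note that every coordinate above (SourceX, OffsetY, X, Y, SizeX, SizeY) is a fraction of the image dimensions. Mapping a spot back to pixels for a given render is trivial (illustrative sketch):

[code]
// The crs:* coordinates above are fractions of the image dimensions,
// so the same record replays on any size render of the same photo.
struct Spot { double x, y; };  // e.g. crs:X="0.642014", crs:Y="0.405729"

Spot to_pixels(Spot normalized, int width, int height) {
    return { normalized.x * width, normalized.y * height };
}
// 6000x4000 render -> (3852.1, 1622.9); 1500x1000 preview -> (963.0, 405.7)
[/code]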

LR doesn't apply the changes in the order you made them - the processing pipeline knows the best order to apply them in, so one change may require recomputing not only itself but also the subsequent changes that depend on it.

What LR really needs is to improve the performance of its processing pipeline - reading the "recipe" from an XMP file or the SQLite database probably is not the issue.
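To illustrate the pipeline-ordering point above: edits are stored in the order the user made them but replayed in a fixed stage order, so an early-stage change invalidates everything downstream. A sketch (stage names invented for illustration):

[code]
// Edits stored in the order the user made them, replayed in a fixed
// pipeline order (stage names invented for illustration).
#include <algorithm>
#include <vector>

enum class Stage { Demosaic, Exposure, Color, LocalCorrections, Sharpen };

struct Edit { Stage stage; int userOrder; /* parameters... */ };

void replay(std::vector<Edit> edits /* in the order the user made them */) {
    // Stable sort: pipeline stage wins; user order breaks ties in a stage.
    std::stable_sort(edits.begin(), edits.end(),
        [](const Edit &a, const Edit &b) { return a.stage < b.stage; });
    // Editing an early stage (say Exposure) forces recomputation of every
    // later stage - why one "small" change can cost a near-full re-render.
    for (const Edit &e : edits) { (void)e; /* apply e to the working image */ }
}
[/code]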
 