Avogadro version: 1.98.0/1.98.1
Operating system and version: macOS 12.7
After updating Avogadro to the latest version (1.98.1), the graphical performance of the program has gotten noticeably worse compared with the previous version (1.97), especially while rotating and moving molecules in the editor using the mouse.
Following your advice, I’ve disabled ambient occlusion and edge detection, but the performance doesn’t seem to improve. My system is a 2013 MacBook Pro with an integrated Intel Iris (HD Graphics 5000/Haswell) GPU (a bit outdated, I guess…). What seems odd to me is that the previous version of Avogadro (1.97) worked fine on the same system. Is there any way of lowering the graphics quality of the program in order to improve performance on older machines?
Thank you for your support. An option like that would truly be appreciated by the owners of older machines!
Greetings, and congratulations on the amazing work done in Avogadro 1.98!
I’ve found that the issue is not restricted to decade-old Macs, so I changed the title of the topic rather than making a new one, but feel free to split this off and change the name back.
I’ve always found the graphics in Avogadro 2 to be both pretty and performant, but that was because I used it mostly at home.
However, using it at uni on a high-resolution monitor (4K), I’ve noticed the same thing as @dav267 – that integrated graphics struggles to keep up with the rendering. Clicking and dragging for creation or rotation lags significantly.
Crucially:

- The performance is similar on both Linux (KDE Plasma 6) and Windows (11) machines with similar specs, and all the observations below apply equally to both.
- On Linux, AppImage or Flatpak makes no difference.
- I can also confirm that the 1.97 Linux AppImage works fine (though dragging bonds is stepwise rather than smooth) and that the issue first appears in 1.98 and remains in 1.99.
The PCs in question are not even old or underspecced; the Linux machine has an i7-9700 and the Windows machine an i5-12600, both with 32 GB of RAM and plenty of spare SSD space.
This is even with the smallest molecules like ethane.
Turning off Ambient Occlusion and Edge Detection, as suggested earlier, doesn’t help here either; there is very little improvement in performance. If this turns off the real-time shading, shouldn’t performance revert to 1.97 levels?
Performance is fine on a single 1080p screen, but with a 4K monitor it tanks. If both a 1080p and a 4K monitor are connected, performance is poor even when the Avogadro window is on the 1080p screen.
In no other “normal” application do the graphics seem to struggle. Obviously I’m not doing Blender work or anything, but anything on the web or that’s part of a normal synthetic chemistry workflow is fluid and fast.
While a 4K monitor is maybe not the norm, it is also no longer that uncommon. Integrated graphics, on the other hand, is certainly what the majority of people will be running on, and these are modern mid-range Intel processors we are talking about, so I think this is more concerning than we originally thought.
With the current code, turning off rendering options doesn’t actually change the rendering path. So it’s still doing the same calculations … just not altering the rendered image.
The trick I’ve been looking at is how to turn on/off that rendering path in a performant manner (i.e., avoiding if/else statements in rendering loops).
It’s one reason that @perminder-17 worked on the fog option … which adds depth effects without the same performance hit.
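One common pattern for this (just a sketch of the general idea, not the actual Avogadro code; the buildShaderSource helper and the ENABLE_AO / ENABLE_EDGES macros are made-up names) is to bake the toggles into the GLSL source as #define lines and rebuild the shader program only when a setting changes, so the inner rendering loop never branches on them:

```cpp
#include <string>
#include <vector>

// Prepend a "#define" line for each enabled feature so the fragment shader
// can use #ifdef blocks instead of per-fragment branching on uniforms.
std::string buildShaderSource(const std::string& body,
                              const std::vector<std::string>& enabledFeatures)
{
  std::string source = "#version 400 core\n";
  for (const auto& feature : enabledFeatures)
    source += "#define " + feature + " 1\n";
  return source + body;
}

// Hypothetical usage: rebuild and re-link the program only when a setting
// changes, e.g. buildShaderSource(fragSource, {"ENABLE_AO", "ENABLE_EDGES"}).
// In the GLSL body the expensive passes sit behind #ifdef ENABLE_AO ... #endif,
// so a build without the define pays no per-frame cost for that path.
```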
At the moment, my schedule is pretty busy - I’m working on updating a bunch of the rendering code to OpenGL 4.0 “core profile” (i.e., modernizing).
I’ll definitely make sure to get in some patches for real-time shading. (I’m actually working to turn it off anyway while I work through the modernizing effort.)
We’ll see. I’m not expecting much for general rendering. The big benefit is for tessellation shaders, which will allow more surface generation and smoothing on the GPU.
Presumably ways to turn off the real-time shading will help many people too.
I feel like I should point out that Intel UHD Graphics 770 (which comes with the i5-12600) is not a good iGPU. Despite the CPU being pretty good, the integrated graphics perform worse than an Nvidia FX 5800, which was released over 2 decades ago in 2003.
On a side note, I’m not sure it should be Geoff’s burden to optimize a program for an integrated graphics processor that performs like it was released before I was born; my real gripe is just that Intel doesn’t know how to make integrated graphics that actually, ya know, display graphics.
I’m not going to spend a lot of time on it, no. But it also shows up with better GPUs when you have more atoms. So if you want good frame rates when animating a few thousand atoms, you might decide to forgo the real-time shadows. (And then maybe turn them on when you save a movie and go to lunch.)
Oh, absolutely, Geoff shouldn’t be burdened with anything. But at the end of the day, Intel integrated graphics is what the majority of users of Avogadro will have (and most likely an i5 at that), and that can’t be changed.
I honestly was shocked when I checked those statistics; I could’ve sworn that the iGPU would’ve been comparable to, like, an Nvidia GTX 960 or something. I run AMD CPUs because of better thermals and energy efficiency, and their integrated graphics are even slightly better than a GTX 960. I guess it’s a matter of prioritizing which market aspect a company wants, although AMD seems to be doing better in both consumer CPUs and workstation/cluster CPUs…
Performance is now good on an Intel i7-9700 (which has UHD 630 integrated graphics), even on a 4K + 1080p dual-monitor setup, when Ambient Occlusion and Depth Blur are turned off. I’m not sure when exactly the proposed changes were implemented, but they have done the job.
I can turn Depth Blur on and still have sluggish but halfway acceptable performance (at least for small molecules). Ambient Occlusion is what causes the real big hit.
The default settings now seem to be that both are turned off, along with Edge Outline, so new users won’t have to turn things off first, which is nice. Thanks!
It seems it’s possible to get the graphics card info via QOpenGLContext, using glGetString to read the GL_RENDERER string. Maybe on first launch we could make a very crude attempt at detecting dedicated graphics by looking for GeForce, Radeon RX, Xe, or Arc in that string? It wouldn’t have to be exhaustive; a false negative is better than a false positive imo. Then, if a dedicated GPU is detected, Avogadro could turn all the graphics presets on, and otherwise fall back to the current default (just Fog).
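Something like the following might work as a first pass. This is only a hedged sketch of the idea; the function name is made up, it isn’t based on Avogadro’s actual startup code, and it assumes a QGuiApplication already exists so an offscreen OpenGL context can be created.

```cpp
#include <QOffscreenSurface>
#include <QOpenGLContext>
#include <QOpenGLFunctions>
#include <QString>
#include <QStringList>

// Returns true if the GL_RENDERER string looks like a dedicated (or at least
// capable) GPU. Any failure is treated conservatively as "integrated".
bool looksLikeDedicatedGpu()
{
  QOffscreenSurface surface;
  surface.create();

  QOpenGLContext context;
  if (!context.create() || !context.makeCurrent(&surface))
    return false;

  const GLubyte* raw = context.functions()->glGetString(GL_RENDERER);
  const QString renderer =
      raw ? QString::fromLatin1(reinterpret_cast<const char*>(raw)) : QString();
  context.doneCurrent();

  // Deliberately not exhaustive, as suggested above; a false negative just
  // leaves the user on the conservative defaults.
  const QStringList hints = { "GeForce", "Radeon RX", "Xe", "Arc" };
  for (const QString& hint : hints) {
    if (renderer.contains(hint, Qt::CaseInsensitive))
      return true;
  }
  return false;
}
```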
Though it does have to be said, while I appreciate that large 3D structures, protein active sites and things probably look good using the fog, I personally find the effect to be a little odd looking and counter-intuitive for small molecules, and I feel like it would be surprising behaviour to many people. So I’d be in favour of having all the fancy rendering settings turned off by default and let people discover and test them out according to their taste and application.
I am also in favor of this. My personal setup has a beefy processor and GPU, but I prefer very basic rendering, turning off all the fog, ambient occlusion, depth blur, edge outline, going from perspective to orthographic, etc.