Poor graphics performance on integrated graphics

I believe this to be a bug with Avogadro

Environment Information

Avogadro version: 1.98.0/1.98.1
Operating system and version: macOS 12.7

After updating Avogadro to the latest version (1.98.1), the program's graphical performance has gotten noticeably worse compared with the previous version (1.97), especially when rotating and moving molecules in the editor with the mouse.

Thanks for your support.

You can turn off the real-time shadows:

View ⇒ Render…

Turn off Ambient Occlusion and Edge Detection

But I’m also curious - what Mac do you have? (e.g., what’s the GPU?)

Following your advice, I’ve disabled Ambient Occlusion and Edge Detection, but the performance doesn’t seem to improve. My system is a 2013 MacBook Pro with an integrated Intel Iris (HD Graphics 5000/Haswell) GPU (a bit outdated, I guess…). What seems odd to me is that the previous version of Avogadro (1.97) worked fine on the same system. Is there any way to lower the graphics quality of the program in order to improve performance on older machines?

Thank you for your support.

Yes. We added real-time shading, which is obviously more demanding on the GPU.

I’ll see about an option to use the old shader.

I’m impressed at the longevity of that Mac though. (If nothing else, I’ve rarely had a drive last 10 years.)

Thank you for your support. An option like that would truly be appreciated by the owners of older machines!

Greetings, and congratulations on the amazing work done in Avogadro 1.98!

I’ve found that the issue is not restricted to decade-old Macs so I changed the title of the topic rather than making a new one, but feel free to split this off and change the name back.

I’ve always found the graphics in Avogadro 2 to be both pretty and performant, though admittedly I used it mostly at home.

However, using it at uni on a high-resolution monitor (4K), I’ve noticed the same thing as @dav267 – that integrated graphics struggles to keep up with the rendering. Clicking and dragging for creation or rotation lags significantly.

Crucially:

  • The performance is similar on both Linux (KDE Plasma 6) and Windows (11) machines with similar specs, and all the observations below apply equally to both.

  • On Linux, AppImage or Flatpak makes no difference.

  • I can also confirm that the 1.97 Linux AppImage works fine (though dragging bonds is stepwise rather than smooth) and that the issue first appears in 1.98, and remains in 1.99.

  • The PCs in question are not even old or underspecced; the Linux machine has an i7-9700 and the Windows machine an i5-12600, both with 32 GB of RAM and plenty of spare SSD space.

  • This happens even with the smallest molecules, like ethane.

  • Turning off Ambient Occlusion and Edge Detection as suggested above doesn’t help here either; the improvement is marginal. If this turns off the real-time shading, shouldn’t performance revert to 1.97 levels?

  • Performance is fine on a single 1080p screen; with a 4K monitor it tanks. If both a 1080p and a 4K monitor are connected, performance is poor even when the Avogadro window is on the 1080p screen.

  • In no other “normal” application do the graphics seem to struggle. Obviously I’m not doing Blender work or anything, but anything on the web or that’s part of a normal synthetic chemistry workflow is fluid and fast.

While a 4K monitor is maybe not the norm, it is also no longer that uncommon. Integrated graphics on the other hand is certainly what the majority of people will be running on, and these are modern mid-range Intel processors we are talking about, so I think this is more concerning than we originally thought.

With the current code, turning off rendering options doesn’t actually change the rendering path. So it’s still doing the same calculations … just not altering the rendered image.

The trick I’ve been looking at is how to turn on/off that rendering path in a performant manner (i.e., avoiding if/else statements in rendering loops).

It’s one reason that @perminder-17 worked on the fog option … which adds depth effects without the same performance hit.

At the moment, my schedule is pretty busy - I’m working on updating a bunch of the rendering code to OpenGL 4.0 “core profile” (i.e., modernizing).

I’ll definitely make sure to get in some patches for real-time shading. (I’m actually working to turn it off anyway while I work through the modernizing effort.)


No stress, just wanted to report my observations.

Sounds like you’ve got good ideas. Maybe moving to the more up-to-date OpenGL will bring some performance benefit as a bonus.

We’ll see. I’m not expecting much for general rendering. The big benefit is for tessellation shaders, which will allow more surface generation and smoothing on the GPU.

Presumably ways to turn off the real-time shading will help many people too.


I feel like I should point out that Intel UHD Graphics 770 (which comes with the i5-12600) is not a good iGPU. Despite the CPU being pretty good, the integrated graphics perform worse than an Nvidia GeForce FX 5800, which was released over two decades ago, in 2003.

On a side note, I am not sure it should be Geoff’s burden to optimize the program for an integrated graphics processor that performs like it was released before I was born. My real gripe is just that Intel doesn’t know how to make integrated graphics that actually, ya know, display graphics.

I’m not going to spend a lot of time on it, no. But it also shows up with better GPUs when you have more atoms. So if you want good frame rates when animating a few thousand atoms, you might decide to forgo the real-time shadows. (And then maybe turn them on when you save a movie and go to lunch.)

Oh, absolutely, Geoff shouldn’t be burdened with anything. But at the end of the day, Intel integrated graphics is what the majority of users of Avogadro will have (and most likely an i5 at that), and that can’t be changed.

I honestly was shocked when I checked those statistics; I could’ve sworn the iGPU would be comparable to, like, an Nvidia GTX 960 or something. I run AMD CPUs because of their better thermals and energy efficiency, and their integrated graphics are even slightly better than a GTX 960. I guess it’s a matter of which market segment a company prioritizes, although AMD seems to be doing better in both consumer CPUs and workstation/cluster CPUs…