Rendering Order

In reading Geoff’s initial design document it occurred to me that my
rendering setup is a bit faulty. I realize now that to accurately render
things we need to let the GLWidget handle the actual order of rendering
so that transparency is handled correctly.

This is not a huge problem; we just need to put our heads together and
see how we should go about this. What I would suggest is having the
rendering engines return a list of display lists and associated Z
values. Then we can order by Z from back to front. What I don’t know is
how we can do this and still allow individual rendering options per
primitive.

Right now (in the current implementation) this is simple. Each engine
has a queue of primitives it needs to render and simply renders them. If
we don’t want a primitive rendered by a certain engine we just remove it
from the queue. This does not work, though, because rendering is not
ordered by Z value. Some transparency works and some doesn’t.

What I have in my head is that we create a helper class like so:

class DisplayList
{
    GLuint dlNumber;
    GLfloat zValue;
    Engine *engine;
    Primitive *primitive;
};

This way, when a primitive gets updated, we can find all DisplayLists
that need to be updated and then just call the engine to update the
display list. Queues would actually become queues that get cleaned out
as they are rendered.

Suggestions welcome. Technically these changes can be made without any
"external" functionality changes to the GLWidget, although they may
require changes to the engines. I’m also not sure how to compute the
zValues correctly, which I’m sure Benoit has some take on.

-Donald

On Apr 9, 2007, at 8:19 PM, Donald Ephraim Curtis wrote:

In reading Geoff’s initial design document it occurred to me that my
rendering setup is a bit faulty. I realize now that to accurately render
things we need to let the GLWidget handle the actual order of rendering
so that transparency is handled correctly.

I’m not sure what you’re talking about. OpenGL is a 3D environment, so
it handles the “z-values” based on where the camera is relative to the
objects.

I mean, take the current example of the wireframe and balls-and-sticks
view. The wireframe is correctly ignored because it falls inside the
balls-and-sticks. Hide the bsengine or turn it off on one atom, and you
see the hidden wireframe. Make the atoms transparent in bsengine and you
see the correctly composed view.

Now the labels may be something entirely different. If I could actually
see them, I could tell you for sure. :)

But I don’t know why you need some sort of “z-index” in a 3D OpenGL
view. The library should take care of turning that 3D scene into a 2D
one – in software or on the GPU.

If I’m missing something, maybe a particular example might help.

Thanks,
-Geoff

So here is how to reproduce:

Draw two atoms. Then select them by clicking (drag-to-select is goofy
because of multiple named things being rendered: ball and stick +
wireframe).

You have two atoms {1,2}. Move the camera so that atom 2 is in front of
atom 1 (on top) and you will see atom 1 through the transparent shell of
atom 2’s selection. Now look at it from the opposite direction, that is,
rotate so atom 1 is closer to you, and you will see that it does not
show atom 2.

In writing this email you made a good point. Really, OpenGL should be
handling this. It has something to do with depth masking (I looked it up
in my big thick OpenGL book), but something else appears wrong. Now I
can see through atom 1 to atom 2, but the shading is off. Even though I
can see the object, it’s treating the opaque surface as if it were
behind the object. It’s really weird, and possibly due to the blending
function. I’ll read more before jumping to the conclusion that it
warrants a redesign.

I apologize in advance for my limited OpenGL experience. That’s why I
thought I’d see if you guys had any insight.

-Donald

(Mon, Apr 09, 2007 at 09:18:58PM -0400) Geoffrey Hutchison geoff.hutchison@gmail.com:


The saga continues.

I asked some people in #opengl and they said the same thing. See also
lesson 8 on nehe.gamedev.net:


http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=08
Rui Martins adds: The correct way is to draw all the transparent (with
alpha < 1.0) polys after you have drawn the entire scene, and to draw
them in reverse depth order (farthest first). This is due to the fact
that blending two polygons (1 and 2) in different orders gives different
results; i.e., assuming poly 1 is nearest to the viewer, the correct way
would be to draw poly 2 first and then poly 1. If you look at it, like
in reality, all the light coming from behind these two polys (which are
transparent) has to pass poly 2 first and then poly 1 before it reaches
the eye of the viewer. You should SORT THE TRANSPARENT POLYGONS BY DEPTH
and draw them AFTER THE ENTIRE SCENE HAS BEEN DRAWN, with the DEPTH
BUFFER ENABLED, or you will get incorrect results. I know this sometimes
is a pain, but this is the correct way to do it.

The guys in #opengl confirmed this. Also, you have to sort along the
vector the camera is looking down.

[21:16] < dcurtis> in the tutorial for blending (#8) it says to render
from back to front. This makes sense to me, but my problem is that as i
change my projection / modelview the idea of “front to back” also
changes. How do i handle this?
[21:17] < Plagman> you sort using transformed geometry
[21:17] < bezobraz> render from back to front in eye space
[21:17] < bezobraz> don’t even need transformed geometry per se
[21:17] < oc2k1> project the object position into modelview space or use
3 or 4 presorted orders
[21:17] < bezobraz> just get a vector along which the camera is looking
at in world space
[21:17] < bezobraz> and then you can sort along that vector

I hope that benoit has some input on this.

At some point we are going to have to separate transparent from opaque
and do some sort of two-pass rendering if we want to say we are doing
correct rendering. Transparency is going to be key, I believe. However,
a way to do this without a TON of rendering cost could be to make two
calls to the engines: the first pass is opaque, then the second pass is
transparent. This would mean that transparent-on-transparent wouldn’t be
AS accurate, but it would mean that all transparent objects get rendered
on top of all opaque objects, which is already a problem today.

(Mon, Apr 09, 2007 at 09:03:56PM -0600) Donald Ephraim Curtis d@milkbox.net:



Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net’s Techsay panel and you’ll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV


Avogadro-devel mailing list
Avogadro-devel@lists.sourceforge.net
avogadro-devel List Signup and Options

On Tuesday 10 April 2007 06:04:42 Donald Ephraim Curtis wrote:

The saga continues.

New episode: it’s Tuesday morning, Benoit wakes up :)

At some point we are going to have to separate transparent from opaque
and do some sort of two-pass rendering if we want to say we are doing
correct rendering. Transparency is going to be key, I believe. However,
a way to do this without a TON of rendering cost could be to make two
calls to the engines: the first pass is opaque, then the second pass is
transparent. This would mean that transparent-on-transparent wouldn’t be
AS accurate, but it would mean that all transparent objects get rendered
on top of all opaque objects, which is already a problem today.

This is what I was going to suggest (if I understand correctly what you
say here).

First draw all opaque objects, then draw all translucent objects,
without bothering to sort them by depth.

This is what I did in Kalzium3D and it gave very satisfactory results.
It should be good enough here too. For instance, it will solve the
problems that you described in your example with 2 atoms.

Sorting the transparent objects by depth is a plus in some
circumstances, but it’s not a fully satisfactory solution either, as it
fails whenever two transparent objects intersect.

For instance, in Kalzium3D, the only (very minor) problem that I had was
that in van-der-Waals-radii mode (i.e. when the atoms are very big),
when I selected multiple atoms, the transparent selection spheres around
them did intersect, and that was not fully realistically rendered. That
problem wouldn’t be solved at all by depth-sorting, because precisely
when two transparent objects intersect, it’s pointless to try to
determine which of them is “closest” and which is “farthest”.

If you want to experiment with z-sorting, here’s how it goes. For each
object to render, you want to compute its distance from the camera.
That’s what we’ll call the “z-distance”. The best that you can do is to
compute the distance between the center of the object and the camera.
This is computed as follows:

object->zdistance =
    ( object->center() - glwidget->camera()->translationVector() ).norm();

Now you can sort the objects by zdistance. However, if two objects intersect
or are very near and non-convex, this approach will not help.

There might be an entirely different “ultimate solution”. Since Qt 4.0,
one can have a framebuffer with destination alpha, that is, a 32-bit
RGBA framebuffer with a real alpha channel. I’ve been told this opens a
whole lot of new possibilities for transparency. I don’t know, perhaps
it helps here. Traditionally, transparency was computed with only a
“source alpha” channel in the object to render but no “destination
alpha” channel in the framebuffer onto which to render it. This is where
the shortcomings we’re dealing with here come from. Maybe destination
alpha solves these issues?

Cheers,
Benoit

On Tue, 10 Apr 2007, Donald Ephraim Curtis wrote:

I’ll have to look into destination alpha. I think a two-pass would be
sufficient, but then again I don’t think it should be up to us; the user
should be able to say “true transparency”. Once we get into ribbons and
stuff like that I think we’re going to want that kind of thing. For the
sake of optimization I believe I’ll start toying with the idea of using
display lists and letting the GLWidget do some sorting on the z-depth.

If you want to paint translucent ribbons using z-sorting, you’re facing
some nontrivial issues. A ribbon as a whole is not at all a convex
object, and can extend over a large part of the scene (correct me if I’m
wrong), so it’s not a good candidate for z-sorting. So you’ll have to
slice it into small enough slices that each slice is very small compared
to the scene. This slicing can be done once and for all at load time, so
it won’t slow down rendering, except for the fact that rendering many
small display lists is never as fast as rendering one single big
display list.

But yes, it seems reasonably doable.

The reason I think this may be interesting is that it would allow us not
to have to call on the engines every time. Although display lists would
limit us to 2^32 = 4294967296 objects. An alternate approach I thought
of would be to allow the engines to return both display lists and also
vector arrays. This way we could subclass a single “GLObject” and, based
on how it was generated, have the GLWidget render it.

I don’t understand why the maximum number of display lists should play any
role: why recycle display-list-id’s by compiling display lists using
display-list-id’s that were previously allocated to other display lists?

I also don’t understand what you mean by a “vector array”. I guess you
don’t mean a vertex array in the OpenGL sense, since we’re using display
lists which override them.

Cheers,
Benoit

The reason I think this may be interesting is that it would allow us not
to have to call on the engines every time. Although display lists would
limit us to 2^32 = 4294967296 objects. An alternate approach I thought
of would be to allow the engines to return both display lists and also
vector arrays. This way we could subclass a single “GLObject” and, based
on how it was generated, have the GLWidget render it.

I don’t understand why the maximum number of display lists should play any
role: why recycle display-list-id’s by compiling display lists using
display-list-id’s that were previously allocated to other display lists?

Oh, I don’t want to recycle display lists (I don’t think). I’m just
saying that the engines should be able to return display lists to the
GLWidget for sorting. I mean, when an old display list becomes obsolete,
we just have to get the engine to generate us a new one.

The max number of display lists plays a role because what about when we
have so many things to display that we overflow? Is that even possible?
Will we ever be asking to render 4.2 billion objects? Just asking
questions. We didn’t think about this transparency problem and now we’re
having to work on the architecture more.

I also don’t understand what you mean by a “vector array”. I guess you
don’t mean a vertex array in the OpenGL sense, since we’re using display
lists which override them.

Yes, I mean vertex array.


I’ll have to look into destination alpha. I think a two-pass would be
sufficient, but then again I don’t think it should be up to us; the user
should be able to say “true transparency”. Once we get into ribbons and
stuff like that I think we’re going to want that kind of thing. For the
sake of optimization I believe I’ll start toying with the idea of using
display lists and letting the GLWidget do some sorting on the z-depth.

The reason I think this may be interesting is that it would allow us not
to have to call on the engines every time. Although display lists would
limit us to 2^32 = 4294967296 objects. An alternate approach I thought
of would be to allow the engines to return both display lists and also
vector arrays. This way we could subclass a single “GLObject” and, based
on how it was generated, have the GLWidget render it.

Any input Geoff?

-D

(Tue, Apr 10, 2007 at 12:05:32PM +0200) Benoît Jacob jacob@math.jussieu.fr:


On Apr 10, 2007, at 11:06 AM, Benoit Jacob wrote:

If you want to paint translucent ribbons using z-sorting, you’re facing
some nontrivial issues.

But yes, it seems reasonably doable.

I haven’t seen many calls for a translucent ribbon. Most people show an
opaque ribbon. I’m going to try to code up a real ribbon renderer soon –
otherwise we’re just going to keep wondering how it might work. It’s
better to give it a try and see what problems we face. :)

I think we may face some issues with molecular surfaces. On the other
hand, those are usually chopped into triangles already, so it’s not
as big an issue. (This does raise the question: should we use an
external mesh/surface library?)

On Apr 10, 2007, at 11:45 AM, Donald Ephraim Curtis wrote:

vector arrays. This way we could subclass a single “GLObject” and based
on how it was generated have the GLWidget render it.

In my original proposal, I said there would be a render queue for the
OpenGL view. Primitives would be subclassed for things like atoms,
bonds, residues, surfaces, etc. The user could specify a specific
pair of engines and primitives – for example, using balls-and-sticks
for most of the molecule, but highlighting one specific atom (dunno,
maybe it’s the key piece of the molecule) and making that huge.
Primitives could then be in the queue multiple times – for example
having both wireframe and ribbon views of a protein.

I think Donald is suggesting that engines could return something (a
GLObject) which could be directly handled by GLWidget rather than
continually going through the queue.

Personally, I’m not sure that’s necessary. How would returning a
GLObject be different from creating a display list and returning that?

Cheers,
-Geoff

(Tue, Apr 10, 2007 at 11:47:16AM -0400) Geoffrey Hutchison geoff.hutchison@gmail.com:

On Apr 10, 2007, at 11:06 AM, Benoit Jacob wrote:

If you want to paint translucent ribbons using z-sorting, you’re facing
some nontrivial issues.

But yes, it seems reasonably doable.

I haven’t seen many calls for a translucent ribbon. Most people show an
opaque ribbon. I’m going to try to code up a real ribbon renderer soon –
otherwise we’re just going to keep wondering how it might work. It’s
better to give it a try and see what problems we face. :)

I think we may face some issues with molecular surfaces. On the other
hand, those are usually chopped into triangles already, so it’s not
as big an issue. (This does raise the question: should we use an
external mesh/surface library?)

I guess my question would be whether we can accomplish this using our
plugin interface. Could it be that only the plugin would depend on the
library? Technically it should work as long as the library returns some
sort of GL display list / code.

On Apr 10, 2007, at 11:45 AM, Donald Ephraim Curtis wrote:

vector arrays. This way we could subclass a single “GLObject” and based
on how it was generated have the GLWidget render it.

In my original proposal, I said there would be a render queue for the
OpenGL view. Primitives would be subclassed for things like atoms,
bonds, residues, surfaces, etc. The user could specify a specific
pair of engines and primitives – for example, using balls-and-sticks
for most of the molecule, but highlighting one specific atom (dunno,
maybe it’s the key piece of the molecule) and making that huge.
Primitives could then be in the queue multiple times – for example
having both wireframe and ribbon views of a protein.

This is currently doable. But it is focused around the engine (which has
an internal list of the primitives that it should render). We should
remember the initial implementation I had, where each primitive called
the rendering engines itself. After discussion we agreed that this was a
lot of context switching (lots of functions getting called). With the
current implementation an engine renders everything at once, bam, done.
The method for creating a “larger” view of a certain set of atoms would
be to create a new instance of an engine, tweak the settings, then
restrict which atoms each engine renders. The mechanism is there, it’s
just not accessible from the UI.

I think Donald is suggesting that engines could return something (a
GLObject) which could be directly handled by GLWidget rather than
continually going through the queue.

Personally, I’m not sure that’s necessary. How would returning a
GLObject be different from creating a display list and returning that?

Well, the goal was that a GLObject is essentially a display list with
some extra information, such as what the display list is rendering, the
z-depth, and the engine that created it. For true transparency we can’t
let the engines simply return a single display list, because 1) for true
transparency that list will change every time the picture is rotated,
and 2) we can’t ask the engine developers to do the sorting, because
each engine can only take into account the objects it knows and their
z-depth, not the objects of other engines.

Even better, we could have the GLWidget handle two queues, first pass
and second pass, where only the second pass is sorted based on z-depth.
Let the engines generate them.

I guess my question is whether you want true transparency or just “good
enough” transparency? Both (optional)? I vote for both (let the user
decide by configuring the GLWidget).

On Apr 10, 2007, at 11:44 AM, Donald Ephraim Curtis wrote:

The max num of display lists plays a role because what about when we
have so many things to display that we overflow? Is that even possible?
Will we ever be asking to render 4.2 billion objects?

One caveat. On some OpenGL implementations, that maximum is across all
active OpenGL applications at once. I don’t know if Qt hides that from
us and really creates a separate context for each application.

In the end, it depends on how you count an object. Since a display list
represents some number of OpenGL calls, probably not. Most people seemed
happy with Open Babel 2.0, which only supported 2^16 atoms. Now a few
more have proteins with ~300k atoms. Even if you expand that a bit to ~2
million atoms, we’re still far from running out of display lists.

Cheers,
-Geoff

You are exactly right about counting objects.

Consider, though, that if each object is a primitive and more than one
engine is rendering all primitives, you easily double the number of
display lists. We could also consider “grouping”, where all opaque
objects for an engine are grouped into one display list.

I am curious to try out this z-depth-based rendering with a two-pass.
Unless there are any major objections I’m going to put in a few hours on
it today and see how it turns out.

(Tue, Apr 10, 2007 at 12:16:50PM -0400) Geoffrey Hutchison geoff.hutchison@gmail.com:


On Tue, 10 Apr 2007, Donald Ephraim Curtis wrote:

The reason I think this may be interesting is that it would allow us not
to have to call on the engines every time. Although display lists would
limit us to 2^32 = 4294967296 objects. An alternate approach I thought
of would be to allow the engines to return both display lists and also
vector arrays. This way we could subclass a single “GLObject” and, based
on how it was generated, have the GLWidget render it.

I don’t understand why the maximum number of display lists should play any
role: why recycle display-list-id’s by compiling display lists using
display-list-id’s that were previously allocated to other display lists?

Oh, I don’t want to recycle display lists (I don’t think). I’m just
saying that the engines should be able to return display lists to the
GLWidget for sorting. I mean, when an old display list becomes obsolete,
we just have to get the engine to generate us a new one.

OOPS. I meant “why not”, not “why”. So please read my paragraph as:

I don’t understand why the maximum number of display lists should play
any role: why NOT recycle display-list ids by compiling display lists
using display-list ids that were previously allocated to other display
lists?

IOW, I’m suggesting that you recycle old display list ids. glNewList()
on a previously allocated id replaces the old list. This is why the
maximum number of display lists isn’t an issue in practice.

I also don’t understand what you mean by a “vector array”. I guess you
don’t mean a vertex array in the OpenGL sense, since we’re using display
lists which override them.

Yes, I mean vertex array.

Once a vertex array is rendered into a display list, you can delete the
vertex array. No need to keep the geometric data stored twice in memory.
For instance, if you look in Sphere, we delete the arrays as soon as we
have compiled the display list. Afterwards we can entirely forget that
we used a vertex array – except perhaps that vertex arrays might still
have to be enabled in the client state.

The reason why I chose display lists over vertex arrays is that for
software rendering it’s the fastest solution (I asked on a Mesa mailing
list; I could search for the thread if you want). It also has the
convenience of being able to cache almost any GL command.

Cheers,
Benoit

On Tue, 10 Apr 2007, Geoffrey Hutchison wrote:

On Apr 10, 2007, at 11:06 AM, Benoit Jacob wrote:

If you want to paint translucent ribbons using z-sorting, you’re facing
some nontrivial issues.

But yes, it seems reasonably doable.

I haven’t seen many calls for a translucent ribbon. Most people show an
opaque ribbon. I’m going to try to code up a real ribbon renderer soon –
otherwise we’re just going to keep wondering how it might work. It’s
better to give it a try and see what problems we face. :)

I think we may face some issues with molecular surfaces. On the other
hand, those are usually chopped into triangles already, so it’s not
as big an issue.

Hmm, I didn’t mean the slices to be as small as individual triangles in
the mesh. That would mean a very high number of slices… hence it’d be
slow. To achieve better performance you’d have to group nearby triangles
together into one single slice. For instance, if your ribbon has the
shape of a big ring (hence makes one turn), and has 1000 triangles, you
might slice it into 20 slices of 50 triangles each…

(This does raise the question: should we use an
external mesh/surface library?)

Good question! That depends on what you have to do with these meshes:
– morphing and other deformations?
– tessellation from a continuous form (e.g. a shape given by equations,
Bezier patches, etc.)?
– mesh optimization (e.g. retessellation to improve rendering speed)?
Any of these would justify using an external library.

On Apr 10, 2007, at 11:45 AM, Donald Ephraim Curtis wrote:

vector arrays. This way we could subclass a single “GLObject” and
based
on how it was generated have the GLWidget render it.

I'm really not good at design; for instance, I'd never have been able to
design Avogadro in such a nice and efficient way as you have. So I'll let
you discuss here :)

In my original proposal, I said there would be a render queue for the
OpenGL view. Primitives would be subclassed for things like atoms,
bonds, residues, surfaces, etc. The user could specify a specific
pair of engines and primitives – for example, using balls-and-sticks
for most of the molecule, but highlighting one specific atom (dunno,
maybe it’s the key piece of the molecule) and making that huge.
Primitives could then be in the queue multiple times – for example
having both wireframe and ribbon views of a protein.

I think Donald is suggesting that engines could return something (a
GLObject) which could be directly handled by GLWidget rather than
continually going through the queue.

Personally, I’m not sure that’s necessary. How would returning a
GLObject be different from creating a display list and returning that?

Cheers,
-Geoff


Take Surveys. Earn Cash. Influence the Future of IT
Join SourceForge.net’s Techsay panel and you’ll get the chance to share your
opinions on IT & business topics through brief surveys-and earn cash
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV


Avogadro-devel mailing list
Avogadro-devel@lists.sourceforge.net
avogadro-devel List Signup and Options

On Tue, 10 Apr 2007, Geoffrey Hutchison wrote:

On Apr 10, 2007, at 11:44 AM, Donald Ephraim Curtis wrote:

The max num of display lists plays a role because what about when we
have so many things to display that we overflow? Is that even
possible?
Will we ever be asking to render 4.2 billion objects?

One caveat. On some OpenGL implementations, that maximum is across
all active OpenGL applications at once. I don’t know if Qt hides that
from us and really creates a separate context for

Again, this is not an issue, we can and should recycle display list ids.

Sorry for the confusion caused by the missing word "not" in my other email.

Cheers,
Benoit

On Tue, 10 Apr 2007, Donald Ephraim Curtis wrote:

You are exactly right about counting objects.

Consider, though, that if each object is a primitive and more than one
engine is rendering all primitives, you easily double the number of
display lists. We could also consider “grouping” where all opaque
objects for an engine are all grouped into one display list.

Whatever you prefer, but display lists are always an additional pain, and
IMHO they’re only justified when they result in better performance.
Grouping into a single DL all opaque objects of an engine might or might
not improve performance (reasons why it might include that it will result
in caching some matrix-ops and material-ops that are not done inside the
objects DLs themselves; reasons why it might not
include that the objects themselves are already display lists, so much of
the whole is already optimized).

It’d be nice if you could measure performance, to check that what you’re
doing here is really useful. You can borrow the FPS counter from Kalzium3D
if you like.

Cheers,
Benoit

Here is a question I am curious about: how much overhead is there if,
say, every time we render we also make a new display list? For instance,
say we make a new display list every time we move an atom (a new display
list for the whole scene). I have a feeling that the overhead is
limited and in fact most of the work done when creating the display list
does not get redone at rendering time. In fact the Red Book says
that DLs are so handy because most of the computation doesn't have to
be done. I imagine the rendering like this:

|precomputation ------ | push to buffer |

and creating the DL is actually just doing the precomputation once so
that you only need to push to the buffer the next time (of course with a
slight bit of overhead for having to record the calls). I will have
to run some sort of test / trial on this.

People say that the way to go for this kinda thing is to use buffer
objects but that would require us to rewrite the cylinder / circle class
because you can’t embed display lists in buffer objects. Plus buffer
objects don’t allow for color matrices.

I know this whole conversation is probably annoying for some people but
I think it's important. We could be a great editing program and a weak
"viewing" program, and while there are already a lot of great viewing
programs, there is no reason we can't at least try to compete.

-Donald

(Wed, Apr 11, 2007 at 10:00:43AM +0200) Benoit Jacob jacob@math.jussieu.fr:

[...]

OK, so I came up with an idea. I don't think it's that good, though.

For the sake of my proposal let us say we have a GLObject class.

class GLObject {
public:
    GLuint id;
    GLfloat z;
    Engine *engine;
    Primitive *primitive;
    bool transparent;
};

Now let us say that the GLEngine class has the following functions:

QList<GLObject *> generateObjects(const QList<Primitive *> &primitives);
void enqueueObject(GLObject *object);
void processQueue();

How does this work? Well, it's supposed to work by only updating the
display lists when there is an update. It also means that we can thread
the engines. Here is the data flow:

GLWidget::setMolecule()
{
    foreach active engine {
        engine.generateObjects(all primitives of molecule);
        addToRenderQueue();
    }
}

GLWidget::primitiveUpdated(Primitive *p)
{
    foreach GLObject *obj {
        if (obj->primitive == p) { obj->engine->enqueueObject(obj); }
    }
}

GLWidget::primitiveAdded(Primitive *p)
{
    QList<Primitive *> list;
    list.append(p);
    foreach active engine {
        engine.generateObjects(list);
    }
}
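The data flow above can be written out in compilable form. This is a sketch of the proposal, not existing Avogadro code: Qt's QList is replaced with std containers so the sketch is self-contained, and `processQueue()` just counts the display lists it would recompile.

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <vector>

typedef unsigned int GLuint;
typedef float GLfloat;

struct Primitive {};
class Engine;

// The proposed GLObject: one display list owned by one engine/primitive pair.
struct GLObject {
    GLuint id;
    GLfloat z;
    Engine *engine;
    Primitive *primitive;
    bool transparent;
};

class Engine {
public:
    // Create one GLObject per primitive and mark each dirty (needs a compile).
    std::vector<GLObject *> generateObjects(const std::vector<Primitive *> &prims) {
        std::vector<GLObject *> out;
        for (std::size_t i = 0; i < prims.size(); ++i) {
            GLObject *o = new GLObject;  // sketch: ownership/cleanup omitted
            o->id = 0; o->z = 0.0f; o->engine = this;
            o->primitive = prims[i]; o->transparent = false;
            out.push_back(o);
            m_dirty.push_back(o);
        }
        return out;
    }
    void enqueueObject(GLObject *o) { m_dirty.push_back(o); }
    // Real code would glNewList each dirty object; here we count and drain.
    int processQueue() {
        int n = (int)m_dirty.size();
        m_dirty.clear();
        return n;
    }
private:
    std::list<GLObject *> m_dirty;
};

class GLWidget {
public:
    explicit GLWidget(Engine *e) : m_engine(e) {}
    void setMolecule(const std::vector<Primitive *> &prims) {
        m_objects = m_engine->generateObjects(prims);
    }
    // Only the objects tied to the changed primitive get re-enqueued.
    void primitiveUpdated(Primitive *p) {
        for (std::size_t i = 0; i < m_objects.size(); ++i)
            if (m_objects[i]->primitive == p)
                m_objects[i]->engine->enqueueObject(m_objects[i]);
    }
private:
    Engine *m_engine;
    std::vector<GLObject *> m_objects;
};
```

The key property is visible in the queue counts: the initial setMolecule dirties every object once, while updating one primitive dirties only its own objects.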

Pros:
-updating of display lists can be done in a separate thread
-easy to add/remove rendering options for individual atoms, etc.
-transparency can be handled easily
-GLWidget is responsible for actually doing the rendering rather than the
engines

Cons:
-Complicates the engines

Middle ground:
-a view that is constantly updating will cost us, depending on the
implementation

(e.g., let us consider the ball-and-stick engine.

We have two options: render the entire molecule under one display
list, or render each primitive as its own display list.

In the first option we save memory: 1 molecule DL instead of 300k
individual ones.  But now if I'm moving an atom, each time the position
gets updated I have to recompile that whole display list.

In the second option we would see a speedup because only the DLs of
the objects being changed need to be updated, although I'm still
having to recompile the DL for each changed object.)
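A back-of-the-envelope model of the two options, assuming the dominant cost of recompiling a display list is regenerating the geometry it contains (the function and struct names are illustrative, not project code):

```cpp
#include <cassert>

// Compare the two strategies while one atom is dragged `moves` times in a
// molecule of `n` atoms.  `compiles` counts DL recompilations; `work` counts
// per-atom geometry regenerated, which is what each compile actually costs.
struct Stats { long compiles; long work; };

// Option 1: one scene-wide display list, rebuilt in full on every move.
Stats dragAtomWholeSceneDL(long n, long moves) {
    Stats s = { 0, 0 };
    for (long m = 0; m < moves; ++m) { s.compiles += 1; s.work += n; }
    return s;
}

// Option 2: one display list per atom; only the moved atom's is rebuilt.
Stats dragAtomPerAtomDL(long n, long moves) {
    (void)n;  // untouched atoms keep their existing lists
    Stats s = { 0, 0 };
    for (long m = 0; m < moves; ++m) { s.compiles += 1; s.work += 1; }
    return s;
}
```

Both options trigger the same number of compiles, but for a 300k-atom molecule the whole-scene list redoes 300k atoms' worth of geometry per drag step, against one atom's worth for the per-primitive lists; the memory cost runs the other way.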

I've got some other ideas brewing too. I'll see how that goes. If we
rely on display lists I think we're going to be in trouble because they
have to be compiled and then executed; it's almost like double the work.
I've been trying to get ahold of some OGL people here on campus but
nothing back from them so far. I feel like this is a problem I'm unable
to solve and it's pissing me off.

It would be interesting to know how this is handled in professional
gaming systems.

-Donald

(Tue, Apr 10, 2007 at 11:47:16AM -0400) Geoffrey Hutchison geoff.hutchison@gmail.com:

[...]

On Apr 11, 2007, at 1:22 PM, Donald Ephraim Curtis wrote:

I’ve been trying to get ahold of some OGL people here on campus but
nothing back from them so far. I feel like this is a problem i’m
unable
to solve and it’s pissing me off.

It would be interesting to know how this is handled in professional
gaming systems.

Well, I can definitely suggest going to mailing lists. Honestly, you
might try:
http://lists.apple.com/archives/Mac-opengl

Mention that you’re working on an app running in Linux and Mac and
trying to get the best performance.

Actually, Apple’s developer tools include some great performance
bits, including an OpenGL profiler, which tells you how long you’re
spending doing different GL calls.

A few other interesting articles:
http://developer.apple.com/graphicsimaging/opengl/optimizingdata.html
http://developer.apple.com/graphicsimaging/opengl/opengl_serious.html

Granted, Apple uses hardware-accelerated OpenGL, so as Benoît
mentioned, some tricks won’t help software-based GL performance. But
it may be the most organized group of OpenGL developers out there. (I
haven’t found a similar forum for Linux, and Windows generally uses
DirectX.)

Cheers,
-Geoff

After a lengthy discussion with our "realistic imaging" guy, he brought
up a few good points.

He said a major thing about rendering is that in most cases you need
to know what you're rendering. So for us, my idea was to have the
engines kind of "tell" the widget what they want to render. And as he
said, that would work (returning display lists for transparent objects).
But we need to look at this problem at a bigger scope (i.e., 300k-atom
proteins). Imagine selecting all atoms. That's 300k transparent spheres.
Well, first off, these spheres might not even be in the view. Second, if
we assume they are all in the view, they are going to be so small that
you won't be able to tell whether they're transparent or not. Third, if
we actually sorted all those objects, that's going to cost us
(300k log 300k).

The fact is that we do need some sort of engine priority queue based on
the camera position so we can get pretty good transparency, and we do
need two-pass rendering. All opaque objects should get rendered in
pass one; the second pass is dictated by the ordering queue.

  1. In most cases everything an engine renders will be opaque.
  2. We can get a “decent” transparency by being ignorant.
  3. We can simulate transparency by using stippled patterns.
  4. If an engine needs perfect transparency it can set its engine queue
    priority higher.
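The two-pass scheme above can be sketched as pure ordering logic, independent of any GL calls; `RenderItem` and `renderOrder` are hypothetical names for whatever the queue holds:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

typedef float GLfloat;

struct RenderItem {
    int id;
    GLfloat z;        // camera distance; larger = farther from the camera
    bool transparent;
};

// Farthest first, so nearer translucent objects blend over farther ones.
bool backToFront(const RenderItem &a, const RenderItem &b) {
    return a.z > b.z;
}

// Pass one: opaque items in any order (the depth buffer sorts them out).
// Pass two: transparent items sorted back to front.
std::vector<RenderItem> renderOrder(std::vector<RenderItem> items) {
    std::vector<RenderItem> out, transparent;
    for (std::size_t i = 0; i < items.size(); ++i)
        (items[i].transparent ? transparent : out).push_back(items[i]);
    std::sort(transparent.begin(), transparent.end(), backToFront);
    out.insert(out.end(), transparent.begin(), transparent.end());
    return out;
}
```

Note that only the transparent subset pays the sorting cost, which is exactly why point 1 (most things are opaque) keeps this cheap.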

My question about gaming systems was shot down; it is a completely
different problem, but interesting nonetheless.

But I am happy because 1) my design is not that far off, 2) we don't need
to do anything screwy and our engines stay clean, and 3) I have a solution.

On a side note: Benoit, one idea the guy had was to render maybe 15
different spheres (each with its own level of detail) and, depending on
how close the object is to the camera, choose the higher-detail one.
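That level-of-detail idea reduces to picking a tessellation index from the camera distance. A minimal sketch, with made-up thresholds (`nearDist`/`farDist`) and a simple linear ramp between them:

```cpp
#include <cassert>

// Choose one of `levels` pre-tessellated spheres (0 = coarsest,
// levels-1 = finest) from the camera distance.  The thresholds and the
// linear ramp are illustrative; a real picker might use screen-space size.
int sphereDetailLevel(float distance, int levels, float nearDist, float farDist) {
    if (distance <= nearDist) return levels - 1;  // closest: finest mesh
    if (distance >= farDist)  return 0;           // farthest: coarsest mesh
    float t = (farDist - distance) / (farDist - nearDist); // 1 near .. 0 far
    return (int)(t * (levels - 1));
}
```

With the 15 levels mentioned above, distant atoms in a 300k-atom protein would each render with a handful of triangles while the few near the camera stay smooth.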

Peace.

(Wed, Apr 11, 2007 at 01:50:38PM -0400) Geoffrey Hutchison geoff.hutchison@gmail.com:

On Apr 11, 2007, at 1:22 PM, Donald Ephraim Curtis wrote:

I’ve been trying to get ahold of some OGL people here on campus but
nothing back from them so far. I feel like this is a problem i’m
unable
to solve and it’s pissing me off.

It would be interesting to know how this is handled in professional
gaming systems.

Well, I can definitely suggest going to mailing lists. Honestly, you
might try:
http://lists.apple.com/archives/Mac-opengl

Mention that you’re working on an app running in Linux and Mac and
trying to get the best performance.

Actually, Apple’s developer tools include some great performance
bits, including an OpenGL profiler, which tells you how long you’re
spending doing different GL calls.

A few other interesting articles:
http://developer.apple.com/graphicsimaging/opengl/optimizingdata.html
http://developer.apple.com/graphicsimaging/opengl/opengl_serious.html

Granted, Apple uses hardware-accelerated OpenGL, so as Benoît
mentioned, some tricks won’t help software-based GL performance. But
it may be the most organized group of OpenGL developers out there. (I
haven’t found a similar forum for Linux, and Windows generally uses
DirectX.)

Cheers,
-Geoff