Counting frames per second

Hi,

first let me thank James, Ross and Shahzad for contributing the new Debug
engine. Ross: I don’t know if you’re subscribed so I CC you. I don’t have the
e-mail addresses of James and Shahzad.

Your approach consists of measuring how much time GLWidget::render() took. At
the end, you do:

glFlush();
d->framesPerSecond = 1000.0 / double(startTime.elapsed());

}

The problem is that glFlush() only sends the GL commands to the pipeline, and
typically returns immediately, without waiting for them to complete. So
instead of glFlush(), the command that you want to call here is glFinish().
See here:

http://developer.apple.com/qa/qa2004/qa1158.html

As this webpage explains, there are several reasons to avoid calling
glFlush() or glFinish() unless it's really necessary, so maybe the above code
should look like:

if( debug_engine_is_enabled ) {
  glFinish();
  d->framesPerSecond = 1000.0 / double(startTime.elapsed());
}

}

Or maybe it would be cleaner to let the debug engine itself take care of that,
but I guess that would require API changes, as currently, AFAIK, an engine
cannot say "I want to be executed last".

Another problem with your approach is precision: QTime only has millisecond
precision (and higher-precision timers are not easily found in portable
libraries). If you render at approximately 100 FPS, elapsed() will return
approximately 10, i.e. either 9, 10 or 11, so you'll get either "111.111 FPS",
"100 FPS" or "90.909 FPS", but no values in between.

But there is also a different approach, which you might want to consider, that
needs no glFinish() at all and gives much more precise results, even without a
high-precision timer. This approach is implemented in the current Kalzium3D
code (to be replaced shortly with a libavogadro-based implementation), from
which I take the following code:

At the end of the GLWidget::render() method, add this:

if( debug_engine_is_enabled ) {
  computeFramesPerSecond();
  update();
}

Note that we call update(), which means that the scene will keep being
re-rendered. This is the main drawback of this approach: it requires constant
redrawing of the scene.

And the computeFramesPerSecond() method is:

void GLWidget::computeFramesPerSecond()
{
  // t must be static so the same clock is reused across calls; a fresh,
  // unstarted QTime would make elapsed() meaningless on every call but
  // the first.
  static QTime t;

  static bool firstTime = true;
  static int old_time, new_time;
  static int frames;

  if( firstTime )
  {
    t.start();
    firstTime = false;
    old_time = t.elapsed();
    frames = 0;
  }

  new_time = t.elapsed();
  frames++;

  // Only recompute the FPS value once at least 200 ms have passed,
  // averaging over all the frames rendered in that window.
  if( new_time - old_time > 200 )
  {
    d->framesPerSecond = 1000.0 * frames /
      double( new_time - old_time );
    frames = 0;
    old_time = new_time;
  }
}

Maybe the computeFramesPerSecond() method and d->framesPerSecond should be
moved into the debug engine, uncluttering GLWidget.

Let me explain how this works. When computeFramesPerSecond() is called, it
checks how much time has elapsed since the last time d->framesPerSecond was
recomputed. Only if that time is greater than 200 milliseconds does
d->framesPerSecond get recomputed, by averaging over all the frames rendered
in that window; otherwise, the old value is kept. This guarantees that the
computation is precise enough: a 1 ms timer error over a 200 ms window amounts
to at most about 0.5%. Anyway, nobody can read a number that changes more than
5 times per second, so there is no point in updating d->framesPerSecond more
frequently than that.

Cheers,
Benoit