Any ideas on how to get a few specific statistics

Well, I have two (or more) viewports that render to textures, and I would like to get statistics about each of them separately; however, the default Statistics only shows combined ones. Also, I would like to get actual render times instead of FPS. Is there a built-in way to get that somewhere (since FPS can depend heavily on logic as well)? Or do I need to add code to the jME codebase?

re: timing, it depends on what you actually want to time. You could subtract your update times, after all.

That being said, I’ve added low-level nano render timing to my local version of JME a long time ago.

There is no easy way that I can think of to get viewport specific stats… though if you control when the viewport is rendered (I don’t know how render to texture works, actually) then you might be able to do some clever resetting of the stats to capture the info.

@EmpirePhoenix you could time your render calls in the different viewports with a custom SceneProcessor. Although it is a little hackish, it is the only way I can think of without modifying jME itself.
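As a rough illustration of the timing part (the `RenderTimer` class and its API are invented for this example, not jME API), a custom SceneProcessor attached to one viewport could delegate to a small helper like this, calling `begin()` in `preFrame()` and `end()` in `postFrame()` so that only that viewport's render time is captured:

```java
// Hypothetical helper a custom SceneProcessor could delegate to.
// begin() is meant to be called from preFrame(), end() from postFrame(),
// so the accumulated time covers only that one viewport's rendering.
public class RenderTimer {
    private long frameStart;      // nanoTime captured at begin()
    private long lastFrameNanos;  // duration of the last measured frame
    private long totalNanos;      // accumulated time across all frames
    private long frames;          // number of measured frames

    public void begin() {
        frameStart = System.nanoTime();
    }

    public void end() {
        lastFrameNanos = System.nanoTime() - frameStart;
        totalNanos += lastFrameNanos;
        frames++;
    }

    public long getLastFrameNanos() {
        return lastFrameNanos;
    }

    public double getAverageMillis() {
        return frames == 0 ? 0 : (totalNanos / 1e6) / frames;
    }
}
```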

I’ve been thinking about how to gather statistics from jME as well. It would be nice if jME published various statistics and timings to an event bus where users can provide their own listeners, but maybe I just think that is a good idea because I am playing with message queues/topics at work right now…

Hm, whether it’s an event bus is an implementation detail, but I think we should add some of those timings somewhere in the core. After all, the overhead of determining such timings is not measurable, while the increased ability to optimize will benefit the engine in the end.

So it looks like I will fork my jme3 a bit further then.

<cite>@kwando said:</cite> I've been thinking about how to gather statistics from jME as well. It would be nice if jME published various statistics and timings to an event bus where users can provide their own listeners, but maybe I just think that is a good idea because I play with message queues/topics at work right now…

This is my take on this statistics thingy:

To collect metrics, you just use a counter; here is a stellar implementation (http://metrics.codahale.com) that you just cannot improve upon.
To publish metrics, just use JMX; it is built exactly for this (managing and monitoring) and has been a standard JDK feature since forever (and it has events if you are so inclined) :slight_smile:
To analyse metrics, you pull them from the publisher (pushing will only end in tears) and persist them in whatever format: a CSV file and Excel, maybe. Personally I like to push them into OpenTSDB, but that’s just what I’m used to.
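To sketch the JMX route (all the names here are made up for the example): publishing a counter is just a matter of registering an MBean with the platform MBeanServer, after which jconsole or VisualVM can poll it:

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class FrameStats implements FrameStats.FrameStatsView {

    // Management interface exposed over JMX.
    public interface FrameStatsView {
        long getTotalFramesRendered();
    }

    private final AtomicLong frames = new AtomicLong();

    // The render loop would call this once per frame.
    public void frameRendered() {
        frames.incrementAndGet();
    }

    @Override
    public long getTotalFramesRendered() {
        return frames.get();
    }

    // Register this instance; any JMX client can then poll the attribute.
    public void publish() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new StandardMBean(this, FrameStatsView.class),
                             new ObjectName("com.mygame:type=FrameStats"));
    }
}
```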

Yeah, maybe the event bus thingy is not a good idea in this case. The point is that jME has to publish metrics in a standard way, be it JMX or something else, preferably a way developers themselves can publish metrics to. That metrics lib looks nice at first glance.

Yes, the question is what could be done easily and without disturbing jME’s performance to allow that? And also, if we do the work to get the publishing stuff implemented, will it make it into the core?

Many of the interesting and desirable metrics can only be collected deep inside of jME, which means the core team also needs to think this is a good idea…

I cannot imagine that using counters/gauges such as those in the metrics lib jmaasing linked to would impact real-world performance that badly, but the only way to know is to measure.

Maybe one could inject those gauges with Java annotations in some way, and then provide some switch in the build script to enable metrics collection or not? But I do not really know if annotations can be used that way, just guessing.

As long as jME collects metrics in thread-safe counters/gauges, it would be rather easy to publish them in different ways. Ideally jME should do the simplest thing possible; for example, FPS is a calculation that can be done in the “analysis” step. If you can poll an incrementing counter of total frames rendered, that is enough. The client knows when it collected the metric (i.e. by keeping a timestamp) and the rest is “just” statistics calculations. But as you say, it’s deep down in the core of the engine, so statistics probably can’t be layered on top.
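A minimal sketch of that client-side analysis idea (the class and method names are invented here): sample the total-frames counter twice, along with your own timestamps, and derive FPS from the deltas:

```java
// Sketch: derive FPS on the monitoring side from two samples of a
// monotonically increasing total-frames counter. The engine only has
// to expose the counter; the client keeps its own timestamps.
public class FpsSampler {
    private long lastFrames;
    private long lastTimeNanos;
    private boolean primed;

    /**
     * Feed one sample (counter value plus the time it was read).
     * Returns FPS over the interval since the previous sample,
     * or 0 until two samples have been seen.
     */
    public double sample(long totalFrames, long timeNanos) {
        double fps = 0;
        if (primed) {
            long frames = totalFrames - lastFrames;
            double seconds = (timeNanos - lastTimeNanos) / 1e9;
            fps = seconds > 0 ? frames / seconds : 0;
        }
        lastFrames = totalFrames;
        lastTimeNanos = timeNanos;
        primed = true;
        return fps;
    }
}
```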

Anyhow, If such counters are exposed as ordinary pojos in the application (like the current statistics) it would be pretty simple to make an SDK-plugin that exposed the metrics as JMX.

I also have it on my to do list to look into this after the release. I’m not touching any JME code before the release.

The tricky part will be keeping frame statistics together. Not everything is just a counter that can be looked at over several frames. In addition to all of the current per-frame stats counted, I’d personally also like to see the time spent in various stages of update and render. Not just the overall frame time but the broken-out parts. In other engines I’ve used, this is a good way to see if it’s a rendering bottleneck or a logic bottleneck or the infrastructure in between.

When the checking of statistics was lock-step with the accumulation of statistics, the “keeping together” part was less of an issue. When you will be checking the stats outside of the process (through JMX or otherwise), then it becomes a little trickier to do without generating unnecessary garbage or sync choke points.

Another really important consideration is to have this be zero-impact if not used.

My back of napkin designs always had some kind of timing listener that would be invoked at specific points in the pipeline. It could then do whatever it wanted… if null then the calls are skipped. (hotspot will make quick work of a null check and it’s cheaper than a method call.) At the time, I hadn’t considered per-viewport tracking but that may just be a case of making the calls slightly more granular and providing the viewport as reference.

Once such a listener is possible then any of the mentioned approaches can be done at whatever overhead the implementer felt was appropriate for their use-case. Some default ones could eventually make it back into core.
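The null-check pattern described above might look something like this (the listener interface and hook names are invented for illustration, not actual jME API):

```java
// Hypothetical timing hook: a single nullable listener invoked at fixed
// points in the pipeline. With no listener set, the cost per hook is one
// null check, which HotSpot optimizes away easily.
public class RenderPipeline {

    public interface TimingListener {
        // A String stands in for a ViewPort reference in this sketch.
        void renderStarted(String viewportName, long nanoTime);
        void renderEnded(String viewportName, long nanoTime);
    }

    private TimingListener timingListener; // null means zero impact

    public void setTimingListener(TimingListener listener) {
        this.timingListener = listener;
    }

    public void renderViewPort(String viewportName) {
        if (timingListener != null) {
            timingListener.renderStarted(viewportName, System.nanoTime());
        }
        // ... actual viewport rendering would happen here ...
        if (timingListener != null) {
            timingListener.renderEnded(viewportName, System.nanoTime());
        }
    }
}
```

Passing the viewport into each call is what makes the per-viewport tracking fall out of the same mechanism.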

It all depends on what you want of course (precision and so on), I would think a counter that does logic_time += logic_time_spent_this_frame would serve, or maybe I’m missing your point? If you sample that counter along with other time counters you can see the relation between times spent here and there. If you also have a counter for frames_rendered you can divide and get a feel for time/frame.
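Accumulating counters like that could be as simple as the following sketch (class and field names invented here); sampling them from outside the render thread gives the ratios and per-frame averages:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of thread-safe accumulating time counters, meant to be sampled
// from outside the game loop. Dividing the logic time by the frame count
// gives average logic time per frame; comparing counters gives ratios.
public class TimeCounters {
    private final AtomicLong logicNanos = new AtomicLong();
    private final AtomicLong renderNanos = new AtomicLong();
    private final AtomicLong frames = new AtomicLong();

    // Called from the game loop once per frame.
    public void frameDone(long logicSpentNanos, long renderSpentNanos) {
        logicNanos.addAndGet(logicSpentNanos);
        renderNanos.addAndGet(renderSpentNanos);
        frames.incrementAndGet();
    }

    public double averageLogicMillisPerFrame() {
        long f = frames.get();
        return f == 0 ? 0 : (logicNanos.get() / 1e6) / f;
    }

    public double logicToRenderRatio() {
        long r = renderNanos.get();
        return r == 0 ? 0 : (double) logicNanos.get() / r;
    }
}
```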

Using listeners is a clever idea, I like that :slight_smile: It’s simpler than injecting byte code based on annotations or some such scheme.

@jmaasing said: It all depends on what you want of course (precision and so on), I would think a counter that does logic_time += logic_time_spent_this_frame would serve, or maybe I'm missing your point? If you sample that counter along with other time counters you can see the relation between times spent here and there. If you also have a counter for frames_rendered you can divide and get a feel for time/frame.

For example, textures used would not work this way.
For example, objects in scene would not work this way.
etc…

Also, if you were curious about the ratio to updateLogic/updateGeometric/render then unframed accumulators would be nearly useless diagnostically. Sure you’d be able to tell average (hopefully if you caught it exactly between frames and waited some number of frames) but you wouldn’t be able to see how any specific frame breaks out. You might get the render time from one frame and the update times from the next, etc…

<cite>@pspeed said:</cite> For example, textures used would not work this way. For example, objects in scene would not work this way. etc...

Yes, I’d probably want to use a ‘gauge’ and not a ‘counter’ for those.

<cite>@pspeed said:</cite> Also, if you were curious about the ratio to updateLogic/updateGeometric/render then unframed accumulators would be nearly useless diagnostically. Sure you'd be able to tell average (hopefully if you caught it exactly between frames and waited some number of frames) but you wouldn't be able to see how any specific frame breaks out. You might get the render time from one frame and the update times from the next, etc..

For sure, you’d get aliasing errors when sampling in that manner and it will not be frame-accurate but for statistics it should be enough, like “if I go from 100 to 200 meshes in the scene the time spent in updateGeometric does not increase linearly while the others do” - wouldn’t call that ‘nearly useless’ information even if I had to wait 30 seconds to collect the data.

Anyhow, one experiment I did was to copy the engine statistics each frame into a list and serve that using JMX. As long as the engine collects the data you can always expose it in a suitable way (for what you want to achieve), especially with that clever listener design.