How fast is jMonkey?

So I read on the main website that jMonkey is as fast as any C++ engine.

I’ve always been under the impression that most interpreted languages (especially Java) are slow.
How fast is it really?

It’s 6 fast.


Fast enough I’d say. Also easier to code, less prone to errors.

I was hoping for some sort of comparison to C++, but I’m still happy with that response.

Interpreted languages can be slow. Java is not an interpreted language and hasn’t been since 1.1, when the JIT was introduced. HotSpot made this even better, as it can analyze and re-optimize the compiled code while the app is running, based on actually profiling it.
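
As a rough illustration (a minimal sketch, not a rigorous benchmark; the class and method names are made up), you can watch HotSpot kick in by timing the same method repeatedly inside one run:

```java
public class JitWarmup {

    // Some arbitrary floating-point work so the loop isn't trivially optimized away.
    static double sumOfSquares(int n) {
        double sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (double) i * i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Early iterations run interpreted; once the method is hot, HotSpot
        // compiles it to native code and later iterations are typically much faster.
        for (int run = 0; run < 5; run++) {
            long start = System.nanoTime();
            double result = sumOfSquares(10_000_000);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("run " + run + ": " + elapsedMs + " ms (result=" + result + ")");
        }
    }
}
```

For anything serious you’d use JMH rather than hand-rolled timing, but even this crude loop usually shows the later runs getting noticeably faster once compilation kicks in.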

In any case, in 3D games, the GPU is doing most of the work.

But really, the whole premise of the question is almost 20 years old now.

Let’s just put it this way … it’s 2 fast 2 care.

Actually I was wrong, it’s eleven.

http://blogsdir.cms.rrcdn.com/8/files/2014/05/volume-knobs.jpg


It’s as fast as your ability to code properly. Badly written Java/GLSL = slow game. Well-written code = fast game.

See I’ve really just been teaching myself stuff. My knowledge is spotty and usually outdated. Thanks for clearing this up. I’ll probably use this engine quite a bit.


I can code kinda efficiently, but not like uber efficient shit.

Best reply… ever!


You usually get exactly two bottlenecks with jME (as with any other engine):
the GPU and the physics engine.
Both are written in C/C++ already.

Also keep in mind that nearly ALL AAA engines have some kind of scripting in Lua or similar in them, which is actually way slower than Java.

Then, to be honest, for real number crunching like video after-effects, Java is slower than C code by a good deal, but not for normal branching code, which rarely does small computations over a huge number of pixels. (And in a 3D engine those are better done via GPU pixel shaders anyway.)

I’m just going to point out that Java has a huge disadvantage with object handling on the stack due to its lack of value-type support, which means that anything dealing with heavy object generation and access will run better/faster in a language that has value types. This is most likely going to show up when dealing with CPU-side physics, or with large amounts of data (voxel systems). This is why you see these kinds of systems written in C/C++ and then accessed via JNI. However, the memory access speed of these systems doesn’t mean that your actual implementation is going to suffer. Overall the best idea is to get a working system together, then optimize where you need to.
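
A common workaround for this (just a sketch of the idea; ParticlePositions and Vec3 here are made-up names, not jME classes) is to flatten big batches of such data into primitive arrays instead of allocating one object per element:

```java
// Object-per-element layout: every Vec3 is a separate heap object with its own
// header, and an array of them only stores references, so iteration chases pointers.
class Vec3 {
    float x, y, z;
}

// Flattened layout: one contiguous float[] holding x,y,z triples. This is a
// typical way to store large position/voxel data so it stays cache-friendly
// and avoids per-element allocations.
class ParticlePositions {
    private final float[] data; // x0, y0, z0, x1, y1, z1, ...

    ParticlePositions(int count) {
        this.data = new float[count * 3];
    }

    void set(int i, float x, float y, float z) {
        int base = i * 3;
        data[base] = x;
        data[base + 1] = y;
        data[base + 2] = z;
    }

    float x(int i) { return data[i * 3]; }
    float y(int i) { return data[i * 3 + 1]; }
    float z(int i) { return data[i * 3 + 2]; }
}
```

The flattened layout keeps the data contiguous and avoids per-element allocations, at the cost of a clumsier API, which is roughly the trade-off that proper value types would make unnecessary.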

There are some who will say that one of the problems with Java is that you aren’t necessarily going to be aware of how the JIT is going to handle memory layout for an algorithm you write, that it’s difficult to inspect, and that it may change over multiple runs or with differing data sets. This can be problematic when you attempt to optimize, because you really want to know exactly where your program is doing things it doesn’t need to be doing and eliminate them. Others point out that this is also the case with standard compilers and with third-party libraries anyway.
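
That said, the JVM isn’t a complete black box: HotSpot can log its compilation decisions. A tiny example (the class name is made up; the flag itself is a standard HotSpot option):

```java
// Run with:  java -XX:+PrintCompilation HotLoop
// The JVM then prints a line for each method it JIT-compiles, so you can at
// least see when (and at which tier) your hot code gets compiled, even if the
// resulting machine code and memory layout remain harder to inspect.
public class HotLoop {
    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 100_000_000; i++) {
            sum += i % 7;
        }
        System.out.println(sum);
    }
}
```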

You’re not going to find a good comparison of speed with other languages, as it’s so variable depending on the skill of the developer and where they choose to optimize. What you will be able to compare is features: what the different engines support relative to each other.

Not really; when you create objects in “stack” situations, the GC overhead isn’t much bigger than with C++ and stack variables.

Having to qualify your statement with “isn’t much bigger” immediately proves my statement that it is more. It may not be much compared to other ops, but saying it isn’t much more means my statement is true; you simply disagree with how much more I may be making it out to be.

Also, I’m not talking about GC overhead only. I’m also talking about the fact that Java will create objects on the heap even where you might be able to use a value type that is kept only on the stack in other languages. There are some good articles out there on this stuff, and in the end it may not matter much for overall performance, but saying it’s not important to know how the underlying system handles these things is not helpful.

Actually, two things to keep in mind.
First, the JIT actually does optimizations with the stack (escape analysis): such objects simply do not exist on the heap anymore.
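
A minimal sketch of what that looks like (TempVec is a hypothetical class; whether the allocation is actually removed depends on the JVM version and on the method becoming hot):

```java
// A small temporary value object, e.g. a vector used only inside one method.
final class TempVec {
    final float x, y, z;
    TempVec(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    float lengthSquared() { return x * x + y * y + z * z; }
}

class EscapeAnalysisDemo {
    // The TempVec never leaves this method (it does not "escape"), so once the
    // method is JIT-compiled, HotSpot's escape analysis can scalar-replace it:
    // the three floats live in registers or on the stack and no heap object is
    // allocated at all.
    static float distanceSquared(float ax, float ay, float az,
                                 float bx, float by, float bz) {
        TempVec diff = new TempVec(ax - bx, ay - by, az - bz);
        return diff.lengthSquared();
    }
}
```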

Second, new in Java is way cheaper than malloc/new in C/C++, because thanks to different memory allocation handling each thread in Java can allocate several megabytes without needing any synchronization. A malloc often results in an OS call, which might even involve a context switch.
Using a custom allocator is a must-have for high-performance stuff in C++ games.
(Of course such things are usually only done for AAA engines with several million $ of cash, but if you feel like it you can do that as well ^^)
If you are, like most of us, more of a <30-person team, then such things won’t mean much, as they are cost prohibitive and the difference is not that large.

Did I mention that some new engines have Garbage collection in C++?

No, it’s not more; as soon as the method is JITed it’s virtually the same as “native” code when you stay on the stack. Before that happens it’s “not much more”, as it’s all in the young generation.