I’m totally new to jME. I went through the tutorials and now I’d like to create something (and to make mistakes), because that’s how I learn. But I could use some advice.
I wrote a little brick breaker game a few years ago in XNA (similar to this one, only it had better graphics: http://www.youtube.com/watch?v=EERraxMkGvY).
That game used sprites. Now I’d like to recreate it in jME with 3D objects (bricks, ball, bat, something for the border, and a static camera). Very simple things first, adding more complicated stuff later (arbitrarily shaped levels, hexagonal bricks, power-ups to collect, bat movement on collision affecting ball movement, a dynamic camera that follows the ball, parallax backgrounds, etc.).
My main problem is how to handle the ball bouncing. Should I use the full physics implementation? Or should I implement some custom method using the collision system? (The XNA game worked like a 2D (circle) character moving in a world, except that instead of sliding along the walls’ direction after a collision, it bounced back.)
Thank you for your answers,
ferceg
I would start with a custom method using the collision system. That is, the one built into the Node objects and belonging to the scene graph (look up Node#collideWith). It has some gotchas but it should work great for this. Do your own math for bouncing the ball.
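For example, here is a rough sketch of what that math could look like, run once per frame in the update loop. The names ballVelocity, ballRadius and levelNode are just illustrative; the idea is to cast a short ray along the ball’s velocity and reflect the velocity off the contact normal:

```java
// cast a ray from the ball's position in its direction of travel
CollisionResults results = new CollisionResults();
Ray ray = new Ray(ball.getWorldTranslation(), ballVelocity.normalize());
levelNode.collideWith(ray, results);

if (results.size() > 0) {
    CollisionResult closest = results.getClosestCollision();
    // only bounce once the wall/brick is within one ball radius
    if (closest.getDistance() <= ballRadius) {
        Vector3f n = closest.getContactNormal();
        // reflect the velocity: v' = v - 2 * (v . n) * n
        ballVelocity.subtractLocal(n.mult(2f * ballVelocity.dot(n)));
    }
}
```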
Then there is a collision system built into the physics framework (CollisionListeners, GhostControls, etc.); please do not confuse the two, they have nothing to do with each other. You can look into that as step 2. It isn’t that hard either, but I think it is a good second step only after you have understood the scene graph.
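To make the distinction concrete, a sketch of the physics-side system, assuming a BulletAppState has been attached (the class and node names are made up for illustration):

```java
// physics collisions are reported through the PhysicsSpace,
// not through the scene graph's collideWith()
public class BrickHitListener implements PhysicsCollisionListener {
    @Override
    public void collision(PhysicsCollisionEvent event) {
        String a = event.getNodeA().getName();
        String b = event.getNodeB().getName();
        if ("ball".equals(a) || "ball".equals(b)) {
            System.out.println("ball hit: " + a + " / " + b);
        }
    }
}

// registered once, e.g. in simpleInitApp():
// bulletAppState.getPhysicsSpace().addCollisionListener(new BrickHitListener());
```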
Do not use “proper physics” for this, it won’t end well. I like jmaasing’s suggestion: for detecting collisions, start with checking the bounding volumes, then if you feel adventurous look into collision shapes and collision listeners. You can do some cool stuff with physics as extras, like letting the blocks fall due to gravity if the player loses, but don’t use it for the core of your game.
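A minimal sketch of that first step, assuming ball and brick are Spatials in your scene (names are illustrative):

```java
// simplest possible hit test: compare the world bounding volumes
if (ball.getWorldBound().intersects(brick.getWorldBound())) {
    // handle the hit (change the brick's color, remove it, etc.)
}
```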
Physics would be pretty cool for the other stuff though, like power-ups and objects like that, especially if there are sloped areas and ledges where they can bounce or roll down. And of course explosions!
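As a sketch of that kind of extra, assuming a BulletAppState is attached and the bricks hang under a brickNode (both names are assumptions), dropping the remaining bricks when the player loses could look like:

```java
// give every remaining brick a dynamic rigid body so gravity takes over
for (Spatial brick : brickNode.getChildren()) {
    RigidBodyControl body = new RigidBodyControl(1f); // mass > 0 = dynamic
    brick.addControl(body); // a collision shape is generated from the mesh
    bulletAppState.getPhysicsSpace().add(body);
}
```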
Thank you guys for your replies, I’ve started experimenting.
I’ve run into a new problem.
I looked through my old C# code, extracted the 2D collision/bouncing class, rewrote it in Java, and it started to work after a few basic errors. (The original program was quite cool, I mean, for my skills: XML definitions for textures, phases, animated sequences, levels and brick positions; a simple “brick API” for spawning power-ups, removing bricks, changing a brick’s color on the first collision and removing it from the scene on the second, etc.; and simple C# scripting, where every level had an associated C# file that could be recompiled on the fly so the game didn’t need to be restarted.)
OK, that has nothing to do with my current problem.
This program is just an experiment for now, and the scene contains the following objects:
- a fixed background image (found code for it in the forums somewhere; it gives a warning for p.updateGeometricState() but it works; see the sketch after this list)
- a large Quad with an alpha-blended, cloud-like texture behind the objects
- 4 Boxes so the ball can’t fly off into outer space
- 1 Box for the bat (is that the correct term for it?)
- 1 ball (loaded from Blender/Ogre XML)
- a few bricks (not plain Boxes but bevelled boxes, loaded from Blender/Ogre XML)
- 2 lights (ambient and directional)
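Roughly, the forum snippet I use for the background does something like this; the texture path is just an example:

```java
// fullscreen background rendered in its own pre-viewport
Picture p = new Picture("background");
p.setImage(assetManager, "Textures/background.png", false);
p.setWidth(settings.getWidth());
p.setHeight(settings.getHeight());
p.setPosition(0, 0);

ViewPort pv = renderManager.createPreView("background view", cam);
pv.setClearFlags(true, true, true);
pv.attachScene(p);

// the main viewport must not clear the color buffer,
// otherwise it would paint over the background
viewPort.setClearFlags(false, true, true);

// this is the call that produces the warning: the Picture is not
// attached to the rootNode, so its state is updated manually here
p.updateGeometricState();
```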
The scene looks like this: [screenshot]
Everything runs fine at 59-60 FPS. Then I wanted to move my camera, so I disabled the flycam and modified the position of “cam”. The framerate dropped (and changed constantly as the camera moved). I fixed the camera somewhere closer and did a test: the FPS is about 30 now. Why is that?
OK, it’s not a powerful machine (a 2 GHz laptop with a Radeon X1400 under Linux Mint), but the samples run OK (the terrain sample has, I don’t know exactly, about 100,000 triangles), so I don’t think the program should slow down like this.
Thanks!
Thanks for your answer. To tell you the truth, things got better: I managed to hack the Catalyst drivers under Windows 7 (with some mobility modding app), and they give better performance than the Linux drivers (the same scene runs at ~100-200 FPS depending on how many objects I have). This is OK for me now for testing purposes.
Thanks again!
I would advise trying to reduce the object count. Let’s say you make a grid of 200 blocks in total; that would mean 200+ objects, and that’s if you use just one light. From experience (correct me if I’m wrong), it’s better to have one object with 500K vertices than to break it down into 200 objects. I’m not saying you should necessarily have one mesh, as I don’t know where you’re heading, but keep in mind that you don’t want too high an object count.
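In jME3 you don’t have to do the merging by hand; something like this (brickNode being wherever you attach the bricks) batches everything that shares a material into a handful of meshes:

```java
import jme3tools.optimize.GeometryBatchFactory;

// merges all child geometries that share a material into one mesh each
GeometryBatchFactory.optimize(brickNode);
```

The trade-off is that a batched brick can no longer be detached individually, so you would re-batch the node whenever a brick is destroyed.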
Good luck in this unforgettable journey
In general that is true, but if the 500K object is your whole scene, then it’s better to split it up to take advantage of frustum culling.
This article explains in simple terms how to optimize graphics performance (although it is a Unity doc ^_^):
Unity - Manual: Graphics performance fundamentals
In order to render any object on the screen, the CPU has some work to do - things like figuring out which lights affect that object, setting up the shader & shader parameters, sending drawing commands to the graphics driver, which then prepares the commands to be sent off to the graphics card. All this "per object" CPU cost is not very cheap, so if you have lots of visible objects, it can add up.
So for example, if you have a thousand triangles, it will be much, much cheaper if they are all in one mesh, instead of having a thousand individual meshes one triangle each. The cost of both scenarios on the GPU will be very similar, but the work done by the CPU to render a thousand objects (instead of one) will be significant.
There is also the jME3 wiki page on optimization:
https://wiki.jmonkeyengine.org/legacy/doku.php/jme3:intermediate:optimization
Frustum culling helps more when you have a lot of objects, since it keeps invisible objects from being sent to the GPU. Your example of one 500K-vertex object is still more efficient than splitting it up: presuming that at least one part is always on screen, you get no benefit from splitting but now incur the extra overhead of culling and draw dispatch whenever more than one part is on screen.
It’s a balancing act: batch as much as possible, but no more. Personally, I think anything less than 1000 objects is fine, but the more you can batch, the better your performance will be.
Note: all the comments above are for desktop; Android will want even fewer objects and even fewer vertices.
The X1400 is a rather old card, so it doesn’t run shaders very well. The jME3 lighting shader is quite complex, so when a lot of lit blocks (that use the lighting material) fill the screen, the framerate drops. You can try running in OpenGL1 mode, or use the unshaded material instead of the lighting material.
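Switching a brick to the unshaded material is a one-liner per geometry; a sketch (the texture path and variable name are examples):

```java
// flat material: no per-pixel lighting work on the GPU
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setTexture("ColorMap", assetManager.loadTexture("Textures/brick.png"));
brickGeometry.setMaterial(mat);
```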
Mythruna uses one mesh for the whole world (voxel terrain)?
I think if a world is big it should be split up, or else you’re just constantly wasting resources on parts the user never sees. The user might see two parts of the world at once, in which case it will render two objects (not really a big deal), but that won’t happen as often as the user being in the bulk of a single sector.
@wezrule said:
Mythruna uses one mesh for the whole world (voxel terrain)?
I think if a world is big it should be split up, or else you’re just constantly wasting resources on parts the user never sees. The user might see two parts of the world at once, in which case it will render two objects (not really a big deal), but that won’t happen as often as the user being in the bulk of a single sector.
The reason Mythruna splits the world up is that it makes paging and regeneration (when a block changes) easier.
But if you have the choice of one 500K-vertex object or five 100K-vertex objects, the first will tend to be faster because the CPU is doing less work and there are fewer draw calls. The graphics card is already trivially throwing away anything not on the screen, so you don’t really lose the culling. CPU-side culling is important for saving CPU and draw calls, but if you have fewer objects then you are already doing that.
There are other reasons to split up large objects, but frustum culling and performance is not one of them. Frustum culling is there to try to win back some of the speed you lost by splitting them up in the first place.
Note: there is a point where the GPU will be doing more work culling vertices than the CPU would spend culling separate objects plus the extra draw dispatch, but I think you might be surprised where that line is on most medium to modern cards. Scene graph traversals, geometry sorting, draw dispatch calls, etc. are relatively expensive operations compared to the GPU churning through triangles.