Bypassing refreshFlags? or direct access to glRender methods?

I hope this is the right place to post this.

I would like to implement mesh sharing between Geometry instances.

Meaning, update the single Mesh’s buffers according to stored vertex information for the instance that is being rendered, leveraging the worldTransformMatrix from the current parent Node.

In short: right before the glRenderX call, update the single Mesh’s buffers to render that instance, and continue doing this for every Node that is using the shared Mesh.

The render method in Control seems to be called on all Controls prior to actually rendering, so this wasn’t helpful. I guess the question is, is the above impossible in jMonkeyEngine? Or is there a provided method for injecting into the render process as each mesh is being rendered?

One more semi-related question on updateGeometricState. I completely understand the need for locking the updated local and world transform matrices for Spatials before rendering; what I am not following is why the underlying Geometry’s setMesh method (if called from a Control’s render method) would cause this to throw an error. The scene graph’s transform matrices are completely decoupled from the underlying mesh’s vertices. Shouldn’t the engine be unconcerned if a mesh is added/removed after updateGeometricState but before glRenderX?

Geometry.clone results in a geometry with the same mesh (same instance), except if the mesh is animated (has a bind pose buffer).
Also, if you call mesh.clone you’ll have a mesh with the same underlying buffers.
I guess what you want to achieve is already in place, but I’m not sure, since you’re asking about the means while saying not much about the goal.
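For example, a minimal sketch of that sharing (the Quad here is just a stand-in mesh):

```java
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Quad;

// For a non-animated mesh, clone() wraps the same Mesh instance
// in a new Geometry, so buffer edits are visible through both.
Geometry original = new Geometry("quad", new Quad(1, 1));
Geometry copy = original.clone();
System.out.println(original.getMesh() == copy.getMesh()); // expected: true
```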

@TheWaffler said: One more semi-related question on updateGeometricState. I completely understand the need for locking the updated local and world transform matrices for Spatials before rendering; what I am not following is why the underlying Geometry's setMesh method (if called from a Control's render method) would cause this to throw an error. The scene graph's transform matrices are completely decoupled from the underlying mesh's vertices. Shouldn't the engine be unconcerned if a mesh is added/removed after updateGeometricState but before glRenderX?
The bounds of the mesh may have changed, so the object could end up in a situation where it was culled during the update pass but should not be with the new mesh. That's why changing a mesh counts as "modifying the scene graph".
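A minimal sketch of the safe pattern, assuming the swap is moved into a Control's update pass (the `buildMeshFor` helper is hypothetical):

```java
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.control.AbstractControl;

public class MeshSwapControl extends AbstractControl {

    @Override
    protected void controlUpdate(float tpf) {
        Geometry geom = (Geometry) spatial;
        Mesh next = buildMeshFor(geom);  // hypothetical: produce the new mesh
        geom.setMesh(next);              // legal here: update pass, not render pass
        geom.updateModelBound();         // keep culling consistent with the new mesh
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) {
        // Never modify the scene graph here; transforms and bounds are locked.
    }

    private Mesh buildMeshFor(Geometry geom) {
        return geom.getMesh(); // placeholder
    }
}
```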
<cite>@nehon said:</cite> Geometry.clone results in a geometry with the same mesh (same instance), except if the mesh is animated (has a bind pose buffer). Also, if you call mesh.clone you'll have a mesh with the same underlying buffers. I guess what you want to achieve is already in place, but I'm not sure, since you're asking about the means while saying not much about the goal.

Thank you for the quick response. It sounds like what I am trying to achieve is not possible though. I want to manipulate the buffers for the single mesh for each node that is being rendered. It would look like this:

  1. Update the vertices to re-size the mesh according to the current node being rendered
  2. Render the mesh with the transform matrix of the current node
  3. Update the vertices to re-size the mesh according to the next node being rendered
  4. Render the mesh again using the transforms of the next node in the render list (also containing an instance of this mesh)
  5. Rinse and repeat

Am I correct in assuming this is not possible in JME?

EDIT: To answer your question concerning the reason: it seems more efficient to create a single quad, for example, and apply vertex/texCoord updates plus transforms prior to rendering each instance, than to create a number of quads simply because there are differences between them. OpenGL provides a mechanism for accommodating this type of rendering; I was just curious if it was possible with JME. A rough sketch of the intended flow is below.
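All the helpers here are hypothetical; this loop is exactly the hook that stock jME does not expose, as the replies confirm:

```java
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;

// A sketch of the intended flow; 'writePositionsFor' and 'drawNow' stand in
// for per-geometry render hooks that jME does not provide.
class SharedMeshRenderer {
    void renderShared(Mesh sharedMesh, Iterable<Geometry> instances) {
        for (Geometry instance : instances) {
            writePositionsFor(instance, sharedMesh); // resize vertices for this instance
            sharedMesh.getBuffer(VertexBuffer.Type.Position)
                      .setUpdateNeeded();            // flag the buffer for re-upload
            drawNow(instance);                       // draw with this Node's transform
        }
    }

    void writePositionsFor(Geometry g, Mesh m) { /* hypothetical */ }
    void drawNow(Geometry g) { /* hypothetical */ }
}
```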

OK, I get it.
That’s not possible with JME on the CPU, but… you can do something similar to hardware skinning on the GPU. It will be a lot faster anyway.

I don’t know if you are familiar with shaders, but in short: you would have several geometries, all using the same mesh. Put a control on each geometry that computes the vertex transformation matrices. Then you’ll have to pass those matrices to the vertex shader and transform the vertex positions accordingly.
So the transformation will be done at render time, as you wanted, and on the GPU, which will be faster.

If you manipulate quads, it’s OK to do it like that; with more complex meshes you may want to use rules for the transformations. Passing all the vertex transforms will eat too much bandwidth, so using bone animation would help.

Note that you may find yourself in the aforementioned case where an object is culled by the CPU while the GPU transformation would place it in the view. In that case the object won’t be rendered, so you may have to play with the bounds to avoid such an issue.
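A minimal sketch of the control side, assuming a custom material whose vertex shader declares an `m_InstanceTransform` mat4 uniform and multiplies `inPosition` by it (the material definition itself is not shown):

```java
import com.jme3.math.Matrix4f;
import com.jme3.renderer.RenderManager;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Geometry;
import com.jme3.scene.control.AbstractControl;

public class InstanceTransformControl extends AbstractControl {

    private final Matrix4f transform = new Matrix4f();

    @Override
    protected void controlUpdate(float tpf) {
        computeTransform(transform); // hypothetical: per-instance deformation
        Geometry geom = (Geometry) spatial;
        geom.getMaterial().setMatrix4("InstanceTransform", transform);
    }

    @Override
    protected void controlRender(RenderManager rm, ViewPort vp) { }

    private void computeTransform(Matrix4f out) {
        out.loadIdentity(); // placeholder for the real per-instance math
    }
}
```

Note that each geometry then needs its own Material instance, so the uniform can differ per instance.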

@TheWaffler said: Thank you for the quick response. It sounds like what I am trying to achieve is not possible though. I want to manipulate the buffers for the single mesh for each node that is being rendered. It would look like this:
  1. Update the vertices to re-size the mesh according to the current node being rendered
  2. Render the mesh with the transform matrix of the current node
  3. Update the vertices to re-size the mesh according to the next node being rendered
  4. Render the mesh again using the transforms of the next node in the render list (also containing an instance of this mesh)
  5. Rinse and repeat

Am I correct in assuming this is not possible in JME?

EDIT: To answer your question concerning the reason: it seems more efficient to create a single quad, for example, and apply vertex/texCoord updates plus transforms prior to rendering each instance, than to create a number of quads simply because there are differences between them. OpenGL provides a mechanism for accommodating this type of rendering; I was just curious if it was possible with JME.

But… when you change the mesh for every object every frame, then it gets resent to the GPU every time. Whereas if you were reusing the same buffers, they only get sent once.

It feels like you are micro-optimizing in the wrong direction.

Not to mention that if all you are doing is changing the scale (or any part of the world transform for that matter) then that can already be done just by modifying the scale on each geometry and sharing the mesh between the geometries…
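A minimal sketch of that (inside simpleInitApp(), with `mat` and `rootNode` assumed from the usual SimpleApplication context): one Mesh instance behind many Geometries, each sized by its own transform.

```java
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.Node;
import com.jme3.scene.shape.Quad;

// The vertex data is uploaded once; only the world transforms differ per draw.
Mesh shared = new Quad(1, 1);
Node parent = new Node("instances");
for (int i = 0; i < 1000; i++) {
    Geometry g = new Geometry("quad-" + i, shared); // same Mesh instance
    g.setMaterial(mat);                             // 'mat' assumed to exist
    g.setLocalTranslation(i % 40, i / 40, 0);
    g.setLocalScale(1f + (i % 5) * 0.25f);          // per-instance "size"
    parent.attachChild(g);
}
rootNode.attachChild(parent);
```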

<cite>@zarch said:</cite> Not to mention that if all you are doing is changing the scale (or any part of the world transform for that matter) then that can already be done just by modifying the scale on each geometry and sharing the mesh between the geometries...

I used Quad as an example. Scaling != Re-sizing … but thanks for the input.

<cite>@pspeed said:</cite> But... when you change the mesh for every object every frame, then it gets resent to the GPU every time. Whereas if you were reusing the same buffers, they only get sent once.

It feels like you are micro-optimizing in the wrong direction.

I think you are missing the intention here. This is going to be done one way or the other.

What you have essentially told me is, sending 1000 copies of the same Mesh to the GPU and then updating each one every frame is micro-optimization. That’s interesting, to say the least. A simple “JME does not allow this” would have sufficed.

@TheWaffler said: I think you are missing the intention here. This is going to be done one way or the other.

What you have essentially told me is, sending 1000 copies of the same Mesh to the GPU and then updating each one every frame is micro-optimization. That’s interesting, to say the least. A simple “JME does not allow this” would have sufficed.

No, I’m saying it is attempted micro-optimization that is not really optimizing, and that you are not sending 1000 copies: you are effectively sending 1000 separate meshes. The only thing you are saving is a little RAM, but the performance will be the same as if you created 1000 different quads every frame and sent them to the GPU. You will be bus-limited pretty quickly.

Also, can you explain why rescaling a quad would be different from resizing it? Edit: never mind. You are not really using quads, I guess. For a quad, scaling = sizing.

We can tell you JME doesn’t allow you to do this but it seemed more helpful to actually try to solve your problem. It seems like there are better ways to do what you are actually trying to do. The performance of the way you are trying to do it is going to be pretty bad compared to several alternatives.

@TheWaffler said: I think you are missing the intention here. This is going to be done one way or the other.

What you have essentially told me is, sending 1000 copies of the same Mesh to the GPU and then updating each one every frame is micro-optimization. That’s interesting, to say the least. A simple “JME does not allow this” would have sufficed.


JME does not allow this for sure.
But technically what you suggest would not be faster and won’t save GPU bandwidth or CPU cycles. It would only save direct memory.
1 mesh, updated 1000 times, sent 1000 times to the GPU.

The way I explained though will have benefits in all those areas.

<cite>@nehon said:</cite> JME does not allow this for sure. But technically what you suggest would not be faster and won't save GPU bandwidth or CPU cycles. It would only save direct memory. 1 mesh, updated 1000 times, sent 1000 times to the GPU.

The way I explained though will have benefits in all those areas.

I see, though if the buffers are shared, the last update will be applied to every rendered instance of the mesh. Am I missing something that makes this not happen?

I tried using clone() after you suggested it, and even when using deepClone() to clone the buffers this happened (I became more confused here, as the JavaDocs state that the buffers are cloned as well, which would lead one to believe that altering one clone would not affect the others).
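For reference, a quick identity check of what each call shares (assuming the behavior the JavaDoc describes):

```java
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;
import com.jme3.scene.shape.Quad;

// clone() shares the underlying VertexBuffers; deepClone() should duplicate
// them, so editing a deep clone's positions should not affect the original.
Mesh original = new Quad(1, 1);
Mesh shallow = original.clone();
Mesh deep = original.deepClone();

VertexBuffer origPos = original.getBuffer(VertexBuffer.Type.Position);
System.out.println(shallow.getBuffer(VertexBuffer.Type.Position) == origPos); // true
System.out.println(deep.getBuffer(VertexBuffer.Type.Position) == origPos);    // expected: false
```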

<cite>@pspeed said:</cite> We can tell you JME doesn't allow you to do this but it seemed more helpful to actually try to solve your problem. It seems like there are better ways to do what you are actually trying to do. The performance of the way you are trying to do it is going to be pretty bad compared to several alternatives.

Ah. I think I see where the disconnect is. I am not doing this for the purpose of optimization at all. I’m picking a strategy for rendering complex 2D animations in the GUI Node. I think I am going to use a batched approach, with internal linkage and per-quad transforms within the mesh, to simulate skeletal keyframe animation. This should accomplish what I need with a single mesh.

I’ll post the code and a video result soon in case you have time to look over it and provide feedback on other approaches.

Actually, the mesh will never be updated: you send the mesh once and transform it in the vertex shader when computing its projection on screen, according to the transforms you passed to the material. So the drawback is that you need one material for each geometry.

Anyway, if your goal is GUI, batching will be a better alternative IMO. Look into BatchNode; you’ll be able to attach an individual control to each element of the GUI without having to care about updating the underlying mesh.

Edit: also, to be thorough, Nifty GUI now has an option to batch the GUI, resulting in a serious performance boost.
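A minimal sketch of the BatchNode route, with `guiMaterial` and the standard guiNode assumed from context:

```java
import com.jme3.scene.BatchNode;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Quad;

BatchNode batch = new BatchNode("gui-batch");
for (int i = 0; i < 100; i++) {
    Geometry quad = new Geometry("element-" + i, new Quad(32, 32));
    quad.setMaterial(guiMaterial); // one material shared across the batch
    batch.attachChild(quad);
}
batch.batch();               // merge the children into one mesh per material
guiNode.attachChild(batch);

// Moving an element afterwards updates the batched mesh for you:
batch.getChild("element-0").setLocalTranslation(100, 50, 0);
```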

@TheWaffler said: Ah. I think I see where the disconnect is. I am not doing this for the purpose of optimization at all. I'm picking a strategy for rendering complex 2D animations in the GUI Node. I think I am going to use a batched approach, with internal linkage and per-quad transforms within the mesh, to simulate skeletal keyframe animation. This should accomplish what I need with a single mesh.

I’ll post the code and a video result soon in case you have time to look over it and provide feedback on other approaches.

Cool. A single mesh is what I was going to suggest. Even better if you send the transforms to the GPU as a matrix array or some form and let the GPU do the transform.

At any rate, 1 mesh will be faster than lots of little ones.
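A minimal sketch of the matrix array idea, assuming a custom material definition that exposes an `InstanceTransforms` Matrix4Array uniform for the vertex shader to index (this mirrors how hardware skinning passes its bone matrices):

```java
import com.jme3.math.Matrix4f;
import com.jme3.shader.VarType;

Matrix4f[] transforms = new Matrix4f[16];
for (int i = 0; i < transforms.length; i++) {
    transforms[i] = new Matrix4f(); // fill with the real per-quad transforms
}
// 'material' is assumed to use the custom definition described above.
material.setParam("InstanceTransforms", VarType.Matrix4Array, transforms);
```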

It’s a micro-optimisation in that, rather than just doing it the easy way and seeing whether performance is actually a problem, you are diving into really fiddly stuff that you may never need, and you may find out in the end that it’s actually slower.

What do you mean by re-sizing if you don’t mean scaling?

This really sounds like a case for the vertex shader tbh. You could put whatever strange behaviour you liked in the vertex shader based on the material parameters.

<cite>@pspeed said:</cite> Cool. A single mesh is what I was going to suggest. Even better if you send the transforms to the GPU as a matrix array or some form and let the GPU do the transform.

At any rate, 1 mesh will be faster than lots of little ones.

Actually, I was hoping for a single little one that I could apply transforms to on the GPU per Geometry render :P Though, this only works without the scene being locked during rendering. A shallow-cloned mesh should work okay, and in this case I only need to clone the position buffer.

Question concerning using matrices in an array as opposed to updating the vertex positions:

From looking at the Mesh class, I get the impression the animations are handled by updating the vertex position buffer, thus the cloneForAnim method. Is this still used? Or is this being handled differently?

<cite>@zarch said:</cite> It's a micro-optimisation in that, rather than just doing it the easy way and seeing whether performance is actually a problem, you are diving into really fiddly stuff that you may never need, and you may find out in the end that it's actually slower.

What do you mean by re-sizing if you don’t mean scaling?

This really sounds like a case for the vertex shader tbh. You could put whatever strange behaviour you liked in the vertex shader based on the material parameters.

I’m getting the impression that you either are not following the conversation, have never used the render technique I mentioned (which is actually quite common, just not possible with JME), or are intentionally interjecting because you suffer from some sort of OCD?

Most of what is being discussed could be accomplished using a vertex shader; however, when using an extensive list of interpolation types for animations over time, you eventually realize that the number of uniforms needing to be pushed to the GPU each frame far exceeds the cost of simply updating the very small vertex position buffer.

This conversation is NOT about micro-optimization. I mentioned NOTHING about performance issues. I asked if a common render technique was supported in JME. It is not. I can live with this. There are many other options available.

As for the difference between re-sizing and scaling, is this a serious question?

First, nobody has the time to fully follow all conversations; the occasional slip is entirely normal. Alleging OCD or ventilating other theories about the other person is just being rude.
Second, anything involving shaders is about optimization, else you’d be doing it on the CPU. It’s normal to think about performance when shaders are involved.
Third, you didn’t initially mention what you were trying to achieve, so it was everybody’s guess. You have mentioned what you’re trying to achieve now, so this should be good (unless somebody misses that point again, which is possible).
Fourth, the question is valid. You did not provide enough context for us to decide whether your idea of resizing is equivalent to scaling or not. Asking whether the question is serious, which implies that the other person is either slighting you or a complete idiot (both of which are serious accusations) is, again, being rather rude towards people who’re trying to understand what you want to do.
Fifth, it’s unclear whether you have any remaining questions. Well, it’s unclear to me, but then I haven’t read every word of the conversation, nor do I have much inclination to do that for a person who’s getting somewhat rude, nor do I have enough time to really do that. Anyway, the point being that it’s possible that any remaining questions will stay unanswered unless you rephrase them.

Just a few notes so you know how you’re coming across to an uninvolved bystander.
And yeah I know it can be frustrating to get answers to questions one didn’t ask. I’m attributing your rudeness to that frustration, not to you in general.

@TheWaffler said: Actually, I was hoping for a single little one that I could apply transforms to on the GPU per Geometry render :P Though, this only works without the scene being locked during rendering. A shallow-cloned mesh should work okay, and in this case I only need to clone the position buffer.

The problem with the “single little one” approach is all of the draw dispatch. That will really kill you. There is a reason JME does things the way it does. You’d save a little memory at the cost of a lot of performance.

@TheWaffler said: Question concerning using matrices in an array as opposed to updating the vertex positions:

From looking at the Mesh class, I get the impression the animations are handled by updating the vertex position buffer, thus the cloneForAnim method. Is this still used? Or is this being handled differently?

Software skinning modifies the position buffer. Hardware skinning modifies the vertexes in the shader.
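For instance, a minimal sketch of switching the preference, assuming a jME version that exposes the hardware skinning switch (the model path is hypothetical):

```java
import com.jme3.animation.SkeletonControl;
import com.jme3.scene.Spatial;

Spatial model = assetManager.loadModel("Models/MyModel.j3o"); // hypothetical path
SkeletonControl skeleton = model.getControl(SkeletonControl.class);
skeleton.setHardwareSkinningPreferred(true); // bone matrices go to the vertex shader
```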

<cite>@TheWaffler said:</cite> I'm getting the impression that you either are not following the conversation, have never used the render technique I mentioned (which is actually quite common, just not possible with JME), or are intentionally interjecting because you suffer from some sort of OCD?

As for the difference between re-sizing and scaling, is this a serious question?

Scaling = changing the size of something.

You said you wanted to change the size of a mesh.

Or did you mean you want to change the number of vertices in a mesh? Resizing the buffers?

Or did you mean you want to move the vertices around? Resizing the triangles?

Or did you want to scale the shape along its normals? Resizing the object, making it “fatter”? (Trivial in a vertex shader, btw.)

Or did you want to scale the shape? (Resizing the object, keeping the same proportions?)

Or even scale the object differently on different axes? (Again, resizing could mean this.)

You ask an unclear question (especially considering I’ve seen re-sizing used by people on these forums to mean most of the above at some time or another) and then are surprised when you don’t get the answer you were looking for?

@TheWaffler said: From looking at the Mesh class, I get the impression the animations are handled by updating the vertex position buffer, thus the cloneForAnim method. Is this still used? Or is this being handled differently?
We support both software and hardware skinning, so yes, this is still used. Software skinning updates the buffers on the CPU; hardware skinning sends the bones’ transformation matrices to the shader, and everything is done on the GPU.
<cite>@toolforger said:</cite> First, nobody has the time to fully follow all conversations; the occasional slip is entirely normal. Alleging OCD or ventilating other theories about the other person is just being rude. Second, anything involving shaders is about optimization, else you'd be doing it on the CPU. It's normal to think about performance when shaders are involved. Third, you didn't initially mention what you were trying to achieve, so it was everybody's guess. You have mentioned what you're trying to achieve now, so this should be good (unless somebody misses that point again, which is possible). Fourth, the question is valid. You did not provide enough context for us to decide whether your idea of resizing is equivalent to scaling or not. Asking whether the question is serious, which implies that the other person is either slighting you or a complete idiot (both of which are serious accusations) is, again, being rather rude towards people who're trying to understand what you want to do. Fifth, it's unclear whether you have any remaining questions. Well, it's unclear to me, but then I haven't read every word of the conversation, nor do I have much inclination to do that for a person who's getting somewhat rude, nor do I have enough time to really do that. Anyway, the point being that it's possible that any remaining questions will stay unanswered unless you rephrase them.

Just a few notes so you know how you’re coming over to an uninvolved bystander.
And yeah I know it can be frustrating to get answers to questions one didn’t ask. I’m attributing your rudeness to that frustration, not to you in general.

Actually, I have zero tolerance for self-absorbed idiots. So, yes, I was being intentionally rude, due to the fact that the subtle hint to un-invite himself from the conversation was not received or understood. After following the boards for quite a while, time and time again @zarch has shown himself to have almost no understanding of what he is talking about, but seems to like to answer questions nonetheless.

I’m sorry if you find my straight-to-the-point approach with people not to your liking.