Bypassing refreshFlags, or direct access to glRender methods?

<cite>@nehon said:</cite> We support both software and hardware skinning, so yes, this is still used. Software skinning updates the buffers on the CPU; hardware skinning sends the bone transformation matrices to the shader, and everything is done on the GPU.

At what point is one more efficient than the other? This would greatly depend on the size of the mesh, yes?

<cite>@TheWaffler said:</cite> After following the boards for quite a while, time and time again @zarch has shown himself to have almost no understanding of what he is talking about, but seems to like to answer questions nonetheless.
Uh, wow... well, read the board again... Listen: by being obnoxious like this, you'll just end up on your own. I'm trying to answer your question nonetheless, but that's getting more and more difficult. Please try to behave; people are trying to help here, even if you think otherwise.

So to get back on topic…
Hardware skinning is faster mostly when you have a large number of animated meshes in a scene.
See this post, where I shared some perf results:
http://hub.jmonkeyengine.org/forum/topic/hardware-skinning-patch-is-in-and-tested/#post-211657

It’s especially efficient on Android, where writing buffers is slow and generates tons of garbage.
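
For reference, here’s a minimal sketch of how you’d ask for the hardware path, assuming the jME 3 `SkeletonControl` API from the era of the linked patch; `model` is a placeholder for an animated spatial loaded elsewhere:

```java
import com.jme3.animation.SkeletonControl;

// Hypothetical "model" is an animated Spatial with a SkeletonControl attached.
SkeletonControl skeleton = model.getControl(SkeletonControl.class);

// Prefer hardware (GPU) skinning; jME falls back to the software
// path if the material or driver can't support it.
skeleton.setHardwareSkinningPreferred(true);

// After rendering starts, you can check which path was actually taken.
System.out.println("Hardware skinning in use: " + skeleton.isHardwareSkinningUsed());
```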

<cite>@nehon said:</cite> Uh, wow... well, read the board again... Listen: by being obnoxious like this, you'll just end up on your own. I'm trying to answer your question nonetheless, but that's getting more and more difficult. Please try to behave; people are trying to help here, even if you think otherwise.

So to get back on topic…
Hardware skinning is faster mostly when you have a large number of animated meshes in a scene.
See this post, where I shared some perf results:
http://hub.jmonkeyengine.org/forum/topic/hardware-skinning-patch-is-in-and-tested/#post-211657

It's especially efficient on Android, where writing buffers is slow and generates tons of garbage.

*sigh* The cliché police are in full force on this forum, aren’t they? So, if I follow you correctly, it is okay to consistently push a thread off topic, but it is not okay to tell someone to stop? I think I see how this is going to work. Now, so I can better position myself, whose ass should be the target for brown-nosing? When in Rome… and all.

Back to topic:

When you say writing buffers, are you referring to allocating memory, updating the already-created buffer, or both?

Wow, this is priceless! I received negative feedback for this post?

<cite>@TheWaffler said:</cite> At what point is one more efficient than the other? This would greatly depend on the size of the mesh, yes?

Seriously?

<cite>@TheWaffler said:</cite> Actually, I have zero tolerance for self-absorbed idiots. So, yes, I was being intentionally rude due to the fact that the subtle hint to un-invite himself from the conversation was not received or understood. After following the boards for quite a while, time and time again @zarch has shown himself to have almost no understanding of what he is talking about, but seems to like to answer questions nonetheless.

I’m sorry if you find my straight-to-the-point approach with people not to your liking.

Well, if straight-to-the-point is what you like: You’re being the biggest idiot in this thread, mostly for the reasons that you’re attributing to others.
Since I have no interest in self-absorbed idiots, I’ll just leave you to your own devices. Have a nice day.

<cite>@TheWaffler said:</cite> *sigh* The cliché police are in full force on this forum, aren't they? So, if I follow you correctly, it is okay to consistently push a thread off topic, but it is not okay to tell someone to stop? I think I see how this is going to work. Now, so I can better position myself, whose ass should be the target for brown-nosing? When in Rome... and all.

...And that's it... you're on your own. Good luck...
<cite>@nehon said:</cite> ...And that's it... you're on your own. Good luck...

Oops, apparently it was yours.

ROFLMAO… This thread is fucking priceless!!!

Maybe try using LWJGL or JOGL directly; you seem to want to work against the scene graph concept, which is fine, but then it doesn’t make sense to use one. Oh yeah, and don’t insult people giving you help (and software) for free, even if it’s not 100% as you expect it to be; there’s most likely a reason apart from wanting to annoy you. In this case you single-handedly made about all the people that could help you in this forum your enemies, so, yeah, good luck on your own from now on.

Heh. Not an enemy. I’m simply uninterested in an acid-for-help deal.
Wishing him luck wasn’t sarcasm on my part. Different social contexts pull out different aspects of a personality, and he might actually be a nice guy in other threads or in person. (He could wait until he’s cooled off, find a different perspective on whatever technical problem he’s interested in, and discuss that. Choosing a different nick might help.)

That last thing about brown-nosing was really off, though. I find both roles in that kind of relationship really disgusting, whether it’s figurative or literal. It’s an invitation to a needless rantfest.

<cite>@TheWaffler said:</cite> Actually, I have zero tolerance for self-absorbed idiots. So, yes, I was being intentionally rude due to the fact that the subtle hint to un-invite himself from the conversation was not received or understood. After following the boards for quite a while, time and time again @zarch has shown himself to have almost no understanding of what he is talking about, but seems to like to answer questions nonetheless.

I’m sorry if you find my straight-to-the-point approach with people not to your liking.

Really? Link me to some instances where I have “almost no understanding” of what I am talking about…

After all, if it’s “time and time again”, that should be straightforward for you to do…

<cite>@zarch said:</cite> It's a micro-optimisation in that, rather than just doing it the easy way and seeing if performance is the problem, you are diving into really fiddly stuff when you may not in fact ever need to do so, and may find out in the end that it's actually slower.

Ya know… this post right here was inflammatory. The dude had gotten his answer already, and you decided to push buttons. And then you had to come back for more, posting to get the last word in. He definitely got the self-absorbed part right. I think “idiot” may have been over the top… but you instigated this shit.

<cite>@t0neg0d said:</cite> Ya know... this post right here was inflammatory. The dude had gotten his answer already, and you decided to push buttons. And then you had to come back for more, posting to get the last word in. He definitely got the self-absorbed part right. I think "idiot" may have been over the top... but you instigated this shit.

And what’s the point of your post? Explaining to zarch what the problem is? I’d say that was the point of his post as well. Shall we call it a draw, then? :roll:

Well, she does have a small point there. It is a bit odd that I answered an already-answered question, and I’ve no idea why I didn’t see that it had already been answered; if I had, I wouldn’t have replied. It’s entirely possible the thread was open in my browser for a few hours before I got around to it and I didn’t realise it was old. That happens sometimes, although usually I notice before posting.

Even so, a duplicated response to a question is hardly cause to attack the responder, and I also don’t really see where it’s inflammatory. I just answered the question as to why it might be considered a micro-optimisation.

Oh well, it happens.

<cite>@zarch said:</cite> Well, she does have a small point there. It is a bit odd that I answered an already-answered question, and I've no idea why I didn't see that it had already been answered; if I had, I wouldn't have replied. It's entirely possible the thread was open in my browser for a few hours before I got around to it and I didn't realise it was old. That happens sometimes, although usually I notice before posting.

Even so, a duplicated response to a question is hardly cause to attack the responder, and I also don’t really see where it’s inflammatory. I just answered the question as to why it might be considered a micro-optimisation.

Oh well, it happens.

The inflammatory part was “the easy way”… insinuating that this person’s way was the opposite of that. /shrug. I understand this is a misunderstanding all the way around, but I might have taken offense to it if it were me.

If the goal of what this person was trying to do was to render 2D quads… then I’m fairly sure their method of reusing a single quad and applying transforms per Geometry would be faster than the given alternatives. There are multiple 2D OpenGL-based render engines (more specifically, Java-based engines) that do just this, with better FPS results than what was suggested here.

EDIT: I should add the caveat that I haven’t done anything to test this myself… these are claims backed by benchmarks posted for said engines.
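
To make that concrete, here’s a sketch of the shared-mesh approach in question, assuming stock jME 3 classes; `assetManager` and `rootNode` are the usual `SimpleApplication` fields:

```java
import com.jme3.material.Material;
import com.jme3.math.FastMath;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Quad;

// One mesh, one set of GPU buffers, shared by every quad on screen.
Quad sharedQuad = new Quad(1f, 1f);
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");

for (int i = 0; i < 1000; i++) {
    // Each Geometry reuses the same Quad; only its transform differs,
    // so no per-object vertex data is allocated or rewritten.
    Geometry g = new Geometry("quad" + i, sharedQuad);
    g.setMaterial(mat);
    g.setLocalTranslation(FastMath.nextRandomFloat() * 50f,
                          FastMath.nextRandomFloat() * 50f, 0f);
    rootNode.attachChild(g);
}
```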

<cite>@t0neg0d said:</cite> If the goal of what this person was trying to do was to render 2D quads... then I'm fairly sure their method of reusing a single quad and applying transforms per Geometry would be faster than the given alternatives. There are multiple 2D OpenGL-based render engines (more specifically, Java-based engines) that do just this, with better FPS results than what was suggested here.

Yes, rendering 1000 objects with different transforms will not be so bad. But that’s not really what he was doing: he’s rendering 1000 objects and mutating the position buffers of each based on the transforms. Otherwise, JME already shares the buffers, etc…
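
A rough sketch of that per-object buffer mutation, assuming the jME 3 `VertexBuffer` API (`mesh`, `offsetX`, and `offsetY` are placeholders); note that every change forces the whole buffer to be re-uploaded:

```java
import com.jme3.scene.Mesh;
import com.jme3.scene.VertexBuffer;
import java.nio.FloatBuffer;

// Hypothetical "mesh" belongs to one of the 1000 objects.
VertexBuffer vb = mesh.getBuffer(VertexBuffer.Type.Position);
FloatBuffer positions = (FloatBuffer) vb.getData();

// Bake the object's transform into every vertex on the CPU...
for (int i = 0; i < positions.limit(); i += 3) {
    positions.put(i,     positions.get(i)     + offsetX); // x
    positions.put(i + 1, positions.get(i + 1) + offsetY); // y
}

// ...then push the whole buffer back to the GPU and refresh bounds.
vb.updateData(positions);
mesh.updateBound();
```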

If you are talking about “immediate mode”, where everything is drawn as you go, then the OpenGL API itself is essentially buffering state for you so it can be sent efficiently. But that whole approach is a complete alternative to scene graphs (“retained mode”) and not really compatible on several levels. Which approach is better largely depends on how static the scene is: if most things never change, then immediate mode does more work than it needs to every frame.
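
For anyone who hasn’t seen it, classic immediate mode looks roughly like this (a sketch against the legacy LWJGL 2 / OpenGL 1.x API of the era); every vertex is resent each frame, which is exactly the work a retained scene graph avoids for static geometry:

```java
import org.lwjgl.opengl.GL11;

// Draw one quad by streaming its vertices to the driver, every frame.
GL11.glBegin(GL11.GL_QUADS);
GL11.glVertex2f(0f, 0f);
GL11.glVertex2f(1f, 0f);
GL11.glVertex2f(1f, 1f);
GL11.glVertex2f(0f, 1f);
GL11.glEnd();
```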

<cite>@normen said:</cite> And whats the point of your post? Explaining to zarch what the problem is? I'd say that was the point of his post as well. We call it a draw then? :roll:

My point was only that someone just got chased off over something that was instigated by one of JME’s long-time users. And, yes… I was explaining it so hopefully it doesn’t happen again. What would be the point of you putting in all the time you do developing JME if everyone who might potentially use it gets chased off? Man, this place is in dire need of a public relations specialist :wink:

<cite>@pspeed said:</cite> Yes, rendering 1000 objects with different transforms will not be so bad. But that's not really what he was doing: he's rendering 1000 objects and mutating the position buffers of each based on the transforms. Otherwise, JME already shares the buffers, etc...

If you are talking about “immediate mode”, where everything is drawn as you go, then the OpenGL API itself is essentially buffering state for you so it can be sent efficiently. But that whole approach is a complete alternative to scene graphs (“retained mode”) and not really compatible on several levels. Which approach is better largely depends on how static the scene is: if most things never change, then immediate mode does more work than it needs to every frame.

Yep… it is not compatible. And this would have been the answer to his question in a single sentence:

“No, JME does not support immediate-mode rendering because it runs counter to a managed scene graph. If you could be more specific about what you’re trying to do, we may be able to suggest alternatives.”

With this, the thread would have been answered and you would have one more happy user. Unfortunately, what you have now is a bunch of pissed-off people on both sides… oh… and one less user, I am betting.

You never know who someone is… he could have been helping a well-known gaming company decide on a Java-based render engine solution. It could have been something that would help JME position itself. Etc., etc., etc. You just never know.

Unfortunately, in this situation, no one is ever going to know =(

EDIT: I’m not pointing the finger at you… you = JME as a whole

By now I definitely want to know whether the character is so feeble that he goes off about such stuff immediately and, say, quits developing a whole library with users because of such shenanigans. Because then we can very well do without his (or her) contributions.

<cite>@normen said:</cite> By now I definitely want to know whether the character is so feeble that he goes off about such stuff immediately and, say, quits developing a whole library with users because of such shenanigans. Because then we can very well do without his (or her) contributions.

And so in response to my asking people to be kind, you tag me? You are hopeless.