SharedNodes not rendered

Adding more than 64 locked SharedNode instances to a node causes them (or the parent node; I can't say exactly which) not to be rendered. With fewer than 64 locked shared nodes, they are rendered just like they should be. Note that this issue occurs only when the SharedNode instances are locked. Without locking the shared nodes, the engine performs normally.

The number of meshes at which this happens might differ on other machines, I suppose (it is 64 here).



test case:


import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.net.URL;

import jmetest.renderer.loader.TestObjJmeWrite;

import com.jme.app.SimpleGame;
import com.jme.scene.Node;
import com.jme.scene.SharedNode;
import com.jmex.model.XMLparser.JmeBinaryReader;
import com.jmex.model.XMLparser.Converters.ObjToJme;

public class TestCase extends SimpleGame {

  @Override
  protected void simpleInitGame() {
    try {
      // copied from TestObjJmeWrite.java
      ObjToJme converter = new ObjToJme();
      URL objFile = TestObjJmeWrite.class.getClassLoader().getResource("jmetest/data/model/maggie.obj");
      converter.setProperty("mtllib", objFile);
      ByteArrayOutputStream BO = new ByteArrayOutputStream();
      converter.convert(objFile.openStream(), BO);
      JmeBinaryReader jbr = new JmeBinaryReader();
      jbr.setProperty("texurl", new File(".").toURL());
      Node r = jbr.loadBinaryFormat(new ByteArrayInputStream(BO.toByteArray()));
      r.setLocalScale(.1f);
     
      // now create shared nodes
      // it doesn't matter that they are put at the same coords
      for (int i = 0; i < 65; ++i) {
        SharedNode shared = new SharedNode("blah", r);
        shared.lock();
        rootNode.attachChild(shared);
      }
    }
    catch (IOException e) {
      e.printStackTrace();
    }
  }

  /**
   * @param args
   */
  public static void main(String[] args) {
    TestCase app = new TestCase();
    app.setDialogBehaviour(ALWAYS_SHOW_PROPS_DIALOG);
    app.start();
  }

}



If shared nodes are not supposed to be locked, then this should be documented somewhere.

Talked on IM about this with Sfera, and when you lock the target node (instead of the Shared*) it works.



No clue why this weird behaviour is happening… I guess it's a bug that a display list is created when there's already a display list? (There's no check for that right now.) Maybe 64 is somehow a limit… no clue.



I think we should check that display lists aren't generated when there's already a display list id, or that we release the old one first. But other devs, please chime in on this one first.
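A minimal, self-contained sketch of the guard being suggested (plain Java, not actual jME code; the class and field names are invented for illustration, and the GL calls are replaced by counters so it can run standalone). The idea is that lockMeshes releases any stale display list id, with a warning, before generating a new one:

```java
// Hypothetical stand-in for jME's GeomBatch; all names here are illustrative only.
class GuardedBatch {
    static final int NO_LIST = -1;
    int displayListID = NO_LIST;
    int released = 0;   // counts how often a stale list was freed
    int nextId = 1;     // stands in for glGenLists

    void lockMeshes() {
        if (displayListID != NO_LIST) {
            // Guard: warn and release the stale list before regenerating,
            // instead of compiling a new list on top of the old one.
            System.err.println("lockMeshes: releasing stale display list " + displayListID);
            released++;                 // stands in for glDeleteLists(displayListID, 1)
            displayListID = NO_LIST;
        }
        displayListID = nextId++;       // stands in for glGenLists/glNewList/glEndList
    }
}
```

The alternative mentioned in the thread (warn and ignore the second lock) would simply return early from the `if` instead of releasing.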



Probably SharedMesh should override the lock methods and just give a warning (and not forward lockMeshes to the target node).

Actually, I think a check in GeomBatch's lockMeshes(Renderer) would do the trick nicely and prevent the issue in other cases aside from Shared.

Uhm, what I mean is: currently, when you already have a display list and you generate a new one, the old one is not released.



Not only that: from a quick glance through the code, it looks like the existing list is actually called while generating the new one. I suspect that after doing that 64 times you hit some kind of OpenGL/driver limit on nesting display lists inside display lists. So afaik, yes, there should be a check in GeomBatch.lockMeshes(); the question is, if there's already a list, should we release it or throw a warning?
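To make the suspected failure mode concrete, here is a toy simulation (plain Java, invented names, no real OpenGL). If each new list is compiled while the previous list id is still set, and the old list gets called from inside the new one, the effective nesting depth grows by one per lock, so a fixed driver nesting limit would be exceeded after exactly that many locks (64 on the reporter's machine):

```java
// Toy model of a display list that may call one nested list.
class ToyDisplayList {
    final ToyDisplayList nested;        // list called from inside this one, if any
    ToyDisplayList(ToyDisplayList nested) { this.nested = nested; }
    int depth() { return nested == null ? 1 : 1 + nested.depth(); }
}

class BuggyBatch {
    ToyDisplayList list;                // currently held list, if any

    // No guard: compiling a new list while 'list' is set embeds a call to the old one.
    void lockMeshes() {
        list = new ToyDisplayList(list);
    }
}
```

This is only a model of the hypothesis, not the actual jME code path; the point is that the depth grows linearly with the number of unguarded lock calls.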



The other thing is, I don't think there's any use in calling the lock* methods on a SharedMesh. Currently the lockMeshes() method is forwarded to the target (compare the VBO-related methods, where a warning is thrown instead), and the other two are executed not on the target mesh but on the shared mesh itself (where they have no good effect that I can think of).

Since someone apparently added the forwarding call post-0.10, I'm asking why that was done before I change all the lock methods to only give a warning.

It would be a really bad idea to squelch all lock methods. lockBounds, lockTransforms and lockShadows are all executed on the shared copy and have nothing to do with the original target. We can certainly warn when locking the mesh data, though.

Also, lockMeshes in Geometry should warn and ignore the lock if already locked. That's why there's an unlockMeshes.

I guess I'm confused about SharedMesh here. Don't the bounds of the target batches hold the bounding volume? Then what good does it do to lock a SharedMesh's bounds?

The creation of a display list (the only place I can see transform locking being used) happens on the target mesh (you can't even create a display list for a SharedMesh itself the normal way, because right now lockMeshes() is forwarded to the target), so why allow "locking" transforms in the first place?



(unrelated: what a weird way to swap in swapBatches()!?)



If what I said just now is true, I think we should tell people to lock the target mesh with lockMeshes() and lockBounds(), or to call lock() on the target if they don't use local translation/rotation/scale for their SharedMeshes (e.g. because they're attached to different Nodes), and throw a warning for the lock* methods in SharedMesh.



In any case, I'll add the warning in lockMeshes (for when there's already a display list).

Why would the SharedMesh reuse the bounds of the target? Then culling would be useless unless all shared instances stayed in exactly the same place as the target. (Don't forget that bounds are transformed by the scene graph.)
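The point about per-instance bounds can be sketched with a toy model (plain Java, invented names; real jME bounds are full volumes transformed by the scene graph, reduced here to a single coordinate): the mesh data and its local bound are shared, but each instance's world bound depends on that instance's own transform, so two instances at different positions must end up with different world bounds for culling to work.

```java
// Toy model: shared mesh data with a model-space bound.
class ToyTargetMesh {
    final float localBoundCenter = 0f;  // shared by all instances, in model space
}

// Each shared instance carries its own world bound.
class ToySharedInstance {
    final ToyTargetMesh target;
    float translation;                  // this instance's world translation
    float worldBoundCenter;             // per-instance, in world space

    ToySharedInstance(ToyTargetMesh t, float tr) { target = t; translation = tr; }

    void updateWorldBound() {
        // The shared local bound is transformed by this instance's transform.
        worldBoundCenter = target.localBoundCenter + translation;
    }
}
```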



Locking of transforms tells jME not to recalculate the world transforms of a Spatial (see Spatial.updateWorldVectors). So no, it's not just a display list thing.
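A toy model of that skip (plain Java, invented names; the real logic lives in Spatial.updateWorldVectors and uses full translation/rotation/scale, reduced here to one coordinate): once transforms are locked, the cached world transform is no longer recomputed from the parent, so later changes to the local transform are ignored.

```java
// Minimal stand-in for a Spatial with lockable transforms.
class ToySpatial {
    float localX;
    float worldX;                       // cached world transform
    boolean transformsLocked;

    void updateWorldVectors(float parentWorldX) {
        if (transformsLocked) return;   // locked: keep the cached world transform
        worldX = parentWorldX + localX;
    }

    void lockTransforms() { transformsLocked = true; }
}
```

This is also why locking the target's transforms breaks SharedMesh drawing, as noted below: the cached transform of the target wins over each instance's own placement.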



So yeah, for best efficiency when using static geometry (e.g. trees on a landscape), lock the mesh of the target, and go ahead and lock() the SharedMesh. Do NOT lock the transforms of the target when locking the target's mesh data, though, or transforms will be ignored when drawing the SharedMesh. Evidently locking is not all that clear even to devs. :) It's a powerful feature, so we need to rectify that.

Ok, for some reason I thought it was a good idea to look only at where getLocks() was used, not at the lockMode field. Obviously, I knew about updateWorldData() somewhere in my head. I admit the exact workings of bounds locking still had me confused with the new batch system, but I'll figure that out on my own (at least I see the obvious part again, thanks Renanse XD).

That's cool… But I figure if locking has even us confused, we'd better whip up a good doc on the wiki or something to clear it up. hehe…



So, any volunteers?  :slight_smile:

Well, I think I thoroughly disqualified myself for that task just now :stuck_out_tongue:

I don't think writing javadoc is one of my strengths. Ask llama about my typos and confusing babbling :slight_smile: