I strongly suspect new JME is pointing out an issue that was already there.
Common case: divide by zero somewhere… like normalizing a vector of zero length or something… then using that to make a quaternion or multiply one or something.
When a camera matrix goes to NaN, I automatically assume a NaN quaternion somewhere.
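For anyone wondering how that propagates, here is a minimal sketch of the failure mode (hand-rolled normalization; as far as I know JME's own Vector3f.normalize() guards against zero length, but custom math frequently doesn't):

import com.jme3.math.Quaternion;
import com.jme3.math.Vector3f;

public class NaNDemo {
    public static void main(String[] args) {
        Vector3f dir = new Vector3f(0, 0, 0);       // zero-length direction
        // 1f / 0f is Infinity, and Infinity * 0 is NaN:
        Vector3f bad = dir.mult(1f / dir.length());
        System.out.println(bad);                    // (NaN, NaN, NaN)

        // Feed it into a rotation and the NaN spreads into the quaternion,
        // and from there into any camera/view matrix built from it.
        Quaternion q = new Quaternion();
        q.lookAt(bad, Vector3f.UNIT_Y);
        System.out.println(q);
    }
}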
I think it happens while I am grabbing a geometry and moving it continuously with a VR hand. I’m increasingly suspecting it is a me problem (perhaps a NaN briefly comes back from the runtime for whatever reason). I’ve added some assertions earlier in the process to try to catch it “when it goes in”, but with the randomness I will just have to wait for it to happen again. Anyway, as I say, I now think it’s a me problem, but I’ll let people know if that changes.
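For reference, the sort of assertion I mean is a guard at the point where positions enter the scene graph (hypothetical helper and variable names; run with -ea so the asserts fire):

import com.jme3.math.Vector3f;

// Hypothetical helper: validate a vector before it enters the transform
// chain, so the stack trace points at whatever produced the NaN.
private static Vector3f assertFinite(Vector3f v, String where) {
    assert !Float.isNaN(v.x) && !Float.isNaN(v.y) && !Float.isNaN(v.z)
            : "NaN introduced at " + where + ": " + v;
    return v;
}

// Usage, e.g. when applying the tracked hand position:
// grabbed.setLocalTranslation(assertFinite(handPos, "VR hand grab"));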
That wouldn’t work (or make sense) if the spatial is a node with two or more geometries attached to it – each geometry with its own material – and you want to change only one of them. I mean, a node in Gltf can have more than one geometry attached to it, am I right?
But okay, let’s not change the subject of this thread. The thing is, as @capdevon stated, I also think that there might be potential backwards compatibility issues here.
The problem with hypothetical tennis is that the court moves all the time.
…and I’m continuing this in case folks affected by this specific behavior change need to address it for the reason stated.
The original problem “My geometry is now a node”… if the intent was “I need to set a material parameter”… adding a mat param override is probably the most appropriate way.
So, if the reader was affected by the problem “I look up this spatial and it was a geometry but now it’s a node” and the reason was “because I need to set a material parameter”… then the appropriate way to avoid the problem in the future is a material parameter override. If glTF or Blender (lots of edits can cause this same issue) has moved your Geometry into multiple Geometries under the same Node, or one Geometry under a new Node… you still think of it as “one thing” and probably want to set the material parameter on all of those things. The code must have thought of it as “one thing” before, or else it wouldn’t have been casting it to a Geometry… which can only be “one thing” at a time.
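For anyone who lands here later, a minimal sketch of the override approach (“someSpecificName” is a placeholder; the override applies to the whole subtree, however many Geometries it contains):

import com.jme3.material.MatParamOverride;
import com.jme3.math.ColorRGBA;
import com.jme3.scene.Spatial;
import com.jme3.shader.VarType;

// Inside a SimpleApplication, e.g. in simpleInitApp():
Spatial model = rootNode.getChild("someSpecificName"); // node OR geometry
model.addMatParamOverride(
        new MatParamOverride(VarType.Vector4, "Color", ColorRGBA.Red));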
I find the Blender-to-JME spatial hierarchy to be reasonably chaotic in the face of relatively benign edits, so I try to avoid Geometry geom = (Geometry)getChild(“someSpecificName”) style code. There are lots of techniques for doing so, but they all depend on the reasons; one is sketched below. JMEC scripts even make some of these easier than in Java (though not mat param overrides… they are currently as “ugly” to write in both cases).
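One of those techniques, sketched: traverse the subtree and operate on every Geometry you find, instead of casting one named child (again, “someSpecificName” is a placeholder):

import com.jme3.math.ColorRGBA;
import com.jme3.scene.Geometry;
import com.jme3.scene.SceneGraphVisitorAdapter;
import com.jme3.scene.Spatial;

Spatial model = rootNode.getChild("someSpecificName");
model.depthFirstTraversal(new SceneGraphVisitorAdapter() {
    @Override
    public void visit(Geometry geom) {
        // Runs for each Geometry, however the importer split the model.
        geom.getMaterial().setColor("Color", ColorRGBA.Red);
    }
});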
OK, now the problem gets serious. With the new glTF/GLB model loader, the AnimComposer and SkinningControl controls are duplicated for each Geometry. That is absolutely no good.
I’ve attached screenshots for reference:
jME-3.7.0
I’ve encountered an issue with 3.7.0-beta1. I believe that in 3.7.0 the this.cam.lookAt method causes the camera to look at the position as if the screen were the nominal size; if the window has been made smaller, it looks off to the side.
Test application
import com.jme3.app.SimpleApplication;
import com.jme3.material.Material;
import com.jme3.math.ColorRGBA;
import com.jme3.math.Vector3f;
import com.jme3.scene.Geometry;
import com.jme3.scene.Spatial;
import com.jme3.scene.shape.Box;
import com.jme3.system.AppSettings;

public class LookBug extends SimpleApplication {

    public static void main(String[] args) {
        LookBug app = new LookBug();
        AppSettings settings = new AppSettings(true);
        settings.setWindowSize(300, 300); // <---- this is important
        app.setSettings(settings);
        app.start();
    }

    @Override
    public void simpleInitApp() {
        rootNode.attachChild(box());
        this.cam.setLocation(new Vector3f(1.8f, 1.8f, -0.8f));
        this.cam.lookAt(new Vector3f(0.5f, 0.4f, 0.5f), Vector3f.UNIT_Y);
    }

    private Spatial box() {
        Box b = new Box(new Vector3f(0, 0, 0), new Vector3f(1, 1, 1));
        Geometry geom = new Geometry("Box", b);
        Material mat = new Material(assetManager,
                "Common/MatDefs/Misc/Unshaded.j3md");
        mat.setColor("Color", ColorRGBA.Blue);
        geom.setMaterial(mat);
        return geom;
    }
}
Behaviour in 3.6.1 (it looks straight at the centre of the cube)
Behaviour in 3.7.0 (it looks off to the side, but if you imagine the screen growing up and to the right to the nominal size it would be correct)
Edit: actually, it sort of looks like setWindowSize(300,300) has changed to clipping to the bottom-left corner of the “true” view, rather than resizing the view.
So to be clear, you’ve set the window size but not the actual display size to match?
It may be a change in behavior, but I’d argue that it’s correct now. If you are telling JME to draw a large frame buffer in a small window, I think now it’s correct.
That does fix the problem… it feels surprising though. I’ve always treated setWindowSize as setting the window size, with JME figuring out what is best for resolution (and that has always been the previous behaviour). I can imagine a bunch of “why is my display so weird” questions stretching out into the future.
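For anyone else hitting this, the fix amounts to making the two sizes agree in AppSettings (same calls as the test case above):

AppSettings settings = new AppSettings(true);
settings.setWindowSize(300, 300);  // size of the OS window
settings.setResolution(300, 300);  // framebuffer size, kept in sync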
Interestingly, setting just the resolution works fine:
settings.setResolution(300,300);
Because it defaults the window size to the resolution (but not the resolution to the window size).
Yeah, that’s right. The default framebuffer size is chosen by the window manager.
This was some sort of workaround to avoid breaking current apps and to make them work properly on macOS, but you are supposed to set only the window size. Maybe the setResolution/setWidth/setHeight methods should be marked as @Deprecated.
Incidentally, what Ali_RS is suggesting there is exactly what I would have expected a window-size/resolution mismatch to mean if it were allowed: a pixelated view, because a small image is stretched over too large an area (or pointlessly many pixels are rendered and then downsampled). It is, after all, the application’s resolution you’re setting; the fact that it is also a framebuffer is an implementation detail.
It would be interesting if you did a git bisect to figure out which commit broke it.
…especially if that commit was one of Ric’s.
Edit: and my (weak) argument would be: what if some app really does want a stretched/pixelated display because they want a bigger window but not bigger quality?
I always just set resolution but I’m also running an older JME.
I think sometime back we allowed window resizing (like by the user stretching the window)… if so then I wonder how that plays into this.
Yeah, I think stretching/pixelating would be the more defensible behaviour. It’s probably rare that you’d want it, but I can imagine it. The behaviour it actually has, though, seems to be either continuing to render outside the window (which seems pointless) or leaving some of the window black, depending on whether the window is smaller or larger than the resolution.
It would render everything on the proper-size framebuffer and then blit it onto the larger default framebuffer.
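Roughly this at the GL level, not JME’s actual renderer code (raw LWJGL3 calls; offscreenFbo, windowWidth and windowHeight are placeholders):

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL30;

// Render the scene into an offscreen FBO at the requested resolution,
// then stretch-blit it onto the default framebuffer (the window).
GL30.glBindFramebuffer(GL30.GL_READ_FRAMEBUFFER, offscreenFbo);
GL30.glBindFramebuffer(GL30.GL_DRAW_FRAMEBUFFER, 0); // default framebuffer
GL30.glBlitFramebuffer(0, 0, 300, 300,        // source: render resolution
        0, 0, windowWidth, windowHeight,      // dest: actual window size
        GL11.GL_COLOR_BUFFER_BIT, GL11.GL_LINEAR);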
Messing with the default framebuffer size is not a safe thing to do.
The default framebuffer contains a number of images, based on how it was created. All default framebuffer images are automatically resized to the size of the output window, as it is resized.
I seem to have found a regression.
This problem seems to only occur on certain special models.
After doing a git bisect, I found that the first bad commit was e440f31.