Hi!
I think something is wrong in the way the viewport is taken into account for projections.
I'm trying to define a viewport that is 16:9 whatever the resolution is, but when the actual viewport does not fill the whole window, everything rendered in the ortho queue is mis-located (the error grows as the object moves away from the origin).
There is also something wrong when using getScreenCoordinates() in such cases…
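For reference, here is roughly the kind of viewport setup I mean (a minimal sketch, not my exact code; it assumes the Camera setters that mirror the getViewPort*() getters, and a window taller than 16:9, so the band is letterboxed vertically):

// Letterbox a 16:9 viewport inside the window (example only).
// The fractions are relative to the window, as used by the Camera viewport getters/setters.
DisplaySystem display = DisplaySystem.getDisplaySystem();
Camera cam = display.getRenderer().getCamera();
float windowAspect = (float) display.getWidth() / display.getHeight();
float usedHeight = windowAspect / (16f / 9f);   // fraction of the window height actually used
float offset = (1f - usedHeight) / 2f;          // e.g. 0.125 for a 4:3 window
cam.setViewPortLeft(0f);
cam.setViewPortRight(1f);
cam.setViewPortBottom(offset);
cam.setViewPortTop(1f - offset);
cam.update();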
I've looked into JME's source and found that the setOrtho() method uses the width and height of the renderer instead of the width and height of the camera's viewport:
- current source:
GLU.gluOrtho2D(0, width, 0, height);
- should it perhaps be:
float viewportWidth = width * (camera.getViewPortRight() - camera.getViewPortLeft());
float viewportHeight = height * (camera.getViewPortTop() - camera.getViewPortBottom());
GLU.gluOrtho2D(0, viewportWidth, 0, viewportHeight);
Of course, the same change would apply to the setOrthoCenter() method.
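To make the numbers concrete (example values only, same 16:9-in-a-4:3-window case as above): with an 800x600 window the viewport spans bottom = 0.125 to top = 0.875, so the ortho projection would cover the 450-pixel band instead of the full 600 pixels:

// width = 800, height = 600 (example values)
// getViewPortLeft() = 0.0, getViewPortRight() = 1.0
// getViewPortBottom() = 0.125, getViewPortTop() = 0.875
float viewportWidth = 800 * (1.0f - 0.0f);        // = 800
float viewportHeight = 600 * (0.875f - 0.125f);   // = 450 -> the 16:9 band
GLU.gluOrtho2D(0, viewportWidth, 0, viewportHeight);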
At first I thought the viewport's offset should be used as well, instead of 0, in the call to GLU.gluOrtho2D(). That did not work; adding a glTranslate() does the trick, but only for objects that are positioned using getScreenCoordinates().
I finally managed to find a solution, but I don't really understand why it works:
- keep the zeros in GLU.gluOrtho2D()
- no glTranslate()
- the part I don't understand, since the current source seems to be fine: change the getScreenCoordinates() method:
- current source:
store.x = ( ( tmp_quat.x + 1 ) * ( viewPortRight - viewPortLeft ) / 2 + viewPortLeft ) * getWidth();
store.y = ( ( tmp_quat.y + 1 ) * ( viewPortTop - viewPortBottom ) / 2 + viewPortBottom ) * getHeight();
- working version:
store.x = ( ( tmp_quat.x + 1 ) * ( viewPortRight - viewPortLeft ) / 2 ) * getWidth();
store.y = ( ( tmp_quat.y + 1 ) * ( viewPortTop - viewPortBottom ) / 2 ) * getHeight();
Here again, the same applies to the getWorldCoordinates() method.
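A quick numeric check of the two formulas (made-up values, same example as above: viewPortBottom = 0.125, viewPortTop = 0.875, getHeight() = 600, and a point in the middle of the view, so tmp_quat.y = 0) shows where the difference comes from:

// current source -> window coordinates (middle of the 600-pixel window)
float yCurrent = ( ( 0 + 1 ) * ( 0.875f - 0.125f ) / 2 + 0.125f ) * 600;   // = 300
// modified version -> coordinates in the 450-pixel ortho range (middle of the band)
float yModified = ( ( 0 + 1 ) * ( 0.875f - 0.125f ) / 2 ) * 600;           // = 225

So the original formula returns window coordinates, while the modified one returns coordinates that match the 0..450 range that gluOrtho2D() now sets up, which is probably why dropping the offset works once setOrtho() uses the viewport size.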
To sum it all up, here is what seems to be working:
public Vector3f getScreenCoordinates( Vector3f worldPosition, Vector3f store ) {
    if ( store == null ) {
        store = new Vector3f();
    }
    checkViewProjection();
    tmp_quat.set( worldPosition.x, worldPosition.y, worldPosition.z, 1 );
    modelViewProjection.mult( tmp_quat, tmp_quat );
    tmp_quat.multLocal( 1.0f / tmp_quat.w );
    store.x = ( ( tmp_quat.x + 1 ) * ( viewPortRight - viewPortLeft ) / 2 ) * getWidth();
    store.y = ( ( tmp_quat.y + 1 ) * ( viewPortTop - viewPortBottom ) / 2 ) * getHeight();
    store.z = ( tmp_quat.z + 1 ) / 2;
    return store;
}
public void setOrtho() {
    if (inOrthoMode) {
        throw new JmeException("Already in Orthographic mode.");
    }
    // set up ortho mode
    RendererRecord matRecord = (RendererRecord) DisplaySystem
            .getDisplaySystem().getCurrentContext().getRendererRecord();
    matRecord.switchMode(GL11.GL_PROJECTION);
    GL11.glPushMatrix();
    GL11.glLoadIdentity();
    float viewportWidth = width * (camera.getViewPortRight() - camera.getViewPortLeft());
    float viewportHeight = height * (camera.getViewPortTop() - camera.getViewPortBottom());
    GLU.gluOrtho2D(0, viewportWidth, 0, viewportHeight);
    matRecord.switchMode(GL11.GL_MODELVIEW);
    GL11.glPushMatrix();
    GL11.glLoadIdentity();
    inOrthoMode = true;
}
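For completeness, this is the kind of usage that shows the problem (a sketch only; the marker quad, the worldTarget point and rootNode are placeholders): a quad in the ortho queue placed at the screen position of a world-space point.

// Place a 32x32 ortho-queue quad at the screen position of a world point.
// worldTarget is whatever world-space point you want to tag on screen.
Quad marker = new Quad("marker", 32, 32);
marker.setRenderQueueMode(Renderer.QUEUE_ORTHO);
Vector3f screenPos = DisplaySystem.getDisplaySystem().getRenderer()
        .getCamera().getScreenCoordinates(worldTarget, null);
marker.getLocalTranslation().set(screenPos.x, screenPos.y, 0);
rootNode.attachChild(marker);

With the original setOrtho()/getScreenCoordinates() pair the marker drifts away from the target as soon as the viewport is smaller than the window; with the two changes above it lines up again, at least in my tests.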
Finally, width, height, getWidth() and getHeight() are used in lots of places, and I think it would be reasonable to check their use against the viewport's actual size.
Any suggestions on what should be done?