Renderer has a method createCamera(width, height), but the width and height appear to be kept in sync with the associated display. All of the uses I've examined construct the camera with the display's dimensions and then update the camera's fields whenever the display dimensions change. Are there other uses, or are these parameters unnecessary? Would it make more sense to eliminate the separate width and height fields and instead associate the display with the camera at construction time?
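To make the suggestion concrete, here is a minimal sketch of what "associate the display with the camera at construction" could look like: the camera takes the display in its constructor, reads the initial dimensions from it, and subscribes to resize events so no caller has to keep the fields in sync by hand. All of the classes and the listener interface below are illustrative stand-ins, not the engine's actual API.

```java
import java.util.ArrayList;
import java.util.List;

interface ResizeListener {
    void onResize(int width, int height);
}

// Hypothetical display that notifies listeners when its dimensions change.
class Display {
    private int width, height;
    private final List<ResizeListener> listeners = new ArrayList<>();

    Display(int width, int height) { this.width = width; this.height = height; }

    void addResizeListener(ResizeListener l) { listeners.add(l); }

    void resize(int w, int h) {
        width = w;
        height = h;
        for (ResizeListener l : listeners) l.onResize(w, h);
    }

    int getWidth()  { return width; }
    int getHeight() { return height; }
}

// Instead of createCamera(width, height), the camera is bound to a display
// and tracks its dimensions automatically.
class DisplayCamera implements ResizeListener {
    private int width, height;

    DisplayCamera(Display display) {
        this.width = display.getWidth();
        this.height = display.getHeight();
        display.addResizeListener(this);
    }

    @Override
    public void onResize(int w, int h) { width = w; height = h; }

    int getWidth()  { return width; }
    int getHeight() { return height; }
}
```

With this shape, resizing the display updates every bound camera in one place rather than at each call site.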
Hmm, maybe the separate dimensions are more useful for things like split-screen views?
Maybe you could also manipulate the camera to get a cool warp effect when your player eats some forbidden mushrooms?
It looks to me like the Camera can currently be viewed in two different ways:
- 1. The camera is the view into the world. Looking at the code base, only one camera can be defined for an individual renderer. Cameras can be provided for both on-screen and off-screen (i.e. TextureRenderer) surfaces, and OpenGL only allows a single viewport at a time.
- 2. An abstract representation of a camera view, including location, heading, etc. This mode is selected by a flag in the constructor or by calling setDataOnly(boolean), but neither option is available through the Renderer interface.
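The dual role described in case 2 can be sketched roughly as follows: the same class either pushes state to the rendering surface or acts as a plain data holder, depending on the data-only flag. The class name, the counter, and the "surface update" call are stand-ins for whatever the engine actually does; only the constructor flag and setDataOnly(boolean) come from the discussion above.

```java
// Illustration of the current dual-mode Camera: one class, two behaviors,
// selected by a flag rather than by separate types.
class DualModeCamera {
    private boolean dataOnly;
    private double heading;
    private int surfaceUpdates = 0;   // counts simulated pushes to the GL surface

    DualModeCamera(boolean dataOnly) { this.dataOnly = dataOnly; }

    void setDataOnly(boolean dataOnly) { this.dataOnly = dataOnly; }

    void setHeading(double heading) {
        this.heading = heading;
        if (!dataOnly) {
            // In the real engine this would touch viewport/projection state,
            // which is only legal on the OpenGL thread.
            surfaceUpdates++;
        }
    }

    double getHeading()     { return heading; }
    int getSurfaceUpdates() { return surfaceUpdates; }
}
```

The flag works, but it means every caller has to know which mode a given Camera instance is in, which is part of what the split proposed below would eliminate.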
If the height, width, viewport, etc. were all tied to the use of a Camera with a particular display (OpenGL surface), then that functionality could be pulled out of Camera and left with the rendering system. This would leave Camera with a single use (case 2), making it a reusable object abstracted away from the rendering system, with some form of Flyweight pattern applied when rendering (just like all of the Geometry implementations).
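A rough sketch of that split, under the assumptions above: the camera becomes pure view data, while per-surface state (width, height, viewport) lives with the renderer, which applies any camera to its surface at draw time in flyweight fashion. The class names here are illustrative, not the engine's.

```java
// Reusable, surface-independent camera data (case 2 only).
class CameraData {
    double posX, posY, posZ;
    double fovDegrees = 45.0;
}

// Per-surface rendering state: the dimensions belong here, not on the camera.
class SurfaceRenderer {
    private final int width, height;

    SurfaceRenderer(int width, int height) { this.width = width; this.height = height; }

    double aspectRatio() { return (double) width / height; }

    // Combine the shared camera data with this surface's dimensions.
    // In a real renderer this would set the GL viewport and projection matrix;
    // here it just reports what would be applied.
    String apply(CameraData cam) {
        return "viewport=" + width + "x" + height + " fov=" + cam.fovDegrees;
    }
}
```

The same CameraData instance can then be applied to an on-screen surface and a TextureRenderer surface without carrying either one's dimensions around.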
The benefits of this approach are a single Camera implementation with simple rendering-engine support, and the ability to construct and configure a Camera outside of the OpenGL thread. This is a step towards a thread-safe, background-loading ColladaImporter.
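The threading payoff can be sketched like this: because a data-only camera touches no OpenGL state, a loader thread can construct and configure it, then hand it to the render thread through a concurrent queue. The classes below are hypothetical; only the idea of off-GL-thread construction comes from the point above.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Pure data, safe to build on any thread.
class LoadedCamera {
    final double x, y, z;
    LoadedCamera(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

class CameraHandoff {
    // Render thread polls this; loader threads offer to it.
    final ConcurrentLinkedQueue<LoadedCamera> pending = new ConcurrentLinkedQueue<>();

    void loadInBackground() {
        Thread loader = new Thread(() -> {
            // No GL calls here, so this is legal off the OpenGL thread.
            pending.add(new LoadedCamera(0, 5, 10));
        });
        loader.start();
        try {
            loader.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

A background ColladaImporter could use the same pattern for any scene object that is pure data until the render thread binds it to a surface.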