Hello everyone. I am creating my first project with a player derived from Node. I have created a camera in the Player class and attached it as a child of the player using a CameraNode. After running the game I am not sure which camera is actually being used. Is there a concept such as a main camera? Do I need to disable the default camera that SimpleApplication provides? If so how?
I know that I can pass a reference to the camera down through my app state and then from there to the player, but I really don't like passing references around like that. I am hoping that I can just create a camera instance and activate it.
Thanks but I have seen that page and I am not looking to create multiple views. I simply want to use my own camera rather than the one provided by SimpleApplication. Seems strange to have a scene graph system where you cannot just attach a new working node without having to deal with a viewport and renderer.
While I can't tell many details from what you have provided, I get the feeling that you are doing this upside down.
Camera may follow the player but that doesn't mean the player needs to "have a" camera.
Edit: put another way, think of the camera as a global thing that the whole application shares… like the keyboard, the joysticks, the screen, the sound system.
Sorry but I do not think of a camera in that way. Having used other engines like Unity and Godot, the player having a camera is very common, so I find your comment that it is "backwards" a bit strange, since the camera is akin to the player's "head" in an OOP design. I would expect a scene graph system to support a simple camera node that one could instantiate and attach as a child, and I find it backwards that JME does not. Thanks for the reply though.
That makes me wonder how Unity and Godot would handle split screens or having cameras that jump around from role to role.
I've used a variety of engines and scene graphs over several decades and they all do cameras a little differently. (Including one scene graph where the camera had no position at all and you moved the scene around in front of it.)
In JME, a camera is a view into a viewport. It has projection, view frustum settings, etc… Position and orientation are almost secondary. It is a window into a frame buffer.
It sits somewhere between the two extremes of the systems that I've used over the years, but I think it lands in the most flexible spot. For example, it's pretty trivial to have 4-player split screen, assuming you have input that can handle that.
Unity/Godot must, under the covers, do some hacky globals stuff to wire your "created on the fly in a leaf" style "camera" to the "physical rendering to the screen" stuff. You could implement some similar hacky-globals stuff with JME, too… it just doesn't provide it out of the box. (Similar to keyboard, mouse, etc.)
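For what it's worth, a two-player version of the split screen mentioned above can be sketched roughly like this (the viewport coordinates and the view name are illustrative; `Camera.setViewPort` and `RenderManager.createMainView` are standard JME calls):

```java
// Sketch: two-player horizontal split screen in a SimpleApplication subclass.
// The exact coordinates and names here are just illustrative.
@Override
public void simpleInitApp() {
    // Player 1 uses the default camera, restricted to the left half.
    cam.setViewPort(0f, 0.5f, 0f, 1f);

    // Player 2 gets a clone of the camera, restricted to the right half.
    Camera cam2 = cam.clone();
    cam2.setViewPort(0.5f, 1f, 0f, 1f);

    ViewPort view2 = renderManager.createMainView("Player 2 View", cam2);
    view2.setClearFlags(true, true, true);
    view2.attachScene(rootNode);  // both views render the same scene
}
```

Each player is just another camera plus another viewport; nothing about the scene graph itself has to change.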
I can't say how other engines work under the hood, but I can say that Unity, Godot, libGDX and a few others that I have used do not require you to use a global camera and pass references around. I am still learning JME, and I am having a hard time adjusting to the opinionated design of SimpleApplication. I dislike having to pass references down two or three levels, and a camera doesn't seem like it should be a global object to me. I guess I could study the SimpleApplication and LegacyApplication code and implement my own starter app that fits my coding style a bit better. Godot really gets this right in the sense that everything is a node, including the camera. I imagine that to make this work, only one camera is the main (active) camera at any one time.
But then you'd have to pass down the viewport to tell the camera where to render to (whether the camera is rendering into the main screen viewport or any other developer-defined viewport). The idea of instantiating a camera and having it magically connect to some viewport seems very anti object-oriented. Which is, I guess, fine; object-oriented isn't the only approach, but it is the Java approach, which makes JME feel very natural to Java developers (and I feel I would dislike Godot and Unity). And what if someone instantiates two cameras?
If you really want a statically accessible camera you can put the camera in a static variable. It's a bad idea in my opinion, but you can.
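A minimal sketch of that static-variable approach (the class name GlobalCamera is made up; in a real app the field would hold a com.jme3.renderer.Camera):

```java
// Hypothetical static holder for "the" camera. Convenient, but it is a
// global in disguise, which is exactly the trade-off being discussed.
public final class GlobalCamera {
    // In a real JME app this would be typed as com.jme3.renderer.Camera;
    // Object keeps this sketch framework-free.
    private static Object camera;

    private GlobalCamera() {}

    public static void set(Object cam) { camera = cam; }

    public static Object get() { return camera; }
}
```

Your application could call GlobalCamera.set(cam) once at startup and the player code could read it from anywhere, at the cost of hidden coupling.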
I'd suggest giving JME a go and deciding if you like its approach; lots of people do, but there are many engines because different people like different approaches.
[As an aside: I wonder how VR works in this sort of approach. I was able to write a VR library with JME having no idea I was doing it; it was all just cameras and viewports.]
What do they do when you create 100 different cameras in your scene graph? How does it know which one to actually render to the screen?
Is the keyboard also a node? Are joysticks also nodes? My point is that the disconnect is about what the two points of view are calling a "camera".
In the end, there is only one global GPU. Something has to associate what you are calling a "camera" with something that manages that global GPU… probably with a framebuffer, viewport, etc. setup. JME calls this a "camera"… because cameras take pictures… and there is only one roll of film in this case.
Other engines may call something else a "camera"… meaning the position and orientation of the otherwise definitely global "thing that takes pictures".
From the perspective of a scene graph, it also makes no sense for the "thing that takes pictures" to be part of the scene graph. It does not participate in culling, it does not get "lit", it does not get rendered, it does not factor into bounding shape calculation, etc…
Engines that put the "thing that takes pictures" into the scene, rather than just the tripod's position and location, create a dozen problems for themselves that they must then hack around internally. Not the least of which is what happens when you decide to move the "thing that takes pictures" to a different tripod… and want a bunch of different tripods with some carefully managed "things that take pictures".
I've worked on and contributed code to engines that put the "thing that takes pictures" right into the scene, and invariably there is a bunch of special-case code that disappears if you only put the tripod into the scene.
I'm not familiar with Godot, so I picked a random explanation of using a camera in a 3D scene in Godot:
And correct me if I'm wrong, but there's not much difference between the way Godot is using a camera and JME. In this clip:
The user creates a sub-node attached to the player object. He calls it "CameraPositionTarget"
The CameraPositionTarget is manipulated however the user decides
The CameraPositionTarget is then linked to the scene's main camera node as a target
The main camera has custom code moving it towards that target
So one way to do that in JME would be:
Create a sub-node attached to the player object. Call it "CameraPositionTarget"
Create a camera AppState accepting your CameraPositionTarget in its constructor. Attach this AppState to the scene.
In your AppState, update the camera position based on the CameraPositionTarget node
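The steps above could look roughly like this (a sketch; the FollowCameraState name and the easing factor are made up, while BaseAppState, Camera and Node are the actual JME classes):

```java
import com.jme3.app.Application;
import com.jme3.app.state.BaseAppState;
import com.jme3.math.Vector3f;
import com.jme3.renderer.Camera;
import com.jme3.scene.Node;

// Moves the application's main camera toward the CameraPositionTarget node.
public class FollowCameraState extends BaseAppState {

    private final Node target;   // the "CameraPositionTarget" from the steps above
    private Camera cam;

    public FollowCameraState(Node target) {
        this.target = target;
    }

    @Override
    protected void initialize(Application app) {
        this.cam = app.getCamera();  // the one camera the application provides
    }

    @Override
    public void update(float tpf) {
        Vector3f goal = target.getWorldTranslation();
        // Ease toward the target instead of snapping to it.
        Vector3f pos = cam.getLocation().clone();
        pos.interpolateLocal(goal, Math.min(1f, tpf * 5f));
        cam.setLocation(pos);
        cam.lookAt(goal, Vector3f.UNIT_Y);
    }

    @Override protected void cleanup(Application app) {}
    @Override protected void onEnable() {}
    @Override protected void onDisable() {}
}
```

Attach it with stateManager.attach(new FollowCameraState(cameraPositionTarget)) after creating the target node under the player; the player never needs a camera reference at all.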
So I find both engines do it quite similarly. At least in this clip, the user didn't create a new Camera node for the player. They created a camera position node (just a regular node) and used that as a target for the main camera.
Also, I would not relate the way you are using the camera to OOP. All entities, camera included, are objects, and it is for you to decide which patterns to use with your objects. Treating your player's "head" as the "camera" doesn't make the design more object-oriented.