From the YouTube link below, I notice that for VR development you would probably be using two jMonkeyEngine “viewports” in this case.
My question is: since you would be displaying one viewport on the left and another on the right, wouldn’t it be more efficient to render each viewport at a lower resolution?
For example, the same game without VR would use one viewport at 100 percent resolution. With VR, it would instead use two viewports, each rendered at 50 percent resolution (one for the left eye, one for the right).
Would it also be more efficient to use different sets of levels of detail (LOD) when programming with two viewports for VR?
Did you know that jME already has VR support through the jme3-vr library?
It can handle the viewports for you, so you don’t have to care about them most of the time.
It gets a bit fiddly when working with filters. For some filter types you only need one Filter instance for both viewports. For others, which depend on the view or the depth buffer, you have to create separate instances.
And yes, the VR goggles are one big display, and jME renders to it with two viewports/frame buffers, each with half the width.
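If you’re curious what that split looks like without jme3-vr, here’s a minimal manual sketch (assuming the usual SimpleApplication fields cam, renderManager and rootNode; jme3-vr normally does all of this for you):

```java
import com.jme3.renderer.Camera;
import com.jme3.renderer.ViewPort;

// Two cameras cloned from the default one, each covering half the window.
Camera leftCam = cam.clone();
leftCam.setViewPort(0.0f, 0.5f, 0.0f, 1.0f);   // left half of the screen

Camera rightCam = cam.clone();
rightCam.setViewPort(0.5f, 1.0f, 0.0f, 1.0f);  // right half of the screen

// One main ViewPort per eye, both rendering the same scene graph.
ViewPort leftView = renderManager.createMainView("LeftEye", leftCam);
leftView.setClearFlags(true, true, true);
leftView.attachScene(rootNode);

ViewPort rightView = renderManager.createMainView("RightEye", rightCam);
rightView.setClearFlags(true, true, true);
rightView.attachScene(rootNode);
```

The setViewPort fractions are relative to the window, so each eye ends up with half the pixel width automatically.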
jme3-vr uses the resolution provided by the VR hardware. In my case with the Vive it’s 1763x1959 per eye; I just checked. That resolution is set by SteamVR, either manually or automatically depending on some performance criteria, and it’s fixed in the code, so you can’t change it without modifying jme3-vr (or maybe there’s a hacky way).
Regarding LOD: if you also want to run your program in non-VR mode, it could make sense. But if you mean a different LOD level for each viewport, I don’t think so. That would actually break the stereoscopic image, because each eye could get a different LOD level for the same object.
Do whatever you need to get 90 FPS (or the minimum for your hardware). If frames take longer than 1/90 s, the hardware will throttle you down to 45 FPS because it has to sync frames with the tracking!
Thank you so much. Now, if you do not mind, one more question.
For my learning experience, let’s take the example of a non-VR game that divides the monitor screen into left and right halves. Since this game would display identical things in the left and right viewports, am I correct in thinking that a jMonkeyEngine Filter instance can do this type of work for me?
Yes: two cameras that point in the same direction but are located 0.06–0.07 m apart from each other (the distance between the eyes: IPD = interpupillary distance). This creates the stereoscopic 3D effect and allows you to perceive depth.
If you can, try to experiment with this camera distance. It can make a huge difference to the 3D impression and also to ergonomics. The eyes will adjust to a wrong distance, but it’s exhausting.
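In jME terms, the eye offset could be sketched like this (the 0.065f default and the leftCam/rightCam names are assumptions for illustration; the two cameras are clones of the main camera, as above):

```java
import com.jme3.math.Vector3f;

// Typical default IPD of ~65 mm; ideally make this user-configurable.
float ipd = 0.065f;

Vector3f head = cam.getLocation();
Vector3f left = cam.getLeft(); // unit vector pointing to the camera's left

// Shift each eye camera half the IPD away from the head position.
leftCam.setLocation(head.add(left.mult(ipd / 2f)));
rightCam.setLocation(head.add(left.mult(-ipd / 2f)));

// Same rotation for both cameras, so the view directions stay parallel.
leftCam.setRotation(cam.getRotation());
rightCam.setRotation(cam.getRotation());
```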
Note that you may have to adjust the cameras’ frustums to deliver a correct perception. Each frustum is slightly skewed towards the center, that is, frustumLeft and frustumRight are slightly different. But I don’t know if it’s really necessary or what the values are. They probably depend on the lenses. OpenVR has functions that provide a complete projection matrix for the cameras, and jme3-vr uses these.
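For what it’s worth, a manual version of that skew could look something like the sketch below. All the numbers here are made up for illustration; the real per-eye projection comes from the HMD (via OpenVR, which jme3-vr queries for you):

```java
import com.jme3.math.FastMath;

float near = 0.1f, far = 1000f;

// Assumed 90-degree vertical FOV and square per-eye aspect -- illustration only.
float halfH = near * FastMath.tan(FastMath.DEG_TO_RAD * 90f / 2f);
float halfW = halfH;
float skew  = 0.1f * halfW; // made-up inward shift toward the screen center

// setFrustum(near, far, left, right, top, bottom):
// Left eye: frustum shifted right, toward the center of the display.
leftCam.setFrustum(near, far, -halfW + skew, halfW + skew, halfH, -halfH);
// Right eye: mirrored shift to the left.
rightCam.setFrustum(near, far, -halfW - skew, halfW - skew, halfH, -halfH);
```

Note that the skew only shifts each frustum sideways; both keep the same width, so the two images stay the same scale.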