I have been trying to use multiple cameras in my JME3 application. The rub, however, is that each extra camera has a different resolution, aspect ratio, and FOV. What I am doing is: at an instant in time (key press), all camera images are rendered offscreen (using the RenderToMemoryTest as a basis) and displayed in a JFrame. Unfortunately, it seems that any camera not close to the startup size of the JME3 application renders incorrectly: the image is spliced in half vertically, that one half is rendered twice, and segmented empty space shows up as vertical bars.
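For reference, here is roughly how I set up each offscreen camera and its framebuffer, following the RenderToMemoryTest pattern (the sizes, names, and scene node are just examples from my setup, and this is a trimmed sketch rather than my exact code):

```java
// Inside simpleInitApp() of a SimpleApplication subclass.
// One offscreen camera, sized independently of the main window
// (640x480 here is just an example; my real cameras vary).
Camera offCam = new Camera(640, 480);
offCam.setFrustumPerspective(45f, 640f / 480f, 1f, 1000f);
offCam.setLocation(new Vector3f(0f, 0f, 10f));
offCam.lookAt(Vector3f.ZERO, Vector3f.UNIT_Y);

// Framebuffer matching that camera's resolution.
FrameBuffer offBuffer = new FrameBuffer(640, 480, 1);
offBuffer.setDepthBuffer(Image.Format.Depth);
offBuffer.setColorBuffer(Image.Format.RGBA8);

// A pre-view on the shared RenderManager, outputting to the framebuffer.
ViewPort offView = renderManager.createPreView("Offscreen View", offCam);
offView.setClearFlags(true, true, true);
offView.setBackgroundColor(ColorRGBA.DarkGray);
offView.setOutputFrameBuffer(offBuffer);
offView.attachScene(rootNode);

// On the key press: read the pixels back for display in the JFrame.
ByteBuffer cpuBuf = BufferUtils.createByteBuffer(640 * 480 * 4);
renderer.readFrameBuffer(offBuffer, cpuBuf);
```

Each of my extra cameras gets its own Camera, FrameBuffer, and ViewPort like the above, all driven by the one RenderManager.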
My understanding is that since the same RenderManager is used for all the viewports of every camera, the same renderer is used as well, namely the one sized for the JME3 application. My first thought was to resize the renderer just for the instant I need it, but I have not been able to find out how to do that. My second thought was to create an independent renderer/RenderManager for each camera viewport, though I have read in these forum threads that that would be a very bad thing to do.
Thank you for your time.