I have read most of the previous topics on how to create a scrollable panel. There seems to be a limitation around clipping, so the suggested solutions are either:
- make use of viewports (there is a demo that implements ViewportPanels),
- or calculate programmatically what will be displayed. Apparently the latter is the approach already used by Lemur's scrollable components.
I don’t like either: the use of viewports seems hacky, and at best it will introduce more limitations. One limitation I can think of off the top of my head is that each viewport needs its own node tree, so I would have to coordinate multiple trees.
As for the programmatic approach, I don’t think it will work in my case (in some other places I use it and it’s great), because I want scrolling on both axes for text that has no wrapping. I would like to avoid calculations such as how many characters fit in the scroll panel.
So then I wondered: how is Nifty GUI able to do it? The answer is clipping! Since Nifty GUI is not tightly coupled to jME, it has much more control over when to enable and disable clipping.
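To make the clipping idea concrete: scissor-style clipping restricts drawing to a screen-space rectangle, so a scroll panel only ever needs the intersection of its own bounds with the scrolled content's bounds, no per-character math required. A minimal, self-contained sketch (all class names here are hypothetical, not jME or Nifty API):

```java
// Hypothetical illustration only: scissor-style clipping restricts drawing to
// a screen-space rectangle, so a scroll panel just needs the intersection of
// the panel's bounds and the (scrolled) content's bounds.
public class ClipRectDemo {

    /** A simple screen-space rectangle: origin (x, y) plus size. */
    record Rect(int x, int y, int width, int height) {

        /** Intersection of two rectangles, or null if they do not overlap. */
        Rect intersect(Rect other) {
            int x1 = Math.max(x, other.x);
            int y1 = Math.max(y, other.y);
            int x2 = Math.min(x + width, other.x + other.width);
            int y2 = Math.min(y + height, other.y + other.height);
            if (x2 <= x1 || y2 <= y1) {
                return null; // content fully scrolled out of view
            }
            return new Rect(x1, y1, x2 - x1, y2 - y1);
        }
    }

    public static void main(String[] args) {
        // A 200x100 panel at (50, 50); the content is a 400x300 text block
        // whose scrolled origin currently sits at (10, 20).
        Rect panel = new Rect(50, 50, 200, 100);
        Rect content = new Rect(10, 20, 400, 300);

        // The region the renderer would be allowed to touch.
        System.out.println(panel.intersect(content));
    }
}
```

The unwrapped text can then be laid out at its natural size and merely translated when the user scrolls; the clip rectangle does the rest.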
On the other hand, even though jME provides clipping methods in the Renderer class, it uses a very specific and strict flow, implemented in RenderManager, that does not allow any intervention before or after the rendering of a Geometry object.
The good news is that all this digging helped me understand a lot about how jME works under the hood, and I finally made sense of @pspeed’s quote: “JME provides no way to do clipping so Lemur cannot implement a proper scroll panel without a lot of caveats”. Actually, jME does provide clipping; the problem is that you cannot ‘inject’ it into the parts of the flow where you want it. Btw, RenderManager needs a refactor. Too many things are going on there.
That’s clearly a huge JME limitation and not a Lemur limitation. However, the purpose of this topic is to help improve things.
So what I suggest is the implementation of something similar to SceneProcessors, but per Geometry instead of per ViewPort. More specifically, each Geometry object (or even each Spatial) would carry the implementation of an interface with a preRender and a postRender method, so the developer can customize the renderer before a spatial is rendered and then restore it to its original state afterwards.
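A rough, self-contained sketch of what I have in mind. All names here are hypothetical (this is not a proposed final API), and the Renderer below is a stub standing in for jME's com.jme3.renderer.Renderer, which really does expose setClipRect/clearClipRect. The point is only to show where the hooks would sit in the render flow:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: per-Geometry pre/post render hooks, analogous to
// SceneProcessors but at Geometry granularity. Nothing here is real jME API
// except the setClipRect/clearClipRect method names, which mirror Renderer's.
public class GeometryHookSketch {

    /** Stub for the real Renderer; records calls so the flow is visible. */
    static class Renderer {
        final List<String> calls = new ArrayList<>();
        void setClipRect(int x, int y, int w, int h) { calls.add("setClipRect"); }
        void clearClipRect()                         { calls.add("clearClipRect"); }
        void drawGeometry(Geometry g)                { calls.add("draw:" + g.name); }
    }

    /** The proposed per-Geometry hook. */
    interface GeometryRenderHandler {
        void preRender(Renderer r);   // customize renderer state
        void postRender(Renderer r);  // restore it to its original state
    }

    /** Minimal stand-in for a scene-graph Geometry carrying a handler. */
    static class Geometry {
        final String name;
        GeometryRenderHandler handler; // null = render exactly as today
        Geometry(String name) { this.name = name; }
    }

    /** Roughly what RenderManager's geometry rendering could look like. */
    static void renderGeometry(Renderer r, Geometry g) {
        if (g.handler != null) g.handler.preRender(r);
        r.drawGeometry(g);
        if (g.handler != null) g.handler.postRender(r);
    }

    public static void main(String[] args) {
        Renderer r = new Renderer();
        Geometry text = new Geometry("scrolledText");
        // A scroll panel could clip its children like this:
        text.handler = new GeometryRenderHandler() {
            public void preRender(Renderer r)  { r.setClipRect(50, 50, 200, 100); }
            public void postRender(Renderer r) { r.clearClipRect(); }
        };
        renderGeometry(r, text);
        System.out.println(r.calls); // [setClipRect, draw:scrolledText, clearClipRect]
    }
}
```

Because the handler is null by default, existing scenes would render exactly as before; only geometries that opt in pay any cost.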
I think this idea is compatible with the existing architecture and vision of jME. I would very much appreciate your input on this. If you agree, I’m willing to contribute, because in any case I’d rather maintain my own fork of jME than make compromises. So why not make a PR as well?
PS: I’m not sure this topic belongs in the Lemur category, since it does not describe a Lemur issue, but the goal is Lemur-related. Feel free to suggest that I move it, or move it yourselves.