Another scroll panel topic

I have read most of the previous topics on how to create a scrollable panel. There seems to be a limitation around clipping, so the suggested solutions are either:

  • make use of viewports (there is a demo that implements ViewportPanels)
  • or calculate programmatically what will be displayed; apparently the latter is the approach already used by Lemur's scrollable components.

I don't like either. The use of viewports seems hacky and, best case scenario, it will introduce more limitations. One of the limitations I can think of off the top of my head is that each viewport needs to have its own node tree, so I would have to coordinate multiple trees.
As for the programmatic approach, I don't think it will work in my case (I use it in some other places and it's great) because I want scrolling on both axes for text that has no wrapping. I would like to avoid calculations such as how many letters fit in the scroll panel.

So then I wondered: how is Nifty GUI able to do it? The answer is clipping! Since Nifty GUI is not tightly dependent on JME, it has much more control over when to enable and disable clipping.
On the other hand, even though JME provides clipping methods in the Renderer class, it uses a very specific and strict flow, implemented in RenderManager, that does not allow any intervention before and after the rendering of a Geometry object.

The good news is that all this digging helped me understand a lot about how JME works under the hood, and I finally made sense of @pspeed's quote: “JME provides no way to do clipping so Lemur cannot implement a proper scroll panel without a lot of caveats”. Actually, JME does provide clipping. The problem is that you cannot ‘inject’ it into the parts of the flow where you want it. By the way, RenderManager needs a refactor; too many things are going on there.

That's clearly a huge JME limitation and not a Lemur limitation. However, the purpose of this topic is to help improve things.

So what I suggest is the implementation of something similar to SceneProcessors, but per Geometry instead of per ViewPort. More specifically, each Geometry object (or even Spatial) would carry an implementation of an interface with a preRender and a postRender method, so the developer would be able to customize the renderer before a spatial is rendered and then restore it to its original state afterwards.
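A minimal sketch of what such a hook could look like (every name here is hypothetical, not existing JME API; `MockRenderState` merely stands in for whatever renderer state JME would actually expose):

```java
// Hypothetical per-spatial render hook; none of these types exist in JME today.
interface GeometryRenderHook {
    void preRender(MockRenderState state);   // customize the renderer before the draw call
    void postRender(MockRenderState state);  // restore the renderer afterwards
}

// Stand-in for the renderer state the hook would manipulate.
class MockRenderState {
    boolean clipEnabled = false;
}

// Example implementation: enable clipping around one geometry's draw call.
class ClipHook implements GeometryRenderHook {
    @Override
    public void preRender(MockRenderState state) { state.clipEnabled = true; }

    @Override
    public void postRender(MockRenderState state) { state.clipEnabled = false; }
}
```

The renderer would then call `preRender` just before submitting the geometry's draw call and `postRender` right after, which is exactly the injection point missing from the current RenderManager flow.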

I think this idea is compatible with the existing architecture and vision of JME. I would very much appreciate your input on this. If you agree, I am willing to contribute, because in any case I would rather maintain my own fork of JME than make compromises, so why not turn that work into a PR as well.

PS: I'm not sure this topic belongs in the Lemur category, since it does not describe a Lemur issue, but the goal is Lemur related. Feel free to suggest that I move it, or move it yourselves.

Indeed.

There is a third solution, too: render to texture. It has the same downsides you mention for viewports… though it does have the added benefit of being way more flexible than simple clip rectangles. (I used this for a UI in curled book pages once.)

And maybe a fourth using tricks with scissor tests or something.

The tricky part is making it cost 0 when not used.

Note that control already supports a “prerender” of sorts… but it happens even before queuing.

There are some tricky parts to all of this as I and others have tried to wedge clipping in before.

Tricky parts:

  1. only geometry is actually rendered. It’s the only thing that makes it into the queues where you could actually set up clipping and disable it again. So any clipping done for a Node would need to somehow propagate down to the geometry.
  2. rendering may happen multiple times with different “techniques” so a general “pre-render” and “post-render” becomes trickier to shoehorn in.
  3. (and this one was the killer for a lot of approaches) a single Geometry instance can actually be rendered in any number of viewports.

That last one is especially tricky because a particular clip shape is going to be viewport-specific.

I actually consider this issue to show a fundamental issue with jMonkeyEngine’s internal design. Namely that the render queue is only geometry.

Going to the other end of the spectrum, there are scene graphs that operate by having a separate “draw graph”. During traversal, the scene graph is converted to the draw graph and the draw graph is what is rendered. This is a simplified and flattened view of “just what affects rendering”… so nothing clipped, everything already transformed and sorted, etc… but importantly, state changes (like setting the clip) can be a part of this graph.

JME went a different and simpler way.

This deficiency doesn’t come up often. And when it does, JME works around it in a variety of creative ways.

But for example, if we imagine a render queue that could hold not just geometry but render state changes then these could be pushed into the queue as the scene graph was being traversed to build the geometry queue. (Ignore sorting for the moment.) So the list box node could push a clip rectangle state change before its children were added to the queue and then reset it back after.

Because of sorting, these are kind of really their own nested queues, though. So the current geometry list would be a potential graph of geometry lists.
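As a toy illustration of that idea (purely a sketch under assumed names; JME's real queues hold only Geometry), a queue could interleave draw entries with clip push/pop entries, and replaying it would track which clip is active for each draw:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of a render queue that holds state changes alongside geometry.
// All names are illustrative, not existing JME API.
class StateAwareQueue {
    private final List<String> entries = new ArrayList<>();

    void pushClip(String rect) { entries.add("push:" + rect); }
    void popClip()             { entries.add("pop"); }
    void draw(String geom)     { entries.add("draw:" + geom); }

    // Replay the queue, recording which clip was active for each draw call.
    List<String> execute() {
        Deque<String> clips = new ArrayDeque<>();
        List<String> log = new ArrayList<>();
        for (String e : entries) {
            if (e.startsWith("push:")) {
                clips.push(e.substring(5));
            } else if (e.equals("pop")) {
                clips.pop();
            } else {
                String clip = clips.isEmpty() ? "none" : clips.peek();
                log.add(e.substring(5) + "@" + clip);
            }
        }
        return log;
    }
}
```

A list box node would `pushClip` before queuing its children and `popClip` after, so only its children are affected while siblings queued outside that span render unclipped.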

It’s a non-trivial “kind of thing you do if redesigning the engine over again” sort of change… but it’s one of the things on my list if I ever feel like rewriting things from scratch. :wink: (in my copious spare time)

I only bring it up to get folks thinking about the problem and the fundamental limitations that make this hard.

Nifty gets away with what it does because it’s a scene processor on its own. It’s effectively completely outside of the scene graph… and that has its own limitations that I couldn’t tolerate when designing Lemur.

The last time I investigated this, I concluded the best way to implement clipping (for GUI) would be with a custom material:


Thank you very much for the quick reply. I've been thinking about your ideas. I don't have a deep understanding of game engines, but I think I see what you mean.

I'm not aware of this. Is this a concept similar to stencils?

I found a reference to this in OGUtils. It seems low level, but maybe it's worth checking out.

It would just be a null check or something like that, no? The tricky parts mentioned below seem more serious, though.

Oh yes! I saw that. The problem is that, as you said, it happens too soon, for all controls, before anything is rendered. Actually, I was inspired by this. I thought: what if there was something like a control that adds rendering aspects on top of a geometry object? Something like that.

The draw graph seems interesting. It looks like a more data-driven approach to me, since you have a data structure that describes how things will be rendered. If this were implemented, then the conversion between the node graph and the draw graph could be controlled by the developer, right?

No, it’s just a different way for the engine to work internally. The down side is lots of kind-of duplicate data that needs to be maintained every time the view updates (depending on how far you take it). The up side is that you get lots of places to hook things in since state changes are outside of what JME calls spatials.

This is a deeper conversation but in the “jme point of view”, the idea feels a little ridiculous because of the “mode” that JME operates in with respect to when world transforms are updated. (Usually, in a render graph the transforms from spatial to camera are part of the render graph and the spatials never keep their own “world transforms” and it’s recalculated only when asked for.)

But I still think the idea is interesting even if only steps in that direction are taken. (I’ve actually come to get used to JME caching the world transforms of every spatial and like it… so the back of my mind is always curious if there is some sweet spot between the two extremes of a ‘geometry is king’ render queue and a traditional ‘all state switching is a node’ render graph.)

I will throw in a shader-based clip rect as a possible way. With the use of MatParamOverrides it should be possible.

You can get fancy and use the stencil buffer to support 3D transformations if required. Reverting the stencil in a tree-like data structure is probably going to be quite complicated, though.

Interesting topic btw.


So based on @sgold and @zzuegg’s answers, does this mean that implementing a clipping shader is the way to go?

If yes then how is this new shader applied as a continuation of the existing one? (I just learned what a shader is today)

Oh, my friend, you just dipped your toe into an entire universe of frustration and wonder. :slight_smile:


JME’s MatParamOverrides allow you to overcome the exact limitation you discovered yourself: they let you set a material parameter for all children.

Here is how I would start:

- make a copy of Unshaded.j3md and of the vert and frag shaders referenced there
- modify the new j3md to use the copied shaders
- use the new material
- validate that everything is working by outputting a color
- add the material parameters you need
- set up the overrides in Java
- validate the parameters by outputting them through the written color
- implement the discard logic

Initially it is a strange concept how everything is wired together, but since you have already dug through JME's rendering flow you will probably pick it up fast.
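To make the “applies to all children” part of MatParamOverrides concrete, here is a toy stand-in (illustrative classes only; the real JME types are `MatParamOverride` and `Spatial.addMatParamOverride()`): a parameter forced on an ancestor wins over a child's own material parameter.

```java
import java.util.HashMap;
import java.util.Map;

// Toy scene node: resolves a parameter by checking overrides on itself and
// its ancestors first, then falling back to its own "material" parameter.
// This only mirrors the spirit of JME's MatParamOverride mechanism.
class ToyNode {
    final ToyNode parent;
    final Map<String, Object> params = new HashMap<>();    // local material params
    final Map<String, Object> overrides = new HashMap<>(); // forced on this subtree

    ToyNode(ToyNode parent) { this.parent = parent; }

    Object resolve(String name) {
        // Walk upward: the nearest override in the ancestor chain wins.
        for (ToyNode n = this; n != null; n = n.parent) {
            if (n.overrides.containsKey(name)) {
                return n.overrides.get(name);
            }
        }
        return params.get(name);
    }
}
```

So setting a clip-rect override on the scroll panel's node would force the parameter onto every child geometry, BitmapText included, without touching their materials individually.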

@zzuegg thanks for the guidance. I wanted to make sure that I couldn't avoid creating new material files. So I created a copy of Unshaded.frag and added the clipping logic at the end. Something like this:

...
uniform bool m_EnableClipping;
uniform vec2 m_ClipMin; // bottom-left corner of the clipping rectangle
uniform vec2 m_ClipMax; // top-right corner of the clipping rectangle

void main() {
    ...

    if (m_EnableClipping) {
        // Check if the fragment is outside the clipping rectangle
        if (gl_FragCoord.x < m_ClipMin.x || gl_FragCoord.x > m_ClipMax.x ||
            gl_FragCoord.y < m_ClipMin.y || gl_FragCoord.y > m_ClipMax.y) {
            discard; // clip the fragment (skip rendering it)
        }
    }
    gl_FragColor = color;
}

This is working pretty well! And it is indeed applied to children. The problem is that my new material is not applied as the new Unshaded material to existing implementations like BitmapText. Is there a registry (or something like that) provided by JME so I can register my new material file as Unshaded? Or do all material creations depend on the file paths?

It looks interesting but painful indeed :grin:

Basically, yes.

It’s relative file paths, though… relative to the assets folder or resources or whatever… so a lot of times if you put your material in the same directory structure then it will be used.

For Lemur, the alternative is to extend Lemur’s GuiGlobals to use your own material in createMaterial() when “lit” is false. GuiGlobals allows initializing it with your own subclass.

Simplifying this a bit, I recommend a single vec4 instead of two vec2… then add a HAS_CLIP in the defines section of your j3md… like if you name your vec4 m_ClipRect:

HAS_CLIP: ClipRect

Then in your .frag you can do:

#ifdef HAS_CLIP
#endif

…instead of the real branching.
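Putting the two suggestions together, the clipping section of the .frag might look something like this (a sketch, not tested code; it assumes the vec4 packs the rectangle as (minX, minY, maxX, maxY) in window coordinates):

```glsl
#ifdef HAS_CLIP
    uniform vec4 m_ClipRect; // (minX, minY, maxX, maxY) in window coordinates
#endif

void main() {
    // ... existing Unshaded logic producing 'color' ...

    #ifdef HAS_CLIP
        if (gl_FragCoord.x < m_ClipRect.x || gl_FragCoord.y < m_ClipRect.y ||
            gl_FragCoord.x > m_ClipRect.z || gl_FragCoord.y > m_ClipRect.w) {
            discard;
        }
    #endif

    gl_FragColor = color;
}
```

When `ClipRect` is never set, `HAS_CLIP` stays undefined and the compiled shader contains no clipping code at all, which also addresses the “cost 0 when not used” concern from earlier in the thread.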

I “think” it should still work even with material parameter overrides.


Yeap! Using the same path works. Maybe not ideal, but not too bad either.