@pspeed Can I pick your brain for a sec? This inspired me to try a bit of this just to see what's involved, and I was pleasantly surprised at how easy it was to put together an element that handles different mouse states, drag/drop and resizing. What really piqued my interest was the ability to use lighting, shaders and particle emitters as part of the GUI.
So, given that I spent a total of 30 minutes playing with this and aside from being extremely pleased with how easy it would be to put something usable together… I came across a few things I’m not sure what to do with.
1. Using JME's ActionListener and AnalogListener, how would one keep events from filtering down to objects lower in the z-order? Or are you using a different event system?
I use a raw input listener that casts a ray into a list of scene roots… viewports basically. It detects whether to treat it as ortho or perspective and does the appropriate thing. It delivers the events in closest to farthest order, stopping if the event is consumed. I also track which spatial has the “capture” and deliver that information as part of the events and make sure to continue delivering mouse motion events and button release events to the captured control even if the mouse is not over it.
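The delivery scheme above can be sketched in plain Java. This is a hypothetical stand-in, not pspeed's actual code: pick results are sorted nearest-to-farthest, delivery stops when an element consumes the event, and the element holding the "capture" is always delivered to, even when the cursor is elsewhere. The `Hit` class and `dispatch` method are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of consumed-event dispatch with mouse capture.
public class PickDispatchSketch {

    // Minimal stand-in for a picked GUI element with a hit distance.
    static class Hit {
        final String name;
        final float distance;
        final boolean consumes;   // whether this element's listener consumes the event
        Hit(String name, float distance, boolean consumes) {
            this.name = name; this.distance = distance; this.consumes = consumes;
        }
    }

    // Returns the names of elements that actually saw the event, in delivery order.
    static List<String> dispatch(List<Hit> hits, String captured) {
        List<Hit> sorted = new ArrayList<>(hits);
        sorted.sort((a, b) -> Float.compare(a.distance, b.distance));
        List<String> delivered = new ArrayList<>();
        for (Hit h : sorted) {
            delivered.add(h.name);
            if (h.consumes) {
                break;            // consumed: nothing lower in the z-order sees it
            }
        }
        // The element holding the "capture" always gets the event, even if
        // the cursor is no longer over it (e.g. mid-drag).
        if (captured != null && !delivered.contains(captured)) {
            delivered.add(captured);
        }
        return delivered;
    }

    public static void main(String[] args) {
        List<Hit> hits = new ArrayList<>();
        hits.add(new Hit("background", 10f, false));
        hits.add(new Hit("button", 2f, true));
        hits.add(new Hit("panel", 5f, false));
        System.out.println(dispatch(hits, null));        // [button]
        System.out.println(dispatch(hits, "dragged"));   // [button, dragged]
    }
}
```

In the real thing the dispatch would hang off a JME `RawInputListener` and the hits would come from ray casts into the viewport roots, but the ordering and consume/capture logic is the essence of why events stop filtering down.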
This is code I’ve been using for a while. It has no external dependencies and is one app state, one control, and an event listener interface. I could probably just post them separately.
2. Have you gotten as far as implementing clipping layers? And what approach did you (or were you planning on) using? Haven't really given this any thought yet... but would love to hear what approach you were considering!
I have found no good way to do clipping layers in JME. Nifty gets to use OpenGL clipping, but JME doesn't expose that to us. So none of my UIs need it. In Mythruna, for some of my existing UIs, I've cheated by giving my gui bucket depth (hacked JME) and then doing some z-buffer tricks for clipping.
3. One aspect of Nifty that I liked (though I hate the implementation, because I could never remember what order the info was supposed to be entered in) was the 9-part resizing. I gave that a bit of thought and came up with about 6 different approaches to handling this. How are you handling the image portion resize events?
You mean the way you can have a 3x3 grid image and stretch it nicely, leaving the sides the same size and only stretching the center?
Yeah, I have a TbtQuad (three-by-three quad) that acts like a regular quad except that it has 9 squares and handles texture coordinates in a way that allows the above. At least a subset of it. I used it for the borders in the menu stuff I've posted. The border around the giant Mythruna is the same one used around the slider ranges, the buttons, and the frames. It's a small square texture with a hand-drawn-looking box at the edge.
Again, that class is completely independent of anything else so I could probably just post it.
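The core of a three-by-three quad can be sketched numerically. This is an illustrative guess at the idea, not the actual TbtQuad source: given a target size and fixed border margins, the grid edges are computed in both position space and texture space so the corner cells keep their pixel size and only the center stretches. All names here are invented for the example.

```java
// Hypothetical sketch of the three-by-three quad edge math.
public class TbtSketch {

    // Edges of the three columns across the quad: {0, left, width-right, width}.
    static float[] columnEdges(float width, float leftMargin, float rightMargin) {
        return new float[] { 0, leftMargin, width - rightMargin, width };
    }

    // Matching texture u-coordinates for a square source texture of texSize pixels;
    // these never change as the quad resizes.
    static float[] columnTexCoords(float texSize, float leftMargin, float rightMargin) {
        return new float[] { 0, leftMargin / texSize, 1 - rightMargin / texSize, 1 };
    }

    public static void main(String[] args) {
        // A 100-wide and a 300-wide quad with 8-pixel borders: the side columns
        // stay 8 wide, only the middle grows (84 -> 284).
        float[] small = columnEdges(100, 8, 8);
        float[] large = columnEdges(300, 8, 8);
        System.out.println((small[1] - small[0]) + " " + (large[1] - large[0])); // 8.0 8.0
        System.out.println((small[2] - small[1]) + " " + (large[2] - large[1])); // 84.0 284.0
        System.out.println(java.util.Arrays.toString(columnTexCoords(64, 8, 8)));
    }
}
```

The same split applies vertically, which gives the 9 cells; the texture coordinates stay fixed while the vertex positions stretch, which is why the border art never distorts.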
4. This last one is semi-related... where can I read up on JME's vertex group support? And does it require meshes with an associated skeleton? I'd like to try implementing 4-corner-drag resizing using a 3d mesh, and it would be a heck of a lot easier if I could just define vert groups on the imported model and go from there. If not... I guess I could use a simple custom mesh and track indexes.
I've never looked at any of that, and it doesn't have much to do with a GUI to me, either. GUIs tend to have layouts and so on… child components must move and/or resize as their parent resizes, etc… and that's sort of part of the layout functionality already.
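The parent-drives-children point can be shown with a trivial example. A minimal sketch, assuming nothing about any real layout class: the layout simply recomputes child sizes from the parent's size each pass, so resizing the parent resizes the children with no vertex-group machinery involved.

```java
import java.util.Arrays;

// Hypothetical layout sketch: children split the parent's height evenly.
public class LayoutSketch {

    // Divide the parent's height evenly among 'count' children.
    static float[] layoutVertical(float parentHeight, int count) {
        float[] heights = new float[count];
        Arrays.fill(heights, parentHeight / count);
        return heights;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(layoutVertical(90, 3)));  // [30.0, 30.0, 30.0]
        // Parent resizes; children follow automatically on the next layout pass.
        System.out.println(Arrays.toString(layoutVertical(120, 3))); // [40.0, 40.0, 40.0]
    }
}
```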