A "3D" Capable GUI

Hi All,

Does t0neg0d’s GUI support 3D interactivity? I know that Void’s Nifty-GUI can be placed upon a surface by tapping into the Texture2D set up for that GUI (well, giving it a framebuffer to play with); however, interaction coordinates are still in “screen space”.

If not, would either GUI project leader like some math/coding assistance for their GUI to expand into that capability?

I am working on my toolset for my game project Iteag, and I want a 3D-capable GUI for my game… I think an in-game interactive holographic-looking panel would be cool. (It would be rendered in the transparency pass that skips writing to the z-buffer, or as one of the last passes to a Texture2D… (ack! FrameBuffer!) in deferred rendering.)

The basics for me to do this by myself would be (rough algorithm time; a code sketch follows each part):

Create the object:

  • create a Texture2D/FrameBuffer of “arbitrary size” for the GUI or GUI element
  • create a node (for ease of transforming all child data positional coordinates if it moves)
  • attach a Quad geometry (a spatial, I believe?)
  • attach a UV-enabled material (vertex and fragment shaders, OpenGL) for rendering
  • attach the Texture2D of “arbitrary size” (I understand UV coords as a multiplier)
  • attach a sound source to it (it could beep ’n stuff)
  • attach a collision detection plane (unless it’s non-interactive)
  • attach to the root node, and manipulate

An assumption: a listening source is attached to the camera or “player” so that any sound FX attached to the object attenuate properly with distance from the listener.
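To make the creation steps concrete, here’s a minimal jME3 sketch of the quad/material/texture part (all names and sizes are placeholders I made up; the sound source and collision plane are left out for brevity):

[java]
// The "arbitrary size" Texture2D the offscreen GUI will render into.
Texture2D guiTex = new Texture2D(512, 512, Image.Format.RGBA8);

// Quad sized to match the texture's aspect ratio (here 1:1).
Geometry guiQuad = new Geometry("HoloPanel", new Quad(2f, 2f));

// UV-enabled material; Unshaded.j3md samples the texture via ColorMap.
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setTexture("ColorMap", guiTex);
// "Holographic" transparency: alpha blend and skip z-buffer writes.
mat.getAdditionalRenderState().setBlendMode(RenderState.BlendMode.Alpha);
mat.getAdditionalRenderState().setDepthWrite(false);
guiQuad.setMaterial(mat);
guiQuad.setQueueBucket(RenderQueue.Bucket.Transparent);

// Node for easy transformation of the whole panel.
Node panelNode = new Node("HoloPanelNode");
panelNode.attachChild(guiQuad);
rootNode.attachChild(panelNode);
[/java]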

Detect the object:

  • do a raycast from the mouse (viewport space) to the collision plane (world space), only when the mouse moves or clicks
  • convert from world space to local space to find the UV coordinates of the collision (the multiplier for the width/height of the Texture2D)
  • …and there are the X,Y coords.
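And a rough sketch of that detection math in jME3, assuming the panel is a Quad of known size (again, the names are placeholders):

[java]
// guiQuad is the panel Geometry (a Quad of quadWidth x quadHeight);
// texW/texH are the Texture2D dimensions.
Vector2f click2d = inputManager.getCursorPosition();
Vector3f origin = cam.getWorldCoordinates(click2d, 0f);
Vector3f dir = cam.getWorldCoordinates(click2d, 1f)
        .subtractLocal(origin).normalizeLocal();

CollisionResults results = new CollisionResults();
guiQuad.collideWith(new Ray(origin, dir), results);
if (results.size() > 0) {
    // World space -> the quad's local space.
    Vector3f local = guiQuad.worldToLocal(
            results.getClosestCollision().getContactPoint(), null);
    // For a Quad, local x/y over width/height are exactly the UV multipliers.
    float u = local.x / quadWidth;
    float v = local.y / quadHeight;
    int texX = (int) (u * texW);
    int texY = (int) (v * texH);
    // ...and there are the X,Y coords.
}
[/java]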

Interaction with the object:

2D GUI interaction from there, with bells and whistles; it’s plain 2D stuff now. Rebind the processed Texture2D on the GPU for that material, unless that’s already handled in code. In jMonkeyEngine, the FrameBuffer class makes a great go-between, I believe. (Me <- still a noob)
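On the rebind step: if the Texture2D is attached to a FrameBuffer that an offscreen ViewPort renders into, jME3 keeps the GPU copy current automatically, so no manual re-upload should be needed. A hedged sketch of that wiring (made-up names and sizes):

[java]
// Offscreen GUI scene rendered into guiTex via a FrameBuffer.
FrameBuffer fb = new FrameBuffer(512, 512, 1);
fb.setColorTexture(guiTex);

Camera offCam = new Camera(512, 512);
ViewPort offView = renderManager.createPreView("GuiOffscreen", offCam);
offView.setClearFlags(true, true, true);
offView.setOutputFrameBuffer(fb);
offView.attachScene(offscreenGuiNode); // the 2D GUI scene

// Gotcha: scenes attached this way are not updated by the app states,
// so call offscreenGuiNode.updateLogicalState()/updateGeometricState()
// each frame (e.g. from simpleUpdate).
[/java]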

Some cool implications (if done right, of course):
-A truly scalable GUI that is independent of resolution, since resolution would be configured by the Texture2D/FrameBuffer. The quad geometry is created to match the aspect ratio of the texture.

-Multiple quads could be attached to a .j3o “scene” for a truly 3D layered GUI. The node would allow it to be “billboarded” to the camera.

-The multiple-quad setup would look 2D in an orthographic view (facing the camera)… I only mention this because an editing tool with the 3D layered look would be awesome to use, even if the end result is a single 2D layer and you are just specifying render order… LOL… EDITOR!

-I get my hologram panel of “awesomeness”.

OK, that about covers it. I’ve probably posted in the wrong part of the forum, though. Ah well.

Game On,
Charles Anderson

It supports 3D GUI elements in a couple of different ways.

  1. Rendering the GUI elements offscreen and using the output as a texture for a specified geometry. It then allows you to interact with the elements using barycentric coordinates and ray casting into the offscreen scene.

  2. You can wrap 3D geometries and leverage the library’s input handling/picking/etc. for objects within your 3D scene graph.

So… to answer your question in short: tonegodGUI can do what you are looking for.

As for the rest of the topic, I’ll let others chime in!

@t0neg0d
Awesome, I’ve started the research process. I’ve noted that you have an OSRBridge class; I will try my hand at working with it.

@t0neg0d said: 1. Rendering the GUI elements offscreen and using the output as a texture for a specified geometry. It then allows you to interact with the elements using barycentric coordinates and ray casting into the offscreen scene.

yeppers, and this is a good article on barycentric coordinates as applied to game logic: Totologic: Accurate point in triangle test
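For reference, the point-in-triangle test from that article boils down to a few dot products. Here’s a standalone sketch of the weights and the UV interpolation (standard barycentric math, nothing tonegodGUI-specific):

[java]
// Given triangle corners a,b,c, a hit point p on the triangle's plane,
// and the corners' texture coordinates ta,tb,tc, interpolate the UV at p.
static Vector2f barycentricTexCoord(Vector3f a, Vector3f b, Vector3f c,
        Vector3f p, Vector2f ta, Vector2f tb, Vector2f tc) {
    Vector3f v0 = b.subtract(a), v1 = c.subtract(a), v2 = p.subtract(a);
    float d00 = v0.dot(v0), d01 = v0.dot(v1), d11 = v1.dot(v1);
    float d20 = v2.dot(v0), d21 = v2.dot(v1);
    float denom = d00 * d11 - d01 * d01;
    float w1 = (d11 * d20 - d01 * d21) / denom; // weight of b
    float w2 = (d00 * d21 - d01 * d20) / denom; // weight of c
    float w0 = 1f - w1 - w2;                    // weight of a
    // p is inside the triangle iff all three weights are in [0,1].
    return ta.mult(w0).addLocal(tb.mult(w1)).addLocal(tc.mult(w2));
}
[/java]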

I don’t have to be as concerned with raycasting into the offscreen scene, right? Your GUI looks to be a 2D-centric app that treats the Z axis as a render order.
I just need to make sure that:

  1. I get a FrameBuffer or Texture2D correctly connected to your app for rendering.
  2. Then it’s all on me to get the math and conversions correct for an X,Y position on the texture, then:
  3. Sweet-talk my way to a bridge (or get information on how to build one) so I can send fake mouse coordinates to your “screen”.

From what I understand of the Java, your code attaches to an app quite nicely but is designed for the guiNode specifically. (One of my “I’ll experiment to understand this” tests was to attach an arbitrary node to the root, attach the Screen to that, then mess with node.setLocalTranslation(vector3f). At the default of (0f, 0f, 0f), you can swing the mouse to the right and a little down and, bang! it pops up into view. (I did flyCam.setDragToRotate(true) so I could play with the resizable window, LOL.)) But yeah, like the guiNode, it ignores the worldTransform matrix of a scene and renders straight to viewport space… er, the camera, I think it’s called?

Any tips for 1 and 3?

Note: Lemur can also support real 3D GUIs and not just 2D projected on 3D. Just in case anyone stumbles onto this thread and wonders about “3D” without the quotes. :)

@Relic724 said: I don't have to be as concerned with raycasting into the offscreen scene, right? Your GUI looks to be a 2D-centric app that treats the Z axis as a render order. I just need to make sure that: 1. I get *a* FrameBuffer or Texture2D correctly connected to your app for rendering. 2. Then it's all on me to get the math and conversions correct for an X,Y position on the texture, then: 3. Sweet-talk my way to a bridge (or get information on how to build one) so I can send fake mouse coordinates to your "screen"
  • It is 2D-centric, unless you are using the input handling against your 3D scene… then it is entirely 3D-centric

I think a video example of interacting with a projected scene would be more helpful. The control panel on the door is a SubScreen of the GUI library:

[video]http://youtu.be/FYcUy5Y0oyI[/video]

  1. Yes.
  2. No… this all happens for you.
  3. You can do this if you need to… not sure if you’ll actually need to, though.

Hopefully this answered your questions…

And @pspeed

tonegodGUI does both 2D-to-3D (projected 2D) and true 3D GUI components (any spatial works as a GUI element, etc.). It sounded like from your response that Lemur does projected 2D-to-3D as well… did I read that correctly?

@pspeed I just realized this sounded snide… and that wasn’t why I asked. I’m asking because I had a hell of a time getting the projected 2D to work properly and just wanted to ask:

  • How did implementing it go for you?
  • How does the end user implement this using Lemur?
@t0neg0d said: And @pspeed

tonegodGUI does both 2D-to-3D (projected 2D) and true 3D GUI components (any spatial works as a GUI element, etc.). It sounded like from your response that Lemur does projected 2D-to-3D as well… did I read that correctly?

Is that 3D support pretty new? I thought I remembered that originally all of your GUI components extended a specific base class or something.

@t0neg0d said: @pspeed I just realized this sounded snide... and that wasn't why I asked. I'm asking because I had a hell of a time getting the projected 2D to work properly and just wanted to ask:
  • How did implementing it go for you?
  • How does the end user implement this using Lemur?

I implemented this to do the book pages in Lemur because the UI needed to map around the curve. (In fact, I originally tested UIs on a sphere which was kind of weird to drag sliders on.)

Some framework changes were necessary to make this happen. First, my regular mouse events were using JME’s mouse event classes… which are fine for simple buttons and stuff but not OK for things that need to know where you clicked. So I added a second set of CursorEvent classes that are like mouse events but include the collision information, the viewport of the click, etc. The other thing I added was PickEventSession, which is how the picking framework delivers events internally. This was also partially to support multitouch invisibly to the regular UI components. PickEventSession also lets listener code deliver their own viewports or whatever if they like, without having to worry about how x,y coordinates map to rays and so on.
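If you’re curious what the user side of that looks like, here’s roughly the shape of it against the cursor-event classes in com.simsilica.lemur.event (a sketch, assuming GuiGlobals.initialize(app) has been called and mySpatial is whatever you want to be pickable):

[java]
// Attach a cursor listener to any spatial in the 3D scene.
CursorEventControl.addListenersToSpatial(mySpatial, new DefaultCursorListener() {
    @Override
    protected void click(CursorButtonEvent event, Spatial target, Spatial capture) {
        // Unlike raw JME mouse events, cursor events carry the collision,
        // so you know exactly where on the spatial the click landed.
        System.out.println("Clicked at: " + event.getCollision().getContactPoint());
    }
});
[/java]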

That enabled me to do 2D projected UIs with only two classes. These are currently not checked into Lemur. Sometimes I like to let things incubate until folks ask for them: a) it lets me beat on them a bit first, b) I get a second (probably wildly) different use-case as validation, and c) it keeps cruft from accumulating in the API if it turns out to be a niche feature.

Anyway, one class is just a viewport app state, basically. I’ve posted that code to the forum before. I extend that and provide the preview viewport that’s connected to the framebuffer texture, etc. The second class is where the real magic happens, and that’s the ViewPortMouseHandler. It’s just a regular event listener that can be added to any spatial to get pick events; it does the math to get the texture coordinate of the click, translates that into ViewPort space, and then passes that on to its nested PickEventSession.
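The core of that math, with the framework plumbing stripped away, is just scaling the collision’s texture coordinate by the offscreen viewport’s size… a hypothetical fragment, not the actual ViewPortMouseHandler source:

[java]
// uv: interpolated texture coordinate where the pick ray hit the geometry.
// offCam: camera of the offscreen ViewPort that renders the GUI texture.
float viewX = uv.x * offCam.getWidth();
float viewY = uv.y * offCam.getHeight();
// (viewX, viewY) is now a synthetic cursor position inside the offscreen
// viewport; the nested PickEventSession can pick with it as if a real
// mouse were hovering there.
[/java]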

So, as it exists now (if I were to commit it), users would create the app state and set the texture to what they want it on… then add the listener to whichever (and as many) spatials they want to receive the clicks. (For example, the Book UI in Mythruna is actually viewing a viewport of 4 pages, because I use that one viewport to texture both the left half and right half of the book, plus both sides of the turning page.)

For people who have no idea what I’m talking about, re: the “book”:

I hope that answered the question… it got kind of verbose.


@pspeed
Awesome… yep… that answered it. And now when people read your post above they’ll know that it supports both! I couldn’t tell at first… but after seeing this vid again, I remember!

Last question… how did the process go for implementing the offscreen picking? I had a hell of a time finding useful overviews on the subject with enough info to properly implement this. Actually, I think you were more helpful than the articles I found on the subject while working on it. I can’t remember the specific problem I had… but the solution ended up being a little bit easier than I was making it out to be.

@t0neg0d said: @pspeed Awesome... yep... that answered it. And now when people read your post above they'll know that it supports both! I couldn't tell at first... but after seeing this vid again, I remember!

Last question… how did the process go for implementing the offscreen picking? I had a hell of a time finding useful overviews on the subject with enough info to properly implement this. Actually, I think you were more helpful than the articles I found on the subject while working on it. I can’t remember the specific problem I had… but the solution ended up being a little bit easier than I was making it out to be.

I had never done offscreen rendering before, so that was a bit of a learning curve. I originally had a solution where I managed my own ViewPort (created with ‘new’ instead of through the render manager) and was managing it all myself… but there always seemed to be at least a one-frame delay, initially. Ultimately, I just went with the preview port. The hardest part about offscreen rendering, to me, is the handful of things that can go wrong that just give you a black screen… whether it’s a lack of lighting, the camera facing the wrong direction, etc… the usual graphics fun.

In the final solution, the math for what was clicked was the hardest part, but that was relatively straightforward… and as said, it was pretty cool to see it working wrapped around a sphere. :) (The nice thing is that I at least always had visuals to work with by then.)

@t0neg0d
LOL, so you have the SubScreen class for projecting to a surface. Great to know! SubScreen will be perfect if I can “push” the input data to it, rather than attempt to work around the Screen class’s “pull” of the input data.

@pspeed
yours is the solution I would be trying to recreate.

I am still learning and doing samples to understand where jME3 does its various processing (from the rendering mindset I have). So, what I call a “render pass to a backbuffer”, you guys have as a “preview render to a framebuffer”. Yeppers… lingo translation fun.

I’ll definitely have more questions and post them, but they are in other aspects of my learning how to be effective in jME.

@Relic724 said: @t0neg0d LOL, so you have the SubScreen class for projecting to a surface. Great to know! SubScreen will be perfect if I can "push" the input data to it, rather than attempt to work around the Screen class's "pull" of the input data.

@pspeed
yours is the solution I would be trying to recreate.

I am still learning and doing samples to understand where jME3 does its various processing (from the rendering mindset I have). So, what I call a “render pass to a backbuffer”, you guys have as a “preview render to a framebuffer”. Yeppers… lingo translation fun.

I’ll definitely have more questions and post them, but they are in other aspects of my learning how to be effective in jME.

Well, a frame buffer and frame buffer texture are very specific things. In my day, a backbuffer was also a very specific thing and unrelated to anything we’re talking about so far. (In double-buffered rendering, it is the place where things are drawn while the retrace is happening on the main buffer.) So, yeah, nomenclature can be important. :)

These are not really JME-specific terms, either. FBO (frame buffer object) is an OpenGL thing. Framebuffer Object - OpenGL Wiki

Btw, the jme jfx bridge by default renders via a texture, so it is quite easy to put that somewhere in the level as well.

@pspeed said: In the final solution, the math for what was clicked was the hardest part, but that was relatively straightforward... and as said, it was pretty cool to see it working wrapped around a sphere. :) (The nice thing is that I at least always had visuals to work with by then.)

I was super excited when I finally got this working as well… and you are correct about the math behind weighted coords… in the end it made sense (it had to… as much as it hurt my feeble brain)… but while I stumbled through trying to wrap my head around it I thought I was going to go crazy… I think I had a good four attempts that I was sure were correct that turned out to be absolute failures =)

@Empire Phoenix said: Btw, the jme jfx bridge by default renders via a texture, so it is quite easy to put that somwhere in the level as well.

I figured this would be how it was done. Has anyone tried rendering it into the scene and seeing if it all works properly?

@t0neg0d said: I was super excited when I finally got this working as well... and you are correct about the math behind *weighted* coords... in the end it made sense (it had to... as much as it hurt my feeble brain)... but while I stumbled through trying to wrap my head around it I thought I was going to go crazy... I think I had a good four attempts that I was *sure* were correct that turned out to be absolute failures =)

Yeah, I think I remember the forum posts.

It took me much less time for this part than getting the framebuffer to render. But in past lives, I’ve done live painting to models before and stuff. (Way back in the day I even wrote software-based 3D renderers and stuff.) So I was already familiar with the math.

@Relic724 said: @t0neg0d LOL, so you have the SubScreen class for projecting to a surface. Great to know! SubScreen will be perfect if I can "push" the input data to it, rather than attempt to work around the Screen class's "pull" of the input data.

@pspeed
yours is the solution I would be trying to recreate.

I think I may be having a bit of trouble following what it is you are trying to accomplish here. The two solutions you are referring to are exactly the same. There is really only a single way of making a rendered texture interactive, and that is to:

  • Cast an initial ray into the scene.
  • The ray collides with the geometry using the outputted framebuffer texture of the offscreen viewport.
  • You determine the barycentric coordinates (i.e. the contact point in relation to the closest texCoords and how these relate back to the offscreen viewport).
  • You then recast a ray into the offscreen viewport using the above results to find what is being collided with.

Is there something you are trying to do that doesn’t fall into the above description?

The SubScreen class and OSRBridge handle creating all of this for you, allowing you to create and interact with GUI components without having to know anything about how/why/where/what and when it is working. It’s as simple as:

[java]
// Create the one and only Screen for your project
screen = new Screen(this, "tonegod/gui/style/atlasdef/style_map.gui.xml");
screen.setUseTextureAtlas(true, "tonegod/gui/style/atlasdef/atlas.png");
guiNode.addControl(screen);

// Create a SubScreen - accessPanel is a model from Blender loaded via the assetManager
subScreen = new SubScreen(screen, (Geometry) accessPanel.getChild(0));
screen.addSubScreen(subScreen);

// Attach the subScreen geometry somewhere in your 3D scene graph
rootNode.attachChild(subScreen.getGeometry());

// Define the width/height of the subscene and provide an empty node to use as your offscreen guiNode
subScreen.setSubScreenBridge((int) ssWidth, (int) ssHeight, guiSubScene);
[/java]

From this point on, you create and add GUI elements as you normally would… using the subScreen in place of the screen for elements you want embedded. For instance:

[java]
Window win = new Window(subScreen, Vector2f.ZERO);
subScreen.addElement(win);
[/java]

You now have a draggable, resizable window that appears on whatever geometry you used for the subscreen.


@tonegod
Awesome, thanks for the heads up, I’ll be playing with it today!

@tonegod

And, of course, I have to do a follow-up reply and say “Kudos”. Yeppers, your GUI works excellently. Here’s the relevant part from one of my YouTube series:

[video]http://youtu.be/ZUh1LefJdZA?t=3m48s[/video]

(edit) Erg, it didn’t take the time stamp request… try 3:50 into the vid. (edit)
If you’d like, I can adapt the algorithm from the article I quoted earlier to the source you have displayed on your Google Code site and post it in a message to you.

Game On,
Charles Anderson

@Relic724 said: @tonegod

And, of course, I have to do a follow-up reply and say “Kudos”. Yeppers, your GUI works excellently. Here’s the relevant part from one of my YouTube series.

[video]http://youtu.be/ZUh1LefJdZA?t=3m48s[/video]

(edit) Erg, it didn’t take the time stamp request… try 3:50 into the vid. (edit)
If you’d like, I can adapt the algorithm from the article I quoted earlier to the source you have displayed on your Google Code site and post it in a message to you.

Game On,
Charles Anderson

Hey! Love the vid… EDIT: I should mention the reason I liked the video. You kept my interest the entire time! I like the fact that it is both informative and that you actually sound excited about what you’re talking about! (This goes a long way towards keeping people interested.)

Just a heads up… the click issue you were seeing is now resolved. It actually had to do with the z-order of sub-elements not being handled properly. You will find, after updating the plugin, that clicking in subscenes should be seamless now.

EDIT: Oh… and not that it matters, but he is actually a she =)

@t0neg0d said: Hey! Love the vid... EDIT: I should mention the reason I liked the video. You kept my interest the entire time! I like the fact that it is both informative and that you actually sound excited about what you're talking about! (This goes a long way towards keeping people interested.)

Just a heads up… the click issue you were seeing is now resolved. It actually had to do with the z-order of sub-elements not being handled properly. You will find, after updating the plugin, that clicking in subscenes should be seamless now.

EDIT: Oh… and not that it matters, but he is actually a she =)

  1. Thank you; the delivery style just comes from working with others.
  2. Awesome! I’ll update right away.
  3. Yes ma’am, duly noted for future reference. =D