Need advice: Layered texture

I want to have a texture that’s composed of several layers of RGBA bitmaps.
Individual bitmaps won’t be large, and there won’t be too many of them.
Bitmap XY position, scaling, and layer are set up at runtime. These don’t change often, maybe 10 times per second max, so there’s potential for reusing the composed texture across frames.

I was thinking about compositing the bitmaps to a GPU buffer, and using that buffer as a texture.
Questions:

  1. Is there sample code that sets up such a texture buffer? I’m not sure how the buffer would be passed to the shader that needs to apply it as a texture, or how to draw stuff into a texture buffer in the first place. (A rough sketch of what I’m imagining is below the list.)
  2. How do I set this up so that it interoperates well with the existing Material system? E.g. it would be nice to slap a national emblem on an aircraft wing, without constraining the kind of Material to use for the wing. (Is this best solved using shader nodes?)
  3. Is it a good approach at all? Maybe it’s easier/better/just as efficient to use a sequence of shader nodes, each applying just a single bitmap.
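For question 1, this is the kind of setup I’m imagining, pieced together from what I understand of jME’s render-to-texture sample (TestRenderToTexture). Completely untested; `renderManager` is the SimpleApplication field, and `wingMaterial` / the sizes are made up by me:

```java
import com.jme3.renderer.Camera;
import com.jme3.renderer.ViewPort;
import com.jme3.scene.Node;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

// Offscreen view that renders before the main scene each frame:
Camera offCam = new Camera(512, 512);
offCam.setParallelProjection(true);
ViewPort offView = renderManager.createPreView("composite", offCam);
offView.setClearFlags(true, true, true);

// The texture that ends up holding the composed layers:
Texture2D composed = new Texture2D(512, 512, Image.Format.RGBA8);
FrameBuffer fb = new FrameBuffer(512, 512, 1);
fb.setColorTexture(composed);
offView.setOutputFrameBuffer(fb);

// Scene with one quad per bitmap layer, positioned/scaled at runtime.
// (The node needs updateLogicalState()/updateGeometricState() calls
// per frame, e.g. from simpleUpdate(), like in the jME sample.)
Node layers = new Node("layers");
offView.attachScene(layers);

// The result is an ordinary texture, which I hope answers question 2 too:
wingMaterial.setTexture("DiffuseMap", composed);
```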

Any feedback appreciated.

Why not just paint them to a single texture if you are worried about multiple texture fetches in the shader?

Or is that what you are asking how to do?

I’m not sure whether I should be worried in the first place :slight_smile:
It’s just that I have seen Lemur faking this by putting each bitmap at its own Z layer. So I’m wondering whether there’s a problem with the paint-to-bitmap approach that I’m unaware of. Or what trade-offs in general led to that design decision.

I’m also unsure about how to set up the entire chain - setting up the GPU buffer for the texture, drawing the bitmaps to it, how to pass the ID to the Material that uses them, whether to use a standard shader, roll my own, or use shader nodes.
I have some vague ideas how to do all this, but I have never done it for real, and I find the prospect of setting up the whole pipeline without having guidance or a working, known-good example a bit scary.

@toolforger said: I'm not sure whether I should be worried in the first place :-) It's just that I have seen Lemur faking this by putting each bitmap at its own Z layer. So I'm wondering whether there's a problem with the paint-to-bitmap approach that I'm unaware of. Or what trade-offs in general led to that design decision. I'm also unsure about how to set up the entire chain - setting up the GPU buffer for the texture, drawing the bitmaps to it, how to pass the ID to the Material that uses them, whether to use a standard shader, roll my own, or use shader nodes. I have some vague ideas how to do all this, but I have never done it for real, and I find the prospect of setting up the whole pipeline without having guidance or a working, known-good example a bit scary.

In this case, Lemur opts for convenience of design over absolute performance. The design is more flexible and modular if I don’t have to merge layers into a single texture (not to mention that layers can be fully 3D, which would blow the whole idea anyway). Sure, for certain UI configurations it might be faster to collapse some layers into a merged texture (and Lemur does not prevent you from creating that background object), but you’ll most likely get better performance out of batching. And in that case, you’d want lots of geometry using identical Material configurations (like Lemur tries to do now) versus fewer independent geometries, each with its own one-off material. Either way, I think you’d need a pretty high number of components before it matters.

But, for example, if you have 1000 labels all with a TbtQuad background and shadowed text… that’s something like 3000 objects unbatched but only 3 batched. That’s going to be better than 1000 merge-textured objects any day. Actually, even buttons would work this way if the text doesn’t change. So maybe the convenient design actually turns out to be more performant in this case.

Anyway, if you want to merge down textures then it’s not hard. You can use Java2D if you don’t care about Android compatibility or the ImagePainter plugin if you do.
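For the Java2D route, something like this (off the top of my head, untested; `baseLayer` and `emblem` are whatever BufferedImages you’ve loaded):

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;
import com.jme3.texture.plugins.AWTLoader;

// Composite on the CPU with Graphics2D...
BufferedImage canvas = new BufferedImage(512, 512, BufferedImage.TYPE_INT_ARGB);
Graphics2D g = canvas.createGraphics();
g.drawImage(baseLayer, 0, 0, null);          // bottom layer
g.drawImage(emblem, 100, 50, 64, 64, null);  // scaled overlay
g.dispose();

// ...then convert once to a jME texture.
Image jmeImage = new AWTLoader().load(canvas, true); // flipY for OpenGL
Texture2D merged = new Texture2D(jmeImage);
```

Redo that only when a layer actually moves and you’re only paying the upload cost ~10 times a second.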

I’m having concerns about choosing the right Z distances with Lemur’s approach.
Make them too small, and you’ll get Z fighting due to Z buffer granularity. Make them too large and the parallax will become noticeable. Make the display visible from the back side (think those transparent tactical displays in Star Wars films, see http://doctormikereddy.files.wordpress.com/2013/02/starwarsdisp.jpg ) and you’ll have to flip Z order depending on whether it’s visible from the front or back side.
If I read the background docs right, the Z buffer value typically correlates with viewing distance through some non-linear function (roughly 1/z for a standard perspective projection) - i.e. you get better Z granularity near the camera. This means that the Z distances to use would depend on viewing distance, and possibly on the graphics card model.
Can’t say whether these problems are real or not; I’d probably need a hardware zoo of 3D cards and smartphones to get at the facts.
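To put rough numbers on it, here’s a throwaway calculation of my own, assuming the standard perspective depth mapping d(z) = f/(f-n) * (1 - n/z) and a 24-bit depth buffer; no idea yet how it plays out on real hardware:

```java
// Back-of-envelope: how many depth-buffer steps does a 1cm layer
// separation span at various viewing distances?
public class DepthGranularity {
    public static void main(String[] args) {
        double near = 0.1, far = 1000.0;
        int steps = 1 << 24; // 24-bit depth buffer
        for (double z : new double[] { 1, 10, 100, 900 }) {
            // Depth delta between z and z + 1cm under d(z) = f/(f-n) * (1 - n/z):
            double delta = far / (far - near) * near * (1.0 / z - 1.0 / (z + 0.01));
            System.out.printf("z=%6.1f  1cm spans %.2f of 2^24 depth steps%n",
                              z, delta * steps);
        }
    }
}
```

If I got that right, 1cm of separation spans thousands of depth steps at z=1 but only a fraction of a step at z=900 - so Z fighting far away, visible parallax up close, exactly the trade-off I’m worried about.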

ImagePainter looks very nice, I just checked out the thread, and I have to say I’m impressed.
It doesn’t always seem to pick the fastest method for each kind of image source, though. I might find myself pushing the limits there, and pushing the limits is always a way to slow progress to a crawl… so I’m not sold on it yet; I’ll have to try it out.
EDIT: Pixelwise drawing? Now that’s going to be slow no matter what… the feature set is still impressive, but I had hoped for something with GPU acceleration.

ImagePainter uses ImageRaster, which is pretty fast really. The only thing it doesn’t do is batch the read/write operations - in theory some performance improvements could be made there.
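For reference, using ImageRaster directly is only a few lines (the Image constructor details are from memory, so double-check them):

```java
import com.jme3.math.ColorRGBA;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;
import com.jme3.texture.image.ImageRaster;
import com.jme3.util.BufferUtils;

// CPU-side RGBA image, wrapped for per-pixel reads/writes:
Image image = new Image(Image.Format.RGBA8, 256, 256,
                        BufferUtils.createByteBuffer(256 * 256 * 4));
ImageRaster raster = ImageRaster.create(image);
raster.setPixel(10, 10, ColorRGBA.Red);

// The image backs a texture directly; no extra copy needed:
Texture2D tex = new Texture2D(image);
```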

Lemur has a layer geometry comparator - that means you can force objects to always draw in a certain order without using z offsets to do it.

For example, I have 3 quads all displaying images and with depth write turned off. I place them all at the same location and use the geometry comparator to make sure they always draw in the correct order. (Actually I use a simpler comparator than the Lemur one since I didn’t need a bunch of its advanced features, but the concept is the same.)
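The simple comparator amounts to something like this (typed from memory and simplified; the "layer" user-data name is just my convention here, not Lemur’s):

```java
import com.jme3.renderer.Camera;
import com.jme3.renderer.queue.GeometryComparator;
import com.jme3.scene.Geometry;

// Orders geometries by an integer "layer" user data. The camera is
// ignored because the layer order is fixed regardless of view.
public class SimpleLayerComparator implements GeometryComparator {
    @Override
    public void setCamera(Camera cam) {
        // not needed for a fixed layer order
    }

    @Override
    public int compare(Geometry g1, Geometry g2) {
        Integer l1 = g1.getUserData("layer");
        Integer l2 = g2.getUserData("layer");
        return Integer.compare(l1 == null ? 0 : l1,
                               l2 == null ? 0 : l2);
    }
}
```

Install it with something like viewPort.getQueue().setGeometryComparator(Bucket.Transparent, new SimpleLayerComparator());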

@toolforger said: I'm having concerns about choosing the right Z distances with Lemur's approach. Make them too small, and you'll get Z fighting due to Z buffer granularity. Make them too large and the parallax will become noticeable. Make the display visible from the back side (think those transparent tactical displays in Star Wars films, see http://doctormikereddy.files.wordpress.com/2013/02/starwarsdisp.jpg ) and you'll have to flip Z order depending on whether it's visible from the front or back side.

…which is something you would certainly have to deal with for a flattened texture, since it would look identical from the front and the back unless you render two of them. Unless that’s the desire, I guess. Then there is no way around flattening.

Also, for the floating holograms thing, you can get around the z-buffer issue by not writing depth on the holograms and just making sure they are rendered late - either by putting them in the translucent bucket (where they probably belong) or by setting their layer sufficiently high. At that point, if you’ve given any depth to your GUI elements at all, and given them proper layers, then they will be rendered in the appropriate order no matter which direction you look from. Well, depending on what “proper” is. Personally, I think it would be weird to see the back side of buttons inside out. But I guess it depends on the effect.
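In code that’s just something like this (a sketch; `geom` and `mat` being the hologram’s geometry and material, whatever those are in your scene):

```java
import com.jme3.renderer.queue.RenderQueue;

// Don't write depth, and render late so the holograms blend over the scene:
mat.getAdditionalRenderState().setDepthWrite(false);
geom.setMaterial(mat);
geom.setQueueBucket(RenderQueue.Bucket.Translucent);
```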

@toolforger said: If I read the background docs right, the Z buffer value typically correlates with viewing distance through some non-linear function (roughly 1/z for a standard perspective projection) - i.e. you get better Z granularity near the camera. This means that the Z distances to use would depend on viewing distance, and possibly on the graphics card model. Can't say whether these problems are real or not; I'd probably need a hardware zoo of 3D cards and smartphones to get at the facts. ImagePainter looks very nice, I just checked out the thread, and I have to say I'm impressed. It doesn't always seem to pick the fastest method for each kind of image source, though. I might find myself pushing the limits there, and pushing the limits is always a way to slow progress to a crawl… so I'm not sold on it yet; I'll have to try it out. EDIT: Pixelwise drawing? Now that's going to be slow no matter what… the feature set is still impressive, but I had hoped for something with GPU acceleration.

You will run out of texture space long before you have a performance issue on smart phones, I think. Your “texture per object” approach will consume a lot of texture memory if you use it without care.

Java2D would have hardware acceleration and then it’s just a buffer copy. And then you have the whole Java2D API at your disposal… but then you can only run on desktop. So it’s a tradeoff.

Layer geometry comparator in Lemur - ah right, I had forgotten about that specific part.
I’m wondering why it goes to such pains to carry the Z coordinate, though. Are Z positions supposed to be visible in Lemur?

I’d want to avoid Java2D, since it would be problematic on Android - this is going to be part of some library work that I’m considering.

Back to ImagePainter:
I’m still very worried about performance after skimming http://code.google.com/p/jmonkeyplatform-contributions/source/browse/trunk/ImagePainter/ImagePainter/src/com/zero_separation/plugins/imagepainter/ImagePainter.java .
It has a paintPixel() function, and it seems to be called from many if not all functions that actually draw anything.
I’m currently under the impression that ImagePainter is creating CPU-side bitmaps, which then need to be transferred to the GPU in every update loop. Is that correct?
What adds to my worries is that the video states 25 fps for ~100 cubes or ~300 bitmaps. That’s not very impressive for the use case that I have in mind: drawing a few constant textures at varying positions, drawing orders, and scaling factors. Blockworld seems to be able to draw a gazillion bitmaps :smiley: (I don’t mean to say that ImagePainter is bad; I’m aware that it was built for a far broader use case and that there are probably solid reasons for its design.)

Seems like I just need to use normal quads and that depth comparator.
I’ll take a look into TonegodUI to see whether it’s being done that way there, too.

BTW I might let the cat out of the bag already:
With Nifty full of architectural problems, Chris gone with a huge sulk, and Paul being too occupied to do much on Lemur, I’m considering setting up yet another GUI library.
I have reason to believe I have the skills to do that, except I’m not too experienced in 3D, so I’ll be happy to get any advice. Possibly including advice not to try that - there are many potential reasons for that after all.

@toolforger said: Layer geometry comparator in Lemur - ah right, I had forgotten about that specific part. I'm wondering why it goes to such pains to carry the Z coordinate, though. Are Z positions supposed to be visible in Lemur?

I’m not sure what you mean here.

@toolforger said: I'd want to avoid Java2D, since it would be problematic on Android - this is going to be part of some library work that I'm considering. […] Seems like I just need to use normal quads and that depth comparator. I'll take a look into TonegodUI to see whether it's being done that way there, too. BTW I might let the cat out of the bag already: With Nifty full of architectural problems, Chris gone with a huge sulk, and Paul being too occupied to do much on Lemur, I'm considering setting up yet another GUI library. I have reason to believe I have the skills to do that, except I'm not too experienced in 3D, so I'll be happy to get any advice. Possibly including advice not to try that - there are many potential reasons for that after all.

I do lots on Lemur, I just haven’t committed it. So far there hasn’t been much interest, so I haven’t been in a rush to push things out.

Anyway, Lemur was supposed to be the infrastructure for building custom GUIs with some default custom GUI built on top of it. You’d save yourself a lot of time if you reused the core parts as much as possible, I think.

I mean the alternate axis stuff in the layout classes (i.e. what’s usually the Z axis).

As long as you don’t push out Lemur, you won’t be seeing much interest, so you’re working on something self-fulfilling there :slight_smile:

Back to ImagePainter: I'm still very worried about performance after skimming http://code.google.com/p/jmonkeyplatform-contributions/source/browse/trunk/ImagePainter/ImagePainter/src/com/zero_separation/plugins/imagepainter/ImagePainter.java . It has a paintPixel() function, and it seems to be called from many if not all functions that actually draw anything. I'm currently under the impression that ImagePainter is creating CPU-side bitmaps, which then need to be transferred to the GPU in every update loop. Is that correct? What adds to my worries is that the video states 25 fps for ~100 cubes or ~300 bitmaps. That's not very impressive for the use case that I have in mind: drawing a few constant textures at varying positions, drawing orders, and scaling factors. Blockworld seems to be able to draw a gazillion bitmaps :-D (I don't mean to say that ImagePainter is bad; I'm aware that it was built for a far broader use case and that there are probably solid reasons for its design.)

They only need sending to the graphics card again if they have changed - a resend only happens on frames where a change actually occurred.

If you change a lot of textures all the time (as that example does) then you are going to work the graphics bus pretty hard constantly resending them, let alone anything else!

Really, ImagePainter is designed for occasional changes to images; anything that changes every frame I would want to animate in a shader instead.
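(And if you ever poke the underlying ByteBuffer yourself instead of going through the painter, you’d need to flag the image so it gets resent; I believe the ImageRaster-based calls do this for you.)

```java
// Only needed after mutating the Image's buffer directly, otherwise the
// GPU copy stays stale until something else triggers a re-upload:
image.setUpdateNeeded();
```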

Yep, performance doesn’t matter much for occasional changes.

Seems like my use case is already served by JME: just put the textures on quads, add a comparator, and let JME sort it all out.
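Concretely, I’m picturing one quad per layer, along the lines of this (untested; the texture path is invented, and the "layer" user data is the one the comparator from earlier in the thread looks at):

```java
import com.jme3.material.Material;
import com.jme3.material.RenderState;
import com.jme3.renderer.queue.RenderQueue;
import com.jme3.scene.Geometry;
import com.jme3.scene.shape.Quad;

// One alpha-blended quad per bitmap, ordered by the comparator:
Geometry layer = new Geometry("emblem", new Quad(1, 1));
Material mat = new Material(assetManager, "Common/MatDefs/Misc/Unshaded.j3md");
mat.setTexture("ColorMap", assetManager.loadTexture("Textures/emblem.png"));
mat.getAdditionalRenderState().setBlendMode(RenderState.BlendMode.Alpha);
layer.setMaterial(mat);
layer.setQueueBucket(RenderQueue.Bucket.Transparent);
layer.setUserData("layer", 2); // drawn above layers 0 and 1
```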

Thanks for the discussion, onwards to experiments!

@toolforger said: I mean the alternate axis stuff in the layout classes (i.e. what's usually the Z axis).

I’m still not sure what the issue is exactly. The important layouts will work in a variety of axes. Is the issue the amount of code needed to support that in a custom layout?

Not an issue actually, I’m just wondering what purpose that alternate axis is serving.
I’m seeing lots of use cases for mapping X and Y as either main or minor axis, but I can’t see a use case for mapping Z as main or minor axis, or for mapping X or Y as alternate axis. Which probably means I’m overlooking something, but I can’t figure out what it is.

@toolforger said: Not an issue actually, I'm just wondering what purpose that alternate axis is serving. I'm seeing lots of use cases for mapping X and Y as either main or minor axis, but I can't see a use case for mapping Z as main or minor axis, or for mapping X or Y as alternate axis. Which probably means I'm overlooking something, but I can't figure out what it is.

Because Lemur is designed to work in 2D or 3D. So you could have a bunch of 3D models in a grid on the floor (then you’d need x,z as your axes) and use SpringGridLayout to lay them out… perhaps even with a box around them, etc… There is less use for y,z but it comes for free.
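In code it’s basically just the choice of axes, something like this (constructor details from memory, so treat them loosely):

```java
import com.simsilica.lemur.Axis;
import com.simsilica.lemur.Container;
import com.simsilica.lemur.Label;
import com.simsilica.lemur.component.SpringGridLayout;

// A grid laid out across the floor plane instead of a vertical panel:
Container floorGrid = new Container(new SpringGridLayout(Axis.X, Axis.Z));
floorGrid.addChild(new Label("Exhibit 1"));
floorGrid.addChild(new Label("Exhibit 2"));
```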

I’d have thought that if I want a GUI on the floor, I could simply rotate the GUI-bearing Node.

@toolforger said: I'd have thought that if I want a GUI on the floor, I could simply rotate the GUI-bearing Node.

Then you’d have to rotate all of the children, too. Which actually kind of causes problems for layouts right now since they expect everything to be in normal x,y,z space for the most part. A layout ignores the rotation of its children because things get really tricky. Some time I will add a special container for this case.

As it stands now, I can arrange all sorts of really interesting 3D grid layouts… nested x,y and y,z inside of a big x,z grid on the floor. All kinds of stuff.

I’ve been building 3D non-hud guis for some time and we found these types of layouts useful so I included them.

I don’t quite understand.
I’ve always thought that rotating a parent node also affects all children automatically.
I’m also having a hard time imagining what such an XY/YZ/XZ layout would look like. Wouldn’t that end up with GUI elements behind each other? In what situations is that useful?

@toolforger said: I don't quite understand. I've always thought that rotating a parent node also affects all children automatically. I'm also having a hard time imagining what such an XY/YZ/XZ layout would look like. Wouldn't that end up with GUI elements behind each other? In what situations is that useful?

You are still thinking in the traditional “gui hanging in front of you and doesn’t move” way. Lemur allows you to continue to use gui elements right in your world.

You can also do it by manually positioning everything but I prefer to let layouts do the work for me in situations where they apply.

The grid of models/statues is still the best easy example I can come up with. If you did that in x,y space then all of your models would have to be rotated Z-up so that when you rotate them into x,z they are pointing up again. Instead you can just put them in an x,z grid and let it lay out. Maybe in front of them you have child containers with x,y grids of buttons floating in front of them. Those containers sit in the original x,z grid.

I could come up with more but I’m starting to think it wouldn’t help. :-/

No, I’m fully aware that GUI elements can sit in the game world, and that they can have X, Y, and Z coordinates. I’m also aware that one can easily have stuff like floating buttons that way.
I’m thinking local coordinates though. With the proper transform, XY is still the same, regardless of whether the GUI is in the guiNode or splatted onto a Mesh (that’s a nice idea - it allows having a GUI on top of crumpled paper and such). Essentially, XY would be equivalent to texture coordinates.
I do see a value in having a Z coordinate in that model - you can have floating buttons that way. I can see use cases for that even if I don’t have them.
What I don’t see is a use case for managing and assigning Z coordinates via a layout. I’d simply assign a Z of -1.0 to each button, which would give it a Z offset from its parent Node. Hm, okay - if I have multiple stacks of components, which might have different numbers of levels and hence different overall Z sizes, and I want a layer floating at a fixed distance above the highest component (whichever it is), then a layout does help.
Is that the kind of use case you were having in mind?