I've got a few questions about textures and how they are handled; I was hoping someone could help.
I've read that OpenGL textures should have dimensions that are powers of two, but JME3 accepts textures of any size. How does it actually handle that, or is the power-of-two restriction obsolete?
I've been finding a few problems with my nifty texture rendering solution (http://hub.jmonkeyengine.org/groups/contribution-depot-jme3/snippets/31/)… it's a bit slow, and it fails completely on Android for some reason… So I've been thinking about just painting into the texture. I am aware of http://hub.jmonkeyengine.org/groups/contribution-depot-jme3/snippets/55/ but as far as I can see that only allows drawing of basic primitives. I really need to be able to draw images, draw images at various opacities and image modes (ideally rotating images - nifty didn't let me do this), and draw text into the texture. Is there anything already out there to draw images and text into textures? Are there any helper classes for rendering bitmap text?
I'm trying to decide whether to spend time on fixing the nifty thing for Android or just rip it out and write my own custom texture-drawing code.
It's not really obsolete, as some GPUs really don't support non-power-of-two textures. But JME rescales them now, so effectively you don't run into issues with that.
You'd have to create the painting tools yourself, but generic pixel painting algorithms are easily found on the web. Somebody also integrated the "processing" painting library with JME. Generally the issue is getting changes in the data into the GPU quickly after painting in CPU memory space… This is always a relatively resource-intensive process, and GPUs are not made for pushing data through but for doing parallel computations on existing data sets in their memory. If you want to go for "sprite-like" animation then it's probably best to do so with a texture atlas of all sprites and a shader. If you need to generate dynamic content for the textures routinely, you will have to think about the bandwidth issue and optimize how and when you upload the images.
@normen said:
1) It's not really obsolete, as some GPUs really don't support non-power-of-two textures. But JME rescales them now, so effectively you don't run into issues with that.
2) You'd have to create the painting tools yourself, but generic pixel painting algorithms are easily found on the web. Somebody also integrated the "processing" painting library with JME. Generally the issue is getting changes in the data into the GPU quickly after painting in CPU memory space… This is always a relatively resource-intensive process, and GPUs are not made for pushing data through but for doing parallel computations on existing data sets in their memory. If you want to go for "sprite-like" animation then it's probably best to do so with a texture atlas of all sprites and a shader. If you need to generate dynamic content for the textures routinely, you will have to think about the bandwidth issue and optimize how and when you upload the images.
Thanks for that, very helpful.
The question about powers of two was more that, since I'm generating the textures anyway, I might as well generate them at the scale where they will be used… although having said that, I'm mapping the texture onto a surface that's 5*3.2 world units, so working at 512*320 pixels makes things simple. Otherwise I'd need to work at 512*512 and stretch all my images and text by 50% width-wise as I build the image.
So JME3 takes a 512*320 input texture and, if the graphics card doesn't support that, stretches it to 512*512 using a suitable filtering algorithm before sending it to the graphics card?
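For reference, rounding a dimension up to the next power of two is simple enough; this is a generic sketch of the idea, not necessarily JME3's actual code:

```java
public class PowerOfTwo {
    // Round a texture dimension up to the nearest power of two.
    // Generic sketch only; the engine's actual resize/filter step is separate.
    static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) {
            p <<= 1;
        }
        return p;
    }

    public static void main(String[] args) {
        // A 512*320 texture would end up as 512*512.
        System.out.println(nextPowerOfTwo(320)); // 512
        System.out.println(nextPowerOfTwo(512)); // 512
    }
}
```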
The textures only change infrequently, so I'm not overly worried about update performance (although equally I don't want it to be slow either)… in particular, there are cases where people might load a lot of these at once, so loading performance does start to become an issue. Certainly I don't see sending them to the graphics card as being the biggest overhead, although I may be wrong.
Pixel painting is easy; it was more the bitmap text stuff where any existing code that could be reused would be very helpful. Having to parse all the font files myself, etc., would massively expand the scope of this project!
Another question: I've just been looking at the Image and Texture classes in JME3, and there doesn't seem to be a way to read per-pixel data out of them easily. The Image class does have methods that seem to let me access the data, but it can store it in any number of formats and I'd have to handle all of that myself :s
Essentially I'm looking for a platform-independent thing (i.e. both Android and real Java) that I can load a jpg/png/whatever into using the asset manager, and from which I can then read pixel data (either as float or byte - that's not a problem).
@zarch said:
Essentially I'm looking for a platform independent thing (i.e. both Android and real Java) that I can load a jpg/png/whatever into using the asset manager and from which I can then read pixel data (either as float or byte - that's not a problem).
I thought you saw this already: http://hub.jmonkeyengine.org/groups/contribution-depot-jme3/snippets/14/
@normen said:
I thought you saw this already: http://hub.jmonkeyengine.org/groups/contribution-depot-jme3/snippets/14/
That's the other way around. I found a way to paint into a texture easily enough. I want to be able to paint source textures into the texture, though (i.e. image.paint(int x, int y, Image source, float opacity))… which means I need a way to load the png/jpg and then access the data from it in a useful format.
BufferedImage is where I would traditionally go for this, but it's not supported on Android, so I'd need an abstraction layer and separate Android and other implementations… or I could build on what is (presumably) already present in JME3 to load images from disk into memory in a usable form…
Looking at the Image class, it has data, which is a ByteBuffer. I can query the ByteBuffer (I'm guessing - the javadoc doesn't explain any of these methods) by doing image.getData().get(y).get(x)… but that gives me a byte, so I guess I actually need to do x*4 and then +0,1,2,3 for r,g,b,a…
but then there are Image.Format, image.getDepth, etc. Do any of these impact that? Image.Format lists 30+ different formats, all with different numbers of bits per pixel. So do I need to read and understand that format, use it to determine which bytes to read… and extract the data from those bytes?
Or is that already done for me, with the image data just containing RGBA… i.e. is the format only used on loading?
None of this is explained in the javadoc, or in a quick look at the code…
@normen said:
Look at the link I posted. It sets RGB data to the image. Do the same in inverse to get image data, and then copy from one image to another like that.
Really? Do the inverse? Of which bit? A constructor? If you just answered any of my questions instead of pointing me at code I've already read and that I've already explained why it isn't helpful then maybe that would help...
You mean this bit?
[java]
// set data to texture
ByteBuffer buffer = BufferUtils.createByteBuffer(data);
image = new Image(Format.RGBA8, width, height, buffer);
texture = new Texture2D(image);
texture.setMagFilter(Texture.MagFilter.Nearest);[/java]
The bit that creates an image in Format.RGBA8. Fine. So how do I take an existing Image and ask for the data in Format.RGBA8? All it has is getData()… What happens if the incoming image to my code is in Format.SomethingWeird? Or is every image (whether png, jpg, or whatever) already in RGBA8 when loaded?
This creates a single ByteBuffer from the data and sets it into the image… but image.getData() returns an array of ByteBuffers, not a single ByteBuffer. So what is the mapping between the two? Is it split into the array by line, by pixel, or by something else?
Yes, I can go and hack at things and probably work out something that will work in some cases, but I'd quite like to actually know how to do it properly so I can be fairly confident my code won't suddenly fall over at some point. I'll even do a patch for the class with some javadoc explaining what's going on, but I need to know it myself first!
This link here, the one I posted, not your own links: http://hub.jmonkeyengine.org/groups/contribution-depot-jme3/snippets/14/
[java]public void setPixel(int x, int y, Color color) {
    int i = (x + y * width) * 4;
    data[i]     = (byte) color.getRed();   // r
    data[i + 1] = (byte) color.getGreen(); // g
    data[i + 2] = (byte) color.getBlue();  // b
    data[i + 3] = (byte) color.getAlpha(); // a
}[/java]
-->
[java]
public ColorRGBA getPixel(int x, int y) {
int i = (x + y * width) * 4;
return new ColorRGBA(data[i], data[i + 1], data[i + 2], data[i + 3]);
}
[/java]
Yes, I looked at the code from that link. Three times. It's the same code I quoted in my post, although you chose a different method - a method which CANNOT be reversed as simply as you suggest.
The setPixel method does not modify an Image. It modifies an internal byte[] array. That array is then wrapped in a ByteBuffer and passed to setData by the getTexture() method.
You may mean to reverse the setData using getData, but if so then, as I've already said repeatedly, there is no data[] array or even a simple ByteBuffer on the Image object, com.jme3.texture.Image: http://hub.jmonkeyengine.org/javadoc/com/jme3/texture/Image.html
The closest thing is getData(), which returns an [java]ArrayList<ByteBuffer>[/java] - which is a long way from byte[]; quite apart from anything else, it's a two-dimensional format!
You also haven't addressed my main question, which is whether the internal format of that ByteBuffer varies depending on what Image.Format was set on the Image when it was created, or whether that conversion has already happened by the point I query it, in which case I can assume the format is RGBA8.
Image.Format describes the format of the data in the ByteBuffer.
It is not really straightforward to modify the raw data of a JME Image because you have to adapt to the different Formats. If you already know what format you want to support then it's a lot easier.
And images are never really stored in 2D; it's almost universally a flattened 1D buffer where the next row starts after the previous row ends.
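In other words, the index math for a flattened row-major image is fixed; a minimal sketch (the helper name is mine, not a JME API):

```java
public class PixelOffset {
    // Row-major flattened layout: row y starts at y * width * bpp,
    // so pixel (x, y) starts at byte index (x + y * width) * bpp.
    static int offset(int x, int y, int width, int bytesPerPixel) {
        return (x + y * width) * bytesPerPixel;
    }

    public static void main(String[] args) {
        // In a 512-wide RGBA8 image, row 0 occupies bytes 0..2047,
        // so pixel (0, 1) starts at byte 2048.
        System.out.println(offset(0, 1, 512, 4)); // 2048
    }
}
```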
@pspeed said:
Image.Format describes the format of the data in the ByteBuffer.
It is not really straightforward to modify the raw data of a JME Image because you have to adapt to the different Formats. If you already know what format you want to support then it's a lot easier.
And images are never really stored in 2D; it's almost universally a flattened 1D buffer where the next row starts after the previous row ends.
Thank you! Finally! That was what I feared and was why I was starting to think I was going crazy when that part of my question was just totally ignored.
I'll take a look at ImageToAwt, that sounds very helpful.
I’m happy to fix my destination image in RGBA8 - or is there any reason I shouldn’t be?
My worry was what happens with the incoming images. However:
[java]
/**
 * Convert the image from the given format to the output format.
 * It is assumed that both images have buffers with the appropriate
 * number of elements and that both have the same dimensions.
 *
 * @param input
 * @param output
 */
public static void convert(Image input, Image output){
[/java]
This looks very helpful. If any incoming image is not in RGBA8 then I can use this to convert it to RGBA8. (Ideally this would be done once on loading the source image).
Now what's interesting is that these convert() functions just look at getData(0)… so it seems that none of my previous theories were right, and that the various getData(int) entries are actually different versions of the same image - mipmaps?
So getData(0) gives me a ByteBuffer. If the image is RGBA8 then the data in that ByteBuffer is 4 * width * height bytes, with RGBA packed sequentially.
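Putting that together, reading a pixel back out of such a buffer would look something like this (a plain java.nio sketch; getPixelRGBA8 is my own name, not a JME3 method, and the 0xff masks are needed because Java bytes are signed):

```java
import java.nio.ByteBuffer;

public class Rgba8Reader {
    // Read pixel (x, y) from a flattened RGBA8 buffer, returning the
    // channels as floats in 0..1. The & 0xff masks matter because
    // Java bytes are signed.
    static float[] getPixelRGBA8(ByteBuffer buf, int x, int y, int width) {
        int i = (x + y * width) * 4;
        return new float[] {
            (buf.get(i)     & 0xff) / 255f, // r
            (buf.get(i + 1) & 0xff) / 255f, // g
            (buf.get(i + 2) & 0xff) / 255f, // b
            (buf.get(i + 3) & 0xff) / 255f  // a
        };
    }

    public static void main(String[] args) {
        // 2x1 image: an opaque red pixel followed by a half-transparent white one.
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {
            (byte) 255, 0, 0, (byte) 255,
            (byte) 255, (byte) 255, (byte) 255, (byte) 128
        });
        float[] red = getPixelRGBA8(buf, 0, 0, 2);
        System.out.println(red[0]); // 1.0 (full red)
    }
}
```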
[java]
private static int readPixel(ByteBuffer buf, int idx, int bpp){
buf.position(idx);
int original = buf.get() & 0xff;
while ((--bpp) > 0){
original = (original << 8) | (buf.get() & 0xff);
}
return original;
}
[/java]
So it's reading the bytes sequentially and packing them into an int… that confirms my theory on the format - although I do wonder how that would work with a format that has more than 8 bits per channel?
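To check my understanding of that packing, here is the same shift-and-or logic reproduced on a plain byte array (my own sketch, not the JME code). The accumulator is an int, which is exactly why this style of packing would break down for formats wider than 32 bits per pixel, e.g. 16-bits-per-channel RGBA:

```java
public class PackPixel {
    // Reproduce readPixel's shift-and-or packing on a byte array:
    // bytes are consumed in order and packed big-endian into one int.
    static int pack(byte[] data, int idx, int bpp) {
        int value = data[idx] & 0xff;
        while ((--bpp) > 0) {
            idx++;
            value = (value << 8) | (data[idx] & 0xff);
        }
        return value;
    }

    public static void main(String[] args) {
        // An RGBA8 pixel R=0x12, G=0x34, B=0x56, A=0x78 packs to 0x12345678.
        byte[] pixel = { 0x12, 0x34, 0x56, 0x78 };
        System.out.printf("0x%08X%n", pack(pixel, 0, 4)); // 0x12345678
    }
}
```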