Efficient partial image updating

I am very happy updating my procedurally generated images by writing to a ByteBuffer and calling setData on the image, after which the modifications appear immediately.

However, this can only be done on a “whole” image basis, unless there is something I have missed?

I have been looking at…

com.jme3.renderer.Renderer.modifyTexture(Texture tex, Image pixels, int x, int y)

While trying to determine the most efficient way to update parts of an image, I noticed that this function could also be used to update the whole image. My question is…

To perform a partial update, is it more efficient to modify an area of the complete image’s ByteBuffer and re-upload its data, or to create a new image that encapsulates only the update area and use the function above? My concern is that the new image might be uploaded to the card, consuming additional memory, and what that would cost.
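To make the first option concrete, here is a plain-Java sketch of the buffer arithmetic involved (independent of the jME API; the RGBA8 layout and all names here are assumptions for illustration):

```java
import java.nio.ByteBuffer;

public class RegionUpdate {
    static final int BPP = 4; // bytes per pixel for an assumed RGBA8 image

    /** Overwrites a w*h sub-rectangle at (x, y) inside the full image buffer. */
    static void writeRegion(ByteBuffer image, int imageWidth,
                            int x, int y, int w, int h, byte[] newPixels) {
        for (int row = 0; row < h; row++) {
            // Absolute byte offset of this row of the region in the full buffer.
            int offset = ((y + row) * imageWidth + x) * BPP;
            image.position(offset);
            // Copy one region row at a time (w pixels) instead of per-pixel puts.
            image.put(newPixels, row * w * BPP, w * BPP);
        }
        image.rewind();
    }

    public static void main(String[] args) {
        int imgW = 8;
        ByteBuffer img = ByteBuffer.allocate(imgW * 8 * BPP); // all zero

        byte[] patch = new byte[2 * 2 * BPP];
        java.util.Arrays.fill(patch, (byte) 0xFF); // a 2x2 white patch

        writeRegion(img, imgW, 3, 4, 2, 2, patch);

        System.out.println(img.get((4 * imgW + 3) * BPP) & 0xFF); // 255 (inside region)
        System.out.println(img.get(0) & 0xFF);                    // 0 (untouched)
    }
}
```

After modifying the buffer in place, the image would still be re-uploaded whole via setData, so the saving is only on the CPU side.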

Are images only uploaded to the card if they are referenced by a texture, or do they incur that cost from the moment of creation?

If this information is not readily known I am happy to create some performance tests.

I look forward to responses discussing best practice.

Regards

I have found this thread, which includes a function like the one I am used to. I cannot find this function in the current javadoc, so I presume it is from JME2?

We have an ImageRaster class that allows you to write pixels at arbitrary coordinates (look for DefaultImageRaster). It handles the buffer writing for you and supports several image formats.
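A minimal sketch of how that might look (assuming an existing RGBA8 jME3 Image named image; the explicit setUpdateNeeded() call is a precaution and may be redundant if the raster already flags the image):

```java
import com.jme3.math.ColorRGBA;
import com.jme3.texture.Image;
import com.jme3.texture.image.ImageRaster;

public class RasterPaint {
    /** Paints a 16x16 red square at (32, 32) into an existing jME3 Image. */
    static void paintSquare(Image image) {
        ImageRaster raster = ImageRaster.create(image);
        for (int y = 32; y < 48; y++) {
            for (int x = 32; x < 48; x++) {
                // The raster handles the per-format byte encoding internally.
                raster.setPixel(x, y, ColorRGBA.Red);
            }
        }
        image.setUpdateNeeded(); // ask for a re-upload of the image data
    }
}
```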

Yes, and only if you use them in a material.

Performance-wise, setting the bytes yourself will be faster. If ImageRaster works fast enough, it is convenient to use.

I have not tried Renderer.modifyTexture, so I can’t say much about it. I wonder if this method sends just the new data to the GPU; if so, it might be faster.

There’s also an “ImagePainter” plugin in the SDK plugin repo (install via Tools->Plugins or get the jar from jmonkeyplatform-contributions.googlecode.com) which allows you to directly “paint” an image into another image. But of course it also “just” uses the ImageRaster in the background.

Thanks for your help!

ImageRaster seems the way to go. Modifying my image generators to write directly using an ImageRaster seems ideal and inexpensive.

I think I will also develop a batch processor to write ByteBuffers into ImageRaster as a matter of convenience.

Once again, many thanks.

Hi The_Leo,

Isn’t ImageRaster the only way to write parts of an image?

You say “setting the bytes yourself will be faster” - I am wondering how this is done without using ImageRaster?

Regards

Well, you can alter the image’s ByteBuffer directly using the put method.

image.getData() returns the list of ByteBuffers used for the image. For a conventional 2D image you’ll have one ByteBuffer.

It can be faster than ImageRaster because you can write in chunks, meaning bb.put(array) instead of bb.put(value) many times.
But you have to handle the buffer writing, and encode the bytes yourself.
IMO it’s not worth the hassle, unless you change the image on every frame. But if you do change the image on every frame, I would rather go the sprite sheet or texture atlas route and have the change occur in the shader. That’ll be a lot more efficient, and may not consume much more memory.
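Encoding the bytes yourself just means packing the colour channels in the order the image’s Format expects. A minimal sketch for an assumed RGBA8 layout (the channel order and names here are illustrative; check your Image’s actual Format):

```java
import java.nio.ByteBuffer;

public class PixelEncode {
    /** Packs normalized [0..1] RGBA floats into 4 bytes, RGBA8-style. */
    static void encodeRGBA8(ByteBuffer bb, float r, float g, float b, float a) {
        bb.put((byte) (r * 255f))
          .put((byte) (g * 255f))
          .put((byte) (b * 255f))
          .put((byte) (a * 255f));
    }

    public static void main(String[] args) {
        ByteBuffer bb = ByteBuffer.allocate(4);
        encodeRGBA8(bb, 1f, 0.5f, 0f, 1f); // orange, fully opaque
        System.out.println(bb.get(0) & 0xFF); // 255
        System.out.println(bb.get(1) & 0xFF); // 127
    }
}
```

This is exactly the bookkeeping ImageRaster does for you, which is why it is the more convenient option for occasional updates.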

Ok, thank you for that. That’s really helpful.

Typically, updating images in real time is considered a no-no due to its inefficiency. There are many other ways of accomplishing this that do not require updating an image.
Classes like ImageRaster and ImagePainter would work, but they are really intended for offline usage. Renderer.modifyTexture() is a special case - it is only used by Nifty GUI’s atlasing algorithm, where it is actually required.


If only PCs had a unified memory architecture like the PS4 and XBox One ^^

AMD APUs actually have one, but it’s not exposed through the driver :stuck_out_tongue:

In that specific case, unified memory would also bring a synchronization nightmare and constant pipeline stalls, or you would still have to double/triple-buffer your data.

You can play around with it already :slight_smile: glMapBufferRange with GL_MAP_COHERENT_BIT set

Not really; everything happens sequentially anyway. You’d only avoid clogging the PCIe bus with uploads of the changed data.

Nope, it’s only sequential as long as you let the driver sync it, and I have not yet found any spec that guarantees pushed commands execute in the same order you issue them. As soon as you modify the memory directly, which you would do with a coherently mapped buffer or with a memcpy to unified memory, you bypass the driver and are required to fence yourself, or you end up with possibly corrupted data.

Besides that, the CPU and GPU are out of sync by definition; if you need direct synchronized VRAM access, you force a pipeline stall, and as far as frame time goes you’re screwed.

Well, it’s no worse than memory-mapped files, and handling those is fairly simple. After all, you don’t need to use it for everything, only for the things where it really pays off and causes no problems.
So all static assets would benefit from it without any problems at all.

Right now there’s an update call on the CPU and then a render call on the GPU; I don’t see what you’re talking about.

Hi, I am interested in looking at Sprite, SpriteSheet, and SpriteManager, but I couldn’t find them in the javadoc or get any hits in the search. I presume these are from JME2?

Although I am quite happy updating images using the ByteBuffer, nehon suggested it would be much more efficient to use a sprite sheet and a shader to achieve the same effect.

I would really appreciate some help in what classes I should be looking at.

Regards

There is a sprite shader in the shaderblow plugin: http://wiki.jmonkeyengine.org/doku.php/sdk:plugin:shaderblow

That should give you all the info you need, plus a working shader…
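Whichever shader you end up with, the core of the sprite-sheet approach is plain texture-coordinate math. A hypothetical Java helper (assuming a regular grid atlas and OpenGL-style v increasing upward, both assumptions) illustrates the calculation the shader effectively performs:

```java
import java.util.Arrays;

public class SpriteUV {
    /** Computes the UV rectangle of frame 'index' in a cols x rows sprite sheet. */
    static float[] frameUV(int index, int cols, int rows) {
        int col = index % cols;   // frames numbered left-to-right,
        int row = index / cols;   // top-to-bottom
        float w = 1f / cols, h = 1f / rows;
        // v is measured from the bottom of the texture, OpenGL-style.
        float vMin = 1f - (row + 1) * h;
        return new float[] { col * w, vMin, col * w + w, vMin + h }; // {uMin, vMin, uMax, vMax}
    }

    public static void main(String[] args) {
        // Frame 5 in a 4x4 sheet sits at column 1, row 1 (second row from the top).
        System.out.println(Arrays.toString(frameUV(5, 4, 4))); // [0.25, 0.5, 0.5, 0.75]
    }
}
```

Switching frames then only means passing a new offset or index to the material, with no image upload at all.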