TextureAtlas.makeAtlasBatch() - atlasSize?

Hi :slight_smile:

Two questions:

  1. Since TextureAtlas.makeAtlasBatch(...) returns a single Geometry rather than a Node, am I to understand that it uses the GeometryBatch mechanism under the hood to merge all the geometries into one?

  2. How do I evaluate the atlasSize parameter to pass to TextureAtlas.makeAtlasBatch(...)?
    My scenario:
    I have a scene with multiple geometries, each of them loaded from .obj + .mtl files that use some .png files as textures (for DiffuseMap only).
    If I calculate the total number of pixels in all the .png files and multiply by 4 (bytes per pixel) - would that be a good value for atlasSize?
    Do I have to take mip-maps into account if I generate them on the fly with TextureKey when loading images with the asset manager?

I’ll be grateful for any information on the topic :slight_smile:

Regards, Elg’

Maybe now I will have more luck finding the answers? :slight_smile:

Here is the method I use to calculate texture atlas size for font glyph images.

int maxGlyphSize = the maximum of width and height over all glyph images, in pixels.
int charactersLength = how many characters are in use.

int atlasSize = FastMath.nearestPowerOfTwo((int)Math.sqrt(maxGlyphSize * maxGlyphSize * (double) charactersLength));

No need to calculate mipmap size.
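As a plain-Java sketch of the formula above (class and method names are illustrative; `nearestPowerOfTwo` here rounds up to the next power of two, which as far as I know matches jME's `FastMath.nearestPowerOfTwo`, so the snippet has no jME dependency):

```java
public class AtlasSizeEstimate {

    // Smallest power of two >= number, like jME's FastMath.nearestPowerOfTwo
    static int nearestPowerOfTwo(int number) {
        return number <= 1 ? 1 : Integer.highestOneBit(number - 1) * 2;
    }

    // Square atlas whose area roughly matches maxGlyphSize^2 * charactersLength
    static int estimateAtlasSize(int maxGlyphSize, int charactersLength) {
        return nearestPowerOfTwo(
                (int) Math.sqrt(maxGlyphSize * maxGlyphSize * (double) charactersLength));
    }

    public static void main(String[] args) {
        // e.g. 256 glyphs of at most 32x32 pixels -> 512x512 atlas
        System.out.println(estimateAtlasSize(32, 256));
    }
}
```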

Are you using TextureAtlas.makeAtlasBatch(...) or hand building your atlas?

OP is using the TextureAtlas utility, which may have its own constraints. I don’t think it’s widely used, though, so it may require some code-digging to find good answers.

(I’ve personally never really used it and generally use real 3D editors to do atlases… so I have no experience in this area of JME.)

I build my own atlas, but the algorithm is the same as TextureAtlas’s: guillotine rectangle bin packing.

The algorithm is very fast. You can first estimate an expected atlasSize, then use the algorithm to test whether that atlasSize is large enough. If it is not enough (meaning you would need another page), double the atlasSize and try again.
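The grow-and-retry loop described above could be sketched like this (names are illustrative; the `fits` check here is a naive total-area test standing in for a real packing pass such as the `evaluate` method shown later in this post):

```java
import java.util.List;

public class AtlasSizeFinder {

    /** Doubles atlasSize until every rectangle fits. */
    static int findAtlasSize(int initialSize, List<int[]> sizes) {
        int atlasSize = initialSize;
        while (!fits(atlasSize, sizes)) {
            atlasSize *= 2; // not enough room: grow the page and retry
        }
        return atlasSize;
    }

    // Stand-in check: total pixel area only; a real test should run the
    // guillotine packer, since packing can fail even when the area fits.
    static boolean fits(int atlasSize, List<int[]> sizes) {
        long total = 0;
        for (int[] wh : sizes) {
            total += (long) wh[0] * wh[1];
        }
        return total <= (long) atlasSize * atlasSize;
    }

    public static void main(String[] args) {
        List<int[]> sizes = List.of(new int[]{300, 300}, new int[]{300, 300});
        System.out.println(findAtlasSize(64, sizes)); // needs >= 180000 px of area
    }
}
```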

Sort the images by size and store them in a List:

	// comparator is a Comparator<Rectangle> field of the enclosing class
	public void sort(List<Rectangle> images) {
		if (comparator == null) {
			comparator = Comparator.comparingInt(o -> Math.max(o.getWidth(), o.getHeight()));
		}
		// Largest first, so big images are placed before small ones
		images.sort(comparator.reversed());
	}

public boolean evaluate(int atlasSize, List<Rectangle> sortedImageSizes) {
    Node root = new Node(0, 0, atlasSize, atlasSize);
    for (Rectangle rect : sortedImageSizes) {
        Node result = root.insert(rect);
        if (result == null) {
            // means there is no free area left for this image
            return false;
        } else {
            result.occupied = true;
        }
    }
    return true;
}
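The `Node.insert()` used above is not shown in the snippet, so here is a minimal guillotine-style node as I understand the technique (field and method names are my guesses; I pass width/height directly instead of a `Rectangle` to keep it self-contained):

```java
public class GuillotineNode {
    final int x, y, width, height;     // free region this node covers
    GuillotineNode left, right;        // children after a split
    boolean occupied;

    GuillotineNode(int x, int y, int width, int height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    /** Finds a free node large enough for (w, h), splitting the leftover
     *  space guillotine-style; returns null when nothing fits. */
    GuillotineNode insert(int w, int h) {
        if (left != null) {                        // internal node: recurse
            GuillotineNode r = left.insert(w, h);
            return (r != null) ? r : right.insert(w, h);
        }
        if (occupied || w > width || h > height) {
            return null;                           // taken or too small
        }
        if (w == width && h == height) {
            return this;                           // perfect fit
        }
        // Split the free space along the longer leftover axis
        int dw = width - w;
        int dh = height - h;
        if (dw > dh) {
            left = new GuillotineNode(x, y, w, height);
            right = new GuillotineNode(x + w, y, dw, height);
        } else {
            left = new GuillotineNode(x, y, width, h);
            right = new GuillotineNode(x, y + h, width, dh);
        }
        return left.insert(w, h);
    }
}
```

The caller marks the returned node `occupied`, exactly as the `evaluate` loop above does.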

200 rectangles take about 0.4 s to calculate, with padding = 1.

5000 rectangles take about 1 s.

The original algorithm idea comes from this page:


This is how jme3 TextureAtlas works:

And libgdx does it the same way:

Another implementation in C++:


Yes, they are combined into one geometry.


Thank you @yan! Your insight is very helpful <3
Thanks @pspeed as well.

This does not seem to ensure a sufficient size…

… but as I understand, you already know that:

Maybe I will just go for something like this instead:
int atlasSize = ceil(sqrt(charactersLength)) * maxGlyphSize
I think this ensures the required space.
(I might then raise the value to the nearest power of 2, though I’m not sure what I gain by that?)
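For the record, here is that upper-bound estimate in Java form (names are illustrative):

```java
public class SafeAtlasSize {

    /** Upper bound: a square grid of ceil(sqrt(n)) cells per side, each cell
     *  big enough for the largest glyph, so every glyph is guaranteed a slot. */
    static int atlasSize(int charactersLength, int maxGlyphSize) {
        return (int) Math.ceil(Math.sqrt(charactersLength)) * maxGlyphSize;
    }

    public static void main(String[] args) {
        System.out.println(atlasSize(200, 32)); // 15 * 32 = 480
    }
}
```

As an aside, rounding up to a power of two mainly helps with mipmap generation and compatibility with older GPUs, which expect power-of-two texture dimensions.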

Anyhow, the conclusion is: in order to use the TextureAtlas class I need to preprocess my data first. That is, iterate through all the geometries, all the materials, and all the textures, and build a unique list of the textures, just to evaluate the data size needed by the atlas.