[SOLVED] Bullet Memory Footprint

Nah, definitely not :smiley: and that's actually why it matters that it's faster than a frame, because it means the result will already be available in the next frame

It is too complex to keep it short, especially since I always tend to write too-long answers, but I'll try to point out the most important aspects:
EDIT: roflmaocopterlol, I failed hard

My block world is single-threaded; that means block changes, as well as telling it which chunks to load / mesh / build collision shapes for etc., have to happen on one thread (it's not the jME main thread, but it's the same design).
To clarify the terminology: by “column” I mean a slice of 32 x MAP_HEIGHT x 32 blocks; each column is actually made of vertically stacked “chunks” of size 32x32x32 (or sometimes 32x16x32, I'm still not 100% sure which performs better).
Tasks like generation, light calculation, mesh creation, collision shape creation and damage propagation are offloaded to executors, but I keep flags for the state each chunk is in. Once a chunk is done with a specific task (creating the mesh, for example) I enqueue a callback to the block world thread so it can update its state and tick the chunk (and eventually its neighbours), so they can check whether they can do further calculations.
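A minimal sketch of that callback pattern (all names here, like ChunkStage or drainCallbacks, are made up for illustration, not taken from the actual code):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ChunkPipelineSketch {

    // Stages a chunk can be in (hypothetical).
    enum ChunkStage { EMPTY, GENERATED, LIT, MESHED }

    static class Chunk {
        volatile ChunkStage stage = ChunkStage.EMPTY;
    }

    // Worker pool that does the heavy lifting off the block-world thread.
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // The single block-world thread drains this queue once per tick.
    private final ConcurrentLinkedQueue<Runnable> worldThreadCallbacks = new ConcurrentLinkedQueue<>();

    void startMeshing(Chunk chunk) {
        workers.submit(() -> {
            Object mesh = buildMesh(chunk);            // expensive work, off the world thread
            worldThreadCallbacks.add(() -> {           // report back to the world thread
                chunk.stage = ChunkStage.MESHED;       // only the world thread mutates chunk state
                attachMesh(chunk, mesh);
                tickNeighbours(chunk);                 // neighbours may now be able to proceed
            });
        });
    }

    // Called once per tick on the block-world thread.
    void drainCallbacks() {
        Runnable callback;
        while ((callback = worldThreadCallbacks.poll()) != null) {
            callback.run();
        }
    }

    private Object buildMesh(Chunk chunk) { return new Object(); }   // placeholder
    private void attachMesh(Chunk chunk, Object mesh) { }            // placeholder
    private void tickNeighbours(Chunk chunk) { }                     // placeholder
}
```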

By the way, generation (that is, creating the block data in memory) happens on a per-column level and is subdivided into stages: basic generation creates data that can be generated per block, independent of neighbour blocks; once a column is surrounded only by columns that are at least basic-generated, complex generation kicks in and places complex structures like trees that can reach into neighbour chunks; once a column is complex-generated it gets post-generated, which is just some cleanup (single floating blocks are removed, grass blocks that have opaque blocks on top turn into dirt, etc.).
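Roughly, the gating between those stages could look like this (hypothetical names; I'm assuming the check is against the 8 horizontally surrounding columns):

```java
public class ColumnGenerationSketch {

    // Generation stages of a column, in order (hypothetical names).
    enum GenStage { NONE, BASIC, COMPLEX, POST }

    static class Column {
        GenStage stage = GenStage.NONE;
        final Column[] neighbours = new Column[8];  // the 8 surrounding columns, filled in elsewhere
    }

    // Called on the block-world thread whenever a column or one of its neighbours changes stage.
    void tick(Column c) {
        if (c.stage == GenStage.BASIC && allNeighboursAtLeast(c, GenStage.BASIC)) {
            // trees etc. may reach into neighbours, so those must at least have basic data
            startComplexGeneration(c);
        } else if (c.stage == GenStage.COMPLEX) {
            // cleanup pass: remove floating blocks, turn covered grass into dirt, ...
            startPostGeneration(c);
        }
    }

    private boolean allNeighboursAtLeast(Column c, GenStage min) {
        for (Column n : c.neighbours) {
            if (n == null || n.stage.ordinal() < min.ordinal()) {
                return false;
            }
        }
        return true;
    }

    private void startComplexGeneration(Column c) { /* enqueue on the worker pool */ }
    private void startPostGeneration(Column c) { /* enqueue on the worker pool */ }
}
```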

Once this data generation is done for a column, light calculation takes place. First the basic pass calculates column-inner light (flood-fill sunlight from the top, spread light from light-emitting blocks, but don't cross column borders); once a column is surrounded only by columns whose basic light is calculated, the complex pass takes place and floods the outermost blocks' light into the neighbour columns.
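For the basic (column-inner) pass, a plain BFS flood fill is the usual approach. A rough sketch, assuming 4-bit light levels and a 32 x MAP_HEIGHT x 32 column; this is not claimed to match the actual implementation:

```java
import java.util.ArrayDeque;

public class SunlightSketch {

    static final int SIZE = 32;       // column footprint
    static final int HEIGHT = 256;    // MAP_HEIGHT, assumed value
    static final int MAX_LIGHT = 15;

    // Basic pass: flood sunlight downwards and outwards, but never cross column borders.
    static void floodSunlight(byte[][][] light, boolean[][][] opaque) {
        ArrayDeque<int[]> queue = new ArrayDeque<>();

        // Seed: every non-opaque block in the top layer gets full sunlight.
        for (int x = 0; x < SIZE; x++) {
            for (int z = 0; z < SIZE; z++) {
                if (!opaque[x][HEIGHT - 1][z]) {
                    light[x][HEIGHT - 1][z] = MAX_LIGHT;
                    queue.add(new int[]{x, HEIGHT - 1, z});
                }
            }
        }

        int[][] dirs = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            int level = light[p[0]][p[1]][p[2]];
            for (int[] d : dirs) {
                int x = p[0] + d[0], y = p[1] + d[1], z = p[2] + d[2];
                // stay inside this column; spreading into neighbours is the "complex" pass
                if (x < 0 || x >= SIZE || y < 0 || y >= HEIGHT || z < 0 || z >= SIZE) continue;
                if (opaque[x][y][z]) continue;
                // full sunlight travels straight down without loss, otherwise it loses 1 per step
                int next = (d[1] == -1 && level == MAX_LIGHT) ? MAX_LIGHT : level - 1;
                if (next > light[x][y][z]) {
                    light[x][y][z] = (byte) next;
                    queue.add(new int[]{x, y, z});
                }
            }
        }
    }
}
```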

Once a column is fully lit, both mesh generation and collision shape generation are started for each of its chunks, because neither requires the other's work to be done first. Mesh generation and collision shape generation take place on chunk level.

All of this is dynamic in that those “states” I mentioned don't only hold information about what state a chunk is in, but also what state it should be in. That means when a chunk is ticked because its column has finished generation (including light), but it should no longer be in the meshed state, meshing won't be started.
That state keeping happens on column level too, so once a column has finished data generation and no longer needs to be loaded, it won't even start light calculation.
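A tiny sketch of that current-state vs. desired-state idea (names and stages are invented for illustration):

```java
public class ChunkStateSketch {

    // Ordered stages; "desired" and "current" are compared by ordinal (hypothetical names).
    enum Stage { UNLOADED, GENERATED, LIT, MESHED }

    static class Chunk {
        Stage current = Stage.UNLOADED;   // what the chunk actually is
        Stage desired = Stage.UNLOADED;   // what it should be, derived from range / occlusion culling
        boolean busy;                     // a worker task is already running for this chunk
    }

    // Ticked on the block-world thread; only starts the next step if it is still wanted.
    void tick(Chunk c) {
        if (c.busy || c.desired.ordinal() <= c.current.ordinal()) {
            return;  // nothing to do, or the work is no longer wanted
        }
        switch (c.current) {
            case UNLOADED:  start(c, Stage.GENERATED); break;
            case GENERATED: start(c, Stage.LIT);       break;
            case LIT:       start(c, Stage.MESHED);    break;
            default: break;
        }
    }

    private void start(Chunk c, Stage next) {
        c.busy = true;
        // submit to the worker pool; the completion callback sets current = next,
        // clears busy and ticks the chunk (and its neighbours) again
    }
}
```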

The need for collision shapes and the need for meshes are independent, too: the need for meshes is based on occlusion culling that checks which chunks could actually be seen, while for collision shapes I just assume all chunks within a specified range need to be in the physics space, because physics happens behind the player, too.
The need for a chunk's data to be loaded is also just based on whether it's within a specified range; that is because it's multiplayer and the player needs to be informed if a block changes in one of those chunks, even if the player cannot see the chunk because it's in a cave, for example.
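So the three “needs” are derived independently, something like this (ranges and names are made up; visibleFromCamera would come from the asynchronous occlusion-culling pass):

```java
public class ChunkNeedsSketch {

    static class Needs {
        boolean data;       // keep block data in memory
        boolean collision;  // keep a rigid body in the physics space
        boolean mesh;       // keep a mesh attached / visible
    }

    static final int DATA_RANGE = 8;     // assumed radii, in chunks
    static final int PHYSICS_RANGE = 2;

    // dx/dy/dz: chunk offset from the player's chunk.
    static Needs computeNeeds(int dx, int dy, int dz, boolean visibleFromCamera) {
        Needs n = new Needs();
        int distSq = dx * dx + dy * dy + dz * dz;
        // data is range based only, so block changes can be sent to the client even in caves
        n.data = distSq <= DATA_RANGE * DATA_RANGE;
        // physics also happens behind the player, so this is range based too
        n.collision = distSq <= PHYSICS_RANGE * PHYSICS_RANGE;
        // meshes are only needed for chunks the occlusion culling says could be seen
        n.mesh = visibleFromCamera;
        return n;
    }
}
```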

Again, all this generation and calculation is done in a thread pool (I actually have an interface for this and two implementations: one that puts everything into a single thread pool backed by a priority queue, and another that has several thread pools, one each for generation, light calculation, mesh generation, collision shape generation and damage propagation). So the actual block world thread only keeps track of the states and, once a chunk's state changes, kicks off some creation / generation or ticks some chunks, etc.
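A sketch of what those two implementations behind one interface could look like in plain java.util.concurrent terms (interface and class names are invented; pool sizes are arbitrary):

```java
import java.util.EnumMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WorkerPoolsSketch {

    enum TaskType { GENERATION, LIGHT, MESH, COLLISION_SHAPE, DAMAGE_PROPAGATION }

    /** Common interface; the block-world thread only talks to this. */
    interface ChunkTaskExecutor {
        void submit(TaskType type, int priority, Runnable task);
    }

    /** Implementation 1: one shared pool, ordered by a priority queue. */
    static class SharedPriorityPool implements ChunkTaskExecutor {

        // A runnable that can be ordered inside the PriorityBlockingQueue.
        static class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
            final int priority;
            final Runnable delegate;
            PrioritizedTask(int priority, Runnable delegate) {
                this.priority = priority;
                this.delegate = delegate;
            }
            @Override public void run() { delegate.run(); }
            @Override public int compareTo(PrioritizedTask o) {
                return Integer.compare(o.priority, priority); // higher priority value runs first
            }
        }

        private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<>());

        @Override public void submit(TaskType type, int priority, Runnable task) {
            pool.execute(new PrioritizedTask(priority, task)); // execute(), not submit(), keeps the ordering
        }
    }

    /** Implementation 2: one dedicated pool per task type. */
    static class PerTypePools implements ChunkTaskExecutor {
        private final Map<TaskType, ExecutorService> pools = new EnumMap<>(TaskType.class);

        PerTypePools() {
            for (TaskType type : TaskType.values()) {
                pools.put(type, Executors.newSingleThreadExecutor());
            }
        }

        @Override public void submit(TaskType type, int priority, Runnable task) {
            pools.get(type).submit(task); // priority is ignored here, ordering is per type
        }
    }
}
```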

Which chunks are actually visible (that is, have their cull hint set to Never instead of Always) is based on the aforementioned occlusion culling and is offloaded from the jME main thread, but as soon as it's done, the cull hints are set appropriately in the next frame. Similarly, which chunks' rigid bodies are actually in the physics space is based on which chunks are within a specified range around the player.
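Applying such an asynchronous result on the next frame is typically done by enqueueing a callable on the application; a minimal sketch using jME's Application.enqueue and Spatial.setCullHint (the visibility map itself is a hypothetical input):

```java
import java.util.Map;
import java.util.concurrent.Callable;

import com.jme3.app.SimpleApplication;
import com.jme3.scene.Spatial;

public class CullHintSketch {

    /**
     * Called from the occlusion-culling worker once a result is ready.
     * The actual scene-graph change is deferred to the next render frame.
     */
    static void applyVisibility(SimpleApplication app, Map<Spatial, Boolean> visibleByChunk) {
        app.enqueue((Callable<Void>) () -> {
            // runs on the jME main thread at the start of the next frame
            for (Map.Entry<Spatial, Boolean> e : visibleByChunk.entrySet()) {
                e.getKey().setCullHint(e.getValue()
                        ? Spatial.CullHint.Never
                        : Spatial.CullHint.Always);
            }
            return null;
        });
    }
}
```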

There is actually more going on, but this is getting too long already, so I'll cut it here.
The point is that the player will still be moving around a lot, and when a player approaches a chunk I want to skip generation, light calculation, mesh and collision shape generation if possible (that is, if it was previously done already), because redoing it is a waste of time. Since the map is big (again, not huge, but still too big to hold in memory all the time) I want to store as much of the already-calculated data on disk as possible.

Also, the physics performance is awesome (it was already before I optimized the collision shapes as mentioned in the post above, probably because, as you outlined, chunk-based collision shapes are better than column-based collision shapes and it can skip a lot of mesh-accurate collision tests). I can have way more active balls rolling around than I will ever have in game before I notice frame drops due to physics. It's really only about the size, and I want to reduce the memory footprint of the game.

Sorry, I don't get that, do you mean I should create meshes for bigger parts than chunks?

If you made it all the way here, thanks for listening :smiley:

I generally refer to a chunk as an entire column and a cell as a part of the column. So a chunk would be 16 cells tall or whatever.

This is more of a debate than an instruction, but for collision in a blocky world, isn't plain evaluation sufficient? I'm quite sure that's how Minecraft does it. Unlike regular physics, in a block world you have finite possibilities: AABB intersection. That's another story, I guess.

But anyway, all I mean is that your collision mesh cell size does not have to match your chunk cell size. Create a 3x3x3 grid of collision meshes of any size around yourself and page them as you move. They can be 8x8x8 in size or 4x4x4 or any power of 2.
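For illustration, paging a small grid of collision cells around the player could look roughly like this (cell size and the helper methods are placeholders):

```java
public class CollisionPagingSketch {

    static final int CELL_SIZE = 8;  // collision cell size in blocks, independent of the chunk size

    // Grid coordinate of the cell currently containing the player; MIN_VALUE forces the first build.
    private int centerX = Integer.MIN_VALUE, centerY, centerZ;

    // Called every time the player moves; only repages when the player enters a new cell.
    void update(float playerX, float playerY, float playerZ) {
        int cx = (int) Math.floor(playerX / CELL_SIZE);
        int cy = (int) Math.floor(playerY / CELL_SIZE);
        int cz = (int) Math.floor(playerZ / CELL_SIZE);
        if (cx == centerX && cy == centerY && cz == centerZ) {
            return; // still inside the same cell, nothing to page
        }
        centerX = cx; centerY = cy; centerZ = cz;

        // Ensure the 3x3x3 neighbourhood around the player exists; cells that already
        // exist are kept, everything outside the neighbourhood gets dropped.
        for (int x = -1; x <= 1; x++) {
            for (int y = -1; y <= 1; y++) {
                for (int z = -1; z <= 1; z++) {
                    ensureCollisionMesh(cx + x, cy + y, cz + z);
                }
            }
        }
        dropCellsOutside(cx, cy, cz);
    }

    private void ensureCollisionMesh(int x, int y, int z) { /* build if missing */ }
    private void dropCellsOutside(int cx, int cy, int cz) { /* remove far-away cells */ }
}
```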

In regard to not being sure what size your CHUNK cells should be… Sizes are chosen for many reasons, including performance, but network packet sizes and how much you can stuff into a packet are your primary bottleneck. How much you can stuff without wasting data. Sending a packet with virtually nothing in it, and waiting for that data to get transmitted and received. These are the variables to consider for a chunk size. It doesn't matter how efficient your generation algorithm is if it sits there waiting for 9 gigs of data. For every packet added, you multiply it by each cell and must wait to receive it, and for every packet you can remove, you reduce the time required to begin generation. That's the networking side of it: bits and bytes and shorts, and packing them all as tightly as possible.
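To make “packing shorts tightly” concrete, a trivial sketch of flattening a cell's block ids into bytes before any compression (the 16x16x16 size and the flat short[] layout are assumptions, not a recommendation):

```java
import java.nio.ByteBuffer;

public class CellPackingSketch {

    /**
     * Packs a cell's block ids as tightly as a flat short array allows:
     * e.g. 16x16x16 blocks * 2 bytes = 8 KiB before any compression.
     */
    static byte[] pack(short[] blockIds) {
        ByteBuffer buf = ByteBuffer.allocate(blockIds.length * Short.BYTES);
        for (short id : blockIds) {
            buf.putShort(id);
        }
        return buf.array();
    }

    static short[] unpack(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        short[] blockIds = new short[data.length / Short.BYTES];
        for (int i = 0; i < blockIds.length; i++) {
            blockIds[i] = buf.getShort();
        }
        return blockIds;
    }
}
```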

If you look at how tiny a Minecraft chunk is, it's really small. The thought process involved is genuinely educational. Virtually nothing is wasted.

People think Minecraft is simple. It's nothing short of a masterpiece in virtually every field of game design.


Not really. 16x256x16. I guess for some definitions of “small” maybe.

Minecraft had many years to get clever. It did some really stupid stuff for a long time.

I know you’ve traversed this path for quite some time, but this is what happens when you “just get it working”…

Needless to say, it fills hard drives like hoarders fill houses.

And the Minecraft equivalent of a couple weeks of our little family playing:

Not that it matters…
For Mythruna, I used 32x32x32 “chunk” sizes (though I call them ‘leaf’ and not chunk) which is the same number of cells as 16x256x16… but that’s just a coincidence.

I preferred a 3D subdivision because it bugged me to render caves and stuff underground when the player wasn't even looking in that direction.

That was the size that I found was large enough not to have a huge scene graph but small enough to be able to regenerate blocks in a reasonable time frame.

Mythruna definitely uses a worker pool larger than one, though. And as I recall, it autoscales based on the number of CPUs.

A 32x32x32 .leaf is run-length encoded by column before storage… with the block type information split from the masking/lighting data (which is otherwise packed together). I gzip these over the wire, but I believe I still store them raw on disk (just RLE). The plan was always to gzip them for storage too, then I could potentially avoid the gzip before sending over the network, but the benefits were never large enough to bubble it up in priority over other things.
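For anyone unfamiliar with the idea, here is a bare-bones sketch of run-length encoding one vertical column of block ids as (count, id) pairs; this is only illustrative and not Mythruna's actual format:

```java
import java.io.ByteArrayOutputStream;

public class RleColumnSketch {

    /**
     * Run-length encodes one vertical column of block ids as (count, id) pairs.
     * A column that is all air collapses to a single pair.
     */
    static byte[] encodeColumn(byte[] blockIds) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int i = 0;
        while (i < blockIds.length) {
            byte id = blockIds[i];
            int run = 1;
            while (i + run < blockIds.length && blockIds[i + run] == id && run < 255) {
                run++;
            }
            out.write(run);   // run length fits in one unsigned byte
            out.write(id);
            i += run;
        }
        return out.toByteArray();
    }

    static byte[] decodeColumn(byte[] rle, int length) {
        byte[] blockIds = new byte[length];
        int pos = 0;
        for (int i = 0; i < rle.length; i += 2) {
            int run = rle[i] & 0xFF;
            byte id = rle[i + 1];
            for (int j = 0; j < run; j++) {
                blockIds[pos++] = id;
            }
        }
        return blockIds;
    }
}
```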

…and now that’s all two engines ago anyway.