Engine Design Choices, geometry size and orientation, center vs upper-left

I must be crazy. So far as I know, the standard(?) convention for geometry is that new Cube(1,1,1) makes a cube a single unit in size, measured from the upper-left (front?) corner.
It took me, I want to say, three or more hours to realize/remember that jMonkeyEngine uses the center of a mesh for all calculations, and (so far as I know and/or could find) there is absolutely no way to change this.

What I was doing, I believed, was simple. I wanted to create a procedure to generate a specified number of rows and columns in the form of nodes with associated meshes. (I’ll be the first to admit that some of this is less than intelligent… please forgive me.)

As follows:

private void createRows(){
    for(int i = 0; i < height; i++){
        Node row = new Node("Row " + i);
        // 2f keeps the division fractional; height/2 would truncate for ints
        Box box = new Box((height / 2f) - 0.5f, 0.5f, 0.5f);
        Geometry rowGeom = new Geometry("Row", box);
        rowGeom.setMaterial(mat); // 'mat' is a Material field initialized elsewhere
        row.attachChild(rowGeom);
        row.setLocalTranslation(new Vector3f(0, i, 0));
        if(i == 0){
            createColumn(row);
        }
        rootNode.attachChild(row);
    }
}

private void createColumn(Node row){
    for(int i = 0; i < width; i++){
        Node col = new Node("Col " + i);
        Box box = new Box(0.5f, (width / 2f) + 1f, 0.5f);
        Geometry colGeom = new Geometry("Column", box);
        colGeom.setMaterial(mat); // same shared Material field
        col.attachChild(colGeom);
        col.setLocalTranslation(new Vector3f(2f - i, 2.5f, 0));
        row.attachChild(col);
    }
}

And yet, for some reason, these did not align, in any way, as I believed they should.
After tweaking them by adding additional translation (the 2f-i,2.5f seen in createColumn), I decided to recheck my understanding of geometry in jMonkey.
Sure enough, the tutorials explicitly specify that new Cube(1,1,1) “makes the box 2x2x2 world units big.” However, it takes going into the JavaDoc to understand why:

Cube a = new Cube(1,1,1);
Cube b = new Cube(1,1,1);
b.setLocalTranslation(new Vector3f(1,0,0));

causes the cubes to intersect. From my experience using 3D modeling, and programming in general, the above should move the second cube exactly one unit to the side of the first. Instead, it moves the center of the second cube one unit to the side of the center of the first.

Alright, fair enough, but this means that any time I want to align two objects I have to mentally calculate their centers from the location where I want to align them. If, say, I want to create overlapping rectangles in a grid pattern (see the original code), I have to tell the secondary rectangles to be adjusted by half their width and height.
Again, fair enough. I can see plenty of times this would be useful… but there is absolutely no explanation as to why this design choice was made, and absolutely no warning that it was made. Believe me, I went looking for it before registering to create this book of a post…
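To make that “adjust by half the width and height” adjustment concrete, here is a minimal plain-Java sketch (no engine types; the class and method names are mine): to put a center-origin box’s minimum corner at a desired point, shift the center by the half-extents.

```java
// Sketch: given a desired minimum ("upper-left-front") corner and a box's
// half-extents, compute the center translation a center-origin engine like
// jME expects. Plain Java; all names here are illustrative.
class CornerToCenter {
    /** center = desired min corner + half extents, per axis */
    static float[] centerFor(float[] corner, float[] halfExtents) {
        float[] center = new float[3];
        for (int axis = 0; axis < 3; axis++) {
            center[axis] = corner[axis] + halfExtents[axis];
        }
        return center;
    }

    public static void main(String[] args) {
        // A Box(0.5f, 0.5f, 0.5f) is 1x1x1; to put its min corner at the
        // origin, its center must sit at (0.5, 0.5, 0.5).
        float[] c = centerFor(new float[]{0, 0, 0}, new float[]{0.5f, 0.5f, 0.5f});
        System.out.println(c[0] + " " + c[1] + " " + c[2]); // → 0.5 0.5 0.5
    }
}
```

The same arithmetic works in reverse (corner = center minus half-extents) when reading positions back out.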

So… tl;dr:
Is there a simple, built-in way to define meshes/shapes by their standard (upper-left) origin rather than their center? (Is upper-left-front actually the standard? jMonkeyEngine is literally the only thing I’ve seen that uses the center.)

I believe all shapes are created from the center, however for Box there is a constructor overload. The javadoc explains why.

https://javadoc.jmonkeyengine.org/com/jme3/scene/shape/Box.html#Box-com.jme3.math.Vector3f-float-float-float-

The designer of the mesh determines where the center is. It’s not difficult to define a custom mesh with a cubical shape and the “center” at one of the vertices.

Thank you.
I do see that, where it says:
“Due to constant confusion of geometry centers and the center of the box mesh this method has been deprecated” for public Box(Vector3f center, float x, float y, float z). This, however, doesn’t explain why they chose Box(center) in the first place, as opposed to Box(origin) (*origin in terms of rendering purposes).

Ok, after trying to find an example of why I am so confused on this, I decided to phrase it from a different angle.

When working with two-dimensional graphics, you generally do not create an object from its center. Say Box2D(x,y): the ‘x,y’ is the upper-left-hand pixel of the box.
Is this not generally the same for three-dimensional graphics? I was under the impression that it is, but given that I couldn’t find the example I was looking for, there is the possibility that I’m wrong.

Assuming I am wrong, how would I easily position multiple objects in alignment with one another? Say multiple floorboards vertically along the width of a horizontal floorboard?
– As it stands so far as I know the math would have to look something like:
Rectangle(x,y,z): A (horizontal floorboard), B (first vertical floorboard), C (second vertical floorboard), N (nth floorboard) [Assume width and height are X and Y respectively]

B(x = A.x - (1/2 A.width) + (1/2 B.width), y = A.y + (1/2 B.height) + (1/2 A.height), z [irrelevant for this example])

and then for N:

N(x = N-1.x + 1/2 N-1.width, y = N.y, z [again, irrelevant]). <-unsure why simply adding 1 didn’t work when testing, but it didn’t…

My point being, I don’t want to manipulate or define the shape by its center, I want to define it by the upper-left vertex (either front or back, this doesn’t matter), so that all math is a simple: “+ width, height or depth desired”; and I don’t understand why it became complicated by defining the shape by its center. (I assume there is a reason but I simply cannot find this reason).
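The floorboard arithmetic above can be checked with a quick plain-Java sketch (all names are mine): with center-based origins, each vertical board’s center x is the horizontal board’s left edge plus half the vertical board’s width, stepped by one full board-width per board.

```java
// Sketch of the "align by centers" arithmetic from the post, in plain Java.
// A horizontal board of width W gets `count` vertical boards of width w laid
// along it. All names are illustrative.
class BoardAlignment {
    static float[] verticalCenters(float horizCenterX, float horizWidth,
                                   float boardWidth, int count) {
        float leftEdge = horizCenterX - horizWidth / 2f; // recover the edge from the center
        float[] centers = new float[count];
        for (int i = 0; i < count; i++) {
            // half a width in from the edge, then full-width steps
            centers[i] = leftEdge + boardWidth / 2f + i * boardWidth;
        }
        return centers;
    }

    public static void main(String[] args) {
        // Horizontal board centered at x=0 with width 4; vertical boards of width 1.
        float[] xs = verticalCenters(0f, 4f, 1f, 4);
        for (float x : xs) System.out.println(x); // -1.5, -0.5, 0.5, 1.5
    }
}
```

Note the step between consecutive centers is a full board width (half of board N plus half of board N+1), which may be why adding only “1/2 N-1.width” did not line up in testing.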

(cough) does this make more sense?

edit: also, all, please forgive me, I’m brand new to this forum system. (not bad, not bad at all, just never seen it before) so I may inadvertently format or reply in… uh… odd ways…

In your use case it makes sense, but in other use cases it may not. I generally only use the built-in shapes for demo testing. As @sgold mentioned, the creator decides the origin based on how they are going to work with it. Each use case is different. Each person may look at it differently.

You can change the origin of a shape quick ’n’ dirty by adding it to a node, moving the shape’s localTranslation by -0.5 on each axis (or whatever offset you need), and then just using the node as the shape.

Or you could move the mesh vertices themselves.

So creating a sphere you’d do the same thing? Specify the corners of the imaginary box? Or?

The JME shapes try to be consistent. It’s pretty common for 3D shapes to be centered on their origin, since for 99% of the cases where you have a box in your scene you want it center-based… whether for bounding shapes, rigid bodies where the center is also the center of mass, etc…

Personally, I almost always change the origin of my shapes and meshes, and sometimes even on imported models. (Although, ironically, not often for the Cube, as I actually prefer its origin to be the center of geometry.)
As already mentioned in this thread, there are at least two ways of doing this.
For most of my use cases, I use the method (I call it “the vertex buffer offset” method) of looping through each of the mesh’s vertices and adjusting them by the offset vector from its starting origin to my desired origin. For me that is sometimes at the bottom, for a character intended to walk on the ground, but it is usually the (approximated) center of gravity of the mesh, to help with calculating and applying physics forces later. (rigidBodyControl.applyTorque() doesn’t work very well to pitch/yaw/roll an airplane/spaceship/boat with an origin at the bottom-left corner.)
… the other method mentioned (I call it “the scene graph offset” method) is to simply attach the mesh’s geometry to a parent node and offset the geometry’s localTranslation to align whichever part of the mesh you wish to have act as the origin, then remember to apply any transforms to that parent node, not the geometry.
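As a rough illustration of the vertex-buffer-offset method, here is the core loop in plain Java, assuming the mesh’s positions have been pulled into a flat float[] of x,y,z triples (which is how jME lays out its position buffer). In real jME code you would read the buffer from the Mesh, shift it, write it back, and call the mesh’s updateBound(); the names below are mine.

```java
// A minimal sketch of the "vertex buffer offset" idea on a flat position array.
class VertexOffset {
    /** Shift every vertex by (dx, dy, dz), which moves the mesh's origin by (-dx, -dy, -dz). */
    static void offsetPositions(float[] positions, float dx, float dy, float dz) {
        for (int i = 0; i < positions.length; i += 3) {
            positions[i]     += dx; // x
            positions[i + 1] += dy; // y
            positions[i + 2] += dz; // z
        }
    }

    public static void main(String[] args) {
        // A 1x1 quad centered on the origin; shift so its lower-left vertex becomes the origin.
        float[] quad = {-0.5f, -0.5f, 0,   0.5f, -0.5f, 0,
                         0.5f,  0.5f, 0,  -0.5f,  0.5f, 0};
        offsetPositions(quad, 0.5f, 0.5f, 0);
        System.out.println(quad[0] + " " + quad[1]); // → 0.0 0.0
    }
}
```

After the shift, all transforms (rotation, scale) on the geometry pivot about the new origin, which is the whole point of the technique.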

For what you are doing, probably. Keep in mind, though, that when you start animating things, transforms apply relative to the origin. For most “move 3D things around the world” purposes this should not be a corner. Imagine trying to add a rotation animation: in most cases, you want it to default to spinning around its center.
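To see that in plain numbers, here is a small sketch (names are mine; 2D for simplicity) of rotating a square 90 degrees about its center versus about a corner: the center pivot keeps the square in place, while the corner pivot swings the whole square out of its footprint.

```java
// Why a center origin helps with rotation animations: vertices rotated about
// the shape's center land on other points of the same shape; rotated about a
// corner, the shape moves. Plain-Java sketch.
class RotationOrigin {
    /** Rotate (x, y) 90 degrees counterclockwise about pivot (px, py). */
    static float[] rotate90(float x, float y, float px, float py) {
        float rx = x - px, ry = y - py;
        return new float[]{px - ry, py + rx}; // (rx, ry) -> (-ry, rx)
    }

    public static void main(String[] args) {
        // Unit square with corners (0,0)..(1,1), centered at (0.5, 0.5).
        // About its center, corner (0,0) maps onto corner (1,0): square stays put.
        float[] aboutCenter = rotate90(0, 0, 0.5f, 0.5f);
        System.out.println(aboutCenter[0] + " " + aboutCenter[1]); // → 1.0 0.0
        // About corner (0,0), the opposite corner (1,1) swings to (-1,1):
        // the square has left its original footprint entirely.
        float[] aboutCorner = rotate90(1, 1, 0, 0);
        System.out.println(aboutCorner[0] + " " + aboutCorner[1]); // → -1.0 1.0
    }
}
```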

So, two questions as of now:
Clearly I am quite ignorant of 3D graphical rendering (2D I understand; 3D… not so much). Any good sources to enhance my understanding of the standards here? I would like to use the actual terms and concepts, rather than grasping at straws and confusing everyone involved.

Secondly, I felt I should explain the two cases I am attempting to (easily) do, and why this has been so frustrating for me:
Case A: a Match-3-esque game. What I need is a simple X*Y grid to define where the objects drop from, but since ‘X’ and ‘Y’ can vary (and, realistically, so can where gravity is) I don’t want to hard-code these. I believed this would be simple: create a node, add a containing shape (for easy math), replicate ‘X’ or ‘Y’ times… But then it took me three hours (I kid you not) to figure out why the shapes didn’t line up correctly on the ‘first’ attempt. I now know and understand this; but in 2D graphics it would have ‘just worked’ the first time (as per how I had it set up).
Any animations for the above will be exceedingly simple as I intend to use purely rudimentary shapes (spheres/cubes/rectangles), and it will be texture or particle based if not motion based (i.e. to move object A and B, get center between, rotate around center).

Case B, and vastly more complicated: eventually I intend to begin working with a pseudo-voxel world. My impression of voxel worlds so far (as an outside third party who hasn’t seen the code) is that everyone tries to explicitly insert each voxel, verbatim, into the world, which takes up a LOT more memory than is required. Instead I want to use larger generic shapes to define regions of ‘whatever the terrain is made of’ and simply add to or remove from that.
The problem with this is that I can easily comprehend “rectangular shape B begins where rectangular shape A ends…”, but since A and B may be different sizes (300x300x300 microcubes for one and 10x10x1 for another), getting the center of each ‘may’ be tricky… or may not. Again, ignorance is NOT bliss in these circumstances.

The reason I was resistant to explicitly spelling out what I am trying to do initially is I sincerely doubt that the method of rendering a voxel world has been done before, and I kinda don’t want someone more capable than me to see my idea and (probably rightfully) take it.

So, I guess the question for either case is: to get the simple math of “add second object where first object begins” do I take dimension/2 + dimension/2 = start position, or is that wrong?

EDIT: also, and I shouldn’t have to say this, I should just avoid saying stupid things… but… please forgive any obvious stupidity. I’m not feeling well, and I’m sure it’s causing my ‘things I shouldn’t say’ filter to not work as intended…

Take the start point. Any vector3f. Place one. Add half the size of the cube to any axis. Place one. Add half the size of a cube to any axis. Place one. And so on.
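As a hedged plain-Java sketch of that placement arithmetic (all names are mine): with center origins, two boxes sit flush when their centers are separated by half of the first box’s width plus half of the second’s, which for equal boxes works out to one full edge length between centers.

```java
// Placing equal cubes edge to edge with center origins: go half a width in
// from the starting corner, then step one full edge length per cube.
class CubeRow {
    /** Center x positions for `count` cubes of edge `size`, first corner at startX. */
    static float[] rowCenters(float startX, float size, int count) {
        float[] centers = new float[count];
        for (int i = 0; i < count; i++) {
            centers[i] = startX + size / 2f + i * size;
        }
        return centers;
    }

    public static void main(String[] args) {
        // Three 2-unit cubes (jME's Box(1,1,1) is 2x2x2) starting at x = 0:
        float[] xs = rowCenters(0, 2, 3);
        for (float x : xs) System.out.println(x); // 1.0, 3.0, 5.0
    }
}
```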

Voxel worlds are not a series of cubes. Your object count will obliterate your frame rate. Greedy meshing removes unnecessary vertices and is the most commonly used algorithm for generating voxel meshes. Density fields make up the data, and greedy meshing generates the meshes. That’s the traditional approach to voxel worlds.

Also, there’s no such thing as a stupid question. You clearly want to learn. Ask away 🙂

Sorry, I’m not referring to rendering; I’m referring to saving the data, or how it appears in memory.
With regards to rendering, naturally there is a more rational approach. It wouldn’t be possible to render a game live if each voxel were rendered (graphically), but when saving? 16x16x256 addresses isn’t impossible, it’s just irrational.
And, again, it is possible that I’m wrong, but on this point I seriously doubt that I am…

And the comment about saying stupid things wasn’t about questions, it was about me saying I had a ‘better idea that hadn’t been implemented yet’ (when, in fact, I don’t know this with any certainty) or various other assertions that have proven to be wrong. – Does anyone know if Unity orients from center or upper-left? I tried to get documentation when creating the original post (or shortly thereafter) but couldn’t find it for the life of me…

You use short primitives for the density values of any chunk cell that has changed. That’s about as cheap as it gets: sixteen 16x16x16 cells stacked up. So you store 16x16x16 at a time, not the full 256-high column, unless the entire height has changed, which is very rare in comparison.
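A rough plain-Java sketch of that lazily-allocated cell layout (all names and the exact indexing scheme are mine; this is one possible arrangement, not the poster’s implementation):

```java
// A 16x16x256 column split into sixteen 16x16x16 cells; a cell's short[]
// density array is only allocated once something in it changes.
class ChunkColumn {
    static final int CELL = 16;
    // One slot per vertical cell; null means "untouched, use generated terrain".
    private final short[][] cells = new short[16][];

    void setDensity(int x, int y, int z, short value) {
        int cellIndex = y / CELL;
        if (cells[cellIndex] == null) {
            cells[cellIndex] = new short[CELL * CELL * CELL]; // allocate on first edit
        }
        int ly = y % CELL; // local y within the cell
        cells[cellIndex][(x * CELL + z) * CELL + ly] = value;
    }

    int allocatedCells() {
        int n = 0;
        for (short[] c : cells) if (c != null) n++;
        return n;
    }

    public static void main(String[] args) {
        ChunkColumn col = new ChunkColumn();
        col.setDensity(3, 40, 7, (short) 1); // lands in cell 2 (y = 32..47)
        System.out.println(col.allocatedCells()); // → 1 of 16 cells allocated
    }
}
```

The memory win is exactly the one described: an untouched column costs sixteen null references instead of 65,536 shorts.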

You keep talking about the starting point of a cube when I’ve just explained that voxel worlds are not cubes. If you truncate the decimals of an isosurface, that’s what a voxel mesh is, for all intents and purposes. Not cubes.


It’s been years since I last checked how Minecraft creates save files. Perhaps it has changed, but back when I did check, it explicitly saved each block to memory and to the save file.
Again, for these purposes I’m not referring to how the world is rendered, but how the data is saved or referenced.
Perhaps you are referring to how the data is represented outside of rendering as well, but I know that at one point it was not stored as cells but as individual blocks.
Before I beat a dead horse, I want to be sure whether we are talking about the same thing. Are you referring to rendering the world, or to how the data is represented outside of graphics?

(and, yes, I was making reference to Minecraft as it is the easiest to reference in terms of voxel games)

Minecraft stores its chunks in arrays that are gzipped the last time I looked.

Any other option has trade offs.

Have a read here.

https://minecraft.gamepedia.com/Chunk_format

The “block format” section is probably the bit you’re interested in.

You are right that probably no one has done it the way you describe. There are probably good reasons for that.

The data structure is very important because the single most very most super most critically important critical super critical thing is how many triangles you end up with. So if two blocks were sitting next to each other, there are two quads you don’t even have to render.

This is why in RAM, the data is almost always in arrays… one cell per voxel… because it is the single fastest way to do neighbor checks. If you have some rectangles and some cubes and some other shapes… then you are constantly going to be spending time trying to decide how to slice them up for minimizing triangles… or perform lighting calcs, etc…

And most of the mitigation that you can do by storing additional data in your structures will chip away at the memory performance of the approach until eventually you take as much or more RAM than just keeping things in an indexed 1-D array… (x * ySize * zSize) + (z * ySize) + y style.

Especially when arrays are truly trivial to compress when not in immediate use.
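The flat-array layout described above can be sketched in plain Java (class and method names are mine): one cell per voxel in a 1-D array, indexed as (x * ySize * zSize) + (z * ySize) + y, so a neighbor check is just another constant-time index computation.

```java
// The indexed 1-D voxel array: fast neighbor checks, trivially compressible.
class VoxelArray {
    final int xSize, ySize, zSize;
    final byte[] voxels;

    VoxelArray(int xSize, int ySize, int zSize) {
        this.xSize = xSize; this.ySize = ySize; this.zSize = zSize;
        this.voxels = new byte[xSize * ySize * zSize];
    }

    int index(int x, int y, int z) {
        return (x * ySize * zSize) + (z * ySize) + y; // the layout from the post
    }

    byte get(int x, int y, int z) { return voxels[index(x, y, z)]; }
    void set(int x, int y, int z, byte v) { voxels[index(x, y, z)] = v; }

    public static void main(String[] args) {
        VoxelArray chunk = new VoxelArray(16, 256, 16);
        chunk.set(1, 2, 3, (byte) 5);
        // Neighbor check: no object lookup, just an adjacent index.
        System.out.println(chunk.get(1, 2, 3) + " " + chunk.get(1, 3, 3)); // → 5 0
    }
}
```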


I find it hard to swallow that an array of 500 distinct, recurring elements is more efficient than the general shapes of those elements laid out and pinged.
Hypothetically speaking, if you have a flat plane of identical elements, say a rock cliff, then with the unique-element method you have a minimum of two triangles per element, whereas with a single element covering the entirety you have only the minimum triangles needed to match the surface (or texture[s], if they are not a single recurring texture).
Have another blob of something in the middle? Add another element at that location. Yes, this changes the math, but you still have two distinct elements, say dirt or ore and stone. That makes, what, nine sets of triangles if you do it poorly (or if it’s an irregular shape).

Certainly, it is rare that a voxel world has a single plane of a single element, but it would be possible to convert hills and mountains into standard terrain geometry at a distance. A tree wouldn’t have to consist of N elements (where N is the height × width × depth of the tree).
I’m also not saying the math would be easy. If there are a lot of distinct elements mixed into one another, then this would not be as simple as treating them as a single larger element… but having a random shape of completely random elements would be idiotic. Period. Never used unless someone is specifically trying to break the program. I use this example because, probably a decade ago, Stranded II was going to become Stranded III and the dev considered using voxels, but their use case (attempt) was literally a random chunk of randomly colored cubes. Yeah, that’s going to lag. Additionally, that Minecraft map where it is single cubes of different types with spaces in between… that’s designed to break the game…

Effectively I am trying to take the efficiency utilized in other aspects of computing and apply it to this concept (voxel worlds). You don’t designate each pixel of an image individually (and even if you try, modern graphics fix this for you); similar chunks of pixels are grouped together in the logic and only expanded when rendered. Similarly, the only elements explicitly designated in 3D graphics are the vertices, edges and aspects of the surfaces (again, please forgive my ignorance of the real names of some things…). So why on earth would you explicitly designate each h×w×d block in memory if they can be considered one object for logical purposes?
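For what it’s worth, the simplest 1-D version of that “group similar chunks together” idea is run-length encoding; a plain-Java sketch (illustrative only, not a voxel engine, and all names are mine):

```java
// Run-length encode a row of identical voxel ids: one (count, value) pair per
// run instead of one entry per voxel. The 3D analogues (greedy meshing,
// interval/octree storage) generalize this same idea.
import java.util.ArrayList;
import java.util.List;

class RunLength {
    /** Encode as (count, value) pairs. */
    static List<int[]> encode(int[] row) {
        List<int[]> runs = new ArrayList<>();
        int i = 0;
        while (i < row.length) {
            int j = i;
            while (j < row.length && row[j] == row[i]) j++; // extend the run
            runs.add(new int[]{j - i, row[i]});
            i = j;
        }
        return runs;
    }

    public static void main(String[] args) {
        int[] row = {1, 1, 1, 1, 2, 2, 1};   // 7 voxels
        List<int[]> runs = encode(row);      // 3 runs: 4x1, 2x2, 1x1
        System.out.println(runs.size()); // → 3
    }
}
```

The trade-off raised earlier in the thread still applies: compressed forms like this save memory at rest, but random access and neighbor checks get slower than in a flat array, which is why engines often compress only the chunks not in immediate use.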

Anyway, I don’t want to argue, and I haven’t been capable of even attempting this so I’m going to stop here…

–Edit: just reread the last part of the above post. I’m effectively suggesting one change: taking an array of geometry and changing it into a three-dimensional model of said ‘compressed array’ outright. This would be of simplistic shapes: rectangles, squares (which are rectangles), cones, etc.
—Edit2: and ’cause I wasn’t paying attention (or forgot…): thank you, jayfella, for the link. There is always the possibility that I missed something, misremembered something (which makes holding conversations… interesting, at best…) or am flat-out wrong… I’ll review it to see if I remembered correctly.

Ok. I’ve got an idea. As frequently stated, there are many things I’m ignorant of. I’m reviewing the information in the jMonkeyEngine wiki and tutorials about infinite terrain (etc.). If anyone has an external reference for the standards for voxel- (or even tile-) based geometry and the standard ways compression works for it, may I please have it?
That way we will at least be on the same page with regard to this. I had effectively thought I was doing it in a more (insert positive adjective) way, since an array is still explicit 1s and 0s in memory for each item.

Because you almost never consider them one object for logical purposes.

Give it a go.

Edit: P.S.: I’m not just guessing… I implemented a whole block world engine: http://mythruna.com/

The game is unfinished but the block world part worked pretty well.

But, see, that’s what I don’t understand. If I have 5, 10, 500 blocks of dirt that are inactive for all intents and purposes (nothing is being updated; literally the only thing that could matter about their existence, beyond that they are there and what they are, is collision detection), why wouldn’t you lump them together as a single object? They serve no logical purpose if they aren’t actively being updated (added, removed, or doing something).
Years ago I had this argument with a few mod developers for Minecraft. Wires/pipes don’t need to be continuously updated. The only thing they need is a check for whether a connection exists (i.e. when added or removed) and, for purposes of fluid or item transportation, a check for whether they are full. Everything else can be done between the source and the destination.
But, no, for some reason the others were convinced that each and every object placed had to be updated individually (what if something happens to a single wire? how can you tell if an object is getting power?), so naturally they left it as is. The answer to both questions is that you maintain an independent list of connections that is updated when something is added to or removed from it. Place a wire? Add it to the list. Remove a wire? Remove it from the list, and if it was a connection between set A and set B, make the two sets independent… tada, you have a broken wire, and power stops running to one of the two sets without constant updates…
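A minimal plain-Java sketch of that connection-list idea (illustrative names only; a production implementation would more likely use union-find or cached per-network sets rather than re-walking the graph on every query):

```java
// Wires as an adjacency map: connectivity is recomputed on demand instead of
// ticking every wire every frame. Removing a wire naturally splits the network.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class WireNetwork {
    private final Map<String, Set<String>> adj = new HashMap<>();

    void addWire(String a, String b) {
        adj.computeIfAbsent(a, k -> new HashSet<>()).add(b);
        adj.computeIfAbsent(b, k -> new HashSet<>()).add(a);
    }

    void removeWire(String a, String b) {
        Set<String> sa = adj.get(a);
        if (sa != null) sa.remove(b);
        Set<String> sb = adj.get(b);
        if (sb != null) sb.remove(a);
    }

    /** True if power can flow from source to dest over the remaining wires. */
    boolean connected(String source, String dest) {
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(source);
        while (!queue.isEmpty()) {
            String n = queue.poll();
            if (n.equals(dest)) return true;
            if (seen.add(n)) queue.addAll(adj.getOrDefault(n, Set.of()));
        }
        return false;
    }

    public static void main(String[] args) {
        WireNetwork net = new WireNetwork();
        net.addWire("generator", "w1");
        net.addWire("w1", "w2");
        net.addWire("w2", "lamp");
        System.out.println(net.connected("generator", "lamp")); // → true
        net.removeWire("w1", "w2"); // break the wire: the two sets split
        System.out.println(net.connected("generator", "lamp")); // → false
    }
}
```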

Also, sorry, I didn’t realize you were the developer of Mythruna. So, seriously, why must each independent element be considered unique for logical purposes? What purpose does this serve?