Reconstructing mesh connectivity

For a general-purpose library I would expect everything to be connected, because that enables traversal. The data structure would hold all the attributes needed for the desired re-conversion into OpenGL buffers, for example smoothing-group IDs for a crease-angle algorithm. But that would be a second step: some of these attributes could be gathered while building, others would be added by analysis operations.

And personally, I would like to be able to add more attributes to the elements.
The method for merging vertices should be user-definable, because merging might close small holes or create degenerate triangles where two of the three resulting vertices map to the same one (that might be desired or not, but probably not).
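To make that concrete, the merge criterion could simply be passed in by the caller. Here is a minimal sketch of the idea; all names and the API shape are hypothetical illustrations, not from any existing library:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical illustration of a user-definable merge criterion for vertex deduplication. */
public class VertexMergeSketch {

    /** User-supplied predicate deciding whether two positions count as "the same" vertex. */
    public interface MergeCriterion {
        boolean shouldMerge(float[] a, float[] b);
    }

    /** Distance-based criterion: a large epsilon may close small holes, but can also
        collapse triangles whose vertices all map to the same target vertex. */
    public static MergeCriterion withinEpsilon(float epsilon) {
        final float eps2 = epsilon * epsilon;
        return (a, b) -> {
            float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
            return dx*dx + dy*dy + dz*dz <= eps2;
        };
    }

    /** Naive O(n^2) merge pass: maps each input vertex to the index of the
        first representative that the criterion accepts. */
    public static int[] mergeMap(List<float[]> positions, MergeCriterion criterion) {
        List<float[]> reps = new ArrayList<>();
        int[] map = new int[positions.size()];
        for (int i = 0; i < positions.size(); i++) {
            int target = -1;
            for (int r = 0; r < reps.size(); r++) {
                if (criterion.shouldMerge(reps.get(r), positions.get(i))) {
                    target = r;
                    break;
                }
            }
            if (target < 0) {
                reps.add(positions.get(i));
                target = reps.size() - 1;
            }
            map[i] = target;
        }
        return map;
    }
}
```

A criterion could just as well compare normals or smoothing groups in addition to distance; that is the point of making it pluggable.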

A use case I have is automatically generating meshes by extruding faces and then applying further operations to the resulting faces. The goal is to evolve the attributes of, say, a gun with a genetic algorithm and then visualize them (e.g. more accuracy = longer barrel):

gist:da93255b901f6ddc9d2999431c81ea81 (ugly static code example)
Imgur link (result)

But I wouldn’t expect a library to provide every feature. Rather, I’d want it to provide the data structure, a standard set of operations, and debugging tools, and then let me write custom operations for it.

The BMesh article that I posted above suggests using “euler operators” as the base for all operations, but I found them cumbersome.


I like this topic and I’d love to help out with your mesh library if you’ll have me, but time is not always my friend.

I wrote my Bachelor’s thesis on this topic. It’s in German, but maybe you can translate it. I think it addresses some of the points you mention, and it was very much about “healing” meshes.
Worth mentioning: a volume calculation can be used to check how healthy a mesh is. You compute the volume from several different reference points; the stability of the result serves as the metric.
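That volume check can be sketched as follows. The data layout (flat position array plus index triples) and all names are my own illustration, not the thesis code. The signed volume is summed from tetrahedra spanned by each triangle and a reference point; for a watertight, consistently wound mesh the result is independent of that point, so the spread across several reference points indicates mesh health:

```java
/** Sketch of the volume-stability health check: compute the signed volume from
    several reference points; for a closed mesh the results agree, so their spread
    indicates how "healthy" the mesh is. Data layout here is a hypothetical example. */
public class VolumeHealthSketch {

    /** Signed volume as the sum of tetrahedra spanned by each triangle and the
        reference point (scalar triple product a . (b x c) / 6 per triangle). */
    public static double signedVolume(float[] pos, int[] idx, float[] ref) {
        double vol = 0;
        for (int t = 0; t < idx.length; t += 3) {
            double ax = pos[idx[t]*3]   - ref[0], ay = pos[idx[t]*3+1]   - ref[1], az = pos[idx[t]*3+2]   - ref[2];
            double bx = pos[idx[t+1]*3] - ref[0], by = pos[idx[t+1]*3+1] - ref[1], bz = pos[idx[t+1]*3+2] - ref[2];
            double cx = pos[idx[t+2]*3] - ref[0], cy = pos[idx[t+2]*3+1] - ref[1], cz = pos[idx[t+2]*3+2] - ref[2];
            vol += (ax*(by*cz - bz*cy) - ay*(bx*cz - bz*cx) + az*(bx*cy - by*cx)) / 6.0;
        }
        return vol;
    }

    /** Health metric: maximum deviation between volumes computed from different
        reference points. Near zero for a closed, consistently wound mesh;
        it grows when the mesh has holes or flipped faces. */
    public static double volumeSpread(float[] pos, int[] idx, float[][] refs) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (float[] ref : refs) {
            double v = signedVolume(pos, idx, ref);
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        return max - min;
    }
}
```

A hole or an inconsistently wound face makes the per-triangle contributions depend on the reference point, which is exactly what the spread detects.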


That seems like a sensible approach.

I’m interested in collaboration.

Currently I’m busy with physics, but after the next release of Minie I should be ready for a fresh project. Even if you don’t have time to write/test/debug code, you could still help by steering my efforts.

To be clear: Heart isn’t a mesh library—it’s a broad supplement to jme3-core. Heart contains many classes, only a couple of which relate to meshes. My vision for mesh editing is to build a new library atop Heart, just as the Wes animation-editing library and the Minie physics library are built atop it.


Before I start translating your thesis into English, I should ask: do you know of any existing translations?

I’ve begun my own translation.

If you complete that translation, would you be willing to share it? So far I’m trying to get by with Google Translate, which doesn’t retain the images.


Provided @1000ml grants permission, I’ll gladly share my translation.

It may be a long time coming, however.

Thanks for the interest, and sorry for the delay. I thought this would be easier. And no, I don’t have a translation.
But I found the LaTeX sources of the document, which might be easier to translate.
I uploaded them here:
Since I don’t know about the licensing, I’ll leave the repository private and add @sgold. If it’s feasible, you could add the parts you’ve already translated.
IP611_Doku.tex is the main document; it includes the chapters from the contentA and contentB folders.

At first glance, Google Translate works quite well. I’ll try translating it, maybe tomorrow. Hopefully that will streamline the copy-pasting from Google Translate.
Also, some chapters aren’t that relevant.

I’ve also thought a bit about the mesh library and how the data could be stored. Simpler algorithms could perhaps work directly on the OpenGL buffers without conversion (better for real-time modifications). So I’d like to write a prototype that decouples the data from the algorithms, so that the same code can work on different data structures.


Here’s a partly translated version:

There’s some auto-translation weirdness, but I hope it’s understandable.


I’ve made something, and probably should have posted it and requested feedback earlier.

I’m not sure if this is in line with your vision, @sgold, but here’s a list of what I did and the reasoning behind it:

I couldn’t justify the complexity and code duplication of decoupling the algorithms from the data structure. Instead, I aimed to decouple the data structure from the data storage.
I implemented Blender’s BMesh: Source/Modeling/BMesh/Design - Blender Developer Wiki

  • Hopefully sufficient connectivity information in all cases.
  • Support for non-manifold meshes (edges with more than two connected faces; traversal around T-structures and bowties, where two faces share only a single common vertex).
  • Support for edges with no adjacent faces (lines, wireframe meshes)
  • If required, constraints could be added to only allow for manifold structures.
  • The downside is that this extended compatibility introduces more complexity.
  • No holes in faces, though the link above suggests this feature could be added.
  • In theory, the euler operators (listed in the link above) should suffice for working with the data structure, with no knowledge of BMesh internals required.
  • Using it at the low level requires some learning, but once you know how to handle BMesh in Java, that knowledge can also be used to write Python scripts for Blender.
  • Blender code can be used as reference or ported to Java.

For storing properties of the elements (Vertex, Edge, Face, Loop) I implemented a “horizontal object pool”, backed by primitive arrays.

  • The library is basically a wrapper around arrays and additionally provides connectivity information and an API for algorithms.

  • Each property type is stored in its own array. When a new property type is added (e.g. UV coordinates), a new array is created.

  • The alternative would be to store the properties directly in the element classes (e.g. Vertex.color). This can still work by providing the BMesh object with custom factories for the elements.

    • However, when the properties are stored in the element classes directly, they are set during initialization.
    • During initialization it is not yet known which algorithms will be used and what data they require.
    • Adding more properties to element classes at runtime would be more difficult and less efficient.
    • With arrays, algorithms can create new per-element properties at runtime; BMeshData takes care of the allocation.
  • The properties are associated with an element type.

    • There’s a Vec3Property that is associated with the Vertex class and stores its position.
    • Another instance of Vec3Property could be associated with the Face class to store a face normal.
    • The property classes use a generic parameter to enforce this association with an element type: Vec3Property<Vertex> for vertex positions, Vec3Property<Face> for face normals
    • That way, the program won’t even compile if a property is used with the wrong element type. Without this check, misuse could lead to runtime crashes or subtler errors.
  • Properties are similar to Java’s reflection API for fields. Accessing a value:
    Vector3f val = property.get(vertex);

  • When a new element is created, the arrays are automatically resized if needed (similar to ArrayList).

  • Those arrays can be sent directly to the OpenGL buffers. No need to iterate over the elements.

    • This can happen on a per-property basis. For example, if no normals were changed, that property-array doesn’t have to be sent back.
  • The BMeshData class maintains a free-list of deleted elements (= gaps in the arrays). When an element is removed, its slots in the arrays are remembered and reused for future elements.

  • If elements were removed, the arrays first have to be compacted (the gaps removed) before they can be sent back to OpenGL.

  • It would be possible to use OpenGL Buffers directly as the backend for the properties.

  • But using its own arrays allows the library to work in a separate thread.

  • Benchmarks suggest that for widespread modifications of many elements, it’s faster to copy the OpenGL buffer to a Java array, modify the contents, and then send the array back in one call.

  • However, if modifications are only sparse, it would be faster to directly write to the OpenGL buffers.

  • Planned: Remember dirty regions in the array so only modified parts have to be sent back to OpenGL.
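The property mechanism above could be sketched roughly like this. This is a heavily simplified illustration, not the actual jBMesh API; the generic parameter ties each property instance to one element type, as described:

```java
/** Simplified sketch (not the actual jBMesh API) of per-element properties backed
    by primitive arrays, with a generic parameter tying each property to an element type. */
public class PropertySketch {

    /** Base element holding an index into the backing arrays. */
    public static class Element {
        final int index;
        Element(int i) { index = i; }
    }
    public static class Vertex extends Element { public Vertex(int i) { super(i); } }
    public static class Face   extends Element { public Face(int i)   { super(i); } }

    /** A 3-component float property. E fixes the element type, so a
        Vec3Property&lt;Vertex&gt; cannot be accessed with a Face at compile time. */
    public static class Vec3Property<E extends Element> {
        private float[] data = new float[3 * 4];

        public void set(E element, float x, float y, float z) {
            ensureCapacity(element.index);
            int i = element.index * 3;
            data[i] = x; data[i+1] = y; data[i+2] = z;
        }

        public float[] get(E element) {
            int i = element.index * 3;
            return new float[] { data[i], data[i+1], data[i+2] };
        }

        /** Grows the backing array on demand, similar to ArrayList. */
        private void ensureCapacity(int index) {
            if ((index + 1) * 3 > data.length)
                data = java.util.Arrays.copyOf(data, Math.max(data.length * 2, (index + 1) * 3));
        }
    }
}
```

Usage then looks like `Vec3Property<Vertex> position = new Vec3Property<>();` and `position.set(vertex, x, y, z);` while `position.get(face)` would be rejected by the compiler.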

I implemented a simple conversion from a JME Mesh to BMesh that reconstructs connectivity information. I tested different methods for vertex deduplication.

  • The comparison paper posted by @sailsman63 above states that the 1-way sorted method was fastest for them.
  • For me, the hash-based implementation is almost twice as fast in all cases, even though it queries the HashMap up to 27 times (a 3×3×3 box) per insertion.
  • The hash-based deduplication is also more versatile since it works incrementally: the mapping is ready again right after each insertion.
  • The 1-way sorted method first has to re-sort all vertices after new ones are inserted.
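The hash-based approach described above could be sketched like this (an illustration under my own naming, not the jBMesh implementation): vertices are bucketed into grid cells of epsilon size, and each lookup scans the 3×3×3 neighborhood of cells, which is where the up-to-27 map queries come from:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of hash-based, incremental vertex deduplication (names illustrative,
    not the jBMesh code). Matches across cell borders are found by scanning the
    3x3x3 block of neighboring cells, and the mapping is usable after every insert. */
public class HashDedupSketch {
    private static class Entry {
        final float[] pos; final int index;
        Entry(float[] p, int i) { pos = p; index = i; }
    }

    private final float epsilon;
    private final Map<Long, List<Entry>> grid = new HashMap<>();
    private int nextIndex = 0;

    public HashDedupSketch(float epsilon) { this.epsilon = epsilon; }

    /** Packs three cell coordinates into one key (21 bits each; distant
        cells may collide, which only costs an extra distance check). */
    private long key(long cx, long cy, long cz) {
        return ((cx & 0x1FFFFF) << 42) | ((cy & 0x1FFFFF) << 21) | (cz & 0x1FFFFF);
    }

    /** Returns the index of an existing vertex within epsilon, or assigns a new index. */
    public int getOrAdd(float x, float y, float z) {
        long cx = (long) Math.floor(x / epsilon);
        long cy = (long) Math.floor(y / epsilon);
        long cz = (long) Math.floor(z / epsilon);
        float eps2 = epsilon * epsilon;

        for (long dx = -1; dx <= 1; dx++)
            for (long dy = -1; dy <= 1; dy++)
                for (long dz = -1; dz <= 1; dz++) {
                    List<Entry> cell = grid.get(key(cx+dx, cy+dy, cz+dz));
                    if (cell == null)
                        continue;
                    for (Entry e : cell) {
                        float ex = e.pos[0]-x, ey = e.pos[1]-y, ez = e.pos[2]-z;
                        if (ex*ex + ey*ey + ez*ez <= eps2)
                            return e.index;
                    }
                }

        grid.computeIfAbsent(key(cx, cy, cz), k -> new ArrayList<>())
            .add(new Entry(new float[]{x, y, z}, nextIndex));
        return nextIndex++;
    }
}
```

The sort-based method, by contrast, has to re-establish its global ordering before the next query, which is what makes it awkward for incremental use.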

I made a debug visualization that shows the state of the BMesh data structure. Loops (basically fragments of a face) are visualized at the edges of each face. In the pictures below, green means a Loop is connected to an adjacent face.
The Loops follow the winding order of the face; their direction runs from the dark end to the brighter end.

  • Planned: A visual step-by-step debugger for algorithms to see what it’s doing.

BMesh-to-JME conversion: exists in rudimentary form in BMeshVisualization.

  • Planned: Maintain the index buffer with automatic vertex duplication, and only send it to OpenGL when needed.
  • Planned: Automatic triangulation of quads and polygons. For repeated modifications, faces could store their triangulation in a property; with a dirty flag, only changed faces need to be re-triangulated.

Possible applications:

  • Programmatic modelling. Create and manipulate meshes in Java depending on parameters.
  • Chaining of algorithms.
  • Verification, debugging, analysis and manipulation tools, possibly integrated with @jayfella’s plugin dev kit.
  • Extrusion along paths (e.g. bezier). Create roads or dynamic meshes like a curved hose.
  • Mesh simplification. Removal of features for LOD. Reduce details to normal map.
  • Mesh subdivision and smoothing.
  • Physics-independent realtime deformation, for example when objects crash into each other.
  • Realtime decomposition, for example when objects explode. Fuzzy edges. Break at thin regions. Repeated breaking.
  • CSG / Boolean operators. Union, difference or intersection of meshes without hidden faces.
  • Optimize mesh with triangle strips / line strips. Merge coplanar faces.
  • Generate uv coords. Texture mapping.
  • Alternative to working with arrays/buffers directly. Generate indices, normals, texture coords automatically while working with quads.

Do you have any opinions or hints on the general concept, the API or implementation details?
Any more use cases? Wishlist?

Meanwhile, I’ll continue implementing the euler operators and conversion methods.

Code is here: GitHub - FennelFetish/jBMesh

Example of an operator:

jme Mesh to BMesh Conversion Benchmark:

Buffer Benchmark:
Results: Buffer Benchmark Results · GitHub

BMesh debug visualization of a torus. Face normals shown as lines.

Same torus with operators applied to all faces: Extrusion, scale, translation


Thanks for sharing your work. No feedback yet, except to say I’m excited about this!

I’ve kind of let this one sit for a while.

This is the big sticking point for me. How do you apply a smoothing algorithm to that? Storing such connectivity is not the hard part. How do you get something like Catmull-Clark to not barf?

As far as I understand, it doesn’t matter whether the data structure supports non-manifolds, as long as you don’t build one.
I’m thinking about adding constraints that throw exceptions when the structure is about to become non-manifold, possibly by decorating the BMesh object, and therefore the euler operators (which include createFace(vertices)), with a ManifoldConstraint or something.

But here it might be better to just convert the mesh to BMesh, including the T-structures, and then disconnect the problematic connections.
I attempted that in my thesis; the translated PDF I posted above describes the heuristics I used on page 43.
A nice thing about BMesh is that the Edges themselves act as the EdgeAccumulators: all connected faces can be retrieved from an Edge.
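The decoration idea might look roughly like this. Everything here (the interface, the method names) is a hypothetical stand-in for the real BMesh API, just to show where the check would hook in:

```java
import java.util.List;

/** Hypothetical sketch of the ManifoldConstraint decorator idea (names illustrative,
    not the jBMesh API): a wrapper intercepts createFace(vertices) and rejects calls
    that would give any involved edge more than two adjacent faces. */
public class ManifoldConstraintSketch {

    /** Minimal stand-in for the real mesh structure. */
    public interface Mesh {
        /** Number of faces currently on edge (a,b); 0 if the edge does not exist yet. */
        int numAdjacentFaces(int vertexA, int vertexB);
        void createFace(List<Integer> vertices);
    }

    /** Decorating mesh that enforces the manifold constraint before delegating. */
    public static class ManifoldMesh implements Mesh {
        private final Mesh delegate;
        public ManifoldMesh(Mesh delegate) { this.delegate = delegate; }

        @Override
        public int numAdjacentFaces(int a, int b) {
            return delegate.numAdjacentFaces(a, b);
        }

        @Override
        public void createFace(List<Integer> vertices) {
            // Check every edge of the prospective face before mutating anything.
            for (int i = 0; i < vertices.size(); i++) {
                int a = vertices.get(i);
                int b = vertices.get((i + 1) % vertices.size());
                if (delegate.numAdjacentFaces(a, b) >= 2)
                    throw new IllegalStateException(
                        "Edge (" + a + "," + b + ") would become non-manifold");
            }
            delegate.createFace(vertices);
        }
    }
}
```

Because the check runs before delegation, the underlying structure is never left half-mutated when the constraint fires.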

Or what do your models look like?