Reconstructing mesh connectivity

Anyone have pointers to reference material regarding reconstructing mesh connectivity data from JME/opengl style buffers?

I’m working on importing JME object data into an editor application. To support operations such as subdivision, the editor needs full connectivity data, i.e.:

  • a triangle has vertices and edges.
  • vertices are shared by triangles, and know to which triangles they belong.
  • a shared vertex may have data mapped separately for each face (such as UV coordinates/texture parameters)
  • surfaces are manifolds:
    • Only two faces may share an edge
    • no Möbius/Klein surfaces.
  • mesh can be composed of several disjoint manifolds

Just for starters.

JME buffer-based meshes, on the other hand, are simply a collection of loosely-related individual faces flying in close formation, sharing as much data as possible. Which is great for shipping off to the GPU, but means I need to reconstruct the connectivity.

I don’t think it will be prohibitively difficult, but I can’t be the first person to try to manage this task (in general, at least).

Any hints or resources on standard approaches/pitfalls to watch for?

Why are you starting with JME meshes? Why not catch the data earlier in the process rather than “right before it’s rendered”?

Also: can’t you use JME to import j3os?
Because then you have Mesh#getTriangle and others.

Apart from that, you can do what the GPU would do: if the mesh mode is Triangles (and I guess we don’t have anything different), read 3 indices at a time and load the corresponding vertices out of the buffer. That way you already have shared vertices.

Edges are the connections between the vertices (v1 - v2, v1 - v3, v2 - v3).
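In plain Java (no JME types; the class and method names here are made up for illustration), that index walk and edge derivation could be sketched like this, with a canonical edge key so (v1, v2) and (v2, v1) map to the same edge:

```java
import java.util.*;

public class EdgeExtract {
    // Canonical edge key: smaller index first, so (v1,v2) and (v2,v1) collide.
    static long edgeKey(int a, int b) {
        int lo = Math.min(a, b), hi = Math.max(a, b);
        return ((long) lo << 32) | (hi & 0xFFFFFFFFL);
    }

    // Walk an indexed Triangles-mode buffer (3 indices per face) and
    // record which faces touch each edge.
    static Map<Long, List<Integer>> facesPerEdge(int[] indices) {
        Map<Long, List<Integer>> map = new HashMap<>();
        for (int face = 0; face * 3 + 2 < indices.length; face++) {
            int v1 = indices[face * 3], v2 = indices[face * 3 + 1], v3 = indices[face * 3 + 2];
            for (long key : new long[] { edgeKey(v1, v2), edgeKey(v2, v3), edgeKey(v3, v1) }) {
                map.computeIfAbsent(key, k -> new ArrayList<>()).add(face);
            }
        }
        return map;
    }

    public static void main(String[] args) {
        // Two triangles sharing the edge (1,2): a quad split along its diagonal.
        int[] quad = { 0, 1, 2,   2, 1, 3 };
        Map<Long, List<Integer>> edges = facesPerEdge(quad);
        System.out.println("edges: " + edges.size());               // 5 distinct edges
        System.out.println("shared: " + edges.get(edgeKey(1, 2)));  // [0, 1]
    }
}
```

As a bonus, any edge whose face list has more than two entries is a non-manifold (T-joint) candidate.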

For UVs I can’t help you, but there is probably a well-defined way the GPU would handle them.

I don’t understand the problem. You have the index buffer, and from there you can reconstruct triangles in whatever way you want. You can have shared vertices or not…
I am not sure what editor you are using, but I guess it probably supports OBJ; if so, you can find several OBJ exporters on this forum.

There are data structures that provide this kind of connectivity/neighborhood information, for example Half-Edge, Quad-Edge etc.

Blender uses BMesh: Source/Modeling/BMesh/Design - Blender Developer Wiki
The simple representation in render buffers is often called “triangle soup”.

Those are the basis for efficient mesh manipulation or analysis. I’ve worked with Half-Edge; it’s relatively simple to implement, most often sufficient, and supports 2-manifolds (no T-like face structures, i.e. no edges with more than 2 faces). It uses separate objects for Vertex/Edge/Face, to which you can add whatever you need. You can run graph operations on it and, for example, find disjoint surfaces.
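For reference, a bare-bones half-edge skeleton might look like this (illustrative names only, not from Blender, CGAL, or any particular library):

```java
// Minimal half-edge skeleton. Each half-edge knows its destination vertex,
// its twin on the neighboring face, the next half-edge around its own face,
// and the face it bounds.
public class HalfEdgeSketch {
    static class Vertex { float x, y, z; HalfEdge out; } // one outgoing half-edge
    static class Face   { HalfEdge edge; }               // any half-edge bounding the face
    static class HalfEdge {
        Vertex   to;    // vertex this half-edge points at
        HalfEdge twin;  // opposite half-edge on the adjacent face (null at a border)
        HalfEdge next;  // next half-edge around the same face
        Face     face;  // face this half-edge bounds
    }

    // Build one triangle by hand: three half-edges in a cycle around one face.
    static Face makeTriangle() {
        Face f = new Face();
        HalfEdge a = new HalfEdge(), b = new HalfEdge(), c = new HalfEdge();
        a.next = b; b.next = c; c.next = a;
        a.face = b.face = c.face = f;
        f.edge = a;
        return f;
    }

    // Example query: walk the cycle to count a face's vertices.
    static int countFaceVertices(Face f) {
        int n = 0;
        HalfEdge e = f.edge;
        do { n++; e = e.next; } while (e != f.edge);
        return n;
    }

    public static void main(String[] args) {
        System.out.println(countFaceVertices(makeTriangle())); // prints 3
    }
}
```

Traversals like "all faces around a vertex" or "the face across this edge" become pointer chases (`twin`, `next`) instead of searches, which is what makes editing operations cheap.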


And similar requests at other times…

I am. Getting the buffer data (triangle soup? I like that phrase!) is almost trivial.

There is. The GPU renders each triangle it is given as an independent entity, without regard for ‘adjacency’ or ‘connectivity.’ It does not care that some of the vertex data is shared. Unfortunately, this does not work so well for editing, especially anything that adjusts topology.

I have such a data structure. What I’m looking for is hints about heuristics for intelligently importing ‘triangle soup’ into such a data structure.

For instance, two triangles in JME might rightly be adjacent and continuous, but not share any vertex indices, because they are on opposite sides of a UV seam.

Conversely, three triangles could very well share a pair of vertex indices, if all of the parameters match. But in the editing structure, they must be represented as separate.

I’ve started writing mesh editing software for JME, but haven’t gotten very far. Many of the things you wish for (such as connectivity tests and detection of near-duplicate vertices) are also on my wishlist.

I have methods for converting triangles to edges, copying vertices, and comparing vertices to see if they are exact duplicates. Part of the challenge is that JME uses many different varieties of meshes: both indexed and non-indexed, many different modes, many index types, optional colors, normals, tangents, binormals, texture coordinates, point sizes, and so on.

I understand the need for data structures better suited to editing, but so far I haven’t invented much.

If you want to study what I’ve done, it’s all in the MyMesh class of the Heart Library, available free from the Software Store:


This sounds kinda like a project I did (at a lesser scale) for my first JME project years ago (and it was a kind of “editor” also, but not nearly as much as your concept).

But what I’m confused about is why you’re reconstructing and modifying anything from JME/opengl buffers. That’s view stuff (I sometimes call it “view model”), right? I never worked directly off of that; those buffers are write-only. Like I think maybe Paul hinted at above, what I did was keep/cache the data used to construct the low-level linear buffers that get sent to GL in a regular Java data structure. This is the real “model” to work with.

So this is only a guess, but maybe you are conflating model and view concerns? I found that an MVC-inspired design was crucial.

Thanks for linking your work. I’ll definitely take a look.

For my purposes, I can mostly ignore the mesh modes (except perhaps to provide hinting: strip or fan modes definitely imply connectivity). I use the getTriangle() variants, so there’s no need to interpret the index buffer. This may need to be handled differently when outputting a mesh, but that is a different, probably simpler, issue.

I do consider point & line meshes to be unsupported (for my case).
Colors, normals, etc. are per-face-vertex data (internal representations differ).

Because that’s how a j3o stores them. You’re assuming that if I have a JME asset, it’s either:

  1. Finished and ready for production
  2. That I have the ‘source’ file in a format that I am willing to deal with.

I submit that this happy state of affairs only obtains if you’ve always practiced good asset-pipeline discipline, or are unreasonably lucky.

Well, keep in mind, even though the scene editor incorrectly cheats and uses j3os as its “model”, the j3o format was never designed for this.

Its original purpose was as a “last write” sort of compilation to get your assets ready for the game. It’s not an interchange format and was not designed to be an intermediate storage form.

The fact that backwards compatibility doesn’t break more often is a minor miracle.

Editing it directly is like editing a PDF directly. It’s always better to use the original document when you have it. OpenGL buffers (which is what a Mesh is) have lost almost all of the useful information that they once had, such as creases/smoothing groups/etc… whatever the real mesh editor had.

Beyond a Guava multimap of edges and triangles, general mesh editing is nearly always purpose-built for a particular use case, or leaves out important things for 100 other use cases.


I wasn’t assuming you specifically had a JME asset (just mesh data in some format, in my example’s case it was essentially a modified obj format IIRC)… but if your requirement is to load from a j3o format, I think I understand your issue a little better now.

Yeah, but I think it’s still best treated as a case of “we lost our good model and now we’ll have to recreate it”.

If all you have left is a set of OpenGL buffers then the operation is an “import” into some better format for editing… where you can then manually recreate all of the missing pieces.

Are you speaking more from a file-format standpoint, or a JME-scenegraph API viewpoint?

Pretty much what I’m looking at… I think most of this can be recreated by analyzing the normals, etc, as long as I can figure out how to de-dupe vertices.

Breaking up non-manifolds could be done as a separate pass.

Or, translate to an “inflated” triangle soup… a version where the vertex buffer is 3 × faces, despite data duplication. Then clean it all up in one pass.
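That “inflation” step could be sketched like this in plain Java (assuming flat float position data and an int index buffer; no JME types):

```java
public class Inflate {
    // Expand an indexed position buffer into per-face-vertex form:
    // the output holds 3 floats for each of the 3 corners of every face,
    // duplicating shared data so each face owns its vertices outright.
    static float[] inflate(float[] positions, int[] indices) {
        float[] out = new float[indices.length * 3];
        for (int i = 0; i < indices.length; i++) {
            System.arraycopy(positions, indices[i] * 3, out, i * 3, 3);
        }
        return out;
    }

    public static void main(String[] args) {
        float[] pos = { 0,0,0,  1,0,0,  0,1,0,  1,1,0 }; // 4 shared vertices
        int[] idx   = { 0,1,2,  2,1,3 };                 // 2 triangles (a quad)
        float[] soup = inflate(pos, idx);
        System.out.println(soup.length); // 18 floats = 6 face-vertices
    }
}
```

The same copy applies per-attribute (normals, UVs, colors), just with a different component count per vertex.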

Unless @sgold and I are talking past each other regarding our intentions?

If you’re working on an editor of some kind, you may want both versions where vertices are shared or not shared anyway, to use for different viewing modes. I found it useful. Of course without intent/context, possibly a useless suggestion. :sweat_smile:

Throw them into a HashMap or Multimap keyed off of vertex or whatever. This is the simplest problem.
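A minimal sketch of that keying idea, with a quantized position as the map key so near-duplicates land in the same bucket (the epsilon and all names here are my own assumptions, and a production version would also probe neighboring cells, since points straddling a cell boundary can still split):

```java
import java.util.*;

public class VertexDedup {
    // Quantize a position to a grid cell so positions within ~eps of each
    // other usually produce the same key.
    static String key(float x, float y, float z, float eps) {
        return Math.round(x / eps) + ":" + Math.round(y / eps) + ":" + Math.round(z / eps);
    }

    // Group face-vertex records (index i = one corner of some face in an
    // inflated soup buffer) by quantized position.
    static Map<String, List<Integer>> group(float[] soup, float eps) {
        Map<String, List<Integer>> map = new HashMap<>();
        for (int i = 0; i * 3 + 2 < soup.length; i++) {
            String k = key(soup[i * 3], soup[i * 3 + 1], soup[i * 3 + 2], eps);
            map.computeIfAbsent(k, s -> new ArrayList<>()).add(i);
        }
        return map;
    }

    public static void main(String[] args) {
        // Two corners at (1,0,0), one off by 1e-6: same bucket at eps = 1e-4.
        float[] soup = { 1,0,0,  1.000001f,0,0,  0,1,0 };
        System.out.println(group(soup, 1e-4f).size()); // 2 distinct positions
    }
}
```

Each bucket is then a candidate shared vertex; whether its members actually merge is where the connectivity rules (windings, seams, manifoldness) come in.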

Thanks to everyone for the suggestions. Knowing that the representation is generically called ‘triangle soup’, I was able to find this paper.

Looks like it covers most of what I was concerned about. I even think that it will adapt well to healing existing meshes. @sgold you may also find it helpful.

True, for certain values of vertex. In my target structure, a vertex is:

  • a point in space
    • that is shared by all faces that connect at that point.
    • that is not shared by faces that do not connect, even if their own vertices occupy the exact same position

Therefore, it requires connectivity data or interpolation. The above-referenced paper makes it look fairly simple.


The key, I think, is coming up with a practical definition of “connect”.

What needs to match in order for 2 faces to connect? More than just vertex positions, apparently. An edge? Including normals? Tangents? Texture coordinates? Do the matches need to be precise, or are there tolerances? And if so, how do the tolerances combine?

For my library, I want to cover as many use cases as possible. However, I’m still unclear what the use cases are. If I had clear requirements, I believe the coding would be easy.

Understand that this is a fairly non-standard requirement as it seems specific to what you are doing and not at all related to graphics or rendering. So no matter what, you’d have ended up rolling your own approach anyway.

For me, ‘connect’ is ‘part of the same manifold shell.’

  • Two and only two faces share a pair of vertex locations?
    • If windings match (one has edge v1->v2 & the other has v2->v1): connected.
    • If windings do not match, and the faces are not yet in the same shell: invert the normals for one shell; connected.
    • If windings do not match, and they are already in the same shell: Möbius-like surface. They need separate vertices/edges even though they share the same location.
    • Mismatched vertex normals do not imply a split; rather, they are fed back into the smoothing data.
  • More than two faces share a pair of vertex locations?
    • T-joint; at least one face is not connected.
    • Default to no connections.
    • Comparing vertex normals and UV locations might provide some hints as to which face to exclude; this is going to be fuzzy logic, or ask the user.
  • Two or more faces share one vertex location, and are not connected by intervening faces (not part of a fan, e.g.)?
    • Bowtie-type topology. Not connected.
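The winding test in the first bullet can be sketched like this (hypothetical helper names; triangles as index triples):

```java
public class WindingCheck {
    // Does triangle (a,b,c) traverse the directed edge u -> v?
    static boolean hasDirectedEdge(int a, int b, int c, int u, int v) {
        return (a == u && b == v) || (b == u && c == v) || (c == u && a == v);
    }

    // Consistent winding across a shared edge: one face walks it u -> v,
    // the neighbor walks it v -> u.
    static boolean windingsMatch(int[] f1, int[] f2, int u, int v) {
        return hasDirectedEdge(f1[0], f1[1], f1[2], u, v)
            && hasDirectedEdge(f2[0], f2[1], f2[2], v, u);
    }

    public static void main(String[] args) {
        int[] t1 = { 0, 1, 2 }; // walks 1 -> 2
        int[] t2 = { 2, 1, 3 }; // walks 2 -> 1: consistent neighbor
        int[] t3 = { 1, 2, 3 }; // walks 1 -> 2 again: flipped neighbor
        System.out.println(windingsMatch(t1, t2, 1, 2)); // true
        System.out.println(windingsMatch(t1, t3, 1, 2)); // false
    }
}
```

A flipped neighbor that isn’t yet in the same shell can be repaired by reversing its triangle order (and normals), per the second bullet; if it’s already in the same shell, that’s the Möbius case.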