Replacing part of a Mesh

Hi,

We are currently working on a 3D paperdoll as commonly seen in (MMO)RPGs.
We want to be able to visually replace the equipment on a character.

After researching the topic and reading a very interesting reddit discussion (link at bottom),
we are aiming for a similar 3-technique approach to early World of Warcraft:

  1. Head, Shoulders, and Weapons are extra objects.
  2. Torso and Legs are simple texture changes.
  3. Gloves and Boots are replaced parts of the mesh.
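That split could be modeled as a simple slot-to-technique map. The sketch below is purely illustrative; all class, enum, and method names are our own, not from any engine API:

```java
// Illustrative mapping of equipment slots to the three techniques above.
import java.util.EnumMap;
import java.util.Map;

public class PaperdollSlots {
    public enum Technique { EXTRA_OBJECT, TEXTURE_SWAP, MESH_PART_SWAP }
    public enum Slot { HEAD, SHOULDERS, WEAPON, TORSO, LEGS, GLOVES, BOOTS }

    private static final Map<Slot, Technique> MAP = new EnumMap<>(Slot.class);
    static {
        MAP.put(Slot.HEAD, Technique.EXTRA_OBJECT);      // 1. extra objects
        MAP.put(Slot.SHOULDERS, Technique.EXTRA_OBJECT);
        MAP.put(Slot.WEAPON, Technique.EXTRA_OBJECT);
        MAP.put(Slot.TORSO, Technique.TEXTURE_SWAP);     // 2. texture changes
        MAP.put(Slot.LEGS, Technique.TEXTURE_SWAP);
        MAP.put(Slot.GLOVES, Technique.MESH_PART_SWAP);  // 3. mesh part swaps
        MAP.put(Slot.BOOTS, Technique.MESH_PART_SWAP);
    }

    public static Technique techniqueFor(Slot slot) {
        return MAP.get(slot);
    }
}
```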

Example from World of Warcraft:
http://us.media3.battle.net/cms/gallery/0NP8QRIZD9T41315846395926.jpg

Thanks to jMonkey's awesome bone support, we have techniques 1 and 2 working.
My first approach would be to replace vertex groups, but these are lost when importing to Blender.

When I use extra objects for Legs and Forearms (technique 1), the shading looks cut off where the two meshes join:

How would you do this? Maybe someone has a good source we should read?
Or is replacing a part of a mesh a bad idea?

Reddit discussion on 3d paper dolling: http://www.reddit.com/r/gamedev/comments/1x9f21/how_do_3d_rpgs_handle_displayable_equipment_that/

Looks like a normals issue to me.
Though I don’t really get how your mesh is built. In that picture, how are the meshes split?

Here are our two meshes in Blender (the split is visible in green):

In Blender, the same “different lighting” can be seen, which definitely sounds like a normals problem,
thanks for the hint.

I am researching whether there is a way in Blender to recalculate the normals as if it were a single mesh.
Still open for suggestions if anyone knows a fix for this in Blender or in jMonkey.

Okay, I analyzed the normals in Blender and the vertex normals display correctly.

Also, I joined the two meshes in Blender and recalculated the normals. The problem remained.

The fix for the problem (only in Blender, not usable in-game)
is to join the meshes and then “remove double vertices”.
Only then is the lighting calculated smoothly.

Is there a way to achieve this in jMonkey?

Maybe you can try to merge the meshes, recompute the normals, then split them again.

It may not be the easiest workflow, though, if you have different models of legs and body.
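The “merge and recompute” suggestion above could also be approximated at load time without actually merging: find vertices of the two meshes that share a position and give them one averaged normal, which is essentially what Blender's “remove double vertices” achieves for lighting. This is a plain, engine-agnostic Java sketch under the assumption that the position and normal buffers have been unpacked into flat x,y,z arrays; it is not jME API:

```java
// Weld only the seam normals: coincident vertices across two meshes get
// one averaged, renormalized normal, so lighting is continuous at the split.
public class SeamNormals {
    static final float EPS = 1e-4f; // position tolerance for "same vertex"

    public static void weldSeamNormals(float[] posA, float[] normA,
                                       float[] posB, float[] normB) {
        // O(n*m) brute force, fine for a sketch; a spatial hash would scale better.
        for (int i = 0; i < posA.length; i += 3) {
            for (int j = 0; j < posB.length; j += 3) {
                float dx = posA[i]   - posB[j];
                float dy = posA[i+1] - posB[j+1];
                float dz = posA[i+2] - posB[j+2];
                if (dx*dx + dy*dy + dz*dz < EPS * EPS) {
                    // Average the two normals and renormalize.
                    float nx = normA[i]   + normB[j];
                    float ny = normA[i+1] + normB[j+1];
                    float nz = normA[i+2] + normB[j+2];
                    float len = (float) Math.sqrt(nx*nx + ny*ny + nz*nz);
                    if (len > 0f) { nx /= len; ny /= len; nz /= len; }
                    normA[i] = nx; normA[i+1] = ny; normA[i+2] = nz;
                    normB[j] = nx; normB[j+1] = ny; normB[j+2] = nz;
                }
            }
        }
    }
}
```

Because both meshes end up with identical normals along the seam, the split stays invisible even though they remain separate objects that can be swapped independently.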

You can visualize the normals in Blender: in the 3D view, go into edit mode (hit Tab), then hit “N” and a panel will pop up on the right. In this panel there is a “Mesh Display” section with a checkbox to display normals.

It can help you figure out what the issue is.

oh I posted too late :stuck_out_tongue:
There is no built in way to do this in JME.

Usually, it’s done by having the junction being a real cloth junction.
Like if you have boots, make the junction be the top of the boots.
It may produce some overlap, but you wouldn’t have this issue.

@nehon
Yeah, I just visualized my normals and saw a very small offset after joining the meshes into one
(a difference between the normals of the double vertices).

I tried the workflow you suggested (merging, recalculating normals, and then separating).
It does not work, as Blender automatically adjusts the normals when splitting an object in two.

What are the cloth junctions you mentioned?
We want to solve this problem, regardless of the solution.

We know that we could “hide” the feet inside a boot,
but that would leave us with many faces that get lighting calculations etc. while never being visible.
Or can jMonkey adjust the culling so it skips calculations for “hidden” faces?

Well let’s say your model has panties, the leg model has to start at the seam of the panties, while the panties themselves are in the body model.
Or indeed, overlap the meshes whit for example half a leg in a Boot.

Hidden faces are usually not rendered, but that’s not really a JME thing, more an OpenGL thing.
Objects in the opaque bucket are sorted front to back, meaning that objects closer to the camera are rendered first.

When rendering an object, its depth information is written to a depth buffer (at the pixel level). If, when rendering a pixel, the buffer already holds a depth value below the current one, the pixel is not rendered. Basically, if we have already rendered a pixel that is in front of the current pixel, we don’t render the new one.
This usually greatly limits the overdraw of a scene.
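The depth test described above can be sketched as a toy buffer, purely for illustration (this models the behavior, it is not GPU or jME code):

```java
// A toy per-pixel depth buffer: a fragment is written only if it is
// closer to the camera than whatever was already stored for that pixel.
public class DepthBuffer {
    final float[] depth;
    int overdrawRejected = 0; // fragments discarded by the depth test

    DepthBuffer(int pixels) {
        depth = new float[pixels];
        // Initially nothing has been drawn, so everything is "infinitely far".
        java.util.Arrays.fill(depth, Float.POSITIVE_INFINITY);
    }

    /** Returns true if the fragment passes the depth test and is "rendered". */
    boolean writeFragment(int pixel, float z) {
        if (z < depth[pixel]) {   // closer than what's there: keep it
            depth[pixel] = z;
            return true;
        }
        overdrawRejected++;       // hidden behind an earlier fragment
        return false;
    }
}
```

With front-to-back sorting, the boot's surface is drawn first, so the leg fragments behind it fail the test and are skipped, which is exactly the overdraw saving described above.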

That said, when an object is “inside” another, as with a leg in a boot, there might be some overdraw, as the body might be sorted before the boot depending on the orientation of the camera.

Also… that doesn’t prevent the hidden vertices from being processed and sent to the GPU.

But all in all, I’d be surprised if you noticed a performance loss from doing this. Modern graphics cards eat vertices like mad; it’s not really an issue.