[SOLVED] Problems With Real Time Texturing Of Meshes From Real World Camera Images

A brief explanation of what I am trying to do: I am working on an augmented reality user interface for controlling a robot. I have a camera whose location/orientation in the world I know at all times. I am using a 3D sensor to find planar regions in the world and making meshes in the scene at the proper locations relative to the camera. Now I want to texture those planar regions with the camera image so that in my 3D view it is easier to see what I am looking at. I first tried simply calculating the UV maps by projecting the mesh vertices onto the camera image, but I ran into the issue of the textures not lining up along the seams of the triangles in the mesh.
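For reference, the vertex-projection step described above can be sketched like this. This is a minimal example assuming a hypothetical pinhole camera model; the intrinsics `FX`/`FY`/`CX`/`CY` and the image size are made-up placeholders, and a real setup would use the calibrated values for the actual camera (and vertices transformed into camera space first):

```java
public class CameraUvMapper {
    // Hypothetical pinhole intrinsics: focal lengths and principal point, in pixels
    static final double FX = 500, FY = 500, CX = 320, CY = 240;
    static final double WIDTH = 640, HEIGHT = 480;

    /** Projects a vertex given in camera space (z pointing forward) to a [0,1] UV pair. */
    static double[] vertexToUv(double x, double y, double z) {
        double px = FX * x / z + CX;   // pixel column
        double py = FY * y / z + CY;   // pixel row
        return new double[] { px / WIDTH, py / HEIGHT };
    }

    public static void main(String[] args) {
        // A point straight ahead of the camera lands at the principal point
        double[] uv = vertexToUv(0, 0, 2);
        System.out.printf("%.3f %.3f%n", uv[0], uv[1]); // 0.500 0.500
    }
}
```

Writing UVs this way per vertex is exactly what produces the seam artifact discussed below: the image is already perspective-projected, so interpolating these coordinates across each triangle does not reproduce the in-between pixels correctly.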

like in this post

I think my answer is I need to do some sort of projection mapping or a custom Shader but I am not sure how to do either of those at the moment and could use some guidance. Or maybe I am completely off course.

I tried to draw up a quick example of what i have and what i am trying to do. i would think this would be an easy problem, but I feel like i am missing something. any help would be greatly appreciated.

What I have

What I want

I’m sure there are many ways to achieve what you want.

If it were me, I think I would write a custom filter that replaces blue pixels with camera data and apply that filter to the 3-D view you have.


Clever, I never would have thought of that one. The only problem with that in my situation is that I would like to save the textures onto meshes that are not currently in view.


Ah. Then I think you need to write texture (UV) coordinates to the meshes at some moment when they are in view.

PS: JME meshes can have up to 9 sets of texture coordinates. Some cleverness may be needed to avoid exceeding this limit.

I was able to write what I believe were correct texture UV coordinates to the mesh. The problem is that the texture needs to be … unskewed … is that a word? … to fit on the plane, and in doing so there is a non-affine transformation on the texture, causing seams in the polygon along the triangle divisions. I am pretty certain from my searching that a custom shader needs to be written; I just can’t quite figure out how.


How about using the video-capture image as your texture and the video’s screen coordinates as your UV coordinates? Then I think you could use the standard `Unshaded.j3md` material and its shaders.

That is exactly what I did. I will try to post actual pictures of what I did when I get to work in the morning, but basically it is something like this: I am using the vertex screen coordinates and converting them to a 0–1 range for the UV map coordinates.
What I’m getting:
What I was expecting:


I believe I understand the issue now.

Subdividing the mesh should help. I’ll think it over and try to come up with a better fix.
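As a rough sketch of what subdividing buys you: tessellating each planar region into a finer grid means every triangle spans fewer screen pixels, so the per-triangle affine interpolation error shrinks. Here is one hypothetical way to generate the grid vertices for a quad (plain Java, no engine types; the bilinear corner-ordering convention is my own assumption):

```java
public class QuadSubdivider {
    /**
     * Returns the vertex positions of a quad subdivided into an n-by-n grid.
     * Corners p00..p11 are interpolated bilinearly; output is row-major,
     * (n+1)^2 vertices, each a {x, y, z} triple.
     */
    static float[][] subdivide(float[] p00, float[] p10, float[] p01, float[] p11, int n) {
        float[][] verts = new float[(n + 1) * (n + 1)][3];
        for (int j = 0; j <= n; j++) {
            for (int i = 0; i <= n; i++) {
                float s = (float) i / n, t = (float) j / n;
                for (int k = 0; k < 3; k++) {
                    float a = p00[k] * (1 - s) + p10[k] * s;  // along bottom edge
                    float b = p01[k] * (1 - s) + p11[k] * s;  // along top edge
                    verts[j * (n + 1) + i][k] = a * (1 - t) + b * t;
                }
            }
        }
        return verts;
    }

    public static void main(String[] args) {
        float[][] v = subdivide(new float[] {0, 0, 0}, new float[] {1, 0, 0},
                                new float[] {0, 1, 0}, new float[] {1, 1, 0}, 2);
        System.out.println(v.length + " vertices"); // 9 vertices
    }
}
```

In a real jME mesh you would also emit the matching triangle index buffer and recompute a UV per grid vertex, but the distortion only gets hidden, not eliminated, which matches the observation later in the thread.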

The texture is in screen space, so this is always going to be weird, I think… but the requirement to save it out again is what’s making it hard. Otherwise it’s just a filter problem, as already suggested.

The issue you will always run into trying to project this as if it were a normal texture is that textures will be perspective-corrected/foreshortened/etc… and the texture is already that. And while this is pretty trivial to fix in the shader, essentially “undoing” the projection, that leaves you still unable to save anything out.
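A small numeric sketch of that double-correction, under the assumption of a single triangle edge whose endpoints sit at different depths: the affine (screen-space) interpolation of a texture coordinate u differs from the perspective-correct result the GPU computes (interpolate u/z and 1/z, then divide), and that discrepancy is exactly what shows up along triangle seams:

```java
public class PerspectiveInterp {
    /** Affine (screen-space) interpolation of u at parameter t along an edge. */
    static double affine(double u0, double u1, double t) {
        return u0 * (1 - t) + u1 * t;
    }

    /** Perspective-correct interpolation: interpolate u/z and 1/z, then divide. */
    static double perspectiveCorrect(double u0, double z0, double u1, double z1, double t) {
        double uOverZ = (u0 / z0) * (1 - t) + (u1 / z1) * t;
        double oneOverZ = (1 / z0) * (1 - t) + (1 / z1) * t;
        return uOverZ / oneOverZ;
    }

    public static void main(String[] args) {
        // Edge endpoints: u=0 at depth z=1, u=1 at depth z=4; sample the midpoint
        System.out.printf("affine=%.3f correct=%.3f%n",
                affine(0, 1, 0.5),                     // 0.500
                perspectiveCorrect(0, 1, 1, 4, 0.5));  // 0.200
    }
}
```

Since the camera image was already foreshortened when it was captured, the affine value is what would actually match the photo here, while the hardware produces the perspective-correct one; the gap between the two is largest on triangles spanning a big depth range, which is also why subdividing reduces (but never removes) the artifact.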

Maybe it’s time to re-examine that requirement, as it could be solved a different way… perhaps only unprojecting textures when you write them out, or something. Because whatever you do at runtime is going to be WAY less performant than a screen-space option. (Because the texture is already in screen space… so projecting just to unproject it again is a bit of a waste.)

I will post a picture soon from the actual UI showing the problem, but if I am understanding you correctly, I may have used the wrong wording. I do not need to save the textures out for future use or anything. It is mostly that as the robot moves, a surface that was seen previously is no longer in view, and I want that surface to still look like the real-world texture. As I make a surface, I want to take the real image, use it as the texture, and then not update the texture on that surface anymore.

Ok, I think I may have a way forward that will be acceptable for what I am doing for now. First off, simply subdividing the mesh did help a bit. Here are the results of simply using an `Unshaded.j3md` material and setting the UV coordinates to match the vertex locations on the image. It is a little hard to understand what is going on because I am running this in sim instead of on the real robot, but the left-hand side of the UI shows “real” video coming in from the robot, and the right-hand side shows the environment being built up based on simulated 3D scans in the robot simulator. I am also overlaying that onto the video side to augment the view with what the robot can see.

But there are a few things I don’t like about this approach, mainly that the planar region has to be subdivided enough that the skewing of the image isn’t noticeable.

But I think I may have a better solution. I found an old jME3 projected-texture project on GitHub that I was able to revive. With this I will be able to color the planar regions that are out of the robot camera’s view in different colors so I can still distinguish them, while projecting the robot camera image onto any regions that are in camera view.