Some time ago a 3D texture material definition was created, but it only allowed using the unshaded color of the texture. My question is: is it possible to create a definition that allows using a 3D texture in the same way as a 2D texture (not only plain color, but diffuse colors, bump maps and so on)?
If the answer to the above is yes, then the other question is: is it possible to create a material definition that allows mixing 2D and 3D textures?
Please ignore any performance issues for this topic. Since I have no knowledge of how material definitions work, I just wanna know whether it is possible.
Some time ago I wrote some code that used the mesh shape to create flat textures out of 3D ones. But this turned out to be impractical: the code became very difficult to understand, and the resulting textures were too big to handle more complicated meshes.
It turned out to be perfect for importing skies though, so at least one practical use was found :lol:
The question is what a 3D texture should actually do. If the material is opaque, you're wasting lots of VRAM and bandwidth transferring useless data: of the N³ texels in the volume, only the roughly N² near the surface are ever visible, so the wasted share grows with N³/N², i.e. (roughly) linearly with the Z depth of the texture (or whatever axis comes out as Z).
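A back-of-the-envelope sketch of that memory arithmetic (plain Java, not engine code; the 6·N² surface bound is my own rough assumption):

```java
// Illustration: for an opaque N x N x N volume texture, only texels near
// the surface (at most ~6*N^2 of them) can ever be visible, so the
// total-to-useful ratio grows roughly linearly with N.
public class VolumeWaste {
    public static void main(String[] args) {
        for (int n : new int[] {16, 64, 256}) {
            long total = (long) n * n * n;   // texels stored in VRAM
            long surface = 6L * n * n;       // rough upper bound on visible texels
            System.out.printf("N=%d total=%d surface~%d ratio~%.1f%n",
                    n, total, surface, total / (double) surface);
        }
    }
}
```

For N=256 that ratio is already over 40, which is why opaque volumes are such a poor fit for straight 3D texturing.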
For semitransparent 3D textures: I started work on that (but got sidetracked before I got to code anything interesting), and you need a very, very different kind of shader, because standard shaders simply ignore internal material properties such as light scattering, and the usual shadow-rendering techniques are largely useless because shadows are now 3D volumes instead of 2D projections.
Opaque 3D textures are usually converted to surface geometry (and thus 2D textures) via variations of the "marching cubes" algorithm. Which is complicated, patent-encumbered in its original version, and has patent-free variants. Implementations exist in the JME ecosystem, though I don't know their completeness or patent status.
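As a toy illustration of the surface-extraction idea (far simpler than marching cubes itself, which emits triangles per cell configuration; this just assumes a boolean voxel grid and keeps filled voxels that touch empty space):

```java
// Toy surface extraction from a boolean voxel grid (illustration only).
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SurfaceVoxels {
    static boolean filled(boolean[][][] v, int x, int y, int z) {
        return x >= 0 && y >= 0 && z >= 0
                && x < v.length && y < v[0].length && z < v[0][0].length
                && v[x][y][z];
    }

    // Returns {x,y,z} of filled voxels that have at least one empty neighbor.
    static List<int[]> surface(boolean[][][] v) {
        List<int[]> out = new ArrayList<>();
        int[][] dirs = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
        for (int x = 0; x < v.length; x++)
            for (int y = 0; y < v[0].length; y++)
                for (int z = 0; z < v[0][0].length; z++) {
                    if (!v[x][y][z]) continue;
                    for (int[] d : dirs)
                        if (!filled(v, x + d[0], y + d[1], z + d[2])) {
                            out.add(new int[] {x, y, z});
                            break;
                        }
                }
        return out;
    }

    public static void main(String[] args) {
        boolean[][][] cube = new boolean[4][4][4];
        for (boolean[][] plane : cube)
            for (boolean[] row : plane)
                Arrays.fill(row, true);
        // 4x4x4 solid cube: 64 voxels, only the inner 2x2x2 block (8 voxels)
        // is hidden -> 56 surface voxels.
        System.out.println(surface(cube).size()); // prints 56
    }
}
```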
tl;dr: 2D and 3D textures can be mixed either by converting the 3D texture to 2D, or by setting up a new shader library.
You might succeed in extracting only smaller parts, and then batching them with the TextureAtlas tool we have. That might pack more efficiently than an algorithm that needs to keep the full mesh in mind.
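For a feel of what such batching does, here is a minimal "shelf" packer sketch (my own simplified illustration; jME's actual TextureAtlas tool is more capable): rectangles are placed left to right, starting a new row when the current one is full.

```java
// Minimal shelf-packing sketch (illustration only, not jME's TextureAtlas).
import java.util.ArrayList;
import java.util.List;

public class ShelfPacker {
    public record Placed(int x, int y, int w, int h) {}

    // Packs w x h rectangles into an atlas of the given width, shelf by shelf.
    static List<Placed> pack(int atlasWidth, int[][] rects) {
        List<Placed> out = new ArrayList<>();
        int x = 0, y = 0, shelfH = 0;
        for (int[] r : rects) {
            int w = r[0], h = r[1];
            if (x + w > atlasWidth) { // current shelf full: start a new one
                x = 0;
                y += shelfH;
                shelfH = 0;
            }
            out.add(new Placed(x, y, w, h));
            x += w;
            shelfH = Math.max(shelfH, h);
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] rects = {{64, 32}, {64, 64}, {128, 16}};
        // First two rects share shelf 0; the third starts a new shelf at y=64.
        for (Placed p : pack(128, rects))
            System.out.println(p);
    }
}
```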
You can do whatever you want, but your shader has to account for it.
Reading from a texture2D and from a texture3D is not the same, so right now putting a texture3D into the Lighting material may result in something unexpected… or a crash.
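A CPU-side analogy of why the two lookups differ (plain Java, not shader code; the addressing scheme is just one common convention):

```java
// Analogy: a 2D texture lookup needs two coordinates, a 3D lookup needs
// three, and the flat-array addressing differs -- which is why a material
// written around sampler2D cannot simply be handed a 3D texture.
public class TextureLookup {
    // Nearest-neighbor fetch from a 2D texture stored as a flat array.
    static float fetch2d(float[] texels, int w, int h, float u, float v) {
        int x = Math.min((int) (u * w), w - 1);
        int y = Math.min((int) (v * h), h - 1);
        return texels[y * w + x];
    }

    // Nearest-neighbor fetch from a 3D texture: one extra coordinate,
    // different addressing.
    static float fetch3d(float[] texels, int w, int h, int d,
                         float u, float v, float s) {
        int x = Math.min((int) (u * w), w - 1);
        int y = Math.min((int) (v * h), h - 1);
        int z = Math.min((int) (s * d), d - 1);
        return texels[(z * h + y) * w + x];
    }

    public static void main(String[] args) {
        float[] tex2d = {0f, 1f, 2f, 3f};  // 2x2 texture
        float[] tex3d = new float[8];      // 2x2x2 texture
        for (int i = 0; i < 8; i++) tex3d[i] = i;
        System.out.println(fetch2d(tex2d, 2, 2, 0.9f, 0.9f));        // prints 3.0
        System.out.println(fetch3d(tex3d, 2, 2, 2, 0.9f, 0.9f, 0.9f)); // prints 7.0
    }
}
```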