3D Texture support

Can anyone confirm if 3D textures are fully supported or not? I have found several threads that seem to indicate that they are, but I am having some very poor results when loading.

I am in the process of modifying the texture I use for volumetric clouds from a single 1024x1024 slice to a true 128x128x128 volume texture. I export the image from Photoshop using the DDS plugin, and through debugging I can see that it is being read with h=128, w=128 and d=128, which looks correct. However, when I sample the texture, it looks like this:

In Photoshop it looks like this:


I have also noticed that the texture does not load correctly if I generate the mips in the DDS plugin. It does work for other textures, but not for textures that actually have depth.

This is how I am loading the texture:

    //Set cloud shape texture
    TextureKey noisekey = new TextureKey("Textures/Clouds/CloudShapeGenerated.dds");
    noisekey.setGenerateMips(true);
    noisekey.setTextureTypeHint(Type.ThreeDimensional);

    Texture lowFreqNoiseTex3d = assetManager.loadTexture(noisekey);
    lowFreqNoiseTex3d.setWrap(WrapMode.Repeat);

Any help would be much appreciated.

I’m going to answer my own question. Looks like this is actually an issue with the DDS file. I loaded a regular .png file as a 3d texture and set the depth manually and the volume works as expected. Not sure if the issue is the DDS loader in JME or the Nvidia plugin for Photoshop. I’ll investigate when I have some time.
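
A rough sketch of that workaround, for anyone who finds this later (a reconstruction from the description above, not the actual code used; the .png path is illustrative, and it only works if the strip's raw pixel data already happens to be in slice order, e.g. slices stacked vertically):

    // Sketch: load a PNG strip with a 3D type hint, then reinterpret it as a volume.
    // Assumes the raw pixel data is already in slice order (128 px wide, 128*128 px tall).
    TextureKey key = new TextureKey("Textures/Clouds/CloudShapeGenerated.png"); // illustrative path
    key.setGenerateMips(false);
    key.setTextureTypeHint(Type.ThreeDimensional);

    Texture lowFreqNoiseTex3d = assetManager.loadTexture(key);
    Image img = lowFreqNoiseTex3d.getImage();
    img.setWidth(128);
    img.setHeight(128);
    img.setDepth(128); // reinterpret the 2D strip as a 128x128x128 volume
    lowFreqNoiseTex3d.setWrap(WrapMode.Repeat);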

I don’t know anything about DDS files or Volumetric Textures, but the result image looks similar to the source image scaled and repeated.
How are you sampling? (could the frequency/scale be offset?)

Maybe it’s hard to see in my screenshot, but the quality is horrendously low, as if the lowest-level mips are being selected. I’m sampling in a shader using textureLod with the highest value selected, but it appears to be ignored.

The texture is repeated; that’s not the issue. It’s the ultra-low quality that seems to be caused by either the export plugin or the DDS loader in JME.

Can you show the shader?

First of all, why are you using a 2D texture as a 3D one in the first place?
How do you create a 3D noise texture?
Use a 3D noise algorithm.

This is how I created 3D noise for clouds. By default it shows a 2D slice of the noise; uncomment some lines to store the result to a file.

/*
 * Copyright (c) 2017, Juraj Papp
 * All rights reserved.
 * 
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *     * Redistributions of source code must retain the above copyright
 *       notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above copyright
 *       notice, this list of conditions and the following disclaimer in the
 *       documentation and/or other materials provided with the distribution.
 *     * Neither the name of the copyright holder nor the
 *       names of its contributors may be used to endorse or promote products
 *       derived from this software without specific prior written permission.
 * 
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
 * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
 * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 * DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY
 * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
 * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
 * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
 * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */
package noise;

import com.jme3.math.FastMath;
import com.jme3.math.Vector3f;
import java.awt.image.BufferedImage;
import java.io.DataOutputStream;
import java.io.IOException;
import templates.terrain.gen.Perlin;
import templates.util.DebugUtils;
import templates.util.Mathf;

/**
 *
 * @author Juraj Papp
 */
public class NoiseGen {

	public static void main(String[] args) {
//		cloudNoise();
		cloudDetail();
	}
	public static void cloudDetail() {
		WorleyNoise w0 = new WorleyNoise(2);
		WorleyNoise w1 = new WorleyNoise(4);
		WorleyNoise w2 = new WorleyNoise(8);
		WorleyNoise w3 = new WorleyNoise(16);
		
		w0.initRandom();
		w1.initRandom();
		w2.initRandom();
		w3.initRandom();
		
		w0.invert = w1.invert = w2.invert = w3.invert = true;
		
		
		Noise3D noise = (v) -> {
			float wf0 = w0.distance(v)*0.625f + w1.distance(v)*0.25f + w2.distance(v)*0.125f;
			float wf1 = w1.distance(v)*0.625f + w2.distance(v)*0.25f + w3.distance(v)*0.125f;
			float wf2 = w2.distance(v)*0.75f + w3.distance(v)*0.25f; 
		
			return wf0*0.625f + wf1*0.25f + wf2*0.125f;
		};
		
		Vector3f p = new Vector3f();
		BufferedImage img = new BufferedImage(128, 128, BufferedImage.TYPE_INT_RGB);
		for (int x = 0; x < 128; x++) {
			for (int y = 0; y < 128; y++) {
				p.set(x, y, 0);
				p.multLocal(1f / 128f);
				//p.multLocal(1f/w.sx, 1f/w.sy, 1f/w.sz);

				float d = noise.sample(p);
//				
				int r = b(d);
//				r = 255-r;
				img.setRGB(x, y, (r << 16) | (r << 8) | r);
//				int r = b(perlin.sample(p));
//				int g = 255-b(w0.distance(p));
//				int b = 255-b(w1.distance(p));
//				img.setRGB(x, y, (r << 16) | (g << 8) | b);

//				int r = b(perlin.sample(p));
//				int r = 255-b(w0.distance(p));
//				img.setRGB(x, y, (r << 16) | (r << 8) | r);
			}
		}

		DebugUtils.displayImages(img, img);
		
//		File f = new File("/media/leo/Kaiba/workspaces/craft/Sky/assets/Textures/clddetail.3d");
//		try(DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(f)))) {
//			writeFloat(dos, noise, 32, 32, 32);
//		} catch (IOException ex) {
//			ex.printStackTrace();
//		}
	}
	public static void cloudNoise() {
		Perlin pp = new Perlin(8);
		//p.OctavePerlin(0, 0, 0, 0, 0);

//		WorleyNoise w0 = new WorleyNoise(8, 8, 8);
//		WorleyNoise w1 = new WorleyNoise(16, 16, 16);
//		WorleyNoise w2 = new WorleyNoise(32, 32, 32);

		WorleyNoise w0 = new WorleyNoise(4);
		WorleyNoise w1 = new WorleyNoise(8);
		WorleyNoise w2 = new WorleyNoise(16);
		WorleyNoise w3 = new WorleyNoise(32);
		WorleyNoise w4 = new WorleyNoise(64);
		
		WorleyNoise w56 = new WorleyNoise(64);
		
		
//		WorleyNoise w1 = new WorleyNoise(4, 4, 4);
//		WorleyNoise w2 = new WorleyNoise(2, 2, 2);
		w0.initRandom();
		w1.initRandom();
		w2.initRandom();
		w3.initRandom();
		w4.initRandom();
		w56.initRandom();
		
		w0.invert = w1.invert = w2.invert = w3.invert = w4.invert = w56.invert = true;

		Noise3D perlin = (v) -> {
			float scale = 8.0f;
			float d = (float) pp.OctavePerlin(v.x * scale, v.y * scale, v.z * scale, 8, 0.15);
			return d;
		};

		Noise3D noise = (v) -> {
			float wfb = w1.distance(v)*0.625f + w3.distance(v)*0.25f + w56.distance(v)*0.125f;
			
			float base = remap(perlin.sample(v), 0.0f, 1.0f, wfb, 1.0f);
			
			//float w = w0.distance(v) + w1.distance(v) + w2.distance(v);
			float wf0 = w1.distance(v)*0.625f + w2.distance(v)*0.25f + w3.distance(v)*0.125f;
			float wf1 = w2.distance(v)*0.625f + w3.distance(v)*0.25f + w4.distance(v)*0.125f;
			float wf2 = w3.distance(v)*0.75f + w4.distance(v)*0.25f;
			
			float lf = wf0*0.625f + wf1*0.25f + wf2*0.125f;
			
//			return perlin.sample(v);
			return remap(base, -(1.0f - lf), 1.0f, 0.0f, 1.0f);
		};

		Vector3f p = new Vector3f();
		BufferedImage img = new BufferedImage(128, 128, BufferedImage.TYPE_INT_RGB);
		for (int x = 0; x < 128; x++) {
			for (int y = 0; y < 128; y++) {
				p.set(x, y, 0);
				p.multLocal(1f / 128f);
				//p.multLocal(1f/w.sx, 1f/w.sy, 1f/w.sz);

				float d = noise.sample(p);
//				
				int r = b(d);
//				r = 255-r;
				img.setRGB(x, y, (r << 16) | (r << 8) | r);
//				int r = b(perlin.sample(p));
//				int g = 255-b(w0.distance(p));
//				int b = 255-b(w1.distance(p));
//				img.setRGB(x, y, (r << 16) | (g << 8) | b);

//				int r = b(perlin.sample(p));
//				int r = 255-b(w0.distance(p));
//				img.setRGB(x, y, (r << 16) | (r << 8) | r);
			}
		}

		DebugUtils.displayImages(img, img);
		
//		File f = new File("/media/leo/Kaiba/workspaces/craft/Sky/assets/Textures/cldnoise.3d");
//		try(DataOutputStream dos = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(f)))) {
//			writeFloat(dos, noise, 128, 128, 128);
//		} catch (IOException ex) {
//			ex.printStackTrace();
//		}
	}
	

	public static void writeByte(DataOutputStream dos, Noise3D noise, int wx, int wy, int wz) throws IOException {
		Vector3f scale = new Vector3f(1f/wx, 1f/wy, 1f/wz);
		Vector3f p = new Vector3f();
		
		for (int i = 0; i < wy; i++) {
			for (int z = 0; z < wz; z++) {
				for (int x = 0; x < wx; x++) {
					p.set(x, i, z);
					p.multLocal(scale);

					float d = noise.sample(p);
					int r = b(d);
					dos.write(r);
				}
			}
		}
	}
	public static void writeFloat(DataOutputStream dos, Noise3D noise, int wx, int wy, int wz) throws IOException {
		Vector3f scale = new Vector3f(1f/wx, 1f/wy, 1f/wz);
		Vector3f p = new Vector3f();
		
		for (int i = 0; i < wy; i++) {
			for (int z = 0; z < wz; z++) {
				for (int x = 0; x < wx; x++) {
					p.set(x, i, z);
					p.multLocal(scale);

					float d = noise.sample(p);
					dos.writeFloat(d);
				}
			}
		}
	}
	
	public static interface Noise3D {

		public float sample(Vector3f p);
	}

	public static class WorleyNoise {

		Vector3f[][][] points;
		public int sx, sy, sz;
		
		boolean invert = false;

		public WorleyNoise(int s) {
			this(s,s,s);
		}

		
		public WorleyNoise(int x, int y, int z) {
			this.sx = x;
			this.sy = y;
			this.sz = z;
			points = new Vector3f[x][y][z];
		}

		public void initRandom() {
			worleyPoints(points, sx, sy, sz);
		}

		public float distance(Vector3f _p) {
			Vector3f p = new Vector3f(_p);
			p.multLocal(sx, sy, sz);
			int gx = (int) (p.x);
			int gy = (int) (p.y);
			int gz = (int) (p.z);

			p.x = Mathf.fract(Mathf.mod(p.x, sx));
			p.y = Mathf.fract(Mathf.mod(p.y, sy));
			p.z = Mathf.fract(Mathf.mod(p.z, sz));

			Vector3f tmp = new Vector3f();
			float minDist = 1000;
			for (int i = -1; i < 2; i++) {
				for (int j = -1; j < 2; j++) {
					for (int k = -1; k < 2; k++) {
						Vector3f v = points[Mathf.mod(gx + i, sx)][Mathf.mod(gy + j, sy)][Mathf.mod(gz + k, sz)];
						tmp.set(v);
						tmp.addLocal(i, j, k);
						minDist = Math.min(minDist,
								tmp.distance(p));
					}
				}
			}
			return invert?1.0f-minDist:minDist;
		}
	}

	public static int b(float f) {
		return (int) Mathf.clamp(f * 255f, 0f, 255f);
	}
	static float remap(float originalValue, float originalMin, float originalMax, float newMin, float newMax) {
		return newMin + (((originalValue - originalMin) / (originalMax - originalMin)) * (newMax - newMin));
	}
	//public float worleyDistance()
	public static Vector3f[][][] worleyPoints(Vector3f[][][] p, int x, int y, int z) {
		for (int i = 0; i < x; i++) {
			for (int j = 0; j < y; j++) {
				for (int k = 0; k < z; k++) {
					Vector3f v = new Vector3f(FastMath.nextRandomFloat(), FastMath.nextRandomFloat(), FastMath.nextRandomFloat());
					//v.addLocal(i, j, k);
					p[i][j][k] = v;
				}
			}
		}
		return p;
	}

}

and here is the Perlin noise

/*
 * REF: https://gist.github.com/Flafla2/f0260a861be0ebdeef76
 * The author of the C# code,
 * <a href="http://flafla2.github.io/about" target="_blank">Adrian
 * Biagioli</a> (alias Flafla2), claims that the code is free to use.
 * Also the author of the original algorithm, <a
 * href="https://mrl.nyu.edu/~perlin/" target="_blank">Ken Perlin</a>,
 * did not apply for any patents on the algorithm.
 */
package templates.terrain.gen;

/**
 *
 * @author Juraj Papp
 */
public class Perlin {

	public int repeat;

	public Perlin() {
		this(-1);
	}
	public Perlin(int repeat) {
		this.repeat = repeat;
	}

	public double OctavePerlin(double x, double y, double z, int octaves, double persistence) {
		double total = 0;
		double frequency = 1;
		double amplitude = 1;
		double maxValue = 0;			// Used for normalizing result to 0.0 - 1.0
		for(int i=0;i<octaves;i++) {
			total += perlin(x * frequency, y * frequency, z * frequency) * amplitude;
			
			maxValue += amplitude;
			
			amplitude *= persistence;
			frequency *= 2;
		}
		
		return total/maxValue;
	}
	
	private static final int[] permutation = { 151,160,137,91,90,15,					// Hash lookup table as defined by Ken Perlin.  This is a randomly
		131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10,23,	// arranged array of all numbers from 0-255 inclusive.
		190, 6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,
		88,237,149,56,87,174,20,125,136,171,168, 68,175,74,165,71,134,139,48,27,166,
		77,146,158,231,83,111,229,122,60,211,133,230,220,105,92,41,55,46,245,40,244,
		102,143,54, 65,25,63,161, 1,216,80,73,209,76,132,187,208, 89,18,169,200,196,
		135,130,116,188,159,86,164,100,109,198,173,186, 3,64,52,217,226,250,124,123,
		5,202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,
		223,183,170,213,119,248,152, 2,44,154,163, 70,221,153,101,155,167, 43,172,9,
		129,22,39,253, 19,98,108,110,79,113,224,232,178,185, 112,104,218,246,97,228,
		251,34,242,193,238,210,144,12,191,179,162,241, 81,51,145,235,249,14,239,107,
		49,192,214, 31,181,199,106,157,184, 84,204,176,115,121,50,45,127, 4,150,254,
		138,236,205,93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180
	};
	
	private static final int[] p; 													// Doubled permutation to avoid overflow
	
	static {
		p = new int[512];
		for(int x=0;x<512;x++) {
			p[x] = permutation[x%256];
		}
	}
	
	public double perlin(double x, double y, double z) {
		if(repeat > 0) {									// If we have any repeat on, change the coordinates to their "local" repetitions
			x = x%repeat;
			y = y%repeat;
			z = z%repeat;
		}
		
		int xi = (int)x & 255;								// Calculate the "unit cube" that the point asked will be located in
		int yi = (int)y & 255;								// The left bound is ( |_x_|,|_y_|,|_z_| ) and the right bound is that
		int zi = (int)z & 255;								// plus 1.  Next we calculate the location (from 0.0 to 1.0) in that cube.
		double xf = x-(int)x;								// We also fade the location to smooth the result.
		double yf = y-(int)y;
		double zf = z-(int)z;
		double u = fade(xf);
		double v = fade(yf);
		double w = fade(zf);
															
		int aaa, aba, aab, abb, baa, bba, bab, bbb;
		aaa = p[p[p[    xi ]+    yi ]+    zi ];
		aba = p[p[p[    xi ]+inc(yi)]+    zi ];
		aab = p[p[p[    xi ]+    yi ]+inc(zi)];
		abb = p[p[p[    xi ]+inc(yi)]+inc(zi)];
		baa = p[p[p[inc(xi)]+    yi ]+    zi ];
		bba = p[p[p[inc(xi)]+inc(yi)]+    zi ];
		bab = p[p[p[inc(xi)]+    yi ]+inc(zi)];
		bbb = p[p[p[inc(xi)]+inc(yi)]+inc(zi)];
	
		double x1, x2, y1, y2;
		x1 = lerp(	grad (aaa, xf  , yf  , zf),				// The gradient function calculates the dot product between a pseudorandom
					grad (baa, xf-1, yf  , zf),				// gradient vector and the vector from the input coordinate to the 8
					u);										// surrounding points in its unit cube.
		x2 = lerp(	grad (aba, xf  , yf-1, zf),				// This is all then lerped together as a sort of weighted average based on the faded (u,v,w)
					grad (bba, xf-1, yf-1, zf),				// values we made earlier.
			          u);
		y1 = lerp(x1, x2, v);

		x1 = lerp(	grad (aab, xf  , yf  , zf-1),
					grad (bab, xf-1, yf  , zf-1),
					u);
		x2 = lerp(	grad (abb, xf  , yf-1, zf-1),
		          	grad (bbb, xf-1, yf-1, zf-1),
		          	u);
		y2 = lerp (x1, x2, v);
		
		return (lerp (y1, y2, w)+1)/2;						// For convenience we bound it to 0 - 1 (theoretical min/max before is -1 - 1)
	}
	
	public int inc(int num) {
		num++;
		if (repeat > 0) num %= repeat;
		
		return num;
	}
	
	public static double grad(int hash, double x, double y, double z) {
		int h = hash & 15;									// Take the hashed value and take the first 4 bits of it (15 == 0b1111)
		double u = h < 8 /* 0b1000 */ ? x : y;				// If the most significant bit (MSB) of the hash is 0 then set u = x.  Otherwise y.
		
		double v;											// In Ken Perlin's original implementation this was another conditional operator (?:).  I
															// expanded it for readability.
		
		if(h < 4 /* 0b0100 */)								// If the first and second significant bits are 0 set v = y
			v = y;
		else if(h == 12 /* 0b1100 */ || h == 14 /* 0b1110*/)// If the first and second significant bits are 1 set v = x
			v = x;
		else 												// If the first and second significant bits are not equal (0/1, 1/0) set v = z
			v = z;
		
		return ((h&1) == 0 ? u : -u)+((h&2) == 0 ? v : -v); // Use the last 2 bits to decide if u and v are positive or negative.  Then return their addition.
	}
	
	public static double fade(double t) {
															// Fade function as defined by Ken Perlin.  This eases coordinate values
															// so that they will "ease" towards integral values.  This ends up smoothing
															// the final output.
		return t * t * t * (t * (t * 6 - 15) + 10);			// 6t^5 - 15t^4 + 10t^3
	}
	
	public static double lerp(double a, double b, double x) {
		return a + x * (b - a);
	}
//	public static float[][][] p = new float[256][256][2];
//	static {
//		Random r = new Random();
//		Vector2f v = new Vector2f();
//		for(int x = 0; x < 256; x++)
//			for(int y = 0; y < 256; y++) {
//				v.set(r.nextFloat()*2.0f-1.0f, r.nextFloat()*2.0f-1.0f);
//				v.normalizeLocal();
//				p[x][y][0] = v.x;
//				p[x][y][1] = v.y;
//			}
//	}
//	public static void main(String[] args) {
//		Perlin p = new Perlin(0);
//		BufferedImage img = new BufferedImage(256, 256, BufferedImage.TYPE_INT_RGB);
//		for(int x = 0; x < 128; x++)
//			for(int y = 0; y < 128; y++) {
//				//int col = (int)(255*perlin(x/64f, y/64f));
//				double scale = 8.0/128.0;
////				double per = p.perlin(x*scale, y*scale, 0);
//				double per = p.OctavePerlin(x*scale, y*scale, 0, 4, 0.75);
//				int col = (int)(per*255);
//				System.out.println("int " + per);
//				img.setRGB(x, y, (col << 16) | (col << 8) | col);
//			}
//		DebugUtils.displayImage(img);
//	}
//	public static float lerp(float a0, float a1, float w) {
//		return (1.0f - w)*a0 + w*a1;
//	}
// 	public static float dotGridGradient(int ix, int iy, float x, float y) {
//     // Precomputed (or otherwise) gradient vectors at each grid node
////     extern float Gradient[IYMAX][IXMAX][2];
// 
//     // Compute the distance vector
//     float dx = x - (float)ix;
//     float dy = y - (float)iy;
// 
//     // Compute the dot-product
//     return (dx*p[iy][ix][0] + dy*p[iy][ix][1]);
//	}
// 
// // Compute Perlin noise at coordinates x, y
//	public static float perlin(float x, float y) {
// 
//     // Determine grid cell coordinates
//     int x0 = (int)x;
//     int x1 = x0 + 1;
//     int y0 = (int)y;
//     int y1 = y0 + 1;
// 
//     // Determine interpolation weights
//     // Could also use higher order polynomial/s-curve here
//     float sx = x - (float)x0;
//     float sy = y - (float)y0;
// 
//     // Interpolate between grid point gradients
//     float n0, n1, ix0, ix1, value;
//     n0 = dotGridGradient(x0, y0, x, y);
//     n1 = dotGridGradient(x1, y0, x, y);
//     ix0 = lerp(n0, n1, sx);
//     n0 = dotGridGradient(x0, y1, x, y);
//     n1 = dotGridGradient(x1, y1, x, y);
//     ix1 = lerp(n0, n1, sx);
//     value = lerp(ix0, ix1, sy);
// 
//     return value;
// }
}

@The_Leo I already have a program that creates seamless noise. I save each slice along the x axis, creating a 2D texture of 16384 x 128. I generate a 3D volume from this using the Photoshop DDS plugin; the plugin automatically uses the strip to create the volume. That is what is not loading correctly. A 3D texture is just slices of 2D textures, so I believe loading a PNG as a 3D texture and manually setting the width/depth on the image works. (Correct me if I’m wrong on this, but it appears to work.)
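
If the slices sit side by side along x like that, the raw bytes of the loaded 2D image are generally not in slice order, so they may need to be repacked before being handed to a Texture3D. A hedged sketch (assuming an uncompressed format from the PNG loader; the path and sizes just mirror the 16384 x 128 strip described above):

    // Hypothetical sketch: repack a 16384x128 strip (128 slices of 128x128, side by side
    // along x) into slice order and wrap it in a Texture3D.
    int size = 128, slices = 128;
    Image strip = assetManager.loadTexture("Textures/Clouds/CloudShapeGenerated.png").getImage();
    int bpp = strip.getFormat().getBitsPerPixel() / 8;

    ByteBuffer src = strip.getData(0);
    ByteBuffer dst = BufferUtils.createByteBuffer(size * size * slices * bpp);
    for (int s = 0; s < slices; s++) {            // one slice at a time
        for (int y = 0; y < size; y++) {          // copy one row of that slice
            int offset = (y * size * slices + s * size) * bpp;
            for (int b = 0; b < size * bpp; b++) {
                dst.put(src.get(offset + b));     // absolute get, relative put
            }
        }
    }
    dst.flip();

    ArrayList<ByteBuffer> data = new ArrayList<>(1);
    data.add(dst);
    Image volume = new Image(strip.getFormat(), size, size, slices, data, ColorSpace.Linear);
    Texture3D tex3d = new Texture3D(volume);
    tex3d.setWrap(WrapMode.Repeat);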

@RiccardoBlb all I am doing in the shader in that screenshot is sampling the texture and setting it to gl_FragColor.

You mean the lowest value, right? textureLod(..., 0) with 0 as the lod will give the highest detail.

That’s what I am wondering too… I think it would be useful to post the shader, or the part of the shader that is doing the sampling, just to be sure there are no issues there, because I am pretty sure I used 3D DDS textures in JME.

Yes, I am using the lowest value/highest quality. Sorry, I was unclear. It works fine with a square DDS file generated as a volume; the issue only occurs when I use a strip to generate the volume. I have tried others that I have found online, with the same results.

Here is the useful part of the frag

    gl_FragColor = textureLod(m_CloudShapeNoise, pos * .071, 0);

You could write a quick test that displays a 2D slice of the loaded 3D texture to check if it’s alright.
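
For uncompressed formats, such a test could look roughly like this (showSlice is an illustrative helper, not existing jME API; DXT-compressed images would have to be decompressed first):

    // Show slice 'slice' of a loaded 3D texture as a 2D Picture in the GUI node.
    // Assumes an uncompressed format with all slices stored back to back in one buffer.
    private void showSlice(Texture3D tex3d, int slice) {
        Image src = tex3d.getImage();
        int w = src.getWidth(), h = src.getHeight();
        int bpp = src.getFormat().getBitsPerPixel() / 8;
        int sliceBytes = w * h * bpp;

        ByteBuffer all = src.getData(0).duplicate();
        all.position(slice * sliceBytes);
        all.limit(slice * sliceBytes + sliceBytes);

        ByteBuffer one = BufferUtils.createByteBuffer(sliceBytes);
        one.put(all);
        one.flip();

        Image sliceImg = new Image(src.getFormat(), w, h, one, ColorSpace.Linear);
        Picture pic = new Picture("slice " + slice);
        pic.setTexture(assetManager, new Texture2D(sliceImg), false);
        pic.setWidth(w);
        pic.setHeight(h);
        guiNode.attachChild(pic);
    }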

Or you can try loading another known-good 3D texture [if you don’t have any, load the one that is present in SevenSky] and display it; if it shows correctly, the problem is in your texture.

@dreamwagon
in the shader, where does the 0.071 come in?
Is the sample being taken from the texture at the position (uv coords, uvw?) multiplied by that amount?

That value is just a scalar to scale the texture up or down. The quality is terrible regardless of whether it is scaled up or down. As I mentioned, this occurs only when loading a DDS volume that has depth > 1. I have not had a chance to test, but I plan to load the texture in Unity to see if it is the texture itself.

To check where the expectations are wrong, I would

  • generate my own texture and see if it’s displayed as expected

  • map the texture to a mesh with 3D tex coords

     private static final int TEX3D_W = 32;
     private static final int TEX3D_H = 16;
     private static final int TEX3D_D = 256;
    
     @Override
     public void simpleInitApp() {
         flyCam.setMoveSpeed(15);
    
         Texture3D tex3d = create3DTexture();
         tex3d.setMinFilter(Texture.MinFilter.NearestNoMipMaps);
         tex3d.setMagFilter(Texture.MagFilter.Nearest);
    
         // Material from jme3-examples lib
         Material mat = new Material(assetManager, "jme3test/texture/tex3D.j3md");
         mat.setTexture("Texture", tex3d);
    
         Mesh mesh = new Torus(64, 64, 0.7f, 1.0f);
         generate3DTexCoords(mesh);
    
         Geometry geom = new Geometry("Geom", mesh);
         geom.setMaterial(mat);
         geom.move(0, 0, 6);
         rootNode.attachChild(geom);
     }
    
     private Texture3D create3DTexture() {
         ArrayList<ByteBuffer> data = new ArrayList<>(1);
         // Fill ByteBuffer here with RGB8 values
         data.add( BufferUtils.createByteBuffer(TEX3D_W * TEX3D_H * TEX3D_D * 3) );
    
         Image img = new Image(Image.Format.RGB8, TEX3D_W, TEX3D_H, TEX3D_D, data, ColorSpace.Linear);
         return new Texture3D(img);
     }
    
     private void generate3DTexCoords(Mesh mesh) {
         mesh.updateBound();
         BoundingBox bounds = (BoundingBox) mesh.getBound();
         Vector3f bbMin = bounds.getMin(null);
         Vector3f bbMax = bounds.getMax(null);
         Vector3f bbSize = bbMax.subtract(bbMin);
    
         VertexBuffer vbPos = mesh.getBuffer(VertexBuffer.Type.Position);
         FloatBuffer bufPos = (FloatBuffer) vbPos.getData();
         float[] uvw = BufferUtils.getFloatArray(bufPos);
    
         for(int i=0; i<uvw.length; i+=3) {
             uvw[i]   = (uvw[i]   - bbMin.x) / bbSize.x;
             uvw[i+1] = (uvw[i+1] - bbMin.y) / bbSize.y;
             uvw[i+2] = (uvw[i+2] - bbMin.z) / bbSize.z;
         }
    
         mesh.clearBuffer(VertexBuffer.Type.TexCoord);
         mesh.setBuffer(VertexBuffer.Type.TexCoord, 3, BufferUtils.createFloatBuffer(uvw));
     }

The image format is DXT5, not RGB8. I have both a PNG file and a DDS. Should I run the test for both images?

I would build a 3D texture, pass it to your shader and check if the result matches your expectations, just to bypass the importer for now.
Can your shader sample an RGB8 texture? If not, maybe build a DXT5 texture.

Alternatively, I would pass the imported texture to a mesh with 3D tex coords.

Do Mesh VertexBuffers support a 3-float TexCoord?

Or would a third float be used in a weighted interpolation between two slices of a volume texture?

I ran a few tests and I can confirm it is some issue with how the JME DDS loader is loading the export from the Photoshop DDS plugin for the DXT3/DXT5 formats.

I found that the DXT3/DXT5 volume formats are loaded with only 1/3 of the bytes I would expect. 128 cubed should be 128x128x128x3 = 6,291,456, but both of these formats are loading with only a third of that in the engine (2,097,152).

I imported the Photoshop DDS files into GIMP and all the data is there and looks correct. When I export these same DDS files from GIMP and load them in JME, the textures load as expected, so this confirms it is not an issue with the Photoshop plugin, but with how these volumes are being loaded by JME.

I also found that DDS export from PS with 8.8.8.8 ARGB works as expected as well, so this is not urgent, but there is definitely something not exactly right in the DDS loader. I would be happy to share my DDS files if anyone wants to dig deeper. Should I submit an issue on GitHub?


Yes, please submit an issue at GitHub. That way the knowledge won’t get buried.

https://github.com/jMonkeyEngine/jmonkeyengine/issues/new

How did you arrive at that value? Those are compressed formats for the GPU, so they won’t use 128x128x128x3 bytes in the engine.
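
For reference, the block arithmetic (DXT5/BC3 packs each 4x4 block of 16 texels into 16 bytes, i.e. one byte per texel), which would put a 128^3 DXT5 volume at about a third of the RGB8 size:

    // Quick sanity check of the expected sizes for a 128x128x128 volume:
    long texels   = 128L * 128 * 128;             // 2,097,152 texels
    long rgb8Size = texels * 3;                   // 6,291,456 bytes uncompressed as RGB8
    long blocks   = (128 / 4) * (128 / 4) * 128L; // 4x4x1 DXT blocks per slice: 131,072 blocks
    long dxt5Size = blocks * 16;                  // 16 bytes per DXT5 block = 2,097,152 bytes
    System.out.println(rgb8Size + " vs " + dxt5Size); // 6291456 vs 2097152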

Can you upload the DDS file?