Problem using bytes for vertex colour buffers instead of floats

Trying to render a cloud of points that represents the RGB colour space (only doing 0…31 on each axis, since it’s a lot of points otherwise).
[java]
private static final int SIZE = 32;

private void generateDotMesh() {
    Mesh mesh = new Mesh();
    mesh.setMode(Mesh.Mode.Points);

    // one point per lattice position, 3 floats for position, 4 bytes for RGBA
    FloatBuffer buf = BufferUtils.createFloatBuffer(SIZE * SIZE * SIZE * 3);
    ByteBuffer colbuf = BufferUtils.createByteBuffer(SIZE * SIZE * SIZE * 4);
    for (int k = 0; k < SIZE; ++k) {
        for (int j = 0; j < SIZE; ++j) {
            for (int i = 0; i < SIZE; ++i) {
                // position (j/k deliberately swapped relative to the colour)
                buf.put(i).put(k).put(j);
                // RGBA colour, one byte per channel
                colbuf.put((byte) i).put((byte) j).put((byte) k).put((byte) 1);
            }
        }
    }

    mesh.setBuffer(VertexBuffer.Type.Position, 3, buf);
    mesh.setBuffer(VertexBuffer.Type.Color, 4, Format.UnsignedByte, colbuf);

    mesh.updateBound();
    mesh.updateCounts();

    setMesh(mesh);
}
[/java]

That’s inside a custom Geometry class. This is the result:

Note that the first point at (0, 0, 0) is black, the second point along each axis appears to be rgb(255, 0, 0), rgb(0, 255, 0) and rgb(0, 0, 255) respectively, and every other point in the cube is white. It is as if the values are being clamped to the range 0…1.
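In raw OpenGL terms this looks like the difference between an unnormalized and a normalized attribute: an unnormalized unsigned byte is converted straight to its float value (0.0…31.0 here, which the colour output then clamps at 1.0), while a normalized one is divided by 255. Roughly, in plain Java just to illustrate the conversion (the variable names are mine, this isn’t any JME API):

[java]
// Illustration only: the two ways GL can turn an unsigned byte
// attribute value into a float.
byte b = 31;
float unnormalized = (float) (b & 0xFF);   // 31.0f, clamped to 1.0 downstream -> white
float normalized   = (b & 0xFF) / 255.0f;  // ~0.122f, the dark value I actually want
[/java]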

All the examples of vertex colour buffers on the wiki (and generally on the net when looking for JME stuff) seem to use float buffers exclusively, so I can’t find anyone else having this problem. Is this a bug, or have I done something dumb? I’m fairly familiar with OpenGL at a lower level than this, and this is how I would normally approach the problem if I were writing without a framework like JME.

I don’t want to use float colour buffers since I don’t see the need for a 128-bit colour space :) 32-bit will be fine for my needs.

My first guess would be that the fragment shader still expects floats… You could try it with a minimal self-written shader.

With some help from eisbehr on the IRC channels we’ve figured it out: there’s a setNormalized(boolean) method on VertexBuffer that determines what gets passed through to glVertexAttribPointer as the normalized flag. This has solved my problem. I’m not 100% sure this doesn’t mean the bytes are getting converted into floats internally anyway, but if they are, that’s being done by the drivers and is therefore not my problem anymore :)
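For the record, the change amounts to something like this (a minimal sketch of the fix; note that once the buffer is normalized, the alpha byte presumably wants to be 255 rather than 1 so it means fully opaque):

[java]
// Mark the colour buffer as normalized so GL divides each
// unsigned byte by 255 instead of passing it through raw:
mesh.setBuffer(VertexBuffer.Type.Color, 4, Format.UnsignedByte, colbuf);
mesh.getBuffer(VertexBuffer.Type.Color).setNormalized(true);

// ...and in the fill loop, alpha becomes 255 (1.0 once normalized):
colbuf.put((byte) i).put((byte) j).put((byte) k).put((byte) 255);
[/java]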

Thanks for looking

Hahah… I did not know about this method and I’ve been doing that rescaling in the shader. So dumb. :)