Performance issue on Float vs UNSIGNED_BYTE for RGBA colors

Hello,

I'm evaluating jME and I realized that it stores per-vertex RGBA colors in FloatBuffers, one float per component, rather than as packed unsigned bytes.

It's an interesting idea, but even if it were slightly faster (which I'm not sure I believe), it likely wouldn't have much, or perhaps any, impact on jME. Color data is generally a very small percentage of the data in any given scene. As you noted in your first post, the card itself is responsible for the internal representation of the data, so a small performance gain from using bytes on one card may end up being a performance loss on another.

All that said, if anyone can prove that wrong with actual tests, we would be happy to look at it.  :slight_smile:

But doesn't the following method from LWJGL's GL11 take a ByteBuffer (which can also be manipulated through an IntBuffer view)?

public static void glColorPointer(int size, boolean unsigned, int stride, ByteBuffer pointer)
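Something like this minimal sketch, assuming LWJGL 2 and an active GL context (the class name, vertex count, and color values here are just for illustration):

    import java.nio.ByteBuffer;

    import org.lwjgl.BufferUtils;
    import org.lwjgl.opengl.GL11;

    public class ByteColorSketch {

        // Upload per-vertex RGBA colors as packed unsigned bytes
        // (4 bytes per vertex) instead of 4 floats (16 bytes per vertex).
        static void bindByteColors(int vertexCount) {
            ByteBuffer colors = BufferUtils.createByteBuffer(vertexCount * 4);
            for (int i = 0; i < vertexCount; i++) {
                colors.put((byte) 255)   // R
                      .put((byte) 128)   // G
                      .put((byte) 0)     // B
                      .put((byte) 255);  // A
            }
            colors.flip();

            GL11.glEnableClientState(GL11.GL_COLOR_ARRAY);
            // size = 4 components per color; unsigned = true tells the
            // binding to read the data as GL_UNSIGNED_BYTE; stride = 0
            // means the colors are tightly packed.
            GL11.glColorPointer(4, true, 0, colors);
        }
    }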

Using a float for each color component of an RGBA color means 4 * 4 = 16 bytes to transfer to the graphics card, instead of 4 bytes for a packed 32-bit RGBA value.

Wouldn't that be a performance improvement?
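For example, a packed 32-bit color per vertex could be written through an IntBuffer view like this (a rough sketch; the class name and color value are made up, and the byte-order caveat in the comments matters):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.IntBuffer;

    import org.lwjgl.BufferUtils;

    public class PackedColorSketch {

        // Write one packed 32-bit RGBA value per vertex through an
        // IntBuffer view of the same ByteBuffer: 4 bytes per vertex.
        static ByteBuffer packColors(int vertexCount) {
            ByteBuffer colors = BufferUtils.createByteBuffer(vertexCount * 4);
            IntBuffer view = colors.asIntBuffer();

            int rgba = 0xFF8000FF; // R=0xFF, G=0x80, B=0x00, A=0xFF
            // The GPU reads the buffer byte by byte, so on a
            // little-endian machine the int's bytes land in memory
            // reversed; swap them to keep the R,G,B,A byte order.
            if (colors.order() == ByteOrder.LITTLE_ENDIAN) {
                rgba = Integer.reverseBytes(rgba);
            }
            for (int i = 0; i < vertexCount; i++) {
                view.put(rgba);
            }
            return colors; // position is still 0, ready for glColorPointer
        }
    }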

FloatBuffers are used because that's what LWJGL takes as input. So, in that sense, the float buffers end up being much faster in practice, since we aren't converting data to float buffers each frame.