I need to call low level OpenGL functions on a mesh’s position (vertex) buffer, more specifically glBindBuffer, glBufferData, glBufferSubData and glCopyBufferSubData. How does one correctly do that in jMonkey 3.2.1?
I’m making a voxel game where the chunk dimensions are currently 32³. I need to remove and add voxels rapidly, which is why I don’t want to update the position buffer by re-sending the whole vertex data every time a single block is added or destroyed.
My solution:

I do a lot of bindings just to fit the OpenGL targets.

1. Create a global dynamic copy buffer that is used as scratch space when copying buffers.
2. Bind the position buffer to GL_COPY_READ_BUFFER.
3. Bind the global copy buffer to GL_COPY_WRITE_BUFFER.
4. Copy the content of GL_COPY_READ_BUFFER into GL_COPY_WRITE_BUFFER.
5. Bind the position buffer to GL_COPY_WRITE_BUFFER.
6. Bind the global copy buffer to GL_COPY_READ_BUFFER.
7. Resize the position buffer using glBufferData, passing null as the data.
8. If a voxel is added:
   - Copy the content of GL_COPY_READ_BUFFER from the start up to the voxel index into GL_COPY_WRITE_BUFFER.
   - Call glBufferSubData to write the new voxel’s data.
   - Copy the content of GL_COPY_READ_BUFFER from the voxel index to the end into GL_COPY_WRITE_BUFFER, after the new voxel’s data.
9. If a voxel is removed:
   - Copy the content of GL_COPY_READ_BUFFER from the start up to the voxel index into GL_COPY_WRITE_BUFFER.
   - Copy the content of GL_COPY_READ_BUFFER from the voxel’s maximum index (the end of the removed range) to the end into GL_COPY_WRITE_BUFFER.
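Stripped of the GL binding boilerplate, the add/remove steps are just a splice: copy the head, write (or skip) one element, then copy the tail. A minimal CPU-side sketch of that logic with `java.nio.ByteBuffer` (the `spliceInsert`/`spliceRemove` names and the 12-byte vertex stride are my own illustrative assumptions, not jMonkey API):

```java
import java.nio.ByteBuffer;

public class VoxelSplice {
    // Assumed stride: 3 floats (x, y, z) per vertex = 12 bytes. Hypothetical value.
    static final int STRIDE = 12;

    // Mirrors the "voxel added" steps: copy start..index, write the new
    // voxel's data, then copy index..end shifted right by one stride.
    static ByteBuffer spliceInsert(ByteBuffer src, int index, byte[] voxelData) {
        ByteBuffer dst = ByteBuffer.allocate(src.capacity() + STRIDE);
        dst.put(src.array(), 0, index * STRIDE);               // start .. voxel index
        dst.put(voxelData);                                     // new voxel's data
        dst.put(src.array(), index * STRIDE,
                src.capacity() - index * STRIDE);               // voxel index .. end
        dst.flip();
        return dst;
    }

    // Mirrors the "voxel removed" steps: copy start..index, then copy the
    // tail starting after the removed voxel's range.
    static ByteBuffer spliceRemove(ByteBuffer src, int index) {
        ByteBuffer dst = ByteBuffer.allocate(src.capacity() - STRIDE);
        dst.put(src.array(), 0, index * STRIDE);               // start .. voxel index
        dst.put(src.array(), (index + 1) * STRIDE,
                src.capacity() - (index + 1) * STRIDE);        // after removed range .. end
        dst.flip();
        return dst;
    }
}
```

In the GL version, the two `put` calls around the insertion point become the two `glCopyBufferSubData` calls and the middle `put` becomes `glBufferSubData`.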
Do you think this is overkill? Also, do you think it will actually improve the performance (based on your intuitions and experience, naturally)?
My hesitation about a simple glBufferData call is that I remember reading on a random developer forum that sending large buffers to OpenGL many times in rapid succession (what counts as rapid? my guess would be every 0.5 seconds) was slow.
I downloaded Mythruna. First, here are my computer specifications:

Windows 7 64-bit
1366 x 768 (32-bit) (50 Hz)
Nvidia NVS 5400M
8 GB RAM
i7-3630QM at 2.40 GHz (8 CPUs)

The Windows version needs JRE 1.5 installed; I have Java 1.9 and only 9 GB left on my C: drive, so it’s kinda frustrating. I made it work with the Linux version on my Windows machine, which is kinda ironic. Linus would be happy.
I walked two steps and got this error, so I cannot test whether it is performant enough:
I’ll follow your recommendation and just update the buffer à la jMonkey.
Sending too much data to the GPU will saturate the bus and not leave enough bandwidth for other critically important things, that’s true. What’s “too much” depends entirely on the bus speed (bandwidth) that’s available and the amount of data per second that you’re sending. In other words, the bus can only transmit so many bits per second, and you can only use up so many of them before you start noticing performance degrading. I’m guessing your chunk buffer is going to be pretty small though (a few KB?), so your GPU shouldn’t bat an eye if you’re re-sending the entire thing several times a second.
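To put rough numbers on the “shouldn’t bat an eye” claim: a quick back-of-the-envelope calculation, where the quad count, bytes per vertex, upload rate, and 8 GB/s bus figure are all illustrative assumptions, not measured values:

```java
public class UploadBudget {
    // Bytes per second pushed over the bus when re-sending a whole chunk mesh.
    // 4 vertices per quad; the other figures are caller-supplied guesses.
    static long uploadBytesPerSecond(int quads, int bytesPerVertex, int uploadsPerSecond) {
        return (long) quads * 4 * bytesPerVertex * uploadsPerSecond;
    }

    public static void main(String[] args) {
        // Assume ~2,000 visible quads in a 32^3 chunk, 12 bytes of position
        // data per vertex, re-uploaded 10 times per second.
        long perSecond = uploadBytesPerSecond(2_000, 12, 10);   // 960,000 B/s
        // PCIe 2.0 x16 moves roughly 8 GB/s, so this is a rounding error.
        System.out.printf("%d bytes/s = %.4f%% of an 8 GB/s bus%n",
                perSecond, 100.0 * perSecond / 8e9);
    }
}
```

Even with these generous numbers, re-uploading the whole mesh uses on the order of a hundredth of a percent of the bus.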
You can play around with sizes, but it’s no shock that many games choose 16x16x16 because it’s the best “middle ground”. This will also be a massive problem when you are networking. At this rate, it’s going to take at least three packets of data (64 KB limit per packet) just to send one cell of terrain. Network traffic can be a major problem if you don’t factor it in. At 16x16x16 you get some headroom for additional information, such as lighting data, a heightmap, etc… Go with the flow. It just works.
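The packet arithmetic above can be sanity-checked with a few lines; the bytes-per-cell figure is an assumption for illustration, not Mythruna’s actual encoding:

```java
public class ChunkPackets {
    // The 64 KB per-packet payload limit mentioned above.
    static final int PACKET_LIMIT = 64 * 1024;

    // Packets needed to ship one raw (uncompressed) cubic chunk,
    // assuming a fixed number of bytes per cell.
    static int packetsFor(int dim, int bytesPerCell) {
        long total = (long) dim * dim * dim * bytesPerCell;
        return (int) ((total + PACKET_LIMIT - 1) / PACKET_LIMIT);  // ceiling divide
    }

    public static void main(String[] args) {
        // A 32^3 chunk at 6 bytes/cell needs 3 packets; 16^3 fits in one
        // with room to spare for lighting, heightmap, etc.
        System.out.println(packetsFor(32, 6) + " vs " + packetsFor(16, 6));
    }
}
```

At 16x16x16 the raw cell data is 4,096 cells, so even several bytes per cell stays comfortably inside a single packet.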
You are listing worst cases, though… like if every other block was removed in a checkerboard pattern.
In reality, the buffers are waaay smaller than that. I even use a short index buffer instead of int. And it only ever became an issue when I created chunks of fully random block data just to test what would happen. (Things just get chopped randomly.)
Edit: as an example, the best case is 32x32x4… 4096 bytes.
You mean some bots / people clicked the download button en masse? Damn, people are dicks. In college, our teacher bought us an Amazon cloud EC2 instance and, a week after we launched a small test website on it, Chinese spammers / bots took it down, which resulted in a huge bill.
No, people logged into the public Mythruna world I ran would use spam clickers to place blocks rapidly. If they set the frequency too high then it would kill my server. Usually these were the hardcore players so I just asked them to stop… saved me from having to quickly implement countermeasures.
Edit: as a side note, once I put my server online for people to play, clearly 50% of my development time was implementing griefer counter-measures. It got quite advanced in the end including being able to roll back to previous versions of chunks, etc…
Yes, you are wise to skip multiplayer if you are trying to keep scope small. It complicates things by at least 100x… and I didn’t have nice libraries at the time. I had to rewrite SpiderMonkey from the ground up and much later created SimEthereal… which Mythruna hasn’t gotten to use yet.
And even after having those libraries, creating the Spacebugs demo has taken a lot longer than I’d planned/wanted.
There are a lot of ugly short cuts you can take in a single player game where the ugliness is just fine… or will only ever be a problem when you are already a success and can worry about it then.