Thanks for your reply.
Yeah that library shouldn’t be needed. It’s something I was looking into for compression but didn’t end up using. I probably just didn’t fully remove it from the workspace.
Do you plan on adding support for customizing the octree?
You could also combine octrees with run-length encoding for saving chunk data.
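To illustrate the idea, here is a tiny sketch of run-length encoding a flat voxel array (all of the names here are made up, nothing from the engine):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal run-length codec for a flat voxel/density array. Mostly-empty or
// mostly-solid chunks collapse into a handful of (runLength, value) pairs.
public class RunLengthCodec {

    /** Encodes the array as a list of {runLength, value} pairs. */
    public static List<int[]> encode(byte[] voxels) {
        List<int[]> runs = new ArrayList<>();
        int i = 0;
        while (i < voxels.length) {
            byte value = voxels[i];
            int length = 1;
            while (i + length < voxels.length && voxels[i + length] == value) {
                length++;
            }
            runs.add(new int[]{length, value});
            i += length;
        }
        return runs;
    }

    /** Expands the {runLength, value} pairs back into a flat array. */
    public static byte[] decode(List<int[]> runs, int totalSize) {
        byte[] voxels = new byte[totalSize];
        int i = 0;
        for (int[] run : runs) {
            for (int j = 0; j < run[0]; j++) {
                voxels[i++] = (byte) run[1];
            }
        }
        return voxels;
    }
}
```

The leaves of an octree could store their data this way, so uniform regions stay cheap while detailed ones fall back to (almost) raw arrays.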
So I played around with it a bit, and it works great so far.
Now I have a question,
is there a method to access/export the raw density arrays from a certain range?
e.g. from 0,0,0 to 30,30,30 or similar, for network syncing & storage.
(Note: I use the voxel engine for spaceship hulls, so using Minecraft-like chunk logic is not really useful in this case, as I either need the full ship or nothing of it.)
(and later reapply them)
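To make the question a bit more concrete, something along these lines is what I have in mind (none of these types or methods exist, they are only meant to illustrate the kind of API I'm asking about):

```java
// Purely hypothetical interface to illustrate the request; nothing of this
// exists in the current code base.
public interface VoxelRegionAccess {

    /**
     * Copies the raw density values inside the axis-aligned box from
     * (minX, minY, minZ) to (maxX, maxY, maxZ), e.g. (0,0,0) to (30,30,30),
     * into a flat array so it can be sent over the network or stored.
     */
    float[] exportDensities(int minX, int minY, int minZ,
                            int maxX, int maxY, int maxZ);

    /**
     * Writes a previously exported array back into a box of the same size,
     * re-applying the densities on the receiving side.
     */
    void applyDensities(int minX, int minY, int minZ,
                        int maxX, int maxY, int maxZ,
                        float[] densities);
}
```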
Hi,
I just created a pull request with some minor changes & fixes.
Hi,
First of all, this adds a Gradle build, like jME uses for the next releases.
-> Removed redundant folder
-> Moved sources to src/main/java, as is the default for Maven and Gradle
-> Added assets as a resource folder
-> Renamed packages to comply with http://docs.oracle.com/javase/tutorial/java/package/namingpkgs.html
Second, it fixes two shader-related parts: one was using an unnecessarily high GLSL level, the other was an out uniform that was never written to. With these changes it is possible to run it even on an Intel onboard GPU (HD 4000), so pretty much everywhere.
German Disclaimer:
Since I'm German and often a bit too direct, don't get me wrong: I don't do this to correct you or to show that I'm better etc. In fact, I do this because I quite like what you have done so far.
Sorry I didn’t reply over the weekend.
Now I have a question, is there a method to access/export the raw density arrays from a certain range? e.g. from 0,0,0 to 30,30,30 or similar for network syncing & storage.
Seems like you're asking for some kind of save/load system. Another related thing that is needed is a voxel copy/paste tool.
I’ll try to see if I can get those into the next demo.
I’m almost done with the adaptive dual contouring, so I’ll upload the results. I also made a design document to go with it. The code is a lot cleaner than what you’ve seen in the current implementation.
I appreciate those fixes, Empire Phoenix =D
Uhh, just noticed I have forgotten to update the link,
the correct one is:
Hm, something with that merge has failed; the packages still have uppercase letters somehow.
Here is another one with just the case changes; let's see if it works this time.
(Also, if you are on Windows, you might have to re-clone the repository to get the correct case in your local copy.)
Hey @FuzzyMonkey,
is everything alright? Having heard nothing for around 2-3 weeks kinda worries me. If you need help or something, just tell me and I will see what I can do.
Also one question: is there a way to query whether something is at a specific location? E.g. get the raw density value or similar for, say, 15,5,2? I guess it's in the isoPoints of Voxelnode somehow, but I haven't yet managed to understand what they actually represent.
Technically GSoC is over now but he did say he planned to keep working on it for a while. I’ve been away just recently but I’ll try and catch up with him. One thing we really need to do is get all the package names/format/etc sorted and pull it into JME3 as a proper plugin.
So I’ve been working on converting everything over to an adaptive approach.
While I’ve got a meshing algorithm that creates no seams, it has a lot of complexity. This has also made it prone to bugs and very hard to create a good abstraction for.
Here is a video. The level of detail transition is set very high so as to make the LOD noticeable.
It’s hard to know if I’m doing the right things as far as LOD is concerned.
Some questions that I’ve been trying to answer are:
What is the best way to divide the scene into meshes?
How often do you remesh these divisions?
How fast should the level of detail change?
I’ve spent the last few days reflecting on some of my weaknesses and why my progress has been so slow. I think most of my problems are due to a lack of abstraction and poor design.
I would say the LOD is fine; it's on par with most implementations I have actually seen in published games. And since you said it's extra noticeable, I think it will do fine.
Regarding your questions, my opinions are:
Hm, the best way for a scene in jME is usually to just batch them into blobs of a few 10-100k polygons, and make sure they are geographically bounded, so that frustum culling and the far plane can eliminate most of the work even before rendering.
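A rough sketch of what I mean, using only basic jME scene graph classes (the region size and naming are just placeholders):

```java
import com.jme3.scene.Geometry;
import com.jme3.scene.Mesh;
import com.jme3.scene.Node;

// Sketch: group chunk meshes under one Node per region, so every region has
// its own bounding volume and can be frustum-culled as a whole. The region
// size should be tuned so a region ends up at a few 10-100k polygons.
public class RegionBatcher {

    private static final int CHUNKS_PER_REGION = 8; // made-up value

    public void attachChunkMesh(Node rootNode, Mesh chunkMesh,
                                int chunkX, int chunkY, int chunkZ) {
        String regionName = "region_"
                + Math.floorDiv(chunkX, CHUNKS_PER_REGION) + "_"
                + Math.floorDiv(chunkY, CHUNKS_PER_REGION) + "_"
                + Math.floorDiv(chunkZ, CHUNKS_PER_REGION);

        Node regionNode = (Node) rootNode.getChild(regionName);
        if (regionNode == null) {
            regionNode = new Node(regionName);
            rootNode.attachChild(regionNode);
        }

        // Material setup omitted; the region's geometries could later be
        // merged into one mesh, e.g. with GeometryBatchFactory.optimize().
        Geometry chunkGeom = new Geometry(
                "chunk_" + chunkX + "_" + chunkY + "_" + chunkZ, chunkMesh);
        regionNode.attachChild(chunkGeom);
    }
}
```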
Do you mean how often they are remeshed, or how often the divisions themselves are changed?
-> The first one greatly depends on the game; however, there are rarely games that modify every mesh every frame. Usually the changes are kinda limited to one chunk at a time, and they are often quite rare in comparison to frames. If you are asking this as an optimisation problem, I would say choose the simpler algorithm, as it is easier to understand and makes it easier to improve later.
-> For the second one, I would say it is not necessary for normal use cases; the only one I kinda see is when the materials change. (PS: I already thought about how to implement a TextureArray-based material and transmit the material id in a vertex attribute; this could be added at a later stage, when the basic system and APIs have stabilized.)
-> I would probably just make it adjustable. The approach I often see is to estimate, with a simple heuristic, the (screen-space) error compared to the full mesh, and if it exceeds a user-defined limit, use the next more detailed level. Since (if I understood dual contouring correctly) you already have the edges and normals, the importance of a surface point could be estimated by the angle it has, e.g. close to 180° is nearly a plane and less important than one with 90° between the adjacent normals.
I would create some kind of interface for this that is basically given the position/vertex/normals or similar and just returns a float (and is required to weight it according to the distance to the camera). This can then be user-implemented if there are different needs, and it allows for easier testing of different algorithms.
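As a sketch of what such an interface could look like (all names invented, and the angle heuristic is only one possible implementation of it):

```java
import com.jme3.math.Vector3f;

// Sketch of a user-implementable error metric for LOD decisions: hand in the
// geometry info, get back a single float that already includes the distance
// weighting. All names are invented.
public interface LodErrorMetric {

    /**
     * Estimated importance of one surface point; higher values mean the
     * point should be kept at a finer level of detail.
     */
    float estimateError(Vector3f position, Vector3f normalA, Vector3f normalB,
                        Vector3f cameraPosition);
}

// Possible implementation based on the angle between adjacent normals:
// parallel normals mean the surface is nearly a plane (angle close to 180°
// between the adjacent faces) and contributes little, while a 90° corner
// is a sharp feature.
class AngleBasedErrorMetric implements LodErrorMetric {

    @Override
    public float estimateError(Vector3f position, Vector3f normalA, Vector3f normalB,
                               Vector3f cameraPosition) {
        // 1 when the normals agree (flat), 0 when they are perpendicular.
        float flatness = Math.max(0f, normalA.normalize().dot(normalB.normalize()));
        float featureImportance = 1f - flatness;

        // Weight down by distance so far-away detail matters less.
        float distance = position.distance(cameraPosition);
        return featureImportance / (1f + distance);
    }
}
```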
I wouldn’t say that your design is poor; when reading it, the most difficult part for me was understanding what the more complex methods (e.g. generateisopoints) actually do. A few simple comments for this might help greatly. Looking at the JFX-JME project, spending time on the Javadocs was worth it, as all the pull requests and feature additions I received from various people were really great, and I at least hope that the Javadocs helped them to understand what everything is supposed to do.
If you have anything you want to ask for opinions regarding the abstraction, feel free to ask me anytime.
Great stuff!
A question: How is the texture mapping being done? Tri-planar mapping? Procedural textures?
I’m asking because I want to texture a large sphere, but tri-planar mapping has its downsides…
All the demos use triplanar texturing, but you can set it to other shaders that don't use triplanar mapping.
There is probably some texture mapping that you can use that would be better for spheres.
Is this project still alive? What happened to multimaterial support?
So I had the good fortune to spend a bit of time reading: http://graphics.csie.ntu.edu.tw/CMS/download/cms-eg2005.pdf
It’s another way to convert voxel data to triangles and makes some aspects of LOD easier. I also got the time to set up a workspace/test bed that would be good for testing its implementation. However, I don’t think I could get far enough on my own to consider it worth working on.
I’d like to make a decent foundation for a voxel engine but this would take too long on my own.
To be on the level of a modern voxel engine this would be the todo list:
- Implement cubical marching squares (CMS) for 1 octree
- Add multimaterial and basic voxel operations
- Sparse chunks where each chunk is an octree (see http://i.imgur.com/zyldh9m.png; a small sketch is below this list)
- LOD based on camera
- Vegetation + Other procedural tools
- Networking/Database for multiplayer editing
- Finite automata for water/moving voxels
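Regarding the sparse-chunk item, a tiny sketch of the kind of structure I have in mind (OctreeChunk is just a placeholder for whatever the per-chunk octree ends up being):

```java
import java.util.HashMap;
import java.util.Map;

// "Sparse chunks where each chunk is an octree": only chunks that actually
// contain data get an entry, keyed by their chunk coordinate.
public class SparseChunkMap {

    /** Placeholder for the per-chunk octree (root node, densities, ...). */
    public static class OctreeChunk { }

    private final Map<Long, OctreeChunk> chunks = new HashMap<>();

    /** Packs three 21-bit chunk coordinates into one long key. */
    private static long key(int cx, int cy, int cz) {
        return ((long) (cx & 0x1FFFFF) << 42)
             | ((long) (cy & 0x1FFFFF) << 21)
             |  (long) (cz & 0x1FFFFF);
    }

    /** Returns null for empty space, i.e. chunks that were never created. */
    public OctreeChunk getChunk(int cx, int cy, int cz) {
        return chunks.get(key(cx, cy, cz));
    }

    public void putChunk(int cx, int cy, int cz, OctreeChunk chunk) {
        chunks.put(key(cx, cy, cz), chunk);
    }
}
```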
Alone I wouldn’t have enough time, but if there are others that are interested in working together, I’d gladly commit some of my time.
Good plan, but could you remind me again why you don’t want to build on top of @pspeed’s existing work?
That seems like a reasonable idea, pspeed has a solid framework.