I just opened the .jar file and added the missing extension, now it is running fine on my GeForce GTX 670MX.
Alright thanks for everyone’s help!
Here is the updated version:
https://dl.dropboxusercontent.com/u/41648603/PackedJar.zip
I set the version to glsl110 and I added the extension at the top of the fragment shader.
Let's hope this one works :facepalm:
Previously I added
#extension GL_EXT_gpu_shader4 : enable
and got this error:
and with your new upload I get this:
Since you are using texture arrays, why don’t you stick with OGL 3+?
@Neomex sorry for the trouble! I’ll get a nvidia card at some point this summer so we won’t have to go through this >.<
Okay, I just changed the GLSL version to 150. I also turned off shadows in case that was the problem >.>
From now on I’ll try to test on more hardware before posting demos. Does anyone know of a hardware simulator, so I can at least see whether this will compile on different hardware?
Here’s the new jar. You can just replace it with the old jar:
https://dl.dropboxusercontent.com/u/41648603/MC_Demo.jar
As always, just show me the error and I’ll do my best
Also, does anyone know how to pack the jar differently in Eclipse so it doesn’t pack the assets inside the jar itself?
Yeah, it will take us a while to debug it this way…
I’m not 100% sure, but I think you would have to buy an emulation system (like this: http://www.mentor.com/products/fv/emulation-systems/ )
@Neomex 4th time’s the charm? I’m only fixing the error; I’m hoping it will compile despite all the warnings :-/
Btw I started reading the cubical marching squares paper in depth. I’ll probably attempt an implementation tonight.
Here’s the new jar :facepalm:
https://dl.dropboxusercontent.com/u/41648603/MC_Demo.jar
Ah, this time it worked!
Good luck on CMS.
I think it would be a good idea to add a mesh simplifier later on, at least for flat surfaces. Though even with flat surfaces you would still need to take multiple materials into consideration (I think)
@jesusmb1995 said: I tried to run the demo on an Intel GPU (I have switchable graphics [Intel and ATI]): Uncaught exception thrown in Thread[LWJGL Renderer Thread,5,main]: UnsupportedOperationException: No default technique on material ‘Simple’ is supported by the video hardware. The caps [GLSL150] are required.
:explode:
I have Intel and Nvidia and I got the same error with Intel graphics. Also, I can’t switch to Nvidia if I run the jar; in the future could you provide an exe? Thanks.
@relucri said: I have Intel and Nvidia and I got the same error with Intel graphics. Also I can't switch to Nvidia if I run the jar, in the future could you provide an exe? :) thanks.
Just choose javaw.exe in the Nvidia settings. The system should then run all Java programs via the graphics card.
Oh yes, I didn’t think about that, but still the same error… Anyway, thanks @X-Ray-Jin
accidental post…
@FuzzyMonkey said: Now consider the image below:
…someone check to see if he’s still breathing.
Yeah, I accidentally pressed enter… it’s going to be a pretty long post, so it’ll be a little longer. I’ll just repost instead of editing. This is why I should do everything in a word document and then copy-paste…
So, I’m at a bit of a crossroads. The quick two-second summary is this:
There is a choice to be made between two different data models to run a voxel engine on.
- An easy-to-understand but not “well behaved” system that has a very intuitive API but doesn’t always yield the result you’d like.
- A more complex, less intuitive system that yields exact results. You can think of it as having good voxel anti-aliasing.
On to the specifics:
My marching cubes demo runs on the first model. This is the pure data model; that is, you edit voxel density values directly. Why I think this model is intuitive will be clear after sharing a few examples of marching cubes in 2D.
Consider the following 2d grid:
Each dot is considered to be a sub-voxel, and the area between 4 sub-voxels is considered to be a voxel.
Now marching cubes brings in the concept that each sub-voxel has an amount of weight to it. For now you can just think of it as a value in the range [0, 1.0], where 0 means the sub-voxel is off and 1.0 means the sub-voxel is completely on.
Now consider the image below:
The blue dot represents a fully-on sub-voxel, the green dots are the vertices of the generated mesh, and the red lines are the edges of the mesh. The black dots are sub-voxels that are off (0 weight).
So you can kind of see marching cubes’ primitive unit as a diamond shape.
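To make the idea concrete, here’s a minimal sketch of the 2D (marching squares) case: given corner weights and an iso-level of 0.5, the mesh vertex on each crossed edge is placed by linear interpolation between the two corner weights. This is illustrative only, not code from the demo; the class and method names are my own.

```java
// Minimal 2D marching-squares edge interpolation: find where the
// iso-surface crosses an edge between two corner weights.
// Illustrative sketch only -- not code from the demo.
public class MarchingSquaresCell {
    public static final float ISO = 0.5f;

    // Parametric position t in [0,1] of the iso-crossing along an edge
    // whose endpoints have weights a and b (assumes the edge is crossed,
    // i.e. ISO lies between a and b).
    public static float crossing(float a, float b) {
        return (ISO - a) / (b - a);
    }

    public static void main(String[] args) {
        // A fully-on corner (1.0) next to an off corner (0.0): the surface
        // crosses that edge at its midpoint, which is why a lone "on" dot
        // produces the diamond shape described above.
        System.out.println("on -> off crossing: t = " + crossing(1.0f, 0.0f));

        // A fully-on corner next to a 1/4-on corner (as in the second
        // image): the crossing shifts toward the weaker corner.
        System.out.println("on -> 1/4 crossing: t = " + crossing(1.0f, 0.25f));
    }
}
```

The second case shows why weights act like “voxel anti-aliasing”: nudging a weight slides the vertex along the edge instead of snapping it.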
Here’s what this looks like in the MC demo:
Now here is a slightly more complex case of marching cubes:
The dark blue dot represents a fully-on sub-voxel; the light blue dot is 1/4 on.
Here’s what this looks like in the MC demo:
Now marching cubes doesn’t support sharp features, but dual contour does. Here is an image of dual contour in our 2D case:
Now this is dual contour (DC) without using normal data (which is the whole point of DC). The red and green lines have the same meaning as before. The yellow lines are what would have been generated if marching cubes were run on the same data. Dual contour generates 1 vertex per voxel, unlike marching cubes, which generates 1 to many triangles per voxel.
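A tiny sketch of what “DC without normal data” means in practice: each cell gets a single vertex, and without normals a common fallback is to place it at the mass point (average) of the cell’s edge crossings. With normals you would instead minimize a quadratic error function. Names here are illustrative, not from any particular implementation.

```java
// Dual contouring without normal data: one vertex per cell, placed at
// the mass point (average) of the edge-crossing positions. With Hermite
// (normal) data you would minimize a QEF instead, which is what lets DC
// reconstruct sharp corners. Illustrative sketch only.
public class DualContourCell {
    // edgeCrossings: array of 2D points {x, y} where the surface
    // crosses the cell's edges.
    public static float[] massPoint(float[][] edgeCrossings) {
        float sx = 0f, sy = 0f;
        for (float[] p : edgeCrossings) {
            sx += p[0];
            sy += p[1];
        }
        int n = edgeCrossings.length;
        return new float[] { sx / n, sy / n };
    }

    public static void main(String[] args) {
        // A cell whose surface crosses the left edge at (0, 0.5) and the
        // bottom edge at (0.5, 0): the cell's vertex lands between them.
        float[] v = massPoint(new float[][] { { 0f, 0.5f }, { 0.5f, 0f } });
        System.out.println(v[0] + ", " + v[1]);
    }
}
```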
Now all I really want to show with the next image is how greatly this normal data can affect the output of DC:
The pink lines are the normals at the “interpolated” points. As you can see, a thin rectangle gets generated instead of a square as in the image before.
Anyway, the point is that this normal data can greatly influence the shape of the outputted mesh.
So, this first model has you simply editing these sub-voxel values to get the geometry that you want. It’s almost Minecraft-like in style: voxels just have a variable amount of on-ness. But there are a lot of problems with this model; specifically, it’s hard to get exactly the shapes and geometries you want displayed.
Now, before I discuss the second model that all the big engines like upvoid and voxelfarm use, I have to introduce a few concepts.
The first idea is that of the density function. It’s a function that takes a point and outputs a density. Specifically, f(x,y,z) = c, where c is called an isovalue and represents how far away the point (x,y,z) is from the surface. Dual contour and marching cubes are really made to run on these density functions.
Here is an example of a density function that generates a sphere:
f(x,y,z) = r − √((x0 − x)² + (y0 − y)² + (z0 − z)²), where r is the radius and (x0, y0, z0) is the center.
So the density is greater than zero when you’re inside the sphere and negative when you’re outside. The meshes that we generate in marching cubes are trying to approximate where f(x,y,z) = 0, aka where the surface lies.
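Here’s the sphere density function above written out as code, a direct transcription of the formula (positive inside, zero on the surface, negative outside); the class and parameter names are just illustrative.

```java
// The sphere density function from the text: f(x,y,z) = r - distance to
// center. Positive inside the sphere, zero on the surface, negative
// outside. Illustrative sketch only.
public class SphereDensity {
    public static float density(float x, float y, float z,
                                float x0, float y0, float z0, float r) {
        float dx = x0 - x, dy = y0 - y, dz = z0 - z;
        return r - (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        // Unit sphere centered at the origin:
        System.out.println(density(0, 0, 0, 0, 0, 0, 1)); // center, inside
        System.out.println(density(1, 0, 0, 0, 0, 0, 1)); // on the surface
        System.out.println(density(2, 0, 0, 0, 0, 0, 1)); // outside
    }
}
```

A mesher like marching cubes then samples this function on a grid and extracts the f = 0 surface.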
One of the reasons this is a useful way of representing things is because it can be used with constructive solid geometry (CSG) operations. These are the standard set operations we CS people are used to seeing. Consider the following example:
(a) would be a density function describing a cube and (b) would be a density function describing a sphere. You can do an operation like union by setting the net density function to something like: f(x,y,z) = max(f_cube(x,y,z), f_sphere(x,y,z))
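The CSG combinators fall out of that sign convention (positive inside): union is max, intersection is min, and difference is min(a, −b). A minimal sketch, with illustrative names only:

```java
// CSG on density functions, following the sign convention in the text
// (positive inside the solid): union = max, intersection = min,
// difference = min(a, -b). Illustrative sketch only.
public class CsgDensity {
    public interface Density {
        float at(float x, float y, float z);
    }

    public static Density union(Density a, Density b) {
        return (x, y, z) -> Math.max(a.at(x, y, z), b.at(x, y, z));
    }

    public static Density intersect(Density a, Density b) {
        return (x, y, z) -> Math.min(a.at(x, y, z), b.at(x, y, z));
    }

    public static Density subtract(Density a, Density b) {
        return (x, y, z) -> Math.min(a.at(x, y, z), -b.at(x, y, z));
    }

    public static void main(String[] args) {
        Density sphere = (x, y, z) ->
            1f - (float) Math.sqrt(x * x + y * y + z * z);
        // Axis-aligned cube of half-size 1 centered at the origin:
        Density cube = (x, y, z) ->
            1f - Math.max(Math.abs(x), Math.max(Math.abs(y), Math.abs(z)));

        Density both = union(cube, sphere);
        System.out.println(both.at(0, 0, 0)); // inside both shapes
    }
}
```

Because each combinator returns another density function, you can nest them into the kind of CSG tree the upvoid article describes.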
Here are a couple good sources if you want to read more:
http://www.volume-gfx.com/volume-rendering/volume-model/
https://upvoid.com/devblog/2013/08/density-function-design/
So voxelfarm and upvoid don’t do operations directly on a grid as I showed in my marching cubes examples. Instead they use well formed CSG operations to determine what the density values should be at particular points. You can think of it as there being a specific shape the world “should” be, this allows you to fully utilize methods like dual contour (DC) and cubical marching squares (CMS).
However, there are a lot of pros and cons to this method. Here’s the short list:
- Terrain is better behaved. Editing quality is more exact and sharp features are more exact.
- Data storage may be significantly better.
- Probably involves a complex expression system.
- Supporting very high amounts of edits is complex.
You can read more here:
http://upvoid.com/devblog/2013/07/terrain-engine-part-2-volume-generation-and-the-csg-tree/
Right now I’m still trying to find out how good I can get the editing behavior to work under the first model.
I’d like to hear what you guys think about this.
Let me know if there is anything I can make clearer about this problem.
This doesn’t really apply directly to your problem, but maybe it inspires a solution… though I guess it’s a bit obvious, so there may be some general ideological problems with it… but anyway…
Under the straight marching cubes stuff I started open sourcing, I have a generic interface called DensityVolume:
https://code.google.com/p/simsilica-tools/source/browse/trunk/IsoSurface/src/main/java/com/simsilica/iso/DensityVolume.java
This can be purely generative as in a fractal:
https://code.google.com/p/simsilica-tools/source/browse/trunk/IsoSurface/src/main/java/com/simsilica/iso/fractal/GemsFractalDensityVolume.java
Or can be an extraction or concrete “array” of data as in:
https://code.google.com/p/simsilica-tools/source/browse/trunk/IsoSurface/src/main/java/com/simsilica/iso/volume/ArrayDensityVolume.java
I can then do interesting things like composite many together somehow (I don’t have those checked in yet) or resample at different resolutions or locations:
https://code.google.com/p/simsilica-tools/source/browse/trunk/IsoSurface/src/main/java/com/simsilica/iso/volume/ResamplingVolume.java
If you do not have this sort of interface between the mesh generator and the source data then you should. In theory it covers all of your use-cases above and even allows you to combine them in CSG-like ways. I did some work like this as I initially played with tree generation. I tried to see if I could skin a tree made from stretched density fields… the results were not so good but that was perhaps more a limitation of regular marching cubes than anything else. At any rate, for that project it was clear that anything that looked good was going to be a LOT of polygons and there was virtually no control over it. So I went with a simpler and ultimately way better looking approach.
…but still the concepts and interfaces are valid. I used something similar to carve the caves out of Mythruna’s land mass. The idea behind these sorts of meta-balls or “influencers” as I call them is very appealing to me because it reminds me a lot of radio signal propagation models I used to work on a long time ago. So I always still think of it as shaped signal fields.
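To illustrate the decoupling idea (not pspeed’s actual API; the interface, method, and names below are my own sketch): if the mesher only ever sees “density at a point”, then generative fractals, concrete arrays, and resampling or compositing decorators all plug in interchangeably.

```java
// A minimal sketch of a DensityVolume-style interface as described
// above: the mesh generator only sees "density at a point", so the
// source can be generative, array-backed, or a decorator over another
// volume. NOT pspeed's actual API -- names/signatures are illustrative.
public class VolumeSketch {
    public interface DensityVolume {
        float getDensity(float x, float y, float z);
    }

    // A purely generative source: a unit sphere at the origin.
    public static final DensityVolume SPHERE =
        (x, y, z) -> 1f - (float) Math.sqrt(x * x + y * y + z * z);

    // A decorator that resamples another volume at a different
    // scale and offset, analogous to a ResamplingVolume.
    public static DensityVolume resample(DensityVolume src, float scale,
                                         float ox, float oy, float oz) {
        return (x, y, z) -> src.getDensity(x * scale + ox,
                                           y * scale + oy,
                                           z * scale + oz);
    }

    public static void main(String[] args) {
        // Scaling the sample coordinates by 2 shrinks the sphere's
        // apparent surface to radius 0.5.
        DensityVolume half = resample(SPHERE, 2f, 0f, 0f, 0f);
        System.out.println(half.getDensity(0.5f, 0f, 0f));
    }
}
```

Compositing volumes (e.g. CSG-style max/min of two sources) would just be another decorator over the same interface.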
Thanks for your input, @pspeed; your interface looks really clean.
Would it be possible to implement the complex method with normals, yet ‘simulate’ a simple density field wherever needed?
By simulate I mean using a neutral normal that would yield the same result as you would have with density fields.
Let me know if I’m not speaking clearly.
Hmm, I don’t think my DC example was very good; this picture might make things clearer:
The hermite data represents the normal of the isosurface (i.e. our terrain mesh).
@Neomex said: Would it be possible to implement the complex method with normals, yet 'simulate' a simple density field wherever needed? By simulate I mean using a neutral normal that would yield the same result as you would have with density fields.
Let me know if I’m not speaking clearly.
I’m not sure I understand. All density fields have normals, really.
@pspeed said: I'm not sure I understand. All density fields have normals, really.
Do you mean the three-value normal used in computer graphics? I meant the single float/int with a level of ‘fullness’.