Ok ok, thx for the info. Lookin’ good.
I never took part in these threads, but today I feel like doing it.
Working on blender:
http://s17.postimg.org/xecglkbhb/background21.png
To learn more about JME before porting my Unity game over, I decided to make a domino game first while I get more comfortable with the engine.
Playing with shadows and appstates, importing blender models, etc.
I’m going to try to open source the network synch library that I will use to synchronize the physics state in Mythruna. So, I need a simple game to test/demo it. I have a design for a sort of multiplayer asteroids++… 2D but I’ll do it in JME.
I was playing with some premade assets tonight from http://kenney.nl/ and made a video. Just something simple to try out some of the assets.
I also grabbed some star maps from cgtextures and layered them for a little parallax. I have one more thing I want to prototype before settling on a design.
Played a little more with leaving energy signatures:
Description in the video… but the short version is I needed to prototype the idea to help make a design decision that would change the whole approach to the game. I really wanted a more “hunter” style game because it’s a better test of network zones and because it sounds more fun to me. The clincher was being able to display energy trails.
…I think it works ok. Good enough for a prototype.
Carpe Diem is a turn-based strategy game I’ve been working on with Blender and jME3 for some time now. The strategy is all about tactical combat: there’s no researching of technologies involved, just build space stations to collect resources and a variety of military craft to defend your territory and forcefully take new territory!
Maps are randomly generated using two different user-selected algorithms. Start positions are also random, except that the algorithm ensures no two starting positions are immediately next to one another; the user can turn that check off if they don’t mind the possibility of starting right next to someone.
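For illustration only, here's a minimal sketch of how such a spacing rule can be enforced with simple rejection sampling. This is not Carpe Diem's actual code; GridPoint, the grid dimensions and the class name are made-up placeholders.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class StartPlacer {

    /** Hypothetical grid cell; the game's real node class is not shown in the post. */
    static class GridPoint {
        final int x, y;
        GridPoint(int x, int y) { this.x = x; this.y = y; }

        /** "Immediately next to" = same cell or any of the 8 surrounding cells. */
        boolean touches(GridPoint o) {
            return Math.abs(x - o.x) <= 1 && Math.abs(y - o.y) <= 1;
        }
    }

    /** Draw random cells, rejecting any that sit on or next to an already placed start. */
    static List<GridPoint> placeStarts(int width, int height, int players,
                                       boolean enforceSpacing, Random rng) {
        List<GridPoint> starts = new ArrayList<GridPoint>();
        while (starts.size() < players) {
            GridPoint candidate = new GridPoint(rng.nextInt(width), rng.nextInt(height));
            boolean rejected = false;
            for (GridPoint s : starts) {
                boolean sameCell = s.x == candidate.x && s.y == candidate.y;
                if (sameCell || (enforceSpacing && s.touches(candidate))) {
                    rejected = true;
                    break;
                }
            }
            if (!rejected) {
                starts.add(candidate);
            }
            // A real implementation would cap the number of attempts for crowded maps.
        }
        return starts;
    }
}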
Carpe Diem makes use of NiftyGUI for the interface. The UI is largely customizable in that you can close, minimize and move windows. The root windowing class allows for resizing windows by dragging out any of the four corners, but currently only the resource window allows this and only on the y-axis.
The visibility of the grid, territory boundaries and unit info cards can be toggled on and off via the map window. Normally the map window and game view hide areas that have not already been sighted by the player, as well as units that are not currently visible to the player, but I disabled that for the time being for testing purposes, so in these shots the entire map is visible.
Here is a complete rework of the Hover tank model for a PBR demo I’m prepping. Texturing was done with Substance Painter.
You can even see it live on Sketchfab.
Here is what it looks like in JME.
The reflection looks grainier, I have to fix it, and there are some unfortunate distortions in the normal map compared to what I have in Painter, not sure why…
Fancy!
PBR looks awesome. So excited for it
@nehon
Can you maybe do something like a small log about converting the model?
Like what problems occurred, how you solved them, etc.?
As I guess there are many (me included) who will pretty soon want to convert a large amount of models to that lighting model.
Mhh… well… I didn’t really convert the maps… I re-textured it with Substance Painter…
At least that’s what I was planning to do at first… but in the end I took the old model as a basis and remodeled it almost from scratch… and then textured it with Substance Painter…
So IMO not a very valid workflow for “conversion” sake.
Also, Substance Painter is a paid application, not expensive considering the awesomeness, but still…
For the conversion, though, there are some examples here and there, but I didn’t see anything really magic for going from classic diffuse/specular to PBR.
You often see conversions between the different PBR workflows (metal/rough vs spec/gloss), though.
The huge difference is that the base color map should not contain any light data (like AO, or baked lighting) which was very common with the old system…so that’s hard to convert.
So if my current models already have cleanly done normal, diffuse, etc. maps, would it be somewhat possible to use them with that lighting as well?
It won’t be possible without conversion. The only one that doesn’t need conversion is the normal map.
And this conversion can hardly be done automatically.
@nehon: First of all, great work! This is likely to be one of the most highly anticipated features, I’m sure. Awesome model too; not sure if you made it or someone else did, but that tank is pretty badass!
Now is this implementation of PBR similar to what’s being done with Blender’s Cycles render engine? Would it be possible to practice and prototype materials with Blender Cycles and then use the knowledge gained from that experience with this implementation?
Thanks a lot. I did it myself yes.
Actually no. This implementation is pretty similar to what you’d have in UE4 or Unity, to some extent. Except it supports both of the common PBR workflows (metallic/roughness, specular/gloss).
I didn’t look much into Blender Cycles, but to have PBR with it you need a pretty complex node graph (Cycles is not meant for real time, whereas all the PBR implementations around are). Cycles uses real GI, while PBR fakes GI with image-based lighting using an env map.
That’s where Substance Painter makes sense, actually, because it has a real-time PBR renderer and you can have an (almost) seamless rendering between it and JME.
I say almost because I still have some differences that I’m investigating.
The Blender team plans to have a PBR viewport in future versions. I guess it will help a lot too, but IMO Cycles is not really suited for the task.
Good to know. I read the series of blog posts about PBR and I imagine I shouldn’t have any trouble working with it at this point, but I do wonder if the current non-PBR render engine will be sticking around? As cool as PBR is I rather doubt my ageing laptop will be able to handle it.
This old clunker has served me well throughout the years, but I must admit I’ve got’er capabilities stretched pretty thin as it is.
I am told that the current lighting model will still work.
Well, it’s not much more demanding than the previous lighting model… but a bit.
And yes the previous lighting system will stick around.
The short answer would be that I’m creating a mesh along a Bezier curve. I’m using this method in a few areas, such as movement boundaries, attack boundaries, and terrain boundaries, plus a slightly different mesh for the move path indicator, in which the end does not reconnect with the beginning and the UVs are calculated differently to allow for a scrolling animation.
Creating the mesh is the easy part; creating the spline along which the mesh is built is considerably more complicated for the faction boundaries. The creation of faction boundaries is spawned off in a separate thread to prevent hiccups in the frame rate; after the mesh is created it’s queued up and added to the scene.
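For what it's worth, a minimal sketch of that kind of hand-off in jME3 could look like the snippet below. This is not the actual Carpe Diem code: executor, buildBoundarySpline() and boundaryMaterial are placeholders. The heavy spline/mesh work happens on a worker thread, and only the final attach is pushed back to the render thread via Application.enqueue().

executor.submit(new Runnable() {
    @Override
    public void run() {
        // Heavy CPU work: safe on a worker thread as long as the scene graph isn't touched.
        Spline spline = buildBoundarySpline(faction);           // placeholder for the boundary tracing
        final Mesh mesh = new BezierPathCyclic(spline, 1f, 1f); // widths are arbitrary here

        // Hand the finished mesh back to the render thread before attaching it.
        app.enqueue(new Callable<Void>() {
            @Override
            public Void call() {
                Geometry geom = new Geometry("FactionBoundary", mesh);
                geom.setMaterial(boundaryMaterial);
                rootNode.attachChild(geom);
                return null;
            }
        });
    }
});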
Long story short, I iterate through a particular faction’s space stations looking for the topmost station, then look for the topmost node (grid space) within that station’s resource collection range, and begin looping over the boundary using a list of all nodes owned by this faction. I check which space station owns the node I’m currently adding to the boundary and remove it from a list of stations that have not been checked. Once I reach the end, I rinse and repeat using the list of stations I was removing from, so I don’t leave out any space stations whose boundaries are disconnected from the others.
Once I have the external boundaries completed and stored in a list, I iterate through the interior of those boundaries looking for holes and use a similar method to loop over the edges of those holes, creating interior boundaries. Once that is done, I iterate over those interior boundaries again looking for cases where a node in the inner boundary is connected to another part of the boundary via a single node and separate them into two inner boundaries, because it looks weird otherwise.
After all of that is done, I iterate over all of the boundaries, creating lists of control points (splines) based on several factors, then pass those splines into a custom mesh class that creates a mesh along the spline.
Here are the two Bezier mesh classes I have right now. They weren’t really developed with versatility in mind, so they only work with a 2D spline that lies in the x/z plane with normals pointing up along the y-axis.
Cyclical Bezier:
import com.jme3.math.Spline;
import com.jme3.math.Vector3f;
import com.jme3.scene.Mesh;
import com.jme3.scene.Node;
import com.jme3.scene.VertexBuffer;

public class BezierPathCyclic extends Mesh {

    private final Spline spline;
    private final float width1;
    private final float width2;

    public BezierPathCyclic(Spline spline, float width1, float width2) {
        super();
        this.spline = spline;
        this.width1 = width1;
        this.width2 = width2;
        create();
    }

    private void create() {
        int numSegments = 32;
        Vector3f tmp = new Vector3f();
        Vector3f tmp2 = new Vector3f();
        // Throwaway "pen" node: it is moved along the spline and its local x-axis
        // provides the left/right offsets for each pair of strip vertices.
        Node node = new Node("Path Builder");
        new Node().attachChild(node);
        int numPoints = (spline.getControlPoints().size() + 2) / 3;
        int currentControlPoint = 0;
        // One vertex pair per sample: (numPoints - 1) * numSegments + 1 pairs in total.
        float[] verts = new float[(((numPoints - 1) * numSegments + 1) * 3) * 2];
        float[] normals = new float[verts.length];
        float[] texCoord = new float[(((numPoints - 1) * numSegments + 1) * 2) * 2];
        float[] texCoord2 = new float[texCoord.length];
        int index = 0;
        int nIndex = 0;
        int uvIndex = 0;
        int uvIndex2 = 0;
        float uvY = 0f;
        float uvYSize = 0.04f;
        // UI and CDNode are Carpe Diem's own classes; they give the map size in world units.
        float gameWidth = UI.getGame().getWidth() * CDNode.SIZE;
        float gameHeight = UI.getGame().getHeight() * CDNode.SIZE;

        for (int i = 0; i < numPoints - 1; i++) {
            if (i == 0) {
                spline.interpolate(1f / numSegments, 0, tmp);
                node.setLocalTranslation(spline.getControlPoints().get(currentControlPoint));
                node.lookAt(tmp, Vector3f.UNIT_Y);
                node.localToWorld(new Vector3f(width1, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                node.localToWorld(new Vector3f(-width2, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                texCoord[uvIndex++] = 0;
                texCoord[uvIndex++] = 0;
                texCoord[uvIndex++] = 1;
                texCoord[uvIndex++] = 0;
                // Second UV set: vertex position normalized to map size (used for the fog-of-war mask).
                texCoord2[uvIndex2++] = 1f - (verts[index - 6] / gameWidth);
                texCoord2[uvIndex2++] = verts[index - 4] / gameHeight;
                texCoord2[uvIndex2++] = 1f - (verts[index - 3] / gameWidth);
                texCoord2[uvIndex2++] = verts[index - 1] / gameHeight;
            } else {
                tmp = node.getLocalTranslation().clone();
                node.setLocalTranslation(spline.getControlPoints().get(currentControlPoint));
                node.lookAt(tmp, Vector3f.UNIT_Y);
                node.localToWorld(new Vector3f(-width1, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                node.localToWorld(new Vector3f(width2, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                // Advance the v coordinate by the distance travelled so the texture doesn't stretch.
                uvY += tmp.distance(node.getLocalTranslation()) * uvYSize;
                texCoord[uvIndex++] = 0;
                texCoord[uvIndex++] = uvY;
                texCoord[uvIndex++] = 1;
                texCoord[uvIndex++] = uvY;
                texCoord2[uvIndex2++] = 1f - (verts[index - 6] / gameWidth);
                texCoord2[uvIndex2++] = verts[index - 4] / gameHeight;
                texCoord2[uvIndex2++] = 1f - (verts[index - 3] / gameWidth);
                texCoord2[uvIndex2++] = verts[index - 1] / gameHeight;
            }
            for (int s = 1; s < numSegments; s++) {
                tmp = node.getLocalTranslation().clone();
                spline.interpolate((float) s / numSegments, currentControlPoint, tmp2);
                node.setLocalTranslation(tmp2);
                node.lookAt(tmp, Vector3f.UNIT_Y);
                node.localToWorld(new Vector3f(-width1, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                node.localToWorld(new Vector3f(width2, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                uvY += tmp.distance(node.getLocalTranslation()) * uvYSize;
                texCoord[uvIndex++] = 0;
                texCoord[uvIndex++] = uvY;
                texCoord[uvIndex++] = 1;
                texCoord[uvIndex++] = uvY;
                texCoord2[uvIndex2++] = 1f - (verts[index - 6] / gameWidth);
                texCoord2[uvIndex2++] = verts[index - 4] / gameHeight;
                texCoord2[uvIndex2++] = 1f - (verts[index - 3] / gameWidth);
                texCoord2[uvIndex2++] = verts[index - 1] / gameHeight;
            }
            currentControlPoint += 3;
        }

        // Final vertex pair reuses the very first pair so the loop closes exactly.
        tmp = node.getLocalTranslation().clone();
        node.setLocalTranslation(spline.getControlPoints().get(spline.getControlPoints().size() - 1));
        node.lookAt(tmp, Vector3f.UNIT_Y);
        verts[index++] = verts[0];
        verts[index++] = 0f;
        verts[index++] = verts[2];
        verts[index++] = verts[3];
        verts[index++] = 0f;
        verts[index++] = verts[5];
        node.removeFromParent();
        normals[nIndex++] = 0;
        normals[nIndex++] = 1;
        normals[nIndex++] = 0;
        normals[nIndex++] = 0;
        normals[nIndex++] = 1;
        normals[nIndex++] = 0;
        uvY += tmp.distance(node.getLocalTranslation()) * uvYSize;
        texCoord[uvIndex++] = 0;
        texCoord[uvIndex++] = uvY;
        texCoord[uvIndex++] = 1;
        texCoord[uvIndex++] = uvY;
        texCoord2[uvIndex2++] = 1f - (verts[index - 6] / gameWidth);
        texCoord2[uvIndex2++] = verts[index - 4] / gameHeight;
        texCoord2[uvIndex2++] = 1f - (verts[index - 3] / gameWidth);
        texCoord2[uvIndex2++] = verts[index - 1] / gameHeight;

        // Two triangles per quad between consecutive vertex pairs.
        short[] indices = new short[((numPoints - 1) * numSegments) * 6];
        for (int ind = 0; ind < (numPoints - 1) * numSegments; ind++) {
            int indInd = ind * 6;
            int indV = ind * 2;
            indices[indInd++] = (short) indV;
            indices[indInd++] = (short) (indV + 1);
            indices[indInd++] = (short) (indV + 2);
            indices[indInd++] = (short) (indV + 1);
            indices[indInd++] = (short) (indV + 3);
            indices[indInd] = (short) (indV + 2);
        }

        setBuffer(VertexBuffer.Type.Position, 3, verts);
        setBuffer(VertexBuffer.Type.Normal, 3, normals);
        setBuffer(VertexBuffer.Type.TexCoord, 2, texCoord);
        setBuffer(VertexBuffer.Type.TexCoord2, 2, texCoord2);
        setBuffer(VertexBuffer.Type.Index, 3, indices);
        updateBound();
        updateCounts();
    }
}
Non-cyclical Bezier:
import com.jme3.math.Spline;
import com.jme3.math.Vector3f;
import com.jme3.scene.Mesh;
import com.jme3.scene.Node;
import com.jme3.scene.VertexBuffer;

public class BezierPath extends Mesh {

    private final Spline spline;

    public BezierPath(Spline spline) {
        super();
        this.spline = spline;
        create();
    }

    private void create() {
        int numSegments = 64;
        Vector3f tmp = new Vector3f();
        Vector3f tmp2 = new Vector3f();
        // Same "pen" node trick as the cyclical version.
        Node node = new Node("Path Builder");
        new Node().attachChild(node);
        int numPoints = (spline.getControlPoints().size() + 2) / 3;
        int currentControlPoint = 0;
        float[] verts = new float[(((numPoints - 1) * numSegments + 1) * 3) * 2];
        float[] normals = new float[verts.length];
        float[] texCoord = new float[(((numPoints - 1) * numSegments + 1) * 2) * 2];
        float[] texCoord2 = new float[texCoord.length];
        int index = 0;
        int nIndex = 0;
        int uvIndex = 0;
        int uvIndex2 = 0;
        float uvY = 0f;
        float uvYSize = 0.04f;
        float width = 4.5f;

        for (int i = 0; i < numPoints - 1; i++) {
            if (i == 0) {
                spline.interpolate(1f / numSegments, 0, tmp);
                node.setLocalTranslation(spline.getControlPoints().get(currentControlPoint));
                node.lookAt(tmp, Vector3f.UNIT_Y);
                node.localToWorld(new Vector3f(width * 0.5f, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                node.localToWorld(new Vector3f(-width * 0.5f, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                texCoord[uvIndex++] = 0;
                texCoord[uvIndex++] = 0;
                texCoord[uvIndex++] = 1;
                texCoord[uvIndex++] = 0;
                texCoord2[uvIndex2++] = 0;
                texCoord2[uvIndex2++] = 0;
                texCoord2[uvIndex2++] = 1;
                texCoord2[uvIndex2++] = 0;
            } else {
                tmp = node.getLocalTranslation().clone();
                node.setLocalTranslation(spline.getControlPoints().get(currentControlPoint));
                node.lookAt(tmp, Vector3f.UNIT_Y);
                node.localToWorld(new Vector3f(-width, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                node.localToWorld(new Vector3f(width, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                uvY += tmp.distance(node.getLocalTranslation()) * uvYSize;
                texCoord[uvIndex++] = 0;
                texCoord[uvIndex++] = uvY;
                texCoord[uvIndex++] = 1;
                texCoord[uvIndex++] = uvY;
                // Interior anchors sit in the full-width middle section; write the matching
                // second-UV values here to keep texCoord2 aligned with the other buffers
                // when the path has more than two anchors.
                texCoord2[uvIndex2++] = 0;
                texCoord2[uvIndex2++] = 1;
                texCoord2[uvIndex2++] = 1;
                texCoord2[uvIndex2++] = 1;
            }
            for (int s = 1; s < numSegments; s++) {
                tmp = node.getLocalTranslation().clone();
                spline.interpolate((float) s / numSegments, currentControlPoint, tmp2);
                node.setLocalTranslation(tmp2);
                node.lookAt(tmp, Vector3f.UNIT_Y);
                float wPerc;
                if (currentControlPoint == 0) {
                    // First segment: widen from half width to full width while the second UV set fades in.
                    wPerc = (((float) s / numSegments) * 0.5f) + 0.5f;
                    texCoord2[uvIndex2++] = 0;
                    texCoord2[uvIndex2++] = (float) s / numSegments;
                    texCoord2[uvIndex2++] = 1;
                    texCoord2[uvIndex2++] = (float) s / numSegments;
                } else if ((currentControlPoint / 3) == numPoints - 2) {
                    // Last segment: narrow back down while the second UV set fades out.
                    wPerc = (0.5f - (((float) s / numSegments) * 0.5f)) + 0.5f;
                    texCoord2[uvIndex2++] = 0;
                    texCoord2[uvIndex2++] = 1f - ((float) s / numSegments);
                    texCoord2[uvIndex2++] = 1;
                    texCoord2[uvIndex2++] = 1f - ((float) s / numSegments);
                } else {
                    wPerc = 1f;
                    texCoord2[uvIndex2++] = 0;
                    texCoord2[uvIndex2++] = 1;
                    texCoord2[uvIndex2++] = 1;
                    texCoord2[uvIndex2++] = 1;
                }
                node.localToWorld(new Vector3f(-width * wPerc, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                node.localToWorld(new Vector3f(width * wPerc, 0f, 0f), tmp2);
                verts[index++] = tmp2.x;
                verts[index++] = 0f;
                verts[index++] = tmp2.z;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                normals[nIndex++] = 0;
                normals[nIndex++] = 1;
                normals[nIndex++] = 0;
                uvY += tmp.distance(node.getLocalTranslation()) * uvYSize;
                texCoord[uvIndex++] = 0;
                texCoord[uvIndex++] = uvY;
                texCoord[uvIndex++] = 1;
                texCoord[uvIndex++] = uvY;
            }
            currentControlPoint += 3;
        }

        // Final vertex pair at the last control point; the ends do not reconnect.
        tmp = node.getLocalTranslation().clone();
        node.setLocalTranslation(spline.getControlPoints().get(spline.getControlPoints().size() - 1));
        node.lookAt(tmp, Vector3f.UNIT_Y);
        node.localToWorld(new Vector3f(-width * 0.5f, 0f, 0f), tmp2);
        verts[index++] = tmp2.x;
        verts[index++] = 0f;
        verts[index++] = tmp2.z;
        node.localToWorld(new Vector3f(width * 0.5f, 0f, 0f), tmp2);
        verts[index++] = tmp2.x;
        verts[index++] = 0f;
        verts[index++] = tmp2.z;
        node.removeFromParent();
        normals[nIndex++] = 0;
        normals[nIndex++] = 1;
        normals[nIndex++] = 0;
        normals[nIndex++] = 0;
        normals[nIndex++] = 1;
        normals[nIndex++] = 0;
        uvY += tmp.distance(node.getLocalTranslation()) * uvYSize;
        texCoord[uvIndex++] = 0;
        texCoord[uvIndex++] = uvY;
        texCoord[uvIndex++] = 1;
        texCoord[uvIndex++] = uvY;
        texCoord2[uvIndex2++] = 0;
        texCoord2[uvIndex2++] = 0;
        texCoord2[uvIndex2++] = 1;
        texCoord2[uvIndex2++] = 0;

        short[] indices = new short[((numPoints - 1) * numSegments) * 6];
        for (int ind = 0; ind < (numPoints - 1) * numSegments; ind++) {
            int indInd = ind * 6;
            int indV = ind * 2;
            indices[indInd++] = (short) indV;
            indices[indInd++] = (short) (indV + 1);
            indices[indInd++] = (short) (indV + 2);
            indices[indInd++] = (short) (indV + 1);
            indices[indInd++] = (short) (indV + 3);
            indices[indInd] = (short) (indV + 2);
        }

        setBuffer(VertexBuffer.Type.Position, 3, verts);
        setBuffer(VertexBuffer.Type.Normal, 3, normals);
        setBuffer(VertexBuffer.Type.TexCoord, 2, texCoord);
        setBuffer(VertexBuffer.Type.TexCoord2, 2, texCoord2);
        setBuffer(VertexBuffer.Type.Index, 3, indices);
        updateBound();
        updateCounts();
    }
}
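In case it helps anyone reading along, here is a hypothetical usage snippet (the control points, widths and boundaryMaterial are made up). The control point list follows jME's Bezier layout of anchor/handle points, which is what the (size + 2) / 3 in both classes assumes, and the cyclic mesh is wrapped in a Geometry, e.g. inside simpleInitApp() of a SimpleApplication.

// Anchor, out-handle, in-handle, anchor, ... (3n - 2 control points for n anchors);
// the last anchor repeats the first so the cyclic mesh closes cleanly.
List<Vector3f> points = Arrays.asList(
        new Vector3f(0, 0, 0),  new Vector3f(2, 0, 0),   // anchor 0 + its out-handle
        new Vector3f(8, 0, 2),  new Vector3f(10, 0, 2),  // in-handle + anchor 1
        new Vector3f(12, 0, 2), new Vector3f(-2, 0, 2),  // out-handle + in-handle
        new Vector3f(0, 0, 0));                          // anchor 2 = anchor 0
Spline spline = new Spline(Spline.SplineType.Bezier, points, 0f, false);

Geometry boundary = new Geometry("FactionBoundary", new BezierPathCyclic(spline, 0.5f, 0.5f));
boundary.setMaterial(boundaryMaterial); // placeholder material that samples the fog-of-war mask
rootNode.attachChild(boundary);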
P.S. The second set of UV coordinates on the cyclical Bezier is used for an alpha mask based on world coordinates in the shader. The alpha mask is the fog-of-war texture, which is updated only on frames in which the state of the fog of war changes.
When the visibility of a particular area changes, a viewport is enabled, takes a single-frame snapshot of a black-and-white scene rendered to an off-screen buffer, which is then blurred over a few passes, after which the viewport is disabled. The resolution of the texture depends on the size of the level, but it’s not very big. Blurring it adds a nice fade effect around the edges and also prevents the texture from looking blocky when scaled up.
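For anyone curious how such a one-shot off-screen pass is usually wired up in jME3, here is a rough sketch of the general technique (not the game's actual code; fogScene, fogCam, the texture size and the "AlphaMap" parameter name are placeholders):

int size = 256; // small; the blur hides the low resolution anyway
Texture2D fogTexture = new Texture2D(size, size, Image.Format.RGBA8);

FrameBuffer fogBuffer = new FrameBuffer(size, size, 1);
fogBuffer.setDepthBuffer(Image.Format.Depth);
fogBuffer.setColorTexture(fogTexture);

// Pre-view that renders the black/white visibility scene into the texture.
ViewPort fogView = renderManager.createPreView("FogOfWar", fogCam);
fogView.setClearFlags(true, true, true);
fogView.setBackgroundColor(ColorRGBA.Black);
fogView.setOutputFrameBuffer(fogBuffer);
fogView.attachScene(fogScene);
fogView.setEnabled(false); // only switched on for the one frame a new snapshot is needed

// If fogScene is not part of the main scene graph, call fogScene.updateGeometricState()
// each frame it gets rendered. The boundary material then samples the snapshot through
// the second UV set.
boundaryMaterial.setTexture("AlphaMap", fogTexture);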