It depends on the texture. In the case above, the texture looked pretty flat and the dark spots were already lower than the high spots… so I would have just added a hue/saturation layer and killed the saturation to make it a gray bump map… then generated a normal map from that. I have also been known to hand paint bump maps. (The idea of painting heights pleases me on some level… like building up clay with a paint brush.)
In my experience, I get pretty good results from Photopea’s normal map generator… though it doesn’t seem to deal with repeating textures correctly, and I’m a little paranoid, so I always make a tiled image, generate the normal map from that, then cut the center out. I do sort of miss NVIDIA’s normal map generator in Photoshop.
…another tip: in whatever tool you use, name the layer to include whatever normal map generation settings you used. This has saved me a few times when I come back to a psd some months later and want to tweak things… or even just being able to compare two different normal map layers and remember what makes them different.
Could be. The birds are going to be small (for now) so I can get away with putting off normal maps for quite a while. But I also hope to paint the feathers on in grayscale, so it could be that things are already well set up for normal map generation. We’ll see how the UVs turn out. I like to make them easy to paint in 2D tools but sometimes that wastes too much space (like with the dog)… and then I’m stuck doing most painting in 3D.
I just meant that the clean brick pattern was why I offered to do a normal map quickly. (Because you aren’t using PBR, I’d have thrown in a specular map, too…)
Some textures are not so easy to bump map. And some brick textures, even when clean, are exactly opposite of what one would want for ‘high spots being bright’… since mortar is often white.
That floor texture is basically perfect for quick bump mapping. Low spots are dark, high spots are light.
I’m so happy right now. I improved my vegetation workflow so much that I found the solution with vanilla JMonkey. No need for a vegetation plug-in. I’m just using Blender and FOUR batches.
Depending on where the camera is looking, it goes from 500,000 triangles to 6,000 triangles. Just what I wanted, without much effort. JMonkey rocks!
If you are setting up your trees individually before batching, a tip is to throw a random Y-axis rotation in. (A bit of tiny random x-axis and z-axis rotation can be good, too.)
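The random-rotation trick is easy to sketch. This is a plain-Java illustration of the angle ranges involved (the helper names are mine, not from the engine); in jMonkeyEngine the resulting angles would typically be fed into `Quaternion.fromAngles(x, y, z)` when placing each tree before batching:

```java
import java.util.Random;

public class TreeJitter {
    // Generates per-tree rotation angles (radians): a full random yaw around
    // the Y axis, plus a tiny random tilt on the X and Z axes.
    static float[] randomAngles(Random rng, float maxTilt) {
        float yaw   = rng.nextFloat() * (float) (Math.PI * 2.0);  // 0 .. 2*pi
        float tiltX = (rng.nextFloat() * 2f - 1f) * maxTilt;      // -maxTilt .. maxTilt
        float tiltZ = (rng.nextFloat() * 2f - 1f) * maxTilt;
        return new float[] { tiltX, yaw, tiltZ };                 // x, y, z order
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        float[] a = randomAngles(rng, 0.05f);
        // In jMonkeyEngine these would feed straight into:
        // tree.setLocalRotation(new Quaternion().fromAngles(a[0], a[1], a[2]));
        System.out.printf("tiltX=%.3f yaw=%.3f tiltZ=%.3f%n", a[0], a[1], a[2]);
    }
}
```

A seed per placement also makes the scatter reproducible between runs, which helps when you re-batch.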
When all is done, my workflow will be to use the Blender particle system with weight mapping to place vegetation only in certain areas.
Thank you for the tip but that’s already part of my list of things to do.
Here’s the list. Anyone feel free to learn from it.
How to make the vegetation feel more natural:
Random hue colors (I learned this from the creator of Smash Bros)
Random object scale
Random object rotation (what PSpeed mentions). Usually this can be accomplished by aligning the object with the terrain normal, but that’s not always easy to do in Blender.
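For the first item, one way to get random hue variation is to jitter the hue channel of a base color per instance before applying it as a tint. A rough sketch using only the JDK’s `java.awt.Color` HSB helpers (the helper name and shift range are my own choices):

```java
import java.awt.Color;
import java.util.Random;

public class HueJitter {
    // Returns the 24-bit RGB color with its hue shifted by a small random
    // amount, wrapping around the [0, 1) hue circle. A maxShift of ~0.03
    // keeps the variation subtle; saturation and brightness are untouched.
    static int jitterHue(int rgb, float maxShift, Random rng) {
        float[] hsb = Color.RGBtoHSB((rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, null);
        float shift = (rng.nextFloat() * 2f - 1f) * maxShift;
        float hue = (hsb[0] + shift + 1f) % 1f;  // wrap into [0, 1)
        return Color.HSBtoRGB(hue, hsb[1], hsb[2]) & 0xFFFFFF;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        int leafGreen = 0x3A8F2E;  // arbitrary base foliage color
        System.out.printf("base=%06X jittered=%06X%n",
                leafGreen, jitterHue(leafGreen, 0.03f, rng));
    }
}
```

In jME the jittered color could be applied as a per-geometry material color or baked into vertex colors before batching.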
Also, working on something special, which all (copycat) roguelikes need: cards
At the moment the plan is to have cards offered only a couple of times per game (currently 4x, after each boss). The player will be able to reroll them for gold picked up during the playthrough. I thought it’d be a good way to both engage players (as it’s multiplayer, maybe some gold sharing?) and make enemies that don’t drop equippable items worth killing.
I went back to the Domino game I made years ago and decided to make improvements, but I had to start from scratch, and I’m using it as a test for ECS. It may become a game in the future.
Note: I tried using dyn4j for my physics system, but I found that the longer I shuffled, the more it gobbled up memory, like a Google Chrome tab. So I switched to the Box2D wrapper from libGDX, and the memory issue was solved.
Implemented point light and spot light shadows in Renthyl.
With this approach, shadows are calculated in a pre-pass and then applied in the geometry’s shader, rather than as a screen-space pass like the normal shadow filter.
Additionally, only one shadow-related texture is uploaded to each geometry, by packing one shadow per bit, per pixel, into a “light contribution” texture. So if a pixel is not in shadow relative to a light, the corresponding bit is 1; otherwise it is 0.
I had to edit it to be less blinding but all the black areas are in shadow (0), and all the red areas are exposed to the light (1).
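This isn’t Renthyl’s actual code, but the bit-packing idea itself is easy to show in isolation. With one byte per pixel you can track up to 8 lights; on the shader side the test would be the equivalent bitwise AND on the sampled texel:

```java
public class LightContributionMask {
    // One byte per pixel can encode shadow state for up to 8 lights:
    // bit i is 1 when the pixel is lit by (not shadowed from) light i.

    static int setLit(int mask, int lightIndex, boolean lit) {
        int bit = 1 << lightIndex;
        return lit ? (mask | bit) : (mask & ~bit);
    }

    static boolean isLit(int mask, int lightIndex) {
        return (mask & (1 << lightIndex)) != 0;
    }

    public static void main(String[] args) {
        int mask = 0;
        mask = setLit(mask, 0, true);   // lit by light 0
        mask = setLit(mask, 2, true);   // lit by light 2
        mask = setLit(mask, 2, false);  // light 2 now shadows this pixel
        System.out.println(Integer.toBinaryString(mask)); // prints "1"
    }
}
```

The appeal is memory: eight lights’ worth of shadow state in a single-channel texture, instead of one texture per light.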
Went ahead and implemented crude (and naked) human enemies:
Turned out it was much, much easier than expected (something worked out as planned for once, lol). Now I’ll be tweaking shooting feedback. Also, they’re not so friendly with the beetles (and likewise).
From a technical point of view:
Those guys use items in exactly the same form players use items (their interactions are networked); in fact, they are 95% players (they can hold the exact same status effects, hold items, etc.). The main differences are in movement handling and equipping weapons/gloves.
They use AI similar to the one the beetles use (mildly improved).
Apparently I’m developing for the SDK again. The gods of chaos have rolled the dice and decided I should add AnimLayer support.
Sadly, once I got to the final step of saving them, I realized it doesn’t seem to be supported. Does anyone know why this was never implemented in core? (AnimLayer, ArmatureMask) Was there a good reason, or just oversight?
All I know is that the essential design of MonkeyAnim (the old name) modeled animation as direct states of a finite automaton, so AnimLayers were just intermediary states that mask partitions of those states at runtime, and the saveables are generally implemented on the AnimComposer as a whole. However, you might be able to create a pattern that serializes AnimLayer objects into partitions of AnimClips, which would then be serialized internally by the AnimComposer. You would also need to check whether other parts of the animation system support saveables, so you could hook into them if possible.
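To make that suggested pattern concrete, here is a purely hypothetical sketch of flattening one layer’s state into plain data that the composer could persist. All names here are made up, and a real version would write these fields through jME’s `OutputCapsule`/`InputCapsule` inside `write()`/`read()` rather than a `Map`:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LayerStateSketch {
    // Hypothetical flattened form of one AnimLayer: just the data needed
    // to rebuild it later (layer name, the clip it was playing, and the
    // joints its mask covers), keyed so the composer can persist it.
    static Map<String, String> flattenLayer(String name, String clip, String... maskJoints) {
        Map<String, String> out = new LinkedHashMap<>();
        out.put("layer", name);
        out.put("clip", clip);
        out.put("mask", String.join(",", maskJoints));
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> state = flattenLayer("upperBody", "wave", "spine", "armL", "armR");
        System.out.println(state); // {layer=upperBody, clip=wave, mask=spine,armL,armR}
    }
}
```

On load, the composer would rebuild each layer from these entries and re-attach the named ArmatureMask joints.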
EDIT:
I’ve been studying this animation system for about 2 years, so I’ve got most of the general ideas, though there are still a lot of anti-patterns and unhandled potential failure routines; I haven’t had enough time and knowledge yet to criticize the system as a whole. This is the old implementation forked from Nehon, and it has a link to the forums. I highly encourage you to have a look at the draw.io file presented there:
Working on a tree planting process. One, because I need it. Two, because I have to learn about multi-threading.
I don’t use someone else’s code for this because it’s usually not simple enough, and when I try to change that code I might as well start from scratch with something that matches my project workflow.
I’m going to use all that I learn from this to do AI pathfinding for my creatures as well.
I’m going to create my own code for AI pathfinding too; see paragraph (1.) for why.
All of this should be finished in 2 weeks. So I can get back to working on the core of the game again.
Edit 2:
My code is simple enough that I might not need to use threads, which would be a huge win for code maintenance.
The image shows trees being planted in a 5 × 5 grid formation. I calculate the height by casting a ray downward. Also, tree planting runs once in the update loop.
Moreover, the terrain is a Geometry. I’m trying to write the code to be agnostic first: it should not care whether it’s a Geometry object or a TerrainQuad object.
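One way to keep the planting code agnostic is to hide the height lookup behind a tiny interface, so the same grid loop works no matter what the ray is cast against. This is a sketch under my own naming, not the actual project code; the lambda stands in for the real downward ray cast (in jME, a `Ray` from above the point plus `spatial.collideWith(ray, results)`, which works the same on a Geometry or a TerrainQuad):

```java
import java.util.ArrayList;
import java.util.List;

public class TreePlanter {
    // The planter never touches the scene graph directly; anything that can
    // answer "what is the ground height at (x, z)?" can be plugged in.
    interface HeightSampler {
        float heightAt(float x, float z);
    }

    // Produces (x, y, z) positions for an n x n grid with the given spacing,
    // snapping each tree to the sampled ground height.
    static List<float[]> plantGrid(int n, float spacing, HeightSampler ground) {
        List<float[]> positions = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                float x = i * spacing;
                float z = j * spacing;
                positions.add(new float[] { x, ground.heightAt(x, z), z });
            }
        }
        return positions;
    }

    public static void main(String[] args) {
        // Stand-in terrain: a gentle slope. The real sampler would ray cast.
        List<float[]> trees = plantGrid(5, 2f, (x, z) -> 0.1f * (x + z));
        System.out.println(trees.size()); // prints 25
    }
}
```

Swapping the sampler is also how you would test the planter without a scene at all, which fits the "write it agnostic first" goal.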