(February 2020) Monthly WIP Screenshot Thread

Note: glTF already deals with this somehow. I don’t know if that’s helpful, but your export could just leverage glTF in some way and do less work.

custom json exporter script

I’m also not sure why; let glTF do the work :slight_smile:
Is there any reason you don’t use glTF and add the rigid bodies via code? (Or if you mean Blender animation, you can always bake it from the physics in Blender :slight_smile: )

Made some progress with a terraria-style game. Using dyn4j for 2D collision, texture-based per-vertex lighting and a basic inventory to add/remove blocks.

I’m not completely satisfied with the lighting. It should be smoother, but it’s already a lot better than before. Previously I just set each vertex to its light value; now I use a texture, so the values can transition smoothly between vertices.
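For anyone wondering why the texture-based approach comes out smoother: the GPU’s bilinear filter blends the four nearest texels whenever you sample between them. A minimal CPU-side sketch of the same math (illustrative only, not the actual shader or jME code):

```java
// CPU reimplementation of GL_LINEAR-style bilinear filtering over a grid of
// per-tile light values. Class and method names are made up for illustration.
public class LightSampler {
    /** Bilinearly sample a grid of per-tile light values at a fractional position. */
    public static float sample(float[][] light, float x, float y) {
        int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
        int x1 = Math.min(x0 + 1, light.length - 1);    // clamp at the grid edge
        int y1 = Math.min(y0 + 1, light[0].length - 1);
        float fx = x - x0, fy = y - y0;                 // fractional offsets
        float top    = light[x0][y0] * (1 - fx) + light[x1][y0] * fx;
        float bottom = light[x0][y1] * (1 - fx) + light[x1][y1] * fx;
        return top * (1 - fy) + bottom * fy;
    }
}
```

Sampling halfway between a dark (0) and a lit (1) tile yields 0.5, which is exactly the smooth ramp that setting one constant value per vertex can’t give you.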

I’m sure I could do a lot more with the collision meshes. A simple distance function could eradicate a lot of it. Dyn4J doesn’t support concave shapes so it’s a bit tricky figuring out the “perfect” mesh algorithm.


Pretty sure that it does. 80% sure that I had concave polygons in the 2D ship game I was working on… with the wings getting hooked on each other and stuff.

I could have dreamed it. I even bought a tool for generating the physics shapes from images.

Edit: or do you mean for the static shapes? But even then, I have doubts.

From the docs: http://www.dyn4j.org/documentation/getting-started/

So basically it’s saying there are tools in the lib to work around the limitation, but concave shapes aren’t directly supported. Which means holes are a nightmare to figure out :confused:

Well done @jayfella.

This looks really good. I also tried doing this but got stuck on the concave shape problem.

I hope you take this further.

Good luck on the 2D journey.


I tried dyn4j at one point for Spoxel, but I ran into too many weird cases with mesh generation and getting caught on edges. I ended up rolling my own physics engine, which ended up being much faster but ate up a lot of development time. In retrospect… I probably should have spent more time trying to make dyn4j work. I learned a lot doing it myself, but it ate up a lot of time in the end.


Yeah, but 2D physics is super easy and there is a ton of online material on how to write one. For generated meshes, it’s going to be very appealing and probably worth it in the end.

If dyn4j doesn’t allow custom collision meshes in a way that supports this, then it might be better to roll one’s own anyway.

It accepts custom meshes, but not concave ones. There are optimisations you can make with that information, like merging horizontal runs of tiles on each line, etc.
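A rough sketch of that horizontal-merging idea (all names are made up for illustration, this isn’t dyn4j API): consecutive solid tiles in each row collapse into one rectangle, so the physics engine gets far fewer shapes, and every one of them is still convex.

```java
import java.util.ArrayList;
import java.util.List;

// Greedy per-row run merging: N adjacent solid tiles become one N-wide rect.
public class RunMerger {
    public record Rect(int x, int y, int w, int h) {}

    /** solid[y][x] == true means the tile at column x, row y is filled. */
    public static List<Rect> merge(boolean[][] solid) {
        List<Rect> rects = new ArrayList<>();
        for (int y = 0; y < solid.length; y++) {
            int runStart = -1;
            // iterate one past the end so a run touching the edge still closes
            for (int x = 0; x <= solid[y].length; x++) {
                boolean filled = x < solid[y].length && solid[y][x];
                if (filled && runStart < 0) runStart = x;   // run begins
                if (!filled && runStart >= 0) {             // run ends
                    rects.add(new Rect(runStart, y, x - runStart, 1));
                    runStart = -1;
                }
            }
        }
        return rects;
    }
}
```

A row like `■■·■` becomes two rectangles (one 2-wide, one 1-wide) instead of three unit boxes, which also removes the internal edges that bodies tend to get caught on.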

It’s not a huge issue for me at this point, though we’ll see when we have a horizon full of things.


I meant custom in the “you tell me when and I provide the contacts” sort of way. Not in the “here’s a bunch of points” sort of way.

For voxel worlds (3D or 2D), math is much better at providing collisions than brute force polygonal methods will be.
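To illustrate the point (a hypothetical sketch, not anyone’s actual engine): in a tile world, an axis-aligned box can find every cell it could possibly collide with using plain division, with no polygon tests at all.

```java
// "Math instead of polygons" for a regular grid: the candidate colliders for
// an AABB are exactly the cells its corners span. Names are illustrative.
public class TileCollider {
    static final int TILE = 16; // tile size in world units (assumed)

    /** Returns {minX, minY, maxX, maxY} of the tile cells an AABB touches. */
    public static int[] overlappedCells(float x, float y, float w, float h) {
        int minX = (int) Math.floor(x / TILE);
        int minY = (int) Math.floor(y / TILE);
        int maxX = (int) Math.floor((x + w) / TILE);
        int maxY = (int) Math.floor((y + h) / TILE);
        return new int[]{minX, minY, maxX, maxY};
    }
}
```

The candidate set stays tiny no matter how big the world is, which is the advantage over feeding thousands of generated polygons to a broadphase.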

I’ve been adding texture compression to the asset importer.
Before, I used 3 PNG 1024×1024 textures to store color, normal, height, roughness, metalness, AO, and emissive intensity. A single material took 12MB in memory once loaded.

Now, albedo is stored in DXT1, or DXT5 if it contains alpha.
Normals are stored in RGTC2. I’ve dropped the height map since I don’t use parallax mapping and don’t need it.
ERMA (emissive intensity, roughness, metalness, AO) is stored in DXT5. (Many materials have no emissive, so I could drop it and maybe store RAM in DXT1, but I guess I don’t need to be that stingy.) The result is 3.3MB.

I take that back: after writing this I changed the map to store RAM (roughness, ambient occlusion, metalness) in DXT1. Now the result is 2.6MB.

The following is the screenshot of the result with 3.3MB. (DXT5 ERMA)
Screenshot from 2020-02-27 14-17-40

The following is the screenshot of the result with 2.6MB. (DXT1 RAM)
Screenshot from 2020-02-27 14-48-22 ram
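A quick sanity check of those numbers (my own back-of-the-envelope, assuming 1024×1024 textures with full mip chains, which add roughly a 4/3 factor): DXT1/BC1 uses 0.5 bytes per pixel, while DXT5/BC3 and RGTC2/BC5 use 1 byte per pixel.

```java
// Back-of-the-envelope texture memory budget for one material.
public class TextureBudget {
    static final double PIXELS = 1024 * 1024;
    static final double MIPS = 4.0 / 3.0; // full mip chain adds ~1/3 extra

    static double mb(double bytesPerPixel, boolean mips) {
        return PIXELS * bytesPerPixel * (mips ? MIPS : 1.0) / (1024 * 1024);
    }

    public static void main(String[] args) {
        double uncompressed = 3 * mb(4, false);                    // three RGBA8 maps
        double erma = mb(0.5, true) + mb(1, true) + mb(1, true);   // DXT1 + RGTC2 + DXT5
        double ram  = mb(0.5, true) + mb(1, true) + mb(0.5, true); // DXT1 + RGTC2 + DXT1
        System.out.printf("%.1f MB -> %.1f MB -> %.1f MB%n", uncompressed, erma, ram);
    }
}
```

This lands at 12MB, ~3.3MB, and ~2.7MB, which matches the quoted figures closely (the small gap on the last one is presumably rounding or mip-chain accounting).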

For those of you not aware: first of all, these compressed formats stay compressed in video memory, and the needed pixels are decompressed on the fly each time you sample them in a shader. Do you think we sacrifice performance for VRAM? Not necessarily; we can actually even gain performance, because the whole texture is smaller (so there is less memory traffic on the GPU) and the decompression is designed to be fast.

However, some quality is lost, since this is lossy compression. Generally, one can use:
DXT1 to store RGB
DXT5 to store RGBA
RGTC to store normals
Regarding the precision of individual channels, the general ordering is:
A > G > R > B (alpha for the most precision, then green, then red, then blue for the least important data)
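That ordering makes sense if you look at how the formats store their endpoints (the format facts are standard BC-format behaviour; the code is just my illustration): DXT1 block endpoints are RGB565, so green gets 6 bits while red and blue get 5, and DXT5 keeps alpha in a separate block with 8-bit endpoints, which is why alpha is the highest-precision channel.

```java
// Demonstrates the quantization error of 5-bit vs 6-bit channels in RGB565.
public class Rgb565 {
    /** Quantize an 8-bit channel value to n bits and expand back to 8 bits. */
    public static int quantize(int v, int bits) {
        int max = (1 << bits) - 1;
        int q = (v * max + 127) / 255;    // round to nearest n-bit step
        return (q * 255 + max / 2) / max; // expand back to 0..255
    }

    public static void main(String[] args) {
        // Step size between representable values: coarser means more banding.
        System.out.println(255 / ((1 << 5) - 1)); // red/blue (5 bits), prints 8
        System.out.println(255 / ((1 << 6) - 1)); // green (6 bits), prints 4
    }
}
```

So green bands at half the step size of red and blue, and blue goes last because human eyes are least sensitive to it.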