[Solved in year 2026] Optimizing a transparent mesh?

Remember this thread?

I don’t billboard the quads; I adjust their transparency depending on the direction you’re looking at them. Sideways → alpha = 0, head-on → alpha = 1, interpolating in between, which makes it look more like a real cloud and less like a pile of rotating, well, billboards.
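The view-dependent fade described above can be sketched like this (the method name and vector layout are illustrative, not the poster’s actual shader code):

```java
// Sketch of a view-dependent quad alpha: each quad has a facing normal,
// and alpha fades from 1 when viewed head-on to 0 when viewed edge-on.
public class QuadAlpha {
    /** Returns alpha in [0,1]: |dot(viewDir, quadNormal)| for unit vectors. */
    static double viewAlpha(double[] viewDir, double[] normal) {
        double dot = viewDir[0] * normal[0] + viewDir[1] * normal[1] + viewDir[2] * normal[2];
        return Math.min(1.0, Math.abs(dot)); // head-on -> 1, sideways -> 0
    }

    public static void main(String[] args) {
        System.out.println(viewAlpha(new double[]{0, 0, 1}, new double[]{0, 0, 1})); // facing: 1.0
        System.out.println(viewAlpha(new double[]{1, 0, 0}, new double[]{0, 0, 1})); // edge-on: 0.0
    }
}
```

In a real shader this would live in the vertex stage, with the interpolated value multiplied into the fragment’s alpha.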

When I tested it there wasn’t any big difference in performance between the two shaders, but the gap may grow if I double the number of vertices.

Yes, that was @RiccardoBlb, the filter magician, but IIRC he said it was too expensive to run in a game anyway. Unless that tip from Apollo about upscaling solved the problem.

Using it with Perlin noise was too expensive. Downsampling helps a bit, but I still wouldn’t recommend it.
(Actually, downsampling could potentially help a lot, since the principle is one ray per pixel, so fewer pixels = fewer rays = better performance… but you can’t go too far down unless you want blurry, PlayStation 1 style graphics.)
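To put numbers on the “fewer pixels = fewer rays” point: halving the resolution on each axis quarters the ray count. A quick illustration (the resolutions are just examples):

```java
// Worked example of the "one ray per pixel" arithmetic: downsampling each
// axis by a factor s cuts the ray count by roughly s^2.
public class RayBudget {
    /** One ray per rendered pixel at the downscaled resolution. */
    static long raysAt(int width, int height, int downscale) {
        return (long) (width / downscale) * (height / downscale);
    }

    public static void main(String[] args) {
        System.out.println(raysAt(1920, 1080, 1)); // full res: 2073600 rays
        System.out.println(raysAt(1920, 1080, 2)); // half res: 518400 rays (4x fewer)
    }
}
```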

But if you use some lighter noise algorithm or 3D textures (which can also be baked 3D Perlin noise…) it may be decently fast. You may also be able to address the issue better, gain more control over when and how the nebulas are rendered, and make it perform consistently in all circumstances.
On top of that you can achieve a real volumetric effect.

That said, you would pretty much have to trash all the work you’ve done on the nebulas so far, so I’m not sure it would be worthwhile at this point.

Coming back to downsampling, and setting the whole raytracing idea aside, you can still use it with your rasterization model: render the nebulas to a lower-resolution texture, then blend it with the actual scene in post-processing with some blur. It might still look good and give you a performance boost.
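A minimal sketch of that compositing step, with plain arrays standing in for textures (a real implementation would sample the low-res layer with bilinear filtering, which also supplies the cheap blur):

```java
// Sketch: the nebula layer is rendered at lower resolution, then
// alpha-blended ("over" operator) on top of the full-res scene in post.
public class NebulaComposite {
    /** "over" blend: result = nebula*a + scene*(1-a). */
    static double blend(double scene, double nebula, double alpha) {
        return nebula * alpha + scene * (1.0 - alpha);
    }

    /** Nearest-neighbour lookup from the low-res layer at full-res pixel (x,y). */
    static double sampleLowRes(double[][] lowRes, int x, int y, int downscale) {
        return lowRes[y / downscale][x / downscale];
    }

    public static void main(String[] args) {
        double[][] lowRes = {{0.5}};                  // 1x1 nebula layer for a 2x2 scene
        double nebula = sampleLowRes(lowRes, 1, 1, 2); // every scene pixel maps to it
        System.out.println(blend(1.0, nebula, 0.5));   // 0.75
    }
}
```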

You might consider adding a performance vs quality setting for the nebulas in your options menu and let the user decide how they like it. This way you can still show off your awesome nebulas to people with expensive graphics cards and those with less than stellar performing cards can still play the game by sacrificing some visual quality.

Yeah, but then he would have to disable the “amazing” option by default, because right now I get less than 30 FPS on a mid-range GPU… and it’s kind of a waste to disable such a good-looking thing by default.

I was considering that for animations and colors, but since those don’t make much of an impact, I guess I could just reduce the quad count?

I’ve gained around 10 FPS since the last posted update (with vertex colors and by removing some calculations), so it should be slightly better already.

I’ll see how far I can go with reducing the quad count and making them larger. Also not spawning medium nebulas with dense cores and batching them into parts.


Yeah probably reduce the number of quads. Another thought, I don’t know how vast these nebulas are as you pass through them, but perhaps instead of loads of quads you could detect when the player is passing through a nebula and add fog to the scene.
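A hedged sketch of that detection (the names and the fade width are made up; the 20K-unit radius mentioned elsewhere in the thread is used as an example). Ramping the density near the boundary avoids the fog popping in:

```java
// Sketch: detect when the player is inside a nebula's bounding sphere and
// fade the fog density in across a band at the edge.
public class NebulaFog {
    /** 0 outside, 1 deep inside, linear ramp across `fade` units at the edge. */
    static double fogDensity(double distToCenter, double radius, double fade) {
        double t = (radius - distToCenter) / fade;
        return Math.max(0.0, Math.min(1.0, t));
    }

    public static void main(String[] args) {
        System.out.println(fogDensity(25000, 20000, 1000)); // 0.0 (well outside)
        System.out.println(fogDensity(10000, 20000, 1000)); // 1.0 (deep inside)
        System.out.println(fogDensity(19500, 20000, 1000)); // 0.5 (crossing the edge)
    }
}
```

The resulting density could then drive whatever fog effect the scene uses.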

I do that too, actually, but it looks kind of lame since there isn’t any variation in the fog, just the standard FogFilter. I tried improving that some time ago, but it didn’t end well at all.

They’re quite vast, the ship on screen is about 10 units wide and the nebulas are around 20K units in radius.


You could try your own post-processor that renders a randomized fog effect over the existing image, instead of a static, fixed one.


Just a random thought, but maybe baking some cubemaps for when you’re completely inside a nebula would help?

Can you elaborate a bit on this? I can’t quite tell what you mean. Why not just a filter? And I’ve tried using Perlin noise to improve it, but I wasn’t exactly sure how, so it didn’t work well.

Well, I think what empire means is the following:

  1. A filter is equal to a post processor.
  2. Perlin noise is also defined in 3D space; that means there is a noise value for any given 3D input vector. If you manage to create that 3D vector from the camera position and the “2D scene texture coordinate” of the filter, then you can look up the noise value for each pixel and work with it.
    Something very handy: You can easily reconstruct the world position of any fragment in the filter’s fragment shader, so you can also easily calculate that certain 3D vector from it.
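A sketch of that reconstruction in isolation, assuming an OpenGL-style clip-space depth range of [-1, 1]; here `invViewProj` is assumed to be supplied by the engine (in jME you would invert the camera’s view-projection matrix):

```java
// Sketch: reconstruct a fragment's world position in a post filter from its
// screen UV plus the depth-buffer value. Map both to normalized device
// coordinates, multiply by the inverse view-projection matrix, divide by w.
public class WorldPos {
    /** Row-major 4x4 * (ndcX, ndcY, ndcZ, 1), then perspective divide. */
    static double[] unproject(double[][] invViewProj, double u, double v, double depth) {
        double[] ndc = { 2 * u - 1, 2 * v - 1, 2 * depth - 1, 1.0 };
        double[] out = new double[4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                out[r] += invViewProj[r][c] * ndc[c];
        return new double[]{ out[0] / out[3], out[1] / out[3], out[2] / out[3] };
    }

    public static void main(String[] args) {
        // With the identity matrix the world position equals the NDC coordinates,
        // which makes the UV/depth -> NDC mapping easy to check by hand.
        double[][] identity = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
        double[] p = unproject(identity, 0.25, 0.75, 0.5);
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // -0.5 0.5 0.0
    }
}
```

The resulting world position is exactly the 3D vector you would feed into 3D Perlin noise per pixel.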

That sounds like combining the depth values with the camera transform and mapping them between the near and far frustum planes. I’m not sure exactly what one could use that for.

@Ogli and @Tryder are spot on. You’re hitting fill-rate limits by having a huge number of overlapping quads fill the entire screen. Two options:

  1. Reduce the number of particles. Use a more complex looking texture vs. a simple one for the particles to make the illusion that there are many of them.
  2. Make the particles as small as possible. The linked “3D Nebula” demo does this. When you go inside the nebula, it’s just a bunch of tiny “glow” sprites.

Naturally, these two options sort of contradict each other, but they end up achieving the same result: less fill-rate. You can go all the way with either option or try to find a compromise between them.

Something like this (of course it would need some work to run in jME; it needs the proper cam-to-world translation applied so it doesn’t stay centered on the camera).
The benefit of this approach, however, is that the nebula fill-rate is exactly 1 per pixel, though the shader load is higher.


Wow, that site is awesome. And those 4 shader demos are really cool!

That was the idea I was pointing at: make use of some clever shader that renders one full-screen quad. Sounds like a scene post-processor to me too (so, a “filter”). But since I’m not very into graphics coding, I can’t explain the details. Maybe look at those nice examples and make a jME-compatible copy of one of them…

Happy coding,