Oh yeah, modern optimization techniques definitely improve performance because you'll run into fewer occasions where both sides of a branch are processed, but there will still be occasions where both sides are processed. It's not as simple as saying an if statement in a shader is going to degrade performance; it depends on the shader. If a different path is taken for every other pixel then it's probably safe to assume the if statement won't save you anything, but if you have large swaths of pixels taking one path and large swaths taking another then the if statement should save you some time. In the latter case, both paths are likely to be processed only for blocks of pixels that border the cutoff point where the path changes.
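A minimal sketch of the difference, as a fragment shader. The branch below is coherent (driven by distance), so whole blocks of pixels take the same path; a branch on something per-pixel like `mod(gl_FragCoord.x, 2.0)` would diverge inside every block and force both sides to run. The uniform/varying names and `expensiveShading` are made up for illustration:

```glsl
uniform sampler2D m_CheapTex;
uniform sampler2D m_DetailTex;
varying vec2 texCoord;
varying float distToCamera;

// hypothetical costly lighting/detail function
vec4 expensiveShading(vec4 base);

void main() {
    if (distToCamera > 50.0) {
        // Coherent branch: large swaths of far-away pixels all take
        // this cheap path together, so the expensive side is skipped
        // for those blocks entirely.
        gl_FragColor = texture2D(m_CheapTex, texCoord);
    } else {
        gl_FragColor = expensiveShading(texture2D(m_DetailTex, texCoord));
    }
    // Only blocks straddling the 50.0 cutoff pay for both paths.
}
```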
Of course this depends on the GPU because they don't all use the same optimization techniques. I'm not sure to what degree mobile GPUs use these optimizations either; they tend to be a little behind the curve in other areas.
I think the size of the pixel block processed depends on the number of cores in the GPU. So let's say you have a shader where the path taken in an if statement changes every 8 pixels vertically and horizontally. On an 8-core GPU this will be optimized so there's no penalty, and a 4-core GPU will also optimize it out, but a 6-core GPU will not.
Then, of course, regardless of the number of cores, the optimization is going to change depending on how the shader is rendered on the screen. If the object is far away in a non-parallel (perspective) view mode then the path might change every 3 pixels, or every 2. So it's going to render faster or slower depending on where and how it's displayed on the screen.
Unrelated, but still an interesting thing: I discovered in Mythruna that turning on vertex lighting actually made performance worse (all other things being equal, ie: no normal maps, etc.). The only logical theory I came up with was that moving the lighting calculation into the vert shader ends up incurring extra cost because a) vertexes are always calculated whereas fragments may not be because of depth, and b) I have a lot of vertexes. That's the thing: back face, front face, occluded, on-screen, just off-screen, etc… the vertexes will always be calculated. Fragments are only calculated if they are written to the screen, and the GPU does its best to avoid that.
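A minimal sketch of why that can backfire, assuming a simple Lambert term and made-up uniform names. The point is where the work runs, not the math itself:

```glsl
// Vertex-lit version: this main() runs for EVERY vertex submitted —
// back-facing, occluded, or just off-screen — before clipping and
// depth testing get a chance to throw anything away. With a very
// high vertex count, that can cost more than per-pixel lighting.
uniform vec3 m_LightDir;   // hypothetical directional light uniform
varying vec3 lightColor;

void main() {
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    // Lambert term computed per vertex, then interpolated.
    lightColor = vec3(max(dot(n, -m_LightDir), 0.0));
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

Doing the same `dot(n, -lightDir)` in the fragment shader instead means it only runs for fragments that survive depth testing and actually hit the screen.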
So, moral of the story, always put those optimizations in #ifdefs. Heheh. Especially since, for some random user, maybe that optimization isn't one.
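Something like this, where VERTEX_LIGHTING is a define the material can set or leave unset (the names here are illustrative, not any engine's actual API):

```glsl
// Fragment shader snippet: guard the "optimization" with a define so
// it can be toggled per material/user instead of being baked in.
#ifdef VERTEX_LIGHTING
    // Cheap path: reuse the color interpolated from the vertex shader.
    vec3 light = vertLightColor;
#else
    // Per-pixel path: computeLighting() is a hypothetical helper that
    // does the full Lambert/specular calculation here instead.
    vec3 light = computeLighting(normal, lightDir);
#endif
```

Since the preprocessor strips the unused branch at compile time, each user effectively gets a different shader, and you can benchmark both on their hardware.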
Yep. I mean, the baked in lighting parameters are per vertex but they’re baked in. The sun lighting and the ambient lighting are done per pixel. (Ambient lighting because I change it just slightly based on view direction because it looks better… ambient lighting is treated as a head mounted lamp.)
I am working on cascaded shadow maps and finally got some shadows after a few hard hours' work. Now I'm looking forward to smoothing them, but I will have to spend some time reading on how to do that.
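One common starting point for smoothing is a small PCF (percentage-closer filtering) kernel: sample the shadow map at several neighboring texels and average the pass/fail results instead of taking a single hard comparison. A rough sketch, where `m_ShadowMap`, `m_MapSize`, and the bias value are assumptions about how your shadow pass is set up:

```glsl
uniform sampler2D m_ShadowMap;  // depth from the light's point of view
uniform float m_MapSize;        // shadow map resolution, e.g. 1024.0

// Returns 0..1: the fraction of a 3x3 neighborhood that is lit,
// giving soft edges instead of a hard in/out shadow test.
float shadowPCF(vec4 shadowCoord) {
    vec3 proj = shadowCoord.xyz / shadowCoord.w;
    float texel = 1.0 / m_MapSize;
    float lit = 0.0;
    for (int x = -1; x <= 1; x++) {
        for (int y = -1; y <= 1; y++) {
            float depth = texture2D(m_ShadowMap,
                                    proj.xy + vec2(float(x), float(y)) * texel).r;
            // 0.005 is a depth bias to fight shadow acne; tune per scene.
            lit += (proj.z - 0.005 <= depth) ? 1.0 : 0.0;
        }
    }
    return lit / 9.0;
}
```

With cascades you'd call this against whichever cascade's map the fragment falls into; larger kernels or Poisson-disk taps soften further at more cost.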
Thanks to whoever mentioned the “screen glitch” earlier in this thread, as you gave me the idea to add blurriness to my game on impacts:
I basically just modified the FXAA filter and so when it’s not blurring the scene it’s just being used for conventional FXAA… Or that’s the idea anyway.
Also I added shields to my cars, turning my game more from “realistic car PVP simulator” into “random game where any shit is possible”.
Yeah, originally I recorded it with RecordMyDesktop, cropped it with Blender and output it to individual PNG frames then saved it as a gif with GIMP. I still had the PNG frames so I loaded them back into GIMP and ran a sprite sheet script I have which just organizes the layers into rows and columns.
I’m pretty sure Fedora has a default screen cast utility that saves short clips as gifs, but I don’t remember the commands for it.