WIP Deferred Shading

Actually, I would really like to play a bit with deferred rendering. Especially, I wonder how fast my current scene will run. Any chance that you could just zip your jME folder and upload it somewhere?



http://www.templeofcats.com/wp-content/uploads/2011/01/cat-begging.jpg



(Who can say no to this? :slight_smile:)

4 Likes

Great, got it to work, and I must say it currently scales quite nicely. However, for some reason the constant cost is quite high, since I can't have a fullscreen 1920x1200 window with a nice framerate. Guess I will investigate this first.

@EmpirePhoenix said:
Great, got it to work, and I must say it currently scales quite nicely. However, for some reason the constant cost is quite high, since I can't have a fullscreen 1920x1200 window with a nice framerate. Guess I will investigate this first.


Good :) I'm sure I'm doing something stupid somewhere that can be changed…
Please keep me posted about what you find out, since that is of interest to me too!

@EmpirePhoenix @atomix



Are you git/github users?

1 Like

Yes, plz… I’ve been waiting for this stuff for a long, long time. Plz share the code… and I will attach my own cat’s picture right away :stuck_out_tongue:

You are flattering me guys :slight_smile:



I will probably not release my code until I’m happy with it, right now it’s cluttered and messy…

Sorry for being annoying, but I still want to do some tests with this technique…



I know a basic setup has been done in TestMultiRenderTarget. But @Momoko_Fan, do you know why the TestMultiRenderTarget test didn’t work?
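(For readers following along: the basic MRT setup that TestMultiRenderTarget demonstrates looks roughly like the sketch below. This is a minimal reconstruction of mine, not the actual test code; the formats and variable names are assumptions.)

```java
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

// Inside simpleInitApp(): build a G-buffer with two color targets plus depth.
int w = cam.getWidth(), h = cam.getHeight();

Texture2D diffuseTex = new Texture2D(w, h, Image.Format.RGBA16F);
Texture2D normalTex  = new Texture2D(w, h, Image.Format.RGBA16F);
Texture2D depthTex   = new Texture2D(w, h, Image.Format.Depth32F);

FrameBuffer gBuffer = new FrameBuffer(w, h, 1);
gBuffer.setMultiTarget(true);           // one geometry pass writes all targets
gBuffer.addColorTexture(diffuseTex);    // written as gl_FragData[0] in the shader
gBuffer.addColorTexture(normalTex);     // written as gl_FragData[1]
gBuffer.setDepthTexture(depthTex);      // sampled later to reconstruct position

viewPort.setOutputFrameBuffer(gBuffer); // the scene now renders into the G-buffer
```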

Well, the reason why I ask is:

I don’t want to wait much longer before I start playing with deferred shading. However, if I start it myself from scratch, we will later have two probably mostly incompatible solutions, and I guess neither you nor I would like to throw much work out of the window. (But of course, in the end there can only be one official solution.)



So I would like to play around a bit with your current implementation, see what can be done, and help improve it if you want.

Otherwise I will probably start my own experiments within a few weeks.

1 Like

https://merciless.me/DeferredShading.zip

2 Likes

This is just the coolest thing ever :wink:

@kwando … Not yet a git user, but I can make an account right away :stuck_out_tongue: I also agree with @Empire_Phoenix, we don’t just want to take the final result but want to help out to improve deferred rendering in jME3. I’ve played with deferred rendering in Ogre, so pretty many things can be brought over to jME in no time.

@kwando

Git yes, GitHub not yet, but I will make an account right away.

Done, I’m empirephoenix there, and the same @yahoo.de if you need the mail.

There is actually not much to build upon, just a bunch of shaders and a large AppState (350 LOC)…

I actually would like some help with how to structure the code; for the moment all lights are more or less hardcoded…
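(To make the structure question concrete, here is a rough skeleton of what such an AppState might look like. This is my own sketch of the shape, not kwando’s actual 350-line code; the lighting pass is omitted and the names are made up.)

```java
import com.jme3.app.Application;
import com.jme3.app.SimpleApplication;
import com.jme3.app.state.AbstractAppState;
import com.jme3.app.state.AppStateManager;
import com.jme3.renderer.ViewPort;
import com.jme3.texture.FrameBuffer;
import com.jme3.texture.Image;
import com.jme3.texture.Texture2D;

/** Sketch of a deferred shading AppState: fill the G-buffer in a pre-view,
 *  then resolve lighting per light in the main view (omitted here). */
public class DeferredShadingState extends AbstractAppState {

    private FrameBuffer gBuffer;
    private Texture2D diffuse, normals, depth;

    @Override
    public void initialize(AppStateManager stateManager, Application app) {
        super.initialize(stateManager, app);
        SimpleApplication sapp = (SimpleApplication) app;
        int w = sapp.getCamera().getWidth();
        int h = sapp.getCamera().getHeight();

        diffuse = new Texture2D(w, h, Image.Format.RGBA16F);
        normals = new Texture2D(w, h, Image.Format.RGBA16F);
        depth   = new Texture2D(w, h, Image.Format.Depth32F);

        gBuffer = new FrameBuffer(w, h, 1);
        gBuffer.setMultiTarget(true);
        gBuffer.addColorTexture(diffuse);
        gBuffer.addColorTexture(normals);
        gBuffer.setDepthTexture(depth);

        // Geometry pass: render the scene into the G-buffer before the main view.
        ViewPort geomPass = sapp.getRenderManager()
                                .createPreView("GeometryPass", sapp.getCamera());
        geomPass.setOutputFrameBuffer(gBuffer);
        geomPass.attachScene(sapp.getRootNode());

        // Lighting pass: a fullscreen quad (or light volumes) sampling
        // diffuse/normals/depth per light would be wired up here.
    }
}
```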



I’ve done a quick cleanup of what I’ve got so far and will publish it for all interested parties. I can’t publish my version there because I’m unsure about the licensing of some of the assets… but maybe a joint effort can be put there in the future?

2 Likes

@EmpirePhoenix said:
Great, got it to work, and I must say it currently scales quite nicely. However, for some reason the constant cost is quite high, since I can't have a fullscreen 1920x1200 window with a nice framerate. Guess I will investigate this first.


I cannot confirm that; on my side the demo scene scales at no cost with the resolution. I always get ~185 fps, tested at 640 windowed, 1280 windowed, and 1920 fullscreen.

As expected, a lot of VRAM gets used at higher resolutions. At 1920x1080@24bpp the test scene already consumes over 500 MB; in your case at 1920x1200 it should be even higher.

Additionally, I don't know how render-to-texture followed by re-rendering that texture works; if the textures are passed over the bus two times per frame, this could definitely be a bottleneck.

Besides that, excellent work. If I find some time, I need to make some tests with the tessellator :)

@zzuegg niceness :slight_smile: About the memory:

I use:

2x RGBA16F for the diffuse and normals, plus a 32-bit depth texture (there are 2 channels left over for other data).

I also use RGBA32F for my light buffer.



I suspect these textures (as you confirmed) chew a lot of memory and memory bandwidth on the graphics card.

8 bytes per pixel, and 1920*1080 ≈ 2 million pixels. That should still be only ~16 MB per target unless I missed something?
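(Tallying all the targets kwando listed gives the same conclusion; the back-of-the-envelope arithmetic below is mine, assuming no MSAA or mipmaps.)

```java
// Per-pixel cost of the buffers listed above:
//   2 x RGBA16F  = 2 * 8 bytes = 16 bytes (diffuse + normals)
//   1 x Depth32F =               4 bytes
//   1 x RGBA32F  =              16 bytes (light buffer)
long pixels        = 1920L * 1080L;      // ~2.07 million pixels
long bytesPerPixel = 2 * 8 + 4 + 16;     // 36 bytes across all targets
double totalMiB    = pixels * bytesPerPixel / (1024.0 * 1024.0);
System.out.printf("G-buffer total: %.1f MiB%n", totalMiB); // ~71 MiB
```

So even all four targets together land around 71 MiB, well below the ~300 MB difference GPU-Z reports later in the thread.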

I doubt that it is that; I have 1 GB of VRAM and can use several 16k textures (for terrain in my case) without any problems.

Since all textures stay on the graphics card the whole time, it is mostly fill-rate limited, but that cannot be the issue. I can run other games using deferred shading (Unreal 3, BF3 etc.) without much higher fps, so there must be something that can be optimized somewhere.

@EmpirePhoenix said:
I doubt that it is that; I have 1 GB of VRAM and can use several 16k textures (for terrain in my case) without any problems.
Since all textures stay on the graphics card the whole time, it is mostly fill-rate limited, but that cannot be the issue. I can run other games using deferred shading (Unreal 3, BF3 etc.) without much higher fps, so there must be something that can be optimized somewhere.


What graphics card do you have? I tried to load a 16k texture, but the driver reported that such a size is not supported… I am running a GTX 580.

@zarch said:
8 bytes per pixel, and 1920*1080 ≈ 2 million pixels. That should still be only ~16 MB per target unless I missed something?


You are right; maybe GPU-Z's sensors work wrong, but it shows a difference of ~300 MB between running at 640 (~200 MB) and 1920 (~540 MB).

But the renderer scales very well; even with 1000 spotlights placed near the center I got a playable 60 fps…
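(For context, a stress test along those lines could be set up roughly like this. A sketch of mine, assuming the deferred pass consumes standard jME3 SpotLights; the hardcoded-lights version discussed above may not do this yet.)

```java
import com.jme3.light.SpotLight;
import com.jme3.math.ColorRGBA;
import com.jme3.math.FastMath;
import com.jme3.math.Vector3f;

// Scatter 1000 downward-pointing spotlights near the scene center.
for (int i = 0; i < 1000; i++) {
    SpotLight sl = new SpotLight();
    sl.setPosition(new Vector3f(FastMath.nextRandomFloat() * 20 - 10,
                                5f,
                                FastMath.nextRandomFloat() * 20 - 10));
    sl.setDirection(Vector3f.UNIT_Y.negate());
    sl.setSpotRange(15f);
    sl.setSpotInnerAngle(20 * FastMath.DEG_TO_RAD);
    sl.setSpotOuterAngle(30 * FastMath.DEG_TO_RAD);
    sl.setColor(ColorRGBA.randomColor());
    rootNode.addLight(sl); // a deferred renderer would batch these per pass
}
```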