Sorry for the off-topic question, but I have to ask: how in the world do you get anything out of the code on Shadertoy?
Everything I’ve seen written there makes no sense from a regular vert/frag material-shader perspective. Somehow a single frag shader creates geometry, the background, a lens flare, and even controls the damn camera movement.
While being barely longer than the Phong Lighting.frag. WAT.
I mean, when you copy-paste it, it will look exactly as it does on Shadertoy, projected onto a flat surface that covers the whole screen. Yeah, that’s not very useful…
To do something meaningful with it you’d have to pass some information about your scene (camera, …) to the fragment shader and then do the transformations yourself. Put it in a Filter and render it, for example, as a fullscreen effect (FilterPostProcessor).
It really depends on what you want to render. You can also read the depth buffer to integrate the rendering into your scene.
The technique is called Ray Marching.
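To make the idea concrete, here is a minimal sketch of what such a fragment shader looks like. All names are assumptions for illustration (`m_CamPos`, `m_CamDir`, `m_CamUp` would be uniforms you pass in yourself from the Filter; none of this is a built-in): the whole "scene" is just a distance function, and the loop steps each ray by the distance to the nearest surface.

```glsl
// Assumed uniforms, passed from the host-side Filter (not built-ins).
uniform vec3 m_CamPos;   // camera position
uniform vec3 m_CamDir;   // camera forward vector (normalized)
uniform vec3 m_CamUp;    // camera up vector (normalized)
varying vec2 texCoord;   // fullscreen-quad UV in [0,1]

// The scene as a signed distance function: here, a single sphere.
float sceneSDF(vec3 p) {
    return length(p - vec3(0.0, 0.0, 5.0)) - 1.0;
}

void main() {
    // Build a ray through this pixel from the camera basis.
    vec3 right  = normalize(cross(m_CamDir, m_CamUp));
    vec2 uv     = texCoord * 2.0 - 1.0; // remap to [-1,1]
    vec3 rayDir = normalize(m_CamDir + uv.x * right + uv.y * m_CamUp);

    // Sphere tracing: advance by the distance to the nearest surface.
    float t = 0.0;
    for (int i = 0; i < 64; i++) {
        float d = sceneSDF(m_CamPos + rayDir * t);
        if (d < 0.001) break; // close enough: treat as a hit
        t += d;
    }

    // Crude depth-based shading; a real shader would estimate a normal
    // from the SDF gradient and do proper lighting here.
    gl_FragColor = vec4(vec3(1.0 - t * 0.05), 1.0);
}
```

Because the camera basis comes from your engine, moving the in-game camera moves the ray-marched scene along with it, which is exactly how those Shadertoy demos appear to "control the camera" from a single frag shader.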
As an example:
Damn, I can’t link the shader because Shadertoy is down, but it’s the first one you get when searching for “clouds”.
I essentially copy-pasted it into my game, just for testing. Downsampled to 1/3 resolution for better performance (getting 400-600 FPS on a GTX 1070).
I didn’t really see anything that would make me think screen-space reflections. Env/cube maps attached to specific surfaces would do fine. Sure, it’s a bit of work, but that’s what I’d expect in such a “demo scene”.
Look at the video: https://youtu.be/Gah8sHA1r_8?t=9s
Look carefully at the reflection on the couch: as it approaches the screen border, the reflection abruptly changes.
That’s a typical artifact of a screen-space effect sampling the depth map for geometry information.
You get that with SSAO too. The only information available is what’s on screen, so there are discontinuities at the borders.
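The border discontinuity follows directly from how these effects march rays in texture space. A rough sketch (the uniform name and function are illustrative, not any engine’s actual API): once the marched UV leaves [0,1], there is simply no depth data left to sample, so the effect has to bail out, and that bailout is the abrupt change you see near the edge.

```glsl
// Illustrative depth buffer input; the name is an assumption.
uniform sampler2D m_DepthMap;

// March a reflection ray in screen (texture) space.
vec4 traceScreenSpace(vec2 uv, vec2 stepDir) {
    for (int i = 0; i < 32; i++) {
        uv += stepDir;
        // Off-screen: no information exists beyond the framebuffer,
        // so the effect must fall back (e.g. to an env map), which
        // produces the visible discontinuity at the border.
        if (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0) {
            return vec4(0.0);
        }
        float depth = texture2D(m_DepthMap, uv).r;
        // ... compare depth against the marched ray to find a hit ...
    }
    return vec4(0.0); // no intersection found within the step budget
}
```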