“positionTexture” is a demonstration of an inefficient shader technique.
Storing positions in a texture requires high precision (a floating-point format) and hence memory bandwidth, something we don’t really have to spare these days, especially not in the age of 4K displays and 90 FPS VR headsets.
On the other hand, we have lots of ALU capacity (i.e., raw math), and we also have the depth texture, which is “free”: the GPU already uses it to perform depth testing, so there’s no extra work required to obtain it. There’s code here which shows how you can obtain the camera-space position (and direction) by using the depth texture:
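A minimal sketch of that reconstruction, assuming OpenGL depth conventions (depth and NDC in the usual ranges) and that the inverse projection matrix is available as a `projectionMatrixInverse` uniform; the function name `getViewPosition` is just illustrative:

```glsl
uniform sampler2D depthTexture;        // the "free" depth buffer
uniform mat4 projectionMatrixInverse;  // inverse of the camera projection matrix

// Reconstruct the view-space (camera-space) position for a screen texCoord.
vec3 getViewPosition(vec2 texCoord) {
    float depth = texture(depthTexture, texCoord).r;  // depth in [0, 1]
    // Build the NDC-space point: remap texCoord and depth from [0, 1] to [-1, 1].
    vec4 ndc = vec4(texCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    // Unproject back into view space and undo the perspective divide.
    vec4 view = projectionMatrixInverse * ndc;
    return view.xyz / view.w;
}
```

The view-space ray direction falls out of the same math: normalize the reconstructed position (for a camera at the view-space origin, the position itself points along the ray).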
I’m still working on it! Do you know which space the getPosition() return value is in? View space?
What’s the difference between view space and clip space?
I’m ray tracing in view space.
Now getPosition() can convert from texCoord to a view-space position; that is, it converts from screen space to view space.
On each loop iteration, I add a step to the position in view space.
How can I convert the new view-space position back to a new texCoord? I want to sample the position texture at that new location.
like this:
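That conversion is just the inverse of getPosition(): project the view-space point with the camera projection matrix, do the perspective divide to get NDC, then remap from [-1, 1] to [0, 1]. A sketch, assuming the projection matrix is available as a `projectionMatrix` uniform (`viewToTexCoord` is a hypothetical helper name):

```glsl
uniform mat4 projectionMatrix;  // camera projection matrix

// Map a view-space position back to screen-space texture coordinates.
vec2 viewToTexCoord(vec3 viewPos) {
    vec4 clip = projectionMatrix * vec4(viewPos, 1.0);  // view -> clip space
    vec2 ndc  = clip.xy / clip.w;                       // perspective divide -> NDC in [-1, 1]
    return ndc * 0.5 + 0.5;                             // remap to [0, 1] for texture sampling
}
```

Note that the resulting texCoord can land outside [0, 1] when the marched point leaves the view frustum, so a ray-marching loop usually checks for that and bails out.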
One of three things is happening here and I’m hesitant to even respond.
1. I still don’t understand your question. That’s very possible, since you have so far only provided the bare minimum of information.
2. You understand how to reverse the math of getPosition(), but you seem to be missing some piece that you haven’t explained.
3. You are in way over your head and need to go back to the math, and maybe run some simulations to see what is happening… in which case I probably should have kept my response to myself.