Temporal Reprojection, as I'm using it, basically means keeping a reference to the previous frame's space-transform matrices (e.g. the WorldViewMatrix), then combining them with the current frame's information to reconstruct the last frame (or frames).
With this information, motion blur (as in my example) becomes trivial to implement: you get a velocity buffer virtually for free, which you then use to blur the framebuffer.
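The velocity-buffer idea can be sketched in a few lines. This is a minimal CPU-side sketch (a real version runs per-fragment in a shader); it assumes 4x4 view-projection matrices and a static scene, and all names here are illustrative:

```python
import numpy as np

def project(mat, world_pos):
    """Project a world-space point to NDC with a 4x4 matrix."""
    p = mat @ np.append(world_pos, 1.0)  # homogeneous transform
    return p[:2] / p[3]                  # perspective divide -> NDC xy

def screen_velocity(world_pos, curr_view_proj, prev_view_proj):
    """Screen-space motion of a point between two frames: where it
    projects this frame minus where it projected last frame. This
    difference is exactly the velocity-buffer value used for blur."""
    return project(curr_view_proj, world_pos) - project(prev_view_proj, world_pos)
```

Moving objects additionally need their previous model matrix, but for camera motion alone, keeping last frame's view-projection matrix around is all it takes.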
I plan to leverage this for SMAA (Subpixel Morphological Antialiasing), which sort of antialiases across frames to get a cleaner result. I also want to use this to improve ray casting and marching operations in three ways:
1) for fog: since it's rather low frequency, you can get away with some cheating. Rather than taking, say, 12 samples per frame, you take only 4 samples and blend them with the results of the previous two frames, netting an effective 12 samples with negligible visual discrepancies (if you are a smart monkey).
2) for AO-like systems: calculate which areas of the frame have changed significantly and perform, say, 12 AO casts there, then perform only 1 cast on the remaining pixels to maintain continuity, ultimately resulting in far fewer ray casts.
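The fog trick in (1) can be sketched as follows. This is a toy, assumption-laden version: `march_fog` stands in for the real raymarcher, `fog_density` is a made-up analytic density so it runs, and the rotating per-frame sample offset (so each frame covers different depths) is my assumption about how the 4+4+4 samples would avoid overlapping:

```python
from collections import deque

SAMPLES_PER_FRAME = 4
FRAMES_BLENDED = 3  # blend with the two previous frames

def fog_density(t):
    # Toy analytic density so the sketch is runnable.
    return 1.0 / (1.0 + t)

def march_fog(depth, frame_index, n_samples):
    """Hypothetical raymarcher: averages fog density at n_samples
    depths; the offset rotates so successive frames sample different
    depths along the ray."""
    offset = (frame_index % FRAMES_BLENDED) / FRAMES_BLENDED
    total = 0.0
    for i in range(n_samples):
        t = (i + offset) / n_samples * depth
        total += fog_density(t)
    return total / n_samples

history = deque(maxlen=FRAMES_BLENDED)  # partial results of the last 3 frames

def fog_this_frame(depth, frame_index):
    history.append(march_fog(depth, frame_index, SAMPLES_PER_FRAME))
    # Blend: ~12 effective samples for the per-frame cost of 4.
    return sum(history) / len(history)
```

In a real renderer the history would be a reprojected history buffer rather than a global average, so disoccluded pixels can reject stale samples.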
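The AO budgeting in (2) boils down to a per-pixel change test. A minimal sketch, assuming the change metric is a depth difference against the reprojected previous frame (threshold and names are illustrative, not from the post):

```python
CHANGE_THRESHOLD = 0.1  # assumed depth-delta threshold
FULL_CASTS, CHEAP_CASTS = 12, 1

def ao_budget(curr_depth, reprojected_prev_depth):
    """Spend the full ray budget only where the frame changed
    significantly; stable pixels get a single continuity cast and
    reuse last frame's AO result."""
    changed = abs(curr_depth - reprojected_prev_depth) > CHANGE_THRESHOLD
    return FULL_CASTS if changed else CHEAP_CASTS
```

In practice the change metric could also include color or normal deltas; depth alone is just the simplest signal that falls out of the reprojection for free.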