This past weekend I started getting joints working in my physics engine… Sunday I was going to take a video, but Fraps was slowing my app down to 3 FPS. In the process of trying to sort that out I rebooted my machine and discovered it wouldn’t boot. I’ve been taking drive images and backing things up ever since.
Insert video of four swinging balls on 6 DoF joints with different resistances.
From my understanding, it’s basically scaling with built-in AA: storing a history of three frames, and thus being able to discover each pixel’s vector (direction) and, as a result, its potential future, then sampling (second-guessing) it at a higher resolution. So a 720p game could become a 1080p game, without the disadvantages of doing so…?
Your demo proves your implementation works with moving objects and light/shadow.
Working on a new Android game; here’s a little demo. All you have to do is make the ball run for as long as possible, switching between various forms to adapt to the ground, otherwise the ball will fall into the dark.
Temporal Reprojection, as I’m using it, basically means keeping a reference to the previous frame’s space-transformation matrices (e.g. the WorldViewMatrix), then combining that with the current frame’s information to regenerate the last frame (or frames).
Using this information makes motion blur, as in my example, trivial to implement: you get a velocity buffer virtually for free, which you then use to blur the framebuffer.
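The "velocity buffer for free" idea can be sketched like this: project the same world-space point through both the current and the previous frame's matrices, and the screen-space difference is that pixel's velocity. This is only an illustrative sketch with made-up names (`Reprojection`, `velocity`), not the actual shader code, and it uses plain row-major 4×4 arrays instead of JME's `Matrix4f` so it stands alone:

```java
// Illustrative sketch of temporal reprojection: project a world-space point
// with both the current and the previous frame's view-projection matrix;
// the difference of the two screen positions is the per-pixel velocity.
public class Reprojection {

    // Multiply a row-major 4x4 matrix by (x, y, z, 1) and perspective-divide,
    // returning NDC x and y.
    static double[] project(double[][] m, double[] p) {
        double[] r = new double[4];
        for (int i = 0; i < 4; i++) {
            r[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
        }
        return new double[] { r[0] / r[3], r[1] / r[3] };
    }

    // Screen-space velocity of a (static) world-space point between frames.
    // In a real renderer the point itself may also have moved, in which case
    // you'd project its previous world position through the previous matrix.
    static double[] velocity(double[][] currVP, double[][] prevVP, double[] worldPos) {
        double[] now  = project(currVP, worldPos);
        double[] then = project(prevVP, worldPos);
        return new double[] { now[0] - then[0], now[1] - then[1] };
    }
}
```

In a post-process motion blur you would evaluate this per pixel in a shader and sample the framebuffer along the resulting vector.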
I plan to leverage this for SMAA (Subpixel Morphological Antialiasing), which sort of antialiases across frames to get a cleaner result. I also want to use this to improve ray casting or marching operations in two ways:
For fog, since it’s rather low frequency, you can get away with some cheating: rather than taking, for example, 12 samples per frame, you take only 4 samples and blend them with the results of the previous two frames, netting 12 samples with negligible visual discrepancies (if you are a smart monkey).
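The fog amortization boils down to a rolling average over the last few frames' cheap results. A minimal sketch, assuming the per-frame fog term can be reduced to one value per pixel (the class and method names here are invented for illustration):

```java
import java.util.ArrayDeque;

// Sketch of amortizing fog raymarch samples across frames: each frame only
// takes a fraction of the full sample budget, and the result is blended with
// the previous frames' results, so the effective count approaches
// samplesPerFrame * framesToBlend (e.g. 4 samples * 3 frames = 12).
public class TemporalFog {

    private final ArrayDeque<Double> history = new ArrayDeque<>();
    private final int framesToBlend;

    public TemporalFog(int framesToBlend) {
        this.framesToBlend = framesToBlend;
    }

    // Feed this frame's cheap (e.g. 4-sample) fog value for a pixel;
    // returns the value blended over the retained history.
    public double blend(double thisFrameFog) {
        history.addLast(thisFrameFog);
        if (history.size() > framesToBlend) {
            history.removeFirst(); // drop the oldest frame's contribution
        }
        double sum = 0;
        for (double d : history) {
            sum += d;
        }
        return sum / history.size();
    }
}
```

In practice the "history" lives in an accumulation render target rather than a queue, and fast-moving pixels need reprojection or rejection to avoid ghosting, but the averaging idea is the same.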
For AO-like systems, calculating which areas of the frame have changed significantly and performing, say, 12 AO casts on those, then only 1 cast on the remaining pixels to maintain continuity, ultimately resulting in far fewer ray casts.
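The AO scheduling idea reduces to a per-pixel budget decision. A toy sketch, using depth deltas as the (assumed) change metric; the threshold, cast counts, and names are all made up for illustration:

```java
// Sketch: decide per pixel how many AO rays to cast this frame, based on
// whether the pixel changed significantly since the last frame. Here the
// change test is a simple depth delta; a real implementation might also
// compare normals or reprojected positions.
public class AoScheduler {

    static final int FULL_CASTS    = 12; // budget for pixels that changed
    static final int REFRESH_CASTS = 1;  // keep-alive casts for stable pixels

    static int castsForPixel(float currDepth, float prevDepth, float threshold) {
        // a large depth delta means the geometry under this pixel moved:
        // spend the full budget there, otherwise just refresh the old result
        return Math.abs(currDepth - prevDepth) > threshold ? FULL_CASTS : REFRESH_CASTS;
    }
}
```

For a mostly static scene this drops the average cast count per pixel to a little over 1, while moving regions still get the full 12.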
A background image I made in Blender, nothing exciting, just a relatively empty corridor section. It’s to be used in JME though, so it still counts as JME work.
I know this is the WIP thread, but I have a question I didn’t want to make a thread for: what motherboards can you get that support 4 or more graphics cards? How do people put together computers that have 6 or 12? I’m pretty sure the ones I’ve seen weren’t server-rack types.
It depends on what you want. In layman’s terms, PCI-Express 2.0 (x16) is the equivalent of PCI-Express 3.0 (x8). Triple cards are the norm for high-end gaming rigs; six cards are usually bitcoin miners and render monkeys. They won’t be cheap, but don’t fall for the marketing hype; do your research. I have an ATI 7970 in a single x16 slot. Two of these cards couldn’t even saturate a single x16 slot, so really I could run dual or triple cards at x8 each.
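The "2.0 x16 ≈ 3.0 x8" equivalence follows from the per-lane numbers: PCIe 2.0 runs at 5 GT/s with 8b/10b encoding (80% efficiency, so 500 MB/s usable per lane), while PCIe 3.0 runs at 8 GT/s with 128b/130b encoding (~985 MB/s per lane). A quick back-of-envelope check (the helper name is just for illustration):

```java
// Back-of-envelope check of the "PCIe 2.0 x16 is equivalent to PCIe 3.0 x8"
// claim, from the per-lane transfer rate and encoding overhead.
public class PcieBandwidth {

    // GT/s * encoding efficiency = usable Gb/s per lane; /8 for gigabytes,
    // *1000 for MB/s, then scale by the lane count.
    static double usableMBs(double gtPerSec, double encodingEfficiency, int lanes) {
        return gtPerSec * encodingEfficiency * 1000.0 / 8.0 * lanes;
    }
}
```

PCIe 2.0 x16 works out to 8000 MB/s and PCIe 3.0 x8 to roughly 7877 MB/s, so the two are within about 1.5% of each other.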
An arcade/sim blend racing game with randomly generated terrain and roads (although I have recently broken that). It has a few cars and other maps to play.
The cars are fun to drive, but I’m missing some sort of mouse camera control. Also, can you go backwards? Pressing S seems to only brake. Is there a way to manually shift?
Also some suggestions:
Some cars seem to have inverted normals on some faces; maybe do a normal recalculation.
Maybe some smooth normals on the car models? Or is that a specific AESTHETIC you’re going for? If so, then those roads stick out quite a bit… if you want to do low-poly, you need to be consistent.
A higher shadow map resolution, plus a different `EdgeFilteringMode`.
Some non-electric car sounds
Oh, and how do you do the tyre marks, btw? Texture splatting, or projections, or something else entirely?