Advances in Realtime rendering - SIGGRAPH2014


Just thought some of you might enjoy knowing what was talked about in the course. The talks cover work-in-progress real-time rendering techniques from people in the (mostly games) industry.
The slides will be available in a week or so.

Part I

  • Next Generation Post Processing in Call of Duty: Advanced Warfare - Jorge Jimenez, Activision Blizzard
  • High-Quality Temporal Supersampling - Brian Karis, Epic Games, Inc.
  • Rendering Techniques in Ryse: Son of Rome - Nicolas Schulz, Crytek GmbH
  • Hybrid Reconstruction Anti-Aliasing - Michal Drobot, Ubisoft Montreal

In short, everyone talked about anti-aliasing using temporal reprojection and supersampling. As far as I could determine there were no really new techniques presented; mostly the houses are combining several techniques in the render pipeline to achieve sub-pixel, temporally stable AA. Most of the talks hinted at pretty advanced things people do to achieve their goals, sometimes using HW-specific APIs to read sub-pixel information from the rasterizer (coverage information etc.). There was a lot of hefty math and tweaks for specific renderers and asset pipelines. I didn't get the feeling that it would be easy to port any of this to jME.
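To make the shared idea concrete, here's a toy 1-D sketch of temporal supersampling (my own illustration, not code from any of the talks, and all names are made up): jitter the sample position by a sub-pixel offset each frame and exponentially blend the result into a history value, so a pixel straddling an edge converges toward its true coverage over time.

```python
import random

def shade(x):
    """Hypothetical 1-D scene: a hard edge at x = 0.5 (1 = lit, 0 = dark)."""
    return 1.0 if x >= 0.5 else 0.0

def temporal_accumulate(pixel_center, frames, blend=0.1, pixel_width=0.01, seed=0):
    """Jitter the sample position within the pixel each frame and blend the
    new sample into a running history, as temporally supersampled AA does."""
    rng = random.Random(seed)
    history = shade(pixel_center)  # first frame, no history yet
    for _ in range(frames):
        jitter = rng.uniform(-0.5, 0.5)  # sub-pixel offset, in pixel units
        sample = shade(pixel_center + jitter * pixel_width)
        history = (1.0 - blend) * history + blend * sample
    return history
```

A pixel centered exactly on the edge ends up somewhere between fully lit and fully dark instead of flickering between the two, which is the temporal stability everyone was after (the real systems also reproject the history along motion vectors and clamp it to avoid ghosting).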

Part II

  • Volumetric Fog: Unified Compute Shader-Based Solution to Atmospheric Scattering - Bartlomiej Wronski, Ubisoft Montreal
    I found this rather interesting, they use 3D low-res textures and ray-marching in them to create atmospheric scattering and volumetric fog.
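The marching itself boils down to classic front-to-back accumulation; here's a 1-D sketch (my own simplification, all names invented; the real technique marches a low-res 3D texture aligned to the view frustum): step along the ray, attenuate by Beer-Lambert extinction, and accumulate in-scattered light weighted by the transmittance of everything in front of it.

```python
import math

def march_fog(density_samples, step_len, sigma=1.0):
    """Front-to-back ray march through fog density samples.
    Returns (in-scattered light, remaining transmittance)."""
    transmittance = 1.0
    scattered = 0.0
    for d in density_samples:
        extinction = math.exp(-d * sigma * step_len)  # Beer-Lambert per step
        # light scattered in at this step, dimmed by the fog in front of it
        scattered += transmittance * (1.0 - extinction)
        transmittance *= extinction
    return scattered, transmittance
```

A handy sanity check on this formulation: with a white light everywhere, scattered light and remaining transmittance always sum to exactly 1, since each step just moves energy from one bucket to the other.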

  • Real-Time Lighting Via Light Linked List - Abdul Bezrati, Insomniac Games
    In short, lights are rendered as geometry (spheres, cones etc.). Each light is stored in a linked list with a pointer to a global list of light property definitions. When lighting the GBuffer, the shader loops through all lights affecting each fragment and lights it. In other words, there is a linked list of lights for each fragment in the GBuffer. Pretty nice demonstration on the Sponza scene where e.g. smoke and translucent objects were lit in the same way as opaque objects without any special handling.
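The data structure is easy to sketch on the CPU (a hypothetical illustration with made-up names; on the GPU the insertion is done with an atomic head exchange into append buffers while rasterizing the light volumes):

```python
def insert_light(heads, nodes, pixel, light_index):
    """Prepend a light to the pixel's linked list: allocate a node holding
    the light index and the old head, then point the head at it."""
    nodes.append((light_index, heads.get(pixel, -1)))  # -1 terminates the list
    heads[pixel] = len(nodes) - 1

def lights_at(heads, nodes, pixel):
    """Walk the per-pixel list, collecting light indices (what the lighting
    shader would loop over for that fragment)."""
    node = heads.get(pixel, -1)
    result = []
    while node != -1:
        light, node = nodes[node]
        result.append(light)
    return result
```

Since nodes are prepended, the lights come out in reverse insertion order, which doesn't matter for additive lighting. Smoke and transparents can walk the same lists, which is why they needed no special handling.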

  • Reflection System in Thief - Peter Sikachev & Nicolas Longchamps, Eidos Montreal
    Thief uses a multi-tier solution combining Screen-Space Reflections (SSR), Image-Based Reflections (IBR), and cube map reflections. Much of it is pre-baked and artist-driven.

  • Tessellation in Call of Duty: Ghosts - Wade Brainerd, Activision Blizzard
    Showed how they work with B-splines, surface subdivision, and tessellation for their models, and how displacement maps were used to drive tessellation. Most of the math went totally over my head but it looked awesome :slight_smile:
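For reference, evaluating a point on one uniform cubic B-spline segment from its four control points uses the standard basis functions (this is textbook math, not Activision's actual code; the tessellation hardware evaluates something like this per generated vertex, then pushes the point along the normal by the displacement map value):

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at parameter t in [0, 1],
    given four scalar control points (use per component for 3D points)."""
    b0 = (1.0 - t) ** 3 / 6.0
    b1 = (3.0 * t**3 - 6.0 * t**2 + 4.0) / 6.0
    b2 = (-3.0 * t**3 + 3.0 * t**2 + 3.0 * t + 1.0) / 6.0
    b3 = t**3 / 6.0
    # the four basis weights always sum to 1 (partition of unity)
    return b0 * p0 + b1 * p1 + b2 * p2 + b3 * p3
```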

  • Reflections and Volumetrics of Killzone Shadow Fall - Michal Valient, Guerrilla Games Amsterdam
    Talked about how they achieved reflections in Killzone. It was a similar multi-tier system to the one presented for Thief. They did some ray tracing for SSR plus a system of artist-placed cube maps that were blended together using temporal reprojection. One clever thing was to use a jitter offset for the ray tracing part: since they use temporal reprojection of previous frames, they get something that looks a little like oversampling even though the ray tracing buffer was at a quarter of the screen resolution.
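The jitter scheduling side of that trick is simple to sketch (my own illustration of the idea, not Guerrilla's actual pattern): cycle a 2×2 pixel offset per frame so that the quarter-resolution trace visits every full-resolution pixel once every four frames, and let temporal reprojection blend the four results together.

```python
def quarter_res_jitter(frame_index):
    """2x2 jitter offset for a given frame. Tracing at quarter resolution
    (half width, half height) with this offset means each full-res pixel
    gets a fresh ray once per four-frame cycle."""
    pattern = [(0, 0), (1, 0), (0, 1), (1, 1)]
    return pattern[frame_index % 4]

def traced_pixels(frame_index, half_w, half_h):
    """Full-res pixel coordinates traced this frame at quarter resolution."""
    ox, oy = quarter_res_jitter(frame_index)
    return {(2 * x + ox, 2 * y + oy) for x in range(half_w) for y in range(half_h)}
```

Over any four consecutive frames the union of traced pixels covers the full-resolution grid, which is why the reprojected history ends up looking close to a full-resolution trace.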

Have a look at the slides in a week or so, they’ll be more informative than this post :slight_smile:


Sounds pretty cool :D, thanks for sharing

Stephen Hill is collecting links to papers/talks/courses presented at this year's SIGGRAPH. Lots of fun stuff :slight_smile:


interesting stuff for sure, thanks for letting us know!