Tuesday, August 12, 2008

Deferred rendering 2

I'll start off by saying check out the new papers from SIGGRAPH posted here. I was really surprised by the one on StarCraft II. Blizzard has in the past stayed behind the curve on purpose to keep their requirements low and their audience large. It seems this time they have kept the low range while expanding further into the high end. I was also surprised by the nature of the visuals in the game. It's part adventure game? Count me in. It's looking great. It also has an interesting deferred rendering architecture, which leads me to my next thing.

Deferred rendering, part II. Perhaps I should have just waited and made one monster post, but now you'll just have to live with it.

Light Pre-Pass
post

This was recently proposed by Wolfgang Engel. The main idea is to split material rendering into two parts. The first part writes out depth and normal to a small G-buffer; it's possible this could even all fit in one render target. With this information you can get everything important from the lights, which is N.L and either R.V or N.H, whichever you want. The light buffer is as follows:

LightColor.r * N.L * Att
LightColor.g * N.L * Att
LightColor.b * N.L * Att
R.V^n * N.L * Att

With this information standard forward rendering can be done just once. This comprises the second part of the material rendering.
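To make the two halves concrete, here is a minimal C++ sketch of the per-pixel math as I read it. The names are mine and the terms are simplified, so treat it as an illustration of the buffer format rather than Engel's actual implementation:

#include <algorithm>
#include <cmath>

struct float3 { float x, y, z; };
struct float4 { float r, g, b, a; };

static float dot(float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Light accumulation: one light's additive contribution to the 4-channel
// light buffer. Note the specular exponent n is a global, not per-surface.
float4 accumulateLight(float3 N, float3 L, float3 R, float3 V,
                       float3 lightColor, float att, float n)
{
    float NL = std::max(dot(N, L), 0.0f);
    float spec = std::pow(std::max(dot(R, V), 0.0f), n) * NL * att;
    return { lightColor.x * NL * att,
             lightColor.y * NL * att,
             lightColor.z * NL * att,
             spec };
}

// Second part: the single forward material pass reads the summed buffer back.
// The fourth channel carries intensity only, so the specular comes out colorless.
float3 applyMaterial(float4 lit, float3 albedo)
{
    return { lit.r * albedo.x + lit.a,
             lit.g * albedo.y + lit.a,
             lit.b * albedo.z + lit.a };
}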

He explains that R.V^n can be derived later by dividing out the N.L * Att, but I don't see any reason to do this. It also means a divide by the light color, which is just wrong. There's also the mysterious exponent n, which must be a global or something, meaning no per-surface exponent.

There are really a number of issues here. Specular doesn't have any color at all, not even from the lights. If you instead store R.V in the fourth channel and try to apply the power and multiply by LightColor * N.L * Att in the forward pass, the multiplications have been shuffled with the additions and the math doesn't work out. There is no specular color or exponent, and it is dependent on everything using the Phong lighting equation. It solves the deep framebuffer problem, but it is a lot more restrictive than traditional deferred rendering. All in all it's nice for a demo but not for production.
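A few toy numbers make the exponent problem concrete. Assuming two white lights with attenuation of 1, converting the accumulated highlight to a per-material exponent after the fact does not give the answer that per-light evaluation would have:

#include <cmath>
#include <cstdio>

int main()
{
    float NH1 = 0.9f, NH2 = 0.7f; // N.H for two lights, arbitrary values
    float c = 8.0f;               // global exponent used during accumulation
    float matSpec = 32.0f;        // the exponent this surface actually wants

    // What the buffer holds after additive blending:
    float accumulated = std::pow(NH1, c) + std::pow(NH2, c);

    // Converting exponent c to matSpec after the sum:
    float rescaled = std::pow(accumulated, matSpec / c);             // ~0.057
    // What evaluating each light at matSpec would have produced:
    float correct = std::pow(NH1, matSpec) + std::pow(NH2, matSpec); // ~0.034

    std::printf("rescaled %f vs correct %f\n", rescaled, correct);
    return 0;
}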

Naughty Dog's Pre-Lighting
presentation

I have to admit that when I sat through this talk I didn't really understand why they were doing what they were doing. It seemed overly complicated to me. After reading the slides afterwards, the brilliance started to show through. The slides are pretty confusing, so I will at least explain what I think they mean. Insomniac has since adopted this method as well, but I can't seem to find that presentation. The idea is very similar to the Light Pre-Pass method. It is likely what you would get if you took Light Pre-Pass to its logical conclusion.

Surface rendering is split into two parts. The first pass renders out depth, normal, and specular exponent. Second, the lights are drawn additively into two HDR buffers, diffuse and specular. The material's specular exponent has been saved out, so this can all be done correctly. These two buffers can then be used in the second surface pass as the accumulated lighting, and material attributes such as diffuse color and specular color can be applied. They apply some extra trickery that complicates the slides: combining light drawing into screen-space quads so that a single pixel never gets drawn more than once during light drawing.
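My reading of the second surface pass, as a hedged C++ sketch (the names and structure here are mine, not Naughty Dog's):

struct float3 { float x, y, z; };

static float3 mul(float3 a, float3 b) { return { a.x*b.x, a.y*b.y, a.z*b.z }; }
static float3 add(float3 a, float3 b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }

// diffuseAccum and specularAccum are fetched from the two HDR light buffers.
// Because the exponent was written out in the first pass, the specular was
// already raised to the correct per-surface power during light accumulation.
float3 secondSurfacePass(float3 diffuseAccum, float3 specularAccum,
                         float3 albedo, float3 specColor)
{
    return add(mul(diffuseAccum, albedo), mul(specularAccum, specColor));
}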

This is completely usable in a production environment, as proven by Uncharted having shipped and looking gorgeous. Lights can be handled one at a time (even though they don't do it that way), so multiple shadows pose no problems. The size of the framebuffer is smaller. HDR obviously works fine.

It doesn't solve all the problems though. Most are small, and without testing it myself I can't say whether they are significant or not. The one nagging problem of being stuck with Phong lighting still remains; this time it's just a different part of Phong that has been exposed and made rigid in the system.

Light Pass Combined Forward Rendering

I am going to propose another alternative that I haven't really seen talked about. The idea is similar to light indexed deferred rendering. The idea there was forward rendering style, but with all the lights that hit a pixel rendered in one pass. This can be handled far more simply: pass the light parameters in with the draw call and apply more than one light at a time. This is nothing new; Crysis can apply up to 4 lights at a time. What I haven't seen discussed is what to do when a light only hits part of a surface. Light indexed rendering handles this on a per-pixel basis, so there it is a non-issue. If the lights are "indexed" per surface, then many more lights may have to be applied to every pixel than are actually needed.
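The shader side of what I'm suggesting is just a constant-driven loop. A sketch, with illustrative names and diffuse-only lighting to keep it short:

#include <algorithm>

struct float3 { float x, y, z; };

static float dot(float3 a, float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Per-draw light list: the lights touching this surface are bound as
// constants with the draw call and applied in a single pass.
struct Light { float3 color; float3 dirToLight; float att; };

float3 shadeSurface(const Light lights[], int numLights, float3 N, float3 albedo)
{
    float3 sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < numLights; ++i) {
        float s = std::max(dot(N, lights[i].dirToLight), 0.0f) * lights[i].att;
        sum.x += lights[i].color.x * s;
        sum.y += lights[i].color.y * s;
        sum.z += lights[i].color.z * s;
    }
    return { sum.x * albedo.x, sum.y * albedo.y, sum.z * albedo.z };
}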

We can solve this problem in a way other than screen space. For instance, splitting the world geometry at the bounds of static lights will get you pixel-perfect light coverage for any mesh you wish to split. The surfaces with the worst problems are the largest, being hit by the most lights. These are almost always large walls, floors, and ceilings. This type of geometry is typically not very expensive to split and is rarely instanced. Objects that don't fall into this category are typically instanced, relatively contained meshes that do not have very smooth transitions with other geometry. For those, I suggest keeping only a fixed number of real affecting lights per surface and combining any less significant lights into a spherical harmonic. For more details see Tom Forsyth's post on it. In my experience the light count hasn't posed an issue.
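A sketch of the light selection I have in mind, with the SH part collapsed to a flat ambient term to keep it short (Forsyth's post covers accumulating a light into a real spherical harmonic; all naming here is my own):

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct float3 { float x, y, z; };

struct Light { float3 pos; float3 color; float radius; };

static float distanceTo(float3 a, float3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Rough significance of a light at the mesh center: color intensity scaled
// by a linear falloff. Only used for sorting, so crude is fine.
static float significance(const Light& l, float3 center)
{
    float att = std::max(1.0f - distanceTo(center, l.pos) / l.radius, 0.0f);
    return (l.color.x + l.color.y + l.color.z) * att;
}

// Keep the k most significant lights as real lights; fold the rest into a
// single ambient color, standing in for the SH accumulation.
void partitionLights(std::vector<Light> lights, float3 center, std::size_t k,
                     std::vector<Light>& real, float3& ambient)
{
    std::sort(lights.begin(), lights.end(),
              [&](const Light& a, const Light& b)
              { return significance(a, center) > significance(b, center); });
    for (std::size_t i = 0; i < lights.size(); ++i) {
        if (i < k) {
            real.push_back(lights[i]);
        } else {
            float att = std::max(
                1.0f - distanceTo(center, lights[i].pos) / lights[i].radius, 0.0f);
            ambient.x += lights[i].color.x * att;
            ambient.y += lights[i].color.y * att;
            ambient.z += lights[i].color.z * att;
        }
    }
}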

The one remaining issue is shadows. Because all lights for a surface are applied at once, shadows can't be done one light at a time. This is the same issue light indexed rendering has, and the solution will be the same as well: all shadows have to be calculated and stored ahead of time, likely in a screen space buffer. The obvious choice is 4 shadowing lights using the 4 channels of an RGBA8 render target. This is the same solution Crytek is using. That doesn't mean only 4 shadowing lights are allowed on screen at a time; there is nothing stopping you from rendering a surface again after you've completed everything using those 4 lights.
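What reading that mask back might look like, as a sketch (which light goes in which channel is CPU-side bookkeeping per surface; the names are mine):

#include <cstdint>

// One texel of the screen space shadow buffer: each channel holds the
// shadow term of one of the (up to) four shadowing lights.
struct ShadowMask { std::uint8_t channel[4]; };

// 0 = fully shadowed, 255 = fully lit.
float shadowTerm(const ShadowMask& m, int lightIndex)
{
    return m.channel[lightIndex] / 255.0f;
}

// In the combined forward pass, each shadowing light's contribution is
// simply scaled by its channel before being added in.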

Given the limit of 4 shadowing lights, this turns into a forward rendering architecture that needs only one pass. It gets rid of all the redundant work from draws, tris, and material setup. It also gives you all the power of a forward renderer, such as changing the lighting equation to be whatever you want it to be. It doesn't rely in any way on screen space buffers for the lighting besides the shadow buffer, which means no additional memory and no 360 EDRAM headaches.

There are plenty of problems with this. Splitting meshes only works with static lights. In all of the games I've referenced so far this poses no problems; most environmental lighting does not move (at least the bounds don't), nor does the scenery to a large extent. Splitting a mesh adds more triangles, vertices, and draw calls than before, but for the kind of geometry worth splitting this is typically not a major issue.

You do not get one of the cool things from deferred rendering, and that is independence from the number of lights. In the StarCraft II paper that came out today they had a scene with over 50 lights in it, including every bulb on a string of Xmas lights. This is not a major issue for a standard deferred renderer, but it is for pass combined forward rendering. It is really cool to be able to do that, but in my opinion it is not very important. The impact on the scene from those Xmas lights actually casting light is minimal, and there are likely other ways of doing it besides tiny dynamic lights.

Summary

That is my roundup of dynamic lighting architectures. I left out any kind of precalculated lighting, such as lightmaps, environment maps, or Carmack's lighting baked into a unique virtual texture, as it's pretty much a different topic.

2 comments:

Pat Wilson said...

I'd like to respectfully disagree about pre-pass being non-production ready. Approximating a per-material specular power can be done quite accurately depending on the range of specular powers in the scene. Since the color of a specular highlight is a property of the material and not the light, I do not believe it should be stored during accumulation.

The value stored in the specular channel for pre-pass rendering is:
(N.H)^c * N.L * Att
Dividing this value by N.L * Att, then raising the result to (matSpec/c), gives a very usable per-material specular value.

I implemented a version of the light pass which uses the CIE-LUV color space. There is some discussion about it here: http://www.gamedev.net/community/forums/topic.asp?topic_id=514536

Brian Karis said...

The pre-pass idea is one that I like. It was more the specific formats that I do not think are full-featured enough to support the demands of a high end game. The color of the specular response of a material is based on both the light and the material; a purely red light will not reflect anything but red light, and you would be storing only the intensity of this specular response. You also don't really have N.L * Att to use to divide out afterward; you have N.L * Att * lightColor. The result would be (N.H)^c / lightColor, which is not going to look right at all. Ignoring color for a moment, ((N.H1)^c + (N.H2)^c)^(matSpec/c) != (N.H1)^matSpec + (N.H2)^matSpec. If all your specular exponents are very close to c this will be fine, but then I would just call that a fixed exponent and not claim any variation at all. There is a large difference between a rough highlight and a glossy one, and this will break down.

As for adding lights in LUV space, this is incorrect as well, as can be seen in your shots. Adding lights in multiple passes is perceptually correct if it uses linear RGB space. The reason RGB works is that it matches the L (red), M (green), and S (blue) receptors in our eyes: adding light at these 3 frequencies remains identical, to our eyes' response, to adding the original frequencies themselves. This is the reason monitor phosphors and all sorts of other things use the RGB model. There may be other color models that give results identical to RGB upon adding, but I don't know what they are. LUV certainly isn't one of them, as demonstrated in your shots.

For a production environment I would not choose the formats given to implement pre-pass lighting; I would choose Naughty Dog's method. For further clarification, their method wrote out a diffuse color and a specular color to two different render targets, encoded in LogLUV. The adding of the light was done in their 8-light shader, and the encoding happened after the light was added. Because they stored the specular exponent in the same buffer as the normals, they have that material-specific specular exponent to apply to the lighting. More render targets are required in this method versus the others, but it is obviously fast enough, because they shipped Uncharted with it.