Thursday, November 20, 2008

Other blogs

I stumbled on a blog the other day that I haven't seen linked in the graphics blog circle, so I thought I'd make a point of mentioning it. Chris Evans, a technical artist from Crytek, has a blog. He has just left Crytek for ILM, and I hope he keeps up the blog, as it's full of art and technical topics. His main site also has some good stuff, like cryTools, Crytek's suite of max scripts.

I haven't gotten a chance to play Resistance 2 yet, but for a good rendering breakdown of the game check out Timothy Farrar's post. Timothy is a Senior Systems Programmer I work with at Human Head, so you can trust him ;). From the part of the game I did see, I think the object shadows work similarly to the first cascade of cascaded shadow maps: there is one shadow map that is fit to the view frustum within a short range and fades out past that range. I haven't seen the rest of the game to know whether shadows can come from more than one direction, which would make this not work.

Sunday, November 16, 2008

Smooth transitions

The T-rex night attack scene from Jurassic Park was a major milestone for CG. It showed a realistic and convincing CG character in a movie. As the T-rex closes in, Dr. Grant says, "Don't move. If we don't move he won't see us." This is something graphics programmers should remember too, because it holds true for humans as well. Our eyes are very good at seeing sharp changes but not very good at seeing smooth changes. If there is a way to smooth a hard transition, you can get away with a lot more than you otherwise could. I think this is the prime thing missing in most LOD implementations. The LOD change gets pushed far enough into the distance that you can't tell when it pops from one LOD level to the next. If the pop were replaced with a smooth transition, the LOD distance could be pushed significantly closer and still not be noticeable.
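To make the idea concrete, here's a minimal sketch of the kind of blend factor I mean. None of this is from any shipping implementation; the function name and the centered fade window are my own assumptions. Inside the window both LOD levels would be drawn, one fading out as the other fades in (e.g. via dithered alpha so no actual blending is needed).

```python
def lod_blend(distance, switch_dist, fade_range):
    """Blend factor for cross-fading two LOD levels around a switch distance.

    Returns 0.0 when fully on the high-detail LOD, 1.0 when fully on the
    low-detail LOD, and a linear ramp inside a fade window centered on
    switch_dist, so the change is never a single-frame pop.
    """
    t = (distance - (switch_dist - fade_range * 0.5)) / fade_range
    return max(0.0, min(1.0, t))

# A switch at 50 units with a 10 unit window: the high LOD starts fading
# at 45 and is fully replaced by 55.
```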

Fracture
I only played the demo, but I really liked its cascaded shadow map implementation. It looks like there are just two levels; past those there are no shadows at all. What is really nice is that there's a smooth transition between the levels. As you walk toward an object, its shadow will fade from no shadow to low-res shadow to high-res shadow. Cascaded shadow maps in so many games look strange or jarring because there is a visible line on the ground where the shadows go from one resolution to another, and that line moves with your view direction and movement.
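A rough sketch of how I'd pick a cascade with a fade band, assuming two cascades plus a fade to no shadow past the last one. The function name, the example distances, and the band width are all made up for illustration, not taken from Fracture.

```python
def cascade_shadow_blend(dist, cascade_ends=(20.0, 60.0), band=5.0):
    """Pick a cascade and a blend weight for a two-cascade shadow setup.

    Returns (index, fade) where index is the cascade to sample
    (0 = high res, 1 = low res, 2 = no shadow) and fade is how far
    we've blended toward the *next* cascade (0..1). The band around
    each cascade boundary gives the smooth transition instead of a
    hard resolution line on the ground.
    """
    for i, end in enumerate(cascade_ends):
        if dist < end - band:
            return i, 0.0
        if dist < end:
            return i, (dist - (end - band)) / band
    return len(cascade_ends), 0.0
```

The same idea also covers the fade to no shadow at the end of the last cascade, which is the part Fracture handles so gracefully.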

Gears of War 2
In my opinion, any serious game graphics programmer or artist is obligated to play Gears of War 2. It is now the bar for graphics on a console. For sheer visual impact it gives Crysis a run for its money too. On tech alone that claim seems absurd, but it's the combination of art and tech, with enough things I'd never seen before, that takes the prize for me. It isn't like this through and through, so it really takes playing the whole game to get what I mean.

As far as tech goes, I was a bit surprised when Tim Sweeney showed at GDC the new things they added to the Unreal engine for the next Gears of War. I was surprised because it wasn't very much. In the time between UT2k4 and GoW they built a whole new engine. Sure, it was an evolution, but it was a large one: the renderer was rewritten, they created a whole set of high-end tools, and they changed the internal framework drastically. For GoW to GoW2 they added SSAO, hordes of distant guys, water, changed character lighting, and destructible geometry (which wasn't really used in the game). That doesn't sound like very much considering they have 18 engine programmers listed in the credits.

The change that impressed me the most is fading in textures as they stream in. Some UE3 games have gotten flak for textures visibly popping in as they stream. Instead of pushing the memory less, they added fading in of new mip levels. For smooth transitions this is brilliant! I'm guessing most players will never know textures aren't streamed in on time because they will never see the pop again. I also suspect they pushed the texture streamer harder because they no longer had to worry about subtle pops from new mip levels. I couldn't tell whether the fade also happens when a texture drops mip levels, because I never noticed a single downsize.

Fading in new mip levels once they stream in is something I've wanted to do, but I just don't know how they are doing it. If anyone knows, or is doing something similar themselves, I'd love to hear how it works. The problem I see is that the min and max mip levels settable on a texture sampler on the 360 are unfortunately DWORDs. A LOD bias is settable, but it is applied before the clamp to min and max. The only way I can see this working is if they calculate the gradient themselves in the shader and clamp it to the fade value as they lerp from the old clamp to the new full mip level. That seems like it would create a shader explosion if it needs to be turned on and off for every texture in every shader. The alternative is to always manually calculate the texture gradients for all the UVs used and then clamp them individually for each texture, which I believe would be quite a bit slower.
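Here's the math of that guess, pulled out of the shader context into a plain sketch. This is purely speculation on my part, not Epic's method: the convention (smaller LOD = sharper mip), the function name, and the linear fade are all assumptions, and the fractional clamp is only useful because trilinear filtering blends between adjacent mips.

```python
def faded_mip(computed_lod, old_clamp, new_clamp, fade):
    """Speculative mip fade: the computed LOD is clamped against a
    moving limit that lerps from the old min-mip clamp (only coarse
    mips resident) to the newly streamed-in one as fade goes 0 -> 1.

    While fade < 1, even pixels that want the sharp new mip are held
    partway between the old blurry level and the new one, so detail
    eases in instead of popping. Smaller LOD values are sharper here.
    """
    limit = old_clamp + (new_clamp - old_clamp) * fade
    return max(computed_lod, limit)

# E.g. mip 3 was resident, mip 0 just streamed in: halfway through the
# fade a pixel asking for mip 0 gets LOD 1.5 instead.
```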

Next up is the screen space ambient occlusion (SSAO). This helped with their low-res lightmaps and showed off their very high-poly environments. Personally I think it was overdone in many cases, but overall it was an improvement. I was surprised by the implementation: they are using frame recirculation to reduce the per-frame cost. You can tell because occluding objects will wipe the AO away for a moment before it grows back in. It seems to be at a pretty high resolution, possibly screen res. The previous frame's results are found either using the velocity buffer, or just the depth buffer and the camera transformation matrices. Using this reprojected position, they can sample the previously calculated results without smearing as you look around.
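The depth-buffer variant of that reprojection can be sketched like this. This is the standard reconstruct-and-reproject idea, not anything confirmed about Epic's code; the helper names and row-major matrix convention are my own.

```python
def mat_vec(m, v):
    """4x4 row-major matrix times 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def reproject_uv(uv, depth, inv_view_proj, prev_view_proj):
    """Find where the surface under this pixel was on screen last frame.

    Reconstructs the world position from the depth buffer and the
    inverse of the current view-projection, then projects it with last
    frame's view-projection. Sampling the previous AO result at the
    returned UV is what lets temporally accumulated SSAO survive
    camera motion without smearing.
    """
    # Current pixel in clip space (UV in [0,1] -> NDC in [-1,1]).
    ndc = [uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0, depth, 1.0]
    world = mat_vec(inv_view_proj, ndc)
    world = [c / world[3] for c in world]
    prev_clip = mat_vec(prev_view_proj, world)
    # Perspective divide and back to UV space.
    return ((prev_clip[0] / prev_clip[3]) * 0.5 + 0.5,
            (prev_clip[1] / prev_clip[3]) * 0.5 + 0.5)
```

A pixel that reprojects outside [0,1], or whose stored depth no longer matches, has no valid history, which is exactly where you see the AO grow back in over a few frames.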

Much of the visual splendor comes from clever material effects. They have scrolling distortion maps to distort the UVs. There are pulsing parallax maps that look so good that for a moment I thought the whole area was deforming geometry. There was inner glow from an additive reverse fresnel, like the cave antlions in Half-Life 2: Episode Two. There was a window pane with rain drops running down it that I had to study for a good two minutes to figure out what was going on. My guess: one droplet map, plus two identical drop-stream maps independently masked by scrolling textures. The final normal map was used to distort a lookup into an environment map. Their artists really went to town with some of this stuff.
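To pin down that rain-pane guess, here's the combination I have in mind, written as a sketch. Everything here is hypothetical: the `sample(name, uv)` callback stands in for a texture fetch, and the texture names and scroll speeds are invented, not anything from the game.

```python
def rain_pane_normal(uv, time, sample):
    """My guessed rain-on-glass material: a static droplet normal map
    plus two copies of a drop-stream normal map, each gated by its own
    scrolling greyscale mask so streaks run down the pane at different
    rates. `sample(name, uv)` is a stand-in for a texture fetch and
    returns a channel list; masks use only their first channel.
    """
    u, v = uv
    n = list(sample('droplets', (u, v)))
    for i, speed in enumerate((0.10, 0.17)):
        mask = sample('mask%d' % i, (u, v + time * speed))[0]
        stream = sample('streams', (u, v + time * speed * 2.0))
        n = [a + b * mask for a, b in zip(n, stream)]
    return n  # combined normal, then used to distort an env-map lookup
```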

The lighting is still mostly directional lightmaps. Dynamic shadows are character-based modulate shadows, this time at a higher resolution than before. It seems they are only on characters this time, leaving the SSAO to handle the rest of the dynamic objects.

The Latest Games

So, I've gotten some complaints that I haven't updated this blog in a while. Partly this is due to the abundance of games that have come out lately, and partly it's because my mind share has been in work-specific graphics algorithms. Since I cannot talk about what I do for work, I decided a better source of blog topics is other people's games. The majority of my graphics research comes from reading about and studying games in whatever form I can get. I'll go through some of the things I've found lately.

Dead Space
One of the standout graphical features of Dead Space, to me, is its dynamic shadows. All lights and shadows are dynamic. The shadows are projected or cube shadow maps, depending on the light. It looks like plain bilinear filtering, which makes the shadows look very pixelated and nasty at times. They really take advantage of the shadows being dynamic, though, by having objects and lights move around whenever they can. Swinging and vibrating light sources are abundant. For large lights the resolution really suffered, and the player's shadow could degrade into blocks.

Coronas and similar effects were used a lot and looked great. With bloom being all the rage for portraying bright objects, it's a nice slap in the face that the old standbys can sometimes get much better results than the fancy new systems. Similar tricks were used to get light beams through dusty corridors, using sheets that fade out as you get near them, which masks their flat shape well. It seems the artists went to town with this effect, making different light beam textures for different situations. Foggy particle systems that fade out as you approach them were also used.

David Blizard, Dead Space's lighting designer, said their frame budget was 7.5ms for building the deferred lighting buffer, 4ms for building the shadow buffers, and 2ms for post processing, including bloom and an antialiasing pass (which means they are not using multisampling). For ambient light there is baked ambient occlusion for the world that modulates an ambient color coming from the lighting.

Mirror's Edge
Although it uses the Unreal engine, it looks distinctly unlike any other Unreal game, or for that matter any other game out there, thanks to its "graphic design" art style. Tech-wise it relies heavily on global illumination. For this they replaced Unreal's normal lightmap generation tool with Beast. From what I've seen in the first hour or so, the only dynamic shadows are from characters, and they are modulate shadows. All environment lighting is baked into the lightmaps with Beast, and it looks like they're at a higher resolution than I've seen used before. My guess is they sacrificed a larger portion of their texture memory and added streaming support for the lightmaps. It has auto exposure, which I haven't seen from an Unreal game before. All of the reflections but one planar one were cube maps, which caught me a little by surprise, because DICE had talked about researching real reflections. Overall I was impressed. You could tell there was a very tight art/tech vision for the game.

Fallout 3
The world in Fallout 3 is stunning. On the tech side I don't see much improved from Oblivion. Maybe most of the changes are under the hood, to get more things running faster, but I don't see much different, just a few odds and ends. It does seem like the artists have really grown, and they have come up with good ways of portraying the things they needed to.

A few things stood out to me. First is the light beams. There's a now-common technique of using a shader that fades out the edges of a cone-like mesh, among other tricks, to portray a light cone. The first place I saw it was Gears of War, but it was used heavily and well in Fallout 3.
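The edge fade usually comes down to something like this. This is a generic sketch of the technique, not Bethesda's shader; the function name, the fade distances, and the choice of a plain linear near fade are assumptions.

```python
def beam_alpha(view_dot_normal, dist_to_camera, fade_start=1.0, fade_end=0.25):
    """Fade a light-beam cone mesh so it reads as volumetric.

    The mesh fades out as its surface normal turns away from the viewer
    (hiding the hard silhouette of the cone geometry), and the whole
    beam fades as the camera gets close (hiding the flat sheet, like
    the dusty-corridor beams in Dead Space).
    """
    edge = abs(view_dot_normal)          # 1 facing the camera, 0 edge-on
    near = (dist_to_camera - fade_end) / (fade_start - fade_end)
    near = max(0.0, min(1.0, near))
    return edge * near
```

In a real shader this would just be the alpha output, with a scrolling dust texture multiplied on top.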

Second was the grass, which was done with many grass clump models with variation. The textures had a darkened core to simulate shadowing, and with AA on they used alpha-to-coverage and looked great. So many games use just one grass sprite to fill in grass; no matter how densely you place it, it will not look right. Grass needs variation in size, color, texture, density, and shape. Great job, it's some of the best grass I've seen.

The last thing that stood out was their crumble decals (I don't know what else to call them). To portray crumbling, damaged stone and concrete, alpha-tested, normal-mapped decals were placed on the edges of models. It looks like the same instanced model was placed around with different placements of these crumble decals. That added an apparent uniqueness to the models as well as breaking up their straight edges. There were quite a few times I was fooled into thinking some perfectly straight edge was tessellated when it was just effective use of these decals. Also of note: these decals will often fade out in the distance before anything else LODs, so there is likely a system handling them specially.

Overall, the art was nice but the tech was underwhelming. They really should have shadows by this point; there are no world shadows of any sort. With that addition I think the game would look twice as good as it does.