Saturday, January 10, 2009

Virtual Geometry Images

Geometry images are one of those ideas so simple you ask yourself "Why didn't I think of this?" I'll admit they aren't the topic of much discussion concerning the "more geometry" problem for the next generation. They work great for compression, but they don't inherently solve any of the other problems. Multi-chart geometry images have a complicated zipping procedure, which also breaks down if one chart is rendered at a different resolution than its neighbor.

A year ago, when I was researching a solution to "more geometry" on DirectX 9 level hardware, I came across this paper, which was in line with the direction I was thinking. The idea is an extension to virtual textures: alongside the textures there is another layer that is a geometry image. For every texture page that is brought in, there is a geometry image page with it. By decomposing the scene into a seamless texture atlas you are also doing a Reyes-like split operation. The splitting is a preprocess and the dicing happens in real time. The paper also explains an elegant seam solution.

My plan for getting this running really fast was to use instancing. With virtual textures every page is the same size, which simplifies many things. Detail is controlled similarly to a quad tree: the same size pages just cover less of the surface, and there are more of them. If we mirror this with geometry images, every patch of geometry we render is a fixed-size grid of quads. This works perfectly with instancing if the actual position data is fetched from a texture, as geometry images imply. The geometry you are instancing is then a grid of quads whose vertex data is only texture coordinates from 0 to 1. The per-instance data is passed in with a stream and the appropriate frequency divider. This supplies data such as patch world space position, patch texture position and scale, edge tessellation amount, etc.
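To make that concrete, here is a minimal sketch of what the patch vertex shader could look like in SM 3.0 HLSL. The names, the per-instance layout, and the position-only fetch are my own illustrative assumptions, not anything from the paper; on D3D9 the per-instance stream would be set up with IDirect3DDevice9::SetStreamSourceFreq.

```hlsl
// Sketch of an instanced patch vertex shader (SM 3.0).
// Stream 0 holds the shared grid-of-quads mesh (UVs 0..1);
// stream 1 holds per-instance patch data via the frequency divider.
float4x4  ViewProj;
sampler2D GeometryImageCache; // cache of geometry image pages (.xyz = position)

struct VSInput
{
    float2 GridUV           : TEXCOORD0; // shared vertex data, 0..1 across the patch
    float4 PageScaleBias    : TEXCOORD1; // per instance: this patch's page rect in the cache
    float4 PatchWorldOffset : TEXCOORD2; // per instance: patch world space position
};

float4 PatchVS(VSInput In) : POSITION
{
    // Remap the shared 0..1 grid UV into this instance's page of the cache.
    float2 pageUV = In.GridUV * In.PageScaleBias.xy + In.PageScaleBias.zw;

    // Vertex texture fetch of the position; SM 3.0 requires the explicit-LOD form.
    float3 objectPos = tex2Dlod(GeometryImageCache, float4(pageUV, 0, 0)).xyz;

    return mul(float4(objectPos + In.PatchWorldOffset.xyz, 1), ViewProj);
}
```

Edge tessellation is omitted here; welding patch borders at differing detail levels would need the edge factors passed per instance as well.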

If patch tessellation is tied to the texture resolution, this provides the benefit that no page table needs to be maintained for the textures. It does mean there may be a high amount of tessellation in a flat area merely because texture resolution was required there. Textures and geometry can be at different resolutions and still be tied, such as the texture being 2x the size of the geometry image; that doesn't really change the system.

If the performance is there to have the two at the same resolution, a new trick becomes available. Vertex density will match pixel density, so all pixel work can be pushed to the vertex shader. This gets around the quad problem with tiny triangles. If you aren't familiar with this, all pixel processing on modern GPUs gets grouped into 2x2 quads. Unused pixels in the quad get processed anyway and thrown out. This means that if you have many pixel-sized triangles, your pixel performance will approach 1/4 the speed. If the processing is done in the vertex shader instead, this problem goes away. At this point the pipeline is looking similar to Reyes.
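As a rough sketch of what "pushing pixel work to the vertex shader" means in practice, here the shading is just per-vertex diffuse with illustrative names of my own, and the pixel shader degenerates into a pass-through:

```hlsl
// Called from the patch vertex shader after the geometry image fetch:
// shading happens per vertex, and the pixel shader only outputs the
// interpolated result, so 2x2 quad waste costs interpolation, not shading.
float4x4 ViewProj;
float3   LightDir;   // normalized, world space
float3   LightColor;

struct ShadedVertex
{
    float4 Position : POSITION;
    float3 Color    : COLOR0;
};

ShadedVertex ShadeVertex(float3 worldPos, float3 worldNormal)
{
    ShadedVertex Out;
    Out.Position = mul(float4(worldPos, 1), ViewProj);
    Out.Color    = saturate(dot(worldNormal, -LightDir)) * LightColor; // per-vertex diffuse
    return Out;
}

float4 PassThroughPS(ShadedVertex In) : COLOR
{
    return float4(In.Color, 1); // no per-pixel work beyond interpolation
}
```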

If this is not a possibility for performance reasons, and it likely isn't, the geometry patches and the texture can be untied. This allows the geometry to tessellate in detailed areas and not in flat ones. The texture page table will need to come back, though, which is unfortunate.

Geometry images were first designed for compression, so disk space should be a pretty easy problem. One issue, though, is edge pixels. Between each page the edge pixels need to match exactly or there will be cracks. This can be handled by losslessly compressing just the edges and using normal lossy image compression for the interiors. As the patches mip down they will be using shared data from disk, so this shouldn't be an issue. The data should be stored uncompressed in memory though, or the crack problem will return.

Unfortunately vertex texture fetch performance, at least on current console hardware, is very slow. There is a high amount of latency, and triangles are not processed in parallel either. With DirectX 11 tessellators it sounds like they will be processed in parallel. I do not know whether vertex texture fetch will be up to the speed of a pixel texture fetch; I would sure hope so. I need to read the specs for both the API and this new hardware before I can postulate on how exactly this scheme could be done with tessellators instead of instanced patches, but I think it will work nicely. I also have to give the disclaimer that I have not implemented this, so the performance and details of the implementation are not yet known.

Compared with the other schemes, this one has some advantages. Given that it is still triangle rasterization, dynamic objects are not a problem. To make this work with animated meshes it will probably need bone indices and weights stored in a texture along with the position. This can be contained to an animation-only geometry pool. It doesn't have the advantage subd meshes have that you can animate just the control points. That advantage may not work so well anyway, because you need a fine-grained cage to get good animation control, which increases patch count, draw count, and the tessellation of the lowest detail LOD (the cage itself).
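One way the animated path might look, as a hedged sketch: storing bone matrices in a texture is a standard trick, but the specific packing below (two bones per vertex, matrix rows stored as texture rows) is purely my assumption, not part of the paper's scheme:

```hlsl
// Sketch of skinning inside the patch vertex shader. Bone indices/weights
// live in the animation-only geometry pool next to the positions, and bone
// matrices are fetched from a second texture. All layouts are assumptions.
sampler2D GeometryImageCache; // .xyz = rest position
sampler2D BoneDataCache;      // .xy = bone indices (as texture u coords), .z = weight of bone 0
sampler2D BoneMatrices;       // 3 texture rows = the 3 rows of each bone's 3x4 matrix

float3 SkinPosition(float2 pageUV)
{
    float3 restPos  = tex2Dlod(GeometryImageCache, float4(pageUV, 0, 0)).xyz;
    float4 boneData = tex2Dlod(BoneDataCache,      float4(pageUV, 0, 0));
    float2 indices  = boneData.xy;                    // two bones per vertex in this sketch
    float2 weights  = float2(boneData.z, 1 - boneData.z);

    float3 skinned = 0;
    for (int i = 0; i < 2; i++)
    {
        float u = indices[i]; // assumed pre-offset to texel centers
        float3x4 bone = float3x4(
            tex2Dlod(BoneMatrices, float4(u, 0.5 / 3, 0, 0)),
            tex2Dlod(BoneMatrices, float4(u, 1.5 / 3, 0, 0)),
            tex2Dlod(BoneMatrices, float4(u, 2.5 / 3, 0, 0)));
        skinned += weights[i] * mul(bone, float4(restPos, 1));
    }
    return skinned;
}
```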

Its ability to LOD is better than subd meshes but not as good as voxels. The reason is that the charts a model has to be split into are usually quite a bit bigger than the patches of a subd mesh, though this really depends on how intricate the mesh is. It scales the same way subd meshes do, just with a different multiplier. Things like terrain will work very well. Things like foliage will work terribly.

On the tools side, anything can be converted into this format. Writing the tool unfortunately looks very complicated. The complexity primarily lies in the texture parametrization required to build the seamless texture atlas. After the UVs are calculated, the rest should be pretty straightforward.

I do like this format better than subd meshes with displacement maps, but it's still not ideal. Tiny triangles start to lose the benefits of rasterization: there will be overdraw and triangles missing the centers of pixels. More important, I think, is that it doesn't handle all geometry well, so it doesn't give you the advantage of telling your artists they can make any model, place it however they want, and have no impact on the performance of the game. Once they start making trees or fences you might as well go back to how they used to work, because this scheme will run even slower than the old way. The same can be said for subd meshes, by the way.

To sum it up, I think it's a pretty cool system, but it's not perfect and doesn't solve all the problems.

4 comments:

cbloom said...

"I do like this format better than subd meshes with displacement maps but it's still not ideal."

Why exactly? I can't figure out why people aren't high on displaced subd with DX11 tessellators.

Brian Karis said...

In my previous post talking about subd surfaces I listed some of the problems concerning them. There is more artist care required to create them, and possibly a fair amount of work before they can get the asset in game. The run time complexity is larger. Each different topology needs to be handled separately, which means many more draw calls; in the example given by Valve, 23 for a single character. There is even more complexity in guaranteeing a watertight mesh. This is combined with a relative inability to reduce detail beyond the level we are currently at. Virtual geometry images' complexity is mainly in the importer, in generating the texture parametrization. As far as advantages of subd surfaces with displacement maps, the only things I can come up with are compression, due to a single channel instead of three, and animation on control points, which I already mentioned may not work so well. So as I said, VGIs aren't perfect, but I think they have fewer disadvantages. There's also something about them that just seems more elegant. Just my opinion.

Unknown said...

Hi Brian,

Great article. I'm searching for a master's thesis subject about geometry delivery for the web, and Virtual Geometry Images seem to be what I've been looking for. However, I couldn't find any other information on this topic, so I thought it would be great to hear your opinion on this idea as of today, since the blog post is four years old now.

I'm mostly concerned about whether it could be implemented for the web, and how well it could perform in comparison to other techniques such as compressed geometry or progressive meshes. We already have a working implementation of virtual texturing in WebGL, and it seems like implementations of geometry images do exist for the web. So maybe it's not such a big leap to combine them after all? Maybe you know if someone else has implemented VGI but named it differently? Of course, any thoughts would be greatly appreciated.

Thanks!
Henrik Nilsson

Brian Karis said...

Sorry for the lag time on this response. I didn't notice this comment.

A lot of time has passed since this post. My current thinking along these lines is that GPUs aren't powerful enough yet to tessellate everything down to micropolygon size. When they become fast enough, semi-uniform tessellation is a perfect solution. At the moment we are forced to render triangles covering at least 8x8 pixels on average. There are important and unimportant places to spend those triangles, and the best algorithms for deciding that are in the progressive mesh category. There are a ton of hard problems there to solve: data compression, view adaptive simplification, smooth transitions, etc.

If you are interested in the latest in geometry images, I'd suggest looking into vector displacement mapping and ptex. It is very similar except that texels store relative positions instead of absolute ones. This cuts down significantly on the precision required, which reduces storage cost. Also, having a cage to start from can solve a number of problems concerning feature adaptive detail.
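To make the absolute-versus-relative distinction concrete, a fetch under each scheme might look like this (an illustrative sketch; the names are mine):

```hlsl
sampler2D GeometryImage;      // absolute: the texel IS the object space position
sampler2D VectorDisplacement; // relative: the texel is an offset from the cage surface

float3 FetchAbsolute(float2 uv)
{
    // Needs the full positional range and precision in every texel.
    return tex2Dlod(GeometryImage, float4(uv, 0, 0)).xyz;
}

float3 FetchDisplaced(float3 cagePosition, float2 uv)
{
    // Offsets are small relative to the model, so they quantize
    // and compress far better than absolute positions.
    return cagePosition + tex2Dlod(VectorDisplacement, float4(uv, 0, 0)).xyz;
}
```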