Tuesday, July 28, 2009

A little idea about compressing Virtual Textures

I've spent a good deal of time working on virtual textures, but took the approach of procedural generation, using the quadtree management system to exploit frame coherence for a large (10-30x) speedup over regenerating the entire surface every frame, which would be very expensive.

However, I've also always been interested in compressing and storing out virtual texture data on disk, not as a complete replacement to procedural generation, but as a complement (if a particular quadtree node gets too expensive in terms of the procedural ops required to generate it, you could then store its explicit data). But compression is an interesting challenge.

Lately it seems that a lot of what I do at work is geared towards finding ways to avoid writing new code, and in that spirit this morning on the way to work I started thinking about applying video compression to virtual textures.

Take something like x264 and 'trick' it into compressing a large 256k x 256k virtual texture. The raw data is roughly comparable to a movie, and you could tile out pages from 2D to 1D to preserve locality, organizing them into virtual 'frames'. Most of the code wouldn't even know the difference. The motion compensation search code in x264 is more general than 'motion compensation' would imply - it simply searches for matching macroblocks which can be used for block prediction. A huge virtual surface texture exhibits extensive spatial correlation, and properly tiled into, say, a 512x512x100000 3D (or video) layout, that spatial correlation becomes temporal correlation, and would probably be easier to compress than most videos. So you could get an additional 30x or so benefit on top of raw image compression, fitting that massive virtual texture into a gigabyte or less on disk.
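As a rough sketch of the 2D-to-1D tiling step (the function, the page counts, and the bit width here are my own illustrative assumptions, not from any particular engine or from x264), a Morton (Z-order) curve is one simple way to order pages so that nearby 'frames' are also nearby pages, which is what lets the codec's block matching turn spatial correlation into temporal correlation:

```python
def morton_index(x, y, bits=9):
    """Interleave the bits of (x, y) page coordinates into a single
    Z-order (Morton) index, so pages that are close in 2D stay close
    in the 1D 'frame' ordering fed to the video encoder."""
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (2 * b)
        idx |= ((y >> b) & 1) << (2 * b + 1)
    return idx

# Order all pages of a hypothetical virtual texture with 512 pages
# per side (e.g. 256k texels / 512-texel pages):
pages_per_side = 512
order = sorted(
    (morton_index(x, y), x, y)
    for y in range(pages_per_side)
    for x in range(pages_per_side))
# order[i] tells you which (x, y) page becomes video 'frame' i.
```

Any locality-preserving curve (Hilbert, tile-row snakes, etc.) would do; Morton is just the cheapest to compute.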

Even better, the decompression and compression are already really fast and solid, and maybe you could even modify some bits of a video system to get fast live edit, where it quickly recompresses a small cut of video (corresponding to a local 2D texture region) without having to rebuild the whole thing. And even if you did have to rebuild the whole thing, you could do that in less than 2 hours using x264 right now on a 4 core machine, and in much less time using a distributed farm or a GPU.
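To make the live-edit idea concrete (a hypothetical sketch - the names and parameters are mine, and I'm assuming a simple row-major page-to-frame layout rather than anything x264-specific), the point is that an edit only dirties the frames whose pages it overlaps, so only that short run of 'video' needs re-encoding:

```python
def dirty_frames(rect, page, pages_per_row):
    """Map an edited texel rectangle (x0, y0, x1, y1), inclusive, to
    the set of row-major frame indices whose pages it overlaps.
    Only these frames need recompressing after a local edit."""
    x0, y0, x1, y1 = rect
    frames = set()
    for py in range(y0 // page, y1 // page + 1):
        for px in range(x0 // page, x1 // page + 1):
            frames.add(py * pages_per_row + px)
    return frames

# Editing a 600x300 region near the origin of a surface with
# 512-texel pages and 512 pages per row touches just two pages:
# dirty_frames((0, 0, 599, 299), 512, 512) == {0, 1}
```

In practice you'd also have to respect the codec's GOP boundaries - re-encoding from the nearest keyframe before the first dirty frame - but the set of dirty pages is the starting point either way.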

I'm curious how this compares to the compression used in Id tech 5. It's also interesting to think how this scheme exposes the similarity in data requirements between a game with full unique surface texturing and a film. Assuming a perfect 1:1 pixel/texel ratio and continuous but smoothly changing scenery for the game, they become very similar indeed.

I look forward to seeing how the Id tech 5 stuff turns out. I imagine their terrains will look great. On the other hand, a lot of modern console games now have great looking terrain environments, but are probably using somewhat simpler techniques. (I say somewhat only because all the LOD, stitching, blending and so on issues encountered when using the 'standard' hodgepodge of techniques can get quite complex.)

5 comments:

repi said...

I'm also a big fan of procedural generated/composited virtual textures!

And I like the idea of generating & "video" compressing tiles instead of having to simply throw away precious generated procedural data (esp. when using more heavy duty generation) when the memory is needed for more important tiles.

Recompression & storing out tiles could allow you to apply heavy local dynamic changes to the virtual unique textures without having to regenerate that every time that tile is needed.


J.M.P. van Waveren from id will talk about some more details of their virtual texturing at the Beyond Programmable Shading course @ Siggraph next week, will indeed be very interesting.

Sam Martin said...

It's an interesting idea!

If this is feasible, it should work for large 2D textures as well though? You take a 2D texture (x,y), perhaps add a fake 3D dimension (z) with multiple copies of the same data then compress it end-on (x,z). So one axis (x in this case) is temporally compressed. You are quids in as long as you only access limited areas at a time.

I haven't thought about this long enough to get any impression of whether this is crazy or not. But I'd be worried that:
- At least in audio, temporal compression throws away information it assumes you won't miss. The same perceptual assumptions are unlikely to apply here. All sorts of craziness might result.
- There's nothing special about the x axis in this example, and alternative decompositions may better it.
- I have a twitchy feeling in my toe...

Sam

Jake Cannell said...

repi - Yes, I like procedural generation in the form of storing the edit history, which can always be 'flattened' to a disk chunk if need be.

Won't be at siggraph unfortunately but look forward to hearing more about id's VT stuff.

Sam - Yeah the whole idea is for 2D textures. Basically a video is 3D, but you can just tile a large 2D surface into a 3D volume, you don't have to duplicate any data. The extra compression over just using JPEG alone comes from the macroblock matching system.
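(To illustrate the no-duplication point with a toy sketch - the function name and the nested-list stand-in for texel data are made up for the example - each page of the 2D surface simply becomes one frame of the 3D stack, and every texel appears exactly once:)

```python
def pages_to_frames(surface, page):
    """Cut a 2D surface (a list of rows of texels) into page x page
    tiles and stack them as a list of 'frames'. No texel is copied
    into more than one frame - the 3D volume is just a reshaping."""
    h, w = len(surface), len(surface[0])
    frames = []
    for ty in range(0, h, page):
        for tx in range(0, w, page):
            frames.append([row[tx:tx + page]
                           for row in surface[ty:ty + page]])
    return frames

# A 4x4 surface cut into 2x2 pages yields 4 frames:
tiny = [[r * 4 + c for c in range(4)] for r in range(4)]
frames = pages_to_frames(tiny, 2)
# frames[0] == [[0, 1], [4, 5]]
```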

Sam Martin said...

Yeah - sorry, I read the post a bit quickly.

I've just boshed up a quick couple of apps that will compress a set of pics into a movie, and another that extracts them out again. I'm curious to see what the artefacts are like. Main problem is I don't have any decent data. Any suggestions on where I might get some?

I can share the apps up in a bit but they are really messy just at the moment...

Jake Cannell said...

I remember finding some very large height field and color data of Mars somewhere on the web back when I was working on planetary terrain. I don't remember the exact site, but something along the lines of Mars DEM data or Mars imaging. Of course, there's certainly plenty of huge Earth imagery out there.
