Jay Taylor's notes

John Carmack - In almost all of the public VR critiques I have...

Original source (www.facebook.com)
Tags: graphics virtual-reality vr john-carmack aliasing www.facebook.com
Clipped on: 2016-07-28

In almost all of the public VR critiques I have made, I comment about aliasing. Aliasing is the technical term for when a signal is undersampled, and it shows up in many ways across graphics (and audio), but in VR this is seen most characteristically as "shimmering" imagery when you move your head around, even slightly. This doesn't need to happen! When you do everything right, the world feels rock solid as you look around. The photos in the Oculus 360 Photos app are properly filtered -- everything should ideally feel that solid.

In an ideal world, this would be turned into a slick article with before-and-after looping animated GIFs of each point I am making, along with screenshots of which checkboxes you need to hit in Unity to do the various things. Maybe someone in devrel can find the time...

All together in one place now, here is the formula for avoiding aliasing, starting with the most important:

Generate mip maps for every texture being used, and set GL_LINEAR_MIPMAP_LINEAR filtering. This is easy, but missed by so many titles it makes me cringe.
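
As a minimal sketch of that setup in OpenGL ES terms (textureId here is just a placeholder for a texture whose level 0 has already been uploaded):

    // Assumes an OpenGL ES context and a texture with its base level already loaded.
    glBindTexture(GL_TEXTURE_2D, textureId);
    // Build the full mip chain from level 0.
    glGenerateMipmap(GL_TEXTURE_2D);
    // Trilinear filtering: blend within and between mip levels when minifying.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);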

If you dynamically generate textures, there is a little more excuse for it, but glGenerateMipmap() is not a crazy thing to do every frame if necessary.
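
For the dynamic case, a rough sketch might look like this, with dynamicTextureId, width, height, and pixels standing in for whatever the app actually updates each frame:

    // Upload this frame's new contents into level 0 of the dynamic texture.
    glBindTexture(GL_TEXTURE_2D, dynamicTextureId);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // Rebuild the mip chain so minified sampling stays properly filtered.
    glGenerateMipmap(GL_TEXTURE_2D);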

Use MSAA for rendering. Not using MSAA should be an immediate fail for any app submission, but I still see some apps with full jaggy edges. We currently default to 4x MSAA on Mali and 2x MSAA on Adreno because there is a modest performance cost going to 4x there. We may want to go 4x everywhere, and it should certainly be the first knob turned to improve quality on a Qualcomm S7, long before considering an increase in eye buffer resolution.
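
In Unity this is just the anti-aliasing level in the quality settings; at the GL level on these tiled GPUs, multisampled eye buffers are typically set up through the EXT_multisampled_render_to_texture extension. A sketch, assuming that extension is available and eyeColorTexture is a placeholder for the eye buffer texture:

    // Assumes GL_EXT_multisampled_render_to_texture is supported and
    // glFramebufferTexture2DMultisampleEXT was loaded via eglGetProcAddress.
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    // Attach the eye buffer with 4x MSAA, resolved automatically on tile writeback.
    glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                         GL_TEXTURE_2D, eyeColorTexture, 0, 4);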

In general, using MSAA means you can't use a deferred rendering engine. You don't want to for performance reasons, anyway. On PC, you can get by throwing lots of resources at it, but not on mobile.

Even 4x MSAA still leaves you with edges that have a bit of crawl, since it only gives you two bits of blending quality. For characters and the environment you just live with it, but for simple things like a floating UI panel, you can usually arrange to use alpha blending to get eight bits of blending quality, which is essentially perfect. Blending requires sorting, and ensuring a 0 alpha border around your images is a little harder than it first appears, since the size of a border gets halved with each mip level you descend. Depending on how shrunk down a piece of imagery may get, you may want to leave an eight or more pixel cleared alpha border around it. To avoid unexpected fringe colors around the graphics, make sure that the colors stay consistent all the way to the edge, even where the alpha channel is 0. It is very common to see blended graphics with a dark fringe around them because the texture went to 0 0 0 0 right at the outline, rather than just cutting it out in the alpha channel. Adding an alpha border also fixes the common problem of not getting CLAMP_TO_EDGE set properly on UI tiles, since if it fades to invisible at the edges, it doesn't matter if you are wrapping half a texel from the other side.
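
As an illustrative sketch (uiPanelTexture is a placeholder), the GL state for such a UI panel might look like this, assuming the art has the cleared-alpha border and consistent edge colors described above:

    glBindTexture(GL_TEXTURE_2D, uiPanelTexture);
    // Clamp so nothing wraps in from the opposite edge of the texture.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // Standard alpha blending: eight bits of edge blending instead of MSAA's one or two.
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // Draw the panel after opaque geometry, sorted with any other blended surfaces.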

Don't use techniques like alpha testing or anything with a discard in the fragment shader unless you are absolutely sure that the contribution to the frame buffer has reached zero before the pixels are discarded. The best option for the standard cutout cases is to use blending, but if the sorting isn't feasible, alpha-to-coverage (GL_SAMPLE_ALPHA_TO_COVERAGE) is a middle ground.
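
On the GL side that is a single piece of state; note it only does anything when rendering into a multisampled buffer:

    // Fragment alpha is converted into a sample coverage mask instead of a hard discard,
    // so cutout edges get the MSAA blend rather than a 1-bit step.
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);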

Avoid geometry that can alias. The classic case is things like power lines -- you can model a nice drooping catenary of thin little tubes or crossed ribbons, but as it recedes into the distance it will rapidly turn into a scattered, aliasing mess of pixels along the line with only the 1 or 2 bits of blending you get from MSAA. There are techniques for turning such things into blends, but the easiest thing to do is just avoid it in the design phase. Anything very thin should be considered carefully.

Don't try to do accurate dynamic shadows on GearVR. Dynamic shadows are rarely aliasing-free and high quality even in AAA PC titles; cutting the resolution by a factor of 16 and using a single sample so it runs reasonably well on GearVR makes it hopeless. Go back to Quake 3 style and just blend a blurry blob underneath moving things, unless you really, really know what you are doing.

Don't use specular highlights on bump maps. This is hard to get really right even with significant resources. Specular at the geometry level (still calculated in the fragment shader, not the vertex level!) is usually ok, and also a powerful stereoscopic cue. This applies to any graphics calculation done in shaders beyond just sampling a texture -- if it has any frequency component greater than the pixel rate (a clamp is infinite frequency!), there will be aliasing. Think very carefully before doing anything clever in a shader.

Use gamma correct rendering. This is usually talked about in game dev circles as part of "physically based rendering", and it does matter for correct lighting calculations, but it isn't appreciated as widely that it matters for texture sampling and MSAA even when no lighting at all is done. For photos and other continuous tone images this barely matters at all, but for high contrast line art it is still important. The most critical high contrast line art is text. It is hard to ease into this: you need to convert your window, all textures, and any frame buffer objects you use to the correct sRGB formats. Some formats, like 4444, don't have an equivalent sRGB version, so it may involve going to a full 32 bit format.
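
On OpenGL ES 3.0 the texture half of this is just the internal format; a sketch, assuming 32 bit RGBA source data (the window and any FBO color attachments need matching sRGB formats through whatever path the app creates them):

    // Allocate the texture with an sRGB internal format so filtering, blending,
    // and MSAA resolves operate in linear light rather than on gamma-encoded values.
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);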

Avoid unshared vertexes. A model edge that abruptly changes between two surfaces has the same aliasing characteristics as a silhouette at the edge of a model; you only get the MSAA bits of blending. If textures are wrapped completely around a model, and all the vertex normals and other attributes are shared, you only get the edge effects along the "pelting lines" where you have no choice but to have unmatched texture coordinates. It costs geometry, so it isn't always advisable, but even seemingly hard edged things like a cube will look better if you have a small bevel with shared vertexes crossing all the edges, rather than a knife-sharp 90 degree angle. If a surface is using baked lighting, the geometry doesn't need normals, and you are fine wrapping around hard angles as long as the texture coordinates are shared.

Trilinear filtering on textures is not what you ideally want -- it blends a too-aliased sample together with a too-blurry sample to try to limit the negatives of each. High contrast textures, like, say, the floor grates in Doom 3, or the white shutters in Tuscany, can still visibly alias even with trilinear. Reducing the contrast in the image by "prefiltering" it can fix the problem, or you can programmatically add an LOD bias.
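
As a hedged sketch of the LOD bias option: desktop GL can bias a texture directly, while OpenGL ES has no equivalent texture parameter, so there the bias argument of the texture sampling functions in the fragment shader serves the same purpose.

    // Desktop GL only: push mip selection toward blurrier levels for this texture.
    // highContrastTexture and the 0.5 bias are placeholder values.
    glBindTexture(GL_TEXTURE_2D, highContrastTexture);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, 0.5f);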

Once aliasing is under control, you can start looking at optimizing quality.