Tech. Transparency
How are we going to make transparency work?
What we'd like is for the different objects in a scene to be rendered together "as expected": lines/points/meshes drawn inside a volume, two volumes that (partially) overlap, and some objects possibly being translucent. This requires proper handling of transparency. And transparency is hard!
We also want anti-aliasing, because our graphics must look great. This is related, because if you anti-alias an object, its edges become semi-transparent.
To handle transparency perfectly, you either need very modern hardware, or an approach that requires many render passes (e.g. depth peeling). Approximate methods may be good enough, and still be very fast.
My take so far is to use a variant of the weighted average method, and to keep depth peeling in the back of our minds as an optional method for the future.
Just set glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and hope for the best :) Depth testing must be turned off, or you'd only see the top objects.
The way that objects are blended will depend on the order in which they're drawn and on the viewpoint. There will be errors in blending translucent objects with each other, but also in blending translucent objects with opaque objects (due to the lack of a depth test).
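A minimal sketch of this naive setup in OpenGL 2.x terms (draw_scene() is just a placeholder for whatever the visuals draw):

```c
#include <GL/gl.h>

void draw_scene(void);  /* placeholder: draws all visuals */

/* Naive alpha blending: enable blending, disable depth testing,
 * and draw everything in whatever order it comes in. */
void draw_naive(void)
{
    glDisable(GL_DEPTH_TEST);  /* otherwise nearer objects would mask later draws */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    draw_scene();
}
```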
As in the above, but with two passes. First the entire scene is rendered with the depth test on, writing to the depth buffer enabled, and with an alpha test that only lets opaque fragments through.
Next, the scene is rendered again, with an alpha test that discards the opaque fragments. The depth test is on, but writing to the depth buffer is disabled.
This will at least blend the opaque objects correctly. But blending of transparent objects will still be incorrect.
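A sketch of the two passes, assuming the fixed-function alpha test is used to separate opaque (alpha >= 1) fragments from translucent ones; draw_scene() is again a placeholder:

```c
#include <GL/gl.h>

void draw_scene(void);  /* placeholder: draws all visuals */

void draw_two_pass_naive(void)
{
    /* Pass 1: opaque fragments only, with normal depth testing and writing. */
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GEQUAL, 1.0f);   /* only let opaque fragments through */
    draw_scene();

    /* Pass 2: translucent fragments; depth test still on, but no depth writes. */
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glAlphaFunc(GL_LESS, 1.0f);     /* this time discard the opaque fragments */
    draw_scene();

    glDepthMask(GL_TRUE);
    glDisable(GL_ALPHA_TEST);
}
```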
Sorting the objects by depth and drawing them back to front will help, but will still cause incorrect results if two objects overlap, as well as for overlapping parts within a single object. So a complex line that self-overlaps will already cause problems.
This is what used to be done a lot, but is less used now that we have shaders.
In this approach the triangles are sorted before rendering. This needs to be done each time that the viewpoint changes, and can therefore be a performance killer.
Even if the triangles are all correctly sorted by depth, there can still be artifacts for fragments in overlapping triangles. There are two examples of that here. In that case you can split triangles, but I'd rather not go there...
I'm pretty confident that e.g. depth peeling is faster than sorting triangles on the CPU.
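For illustration, a sketch of what that CPU-side sort looks like (the Triangle struct and the back-to-front ordering are assumptions, not existing code):

```c
#include <stdlib.h>

/* Hypothetical triangle record: three vertex indices plus an eye-space
 * centroid depth that must be recomputed whenever the viewpoint changes. */
typedef struct { unsigned idx[3]; float eye_depth; } Triangle;

static int cmp_back_to_front(const void *a, const void *b)
{
    float da = ((const Triangle *)a)->eye_depth;
    float db = ((const Triangle *)b)->eye_depth;
    return (da < db) - (da > db);   /* largest depth (farthest) first */
}

void sort_triangles(Triangle *tris, size_t n)
{
    /* O(n log n) on the CPU for every camera change: this is the
     * performance killer mentioned above. */
    qsort(tris, n, sizeof(Triangle), cmp_back_to_front);
}
```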
There are several more advanced methods that use some form of an alpha-buffer or A-buffer. This means that for each pixel, all fragments are stored in a list; this list is sorted and the fragments are blended in a final pass. This can be done on DX11 hardware, but not on OpenGL 2.x.
Below are only methods that can be achieved on OpenGL 2.x (+ FBOs in some cases).
See e.g. DualDepthPeeling.pdf and OIT presentation
Depth peeling means that you render the scene multiple times, and in each pass you remember the depth. In the next pass the whole scene is rendered again, but the fragments at or in front of the previously stored depth are discarded (effectively a second depth test, against the stored depth, that only passes fragments farther away). In effect you are peeling the depth layers off one by one (or two by two for the dual version).
It requires relatively modest hardware, but the multiple passes are not so nice for performance. For volume rendering, you'd want a mechanism to render a volume until you've reached the right depth, and proceed from that depth in the next pass.
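A rough sketch of the peeling loop, assuming FBOs (via the framebuffer-object extension) with ping-ponged depth textures and a peel shader that discards fragments at or in front of the previously stored depth; all names here are illustrative:

```c
#include <GL/gl.h>

extern unsigned int fbo[2], depth_tex[2];   /* assumed to be set up elsewhere */
void draw_scene(void);                      /* placeholder: draws all visuals */
void composite_layer(unsigned int fbo_id);  /* placeholder: blends the layer under the result */

void depth_peel(int num_layers)
{
    for (int pass = 0; pass < num_layers; ++pass) {
        int cur = pass % 2, prev = 1 - cur;

        glBindFramebuffer(GL_FRAMEBUFFER, fbo[cur]);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glEnable(GL_DEPTH_TEST);   /* the normal test keeps the nearest of what is left */

        /* The peel shader samples the previous pass' depth texture and
         * discards anything not strictly behind it, e.g.:
         *   if (gl_FragCoord.z <= texture2D(prev_depth, uv).r) discard;   */
        glBindTexture(GL_TEXTURE_2D, depth_tex[prev]);
        draw_scene();

        composite_layer(fbo[cur]);  /* accumulate the peeled layer front-to-back */
    }
}
```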
See DualDepthPeeling.pdf and OIT presentation
Proposed by two guys from Nvidia, Louis Bavoil and Kevin Myers. It is an improvement of the weighted sum method introduced by Meshkin in 2007.
The weighted average approach works by combining the color+alpha contributions in such a way that their order does not matter. It can be done in one pass plus a post-processing pass and does not require anything special; FBOs will help for speed, but are not strictly necessary.
The downside is that it is an approximation: one that works perfectly for fragments that have the same color, and works well for fragments that are relatively transparent. However, in the case of two near-opaque objects you'd see errors: regardless of which object is in front, the combined color would be the same.
Although there may be errors in the appearance of overlapping translucent objects, the error is consistent. With the two-pass naive method, for instance, you'd either have a correct result, or a very wrong result.
Caveat: you should first draw opaque geometry, or apply a trick to otherwise separate opaque and translucent fragments (see below).
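To make this more concrete, here is a sketch of the accumulation state and resolve shader, following the weighted average formulation from the Bavoil & Myers paper; the texture and uniform names are made up, and the fragment count n is assumed to be accumulated into a second target (or in a separate pass):

```c
#include <GL/gl.h>

void draw_translucent_scene(void);  /* placeholder: draws the translucent visuals */

/* Accumulation pass: additive blending, so fragment order does not matter.
 * Each translucent fragment outputs (r*a, g*a, b*a, a); the count n is
 * accumulated separately. */
void accumulate_translucent(void)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glDepthMask(GL_FALSE);
    draw_translucent_scene();
}

/* Resolve pass, drawn as a full-screen quad with this fragment shader: */
static const char *resolve_frag =
    "uniform sampler2D accum;   /* sum of (rgb*a, a) */                \n"
    "uniform sampler2D count;   /* number of fragments n */            \n"
    "uniform sampler2D opaque;  /* color of the opaque scene behind */ \n"
    "varying vec2 uv;                                                  \n"
    "void main() {                                                     \n"
    "    vec4 acc = texture2D(accum, uv);                              \n"
    "    float n = max(texture2D(count, uv).r, 1.0);                   \n"
    "    vec3 avg_color = acc.rgb / max(acc.a, 1e-5);                  \n"
    "    float avg_alpha = acc.a / n;                                  \n"
    "    float t = pow(1.0 - avg_alpha, n);  /* estimated transmittance */\n"
    "    vec3 bg = texture2D(opaque, uv).rgb;                          \n"
    "    gl_FragColor = vec4(avg_color * (1.0 - t) + bg * t, 1.0);     \n"
    "}                                                                 \n";
```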
Separating opaque and translucent fragments (two passes + one screen-quad pass)
The weighted average technique requires drawing the opaque geometry first and then the translucent geometry. Not doing so would cause severe artifacts in the weighted average method, since you cannot do depth testing then, and thus opaque objects would simply blend (a solid red + a solid blue becomes purple-ish).
The bad news is that objects can be partly opaque and partly translucent; anti-aliased lines are one example, rendered volumes another. A solution is to separate the opaque fragments from the translucent ones in the shader.
One way to do this is to apply the same trick as the two-pass naive method, but in the second pass apply the weighted average method.
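A sketch of what such per-fragment separation could look like in a visual's fragment shader; the opaque_pass uniform and compute_color() are illustrative names, not existing code:

```c
/* GLSL fragment shader snippet, stored as a C string. */
static const char *separate_frag =
    "uniform bool opaque_pass;   /* which of the two geometry passes this is */\n"
    "vec4 compute_color();       /* whatever the visual normally computes */   \n"
    "void main() {                                                             \n"
    "    vec4 color = compute_color();                                         \n"
    "    bool is_opaque = (color.a >= 1.0);                                    \n"
    "    if (opaque_pass != is_opaque) discard;  /* wrong pass for this fragment */\n"
    "    /* Opaque pass: plain write.  Translucent pass: output the            \n"
    "       weighted-average accumulation terms (rgb*a, a) instead. */         \n"
    "    gl_FragColor = opaque_pass ? color                                    \n"
    "                               : vec4(color.rgb * color.a, color.a);      \n"
    "}                                                                         \n";
```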
Similar to the previous method, this method also does an extra pass, but that pass can be done more efficiently, and can be combined with making a picking texture (if necessary).
In the actual render pass, you use two render targets: one for the opaque fragments and one for the translucent fragments. In the first pass the depth buffer is filled with the depth of the opaque objects; otherwise translucent fragments would be added even if they are behind an opaque object. In the final pass (the screen-quad pass that weighted average also does), you can then blend the opaque objects correctly with the translucent ones.
Aside from better performance than the previous method, this method allows a shader to produce both a translucent and an opaque fragment. E.g. a volume renderer can draw its opaque fragment, but also any translucent stuff in front of it. In this way, you can draw stuff inside volumes too, or have volumes that partially overlap.
A disadvantage is that each visual needs code in its fragment shader to put the fragment in the right render target. If the first render target is used for opaque fragments, this is not a problem for opaque geometry.
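A sketch of how a fragment shader could route its output to the two targets (this needs the draw-buffers extension on OpenGL 2.x); it assumes additive blending is active, so that writing vec4(0.0) to the other target is a no-op, and the names are again illustrative:

```c
/* GLSL fragment shader snippet writing to two render targets. */
static const char *mrt_frag =
    "vec4 compute_color();   /* whatever the visual normally computes */       \n"
    "void main() {                                                             \n"
    "    vec4 color = compute_color();                                         \n"
    "    if (color.a >= 1.0) {                                                 \n"
    "        gl_FragData[0] = color;       /* opaque target */                 \n"
    "        gl_FragData[1] = vec4(0.0);   /* no-op under additive blending */ \n"
    "    } else {                                                              \n"
    "        gl_FragData[0] = vec4(0.0);                                       \n"
    "        gl_FragData[1] = vec4(color.rgb * color.a, color.a);  /* accumulation terms */\n"
    "    }                                                                     \n"
    "}                                                                         \n";
```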
[https://dl.dropbox.com/u/1463853/images/transparency_01.png]
See also Christoph Kubisch's GTC 2014 talk, "Order Independent Transparency In OpenGL 4.x".