A classic rendering problem in real-time 3D graphics is overlapping transparent surfaces. Well, I recently wrote a shader for Unity that renders alpha transparency with correct depth sorting. My solution isn’t unique, but I don’t see a lot of people talking about it, so hopefully this will help people out.
The depth sorting in this shader still only works well for hard-edged cutouts, but you can mix cutouts with smooth semi-transparency in the alpha channel, and any rendering glitches will be restricted to only the semi-transparent parts. This is a huge improvement over having those rendering glitches apply to the entire model, and hopefully my explanations give you a full appreciation for the tradeoffs being made.
This is a problem I’ve been dealing with for so long that I forget to even think about it until reminded. It came up recently in a thread on Unity’s forum. That particular thread is about siccity’s plugin for loading glTF models, and one user was complaining that a model looked different on Sketchfab than in Unity. Specifically, here is the model on Sketchfab:
But in Unity it was looking like this:
Let me explain what’s going on here…
In order to render fast enough for real time, the normal rendering approach assumes that only the nearest surface is visible. Sorting all the polygons in a scene would take way too long (and polygons wouldn’t even be granular enough; you’d really need to sort every pixel), so graphics cards use a Z-buffer to store the depth values that have already been rendered, and the rendering algorithm only draws new pixels when they are closer than the depth value in the Z-buffer.
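In pseudocode (this is a sketch of the logic, not any real API), the per-pixel test the GPU performs looks like this:

```
// Z-buffer test for each candidate pixel: only draw when the new
// fragment is nearer than anything already drawn at that pixel.
if (newDepth < zbuffer[pixel])
{
    framebuffer[pixel] = newColor; // nearest so far: draw it
    zbuffer[pixel]     = newDepth; // and remember its depth
}
```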
This assumption holds for opaque surfaces, since you can’t see anything further away through an opaque surface, but when surfaces have transparency you are able to see through them to things that are further away. The way game engines typically handle this is to simply not write transparent surfaces into the Z-buffer. That way, the further away things still get rendered, and then the transparent surface gets drawn over them. This does require that transparent polygons be rendered later than opaque ones, but that’s easily handled by adding a queue for transparent surfaces, with the rendering pipeline processing different queues at different times.
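In Unity’s ShaderLab, that standard approach boils down to a couple of render-state lines. Here’s a minimal sketch of a typical transparent material (not the shader from this post):

```
SubShader
{
    // Render after all opaque geometry has filled the Z-buffer
    Tags { "Queue"="Transparent" "RenderType"="Transparent" }

    Pass
    {
        ZWrite Off                      // don't write depth for transparent surfaces
        Blend SrcAlpha OneMinusSrcAlpha // standard alpha blending over what's behind
        // ... usual vertex/fragment program here ...
    }
}
```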
The trouble is, with multiple transparent surfaces the renderer can no longer tell which is nearest. Typically game developers get around that problem by simply designing their assets so that multiple transparent surfaces don’t stack up. Players don’t notice anything odd, since you don’t typically have lots of transparent objects stacked up in real life. I mean, how often do you see a stack of glass panes sitting around?
Unfortunately, the model in question does have a lot of transparent polygons on top of each other, resulting in the glitchy look of that screenshot. And note that this problem isn’t unique to Unity; downloading the glTF from Sketchfab and loading it in this viewer also results in a lot of rendering glitches. So why does this model look so much nicer on Sketchfab?
Well, they are writing to the Z-buffer. Given what I explained earlier, they obviously can’t simply write to the Z-buffer in the standard way. Nevertheless, they must be writing to the Z-buffer in order to have correct depth sorting. With that insight in mind, I whipped up a shader that handles both alpha transparency and Z-buffering in the way I believe Sketchfab’s shader operates.
Furthermore, I wrote both lit and unlit versions of my shader, since the model is unlit on Sketchfab, but doing this with lighting seems like it would be useful in more instances. Here are screenshots from Unity showing the Surface shader on the left and the unlit shader on the right:
For the actual shader code, either go to the forum link I posted way at the top or follow these two gists. If you just want to use these shaders without necessarily understanding exactly what they do, then just copy/paste that and you’re good to go. For anyone else, keep reading…
Most importantly, note that there are two rendering passes, one with ZWrite on and one with ZWrite off. The first pass simply draws to the depth buffer and (thanks to the ColorMask) doesn’t actually draw anything visible. The second pass still tests against the Z-buffer, even though it isn’t writing to the Z-buffer, and because of the first pass there will be depth values for this model. As pointed out on Sketchfab’s blog about how their transparency settings work (the description I reverse engineered this shader from), “Blending mode is slow”, and that slowness is a clue to the additional rendering pass.
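Stripped of the actual vertex/fragment programs (those are in the gists), the two-pass structure looks roughly like this:

```
SubShader
{
    Tags { "Queue"="Transparent" "RenderType"="Transparent" }

    // Pass 1: depth only. Writes the Z-buffer but draws no visible color.
    Pass
    {
        ZWrite On
        ColorMask 0 // suppress all color output from this pass
        Cull Off
        // ... fragment program that clips non-opaque texels ...
    }

    // Pass 2: the visible render. Tests against the depth laid down by pass 1.
    Pass
    {
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha
        Cull Off
        // ... usual textured vertex/fragment program ...
    }
}
```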
Now notice the clip() call in the first pass. It means “discard this fragment if the alpha is less than 1”, so only fully opaque pixels will write to the Z-buffer. You still incur the cost of a texture lookup here, but this improves the visuals immensely by keeping transparent parts out of the Z-buffer. If transparent parts were in the Z-buffer, the final rendering would have unsightly holes in the model. (Note for porting this shader to GLSL/OpenGL: the clip() function is HLSL/CG, but it’s pretty much equivalent to the discard keyword.)
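Here’s a sketch of what that depth-only fragment program can look like in CG/HLSL (the texture and struct names are illustrative, not necessarily those in the gists):

```
// clip(x) discards the fragment when x < 0, so with a threshold just
// below 1, only (essentially) fully opaque texels reach the Z-buffer.
fixed4 frag (v2f i) : SV_Target
{
    fixed alpha = tex2D(_MainTex, i.uv).a; // the texture lookup cost noted above
    clip(alpha - 0.999);                   // discard anything not fully opaque
    return 0;                              // color is thrown away by ColorMask 0 anyway
}
```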
Anyway, after that trickery in the first pass, the second pass sets up alpha transparency with the Blend command. If you’re wondering why we need to write depth values and color values in separate passes, as opposed to a single pass that renders both, don’t forget that semi-transparency requires rendering on top of previously drawn polygons. We’re only clipping non-opaque fragments in the first pass, not the second pass, so color still gets drawn in the second pass for the semi-transparent parts. Oh, and both passes need “Cull Off” since this model has a lot of polygons that need to render double sided (it’s not a great idea to model in a way that prohibits backface culling, but hey, this is the model we’ve got).
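For reference, Blend SrcAlpha OneMinusSrcAlpha tells the hardware blend stage to combine the incoming fragment (src) with whatever is already in the framebuffer (dst) like so:

```
// Per-channel blend performed by the GPU for each drawn pixel:
final.rgb = src.a * src.rgb + (1 - src.a) * dst.rgb;
```

This is why draw order matters for semi-transparent parts: the result depends on what was in the framebuffer when the fragment was blended, which is exactly the glitch that remains confined to the semi-transparent regions with this shader.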