Hello, I’ve adapted the Alchemy/SAO ambient occlusion algorithm by Morgan McGuire.
It has some faults, especially the blur shaders, but it could be a starting point.
You can find it here:
There is a simple AS script (Alchemy.as) to test it. It uses the deferred render path; the occlusion shader needs the depth and normal buffers, but the normals can also be reconstructed in real time in the shader.
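For the real-time normals, one common reconstruction (a sketch of the general technique, not necessarily what the repo does) derives them from the reconstructed eye-space position using screen-space derivatives:

```glsl
// Sketch: reconstruct the normal from the eye-space position
// (itself reconstructed from the depth buffer) using derivatives.
// Gives flat-shaded normals on curved surfaces, but avoids
// needing the normal buffer at all.
vec3 ReconstructNormal(vec3 eyePos)
{
    return normalize(cross(dFdx(eyePos), dFdy(eyePos)));
}
```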
It is based on McGuire’s demo code (BSD licensed) for SAO, but many of the SAO improvements are not implemented, so it is closer to the Alchemy algorithm than to SAO. I’ve written some Gaussian blur filters: BlurGaussian and BlurGaussianDepth. They tend to make the occlusion too strong in the distance, so I’ve added a fade-out effect.
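The fade-out is just a linear attenuation of the occlusion with eye-space depth; a minimal sketch (the uniform names cFadeStart and cFadeEnd are illustrative, not the actual names used in the repo):

```glsl
// Sketch of a distance fade-out for the occlusion term.
// cFadeStart / cFadeEnd are hypothetical uniforms: the depths where
// the fade begins and where occlusion is fully gone.
uniform float cFadeStart;
uniform float cFadeEnd;

float FadeOcclusion(float occlusion, float eyeDepth)
{
    float fade = clamp((cFadeEnd - eyeDepth) / (cFadeEnd - cFadeStart), 0.0, 1.0);
    // fade = 1 near the camera (full occlusion), 0 far away (no occlusion)
    return mix(1.0, occlusion, fade);
}
```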
The parameters are strongly coupled and difficult to adjust, which is exactly the opposite of the original author’s results, so there are probably some bugs around. These parameters are: radius, intensity, projscale and bias. Projscale is used to adjust the radius scaling with depth; this was not present in the paper, so I think this is the buggy part. Bias is used to prevent self-occlusion due to limited depth precision (e.g. a flat surface should stay white).
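For reference, here is roughly where each parameter enters the Alchemy estimator (one common form of it; the variable names are mine, not the repo’s):

```glsl
// Sketch of one Alchemy AO sample, showing where each parameter acts.
// P = pixel position (eye space), N = pixel normal, sampleP = sampled position.
// radius:    world-space sampling radius
// projscale: converts the world-space radius to a screen-space radius,
//            i.e. screenRadius ~= projscale * radius / P.z
// bias:      rejects near-coplanar samples so flat surfaces stay white
// intensity: overall strength, applied when averaging the samples
float SampleAO(vec3 P, vec3 N, vec3 sampleP, float bias, float epsilon)
{
    vec3 V = sampleP - P;
    float vv = dot(V, V);
    float vn = dot(V, N);
    // One term of the Alchemy sum; the shader averages this over all
    // samples and darkens the result by intensity.
    return max(0.0, vn - bias) / (vv + epsilon);
}
```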
The parameters can be changed with the keyboard (R, F, T, G, Y, H, U, J); you can disable the occlusion effect (V), enable a Gaussian blur (B), enable a depth-aware Gaussian blur (N), and display the occlusion buffer (M).
In the script, set “bool OpenGL” to false if you use DirectX (I couldn’t find a good flag for this).
In this example the intensity is too high (it should be a very subtle effect), but more importantly the occlusion buffer is blended with the final viewport (the SAO_copy shader). This is not correct: AO should only affect the ambient light. A simple way to do this is to move the “quad SAO_copy” command before the “lightvolumes” command (and enable shadows to appreciate the difference). The “lightvolumes” command itself is not a good place to do the blend because it uses the add/subtract blend mode.
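Sketched on the deferred renderpath XML, the reordering looks like this (the command attributes are abbreviated and illustrative, not the exact file from the repo):

```xml
<!-- Apply the AO buffer right after the G-buffer/ambient pass and
     BEFORE light accumulation, so it only darkens the ambient term.
     Attributes abbreviated; shader names as in the repo, rest illustrative. -->
<command type="quad" vs="SAO_copy" ps="SAO_copy" output="viewport" />
<command type="lightvolumes" vs="DeferredLight" ps="DeferredLight" />
```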
When you use a big radius, the occlusion is not correct near the screen borders because there is no depth information to sample outside the screen. To reduce this flaw you can use a G-buffer bigger than the viewport (for example “”), compute the occlusion on it, and then use a shader to offset and center the result on the screen. This guard band is used only for reading the depth buffer; to avoid computing the occlusion on it you can use Graphics::SetScissorTest in View::RenderQuad. The same could be done for the “lightvolumes” command in View::SetupLightVolumeBatch. You have to modify the engine, but it should not be hard.
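The offset-and-center step is just a UV remap when reading the oversized buffer back into the viewport; roughly like this (cGuardBand is a hypothetical uniform, not a name from the repo):

```glsl
// Sketch: read an occlusion buffer larger than the viewport.
// cGuardBand is a hypothetical uniform: the guard border size as a
// fraction of the big buffer (e.g. 0.1 if the buffer is 20% wider).
uniform sampler2D sOcclusion;
uniform float cGuardBand;

vec4 ReadOcclusion(vec2 screenUV)
{
    // Map [0,1] screen UVs into the centered sub-rectangle of the big buffer.
    vec2 bigUV = screenUV * (1.0 - 2.0 * cGuardBand) + vec2(cGuardBand);
    return texture2D(sOcclusion, bigUV);
}
```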
An improvement of SAO is the use of a depth buffer with mipmaps. To do this you create a texture for the depth buffer with N mip levels, then create N framebuffers and attach a different depth mip level to each of them on COLOR0. Using the first framebuffer you render the depth buffer, which is written to level 0 of the depth texture; then, using the other framebuffers and a special shader, you build mip level N by reading level N-1.
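That level-building shader is essentially a downsample that reads level N-1 while the framebuffer writes level N; a minimal sketch (cPrevLevel is an illustrative uniform name; the rotated-grid offset is the one from McGuire’s SAO demo, but a plain 2x fetch also works):

```glsl
#extension GL_EXT_gpu_shader4 : require
// Sketch of the mip-building pass: the framebuffer has level N of the
// depth texture attached on COLOR0, and we read level N-1 of the same texture.
uniform sampler2D sDepth;
uniform int cPrevLevel; // N-1; hypothetical uniform name

void main()
{
    ivec2 p = ivec2(gl_FragCoord.xy);
    // Rotated-grid offset from McGuire's SAO demo to decorrelate samples.
    ivec2 src = p * 2 + ivec2(p.y & 1, p.x & 1);
    gl_FragColor = vec4(texelFetch2D(sDepth, src, cPrevLevel).r);
}
```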
I still haven’t figured out how to do this in Urho (RenderTargets don’t have mipmaps), and reading specific mipmap levels in HLSL is tricky/impossible with DirectX 9 (in GLSL it is possible by enabling the extension “GL_EXT_gpu_shader4”).
Any comments are really welcome.