CHAPTER :: 04 / 05 LOGGED :: 2026 · APR · 04

Where Corners Stay Dark

Where the small darknesses inside corners — the ones IBL flattens out — are added back, by sampling the screen-space neighborhood of every pixel.

The IBL renderer at the end of the previous chapter produces a beautiful image until you start looking at corners. The marble bust on the table sits on the table only loosely — the contact point between its base and the wood reads at the same brightness as the rest of the bust. The slatted wooden cabinet has thirty parallel slats; each one is fully lit by the sky, including the recesses between the slats that should be receiving very little light.

The missing physical effect is accessibility: a fragment in the open hemisphere receives ambient light from every direction, but a fragment inside a cavity has half its hemisphere blocked by surrounding geometry. The path tracer answered this by tracing many shadow rays and integrating; the offline IBL solution would be a Monte Carlo bake. In real time, the trick is to ask the same question in screen space — using the G-buffer that’s already there — and trade some accuracy for tractability.

This chapter implements Alchemy SSAO (McGuire et al., 2011): for each fragment, distribute samples in a small disc around it on the screen, lift each one back to world space via the position buffer, and accumulate how much of the surrounding hemisphere is actually above the surface. The result is a per-pixel ambient occlusion mask, denoised with a bilateral filter, multiplied onto the IBL term.

The geometry of accessibility

The IBL diffuse evaluates irradiance(N) = ∫ Li(ω) · cos(θ) · V(ω) dω over the upper hemisphere — but it bakes V(ω) = 1 into the SH coefficients. Every direction is assumed unblocked. The actual physics is that nearby geometry blocks some directions. Ambient occlusion is a scalar approximation of the fraction of the hemisphere that’s actually open: 1 for a fully exposed surface, lower values for cavities.

A correct AO computation would integrate V(ω) over the hemisphere by tracing rays. The screen-space approximation makes a much weaker claim: it assumes the G-buffer’s depth and normal contain enough information to estimate occlusion from a small neighborhood, and computes that estimate with a fixed handful of samples per pixel.
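That reference computation is worth pinning down, since everything that follows approximates it. A minimal Python sketch of the Monte Carlo hemisphere integral, with an abstract `visible(direction)` predicate standing in for the shadow-ray trace (the predicate, sample count, and seeding here are illustrative assumptions, not renderer code):

```python
import math, random

def reference_ao(visible, n=256, seed=0):
    """Monte Carlo estimate of ambient accessibility at one surface point.

    `visible(d)` is a stand-in for a shadow-ray trace: True when nothing
    blocks light arriving from hemisphere direction d. Cosine-weighted
    sampling makes each unblocked sample count equally, so the estimate
    is simply the visible fraction of samples.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Cosine-weighted direction in the local frame (normal = +z)
        u1, u2 = rng.random(), rng.random()
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        d = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
        hits += visible(d)
    return hits / n

# A fragment at the base of an infinite wall at x = 0: every direction
# leaning into the wall (x < 0) is blocked, so accessibility is ~0.5.
ao = reference_ao(lambda d: d[0] >= 0.0)
```

SSAO replaces the ray trace inside this loop with G-buffer lookups; the quantity being estimated is the same.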

The approximation has known failure modes. Geometry hidden from the camera doesn’t exist in the G-buffer — so it can’t occlude. Distant geometry that would legitimately block light can show up in the buffer, but if its depth puts it outside the sampling radius, it’s discarded. The result is faithful to nearby occlusion and silent about everything else. For ambient lighting that’s a reasonable trade — the eye notices missing contact darkening immediately, and tolerates approximate large-scale ambient.

CLICK TO ZOOM
| The full scene with SSAO toggled on and off — every place where two surfaces meet gains visible contact darkening |

Sampling the screen-space hemisphere

The Alchemy SSAO approach distributes n samples around each fragment in a logarithmic spiral. For sample index i ∈ [0, n):

// Stratified position [0, 1] for sample i
float alpha = (float(i) + 0.5) / float(n);

// Screen-space radius scales inversely with depth — keeps the world-space
// search region a constant size R regardless of how far the fragment is
float h = alpha * R / d;

// Spiral angle — 7n/9 multiplier distributes samples around the disc evenly
// and avoids regular banding. phi is a per-pixel hash for inter-pixel offset.
float theta = 2.0 * PI * alpha * (7.0 * float(n) / 9.0) + phi;

vec2 samplePos = fragCoord + vec2(cos(theta), sin(theta)) * h;

A few pieces deserve attention. The radius R is a world-space constant: the same search region around every fragment. Dividing by the depth d converts that to screen-space pixels: a fragment far from the camera covers a smaller region of the screen, so its sample disc shrinks accordingly. A surface near the camera and a surface far from it both query the same world-space neighborhood, even though their sample discs cover very different numbers of screen pixels.

The angle multiplier 7n/9 is chosen to spread samples evenly around the disc — uniform in direction, stratified in radius. The per-pixel hash phi ensures neighboring fragments use different starting angles, breaking up the spiral pattern across the screen. Without phi, every fragment samples the same offsets, and the result has visible spiral artifacts.
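Both properties, the stratified radii and the depth-scaled disc, are easy to check by mirroring the shader math on the CPU. A hedged Python sketch (the parameter values are illustrative, not the renderer's):

```python
import math

def spiral_offsets(n, R, depth, phi):
    """CPU mirror of the shader's spiral: the n screen-space (dx, dy)
    sample offsets for a fragment at the given depth."""
    turns = 7.0 * n / 9.0           # same multiplier as the shader
    offsets = []
    for i in range(n):
        alpha = (i + 0.5) / n       # stratified position in [0, 1]
        h = alpha * R / depth       # world-space radius -> screen units
        theta = 2.0 * math.pi * alpha * turns + phi
        offsets.append((h * math.cos(theta), h * math.sin(theta)))
    return offsets

near = spiral_offsets(16, R=0.5, depth=1.0, phi=0.0)
far  = spiral_offsets(16, R=0.5, depth=4.0, phi=0.0)
# The far fragment's disc is 4x smaller on screen: same world-space
# neighborhood, fewer pixels.
```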

For each sample location, the shader looks up the world position from gPosition and computes the vector from the current fragment to the sample:

vec3 P     = texture(gPosition, TexCoords).xyz;     // current fragment world position
vec3 N     = normalize(texture(gNormal, TexCoords).xyz);
vec3 Pi    = texture(gPosition, samplePos).xyz;     // sampled neighbor position
vec3 omega = Pi - P;                                // sample minus fragment

float c = 0.1 * R;                                  // near-clamp distance
float H = step(0.0, R - length(omega));             // 0 if outside radius

float num   = max(0.0, dot(N, omega) - bias) * H;   // projected solid angle above surface
float denom = max(c * c, dot(omega, omega));        // inverse-square falloff with clamp

S += (2.0 * PI * c / float(n)) * (num / denom);

The numerator dot(N, omega) measures how far above the surface plane the sample sits — positive only if the sample is in the upper hemisphere relative to the fragment’s normal. The bias offset prevents floating-point precision artifacts on planar surfaces from registering false occlusion. The denominator’s inverse-square falloff weights distant samples in the radius less than nearby ones; the small clamp prevents singularities when a sample sits very close to the fragment.

Summed over n samples, S is the accumulated solid-angle obscurance — a non-negative number proportional to how much of the upper hemisphere is blocked by nearby geometry. Two final knobs convert it to a usable AO factor:

float A = pow(max(0.0, 1.0 - scale * S), contrast);

scale linearly multiplies the obscurance; contrast applies a power curve that pushes mid-grey occlusion toward black while leaving fully open areas (A = 1) untouched. The two work together: scale controls how much darkening happens, contrast controls how sharply that darkening is mapped.
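The whole estimator is small enough to sanity-check outside the shader. A Python sketch mirroring the math above; the vector handling and the specific parameter values are assumptions for illustration:

```python
import math

def sample_term(N, omega, R, n, bias=0.01):
    """One spiral sample's contribution to the obscurance sum S,
    mirroring the shader (vectors are plain 3-tuples)."""
    c = 0.1 * R
    dist2 = sum(w * w for w in omega)
    H = 1.0 if dist2 <= R * R else 0.0               # inside the radius?
    num = max(0.0, sum(a * b for a, b in zip(N, omega)) - bias) * H
    denom = max(c * c, dist2)                        # clamped inverse-square
    return (2.0 * math.pi * c / n) * (num / denom)

def ao_factor(S, scale=1.0, contrast=1.5):
    """Map accumulated obscurance S to the final AO multiplier A."""
    return max(0.0, 1.0 - scale * S) ** contrast

# A sample just above the fragment occludes; one below the surface
# plane (negative N . omega) contributes nothing.
N = (0.0, 1.0, 0.0)
above = sample_term(N, (0.0, 0.05, 0.0), R=0.5, n=16)
below = sample_term(N, (0.0, -0.05, 0.0), R=0.5, n=16)
```

With S = 0 the factor is exactly 1 (no darkening), and once scale * S reaches 1 the clamp pins A at 0; contrast only reshapes the curve between those endpoints.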

CLICK TO ZOOM
| Raw SSAO output before any blur — visibly noisy from the spiral sampling, but the geometric structure is clearly visible: contact points dark, open surfaces bright |
CLICK TO ZOOM
| Sample count n from 10 to 30 — at low counts the spiral pattern shows as arc-shaped artifacts; higher n smooths the raw signal |

The bilateral blur

The raw SSAO is noisy. Per-pixel sampling jitter from the phi hash plus the small sample count produces a speckled look that, multiplied onto a clean lighting result, would add visible high-frequency noise. The standard fix is to blur the AO map — but a naive Gaussian blur smears the AO across depth and normal discontinuities, producing dark halos at object silhouettes and softening crisp contact shadows into mush.

A bilateral filter weights each blur tap not just by spatial distance but by similarity to the center fragment in depth and normal. Taps that disagree with the center on either dimension get downweighted toward zero; taps that agree contribute fully. The effect is a Gaussian-like smoothing inside continuous regions, with hard cutoffs at edges:

float result      = 0.0;
float totalWeight = 0.0;
vec3  N = texture(gNormal,   TexCoords).xyz;    // center normal
float d = texture(gPosition, TexCoords).w;      // center depth

for (int x = -radius; x <= radius; x++) {
    for (int y = -radius; y <= radius; y++) {
        vec2 sampleCoords = TexCoords + vec2(x, y) * texelSize;

        float aoSample = texture(ssaoMap, sampleCoords).r;
        vec3  Ni       = texture(gNormal,   sampleCoords).xyz;
        float di       = texture(gPosition, sampleCoords).w;

        // Three weight components
        float spatial      = gaussianWeights[abs(x)] * gaussianWeights[abs(y)];
        float normalWeight = max(0.0, dot(N, Ni));
        float depthWeight  = exp(-(d - di) * (d - di) / (2.0 * blurVariance));

        float W = spatial * normalWeight * depthWeight;
        result      += aoSample * W;
        totalWeight += W;
    }
}

FragColor = vec4(vec3(result / totalWeight), 1.0);

Each weight component does a specific job:

  • The spatial Gaussian is the standard bell-curve — closer taps contribute more than far ones, regardless of content.
  • The normal weight max(0, N · Ni) drops to zero for taps whose surface normal differs from the center's by 90° or more. Two surfaces meeting at a right angle (e.g. a wall and a floor) refuse to blur AO across each other.
  • The depth weight exp(-Δd² / (2σ²)) falls off as a Gaussian in the depth difference. The variance σ² controls how sensitive the filter is to depth jumps: a small variance produces sharp edge preservation, a large one allows more cross-edge smoothing.
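The interplay of the three components can be sketched outside the shader. A hedged Python mirror of the per-tap weight, with a toy kernel and made-up normals and depths:

```python
import math

def bilateral_weight(dx, dy, N, Ni, d, di, gaussian, variance):
    """Weight for one blur tap: spatial Gaussian x normal agreement
    x depth agreement. gaussian is a 1D kernel indexed by |offset|;
    N/Ni and d/di are center/tap normals and depths."""
    spatial = gaussian[abs(dx)] * gaussian[abs(dy)]
    normal_w = max(0.0, sum(a * b for a, b in zip(N, Ni)))
    depth_w = math.exp(-((d - di) ** 2) / (2.0 * variance))
    return spatial * normal_w * depth_w

g = [1.0, 0.6, 0.13]      # toy radius-2 kernel, center weight first
up  = (0.0, 1.0, 0.0)
fwd = (0.0, 0.0, 1.0)

same_surface  = bilateral_weight(1, 0, up, up,  5.0, 5.01, g, 0.01)
across_corner = bilateral_weight(1, 0, up, fwd, 5.0, 5.01, g, 0.01)  # wall meets floor
across_edge   = bilateral_weight(1, 0, up, up,  5.0, 9.0,  g, 0.01)  # silhouette depth jump
```

A tap on the same surface keeps most of its spatial weight; a tap across a right-angle corner is killed by the normal term, and a tap across a silhouette is killed by the depth term.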

The Gaussian weights are precomputed on the CPU once per blur radius and uploaded as a uniform array, so changing the kernel size doesn’t recompile the shader.
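The precomputation itself is a few lines. A Python sketch of what that CPU step might look like; the sigma-from-radius rule here is a common default, not necessarily the renderer's choice:

```python
import math

def gaussian_kernel(radius, sigma=None):
    """Precompute the 1D Gaussian weights uploaded as a uniform array:
    one weight per |offset| in [0, radius]."""
    sigma = sigma or max(radius / 2.0, 1e-6)   # assumed rule of thumb
    w = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(radius + 1)]
    # Normalize so the full 1D kernel (center plus two mirrored halves)
    # sums to 1; the shader then indexes by abs(offset).
    total = w[0] + 2.0 * sum(w[1:])
    return [v / total for v in w]

weights = gaussian_kernel(4)
```

Because the array is indexed by absolute offset, a radius-r blur only needs r + 1 uploaded floats rather than (2r + 1)².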

CLICK TO ZOOM
| Raw vs. blurred SSAO with varying kernel radius — noise resolves while silhouettes stay crisp |

Multiplied onto the ambient term

The blurred AO map is a single grayscale texture, the same resolution as the framebuffer, with values in [0, 1]. The IBL lighting shader from the previous chapter is extended with a single multiplicative tap at the end:

if (ssao.enable) {
    float ao = ssao.enableBlur
        ? texture(blurredssaoMap, TexCoords).r
        : texture(ssaoMap,        TexCoords).r;
    color = color * ao;
}

Multiplicative blending is the right composition: ao = 1 leaves the IBL result alone in fully-open areas; ao = 0 drives the output to black in fully enclosed areas; intermediate values smoothly attenuate. The runtime toggle between raw and blurred maps makes it easy to compare the noise reduction directly against the lit scene.

A subtle but important detail: the AO multiplies the entire lighting term, not just the diffuse. Strictly, AO is a diffuse phenomenon: specular reflections from a polished surface aren't occluded by nearby geometry the way diffuse light is, because the BRDF concentrates energy in a narrow cone of directions. Modulating specular by AO therefore slightly over-darkens glossy surfaces in cavities. For most scenes this is too subtle to read as wrong, and decoupling AO into separate diffuse and specular factors complicates the shader without enough payoff. The pragmatic call is one AO factor for everything.

CLICK TO ZOOM
| SSAO radius scrubbed live in the lit scene — small radius captures only tight contact, larger radius extends to broader geometric cavities |

Per-model tour

The full effect of SSAO reads best up close, where the contact shadows live:

CLICK TO ZOOM
| Center statues — without SSAO, the marble bust hovers ambiguously above the table; enabling it grounds the contact and brings out the eye sockets, nose curvature, and crevices |
CLICK TO ZOOM
| Office chair — the swivel base settles firmly onto the floor, and the curve where the seat meets the backrest gains a soft shadow |

The pattern is consistent across the scene: SSAO darkens contact between objects, deepens recesses, and grounds geometry that floats unconvincingly in flat IBL. The bust’s eye sockets read with depth instead of looking painted-on; the slatted cabinet shows the dark stripes between slats that physical lighting would produce. The full-frame walkthrough makes the toggle easy to perceive across all of these at once:

CLICK TO ZOOM
| Full scene walkaround with SSAO toggled — every recess, contact, and crevice gains its missing depth |

What still doesn’t bounce

The renderer at the end of this chapter has correct contact darkening. The marble bust grounds onto the table; the chair sits on the floor; the cabinet’s slats darken between themselves. What it still doesn’t have is the color of the missing light.

SSAO’s output is a scalar. It darkens occluded areas equally, regardless of what’s blocking the light and what color that blocker is. A red wall casting partial occlusion on a white floor produces the same SSAO factor as a blue wall casting the same partial occlusion. Physically, a fraction of the light that would have reached the floor from above is being intercepted by the wall and bouncing off it — which means the floor near the wall is receiving some red-tinted light as a replacement for the white sky it’s missing. SSAO captures the darkening part. It misses the bouncing part.

The next chapter promotes occlusion from scalar to directional. For each pixel, sample directions across the hemisphere; for each direction, test whether it’s blocked; if it’s open, accumulate the environment radiance from that direction; if it’s blocked, treat the blocking surface as a secondary emitter and accumulate its reflected radiance instead. The result is screen-space directional occlusion — and a single bounce of indirect illumination — running entirely from the same G-buffer the renderer’s been using all along.

SCENE_GRAPH