Motion Blur Tutorial

What is motion blur?

Motion pictures are made up of a series of still images displayed in quick succession. These images are captured by briefly opening a shutter to expose a piece of film or an electronic sensor, then closing the shutter and advancing the film or saving the data. Motion blur occurs when an object in the scene (or the camera itself) moves while the shutter is open, causing the resulting image to streak along the direction of motion. It is an artefact which the image-viewing populace has grown so used to that its absence is conspicuous; adding it to a simulated image greatly enhances the realism.

Later we’ll look at a screen space technique for simulating motion blur caused only by movement of the camera. Object motion blur is a tad more complex – worth a tutorial of its own. First, though, let’s examine a ‘perfect’ (full camera and object motion blur) solution which is very simple but not really efficient enough for realtime use.

Perfect solution

This is a naive approach which has the benefit of producing completely realistic full motion blur, where both the camera movement and the movement of scene objects produce blur. For each frame, render the scene multiple times at different temporal offsets, then blend the results:

This technique is actually described in the red book (chapter 10). Unfortunately it requires the underlying framerate to be samples * framerate, which is either impossible or impractical for most realtime applications. And don’t think about just blending the previous samples frames – this will give you trippy trails (and nausea) but definitely not motion blur. So how do we go about doing it quick n’ cheap?
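As a sketch, the accumulation approach looks something like this in Python (`render_scene` is a hypothetical callback, assumed to return a frame as a flat list of pixel intensities; the names are mine, not from any particular API):

```python
def render_with_motion_blur(render_scene, t_frame, exposure, num_samples):
    """Render num_samples sub-frames across the exposure interval and
    average them to approximate motion blur."""
    accum = None
    for i in range(num_samples):
        # distribute samples evenly across the shutter-open interval
        t = t_frame + exposure * i / num_samples
        frame = render_scene(t)
        if accum is None:
            accum = [0.0] * len(frame)
        for j, p in enumerate(frame):
            accum[j] += p
    # blend: each sub-frame contributes 1/num_samples of the final image
    return [p / num_samples for p in accum]
```

The cost is immediately obvious: the scene is rendered `num_samples` times per output frame.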
Screen space to the rescue!

The idea is simple: each rendered pixel represents a point in the scene at the current frame. If we know where it was in the previous frame, we can apply a blur along a vector between the two points in screen space. This vector represents the size and direction of the motion of that point between the frames and we can use it to approximate the motion of a point during a single ‘exposure’.

The crux of this method is calculating a previous screen space position for each pixel. Since we’re only going to implement motion blur caused by motion of the camera, this is very simple: each frame, store the camera’s model-view-projection matrix so that in the next frame we’ll have access to it. Since this is all done client-side, the details will vary. I’ll just assume that you can supply the following to the fragment shader: the previous model-view-projection matrix and the inverse of the current model-view matrix.
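Client-side, the bookkeeping amounts to caching last frame's matrix before overwriting it with the new one. A minimal sketch (the class and member names are hypothetical; the matrices here are opaque values):

```python
class MotionBlurCamera:
    """Keeps the previous frame's model-view-projection matrix around
    so the shader can reproject current pixels into the last frame."""

    def __init__(self, modelviewproj):
        self.modelviewproj = modelviewproj
        # first frame: previous == current, i.e. no motion, zero blur
        self.prev_modelviewproj = modelviewproj

    def begin_frame(self, new_modelviewproj):
        # stash last frame's matrix before overwriting it
        self.prev_modelviewproj = self.modelviewproj
        self.modelviewproj = new_modelviewproj
```

Both `prev_modelviewproj` and the inverse of the current model-view matrix would then be uploaded as uniforms each frame.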
Computing the blur vector

In order to compute the blur vector we take the following steps within our fragment shader:
- get the pixel’s current view space position. I do this by accessing a per-pixel linear depth buffer, but there are other equally good methods (see Matt Pettineo’s blog for a good overview)
- from this, compute the pixel’s current world space position using the inverse of the current model-view matrix
- from this, compute the pixel’s previous normalized device coordinates using the previous model-view-projection matrix and a perspective divide
- scale and bias the result to get texture coordinates
- our blur vector is the current pixel’s texture coordinates minus the coordinates we just calculated

The eagle-eyed reader may have already spotted that this can be optimized, but for now we’ll do it long-hand for the purposes of clarity. Here’s the fragment program:
uniform sampler2D LDEPTH_TEX; // linear depth

uniform mat4 INV_MODELVIEW_MAT; // inverse model-view
uniform mat4 PREV_MODELVIEWPROJ_MAT; // previous model-view-proj

noperspective in vec2 TEXCOORD;
noperspective in vec3 VIEW_RAY; // for extracting current world space position

void main() {
// get the current view space position from the linear depth buffer:
vec3 current = VIEW_RAY * texture(LDEPTH_TEX, TEXCOORD).r;
// transform it into world space:
current = (INV_MODELVIEW_MAT * vec4(current, 1.0)).xyz;

// reproject into the previous frame's normalized device coordinates:
vec4 previous = PREV_MODELVIEWPROJ_MAT * vec4(current, 1.0);
previous /= previous.w;
// scale/bias to get texture coordinates:
previous.xy = previous.xy * 0.5 + 0.5;

vec2 mblur_vec = previous.xy - TEXCOORD;
Using the blur vector

So what do we do with this blur vector? We might try taking n samples along the vector, starting at previous.xy and ending at TEXCOORD. However, this produces an ugly halo effect, as shown below:

This happens because (for translational motion) more distant pixels have a lower screen space velocity, hence blur less than closer pixels (see the velocity map in the image below). The boundaries of the sample range contributing to the blur can coincide with these discontinuities in pixel velocity, which makes the discontinuities visible.

To fix this we can center the blur vector on TEXCOORD, thereby blurring across these velocity boundaries:
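In code, centring the kernel only changes where the sample positions fall. A sketch of the sample-position computation (in Python for clarity; in the shader this is a loop of texture fetches):

```python
def blur_sample_coords(texcoord, mblur_vec, num_samples):
    """Sample positions for a blur centred on texcoord: steps from
    texcoord - mblur_vec/2 to texcoord + mblur_vec/2, so the kernel
    straddles velocity discontinuities instead of stopping at them."""
    coords = []
    for i in range(num_samples):
        # t runs from -0.5 to +0.5 across the kernel
        t = i / (num_samples - 1) - 0.5
        coords.append((texcoord[0] + mblur_vec[0] * t,
                       texcoord[1] + mblur_vec[1] * t))
    return coords
```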

A sly problem

There is a potential issue around framerate: if it is very high our blur will be barely visible as the amount of motion between frames will be small, hence mblur_vec will be short. If the framerate is very low our blur will be exaggerated, as the amount of motion between frames will be high, hence mblur_vec will be long.

While this is physically realistic (higher fps = shorter exposure, lower fps = longer exposure) it might not be aesthetically desirable. This is especially true for variable-framerate games which need to maintain playability as the framerate drops without the entire image becoming a smudge. At the other end of the scale, for displays with high refresh rates (or vsync disabled) the short blur lengths mean that we basically wasted our time computing the blur vector, since the resulting blur will be pretty much invisible. What we want in these situations is for each frame to look as though it was rendered at a particular framerate (which we’ll call the ‘target framerate’) regardless of the actual framerate.

The solution is to scale mblur_vec according to the current actual fps; if the framerate goes up we increase the blur length, if it goes down we decrease it. When I say “goes up” or “goes down” I mean “changes relative to the target framerate.” This scale factor is easily calculated:
mblur_scale = current_fps / target_fps

So if our target fps is 60 but the actual fps is 30, we halve our blur length. Remember that this is not physically realistic – we’re fiddling the result in order to compensate for a variable fps.
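A minimal sketch of this, with one addition of my own: a clamp on the scale, so that a very high framerate can't stretch the blur vector absurdly far (the clamp is an assumption, not part of the formula above):

```python
def mblur_scale(current_fps, target_fps, max_scale=4.0):
    """Scale factor for the blur vector: current_fps / target_fps.
    max_scale is an assumed safety clamp, not part of the original
    formula."""
    return min(current_fps / target_fps, max_scale)
```

The result multiplies mblur_vec before sampling.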


Optimisations

The simplest way to improve the performance of this method is to reduce the number of blur samples. I’ve found it looks okay down to about 8 samples, below which ‘banding’ artefacts start to become apparent.

As I hinted before, computing the blur vector can be streamlined. Notice that, in the first part of the fragment shader, we did two matrix multiplications:
// get current world space position:
vec3 current = VIEW_RAY * texture(LDEPTH_TEX, TEXCOORD).r;
current = (INV_MODELVIEW_MAT * vec4(current, 1.0)).xyz;

// get previous screen space position:
vec4 previous = PREV_MODELVIEWPROJ_MAT * vec4(current, 1.0);
previous /= previous.w;
previous.xy = previous.xy * 0.5 + 0.5;

These can be combined into a single transformation by constructing a current-to-previous matrix on the CPU: CTP_MAT = PREV_MODELVIEWPROJ_MAT * INV_MODELVIEW_MAT, which takes a view space position directly to the previous frame’s clip space.

If we do this on the CPU we only have to do a single matrix multiplication per fragment in the shader. Also, this reduces the amount of data we upload to the GPU (always a good thing!). The relevant part of the fragment program now looks like this:
vec3 current = VIEW_RAY * texture(LDEPTH_TEX, TEXCOORD).r;
vec4 previous = CTP_MAT * vec4(current, 1.0);
previous /= previous.w;
previous.xy = previous.xy * 0.5 + 0.5;
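The CPU-side construction is just a matrix product. A sketch in Python (plain-list 4x4 maths, row-major, column-vector convention; helper names are mine) which also checks that the combined matrix matches the two-step version:

```python
def mat4_mul(a, b):
    """4x4 matrix product; matrices are row-major lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat4_vec4(m, v):
    """Transform a 4-vector by a 4x4 matrix (column-vector convention)."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def current_to_previous(prev_modelviewproj, inv_modelview):
    """CTP_MAT: maps a current view space position straight to the
    previous frame's clip space, replacing two per-fragment matrix
    multiplications with one."""
    return mat4_mul(prev_modelviewproj, inv_modelview)
```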

Even this limited form of motion blur makes a big improvement to the appearance of a rendered scene; moving around looks generally smoother and more realistic and at lower framerates (~30fps) the effect produces a filmic appearance, hiding some of the temporal aliasing that makes rendering (and stop-motion animation) ‘look fake’.

At some point I’ll come back and do a tutorial for full motion blur, but for now have some links:

“Stupid OpenGL Shader Tricks” Simon Green, NVIDIA

“Motion Blur as a Post-Processing Effect” Gilberto Rosado, GPU Gems 3

