🖍️ 5 ways to draw an outline

Different techniques for rendering outlines in Unity.


Introduction

Rendering outlines is a technique often used in games, either for aesthetic reasons or to support gameplay by highlighting and selecting objects. For example, in the game Sable, outlines are used to create a comic-book-like style. In The Last of Us, outlines are used to highlight enemies when the player goes into stealth mode.

Sable.
The Last of Us.

In this post, I will discuss 5 techniques for rendering an outline around an object.

Rim Effects

Rim effect outline.

Technique

One of the most basic outline effects can be achieved by using a so-called fresnel effect, which can be used to render an outline on the rim/edge of an object. The fresnel effect describes the reflection/transmission of light when it falls onto a transparent surface. However, when using it for rendering outlines, this physical meaning of the effect is not important. The following formula is used to form the outline.

Out = pow(1.0 - saturate(dot(N, V)), P)

The formula takes the dot product between the normalized normal vector N and the normalized view direction V. This result is then raised to a power P. It is important to note that this is only an approximation of the fresnel effect, but it works well for our outlines.

Fresnel effect.

When putting this fresnel-based outline on a sphere, you see that when we approach the grazing angle (the edge/rim of the object), the effect gets stronger.

Implementation

For this approach, the objects that need to have an outline get rendered using a custom shader. This shader implements the fresnel effect and allows us to set the width, power, softness and color of the outline.

// The outline starts where the fresnel term exceeds (1 - width) and fades in over the softness range.
float edge1 = 1 - _OutlineWidth;
float edge2 = edge1 + _OutlineSoftness;
// Approximate fresnel term: gets stronger towards the rim of the object.
float fresnel = pow(1.0 - saturate(dot(normalWS, viewWS)), _OutlinePower);
// If the width covers the whole object (edge1 < 0), output the full outline color.
return lerp(1, smoothstep(edge1, edge2, fresnel), step(0, edge1)) * _OutlineColor;

The technique produces an outline that is always an inner line; it is not visible outside of the object, so it arguably shouldn't even be called an outline. By controlling the width, power and softness of the outline, it is possible to create hard lines or a softer, glowy effect.

Rim effect outline (hard).
Rim effect outline (soft).

Characteristic of this approach is that it works well for objects like spheres and capsules with smooth, round edges, but it breaks down for objects like cubes or more complex models that have sharp edges.

Rim effect outline (cube).
Rim effect outline (complex model).

For a cube, for example, the outline looks really bad and barely resembles an outline at all. For a more complex model, you will get lots of uneven line widths, although the overall effect can look alright.

💬 Rim effect outlines are simple but only work well on spherical objects.

Vertex Extrusion

Vertex extrusion outline.

Technique

The second technique uses a re-rendered/duplicate version of the original mesh to form the outline. This duplicate is rendered behind the original object and its vertices are extruded to make it larger than the original. The duplicate is usually rendered with a flat color.

Extrusion direction

In order to make the duplicate mesh larger, we need to change the positions of its vertices. We will be moving the vertices a certain distance along a certain direction. The first step is to pick this direction.

1. Vertex position

The first method to enlarge the mesh is to simply scale it up. This is done by moving each vertex along its own position vector. This may sound weird, but the vertex position in local space can be seen as a vector from the center of the object to the vertex itself, so we can move the vertex along that vector. For the distance, we use a width parameter.

// Move vertex along vertex position in object space.
positionOS += positionOS * width;

Doing this just kind of inflates the mesh.

Move along vertex position in object space.

For a sphere, all of the vertices have the same distance from the center point of the object, so they all get moved an equal amount. However, for other objects these distances may vary, so vertices that are further away from the center of the object will get moved more. To fix this, you can normalize the vector along which the movement occurs.

// Move vertex along normalized vertex position in object space.
positionOS += normalize(positionOS) * width;

The result is that now all the vertices get moved an equal distance in object space, usually resulting in an outline that looks more equal-width.

Move along normalized vertex position in object space.

However, due to working in object space, the outline still isn't a perfect equal-width outline. We will address this later.

2. Normal vector

A second method is to move the vertices along their normal vector.

// Move vertex along normal vector in object space.
positionOS += normalOS * width;

The result is a pretty nice-looking outline for objects with smooth corners such as spheres and capsules. We're still working in object space so again, the outline isn't a perfect equal-width outline.

Move along normal vector in object space.

For objects with sharper corners such as cubes, you will get visible gaps in the outline. Any model with sharp angles will have these kinds of artifacts.

Outline gaps on objects with sharp corners.

This can be resolved by using custom-authored normals, addressed in the next method.

3. Vertex color

A third method is to move the vertices along their vertex color. The logic behind this is that you can generate custom normals and store those in the vertex color channels of the mesh. For example you could bake spherical (smooth) normals into vertex colors and use those for a cube mesh.

// Move vertex along the custom normal stored in the vertex color (object space).
positionOS += vertexColor * width;

You can see that the outline around the cube looks much better when using custom normals.

Move along vertex color (baked smooth normals) in object space.

This method avoids artifacts on models that have sharp edges, but the big downside is the manual setup involved: you need to generate custom normals for your mesh, although this process can be automated with a script that bakes the normals.

Extrusion space

Once we have decided the direction along which we want to move the vertices, we need to choose in which coordinate space this extrusion should happen. During the vertex stage of our shader, the coordinates of the vertices start out being defined in object space and end up being transformed to clip space. This is done by applying the MVP (model/view/projection) matrix. Throughout the whole rendering pipeline, the coordinates of the vertices go through these spaces.

1. 📦 object/model/local space

2. 🌍 world space

3. 📷 camera/eye/view space

4. ✂️ (homogeneous) clip space

5. 🖥️ screen space

6. 🖼️ viewport/window space

The significance of these coordinate spaces for our outlines will be explained below. For more info, you can read my note on spaces and transformations[coming soon].
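
To make this chain concrete, here is a minimal vertex-stage sketch (assuming Unity's UNITY_MATRIX_M/V/P macros) that walks a vertex through the first four spaces explicitly; it is purely illustrative, since helpers like TransformObjectToHClip do the same thing in one call.

// Illustrative only: transforming a vertex through the coordinate spaces step by step.
float4 positionOS = float4(IN.positionOS.xyz, 1.0);   // object space
float4 positionWS = mul(UNITY_MATRIX_M, positionOS);  // world space (model matrix)
float4 positionVS = mul(UNITY_MATRIX_V, positionWS);  // view space (view matrix)
float4 positionCS = mul(UNITY_MATRIX_P, positionVS);  // clip space (projection matrix)
// The GPU then performs the perspective divide (xyz / w) and viewport mapping,
// producing the screen space and viewport space coordinates used for rasterization.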

Object space

The first method is to translate the vertices in object space.

// Move vertex along vertex position in object space.
IN.positionOS.xyz += IN.positionOS.xyz * width;

There are 2 big issues with doing the outline in object space, because the MVP transformations are yet to be applied at that point. These transformations will alter the shape of the outline, distorting it in the process. The issues are as follows:

  1. Scaling of the outline

    -> when going from object space to world space (applying model matrix M)

  2. Foreshortening

    -> due to the perspective divide happening when going from clip space to screen space

Outline distortion when extruding in object space.

Another consideration is that translating the vertices in object space happens in a 3D space. This means that some translations will be done directly towards or away from the camera, not contributing to the apparent width of the outline. Instead of using object-space units, it might be better to control the outline width in terms of screen-space pixels.

Clip space

A second method is to perform the translations of the vertices in clip space.

// Transform vertex from object space to clip space.
OUT.positionHCS = TransformObjectToHClip(IN.positionOS.xyz);

// Transform normal vector from object space to clip space.
float3 normalHCS = mul((float3x3)UNITY_MATRIX_VP, mul((float3x3)UNITY_MATRIX_M, IN.normalOS));

// Move vertex along normal vector in clip space.
OUT.positionHCS.xy += normalize(normalHCS.xy) / _ScreenParams.xy * OUT.positionHCS.w * width * 2;

As a first step, the vertex position and normal vector are both transformed from object space to clip space. As a second step, the vertex gets translated along its normal vector. Since we are effectively working in a 2D space now, only the x and y coordinates of the vertex position get altered. The offset gets divided by the width and height of the screen (_ScreenParams.xy) to account for the aspect ratio. Then, the offset gets multiplied by the w component of the clip space position. This is done because in the next stage, the clip space coordinates will be converted to screen space coordinates with a so-called perspective divide, which divides the clip space x/y/z coordinates by the clip space w coordinate. Since we want to end up with the same outline after this transformation to screen space, we pre-multiply by the clip space w coordinate so that the perspective divide has no net effect on the outline. Finally, the offset gets multiplied by our desired outline width and a factor of 2, so that a width of 1 corresponds to exactly 1 pixel on the screen.

Phew!

I recommend reading this post on creating an outline in clip space. Having something explained in different ways is always useful.

The result of this whole process is a very clean outline. Since we're working in clip space, the outline is equal-width, extending the same amount (visually) around the object.

Move along normal vector in clip space.

Still (if not using custom-authored normals), the method has issues with meshes that have sharp corners, resulting in gaps in the outline. This becomes more apparent with complex meshes. Also, if the normals of the mesh are not set up correctly and some of them face the wrong way, those outline vertices will be moved in the opposite direction, again resulting in gaps. This dependence on the normal vectors of the mesh is the most important downside of the method, as is visible for the mesh in the image below.

Move along normal vector in clip space.

Masking

The duplicate mesh should be rendered so that only the outline sticking out is visible. The most common solution for this is to cull the front-facing geometry of the duplicated mesh, using the backfaces of the geometry to form the outline. A depth test of less than or equal to is used to make sure the backfaces only show up where the outlines should go.

Cull front.
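
As a rough sketch of what this looks like in ShaderLab (the exact pass setup depends on your render pipeline and the rest of your shader), the extruded outline pass could be configured like this:

// Sketch of the outline pass render state; the HLSL program would contain
// the vertex extrusion and flat color code from the snippets above.
Pass
{
    Name "Outline"
    Cull Front    // hide the front faces of the duplicate mesh
    ZTest LEqual  // only draw where the outline should appear
    ZWrite On

    HLSLPROGRAM
    // #pragma vertex/fragment + extrusion shader go here
    ENDHLSL
}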

Another option is to use a stencil mask to prevent the duplicate mesh from showing up in front of the original mesh. When using this stencil mask method, no culling is needed. One side effect is that there will be no inner lines on the inside of the object, and if two objects overlap, the outline will only be visible around their combined silhouette.

Stencil mask.
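
A minimal sketch of the stencil variant (the reference value and pass layout are just an example): the pass that renders the original mesh writes a value to the stencil buffer, and the outline pass is only allowed to draw where that value is not present.

// In the pass that renders the original mesh: mark its pixels in the stencil buffer.
Stencil
{
    Ref 1
    Comp Always
    Pass Replace
}

// In the pass that renders the extruded duplicate: skip pixels covered by the original mesh.
Stencil
{
    Ref 1
    Comp NotEqual
}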

💬 Vertex extrusion outlines are simple and look good when done in clip space. There are issues with sharp corners but these can be mitigated by using custom normals, which do require some extra setup.

Blurred Buffer

Blurred buffer outline.

Technique

A third method to render an outline is by using something that I call a blurred buffer. For this technique, the silhouette of an object gets rendered to a buffer. This silhouette buffer is then blurred, which expands the silhouette, and the expanded result is used to render the outline.

1. Silhouette Buffer

The first step of this technique is creating the silhouette buffer. For this, each object gets rendered to a texture using a shader that outputs a plain color.

Silhouette buffer.

You can use the color white for all silhouettes, allowing you to choose a single color for all outlines at the end by multiplying with the desired outline color. Alternatively, you can render each object silhouette with a specific color if you want each object to have a different colored outline.
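
The silhouette shader itself can be trivial. A sketch of its fragment stage, assuming a _SilhouetteColor property (which could simply be white):

// Fragment shader for the silhouette pass: output a flat color, no lighting.
half4 _SilhouetteColor;

half4 frag(Varyings IN) : SV_Target
{
    return _SilhouetteColor; // white, or a per-object color for colored outlines
}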

2. Blur Pass

The blur pass is used for expanding the silhouette buffer. This is usually done using a box blur or gaussian blur. To improve performance, the silhouette buffer can be scaled down before blurring. This is advantageous because blur passes can be expensive, having to process multiple pixels per pixel since they work by taking a (weighted) average of the pixels surrounding a given pixel.

Blurred buffer outline.

Additionally, the blur should be done in 2 passes. This brings the complexity of the algorithm down from O(N^2) to O(2N) samples per pixel. This is possible if the blur algorithm is a so-called separable filter, which is the case for both a box blur and a gaussian blur. When blurring in 2 passes, the pixels first get blurred vertically, and then the vertically-blurred buffer gets blurred horizontally, resulting in the final blur.

Vertical blur.
Horizontal blur.

A simple separable box blur can be implemented by taking the non-weighted average of the pixels around a given pixel. For a gaussian blur, a gaussian kernel is used so that a weighted average is taken instead.

// Vertical box blur.
half4 sum = 0;
int samples = 2 * _KernelSize + 1;
for (float y = 0; y < samples; y++)
{
float2 offset = float2(0, y - _KernelSize);
sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, IN.uv + offset * _MainTex_TexelSize.xy);
}
return sum / samples;

// Horizontal box blur.
half4 sum = 0;
int samples = 2 * _KernelSize + 1;
for (float x = 0; x < samples; x++)
{
float2 offset = float2(x - _KernelSize, 0);
sum += SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, IN.uv + offset * _MainTex_TexelSize.xy);
}
return sum / samples;

The outline width is controlled by the _KernelSize parameter of the blur shader.

3. Outline Pass

After the blur pass, the blurred silhouette gets combined with the original scene to form the outline.

Blurred buffer outline.

Using a blurred buffer is great for having soft or glowing outlines, but the buffer can also be stepped to render a hard outline.

Blurred buffer outline.
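
As a sketch, the combine pass could look like the following. The texture and property names are my own, and it assumes the un-blurred silhouette buffer is still available so the inside of the object can be masked out:

// Combine pass: blurred silhouette minus original silhouette = outline.
TEXTURE2D(_SceneTex);              SAMPLER(sampler_SceneTex);
TEXTURE2D(_SilhouetteTex);         SAMPLER(sampler_SilhouetteTex);
TEXTURE2D(_BlurredSilhouetteTex);  SAMPLER(sampler_BlurredSilhouetteTex);

half4 scene = SAMPLE_TEXTURE2D(_SceneTex, sampler_SceneTex, IN.uv);
half blurred = SAMPLE_TEXTURE2D(_BlurredSilhouetteTex, sampler_BlurredSilhouetteTex, IN.uv).r;
half silhouette = SAMPLE_TEXTURE2D(_SilhouetteTex, sampler_SilhouetteTex, IN.uv).r;

half outline = saturate(blurred - silhouette); // keep only the part sticking out
// Optional: step the soft falloff to get a hard outline instead of a glow.
// outline = step(_OutlineThreshold, outline);

return lerp(scene, _OutlineColor, outline);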

Masking

Just like for the vertex extrusion method, a stencil mask can be used to make sure the outline only gets rendered behind the geometry.

💬 Blurred buffer outlines are great for soft and glowy outlines but can have a bigger impact on performance compared to other methods.

Jump Flood Algorithm

The fourth method is to use the jump flood algorithm to render outlines. The main advantage of this technique is that it can render really wide outlines at a very reasonable performance cost. I won't go into details at this time since the technique has a good explanation in this article by Ben Golus.

💬 Jump flood outlines are a great option when you need performant, wide outlines.

Edge Detection

Edge detection outline.

Technique

A fifth method is to use an edge detection pass for rendering outlines. This full-screen pass draws lines by detecting discontinuities in the scene and rendering an outline between areas that have a large enough discontinuity between them. Discontinuities can be detected in the depth buffer, the normal vectors, the albedo color or any other data that is made available.

Detection of discontinuity

Roberts cross

Detecting discontinuities can be done by using an edge detection operator such as the Roberts cross operator. It works as a differential operator, calculating the sum of the squares of the differences between diagonally adjacent pixels, resulting in a cross-like pattern. In practice, edge detection operators are applied by convolving the image with kernels. There are 2 kernels, one for the x direction and one for the y direction. For Roberts cross, the diagonal pixels get sampled and convolved with these kernels. The kernels have a size of 2x2.

static const int RobertsCrossX[4] = {
1, 0,
0, -1
};

static const int RobertsCrossY[4] = {
0, 1,
-1, 0
};

These kernels can then be used as follows.

horizontal += samples[0] * RobertsCrossX[0]; // top left (factor +1)
horizontal += samples[3] * RobertsCrossX[3]; // bottom right (factor -1)

vertical += samples[2] * RobertsCrossY[2]; // bottom left (factor -1)
vertical += samples[1] * RobertsCrossY[1]; // top right (factor +1)

edge = sqrt(dot(horizontal, horizontal) + dot(vertical, vertical));

Roberts cross is a very simple operator but can already give nice results. The operator only needs 4 samples around a given pixel.
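
For context, the samples[] array used above could be gathered like this. This sketch reads the four diagonal neighbours from the scene normals texture (URP's SampleSceneNormals from DeclareNormalsTexture.hlsl); the same pattern applies to depth or color.

// Gather the 4 diagonal samples for Roberts cross.
float2 texelSize = float2(1.0 / _ScreenParams.x, 1.0 / _ScreenParams.y);

float3 samples[4];
samples[0] = SampleSceneNormals(IN.uv + texelSize * float2(-1,  1)); // top left
samples[1] = SampleSceneNormals(IN.uv + texelSize * float2( 1,  1)); // top right
samples[2] = SampleSceneNormals(IN.uv + texelSize * float2(-1, -1)); // bottom left
samples[3] = SampleSceneNormals(IN.uv + texelSize * float2( 1, -1)); // bottom right

float3 horizontal = 0;
float3 vertical = 0;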

Sobel operator

Another method is to use a Sobel operator. Again, 2 kernels are used but this time the kernels have a size of 3x3.

static const int SobelX[9] = {
1, 0, -1,
2, 0, -2,
1, 0, -1
};

static const int SobelY[9] = {
1, 2, 1,
0, 0, 0,
-1, -2, -1
};

This time, 9 samples are used around a given pixel. The Sobel kernels can be used like this.

horizontal += samples[0] * SobelX[0]; // top left (factor +1)
horizontal += samples[2] * SobelX[2]; // top right (factor -1)
horizontal += samples[3] * SobelX[3]; // center left (factor +2)
horizontal += samples[4] * SobelX[4]; // center right (factor -2)
horizontal += samples[5] * SobelX[5]; // bottom left (factor +1)
horizontal += samples[7] * SobelX[7]; // bottom right (factor -1)

vertical += samples[0] * SobelY[0]; // top left (factor +1)
vertical += samples[1] * SobelY[1]; // top center (factor +2)
vertical += samples[2] * SobelY[2]; // top right (factor +1)
vertical += samples[5] * SobelY[5]; // bottom left (factor -1)
vertical += samples[6] * SobelY[6]; // bottom center (factor -2)
vertical += samples[7] * SobelY[7]; // bottom right (factor -1)

edge = sqrt(dot(horizontal, horizontal) + dot(vertical, vertical));

You can read this blog post on Sobel filters if you want to learn more about how Sobel filters work.

Sources of discontinuity

A common approach is to look for discontinuities in the textures that the render pipeline generates for the scene such as the depth texture, normals texture and color texture.

Depth texture.
Normals texture.
Color texture.

During the edge-detection pass, these textures are sampled and discontinuities are detected using the edge detection operators mentioned earlier. The resulting edge that is drawn can be caused by any discontinuity that was found in one of the 3 buffers. With this technique, the outline gets applied to all the objects that write to these buffers and so you have less control over the outlines on a per-object basis. In the image below, edge contributions by depth/normals/color are represented by the color red/green/blue respectively.

Edge contributions per discontinuity source (debug view).

Allowing discontinuities to be detected from different sources makes for a more robust outlining system. In the debug image above you can see that while some edges would be detected by all three discontinuity sources, a lot of them only get picked up from a contribution of a specific discontinuity source. Each discontinuity source can be given a different weight and a different threshold, allowing you to control the look of the outline.
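
As a sketch of how that could look in the shader, with my own property names and a hypothetical RobertsCross() helper standing in for the operator shown earlier:

// Per-source edge values, each with its own threshold...
float edgeDepth  = step(_DepthThreshold,  RobertsCross(depthSamples));
float edgeNormal = step(_NormalThreshold, RobertsCross(normalSamples));
float edgeColor  = step(_ColorThreshold,  RobertsCross(colorSamples));

// ...combined with per-source weights into a single edge value.
float edge = saturate(edgeDepth * _DepthWeight + edgeNormal * _NormalWeight + edgeColor * _ColorWeight);

return lerp(sceneColor, _OutlineColor, edge);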

Edge detection modulation

Usually, just running an edge detection operator over the discontinuity source buffers is not enough to get a result without artifacts. Some modulation has to be done to get rid of them. For example, since the depth buffer is non-linear in a lot of render pipelines, two objects 1m apart located close to the camera will have a larger depth difference than two objects 1m apart located far away from the camera. To accommodate for this, the threshold used for detecting discontinuities in depth can be modulated by the depth buffer itself, so that geometry located close to the camera needs a larger discontinuity in depth before being detected as an edge.

depthThreshold *= _DepthDistanceModulation * SampleSceneDepth(uv);

A second common artifact is unwanted edges showing up on surfaces at small grazing angles. To resolve this, you can modulate with a mask that is generated from the dot product between the normal vector N and the view direction V. This is the same fresnel mask that was used in the first outlining technique, the rim effect.

float fresnel = pow(1.0 - saturate(dot(normalWS, viewWS)), _Power);
float grazingAngleMask = saturate((fresnel + _GrazingAngleMaskPower - 1) / _GrazingAngleMaskPower);
depthThreshold *= 1 + smoothstep(0, 1 - _GrazingAngleMaskHardness, grazingAngleMask);

Other modulation techniques can be used as well, but these depend on the specific visual effect that you're after.

Custom discontinuity source

It is also possible to provide the outline shader with a custom discontinuity source. This would be a render texture that you create yourself during the render process, containing custom data that you wish to use to generate outlines. The advantage is that since you control what writes to this custom buffer, you can control which objects receive an outline.

Custom discontinuity source (vertex colors) and the resulting outline.

For example in the scene above, the discontinuity source is generated by rendering the vertex colors of a mesh to a texture. Other techniques that come to mind are coloring faces based on their world position or creating a custom buffer that combines both information from the depth buffer and the normals buffer.
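
As a minimal sketch, the shader that fills such a custom buffer only needs to pass the vertex color through and write it out; the setup that renders the selected objects into the extra render texture (e.g. a renderer feature) is left out here.

// Vertex stage: pass the mesh's vertex color along.
OUT.color = IN.color;

// Fragment stage: write the vertex color into the custom discontinuity buffer.
half4 frag(Varyings IN) : SV_Target
{
    return IN.color;
}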

💬 Edge detection outlines are good when you need a full-screen outlining effect. Some fine-tuning is needed to prevent lines from showing up where you don't want them.

Conclusion

There you go, 5 ways to draw an outline. Each makes its own trade-offs between performance, visual fidelity and the manual setup required.

Credits

💬 The Sailor Moon 3D models used in this post were made by premudraya over on Sketchfab.

💬 The Zelda 3D model used in this post was made by theStoff over on Sketchfab.

Additional Resources

Vertex Extrusion

https://www.videopoetics.com/tutorials/pixel-perfect-outline-shaders-unity

Jump Flood Algorithm

https://bgolus.medium.com/the-quest-for-very-wide-outlines-ba82ed442cd9

Edge Detection

https://roystan.net/articles/outline-shader.html

https://jameshfisher.com/2020/08/31/edge-detection-with-sobel-filters
