Let's say we want to detect the edges in an image. We can do this by looking for discontinuities and tracing a line over them. For a human this is relatively easy, but for a computer we first need to define what actually constitutes a 'discontinuity'. We can define discontinuities based on a change in color, depth, normal vector, brightness, and so on. In this tutorial we will define discontinuities as a significant change in depth and/or normal vector.
Now we need a way for the computer to detect these changes in depth and/or normal vectors. For depth this is pretty straightforward, since the camera in Unity generates a so-called _CameraDepthTexture that gives us information about the depth in the scene relative to the camera. For the normals you would want a similar texture that stores the normal vectors in the scene. Depth and normals are usually packed together into a single four-channel _CameraDepthNormalsTexture, from which you can then extract the normals. The issue is that LWRP does not generate this texture, so if we want to determine discontinuities based on normal vectors, we have to generate it ourselves. In this tutorial I will show you how to generate a _CameraDepthNormalsTexture in LWRP. Now let's convince the Lightweight Render Pipeline to generate this texture for us!
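To make the packing concrete: Unity stores a stereographically encoded view-space normal in the texture's xy channels and a linear 0..1 depth value split across zw. The HLSL below is a sketch of reading that back, not the tutorial's exact code — it mirrors Unity's built-in DecodeFloatRG / DecodeViewNormalStereo helpers from UnityCG.cginc, which you may have to copy into your own include file in LWRP.

```hlsl
// Sketch: unpacking a _CameraDepthNormalsTexture sample by hand,
// assuming the texture and its sampler are bound. Mirrors Unity's
// DecodeFloatRG and DecodeViewNormalStereo helpers.
float4 packedSample = SAMPLE_TEXTURE2D(_CameraDepthNormalsTexture,
                                       sampler_CameraDepthNormalsTexture, uv);

// zw: linear 0..1 depth packed into two 8-bit channels.
float depth = dot(packedSample.zw, float2(1.0, 1.0 / 255.0));

// xy: view-space normal, stereographically projected into two channels.
const float kScale = 1.7777;
float3 nn = float3(packedSample.xy * 2.0 * kScale - kScale, 1.0);
float g = 2.0 / dot(nn, nn);
float3 normal = float3(g * nn.xy, g - 1.0);
```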
This is where the interesting stuff begins! We will customize the render pipeline to generate a DepthNormalsTexture for us so we can access it later in Shader Graph. The nice thing about LWRP is that it is a Scriptable Render Pipeline, which means we can customize the rendering to fit our needs. We start by adding a DepthNormals.cs script to our project. This script extends the ScriptableRendererFeature class, which allows us to inject one or more ScriptableRenderPasses into the existing renderer in order to customize it. The script also contains the actual ScriptableRenderPass that we will inject into the renderer. This pass generates a DepthNormals texture and stores it in a place where we can access it later. You can find the code for the script here: https://pastebin.com/9n0AwmZJ. You can put this script anywhere in your project files.
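To give you a feel for the structure before you open the linked script: a ScriptableRendererFeature has two overrides, Create and AddRenderPasses, and the pass itself does its work in Execute. The sketch below is illustrative only — the class and event names follow the LWRP API of the time, and the linked DepthNormals.cs is the working version with the actual rendering code.

```csharp
// Minimal sketch of the shape of a renderer feature (not the full script).
using UnityEngine.Rendering;
using UnityEngine.Rendering.LWRP;

public class DepthNormals : ScriptableRendererFeature
{
    class DepthNormalsPass : ScriptableRenderPass
    {
        public override void Execute(ScriptableRenderContext context,
                                     ref RenderingData renderingData)
        {
            // The real pass draws the opaque geometry with Unity's
            // Hidden/Internal-DepthNormalsTexture shader into a render
            // target, then binds that target globally as
            // _CameraDepthNormalsTexture so shaders can sample it.
        }
    }

    DepthNormalsPass pass;

    public override void Create()
    {
        pass = new DepthNormalsPass();
        // Run early so the texture exists before the opaque pass needs it.
        pass.renderPassEvent = RenderPassEvent.AfterRenderingPrePasses;
    }

    public override void AddRenderPasses(ScriptableRenderer renderer,
                                         ref RenderingData renderingData)
    {
        renderer.EnqueuePass(pass);
    }
}
```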
Next, we will create a custom renderer and tell our active Pipeline Asset to use it. To create a custom renderer, click Assets>Create>Rendering>Lightweight Render Pipeline>Forward Renderer, then assign it in your active Pipeline Asset's inspector (set the Renderer Type to Custom and drop the new Forward Renderer into the field that appears). When we open the inspector of this new renderer, we can add a Renderer Feature by clicking on the little plus icon.
The next step is to create a custom node for Shader Graph that gives us access to the DepthNormalsTexture we generated. For this we will not be using the Custom Function node; instead, we will create the node from scratch. To do this, we need access to the Shader Graph source code, which you can find in your Project window under Packages>Shader Graph. The issue with this folder is that it is read-only, so inside Unity we are not able to add a custom node to it. However, if you click on the Shader Graph folder, choose Show in Explorer, and then navigate to Editor>Data>Nodes, we can add a folder there, and that is exactly what we will do.
Inside the Nodes folder, we create a new folder called Custom, in which we will put our custom node. A new script called DepthNormalsTextureNode.cs will hold the code for this node. You can find the code here: https://pastebin.com/udxZWhVc. After this is done, you will find a new node in Shader Graph called Depth + Normals under Custom>Input>Texture. When we sample this node, we can see the generated DepthNormals texture.
The next step in our quest for the edge detection shader is the actual edge detection shader! We will be using the _CameraDepthTexture for depth and the _CameraDepthNormalsTexture for normals.
To make use of these two textures, we will use a Custom Function node that refers to an Outline.hlsl file. You can find the code here: https://pastebin.com/PH0fWbGB. I made the code by following this tutorial by Roystan; I highly recommend checking out his website.
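Conceptually, the function samples the scene at the four corners of a small square around the current pixel and takes a Roberts cross of the two diagonals: a large difference means an edge. Below is a stripped-down sketch of the depth half of that idea; the parameter names are illustrative rather than those of the linked file, and the full Outline.hlsl also compares the decoded normals and combines both results.

```hlsl
// Illustrative Roberts-cross edge detection on depth only, assuming
// _CameraDepthTexture and its sampler are bound. The "_float" suffix is
// the naming convention the Custom Function node expects.
void OutlineSketch_float(float2 UV, float Thickness, float DepthThreshold,
                         out float Edge)
{
    float halfW = floor(Thickness * 0.5);
    float2 texel = _CameraDepthTexture_TexelSize.xy;

    // Depth at the four corners of a small square around the pixel.
    float d0 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture,
        sampler_CameraDepthTexture, UV + texel * float2(-halfW, -halfW));
    float d1 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture,
        sampler_CameraDepthTexture, UV + texel * float2( halfW,  halfW));
    float d2 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture,
        sampler_CameraDepthTexture, UV + texel * float2( halfW, -halfW));
    float d3 = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture,
        sampler_CameraDepthTexture, UV + texel * float2(-halfW,  halfW));

    // Roberts cross: difference along the two diagonals. The factor of 100
    // compensates for raw depth differences being very small.
    float dDepth = sqrt(pow(d1 - d0, 2) + pow(d3 - d2, 2)) * 100;

    Edge = step(DepthThreshold, dDepth);
}
```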
Usually when we use an edge detection effect, we want all of the objects in our scene to have the outline. Using a regular material is not ideal for this. It would be better to write some kind of image effect that renders an outline over all of the materials. I will cover this in another tutorial.