C#

It’s a good practice to save game setting data to binary files.

To save:

FileStream dataStream = new FileStream(dataPath, FileMode.Create);
BinaryFormatter converter = new BinaryFormatter();
converter.Serialize(dataStream, toSave);
dataStream.Close();

To open/load:

FileStream dataStream = new FileStream(dataPath, FileMode.Open);
BinaryFormatter converter = new BinaryFormatter();
DataClass data = converter.Deserialize(dataStream) as DataClass;
dataStream.Close();

We can use this outside of the game too. For example, Dynamic Bone parameter settings will be lost once the model is updated, because people usually replace the whole prefab. If the model’s bones are unchanged or mostly unchanged, we can save the Dynamic Bone parameters to a binary file and load them back onto the new model.
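A minimal sketch of that workflow, assuming a hypothetical serializable BoneSettings class that mirrors whichever Dynamic Bone fields we want to keep:

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[System.Serializable]
public class BoneSettings
{
	// Hypothetical fields; copy whichever Dynamic Bone parameters you need.
	public float damping;
	public float elasticity;
	public float stiffness;
}

public static class BoneSettingsIO
{
	// Save the settings to a binary file so they survive a prefab replacement.
	public static void Save(BoneSettings settings, string dataPath)
	{
		FileStream dataStream = new FileStream(dataPath, FileMode.Create);
		BinaryFormatter converter = new BinaryFormatter();
		converter.Serialize(dataStream, settings);
		dataStream.Close();
	}

	// Load the settings back, then copy them onto the new model's components.
	public static BoneSettings Load(string dataPath)
	{
		FileStream dataStream = new FileStream(dataPath, FileMode.Open);
		BinaryFormatter converter = new BinaryFormatter();
		BoneSettings settings = converter.Deserialize(dataStream) as BoneSettings;
		dataStream.Close();
		return settings;
	}
}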

Projection Math

GDC: https://www.youtube.com/watch?v=RdN06E6Xn9E&t=2153s

Cat Like Coding: https://catlikecoding.com/unity/tutorials/rendering/part-15/

Position From Depth: https://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/

SSAO Tutorial: https://john-chapman-graphics.blogspot.com/2013/01/ssao-tutorial.html

Model View Projection: https://jsantell.com/model-view-projection/

In projection math we often need to calculate the view ray: when we transform the vertex position to view space, the z component is the depth and the camera sits at (0,0,0). If we divide the view-space position by its z component, we get a ray from the camera to the vertex whose z component is 1, so scaling it by any depth reconstructs the view-space position at that depth.

Iteration 1:

//vertex
vec3 viewPos = mul(modelView, objectPos);
OUT.viewRay = viewPos / viewPos.z;

//fragment
vec3 viewPos = IN.viewRay * depth;
vec3 decalPos = mul(vec4(viewPos, 1.0), _ViewToObject);

Iteration 2:

// vertex
vec3 viewPos = mul(modelView, objectPos);
vec3 viewRay = viewPos/viewPos.z;
OUT.worldRay = mul((mat3)_ViewToWorld, viewRay);

// fragment
vec3 worldPos = IN.worldRay * depth + _WorldSpaceViewPos;
vec3 decalPos = mul(_WorldToObject, vec4(worldPos, 1.0));

Iteration 3:

// vertex
vec3 viewPos = mul(modelView, objectPos);
vec3 viewRay = viewPos / viewPos.z;
OUT.objectRay = mul((mat3)_ViewToObject, viewRay);

// fragment
vec3 decalPos = IN.objectRay * depth + _ObjectSpaceViewPos;
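For reference, a hedged sketch of where the depth used to scale these rays can come from in URP terms (screen UV handling is covered in the Sampling Depth Texture section below):

// Assumed URP helpers: SampleSceneDepth (DeclareDepthTexture.hlsl) and LinearEyeDepth (Common.hlsl).
// screenUV is the normalized screen position of the current fragment.
float rawDepth = SampleSceneDepth(screenUV);
float depth = LinearEyeDepth(rawDepth, _ZBufferParams); // positive eye depth used to scale the ray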

Depth

  1. Built-in to URP: https://teodutra.com/unity/shaders/urp/graphics/2020/05/18/From-Built-in-to-URP/#summary
  2. COMPUTE_EYEDEPTH: https://light11.hatenadiary.com/entry/2019/12/18/010038
  3. DEPTH: https://zhuanlan.zhihu.com/p/92315967
  4. depth texture in OpenGL and DirectX: https://forum.unity.com/threads/rendering-depths-into-render-textures-in-both-opengl-and-direct3d.493088/
  5. cyan: https://www.cyanilux.com/tutorials/depth/
  6. depth precision visualized: https://developer.nvidia.com/content/depth-precision-visualized
  7. NDC Space: https://forum.unity.com/threads/confused-on-ndc-space.1024414/

Eye depth

The vertex shader converts the vertex position from object space to world space via the model matrix, then to view space via the view matrix. In view space the camera is at the origin, so the z component of the view-space position (negated, because Unity’s view space looks down -Z) is the distance from the camera to the vertex along the view direction. This is the Eye Depth. The range of this depth is the same across all platforms.

// vertex
float3 positionWS = TransformObjectToWorld(input.positionOS.xyz);
float3 positionVS = TransformWorldToView(positionWS);
output.positionVS = positionVS;

// fragment
float depth = -input.positionVS.z; // negate: Unity's view space looks down -Z

Scene Depth

This comes from the depth texture. Linear01 is a remapped version of Eye Depth, obtained by dividing by the far plane value: it is still 0 at the camera position, but 1 at the far plane.
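A minimal sketch of that remap, assuming Unity’s built-in _ProjectionParams (whose z component is the far clip plane distance):

float eyeDepth = -positionVS.z;                  // view-space depth, as above
float linear01 = eyeDepth / _ProjectionParams.z; // ~0 at the camera, 1 at the far plane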

The shader then converts view space to clip space via the projection matrix. For a perspective projection the Eye Depth ends up in the w component of the clip-space position; for an orthographic projection, w is 1. After the vertex shader, the clip-space position is remapped (ComputeScreenPos) and divided by its w component (the perspective divide). This gives the Normalized Device Coordinates (NDC) and the screen position, whose xy axes range from (0,0) in the bottom-left corner to (1,1) in the top-right.

After the projection, the depth is no longer linear in view space. This is the value that ends up in the depth buffer and the Depth Texture (the raw Scene Depth).

NDC has the same range for both projections in x and y: the x component maps [L, R] to [-1, 1] and the y component maps [B, T] to [-1, 1].

The range of NDC.z, or the Z Buffer depth, is also the same for both projections, but varies depending on the platform (https://docs.unity3d.com/Manual/SL-PlatformDifferences.html):

  • Direct3D-like (Direct3D, Metal, consoles): 1 (near) to 0 (far)
  • OpenGL-like (OpenGL, OpenGL ES): -1 (near) to 1 (far)
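A minimal sketch of the divide that produces these values, assuming URP’s TransformObjectToHClip (the y direction and the z range depend on the platform, as noted above):

// vertex
float4 positionCS = TransformObjectToHClip(input.positionOS.xyz);

// the GPU performs the perspective divide between the vertex and fragment stages;
// done manually it looks like this:
float3 ndc = positionCS.xyz / positionCS.w; // x, y in [-1, 1]; z range is platform dependent
float2 screenUV = ndc.xy * 0.5 + 0.5;       // (0,0) bottom left to (1,1) top right (may need a y flip on some platforms)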

Non-linear

Non-linear depth is better for precision (https://developer.nvidia.com/content/depth-precision-visualized).

Depth Buffer vs. Depth Texture

The depth buffer ensures that objects closer to the camera end up on top of objects that are further away. Opaque geometry usually writes to the buffer. Opaque geometry renders front-to-back, so objects closer to the camera are drawn first and their color and depth values are written first; objects further away then test against the values already in the buffer based on ZTest. Transparent geometry (usually) doesn’t write depth and renders back-to-front to get correct alpha blending. Transparent objects are sorted by how close their origin is to the camera, so the order can change when the camera moves.

URP copies the depth buffer of the opaque queue into the Depth Texture. This allows transparent shaders to interact with opaque objects (e.g. intersections). Transparent objects don’t appear in the Depth Texture because the depth values are copied before the transparent queue. A Depth Prepass is used when the copy does not work.

Depth in Shader

The fragment’s depth usually comes from the mesh, based on values interpolated between the vertex and fragment stages. The SV_Depth semantic lets the fragment shader overwrite it.

struct FragOut
{
	half4 color : SV_Target;
	float depth : SV_Depth;
};

FragOut frag(Varyings input)
{
	FragOut output;
	output.color = color; // the shaded color computed as usual
	output.depth = depth; // the custom depth value to write
	return output;
}

However, using SV_Depth turns off early-Z. Early-Z tests against the depth buffer before the fragment shader runs and discards fragments that fail the test; a fragment shader that writes SV_Depth has to run first to produce the depth value, so the early test can’t be used. Using Alpha Clipping or discard also turns off early-Z: with early-Z, a value would be written to the depth buffer as soon as the test passes (if ZWrite is on), and discarding the fragment afterwards could leave incorrect values in the depth buffer.

Opaque shaders that use SV_Depth should apparently also be rendered after other opaque objects, e.g. in the AlphaTest queue.

Orthographic projection outputs a linear depth where 0 is the near plane and 1 is the far plane. Take reversed-Z into account on other platforms:

float depth = 1; // linear [0, 1] depth, 0 = near plane, 1 = far plane

#if UNITY_REVERSED_Z
depth = 1 - depth; // flip for reversed-Z (Direct3D-like) platforms
#endif

output.depth = depth;

Perspective projection outputs a non-linear value.

float nearPlaneDepth = Linear01Depth(0, _ZBufferParams);
float farPlaneDepth = Linear01Depth(1, _ZBufferParams);

float nonLinear = EyeDepthToNonLinear(eyeDepth / farPlane, _ZBufferParams);
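In case EyeDepthToNonLinear is not available, a hedged sketch of one way to get the raw (non-linear) value, assuming URP’s LinearEyeDepth is defined as 1 / (_ZBufferParams.z * raw + _ZBufferParams.w) and simply inverting it:

// Invert LinearEyeDepth: raw = (1 / eye - _ZBufferParams.w) / _ZBufferParams.z
float EyeDepthToRawDepth(float eyeDepth, float4 zBufferParams)
{
	return (1.0 / eyeDepth - zBufferParams.w) / zBufferParams.z;
}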

Conservative Depth Output

SV_DepthGreaterEqual allows writing depth as long as the value is greater than or equal to the value determined during rasterisation; otherwise it is clamped to the value the rasteriser uses. SV_DepthLessEqual is the opposite: the value written must be less than or equal to the value determined during rasterisation.
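A minimal sketch of the output struct (these are standard HLSL Shader Model 5 semantics; whether early depth testing is actually kept is up to the hardware):

struct FragOut
{
	half4 color : SV_Target;
	// The shader promises the written value is >= the rasterised depth,
	// so the GPU can still run a conservative early depth test.
	float depth : SV_DepthGreaterEqual;
};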

Sampling Depth Texture

In URP, SampleSceneDepth returns the raw depth (Non-linear).

//vertex
float4 positionCS = TransformObjectToHClip(input.positionOS.xyz);
output.positionCS = positionCS;
output.screenPos = ComputeScreenPos(positionCS);

//fragment
float sceneRawDepth = SampleSceneDepth(input.screenPos.xy / input.screenPos.w);
float sceneEyeDepth = LinearEyeDepth(sceneRawDepth, _ZBufferParams);
float sceneLinearDepth = Linear01Depth(sceneRawDepth, _ZBufferParams);
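As an aside, in the fragment stage the screen UV can also be derived directly from the SV_POSITION input, which avoids passing screenPos from the vertex shader. A sketch assuming URP’s _ScaledScreenParams, similar to the pattern in URP’s depth samples:

// SV_POSITION in the fragment stage is already in pixel coordinates,
// so dividing by the render target size gives the [0, 1] screen UV.
float2 screenUV = input.positionCS.xy / _ScaledScreenParams.xy;
float sceneRawDepth = SampleSceneDepth(screenUV);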

Fog

  1. Built-in to URP: https://teodutra.com/unity/shaders/urp/graphics/2020/05/18/From-Built-in-to-URP/#summary

Soft Particles

Built-in has a “Soft Particles” option that controls whether the particle fades out near intersections with other scene geometry. It’s more resource intensive and only works on platforms that support the depth texture and when using deferred shading.
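The same fade can be reproduced in a URP transparent shader by comparing the scene’s eye depth with the fragment’s own eye depth. A hedged sketch, assuming the Depth Texture is enabled, DeclareDepthTexture.hlsl is included, screenPos comes from ComputeScreenPos as in the sampling example above, and _FadeDistance is a hypothetical material property:

// fragment
float2 screenUV = input.screenPos.xy / input.screenPos.w;
float sceneEyeDepth = LinearEyeDepth(SampleSceneDepth(screenUV), _ZBufferParams);
float fragmentEyeDepth = input.screenPos.w; // clip-space w stored by ComputeScreenPos = this fragment's eye depth
// Fade out as the particle gets close to the opaque geometry behind it.
float fade = saturate((sceneEyeDepth - fragmentEyeDepth) / _FadeDistance);
color.a *= fade;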

Frame Animation In Shader

The key is to locate the correct uv.

// we need the amount of key frames of horizontal and vertical in the frame texture
float _HorizontalAmount;
float _VerticalAmount;
float _TimeSpeed;

// we use Time (as a whole number) to find out which key frame we should use
float time = floor(_Time.y * _TimeSpeed);
float row = floor( time / _HorizontalAmount );
float col = time - row * _HorizontalAmount;

// Then we calculate the correct uv
// first we divide the uv by horizontal and vertical amount
// then we offset the uv by col and row
// it's worth mentioning that Unity's vertical direction in the texture runs from bottom to top, while the frame sequence runs from top to bottom
// that's why we need to subtract
float2 uv = float2 (input.uv.x / _HorizontalAmount, input.uv.y / _VerticalAmount);
uv.x += col / _HorizontalAmount;
uv.y -= row / _VerticalAmount;
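Finally, sample the frame texture with the computed uv (a sketch assuming the sprite sheet is bound as _MainTex with URP-style texture macros):

// Declared elsewhere: TEXTURE2D(_MainTex); SAMPLER(sampler_MainTex);
half4 frameColor = SAMPLE_TEXTURE2D(_MainTex, sampler_MainTex, uv);
return frameColor;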

Dithering & Dithered Transparency

bgolus: https://forum.unity.com/threads/depth-of-field-issues-with-transparent-render-queue.1041292/

dithering: https://www.ronja-tutorials.com/post/042-dithering/

Temporal dithering: render the surface as opaque but alpha tested, with the clip pattern changing every frame. Temporal anti-aliasing then blurs this into something that looks like transparency.
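A minimal sketch of the screen-space dither itself, loosely following the linked dithering tutorial; the 4x4 threshold table and the _BaseColor property are illustrative rather than a specific implementation:

// 4x4 Bayer-style thresholds, normalised to (0, 1).
static const float DITHER_THRESHOLDS[16] =
{
	 1.0 / 17.0,  9.0 / 17.0,  3.0 / 17.0, 11.0 / 17.0,
	13.0 / 17.0,  5.0 / 17.0, 15.0 / 17.0,  7.0 / 17.0,
	 4.0 / 17.0, 12.0 / 17.0,  2.0 / 17.0, 10.0 / 17.0,
	16.0 / 17.0,  8.0 / 17.0, 14.0 / 17.0,  6.0 / 17.0
};

half4 frag(Varyings input) : SV_Target
{
	half4 color = _BaseColor;                     // assumed material color with alpha
	uint2 pixel = uint2(input.positionCS.xy) % 4; // position inside the repeating 4x4 pattern
	float threshold = DITHER_THRESHOLDS[pixel.y * 4 + pixel.x];
	clip(color.a - threshold);                    // keep roughly alpha percent of the pixels
	return color;
}

For the temporal variant described above, the pattern would additionally be offset or re-randomised every frame so that TAA can average the result.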