Rendering is done in two steps. First, geometry is passed to the vertex function, which can alter the position and data of each vertex. Then, the result goes through the fragment function, which outputs the color of each pixel. There are no albedo, gloss, or specular properties here, so vertex and fragment shaders are often used for non-realistic materials, 2D graphics, and post-processing effects.

*Surface shaders are actually compiled down to vertex/fragment shaders.

Shader "Unlit/SolidColor"
{
   SubShader
   {
	   Pass
	   {
			  CGPROGRAM

			  #pragma vertex vert
			  #pragma fragment frag

			  struct vertInput
			  {
					float4 pos : POSITION;
			  };

			  struct vertOutput
			  {
					float4 pos : SV_POSITION;
			  };

			  vertOutput vert(vertInput input)
			  {
					vertOutput o;
					o.pos = UnityObjectToClipPos(input.pos);
					return o;
			  }

			  half4 frag(vertOutput output) : COLOR
			  {
					return half4(1.0, 0.0, 0.0, 1.0);
			  }

			  ENDCG
	   }

   }
}

vert receives the position of a vertex in object (model) coordinates. UnityObjectToClipPos converts it into clip-space coordinates; it replaces the good old mul(UNITY_MATRIX_MVP, input.pos) (model-view-projection).
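The helper is just a wrapper around the classic matrix multiply; inside a vertex function the two lines below are equivalent (a sketch, assuming UnityCG.cginc is included):

	o.pos = UnityObjectToClipPos(input.pos);	// modern helper (Unity 5.4+)
	o.pos = mul(UNITY_MATRIX_MVP, input.pos);	// legacy model-view-projection multiply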

#pragma is a pre-compiler directive. It tells the compiler which function to use as the vertex shader and which as the fragment shader.

Binding Semantics

The colon in vertInput and vertOutput indicates that the variable plays a special role. POSITION is the vertex position in object space; SV_POSITION is the screen (clip-space) position of a vertex, and it also carries Z and W components.

Input Semantics

  • POSITION, SV_POSITION: the position of a vertex in object (model) coordinates;
  • NORMAL: the normal of a vertex, in object coordinates;
  • COLOR, COLOR0, DIFFUSE, SV_TARGET: primary color information stored in the vertex;
  • COLOR1, SPECULAR: secondary color information stored in the vertex;
  • FOGCOORD: the fog coordinate;
  • TEXCOORD0, TEXCOORD1, …, TEXCOORDi: the i-th UV data stored in the vertex.
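A vertex input struct typically combines several of these semantics; a minimal sketch:

	struct vertInput
	{
		float4 pos    : POSITION;		// object-space position
		float3 normal : NORMAL;			// object-space normal
		float4 color  : COLOR;			// per-vertex color
		float2 uv     : TEXCOORD0;		// first UV channel
	};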

Output Semantics

  • POSITION, SV_POSITION, HPOS: the position of a vertex in camera (clip-space) coordinates;
  • COLOR, COLOR0, COL0, COL, SV_TARGET: the front primary color;
  • COLOR1, COL1: the front secondary color;
  • FOGC, FOG: the fog coordinate;
  • TEXCOORD0, TEXCOORD1, …, TEXCOORDi, TEXi: the i-th UV data stored in the vertex;
  • PSIZE, PSIZ: the size of the point we are drawing;
  • WPOS: the position within the window, in pixels (origin in the lower left corner).

Most hardware forces all fields of the structs to have a binding semantic. If some data does not match any semantic listed above, we either need to find another way to compute it or store it in a spare TEXCOORDi slot. Certain hardware only supports some of the semantics (for instance, WPOS is often unavailable and should be replaced with ComputeScreenPos).
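For example, if the fragment function needs the world-space position (which has no dedicated input semantic), it can be smuggled through a spare TEXCOORD slot; a sketch, assuming UnityCG.cginc is included:

	struct v2f
	{
		float4 pos      : SV_POSITION;
		float3 worldPos : TEXCOORD0;	// extra data stored in a UV slot
	};

	v2f vert(appdata_base v)
	{
		v2f o;
		o.pos = UnityObjectToClipPos(v.vertex);
		o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
		return o;
	}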

Glass Shading

Shader "Custom/GlassShader"
{
	Properties
	{
		_MainTex("Base (RGB) Trans (A)", 2D) = "white" {}
		_Color ("Color", Color) = (1,1,1,1)
		_NormalMap("Normal texture", 2D) = "bump" {}
		_Magnitude("Magnitude", Range(0, 1)) = 0.05
	}
	SubShader
	{
		Tags
		{
			"Queue" = "Transparent"
			"IgnoreProjector" = "True"
			"RenderType" = "Opaque"
		}
		ZWrite On 
		Lighting Off 
		Cull Off 
		Fog { Mode Off } 
		Blend One Zero
	
		GrabPass { "_GrabTexture" }

		Pass
		{
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			#include "UnityCG.cginc"

			sampler2D _GrabTexture;

			sampler2D _MainTex;
			fixed4 _Color;

			sampler2D _NormalMap;
			float _Magnitude;
		
			struct appdata
			{
				float4 vertex : POSITION;
				float4 color : COLOR;
				float2 texcoord : TEXCOORD0;
			};

			struct v2f
			{
				float4 vertex : POSITION;
				fixed4 color : COLOR;
				float2 texcoord : TEXCOORD0;

				float4 uvgrab : TEXCOORD1;
			};

			//vertex func
			v2f vert(appdata v)
			{
				v2f o;
				o.vertex = UnityObjectToClipPos(v.vertex);
				o.color = v.color;

				o.texcoord = v.texcoord;

				o.uvgrab = ComputeGrabScreenPos(o.vertex);
				return o;
			}

			// frag func
			half4 frag (v2f i) : COLOR
			{
				half4 mainColor = tex2D(_MainTex, i.texcoord);

				half4 normal = tex2D(_NormalMap, i.texcoord);
				half2 distortion = UnpackNormal(normal).rg;

				i.uvgrab.xy += distortion * _Magnitude;

				fixed4 col = tex2Dproj(_GrabTexture, UNITY_PROJ_COORD(i.uvgrab));
				return col * mainColor * _Color;
			}
			ENDCG
		}
	}
}

GrabPass grabs the contents of the screen where the object is about to be drawn into a texture. This texture can be used in subsequent passes to do advanced image based effects. ComputeGrabScreenPos is given a clip space position and returns texture coordinates. UNITY_PROJ_COORD is given a 4-component vector and returns a texture coordinate suitable for projected texture reads.

Render states:

  • ZWrite: whether pixels from this object are written to the depth buffer. On (the default) for solid objects; off for semitransparent effects.
  • Blend: how the pixels of this object are combined with what is already on screen.
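The most common Blend configurations are (typical values, not specific to the glass shader above):

	Blend One Zero						// opaque: the new pixel fully replaces the old one
	Blend SrcAlpha OneMinusSrcAlpha		// traditional alpha transparency
	Blend One One						// additive: glows, fire, magic effects
	Blend DstColor Zero					// multiplicative: darkens what is behind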

Distortion shaders usually create distortion via a bump map. A bump map tells how light should reflect off a surface.

Texture Maps

  • Gloss: encodes the roughness (shininess) of the surface.
  • Bump: creates an illusion of depth by faking details, using height information to tell whether a point should be raised or lowered.
  • Normal: a type of bump map that fakes details using angle information to tell which direction each point of the surface is oriented towards. Gives a better effect with lighting and shadows.
  • Displacement: similar to a height map, but it actually changes the geometry of the object by moving its vertices.
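Unlike the other maps, displacement is applied in the vertex function: the height texture is sampled with tex2Dlod (vertex shaders must select a mip level explicitly) and each vertex is pushed along its normal. A sketch, where _HeightMap and _Amount are hypothetical properties and appdata is assumed to carry NORMAL and TEXCOORD0 fields:

	v2f vert(appdata v)
	{
		v2f o;
		// sample mip level 0 of the height map; plain tex2D is unavailable in vertex shaders
		float height = tex2Dlod(_HeightMap, float4(v.texcoord.xy, 0, 0)).r;
		// move the vertex along its object-space normal before projecting
		v.vertex.xyz += v.normal * height * _Amount;
		o.vertex = UnityObjectToClipPos(v.vertex);
		return o;
	}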