Rendering-Lighting-Shading-Texture

Keywords: Surface Shading, Texture mapping

How to Understand Shading?

How to understand shading? Just as in the real world, every scene we see is lit by different lights: the sun, a lamp, a flashlight, and so on. Shading is the technique that ties lights to the colorful images we finally see. Surface shading means the surface is ‘painted’ with light; it is the process of applying a material to an object.

The Standard Local Lighting Model

I bet you have heard the term BRDF (Bidirectional Reflectance Distribution Function), which seems to have inextricable links to the standard lighting model. Yes, the BRDF appears in PBS (Physically Based Shading), which models the interaction between lights and materials more realistically. The standard local lighting model can be considered a simplified version of PBS: it is an empirical model, but it is easier to understand.

The Standard Lighting Equation Overview

The standard lighting model only cares about direct light (direct reflection). Lights are special entities without any corresponding geometry, and each is simulated as if the light were emitted from a single point. The rendering equation gives the radiance outgoing from a point in any particular direction, but the only outgoing directions that mattered in those days were the ones pointing toward the eye. Why say that? Because the real world doesn’t work like this: light may bounce dozens of times, the process is really complicated, and that cost was a luxury that could not yet be afforded.

The basic idea is to classify light coming into the eye into four distinct categories, each of which has a unique method for calculating its contribution. The four categories are:

  • Emissive contribution, denoted as $c_{emis}$. It tells the amount of radiance emitted directly from the surface in the given direction. Note that without global illumination techniques, these surfaces do not actually light up anything (except themselves).

  • Specular contribution, denoted as $c_{spec}$. It accounts for light incident directly from the light source that is scattered preferentially in the direction of a perfect “mirror bounce”.

  • Diffuse contribution, denoted as $c_{diff}$. It accounts for light incident directly from the light source that is scattered evenly in every direction.

  • Ambient contribution, denoted as $c_{amb}$. It is a fudge factor to account for all indirect light.

Fig1. lighting-component-overview(from[5])

The Ambient and Emissive Components

To model light that is reflected more than once before it enters the eye, we can use a very crude approximation known as “ambient light”. The ambient portion of the lighting equation depends only on the material properties and an ambient lighting value, which is often a single global value for the entire scene.
$$
c_{amb} = g_{amb} \cdot m_{amb}
\tag{1}
$$
The factor $m_{amb}$ is the material’s “ambient color”. This is almost always the same as the diffuse color (which is often defined by a texture map). The other factor, $g_{amb}$, is the ambient light value.

Sometimes a ray of light travels directly from the light source to the eye without striking any surface in between. The standard lighting equation accounts for such rays by assigning the material an emissive color. For example, when we render the surface of a light bulb, that surface will probably appear very bright even if there is no other light in the scene, because the bulb is emitting light.

In many situations, the emissive contribution doesn’t depend on environmental factors; it is simply the emissive color of the material.
$$
c_{emis} = m_{emis}
\tag{2}
$$

The Diffuse Component

For diffuse lighting, the location of the viewer is not relevant, since the reflections are scattered randomly; no matter where we position the camera, it is equally likely that a ray will be sent our way.

Fig2. diffuse-component(from[1])

But the direction of incidence l, which is dictated by the position of the light source relative to the surface, is important. Diffuse lighting obeys Lambert’s law: the intensity of the reflected light is proportional to the cosine of the angle between the surface normal and the rays of light.

Fig3. lambert-law(from[5])

We calculate the diffuse component according to Lambert’s Law:
$$
c_{diff} = c_{light} \cdot m_{diff} \cdot max(0,n \cdot l)
\tag{3}
$$
As before, n is the surface normal and l is a unit vector that points towards the light source. The factor $m_{diff}$ is the material’s diffuse color, which is the value most people think of when they think of the “color” of an object. The diffuse material color often comes from a texture map. The diffuse color of the light source is $c_{light}$.

One thing needs attention: the max(). Because the dot product of the normal and the light direction can be negative, we clamp it with $max(0,n \cdot l)$ so that the object is not lit by rays coming from behind it.

The Specular Component

The specular component is what gives surfaces a “shiny” appearance. If you don’t know what a specular highlight is, think of the highlights drawn in characters’ eyes in anime:

Fig4. specular-in-eyes

Now let’s see how the standard model calculates the specular contribution. For convenience, we assume that all of these vectors are unit vectors.
Fig5. specular-component(from[1])

  • n is the local outward-pointing surface normal

  • v points towards the viewer.

  • l points towards the light source.

  • r is the reflection vector, which is the direction of a “perfect mirror bounce.” It’s the result of reflecting l about n.

  • $\theta$ is the angle between r and v.

Of the four vectors, the reflection vector is the only one that must be computed; as Fig6 shows, it is given by

Fig6. reflection-vector(from[1])

The Phong model for specular reflection is:
$$
c_{spec} = c_{light} \cdot m_{spec} \cdot max(0,v \cdot r)^{m_{gls}}
\tag{4}
$$
$$
r = 2(n \cdot l)n-l
$$

$m_{gls}$ is the glossiness of the material, also known as the Phong exponent, specular exponent, or simply the material shininess. It controls how wide the “hotspot” is: a smaller $m_{gls}$ produces a larger hotspot with a more gradual falloff, and a larger $m_{gls}$ produces a tight hotspot with sharp falloff. $m_{spec}$ is also related to “shininess”: it is the material’s specular color. While $m_{gls}$ controls the size of the hotspot, $m_{spec}$ controls its intensity and color. $c_{light}$ is essentially the “color” of the light, which carries both its hue and intensity.

But!! We usually use the Blinn-Phong model instead of the Phong model.

Fig7. blinn-phong-model(from[1])

The Blinn-Phong model can be faster to implement in hardware than the Phong model if the viewer and light source are far enough away from the object to be treated as constant directions, since then h is constant and only needs to be computed once. But when v or l cannot be considered constant, the Phong calculation might be faster.
The Blinn-Phong model for specular reflection is:
$$
c_{spec} = c_{light} \cdot m_{spec} \cdot max(0,n \cdot h)^{m_{gls}}
\tag{5}
$$
$$
h = \frac{v + l}{|v + l|}
$$

In actual code, all the direction vectors in equations (1)-(5) above should be unit vectors.

Limitations of the Standard Model

Why learn about this ancient history? First, it isn’t exactly ancient history; it’s alive and well. Second, the standard local lighting model is one that content creators can understand and use. A final reason to learn it is that many newer models bear similarities to the standard model, and you cannot know when to use more advanced lighting models without understanding the old standard.

Fig8. all-lighting-equation(from[1])

Since this model combines all the components above, the whole thing is often just called the Blinn-Phong model. Actually, there are several important physical phenomena not properly captured by Blinn-Phong, such as Fresnel reflectance. (We’ll discuss the PBS part in the Appendix.)
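To make Fig8 concrete, here is a minimal CPU-side sketch. It is not from any engine: the little Vec3 type and its helper operators are placeholders I’m assuming, and shade() simply adds up the four contributions for a single light using the Blinn-Phong specular term.

//c++
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  operator*(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; } // componentwise (color * color)
static Vec3  operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 a)    { return a * (1.0f / std::sqrt(dot(a, a))); }

// All direction vectors (n, l, v) are expected to be unit vectors.
Vec3 shade(Vec3 n, Vec3 l, Vec3 v,
           Vec3 gAmb, Vec3 cLight,
           Vec3 mEmis, Vec3 mAmb, Vec3 mDiff, Vec3 mSpec, float mGls)
{
    Vec3  h        = normalize(l + v);                          // Blinn-Phong half vector
    float diffTerm = std::max(0.0f, dot(n, l));                 // Lambert's law, clamped
    float specTerm = std::pow(std::max(0.0f, dot(n, h)), mGls); // glossiness exponent

    Vec3 cEmis = mEmis;                      // equation (2)
    Vec3 cAmb  = gAmb * mAmb;                // equation (1)
    Vec3 cDiff = cLight * mDiff * diffTerm;  // equation (3)
    Vec3 cSpec = cLight * mSpec * specTerm;  // equation (5)
    return cEmis + cAmb + cDiff + cSpec;
}

This is essentially what the Unity shaders later in this article evaluate per pixel (minus the emissive term).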

Flat & Gouraud Shading

This part is about shading frequencies. Are you confused? If not, that’s impressive, because I was confused both the first time and the second time I learned this. But now I’ve got it, so come with me.

On modern shader-based hardware, lighting calculations are usually done on a per-pixel basis. By this we mean that for each pixel, we determine a surface normal (whether by interpolating the vertex normal across the face or by fetching it from a bump map), and then we perform the full lighting equation using this surface normal. This is per-pixel lighting, and the technique of interpolating vertex normals across the face is sometimes called Phong shading, not to be confused with the Phong calculation for specular reflection.

The alternative to Phong shading is to perform the lighting equation less frequently (per face, or per vertex). These two techniques are known as flat shading and Gouraud shading, respectively. Flat shading is almost never used in practice except in software rendering, because most modern methods of sending geometry efficiently to the hardware do not provide any face-level data whatsoever. Gouraud shading, in contrast, still has some limited use on some platforms. Some important general principles can be gleaned from studying these methods, so let’s examine their results.

Phong shading ≠ Phong Reflection Model ≠ Blinn Phong Reflection Model

Fig9. shading-frequency(from[5])

The table below lists the differences among them.

| Phong shading (per-pixel lighting) | Gouraud shading (per-vertex lighting) | Flat shading (per-face lighting) |
| --- | --- | --- |
| Interpolate normal vectors across each triangle | Interpolate colors from the vertices across the triangle | The triangle face is flat: one normal vector |
| Compute the full shading model at each pixel | Each vertex has a normal vector; shade once per vertex | Not good for smooth surfaces |

Phong shading is the one used most often.
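To make the three shading frequencies concrete, here is a minimal sketch of what each one evaluates per pixel. It reuses the hypothetical Vec3 helpers from the earlier sketch; LightFn stands in for the full lighting equation, and the barycentric weights come from the rasterizer (see the Appendix).

//c++
#include <functional>

// LightFn stands in for the full lighting equation (e.g. shade() above),
// evaluated with a given surface normal and position.
using LightFn = std::function<Vec3(Vec3 normal, Vec3 position)>;

// Flat shading: one lighting evaluation per face, reused for every pixel.
Vec3 flatPixel(Vec3 faceNormal, Vec3 faceCenter, LightFn lightFn) {
    return lightFn(faceNormal, faceCenter);
}

// Gouraud shading: light the three vertices once, then interpolate the colors.
// bary holds the pixel's barycentric weights (alpha, beta, gamma).
Vec3 gouraudPixel(const Vec3 n[3], const Vec3 p[3], Vec3 bary, LightFn lightFn) {
    Vec3 c0 = lightFn(n[0], p[0]);
    Vec3 c1 = lightFn(n[1], p[1]);
    Vec3 c2 = lightFn(n[2], p[2]);
    return c0 * bary.x + c1 * bary.y + c2 * bary.z;
}

// Phong shading: interpolate the normal (and position), then run the full
// lighting equation at every pixel.
Vec3 phongPixel(const Vec3 n[3], const Vec3 p[3], Vec3 bary, LightFn lightFn) {
    Vec3 normal = normalize(n[0] * bary.x + n[1] * bary.y + n[2] * bary.z);
    Vec3 pos    = p[0] * bary.x + p[1] * bary.y + p[2] * bary.z;
    return lightFn(normal, pos);
}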

Talk is cheap, show me the code. Here’s an example (from [3]):

//per-pixel lighting
Shader "Unity Shaders Book/Chapter 6/Blinn-Phong Use Built-in Functions" {
Properties {
_Diffuse ("Diffuse", Color) = (1, 1, 1, 1)
_Specular ("Specular", Color) = (1, 1, 1, 1)
_Gloss ("Gloss", Range(1.0, 500)) = 20
}
SubShader {
Pass {
Tags { "LightMode"="ForwardBase" }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "Lighting.cginc"

fixed4 _Diffuse;
fixed4 _Specular;
float _Gloss;

struct a2v {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct v2f {
float4 pos : SV_POSITION;
float3 worldNormal : TEXCOORD0;
float4 worldPos : TEXCOORD1;
};

v2f vert(a2v v) {
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);

// Use the built-in function to compute the normal in world space
o.worldNormal = UnityObjectToWorldNormal(v.normal);

o.worldPos = mul(unity_ObjectToWorld, v.vertex);

return o;
}

fixed4 frag(v2f i) : SV_Target {
fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz;

fixed3 worldNormal = normalize(i.worldNormal);
// Use the built-in function to compute the light direction in world space
// Remember to normalize the result
fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));

fixed3 diffuse = _LightColor0.rgb * _Diffuse.rgb * max(0, dot(worldNormal, worldLightDir));

// Use the built-in function to compute the view direction in world space
// Remember to normalize the result
fixed3 viewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));

fixed3 halfDir = normalize(worldLightDir + viewDir);
fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(max(0, dot(worldNormal, halfDir)), _Gloss);

return fixed4(ambient + diffuse + specular, 1.0);
}

ENDCG
}
}
FallBack "Specular"
}

The result is:

Fig9. phong-shading(from[5])

Light Sources

If you have used Unity, you won’t forget the different kinds of lights.

Fig10. unity-lights

Standard Abstract Light Types

  • A point light source represents light that emanates from a single point outward in all directions. Point lights are also called omni lights (short for “omnidirectional”) or spherical lights. A point light has a position and color, which controls not only the hue of the light, but also its intensity. Point lights can be used to represent many common light sources, such as light bulbs, lamps, fires, and so forth.

point light = omni light = spherical light

Fig10. spot-light(from[1])

  • A spot light is used to represent light from a specific location in a specific direction. These are used for lights such as flashlights, headlights, and of course, spot lights~ A conical spot light has a circular “bottom”; the width of the cone is defined by a falloff angle (not to be confused with the falloff distance), and there is also an inner angle that measures the size of the hotspot.

Fig11. conical-spot-light(from[1])

  • A directional light represents light emanating from a point in space sufficiently far away that all the rays of light involved in lighting the scene (or at least the object we are currently considering) can be considered parallel. Directional lights usually do not have a position, at least as far as lighting calculations are concerned, and they usually do not attenuate. Think of the sun and the moon in the real world.

Directional light = parallel light = distant light

Fig12. directional-light(from[4])

  • An area light is only useful for baking, so we won’t talk about it here.

Here is an intuitive comparison of the different lights.

Fig13. light-effects(from[4])
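As a rough sketch of how these abstract light types differ in code (illustrative only: the Light struct and evaluateLight below are my own assumptions, not any engine’s API, and the Vec3 helpers come from the earlier sketch), the per-point light direction and attenuation might be computed like this:

//c++
#include <algorithm>
#include <cmath>

enum class LightType { Directional, Point, Spot };

struct Light {
    LightType type;
    Vec3  color;          // hue and intensity combined
    Vec3  position;       // unused for directional lights
    Vec3  direction;      // direction the light shines in (unit vector); used by directional and spot lights
    float falloffAngle;   // outer cone angle of a spot light (radians)
    float hotspotAngle;   // inner cone angle of a spot light (radians)
};

// Computes l (unit vector from the surface point p toward the light)
// and an attenuation factor in [0,1].
void evaluateLight(const Light& light, Vec3 p, Vec3& l, float& atten)
{
    if (light.type == LightType::Directional) {
        l = light.direction * -1.0f;   // constant direction, no attenuation
        atten = 1.0f;
        return;
    }
    Vec3 toLight = light.position - p;
    float dist = std::sqrt(dot(toLight, toLight));
    l = toLight * (1.0f / dist);
    atten = 1.0f / (dist * dist);      // inverse-square falloff (see the next section)
    if (light.type == LightType::Spot) {
        // Fade from full brightness inside the hotspot angle to zero at the falloff angle.
        float cosAngle = std::max(-1.0f, std::min(1.0f, dot(l * -1.0f, light.direction)));
        float angle = std::acos(cosAngle);
        float fade  = (light.falloffAngle - angle) /
                      std::max(1e-6f, light.falloffAngle - light.hotspotAngle);
        atten *= std::max(0.0f, std::min(1.0f, fade));
    }
}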

Light Attenuation

In the real world, the intensity of a light is inversely proportional to the square of the distance between the light and the object, as
$$
\frac{i_1}{i_2} = (\frac{d_2}{d_1})^2
\tag{6}
$$
where i is the radiant flux (the radiant power per unit area) and d is the distance. This will be mentioned again, under radiometry, in the ray tracing article. Here you just need to know that the final amount of emitted light is obtained by multiplying the light color by its intensity:

light amount = light color * light intensity

Fig14. light-falloff(from[5])
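For example, plugging $d_1 = d$ and $d_2 = 2d$ into the relation above, moving twice as far from the light leaves only a quarter of the intensity:
$$
\frac{i_1}{i_2} = \left(\frac{d_2}{d_1}\right)^2 = \left(\frac{2d}{d}\right)^2 = 4
$$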

Actually, I haven’t used light falloff in my own code yet. Also, this blog is for novices, so let’s continue with a simple practice and finish this part. Just remember that this is only the very basics.

Talk is cheap, show me the code. (from [3])

Shader "Unity Shaders Book/Chapter 9/Forward Rendering" {
Properties {

_Diffuse ("Diffuse", Color) = (1, 1, 1, 1)
_Specular ("Specular", Color) = (1, 1, 1, 1)
_Gloss ("Gloss", Range(8.0, 256)) = 20
}
SubShader {
Tags { "RenderType"="Opaque" }

Pass {
// Pass for ambient light & first pixel light (directional light)
Tags { "LightMode"="ForwardBase" }

CGPROGRAM

// Apparently we need to add this declaration:
// this pragma ensures that lighting variables such as light attenuation are assigned correctly in this shader
#pragma multi_compile_fwdbase

#pragma vertex vert
#pragma fragment frag

#include "Lighting.cginc"

fixed4 _Diffuse;
fixed4 _Specular;
float _Gloss;

struct a2v {
float4 vertex : POSITION;
float3 normal : NORMAL;
};

struct v2f {
float4 pos : SV_POSITION;
float3 worldNormal : TEXCOORD0;
float3 worldPos : TEXCOORD1;
};

v2f vert(a2v v) {
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);

o.worldNormal = UnityObjectToWorldNormal(v.normal);

o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;

return o;
}

fixed4 frag(v2f i) : SV_Target {

fixed3 worldNormal = normalize(i.worldNormal);
fixed3 worldLightDir = normalize(_WorldSpaceLightPos0.xyz);

fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz;

fixed3 diffuse = _LightColor0.rgb * _Diffuse.rgb * max(0, dot(worldNormal, worldLightDir));

fixed3 viewDir = normalize(_WorldSpaceCameraPos.xyz - i.worldPos.xyz);
fixed3 halfDir = normalize(worldLightDir + viewDir);
fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(max(0, dot(worldNormal, halfDir)), _Gloss);

fixed atten = 1.0;

return fixed4(ambient + (diffuse + specular) * atten, 1.0);
}

ENDCG
}

Pass {
// Pass for other pixel lights
Tags { "LightMode"="ForwardAdd" }

Blend One One

CGPROGRAM

// Apparently we need to add this declaration:
// this pragma ensures that the additional pass can access the correct lighting variables
#pragma multi_compile_fwdadd

#pragma vertex vert
#pragma fragment frag

#include "Lighting.cginc"
#include "AutoLight.cginc"

fixed4 _Diffuse;
fixed4 _Specular;
float _Gloss;

struct a2v {
float4 vertex : POSITION;
float3 normal : NORMAL;
};

struct v2f {
float4 pos : SV_POSITION;
float3 worldNormal : TEXCOORD0;
float3 worldPos : TEXCOORD1;
};

v2f vert(a2v v) {
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);

o.worldNormal = UnityObjectToWorldNormal(v.normal);

o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;

return o;
}

fixed4 frag(v2f i) : SV_Target {

fixed3 worldNormal = normalize(i.worldNormal);

// If the light currently being processed is a directional light (a directional light has no fixed position)
#ifdef USING_DIRECTIONAL_LIGHT
fixed3 worldLightDir = normalize(_WorldSpaceLightPos0.xyz);
// If it is a point light or a spot light, the light direction is computed from its position
#else
fixed3 worldLightDir = normalize(_WorldSpaceLightPos0.xyz - i.worldPos.xyz);
#endif

fixed3 diffuse = _LightColor0.rgb * _Diffuse.rgb * max(0, dot(worldNormal, worldLightDir));

fixed3 viewDir = normalize(_WorldSpaceCameraPos.xyz - i.worldPos.xyz);
fixed3 halfDir = normalize(worldLightDir + viewDir);
fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(max(0, dot(worldNormal, halfDir)), _Gloss);

// For a directional light, the attenuation is 1
#ifdef USING_DIRECTIONAL_LIGHT
fixed atten = 1.0;
#else
//if it is a point light, transform the vertex position from world space to light space
//and sample the attenuation texture to get the attenuation value
#if defined (POINT)
float3 lightCoord = mul(unity_WorldToLight, float4(i.worldPos, 1)).xyz;
fixed atten = tex2D(_LightTexture0, dot(lightCoord, lightCoord).rr).UNITY_ATTEN_CHANNEL;
//if spot light
#elif defined (SPOT)
float4 lightCoord = mul(unity_WorldToLight, float4(i.worldPos, 1));
fixed atten = (lightCoord.z > 0) * tex2D(_LightTexture0, lightCoord.xy / lightCoord.w + 0.5).w * tex2D(_LightTextureB0, dot(lightCoord, lightCoord).rr).UNITY_ATTEN_CHANNEL;
#else
fixed atten = 1.0;
#endif
#endif

return fixed4((diffuse + specular) * atten, 1.0);
}

ENDCG
}
}
FallBack "Specular"
}

If you want to know the rendering order, you can use the Frame Debugger; this tool is really useful. I think this part will be mentioned again as the study goes further. This is also my learning curve, and maybe it matches yours too. So go on with my articles.

Texture Mapping

Finally, here comes texture mapping! I am already gearing up and eager to try, because I really want to go over shadow mapping and opacity blending again, and there are too many things we cannot move forward with without a knowledge of textures.

What is a Texture?

There is much more to the appearance of an object than its shape. Different objects are different colors and have different patterns on their surface. One simple yet powerful way to capture these qualities is through texture mapping. A texture map is a bitmap image that is “pasted” to the surface of an object.

Fig15. texture(from[5])

A bitmap image is a pixel (raster) image, as opposed to a vector image.

So a texture map is just a regular bitmap that is applied onto the surface of a model. Exactly how does this work? The key idea is that, at each point on the surface of the mesh, we can obtain texture-mapping coordinates, which define the 2D location in the texture map that corresponds to this 3D location. Traditionally, these coordinates are assigned the variables (u,v), where u is the horizontal coordinate and v is the vertical coordinate; thus texture-mapping coordinates are often called UV coordinates or simply UVs.

One thing needs attention: the origin can be in the upper left-hand corner of the image, which is the DirectX-style convention, or in the lower left-hand corner, which is the OpenGL convention. In Unity, the engine has solved this problem for us: Unity uniformly uses the lower left-hand corner, like OpenGL.

Fig16. uv-coordinates-in-unity(from[3])

Although bitmaps come in different sizes, UV coordinates are normalized such that the mapping space ranges from 0 to 1 over the entire width (u) or height (v) of the image, rather than depending on the image dimensions. We typically compute or assign UV coordinates only at the vertex level, and the UV coordinates at an arbitrary interior position on a face are obtained through interpolation (see the Appendix).

So the pseudo-code of UV mapping should be:

//c++
for each rasterized screen sample (x,y): //sample (x,y)-usually a pixel's center
(u,v) = evaluate texture coordinate at (x,y) //using barycentric coordinates
texcolor = texture.sample(u,v);
set sample’s color to texcolor; //usually the diffuse albedo Kd(recall the Blinn-Phong reflectance model)

Texture Magnification

UV coordinates outside of the range [0,1] are allowed, and in fact are quite useful. Such coordinates are interpreted in a variety of ways. The most common addressing modes (Wrap Mode) are repeat (also known as tile or wrap) and clamp.

When repeating is used, the integer portion is discarded and only the fractional portion is used, causing the texture to repeat. Under clamping, when a coordinate outside the range [0,1] is used to access a bitmap, it is clamped in range. This has the effect of streaking the edge pixels of the bitmap outwards. The mesh in both cases is identical: a single polygon with four vertices. And the meshes have identical UV coordinates. The only difference is how coordinates outside the [0,1] range are interpreted. See Fig17.

Fig17. uv-warp-mode(from[1])
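A minimal sketch of the two addressing modes in plain C++ (not any particular graphics API):

//c++
#include <algorithm>
#include <cmath>

// Repeat (tile/wrap): keep only the fractional part, so 1.25 -> 0.25 and -0.25 -> 0.75.
float wrapRepeat(float u) {
    return u - std::floor(u);
}

// Clamp: coordinates outside [0,1] stick to the edge, streaking the border texels outward.
float wrapClamp(float u) {
    return std::min(1.0f, std::max(0.0f, u));
}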

If you have used Unity, this will not be strange to you. See the example below (from [3]).

Fig17-1. wrap-mode-repeat(from[3])
Fig17-2. wrap-mode-clamp(from[3])

The shader code on the material of the Quad is:

Shader "Unity Shaders Book/Chapter 7/Texture Properties" {
Properties {
_MainTex ("Main Tex", 2D) = "white" {}
}
SubShader {
Pass {
Tags { "LightMode"="ForwardBase" }

CGPROGRAM

#pragma vertex vert
#pragma fragment frag

#include "Lighting.cginc"

sampler2D _MainTex;
float4 _MainTex_ST;

struct a2v {
float4 vertex : POSITION;
float4 texcoord : TEXCOORD0;
};

struct v2f {
float4 position : SV_POSITION;
float2 uv : TEXCOORD0;
};

v2f vert(a2v v) {
v2f o;

// Transform the vertex from object space to projection space
o.position = UnityObjectToClipPos(v.vertex);

// Note that v.texcoord here is the texture coordinate already defined on the model
// v.texcoord should lie in [0,1]; changing the tiling and offset on the material panel can push o.uv outside [0,1]
//o.uv = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;
o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);

return o;
}

fixed4 frag(v2f i) : SV_Target {
// Sample the texture according to the uv coordinates
fixed4 c = tex2D(_MainTex, i.uv);

return fixed4(c.rgb, 1.0);
}

ENDCG
}
}
FallBack "Diffuse"
}

I think you have noticed the line //o.uv = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;. Look at the gif below: _MainTex_ST.xy holds the tiling and _MainTex_ST.zw holds the offset.

Fig18. tilling-offset

Also, I think you have noticed the Mipmap and Filter Mode properties in the panel of Fig17-1. What do these mean? In Unity the png here is 512*512, which happens to match the Quad exactly. But what if the texture (png) is too small? It’s easy to imagine: you have an image, but the object is giant, so you need some method to ‘paste’ the texture onto the object’s surface without obvious low resolution or distortion.

Fig19. filter-mode(from[5])

Here I want to bring in bilinear interpolation (see the Appendix).

Then what if the texture (png) is too large? Here comes the mipmap. (This part is a little hard for me, so I’ll skip it for now and come back to it later.)

Different Types of Texture Mapping

There are many types of texture mapping.

Bump Mapping

Bump mapping is a general term that can refer to at least two different methods of controlling the surface normal per texel.

  • A height map is a grayscale map, in which the intensity indicates the local “elevation” of the surface. Lighter colors indicate portions of the surface that are “bumped out,” and darker colors are areas where the surface is “bumped in.” Height maps are attractive because they are very easy to author, but they are not ideal for real-time purposes because the normal is not directly available; instead, it must be calculated from the intensity gradient. (We will talk about this again under Displacement Mapping.)

  • A bump map, in the form that is most common nowadays, is a normal map, used for normal mapping.

Fig20. two-bumping-methods(from[5])

Normal Mapping

In a normal map, the coordinates of the surface normal are directly encoded in the map. How can a bump map store the surface normals of an object? Through color, of course. The most basic way is to encode x, y, and z in the red, green, and blue channels. Since each normal component lies in [-1,1] and each color channel lies in [0,1], we need a mapping:
$$
pixel = \frac{normal + 1}{2}
$$
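A minimal sketch of that encode/decode step, reusing the hypothetical Vec3 from the earlier sketch (real normal maps additionally quantize each channel to 8 bits):

//c++
// Encode: map each normal component from [-1,1] into the [0,1] color range.
Vec3 encodeNormal(Vec3 n) {
    return Vec3{ (n.x + 1.0f) * 0.5f, (n.y + 1.0f) * 0.5f, (n.z + 1.0f) * 0.5f };
}

// Decode: the inverse mapping, which is what UnpackNormal-style shader helpers do.
Vec3 decodeNormal(Vec3 pixel) {
    return Vec3{ pixel.x * 2.0f - 1.0f, pixel.y * 2.0f - 1.0f, pixel.z * 2.0f - 1.0f };
}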
Seems easy~ the bump map stores the model-space normal vectors as pixels (RGB). Voila! If only it were that easy.

Real-world objects exhibit a great deal of symmetry and self-similarity, and patterns are often repeated. For example, a box often has similar bumps and notches on more than one side. Because of this, it is currently a more efficient use of the same amount of memory (and artist time) to increase the resolution of the map and reuse the same normal map (or perhaps just portions of it) on multiple models (or perhaps just on multiple places in the same model). Of course, the same principle applies to any sort of texture map, not just normal maps. But normal maps are different in that they cannot be arbitrarily rotated or mirrored because they encode a vector. Imagine using the same normal map on all six sides of a cube. While shading a point on the surface of the cube, we will fetch a texel from the map and decode it into a 3D vector. A particular normal map texel on the top will produce a surface normal that points in the same direction as that same texel on the bottom of the cube, when they should be opposites! We need some other kind of information to tell us how to interpret the normal we get from the texture, and this extra bit of information is stored in the basis vectors.

So here comes tangent space.
In tangent space, +z points out from the surface; the +z basis vector is just the surface normal n. The x basis vector is known as the tangent vector, which we’ll denote t, and it points in the direction of increasing u in texture space. Similarly, the y basis vector, known as the binormal and denoted here as b, corresponds to the direction of increasing v, although whether this motion is “up” or “down” in texture space depends on the convention for the origin in (u,v) space, which can differ, as we discussed earlier. The coordinates of the tangent and binormal are given in model space.

Fig21. tangent-space(from[3])

And how do we calculate these basis vectors, averaging the per-triangle results into the vertices? Here are the formula and code (from [1]).
We are given a triangle with vertex positions $p_0 = (x_0, y_0, z_0)$, $p_1 = (x_1, y_1, z_1)$, $p_2 = (x_2, y_2, z_2)$, and at those vertices we have the UV coordinates $(u_0, v_0)$, $(u_1, v_1)$, $(u_2, v_2)$. Define
$$
q_1 = p_1 - p_0, \quad s_1 = u_1 - u_0, \quad t_1 = v_1 - v_0
$$
$$
q_2 = p_2 - p_0, \quad s_2 = u_2 - u_0, \quad t_2 = v_2 - v_0
$$
Then
$$
tangent = t_2 q_1 - t_1 q_2, \qquad binormal = -s_2 q_1 + s_1 q_2
$$

//c++
struct Vertex {
    Vector3 pos;
    float u, v;
    Vector3 normal;
    Vector3 tangent;
    float det; // determinant of tangent transform (-1 if mirrored)
};
struct Triangle {
    int vertexIndex[3];
};
struct TriangleMesh {
    int vertexCount;
    Vertex *vertexList;
    int triangleCount;
    Triangle *triangleList;

    void computeBasisVectors() {
        // Note: we assume vertex normals are valid
        Vector3 *tempTangent  = new Vector3[vertexCount];
        Vector3 *tempBinormal = new Vector3[vertexCount];
        // First clear out the accumulators
        for (int i = 0; i < vertexCount; ++i) {
            tempTangent[i].zero();
            tempBinormal[i].zero();
        }
        // Average in the basis vectors for each face
        // into its neighboring vertices
        for (int i = 0; i < triangleCount; ++i) {
            // Get shortcuts
            const Triangle &tri = triangleList[i];
            const Vertex &v0 = vertexList[tri.vertexIndex[0]];
            const Vertex &v1 = vertexList[tri.vertexIndex[1]];
            const Vertex &v2 = vertexList[tri.vertexIndex[2]];
            // Compute intermediate values
            Vector3 q1 = v1.pos - v0.pos;
            Vector3 q2 = v2.pos - v0.pos;
            float s1 = v1.u - v0.u;
            float s2 = v2.u - v0.u;
            float t1 = v1.v - v0.v;
            float t2 = v2.v - v0.v;
            // Compute basis vectors for this triangle
            Vector3 tangent  = t2*q1 - t1*q2;  tangent.normalize();
            Vector3 binormal = -s2*q1 + s1*q2; binormal.normalize();
            // Add them into the running totals for neighboring verts
            for (int j = 0; j < 3; ++j) {
                tempTangent[tri.vertexIndex[j]]  += tangent;
                tempBinormal[tri.vertexIndex[j]] += binormal;
            }
        }
        // Now fill in the values into the vertices
        for (int i = 0; i < vertexCount; ++i) {
            Vertex &v = vertexList[i];
            Vector3 t = tempTangent[i];
            // Ensure the tangent is perpendicular to the normal
            // (Gram-Schmidt), then keep the normalized version
            t -= v.normal * dot(t, v.normal);
            t.normalize();
            v.tangent = t;
            // Figure out if we're mirrored
            if (dot(cross(v.normal, t), tempBinormal[i]) < 0.0f) {
                v.det = -1.0f; // we're mirrored
            } else {
                v.det = +1.0f; // not mirrored
            }
        }
        // Clean up
        delete[] tempTangent;
        delete[] tempBinormal;
    }
};

In Unity, you can evaluate the lighting model in world space using a normal (bump) texture. Here’s an example (from [3]).

Shader "Unity Shaders Book/Chapter 7/Normal Map In World Space" {
Properties {
_Color ("Color Tint", Color) = (1, 1, 1, 1)
_MainTex ("Main Tex", 2D) = "white" {}
_BumpMap ("Normal Map", 2D) = "bump" {}
_BumpScale ("Bump Scale", Float) = 1.0
_Specular ("Specular", Color) = (1, 1, 1, 1)
_Gloss ("Gloss", Range(8.0, 256)) = 20
}
SubShader {
Pass {
Tags { "LightMode"="ForwardBase" }

CGPROGRAM

#pragma vertex vert
#pragma fragment frag

#include "Lighting.cginc"

fixed4 _Color;
sampler2D _MainTex;
float4 _MainTex_ST;
sampler2D _BumpMap;
float4 _BumpMap_ST;
float _BumpScale;
fixed4 _Specular;
float _Gloss;

struct a2v {
float4 vertex : POSITION;
float3 normal : NORMAL;
float4 tangent : TANGENT;
float4 texcoord : TEXCOORD0;
};

struct v2f {
float4 pos : SV_POSITION;
float4 uv : TEXCOORD0;
float4 TtoW0 : TEXCOORD1;
float4 TtoW1 : TEXCOORD2;
float4 TtoW2 : TEXCOORD3;
};

v2f vert(a2v v) {
v2f o;
o.pos = UnityObjectToClipPos(v.vertex);

o.uv.xy = v.texcoord.xy * _MainTex_ST.xy + _MainTex_ST.zw;
o.uv.zw = v.texcoord.xy * _BumpMap_ST.xy + _BumpMap_ST.zw;

float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
fixed3 worldNormal = UnityObjectToWorldNormal(v.normal);
fixed3 worldTangent = UnityObjectToWorldDir(v.tangent.xyz);
fixed3 worldBinormal = cross(worldNormal, worldTangent) * v.tangent.w;

// Compute the matrix that transform directions from tangent space to world space
// Put the world position in w component for optimization
o.TtoW0 = float4(worldTangent.x, worldBinormal.x, worldNormal.x, worldPos.x);
o.TtoW1 = float4(worldTangent.y, worldBinormal.y, worldNormal.y, worldPos.y);
o.TtoW2 = float4(worldTangent.z, worldBinormal.z, worldNormal.z, worldPos.z);

return o;
}

fixed4 frag(v2f i) : SV_Target {
// Get the position in world space
float3 worldPos = float3(i.TtoW0.w, i.TtoW1.w, i.TtoW2.w);
// Compute the light and view dir in world space
fixed3 lightDir = normalize(UnityWorldSpaceLightDir(worldPos));
fixed3 viewDir = normalize(UnityWorldSpaceViewDir(worldPos));

// Get the normal in tangent space
fixed3 bump = UnpackNormal(tex2D(_BumpMap, i.uv.zw));
bump.xy *= _BumpScale;
bump.z = sqrt(1.0 - saturate(dot(bump.xy, bump.xy)));

// Transform the normal from tangent space to world space
bump = normalize(half3(dot(i.TtoW0.xyz, bump), dot(i.TtoW1.xyz, bump), dot(i.TtoW2.xyz, bump)));

fixed3 albedo = tex2D(_MainTex, i.uv).rgb * _Color.rgb;

fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz * albedo;

fixed3 diffuse = _LightColor0.rgb * albedo * max(0, dot(bump, lightDir));

fixed3 halfDir = normalize(lightDir + viewDir);
fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(max(0, dot(bump, halfDir)), _Gloss);

return fixed4(ambient + diffuse + specular, 1.0);
}

ENDCG
}
}
FallBack "Specular"
}

Fig22. unity-normal-map(from[3])

Displacement Mapping

A height map (or true displacement map) can easily be painted in Photoshop. Since normal maps are clear by now, displacement won’t be hard for you: a displacement map actually changes the geometry using a texture. A common simplification is that the displacement is along the direction of the surface normal.

Fig23. displacement-map(from[2])

$$
p\prime = p + f(p)n.
$$
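As a minimal CPU-side sketch of this idea (reusing the Vertex/Vector3 types from the tangent-basis listing above; sampleHeight and scale are assumptions standing in for the height-map lookup):

//c++
#include <vector>

// p' = p + f(p) * n : push each vertex along its normal by the sampled height.
void displaceVertices(std::vector<Vertex>& vertices,
                      float (*sampleHeight)(float u, float v), // height-map lookup, returns [0,1]
                      float scale)                             // world-space height of a fully "white" texel
{
    for (Vertex& vert : vertices) {
        float h = sampleHeight(vert.u, vert.v);   // f(p), fetched via the vertex's UVs
        vert.pos += vert.normal * (h * scale);    // displace along the surface normal
    }
}

In practice, real-time displacement is usually done in a tessellation or vertex shader rather than on the CPU, but the operation is the same.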

Environment Mapping

Often we want to have a texture-mapped background and for objects to have specular reflections of that background. This can be accomplished using environment maps. There are many ways to store environment maps; here is the most common method, the cube map.

If you have used Unity, then you’ll be familiar with the cube map: yes, the skybox~ Ideally, we want to generate a corresponding cube map for objects at different positions in the scene, so the smart way is to write a script. Here’s an example:

using UnityEngine;
using UnityEditor;
using System.Collections;

public class RenderCubemapWizard : ScriptableWizard {

public Transform renderFromPosition;
public Cubemap cubemap;

void OnWizardUpdate () {
helpString = "Select transform to render from and cubemap to render into";
isValid = (renderFromPosition != null) && (cubemap != null);
}

void OnWizardCreate () {
// create temporary camera for rendering
GameObject go = new GameObject( "CubemapCamera");
go.AddComponent<Camera>();
// place it on the object
go.transform.position = renderFromPosition.position;
// render into cubemap
go.GetComponent<Camera>().RenderToCubemap(cubemap);

// destroy temporary camera
DestroyImmediate( go );
}

[MenuItem("GameObject/Render into Cubemap")]
static void RenderCubemap () {
ScriptableWizard.DisplayWizard<RenderCubemapWizard>(
"Render cubemap", "Render!");
}
}
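Conceptually, sampling a cube map at render time just means turning a direction (typically the reflection of the view ray about the surface normal) into a face index plus 2D coordinates on that face. Here is a rough sketch, reusing the hypothetical Vec3 from earlier; note that the exact per-face (s,t) orientation and flips are convention-dependent (OpenGL and DirectX differ), so this only shows the idea:

//c++
#include <cmath>

// Reflect the incident view direction v about the unit normal n.
Vec3 reflectDir(Vec3 v, Vec3 n) {
    float d = dot(v, n);
    return Vec3{ v.x - 2.0f * d * n.x, v.y - 2.0f * d * n.y, v.z - 2.0f * d * n.z };
}

// Pick the cube face from the dominant axis of the direction, then project the
// other two components onto that face and remap them from [-1,1] to [0,1].
void cubeMapLookup(Vec3 dir, int& face, float& s, float& t) {
    float ax = std::fabs(dir.x), ay = std::fabs(dir.y), az = std::fabs(dir.z);
    if (ax >= ay && ax >= az) {            // +X or -X face
        face = dir.x > 0.0f ? 0 : 1;
        s = dir.z / ax; t = dir.y / ax;
    } else if (ay >= az) {                 // +Y or -Y face
        face = dir.y > 0.0f ? 2 : 3;
        s = dir.x / ay; t = dir.z / ay;
    } else {                               // +Z or -Z face
        face = dir.z > 0.0f ? 4 : 5;
        s = dir.x / az; t = dir.y / az;
    }
    s = s * 0.5f + 0.5f;                   // [-1,1] -> [0,1]
    t = t * 0.5f + 0.5f;
}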

Shadow Maps

Here comes the shadow map.

Opacity Blending


Appendix:

PBS

This part will be explained in First-Met-With-RayTracing.

Interpolation

Before learning CG, I couldn’t understand the term interpolation. Now it’s time to write something about it.
There are many interpolation methods; today I want to introduce a common one, barycentric coordinates, used for interpolation across triangles. If you have read the paragraphs above carefully, you will have seen the barycentric coordinate method appear already.

Why do we want to interpolate?

  • Specify values at vertices

  • Obtain smoothly varying values across triangles

What do we want to interpolate?

  • Texture coordinates, colors, normal vectors, …

Barycentric Coordinates: Formulas

Fig15. Barycentric-Coordinates(from[5])

$$
\alpha = \frac{-(x-x_B)(y_C - y_B) + (y-y_B)(x_C-x_B)}{-(x_A-x_B)(y_C-y_B) + (y_A-y_B)(x_C-x_B)}
\tag{Barycentric Coordinates: Formulas}
$$
$$
\beta = \frac{-(x-x_C)(y_A-y_C) + (y-y_C)(x_A-x_C)}{-(x_B-x_C)(y_A-y_C) + (y_B-y_C)(x_A-x_C)}
$$
$$
\gamma = 1 - \alpha - \beta
$$

Using Barycentric Coordinates

Fig15. using-barycentrics(from[5])

Talk is cheap, show me the code:

//c++
static std::tuple<float, float, float> computeBarycentric2D(float x, float y, const Vector4f* v){
float alpha = (x*(v[1].y() - v[2].y()) + (v[2].x() - v[1].x())*y + v[1].x()*v[2].y() - v[2].x()*v[1].y()) / (v[0].x()*(v[1].y() - v[2].y()) + (v[2].x() - v[1].x())*v[0].y() + v[1].x()*v[2].y() - v[2].x()*v[1].y());
float beta = (x*(v[2].y() - v[0].y()) + (v[0].x() - v[2].x())*y + v[2].x()*v[0].y() - v[0].x()*v[2].y()) / (v[1].x()*(v[2].y() - v[0].y()) + (v[0].x() - v[2].x())*v[1].y() + v[2].x()*v[0].y() - v[0].x()*v[2].y());
float gamma = (x*(v[0].y() - v[1].y()) + (v[1].x() - v[0].x())*y + v[0].x()*v[1].y() - v[1].x()*v[0].y()) / (v[2].x()*(v[0].y() - v[1].y()) + (v[1].x() - v[0].x())*v[2].y() + v[0].x()*v[1].y() - v[1].x()*v[0].y());
return {alpha,beta,gamma};
}

//we all know that color,vertex position,normal are Vector3f
static Eigen::Vector3f interpolate(float alpha, float beta, float gamma, const Eigen::Vector3f& vert1, const Eigen::Vector3f& vert2, const Eigen::Vector3f& vert3, float weight)
{
return (alpha * vert1 + beta * vert2 + gamma * vert3) / weight;
}
//uv coordinates are Vector2f
static Eigen::Vector2f interpolate(float alpha, float beta, float gamma, const Eigen::Vector2f& vert1, const Eigen::Vector2f& vert2, const Eigen::Vector2f& vert3, float weight)
{
auto u = (alpha * vert1[0] + beta * vert2[0] + gamma * vert3[0]);
auto v = (alpha * vert1[1] + beta * vert2[1] + gamma * vert3[1]);

u /= weight;
v /= weight;
return Eigen::Vector2f(u, v);
}
//here's the rasterization process
void rasterization(Triangle &t)
{
const Vector4f* v = t.v; // the triangle's screen-space vertices (with w), used for perspective-correct interpolation
...find the bounding box of t

for(int x = int(x_min); x < int(x_max)+1; x++)
{
for(int y = int(y_min); y< int(y_max)+1;y++)
{
if(insideTriangle(float(x) + 0.5, float(y) + 0.5, t))
{
//get alpha,beta,gamma
auto[alpha, beta, gamma] = computeBarycentric2D(x, y, t.v);
float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
//interpolate depth
float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
zp *= Z;

//if pass the depth test
auto interpolated_color = interpolate(alpha,beta,gamma,t.color[0],t.color[1],t.color[2],1);
auto interpolated_normal = interpolate(alpha,beta,gamma,t.normal[0],t.normal[1],t.normal[2],1);
auto interpolated_texcoords = interpolate(alpha,beta,gamma,t.tex_coords[0],t.tex_coords[1],t.tex_coords[2],1);
...
}
}
}
}

Bilinear Interpolation

Since we mentioned bilinear interpolation in the texture magnification part, let’s go straight to it.

bilinear-interpolation(from[5])

  • Step 1. We want to sample the texture f(x,y) at the red point; the black points indicate texture sample locations.
  • Step 2. Take the 4 nearest sample locations, with texture values as labeled.
  • Step 3. Calculate the fractional offsets (s,t).
  • Step 4. Lerp twice horizontally, then once vertically:

$$
lerp(x,v_0,v_1) = v_0 + x(v_1 - v_0)
\tag{Linear interpolation (1D)}
$$
$$
u_0 = lerp(s,u_{00},u_{10})
$$
$$
u_1 = lerp(s,u_{01},u_{11})
\tag{Two helper lerps}
$$
$$
f(x,y) = lerp(t,u_0,u_1)
\tag{Final vertical lerp, to get result}
$$

Talk is cheap, show me the code:

//c++/opencv
Eigen::Vector3f getColor(float u, float v)
{
auto u_img = u * (width-1);
auto v_img = (1 - v) * (height-1);

auto color = image_data.at<cv::Vec3b>(v_img, u_img);
return Eigen::Vector3f(color[0], color[1], color[2]);
}
//if the texture image is low-resolution (magnified), u_img and v_img will generally not be integers
Eigen::Vector3f getColorBilinear(float u,float v)
{
auto u_img = u * (width-1);
auto v_img = v * (height-1);

Eigen::Vector2f u00(std::floor(u_img)*1.f,std::floor(v_img)*1.f);
Eigen::Vector2f u10(std::ceil(u_img)*1.f,std::floor(v_img)*1.f);
Eigen::Vector2f u01(std::floor(u_img)*1.f,std::ceil(v_img)*1.f);
Eigen::Vector2f u11(std::ceil(u_img)*1.f,std::ceil(v_img)*1.f);

float s = (u_img - u00.x());
float t = (v_img - u00.y());

Eigen::Vector3f u0 = lerp(s,getColor(u00.x()/width,u00.y()/height),getColor(u10.x()/width,u10.y()/height));
Eigen::Vector3f u1 = lerp(s,getColor(u01.x()/width,u01.y()/height),getColor(u11.x()/width,u11.y()/height));

Eigen::Vector3f color = lerp(t,u0,u1);
return color;
}

Eigen::Vector3f lerp(float coefficient,Eigen::Vector3f a,Eigen::Vector3f b)
{
//return (coefficient * a + (1-coefficient) * b);
return (a + coefficient*(b-a));
}

For the code above, I have a few words to add:

In OpenCV:
Mat image;
image.at<>(i, j) // i -> y (row), j -> x (column)
color order: BGR
the origin is at the upper-left corner

One more thing: since we have learned that a pixel can be seen as a little square whose center is at (x + 0.5, y + 0.5), I tried the experiment below in OpenCV and found that the two lines address the same pixel (the non-integer index is simply truncated to an int):

window.at<cv::Vec3b>(1,1)[1] = 255
window.at<cv::Vec3b>(1,1.9)[1] = 255

References:
[1] 3D Math Primer for Graphics and Game Development, 2nd Edition.
[2] Fundamentals of Computer Graphics, 3rd Edition.
[3] Unity Shader入门精要.
[4] Unity3D Manual.
[5] GAMES.
[6] Scratchapixel.
