
2.1.3 Debugging of Shaders (about vertex input parameters)





A false-color satellite image.

This tutorial discusses vertex input parameters. It assumes that you are familiar with Section “Minimal Shader” and Section “RGB Cube”.

This tutorial also introduces the main technique to debug shaders in Unity: false-color images, i.e. a value is visualized by setting one of the components of the fragment color to it. Then the intensity of that color component in the resulting image allows you to make conclusions about the value in the shader. This might appear to be a very primitive debugging technique because it is a very primitive debugging technique. Unfortunately, there is no alternative in Unity.
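
As a minimal sketch of the idea (using the output structure member col that appears in the shaders below, and a hypothetical scalar value that you want to inspect):

   // write the value to inspect into the red component of the output color;
   // color components outside the range 0 to 1 are clamped by the GPU
   output.col = float4(value, 0.0, 0.0, 1.0);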

1. Where Does the Vertex Data Come From?

In Section “RGB Cube” you have seen how the fragment shader gets its data from the vertex shader by means of an output structure of vertex output parameters. The question here is: where does the vertex shader get its data from? Within Unity, the answer is that the Mesh Renderer component of a game object sends all the data of the mesh of the game object to the GPU in each frame. (This is often called a “draw call”. Note that each draw call has some performance overhead; thus, it is much more efficient to send one large mesh with one draw call to the GPU than to send several smaller meshes with multiple draw calls.) This data usually consists of a long list of triangles, where each triangle is defined by three vertices and each vertex has certain attributes, including position. These attributes are made available in the vertex shader by means of vertex input parameters. The mapping of different attributes to different vertex input parameters is usually achieved in Cg by means of semantics, i.e. each vertex input parameter has to specify a certain semantic, e.g. POSITION, NORMAL, TEXCOORD0, TEXCOORD1, TANGENT, COLOR, etc. However, in Unity's particular implementation of Cg, the built-in vertex input parameters also have to have specific names, as discussed next.
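
For example, a vertex input parameter carrying the object-space position is identified by the POSITION semantic; in Unity it additionally has to use the name vertex, as shown in the next section:

   float4 vertex : POSITION; // object-space position, identified by its semantic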

2. Built-in Vertex Input Parameters and How to Visualize Them

In Unity, the built-in vertex input parameters (position, surface normal, two sets of texture coordinates, tangent vector, and vertex color) not only have to have certain semantics but also certain names and types. Furthermore, they should be included in a single structure of input vertex parameters, e.g.:

   struct vertexInput {
      float4 vertex : POSITION;     // position (in object coordinates,
                                    // i.e. local or model coordinates)
      float4 tangent : TANGENT;     // vector orthogonal to the surface normal
      float3 normal : NORMAL;       // surface normal vector (in object
                                    // coordinates; usually normalized to unit length)
      float4 texcoord : TEXCOORD0;  // 0th set of texture
                                    // coordinates (a.k.a. “UV”; between 0 and 1)
      float4 texcoord1 : TEXCOORD1; // 1st set of texture
                                    // coordinates (a.k.a. “UV”; between 0 and 1)
      fixed4 color : COLOR;         // color (usually constant)
   };
This structure could be used this way:

Shader "Cg shader with all built-in vertex input parameters" {
SubShader {
Pass {
CGPROGRAM

#pragma vertex vert
#pragma fragment frag

struct vertexInput {
float4 vertex : POSITION;
float4 tangent : TANGENT;
float3 normal : NORMAL;
float4texcoord :TEXCOORD0;
float4texcoord1 : TEXCOORD1;
fixed4color : COLOR;
};
struct vertexOutput {
float4 pos : SV_POSITION;
float4col :TEXCOORD0;
};

vertexOutput vert(vertexInput input)
{
vertexOutput output;

output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
output.col = input.texcoord; // set the outputcolor

// other possibilities to play with:

// output.col = input.vertex;
// output.col = input.tangent;
// output.col = float4(input.normal, 1.0);
// output.col = input.texcoord;
// output.col = input.texcoord1;
// output.col = input.color;

return output;
}

float4 frag(vertexOutput input) : COLOR
{
return input.col;
}

ENDCG
}
}
}


In Section “RGB Cube” we have already seen how to visualize the vertex coordinates by setting the fragment color to those values. In this example, the fragment color is set to the texture coordinates so that we can see what kind of texture coordinates Unity provides.

Note that only the first three components of tangent represent the tangent direction. The scaling and the fourth component are set in a specific way, which is mainly useful for parallax mapping (see Section “Projection of Bumpy Surfaces”).
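
If you want to look at the tangent direction itself, a possible sketch (remapping the components from the range -1 to +1 to the range 0 to 1, as explained in section 4 below) is:

   // visualize only the first three components of tangent,
   // remapped from -1..+1 to 0..1 so that negative values are not clamped to black
   output.col = float4((input.tangent.xyz + float3(1.0, 1.0, 1.0)) / 2.0, 1.0);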

3. Pre-Defined Input Structures

Usually, you can achieve a higher performance by only specifying the vertex input parameters that you actually need, e.g. position, normal, and one set of texture coordinates; sometimes also the tangent
vector. Unity provides the pre-defined input structures
appdata_base
,
appdata_tan
,
appdata_full
, and
appdata_img
for the most common cases. These are defined in the file
UnityCG.cginc
(in the directory Unity > Editor > Data > CGIncludes):

   struct appdata_base {
      float4 vertex : POSITION;
      float3 normal : NORMAL;
      float4 texcoord : TEXCOORD0;
   };

   struct appdata_tan {
      float4 vertex : POSITION;
      float4 tangent : TANGENT;
      float3 normal : NORMAL;
      float4 texcoord : TEXCOORD0;
   };

   struct appdata_full {
      float4 vertex : POSITION;
      float4 tangent : TANGENT;
      float3 normal : NORMAL;
      float4 texcoord : TEXCOORD0;
      float4 texcoord1 : TEXCOORD1;
      float4 texcoord2 : TEXCOORD2;
      float4 texcoord3 : TEXCOORD3;
      fixed4 color : COLOR;
      // and additional texture coordinates only on XBOX360
   };

   struct appdata_img {
      float4 vertex : POSITION;
      half2 texcoord : TEXCOORD0;
   };
The file UnityCG.cginc is included with the line #include "UnityCG.cginc". Thus, the shader above could be rewritten this way:

Shader "Cg shader with all built-in vertex input parameters" {
SubShader {
Pass {
CGPROGRAM

#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

struct vertexOutput {
float4 pos : SV_POSITION;
float4col :TEXCOORD0;
};

vertexOutput vert(appdata_full input)
{
vertexOutput output;

output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
output.col = input.texcoord;

return output;
}

float4 frag(vertexOutput input) : COLOR
{
return input.col;
}

ENDCG
}
}
}
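
As noted above, you can usually improve performance by requesting only the attributes you actually use. As a sketch, if the shader only needs the position and the 0th set of texture coordinates, the vertex function could take appdata_base instead of appdata_full (the rest of the shader stays unchanged):

         // appdata_base only declares vertex, normal, and texcoord,
         // so less vertex data has to be provided to the vertex shader
         vertexOutput vert(appdata_base input)
         {
            vertexOutput output;
            output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
            output.col = input.texcoord;
            return output;
         }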

4. How to Interpret False-Color Images

When trying to understand the information in a false-color image, it is important to focus on one color component only. For example, if the input vertex parameter texcoord with semantic TEXCOORD0 for a sphere is written to the fragment color, then the red component of the fragment visualizes the x coordinate of texcoord; i.e., it doesn't matter whether the output color is maximum pure red, maximum yellow, or maximum magenta: in all cases the red component is 1. On the other hand, it also doesn't matter for the red component whether the color is blue, green, or cyan of any intensity, because the red component is 0 in all cases. If you have never learned to focus solely on one color component, this is probably quite challenging; therefore, you might consider looking at only one color component at a time, for example by using this line to set the output parameter in the vertex shader:

output.col = float4(input.texcoord.x, 0.0, 0.0, 1.0);

This sets the red component of the output parameter to the x component of texcoord but sets the green and blue components to 0 (and the alpha or opacity component to 1, but that doesn't matter in this shader).

If you focus on the red component or visualize only the red component, you should see that it increases from 0 to 1 as you go around the sphere, and after 360° it drops to 0 again. It actually behaves similarly to a longitude coordinate on the surface of a planet. (In terms of spherical coordinates, it corresponds to the azimuth.)

If the x component of texcoord corresponds to the longitude, one would expect that the y component corresponds to the latitude (or the inclination in spherical coordinates). However, note that texture coordinates are always between 0 and 1; therefore, the value is 0 at the bottom (south pole) and 1 at the top (north pole). You can visualize the y component as green on its own with:

output.col = float4(0.0, input.texcoord.y, 0.0, 1.0);

Texture coordinates are particularly nice to visualize because they are between 0 and 1, just like color components. Almost as nice are the coordinates of normalized vectors (i.e., vectors of length 1; for example, the normal input parameter is usually normalized) because they are always between -1 and +1. To map this range to the range from 0 to 1, you add 1 to each component and divide all components by 2, e.g.:

output.col = float4((input.normal + float3(1.0, 1.0, 1.0)) / 2.0, 1.0);

Note that normal is a three-dimensional vector. Black then corresponds to the coordinate -1 and full intensity of one component to the coordinate +1.

If the value that you want to visualize is in another range than 0 to 1 or -1 to +1, you have to map it to the range from 0 to 1, which is the range of color components. If you don't know which values to expect, you just have to experiment. What helps here is that if you specify color components outside of the range 0 to 1, they are automatically clamped to this range; i.e., values less than 0 are set to 0 and values greater than 1 are set to 1. Thus, when the color component is 0 or 1, you know at least that the value is less or greater than what you assumed, and then you can adapt the mapping iteratively until the color component is between 0 and 1.
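
As a sketch of such an iterative approach (here, value stands for the quantity you are debugging; minGuess and maxGuess are hypothetical placeholders for your current guess of its range):

   // map the guessed range [minGuess, maxGuess] to [0, 1];
   // if the red component ends up clamped at 0 or 1, adjust the guess and try again
   float minGuess = 0.0;
   float maxGuess = 10.0;
   output.col = float4((value - minGuess) / (maxGuess - minGuess), 0.0, 0.0, 1.0);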

5. Debugging Practice

In order to practice the debugging of shaders, this section includes some lines that produce black colors when the assignment to col in the vertex shader is replaced by each of them. Your task is to figure out for each line why the result is black. To this end, you should try to visualize any value that you are not absolutely sure about, and map values less than 0 or greater than 1 to other ranges such that the values are visible and you have at least an idea in which range they are. Note that most of the functions and operators are documented in Section “Vector and Matrix Operations”.

output.col = input.texcoord - float4(1.5, 2.3, 1.1, 0.0);
output.col = input.texcoord.zzzz;
output.col = input.texcoord / tan(0.0);
The following lines require some knowledge about the dot and cross product:

output.col = dot(input.normal, input.tangent.xyz) * input.texcoord;
output.col = dot(cross(input.normal, input.tangent.xyz),input.normal) * input.texcoord;
output.col = float4(cross(input.normal, input.normal), 1.0);
output.col = float4(cross(input.normal, input.vertex.xyz), 1.0);        // only for a sphere!
Does the function radians() always return black? What's that good for?

output.col = radians(input.texcoord);

6. Summary

Congratulations, you have reached the end of this tutorial! We have seen:

The list of built-in vertex input parameters in Unity.
How to visualize these parameters (or any other value) by setting components of the fragment output color.

7. Further Reading

If you still want to know more

about the data flow in vertex and fragment shaders, you should read the description in Section “Programmable Graphics Pipeline”.
about operations and functions for vectors, you should read Section “Vector and Matrix Operations”.