shadernotes.txt
assembly for different APIs.  If there is something else, it isn't even on
the horizon at this point.  As much as I don't like tying things into
specific products or technologies, NVIDIA has a good track record of trying
to be cross-platform and supporting different APIs and so I thought it
wouldn't be an issue.  That, and there really isn't any other choice unless
you want to write two assembly language shaders for every shader just to
make it cross-API compatible.

For the most part though, Cg was great.  It gives access to all the assembly
language commands and syntaxes that you need.  One thing you'll see a lot is
people talking about "swizzles".  (This is more of a "you'll probably see
this, so I'll tell you about it" digression.)  Like most shader languages,
Cg allows you to "swizzle" the components of a vector when assigning or
using them.  For example:

	float2 f2Tex = float2( 0.5f, 0.8f );
	float3 f3Vec;
	f3Vec.y = f2Tex.y * 2;
	// This is a swizzle for both the assignee and the assignment.
	f3Vec.xz = f2Tex.yx;
	// f3Vec now is float3( 0.8f, 1.6f, 0.5f );
	// It is also legal to repeat a component, as here.
	f3Vec.xy = f2Tex.xx;

Essentially you can refer to a specific component (or components) of a
vector.  By default, when you use a variable, it uses the components in
order (.xyzw or .rgba [which can usually be used interchangeably]).
However, you can reorder them in any way you want (or even repeat one).
I typically used swizzles as an alternative to casting, because it's easier
to type.  ("float3 f3Vec = f4Vec.xyz;" is the same as "float3 f3Vec =
(float3)f4Vec;")  Swizzles are not terribly exciting in themselves (though
I think they're neat from a syntactical point of view) and are probably
most useful in assembly language for eliminating instructions.  The Cg
compiler definitely makes extensive use of them if you ever look through
the code output of a Cg program.  Still, they're worth being aware of,
because just about every presentation ever given on shaders touts swizzling
as being really cool.

I will admit that Cg did have an unexpected downside in that it doesn't
compile into DirectX's ps.1.4 profile.  This is unfortunate because it meant
that a large number of shaders that could have run on GeForce4s and
Radeon 8x00s could not, because the next available profile was ps.2.0.
Other than that, the documentation in the user manual has been excellent,
the online resources are plentiful, and it's very easy to learn.  The only
thing I really found myself wanting was a (run-time) debugger.

It was definitely very hard to debug some programs.  When the only output you
get is a color, sometimes you just have to make your program output the
result of some temporary variable as the color, and then try to figure out
what's going on.  One large API difference that I've mentioned before is that
DirectX clamps pixel shader inputs into the [0..1] range and OpenGL doesn't.
I'm honestly not sure how that happens, and I spent a good bit of time trying
to pry into it and figure it out; considering that they're running on the
same card and through the same pipeline, it's puzzling.  I first saw this
with the Fresnel demo, which worked great for OpenGL but came up entirely
white for DirectX.  I ended up compressing all vector data into the [0..1]
range and expanding it back into the [-1..1] range in the pixel shader.
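
As a minimal sketch of that workaround (the variable names here are mine,
not taken from the actual demo): compress in the vertex shader, expand in
the pixel shader.

	// Vertex shader: map [-1..1] into [0..1] so DirectX's clamp is harmless.
	o_f3Normal = (f3Normal * 0.5f) + 0.5f;

	// Pixel shader: map [0..1] back into [-1..1] before doing any math.
	float3 f3Normal = (i_f3Normal * 2.0f) - 1.0f;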

Just for your own knowledge: I also had a strange experience with the lerp
(linear interpolate) function in a pixel shader.  If I passed in a constant
defined in the program (e.g. float3 f3Vec = lerp(a,b,0.5f);) it would work
fine.  If I passed in some varying data from the vertex shader (such as
through a texture coordinate), it would also work fine.  However, if I passed
a uniform constant into the pixel shader and used that as the interpolation
value, it would always come out as zero or one (I forget which).  I ended up
having to pass the value as a uniform constant to the vertex shader, pass it
to the pixel shader in some unused texture coordinate, and then use it from
there.  You can see this in the iridescence shader.  I'm not sure whether
this is a bug or what the issue was.  I spent my time trying to make more
shaders work rather than tracking down esoteric behavior (as curious as I
was).
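
A rough sketch of that workaround (names are illustrative, not the actual
iridescence shader code; u_fInterpolate is a uniform parameter of the
vertex shader):

	// Vertex shader: copy the uniform into an unused texture coordinate.
	o_f2Tex1.x = u_fInterpolate;

	// Pixel shader: use the smuggled value as the lerp factor; using the
	// uniform directly in the pixel shader misbehaved.
	float3 f3Color = lerp(f3ColorA, f3ColorB, i_f2Tex1.x);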

The limitations of DirectX compared to the automatic nature of OpenGL were
what really motivated me to write the CgConverter tool.  Not to toot my own
horn too much, but had I not had this tool, I would probably only have been
able to finish one or maybe two shaders.  I have some side projects that I'm
working on that might get some shader support if I get enough time, and even
though I will probably ignore the DirectX side of things entirely in my
case, I will still want to do something very much like this.  Even if it
does nothing more than make it easy to refer to uniform constants in the
program, it makes life much easier.

The CgConverter really does two important things.  The first (and easiest)
thing it does is find out which registers all the varying parameters go in
for a vertex shader.  This is irrelevant for pixel shaders because their
registers are fixed for all APIs.  OpenGL has fixed registers dependent on
the type of the varying parameter, e.g. vertex data always goes in v0.
DirectX, however, does not.  The CgConverter figures out the mapping
(whether assigned by default or explicitly specified in your Cg program,
which you can do) and stores it with the vertex shader.

The second thing it does is build the ShaderConstants object.  It looks
through all the parameters, figures out which are numerical constants, which
are state constants, and which are user-defined constants, and builds the
correct ShaderConstants object.  This is really the largest pain about
building a shader.  When I can add more constants or change the order of
the constants without having to fix the program itself or worry about which
register number things are in, the application can be a lot further removed
from the implementation details of the shader.  All it needs to know is
that there is a constant named "fTime" (for example), and the
ShaderConstants object knows under the covers exactly what register that
constant is in and how big it is.
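
On the shader side, that named constant is nothing more than a uniform
declaration (a hypothetical example, not a specific WildMagic shader):

	// The application only ever refers to "fTime" by name; the converter
	// records which constant register the Cg compiler assigned to it.
	uniform float fTime;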

Essentially, this is what the Cg Runtime does for you.  However, the Cg
runtime also (obviously) needs all programs to be in Cg.  Thus, when I
talked to you, we decided that I would implement this CgConverter so that
people would not be so tied to Cg.  WildMagic users can still use their own
assembly language shaders (or any other language that compiles to
API-specific assembly language); they just have to build the objects and
the constants themselves.  (I give a little instruction on how to do this
in the CgConverter Readme.)

This is just a side note on Cg usage, but if you look at shaders online, you
may notice a very different style of declaring inputs and outputs than I use
in my shaders.  Back in the day, the typical way to declare inputs to a
vertex shader was to declare an input struct called app2vert and an output
struct called vert2frag (which are keywords, I think).  Then the pixel
shader would take as an input a formal parameter of the type vert2frag.
Generally the types aren't specified at all and you have to declare the
variables in a certain order (vert, color, norm, tex0-7 [I think]).  This
way is deprecated.  The way I do it is to explicitly say which variable is
attached to which varying input.  Thus, my declarations have formal
parameters like "in float3 i_f3Normal : NORMAL".  The NORMAL part on the
end with the colon is a semantic that tells the compiler this variable is
tied to the normal data.  I think (as in the app2vert/vert2frag case) you
can drop the ": <varying parameter>" semantics, but then it becomes
order-dependent.  It's just my style, but I think it's much clearer to
specify it all so you don't have to think about it.
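
For illustration, a minimal vertex shader in the explicit style (a made-up
shader, not one from the package; the normal input is included just to show
the NORMAL semantic):

	void main( in float4        i_f4Position : POSITION,
	           in float3        i_f3Normal   : NORMAL,
	           in float2        i_f2Tex0     : TEXCOORD0,
	           out float4       o_f4Position : POSITION,
	           out float2       o_f2Tex0     : TEXCOORD0,
	           uniform float4x4 u_f44WVPMat )
	{
	    // Transform the vertex and pass the texture coordinate through.
	    o_f4Position = mul( u_f44WVPMat, i_f4Position );
	    o_f2Tex0 = i_f2Tex0;
	}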

I will also admit that I kind of failed at maintaining a good standard of
Hungarian notation in my shader programs (mostly due to the inadequacy of
Hungarian notation at covering shader types other than floats and arrays).
By the end I had realized what was important to include in the notation.  I
think a good way to do it would be this:

	input: i_
	output: o_
	uniform: u_
	array: a
	float: f
	float<n>: f<n> (e.g. float3 f3Temp;)
	float<n>x<n>: f<n><n> (e.g. float4x4 f44Mat;)
	sampler<n>D: s<n> (e.g. sampler2D s2Texture;)
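
A few hypothetical declarations applying the scheme above:

	uniform float u_fTime;             // uniform float
	uniform float4x4 u_f44ProjMat;     // uniform 4x4 matrix
	uniform sampler2D u_s2Base;        // uniform 2D sampler
	float3 af3Lights[2];               // array of float3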

One problem is that I made the WildMagic state constants all start with Wml,
instead of the type prefix.  Maybe that's not entirely bad.  Eh.  You could
fix that by eliminating the check for the 'Wml' prefix in the CgConverter
and then just fixing the constant names in StateConstants.cpp.  I'm also not
sure that the uniform prefix is entirely necessary.

-Cards and versions [cut and pasted]
	-DX: [version(s)]: [supporting cards]
		-vs1.1: GeForce3/4 Ti, Radeon 8500
		-vs1.1/vs2.0: GeForceFX, Radeon 9500+
		-ps1.1: GeForce3
		-ps1.1-1.3: GeForce4 Ti
		-ps1.1-1.4: Radeon 8x00
		-ps1.1-1.4, 2.0: GeForceFX, Radeon 9500+
	-OpenGL: [GLEXTENSION] = [DX equivalent] ([Cg profile name])
		-GL_NV_vertex_program = vs1.0 (vp20)
		-GL_NV_vertex_program1_1 = vs1.1 (vp20)
		-GL_ARB_vertex_program = vs1.1 (arbvp1)
		-GL_NV_vertex_program2 = vs2.0 (vp30)
		-GL_NV_fragment_program = ps2.0 (fp30)
		-GL_ARB_fragment_program = ps2.0 (arbfp1)

Vertex shaders are fairly straightforward.  I'm pretty sure the only real
change in 2.0 is an increase in the instruction limit and constant
registers.  vs1.1 works for most things (I never ran into the instruction
limit at all), so it makes sense that OpenGL wrote their spec close to
those same limitations.

All of the ps1.x versions are fairly similar.  There are a few things that
the earlier versions can't do, but it's not really that big of a change.
All early pixel shaders have two phases: texture reads and then math.
Thus, in early shaders you cannot do math on the texture coordinates and
then look up using that value.  ps1.4 introduces a phase marker, which
allows shaders to have four phases (texture read, math, texture read,
math).  So, you can look up in a texture, do some math, use that value to
look up in a texture again, and then do some math on that result to get
the final color.  ps2.0 is (as far as I can tell) a much larger break from
that; it removes the phase restrictions and just has an instruction count
limit.  Also, on earlier versions of shaders (pre-1.4?), texture
coordinate <n> could only be used to look up a value in texture <n>.
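
To make the phase restriction concrete, here's a sketch (my own example,
with made-up sampler names) of a dependent texture read in Cg: a fetched
value is modified by math and then used as a coordinate for a second
fetch.  ps1.1-1.3 cannot express this; ps1.4 can with a phase marker, and
ps2.0 just runs it.

	// First phase: fetch an offset, then scale it (math on a fetched value).
	float2 f2Offset = tex2D( s2OffsetMap, i_f2Tex0 ).xy * 0.1f;
	// Second phase: use the computed value as a texture coordinate.
	float4 f4Color  = tex2D( s2BaseMap, i_f2Tex0 + f2Offset );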

Some cards go above and beyond the spec (the Radeon 9x00s are a good
example), and so there are profiles called ps2.x and vs2.x which relax the
register, constant, and instruction limits.  If you really wanted to check
whether a card supported such a shader, you'd have to check all of those
limits against what the card reported it could do.  It would be much more
involved than just checking a shader version.

Also, a note on resources: "Real-time Shader Programming" wasn't very useful
at all.  ShaderX, on the other hand, I used constantly to get DirectX
shaders to work and to give me good ideas about using shaders.
cgshaders.org is a great website as well.  NVIDIA has a demo called CgLabs
where you can try out some shaders, which I found a good way to learn Cg at
first and to see what shaders can do.  NVIDIA also has a CgBrowser which
lets you see lots of effects.  Sadly, it's more helpful for seeing what
shaders can do than for learning how to use them.  The "wow" factor of a
lot of effects comes as much from careful picking of models, textures, and
constant values as it does from interesting programming in the shader
itself.
