 
D3DXVec3TransformCoord transforms a vector by a matrix; nothing much to explain there. By convention the vector is a row vector and is multiplied on the left of the matrix.
D3DXVec3TransformNormal transforms a normal, which really means transforming a direction vector, and it is what you usually reach for when transforming directions. "D3DXVec3TransformNormal transforms a normal using the transpose of the inverse of the given matrix": when it transforms the vector it takes the matrix you pass in, computes its inverse, transposes that, and then applies the result to the vector, so the result is exactly what you want.

The problem is that the DX SDK documentation says: "If you want to transform a normal, the matrix you pass to this function should be the transpose of the inverse of the matrix you would use to transform a point." Read literally, that tells you to pass in a matrix that has already been inverted and transposed. The DX SDK wording is misleading here. I am writing this down so that anyone who searches and lands on this page does not waste more time puzzling over it.
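If in doubt, it takes only a few lines to settle it on your own build: transform a normal once passing the point-transform matrix directly, and once passing the explicitly computed inverse-transpose, then compare the two results (a minimal D3DX sketch; the non-uniform scale is there so the two readings actually differ):

D3DXMATRIX world, invWorld, invTransWorld;
D3DXMatrixScaling(&world, 1.0f, 2.0f, 1.0f);     // any point-transform matrix with non-uniform scale
D3DXMatrixInverse(&invWorld, NULL, &world);
D3DXMatrixTranspose(&invTransWorld, &invWorld);

D3DXVECTOR3 n(0.0f, 1.0f, 0.0f), outDirect, outInvTrans;
D3DXVec3TransformNormal(&outDirect,   &n, &world);          // matrix passed as-is
D3DXVec3TransformNormal(&outInvTrans, &n, &invTransWorld);  // inverse-transpose passed explicitly
// Whichever result is the correctly transformed normal tells you which reading
// of the documentation your own code should follow.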
posted @ 2008-09-02 10:22 Sherk

#include "stdafx.h"

#include <stdarg.h>

#define STR_MAX 1024
void FormatOutput(TCHAR * formatstring, ...)
{
int nSize = 0;
TCHAR *buff=new TCHAR[STR_MAX];
//
va_list args;
va_start(args, formatstring);
nSize = _vsntprintf_s( buff, STR_MAX*sizeof(TCHAR), STR_MAX-1, formatstring, args);
_tprintf(L"nSize: %d, buff: %s\n", nSize, buff);
va_end(args);
//
delete [] buff;
}
int _tmain(int argc, _TCHAR* argv[])
{
int x=1,y=2;
FormatOutput(L"%d+%d=%d",x,y,x+y);
return 0;
}

A call as simple as this makes _vsntprintf_s overrun memory, and it had me completely puzzled. In a Unicode build _vsntprintf_s is just _vsnwprintf_s, and MSDN declares it as int _vsnwprintf_s( wchar_t *buffer, size_t sizeOfBuffer, size_t count, const wchar_t *format, va_list argptr ); the second parameter is named sizeOfBuffer.

But if you trace through the code and step into vswprint.c, you find:

int __cdecl _vsnwprintf_s (
        wchar_t *string,
        size_t sizeInWords,
        size_t count,
        const wchar_t *format,
        va_list ap
        )
{
    return _vsnwprintf_s_l(string, sizeInWords, count, format, NULL, ap);
}

There, the second parameter is named "sizeInWords";

Now it is clear: the second argument is a count of TCHARs, not the buffer size in bytes. So in the code at the top, the second argument to _vsntprintf_s should simply be STR_MAX.
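In other words, the corrected call inside FormatOutput above is:

// second argument: the size of buff in elements (TCHARs), not in bytes
nSize = _vsntprintf_s( buff, STR_MAX, STR_MAX-1, formatstring, args);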

posted @ 2008-05-03 09:49 Sherk

Ran into a question: for an arbitrary class, how many member functions does it actually end up containing?
class CTestClass
{
public:

 virtual int test_virtual_multiply(int x)
 {
  return x*x;
 }
 int test_add(int x, int y)
 {
  return m_Value1+x+y;
 }
 int m_Value1;
};
The compiler may add a default constructor, and it may add a default copy constructor; it may also add a destructor if one is needed. You will also run into the fact that if you never call a given function, the compiler generates no reference to it, so you will not be able to find it in the output. There are quite a few cases here, so please refer to the excellent book "Inside the C++ Object Model" for the details.
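A tiny sketch that exercises the point, using the CTestClass above (the size printed at the end assumes a typical 32-bit MSVC object layout):

#include <cstdio>

int main()
{
    CTestClass a;        // the implicitly generated default constructor must set the
                         // vptr that the virtual function forces into the object
    a.m_Value1 = 1;
    CTestClass b(a);     // the implicit copy constructor, generated once it is used
    std::printf("test_add: %d\n", b.test_add(2, 3));
    std::printf("sizeof(CTestClass) = %u\n", (unsigned)sizeof(CTestClass));
    // Typically prints 8 here: the int member plus the vptr.
    return 0;
}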

posted @ 2008-04-26 11:32 Sherk
lua_getglobal( L, "myTable" );      // assuming this global table exists, the call pushes it onto the top of the stack
lua_pushstring( L, "age" );         // push the key "age" onto the top of the stack
lua_pushnumber( L, 29 );            // push the number 29 onto the top of the stack
lua_settable( L, -3 );              // by now myTable sits at position -3, so -3 is used to index it here; this pops the key and the value and sets myTable["age"] = 29
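For completeness, reading the field back follows the same stack discipline (a standalone sketch, starting again from lua_getglobal):

lua_getglobal( L, "myTable" );      // push myTable onto the top of the stack
lua_pushstring( L, "age" );         // push the key
lua_gettable( L, -2 );              // pops the key, pushes myTable["age"], i.e. 29
int age = (int)lua_tonumber( L, -1 );
lua_pop( L, 2 );                    // pop the value and the table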
posted @ 2008-04-18 15:50 Sherk

Matrix packing order for uniform parameters is set to column-major by default. This means each column of the matrix is stored in a single constant register.
-- Straight from the DX SDK, and quite clear: by default each column of your app-side matrix will occupy one constant register. But there is a precondition: this applies when you set the matrix through ID3DXConstantTable::SetMatrix or ID3DXBaseEffect::SetMatrix. The row-major / column-major setting only affects how those SetMatrix interfaces provided by the SDK interpret the matrix you pass them. If you set the matrix yourself through IDirect3DDevice9::SetVertexShaderConstantF, then, just as the English above implies, the setting has no effect on it at all. Make sense?

To put it even more plainly: in shader code there is no distinction at all between row matrices and column matrices. In shader code a matrix is just an n×m block of data, and the row/column interpretation is handled automatically. Look at this:

mul (DirectX HLSL)

Multiplies x and y using matrix math. The inner dimension x-columns and y-rows must be equal.

ret mul(x, y)

Parameters

x: [in] The x input value. If x is a vector, it is treated as a row vector.

y: [in] The y input value. If y is a vector, it is treated as a column vector.

In other words, in the shader all you see is a float4x3 matrix. You cannot tell from the shader code how many constant registers it occupies (maybe 3, maybe 4), but it is definitely 4 rows by 3 columns. By ordinary linear algebra it can be multiplied on the left by a 1x4 row vector, mul(v, M), in which case it plays the role of a "row matrix"; or it can be multiplied on the right by a 3x1 column vector, mul(M, v), in which case it is automatically understood as a "column matrix". Which interpretation applies follows entirely from which side the vector sits on.
That should make it clear.
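As a sanity check of the two mul() forms in plain C++ (the 4x3 layout and names below are purely illustrative, not a D3DX type):

#include <cstdio>

int main()
{
    // A 4x3 matrix M: 4 rows, 3 columns; the last row acts as a translation.
    float M[4][3] = { {1,0,0}, {0,1,0}, {0,0,1}, {10,20,30} };

    // mul(v, M): v is a 1x4 row vector, the result is 1x3.
    float v[4] = { 1, 2, 3, 1 };
    float r1[3] = { 0, 0, 0 };
    for (int c = 0; c < 3; ++c)
        for (int k = 0; k < 4; ++k)
            r1[c] += v[k] * M[k][c];

    // mul(M, w): w is a 3x1 column vector, the result is 4x1.
    float w[3] = { 1, 2, 3 };
    float r2[4] = { 0, 0, 0, 0 };
    for (int r = 0; r < 4; ++r)
        for (int k = 0; k < 3; ++k)
            r2[r] += M[r][k] * w[k];

    std::printf("row-vector form:    (%g, %g, %g)\n", r1[0], r1[1], r1[2]);
    std::printf("column-vector form: (%g, %g, %g, %g)\n", r2[0], r2[1], r2[2], r2[3]);
    return 0;
}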
posted @ 2007-11-09 10:28 Sherk

I wrote the simplest possible shader to display a solid-colored object; it only needs position and color. On the app side the color is a 32-bit 0xAARRGGBB value. In CreateVertexDeclaration I used D3DDECLTYPE_UBYTE4N instead of D3DDECLTYPE_D3DCOLOR, and the result was that r and b swapped places. I couldn't make sense of it until I went back through the SDK and found:

D3DDECLTYPE_D3DCOLOR
Four-component, packed, unsigned bytes mapped to 0 to 1 range. Input is a D3DCOLOR and is expanded to RGBA order.

So for D3DDECLTYPE_D3DCOLOR the driver automatically converts the packed value to RGBA order, so the shader receives float4(r, g, b, a) and everything is correct. With D3DDECLTYPE_UBYTE4N my 32-bit 0xAARRGGBB is taken as-is: on little-endian x86 the bytes from low to high are b, g, r, a, so the shader sees float4(b, g, r, a). Hence b and r trading places.
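For reference, the fix is simply to declare the color element as D3DCOLOR; a sketch of the element in question (the position element and the offsets are illustrative):

D3DVERTEXELEMENT9 decl[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    // D3DCOLOR: the runtime expands the packed 0xAARRGGBB to RGBA for the shader.
    // With D3DDECLTYPE_UBYTE4N the shader would instead see the raw little-endian
    // byte order, i.e. float4(b, g, r, a), as described above.
    { 0, 12, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    D3DDECL_END()
};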

posted @ 2007-11-07 09:11 Sherk

After reading a lot of post-process code you keep running into something like the snippet below: after rendering to a texture, that texture is drawn with a new shader onto a new render target.
 // To correctly map from texels->pixels we offset the coordinates
        // by -0.5f:
        float fWidth = static_cast< float >( desc.Width ) - 0.5f;
        float fHeight = static_cast< float >( desc.Height ) - 0.5f;

        // Now we can actually assemble the screen-space geometry
        PostProcess::TLVertex v[4];

        v[0].p = D3DXVECTOR4( -0.5f, -0.5f, 0.0f, 1.0f );
        v[0].t = D3DXVECTOR2( 0.0f, 0.0f );

        v[1].p = D3DXVECTOR4( fWidth, -0.5f, 0.0f, 1.0f );
        v[1].t = D3DXVECTOR2( 1.0f, 0.0f );

        v[2].p = D3DXVECTOR4( -0.5f, fHeight, 0.0f, 1.0f );
        v[2].t = D3DXVECTOR2( 0.0f, 1.0f );

        v[3].p = D3DXVECTOR4( fWidth, fHeight, 0.0f, 1.0f );
        v[3].t = D3DXVECTOR2( 1.0f, 1.0f );
I couldn't make sense of this -0.5f offset at first, but it turns out the DX SDK has an article that explains it in full detail; just search for "Directly Mapping Texels to Pixels".

Directly Mapping Texels to Pixels (Direct3D 9)

When rendering 2D output using pre-transformed vertices, care must be taken to ensure that each texel area correctly corresponds to a single pixel area, otherwise texture distortion can occur. By understanding the basics of the process that Direct3D follows when rasterizing and texturing triangles, you can ensure your Direct3D application correctly renders 2D output.

Figure 1: 6 x 6 resolution display

Figure 1 shows a diagram wherein pixels are modeled as squares. In reality, however, pixels are dots, not squares. Each square in Figure 1 indicates the area lit by the pixel, but a pixel is always just a dot at the center of a square. This distinction, though seemingly small, is important. A better illustration of the same display is shown in Figure 2:

Figure 2: Display is composed of pixels

This diagram correctly shows each physical pixel as a point in the center of each cell. The screen space coordinate (0, 0) is located directly at the top-left pixel, and therefore at the center of the top-left cell. The top-left corner of the display is therefore at (-0.5, -0.5) because it is 0.5 cells to the left and 0.5 cells up from the top-left pixel. Direct3D will render a quad with corners at (0, 0) and (4, 4) as illustrated in Figure 3.

Figure 3: Outline of an unrasterized quad between (0, 0) and (4, 4)

Figure 3 shows where the mathematical quad is in relation to the display, but does not show what the quad will look like once Direct3D rasterizes it and sends it to the display. In fact, it is impossible for a raster display to fill the quad exactly as shown because the edges of the quad do not coincide with the boundaries between pixel cells. In other words, because each pixel can only display a single color, each pixel cell is filled with only a single color; if the display were to render the quad exactly as shown, the pixel cells along the quad's edge would need to show two distinct colors: blue where covered by the quad and white where only the background is visible.

Instead, the graphics hardware is tasked with determining which pixels should be filled to approximate the quad. This process is called rasterization, and is detailed in Rasterization Rules (Direct3D 9). For this particular case, the rasterized quad is shown in Figure 4:

Figure 4: Untextured quad drawn from (0,0) to (4,4)

Note that the quad passed to Direct3D (Figure 3) has corners at (0, 0) and (4, 4), but the rasterized output (Figure 4) has corners at (-0.5,-0.5) and (3.5,3.5). Compare Figures 3 and 4 for rendering differences. You can see that what the display actually renders is the correct size, but has been shifted by -0.5 cells in the x and y directions. However, except for multi-sampling techniques, this is the best possible approximation to the quad. (See the Antialias Sample for thorough coverage of multi-sampling.) Be aware that if the rasterizer filled every cell the quad crossed, the resulting area would be of dimension 5 x 5 instead of the desired 4 x 4.

If you assume that screen coordinates originate at the top-left corner of the display grid instead of the top-left pixel, the quad appears exactly as expected. However, the difference becomes clear when the quad is given a texture. Figure 5 shows the 4 x 4 texture you'll map directly onto the quad.

Figure 5: 4 x 4 texture

Because the texture is 4 x 4 texels and the quad is 4 x 4 pixels, you might expect the textured quad to appear exactly like the texture regardless of the location on the screen where the quad is drawn. However, this is not the case; even slight changes in position influence how the texture is displayed. Figure 6 illustrates how a quad between (0, 0) and (4, 4) is displayed after being rasterized and textured.

Figure 6: Textured quad drawn from (0, 0) and (4, 4)

The quad drawn in Figure 6 shows the textured output (with a linear filtering mode and a clamp addressing mode) with the superimposed rasterized outline. The rest of this article explains exactly why the output looks the way it does instead of looking like the texture, but for those who want the solution, here it is: The edges of the input quad need to lie upon the boundary lines between pixel cells. By simply shifting the x and y quad coordinates by -0.5 units, texel cells will perfectly cover pixel cells and the quad can be perfectly recreated on the screen. (Figure 8 illustrates the quad at the corrected coordinates.)

The details of why the rasterized output only bears slight resemblance to the input texture are directly related to the way Direct3D addresses and samples textures. What follows assumes you have a good understanding of texture coordinate space and bilinear texture filtering.

Getting back to our investigation of the strange pixel output, it makes sense to trace the output color back to the pixel shader: The pixel shader is called for each pixel selected to be part of the rasterized shape. The solid blue quad depicted in Figure 3 could have a particularly simple shader:

float4 SolidBluePS() : COLOR
{
return float4( 0, 0, 1, 1 );
}

For the textured quad, the pixel shader has to be changed slightly:

texture MyTexture;
sampler MySampler =
sampler_state
{
Texture = <MyTexture>;
MinFilter = Linear;
MagFilter = Linear;
AddressU = Clamp;
AddressV = Clamp;
};
float4 TextureLookupPS( float2 vTexCoord : TEXCOORD0 ) : COLOR
{
return tex2D( MySampler, vTexCoord );
}

That code assumes the 4 x 4 texture of Figure 5 is stored in MyTexture. As shown, the MySampler texture sampler is set to perform bilinear filtering on MyTexture. The pixel shader gets called once for each rasterized pixel, and each time the returned color is the sampled texture color at vTexCoord. Each time the pixel shader is called, the vTexCoord argument is set to the texture coordinates at that pixel. That means the shader is asking the texture sampler for the filtered texture color at the exact location of the pixel, as detailed in Figure 7:

Figure 7: Texture coordinate sampling locations

The texture (shown superimposed) is sampled directly at pixel locations (shown as black dots). Texture coordinates are not affected by rasterization (they remain in the projected screen-space of the original quad). The black dots show where the rasterization pixels are. The texture coordinates at each pixel are easily determined by interpolating the coordinates stored at each vertex: The pixel at (0,0) coincides with the vertex at (0, 0); therefore, the texture coordinates at that pixel are simply the texture coordinates stored at that vertex, UV (0.0, 0.0). For the pixel at (3, 1), the interpolated coordinates are UV (0.75, 0.25) because that pixel is located at three-fourths of the texture's width and one-fourth of its height. These interpolated coordinates are what get passed to the pixel shader.

The texels do not line up with the pixels in this example; each pixel (and therefore each sampling point) is positioned at the corner of four texels. Because the filtering mode is set to Linear, the sampler will average the colors of the four texels sharing that corner. This explains why the pixel expected to be red is actually three-fourths gray plus one-fourth red, the pixel expected to be green is one-half gray plus one-fourth red plus one-fourth green, and so on.

To fix this problem, all you need to do is correctly map the quad to the pixels to which it will be rasterized, and thereby correctly map the texels to pixels. Figure 8 shows the results of drawing the same quad between (-0.5, -0.5) and (3.5, 3.5), which is the quad intended from the outset.

Figure 8: Textured quad matches the rasterized quad

Figure 8 demonstrates that the quad (shown outlined from (-0.5, -0.5) to (3.5, 3.5)) exactly matches the rasterized area.

Summary

In summary, pixels and texels are actually points, not solid blocks. Screen space originates at the top-left pixel, but texture coordinates originate at the top-left corner of the texture's grid. Most importantly, remember to subtract 0.5 units from the x and y components of your vertex positions when working in transformed screen space in order to correctly align texels with pixels.

posted @ 2007-10-24 09:58 Sherk
I could never figure out what this tangent space thing really was. Since it is called a tangent there must be some derivative like dx/dv involved, but nothing I looked up explained it clearly, until I saw the following and it finally clicked:
T = normalize(dx/du, dy/du, dz/du)
N = T × normalize(dx/dv, dy/dv, dz/dv)
B = N × T
(Tangent space is just such a local coordinate system. The orthonormal basis for the tangent space is the normalized unperturbed surface normal Nn, the tangent vector Tn defined by normalizing dP/du, and the binormal Bn defined as Nn×Tn. The orthonormal basis for a coordinate system is also sometimes called the reference frame.)

So this tangent space is really just the tangent we already know; when it is put to practical use in per-pixel lighting, it is understood as a transform between two coordinate systems.
To put it bluntly, T = normalize(dx/du, dy/du, dz/du) says it all: the x component of T is the tangent of the model-space x coordinate with respect to u, just as dy/dx for a 2D curve is the tangent of the curve, the value that reflects how steeply the curve changes.

Even more bluntly: tangent space reflects the rate of change of the model-space coordinates with respect to the texture coordinates.
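To make that concrete, here is a minimal per-triangle sketch of the construction quoted above: dP/du and dP/dv are recovered from one triangle's positions and UVs, then T = normalize(dP/du), N = T x normalize(dP/dv), B = N x T (the Vec3 type and the helpers are just for illustration, not from any SDK):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
Vec3 cross(Vec3 a, Vec3 b)  { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
Vec3 normalize(Vec3 a)      { float l = std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); return scale(a, 1.0f / l); }

int main()
{
    // One triangle: positions and texture coordinates (u, v).
    Vec3  p0 = {0,0,0}, p1 = {1,0,0}, p2 = {0,1,0};
    float u0 = 0, v0 = 0, u1 = 1, v1 = 0, u2 = 0, v2 = 1;

    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    float du1 = u1 - u0, dv1 = v1 - v0, du2 = u2 - u0, dv2 = v2 - v0;
    float r = 1.0f / (du1 * dv2 - du2 * dv1);     // assumes non-degenerate UVs

    Vec3 dPdu = scale(add(scale(e1,  dv2), scale(e2, -dv1)), r);  // dP/du
    Vec3 dPdv = scale(add(scale(e1, -du2), scale(e2,  du1)), r);  // dP/dv

    Vec3 T = normalize(dPdu);
    Vec3 N = cross(T, normalize(dPdv));
    Vec3 B = cross(N, T);

    std::printf("T=(%g,%g,%g) B=(%g,%g,%g) N=(%g,%g,%g)\n",
                T.x, T.y, T.z, B.x, B.y, B.z, N.x, N.y, N.z);
    return 0;
}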

posted @ 2007-10-20 14:20 Sherk

Original article: http://www.mindcontrol.org/~hplus/graphics/matrix-layout.html

 

In DX terms: matrices on the DX app side are row major, while HLSL defaults to column major.

Matrix Layouts, DirectX and OpenGL

When reading about computer graphics, you invariably run into the mention of the Matrix datatype. Typically, this is a 4x4 matrix of floating-point values, used to perform affine transforms for graphics (scaling, rotation, translation, sometimes shearing).

However, there are at least two different conventions for how to apply matrices to the vectors (vertices and normals) that make up the building blocks of 3D graphics. A vertex is extended to a 4- vector by tacking on a "1" (this allows translation to work); a normal is extended to a 4-vector by tacking on a "0" (this means that only the non-translation part will apply). So far, so good. But is that 4-vector a row vector, or a column vector? (And it gets even better when the matrix is stored in memory).

Consider the two cases:

[ m11 m12 m13 m14 ]   [ x ]
[ m21 m22 m23 m24 ] * [ y ]
[ m31 m32 m33 m34 ]   [ z ]
[ m41 m42 m43 m44 ]   [ 1 ]

Case 1: column vector on the right

 

              [ m11 m12 m13 m14 ]
[ x y z 1 ] * [ m21 m22 m23 m24 ]
              [ m31 m32 m33 m34 ]
              [ m41 m42 m43 m44 ]

Case 2: row vector on the left

 

In the first case, column vectors on the right, the translation part of the operation lives in matrix elements m14, m24 and m34. However, in the second case, the translation lives in elements m41, m42 and m43. Thus, when you see a matrix written out, you have to take a while to consider in which orientation you're supposed to be reading it. Sadly, papers and documentation that write about matrices seldom consider that there are these two conventions, and tend to just assume that you know which one you mean.

Traditional mathematicians, and OpenGL, tend to prefer column vectors. Meanwhile, certain other legacies of computer graphics, as well as DirectX, tend to prefer row vectors, on the left. The confusion gets even more complete when you start talking about "pre-multiplying" and "post-multiplying" matrices. Pre-multiplying may mean multiplying it on the left (if you're the row vector type), or it may mean multiplying it on the right (if you're the OpenGL type) -- if by "pre" you mean that the pre-multiplied operation happens before the target matrix operation. If instead you mean, with pre-multiplying, that the matrix you're pre-multiplying goes on the left, then it means that it happens afterward in the OpenGL notation, but it still means that it happens before in the DirectX notation.

Confused yet?

So, then we come to storing matrices in memory. Of course there's two ways to store matrices -- they could be stored in the order m11 m12 m13 m14 m21 ..., or they could be stored in the order m11 m21 m31 m41 m12 ... The first version is called "row major" because you can view it as storing one row at a time. The second version is called "column major" because you can view it as storing one column at a time.

So, if you're given a matrix as an array of floats in memory, or as a sequence of floats on a web page, you need to know both which vector convention is assumed for the matrix, AND the storage format used for the matrix. However, as luck would have it, an error in one, will cancel out an error in the other. A row-major matrix intended for row vectors will work, as-is in memory, just as well for a column vector if you just assume it's stored in column-major order.

And, guess what? OpenGL assumes column major matrices; DirectX assumes row major matrices. This means that the translation, in a matrix seen as a float array, will always go in elements at index 12, 13 and 14 in a matrix in memory (where index is in the range 0..15), be it OpenGL or DirectX. These locations map to m41, m42 and m43 in the DirectX conventions, and to m14, m24, m34 in the OpenGL conventions. Whew! You may actually have something wrong in your code, but because you've misunderstood the convention, an additional interpretation error negates the first mistake, and it all works out.

Left- and Right-handed matrices

There's this persistent rumor that there are "left-handed" and "right-handed" matrices. That's not true. In a left-handed coordinate system, a rotation of the X vector +90 degrees around the Z axis projects to the Y vector. In a right-handed coordinate system, a rotation of the X vector +90 degrees around the Z axis projects to the Y vector. The math is the same; the matrix is the same. The only difference is the interpretation of the data once you get it out -- larger Z means closer to the viewer in a Y-up, X-right right-handed coordinate system, but further into the screen in a left-handed coordinate system. If your matrix attempts to somehow do something different for left- and right-handed uses, it will either end up rotating in the opposite direction of what you'd expect, or it will end up mirroring your entire geometry. If you have right-handed data, and want to display it in a left-handed coordinate system, that's what you want, but you should express that as an explicit mirror matrix that either negates one row (or column :-), or flips two rows (or columns) of the identity matrix.

Matrices in the ODE library

The ODE physics library is the most well supported open source rigid body dynamics package you can find, as far as I can tell. Because you can stuff some data in (boxes, planes, spheres, triangle meshes etc), set coefficients, and make it "go," it's popular among those trying to create various kinds of games. However, once you have done your simulation, you have to take the data out, and display it.

The ODE documentation doesn't use many vectors or matrices in print at all, so it's hard to tell whether it intends to use row vectors or column vectors. However, the storage of rotation matrices is such, that a translation (should it be in the matrix), would live in offsets 3, 7 and 11. (ODE doesn't actually look at these values; it leaves them un-touched for alignment reasons). This means that you have to transpose a matrix you get out of ODE, if you want to put it back into OpenGL or DirectX, because the combination of convention and storage for ODE work out to be the opposite of that same combination for the latter two.

Meanwhile, ODE also stores unit quaternions (another representation of the purely rotational part of a matrix) in the W-X-Y-Z order, which is an older convention, whereas most modern libraries use quaternions in X-Y-Z-W order. Goes to show that wherever there's a standard, there's another standard that does it the other way around -- pick one and claim you're standards compliant! Just, please, explicitly state which standard you picked.

In the end, I find that I can work best with matrices when I ask myself: where in memory would the translation go? After you answer that, everything else will follow.
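(A quick check of that question on the DirectX side, as a small D3DX sketch with illustrative values:)

D3DXMATRIX m;
D3DXMatrixTranslation(&m, 10.0f, 20.0f, 30.0f);
const float* f = (const float*)&m;
// D3DXMATRIX uses row-major storage with row-vector conventions, so the translation
// sits in _41, _42, _43; as raw floats that is f[12] == 10, f[13] == 20, f[14] == 30,
// the same indices where an OpenGL column-major matrix keeps its translation.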

posted @ 2007-09-30 09:25 Sherk
First, the original DX SDK text:

The data in a matrix is loaded into shader constant registers before a shader runs. There are two choices for how the matrix data is read: in row-major order or in column-major order. Column-major order means that each matrix column will be stored in a single constant register, and row-major order means that each row of the matrix will be stored in a single constant register. This is an important consideration for how many constant registers are used for a matrix.

Row-major and column-major matrix ordering determine the order the matrix components are read from the constant table or from shader inputs. Once the data is written into constant registers, matrix order has no effect on how the data is used or accessed from within shader code. Also, matrices declared in a shader body do not get packed into constant registers.



Explanation:

This row-major / column-major choice only determines how the matrix you supply to the shader is interpreted and written into registers. For example, if the shader code declares a column-major float4x3 matWorld, then you supply a 4x3 matrix when setting it; it will occupy three float4 registers, one per column, and the multiplication must be written mul(pos, matWorld), i.e. pos multiplies the matrix from the left.

If the shader instead declares a row-major float3x4 matrix, it likewise occupies three registers, one per row, and the multiplication is mul(matWorld, pos), i.e. pos multiplies the matrix from the right.

The two approaches are equally efficient and give the same result.

If you mix them up carelessly, you will either get results you did not want or burn extra constant registers, and so on...
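As a rough sketch of the app side for the column-major float4x3 case (assuming an ID3DXEffect* named effect and a parameter handle "matWorld"; both names are illustrative):

D3DXMATRIX world;
D3DXMatrixRotationY(&world, D3DX_PI * 0.25f);
// ID3DXBaseEffect::SetMatrix takes the row-major D3DX matrix and does the packing
// itself, so each column of matWorld ends up in one float4 register.
effect->SetMatrix("matWorld", &world);
// IDirect3DDevice9::SetVertexShaderConstantF, by contrast, copies raw float4s into
// registers and never consults the packing annotation, as noted in the earlier post.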



posted @ 2007-09-27 09:48 Sherk