Shadows
In a rasterizer, shadows are handled very indirectly and take a fair amount of work: the scene must be rendered from each light's point of view, the result stored in a texture, and that texture projected back onto the scene during the lighting pass. Worse, all this effort does not necessarily buy good image quality: shadow maps are prone to aliasing (because the pixels seen by the light do not correspond to the pixels seen by the camera) and to blockiness (because each shadow-map texel stores a single depth value yet can cover a large area of the screen). On top of that, most rasterizers must support specialized shadow-map "types", such as cube-map shadows for omnidirectional lights or cascaded shadow maps for large outdoor scenes, which adds considerable complexity to the renderer.
In a ray tracer, a single code path handles every shadowing scenario. Better still, the process is simple and intuitive: cast a ray from the surface toward the light and check whether anything blocks it. The PowerVR ray tracing architecture provides fast "probe" rays, which test for geometry in the ray's direction, making it particularly well suited to efficient shadow rendering.
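That single code path can be sketched in a few lines. Below is a minimal, illustrative Python model (not PowerVR code; the sphere occluder and all names are hypothetical): cast a ray from the surface point toward the light and report shadow if any geometry is hit before the light is reached.

```python
import math

def ray_hits_sphere(origin, direction, center, radius, max_t):
    # Solve |origin + t*direction - center|^2 = radius^2 for t in (0, max_t).
    # direction is assumed to be unit length, so the quadratic's a == 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False
    t = (-b - math.sqrt(disc)) / 2.0
    return 1e-4 < t < max_t  # small epsilon avoids self-shadowing

def in_shadow(point, light_pos, occluders):
    # One code path for every light type: aim a probe ray at the light
    # and ask whether anything is hit before reaching it.
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    return any(ray_hits_sphere(point, direction, c, r, dist)
               for c, r in occluders)

# A unit sphere at (0, 2, 0) sits between the origin and a light at (0, 4, 0).
print(in_shadow((0, 0, 0), (0, 4, 0), [((0, 2, 0), 1.0)]))  # True
print(in_shadow((3, 0, 0), (3, 4, 0), [((0, 2, 0), 1.0)]))  # False
```

Note there is no per-light-type machinery here: a point light, spot light, or area-light sample all reduce to the same occlusion query.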
1. First, what is rasterization, and what does the process briefly look like?
Rasterization is the process of taking geometric data through a series of transformations until it is finally converted into pixels presented on a display device, as shown below:
At its core, rasterization is coordinate transformation plus discretization of geometry, as shown below:
Details of the rasterization process will be filled in later.
2. The following material covers some details of mapping texels to pixels:
When rendering 2D output using pre-transformed vertices, care must be taken to ensure that each texel area correctly corresponds to a single pixel area, otherwise texture distortion can occur. By understanding the basics of the process that Direct3D follows when rasterizing and texturing triangles, you can ensure your Direct3D application correctly renders 2D output.
Figure 1: 6 x 6 resolution display
Figure 1 shows a diagram wherein pixels are modeled as squares. In reality, however, pixels are dots, not squares. Each square in Figure 1 indicates the area lit by the pixel, but a pixel is always just a dot at the center of a square. This distinction, though seemingly small, is important. A better illustration of the same display is shown in Figure 2:
Figure 2: Display is composed of pixels
This diagram correctly shows each physical pixel as a point in the center of each cell. The screen space coordinate (0, 0) is located directly at the top-left pixel, and therefore at the center of the top-left cell. The top-left corner of the display is therefore at (-0.5, -0.5) because it is 0.5 cells to the left and 0.5 cells up from the top-left pixel. Direct3D will render a quad with corners at (0, 0) and (4, 4) as illustrated in Figure 3.
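The pixel/cell relationship described above can be stated directly in code. A tiny illustrative helper (not any Direct3D API): pixel (i, j) is the point (i, j), and the cell it lights extends half a unit in every direction from that center.

```python
def pixel_cell(i, j):
    # A pixel is the point (i, j); the square cell it lights spans
    # half a unit on either side of that center.
    return (i - 0.5, j - 0.5), (i + 0.5, j + 0.5)

top_left, bottom_right = pixel_cell(0, 0)
print(top_left)      # (-0.5, -0.5) -- the display's top-left corner
print(bottom_right)  # (0.5, 0.5)
```

This is exactly why the top-left corner of the display sits at (-0.5, -0.5) even though the top-left pixel is at (0, 0).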
Figure 3
Figure 3 shows where the mathematical quad is in relation to the display, but does not show what the quad will look like once Direct3D rasterizes it and sends it to the display. In fact, it is impossible for a raster display to fill the quad exactly as shown because the edges of the quad do not coincide with the boundaries between pixel cells. In other words, because each pixel can only display a single color, each pixel cell is filled with only a single color; if the display were to render the quad exactly as shown, the pixel cells along the quad's edge would need to show two distinct colors: blue where covered by the quad and white where only the background is visible.
Instead, the graphics hardware is tasked with determining which pixels should be filled to approximate the quad. This process is called rasterization, and is detailed in Rasterization Rules. For this particular case, the rasterized quad is shown in Figure 4:
Figure 4
Note that the quad passed to Direct3D (Figure 3) has corners at (0, 0) and (4, 4), but the rasterized output (Figure 4) has corners at (-0.5, -0.5) and (3.5, 3.5). Compare Figures 3 and 4 for rendering differences. You can see that what the display actually renders is the correct size, but has been shifted by -0.5 cells in the x and y directions. However, except for multi-sampling techniques, this is the best possible approximation to the quad. (See the Antialias Sample for thorough coverage of multi-sampling.) Be aware that if the rasterizer filled every cell the quad crossed, the resulting area would be of dimension 5 x 5 instead of the desired 4 x 4.
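The shifted result follows directly from testing coverage at pixel centers. A simplified Python sketch (it checks only whether each integer-coordinate pixel center lies inside the quad, with a top-left-style rule, and ignores the rest of Direct3D's rasterization rules):

```python
def covered_pixels(x0, y0, x1, y1, width=6, height=6):
    # On a width x height display, a pixel is filled when its center
    # (an integer coordinate) lies inside the quad. Centers on the
    # left/top edges count; centers on the right/bottom edges do not.
    return [(x, y) for y in range(height) for x in range(width)
            if x0 <= x < x1 and y0 <= y < y1]

pixels = covered_pixels(0, 0, 4, 4)
print(len(pixels))            # 16 -- a 4 x 4 block, not 5 x 5
print(pixels[0], pixels[-1])  # (0, 0) (3, 3): lit cells span -0.5..3.5
```

Pixels 0 through 3 are lit in each axis, and since each lit cell extends half a unit beyond its center, the lit region runs from -0.5 to 3.5: exactly the shifted quad of Figure 4.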
If you assume that screen coordinates originate at the top-left corner of the display grid instead of the top-left pixel, the quad appears exactly as expected. However, the difference becomes clear when the quad is given a texture. Figure 5 shows the 4 x 4 texture you’ll map directly onto the quad.
Figure 5
Because the texture is 4 x 4 texels and the quad is 4 x 4 pixels, you might expect the textured quad to appear exactly like the texture regardless of the location on the screen where the quad is drawn. However, this is not the case; even slight changes in position influence how the texture is displayed. Figure 6 illustrates how a quad between (0, 0) and (4, 4) is displayed after being rasterized and textured.
Figure 6
The quad drawn in Figure 6 shows the textured output (with a linear filtering mode and a clamp addressing mode) with the superimposed rasterized outline. The rest of this article explains exactly why the output looks the way it does instead of looking like the texture, but for those who want the solution, here it is: The edges of the input quad need to lie upon the boundary lines between pixel cells. By simply shifting the x and y quad coordinates by -0.5 units, texel cells will perfectly cover pixel cells and the quad can be perfectly recreated on the screen. (Figure 8 illustrates the quad at the corrected coordinates.)
(Translator's note: the window created here has integer coordinates and therefore its origin lies at a pixel center; until the -0.5 shift is applied, the boundary of the client area's leftmost pixel cells likewise falls at the centers of pixel cells.)
The details of why the rasterized output only bears slight resemblance to the input texture are directly related to the way Direct3D addresses and samples textures. What follows assumes you have a good understanding of texture coordinate space and bilinear texture filtering.
Getting back to our investigation of the strange pixel output, it makes sense to trace the output color back to the pixel shader: The pixel shader is called for each pixel selected to be part of the rasterized shape. The solid blue quad depicted in Figure 3 could have a particularly simple shader:
float4 SolidBluePS() : COLOR
{
return float4( 0, 0, 1, 1 );
}
For the textured quad, the pixel shader has to be changed slightly:
texture MyTexture;
sampler MySampler =
sampler_state
{
Texture = <MyTexture>;
MinFilter = Linear;
MagFilter = Linear;
AddressU = Clamp;
AddressV = Clamp;
};
float4 TextureLookupPS( float2 vTexCoord : TEXCOORD0 ) : COLOR
{
return tex2D( MySampler, vTexCoord );
}
That code assumes the 4 x 4 texture of Figure 5 is stored in MyTexture. As shown, the MySampler texture sampler is set to perform bilinear filtering on MyTexture. The pixel shader gets called once for each rasterized pixel, and each time the returned color is the sampled texture color at vTexCoord. Each time the pixel shader is called, the vTexCoord argument is set to the texture coordinates at that pixel. That means the shader is asking the texture sampler for the filtered texture color at the exact location of the pixel, as detailed in Figure 7:
Figure 7
The texture (shown superimposed) is sampled directly at pixel locations (shown as black dots). Texture coordinates are not affected by rasterization (they remain in the projected screen-space of the original quad). The black dots show where the rasterization pixels are. The texture coordinates at each pixel are easily determined by interpolating the coordinates stored at each vertex: The pixel at (0, 0) coincides with the vertex at (0, 0); therefore, the texture coordinates at that pixel are simply the texture coordinates stored at that vertex, UV (0.0, 0.0). For the pixel at (3, 1), the interpolated coordinates are UV (0.75, 0.25) because that pixel is located at three-fourths of the texture's width and one-fourth of its height. These interpolated coordinates are what get passed to the pixel shader.
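That interpolation is plain linear mapping from screen position to the 0..1 UV range stored at the vertices. A quick Python check using the quad from the figures (the function name and parameters are illustrative):

```python
def interpolated_uv(px, py, quad_min=0.0, quad_size=4.0):
    # Linearly interpolate the per-vertex texture coordinates
    # (0..1 across the quad) at a pixel position inside the quad.
    return ((px - quad_min) / quad_size, (py - quad_min) / quad_size)

print(interpolated_uv(0, 0))  # (0.0, 0.0)
print(interpolated_uv(3, 1))  # (0.75, 0.25)
```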
The texels do not line up with the pixels in this example; each pixel (and therefore each sampling point) is positioned at the corner of four texels. Because the filtering mode is set to Linear, the sampler will average the colors of the four texels sharing that corner. This explains why the pixel expected to be red is actually three-fourths gray plus one-fourth red, the pixel expected to be green is one-half gray plus one-fourth red plus one-fourth green, and so on.
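The averaging is easy to reproduce. Below is a rough Python model of a bilinear sampler (clamp addressing, texel centers at (i + 0.5)/size as in Direct3D's texture coordinate space; single floats stand in for RGBA colors, and the texture contents are a simplified stand-in for Figure 5):

```python
import math

def sample_bilinear(texture, u, v):
    # texture: 2D list indexed [row][col], one float per texel for brevity.
    h, w = len(texture), len(texture[0])
    # Texel i's center sits at (i + 0.5) / w in texture-coordinate space,
    # so shift by half a texel when converting UV into texel space.
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0

    def texel(i, j):
        # Clamp addressing mode: out-of-range coordinates repeat the border.
        return texture[min(h - 1, max(0, j))][min(w - 1, max(0, i))]

    top = texel(x0, y0) * (1 - fx) + texel(x0 + 1, y0) * fx
    bottom = texel(x0, y0 + 1) * (1 - fx) + texel(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bottom * fy

# A 4 x 4 "texture": gray (0.5) everywhere except one red (1.0) texel at (1, 1).
tex = [[0.5] * 4 for _ in range(4)]
tex[1][1] = 1.0

# Sampling at the shared corner of four texels averages all four:
# three gray + one red = 0.625, the washed-out color seen in Figure 6.
print(sample_bilinear(tex, 0.25, 0.25))    # 0.625
# Sampling exactly at the red texel's center returns pure red.
print(sample_bilinear(tex, 0.375, 0.375))  # 1.0
```

The second call is the punchline of the whole article: when the sample point lands exactly on a texel center, filtering degenerates to a plain fetch, which is what the -0.5 vertex shift arranges for every pixel.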
To fix this problem, all you need to do is correctly map the quad to the pixels to which it will be rasterized, and thereby correctly map the texels to pixels. Figure 8 shows the results of drawing the same quad between (-0.5, -0.5) and (3.5, 3.5), which is the quad intended from the outset.
Figure 8
Summary
In summary, pixels and texels are actually points, not solid blocks. Screen space originates at the top-left pixel, but texture coordinates originate at the top-left corner of the texture’s grid. Most importantly, remember to subtract 0.5 units from the x and y components of your vertex positions when working in transformed screen space in order to correctly align texels with pixels.
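In code, the whole fix is one half-unit subtraction on the pre-transformed positions (the four-component vertex tuples below are illustrative, not a Direct3D vertex format):

```python
def align_texels_to_pixels(vertices):
    # Shift pre-transformed screen-space x and y by -0.5 so that quad edges
    # land on cell boundaries and texel centers land on pixel centers.
    return [(x - 0.5, y - 0.5, z, rhw) for (x, y, z, rhw) in vertices]

quad = [(0, 0, 0, 1), (4, 0, 0, 1), (4, 4, 0, 1), (0, 4, 0, 1)]
print(align_texels_to_pixels(quad)[0])  # (-0.5, -0.5, 0, 1)
```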