
Post a Comment On: C0DE517E

"Know your Z"

10 Comments
Blogger Matt Enright said...

A: Makes XYZ a valid screen-space position, so it can be used as a float3 once again.
B: Haven't used this one myself, but looks like it would put Z into a 0-1 range, which is useful for writing out, for a depth pre-pass or deferred shading/lighting.
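[Ed.: a rough numeric sketch of the two depth mappings being discussed. The projection constants and helper names below are my own illustration, assuming a D3D-style perspective projection with near plane n and far plane f; the exact shader code from the post isn't reproduced in this thread.]

```python
# Compare the standard hyperbolic depth (z/w after the perspective divide)
# against a linearized 0-1 depth ("option B" style), for a D3D-style
# projection with near plane n and far plane f. Values are illustrative.
n, f = 0.1, 1000.0

def hyperbolic_depth(z_eye):
    # z/w after the divide: maps n -> 0 and f -> 1, but non-linearly
    return (f / (f - n)) * (1.0 - n / z_eye)

def linear_depth(z_eye):
    # w-buffer style: linear from 0 at the near plane side to 1 at far
    return z_eye / f

for z in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"z_eye={z:7.1f}  z/w={hyperbolic_depth(z):.6f}  linear={linear_depth(z):.6f}")
```

Note how by z_eye = 10 (1% of the range) the hyperbolic value is already around 0.99: nearly all of the [0,1] precision is spent close to the camera.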

August 14, 2010 at 12:46 AM

Anonymous Anonymous said...

If you do this in the VS and pass the result as POSITION to the rasterizer I'd say you'll get some strange results. I think option B will just compress z further, and maybe bring back some geometry that would otherwise have been far-clipped. So maybe this doesn't look too wrong in the end.

But I can't think of a scenario where this would be useful (in a VS -- in a PS it's another story). So? What's the solution?

August 14, 2010 at 2:42 AM

Blogger NULL_PTR said...

As I recall, "A" will effectively disable perspective-correct interpolation.

August 14, 2010 at 10:58 AM

Blogger Krzysztof Narkowicz said...

A. IMHO NULL_PTR is right.
B. Outputs linear z values (a w-buffer). Useful if you need the same precision over the whole range (shadow mapping or the like).

August 15, 2010 at 4:25 AM

Blogger DEADC0DE said...

Right! The first option is really a gimmick with no useful application; anyway, you already have non-perspective-correct interpolators if you need them (the COLOR ones), and I can't think of a reason why you should need them anyways. The second trick is actually useful: it records a linear Z from near to far. It makes eye-space position reconstruction from the depth buffer easier, it gives you a nicer distribution of your precision, and so on. Someone told me that on at least some cards it screws with the near-clipping, even though I could not reproduce the problem; handle with care.
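[Ed.: the "easier reconstruction" point can be sketched numerically. The projection constants and function names here are my own example, again assuming a D3D-style projection with near n and far f; none of this code is from the post.]

```python
n, f = 0.1, 1000.0

def hyperbolic_depth(z_eye):
    # standard z/w value as stored in the depth buffer
    return (f / (f - n)) * (1.0 - n / z_eye)

def eye_z_from_hyperbolic(d):
    # inverting z/w needs the projection constants and a division
    return n * f / (f - d * (f - n))

def eye_z_from_linear(d):
    # with linear depth (d = z_eye / f) reconstruction is a single multiply
    return d * f

for z in (0.5, 5.0, 50.0, 500.0):
    assert abs(eye_z_from_hyperbolic(hyperbolic_depth(z)) - z) < 1e-6
    assert abs(eye_z_from_linear(z / f) - z) < 1e-9
print("round-trips OK")
```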

August 15, 2010 at 7:38 PM

Blogger NULL_PTR said...

I wouldn't rely on COLOR outputs being non-perspective-correct.
The DXSDK has this to say: "Color values input to the pixel shader are assumed to be perspective correct, but this is not guaranteed (for all hardware).".
On an ATI HD5870 they are perspective correct.

>and I can't think of a reason why you should need them anyways.
Well, NVidia found a use for them here:
http://developer.download.nvidia.com/SDK/10/direct3d/Source/SolidWireframe/Doc/SolidWireframe.pdf

But I agree, it is hard to find yourself in a position where you need them.

August 16, 2010 at 1:06 AM

Anonymous MJP said...

If you write out a linear (eye-space) z, it can also royally screw up coarse-grained z-cull and z compression, because that z is no longer linear in screen space. Have a look at this: http://www.humus.name/index.php?page=News&ID=255

August 17, 2010 at 10:40 AM

Blogger DEADC0DE said...

MJP: Yes, I know that article and it's very neat.

I don't think it's 100% accurate, but I agree that floating point depth is in general better (and that's why indeed W buffering is not really supported nowadays).

For example, when it says: "Given that the gradient is constant across the primitive it's also relatively easy to compute the exact depth range within a tile for Hi-Z culling" - I don't think this is true at all. Hi-Z gets quads as input, like everything past the rasterizer; quads know all their interpolated attributes, and the gradients are computed by finite differences on the quad values, so I don't think the linearity of Z/W in screen space comes into play.

"Assume for instance that you want to do edge detection on the depth buffer, perhaps for antialiasing by blurring edges. This is easily done by comparing a pixel's depth with its neighbors' depths. With Z values you have constant pixel-to-pixel deltas, except for across edges of course. This is easy to detect by comparing the delta to the left and to the right, and if they don't match (with some epsilon) you crossed an edge. And then of course the same with up-down and diagonally as well" - This sounds like a lot of processing, and it does not sound like a smart idea. The gradient will be constant across a primitive, but in such algorithms you're not interested in primitive-to-primitive edges (i.e. wireframe); you want object-to-object ones, so you'll have to use a fairly large threshold anyway, and you gain nothing from the linearity. Actually you lose something, because you have to make sure you either scale your threshold with your projection, or convert everything to view space, or, yes, do the gradient thing, which is expensive.
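[Ed.: for reference, the second-difference edge test under discussion looks roughly like this. This is a 1-D toy sketch of my own with made-up depth values; the real article works in 2-D.]

```python
# One row of a depth buffer: a planar surface (constant screen-space delta
# when z/w is stored) with a second object starting at index 4.
depth = [0.50, 0.52, 0.54, 0.56, 0.90, 0.91, 0.92]

def is_edge(d, i, eps=0.01):
    # compare the delta to the left with the delta to the right
    left = d[i] - d[i - 1]
    right = d[i + 1] - d[i]
    return abs(right - left) > eps

edges = [i for i in range(1, len(depth) - 1) if is_edge(depth, i)]
print(edges)   # [3, 4]
```

The catch in practice is choosing eps, which is exactly the thresholding problem discussed above.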

So in the end? Is this "trick" useful? Most of the time no, and it has some pretty nasty problems too. But in some situations it can indeed be handy, and it can let you express your algorithms in clip space without having to transform the z-buffer values into some other space.

August 17, 2010 at 11:29 PM

Anonymous Anonymous said...

Nice article! How about this?

hPos.xy /= hPos.w;
hPos.w = 1;

August 20, 2010 at 12:19 PM

Blogger DEADC0DE said...

Why don't you try? I don't predict anything good, but who knows :)

FxComposer is your friend

August 20, 2010 at 11:11 PM
