
Post a Comment On: cbloom rants

"04-07-11 - Help me I can't stop"

5 Comments

Blogger mvanbem said...

In the previous post you show how to construct an optimal grid of discrete samples for a source signal when those samples will be bilinear filtered back into a continuous signal:
(signal) <analysis filter> (optimal samples) <matching synthesis filter> (good approximation of the signal)
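That analysis/synthesis chain can be made concrete in 1D. The sketch below is purely illustrative (the signal f(x) = x^2, the node count, and the quadrature grid are all made up): it computes the L2-optimal sample values for piecewise-linear synthesis by solving the normal equations against the hat basis, and checks that they beat naive point sampling under the same reconstruction:

```python
import math

# Toy version of: (signal) <analysis filter> (optimal samples) <synthesis filter>.
# Everything here (the signal f, node count, grid size) is made up for illustration.

def hat(i, x, nodes):
    # piecewise-linear "tent" basis function centered on nodes[i]
    left = nodes[i - 1] if i > 0 else nodes[i]
    right = nodes[i + 1] if i < len(nodes) - 1 else nodes[i]
    c = nodes[i]
    if x < left or x > right:
        return 0.0
    if x < c:
        return (x - left) / (c - left)
    if x > c:
        return (right - x) / (right - c)
    return 1.0

def solve(A, b):
    # tiny dense Gaussian elimination with partial pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

f = lambda x: x * x                     # toy source signal
nodes = [i / 4 for i in range(5)]       # 5 uniform sample positions on [0,1]
n = len(nodes)
K = 4000                                # dense quadrature grid
xs = [(k + 0.5) / K for k in range(K)]
w = 1.0 / K

# normal equations: Gram matrix G[i][j] = <phi_i, phi_j>, rhs b[i] = <f, phi_i>
G = [[sum(hat(i, x, nodes) * hat(j, x, nodes) for x in xs) * w for j in range(n)]
     for i in range(n)]
b = [sum(f(x) * hat(i, x, nodes) for x in xs) * w for i in range(n)]

opt = solve(G, b)               # L2-optimal sample values (the "analysis filter" output)
pts = [f(t) for t in nodes]     # naive point samples for comparison

def l2_err(coef):
    return math.sqrt(sum((f(x) - sum(coef[i] * hat(i, x, nodes) for i in range(n))) ** 2
                         for x in xs) * w)

err_opt, err_pts = l2_err(opt), l2_err(pts)
assert err_opt < err_pts  # projection beats point sampling under the same synthesis
```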

What if this analysis was applied to a 3D rendering pipeline, represented as a series of filters?

(source texture signal) <prefilter> (discrete mip chain) <GPU bilinear/trilinear/anisotropic filter> (screen pixels) <screen synthesis filter> (light)

1. Is this at all sensible? I have no background in signal processing.

2. Given a synthesis filter for the screen, does the ideal prefilter exist and have any interesting properties?

3. Could a shader running a customized filter kernel do a better job matching the light output to a source signal? Based on your posts, I would think the mapping from texture samples to screen pixels could be made optimal by reconstructing with the prefilter's synthesis function and then convolving with the screen's analysis function. I doubt this will be equivalent to any GPU filtering mode.

April 7, 2011 at 11:34 PM

Blogger cbloom said...

The short answer is yes, sort of.

If you knew the entire pipeline from your pixel to the output, you could figure out the ideal reconstruction filter, and what reconstruction was actually being done, and then solve for how to compensate for that.

In practice there are a lot of problems with doing that at the texture level. The output filter depends on where and how the texture is used, so it can't be done statically; you would have to tell the texture-fetch shader how and where the pixel was going to be used. It also couldn't compensate for anything that happens across triangle edges.
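A 1D sketch of "solve for how to compensate", under the simplifying assumption that the known reconstruction is plain periodic linear interpolation of ideal samples (all constants invented): that reconstruction attenuates a frequency F by roughly sinc(pi*F/H)^2, so pre-amplifying the samples by the inverse cancels the rolloff at F.

```python
import math

# Toy 1D "compensation" for a known reconstruction filter. All constants made up.
# Piecewise-linear reconstruction of ideal samples attenuates frequency F by
# roughly sinc(pi*F/H)^2, so pre-amplify the samples to cancel that rolloff.

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(x) / x

H = 16                                # samples per unit interval
F = 3                                 # test frequency, well below Nyquist (H/2)
gain = sinc(math.pi * F / H) ** 2     # what the hat reconstruction does to F

def reconstruct(samples, x):
    # piecewise-linear ("bilinear" in 1D) reconstruction, periodic
    t = x * H
    i = int(t)
    frac = t - i
    return samples[i % H] * (1 - frac) + samples[(i + 1) % H] * frac

naive = [math.cos(2 * math.pi * F * i / H) for i in range(H)]
comp = [s / gain for s in naive]      # compensated: pre-amplified samples

K = 8000
def l2(samples):
    return math.sqrt(sum((math.cos(2 * math.pi * F * (k + 0.5) / K)
                          - reconstruct(samples, (k + 0.5) / K)) ** 2
                         for k in range(K)) / K)

e_naive, e_comp = l2(naive), l2(comp)
assert e_comp < e_naive   # compensation recovers the lost amplitude at F
```

The residual error in the compensated version comes from the image frequencies the hat filter lets through, which is exactly the part a per-texture fix can't touch.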

The other option is to run a full screen post-process, and that's sort of what all the MLAA filter type of stuff is doing - compensating for bad filtering in the rendering pipeline to try to make the output more like what it should be.

April 8, 2011 at 12:36 PM

Blogger ryg said...

For the "isotropic" 2D-only case (i.e. no nonuniform scales or perspective transforms), you're restricted enough to do some general prefiltering with predictable results. Not very helpful for general 3D rendering, but useful for font rendering and things like that.

"The other option is to run a full screen post-process, and that's sort of what all the MLAA filter type of stuff is doing - compensating for bad filtering in the rendering pipeline to try to make the output more like what it should be."
Hmm, not really. Multisampling and signal-processing-based approaches are kind of orthogonal to MLAA and other post-filtering approaches, because they really have very different goals.

The sampling view is "physical" in the sense that there's an objective ground truth and we're trying to reproduce it as closely as possible.

Postfilters don't care about that at all; their POV is that some bad sampling approximations create objectionable visual artifacts, and they remove those artifacts. They don't have any underlying model of a reality they're trying to map to the closest representable approximation, but they do have an underlying appearance model that tells them what looks bad and how to fix it up.

That may make the image closer to what it should be, or move it further away - the post-filters don't care. They're not trying to move the image closer to the ground truth, they just have some "reality-independent" cost function for images that they're trying to minimize.

To give an example, say you're looking through a mosquito net. If you do this IRL, you see a clear moire effect. A perfect "physical" renderer would reproduce this exactly. A post-filter would probably eliminate the effect, mistaking it for a sampling artifact, and opt to just uniformly darken the whole area.
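A 1D toy of the mosquito-net situation (frequencies and counts invented): point-sampling a pattern at just under one cycle per pixel produces exactly a full-contrast low-frequency band, while area-averaging over pixel footprints (the "physical" answer) keeps only a faint trace of the real moire.

```python
import math

# 1D mosquito-net toy (all numbers invented): a pattern at just under one cycle
# per pixel, point-sampled, aliases to a full-contrast low-frequency band.
N = 64                      # pixels across the region
f_net = N - 1               # net frequency in cycles across the region
f_alias = N - f_net         # predicted alias: 1 cycle across the region

pix_net = [math.cos(2 * math.pi * f_net * k / N) for k in range(N)]
pix_alias = [math.cos(2 * math.pi * f_alias * k / N) for k in range(N)]
# the point samples of the net are *exactly* the low-frequency band
assert max(abs(a - b) for a, b in zip(pix_net, pix_alias)) < 1e-9

# "physical" answer: area-average over each pixel footprint (supersampling).
# The real moire survives, but only as a faint trace, not at full contrast.
S = 32
pix_ss = [sum(math.cos(2 * math.pi * f_net * (k + (s + 0.5) / S) / N)
              for s in range(S)) / S
          for k in range(N)]
assert max(abs(v) for v in pix_net) > 0.99   # aliased band swings full contrast
assert max(abs(v) for v in pix_ss) < 0.1     # ground truth is nearly uniform
```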

April 8, 2011 at 2:29 PM

Blogger cbloom said...

"Multisampling and signal processing based approaches are kind of orthogonal to MLAA and other post-filtering approaches, because really they have very different goals."

I don't agree with that / that's not true, given how I'm using those words.

I see what you're saying and I agree, but I think the distinction between "signal processing" and more hacky techniques is artificial.

The whole point of signal processing for me is to make something that looks good. I don't constrain myself to only linear filters that are interpolating or whatever.

So when I say "do some filtering on the final frame" I am including techniques such as MLAA. I also include things like bilateral filters, "augural zooming", and other types of adaptive filters.
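For the bilateral-filter case specifically, a minimal 1D sketch (parameters and test signal invented): weight neighbors by spatial distance and value similarity, so flat regions get smoothed while large edges stay sharp.

```python
import math

# Minimal 1D bilateral filter (all parameters and the toy signal invented):
# weight neighbors by spatial distance AND value similarity, so flat regions
# get smoothed while large edges are left sharp.
def bilateral_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.2):
    out = []
    for i, v in enumerate(signal):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out

# noisy step edge, with deterministic "noise" so the example is reproducible
step = [0.0] * 10 + [1.0] * 10
noisy = [v + 0.05 * ((i * 37 % 11) / 11 - 0.5) for i, v in enumerate(step)]
smoothed = bilateral_1d(noisy)

assert smoothed[9] < 0.3 and smoothed[10] > 0.7   # the edge survives
```

This is exactly the "not constrained to linear filters" point: the weights depend on the signal itself.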

April 8, 2011 at 4:23 PM

Blogger ryg said...

"I see what you're saying and I agree, but I think the distinction between 'signal processing' and more hacky techniques is artificial."
Poor choice of wording on my part, I certainly don't mean to suggest that linear filters are the only way to go.

The distinction I mean is between what is called "unbiased" and "biased" methods in physically-based rendering. Unbiased renderers can give you incrementally better error bounds (with high probability) the more computing resources (time, memory) you're willing to throw at the problem - e.g. all the various Path Tracing variants. Biased methods *can* give you better results for more work, but they don't come with any convergence guarantee - the example here would be Photon Mapping and friends.

The same distinction can be made for various filtering/postprocessing operations, and that's not just a distinction between linear and non-linear. Most linear approaches happen to be unbiased, but there's also unbiased nonlinear/adaptive filters (e.g. most anisotropic filtering variants).
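The biased/unbiased distinction can be shown with a toy Monte Carlo integral (integrand and clamp threshold invented; the clamp plays the role of a biased variance-reduction trick such as firefly clamping): the plain estimator converges to the true value, while the clamped one converges to the wrong value no matter how many samples you throw at it.

```python
import random

# Toy version of the biased/unbiased distinction (integrand and clamp invented;
# the clamp stands in for variance-reduction hacks like firefly clamping).
random.seed(0)
true_val = 1.0 / 3.0                    # integral of x^2 over [0,1]
N = 200_000
xs = [random.random() for _ in range(N)]

unbiased = sum(x * x for x in xs) / N            # plain MC: error ~ 1/sqrt(N)
biased = sum(min(x * x, 0.8) for x in xs) / N    # clamped: lower variance, but
                                                 # converges to ~0.3230, not 1/3
assert abs(unbiased - true_val) < 0.005   # shrinks with more samples
assert abs(biased - true_val) > 0.005     # fixed bias more samples won't fix
```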

Biased/unbiased is not a "value judgment" - they both have their advantages and disadvantages; that's precisely why I think the distinction is useful - it's not very enlightening to have two classes of solutions when one of the two is obviously superior in every important way.

April 8, 2011 at 10:44 PM
