Post a Comment On: cbloom rants

"12-15-08 - Denoising"

3 Comments
Blogger Ethatron said...

Noise is an artifact of inadequate super-sampling; the best denoiser is always to downscale the image by some large factor X.

Fighting noise is fighting too few camera shots, too few stochastic render samples, too little information.

When NASA made the Ultra Deep Field, they actually sampled each pixel 170 times, not spatially in that case (because that would not have solved the problem) but through repetition and averaging.

Indeed, that would be one easy approach to avoiding the noise in the first place in digital cameras, if they could just take multiple shots fast enough.
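A minimal sketch of that repetition-and-averaging idea (the true value, noise level, and shot counts below are illustrative, not NASA's actual numbers): averaging n independent shots shrinks the noise standard deviation by roughly a factor of sqrt(n).

```python
import random

random.seed(0)

TRUE_VALUE = 100.0   # hypothetical true pixel intensity
NOISE_STD = 10.0     # assumed per-shot Gaussian sensor noise

def noisy_shot():
    # One simulated exposure: the true value plus sensor noise.
    return random.gauss(TRUE_VALUE, NOISE_STD)

def averaged_shot(n):
    # Average n repeated exposures of the same scene.
    return sum(noisy_shot() for _ in range(n)) / n

# Compare the typical error of a single shot against an average
# of 170 shots, as in the Ultra Deep Field example.
single_err = sum(abs(noisy_shot() - TRUE_VALUE) for _ in range(1000)) / 1000
avg_err = sum(abs(averaged_shot(170) - TRUE_VALUE) for _ in range(1000)) / 1000

print(single_err)  # on the order of NOISE_STD
print(avg_err)     # roughly sqrt(170), about 13x, smaller
```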

On the topic of denoising after the damage has been done, I think lossy image compressors and denoising algorithms are just two sides of the same medal (or model :). Obviously any good denoising algorithm is a good predictor for lossy image compression (e.g. neural networks), and vice versa. Both are, and must be, imperfect.

December 16, 2008 at 7:28 PM

Blogger cbloom said...

"Noise is an artifact of inadequate super-sampling; the best denoiser is always to downscale the image by some large factor X."

No, not at all, not by my definition. That certainly is an okay way to reduce noise, but it also removes tons of good information. I want algorithms that preserve as much of the original information as possible. (and in fact that is not even the best way to remove noise even if you are willing to give up lots of information).

Down-sampling with weights on the pixels, where each weight is proportional to the confidence in that pixel's correctness, is a pretty good denoiser if you are willing to give up some spatial information (e.g. reducing a noisy 15 MP camera picture to 10 MP).
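A sketch of that confidence-weighted downsampling on a 1-D row of pixels (the function name, confidence values, and zero-weight fallback are my own illustration, not a specific algorithm from the post):

```python
def weighted_downsample(pixels, confidences, factor):
    """Downsample a 1-D row by `factor`, weighting each pixel by a
    confidence value (an estimate of how likely the sample is good
    signal rather than noise)."""
    out = []
    for i in range(0, len(pixels), factor):
        block_p = pixels[i:i + factor]
        block_w = confidences[i:i + factor]
        total_w = sum(block_w)
        if total_w == 0:
            # No confident samples in this block: plain average.
            out.append(sum(block_p) / len(block_p))
        else:
            out.append(sum(p * w for p, w in zip(block_p, block_w)) / total_w)
    return out

row = [10.0, 10.0, 200.0, 10.0]   # third sample is a noise outlier
conf = [1.0, 1.0, 0.1, 1.0]       # low confidence in the outlier
result = weighted_downsample(row, conf, 2)
print(result)  # the outlier barely moves its output block
```

A plain 2x box downsample would give the second output pixel the value 105; with the confidence weights it lands near the trusted sample instead.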

"lossy image compressors and denoising algorithms are just two sides of the same medal"

Yes they are very similar. In general all these things are just trying to create an algorithmic model for the theoretical "image source". However, there are some major differences, as I tried to write about in my post and have also talked about in the past on Super-Resolution.

Part of the key difference comes from how the quality of the result is measured.

If you use normal image compression techniques for super-resolution or denoising you wind up making images that look very smoothed out. This is because the compressors want to pick the lowest entropy prediction, which is in the middle of the probability bump. That's not the same goal as a denoiser or super-resolution algorithm, which wants to pick the single maximum likelihood value.

This is a lot like the difference between median and average.
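The median-versus-average difference shows up directly on a toy signal, e.g. a step edge hit by a few impulse-noise samples (all values here are made up for illustration):

```python
import statistics

# A step edge (0s then 100s) corrupted by two impulse samples (255).
signal = [0, 0, 0, 255, 0, 100, 100, 255, 100, 100]

def sliding_filter(xs, reduce_fn, radius=1):
    # Apply reduce_fn over a sliding window; ends are clamped.
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - radius):i + radius + 1]
        out.append(reduce_fn(window))
    return out

mean_out = sliding_filter(signal, statistics.mean)
median_out = sliding_filter(signal, statistics.median)

print(mean_out)    # impulses smeared into neighbours: the "smoothed" look
print(median_out)  # impulses removed, step edge kept sharp
```

The mean filter commits to the middle of the probability bump and smears each impulse across its neighbours; the median commits to a single most-plausible value per window, which is the maximum-likelihood-style behaviour a denoiser wants.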

December 16, 2008 at 7:44 PM

Blogger Ethatron said...

Found a nice paper on the topic:

http://www.busim.ee.boun.edu.tr/~mihcak/publications/spl99.pdf

December 21, 2008 at 8:23 PM
