Post a Comment On: cbloom rants

"11-18-09 - Raw Conversion"

3 Comments

Blogger won3d said...

FWIW, I think your ideas are great. Is it just me, or do these seem rather straightforward? In particular, the bit about resampling to sensor resolution is probably because it is hard to market sensor resolution separately from image resolution.

The whole geometric distortion thing makes it totally obvious. If there is barrel distortion, it will be different for the different color bands because of chromatic aberration. Now, there are time-worn techniques to deal with chromatic aberration, but these end up with more complex, heavier, more expensive lenses. Screw that. The optics should really only optimize for having reasonable focus; geometric and chromatic correction should just be done in software.
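The idea of fixing both defects in software can be sketched concretely. A minimal sketch, assuming a simple first-order radial (Brown-Conrady-style) model with hypothetical per-channel coefficients: barrel distortion is the warp shared by all channels, and lateral chromatic aberration is just each color band getting a slightly different coefficient, so the correction resamples each channel from a slightly different sensor location. The coefficient values here are made up for illustration.

```python
def distort_radius(r, k):
    """Map an ideal (corrected) radius to the distorted sensor radius
    under a first-order radial model: r_src = r * (1 + k * r^2)."""
    return r * (1.0 + k * r * r)

def correct_pixel(x, y, cx, cy, k):
    """For output pixel (x, y), return the sensor coordinate to sample.
    (cx, cy) is the optical center; k is this channel's coefficient."""
    dx, dy = x - cx, y - cy
    r = (dx * dx + dy * dy) ** 0.5
    if r == 0.0:
        return (x, y)  # the center pixel is undistorted by definition
    scale = distort_radius(r, k) / r
    return (cx + dx * scale, cy + dy * scale)

# Hypothetical per-channel coefficients: red and blue differ slightly
# from green, which is exactly what lateral chromatic aberration
# looks like in this model.
K = {"r": 1.1e-7, "g": 1.0e-7, "b": 0.9e-7}

# Each channel of the same output pixel samples a slightly different
# sensor location, so the color fringes line up after resampling.
for ch, k in K.items():
    sx, sy = correct_pixel(1800.0, 1200.0, 960.0, 640.0, k)
```

A real pipeline would then bilinearly (or better) interpolate each channel of the raw image at its computed sample point; the point of the sketch is only that the "heavy glass" correction reduces to one cheap per-pixel, per-channel warp.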

November 19, 2009 at 10:39 AM

Blogger cbloom said...

"The optics should really only optimize for having reasonable focus; geometric and chromatic correction should just be done in software."

Yeah, I think the S90 and the LX3 show that the camera makers agree. I think we'll see even more lenses in the future that just punt on distortion and chromatic aberration and let the software fix it.

I don't have a problem with that at all; if it lets them make smaller lenses that let in more light (like the S90 and LX3), then I say go for it. I just pray that they let us get at the raw sensor data to fix it up ourselves rather than having firmware do it, which I fear is the way of the future.

November 19, 2009 at 10:55 AM

Blogger won3d said...

There's this:

http://news.stanford.edu/news/2009/august31/levoy-opensource-camera-090109.html

I like the Stanford computational photography stuff, but I guess it is a bit unclear what kind of impact they will have. I haven't heard much out of Refocus Imaging, for example.

So I have all the faith in the world that software correction is the way to go, but I wonder if this technique extends to interchangeable-lens systems. One of the things about the new micro 4/3rds format is that they are doing corrections in software, but how do they correct for different lenses? Is it somehow encoded in the lens itself (maybe as Zernike polynomials)? Or is there some standard distortion profile that happens to be easier to design for?

But maybe I should stop talking now, since I don't actually know much about actual photography, even though I did some stuff with optics. Meh, it wouldn't be the internet if people didn't speak authoritatively about things they didn't understand.

November 19, 2009 at 10:22 PM
