
Post a Comment On: cbloom rants

"06-14-11 - ProcessSuicide"

6 Comments
Anonymous said...

The UT Video codec is thankfully much faster with only 5-15% worse compression. You are essentially limited to your I/O with UT. It has some bugs as well but I use it several times a week for a commercial product. Here's a thread with download links for v8.5.2 and v9.0.0: http://forum.doom9.org/showthread.php?t=143624&page=10

June 14, 2011 at 10:27 PM

Blogger jfb said...

I do this with ExitProcess(0) for my Mono programs. The issue is that if you let it do a normal .NET 'exit', it tries to garbage collect or run finalizers or whatnot (who cares, really? It's exiting, I don't need to free memory first...), and it often deadlocks.

It might be a "hack", but it's better than spending a lot of work on exiting when a modern OS can clean it up anyway. :) Especially when you don't control all the problem libraries...

June 15, 2011 at 1:07 AM

Blogger cbloom said...

Bloody Blogger marked trixter's comment as spam. It is not spam, it's awesome!

UT is in fact totally functional. And I love that he exposes the different formats (RGB, RGBA, YV12, etc.) as different 4CCs (rather than as a flag inside the format) - it makes it so much easier to just pick the right one.

June 15, 2011 at 9:19 AM

Blogger cbloom said...

What's more, UT can actually play back video at full speed! Yay!

And the source code is really nice and clean.

June 15, 2011 at 9:32 AM

Blogger cbloom said...

Ah.. interesting. UT is a Huffman-DPCM codec, sort of like PNG. The predictor he uses is a form of "clamped gradient" which I wrote about before. It's a very good predictor and is uniformly good, so you can write optimized assembly just for that one, rather than having to support a whole bunch of predictors.

On encode, his code to form the delta from the prediction is very good: SSE that does 16 bytes at a time. But then accumulating the Huffman counts for the deltas is not awesome.

On decode, he does one byte at a time using lots of cmovs or MMX pmin/pmax. This is an unfortunate necessity due to left-neighbor serialization.

A north-neighbor only predictor could be much faster but is also a worse predictor.

An LZ back end should get more compression and also be faster than a Huffman-only back end.

Even a super-simplified LZ that only does 16-byte matches or something like that might help.

June 15, 2011 at 9:44 AM

Anonymous said...

If you're going to write your own lossless codec - thumbs up. This area is seriously underdeveloped.

July 8, 2011 at 9:40 AM
