
Post a Comment On: cbloom rants

"01-12-10 - Lagrange Rate Control Part 2"

6 Comments -

Blogger ryg said...

There's a rough overview on the FFMPEG/x264 rate control available, but it's kinda old and doesn't have more information than what you already found out:

http://akuvian.org/src/x264/ratecontrol.txt

January 13, 2010 at 11:10 AM

Blogger cbloom said...

Oh yeah, I forgot to add my reference list. I'll amend the post rather than put them in a comment ...

January 13, 2010 at 11:26 AM

Anonymous Anonymous said...

An important detail from that block dependency Doom9 thread:

Daiz: at the same bitrate or even at higher bitrates, fades ended up worse-looking with mbtree on than with mbtree off. It's a shame since the mbtree encodes look better everywhere else.

DS: This seems to be inherent in the algorithm and I'm not entirely sure how to resolve it... it was a problem in RDRC as well, in the exact same way. ... MB-tree lowers the quality on things that aren't referenced much in the future. Without weighted prediction, fades are nearly all intra blocks...

This seems almost like a bug... if you view it as "steal bits from other things to increase the quality of source blocks" it makes sense, but presumably putting extra bits into the source blocks actually reduces the number of bits you need later to hit the target quality.

I guess at really low bit rates where you just copy blocks without making ANY fixup at all then you're just stealing bits.
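(Editor's note: for readers unfamiliar with the mechanism being discussed, here is a hedged sketch of MB-tree-style backward propagation, not x264's actual code. Each block accumulates a "propagate cost" measuring how much future blocks depend on it; heavily referenced blocks get a negative QP offset, i.e. more bits. A fade is mostly intra blocks, so nothing propagates back to it and it gets starved, which is exactly the effect the thread describes. All names and the exact formula are illustrative assumptions.)

```python
import math

def propagate(frames, strength=2.0):
    """frames: list of dicts with 'intra_cost', 'inter_cost', and 'ref'
    (index of the referenced block, or None for intra blocks).
    Processed back-to-front so dependencies accumulate toward sources.
    Costs are assumed nonzero. Returns a per-block QP offset."""
    prop = [0.0] * len(frames)
    for i in reversed(range(len(frames))):
        f = frames[i]
        if f['ref'] is not None:
            # Fraction of this block's cost "inherited" from its reference:
            # cheap inter coding means most of the signal came from the ref.
            ratio = max(0.0, 1.0 - f['inter_cost'] / f['intra_cost'])
            prop[f['ref']] += (f['intra_cost'] + prop[i]) * ratio
    # Convert accumulated importance into a QP offset:
    # more future dependence -> lower QP -> more bits now.
    return [-strength * math.log2(1.0 + prop[i] / frames[i]['intra_cost'])
            for i in range(len(frames))]
```

Note that an intra-only block ends up with a zero offset while a well-referenced source block gets a negative one; the "stolen" bits have to come from somewhere once the rate controller renormalizes.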

January 15, 2010 at 10:10 PM

Blogger cbloom said...

"Yes, qualitatively, other frames are improved with mb-tree, but the trend seems to be fades are significantly worse. It's almost as if bitrate is redistributed from fades/low luminance frames to other frames"

"mbtree can definitely be beneficial in some scenarios, but it almost always comes with significantly worse fades and dark scenes. Perhaps the default --qcomp value isn't optimal, but increasing it will lower the mbtree effect, and basically we are back to square one. What I am seeing is a sort of "tradeoff." Some frames are improved at the expense of others. But the "expense" is quite severe in my opinion, at least with default qcomp. I'm looking for more balance."


This isn't really surprising; it's one of those consequences of using a "D" measure that doesn't match the HVS. mb-tree and Lagrange rate control and all that will move bits around to minimize D, which usually means not giving many bits to low-luma stuff. That's normally correct, but I guess it's occasionally terrible.
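(Editor's note: a minimal sketch of the Lagrangian allocation being described, with hypothetical names. Each block picks the quantizer minimizing J = D + lambda*R; if D is a plain error sum, dark/low-luma blocks have small absolute errors, so the minimizer happily starves them, matching the complaints about fades and dark scenes.)

```python
def best_qp(rd_points, lam):
    """rd_points: list of (qp, rate_bits, distortion) candidates for one
    block. Returns the qp minimizing the Lagrangian cost D + lambda*R."""
    return min(rd_points, key=lambda p: p[2] + lam * p[1])[0]
```

With a large lambda (bits are expensive) the cheap high-QP point wins even though its distortion is large; nothing in this objective knows that the distortion sits in a perceptually sensitive dark region.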

I'll write more about this in part 4 some day.

"but presumably putting extra bits into the source blocks actually reduces the number of bits you need later to hit the target quality."

Not necessarily, you *hope* that putting more bits in source blocks helps overall, but you can't afford to test every bit movement, so you just use some heuristics that usually help.

January 16, 2010 at 8:26 AM

Blogger Thatcher Ulrich said...

This is probably either obvious or useless, but -- for rate control, would it make sense to encode a few frames ahead, while keeping intermediate data structures, and then if a rate adjustment has to be made, re-use the partial work to more cheaply re-encode the same few frames with an adjusted quality?

January 16, 2010 at 9:28 PM

Blogger cbloom said...

Yeah, Thatch, that's basically the "classical" non-one-pass-lagrange method. For doing something like movec back-propagation you have to do something like that.

January 17, 2010 at 6:41 AM
