
Post a Comment On: cbloom rants

"08-19-13 - Sketch of multi-Huffman Encoder"

3 Comments

Blogger ryg said...

How's the speed of this compared to say your MMO LZP variants?

In my experience histogramming isn't cheap. And I'd expect your LZP thingy to do better on the compression too.

August 22, 2013 at 12:59 AM

Blogger cbloom said...

Yeah, histogramming is trouble. On LHS platforms it's a complete disaster, because histogramming is one giant LHS.
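[A standard mitigation for the load-hit-store (LHS) problem mentioned above is to keep several independent counter arrays and merge them at the end, so back-to-back increments rarely touch the same memory location. A minimal sketch, in Python for clarity (the payoff only exists in C on in-order console CPUs; the function name and slice count are made up):]

```python
def histogram_split(data, num_slices=4):
    # Keep num_slices independent counter arrays so consecutive
    # increments land in different arrays, breaking the
    # load-hit-store dependency chain that a single counts[b] += 1
    # loop creates on in-order CPUs.
    counts = [[0] * 256 for _ in range(num_slices)]
    for i, b in enumerate(data):
        counts[i % num_slices][b] += 1
    # Merge the per-slice counters into one 256-entry histogram.
    return [sum(c[s] for c in counts) for s in range(256)]
```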

LZP gets way better compression, but it's using a ton more memory. It's got a big shared dictionary, and also a per-channel adaptive state. Not a fair contest.

The nice thing about multi-huffman is zero per-channel memory use. Also the decode is quite fast, all the time is in the encode.

It also works okay even on very tiny packets, under 16 bytes.

Those properties make it good for something like an MMO for the client->server traffic. The server only decodes those packets and doesn't want to spend any memory on a channel adaptive state. They also tend to be quite small, and the data rate is so low on the client end that taking a bit of time per byte is not a big deal.
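[One way a multi-Huffman encoder along these lines can work is to cost each packet against every static table and transmit the index of the cheapest one; the decoder then needs only the shared static tables and zero per-channel state. A sketch of that selection step, assuming each table is represented by a 256-entry array of code lengths in bits (this is an illustrative scheme, not necessarily the exact one from the post):]

```python
def pick_table(packet, code_lengths_per_table):
    # Cost the packet under each candidate Huffman table by summing
    # code lengths, and pick the cheapest. The chosen index is sent
    # before the coded bits so the decoder knows which table to use.
    best_index, best_bits = 0, float("inf")
    for idx, lengths in enumerate(code_lengths_per_table):
        bits = sum(lengths[b] for b in packet)
        if bits < best_bits:
            best_index, best_bits = idx, bits
    return best_index, best_bits
```

Since the work is all on the encode side, this fits the client->server case described above: the server just reads a table index and decodes.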

August 22, 2013 at 11:26 AM

Blogger Unknown said...

This is basically what we do for our MMO Vendetta Online.
We have a set of static tables and choose among them based on the context of the data being sent. For example, if a readable text string is sent, a static table 'optimized' for ASCII is used, but if a 3D position is sent, a 'floating-point'-optimized table is used. Overall I think we have 16 tables to choose from, each optimized with actual gameplay data.
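[The context-keyed variant described in this comment can be sketched as a mapping from field kind to table index; because encoder and decoder agree on the context, no table index needs to be transmitted. All names and indices below are made up for illustration:]

```python
# Hypothetical context -> static-table mapping, loosely following
# the Vendetta Online description. The real system reportedly has
# 16 tables trained on actual gameplay data.
TABLES = {
    "ascii_text": 0,   # table trained on readable text strings
    "float3_pos": 1,   # table trained on 3D position payloads
}

def table_for_field(field_kind):
    # Fall back to a generic table (arbitrarily index 15 here)
    # when the field kind has no specialized table.
    return TABLES.get(field_kind, 15)
```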

October 6, 2013 at 4:05 PM
