cbloom rants

9/15/2017

Oodle tuneability with space-speed tradeoff

Oodle's modern encoders take a parameter called the "space-speed tradeoff". (specifically OodleLZ_CompressOptions::spaceSpeedTradeoffBytes).

"speed" here always refers to decode speed - this is about the encoder making choices about how it forms the compressed bit stream.

This parameter allows the encoders to make decisions that optimize for a space-speed goal which is of your choosing. You can make those decisions favor size more, or you can favor decode speed more.

If you like, a modern compressor is a bit like a compiler. The compressed data is a kind of program in bytecode, and the decompressor is just an interpreter that runs that bytecode. An optimal parser is like an optimizing compiler; you're considering different programs that produce the same output, and trying to find the program that maximizes some metric. The "space-speed tradeoff" parameter is a bit like -Ox vs -Os, optimize for speed vs size in a compiler.

Oodle of course includes Hydra (the many headed beast) which can tune performance by selecting compressors based on their space-speed performance.

But even without Hydra the individual compressors are tuneable, none more so than Mermaid. Mermaid can stretch itself from Selkie-like (LZ4 domain) up to standard LZH compression (ZStd domain).

I thought I would show an example of how flexible Mermaid is. Here's Mermaid level 4 (Normal) with some different space-speed tradeoff parameters :


sstb = space speed tradeoff bytes

sstb 32 :  ooMermaid4  :  2.29:1 ,   33.6 enc mbps , 1607.2 dec mbps
sstb 64 :  ooMermaid4  :  2.28:1 ,   33.8 enc mbps , 1675.4 dec mbps
sstb 128:  ooMermaid4  :  2.23:1 ,   34.1 enc mbps , 2138.9 dec mbps
sstb 256:  ooMermaid4  :  2.19:1 ,   33.9 enc mbps , 2390.0 dec mbps
sstb 512:  ooMermaid4  :  2.05:1 ,   34.3 enc mbps , 2980.5 dec mbps
sstb 1024: ooMermaid4  :  1.89:1 ,   34.4 enc mbps , 3637.5 dec mbps

compare to : (*)

zstd9       :  2.18:1 ,   37.8 enc mbps ,  590.2 dec mbps
lz4hc       :  1.67:1 ,   29.8 enc mbps , 2592.0 dec mbps

(* MSVC build of ZStd/LZ4 , not a fair speed measurement (they're faster in GCC), just use as a general reference point)

Point being - not only can Mermaid span a large range of performance but it's *good* at both ends of that range; it's not getting terrible as it moves out of its comfort zone.

You may notice that as sstb goes below 128 you're losing a lot of decode speed and not gaining much size. The problem is you're trying to squeeze a lot of ratio out of a compressor that just doesn't target high ratio. As you get into that domain you need to switch to Kraken. That is, there comes a point where squeezing the last drop out of Mermaid is a worse space-speed tradeoff than just making the jump to Kraken. And that's where Hydra comes in - it will make that jump for you at the right spot.

Some learnings from ZStd

I've spent some time in the last month looking into cases where ZStd beats Kraken & Mermaid.

Most of the time Kraken gets better ratio than ZStd, but there were exceptions to that (mainly text), and it always kind of bothered me. Since Kraken is roughly a superset of ZStd (not exactly) and the differences are small, ZStd shouldn't have been winning by more than 1% (the variation I'd expect due to those small differences). On text files I have no edge over ZStd, all my advantages are moot, so we're reduced to both being pretty basic LZ-Huffs; we should be roughly equal, but I was losing. So I dug in to see what was going on.

Thanks of course to Yann for making his great work open source so that I'm able to look at it; open source and sharing code is a wonderful and helpful thing when people choose to do so voluntarily, not so nice when your work is stolen from you against your will and shown to the world like phone-hacked dick-pics *cough* *assholes*. Since I'm learning from open source, I figured I should give back, so I'm posting what I learned.

A lot of the differences are a question of binary vs. text focus. ZStd has some tweaking that clearly comes from testing on text and corpora with a lot of text (like silesia). On the other hand, I've been focusing very much on binary and that has caused me to miss some important things that only show up when you look closely at text performance.

This is what I found :

Long hashes are good for text, bad for binary

ZStd non-optimal levels use hash lengths of 5 or even 6 or 7 at the fastest levels. This helps on text because text has many long matches, so it's important to have a hash long enough that it can differentiate between "boogie" and "booger" and put them in different hash table bins. (this is most important at the fastest levels, which use a cache table with no ways).

On binary you really want to hash len 4 because there are important matches of exactly len 4, and longer hashes can make you miss them.
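
To make the hash-length point concrete, here's a minimal sketch (my own toy code, not Oodle's or ZStd's) of a 4-byte vs 6-byte multiplicative hash for a cache table of 2^16 entries. With hash4, "boogie" and "booger" land in the same bin because only the first 4 bytes are keyed; hash6 separates them, but then a match of exactly length 4 can be missed because the table was keyed on bytes the match doesn't share :

#include <stdint.h>
#include <string.h>

#define HASH_LOG 16   /* a 2^16 entry cache table, no ways */

/* hash exactly 4 bytes : needed to find matches of exactly length 4 (binary) */
static uint32_t hash4(const uint8_t *p)
{
    uint32_t x;
    memcpy(&x, p, 4);
    return (x * 2654435761u) >> (32 - HASH_LOG);
}

/* hash exactly 6 bytes : separates "boogie" from "booger" (text), but a
   match that is only 4 or 5 bytes long may no longer be findable */
static uint32_t hash6(const uint8_t *p)
{
    uint64_t x = 0;
    memcpy(&x, p, 6);
    return (uint32_t)((x * 0x9E3779B185EBCA87ull) >> (64 - HASH_LOG));
}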


zstd2 hash len 6 :
PD3D    : zstd2 : 31,941,800 ->11,342,055 =  2.841 bpb =  2.816 to 1 

zstd2 hash len 4 :
PD3D    : zstd2 : 31,941,800 ->10,828,309 =  2.712 bpb =  2.950 to 1 

zstd2 hash len 6 :
dickens : zstd2 : 10,192,446 -> 3,909,882 =  3.069 bpb =  2.607 to 1 

zstd2 hash len 4 :
dickens : zstd2 : 10,192,446 -> 4,387,536 =  3.444 bpb =  2.323 to 1 

Longer hashes help the fast modes a *lot* on text. If you care about fast compression of text you really want those longer hashes.

This is a big issue, and because of it ZStd fast modes will continue to be better than Oodle on text (and Oodle will be better on binary), unless we find a good way to detect the data type and tune the hash length to match.

lazy2 is helpful on text

Standard lazy parsing looks for a match at ptr, if one is found it also looks at ptr+1 to see if something better is there. Lazy2 also looks at ptr+2.
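
As a sketch (generic and simplified, not any particular codec's code; real parsers compare prices, and the +1 margin here is just a stand-in heuristic) the decision looks something like this - given the best match length found at ptr, ptr+1 and ptr+2, how many literals do we emit before committing to a match :

/* returns how many literal bytes to emit before taking a match
   (0 = take the match at ptr now) */
static int lazy2_literals_before_match(int len_at_ptr, int len_at_ptr1, int len_at_ptr2)
{
    if (len_at_ptr == 0)              return 1;  /* no match here : literal      */
    if (len_at_ptr1 > len_at_ptr)     return 1;  /* standard lazy : defer 1 byte */
    if (len_at_ptr2 > len_at_ptr + 1) return 2;  /* lazy2 : defer 2 bytes        */
    return 0;                                    /* take the match at ptr        */
}

A 1-step lazy parser is the same thing without the len_at_ptr2 check.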

I wasn't doing 2-ahead lazy parsing, because on binary it doesn't help much. But on text it's a nice little win :


Zstd level 9 has 2-step lazy normally :

zstd9 : 41,458,703 ->10,669,424 =  2.059 bpb =  3.886 to 1 

disabled : (1-step lazy) :

zstd9 : 41,458,703 ->10,825,637 =  2.089 bpb =  3.830 to 1 

Trying all len reductions in the optimal parser helps on text

I once wrote that in codecs that do strong rep0 exclusion (a rep0len1 literal can't occur immediately after a match), you can just always send max-length matches, and not have to consider match length reductions (because max-length matches maintain the rep0 exclusion but shorter ones violate it).

That is not quite right. It tends to be true on binary, but is wrong on text. The issue is that you only get the rep0 exclusion benefit if you actually send a literal after the match.

That happens often on binary. Binary frequently goes match-literal-match-literal , with some near-random bytes between predictable regions. Text has very few literals. Many text files go match-match-match which means the rep0 literal exclusion does nothing for you.

On text files you often have many short & medium length overlapping matches, and trying len reductions is important to find the parse that traces through them optimally.


AAAADDDGGGGJJJJ
 BBBBBFFFHHHHHH
  CCCEEEEEIII

and the optimal parse might be

AAABBBFFFHHHHHH

which you would only find if you tried the len reduction of A

this kind of thing. Text is all about making the best normal-match decisions.
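
In a shortest-path style optimal parse, "trying all len reductions" just means a found match relaxes the arrival cost at every length down to MML, not only at its maximum length. A generic sketch (not Oodle's or ZStd's actual parser; arrival_cost and match_cost_bits are hypothetical names for the usual cost array and price function) :

static void try_match(int pos, int offset, int max_len, int mml,
                      int *arrival_cost,                      /* indexed by position */
                      int (*match_cost_bits)(int offset, int len))
{
    for (int len = max_len; len >= mml; len--)                /* all len reductions  */
    {
        int c = arrival_cost[pos] + match_cost_bits(offset, len);
        if (c < arrival_cost[pos + len])
            arrival_cost[pos + len] = c;                      /* cheaper way to arrive */
    }
    /* the "max length only" shortcut is just the single len == max_len case */
}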


with all len reductions :

zstd22 : 10,000,000 -> 2,800,209 =  2.240 bpb =  3.571 to 1 

without :

zstd22 : 10,000,000 -> 2,833,168 =  2.267 bpb =  3.530 to 1 

Getting len 3 matches right in the optimal parser is really important on text

Part of the "text is all matches" issue. My codecs are mostly MML 4 in the non-optimal modes, then I switch to MML3 at level 7 (Optimal3). Adding MML3 generally lets you get a bit more compression ratio, but hurts decode speed a bit.

(BTW MML3 in the non-optimal modes generally *hurts* compression ratio, because they can't make the decision correctly about when to use it. A len 3 match is always marginal, it's only slightly cheaper than 3 literals (depending on the literals), and you probably don't want it if you can find any longer match within those next 3 bytes. Non-optimal parsers just make these decisions wrong and muck it all up, they do better with MML 4 or even higher sometimes. (there are definitely files where you can crank up MML to 6 or 8 and improve ratio))

So, I was doing that *but* I was using the statistics from a greedy pre-pass to seed the optimal parse decisions, and the greedy pre-pass was MML 4, which was biasing the optimal against len 3 matches. It was just a fuckup, and it wasn't hurting me on binary, but when I compared to ZStd's optimal parse on text I could immediately see it had a lot more len 3 matches than me.

(this is also an example of the parse-statistics feedback problem, which I believe is the most important problem in LZ compression)


dickens

zstd22 : 10,192,446 -> 2,856,038 =  2.242 bpb =  3.569 to 1

before :
ooKraken7 : 10,192,446 -> 2,905,719 =  2.281 bpb =  3.508 to 1

after  :
ooKraken7 : 10,192,446 -> 2,862,710 =  2.247 bpb =  3.560 to 1 

ZStd is full of small clever bits

There are lots of little clever nuggets that are hard to see. They aren't generally commented, and they're buried in chunks of copy-pasted code that all looks the same, so it's easy to gloss over the variations.

I looked over this code many times :

        if ((offset_1 > 0) & (MEM_read32(ip+1-offset_1) == MEM_read32(ip+1))) {
            mLength = ZSTD_count(ip+1+4, ip+1+4-offset_1, iend) + 4;
            ip++;
            ZSTD_storeSeq(seqStorePtr, ip-anchor, anchor, 0, mLength-MINMATCH);
        } else {
            U32 offset;
            if ( (matchIndex <= lowestIndex) || (MEM_read32(match) != MEM_read32(ip)) ) {
                ip += ((ip-anchor) >> g_searchStrength) + 1;
                continue;
            }
            // [ got match etc... ]

and I thought - okay, look for a 4 byte rep match; if found, take it unconditionally and don't look for a normal match. That's the same thing I do (I think it came from me?), no biggie.

But there's a wrinkle. The rep check is not at the same position as the normal match. It's at pos+1.

This is actually a mini-lazy-parse. It doesn't do a full match & rep find at pos & (pos+1). It's just scanning through; at each pos it only does one rep find and one match find, but the rep find is offset forward by +1. That means it will take {literal + rep} even if a match is available, which a normal non-lazy parser can't do.

(aside : you might think that this misses a rep find - when the literal run starts, right after a match, the first rep find is at pos+1, so there's one spot where it does no rep find. But that spot is where the rep0 exclusion applies - there can be no rep there, so it's all good!)

This is a solid win and it's totally for free, so very cool.


Seven testset 

with rep-ahead search :

total : zstd3       : 80,000,000 ->34,464,878 =  3.446 bpb =  2.321 to 1 

with rep at same pos as match :

total : zstd3       : 80,000,000 ->34,521,261 =  3.452 bpb =  2.317 to 1 

The end.


ADD : a couple more notes on ZStd (that aren't from the recent investigation) while I'm at it :

ZStd uses a unique approach to the lrl0-rep0 exclusion

After a match (of full length), that same offset cannot match again. If your offsets are in a rep match cache, the most recently used offset is the top (0th) entry, rep0. This is the lrl0-rep0 exclusion.

rep0 is usually the most likely match, so it will get the largest share of the entropy coder probability space. Therefore if you're in an exclusion where that symbol is impossible, you're wasting a lot of bits.

There are two ways that I would call "traditional" or straightforward data compression ways to model the lrl0-rep0 exclusion. One is to use a single bit for (lrl == 0) as context for the rep-index coding event. eg. you have two entropy coding states for offsets, one for lrl == 0 and one for lrl != 0. The other classical method would be to combine lrl with rep-index in a larger alphabet, which allows you to model their correlation using only order-0 entropy coding. The minimum alphabet size here is only 2 bits, 1 bit for (lrl == 0) or not, and one for (match == rep0) or not.

ZStd does not use either of these methods. Instead it shifts the rep index by (lrl == 0). That is, ZStd has 3 reps, and normally they are in match offset slots 0,1,2. But right after the end of a match (when lrl is 0) those offset values change to mean rep 1,2,3 ; and there is no rep3, that's a virtual offset equal to (rep0 - 1).
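
In decoder terms it looks something like this - my paraphrase of the scheme as laid out in the format doc, with my own names, and with the move-to-front update of the rep history omitted :

/* offset_value : the decoded offset code
   lit_len      : length of the literal run preceding this match
   rep[3]       : the three most recent offsets, most recent first */
static unsigned resolve_offset(unsigned offset_value, unsigned lit_len,
                               const unsigned rep[3])
{
    if (offset_value > 3)
        return offset_value - 3;                 /* an explicit offset */

    /* rep slot, shifted up by one when lit_len == 0 (the lrl0-rep0 exclusion) */
    unsigned idx = offset_value - 1 + (lit_len == 0);
    return (idx == 3) ? rep[0] - 1 : rep[idx];   /* "virtual" last slot = rep0 - 1 */
}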

The ZStd format documentation is a good reference for these things.

I can't say how well the ZStd method here compares to the alternatives as it's a bit more effort to check than I'd like to do. (if you want to try it, you could double the size of ZStd's offset coding alphabet to put 1 bit of lrl == 0 into the offset coding; then the decode sequence grabs an offset and only pulls an lrl code if the offset bit says so).

ZStd uses TANS in a limited and efficient way

ZStd does not use TANS (FSE) on its literals, which are the largest class of entropy coded symbols. Presumably Yann found, like us, that the compression gains on literals (over Huffman) are small, and the speed cost is not worth it. ZStd only uses TANS on the LZ match components - LRL, offset, ML.

Each of these has a small alphabet (52,35,28), and therefore can use a small # of bits for the TANS tables (9,9,8). This is a sweet spot for TANS, so it works well in ZStd.

For large alphabets (eg. 256 for literals), TANS needs a higher # of bits for its code tables (at least 11), which means 2048 entries being filled. This makes the table setup time rather large. By cutting the table size to 8 or 9 bits you cut that down by 4-8X. With large alphabets you also may as well just go Huff. But with small alphabets, Huff gets worse and worse. Consider the extreme - in an alphabet of 2 symbols Huff becomes no compression at all, while TANS can still do entropy coding. With small alphabets to use Huffman you need to combine symbols (eg. in a 2-bit alphabet you would code 4 at once as an 8-bit symbol). BUT that means going up to big decoder tables again, which adds to your constant overhead.
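
To put a number on that 2-symbol extreme : with probabilities {0.9, 0.1}, Huffman is stuck at 1 bit per symbol, while the entropy (which TANS can get close to) is about 0.47 bits per symbol :

#include <math.h>
#include <stdio.h>

int main(void)
{
    double p0 = 0.9, p1 = 0.1;
    double H = -(p0 * log2(p0) + p1 * log2(p1));    /* Shannon entropy */
    printf("entropy = %.3f bits/symbol vs Huffman = 1.000\n", H);   /* ~0.469 */
    return 0;
}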

FSE uses the prime-scatter method to fill the TANS decode table. (this is using a relatively-prime step to walk around the circular array, using the property that you can keep stepping that way and you will eventually hit every slot once and only once). I evaluated the prime-scatter method before and concluded that the compression penalty was unacceptably large. I was mistaken. I had just implemented it wrong, so my results were much worse than they should have been.

(the mistake I made was that I did the prime-scatter in one pass; for each symbol, take the steps and fill table entries, increment "from_state" as you step, "to_state" steps around with the prime-modulo. This causes a non-monotonic relationship between from_state and to_state which is very bad. The right way to do it (the way ZStd/FSE does it) is to use some kind of two-pass scheme, so that you do the shuffle-scatter first (which can step around the loop non-monotonically) but then assign the from_state relationship in a second pass which ensures the monotonic relationship).

With a correct implementation, prime-scatter's compression ratio is totally fine (*). The two-pass method that ZStd/FSE uses would be slow for large alphabets or large L, but ZStd only uses FSE for small alphabets and small L. The entropy coder and application are well matched. (* = if you special case singletons, as below)

The worst case for prime-scatter is low counts, and counts of 1 are the worst. ZStd/FSE uses a special case for counts of 1 that are "below 1". Back in the "Understanding TANS" series I looked at the "precise sort" method of table building and found that artificially skewing the bias to put counts of 1 at the end was a big win in practice. The issue there is that the counts we see at that point are normalized, and zeros were forced up to 1 for codeability. The true count might be much lower. Say you're coding an array of size 64k and symbol 'x' only occurs 1 time. If you have a TANS L of 1024, the true probability should be 1/64k, but normalization forces it up to 1/1024. Putting the singleton counts at the end of the TANS array gives them the maximum codelen (the end of the array has the maximum fractional bits). The sort bias I did before was a hack that relies on the fact that most singleton counts come from below-1 normalized probabilities. ZStd/FSE explicitly signals the difference : it can send a "true 1" (eg. the closest normalized probability really is 1/1024 ; eg. in the 64k array, count is near 64), or a "below 1", some very low count that got forced up to 1. The "below 1" symbols are forced to the end of the TANS array while the true 1's are allowed to prime-scatter like other symbols.
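
Putting those pieces together, here's a sketch of the kind of table build being described - simplified and only in the spirit of FSE's spread step, not the actual FSE source; norm_count here uses -1 to flag the "below 1" symbols :

#include <stdint.h>

static void spread_symbols(uint8_t *table, uint32_t table_log,
                           const int16_t *norm_count, int alphabet_size)
{
    uint32_t table_size = 1u << table_log;
    uint32_t mask = table_size - 1;
    uint32_t step = (table_size >> 1) + (table_size >> 3) + 3;  /* relatively prime to 2^table_log */
    uint32_t high = table_size - 1;
    uint32_t pos  = 0;

    /* "below 1" symbols (normalized count flagged as -1) get pinned to the
       end of the table, where the fractional bit count is largest */
    for (int s = 0; s < alphabet_size; s++)
        if (norm_count[s] == -1)
            table[high--] = (uint8_t)s;

    /* everything else is prime-scattered around the circular array, skipping
       the slots reserved above ; the step visits every slot exactly once */
    for (int s = 0; s < alphabet_size; s++) {
        if (norm_count[s] <= 0) continue;
        for (int i = 0; i < norm_count[s]; i++) {
            table[pos] = (uint8_t)s;
            do { pos = (pos + step) & mask; } while (pos > high);
        }
    }

    /* second pass (not shown) : walk table[] in slot order and assign bit
       counts / next states per entry, so from_state -> to_state stays
       monotonic for each symbol */
}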

The end.

8/22/2017

Oodle 2.5.5 - encoder bug fix

Oodle 2.5.5 fixes a bug in the Kraken & Mermaid encoders which could cause them to make compressed data that decodes incorrectly (producing output different than the original) or could cause the decoder to return failure.

This bug was present from Oodle 2.5.0 to 2.5.4 ; if you use those versions you should update to 2.5.5

When the bug occurs, the OodleLZ_Compress call returns success, thinking it made valid compressed data, but it has actually made a damaged bit stream. When you call Decompress it might return failure, or it might return success but produce decompressed output that does not match the original bits.

Any compressed data that you have made which decodes successfully (and matches the original uncompressed data) is fine. The presence of the bug can only be detected by attempting to decode compressed data and checking that it matches the original uncompressed data.

The decoder is not affected by this bug, so if you have shipped user installations that only do decoding, they don't need to be updated. If you have compressed files which were made incorrectly because of this bug, you can patch only those individual compressed files.


Technical details :

This bug was caused by one of the internal bit stream write pointers writing past the end of its bits, potentially over-writing another previously written bit stream. This caused some of the previously written bits to become garbage, causing them to decode into something other than what they had been encoded from.

This only occurred with 64-bit encoders. Any data written by 32-bit encoders is not affected by this bug.

This bug could in theory occur on any Kraken & Mermaid compressed data. In practice it's very rare and I've only seen it in one particular case - "whole huff chunks" on data that is only getting a little bit of compression, with uncompressed data that has a trinary byte structure (such as 24-bit RGB). It's also much more likely in pre-2.3.0 compatibility mode (eg. with OodleLZ_BackwardsCompatible_MajorVersion=2 or lower).


BTW it's probably a good idea in general to decode and verify the data after every compress.

I don't do it automatically in Oodle because it would add to encode time, but on second thought that might be a mistake. Pretty much all the Oodle codecs are so asymmetric that doing a full decode every time wouldn't add much to the encode time. For example :


Kraken Normal level encodes at 50 MB/s
Kraken decodes at 1000 MB/s

To encode 1 MB is 0.02 s
To decode 1 MB is 0.001 s

To decode after every encode changes the encode time to 0.021 s = 47.6 MB/s

it's not a very significant penalty to encode time, and it's worth it to verify that your data definitely decodes correctly. I think it's a good idea to go ahead and add this to your tools.
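
Something like this, with made-up wrapper names (my_compress / my_decompress stand in for whatever codec you call; this is not the Oodle API) :

#include <stdlib.h>
#include <string.h>

/* hypothetical wrappers around whatever codec you're calling */
extern size_t my_compress(const void *raw, size_t rawLen, void *comp);
extern size_t my_decompress(const void *comp, size_t compLen, void *raw, size_t rawLen);

static int compress_and_verify(const void *raw, size_t rawLen, void *comp, size_t *compLen)
{
    *compLen = my_compress(raw, rawLen, comp);
    if (*compLen == 0) return 0;                    /* encode failed */

    void *check = malloc(rawLen);
    if (!check) return 0;
    size_t got = my_decompress(comp, *compLen, check, rawLen);
    int ok = (got == rawLen) && (memcmp(check, raw, rawLen) == 0);
    free(check);
    return ok;                                      /* only ship data that round-trips */
}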

I may add a "verify" option to the Compress API in the future to automate this.

8/08/2017

Oodle 2.5.4 - now with Windows UWP

Oodle 2.5.4 is out. There's now a Windows UWP SDK, separate from the Win32 SDK.

Oodle for Windows UWP comes with only the "core" library that does memory to memory compression. The Oodle Core library uses no threads, has minimal dependencies (just the CRT), no funny business, making it very portable.

For full details see the Oodle Change Log

7/03/2017

Well Crap

I was cleaning my blog, deleting a bunch of old posts, and accidentally deleted some I didn't want to. I'm going to repost a few, so if you have a subscription you may see odd old posts floating in because of that.

Unfortunately there's no blogger recover or trash can feature that would let me just undo the delete. Frowny face. Also, while I can repost them, the comments are gone. And unfortunately it seems I can't post them to the same URL. The blogger post URL seems to be irrevocably marked with the post date, and even if I retro-date the post, it munges the URL to not be the same as the original.

ADD : I reposted a few of the ones I wanted to save. The new links are :

cbloom rants 09-27-08 On LZ and ACB
cbloom rants 10-05-08 Rant on New Arithmetic Coders
cbloom rants 10-06-08 Followup on the Russian Range Coder
cbloom rants 10-07-08 Random file stuff I've learned
cbloom rants 10-07-08 A little more on arithmetic coding ...
cbloom rants 10-08-08 Arithmetic coders throw away accuracy in lots of little places.
cbloom rants 10-10-08 On LZ Optimal Parsing
cbloom rants 10-10-08 On the Art of Good Arithmetic Coder Use

2/24/2017

Oodle Perf with Chunking and Dictionary Size

I get a lot of customers that want to cut their data into small blocks for paging, who ask "what's the benefit of using larger blocks" ?

Larger block = more compression, and can also help throughput (decode speed).

Obviously larger block = longer latency (to load & decode one whole block).

(though you can get data out incrementally, you don't have to wait for the whole decode to get the first byte out; but if you only needed the last byte of the block, it's strictly longer latency).

If you need fine grain paging, you have to trade off the desire for precise control of your loading with small blocks against the benefits of larger blocks.

(obviously always follow general good paging practice, like amortize disk seeks, combine small resources into paging units, don't load a 256k chunk and just keep 1k of it and throw the rest away, etc.)

As a reference point, here's Kraken on Silesia with various chunk sizes :


Silesia : (Kraken Normal -z4)

 16k : ooKraken    : 211,938,580 ->75,624,641 =  2.855 bpb =  2.803 to 1 
 16k : decode           : 264.190 millis, 4.24 c/b, rate= 802.22 mb/s

 32k : ooKraken    : 211,938,580 ->70,906,686 =  2.676 bpb =  2.989 to 1 
 32k : decode           : 217.339 millis, 3.49 c/b, rate= 975.15 mb/s

 64k : ooKraken    : 211,938,580 ->67,562,203 =  2.550 bpb =  3.137 to 1 
 64k : decode           : 195.793 millis, 3.14 c/b, rate= 1082.46 mb/s

128k : ooKraken    : 211,938,580 ->65,274,250 =  2.464 bpb =  3.247 to 1 
128k : decode           : 183.232 millis, 2.94 c/b, rate= 1156.67 mb/s

256k : ooKraken    : 211,938,580 ->63,548,390 =  2.399 bpb =  3.335 to 1 
256k : decode           : 182.080 millis, 2.92 c/b, rate= 1163.99 mb/s

512k : ooKraken    : 211,938,580 ->61,875,640 =  2.336 bpb =  3.425 to 1 
512k : decode           : 182.018 millis, 2.92 c/b, rate= 1164.38 mb/s

1024k: ooKraken    : 211,938,580 ->60,602,177 =  2.288 bpb =  3.497 to 1 
1024k: decode           : 181.486 millis, 2.91 c/b, rate= 1167.80 mb/s

files: ooKraken    : 211,938,580 ->57,451,361 =  2.169 bpb =  3.689 to 1 
files: decode           : 206.305 millis, 3.31 c/b, rate= 1027.31 mb/s


16k   :  2.80:1 ,   15.7 enc mbps ,  802.2 dec mbps
32k   :  2.99:1 ,   19.7 enc mbps ,  975.2 dec mbps
64k   :  3.14:1 ,   22.8 enc mbps , 1082.5 dec mbps
128k  :  3.25:1 ,   24.6 enc mbps , 1156.7 dec mbps
256k  :  3.34:1 ,   25.5 enc mbps , 1164.0 dec mbps
512k  :  3.43:1 ,   25.4 enc mbps , 1164.4 dec mbps
1024k :  3.50:1 ,   24.6 enc mbps , 1167.8 dec mbps
files :  3.69:1 ,   18.9 enc mbps , 1027.3 dec mbps

(note these are *chunks* not a window size; no carry-over of compressor state or dictionary is allowed across chunks. "files" means compress the individual files of silesia as whole units, but reset compressor between files.)

You may have noticed that the chunked files (once you get past the very small 16k,32k) are somewhat faster to decode. This is due to keeping match references in the CPU cache in the decoder.

Limiting the match window (OodleLZ_CompressOptions::dictionarySize) gives the same speed benefit for staying in cache, but with a smaller compression win.


window 128k : ooKraken    : 211,938,580 ->61,939,885 =  2.338 bpb =  3.422 to 1 
window 128k : decode           : 181.967 millis, 2.92 c/b, rate= 1164.71 mb/s

window 256k : ooKraken    : 211,938,580 ->60,688,467 =  2.291 bpb =  3.492 to 1 
window 256k : decode           : 182.316 millis, 2.93 c/b, rate= 1162.48 mb/s

window 512k : ooKraken    : 211,938,580 ->59,658,759 =  2.252 bpb =  3.553 to 1 
window 512k : decode           : 184.702 millis, 2.97 c/b, rate= 1147.46 mb/s

window 1M : ooKraken    : 211,938,580 ->58,878,065 =  2.222 bpb =  3.600 to 1 
window 1M : decode           : 184.912 millis, 2.97 c/b, rate= 1146.16 mb/s

window 2M :  ooKraken    : 211,938,580 ->58,396,432 =  2.204 bpb =  3.629 to 1 
window 2M :  decode           : 182.231 millis, 2.93 c/b, rate= 1163.02 mb/s

window 4M :  ooKraken    : 211,938,580 ->58,018,936 =  2.190 bpb =  3.653 to 1 
window 4M : decode           : 182.950 millis, 2.94 c/b, rate= 1158.45 mb/s

window 8M : ooKraken    : 211,938,580 ->57,657,484 =  2.176 bpb =  3.676 to 1 
window 8M : decode           : 189.241 millis, 3.04 c/b, rate= 1119.94 mb/s

window 16M: ooKraken    : 211,938,580 ->57,525,174 =  2.171 bpb =  3.684 to 1 
window 16M: decode           : 202.384 millis, 3.25 c/b, rate= 1047.21 mb/s

files     : ooKraken    : 211,938,580 ->57,451,361 =  2.169 bpb =  3.689 to 1 
files     : decode           : 206.305 millis, 3.31 c/b, rate= 1027.31 mb/s

window 128k:  3.42:1 ,   20.1 enc mbps , 1164.7 dec mbps
window 256k:  3.49:1 ,   20.1 enc mbps , 1162.5 dec mbps
window 512k:  3.55:1 ,   20.1 enc mbps , 1147.5 dec mbps
window 1M  :  3.60:1 ,   20.0 enc mbps , 1146.2 dec mbps
window 2M  :  3.63:1 ,   19.7 enc mbps , 1163.0 dec mbps
window 4M  :  3.65:1 ,   19.3 enc mbps , 1158.5 dec mbps
window 8M  :  3.68:1 ,   18.9 enc mbps , 1119.9 dec mbps
window 16M :  3.68:1 ,   18.8 enc mbps , 1047.2 dec mbps
files      :  3.69:1 ,   18.9 enc mbps , 1027.3 dec mbps

WARNING : tuning perf to cache size is obviously very machine dependent; I don't really recommend fiddling with it unless you know the exact hardware you will be decoding on. The test machine here has a 4 MB L3, so speed falls off slightly as window size approaches 4 MB.


If you do need to use tiny chunks with Oodle ("tiny" being 32k or smaller; 128k or above is in the normal intended operating range) here are a few tips to consider :

1. Consider pre-allocating the Decoder object and passing in the memory to the OodleLZ_Decompress calls. This avoids doing a malloc per call, which may or may not be significant overhead.

2. Consider changing OodleConfigValues::m_OodleLZ_Small_Buffer_LZ_Fallback_Size . The default is 2k bytes. Buffers smaller than that will use LZB16 instead of the requested compressor, because many of the new ones don't do well on tiny buffers. If you want to have control of this yourself, you can set this to 0.

3. Consider changing OodleLZ_CompressOptions::spaceSpeedTradeoffBytes . This is the number of bytes that must be saved from the compressed output size before the encoder will choose a slower decode mode. eg. it controls decisions like whether literals are sent raw or with entropy coding. This number is scaled for full size buffers (128k bytes or more). When using tiny buffers, it will choose to avoid entropy coding more often. You may wish to dial down this value to scale to your buffers. The default is 256 ; I recommend trying 128 to see what the effect is.
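
For tip 3, the change is just one field on the options struct you pass to the compress call. A minimal sketch - I'm assuming the usual pattern of copying the defaults and overriding a field, so treat everything here other than spaceSpeedTradeoffBytes itself as a placeholder to check against your SDK header :

OodleLZ_CompressOptions opts = *OodleLZ_CompressOptions_GetDefault();  /* assumed accessor - check your SDK header */
opts.spaceSpeedTradeoffBytes = 128;    /* default is 256 ; try 128 for tiny buffers */
/* then pass &opts as the options argument of the OodleLZ_Compress call */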

1/20/2017

Oodle on the Nintendo Switch

Oodle is coming soon to the Nintendo Switch (NX), an ARM A57 device.

Initial performance test vs. the software zlib (1.2.8) provided in the Nintendo SDK :

lzt99                : nn_deflate : 1.883 to 1 : 74.750 MB/s
lzt99                : LZNA       : 2.723 to 1 : 24.886 MB/s
lzt99                : Kraken     : 2.549 to 1 : 238.881 MB/s
lzt99                : Hydra      : 2.519 to 1 : 274.433 MB/s
lzt99                : Mermaid    : 2.393 to 1 : 328.930 MB/s
lzt99                : Selkie     : 1.992 to 1 : 660.859 MB/s

total                : nn_deflate : 1.883 to 1 : 74.750 MB/s
total                : LZNA       : 2.723 to 1 : 24.886 MB/s
total                : Kraken     : 2.549 to 1 : 238.881 MB/s
total                : Hydra      : 2.519 to 1 : 274.433 MB/s
total                : Mermaid    : 2.393 to 1 : 328.930 MB/s
total                : Selkie     : 1.992 to 1 : 660.859 MB/s

Kraken is 3.21X faster than zlib, with way more compression (35% more).

All tests single threaded, 64-bit.

I've also included Hydra, at a space-speed tradeoff value between Kraken & Mermaid (sstb=300). It's a bit subtle, perhaps you can see it best in the loglog chart, but Hydra here is not just interpolating between Kraken & Mermaid performance, it's actually beating both of them in a Pareto frontier sense.

UPDATE : these are rather old initial port numbers on the Switch. We've done some optimization work since then, and the speeds are way up, particularly Mermaid & Selkie. Don't read too much into the numbers here. I should update them...

8/20/2016

PNG without ZLib

If you need to send PNG images in a compressed archive, here's a tip.

PNG's are internally compressed with Zlib. When you run another compressor (such as Oodle) on an already-compressed file like PNG, it won't be able to do much with it. It might get a few bytes out of the headers, but typically the space-speed tradeoff decision in Oodle will not think that gain is worth bothering with, so the PNG will just be sent uncompressed.

There are a few reasons why you might want to use an Oodle compressor rather than the Zlib inside PNG. One is to reduce size; some of the Oodle compressors can make the files smaller than Zlib can. Another is for speed, if you use Kraken or Mermaid the decoder is much faster than the Zlib decompression in PNG.

Now obviously if you want the smallest possible lossless image, you should use an image-specific codec like webp-ll , but we will assume here that that isn't an option.

You could of course just decode the PNG to BMP or TGA or some kind of simple sample format, but that is not desirable. For one thing it changes the format, and your end usage loader might be expecting PNG. Your PNG's might be using PNG-specific features like borders or transparency or whatever that are hard to translate to other formats.

But another is that we want the PNG to keep doing its filtering. Filtered image samples from PNG will usually be more compressible by the back-end compressor than the raw samples in a BMP.

The easy way to do this all is just to take an existing PNG and set its ZLib compression level to 0 (just store). You keep all the PNG headers, and you still get the pixel filtering. But the samples are now uncompressed, so the back-end compressor (Oodle or whatever) gets to work on them instead of passing through already-ZLibbed data.


pngcp

pngcp is a utility from the official libpng distribution. It reads & writes a png and can change some options.

Usage for what we want is :


pngcp --level=0 --text-level=0 from.png to.png

I have made a Win32 build with static libs of pngcp for your convenience :

pngcp.zip

I also added a --help option ; run "pngcp --help". The official pngcp seems to have no help or readme at all that explains usage.

I *think* that pngcp preserves headers & options & pixel formats BUT I'M NOT SURE, it's not my code, YMMV, don't go fuck up your pngs without testing it. If it doesn't work - hey you can get pngcp from the official distro and fix it.

I used libpng 1624. The vc7.1 project in libpng worked fine for me. pngcp needed a little bit of de-unixification to build in VC but it was straightforward. You need zlib ; I used 1.2.8 and it worked fine; you need to make a dir named "zlib" at the same level as libpng. I did "mklink /j zlib zlib-1.2.8".

* CAVEAT : this isn't really the way I'd like to do this. pngcp loads the PNG and then saves it out again, which introduces the possibility of losing metadata that was stuffed in the file or just screwing it up somehow. I'd much rather do this conversion without ever actually loading it as an image. That is, take the PNG file as just a binary blob, find the zlib streams and unpack them, store them with a level 0 header, and pass through the PNG headers totally untouched. That would be a much more robust way to ensure you don't lose anything.


cbpngz0

cbpngz0 usage :


cbpngz0 from to

cbpngz0 uses the cblib loaders, so it can load bmp,tga,png,jpeg and so on. It writes a PNG at zlib level 0. Unlike pngcp, cbpngz0 does NOT support lots of weird formats; it only writes 8-bit gray, 24-bit RGB, and 32-bit RGBA. This is not a general purpose PNG zlib level changer!! Nevertheless I find it useful because of the wider range of formats it can load.

cbpngz0.zip

cbpngz0 is an x64 exe and uses the DLLs included.


Some sample results.

I take an original PNG, then try compressing it with Oodle two ways. First, convert it to a BMP and compress the BMP. Second, convert to a Zlib level 0 PNG (the "_z0.png") and then compress with Oodle. The difference between the two is that the _z0.png gets the PNG filters, and of course stays a PNG if that's what your loader expects. If you give the original PNG to Oodle, it passes it through uncompressed.


porsche640.png             529,821

porsche640.bmp             921,654

porsche640.bmp.ooz         711,273

porsche640_z0.png.ooz      508,091

-------------

blinds.png                 328,754

blinds.bmp               1,028,826

blinds.bmp.ooz             193,130

blinds_z0.png.ooz          195,558

-------------

xxx.png                    420,149

xxx.bmp                    915,054

xxx.bmp.ooz                521,861

xxx_z0.png.ooz             409,311

The ooz files are made with Oodle LZNA -z6 (level Optimal2).

You can see there are some big gains possible with replacing Zlib (on "blinds"). On normal photographic continuous tone images Zlib does okay so the gains are small. On those images, compressing the BMP without filters is very bad.


Another small note : if your end usage PNG loader supports the optional MNG format LOCO color transform, that usually helps compression.

ADD : Chris Maiwald points out that he gets better PNG filter choice by using "Z_FIXED" (which is the zlib option for fixed huffman tables instead of per-file huffman). A bit weird, but perhaps it biases the filter choice to be more consistent?

I wonder if choosing a single PNG filter for the whole image would be better than letting PNG do its per-row thing? (to try to make the post-filter residuals more consistent for the back end modeling stage). For max compression you would use something like a png optimizer that tried various filter strategies, but instead of rating them using zlib, rate with the back-end of your choice.

7/25/2016

Introducing Oodle Mermaid and Selkie

I'm pleased to announce the release of two new compressors in Oodle 2.3.0 : Mermaid and Selkie.

Mermaid and Selkie are the super-fast-to-decode distant relatives of Kraken. They use some of the same ideas and technology as Kraken, but are independent compressors targeted at even higher speed and lower compression. Mermaid & Selkie make huge strides in what's possible in compression in the high-speed domain, the same way that Kraken did in the high-compression domain.

Mermaid is about twice as fast as Kraken, but with compression around Zlib levels.

Selkie is one of the fastest decompressors in the world, and also gets much more compression than other very-high-speed compressors.

( Oodle is my data compression library that we sell at RAD Game Tools , read more about it there )

Kraken, Mermaid, and Selkie all use an architecture that makes space-speed decisions in the encoder to give the best tradeoff of compressed size vs decoding speed. The three compressors have different performance targets and make decisions suited for each one's usage domain (Kraken favors more compression and will give up some speed, Selkie strongly favors speed, Mermaid is in between).

For detailed information about the new Mermaid and Selkie I've written a series of posts :

cbloom rants Introducing Oodle Mermaid and Selkie
cbloom rants Oodle 2.3.0 All Test Sets
cbloom rants Oodle 2.3.0 ARM Report
cbloom rants Oodle Mermaid and Selkie on PS4
cbloom rants Oodle Mermaid
cbloom rants Oodle Selkie
RAD Game Tools - Oodle Network and Data Compression

Here are some representative numbers on the Silesia test set : (sum of time and size on individual files)


Oodle 2.3.0 Silesia -z6

Kraken     : 4.082 to 1 : 999.389 MB/s
Mermaid    : 3.571 to 1 : 2022.038 MB/s
Selkie     : 3.053 to 1 : 2929.770 MB/s

zstdmax    : 4.013 to 1 : 468.497 MB/s
zlib9      : 3.128 to 1 : 358.681 MB/s
lz4hc      : 2.723 to 1 : 2267.021 MB/s

on Win64 (Core i7-3770 3.4 GHz)

On Silesia, Mermaid is 5.65X faster to decode than zlib, and gets 14% more compression. Selkie is 1.3X faster to decode than LZ4 and gets 12% more compression.

Charts on Silesia total : (charts show time and size - lower is better!)

And the speedup chart on Silesia, which demonstrates the space-speed efficiency of a compressor in different usage domains.

Kraken was a huge step in the Pareto frontier that pushed the achievable speedup factor way up beyond what other compressers were doing. There's a pre-Kraken curve where we thought the best possible tradeoff existed, that most other compressors in the world roughly lie on (or under). Kraken set a new frontier way up on its own with nobody to join it; Mermaid & Selkie are the partners on that new curve that have their peaks at higher speeds than Kraken.

You can also see this big jump of the new family very easily in scatter plots, which we'll see in later posts.

Oodle Mermaid and Selkie on PS4

The PS4 is a lovely platform to benchmark on because it's standard. It's also very easy to run tests on. Performance of these compressors on the Xbox One is extremely similar, with small variation due to clock rate (1.75 vs 1.6 GHz) and compiler (MSVC vs clang/llvm).

Everything is slow on the PS4 in absolute terms (it's a slow chip and difficult to optimize for). The Oodle compressors do very well, even better in relative terms on PS4 than on typical PC's.

Kraken is usually around ~2X faster than ZStd on PC's, but is 3X faster on PS4. Mermaid is usually just slightly slower than LZ4 on PC's, but is solidly faster than LZ4 on PS4.


lzt99 :

Kraken     : 2.477 to 1 : 390.582 MB/s
Mermaid    : 2.279 to 1 : 749.896 MB/s
Selkie     : 1.937 to 1 : 1159.064 MB/s

zstd       : 2.374 to 1 : 133.498 MB/s
miniz      : 1.883 to 1 : 85.654 MB/s
lz4hc-safe : 1.669 to 1 : 673.616 MB/s
LZSSE8     : 1.626 to 1 : 767.106 MB/s

Mermaid is faster than LZ4 on PS4 !! Wow! And the compression level is in a totally different domain than other super-fast decompressors like LZ4 or LZSSE.

lzt99 is a good case for Selkie & Mermaid. Selkie beats zlib compression ratio while being 75% faster than LZ4.

All compressors here are fuzz-safe, and run in safe mode if they have optional safe/unsafe modes.

Charts : (showing time and size - lower is better!)

lzt99 :

the raw data :


PS4 : Oodle 230 : (-z6)

inName : lzt:/lzt99

reference :

miniz      : 24,700,820 ->13,120,668 =  4.249 bpb =  1.883 to 1
miniz_decompress_time : 288.379 millis, 18.61 c/b, rate= 85.65 mb/s

zstd       : 24,700,820 ->10,403,228 =  3.369 bpb =  2.374 to 1
zstd_decompress_time : 185.028 millis, 11.94 c/b, rate= 133.50 mb/s

lz4hc      : 24,700,820 ->14,801,510 =  4.794 bpb =  1.669 to 1
LZ4_decompress_safe_time : 36.669 millis, 2.37 c/b, rate= 673.62 mb/s

LZSSE8     : 24,700,820 ->15,190,395 =  4.920 bpb =  1.626 to 1
decode_time      : 32.200 millis, 2.08 c/b, rate= 767.11 mb/s

Oodle :

Kraken     : 24,700,820 -> 9,970,882 =  3.229 bpb =  2.477 to 1
decode           : 63.241 millis, 4.08 c/b, rate= 390.58 mb/s

Mermaid    : 24,700,820 ->10,838,455 =  3.510 bpb =  2.279 to 1
decode           : 32.939 millis, 2.13 c/b, rate= 749.90 mb/s

Selkie : 24,700,820 ->12,752,506 =  4.130 bpb =  1.937 to 1
decode           : 21.311 millis, 1.38 c/b, rate= 1159.06 mb/s

BTW for reference, the previous best compressor in Mermaid's domain was LZNIB. Before these new compressors, LZNIB was quite unique in that it got good decode speeds, much faster than the LZ-Huffs of the time (eg. 3X faster than ZStd) but with compression usually better than ZLib. Well, LZNIB is still quite good compared to other competition, but it's just clobbered by the new Oceanic Bestiary compressors. The new compressor in this domain is Mermaid and it creams LZNIB for both size and speed :


LZNIB -z6  : 24,700,820 ->12,015,591 =  3.892 bpb =  2.056 to 1 
decode           : 58.710 millis, 3.79 c/b, rate= 420.73 mb/s

Mermaid    : 24,700,820 ->10,838,455 =  3.510 bpb =  2.279 to 1
decode           : 32.939 millis, 2.13 c/b, rate= 749.90 mb/s

See the index of this series of posts for more information : Introducing Oodle Mermaid and Selkie .
For more about Oodle visit RAD Game Tools

old rants