

posted by Fnord666 on Saturday February 29 2020, @06:05PM   Printer-friendly
from the zip-it-up dept.

Hans Wennborg does a deep dive into the history and evolution of the Zip compression format and its underlying algorithms in a blog post. While this lossless compression format became popular around three decades ago, it has its roots in the 1950s and 1970s. Notably, as a result of the "Arc Wars" of the 1980s, which hit BBS users hard, the Zip format was dedicated to the public domain from the start. The main work of the Zip format is performed through Lempel-Ziv compression (LZ77) and Huffman coding.

I have been curious about data compression and the Zip file format in particular for a long time. At some point I decided to address that by learning how it works and writing my own Zip program. The implementation turned into an exciting programming exercise; there is great pleasure to be had from creating a well oiled machine that takes data apart, jumbles its bits into a more efficient representation, and puts it all back together again. Hopefully it is interesting to read about too.

This article explains how the Zip file format and its compression scheme work in great detail: LZ77 compression, Huffman coding, Deflate and all. It tells some of the history, and provides a reasonably efficient example implementation written from scratch in C. The source code is available in hwzip-1.0.zip.

Previously:
Specially Crafted ZIP Files Used to Bypass Secure Email Gateways (2019)
Which Compression Format to Use for Archiving? (2019)
The Math Trick Behind MP3s, JPEGs, and Homer Simpson's Face (2019)
Ask Soylent: Internet-communication Archival System (2014)


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Informative) by sjames on Saturday February 29 2020, @08:25PM (3 children)

    by sjames (2882) on Saturday February 29 2020, @08:25PM (#964659) Journal

Compression was an example of a big grass-roots win for free and open, starting with the war between SEA and PKWARE. The courts decided for SEA, but the people (at least the subset that even knew what file compression was) decided for PK, and so SEA went away in short order. I remember BBSes at the time, and how quickly the BBS software was modified/configured to not even accept uploads compressed with SEA. Where they weren't outright rejected, users uploading something compressed with SEA would be dogpiled with messages (friendly and not so friendly) telling them to only use PKZIP (and sometimes why). All that before PKZIP got new compression methods that made it more effective than SEA's ARC.

    Unisys rattling the legal saber over GIF had similar results in spite of browser vendors dragging their feet implementing support for PNG.

  • (Score: 4, Informative) by pipedwho on Saturday February 29 2020, @09:04PM (2 children)

    by pipedwho (2032) on Saturday February 29 2020, @09:04PM (#964669)

That's because SEA made a utility called 'arc' that generated .arc files. Then Phil Katz came along and made pkarc/pkxarc, which did the same thing but was considerably faster and produced slightly smaller files in most cases. The pkarc/pkxarc utilities were also a lot smaller, and you only needed to keep pkxarc around if you only ever extracted files.

    Then SEA sued Phil Katz over a trademark issue and Phil renamed the pkarc/pkxarc utilities to pkpak/pkxpak which generated .pak files. These were basically identical to .arc files.

However, not long after that (and probably inspired by the lawsuit), Phil created the zip format and pkzip. Zip was almost as fast as pkarc/pkxarc, but produced much smaller files, and had a much improved container format with longer CRCs and optional encryption (yeah, the first version of his encryption was totally insecure, but he fixed that when it was pointed out by the crypto community at the time). The .zip format stood the test of time and became the most popular archive format on BBSes and the internet. The size/speed tradeoff is still the reason why it (and other implementations of the format like gzip) are still around.

    LZMA/LZMA2 is much better with compression size, but is nowhere near as fast at either compression or extraction. And with high speed networks and large storage, the extra 20% size saving may not be worth the time penalty.

    • (Score: 2) by sjames on Saturday February 29 2020, @09:36PM

      by sjames (2882) on Saturday February 29 2020, @09:36PM (#964678) Journal

      The technical advantages are why it is still around today. The legal/social issues are why SEA went away never to be seen again (SOURCE: I was active on BBSes at the time). Until the crazy trademark crap, there was room for both in the community.

    • (Score: 0) by Anonymous Coward on Monday March 02 2020, @05:20PM

      by Anonymous Coward on Monday March 02 2020, @05:20PM (#965557)

      The size/speed tradeoff is still the reason why ... other implementations of the [.zip] format like gzip are still around.

      Hrm, gzip uses the DEFLATE algorithm for compression but it's a stretch to call it an implementation of the .zip format...

      Gzip was of course designed as a replacement for the traditional Unix "compress" utility which suffered from LZW patent threats in the early 1990s.

      I suspect the real reason for gzip's staying power in the Unix-like world has little to do with its performance (good or bad) but more to do with interoperability and network effects: basically everyone has gzip so everyone can unpack the file if you send them a .tar.gz, maybe not so much if you send them something weird like .tar.lz...