Having worked with a few Zip libraries, I can't bring myself to use any library whose memory requirement is O(n) in archive size. I can't see how this one avoids that, given the way the file is delivered to the client.
For practical usage in dynamically creating ZIP files, I've found it far more efficient and easy to use the `mod_zip` extension to Nginx[1]. It works really well, is highly RAM efficient, and pushes the logic down to some of the most stable software out there (Nginx).
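For anyone unfamiliar with how mod_zip stays RAM-efficient: the backend doesn't build the archive at all. It returns a plain-text manifest plus an `X-Archive-Files: zip` header, and Nginx streams the zip itself, fetching each entry as an internal subrequest. A sketch of such a backend response (paths, sizes, and names here are illustrative):

```text
HTTP/1.1 200 OK
X-Archive-Files: zip
Content-Disposition: attachment; filename=photos.zip

- 10042 /files/a.jpg photo-a.jpg
- 29311 /files/b.jpg photo-b.jpg
```

Each manifest line is: CRC-32 (or `-` if unknown), size in bytes, the internal URL Nginx should fetch, and the entry's name inside the archive.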
Would mod_deflate or an equivalent not do this for you? (mod_gzip in older Apache versions; static and dynamic compression are also easy to enable in recent versions of IIS.)
HTML minifiers don't actually compress in this sense. They remove comments, strip out extraneous whitespace, replace long tags and attributes with equivalent shorter ones (strong->b), normalize markup where possible and so forth. The result is still HTML: no extra decompression step is needed for it to be understood and rendered by any renderer compliant with the standard the tool targets.
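To make those transformations concrete, here's a hypothetical micro-minifier sketching the steps described above (comment removal, whitespace collapsing, shorter equivalent tags). Real minifiers are far more careful about `<pre>`, attribute values, inline scripts, and so on; this is purely illustrative:

```javascript
// Toy minifier: the output is still plain HTML, no decompression needed.
const minify = (html) => html
  .replace(/<!--[\s\S]*?-->/g, '')      // remove comments
  .replace(/\s+/g, ' ')                 // collapse runs of whitespace
  .replace(/<(\/?)strong>/g, '<$1b>')   // swap a long tag for a shorter one
  .trim();

console.log(minify('<!-- hero -->\n<p>  <strong>Hi</strong>  there </p>'));
// → <p> <b>Hi</b> there </p>
```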
General compression can get much better results than content translation. Check mod_deflate, mod_gzip, or whatever your preferred web server's equivalent is (all modern servers have one, aside from those designed to be absolutely minimal for embedded systems and the like).
It's odd to me that everyone is talking about the user experience for downloading files created with this. The really awesome use case for a library like this (and what I'm currently using it for in a new project) is giving users a multi-select to pick files, then zipping them up into a single compressed file before uploading it to the server.
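A browser-only sketch of that flow, assuming a JSZip-style API (`zip.file`, `generateAsync`); the element id and upload endpoint are made up for illustration and this won't run outside a page:

```javascript
// Zip the user's selected files client-side, then POST one archive.
document.getElementById('picker').addEventListener('change', async (e) => {
  const zip = new JSZip();
  for (const file of e.target.files) {
    zip.file(file.name, file);          // queue each selected File
  }
  const blob = await zip.generateAsync({ type: 'blob' });

  const form = new FormData();
  form.append('archive', blob, 'upload.zip');
  await fetch('/upload', { method: 'POST', body: form });
});
```

One round trip, one compressed payload, instead of N separate uploads.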
Fantastic. I already have at least 3 use cases in mind!
I'm interested in looking into feature detection for this, so the zip could be generated client-side in supporting browsers and server-side for older ones.
> Pretty cool, but the method for "downloading" looks painful (converting the generated ZIP to base64 in order to use it as an URL).
Yep, it would be cool if that were included in the API, using createObjectURL[0] where available. I don't think it'd fix the name issue, though: there's a BlobBuilder, but only File objects have a name, there's no FileBuilder, and a File's name is read-only.
Chrome (last time I checked) will crash if the data URL is over 2MB. Luckily there are the Blob and BlobBuilder APIs now. Using Stuart's code as a starting point, I've successfully built >700MB zip files on flixtractr.com. The main downside is that when you request the blob URL, not all browser/platform combinations give the downloaded file an extension that matches the MIME type, so you have to tell your users they may need to rename it.
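A browser-only sketch of the Blob approach, assuming `zipBytes` is a `Uint8Array` produced by the zip library. In browsers that support the anchor `download` attribute, it also sidesteps the naming/extension problem, since you choose the filename yourself:

```javascript
// Wrap the generated bytes in a Blob and trigger a named download.
const blob = new Blob([zipBytes], { type: 'application/zip' });
const url = URL.createObjectURL(blob);

const a = document.createElement('a');
a.href = url;
a.download = 'archive.zip';   // browser saves under this name
a.click();

URL.revokeObjectURL(url);     // release the object URL when done
```

Unlike a base64 data URL, the Blob never gets re-encoded into a giant string, which is what makes the multi-hundred-megabyte case workable.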
While it is indeed cool that this is done in the browser, it'd be nice to see some emphasis on using it within Node.js. It'd probably work without too many modifications.
I thought the zlib support could only handle gzip files and not zip files? This would indicate that it only supports gzip files: http://zlib.net/zlib_faq.html#faq11
Sorry, I was vague. You should use the zlib library when the container format doesn't matter. If you need zip specifically, you can still shell out to an external process, which will be much faster, easier, and more efficient than using something like this.
This is unbelievably useful if you need the client to download multiple files with one click using only client-side code. That wasn't possible before. Great job!
Aye. I once created a "theme generator" for SonyEricsson phones with JavaScript on the client and PHP on the server.
Their themes were basically a .tar.gz of files: one listing all the colours and other options and the rest being images for the various interface elements.
With canvas and something like this, almost the entire thing could be done client-side now.