We use JSZip for parsing xlsx/xlsm/xlsb files in the browser (Excel 2007+ files are zip files that contain XML or binary files in specific locations): <a href="https://github.com/SheetJS/js-xlsx" rel="nofollow">https://github.com/SheetJS/js-xlsx</a><p>JSZip works well for small files, but unzipping XLSB files larger than 50 MB seems to cause out-of-memory issues in Firefox
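Since Excel 2007+ workbooks are plain ZIP containers, a quick sanity check before handing a file to a zip library is to look for the ZIP local-file-header signature `PK\x03\x04` at byte 0. A minimal sketch (the `looksLikeZip` helper is hypothetical, not part of any library mentioned here):

```javascript
// .xlsx/.xlsm/.xlsb files begin with the ZIP local-file-header
// signature: bytes 0x50 0x4b 0x03 0x04 ("PK\x03\x04").
function looksLikeZip(buf) {
  return buf.length >= 4 &&
    buf[0] === 0x50 && // 'P'
    buf[1] === 0x4b && // 'K'
    buf[2] === 0x03 &&
    buf[3] === 0x04;
}

// First bytes of a typical xlsx vs. a raw XML document:
const header = Buffer.from([0x50, 0x4b, 0x03, 0x04, 0x14, 0x00]);
console.log(looksLikeZip(header));               // true
console.log(looksLikeZip(Buffer.from('<?xml'))); // false
```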
I tried JSZip not too long ago for my side project and it was quite slow. I've been using <a href="http://gildas-lormeau.github.io/zip.js/" rel="nofollow">http://gildas-lormeau.github.io/zip.js/</a> and I'm very happy with it. I hope to release my web app some time this week.
Actually, one of the best use cases for this: when a user wants to upload multiple images/files, we can use the HTML5 File API with zip.js to generate one zip file, upload just that one file, and extract it on the server. It will be much faster. And you can run this in a Web Worker so it doesn't block the UI thread.
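To make the "bundle many files into one archive" idea concrete: the ZIP container is simple enough to write by hand for uncompressed ("stored", method 0) entries. Real code would use zip.js or JSZip (ideally in a Web Worker); this is just a dependency-free sketch of the format, in Node-style JavaScript, with hypothetical helper names:

```javascript
// CRC-32, the checksum every zip entry carries.
const CRC_TABLE = (() => {
  const t = new Uint32Array(256);
  for (let n = 0; n < 256; n++) {
    let c = n;
    for (let k = 0; k < 8; k++) c = c & 1 ? 0xedb88320 ^ (c >>> 1) : c >>> 1;
    t[n] = c >>> 0;
  }
  return t;
})();
function crc32(buf) {
  let c = 0xffffffff;
  for (const b of buf) c = CRC_TABLE[(c ^ b) & 0xff] ^ (c >>> 8);
  return (c ^ 0xffffffff) >>> 0;
}

// Build a zip from {name, data} pairs; all entries stored (no compression).
function buildStoredZip(files) {
  const chunks = [], central = [];
  let offset = 0;
  for (const { name, data } of files) {
    const nameBuf = Buffer.from(name, 'utf8');
    const crc = crc32(data);
    const local = Buffer.alloc(30);
    local.writeUInt32LE(0x04034b50, 0);   // local file header signature
    local.writeUInt16LE(20, 4);           // version needed to extract
    local.writeUInt16LE(0, 8);            // method 0 = stored
    local.writeUInt32LE(crc, 14);
    local.writeUInt32LE(data.length, 18); // compressed size
    local.writeUInt32LE(data.length, 22); // uncompressed size
    local.writeUInt16LE(nameBuf.length, 26);
    chunks.push(local, nameBuf, data);

    const cd = Buffer.alloc(46);
    cd.writeUInt32LE(0x02014b50, 0);      // central directory signature
    cd.writeUInt16LE(20, 6);              // version needed to extract
    cd.writeUInt32LE(crc, 16);
    cd.writeUInt32LE(data.length, 20);
    cd.writeUInt32LE(data.length, 24);
    cd.writeUInt16LE(nameBuf.length, 28);
    cd.writeUInt32LE(offset, 42);         // offset of local header
    central.push(cd, nameBuf);
    offset += 30 + nameBuf.length + data.length;
  }
  const cdBuf = Buffer.concat(central);
  const eocd = Buffer.alloc(22);
  eocd.writeUInt32LE(0x06054b50, 0);      // end-of-central-directory signature
  eocd.writeUInt16LE(files.length, 8);    // entries on this disk
  eocd.writeUInt16LE(files.length, 10);   // total entries
  eocd.writeUInt32LE(cdBuf.length, 12);   // central directory size
  eocd.writeUInt32LE(offset, 16);         // central directory offset
  return Buffer.concat([...chunks, cdBuf, eocd]);
}

const zip = buildStoredZip([
  { name: 'a.txt', data: Buffer.from('hello') },
  { name: 'b.txt', data: Buffer.from('world') },
]);
console.log(zip.slice(0, 4)); // <Buffer 50 4b 03 04>
```

In the browser you'd build the same bytes from `File` objects read via `FileReader` and POST the single resulting blob; on the server any standard unzip tool can open it.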
Potential use case: instead of using CSS sprite maps (putting all of your images into one image to reduce the number of http requests generated by your page, then using css magic to select regions inside of that image), image files could be zipped into an image package that is delivered to the client, who unzips it and uses the images inside.
This would cut down on the number of requests made, but allow the images to be used as normal images instead of as images within a sprite sheet.
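The client-side "unzip" step in this idea can be sketched for the simplest case: a stored (method 0) entry, whose bytes sit right after the local file header. A real unzipper (zip.js, JSZip) also handles DEFLATE-compressed entries; the helper and fixture below are purely illustrative:

```javascript
// Extract the name and raw bytes of the first entry of a zip buffer,
// assuming it is stored (method 0, i.e. uncompressed).
function readFirstStoredEntry(zip) {
  if (zip.readUInt32LE(0) !== 0x04034b50) throw new Error('not a zip');
  if (zip.readUInt16LE(8) !== 0) throw new Error('entry is compressed');
  const size = zip.readUInt32LE(22);     // uncompressed size
  const nameLen = zip.readUInt16LE(26);
  const extraLen = zip.readUInt16LE(28);
  const name = zip.toString('utf8', 30, 30 + nameLen);
  const start = 30 + nameLen + extraLen;
  return { name, data: zip.slice(start, start + size) };
}

// Hand-built single-entry fixture: "img.png" containing 3 bytes.
// (Local header only; CRC and central directory omitted for brevity.)
const fixture = Buffer.alloc(30 + 7 + 3);
fixture.writeUInt32LE(0x04034b50, 0); // local file header signature
fixture.writeUInt16LE(0, 8);          // method 0 = stored
fixture.writeUInt32LE(3, 18);         // compressed size
fixture.writeUInt32LE(3, 22);         // uncompressed size
fixture.writeUInt16LE(7, 26);         // filename length
fixture.write('img.png', 30);
Buffer.from([1, 2, 3]).copy(fixture, 37);

const entry = readFirstStoredEntry(fixture);
console.log(entry.name); // img.png
```

In a browser the extracted bytes would become a Blob and an object URL to feed into an `<img>` tag.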
We've used <a href="https://github.com/EvanOxfeld/node-unzip/issues" rel="nofollow">https://github.com/EvanOxfeld/node-unzip/issues</a> in production and we've had a surprising number of corrupt zip files created by it. We've switched to using a barely wrapped zip command-line tool.<p>Is JSZip better?
It's really fast at reading zip files created by Google takeout. I use it at <a href="http://theyhaveyour.info" rel="nofollow">http://theyhaveyour.info</a> as it works perfectly with the FileReader API. For reading only though - haven't tried writing with it.
Holy carp on a stick, that was so easy. You've made my day today.<p>BTW, here is how I make files download with the filename that I want: <a href="https://github.com/capnmidnight/JWD/blob/master/html5/desktop.js#L6-L16" rel="nofollow">https://github.com/capnmidnight/JWD/blob/master/html5/deskto...</a><p>The call to the "a" function on line 8 is just generating an HTML anchor tag. You should be able to figure it out from there.
Why not unrar and untar using javascript as well?<p><a href="https://github.com/varunmayya/bitjs" rel="nofollow">https://github.com/varunmayya/bitjs</a>
If you're purely interested in client-side compression in the browser (i.e. compatibility with server libraries/filesystem tools is not important) it's possible to get better compression in less time using <a href="http://pieroxy.net/blog/pages/lz-string/index.html" rel="nofollow">http://pieroxy.net/blog/pages/lz-string/index.html</a>
Does this handle very large files on Node.js? npm, for one, fails with large files; by "large" I mean something like a few hundred megabytes. It's very problematic, especially on Windows.
>JavaScript today is capable of generating a lot of data. The easiest way to deliver multiple files to your users is in a zip file. Instead of wasting server resources and bandwidth you can get the client to do it for you.<p>... Am I not understanding what they're saying here or do the authors really not understand how the internet works?<p>It looks to me like they're saying "don't bother letting your users download zip files. Save your bandwidth! Just get them to send themselves a zip file, client-side!"