Why don't more companies resize images client-side first using <canvas>, then save the server some work by only asking it to verify the result by:<p>- resizing to the same dimensions<p>- removing metadata<p>This gives much faster transfers (often 10x less bandwidth for mobile uploads) and reduces server load by farming the work out to the clients.<p><a href="https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/drawImage" rel="nofollow">https://developer.mozilla.org/en-US/docs/Web/API/CanvasRende...</a><p># Edit: On Keeping Full-Resolution Images<p>Some people mention that keeping the original highest-resolution images is important. I don't think that's true for most applications.<p>Most apps need current, live engagement more than high-resolution history, so older photos being smaller isn't a big deal. As technology moves on, you simply start allowing higher-res uploads. YouTube, Facebook, and others have done this fine as the older content is displaced by the new/current/now() content.<p>In fact, even our highest-resolution images today will look low-quality in the future. Pick a good max size for your site (4K?) and resize everything down to it. In a year, bump it up to 6K, then 10K, and so on.<p>Keeping costs low has its benefits, especially for us startups. Now if you have massive collateral, then knock yourself out.
There is already an (unofficial Google) image proxy written in Go that is quite fast, does caching (local or backed by S3/GCS), and does other nice things like smart cropping: <a href="https://github.com/willnorris/imageproxy" rel="nofollow">https://github.com/willnorris/imageproxy</a><p>It seems like a lot of unnecessary work for them to reimplement a service from scratch, without any major perf gains over their existing one, rather than leaning on an existing, well-known, well-built foundation.
Link to the resulting open-source project:<p><a href="https://github.com/discordapp/lilliput" rel="nofollow">https://github.com/discordapp/lilliput</a>
I’d be very worried about security issues in the unsafe C++ code.<p>You really have to run this kind of complex parsing in a disposable, containerized environment to do it safely. Or do everything carefully in a memory-safe language.
How is the security? Any sort of image processing is a potential exploitation point. I see it uses the 'mature' libjpeg-turbo and libpng libraries, along with giflib for GIFs, but even with full trust in those, the C code, patches, and changes on top could be further exploitation points. Look through ImageMagick alone to see all the fun things that become possible when seemingly basic processing turns into exploits: <a href="https://www.cvedetails.com/vulnerability-list/vendor_id-1749/Imagemagick.html" rel="nofollow">https://www.cvedetails.com/vulnerability-list/vendor_id-1749...</a>
> Today, Media Proxy operates with a median per-image resize of 25ms and a median total response latency of 85ms. It resizes more than 150 million images every day. Media Proxy runs on an autoscaled GCE group of n1-standard-16 host type, peaking at 12 instances on a typical day.<p>Awesome! <3
Does anybody know how well libvips <a href="https://github.com/DAddYE/vips" rel="nofollow">https://github.com/DAddYE/vips</a> compares to lilliput performance-wise?
Nice, but why? <a href="https://cloudinary.com" rel="nofollow">https://cloudinary.com</a>, <a href="https://www.imgix.com" rel="nofollow">https://www.imgix.com</a>, or <a href="https://www.filestack.com" rel="nofollow">https://www.filestack.com</a> already exist and are well worth it for 99% of apps. Even at scale, it really doesn't cost much to have someone else do it. You can put a thin proxy behind your existing CDN if you want to save on their bandwidth fees.<p>There are also <a href="http://thumbor.org" rel="nofollow">http://thumbor.org</a> and <a href="https://imageresizing.net" rel="nofollow">https://imageresizing.net</a> if you want a library to host yourself; both are already very fast and well tested. Put one in a Docker container on a Kubernetes cluster and it's all done in an hour.
This post reminded me of a very old article from Yahoo/Tumblr explaining how they were (ab)using Ceph to generate thumbnails on the fly as pictures were uploaded using the Ceph OSD plugin interface.<p>Unfortunately the post seems to have disappeared from the internet (it was probably around 6 years ago), so here are some other teasers:<p><a href="https://yahooeng.tumblr.com/post/116391291701/yahoo-cloud-object-store-object-storage-at" rel="nofollow">https://yahooeng.tumblr.com/post/116391291701/yahoo-cloud-ob...</a><p><a href="https://ceph.com/geen-categorie/dynamic-object-interfaces-with-lua/" rel="nofollow">https://ceph.com/geen-categorie/dynamic-object-interfaces-wi...</a><p>Disclaimer: not affiliated with Ceph apart from being a happy sysadmin.
I wonder why people implement such things on the CPU.<p>PCI Express is ~100 Gbit/s, much faster than any network interface. Internally, a GPU can resize these images an order of magnitude faster than that; see the fill-rate columns in any GPU spec sheet.
Is there any open-source image proxy that can do this?<p>E.g., instead of this<p><a href="http://localhost:8080/https://octodex.github.com/images/codercat.jpg" rel="nofollow">http://localhost:8080/https://octodex.github.com/images/code...</a><p>we could create an alias like "octo" and the URL would become<p><a href="http://localhost:8080/octo/images/codercat.jpg" rel="nofollow">http://localhost:8080/octo/images/codercat.jpg</a>
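The alias idea itself is small enough to sketch in Go's standard library. This is a hypothetical sketch, not a feature of any proxy mentioned here: the alias table and names are made up, and a real deployment would add host allow-listing and caching.

```go
package main

import (
	"fmt"
	"net/http/httputil"
	"net/url"
	"strings"
)

// aliases maps a short path prefix to an upstream base URL.
// These entries are illustrative.
var aliases = map[string]string{
	"octo": "https://octodex.github.com",
}

// expandAlias turns "/octo/images/codercat.jpg" into
// "https://octodex.github.com/images/codercat.jpg".
// It returns "" when the first path segment is not a known alias.
func expandAlias(path string) string {
	parts := strings.SplitN(strings.TrimPrefix(path, "/"), "/", 2)
	base, ok := aliases[parts[0]]
	if !ok {
		return ""
	}
	rest := ""
	if len(parts) == 2 {
		rest = parts[1]
	}
	return base + "/" + rest
}

func main() {
	// A reverse proxy that rewrites the inbound path via the alias table.
	proxy := &httputil.ReverseProxy{
		Rewrite: func(pr *httputil.ProxyRequest) {
			target := expandAlias(pr.In.URL.Path)
			if target == "" {
				return // unknown alias: leave the request untouched
			}
			u, err := url.Parse(target)
			if err != nil {
				return
			}
			pr.Out.URL = u
			pr.Out.Host = u.Host
		},
	}
	_ = proxy // serve with: http.ListenAndServe(":8080", proxy)

	fmt.Println(expandAlias("/octo/images/codercat.jpg"))
	// → https://octodex.github.com/images/codercat.jpg
}
```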