
Tus - an Open Source File Upload Protocol

110 points by pow-tac, about 12 years ago

9 comments

lucaspiller · about 12 years ago
This is a pretty cool idea. I absolutely detest how many apps fail to handle network degradation properly, so anything resumable is great in my opinion.

I have a question though... regarding resuming, you do the HEAD request to see what has been uploaded:

    HTTP/1.1 200 Ok
    Content-Length: 100
    Content-Type: image/jpg
    Content-Disposition: attachment; filename="cat.jpg"
    Range: bytes=0-69

Is it possible that the data that is already there could be corrupt?

I'm also wondering how things like proxies deal with this. A lot of mobile networks have nasty transparent caching proxies in their network. Also, when uploading a file through Nginx (when the upload works correctly) it won't send anything to the backend until it has the complete data; is this the same if the connection cuts out halfway through?
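The resume flow the comment describes can be sketched in Python. This is a minimal illustration, not an official tus client: it assumes, as in the quoted example, that a HEAD response reports progress with a "Range: bytes=0-N" value, and both helper names here are hypothetical:

```python
def resume_offset(range_header):
    """Given the 'bytes=0-N' value from a HEAD response, return
    the byte offset the client should resume uploading from.
    Returns 0 when the server has stored nothing yet."""
    if not range_header:
        return 0
    # "bytes=0-69" -> last stored byte is 69, so resume at byte 70
    _, _, span = range_header.partition("=")
    _, _, last_byte = span.partition("-")
    return int(last_byte) + 1

def remaining_chunk(data, range_header):
    """The slice of the file that still needs to be uploaded."""
    return data[resume_offset(range_header):]
```

For a 100-byte file with "bytes=0-69" already stored, `remaining_chunk` yields the final 30 bytes, which the client would then PUT starting at offset 70. The corruption concern raised above is real, though: an offset alone cannot detect corrupt stored data, which would require a checksum exchange on top of this.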
nanoman · about 12 years ago
I've been a user of their product https://transloadit.com/ for about a year now for video encoding and am very happy with it. The API is elegant, the whole service is reliable and fast.

These guys know what they're doing.
gabipurcaru · about 12 years ago
Shameless plug, but HTML5 + CORS + S3 can enable resumable file uploads. I've written a library that uploads to S3 and can resume uploads (think internet going down for a while, force-closing the tab, etc.): https://github.com/cinely/mule-uploader . There's a demo available; I suggest you test it with bigger files (>100MB).
j4_james · about 12 years ago
I think some of your responses aren't quite right. For example, in response to the first PUT, you have:

    HTTP/1.1 200 Ok
    Range: bytes=0-99
    Content-Length: 0

But the Range header surely can't be used here, since it's a request header and this is a response. A Content-Range header wouldn't be any more appropriate, since you're not actually returning any content (of any amount). Do you really need this info in the response anyway? The sender knows what they sent, and either it was entirely successful (a 2xx response) or it wasn't.

Also, if you're going to return a zero-length 200 response, you might as well use 204 No Content instead.

Then, when resuming an upload, you send a HEAD that returns the following:

    HTTP/1.1 200 Ok
    Content-Length: 100
    Content-Type: image/jpg
    Content-Disposition: attachment; filename="cat.jpg"
    Range: bytes=0-69

Again, you can't use the Range request header in a response. And the Content-Length should surely be 70, since that's how much content would be returned if this were a GET request. You could possibly include a Content-Range of 0-69/100 if the server wanted to communicate the expected file size, but I'm not convinced that's necessary, and it seems something of an abuse of that header.

Finally, the response to the resumed PUT has the same problems as the first PUT response. It should probably just be a 204 No Content response - no Content-Length or Range headers required.
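The corrected exchange this comment proposes can be modeled with a small Python sketch. To be clear, this follows the comment's suggested semantics, not the tus spec itself, and the function names are purely illustrative:

```python
def head_response(stored_bytes, total_bytes=None):
    """Status and headers for a HEAD on a partially uploaded
    resource, per the corrections above: Content-Length is the
    number of bytes a GET would actually return, and Content-Range
    (a valid response header, unlike Range) optionally carries the
    expected total size."""
    headers = {"Content-Length": str(stored_bytes)}
    if total_bytes is not None:
        # e.g. "bytes 0-69/100" when 70 of 100 bytes are stored
        headers["Content-Range"] = "bytes 0-%d/%d" % (stored_bytes - 1, total_bytes)
    return 200, headers

def put_response():
    """A successful PUT that returns no body: 204 No Content,
    with no Content-Length or Range headers at all."""
    return 204, {}
```

So for the resumed-upload example above (70 of 100 bytes stored), the HEAD would carry `Content-Length: 70` and `Content-Range: bytes 0-69/100`, and both PUT responses would simply be 204.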
andyking · about 12 years ago
Call me a prude, but I saw the F-word and hit the back button.
icebraining · about 12 years ago
Seems fine as a best practice for using HTTP for file uploads. I find the requirement that the server have fixed URLs for uploading limiting, but then again, I'm one of those HATEOAS freaks.
tsuraan · about 12 years ago
I'm confused about what information the HEAD request gives after a chunk has failed. Suppose a client concurrently uploads chunks 1, 2, 3, 4 and 5; chunks 2 and 4 fail, and the rest work. What information does the HEAD give to tell the client that it needs to re-send the data that was in chunks 2 and 4? Wouldn't it make more sense for the client to store the success of each of its chunk uploads?

I'm also not seeing how the client indicates that the upload is complete. It could be done server-side, by just detecting when a file has no more holes in it, but that seems hacky. Holes can also be useful: suppose I make a 32GB .vmdk file (non-sparse) and put 2GB of data on it. If the server can support holes, then I can upload (and the server only has to store) about 2GB of data; if the server can't support holes, then I'll have to upload a bit more data (assuming compression), and the server will have to store a lot more data. If there were some final message the client could submit to the resource saying "I'm done, commit it!", I think the protocol would be a bit more complete.
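The client-side bookkeeping this comment argues for could look like the following minimal Python sketch. The class and its explicit "commit" signal are hypothetical, not part of any tus client; the point is just that the client itself can track which chunks succeeded:

```python
class ChunkedUpload:
    """Client-side tracking of concurrent chunk uploads: remember
    which chunks are still pending, retry those, and only signal
    completion once every chunk has been acknowledged."""

    def __init__(self, chunk_ids):
        self.pending = set(chunk_ids)

    def mark_done(self, chunk_id):
        """Record that a chunk upload succeeded."""
        self.pending.discard(chunk_id)

    def to_retry(self):
        """Chunks that still need to be re-sent, in order."""
        return sorted(self.pending)

    def ready_to_commit(self):
        """True once every chunk has succeeded; only then would
        the client send its final 'I'm done, commit it!' message."""
        return not self.pending
```

In the scenario above, after chunks 1, 3 and 5 succeed, `to_retry()` reports [2, 4] with no HEAD round-trip needed, and the commit message is only sent once the pending set is empty.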
nnnnni · about 12 years ago
As usual, relevant xkcd:

http://xkcd.com/927/
raimue · about 12 years ago
It seems like the handling of concurrent access has been neglected in this protocol. What if multiple clients try to resume uploading the same file?