
Ask HN: S3 but with append?

2 points by dhbradshaw over 6 years ago
Ideally, append-only.

Basically, what do you do if you want to pool a bunch of streams into a (potentially very large) log file?

3 comments

abd12 over 6 years ago
You can use Kinesis Firehose to stream data to S3. It'll buffer data for a while -- you set thresholds based on size of data or time, whichever is hit first -- then it will save the data to S3.

It won't be a single large file, but they'll all have the same prefix based on date. Most data processing tools will let you suck up an entire prefix and treat it like a single file.
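
For what it's worth, setting up a delivery stream with those size/time buffering thresholds looks roughly like this in boto3; the stream name, bucket, and role ARN below are hypothetical placeholders:

    import boto3

    firehose = boto3.client("firehose")

    firehose.create_delivery_stream(
        DeliveryStreamName="stream-pool",  # hypothetical name
        DeliveryStreamType="DirectPut",
        ExtendedS3DestinationConfiguration={
            # Hypothetical role that allows Firehose to write to the bucket.
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-to-s3",
            "BucketARN": "arn:aws:s3:::my-log-bucket",
            "Prefix": "logs/",  # all delivered objects share this prefix
            # Flush whichever threshold is hit first: 64 MB or 5 minutes.
            "BufferingHints": {"SizeInMBs": 64, "IntervalInSeconds": 300},
        },
    )

    # Each producer just appends records; Firehose batches them into S3 objects.
    firehose.put_record(
        DeliveryStreamName="stream-pool",
        Record={"Data": b"one log line\n"},
    )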
idunno246 over 6 years ago
If you know the size ahead of time, you can use multipart uploads. Otherwise you would have to buffer to disk. You could consider Kinesis Firehose, which has dumping to S3 built in.

The Google storage API has a mode where you can stream bytes, and the object doesn't become visible until you close it (and then it can't be modified, like S3). And unlike S3, it requires a delete permission to be able to overwrite, though with S3 you can turn on versioning and not grant DeleteObjectVersion.
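
A minimal boto3 sketch of that multipart route, assuming a hypothetical bucket and key:

    import boto3

    s3 = boto3.client("s3")

    def pool_to_s3(chunks, bucket="my-log-bucket", key="pooled.log"):
        """Upload an iterable of byte strings as one S3 object via multipart upload.

        Every part except the last must be at least 5 MB, so callers have to
        buffer up to that size before yielding a chunk.
        """
        mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
        parts = []
        for number, chunk in enumerate(chunks, start=1):
            resp = s3.upload_part(
                Bucket=bucket, Key=key,
                UploadId=mpu["UploadId"], PartNumber=number, Body=chunk,
            )
            parts.append({"PartNumber": number, "ETag": resp["ETag"]})
        # The object only becomes visible once the upload is completed.
        s3.complete_multipart_upload(
            Bucket=bucket, Key=key,
            UploadId=mpu["UploadId"], MultipartUpload={"Parts": parts},
        )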
nikonyrh over 6 years ago
You could always partition them into chunks and upload to S3 as individual objects.
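
A rough sketch of that chunk-per-object approach with boto3; the bucket name, key prefix, and 8 MB flush threshold are arbitrary assumptions:

    import boto3

    s3 = boto3.client("s3")

    def pool_streams(records, bucket="my-log-bucket", max_bytes=8 * 1024 * 1024):
        """Buffer an iterable of byte records and flush each chunk as its own object."""
        buf, size, seq = [], 0, 0
        for rec in records:
            buf.append(rec)
            size += len(rec)
            if size >= max_bytes:
                # Zero-padded sequence numbers keep the chunks listable in order.
                s3.put_object(Bucket=bucket, Key=f"logs/part-{seq:08d}", Body=b"".join(buf))
                buf, size, seq = [], 0, seq + 1
        if buf:  # flush the trailing partial chunk
            s3.put_object(Bucket=bucket, Key=f"logs/part-{seq:08d}", Body=b"".join(buf))

Reading the log back is then a matter of listing the "logs/" prefix and concatenating the objects in key order, which is the same single-prefix trick the Firehose comment mentions.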