There are several apps that let you use their service without creating an account or storing any of your data with them. You sign up with Google or Dropbox, etc., and the app reads/writes everything to your storage provider.<p>Have any of you done this? What has your experience been like? Have you found a way to make this work with multi-player? Which storage provider has the best API for this?
I've often felt we need an abstraction for just this: "bring your own storage", so that you sign up and provide a "bucket", and the service reads/writes to that. I think the difficulty lies in all the discrepancies between different storage mechanisms and in normalizing the SLA. If Dropbox is down and your user is on Dropbox, you'll need a way to communicate that their particular "storage" is offline. The trade-off is likely worth it: being in full control of your data and getting a data export "for free" matters to a lot of folks.<p>OpenDAL was on HN recently and would be a pretty decent abstraction to use for this: <a href="https://github.com/apache/incubator-opendal">https://github.com/apache/incubator-opendal</a>
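A minimal sketch of what such a "bring your own storage" interface might look like (all names here are hypothetical, not OpenDAL's actual API):

```python
from typing import Protocol


class BucketStore(Protocol):
    """Hypothetical 'bring your own storage' interface."""

    def read(self, key: str) -> bytes: ...
    def write(self, key: str, data: bytes) -> None: ...
    def delete(self, key: str) -> None: ...
    def is_available(self) -> bool: ...


class InMemoryStore:
    """Stand-in backend; a real one would wrap Dropbox, Google Drive, S3, etc."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def read(self, key: str) -> bytes:
        return self._blobs[key]

    def write(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def delete(self, key: str) -> None:
        self._blobs.pop(key, None)

    def is_available(self) -> bool:
        # A Dropbox-backed implementation would surface provider outages here,
        # so the app can tell the user that *their* storage is offline.
        return True
```

The `is_available` hook is where the per-provider SLA discrepancy would get normalized: the app polls it and degrades gracefully instead of just failing.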
I was a lead for a major competitor to Google Drive for a while.<p>Yes, using a cloud storage provider might work instead of running your own database...<p>> Have you found a way to make this work with multi-player?<p>... but this is where you will run into serious problems.<p>"Cloud drives" aren't designed for highly concurrent multi-user access. That is when you need an ACID-compliant database (or otherwise to understand this issue well enough that you know what you're doing).<p>In our case, our API was based around light usage of files: if two people edited the same file at the same time, we couldn't resolve the conflict automatically, because these were files: we had no knowledge of the files' schemas.<p>Likewise, our APIs weren't designed for "live" high-performance usage. In our case, if it took under two minutes for an update to sync, we were happy. That's an eternity in a multi-player game.<p>In general, if your application is going to use a cloud drive as its data storage, you should target light storage needs: i.e., the user's data is generally private, the user is generally making light changes, and multi-user usage doesn't require heavy "live" data.<p>For example, this would work great for a password manager that allows sharing passwords via the cloud drive's existing sharing system. It would also work great for an online image editor, a sheet music editor, or anything where you could envision a desktop app that edits files.
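To make the conflict point concrete, here is a hedged sketch (hypothetical names, not any real provider's API) of the optimistic-concurrency check a schema-unaware file store is reduced to: it can detect that a concurrent edit landed first, but it can never merge the two versions.

```python
class FileRecord:
    """A stored file plus a revision counter, as a cloud drive might track it."""

    def __init__(self, data: bytes, rev: int = 0):
        self.data = data
        self.rev = rev


def write_if_unchanged(record: FileRecord, new_data: bytes, base_rev: int) -> bool:
    """Optimistic concurrency: reject the write if the file changed since we
    read it. With no knowledge of the file's schema, the only honest outcome
    is a conflict the user has to resolve by hand."""
    if record.rev != base_rev:
        return False  # conflict: someone else's edit landed first
    record.data = new_data
    record.rev += 1
    return True
```

An ACID database can merge concurrent changes at the row or field level; a file store can only say "yes" or "conflict".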
I'm not sure if the diagrams.net (drawio) people hang out here, but I use their web tool to create flowcharts and architecture diagrams that are then stored in Google Drive. They may be able to shed some light on it. I must warn that using it in Firefox is a bit spotty compared to Chrome, though this is likely due to issues with Google OAuth in FF extensions.
Originally, when I started working on [Fileshark](fileshark.app), I designed it around using your own Google Drive as the storage backend. I ended up going in a different direction. Here are some of the reasons:<p>- Having the entire product be reliant on Google is just too big a risk. Google has changed APIs and restricted access to third parties before.<p>- Lots of added complexity around search, pagination, etc.<p>- Because of the previous point, all the files need to be tracked in an external database, which adds a requirement to keep everything in sync.<p>- Now that the files are not stored in Google Drive, I'm working on adding an option to support automatic backups to Google Drive.
Aternos (a free/ad-supported Minecraft server host) backs up a user’s servers to a user-provided Google Drive account. Google Drive allows apps to store hidden data that the user cannot directly see or modify, which is useful from a security perspective on Aternos, as server backups can contain executable jar files.
I am working on a side project that currently stores data as JSON-decorated markdown in localStorage, without any backend. It would be trivial to write a backend that OAuths with Dropbox/GitHub/etc. and syncs data through it, but there's little point in doing so (assuming you allow data export, and unless you want to avoid storage costs): once user data is going through a closed backend, you have to trust the provider.<p>A while back I looked into the feasibility of accessing various APIs directly from the browser, and I ran into a dead end. Everything I found has CORS restrictions that prevent this from working. If anybody knows of anything, I'd love to take a look at it.
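The "JSON-decorated markdown" idea is easy to sketch. The exact format above isn't specified, so this assumes one plausible convention: a single-line JSON header followed by the markdown body.

```python
import json


def split_note(text: str) -> tuple[dict, str]:
    """Split a note into (metadata, body). Assumes the JSON header is the
    first line -- purely illustrative, not the project's actual format."""
    header, _, body = text.partition("\n")
    return json.loads(header), body


def join_note(meta: dict, body: str) -> str:
    """Inverse of split_note: re-attach the JSON header."""
    return json.dumps(meta) + "\n" + body
```

Because the round trip is lossless, the same serialization works for localStorage today and for a synced file backend later.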
Possibly related - Dave Winer's writing about using dropbox for apps:<p>- <a href="http://scripting.com/2015/06/25/dropboxCouldBeKingOfTheOnepageApp.html" rel="nofollow noreferrer">http://scripting.com/2015/06/25/dropboxCouldBeKingOfTheOnepa...</a> ('Dropbox could be king of the one-page app', 2015)<p>- <a href="http://scripting.com/2023/08/05/130423.html" rel="nofollow noreferrer">http://scripting.com/2023/08/05/130423.html</a> ('Identity as a product', 2023)
I developed a Telegram bot [1] that allows users to save text messages to Google Sheets, including simple text parsing to arrange the content into columns. Upon signup I automatically create a spreadsheet for the user so that they can start using the bot right away, but they can also change it to a personal spreadsheet later. And of course multiple users can add to the same spreadsheet if they know the link. Note that the bot uses a local SQLite database for its own data.<p>Technology-wise, it is just a Java app running on my Raspberry Pi, using Google's Drive libraries [2] to create the spreadsheets and the actual Sheets API [3] to modify content. Authentication was a bit hard to figure out, mostly because of the poor documentation. Each query to these APIs takes about one second, so it is definitely not suitable if you require low latency on the client's side or lots of concurrent accesses to the same spreadsheet. Not sure about pricing since my bot is not so popular and I am still in the free tier (:<p>[1] <a href="https://t.me/gsheet_notes_bot" rel="nofollow noreferrer">https://t.me/gsheet_notes_bot</a><p>[2] <a href="https://developers.google.com/drive/api/guides/about-sdk" rel="nofollow noreferrer">https://developers.google.com/drive/api/guides/about-sdk</a><p>[3] <a href="https://developers.google.com/sheets/api/guides/concepts" rel="nofollow noreferrer">https://developers.google.com/sheets/api/guides/concepts</a>
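The "simple text parsing into columns" step could look something like this (the `;` delimiter and fixed column count are invented for illustration; the bot's actual rule isn't described):

```python
def message_to_row(message: str, max_cols: int = 4) -> list[str]:
    """Turn one chat message into a spreadsheet row: split on ';' (a
    hypothetical delimiter), strip whitespace, pad/truncate to max_cols."""
    cells = [cell.strip() for cell in message.split(";")]
    cells = cells[:max_cols]
    cells += [""] * (max_cols - len(cells))
    return cells
```

The padded row then maps one-to-one onto a Sheets append call, so every message occupies exactly one spreadsheet row.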
I have an app that allows users to select data directories for storing config files. This allows customers to sync their settings using an existing syncing solution. Whether they use Google Drive, Dropbox, iCloud, Syncthing, or even Git is up to them.<p>The huge advantage is that most people already have some kind of syncing solution set up, and they can just use that. This is especially important for corporate environments where IT has to evaluate what's allowed and what isn't.<p>It comes with some drawbacks: your code needs to be resilient enough to deal with inconsistent syncs, you have to use file system observation, stuff might be delayed, etc.<p>But on the plus side, it seems to work very reliably, I've not gotten any complaints yet, and if something does go wrong, savvy users can just fix it themselves. It's just a bunch of JSON files on disk, after all.<p>Most sync tools offer a way to share folders with other users, so if you do it right, you get multiplayer for free.
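One standard way to harden those JSON writes against a sync client picking up a half-written file is the write-temp-then-rename pattern (a sketch, not this app's actual code):

```python
import json
import os
import tempfile


def write_json_atomically(path: str, payload: dict) -> None:
    """Write the file in one shot so a sync client never observes a
    half-written JSON document: write to a temp file in the same directory,
    fsync it, then rename over the target (os.replace is atomic)."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(payload, f)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp, path)  # atomic swap: readers see old file or new file
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on failure
        raise
```

The temp file must live in the same directory as the target, both so the rename stays on one filesystem and so it lands inside the synced folder's permissions.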
The original YNAB (<a href="https://www.ynab.com/" rel="nofollow noreferrer">https://www.ynab.com/</a>) used a JSON file stored in your Dropbox as the backend. It worked pretty well, but it definitely lacked some professionalism and had a few rough edges (mainly related to Dropbox sync delays).
Haven't done this. Curious, though, about "storage" (type 1) vs "non-volatile data APIs" (type 2). The former I think of as "open/read/write/close/delete" (and <i>maybe</i> truncate); the latter I think of as having things like "truncate/seek", which add state-machine semantics (if you don't do the operations in the same order, you get a different state of your store). The "hybrid" is to use type 1 storage as a log and use it to reconstruct a type 2 store. You still need to ensure the underlying type 1 log is written in "order" (vector clocks help here) in order to preserve data integrity.<p>If you have a suitably robust "middleware" layer, this should be doable with any type 1 storage backend.
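The hybrid approach can be sketched in a few lines: treat the type 1 store as an append-only log of operations, each stamped with a logical clock, and rebuild the type 2 state by replaying in clock order (a simplified single-writer sketch; real vector clocks handle concurrent writers).

```python
from typing import Iterable


def replay(log: Iterable[tuple[int, str, str, str]]) -> dict[str, str]:
    """Rebuild a key/value ('type 2') store from an append-only ('type 1')
    log of (clock, op, key, value) entries. Sorting by the logical clock
    makes replay deterministic even if entries arrive out of order."""
    state: dict[str, str] = {}
    for _, op, key, value in sorted(log, key=lambda entry: entry[0]):
        if op == "set":
            state[key] = value
        elif op == "del":
            state.pop(key, None)
    return state
```

Because the log is append-only, the backend only ever needs "write new object" and "list/read objects", which every type 1 store supports.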
I really wish there were more industry-standard solutions like this. There are lots of semi-serious apps I build where I'd rather not worry about auth and storage, but it's part of the product, so I'll do some hacky stuff with local storage (which only goes so far).
I am in the process of doing just that for a cloud gaming platform of mine: <a href="https://borg.games" rel="nofollow noreferrer">https://borg.games</a><p>If you want to save your game progress, you just log in with OneDrive.<p>I previously made a single-page EPUB reader using a similar technique with GDrive: <a href="https://h5reader.azurewebsites.net/" rel="nofollow noreferrer">https://h5reader.azurewebsites.net/</a> (that one does not work with third-party cookies disabled, though, because it uses an iframe to log in instead of redirects, and I never bothered to update it).
What about using something like personal decentralized storage? That would help it stay up all the time and ensure the user's ownership of the data, no? There is a cool project, <a href="https://zyphe.com/" rel="nofollow noreferrer">https://zyphe.com/</a>, that does that. They are just at the beginning, and for now they have decided to run a decentralized KYC solution on top: <a href="https://www.togggle.io/" rel="nofollow noreferrer">https://www.togggle.io/</a>
The Solid protocol is trying to do this too: <a href="https://www.inrupt.com/solid" rel="nofollow noreferrer">https://www.inrupt.com/solid</a>
Is it possible to grant Google Drive OAuth permissions with scope limited to a single folder?<p>My experience with Drive OAuth is sketchy Colab notebooks asking me to authenticate with Drive, but in such a way that I'm apparently granting access to everything inside of it. I've never been sure if that's because of limitations on the OAuth API, the Colab client specifically, or just sloppy developers.
What are the main reasons products would do this?<p>Is it user-privacy focused? Product cost-savings focused? Something else?<p>The privacy discussion would be interesting. On one hand, users "own" their data in the sense that they can delete it themselves at any time, but you also allow a product (hopefully limited) access to your own storage provider.
I use a password manager that does this, called SafeInCloud (Android). It asks you to select one of the available backends, such as Gmail, Dropbox, and others, and keeps the entire password DB encrypted there. It syncs between my devices quite well for that scenario, where writes are seldom.
Write/read performance is not great, and there is no way to get help from Google if you're not an enterprise customer (in the event of something happening). I use a cloud provider for important things and online drives as backups for smaller personal projects.
I know a guy that built his whole product out on Hubspot using their free CRM and APIs. I believe he said he's storing a few hundred million records there and there's been no issues so far.
All of the products I used to use that did this (like 1Password, YNAB, etc.) now have their own cloud sync, and it's better in every way. Yes, you pay for it, but it's well worth the stability and speed IMHO.<p>Dropbox has fallen so far in my eyes over the past ~decade. It used to be rock-solid file storage, a USB drive in the cloud; now it's so busy trying to be everything to everyone and failing at most things. The macOS client is hot garbage, literally one of the worst pieces of software I've had to work with. It's buggy, doesn't give useful info, gets stuck syncing, and has alerts that you can't act on. My mom's laptop yells at her that some files might not be openable and she should upgrade to their new file provider API. OK fine, let's do that — oh wait, you can't, because you have a personal and a business account, try again later. It's been this way for a while now (like a year+).<p>I have Maestral on my laptop, but honestly I just need a tiny push to leave Dropbox completely after being a paying customer for over a decade.
The language/framework I created uses Google Sheets as a datastore, even providing language constructs for JOINs/pivot tables, and caching timeseries report line items by date range in a separate auto-generated Google Sheet. It all works fine, but SQL with a caching layer like memcached is obviously a better option if you have any control over your server.
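A JOIN construct over Sheets data has to be evaluated client-side, since Sheets itself has no relational engine. A minimal hash-join sketch over rows-as-lists (illustrative only, not that framework's implementation):

```python
def inner_join(left: list[list[str]], right: list[list[str]],
               on_left: int, on_right: int) -> list[list[str]]:
    """Hash join over two sheets' rows: index the right sheet by its join
    column, then probe it with each left row and concatenate matches."""
    index: dict[str, list[list[str]]] = {}
    for row in right:
        index.setdefault(row[on_right], []).append(row)
    out = []
    for row in left:
        for match in index.get(row[on_left], []):
            out.append(row + match)
    return out
```

Building the hash index makes the join roughly linear in the two row counts, which matters when every sheet read is a slow API call you want to do only once.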
Why is @darig's reply dead? It looks reasonable and interesting to me (although they could have perhaps made it clearer that it ties into this topic because of developing a programming language that does what the title asks).