
Launch HN: Flower (YC W23) – Train AI models on distributed or sensitive data

180 points by niclane7 about 2 years ago
Hey HN - we're Daniel, Taner, and Nic, and we're building Flower (https://flower.dev/), an open-source framework for training AI on distributed data. We move the model to the data instead of moving the data to the model. This enables regulatory compliance (e.g. HIPAA) and ML use cases that are otherwise impossible. Our GitHub is at https://github.com/adap/flower, and we have a tutorial here: https://flower.dev/docs/tutorial/Flower-0-What-is-FL.html.

Flower lets you train ML models on data that is distributed across many user devices or "silos" (separate data sources) without having to move the data. This approach is called federated learning.

A silo can be anything from a single user device to the data of an entire organization. For example, your smartphone keyboard suggestions and auto-corrections can be driven by a personalized ML model learned from your own private keyboard data, as well as data from other smartphone users, without the data being transferred from anyone's device.

Most of the famous AI breakthroughs, from ChatGPT and Google Translate to DALL·E and Stable Diffusion, were trained with public data from the web. When the data is all public, you can collect it in a central place for training. This "move the data to the computation" approach fails when the data is sensitive or distributed across organizational silos and user devices.

Many important use cases are affected by this limitation:

* Generative AI: Many scenarios require sensitive data that users or organizations are reluctant to upload to the cloud.
For example, users might want to put themselves and friends into AI-generated images, but they don't want to upload and share all their photos.

* Healthcare: We could potentially train cancer detection models better than any doctor, but no single organization has enough data.

* Finance: Preventing financial fraud is hard because individual banks are subject to data regulations, and in isolation, they don't have enough fraud cases to train good models.

* Automotive: Autonomous driving would be awesome, but individual car makers struggle to gather the data to cover the long tail of possible edge cases.

* Personal computing: Users don't want certain kinds of data to be stored in the cloud, hence the recent success of privacy-enhancing alternatives like the Signal messenger or the Brave browser. Federated methods open the door to using sensitive data from personal devices while maintaining user privacy.

* Foundation models: These get better with more data, and more diverse data, to train them on. But again, most data is sensitive and thus can't be incorporated, even though these models continue to grow bigger and need more information.

Each of us has worked on ML projects in various settings (e.g., corporate environments, open-source projects, research labs). We've worked on AI use cases for companies like Samsung, Microsoft, Porsche, and Mercedes-Benz. One of our biggest challenges was getting the data to train AI while being compliant with regulations or company policies. Sometimes this was due to legal or organizational restrictions; other times, it was difficulties in physically moving large quantities of data or natural concerns over user privacy.
We realized issues of this kind were making it too difficult for many ML projects to get off the ground, especially in domains like healthcare and finance.

Federated learning offers an alternative: it doesn't require moving data in order to train models on it, and so has the potential to overcome many barriers for ML projects.

In early 2020, we began developing the open-source Flower framework to simplify federated learning and make it user-friendly. Last year, we experienced a surge in Flower's adoption among industry users, which led us to apply to YC. In the past, we funded our work through consulting projects, but looking ahead, we're going to offer a managed version for enterprises and charge per deployment or federation. At the same time, we'll continue to run Flower as an open-source project that everyone can continue to use and contribute to.

Federated learning can train AI models on distributed and sensitive data by moving the training to the data. The learning process collects whatever it can, and the data stays where it is. Because the data never moves, we can train AI on sensitive data spread across organizational silos or user devices to improve models with data that could never be leveraged until now.

Here's how it works: (0) Initialize the global model parameters on the server; (1) Send the model parameters to a number of organizations/devices (client nodes); (2) Train the model locally on the data of each organization/device (client node); (3) Return the updated model parameters back to the server; (4) On the server, aggregate the model updates (e.g., by averaging them) into a new global model; (5) Repeat steps 1 to 4 until the model converges.

This, of course, is more challenging than centralized learning: we must move AI models to data silos or user devices, train locally, send updated models back, aggregate them, and repeat.
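The round-based procedure above can be sketched in a few lines of NumPy. This is an illustrative simulation of federated averaging (FedAvg), not Flower's actual API; the linear model, data sizes, learning rate, and round count are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground truth the clients' data encodes

# Step 0: initialize the global model parameters on the server.
global_w = np.zeros(2)

def local_train(w, X, y, lr=0.1, epochs=5):
    """Steps 1-3: a client receives w, trains locally on its own
    data, and returns only the updated parameters."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Each client holds its own private dataset; the raw data never leaves it.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

# Step 5: repeat rounds until the global model converges.
for _ in range(20):
    updates = [local_train(global_w, X, y) for X, y in clients]
    # Step 4: the server aggregates the updates by plain averaging.
    global_w = np.mean(updates, axis=0)

print(np.round(global_w, 2))  # approaches true_w = [2., -1.]
```

Real deployments differ mainly in what surrounds this loop: serialization, transport (Flower uses gRPC), client sampling, and failure handling, rather than the aggregation math itself.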
Flower provides the open-source infrastructure to easily do this, as well as supporting other privacy-enhancing technologies (PETs). It is compatible with PyTorch, TensorFlow, JAX, Hugging Face, Fastai, Weights & Biases, and all the other tools regularly used in ML projects. The only dependency on the server side is NumPy, but even that can be dropped if necessary. Flower uses gRPC under the hood, so a basic client can easily be auto-generated, even for most languages that are not supported today.

Flower is open-source (Apache 2.0 license) and can be run in all kinds of environments: on a personal workstation for development and simulation, on Google Colab, on a compute cluster for large-scale simulations, on a cluster of Raspberry Pis (or similar devices) to build research systems, or deployed on public cloud instances (AWS, Azure, GCP, others) or private on-prem hardware. We are happy to help users when deploying Flower systems and will soon make this even easier through our managed cloud service.

You can find PyTorch example code here: https://flower.dev#examples, and more at https://github.com/adap/flower/tree/main/examples.

We believe that AI technology must evolve to be more collaborative, open, and distributed than it is today (https://flower.dev/blog/2023-03-08-flower-labs/). We're eager to hear your feedback and experiences regarding difficulties in training, data access, data regulation, privacy, and anything else related to federated (or related) learning methods!

17 comments

guites about 2 years ago
Hey! Glad to see Flower getting attention on HN.

I've been working on a project for over a year that uses Flower to train CV models on medical data.

One aspect that we see being brought up again and again is how we can prove to our clients that no unnecessary data is being shared over the network.

Do you have any tips on solving that particular problem? I.e., proving that no data apart from model weights is being transferred to the centralized server?

Thanks a lot for the project.

edit: Just to clarify, I am aware of differential privacy; I'm talking more on a "how to convince a medical institution that we are not sending its images over the network" level.
JohnFen about 2 years ago
Isn't this still moving your data to a central repository? It's encoded in a neural net rather than in a more accessible form, but it's still being moved out of your control.
cs02rm0 about 2 years ago
"In the past, we funded our work through consulting projects, but looking ahead, we're going to offer a managed version for enterprises and charge per deployment or federation."

Interesting.

Flower seems to fit well for people who are sensitive about their data and don't want to hand it over to a third party, but this seems to move towards a model where they have to hand that sensitive data over to a third party.

Perhaps that still works for the bulk of users, especially commercial rather than government. It's difficult to pursue both a managed solution and simultaneously maintain an open source offering without one departing from the other.
dontreact about 2 years ago
There is so much hype around federated learning, but often the hard and insurmountable part of this is federated labeling.

For example, for your cancer use case, you have to convince multiple hospitals to feed the system labels, and this is a very, very tall ask.

For healthcare, it's also not clear how to get a regulatory clearance if you can't actually test the performance of the federated deployments.

So while federated learning solves some problems generated by an unwillingness to share data, it doesn't solve all of them. Describe the use cases of your product carefully.
yawnxyz about 2 years ago
Hi! As someone new to all of this: how would I interact with the trained model after it's been trained?

Is it possible to create a conversation or QA-style interaction with it? I see there are examples of "pytorch", but as someone new, I'm not sure what that means in terms of public use cases.

I guess what I'm asking is: "OK, I use Flower to train on a bunch of stuff... then what do I do with that?"

Thanks!
jaggirs about 2 years ago
It has been shown that the input data can be reverse-engineered from the model weights. How do you deal with this issue?
brookst about 2 years ago
Very interesting project. Your write-up here does a much better job of explaining the market need and value prop than the GitHub readme.md… consider bringing some of this text over as the "why / what" story?
photochemsyn about 2 years ago
This looks very interesting. I'd like to see a model trained on the complete body of scientific research literature from the past 100 years or so; I wonder if this approach could facilitate that?
juanma91p about 2 years ago
Great to see Flower here! We use the framework for our projects because of its modularity, scalability, and ease of use. Another important aspect of FL, on top of the already mentioned privacy preservation, is network resource utilisation. By transferring only the weights of the model, less bandwidth is required, which can reduce network congestion. This is especially important given that more than 50 billion devices are expected to be connected and transferring data by 2030.
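The bandwidth point can be made concrete with a back-of-the-envelope comparison. The model size, round count, and dataset size below are illustrative assumptions, not measurements:

```python
# Compare shipping model weights every round (federated) with
# shipping the raw dataset once (centralized). All sizes are assumed.
params = 10_000_000          # a 10M-parameter model
bytes_per_param = 4          # float32 weights
rounds = 50                  # federated training rounds

weights_per_round = params * bytes_per_param             # 40 MB per direction
total_weight_traffic = 2 * weights_per_round * rounds    # download + upload, all rounds

raw_dataset = 100 * 10**9    # e.g. 100 GB of imaging data held by one client

print(total_weight_traffic / 10**9)  # 4.0 (GB of model traffic in total)
print(raw_dataset / 10**9)           # 100.0 (GB to centralize the raw data)
```

Under these assumptions the full federated run moves a small fraction of what a one-time raw-data transfer would; the gap widens as datasets grow, though it narrows for very large models or many rounds.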
elijahbenizzy about 2 years ago
Congratulations! Really excited for you!

I love how you found a niche, valuable problem, built a framework, and are seeing a lot of success. A question (and I'm far from an expert, so let me know if the assumptions are wrong):

It seems to me that the federated users have to be coordinated around timing for this to work. Otherwise this could take weeks/lots of Slack messages for a single model to train. E.g., one team is having infra issues and doesn't get a job started, the other team is ready but then their lead goes on vacation, etc. In the internal-to-an-organization case this is probably fine (e.g., a hospital where the data has to be separated by patient/cohort), but if there are different teams managing the data then (a) have you seen this problem and (b) do you have tooling to fix it?
techwizrd about 2 years ago
I've been working with Flower to implement and study federated learning for a few years, and have just started contributing back on Slack and GitHub. Congrats on launching on HN!
northlondoner about 2 years ago
Many congratulations! Glad to hear about UK & EU collaborative innovation in open-source projects. Keep up the fantastic work!

Others have asked similar questions regarding comparable projects. What's your take on OpenFL from Intel? Do you think Flower is moving in a more commercial-MLOps direction? It looks like OpenFL is particularly focused on the academic imaging community.
jleguina about 2 years ago
This is a great project!

Have you thought about what happens at inference? Suppose I train in a federated healthcare environment using PII features from patient records. Once I get the weights back, how can I ever deploy the model if I don't have access to the same features? The models would become highly coupled to the training environments, no?

Best of luck!
spangry about 2 years ago
Another interesting use case - government training models on legislatively protected data (e.g. tax data). Lots of data the government holds is governed by confidentiality restrictions built into legislation, limiting its utility. Sounds like federated learning could be a way around that.
rjtc about 2 years ago
How is your approach different from TF Federated or any of the other federated libraries out there?
blintz about 2 years ago
This is really cool. Federated learning seems like it could unlock a lot of value in healthcare settings.

Have you had any luck convincing hospitals / insurers / etc. that this satisfies HIPAA and is safe? How do you convince them?
7e about 2 years ago
Why not train 10000x faster using H100 secure enclaves with remote attestation? The FL window is closing, because it is a PITA to use and replacements are superior.