So this is about machine learning apparently, not learning as in teaching students. I don't quite get where the "federated" part comes into play.
I'm interested in whether federated learning can bring ML to situations where you don't want to pool all the data in one spot for privacy reasons. Say you run a B2B SaaS business with separate tenancies, and each tenant contains sensitive information about that client (e.g. about <i>their</i> clients). Could you run a federated learning model so that it learns within each tenant and improves the overall model, without sharing any of the sensitive information between tenants?
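The scheme described above is essentially federated averaging (FedAvg): each tenant trains on its own data and only the resulting model weights are aggregated centrally. Here is a minimal toy sketch of that loop, assuming a simple one-parameter least-squares model and made-up per-tenant datasets purely for illustration:

```python
# Toy sketch of federated averaging (FedAvg) across tenants.
# Each tenant trains on its private data locally; only the learned
# weight (never a raw record) crosses the tenant boundary.

def local_train(w, data, lr=0.05, epochs=50):
    """One tenant's local update: gradient descent on y = w * x."""
    for _ in range(epochs):
        grad = sum((w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Simulated private datasets (true slope ~2.0) that are never pooled.
tenant_data = [
    [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)],
    [(0.5, 1.0), (1.5, 3.1), (2.5, 4.8)],
    [(1.0, 1.9), (4.0, 8.1), (2.0, 4.0)],
]

# Federated rounds: broadcast the global weight, train locally on each
# tenant, then average the returned updates into a new global model.
global_w = 0.0
for _ in range(20):
    updates = [local_train(global_w, data) for data in tenant_data]
    global_w = sum(updates) / len(updates)

print(round(global_w, 1))  # converges near 2.0; data stayed per-tenant
```

In a real multi-tenant deployment you'd use a framework rather than hand-rolled averaging, and you'd likely add secure aggregation or differential privacy, since raw weight updates can still leak information about the training data.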
FL has emerged as a promising technique for edge devices to collaboratively learn a shared prediction model while keeping their training data on the device, thereby decoupling machine learning from the need to store the data in the cloud. However, FL is difficult to realistically implement due to scale and system heterogeneity. Although there are several research frameworks for simulating FL algorithms, none of them support the study of scalable FL workloads on heterogeneous edge devices.