Better Methods and Theory for Federated Learning: Compression, Client Selection, and Heterogeneity
Samuel Horváth, Ph.D. Student, Statistics
Jun 27, 18:00 - 20:00
B5 L5 R5209
Keywords: Federated learning, Optimization for Machine Learning
Federated learning (FL) is an emerging machine learning paradigm involving multiple clients, e.g., mobile devices, with an incentive to collaborate in solving a machine learning problem coordinated by a central server. FL was proposed in 2016 by Konečný et al. and McMahan et al. as a viable privacy-preserving alternative to traditional centralized machine learning: by construction, the training data points remain decentralized and are never transferred by the clients to the central server. Therefore, to a certain degree, FL mitigates the privacy risks associated with centralized data collection. Unfortunately, optimization for FL faces several specific issues that centralized optimization usually does not need to handle. In this thesis, we identify several of these challenges and propose new methods and algorithms to address them, with the ultimate goal of enabling practical FL solutions supported by mathematically rigorous guarantees.
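The collaboration pattern described above can be illustrated with a minimal sketch of Federated Averaging (FedAvg, the canonical FL algorithm of McMahan et al.): in each communication round the server selects a subset of clients, each selected client runs a few local SGD steps on its own private data, and the server averages the returned models. This is a simplified toy illustration on a synthetic least-squares problem, not the thesis's methods; the data split, learning rate, and client count are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd(w, X, y, lr=0.01, steps=10):
    """Client-side update: a few gradient steps on the client's private data.
    The raw data (X, y) never leaves the client; only the model is returned."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Synthetic decentralized data: each client holds its own (X, y) shard.
d, n_clients = 5, 8
w_true = rng.normal(size=d)
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(50, d))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

w = np.zeros(d)                          # global model held by the server
for _ in range(100):                     # communication rounds
    # Client selection: the server samples a subset of clients each round.
    chosen = rng.choice(n_clients, size=4, replace=False)
    # Each selected client trains locally; the server averages the results.
    w = np.mean([local_sgd(w, *clients[i]) for i in chosen], axis=0)

print(np.linalg.norm(w - w_true))        # global model approaches w_true
```

Real FL systems add the ingredients the thesis title names on top of this loop: compressing the communicated updates, choosing which clients participate, and coping with heterogeneous (non-i.i.d.) client data.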