Samuel Horvath
Peter Richtarik
Latest
Hyperparameter Transfer Learning with Adaptive Complexity
Optimal Client Sampling for Federated Learning
Lower Bounds and Optimal Algorithms for Personalized Federated Learning
A Better Alternative to Error Feedback for Communication-Efficient Distributed Learning
On Biased Compression for Distributed Learning
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Natural Compression for Distributed Deep Learning
Nonconvex Variance Reduced Optimization with Arbitrary Sampling
Stochastic Distributed Learning with Gradient Quantization and Variance Reduction
Don’t Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop