
Affiliations

Research Interests

Professor Peter Richtárik's research interests lie at the intersection of mathematics, computer science, machine learning, optimization, numerical linear algebra, high-performance computing, and applied probability. He develops algorithms for solving optimization problems described by big data, with a particular focus on randomized, parallel, and distributed methods. He is a co-inventor of federated learning, Google's machine learning platform for mobile phones that preserves the privacy of user data.
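
As an illustration of the federated learning idea described above, here is a minimal sketch of federated averaging on a synthetic least-squares problem. The function names, the full-gradient local update, and the plain mean aggregation are simplifying assumptions for exposition, not Google's production system.

```python
import numpy as np

def local_update(w, data, lr=0.1, steps=1):
    """Client-side training: a few gradient steps on the client's local least-squares loss."""
    X, y = data
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of (1/2n) * ||Xw - y||^2
        w = w - lr * grad
    return w

def federated_averaging(clients, w0, rounds=300):
    """Each round: every client trains locally on its own data, the server averages the models."""
    w = w0
    for _ in range(rounds):
        local_models = [local_update(w.copy(), data) for data in clients]
        w = np.mean(local_models, axis=0)  # server-side aggregation; raw data never leaves the clients
    return w

# Synthetic data: 5 clients whose local datasets share one ground-truth model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = [(X, X @ w_true) for X in (rng.normal(size=(20, 2)) for _ in range(5))]
w = federated_averaging(clients, np.zeros(2))
```

In practice clients run multiple local epochs on heterogeneous data and communicate compressed updates, which is where methods such as DIANA (paper [90] below) come in.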

Education

  • Ph.D., Operations Research, Cornell University, 2007

  • M.S., Operations Research, Cornell University, 2006

  • M.S., Mathematics, Comenius University, 2001

  • B.S., Management, Comenius University, 2001

  • B.S., Mathematics, Comenius University, 2000

Awards and Honors

  • SIAM SIGEST Outstanding Paper Award, 2016

  • EUSA Best Research & Dissertation Supervisor Award (2nd Prize), 2016

  • Turing Fellow, The Alan Turing Institute, 2016

  • EPSRC Fellow in Mathematical Sciences, 2016

  • Nominated for the Chancellor’s Rising Star Award, University of Edinburgh, 2014

  • Simons Institute Visiting Scientist Fellow, UC Berkeley, 2013

  • Nominated for the 2014 Microsoft Research Faculty Fellowship, 2013

  • Nominated for the Innovative Teaching Award, University of Edinburgh, 2011 - 2012

  • Honorary Fellow, Heriot-Watt University, 2011

  • CORE Fellowship, Louvain, 2007

  • Cornell University Graduate Fellowship, 2002

  • Dean’s Prize and Rector’s Prize, Comenius University, 2001

  • Winner of Numerous Mathematical Olympiads and Competitions, 1992 - 1997

Selected Publications

[97] Nicolas Loizou and Peter Richtárik
Convergence analysis of inexact randomized iterative methods
[arXiv] [code: iBasic, iSDSA, iSGD, iSPM, iRBK, iRBCD]

[96] Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan R. K. Ports and Peter Richtárik
Scaling distributed machine learning with in-network aggregation
[arXiv] [code: SwitchML]

[95] Samuel Horváth, Dmitry Kovalev, Konstantin Mishchenko, Peter Richtárik and Sebastian Stich
Stochastic distributed learning with gradient quantization and variance reduction
[arXiv] [code: VR-DIANA]

[94] El Houcine Bergou, Marco Canini, Aritra Dutta, Peter Richtárik and Yunming Xiao
Direct nonlinear acceleration
[arXiv] [code: DNA]

[93] El Houcine Bergou, Eduard Gorbunov and Peter Richtárik
Stochastic three points method for unconstrained smooth minimization
[arXiv] [code: STP]

[92] Adel Bibi, El Houcine Bergou, Ozan Sener, Bernard Ghanem and Peter Richtárik
A stochastic derivative-free optimization method with importance sampling
[arXiv] [code: STP_IS]

[91] Konstantin Mishchenko, Filip Hanzely and Peter Richtárik
99% of parallel optimization is inevitably a waste of time
[arXiv] [code: IBCD, ISAGA, ISGD, IASGD, ISEGA]

[90] Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč and Peter Richtárik
Distributed learning with compressed gradient differences
[arXiv] [code: DIANA]

[89] Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin and Peter Richtárik
SGD: general analysis and improved rates
[arXiv] [code: SGD-AS]

[88] Dmitry Kovalev, Samuel Horváth and Peter Richtárik
Don’t jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop
[arXiv] [code: L-SVRG, L-Katyusha]

[87] Xun Qian, Zheng Qu and Peter Richtárik
SAGA with arbitrary sampling
[arXiv] [code: SAGA-AS]

Prepared in 2018

[86] Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takáč and Marten van Dijk
New convergence aspects of stochastic gradient algorithms
[arXiv]

[85] Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik and Dmitry Grishchenko
A privacy preserving randomized gossip algorithm via controlled noise insertion 
NeurIPS Privacy Preserving Machine Learning Workshop, 2018
[arXiv] [poster]

[84] Konstantin Mishchenko and Peter Richtárik
A stochastic penalty model for convex and nonconvex optimization with big constraints 
[arXiv]

[83] Nicolas Loizou, Michael G. Rabbat and Peter Richtárik
Provably accelerated randomized gossip algorithms 
[arXiv] [code: AccGossip]

[82] Filip Hanzely and Peter Richtárik
Accelerated coordinate descent with arbitrary sampling and best rates for minibatches
to appear in: The 22nd International Conference on Artificial Intelligence and Statistics (AISTATS 2019)
[arXiv] [code: ACD]

[81] Samuel Horváth and Peter Richtárik
Nonconvex variance reduced optimization with arbitrary sampling 
Horváth: Best DS3 Poster Award, Paris, 2018 (link)
[arXiv] [poster] [code: SVRG, SAGA, SARAH] 

[80] Filip Hanzely, Konstantin Mishchenko and Peter Richtárik
SEGA: Variance reduction via gradient sketching 
Advances in Neural Information Processing Systems, 2018 
[arXiv] [poster] [code: SEGA]

[79] Filip Hanzely, Peter Richtárik and Lin Xiao
Accelerated Bregman proximal gradient methods for relatively smooth convex optimization
[arXiv] [code: ABPG, ABDA]

[78] Jakub Mareček, Peter Richtárik and Martin Takáč 
Matrix completion under interval uncertainty: highlights
Lecture Notes in Computer Science, ECML-PKDD 2018
[pdf]

[77] Nicolas Loizou and Peter Richtárik 
Accelerated gossip via stochastic heavy ball method
56th Annual Allerton Conference on Communication, Control, and Computing, 2018
[arXiv] [poster]

[76] Adel Bibi, Alibek Sailanbayev, Bernard Ghanem, Robert Mansel Gower and Peter Richtárik 
Improving SAGA via a probabilistic interpolation with gradient descent 
[arXiv] [code: SAGD]

[75] Aritra Dutta, Filip Hanzely and Peter Richtárik 
A nonconvex projection method for robust PCA
The Thirty-Third AAAI Conference on Artificial Intelligence, 2019 (AAAI-19)
[arXiv]

[74] Robert M. Gower, Peter Richtárik and Francis Bach 
Stochastic quasi-gradient methods: variance reduction via Jacobian sketching 
[arXiv] [slides] [code: JacSketch] [video: ]

[73] Aritra Dutta, Xin Li and Peter Richtárik 
Weighted low-rank approximation of matrices and background modeling
[arXiv]

[72] Filip Hanzely and Peter Richtárik 
Fastest rates for stochastic mirror descent methods
[arXiv]

[71] Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg and Martin Takáč
SGD and Hogwild! convergence without the bounded gradients assumption 
Proceedings of The 35th International Conference on Machine Learning, PMLR 80:3750-3758, 2018
[arXiv]

[70] Robert M. Gower, Filip Hanzely, Peter Richtárik and Sebastian Stich 
Accelerated stochastic matrix inversion: general theory and speeding up BFGS rules for faster second-order optimization
Advances in Neural Information Processing Systems, 2018
[arXiv] [poster] [code: ABFGS]

[69] Nikita Doikov and Peter Richtárik 
Randomized block cubic Newton method
Proceedings of The 35th International Conference on Machine Learning, PMLR 80:1290-1298, 2018
Doikov: Best Talk Award, "Control, Information and Optimization", Voronovo, Russia, 2018
[arXiv] [bib] [code: RBCN]

[68] Dmitry Kovalev, Eduard Gorbunov, Elnur Gasanov and Peter Richtárik 
Stochastic spectral and conjugate descent methods
Advances in Neural Information Processing Systems, 2018
[arXiv] [poster] [code: SSD, SconD, SSCD, mSSCD, iSconD, iSSD] 

[67] Radoslav Harman, Lenka Filová and Peter Richtárik 
A randomized exchange algorithm for computing optimal approximate designs of experiments
Journal of the American Statistical Association
[arXiv] [code: REX, OD_REX, MVEE_REX]

[66] Ion Necoara, Andrei Patrascu and Peter Richtárik 
Randomized projection methods for convex feasibility problems: conditioning and convergence rates
[arXiv] [slides]

Prepared in 2017

[65] Nicolas Loizou and Peter Richtárik
Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods
[arXiv]

[64] Aritra Dutta and Peter Richtárik
Online and batch supervised background estimation via L1 regression
IEEE Winter Conference on Applications in Computer Vision, 2019
[arXiv]

[63] Nicolas Loizou and Peter Richtárik
Linearly convergent stochastic heavy ball method for minimizing generalization error
In NIPS Workshop on Optimization for Machine Learning, 2017
[arXiv] [poster]

[62] Dominik Csiba and Peter Richtárik
Global convergence of arbitrary-block gradient methods for generalized Polyak-Łojasiewicz functions
[arXiv]

[61] Ademir Alves Ribeiro and Peter Richtárik
The complexity of primal-dual fixed point methods for ridge regression
Linear Algebra and its Applications 556, 342-372, 2018
[arXiv]

[60] Matthias J. Ehrhardt, Pawel Markiewicz, Antonin Chambolle, Peter Richtárik, Jonathan Schott and Carola-Bibiane Schoenlieb
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
Proceedings of SPIE, Wavelets and Sparsity XVII, Volume 10394, pages 1039410-1 - 1039410-11, 2017
[pdf] [poster] [code: SPDHG] [video: ]

[59] Aritra Dutta, Xin Li and Peter Richtárik
A batch-incremental video background estimation model using weighted low-rank approximation of matrices
IEEE International Conference on Computer Vision (ICCV) Workshops, 2017
[arXiv] [code: inWLR]

[58] Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik and Dmitry Grishchenko
Privacy preserving randomized gossip algorithms
[arXiv] [slides]

[57] Antonin Chambolle, Matthias J. Ehrhardt, Peter Richtárik and Carola-Bibiane Schoenlieb
Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging applications
SIAM Journal on Optimization 28(4):2783-2808, 2018
[arXiv] [slides] [poster] [code: SPDHG] [video: ]

[56] Peter Richtárik and Martin Takáč
Stochastic reformulations of linear systems: algorithms and convergence theory
[arXiv] [slides] [code: basic, parallel and accelerated methods] 

[55] Mojmír Mutný and Peter Richtárik
Parallel stochastic Newton method
Journal of Computational Mathematics 36(3):404-425, 2018
[arXiv] [code: PSNM]

Research Areas

Multimedia