Professor, Computer Science
PhD, Operations Research, Cornell University, 2007
MS, Operations Research, Cornell University, 2006
MS, Mathematics, Comenius University, 2001
BS, Management, Comenius University, 2001
BS, Mathematics, Comenius University, 2000
SIAM SIGEST Outstanding Paper Award, 2016
EUSA Best Research & Dissertation Supervisor Award (2nd Prize), 2016
Turing Fellow, The Alan Turing Institute, 2016
EPSRC Fellow in Mathematical Sciences, 2016
Nominated for the Chancellor’s Rising Star Award, University of Edinburgh, 2014
Simons Institute Visiting Scientist Fellow, UC Berkeley, 2013
Nominated for the 2014 Microsoft Research Faculty Fellowship, 2013
Nominated for the Innovative Teaching Award, University of Edinburgh, 2011 - 2012
Honorary Fellow, Heriot-Watt University, 2011
CORE Fellowship, Louvain, 2007
Cornell University Graduate Fellowship, 2002
Dean’s Prize and Rector’s Prize, Comenius University, 2001
Winner of Numerous Mathematical Olympiads and Competitions, 1992 - 1997
[97] Nicolas Loizou and Peter Richtárik. Convergence analysis of inexact randomized iterative methods. [arXiv] [code: iBasic, iSDSA, iSGD, iSPM, iRBK, iRBCD]
[96] Amedeo Sapio, Marco Canini, Chen-Yu Ho, Jacob Nelson, Panos Kalnis, Changhoon Kim, Arvind Krishnamurthy, Masoud Moshref, Dan R. K. Ports and Peter Richtárik. Scaling distributed machine learning with in-network aggregation. [arXiv] [code: SwitchML]
[95] Samuel Horváth, Dmitry Kovalev, Konstantin Mishchenko, Peter Richtárik and Sebastian Stich. Stochastic distributed learning with gradient quantization and variance reduction. [arXiv] [code: VR-DIANA]
[94] El Houcine Bergou, Marco Canini, Aritra Dutta, Peter Richtárik and Yunming Xiao. Direct nonlinear acceleration. [arXiv] [code: DNA]
[93] El Houcine Bergou, Eduard Gorbunov and Peter Richtárik. Stochastic three points method for unconstrained smooth minimization. [arXiv] [code: STP]
[92] Adel Bibi, El Houcine Bergou, Ozan Sener, Bernard Ghanem and Peter Richtárik. A stochastic derivative-free optimization method with importance sampling. [arXiv] [code: STP_IS]
[91] Konstantin Mishchenko, Filip Hanzely and Peter Richtárik. 99% of parallel optimization is inevitably a waste of time. [arXiv] [code: IBCD, ISAGA, ISGD, IASGD, ISEGA]
[90] Konstantin Mishchenko, Eduard Gorbunov, Martin Takáč and Peter Richtárik. Distributed learning with compressed gradient differences. [arXiv] [code: DIANA]
[89] Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin and Peter Richtárik. SGD: general analysis and improved rates. [arXiv] [code: SGD-AS]
[88] Dmitry Kovalev, Samuel Horváth and Peter Richtárik. Don’t jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop. [arXiv] [code: L-SVRG, L-Katyusha]
[87] Xun Qian, Zheng Qu and Peter Richtárik. SAGA with arbitrary sampling. [arXiv] [code: SAGA-AS]
Prepared in 2018
[86] Lam M. Nguyen, Phuong Ha Nguyen, Peter Richtárik, Katya Scheinberg, Martin Takáč and Marten van Dijk. New convergence aspects of stochastic gradient algorithms. [arXiv]
[85] Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik and Dmitry Grishchenko. A privacy preserving randomized gossip algorithm via controlled noise insertion. NeurIPS Privacy Preserving Machine Learning Workshop, 2018. [arXiv] [poster]
[84] Konstantin Mishchenko and Peter Richtárik. A stochastic penalty model for convex and nonconvex optimization with big constraints. [arXiv]
[83] Nicolas Loizou, Michael G. Rabbat and Peter Richtárik. Provably accelerated randomized gossip algorithms. [arXiv] [code: AccGossip]
[82] Filip Hanzely and Peter Richtárik. Accelerated coordinate descent with arbitrary sampling and best rates for minibatches. To appear in: The 22nd International Conference on Artificial Intelligence and Statistics (AISTATS 2019). [arXiv] [code: ACD]
[81] Samuel Horváth and Peter Richtárik. Nonconvex variance reduced optimization with arbitrary sampling. Horváth: Best DS3 Poster Award, Paris, 2018 (link). [arXiv] [poster] [code: SVRG, SAGA, SARAH]
[80] Filip Hanzely, Konstantin Mishchenko and Peter Richtárik. SEGA: Variance reduction via gradient sketching. Advances in Neural Information Processing Systems, 2018. [arXiv] [poster] [code: SEGA]
[79] Filip Hanzely, Peter Richtárik and Lin Xiao. Accelerated Bregman proximal gradient methods for relatively smooth convex optimization. [arXiv] [code: ABPG, ABDA]
[78] Jakub Mareček, Peter Richtárik and Martin Takáč. Matrix completion under interval uncertainty: highlights. Lecture Notes in Computer Science, ECML-PKDD 2018. [pdf]
[77] Nicolas Loizou and Peter Richtárik. Accelerated gossip via stochastic heavy ball method. 56th Annual Allerton Conference on Communication, Control, and Computing, 2018. [arXiv] [poster]
[76] Adel Bibi, Alibek Sailanbayev, Bernard Ghanem, Robert Mansel Gower and Peter Richtárik. Improving SAGA via a probabilistic interpolation with gradient descent. [arXiv] [code: SAGD]
[75] Aritra Dutta, Filip Hanzely and Peter Richtárik. A nonconvex projection method for robust PCA. The Thirty-Third AAAI Conference on Artificial Intelligence, 2019 (AAAI-19). [arXiv]
[74] Robert M. Gower, Peter Richtárik and Francis Bach. Stochastic quasi-gradient methods: variance reduction via Jacobian sketching. [arXiv] [slides] [code: JacSketch] [video]
[73] Aritra Dutta, Xin Li and Peter Richtárik. Weighted low-rank approximation of matrices and background modeling. [arXiv]
[72] Filip Hanzely and Peter Richtárik. Fastest rates for stochastic mirror descent methods. [arXiv]
[71] Lam M. Nguyen, Phuong Ha Nguyen, Marten van Dijk, Peter Richtárik, Katya Scheinberg and Martin Takáč. SGD and Hogwild! convergence without the bounded gradients assumption. Proceedings of The 35th International Conference on Machine Learning, PMLR 80:3750-3758, 2018. [arXiv]
[70] Robert M. Gower, Filip Hanzely, Peter Richtárik and Sebastian Stich. Accelerated stochastic matrix inversion: general theory and speeding up BFGS rules for faster second-order optimization. Advances in Neural Information Processing Systems, 2018. [arXiv] [poster] [code: ABFGS]
[69] Nikita Doikov and Peter Richtárik. Randomized block cubic Newton method. Proceedings of The 35th International Conference on Machine Learning, PMLR 80:1290-1298, 2018. Doikov: Best Talk Award, "Control, Information and Optimization", Voronovo, Russia, 2018. [arXiv] [bib] [code: RBCN]
[68] Dmitry Kovalev, Eduard Gorbunov, Elnur Gasanov and Peter Richtárik. Stochastic spectral and conjugate descent methods. Advances in Neural Information Processing Systems, 2018. [arXiv] [poster] [code: SSD, SconD, SSCD, mSSCD, iSconD, iSSD]
[67] Radoslav Harman, Lenka Filová and Peter Richtárik. A randomized exchange algorithm for computing optimal approximate designs of experiments. Journal of the American Statistical Association. [arXiv] [code: REX, OD_REX, MVEE_REX]
[66] Ion Necoara, Andrei Patrascu and Peter Richtárik. Randomized projection methods for convex feasibility problems: conditioning and convergence rates. [arXiv] [slides]
Prepared in 2017
[65] Nicolas Loizou and Peter Richtárik. Momentum and stochastic momentum for stochastic gradient, Newton, proximal point and subspace descent methods. [arXiv]
[64] Aritra Dutta and Peter Richtárik. Online and batch supervised background estimation via L1 regression. IEEE Winter Conference on Applications in Computer Vision, 2019. [arXiv]
[63] Nicolas Loizou and Peter Richtárik. Linearly convergent stochastic heavy ball method for minimizing generalization error. In NIPS Workshop on Optimization for Machine Learning, 2017. [arXiv] [poster]
[62] Dominik Csiba and Peter Richtárik. Global convergence of arbitrary-block gradient methods for generalized Polyak-Łojasiewicz functions. [arXiv]
[61] Ademir Alves Ribeiro and Peter Richtárik. The complexity of primal-dual fixed point methods for ridge regression. Linear Algebra and its Applications 556, 342-372, 2018. [arXiv]
[60] Matthias J. Ehrhardt, Pawel Markiewicz, Antonin Chambolle, Peter Richtárik, Jonathan Schott and Carola-Bibiane Schoenlieb. Faster PET reconstruction with a stochastic primal-dual hybrid gradient method. Proceedings of SPIE, Wavelets and Sparsity XVII, Volume 10394, pages 1039410-1 - 1039410-11, 2017. [pdf] [poster] [code: SPDHG] [video]
[59] Aritra Dutta, Xin Li and Peter Richtárik. A batch-incremental video background estimation model using weighted low-rank approximation of matrices. IEEE International Conference on Computer Vision (ICCV) Workshops, 2017. [arXiv] [code: inWLR]
[58] Filip Hanzely, Jakub Konečný, Nicolas Loizou, Peter Richtárik and Dmitry Grishchenko. Privacy preserving randomized gossip algorithms. [arXiv] [slides]
[57] Antonin Chambolle, Matthias J. Ehrhardt, Peter Richtárik and Carola-Bibiane Schoenlieb. Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging applications. SIAM Journal on Optimization 28(4):2783-2808, 2018. [arXiv] [slides] [poster] [code: SPDHG] [video]
[56] Peter Richtárik and Martin Takáč. Stochastic reformulations of linear systems: algorithms and convergence theory. [arXiv] [slides] [code: basic, parallel and accelerated methods]
[55] Mojmír Mutný and Peter Richtárik. Parallel stochastic Newton method. Journal of Computational Mathematics 36(3):404-425, 2018. [arXiv] [code: PSNM]