Laboratoire Jean Kuntzmann (LJK) , Université Grenoble Alpes, DAO Team
Full CV : [EN]/[FR]  Google Scholar Profile
Contact : firstname.lastname [at] univ-grenoble-alpes.fr
Franck Iutzeler | GitHub: /iutzeler | ORCID: 0000-0003-2537-380X | Twitter: @FranckIutzeler
News
Nov. 2020 : Congrats to Dmitry "Mitya" Grishchenko for successfully defending his Ph.D.! The jury included Pascal Bianchi, Peter Richtarik, Julien Mairal, Samuel Vaiter, Alexander Gasnikov, and his co-advisors: J. Malick, M. Amini, and me.
 June 2020 : Waiss Azizian (Student at ENS Paris) visited our team for six weeks as part of his (previously online) internship to work on extragradient methods.
July 2019 : I was awarded an ANR JCJC grant (Young Researcher grant from the French Research Agency). The project is named STROLL: Harnessing Structure in Optimization for Large-scale Learning.
June 2019 : I am part of the chair on Optimisation and Learning, led by Jerome Malick and Yurii Nesterov, funded by the Grenoble AI Institute.
 June 2019 : Elnur Gasanov (Student at KAUST with P. Richtarik) visited our team for five weeks to work on distributed optimization.
Mar. 2019 : Alexander Rogozin (Student at MIPT Moscow with A. Gasnikov) visited our team for two weeks to work on distributed structured optimization. [Photo]
Jan. 2019 : Mathias Chastan began his PhD between STMicroelectronics and our team. He is co-supervised by A. Lam, J. Malick, and me, and will work on machine learning for industrial defect detection.
 Nov. 2018 : I gave a talk about the Python library Pandas at the Python group of the Univ. Grenoble Alpes. The notebook of the presentation is on [GitHub] and here is a possible solution of the exercises [Notebook].
Jul. 2018 : I presented our work A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning at ICML in Stockholm: [Slides] [Poster]. We recently posted a paper on arXiv that pushes these ideas further in terms of mathematical optimization [ArXiv].
June 2018 : With colleagues from the DAO team, we organized the Grenoble Optimization Days. This event also celebrated the Grenoble-Moscow axis and was co-organized with Alexander Gasnikov (MIPT, Moscow). Partial funding by my CNRS I3A project.
Feb. 2018 : Mini-course by Julien Mairal on Optimization and Learning, funded by the GdR MOA, as a prelude to the SMAI-MODE conference. Details here.
Oct. 2017 : Dmitry Grishchenko began his PhD thesis in our team. He is co-supervised by J. Malick, M.R. Amini, and me, and will work on distributed optimization algorithms.
Sept. 2017 : Bikash Joshi defended his PhD thesis on large-scale learning methods. He was co-supervised by M.R. Amini and me, and is now a data scientist in the private sector.
June 2017 : The conference CAp 2017 (Conférence sur l'apprentissage automatique, the 19th annual meeting of the francophone Machine Learning community), held in Grenoble June 28-30, was a great success. It was co-organized by the AMA team at LIG and our DAO team at LJK; the general chair was M.R. Amini.
Mar. 2017 : Our course project on Distributed Optimization for Big Data was granted development funding from IDEX Grenoble Alpes. The goal is to take our Convex and Distributed Optimization course to the next level with state-of-the-art technologies (Python/Jupyter, Spark, Docker) and to bring it to a wider audience.
Nov. 2016 : I gave a tutorial talk about gossip algorithms at the Machine Learning seminar SMILE in Paris. Slides: Part I, Gossip algorithms; Part II, Gossip and Optimization. (The animations may not work with all PDF viewers; they should do fine with Adobe's.)
Talks
 Nov. 2020 : Nonsmooth regularizations in Machine Learning: structure of the solutions, identification, and applications, IMAG Montpellier (virtual). [Slides]
Sep. 2020 : A Randomized Proximal Gradient Method with Structure-Adapted Sampling, Journées SMAI MODE (virtual). [Slides]
 Mar. 2020 : Harnessing Structure in Optimization for Machine Learning, Optimization for Machine Learning, CIRM (France). [Slides]
 Oct. 2018 : Distributed Learning with Sparse Communications and Structure Identification, Séminaire INRIA Magnet, Lille (France).
 Jul. 2018 : Distributed Learning with Sparse Communications and Structure Identification, International Symposium on Mathematical Programming (ISMP), Bordeaux (France).
 June 2018 : Distributed Learning with Sparse Communications and Structure Identification, Séminaire Polaris, Grenoble (France).
 June 2018 : Distributed Learning with Sparse Communications and Structure Identification, Séminaire D.A.T.A., Grenoble (France).
May 2018 : Distributed Learning with Sparse Communications and Structure Identification, Journées de Statistique, Saclay (France).
Apr. 2017 : Monotonicity, Acceleration, Inertia, and the proximal gradient algorithm, Optimization and Statistical Learning, Les Houches (France). [Slides]
Nov. 2016 : Gossip Algorithms: Tutorial and Recent Advances, SMILE in Paris, Paris (France).
Oct. 2016 : Modified fixed-point iterations and applications to randomized and accelerated optimization algorithms, Workshop Cavalieri, Paris (France).
Sep. 2016 : Practical acceleration for some optimization methods using relaxation and inertia, Séminaire d'Analyse non linéaire et Optimisation, Avignon (France).
June 2016 : Practical acceleration for some optimization methods using relaxation and inertia, Séminaire Signal-Image de l'Institut de Mathématiques de Bordeaux, Bordeaux (France).
June 2016 : Practical accelerations for the alternating direction method of multipliers, PICOF Workshop, Autrans (France).
May 2016 : Stochastic coordinate descent in fixed-point iterations and application to optimization methods (in French), Congrès d'Analyse Numérique (CANUM), Obernai (France).
Nov. 2015 : Relaxation and Inertia on the Proximal Point Algorithm, Titan Workshop, Grenoble (France).
Nov. 2015 : Relaxation and Inertia on Fixed Point Algorithms, Journées EDP Rhône-Alpes-Auvergne (JERAA), Clermont-Ferrand (France).
 Mar. 2015 : Online Relaxation Method for Improving Linear Convergence Rates of the ADMM , Benelux meeting on Systems and Control, Lommel (Belgium).
 Aug. 2014 : Asynchronous Distributed Optimization , Journées MAS, Toulouse (France).
May 2014 : Distributed Optimization Techniques for Learning over Big Data, 2014 ESSEC/CentraleSupélec Conference Bridging Worlds in Big Data, ESSEC CNIT Campus, La Défense, Paris (France).
Apr. 2014 : Distributed Asynchronous Optimization using the ADMM, Large graphs and networks seminar, Université catholique de Louvain, ICTEAM institute, Louvain-la-Neuve (Belgium). [Slides]
Jul. 2013 : Distributed Optimization using a Randomized Alternating Direction Method of Multipliers, Digicosme Research Day, Digiteo, Gif-sur-Yvette.
Nov. 2012 : Distributed Estimation of the Average Value in Wireless Sensor Networks, Alcatel-Lucent Chair Seminar, Supélec, Gif-sur-Yvette.
Apr. 2012 : Some Useful Results on Matrix Products for Signal Processing, Ph.D. Candidates Seminar, Telecom ParisTech, Paris.
Oct. 2011 : Distributed Maximal Value Estimation, Ph.D. Candidates Seminar, Telecom ParisTech, Paris.
Research Topics
Optimization algorithms
Distributed Optimization
Gossip Algorithms
Students
(former students are in italics)
PhD Students
Gilles Bareilles (2019- at LJK, Grenoble). Co-advised (80%) with J. Malick (LJK, Grenoble)
Yu-Guan Hsieh (2019- at LJK, Grenoble). Co-advised (30%) with J. Malick (LJK, Grenoble) and P. Mertikopoulos (LIG, Grenoble)
Mathias Chastan (2019- at STMicroelectronics and LJK, Grenoble). CIFRE (industrial) thesis co-advised with J. Malick (LJK, Grenoble) and A. Lam (STMicroelectronics, Crolles)
Dmitry Grishchenko (2017-2020 at LJK, Grenoble), now senior R&D engineer at Huawei, Moscow. Co-advised (50%) with J. Malick (LJK, Grenoble) and M.R. Amini (LIG, Grenoble)
Bikash Joshi (2014-2017 at LIG, Grenoble), now data scientist in a private company. Co-advised (50%) with M.R. Amini (LIG, Grenoble)
Master Students
Waiss Azizian (Mar.-Aug. 2020 at LJK, Grenoble). Co-advised (50%) with J. Malick (LJK, Grenoble) and Panayotis Mertikopoulos (LIG, Grenoble)
Gilles Bareilles (Apr.-Sept. 2019 at LJK, Grenoble).
Yu-Guan Hsieh (Apr.-Sept. 2019 at LJK, Grenoble). Co-advised (50%) with J. Malick (LJK, Grenoble) and Panayotis Mertikopoulos (LIG, Grenoble)
Konstantin Mishchenko (Apr.-Sept. 2017 at LJK, Grenoble). Co-advised (50%) with J. Malick (LJK, Grenoble) and M.R. Amini (LIG, Grenoble)
Taha Essalih (Apr.-Sept. 2017 at LEME, Paris X). Co-advised (15%) with N. El Korso, A. Breloy (LEME, Paris X) and R. Flamary (Lagrange, Nice)
Funding
ANR Jeune Chercheur (JCJC) project STROLL: Harnessing Structure in Optimization for Large-Scale Learning
PI, 145k€, 2019-2023
PGMO PRMO project Distributed Optimization on Graphs with Flexible Communications
PI, 5k€, 2019-2020
with D. Grishchenko (LJK, Grenoble).
CNRS INSMI and INS2I project Intelligence artificielle et apprentissage automatique
PI, 8k€, 2018
with M. Clausel (IECL, U. Lorraine), M.R. Amini (LIG, Grenoble).
IDEX Grenoble Alpes Initiatives de Recherche Stratégiques project Distributed Optimization for Large-scale Learning
PI, 1 PhD funding + 3k€, 2017-2020
with J. Malick (LJK, Grenoble), M.R. Amini (LIG, Grenoble).
IDEX Grenoble Alpes Pedagogical Initiatives project Optimisation Distribuée pour le Big Data
approx. 30k€, 2017-2019
with J. Malick (PI), A. Iouditski, R. Hildenbrand, J. Lelong, L. Viry (LJK, Grenoble).
PGMO project Advanced nonsmooth optimization methods for stochastic programming
12k€, 2016-2018
with J. Malick (PI) (LJK, Grenoble), W. Van Ackooij (EDF, Paris), W. de Oliveira (UERJ, Rio de Janeiro, Brazil).
Jeunes Chercheurs GDR ISIS/GRETSI project "ON FIRE": calibration of future large interferometers
co-PI, 7k€, 2016-2018
with N. El Korso (co-PI), A. Breloy (LEME, Paris X), R. Flamary (Lagrange, Nice).
Funding for my PhD thesis, Distributed Estimation and Optimization over Wireless Networks, by the French Defense Agency (DGA) and Institut Carnot Telecom-Eurecom.
Preprints
Journal articles & NeurIPS/ICML papers
20. F. Iutzeler, J. Malick: Nonsmoothness in Machine Learning: specific structure, proximal identification, and applications, to appear in Set-Valued and Variational Analysis, 2020.
19. Y.-G. Hsieh, F. Iutzeler, J. Malick, P. Mertikopoulos: Explore Aggressively, Update Conservatively: Stochastic Extragradient Methods with Variable Stepsize Scaling, Advances in Neural Information Processing Systems 33 (NeurIPS), spotlight, Dec. 2020.
18. G. Bareilles, Y. Laguel, D. Grishchenko, F. Iutzeler, J. Malick: Randomized Progressive Hedging methods for Multistage Stochastic Programming, to appear in Annals of Operations Research, 2020. Code available at https://github.com/yassinelaguel/RandomizedProgressiveHedging.jl
17. G. Bareilles, F. Iutzeler: On the Interplay between Acceleration and Identification for the Proximal Gradient algorithm, Computational Optimization and Applications, vol. 77, no. 2, pp. 351-378, 2020. Code available at https://github.com/GillesBareilles/AccelerationIdentification
16. D. Grishchenko, F. Iutzeler, J. Malick: Proximal Gradient Methods with Adaptive Subspace Sampling, to appear in Mathematics of Operations Research, 2020.
15. K. Mishchenko, F. Iutzeler, J. Malick: A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm, SIAM Journal on Optimization, vol. 30, no. 1, pp. 933-959, 2020.
14. Y.-G. Hsieh, F. Iutzeler, J. Malick, P. Mertikopoulos: On the convergence of single-call stochastic extra-gradient methods, Advances in Neural Information Processing Systems 32 (NeurIPS), Dec. 2019.
13. F. Iutzeler, J. Malick, W. de Oliveira: Asynchronous level bundle methods, Mathematical Programming, vol. 184, pp. 319-348, 2020.
12. F. Iutzeler, L. Condat: Distributed Projection on the Simplex and l1 Ball via ADMM and Gossip, IEEE Signal Processing Letters, vol. 25, no. 11, pp. 1650-1654, Nov. 2018.
11. K. Mishchenko, F. Iutzeler, J. Malick, M.R. Amini: A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning, 35th International Conference on Machine Learning (ICML), PMLR 80:3584-3592, Stockholm (Sweden), July 2018.
10. F. Iutzeler, J. Malick: On the Proximal Gradient Algorithm with Alternated Inertia, Journal of Optimization Theory and Applications, vol. 176, no. 3, pp. 688-710, March 2018.
9. B. Joshi, F. Iutzeler, M.R. Amini: Large-scale asynchronous distributed learning based on parameter exchanges, International Journal of Data Science and Analytics, vol. 5, no. 4, pp. 223-232, June 2018.
8. F. Iutzeler, J. M. Hendrickx: A Generic online acceleration scheme for Optimization algorithms via Relaxation and Inertia, Optimization Methods and Software, vol. 34, no. 2, 2019.
7. B. Joshi, M.R. Amini, I. Partalas, F. Iutzeler, Yu. Maximov: Aggressive Sampling for Multiclass to Binary Reduction with Applications to Text Classification, Advances in Neural Information Processing Systems 30 (NIPS), Dec. 2017.
6. F. Iutzeler: Distributed Computation of Quantiles via ADMM, IEEE Signal Processing Letters, vol. 24, no. 5, pp. 619-623, May 2017.
5. P. Bianchi, W. Hachem, F. Iutzeler: A Stochastic Coordinate Descent Primal-Dual Algorithm and Applications to Distributed Optimization, IEEE Transactions on Automatic Control, vol. 61, no. 10, pp. 2947-2957, Oct. 2016.
4. F. Iutzeler, P. Bianchi, P. Ciblat, W. Hachem: Explicit Convergence Rate of a Distributed Alternating Direction Method of Multipliers, IEEE Transactions on Automatic Control, vol. 61, no. 4, pp. 892-904, Apr. 2016.
3. A. Abboud, F. Iutzeler, R. Couillet, M. Debbah, H. Siguerdidjane: Distributed Production-Sharing Optimization and Application to Power Grid Networks, IEEE Transactions on Signal and Information Processing over Networks, vol. 2, no. 1, pp. 16-28, March 2016.
2. F. Iutzeler, P. Ciblat, W. Hachem: Analysis of Sum-Weight-like algorithms for averaging in Wireless Sensor Networks, IEEE Transactions on Signal Processing, vol. 61, no. 11, pp. 2802-2814, June 2013.
1. F. Iutzeler, P. Ciblat, J. Jakubowicz: Analysis of max-consensus algorithms in wireless channels, IEEE Transactions on Signal Processing, vol. 60, no. 11, pp. 6103-6107, November 2012.
International Conferences
Local Conferences
Thesis
F. Iutzeler: Distributed Estimation and Optimization in Asynchronous Networks, Ph.D. Thesis, Telecom ParisTech, defended December 6th, 2013.
ANR JCJC STROLL "Harnessing Structure in Optimization for Large-scale Learning"
The growth and diversification of data collection techniques has led to tremendous changes in machine learning systems. Since the breakthrough of variance-reduced stochastic methods (SAGA, MISO [4, 12]), several research directions in optimization for learning have recently proven able to scale up to current challenges. Among them, let us focus on two promising trends:
 Dimension Reduction. The premise of this topic is to identify pertinent directions in the variable search space and to concentrate most computational efforts on these directions. A successful and typical example of such methods is the screening of variables for the lasso problem; for instance, one can mention the strong rules [23] used in the popular GLMNET R package or, more recently, the scikit-learn compatible package Celer (package url: https://mathurinm.github.io/celer/) [13].
 Distributed Computing. Nowadays, computing clusters have become easily available, for example through Amazon Web Services; also, as handheld devices have become more and more powerful, learning from local computations directly on the devices is now a trend, as initiated by Google's federated learning framework (Google AI blog post: https://ai.googleblog.com/2017/04/federated-learning-collaborative.html) [8].
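To make this trend concrete, here is a minimal federated-averaging-style sketch on toy data (an illustration of the general idea, not the framework of [8]; all names and constants are ours): each client runs a few local gradient steps on its private least-squares data, and a server averages the resulting local models.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n_clients = 3, 10
w_true = np.array([1.0, -1.0, 0.5])

# each client holds a private local dataset (A_i, y_i)
data = []
for _ in range(n_clients):
    A = rng.standard_normal((50, d))
    y = A @ w_true + 0.05 * rng.standard_normal(50)
    data.append((A, y))

w = np.zeros(d)                    # global model kept by the server
for rnd in range(20):              # communication rounds
    updates = []
    for A, y in data:              # each client: a few local gradient steps
        w_loc = w.copy()
        for _ in range(5):
            w_loc -= 0.001 * A.T @ (A @ w_loc - y)
        updates.append(w_loc)
    w = np.mean(updates, axis=0)   # server averages the local models

print(np.linalg.norm(w - w_true))  # distance to the ground-truth model
```

Only the local models travel over the network; the raw data never leave the devices, which is the point of the federated setting.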
The success of both of these topics is partly due to a common denominator: the optimization problem at hand is strongly structured, and this structure is harnessed to produce computationally efficient methods.
People
 Franck Iutzeler (PI), Assistant Prof. at Univ. Grenoble Alpes
 Gilles Bareilles, PhD Student at Univ. Grenoble Alpes with F. Iutzeler and J. Malick
Publications
F. Iutzeler, J. Malick: Nonsmoothness in Machine Learning: specific structure, proximal identification, and applications, to appear in Set-Valued and Variational Analysis, 2020.
G. Bareilles, F. Iutzeler: On the Interplay between Acceleration and Identification for the Proximal Gradient algorithm, Computational Optimization and Applications, vol. 77, no. 2, pp. 351-378, 2020. Code available at https://github.com/GillesBareilles/AccelerationIdentification
D. Grishchenko, F. Iutzeler, J. Malick: Proximal Gradient Methods with Adaptive Subspace Sampling, to appear in Mathematics of Operations Research, 2020.
Context
First, structure may be brought by regularization of the original learning problem, which typically enforces low-dimensional solutions. For instance, L1-norm regularization has been immensely popular since the success of the lasso problem [22], for which dimension reduction methods have proven highly profitable in practice, as mentioned above. In addition, it is well-known that the iterates of the most popular optimization methods actually become sparse in finite time for L1-norm regularized problems, in which case they are said to identify the sparsity pattern; from this point on, the convergence of the algorithm becomes faster in both theory and practice [11]. Combining identification with along-the-way dimension reduction, by adapting the coordinate descent probabilities toward the identified important coordinates, has recently produced very promising results [18]. However, while identification results have been extended to more general classes of regularizers (e.g. 1D Total Variation, trace norm; see [5] and references therein), no numerical dimension reduction methods are available for them yet. Efficiently harnessing more general types of identification to improve the convergence of numerical optimization algorithms is Axis 1 of this project.
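The identification phenomenon can be observed on a small self-contained sketch (toy data and constants of our choosing): the proximal gradient iterates on an L1-regularized least-squares problem become sparse, and their support stops changing after finitely many iterations.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
n, d = 50, 20
A = rng.standard_normal((n, d))
x_true = np.zeros(d)
x_true[:3] = [1.0, -2.0, 1.5]                  # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(n)

lam = 1.0
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L with L = ||A||_2^2

x = np.zeros(d)
supports = []
for k in range(500):
    grad = A.T @ (A @ x - b)
    x = soft_threshold(x - step * grad, step * lam)  # proximal gradient step
    supports.append(frozenset(np.flatnonzero(x)))

# the sparsity pattern has stabilized well before the last iterations
print(supports[-1] == supports[-100])
```

Once the support is identified, the iteration effectively runs on a low-dimensional subspace, which is exactly what dimension reduction methods try to exploit.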
Secondly, distributed problems directly induce structure, as the global system state lives in the product space corresponding to the replication of the master variable at the agents. Direct extensions of optimization methods to this kind of setup suffer from the burden of synchronization, since all local computations have to be gathered before another round can begin. This bottleneck was recently tackled both algorithmically and in terms of analysis from different viewpoints (stochastic fixed point theory [3, 19], direction aggregation [24], iterate aggregation [15]). With the synchrony constraint partially lifted, an important topic is to make these systems more robust and adapted to real-life issues that can arise on a network of handheld devices. The study of resilience to faulty/malicious updates and of adaptation to evolving architectures/data/loss functions corresponds to Axis 2 of this project.
The two kinds of structure presented above may seem very different at first glance, but they actually lead to similar theory and algorithms. A technical piece of evidence for this claim is the following: standard coordinate descent on the proximal gradient algorithm imposes a separable regularizer (see [7]); joining the coordinates is the blocking point, and dealing jointly with messily updated coordinates is also what asynchronous distributed methods are about. This structural link between coordinate descent methods and distributed optimization is the central thread of this project. Exploiting this analogy, harnessing the regularization-brought structure of the problem to reduce the cost of exchanges in distributed architectures is Axis 3 of this project.
Objectives
The targets of this project are divided into three axes.
Axis 1: Faster optimization algorithms using identification. Computationally accelerating optimization methods may be done either by reducing the cost of one iteration or by performing fewer iterations, splitting this axis into two parts.
1a. Reducing the iteration cost may be done by discarding variables or limiting the update to a subset of the coordinates. This proved to be very efficient in practice [13, 18]; however, to fully benefit from these methods, the problem at hand has to have some coordinate structure (i.e. natural sparsity, as brought by the L1-norm). For general regularizers, the question of how to efficiently reduce the iteration cost using more general structures is still open. A solution, which can be linked to sketching [6], is to update the iterate only along some subspace chosen in an adaptive manner by screening identified subspaces.
1b. Performing fewer iterations by incorporating predictive inertial steps has been immensely popular since Nesterov's fast gradient [17] and Beck and Teboulle's FISTA [2]. Efficient methods taking into account the local geometry of the problem have been proposed (see our recent contributions [20, 21] and references therein). However, inertial methods are still structure-blind and can actually slow down identification by making the iterates leave an otherwise stable subspace. We thus aim at developing structure-adapted inertial algorithms.
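The gain from the inertial steps of 1b can be seen on a toy lasso instance; the following sketch (an illustration with data and constants of our choosing, not a method of the project) compares the plain proximal gradient (ISTA) with its inertial variant FISTA [2] over a fixed iteration budget.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
n, d = 100, 40
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part

def F(x):
    # composite objective: least squares + L1 regularization
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def run(inertia, iters=50):
    x = np.zeros(d); y = x; t = 1.0
    for _ in range(iters):
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        if inertia:
            # FISTA extrapolation step
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)
            t = t_new
        else:
            y = x_new
        x = x_new
    return F(x)

f_ista, f_fista = run(inertia=False), run(inertia=True)
print(f"ISTA: {f_ista:.4f}, FISTA: {f_fista:.4f}")
```

With the same per-iteration cost, the inertial variant typically reaches a noticeably lower objective within the budget; the project's point is that this extrapolation is blind to the sparsity structure of the iterates.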
Axis 2: Resilient distributed learning. Tackling massive learning problems over distributed architectures calls for a change in objective formulation, new mathematical tools, and new performance measures. This need stems from the dynamics and heterogeneity of modern distributed learning systems: networks of handheld devices, scattered data servers, etc. In addition, the idea of extracting information globally, at scale, from local contributions naturally links with the problems of data/model privacy and of their representation, to make the whole system less prone to attacks and robust to loosely connected agents. The target of this research axis is to build on our recent progress in distributed optimization [14, 15] to produce robust distributed learning systems.
2a. First, an important issue is to correctly handle the arrival of new information in the system; this leads to the problem of distributed asynchronous online learning and of its robustness to adversarial players.
2b. A second goal is to change the structure of the problem at hand: instead of minimizing the full loss as on a single machine, minimize some distributed loss adapted to the system. A direction to construct such a loss is to build on the connected ideas of i) the geometric median of the local minimizers of the local losses, i.e. the median of means (see e.g. [16] and the recent [10]), and ii) the proximal average ([1] and an application in learning [26]), which can be seen as the function minimized if the agents were to perform full minimization of some regularization of their loss and average the result. A common denominator of these techniques is that they focus on local rather than global minimization; furthermore, the median of means can naturally lead to robust distributed systems.
Axis 3: Communication-efficient distributed methods. In large-scale learning, stochastic optimization algorithms are very popular, but their parallel counterparts (e.g. [9]) are highly iterative and thus require low-latency and high-throughput connections. The distributed systems considered in this project have significantly higher-latency, less reliable, and lower-throughput connections, which rehabilitates batch algorithms and makes communications the practical bottleneck of the learning process [25].
3a. First, as part of the Ph.D. of Dmitry Grishchenko, we work on sparsification techniques for distributed asynchronous optimization. Direct randomized sparsification might end up being excessively slow in terms of minimization of the objective, but adapting the sparsification to the identified structure of the iterates gave very encouraging preliminary results (unpublished, presented at SMAI-MODE 2018 and ISMP 2018). Designing automatic dimension reduction methods to make distributed systems communication-efficient for a large class of regularized learning problems is still an exciting challenge, which can take full advantage of part 1a of the project.
3b. Then, mirroring 2a and 2b, communications in the system can be drastically decreased by shifting from optimization-adapted to learning-adapted communications. Indeed, useful models can be obtained without a high-precision solution of the global risk and with very limited communications. For instance, if all the agents compute the solution of their local risk minimization and send it to a master machine, the master can compute the median-of-means global model and thus obtain robust global information from a single exchange with the agents.
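The one-exchange robust aggregation of 3b can be sketched as follows (a toy illustration with data of our choosing; a coordinate-wise median stands in for the geometric median): each agent fully minimizes its local least-squares risk, and the master aggregates the local models, which resists a few faulty agents where plain averaging does not.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_agents, n_local = 5, 11, 200
w_true = rng.standard_normal(d)

# each agent solves its local least-squares risk exactly (a single round, no iterating)
local_models = []
for a in range(n_agents):
    A = rng.standard_normal((n_local, d))
    y = A @ w_true + 0.1 * rng.standard_normal(n_local)
    if a < 2:  # two faulty/adversarial agents send garbage instead
        local_models.append(100.0 * rng.standard_normal(d))
    else:
        local_models.append(np.linalg.lstsq(A, y, rcond=None)[0])
W = np.array(local_models)

w_mean = W.mean(axis=0)          # plain averaging: wrecked by the outliers
w_median = np.median(W, axis=0)  # coordinate-wise median: robust

print(np.linalg.norm(w_mean - w_true), np.linalg.norm(w_median - w_true))
```

A single communication round suffices: the median-based model stays close to the ground truth while the average is dominated by the two corrupted contributions.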
References
[1] H. H. Bauschke, R. Goebel, Y. Lucet, and X. Wang. The proximal average: basic theory. SIAM Journal on Optimization, 19(2):766–785, 2008.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[3] P. Bianchi, W. Hachem, and F. Iutzeler. A coordinate descent primal-dual algorithm and application to distributed asynchronous optimization. IEEE Transactions on Automatic Control, 61(10):2947–2957, 2016.
[4] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646–1654, 2014.
[5] J. Fadili, J. Malick, and G. Peyré. Sensitivity analysis for mirror-stratifiable convex functions. arXiv:1707.03194, 2017.
[6] R. M. Gower, P. Richtárik, and F. Bach. Stochastic quasi-gradient methods: Variance reduction via Jacobian sketching. arXiv:1805.02632, 2018.
[7] F. Hanzely, K. Mishchenko, and P. Richtárik. SEGA: Variance reduction via gradient sketching. arXiv:1809.03054, 2018.
[8] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon. Federated learning: Strategies for improving communication efficiency. arXiv:1610.05492, 2016.
[9] R. Leblond, F. Pedregosa, and S. Lacoste-Julien. ASAGA: Asynchronous parallel SAGA. In Artificial Intelligence and Statistics, pages 46–54, 2017.
[10] G. Lecué and M. Lerasle. Robust machine learning by median-of-means: theory and practice. arXiv:1711.10306, 2017.
[11] J. Liang, J. Fadili, and G. Peyré. Local linear convergence of forward–backward under partial smoothness. In Advances in Neural Information Processing Systems, pages 1970–1978, 2014.
[12] J. Mairal. Optimization with first-order surrogate functions. In International Conference on Machine Learning, pages 783–791, 2013.
[13] M. Massias, J. Salmon, and A. Gramfort. Celer: a fast solver for the lasso with dual extrapolation. In International Conference on Machine Learning, pages 3321–3330, 2018.
[14] K. Mishchenko, F. Iutzeler, and J. Malick. A distributed flexible delay-tolerant proximal gradient algorithm. arXiv:1806.09429, 2018.
[15] K. Mishchenko, F. Iutzeler, J. Malick, and M.R. Amini. A delay-tolerant proximal-gradient algorithm for distributed learning. In International Conference on Machine Learning, pages 3584–3592, 2018.
[16] A. S. Nemirovsky and D. B. Yudin. Problem complexity and method efficiency in optimization. 1983.
[17] Y. E. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2). In Dokl. Akad. Nauk SSSR, volume 269, pages 543–547, 1983.
[18] J. Nutini, I. Laradji, and M. Schmidt. Let's make block coordinate descent go fast: Faster greedy rules, message-passing, active-set complexity, and superlinear convergence. arXiv:1712.08859, 2017.
[19] T. Sun, R. Hannah, and W. Yin. Asynchronous coordinate descent under more realistic assumptions. In Advances in Neural Information Processing Systems, pages 6183–6191, 2017.
[20] F. Iutzeler and J. M. Hendrickx. A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optimization Methods and Software, pages 1–23, 2017.
[21] F. Iutzeler and J. Malick. On the proximal gradient algorithm with alternated inertia. Journal of Optimization Theory and Applications, 176(3):688–710, 2018.
[22] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
[23] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani. Strong rules for discarding predictors in lassotype problems. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(2):245–266, 2012.
[24] N. D. Vanli, M. Gurbuzbalaban, and A. Ozdaglar. Global convergence rate of proximal incremental aggregated gradient methods. SIAM Journal on Optimization, 28(2):1282–1300, 2018.
[25] J. Wangni, J. Wang, J. Liu, and T. Zhang. Gradient sparsification for communication-efficient distributed optimization. arXiv:1710.09854, 2017.
[26] Y.L. Yu. Better approximation and faster algorithm using the proximal average. In Advances in neural information processing systems, pages 458–466, 2013.
2020/2021
 Introduction à la Recherche Operationelle at Universite Grenoble Alpes  Master 2 SSD (FR/EN)
LP, QP, Reformulations, etc. Labs: [Github]. If you cannot install cvxpy, one solution is to use the UGA JupyterHub:
 Go to https://jupyterhub.uga.fr
 Click "New", then "Terminal"
 Enter the following commands one after the other:
cd notebooks
pip install --user cvxpy
git clone https://github.com/iutzeler/intro_recherche_operationelle.git
 Refresher in Matrix Analysis and Numerical Optimization at Universite Grenoble Alpes  Master 2 MSIAM/MOSIG/etc. (EN)
Reminder in Matrix analysis and optimization.
 Statistiques pour la biologie at Universite Grenoble Alpes  L2 BIO (FR)
Estimators, Tests, etc.
2019/2020
 Mathematics of Operations Research at Universite Grenoble Alpes  Master 1 MSIAM (EN)
Spectral Graph Theory, Game Theory, Optimal Transport, etc.
 Numerical Optimization at Universite Grenoble Alpes  Master 1 MSIAM (EN)
Optimization 101
 Introduction à la Recherche Operationelle at Universite Grenoble Alpes  Master 2 SSD (FR/EN)
LP, QP, Reformulations, etc.
 Python for Data Sciences at Universite Grenoble Alpes  Master 1 SSD (FR/EN)
Python, NumPy, Scikit-learn, etc. Notebooks: on Github
 Correction to the Image SVD exercise [Python]
 Correction of the least squares exercise [Python]
 Correction to the 3-2 bots and planets exercises [Notebook]
 Correction to the 4-2 supervised ML exercise [Notebook]
 Projects: [List]
 Notebooks: on Github
 Refresher in Matrix Analysis and Numerical Optimization at Universite Grenoble Alpes  Master 2 MSIAM/MOSIG/etc. (EN)
Reminder in Matrix analysis and optimization. Syllabus and Exercises: [PDF]
 Notebooks: on GitHub
 Chap 1: Python and NumPy Basics (look at this part before the practical sessions)
 Chap 2-1: Matrix Part (Wed. AM) [Possible solution (HTML)]
 Chap 2-2: Optimization Part (Fri. AM)
 Statistiques pour la biologie at Universite Grenoble Alpes  L2 BIO (FR)
Estimators, Tests, etc.
2018/2019
 Mathematics of Operations Research at Universite Grenoble Alpes  Master 1 MSIAM (EN)
Spectral Graph Theory, Game Theory, Optimal Transport, etc. Mini Projects [HERE]
 Numerical Optimization at Universite Grenoble Alpes  Master 1 MSIAM (EN)
Optimization 101. Course by L. Desbat
 Some [Topics and references]
 Tutorials:
 Labs: on GitHub
 Evaluation on Thu. 11th: 9:45-11:15 Lecture on Stochastic methods and Variance Reduction; 11:30-13:00 Graded Lab (to hand in, in groups of 1 to 3, before Apr. 14th).
 Mathématiques pour l'ingénieur at Universite Grenoble Alpes  L2 SPI (FR)
Linear Algebra, Differential Equations, etc.
 Convex and Distributed Optimization at Universite Grenoble Alpes  Master 2 MSIAM/MOSIG (EN)
Incremental and Stochastic Optimization for Learning, Spark, Distributed Optimization. See the course website.
 Refresher in Matrix Analysis and Numerical Optimization at Universite Grenoble Alpes  Master 2 MSIAM/MOSIG/etc. (EN)
Reminder in Matrix analysis and optimization. Syllabus and Exercises: [PDF]
 Notebooks: on GitHub
 Chap 1: Python and NumPy Basics (look at this part before the practical sessions)
 Chap 2-1: Matrix Part (Wed. AM) [Possible solution (HTML)]
 Chap 2-2: Optimization Part (Fri. AM)
 Python for Data Sciences at Universite Grenoble Alpes  Master 1 and 2 SSD (FR/EN)
Python, NumPy, Scikit-learn, etc. Notebooks: on GitHub
 Correction to the Image SVD exercise [Python]
 Correction of the least squares exercise [Python]
 Correction to the 3-2 bots and planets exercises [Notebook]
 Correction to the 4-2 supervised ML exercise [Notebook]
 Projects: [List]
 Notebooks: on Github
 Statistiques pour la biologie at Universite Grenoble Alpes  L2 BIO (FR)
Estimators, Tests, etc.
2017/2018
 Numerical Optimization at Universite Grenoble Alpes  Master 1 MSIAM (EN)
Optimization 101 Course by L. Desbat
 Some [Topics and references]
 Tutorials:
 Tuto 1: Differentiation, Convexity, Gradient algorithm with rates [Tuto1 PDF]
 Tuto 3: Proximal operations, proximal gradient algorithm [Tuto3 PDF]
 Tuto 4: Linear and Quadratic Problems and Reformulation [Tuto4 PDF]
 Labs:
 Lab 1: Structure of an Optimization program, Gradient algorithms [Lab1 Notebooks] v2.1 [Common coding mistakes]
 Lab 2: Descent algorithms [Lab2 Notebooks] v1.0
 Lab 3: Performance of Optimization algorithms on learning problems [Lab3 Notebooks] v2.0
 Lab 4: The Proximal Gradient algorithm and sparsity [Lab4 Notebooks] v1.0
 Lab 5: Linear and Quadratic Programs [Lab5 Notebooks] v1.1 [HTML Solution]
 Lab 6: [1. ADMM Notebooks] or [2. VRSG]
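Lab 4's proximal gradient method boils down to a gradient step followed by soft-thresholding. A minimal sketch on the lasso (illustrative names and data, not the lab's notebook):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, n_iter=300):
    """ISTA for min_x 0.5 * ||A @ x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)   # prox of (lam/L) * ||.||_1
    return x
```

The thresholding zeroes out small coordinates, which is exactly how the l1 term produces sparse iterates.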
 Optimisation Numérique at Universite Grenoble Alpes  ENSIMAG 2A (FR)
Numerical Optimization Course by J. Malick
 Material and Information:
 TP1: Programmation Linéaire et Quadratique [Notebook]
 TP2: Algorithmes de descente de gradient et (quasi) Newton [Notebooks]
 TP3: Algorithmes Proximaux [TP3 Notebooks]
 TP4: Faisceaux [TP4 HTML]
 Introduction à la Recherche Opérationnelle at Universite Grenoble Alpes  Master 1 SSD (FR)
LP, QP, CVX, Mosek,... in R
 Convex and Distributed Optimization at Universite Grenoble Alpes  Master 2 MSIAM/MOSIG(EN)
Incremental and Stochastic Optimization for Learning, Spark, Distributed Optimization. See the course website.
 Introduction to Python for Data Sciences at Universite Grenoble Alpes  Master 2 SSD (FR/EN)
Python, NumPy, Scikit-learn, etc. Notebooks: on GitHub
 Correction to the 3-2 bots and planets exercises [Notebook]
 Projects: [List]
 Notebooks: on Github
 Refresher in Matrix Analysis and Numerical Optimization at Universite Grenoble Alpes  Master 2 MSIAM/MOSIG/etc. (EN)
Reminder in Matrix analysis and optimization.
2016/2017
 Optimisation Numérique at Universite Grenoble Alpes  ENSIMAG 2A (FR)
Numerical Optimization Course by J. Malick
 Material and Information:
 TP1: Algorithmes de descente de gradient et (quasi) Newton [TP1 Notebooks]
 TP2: Programmation Linéaire et Quadratique [TP2 Notebooks]
 TP3: Algorithmes Proximaux [TP3 Notebooks]
 Numerical Optimization at Universite Grenoble Alpes  Master 1 MSIAM (EN)
Optimization 101 Course by L. Desbat
 Tutorials:
 Tuto 1: Differentiation, Convexity, Gradient algorithm with rates [Tuto1 PDF] [Tuto1+ PDF]
 Tuto 4: Proximal operations, proximal gradient algorithm [Tuto4 PDF]
 Tuto 5: Linear and Quadratic Problems and Reformulation [Tuto5 PDF]
 Labs:
 Lab 1: Structure of an Optimization program, Gradient algorithms [Lab1 Notebooks] v2.1 [Common coding mistakes]
 Lab 2: Descent algorithms [Lab2 Notebooks] v1.0
 Lab 3: Performance of Optimization algorithms on learning problems [Lab3 Notebooks] v1.0
 Lab 6: The Proximal Gradient algorithm and sparsity [Lab6 Notebooks] v1.0
 Lab 7: Linear and Quadratic Programs [Lab7 Notebooks] v1.0
 Lab 8: ADMM [Lab8 Notebooks] v1.0
 Introduction à la Recherche Opérationnelle at Universite Grenoble Alpes  Master 1 SSD (FR)
Operation Research Course by A. Juditsky
 Material and Information:
 Intro: Diet planning  Menu with calorie info, vitamin info, ideal diet file
Try to minimize the number of calories while staying equal (or close) to the ideal diet; see the Jupyter notebook and HTML version.
 TP1: PDF  Example from the Candès-Tao paper
 TP2: PDF
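The diet-planning intro above is a textbook linear program. With made-up numbers (the real menu and ideal-diet data are in the linked files, and the course itself works in R), it can be solved in a few lines, e.g. with scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 foods, 2 vitamins (stand-ins for the linked files).
cal = np.array([250.0, 130.0, 400.0])      # calories per unit of each food
vit = np.array([[2.0, 5.0, 1.0],           # vitamin content per unit
                [3.0, 1.0, 4.0]])
need = np.array([10.0, 12.0])              # minimum required vitamin intake

# Minimize total calories s.t. vit @ x >= need and x >= 0.
# linprog expects A_ub @ x <= b_ub, so the >= constraints are negated.
res = linprog(c=cal, A_ub=-vit, b_ub=-need, bounds=[(0, None)] * 3)
```

Staying close to an ideal diet, rather than just meeting minima, would add one extra constraint or penalty per nutrient and keep the problem linear.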
 Convex and Distributed Optimization at Universite Grenoble Alpes  Master 2 MSIAM/MOSIG (EN)
Optimization + Spark for Data Science. See the course webpage.
 Refresher course on Numerical Linear Algebra and Optimization at Universite Grenoble Alpes  Master 2 MSIAM/MOSIG/ORCO/MSCIT (EN)
 Introductory presentation  Tutorial and Labs: Numerical Matrix Analysis  Numerical Optimization
 Timetable:
Wed. 21st: 9:45–12:45 "Amphi H", 14:00–17:00 "Amphi H"
Thu. 22nd: 9:45–12:45 "E301", 14:00–17:00 "Amphi D"
Fri. 23rd: 9:45–12:45 "Amphi H", 14:00–17:00 "E103"
 MAT306/MAT405  Mathématiques Approfondies pour l'ingénieur at Universite Grenoble Alpes  L2 GC (FR)
Complements of Maths for Engineering
2015/2016
 MAP35G  Mathématiques Appliquées at Université Joseph Fourier.
Applied Maths
 Travaux dirigés (tutorials):
 Travaux pratiques (labs):
 MAT236  Mathématiques Approfondies pour l'ingénieur at Université Joseph Fourier.
Complements of Maths for Engineering
Material and Information: See the page of Arnaud Chauvière
 Optimisation Numérique at ENSIMAG.
Numerical Optimization
Material and Information: See the page of Jérôme Malick
 MAT246  Mathématiques Approfondies pour l'ingénieur at Université Joseph Fourier.
Complements of Maths for Engineering
Material and Information: See the page of Arnaud Chauvière
 MAT121  Bases de l'analyse et lien avec l'algèbre at Université Joseph Fourier.
Basics of Calculus and links with Algebra
Material and Information:
2014/2015
 Linear Automatic Control at Université Catholique de Louvain.
2013/2014
 Information Theory at ESIPE  Université Paris-Est
 Lab of Advanced Optimization for Machine Learning at Telecom ParisTech
2012/2013
 Analysis at Université Paris-Est
2011/2012
 Advanced Digital Communications at Telecom ParisTech.
2010/2011
 Advanced Digital Communications at Telecom ParisTech.
Contact
The fastest (and most reliable) way to reach me is by email at firstname.lastname a_ univ-grenoble-alpes.fr. My office address is:
Franck IUTZELER
Laboratoire Jean Kuntzmann
Batiment IMAG  Bureau 153  Domaine Universitaire
38400 Saint-Martin-d'Hères
FRANCE
Short Vita
since 2015  Asst. Prof. (Maître de conférences) at Laboratoire Jean Kuntzmann, Université Grenoble Alpes

2015  Postdoc at Université Catholique de Louvain
2014  Postdoc at Supélec
2010–2013  Ph.D. student at Telecom ParisTech
2007–2010  Engineering student at Telecom ParisTech
Full Vita
Full CV : Français  Google Scholar Profile
Hobbies
 Trail Running
 Literature (and the like)