Léonard Torossian is one of the #DigitAg co-funded PhDs
I am interested in:
- Support vector machines
- Multi-task learning
- Quantile regression
- Gaussian processes
- Stochastic bandit problems
- Sequential optimization
- Computational neuroscience
My PhD thesis lies at the intersection of machine learning, statistics and computer experiments:
Metamodeling and robust optimization – application to ideotype design under climatic uncertainty
- Start Date: November 2016
- University: Université Toulouse III Paul Sabatier
- PhD School: MITT (Mathématiques, Informatique, Télécommunications), Toulouse
- Field(s): Mathematics (statistics and optimization)
- Doctoral Thesis Advisors: Robert Faivre (INRA MIAT), Aurélien Garivier (Université Toulouse III Paul Sabatier, Institut de Mathématiques de Toulouse)
- Co-supervisor: Victor Picheny (INRA MIAT)
- Funding: INRA – Région Occitanie
- #DigitAg: Labelled PhD – Axis 6
Contact: leonard.torossian [AT] inra.fr – Tel.: 0622330387
Communications / Publications
- Léonard Torossian (2018) A review on quantile regression for stochastic computer experiments. Rencontres du Réseau Mexico (Inria, Irstea, Inra, Labex COTE), Bordeaux (FRA), 12–13 November 2018
- Léonard Torossian, Aurélien Garivier, Victor Picheny (2019) X-Armed Bandits: Optimizing Quantiles and Other Risks.
arXiv preprint: 1904.08205 | HAL: 02101647
We propose and analyze StoROO, an algorithm derived from StoOO for risk optimization on stochastic black-box functions. Motivated by risk-averse decision-making fields such as agriculture, medicine, biology and finance, we focus not on the mean payoff but on generic functionals of the return distribution, such as quantiles. We provide a generic regret analysis of StoROO. Inspired by the bandit literature and black-box mean optimizers, StoROO relies on the ability to construct confidence intervals for the targeted functional from random-size samples. We explain in detail how to construct them for quantiles, providing tight bounds based on the Kullback-Leibler divergence. The interest of these tight bounds is highlighted by numerical experiments that show a dramatic improvement over standard approaches.
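The quantile confidence intervals mentioned in the abstract can be illustrated with a standard order-statistic construction: a Chernoff-type (Kullback-Leibler) bound on a Bernoulli proportion is inverted to obtain confidence bounds on the empirical CDF, which translate into bounds on the quantile. The sketch below shows that general idea only; it is not the StoROO implementation, and the function names and index adjustments are illustrative assumptions, not taken from the paper.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_upper(p_hat, bound, tol=1e-9):
    """Largest q >= p_hat with kl(p_hat, q) <= bound (bisection)."""
    lo, hi = p_hat, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo

def kl_lower(p_hat, bound, tol=1e-9):
    """Smallest q <= p_hat with kl(p_hat, q) <= bound (bisection)."""
    lo, hi = 0.0, p_hat
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) <= bound:
            hi = mid
        else:
            lo = mid
    return hi

def quantile_ci(samples, tau, delta):
    """Illustrative two-sided confidence interval for the tau-quantile.

    Inverts the KL (Chernoff) bound on the empirical CDF level tau,
    then reads off the corresponding order statistics.
    """
    n = len(samples)
    xs = sorted(samples)
    bound = math.log(1 / delta) / n
    tau_lo = kl_lower(tau, bound)  # deflated level for the lower bound
    tau_hi = kl_upper(tau, bound)  # inflated level for the upper bound
    lo_idx = max(int(math.floor(tau_lo * n)) - 1, 0)
    hi_idx = min(int(math.ceil(tau_hi * n)), n - 1)
    return xs[lo_idx], xs[hi_idx]
```

As the sample size n grows, the adjusted levels `tau_lo` and `tau_hi` tighten around tau at rate roughly sqrt(log(1/delta)/n), so the interval shrinks toward the true quantile.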
- Léonard Torossian, Victor Picheny, Robert Faivre, Aurélien Garivier (2019). A Review on Quantile Regression for Stochastic Computer Experiments.
arXiv preprint: 1901.07874 | HAL: 02010735
We report on an empirical study of the main strategies for conditional quantile estimation in the context of stochastic computer experiments. To ensure adequate diversity, six metamodels are presented, divided into three categories: those based on order statistics, functional approaches, and Bayesian-inspired methods. The metamodels are tested on several problems characterized by the size of the training set, the input dimension, the quantile order and the value of the probability density function in the neighborhood of the quantile. The metamodels show clear contrasts across our set of 480 experiments, enabling several patterns to be extracted. Based on our results, guidelines are proposed to help users select the best method for a given problem.
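A building block common to quantile estimation methods is the pinball (check) loss: minimizing it at level tau targets the tau-quantile rather than the mean. The sketch below illustrates this generic fact only, not any specific metamodel from the review: the constant that minimizes the empirical pinball loss is (up to grid resolution) the empirical tau-quantile.

```python
import numpy as np

def pinball_loss(y, pred, tau):
    """Average pinball (check) loss at quantile level tau."""
    diff = y - pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Minimizing the pinball loss over a constant predictor recovers
# the empirical tau-quantile of the sample.
rng = np.random.default_rng(0)
y = rng.normal(size=2000)
tau = 0.9
grid = np.linspace(-3.0, 3.0, 1201)  # candidate constants, step 0.005
best = grid[int(np.argmin([pinball_loss(y, c, tau) for c in grid]))]
```

Replacing the constant with a parametric or nonparametric model of the input turns the same loss into a conditional quantile estimator.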