Denis Belomestny
- Leading Research Fellow: Faculty of Computer Science / AI and Digital Science Institute / International Laboratory of Stochastic Algorithms and High-Dimensional Inference
- Denis Belomestny has been at HSE University since 2014.
Education and Degrees
- 2002: Candidate of Sciences* (PhD), Lomonosov Moscow State University
- 1998: Degree, Lomonosov Moscow State University
According to the International Standard Classification of Education (ISCED) 2011, the Candidate of Sciences belongs to ISCED level 8 - "doctoral or equivalent" - together with the PhD, DPhil, D.Litt, D.Sc, LL.D, Doctorate and similar degrees. The Candidate of Sciences qualifies its holders for the position of Associate Professor.
Publications (60)
- Preprint Belomestny D., Morozova E., Panov V. Decompounding under general mixing distributions / Cornell University. Series arXiv.org "math.ST". 2024. No. 2405.05419.
- Article Puchkin N., Samsonov S., Belomestny D., Moulines E., Naumov A. Rates of convergence for density estimation with generative adversarial networks // Journal of Machine Learning Research. 2024. Vol. 25. No. 29. P. 1-47.
- Article Belomestny D., Goldman A., Naumov A., Samsonov S. Theoretical guarantees for neural control variates in MCMC // Mathematics and Computers in Simulation. 2024. Vol. 220. P. 382-405. doi
- Chapter Tiapkin D., Belomestny D., Calandriello D., Moulines E., Munos R., Naumov A., Perrault P., Tang Y., Valko M., Menard P. Fast Rates for Maximum Entropy Exploration, in: Proceedings of the 40th International Conference on Machine Learning: Volume 202: International Conference on Machine Learning, 23-29 July 2023, Honolulu, Hawaii, USA Vol. 202: International Conference on Machine Learning, 23-29 July 2023, Honolulu, Hawaii, USA. PMLR, 2023. P. 34161-34221.
- Book Belomestny D., Ulyanov V. V., Butucea C., Reiss M., Mammen E. Foundations of Modern Statistics: Festschrift in Honor of Vladimir Spokoiny, Berlin, Germany, November 6–8, 2019, Moscow, Russia, November 30, 2019 / Ed. by D. Belomestny. Vol. 425. Springer Publishing Company, 2023. doi
- Chapter Tiapkin D., Belomestny D., Calandriello D., Moulines E., Munos R., Naumov A., Perrault P., Valko M., Menard P. Model-free Posterior Sampling via Learning Rate Randomization, in: Advances in Neural Information Processing Systems 36 (NeurIPS 2023). Curran Associates, Inc., 2023. P. 73719-73774.
- Article Belomestny D., Pilipauskaite V., Podolskij M. Semiparametric estimation of McKean–Vlasov SDEs // Annales de l'institut Henri Poincare (B) Probability and Statistics. 2023. Vol. 59. No. 1. P. 79-96. doi
- Preprint Tiapkin D., Belomestny D., Naumov A., Valko M., Menard P. Sharp Deviations Bounds for Dirichlet Weighted Sums with Application to Analysis of Bayesian Algorithms / Cornell University. Series arXiv.org "math". 2023. No. 2304.03056.
- Article Belomestny D., Naumov A., Puchkin N., Samsonov S. Simultaneous approximation of a smooth function and its derivatives by deep neural networks with piecewise-polynomial activations // Neural Networks. 2023. Vol. 161. P. 242-253. doi
- Article Belomestny D., Gugushvili S., Schauer M., Spreij P. Weak solutions to gamma-driven stochastic differential equations // Indagationes Mathematicae. 2023. Vol. 34. No. 4. P. 820-829. doi
- Article Belomestny Denis, Iosipoi L., Paris Q., Zhivotovskiy N. Empirical Variance Minimization with Applications in Variance Reduction and Optimal Control // Bernoulli: a journal of mathematical statistics and probability. 2022. Vol. 28. No. 2. P. 1382-1407. doi
- Chapter Tiapkin D., Belomestny D., Moulines E., Naumov A., Samsonov S., Tang Y., Valko M., Menard P. From Dirichlet to Rubin: Optimistic Exploration in RL without Bonuses, in: Proceedings of the 39th International Conference on Machine Learning Vol. 162. PMLR, 2022. P. 21380-21431.
- Article Belomestny D., Gugushvili S., Schauer M., Spreij P. Nonparametric Bayesian volatility estimation for gamma-driven stochastic differential equations // Bernoulli: a journal of mathematical statistics and probability. 2022. Vol. 28. No. 4. P. 2151-2180. doi
- Chapter Tiapkin D., Belomestny D., Calandriello D., Éric Moulines, Munos R., Naumov A., Rowland M., Valko M., Menard P. Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees, in: Thirty-Sixth Conference on Neural Information Processing Systems : NeurIPS 2022. Curran Associates, Inc., 2022. P. 10737-10751.
- Preprint Belomestny D., Morozova E., Panov V. Statistical inference for scale mixture models via Mellin transform approach / Cornell University. Series arXiv.org "stat.ME". 2022. No. 2211.01799.
- Preprint Belomestny D., Kaledin M., Golubev A. Variance Reduction for Policy-Gradient Methods via Empirical Variance Minimization. 2022. doi
- Article Belomestny D., Moulines E., Samsonov S. Variance reduction for additive functionals of Markov chains via martingale representations // Statistics and Computing. 2022. Vol. 32. No. 1. Article 16. doi
- Article Masyutin A. A., Savchenko A. V., Naumov A. A., Samsonov S. V., Tiapkin D. N., Belomestny D. V., Morozova D. S., Badina D. A. On the Development of AI-Based Applied Solutions for Technological Security // Doklady Rossiiskoi Akademii Nauk. Matematika, Informatika, Protsessy Upravleniya (formerly Doklady Akademii Nauk. Matematika). 2022. Vol. 508. No. 106. P. 23-27. doi
- Article Belomestny D., Goldenshluger A. Density deconvolution under general assumptions on the distribution of measurement errors // Annals of Statistics. 2021. Vol. 49. No. 2. P. 615-649. doi
- Article Belomestny D., Iosipoi L. Fourier transform MCMC, heavy-tailed distributions, and geometric ergodicity // Mathematics and Computers in Simulation. 2021. No. 181. P. 351-363. doi
- Article Bayer C., Belomestny D., Hager P., Paolo P., Schoenmakers J. Randomized Optimal Stopping Algorithms and Their Convergence Analysis // SIAM Journal on Financial Mathematics. 2021. Vol. 12. No. 3. P. 1201-1225. doi
- Preprint Belomestny D., Levin I., Moulines E., Naumov A., Samsonov S., Zorina V. UVIP: Model-Free Approach to Evaluate Reinforcement Learning Algorithms / Cornell University. Series arXiv.org "math". 2021. No. 2105.02135.
- Article Belomestny D., Iosipoi L., Moulines E., Naumov A., Samsonov S. Variance reduction for dependent sequences with applications to Stochastic Gradient MCMC // SIAM-ASA Journal on Uncertainty Quantification. 2021. Vol. 9. No. 2. P. 507-535. doi
- Article Belomestny D., Goldenshluger A. Nonparametric density estimation from observations with multiplicative measurement errors // Annales de l'institut Henri Poincare (B) Probability and Statistics. 2020. Vol. 56. No. 1. P. 36-67. doi
- Article Belomestny D., Schoenmakers J. Optimal Stopping of McKean-Vlasov Diffusions via Regression on Particle Systems // SIAM Journal on Control and Optimization. 2020. Vol. 58. No. 1. P. 529-550. doi
- Article Belomestny D., Schoenmakers J., Spokoiny V., Zharkynbay B. Optimal stopping via reinforced regression // Communications in Mathematical Sciences. 2020. Vol. 18. No. 1. P. 109-121. doi
- Article Belomestny D., Kaledin M., Schoenmakers J. Semitractability of optimal stopping problems via a weighted stochastic mesh algorithm // Mathematical Finance. 2020. Vol. 30. No. 4. P. 1591-1616. doi
- Article Bayer C., Belomestny D., Redmann M., Riedel S., Schoenmakers J. Solving linear parabolic rough partial differential equations // Journal of Mathematical Analysis and Applications. 2020. Vol. 490. No. 1. Article 124236. doi
- Article Belomestny D., Moulines E., Iosipoi L., Naumov A., Samsonov S. Variance reduction for Markov chains with application to MCMC // Statistics and Computing. 2020. No. 30. P. 973-997. doi
- Article Belomestny D., Panov V., Woerner J. Low-frequency estimation of continuous-time moving average Lévy processes // Bernoulli: a journal of mathematical statistics and probability. 2019. Vol. 25. No. 2. P. 902-931. doi
- Article Belomestny D., Kraetschmer V., Hübner T., Nolte S. Minimax theorems for American options without time-consistency // Finance and Stochastics. 2019. Vol. 23. P. 209-238. doi
- Article Belomestny D., Gugushvili S., Schauer M., Spreij P. Nonparametric Bayesian inference for gamma-type Lévy subordinators // Communications in Mathematical Sciences. 2019. Vol. 17. No. 3. P. 781-816. doi
- Article Belomestny D., Comte F., Genon-Catalot V. Sobolev-Hermite versus Sobolev nonparametric density estimation on R // Annals of the Institute of Statistical Mathematics. 2019. Vol. 71. No. 1. P. 29-62. doi
- Article Belomestny D., Trabs M., Tsybakov A. Sparse covariance matrix estimation in high-dimensional deconvolution // Bernoulli: a journal of mathematical statistics and probability. 2019. Vol. 25. No. 3. P. 1901-1938. doi
- Article Belomestny D., Orlova T., Panov V. Statistical inference for moving-average Lévy-driven processes: Fourier-based approach // Statistica Neerlandica. 2019. Vol. 1. P. 100-117. doi
- Article Беломестный Д. В., Иосипой Л. С. Об оценке плотности распределения с помощью ряда Фурье // Управление большими системами: сборник трудов. 2019. № 82. С. 28-43. doi
- Book Belomestny D., Schoenmakers J. Advanced Simulation-Based Methods for Optimal Stopping and Control: With Applications in Finance. Palgrave Macmillan, 2018. doi
- Article Belomestny D., Trabs M. Low-rank diffusion matrix estimation for high-dimensional time-changed Levy processes // Annales de l'institut Henri Poincare (B) Probability and Statistics. 2018. Vol. 54. No. 3. P. 1583-1621. doi
- Article Belomestny D., Schoenmakers J. Projected Particle Methods for Solving McKean-Vlasov Stochastic Differential Equations // SIAM Journal on Numerical Analysis. 2018. Vol. 56. No. 6. P. 3169-3195. doi
- Article Belomestny D., Urusov M., Häfner S. Regression-based complexity reduction of the nested Monte Carlo methods // SIAM Journal on Financial Mathematics. 2018. Vol. 9. No. 2. P. 665-689. doi
- Article Belomestny D., Panov V. Semiparametric estimation in the normal variance-mean mixture model // Statistics. 2018. Vol. 52. No. 3. P. 571-589. doi
- Article Belomestny D., Häfner S., Urusov M. Stratified regression-based variance reduction approach for weak approximation schemes // Mathematics and Computers in Simulation. 2018. Vol. 143. P. 125-137. doi
- Article Belomestny D., Iosipoi L., Zhivotovskiy N. Variance Reduction in Monte Carlo Estimators via Empirical Variance Minimization // Doklady Mathematics. 2018. Vol. 98. No. 2. P. 494-497. doi
- Article Belomestny D., Kraetschmer V. Addendum to "Optimal Stopping Under Model Uncertainty: Randomized Stopping Times Approach" // Annals of Applied Probability. 2017. Vol. 27. No. 2. P. 1289-1293. doi
- Article Belomestny D., Mai H., Schoenmakers J. Generalized Post–Widder inversion formula with application to statistics // Journal of Mathematical Analysis and Applications. 2017. No. 455. P. 89-104. doi
- Chapter Belomestny D., Häfner S., Urusov M. Regression-Based Variance Reduction Approach for Strong Approximation Schemes, in: Modern problems of stochastic analysis and statistics - Selected contributions in honor of Valentin Konakov / Ed. by V. Panov. Heidelberg : Springer, 2017. doi P. 131-178.
- Preprint Belomestny D., Panov V. Semiparametric estimation in the normal variance-mean mixture model / Cornell University. Series ArXiv.org "Stat". 2017. No. 1705.07578.
- Article Belomestny D., Krymova E., Haerdle W. Sieve Estimation of the Minimal Entropy Martingale Marginal Density with Application to Pricing Kernel Estimation // International Journal of Theoretical and Applied Finance. 2017. Vol. 20. No. 6. P. 1750041-1-1750041-21. doi
- Preprint Belomestny D., Orlova T., Panov V. Statistical inference for moving-average Lévy-driven processes: Fourier-based approach / Cornell University. Series arXiv "stat". 2017. No. 1702.02794.
- Preprint Belomestny D., Panov V., Woerner J. Low frequency estimation of continuous-time moving average Lévy processes / Cornell University. Series arXiv "math". 2016. No. 1607.00896.
- Article Belomestny D., Krätschmer V. Optimal stopping under model uncertainty: Randomized stopping times approach // Annals of Applied Probability. 2016. Vol. 26. No. 2. P. 1260-1295.
- Article Belomestny D., Schoenmakers J. Statistical inference for time-changed Lévy processes via Mellin transform approach // Stochastic Processes and their Applications. 2016. Vol. 126. No. 7. P. 2092-2122. doi
- Article Belomestny D., Joshi M., Schoenmakers J. Addendum to: Multilevel dual approach for pricing American style derivatives // Finance and Stochastics. 2015. Vol. 19. No. 3. P. 681-684. doi
- Article Belomestny D., Prokhorov A. Stability of characterization of the independence of random variables by the independence of linear statistics. // Theory of Probability and Its Applications. 2015. Vol. 59. No. 4. P. 179-190.
- Article Belomestny D., Schoenmakers J. Statistical Skorohod embedding problem: Optimality and asymptotic normality // Statistics and Probability Letters. 2015. Vol. 104. P. 169-180. doi
- Article Belomestny D., Panov V. Statistical inference for generalized Ornstein-Uhlenbeck processes // Electronic journal of statistics. 2015. Vol. 9. No. 2. P. 1974-2006. doi
- Chapter Belomestny D., Reiss M. Estimation and Calibration of Lévy Models via Fourier Methods, in: Lévy Matters IV. Estimation for Discretely Observed Lévy Processes. Vol. 2128: Lévy Matters IV. Heidelberg : Springer, 2014. P. 1-76.
- Book Belomestny D., Comte F., Genon-Catalot V., Masuda H., Reiss M. Lévy Matters IV. Estimation for Discretely Observed Lévy Processes. Vol. 2128: Lévy Matters IV. Heidelberg : Springer, 2014.
- Article Belomestny D., Panov V. Abelian theorems for stochastic volatility models with application to the estimation of jump activity // Stochastic Processes and their Applications. 2013. Vol. 123. No. 1. P. 15-44.
- Article Belomestny D., Panov V. Estimation of the activity of jumps in time-changed Lévy models // Electronic journal of statistics. 2013. Vol. 7. P. 2970-3003.
Conferences
- 2015: 10th IMACS Seminar on Monte Carlo Methods (Linz). Presentation: Multilevel Monte Carlo for weak approximation schemes
- Workshop on "Nonparametric and high-dimensional statistics" (Heidelberg). Presentation: Low-rank diffusion covariance matrix estimation under presence of jumps
Employment history
1998 - 2002 Moscow State University
2002 - 2003 Institute for Applied Mathematics (Bonn)
2003 - 2011 Weierstrass Institute for Applied Analysis and Stochastics (Berlin)
2011 - present University of Duisburg-Essen
‘Every Article on NeurIPS Is Considered a Significant Result’
Staff members of the HSE Faculty of Computer Science will present 12 of their works at the 37th Conference and Workshop on Neural Information Processing Systems (NeurIPS), one of the most significant events in the field of artificial intelligence and machine learning. This year it will be held on December 10–16 in New Orleans (USA).
SAMPLE Conference Takes Place
On October 26-30, the Statistics, Artificial Intelligence, Machine Learning, Probability and Learning Theory Event (SAMPLE) conference took place in Gelendzhik, Russia.
Faculty of Computer Science and Skoltech Host Third Statistical Learning Theory Olympiad
HSE University's Faculty of Computer Science and Skoltech have organised the Statistical Learning Theory Olympiad for the third time. The Olympiad's main award is admission to the HSE University and Skoltech joint master's programme.
First Cohort Graduates from Master’s Programme in Statistical Learning Theory
The Master's Programme in Statistical Learning Theory was launched in 2017. It is run jointly with the Skolkovo Institute of Science and Technology (Skoltech). The programme trains future scientists to effectively carry out fundamental research and work on new challenging problems in statistical learning theory, one of the most promising fields of science. Yury Kemaev and Maxim Kaledin, from the first cohort of programme graduates, sat down with HSE News Service to talk about their studies and plans for the future.
HDI Lab staff attend International Vilnius Conference on Probability Theory and Mathematical Statistics 2018
The 12th International Vilnius Conference on Probability Theory and Mathematical Statistics and the 2018 IMS Annual Meeting on Probability and Statistics took place in Vilnius, Lithuania, on July 2-6. This is one of the world's leading conferences in modern probability theory and mathematical statistics and has been held every four years since 1973. This year over 200 works were presented at the event, which drew 500 participants from all over the globe.
Structural Learning Seminar Summer Meeting
On 19 July at 11 am, an extraordinary meeting of the Structural Learning Seminar was held at the Faculty of Computer Science.
Laboratory Researchers Received Russian Science Foundation Grant
A team of researchers from the HSE International Laboratory of Stochastic Algorithms and High-Dimensional Inference was announced as a winner of the Russian Science Foundation grant competition supporting fundamental and exploratory research by individual scientific groups. The team was awarded a three-year grant for the project "Analysis of high dimensional random objects with applications to large scale data processing" (RSF No. 18-11-00132).
HSE Lends Its Support to the Very First Conference in ‘New Frontiers in High-Dimensional Probability and Statistics’
On February 23 and 24, the Institute for Information Transmission Problems of the Russian Academy of Sciences hosted the first international mini-conference entitled ‘New frontiers in high-dimensional probability and statistics’. The event was attended by Russian and international researchers in the field of statistical methods of analysis of multidimensional data and modern stochastic algorithms. The conference was hosted by HSE, the Institute for Information Transmission Problems of the RAS and Skoltech. Organisers included HSE Faculty of Computer Science staff, Vladimir Spokoiny, Alexey Naumov, Denis Belomestny and Quentin Paris.
‘Our Programme Aims to Make a Research Breakthrough at the Intersection of Mathematics and Computer Science’
In 2017, the HSE Faculty of Computer Science and Skoltech are opening admissions to the Master's programme in Statistical Learning Theory, which will become the successor to the Mathematical Methods of Optimization and Stochastics programme. Vladimir Spokoiny, the programme's academic supervisor and professor of mathematics at Humboldt University in Berlin, told us about the research part of the new programme and the opportunities it offers to both Master's students and undergraduate students alike.