Pac-Bayesian Based Adaptation for Regularized Learning

Apr 14 2022 - 1pm


Prem Talwai


Abstract: In this paper, we propose a PAC-Bayesian a posteriori parameter selection scheme for adaptive regularized regression in Hilbert scales under general, unknown source conditions. We demonstrate that our approach is adaptive to misspecification and achieves the optimal learning rate under light distributional assumptions. Unlike existing parameter selection schemes, the computational complexity of our approach is independent of sample size. We derive minimax adaptive rates for a new, broad class of kernel ridge regression problems under general, misspecified source conditions; notably, these rates do not require any conventional a priori assumptions on kernel eigendecay. Using the theory of real interpolation, we demonstrate that the spectrum of the Mercer operator can be inferred in the presence of “tight” L∞ embeddings of suitable Hilbert scales. Finally, we prove that, under a ∆2 condition on the smoothness index functions, our PAC-Bayesian scheme can indeed achieve minimax rates. We discuss applications of our approach to statistical inverse problems and oracle-efficient contextual bandit algorithms.
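For readers less familiar with the setting, the basic problem — kernel ridge regression with a data-driven (a posteriori) choice of the regularization parameter — can be sketched as follows. This is a generic hold-out selection rule on a Gaussian kernel, purely for illustration; it is not the PAC-Bayesian scheme of the talk, and all function names and the kernel choice are assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    # NOTE: the talk's results do not assume a specific kernel; this is illustrative.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(K, y, lam):
    # Kernel ridge regression: solve (K + n*lam*I) alpha = y.
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def select_lambda(X_tr, y_tr, X_val, y_val, lambdas, gamma=1.0):
    # A simple a posteriori rule: pick the lambda minimizing held-out
    # squared error (a stand-in for more refined adaptive schemes).
    K_tr = gaussian_kernel(X_tr, X_tr, gamma)
    K_val = gaussian_kernel(X_val, X_tr, gamma)
    errs = []
    for lam in lambdas:
        alpha = krr_fit(K_tr, y_tr, lam)
        errs.append(np.mean((K_val @ alpha - y_val) ** 2))
    return lambdas[int(np.argmin(errs))]

if __name__ == "__main__":
    # Synthetic regression: y = sin(3x) + noise.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(80, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(80)
    grid = [1e-4, 1e-3, 1e-2, 1e-1]
    lam = select_lambda(X[:60], y[:60], X[60:], y[60:], grid, gamma=2.0)
    print("selected lambda:", lam)
```

Note that hold-out selection scans the full grid and refits on each candidate, so its cost grows with sample size — one contrast with the sample-size-independent complexity claimed in the abstract.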


Bio: Prem Talwai is a second-year PhD student in the Operations Research Center advised by David Simchi-Levi. His interests lie at the intersection of functional analysis and learning theory, with a particular focus on developing new frameworks/perspectives for handling misspecification in broad settings. He is also excited by applications of these ideas to sequential decision-making in pricing and revenue management. He completed his undergraduate studies in mathematics at Cornell.