Title: Fast Bayesian inference using Gaussian Processes (and an attempt at convincing you why it’s worth pursuing)
Abstract: With ever-more precise experiments generating an ever-increasing amount of data, and the need for theoretical computations to match that precision, ‘classic’ Bayesian inference algorithms often prove to be a bottleneck in the theory-to-constraints pipeline. Such slow-to-compute likelihoods arise when observables are expensive to compute, when there is a huge amount of data to compare against, or both. This, in turn, makes Bayesian inference extremely resource-intensive: typical sampling algorithms like Markov chain Monte Carlo or nested sampling require hundreds of thousands of evaluations of the likelihood/posterior distribution. Recently, likelihood-free approaches that circumvent this problem have gained attention. However, they come with their own set of challenges, such as managing biases, and they often require methods tailored to the specific problem at hand.
In my talk I will introduce the Python package “GPry”, an approach that maintains the simplicity and robustness of likelihood-based inference while substantially reducing the number of samples needed to obtain a representative Monte Carlo sample of the posterior. It employs Gaussian process interpolation of the posterior distribution and a deterministic, sequential acquisition of likelihood samples, drawing inspiration from Bayesian optimization. I will show examples from CMB cosmology and LISA sources, as well as some synthetic inference problems.
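To give a flavour of the idea, here is a minimal, self-contained sketch of a GP surrogate with sequential, Bayesian-optimization-style acquisition on a toy 1D log-posterior. This is an illustration only, not GPry's actual API or acquisition rule: the kernel, the fixed length scale, and the exploration-weighted score `exp(mu) * (exp(sigma) - 1)` are all assumptions made for this example.

```python
import numpy as np

# Toy 1D log-posterior: a Gaussian bump standing in for an expensive likelihood.
def log_post(x):
    return -2.0 * (x - 2.0) ** 2  # true mode at x = 2.0

# Squared-exponential kernel with a fixed length scale (no hyperparameter
# fitting here, just for illustration).
def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# Standard GP regression: predictive mean and standard deviation at Xs
# given training data (X, y), via a Cholesky factorization.
def gp_predict(X, y, Xs, ls=1.0, jitter=1e-6):
    K = rbf(X, X, ls) + jitter * np.eye(len(X))
    Ks = rbf(Xs, X, ls)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(np.diag(rbf(Xs, Xs, ls)) - np.sum(v ** 2, axis=0), 0.0, None)
    return mu, np.sqrt(var)

X = np.array([-4.0, 0.0, 4.0])   # small initial design
y = log_post(X)
grid = np.linspace(-5, 5, 401)   # candidate pool for the acquisition step

# Sequential acquisition: evaluate the true log-posterior only where the
# surrogate expects high posterior mass AND is still uncertain.
for _ in range(15):
    mu, sd = gp_predict(X, y, grid)
    acq = np.exp(mu) * (np.exp(sd) - 1.0)  # assumed acquisition score
    x_new = grid[np.argmax(acq)]
    X = np.append(X, x_new)
    y = np.append(y, log_post(x_new))

mu, _ = gp_predict(X, y, grid)
x_map = grid[np.argmax(mu)]  # surrogate's posterior mode, close to 2.0
print(f"Surrogate mode {x_map:.2f} after {len(X)} evaluations")
```

The point of the sketch is the budget: the mode is recovered from a handful of deliberately chosen likelihood evaluations, instead of the hundreds of thousands a Monte Carlo sampler would typically spend.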