SMS scnews item created by Tiangang Cui at Wed 14 Aug 2024 1229
Type: Seminar
Modified: Wed 14 Aug 2024 1625; Fri 16 Aug 2024 1043
Distribution: World
Expiry: 14 Aug 2025
Calendar1: 23 Aug 2024 1300-1400
CalLoc1: Carslaw 275
CalTitle1: Reinforcement Learning for Adaptive MCMC
Auth: tcui@piha.staff.wireless.sydney.edu.au (tcui0786) in SMS-SAML

Statistics Seminar

Statistics Seminar Series: Reinforcement Learning for Adaptive MCMC

Wilson Chen

The next statistics seminar will be presented by Dr Wilson Chen from the Business School.

Title: Reinforcement Learning for Adaptive MCMC
Speaker: Wilson Chen
Time and location: 1-2pm, Friday 23 August, in Carslaw 275 or via Zoom
Abstract: An informal observation, made by several authors, is that the adaptive design of a Markov transition kernel has the flavour of a reinforcement learning task. Yet, to date it has remained unclear how to actually exploit modern reinforcement learning technologies for adaptive MCMC. The aim of this paper is to set out a general framework, called Reinforcement Learning Metropolis-Hastings, that is theoretically supported and empirically validated. Our principal focus is on learning fast-mixing Metropolis-Hastings transition kernels, which we cast as deterministic policies and optimise via a policy gradient. Control of the learning rate provably ensures that the conditions for ergodicity are satisfied. The methodology is used to construct a gradient-free sampler that outperforms a popular gradient-free adaptive Metropolis-Hastings algorithm on ≈90% of tasks in the PosteriorDB benchmark.
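
For attendees unfamiliar with adaptive MCMC, the sketch below illustrates the general flavour of the idea: a random-walk Metropolis sampler whose proposal scale is tuned online by a stochastic-gradient-style update with a decaying learning rate (so that adaptation vanishes, as required for ergodicity). This is a minimal illustration of adaptive tuning in general, not the Reinforcement Learning Metropolis-Hastings algorithm presented in the talk; the target acceptance rate and step-size schedule are assumptions chosen for the example.

    # Illustrative sketch only: adaptive random-walk Metropolis with a
    # diminishing-adaptation update of the proposal scale. NOT the RLMH
    # algorithm from the seminar.
    import numpy as np

    def adaptive_rwm(log_target, x0, n_iter=5000, target_accept=0.44, seed=0):
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        log_sigma = 0.0                      # tunable "policy" parameter: log proposal scale
        samples = np.empty((n_iter, x.size))
        for t in range(n_iter):
            prop = x + np.exp(log_sigma) * rng.standard_normal(x.size)
            log_alpha = min(0.0, log_target(prop) - log_target(x))
            if np.log(rng.uniform()) < log_alpha:
                x = prop
            # Push the scale towards the target acceptance rate, with a
            # decaying step size so the kernel stabilises over time.
            lr = 1.0 / (t + 1) ** 0.6
            log_sigma += lr * (np.exp(log_alpha) - target_accept)
            samples[t] = x
        return samples

    # Example: sample a 2-d standard normal target.
    if __name__ == "__main__":
        draws = adaptive_rwm(lambda z: -0.5 * np.dot(z, z), x0=np.zeros(2))
        print(draws.mean(axis=0), draws.std(axis=0))
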