Department Seminar Series
The Quest for Optimal Parallel Monte Carlo Methods
4th March 2025, 13:00
ELEC204, 2nd Floor Lecture Theatre EEE
Alessandro Varsi
University of Liverpool
Abstract
Monte Carlo (MC) methods are fundamental in Bayesian inference and Machine Learning, enabling sampling from complex posterior distributions. Markov Chain Monte Carlo (MCMC), first introduced in 1953, has been the dominant approach but suffers from being inherently sequential. In 1993, Sequential Monte Carlo (SMC) emerged as a parallelizable alternative, yet the resampling step (and its redistribution component) remained difficult to scale efficiently.
For years, parallel resampling was believed to be infeasible. However, breakthroughs in Distributed Memory (DM) architectures led to Optimal Parallel Redistribution (OPR) for fixed-size samples, culminating in an optimal O(log N) algorithm in 2021. Yet, recent evidence reveals that OPR for variable-size samples is NP-complete in DM, posing a major challenge for scalable Bayesian inference. In Shared Memory (SM), OPR for variable-size samples remains achievable, but an efficient DM solution is an open problem.
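To give a flavour of the resampling step discussed above, here is a minimal sequential sketch of systematic resampling, a standard SMC variant (illustrative only: the function name and seeding are our own, and the parallel redistribution algorithms covered in the talk are considerably more involved):

```python
import random

def systematic_resample(samples, weights, seed=0):
    """Draw len(samples) new samples in proportion to `weights` (assumed to sum to 1)."""
    n = len(samples)
    u0 = random.Random(seed).random() / n  # one shared uniform offset
    # Evenly spaced pointers across [0, 1): the hallmark of systematic resampling.
    positions = [u0 + i / n for i in range(n)]
    # Cumulative weights determine which sample each pointer lands on.
    cum, total = [], 0.0
    for w in weights:
        total += w
        cum.append(total)
    out, j = [], 0
    for p in positions:
        while cum[j] < p:
            j += 1
        out.append(samples[j])  # "redistribution": copy sample j into a new slot
    return out

# Example: the heavily weighted sample dominates after resampling.
resampled = systematic_resample(["a", "b", "c", "d"], [0.1, 0.7, 0.1, 0.1])
```

The copy loop at the end is the redistribution component: performed sequentially it is O(N), and distributing those copies across processors efficiently is precisely the problem the talk addresses.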
This talk explores past breakthroughs, fundamental limitations, and current challenges of parallel Monte Carlo methods.
This talk will be streamed on Teams: https://teams.microsoft.com/l/meetup-join/19%3ameeting_ZjEwZmJiMGEtZTk2MC00OWExLTgwYzAtMTNkMGM0YWYwNDE0%40thread.v2/0?context=%7b%22Tid%22%3a%2253255131-b129-4010-86e1-474bfd7e8076%22%2c%22Oid%22%3a%22ea14c976-751c-48ac-9b37-112f41cfb44b%22%7d
Meeting ID: 382 773 593 185
Passcode: tX2QF9pn
Ashton Street, Liverpool, L69 3BX
United Kingdom
Call the department
+44 (0)151 795 4275