Bayes In the Real World: The Metropolis-Hastings Algorithm
Wrapping Up and Next Steps
We've used the Metropolis-Hastings algorithm to approximately sample from a probability distribution with an unknown normalizing constant (a minimal recap sketch follows the list below). As I've been hinting at, however, this was very much a best-case scenario:
- We knew the actual distribution, so we could easily assess our work.
- Our distribution was one-dimensional, so simpler methods that scale less well would have been a lot faster.
- Our distribution was defined over the whole real line, so we had no issues with domain errors.
- Computing relative probabilities was very quick, so we weren't very concerned with the exact number of samples we needed.
- Because we knew the distribution, we also knew where we needed to start, more or less, and how wide our proposal should be.
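For reference, here is a minimal sketch of the kind of one-dimensional sampler described above. It is illustrative rather than the exact code from this series: the function name, the standard-normal example target, and the Gaussian random-walk proposal are assumptions, but the accept/reject step shows why the unknown normalizing constant never matters.

```python
import numpy as np

def metropolis_hastings(log_unnormalized, initial, proposal_width, n_samples, rng=None):
    """Draw approximate samples from a density known only up to a constant.

    log_unnormalized: log of the unnormalized target density (hypothetical name).
    initial: starting point for the chain.
    proposal_width: std. dev. of the symmetric Gaussian random-walk proposal.
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = np.empty(n_samples)
    current = initial
    current_logp = log_unnormalized(current)
    for i in range(n_samples):
        # Propose a move from a symmetric Gaussian centered at the current state.
        proposal = current + rng.normal(scale=proposal_width)
        proposal_logp = log_unnormalized(proposal)
        # Accept with probability min(1, p(proposal) / p(current));
        # the normalizing constant cancels in this ratio, so we never need it.
        if np.log(rng.uniform()) < proposal_logp - current_logp:
            current, current_logp = proposal, proposal_logp
        samples[i] = current
    return samples

# Example target: a standard normal, specified only up to a constant.
draws = metropolis_hastings(lambda x: -0.5 * x**2, initial=0.0,
                            proposal_width=1.0, n_samples=10_000)
```

Note how every choice flagged in the list above shows up as an argument here: `initial` and `proposal_width` were easy to pick only because we already knew the target.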
That'll change next time—we'll apply Metropolis-Hastings to a real-world Bayesian statistics problem and give it a tougher challenge. Until then!