In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady state distribution.
More precisely, a fundamental result about Markov chains is that a finite state irreducible aperiodic chain has a unique stationary distribution π and, regardless of the initial state, the time-t distribution of the chain converges to π as t tends to infinity. Mixing time refers to any of several variant formalizations of the idea: how large must t be until the time-t distribution is approximately π? One variant, total variation distance mixing time, is defined as the smallest t such that the total variation distance of probability measures is small:
$$ t_{\mathrm{mix}}(\varepsilon) = \min\left\{ t \ge 0 : \max_{x \in S} \left\| P^t(x, \cdot) - \pi \right\|_{TV} \le \varepsilon \right\}, $$
where $S$ is the state space, $P^t(x, \cdot)$ is the distribution of the chain at time $t$ when started from state $x$, and $\| \cdot \|_{TV}$ denotes total variation distance.
Choosing a different $\varepsilon$, as long as $\varepsilon < 1/2$, can only change the mixing time up to a constant factor (depending on $\varepsilon$), and so one often fixes $\varepsilon = 1/4$ and simply writes $t_{\mathrm{mix}}$.
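The definition can be checked directly on a small chain by powering the transition matrix. The sketch below computes the total variation mixing time numerically; the lazy random walk on a cycle is an illustrative chain chosen for this example, not one discussed in the article.

```python
import numpy as np

# Lazy simple random walk on a cycle of n states (illustrative chain).
n = 16
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5                    # holding probability makes the chain aperiodic
    P[i, (i - 1) % n] += 0.25
    P[i, (i + 1) % n] += 0.25

pi = np.full(n, 1.0 / n)             # stationary distribution is uniform

def tv_distance(mu, nu):
    """Total variation distance between two distributions: half the L1 distance."""
    return 0.5 * np.abs(mu - nu).sum()

def mixing_time(P, pi, eps=0.25, t_max=10_000):
    """Smallest t with max_x ||P^t(x, .) - pi||_TV <= eps, or None if not reached."""
    Pt = np.eye(len(pi))
    for t in range(1, t_max + 1):
        Pt = Pt @ P                  # Pt now holds the t-step transition matrix
        worst = max(tv_distance(Pt[x], pi) for x in range(len(pi)))
        if worst <= eps:
            return t
    return None

print(mixing_time(P, pi))
```

For this chain the mixing time grows on the order of $n^2$, so larger cycles need proportionally more matrix powers; for chains with huge state spaces this brute-force computation is infeasible, which is why the theoretical tools discussed below matter.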
This is the sense in which Dave Bayer and Persi Diaconis (1992) proved that the number of riffle shuffles needed to mix an ordinary 52-card deck is 7. Mathematical theory focuses on how mixing times change as a function of the size of the structure underlying the chain. For an $n$-card deck, the number of riffle shuffles needed grows as $\tfrac{3}{2} \log_2 n$.

The most developed theory concerns randomized algorithms for #P-complete algorithmic counting problems, such as counting the proper colorings of a given $n$-vertex graph. Such problems can, for a sufficiently large number of colors, be answered using the Markov chain Monte Carlo method and showing that the mixing time grows only as $n \log n$ (Jerrum 1995). This example and the shuffling example possess the rapid mixing property: the mixing time grows at most polynomially fast in $\log N$, where $N$ is the number of states of the chain. Tools for proving rapid mixing include arguments based on conductance and the method of coupling. In broader uses of the Markov chain Monte Carlo method, rigorous justification of simulation results would require a theoretical bound on the mixing time, and many interesting practical cases have resisted such theoretical analysis.
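The method of coupling mentioned above can be sketched by simulation. Two copies of the same chain are run with joint randomness so that once they meet they stay together; the coupling inequality then bounds the total variation distance at time $t$ by the probability that the copies have not yet met. The chain and parameters below (a lazy random walk on a cycle, with independent moves until meeting) are an illustrative assumption, not taken from the article.

```python
import random

def coupled_step(a, b, n, rng):
    """Advance two lazy walks on the n-cycle one step; they move independently
    until they occupy the same state, after which they move together."""
    if a == b:
        return a, a
    # Each walk stays put with probability 1/2, else moves +1 or -1 uniformly.
    a = (a + rng.choice([0, 0, 1, -1])) % n
    b = (b + rng.choice([0, 0, 1, -1])) % n
    return a, b

def coupling_time(x, y, n, rng, t_max=100_000):
    """Number of steps until two walks started at x and y first meet."""
    a, b = x, y
    for t in range(1, t_max + 1):
        a, b = coupled_step(a, b, n, rng)
        if a == b:
            return t
    return t_max

rng = random.Random(0)
n = 16
# Start the two copies at opposite points of the cycle (the worst case here)
# and estimate the expected coupling time by Monte Carlo.
times = [coupling_time(0, n // 2, n, rng) for _ in range(2000)]
print(sum(times) / len(times))
```

The average coupling time for this chain grows on the order of $n^2$, matching the mixing-time behavior of the walk: by the coupling inequality, any tail bound on the meeting time translates directly into a bound on $t_{\mathrm{mix}}$.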