
Hidden Markov Model


 

Lecture by Professor Xiaole Shirley Liu

Wiki by Da Lin and Allen Cheng


 

 

The Markov chain was named in honor of Professor Andrei A. Markov, who first published his findings in 1906. Markov was born on June 14, 1856 in Ryazan, Russia, and studied mathematics at the University of St. Petersburg, where he later taught as a professor until his death in 1922. Although his work in mathematics ranged from differential equations to probability theory, he is best known for his research on stochastic processes and the Markov chain.

 

On this page, we first give a brief introduction to Markov processes and the discrete Markov chain. We then present the Hidden Markov model and work through its standard problems using three methods: the forward algorithm, the backward algorithm, and the Viterbi algorithm. Finally, we examine and solve a Hidden Markov problem in a biological context.


Discrete Markov Chain

 


Markov Chain - Example 1

 

Suppose Diana has a test tomorrow, and she wants to know the probability of receiving a passing grade. On the 6 previous exams this semester, she passed the first three and failed the most recent three. Her genius roommate has also calculated for her the transition probabilities a(i,j) for her exam states:

 

 

What is the probability that Diana will pass her exam tomorrow?

 

Solution:

We can see this as a Markov chain problem with states S(1) - pass and S(2) - fail. Each exam can be associated with a time frame, denoted as q(n), where n is the exam number. We want to find the probability that q(7) = P, given the transition probabilities and the observed exam states PPPFFF. However, because a Markov process has no memory beyond its current state, the problem simplifies to finding the probability that q(7) = P given only that q(6) = F. This is shown mathematically below:
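In symbols, with S(1) = pass and S(2) = fail, the entire history PPPFFF collapses to a single transition probability out of the last observed state:

\[ P(q(7) = S(1) \mid q(1) \ldots q(6) = PPPFFF) = P(q(7) = S(1) \mid q(6) = S(2)) = a(2,1) \]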

 


Markov Chain - Example 2

 

Frank the Weatherman's computer has malfunctioned, so he needs to predict the 7-day weather forecast by hand. To simplify his task, he decides to report only 3 different states: rain (R), cloudy (C), and sunny (S). From college, he remembers that the transition probabilities for the different weather states are:

 

 

If it is sunny today, what is the probability that the observed weather for the next 7 days is S(today)SSRRSCS?

 

Solution:

This is again a Markov chain problem, with states S(1) - rain, S(2) - cloudy, and S(3) - sunny. Each day can be associated with a time frame, denoted as q(n), where n is the number of days from today. We want to find the probability of observing SSSRRSCS given that q(0) = 3. This is simply a multiplication of transition probabilities: if today is sunny, the probability that q(1) = 3 is just a(3,3); if q(1) = 3, the probability that q(2) = 3 is again a(3,3); and so forth. This is solved mathematically below:
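In symbols, the answer is a product of one transition probability per day:

\[ P(O \mid \text{model}) = a(3,3) \cdot a(3,3) \cdot a(3,1) \cdot a(1,1) \cdot a(1,3) \cdot a(3,2) \cdot a(2,3) \]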

 


Markov Chain - Example 3

 

Every year at the company-wide meeting, a select employee is honored with the chance at a promotion. To earn the promotion, the employee must guess correctly in the following game. The CEO has two coins, one fair and the other biased, and holds one in each hand. The CEO hands the employee a coin with the following initial probability:

 

 

After eight tosses, the CEO asks the employee to guess the order of the coins that were flipped (fair or biased). The employee knows the initial probabilities above and these transition probabilities:

 

 

This year's employee, Daniel, guesses FFBBFFFB. What is the probability that he receives the promotion?

 

Solution:

This is a Markov chain with two states: S(1) - fair and S(2) - biased. The question is very similar to Example 2, except that instead of knowing the initial state for certain, we must include an initial probability. From the initial probability, we can see that the probability of being handed a fair coin at the beginning is 0.6. Each subsequent factor is just the probability of transitioning from one state to the next. Intuitively, we are asking: what is the probability of being handed a fair coin initially? Once we have the fair coin, what is the probability of tossing the fair coin again? After the second toss with the fair coin, what is the probability of tossing a biased coin? And so forth. This is solved mathematically below:
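In symbols, with S(1) = fair and S(2) = biased, and with the stated initial probability of 0.6 for the fair coin:

\[ P(FFBBFFFB) = 0.6 \cdot a(1,1) \cdot a(1,2) \cdot a(2,2) \cdot a(2,1) \cdot a(1,1) \cdot a(1,1) \cdot a(1,2) \]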

 


Hidden Markov Model

 


Problems

 

Given an observation sequence, several problems become of interest. The ultimate goal is to determine the most likely state sequence that produced the observation sequence.


Problem 1

 

First, given the model parameters and probabilities, what is the probability of observing the observation sequence? Additionally, what is the probability that the process is in a particular state at a given observation point?

 

The first problem is a straightforward collection of probabilities. Suppose that we know or formulate a state sequence. The probability that the observation sequence is observed together with this state sequence is calculated by multiplying the relevant emission probability at each point:
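In standard notation, where b_{q_t}(O_t) is the probability that the state at point t emits the observation at point t, and \lambda stands for all model parameters:

\[ P(O \mid Q, \lambda) = \prod_{t=1}^{T} b_{q_t}(O_t) \]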

 

 

We will then need to calculate the probability of the state sequence itself, using the transition probabilities:
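Writing \pi_{q_1} for the initial probability of the first state and a for the transition probabilities:

\[ P(Q \mid \lambda) = \pi_{q_1} \, a_{q_1 q_2} \, a_{q_2 q_3} \cdots a_{q_{T-1} q_T} \]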

 

 

The probability that the observation sequence is observed, given all of the model's probabilities, is thus a sum over all possible state paths: for each state sequence, its own probability is multiplied by the probability of the observation sequence given that state sequence.
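Putting the two pieces together and summing over every possible state sequence Q:

\[ P(O \mid \lambda) = \sum_{\text{all } Q} P(O \mid Q, \lambda) \, P(Q \mid \lambda) \]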

 

 

This calculation grows exponentially with sequence length, however, so with long paths and many hidden states the brute-force approach is infeasible.

 

Dynamic programming can be employed instead, reusing the results of previous calculations to obtain each new probability. In essence, the algorithm uses the immediately preceding probabilities to calculate the current one:
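In the forward direction this recursion takes the standard form below, where \alpha_t(i) denotes the probability of the first t observations together with being in state i at point t:

\[ \alpha_{t+1}(j) = \Big[ \sum_{i=1}^{N} \alpha_t(i) \, a_{ij} \Big] \, b_j(O_{t+1}) \]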

 

 

This can be done with either of two procedures: one proceeds from the beginning of the observation sequence forward, and the other begins at the end of the observation sequence and moves backward.

 

The forward procedure essentially calculates the probability of a state at each point by considering transitions from the previous states and emissions from the given state. The algorithm begins by using the initial state distribution to calculate initial probabilities of states at the first observation point.
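With \pi_i the initial probability of state i, the initialization is:

\[ \alpha_1(i) = \pi_i \, b_i(O_1), \qquad i = 1, \ldots, N \]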

 

 

Returning to the coin-flipping example, at the next observation point the probability that the fair coin is used is calculated.

 

 

The procedure is as follows:

 

1. Multiply the previous probability that a fair coin was used by the transition probability from fair coin to fair coin.

2. Multiply the previous probability that a biased coin was used by the transition probability from biased coin to fair coin.

3. Sum the results of (1) and (2). This yields the probability of obtaining a fair coin from either previous state.

4. Multiply the result of (3) by the probability that the fair coin yields a Tail on a flip. This gives the probability that the fair coin was used at the second point.

 

The same procedure can be used to calculate the probability that a biased coin was used:

 

1. Multiply the previous probability that a fair coin was used by the transition probability from fair coin to biased coin.

2. Multiply the previous probability that a biased coin was used by the transition probability from biased coin to biased coin.

3. Sum the results of (1) and (2). This yields the probability of obtaining a biased coin from either previous state.

4. Multiply the result of (3) by the probability that the biased coin yields a Tail on a flip. This gives the probability that the biased coin was used at the second point.

 

 

This can be repeated at each observation point to yield state probabilities.

 

 

The final calculation is the sum of the final probabilities of both states. This yields the probability that the observation sequence is observed given the initial probability sequences.
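The whole forward procedure fits in a few lines of code. Below is a minimal Python sketch for the two-coin example; the initial, transition, and emission probabilities are hypothetical placeholders, not the lecture's actual numbers.

    # Minimal forward procedure for a two-state (fair/biased coin) HMM.
    # All probabilities below are hypothetical placeholders.
    states = ["F", "B"]                          # fair, biased
    init   = {"F": 0.6, "B": 0.4}                # initial distribution
    trans  = {"F": {"F": 0.9, "B": 0.1},         # transition probabilities a(i,j)
              "B": {"F": 0.1, "B": 0.9}}
    emit   = {"F": {"H": 0.5,  "T": 0.5},        # emission probabilities b_j(O_t)
              "B": {"H": 0.75, "T": 0.25}}

    def forward(obs):
        """Return P(obs | model) and the full table of alpha values."""
        alpha = [{s: init[s] * emit[s][obs[0]] for s in states}]    # initialization
        for o in obs[1:]:                                           # recursion
            prev = alpha[-1]
            alpha.append({j: sum(prev[i] * trans[i][j] for i in states) * emit[j][o]
                          for j in states})
        return sum(alpha[-1][s] for s in states), alpha             # termination

    prob, alpha = forward("HTTH")
    print(prob)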

 

 

The backward procedure proceeds in a similar way, but it calculates the probability of observing the flips that come after a given point. Again, it calculates the probability that a state exists at any given point; in this case, whether the coin is fair or biased. The algorithm initializes with probabilities of 1 for both fair and biased.

 

 

We will define the current point as the last tail in the observation sequence. The previous point will thus be the head before it. Then, to calculate the probability that the state preceding the current one is a fair coin, we use this procedure:

 

1. Multiply the probability that a fair coin will flip a Tail by the probability that a fair coin will transition into a fair coin.

2. Multiply the result of (1) by the probability that the current state is a fair coin.

3. Multiply the probability that a biased coin will flip a Tail by the probability that a fair coin will transition into a biased coin.

4. Multiply the result of (3) by the probability that the current state is a biased coin.

5. Sum (2) and (4) to yield the probability that the previous state was a fair coin.

 

The probability that the previous state was a biased coin can be calculated similarly:

 

1. Multiply the probability that a fair coin will flip a Tail by the probability that a biased coin will transition into a fair coin.

2. Multiply the result of (1) by the probability that the current state is a fair coin.

3. Multiply the probability that a biased coin will flip a Tail by the probability that a biased coin will transition into a biased coin.

4. Multiply the result of (3) by the probability that the current state is a biased coin.

5. Sum (2) and (4) to yield the probability that the previous state was a biased coin.
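In symbols, with \beta_t(i) the probability of the observations after point t given state i at point t, the initialization and recursion are:

\[ \beta_T(i) = 1, \qquad \beta_t(i) = \sum_{j=1}^{N} a_{ij} \, b_j(O_{t+1}) \, \beta_{t+1}(j) \]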

 

 

Once again, these two calculations can be performed for the rest of the observation sequence.
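A matching Python sketch of the backward procedure, reusing the hypothetical probability tables from the forward sketch above:

    def backward(obs):
        """Return the table of beta values, computed right to left."""
        beta = [{s: 1.0 for s in states}]        # initialization: beta_T(i) = 1
        for o in reversed(obs[1:]):              # recursion, moving backwards
            nxt = beta[0]
            beta.insert(0, {i: sum(trans[i][j] * emit[j][o] * nxt[j] for j in states)
                            for i in states})
        return beta

    beta = backward("HTTH")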

 


Problem 2

 

The next problem involves choosing the most likely state sequence given the observation sequence. This can be done with the calculations from the forward and backward procedures: by running the two procedures separately and storing the probabilities at each point, a probabilistic prediction can be made at every point.
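In symbols, the posterior probability of state i at point t combines the forward and backward tables, and the prediction at each point is the state that maximizes it:

\[ \gamma_t(i) = \frac{\alpha_t(i) \, \beta_t(i)}{\sum_{j=1}^{N} \alpha_t(j) \, \beta_t(j)}, \qquad \hat{q}_t = \arg\max_i \, \gamma_t(i) \]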

 

 

This maximizes the expected number of correctly predicted states.


Viterbi Algorithm

 

The Viterbi algorithm is a dynamic programming algorithm used to find the most likely sequence of hidden states, called the Viterbi path. Like the forward-backward algorithm, the Viterbi algorithm uses dynamic programming, in that each current probability is calculated from the results of previous calculations.

 

The algorithm initializes by calculating the initial probabilities of each state. We continue with the coin-flipping example, so there are two states: a fair coin and a biased coin. There can, of course, be many more states.
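In symbols, with \delta_1(i) the probability of the best path ending in state i after the first observation, the initialization is:

\[ \delta_1(i) = \pi_i \, b_i(O_1), \qquad \psi_1(i) = 0 \]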

 

 

The newly introduced Psi variable will allow us to determine the Viterbi path, as we will soon see.

 

 

We will now call the second observation "current" and the initial observation "previous." The recursion step proceeds by considering each state independently. We will take the fair coin first. The fair coin could have been approached from either previous state, fair or biased. One previous state, however, will give a higher probability of transitioning to the current fair coin state. The dynamic programming comes into play when the previous state's probabilities are included in the recursion step. The general formula is as follows:
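In standard notation, \delta_t(j) is the probability of the best path ending in state j at point t, and \psi_t(j) records which previous state achieved that maximum:

\[ \delta_t(j) = \max_i \big[ \delta_{t-1}(i) \, a_{ij} \big] \, b_j(O_t), \qquad \psi_t(j) = \arg\max_i \big[ \delta_{t-1}(i) \, a_{ij} \big] \]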

 

 

Applied to this example and to both states,

 

 

Let us examine this calculation more closely. The top line is used to determine the previous state that will most likely yield the fair coin state in the current observation. Two quantities are compared:

1. the previous probability of the fair coin multiplied by the transition probability from fair coin to fair coin

2. the previous probability of the biased coin multiplied by the transition probability from biased coin to fair coin.

 

In this case, (1) is higher, so the psi variable records the fact that the fair coin in the previous state is most likely to yield the fair coin in the current state.

 

These steps are repeated for the current biased coin state. Paths can thus be traced to depict the progress of the psi variable.

 

We move to the next observation and repeat the calculations:

 

 

At the end, we can obtain the complete psi series for both states:

 

 

The termination proceeds as follows:
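In symbols, the best final probability and final state are read off the last observation point, and the path is then traced backward through psi:

\[ P^{*} = \max_i \, \delta_T(i), \qquad \hat{q}_T = \arg\max_i \, \delta_T(i), \qquad \hat{q}_t = \psi_{t+1}(\hat{q}_{t+1}) \]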

 

 

and we can ascertain that the fair state yields the higher probability at the end. The purpose of the Viterbi algorithm then surfaces when we trace the Viterbi sequence: starting at the fair coin in the last observation, we simply determine which state was most likely to lead to the final state. A path can thus be traced:

 

 

Thus, the state sequence FFFF will be most likely to produce the observation sequence HTTH.
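A minimal Python sketch of the Viterbi algorithm, again reusing the hypothetical probability tables from the forward sketch above; with those placeholder numbers it does recover FFFF for HTTH.

    def viterbi(obs):
        """Return the most likely state path and its probability."""
        delta = [{s: init[s] * emit[s][obs[0]] for s in states}]    # initialization
        psi = [{s: None for s in states}]
        for o in obs[1:]:                                           # recursion
            prev, d, p = delta[-1], {}, {}
            for j in states:
                best = max(states, key=lambda i: prev[i] * trans[i][j])
                p[j] = best                                         # psi: best predecessor
                d[j] = prev[best] * trans[best][j] * emit[j][o]
            delta.append(d)
            psi.append(p)
        last = max(states, key=lambda s: delta[-1][s])              # termination
        path = [last]
        for p in reversed(psi[1:]):                                 # backtracking
            path.insert(0, p[path[0]])
        return "".join(path), delta[-1][last]

    print(viterbi("HTTH"))    # ('FFFF', ...) with these placeholder numbers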

 

More complicated systems can be analyzed, of course:

 

 

Assuming that state B produces the highest probability in the last observation, we can trace a Viterbi sequence of BDDDAB.

 


Problem 3

 

The last basic problem for a Hidden Markov Model involves estimating all the parameters (transition and emission probabilities, initial probabilities, etc.) so as to maximize the probability of an observation sequence given those parameters.

 

One method of estimating the optimal parameters is the Baum-Welch algorithm, which is essentially an expectation-maximization (EM) algorithm. Begin by randomly initializing the parameters; then, using the forward and backward procedures on the observation sequence, compute the expected usage of each transition and emission; finally, update the parameters to increase the likelihood, and repeat until convergence.
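In symbols, one iteration for a single observation sequence looks as follows: the expected use of each transition is computed from the forward and backward tables, and the parameters are then re-estimated from those expectations.

\[ \xi_t(i,j) = \frac{\alpha_t(i) \, a_{ij} \, b_j(O_{t+1}) \, \beta_{t+1}(j)}{P(O \mid \lambda)}, \qquad \gamma_t(i) = \sum_{j} \xi_t(i,j) \]

\[ \hat{\pi}_i = \gamma_1(i), \qquad \hat{a}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}, \qquad \hat{b}_j(k) = \frac{\sum_{t:\, O_t = k} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)} \]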


Additional Applications of Hidden Markov Models

 

Stock Market - predicts Bear/Bull Market state (hidden) based on observed stock fluctuations.

Speech Recognition - predicts sentences and words (hidden) based on observed/perceived sounds.

Digital Sound Processing - predicts source signal (hidden) based on observed signal fluctuations.

Bioinformatics - a multitude of applications, including sequence motif finding, gene prediction, genome copy number change, protein structure prediction, and protein-DNA interaction prediction.


Additional Resources

 

Readings

Eddy, S.R. Hidden Markov Models. Current Opinion in Structural Biology 6:361-365. 1996.

Link

 

Eddy, S.R. What is a hidden Markov model? Nature Biotechnology 22:1315-1316. 2004.

Harvard Link

 

Krogh, A. An Introduction to Hidden Markov Models for Biological Sequences. In Computational Methods in Molecular Biology, edited by S.L. Salzberg, D.B. Searls, and S. Kasif. Elsevier. 1998.

Link

 

Rabiner, L.R. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77 (2), p. 257–286, February 1989.

Link

 

Ching, W.-K., Ng, M.K. Markov Chains: Models, Algorithms and Applications. Springer-Verlag New York. 2005.

 

Links

Wikipedia on HMM Link

HMMER Updated Link

Understanding HMM from a mathematical perspective Link

 

References

Stat115 Lecture (Xiaole Shirley Liu) Harvard Link

 

Cartoon images from YahooImages
