
# Dynamic Programming

Dynamic programming offers a way to solve certain classes of non-linear problems. It is useful for problems that can be thought of as making one decision after another. To understand how it works, it is easiest to start with the final decision. Our goal is to maximize a score S over a series of decisions 1, ..., n. Each decision has several possible outcomes, or "states" (1, ..., m). The final score is based on the scores already computed for every possible state at decision n-1, plus the best value obtained by moving from each of those states to each possible state at decision n. We can write the score as

    S(i, j) = max_{k = 1, ..., m} [ S(k, j-1) + v(i, j, k) ],   (1)

where i is the given state and v(i, j, k) is the value obtained from moving from state k to state i at decision j. The best series of decisions is then found by tracing backwards, taking at each decision j the state that corresponded to the highest score.
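The recurrence and the backward traceback can be sketched in code. This is a minimal illustrative implementation, not the authors' alignment program: the function name `dp_max_path` and the callable `v(i, j, k)` passed in are assumptions made here for the example.

```python
def dp_max_path(n, m, v):
    """Maximize the score S over n decisions, each with m possible states.

    v(i, j, k) is the value of moving from state k (at decision j-1)
    to state i at decision j. Returns (best_score, best_state_sequence).
    """
    # S[i] holds the best score ending in state i at the current decision.
    S = [0.0] * m
    back = []  # back[j-1][i]: best predecessor state k for state i at decision j

    for j in range(1, n):
        S_new = [0.0] * m
        ptr = [0] * m
        for i in range(m):
            # Equation (1): S(i, j) = max_k [ S(k, j-1) + v(i, j, k) ]
            best_k = max(range(m), key=lambda k: S[k] + v(i, j, k))
            S_new[i] = S[best_k] + v(i, j, best_k)
            ptr[i] = best_k
        S = S_new
        back.append(ptr)

    # Traceback: start from the best final state and walk backwards,
    # following the stored pointers to recover the whole series of decisions.
    i = max(range(m), key=lambda k: S[k])
    path = [i]
    for ptr in reversed(back):
        i = ptr[i]
        path.append(i)
    return max(S), path[::-1]
```

Note that the two nested loops over states sit inside the loop over decisions, which is exactly the n*m*m cost discussed below.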

In this most general form, the algorithm is quite expensive: the cost is of order n*m*m, since every pair of states must be examined at every decision. For a large number of states the problem quickly becomes impractical. The easiest way to reduce the cost is to limit the number of states that must be searched when moving from one decision to the next. The next two subsections describe ways to limit the search that are practical for certain classes of problems.