21. Dynamic Programming
In this chapter, the algorithm design technique of Dynamic Programming is detailed. The technique is demonstrated on the 0/1 Knapsack Problem, the Traveling Salesperson Problem, the All-Pairs Shortest Path Problem and Optimal Binary Search Tree Construction.
21.1. Introduction
Dynamic Programming is an effective algorithm design technique built on Bellman’s Principle of Optimality, which states that ‘an optimal sequence of decisions has the property that whatever the initial state and decisions are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision’.
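As a minimal illustration of the principle (on a small, assumed weighted graph that is not part of this chapter), consider a memoized shortest-path recursion: the optimal cost from a node must extend the optimal cost from whichever neighbour it steps to, so each subproblem is solved once and its result reused.

```python
from functools import lru_cache

# Hypothetical directed acyclic graph with edge weights (an assumed example).
edges = {
    ('A', 'B'): 1, ('A', 'C'): 4,
    ('B', 'C'): 2, ('B', 'D'): 6,
    ('C', 'D'): 3,
}
succ = {}
for (u, v), w in edges.items():
    succ.setdefault(u, []).append((v, w))

@lru_cache(maxsize=None)
def shortest(u, target):
    """Cost of the cheapest path from u to target.

    By the principle of optimality, an optimal path from u through a
    neighbour v must continue with an optimal path from v to target."""
    if u == target:
        return 0
    return min((w + shortest(v, target) for v, w in succ.get(u, [])),
               default=float('inf'))

print(shortest('A', 'D'))  # A -> B -> C -> D costs 1 + 2 + 3 = 6
```

The memoization table here plays the role of the tabulated subproblem solutions that the dynamic programming algorithms in this chapter build explicitly.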
The dynamic programming strategy is applicable to optimization problems, which comprise an objective function that needs to be maximized or minimized, subject to constraints that need to be satisfied. A candidate solution that satisfies the constraints is termed a feasible solution and when a feasible solution results in the best objective function value, it is termed an optimal solution.
The greedy method (discussed in Chapter 20) also applies to problems with the characteristics defined above and obtains optimal solutions to many of them. How then does the greedy method differ from dynamic programming?
A greedy method works in an iterative fashion, selecting objects constituting the solution set one by one and constructing a feasible solution set, which eventually turns into an optimal solution set. However, there are problems where ...
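One well-known case where a greedy choice falls short is the 0/1 Knapsack Problem, taken up later in this chapter. The following sketch, on an assumed instance (the weights, profits and capacity are illustrative, not from the text), contrasts a greedy selection by profit-to-weight ratio with the classical dynamic programming tabulation:

```python
def greedy_knapsack(weights, profits, capacity):
    """Greedy by profit/weight ratio: always feasible,
    but not always optimal when items are indivisible (0/1)."""
    order = sorted(range(len(weights)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total, remaining = 0, capacity
    for i in order:
        if weights[i] <= remaining:
            total += profits[i]
            remaining -= weights[i]
    return total

def dp_knapsack(weights, profits, capacity):
    """Dynamic programming table: best[c] = maximum profit with capacity c."""
    best = [0] * (capacity + 1)
    for w, p in zip(weights, profits):
        # Scan capacities in reverse so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + p)
    return best[capacity]

# Assumed instance: the high-ratio item lures the greedy method away
# from the truly optimal pair of items.
weights, profits, capacity = [1, 2, 3], [6, 10, 12], 5
print(greedy_knapsack(weights, profits, capacity))  # 16 (items of weight 1 and 2)
print(dp_knapsack(weights, profits, capacity))      # 22 (items of weight 2 and 3)
```

The greedy method commits to the weight-1 item first and can never revisit that decision, whereas the dynamic programming table considers every residual capacity and recovers the optimal combination.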