

CSE 2320 Section 501/571 Fall 1999

Homework 4 Solution

1.
Dynamic programming solution to the rental car problem.

(a)
Let c(i,j) be the optimal cost of driving from agency i to agency j. Assuming the optimal trip from i to j consists of trips from i to k and from k to j,

c(i,j) = c(i,k) + c(k,j)

(b)
Prove that the rental car problem exhibits optimal substructure.

Let $R_{i \ldots j}$ be the optimal (minimum cost) route for traveling from city i to city j, and assume the route passes through city k. Let $R_{i \ldots k}$ and $R_{k \ldots j}$ be the routes from i to k and from k to j, respectively, contained in the optimal solution to the original problem. To prove optimal substructure, we need to show that for the route $R_{i \ldots j}$ to be optimal, the routes $R_{i \ldots k}$ and $R_{k \ldots j}$ must also be optimal solutions to their respective subproblems.

Using proof by contradiction, assume there is a better solution $R^{\prime}_{i \dots k}$ for the subproblem of traveling from city i to city k, such that $cost(R^{\prime}_{i \dots k}) < cost(R_{i \ldots k})$. Since the solution for traveling from k to j will work regardless of how we arrived at city k, we could then use the better solution $R^{\prime}_{i \dots k}$ to arrive at a lower cost solution to the original problem.

\begin{eqnarray*}
cost(R_{i \ldots j}) & = & cost(R_{i \ldots k}) + cost(R_{k \ldots j}) \\
& > & cost(R^{\prime}_{i \ldots k}) + cost(R_{k \ldots j})
\end{eqnarray*}


But this contradicts the original definition that $R_{i \ldots j}$ is optimal. Therefore, the route from i to k contained in the original solution $R_{i \ldots j}$ must be an optimal solution to the subproblem of traveling from i to k. The same argument works for the subproblem of traveling from k to j. Thus, the problem exhibits optimal substructure.

(c)
Define a recursive solution for computing c(i,j) and write pseudocode for a divide-and-conquer algorithm implementing your solution.

Because we do not know the optimal value for k, we try all possible values.

\begin{displaymath}
c(i,j) = \left\{ \begin{array}{ll}
0 & i = j \\
\min(C[i,j], \min_{i < k < j} \{ c(i,k) + c(k,j) \}) & i < j
\end{array} \right.
\end{displaymath}

where C[i,j] is the given cost of traveling directly from city i to city j.


 RECURSIVE-TRAVEL(C,i,j)
   if i = j
     then return 0
     else c = C[i,j]
          for k = i+1 to j-1
            q = RECURSIVE-TRAVEL(C,i,k) + RECURSIVE-TRAVEL(C,k,j)
            if q < c
              then c = q
          return c
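As a sketch, the divide-and-conquer solution can be transcribed into Python. The cost table C here is assumed to be a mapping of mappings indexed by agency number; this is an illustration, not part of the assigned solution.

```python
def recursive_travel(C, i, j):
    """Divide-and-conquer cost of traveling from agency i to agency j.

    C[i][j] is the given direct cost from i to j (any mapping indexed
    by agency number works; only pairs with i < j are ever consulted).
    """
    if i == j:
        return 0
    best = C[i][j]  # start with the direct trip
    for k in range(i + 1, j):  # try every intermediate stop k
        q = recursive_travel(C, i, k) + recursive_travel(C, k, j)
        if q < best:
            best = q
    return best
```

On a small instance where chaining short hops beats every direct trip, the function returns the chained cost, matching the recurrence above.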

(d)
Give a recurrence T(n) for the running time of your recursive solution in part c, where n=j-i+1, and show that $T(n) = \Omega(2^n)$.

The recurrence for RECURSIVE-TRAVEL is shown below (best, average and worst cases are all the same). Note the constraints on n showing that recursive calls are made only if j > i + 1.

\begin{displaymath}
T(n) = \left\{ \begin{array}{ll}
\Theta(1) & n \leq 2 \\
\sum_{k=2}^{n-1} (T(k) + T(n-k) + \Theta(1)) & n > 2
\end{array} \right.
\end{displaymath}

Using the substitution method, we can show $T(n) = \Omega(2^n)$ by proving $T(n) \geq c2^n$, using the inductive hypotheses $T(k) \geq c2^{k}$ and $T(n-k) \geq c2^{n-k}$, where both k and n-k are less than n.

\begin{eqnarray*}
T(n) & \geq & \sum_{k=2}^{n-1} (c2^k + c2^{n-k} + \Theta(1)) \\
& = & c(2^n - 4) + c(2^{n-1} - 2) + \Theta(n) \\
& \geq & c2^n - 6c + \Theta(n) \\
& \geq & c2^n
\end{eqnarray*}


The last inequality is true only if $6c - \Theta(n) \leq 0$, or $c \leq \Theta(n)/6$, which is true for any constant c and sufficiently large n. Thus, $T(n) = \Omega(2^n)$.
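The exponential blow-up can also be checked empirically by counting the calls RECURSIVE-TRAVEL makes. The counter below is a hypothetical instrumentation (not part of the assigned solution): a call on a span of n = j - i + 1 cities makes, for each intermediate k, one recursive call on a span of size m = k - i + 1 and one on a span of size n - m + 1.

```python
def count_calls(n):
    """Number of RECURSIVE-TRAVEL invocations on a trip of n cities,
    following the recursion structure directly (no memoization)."""
    if n <= 2:
        return 1  # base case: direct trip, no recursive calls
    return 1 + sum(count_calls(m) + count_calls(n - m + 1)
                   for m in range(2, n))
```

The call count is at least $2^{n-2}$ for every n, consistent with $T(n) = \Omega(2^n)$.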

(e)
Demonstrate that the recursive solution has overlapping subproblems and compute the number of unique subproblems.

Below is a portion of the recursion tree for the computation of c(1,4), with overlapping subproblems indicated.

[Figure: recursion tree for the computation of c(1,4), with overlapping subproblems marked.]

Since there is no backtracking, the number of unique subproblems is the number of different values for i and j from 1 to n in c(i,j), where $i \leq j$, which can be expressed as

\begin{displaymath}\sum_{i=1}^{n} (n-i+1) = \sum_{i=1}^{n} i = n(n+1)/2 = \Theta(n^2). \end{displaymath}

Therefore, the number of unique subproblems is polynomial.
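Because only $\Theta(n^2)$ pairs (i, j) exist, memoizing the recursion removes all the repeated work. A minimal Python sketch, using functools.lru_cache as a stand-in for an explicit table (the overall structure, not the assigned bottom-up solution):

```python
from functools import lru_cache

def memo_travel(C, n):
    """Optimal cost c(1, n); each pair (i, j) is computed at most once."""
    @lru_cache(maxsize=None)
    def c(i, j):
        if i == j:
            return 0
        # direct route, or best split at an intermediate city k
        return min([C[i][j]] +
                   [c(i, k) + c(k, j) for k in range(i + 1, j)])
    return c(1, n)
```

With memoization the running time drops from exponential to polynomial, since each of the $\Theta(n^2)$ subproblems does $O(n)$ work.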

(f)
An $O(n^3)$ bottom-up, dynamic programming solution to the rental car problem.

 TRAVEL(C,n)
 1  allocate $c[1 \ldots n, 1 \ldots n]$ ;; costs
 2  allocate $a[1 \ldots n, 1 \ldots n]$ ;; agencies
 3  for i = 1 to n
 4    for j = 1 to n
 5      c[i,j] = C[i,j] ;; initially choose direct routes
 6      a[i,j] = j
 7  for ws = 3 to n
 8    for i = 1 to n - ws + 1
 9      j = i + ws - 1
10      for k = i+1 to j-1
11        q = c[i,k] + c[k,j]
12        if q < c[i,j]
13          then c[i,j] = q
14               a[i,j] = k

Analysis and Explanation: TRAVEL first allocates (lines 1-2) the arrays for holding the optimal cost and the optimal decisions for which intermediate city to visit next. Thus, a[i,j]=k implies that in traveling from city i to j, you should stop to change cars in city k. Lines 3-6 initialize the cost and decision arrays to be the cost matrix and the decision to travel directly from i to j, which may change later as we consider more complex routes. Lines 1-6 take $O(n^2)$ time and memory. Lines 7-14 contain three nested for loops, each of which may run n times in the worst case, yielding a total of $O(n^3)$ times through lines 11-14. Thus, the running time of TRAVEL is dominated by the nested for loops, i.e., $T(n) = O(n^3)$. Note that the window size (ws) starts at 3, because a trip of size 2 (i.e., travel from city i to city i+1) has already been processed correctly in lines 3-6.
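The bottom-up computation might look as follows in Python (a sketch, assuming 1-indexed tables padded with a dummy row and column 0; c and a play the same roles as in TRAVEL):

```python
def travel(C, n):
    """Bottom-up DP for the rental car problem.

    C[i][j] is the direct cost from city i to city j (1-indexed).
    Returns (c, a): c[i][j] is the optimal cost, and a[i][j] is the
    first place to change cars on an optimal route (a[i][j] == j
    means travel direct).
    """
    c = [[0] * (n + 1) for _ in range(n + 1)]
    a = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            c[i][j] = C[i][j]          # initially choose direct routes
            a[i][j] = j
    for ws in range(3, n + 1):         # window size, as in TRAVEL
        for i in range(1, n - ws + 2):
            j = i + ws - 1
            for k in range(i + 1, j):  # try every intermediate stop
                q = c[i][k] + c[k][j]
                if q < c[i][j]:
                    c[i][j] = q
                    a[i][j] = k
    return c, a
```

The three nested loops over ws, i, and k give the same $O(n^3)$ bound as the pseudocode.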

All that remains is an algorithm to output the optimal sequence of cities to visit. The following OPTIMAL-ROUTE(a,i,j) algorithm outputs the optimal cities as stored in the a array by a call to TRAVEL.


 OPTIMAL-ROUTE(a,i,j)
   print i
   OPTIMAL-ROUTE-1(a,i,j)

 OPTIMAL-ROUTE-1(a,i,j)
   if a[i,j] = j
     then print j
     else OPTIMAL-ROUTE-1(a,i,a[i,j])
          OPTIMAL-ROUTE-1(a,a[i,j],j)

Since OPTIMAL-ROUTE makes O(n) recursive calls, where n is the number of cities, its running time is O(n). Thus, the total time to produce the optimal route is still $O(n^3)$.
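OPTIMAL-ROUTE translates directly; the Python sketch below returns the list of cities instead of printing them (the decision table a is assumed to be a mapping of mappings as produced by the dynamic program, with a[i][j] == j meaning travel direct):

```python
def optimal_route(a, i, j):
    """Return the optimal city sequence from i to j, given the
    decision table a from the dynamic program."""
    def rest(i, j):
        if a[i][j] == j:
            return [j]           # direct trip: j is the next stop
        k = a[i][j]              # change cars at k, then continue
        return rest(i, k) + rest(k, j)
    return [i] + rest(i, j)
```

Each city on the route is emitted exactly once, so this reconstruction runs in O(n) time.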

2.
Consider a greedy algorithm for the rental car problem where the greedy choice is to choose the lowest cost single car trip from your current location (originally i) to some other location k along the way to j, and then continue with the same greedy choice from k.

[Figure: an instance of the rental car problem showing the cost of every direct route.]

Consider the above instance of the rental car problem depicting the costs of every direct route. The greedy choice would be to go from city 1 to city 3 first, and then to city 4, for a total cost of 11. However, the optimal route is to go directly from 1 to 4. Therefore, the greedy choice is not part of an optimal solution, and the problem does not exhibit the greedy-choice property.
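The failure is easy to reproduce in code. The cost matrix below is hypothetical (the exact figure values are not reproduced here), chosen only so that the cheapest first hop from city 1 is to city 3, with total cost 11, while the direct trip 1 to 4 costs 10:

```python
def greedy_travel(C, i, j):
    """Greedy strategy: from the current city, always take the cheapest
    single direct trip to any city k with i < k <= j, then repeat from k."""
    total = 0
    while i != j:
        k = min(range(i + 1, j + 1), key=lambda k: C[i][k])
        total += C[i][k]
        i = k
    return total
```

On the instance in the test, the greedy route costs 11 while the direct trip costs 10, so greedy is suboptimal.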

3.
Problem 17-1a: Describe a greedy algorithm for making change from quarters, dimes, nickels, and pennies using the fewest number of coins. The following algorithm CHANGE(n), where n is the amount of change to be made, implements the greedy choice of first using as many quarters as possible without exceeding n, then doing the same with dimes on whatever remains, then nickels, and then pennies.


 CHANGE(n)
   while $n \geq 25$
     output ``quarter''
     n = n - 25
   while $n \geq 10$
     output ``dime''
     n = n - 10
   while $n \geq 5$
     output ``nickel''
     n = n - 5
   while $n \geq 1$
     output ``penny''
     n = n - 1
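CHANGE transcribes directly into Python; the sketch below returns the list of coin values (in cents) rather than printing coin names:

```python
def change(n):
    """Greedy coin changing: repeatedly take the largest denomination
    that does not exceed the remaining amount. Returns the coins used."""
    coins = []
    for denom in (25, 10, 5, 1):  # quarter, dime, nickel, penny
        while n >= denom:
            coins.append(denom)
            n -= denom
    return coins
```

For example, 67 cents becomes two quarters, a dime, a nickel, and two pennies.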

To prove this greedy choice yields an optimal solution, we must show optimal substructure and the greedy choice property.

Proof of Optimal Substructure: Let $\{ c_1, c_2, ..., c_m \}$ be the coins comprising an optimal solution to the coin-changing problem, where each ci is a penny, nickel, dime, or quarter, and the ci's sum to n, the change to be made. Setting aside the first coin choice c1, the remaining coins $\{ c_2, ..., c_m \}$ represent a solution to the n - d(c1) coin-changing problem, where d(c) is the denomination of coin c. If $\{ c_2, ..., c_m \}$ were not an optimal solution to the n - d(c1) change problem, then there would exist another solution $\{ a_2, ..., a_k \}$ with k < m, i.e., one using fewer coins. But if such a solution existed for n - d(c1), then combining it with c1 would yield a better solution to the n change problem. Since we started with an optimal solution, this is a contradiction; therefore $\{ c_2, ..., c_m \}$ must be an optimal solution to the subproblem n - d(c1), and the coin-changing problem exhibits optimal substructure.

Proof of the Greedy Choice Property: Assume we have an optimal solution to the n-change problem $\{ c_1, c_2, ..., c_m \}$. First, since the coins ci add up to n no matter what order they are chosen, we can swap any coin cj with the first choice c1 and still have an optimal solution. So it suffices to prove that the greedy choice, the highest-denomination coin with value at most n, must appear somewhere in an optimal solution. Assume it does not. Then lower-denomination coins must be used to make up the value of that higher-denomination coin. Since it takes at least two lower-denomination coins to equal the value of a higher-denomination coin, we could replace those coins with the greedy coin and obtain a better solution (fewer coins), contradicting optimality. Therefore, the greedy coin must be in the optimal solution, and a greedy choice for the first coin leads to an optimal solution.

Furthermore, since the greedy choice of coin c for the n-change problem reduces the problem to an n - d(c) change problem, and the problem exhibits optimal substructure, the greedy algorithm will yield an optimal solution.

