The goal of this exercise is to highlight the difference between the recursive definition of a procedure and the recursive evolution of a process. From
SICP:
In contrasting iteration and recursion, we must be careful not to confuse the notion of a recursive process
with the notion of a recursive procedure. When we describe a procedure as recursive, we are referring to
the syntactic fact that the procedure definition refers (either directly or indirectly) to the procedure itself.
But when we describe a process as following a pattern that is, say, linearly recursive, we are speaking
about how the process evolves, not about the syntax of how a procedure is written.
In order to understand this better, we will work on the implementation of the exponentiation function.
We can do this by noticing that:
exp(x, y) = x * exp(x, y-1)
exp(x, 0) = 1
We assume that both x and y are integers, with x > 0 and y >= 0.
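For example, with x = 2 and y = 3 the recurrence unfolds as:
exp(2, 3) = 2 * exp(2, 2) = 2 * 2 * exp(2, 1) = 2 * 2 * 2 * exp(2, 0) = 8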
A straightforward recursive way of writing this in Python:
def exp(x, y):
    if y == 0:
        return 1
    return x * exp(x, y - 1)
Another way of implementing this could be:
def exp_iter(x, y, count, product):
    if count > y:
        return product
    return exp_iter(x, y, count + 1, x * product)

def exp(x, y):
    # start the product at 1 so that exp(x, 0) == 1
    return exp_iter(x, y, 1, 1)
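To see the shape of the process this generates, here is the chain of calls for exp(2, 3):

exp(2, 3)
exp_iter(2, 3, 1, 1)
exp_iter(2, 3, 2, 2)
exp_iter(2, 3, 3, 4)
exp_iter(2, 3, 4, 8)
8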
Note that despite its recursive definition, exp_iter actually evolves iteratively. A simple way to see this is that, unlike its recursive counterpart, exp_iter carries the whole state of the computation in its arguments. The recursion is just a syntactic device for looping over the range 1..y; in a language like Python we would normally write it with a for loop instead:
def exp_iter(x, y):
    product = 1
    for count in range(y):
        product *= x
    return product
Notice that the two implementations of exp_iter are equivalent: they perform exactly the same multiplications in the same order; only the way the loop is expressed differs.
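As a quick sanity check, the result can be compared with Python's built-in ** operator on a few inputs (this assumes the for-loop definition of exp_iter directly above):

for x, y in [(2, 0), (2, 1), (2, 10), (3, 4)]:
    assert exp_iter(x, y) == x ** y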
If we compare the complexity of the recursive and the iterative approaches, we find that both reach the result in the same number of steps, O(y). They differ in space (memory) complexity, however: the recursive process has to remember O(y) deferred multiplications, whereas the iterative process only keeps its fixed set of state variables, O(1). (Python does not eliminate tail calls, so in practice the constant-space behaviour belongs to the for-loop version; in Scheme the tail-recursive exp_iter already achieves it.)
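To make the space difference concrete in Python, here is a minimal, self-contained sketch. The names exp_recursive and exp_loop are renamings introduced here (they do not appear in the book or in the code above) so that both versions can live side by side:

import sys

def exp_recursive(x, y):
    # same shape as the first, purely recursive definition of exp
    if y == 0:
        return 1
    return x * exp_recursive(x, y - 1)

def exp_loop(x, y):
    # same shape as the for-loop version of exp_iter
    product = 1
    for _ in range(y):
        product *= x
    return product

y = 5000  # deeper than CPython's default recursion limit (around 1000)
try:
    exp_recursive(2, y)
except RecursionError:
    print("recursive process: blew past the recursion limit of", sys.getrecursionlimit())

print(exp_loop(2, y) == 2 ** y)  # True, using only a fixed amount of extra memory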
Exercise
We will try to improve the complexity by noticing that:
exp(x, y) = exp(x, y/2) * exp(x, y/2)   if y is even
exp(x, y) = x * exp(x, y-1)             if y is odd
For instance:
exp(x, 8) = exp(x, 4) * exp(x, 4)
          = square(exp(x, 4))
Implementing this in Python:
def square(x):
    return x * x

def fast_exp(x, y):
    if y == 0:
        return 1
    if y % 2 == 0:
        # integer division keeps y an int in Python 3
        return square(fast_exp(x, y // 2))
    else:
        return x * fast_exp(x, y - 1)
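Counting the recursive calls gives a feel for the difference in the number of steps. The helpers below (exp_calls and fast_exp_calls are illustrative names added for this post, not part of the book's exercise) are instrumented variants that also return how many recursive calls they made:

def exp_calls(x, y):
    # returns (value, number of recursive calls), mirroring the linear exp
    if y == 0:
        return 1, 0
    value, calls = exp_calls(x, y - 1)
    return x * value, calls + 1

def fast_exp_calls(x, y):
    # returns (value, number of recursive calls), mirroring fast_exp
    if y == 0:
        return 1, 0
    if y % 2 == 0:
        value, calls = fast_exp_calls(x, y // 2)
        return value * value, calls + 1
    value, calls = fast_exp_calls(x, y - 1)
    return x * value, calls + 1

print(exp_calls(2, 100)[1])       # 100 calls, linear in y
print(fast_exp_calls(2, 100)[1])  # 9 calls, roughly log2(y) plus the odd steps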
The new procedure fast_exp (called fast-expt in the book) computes the result in O(log y) steps, which is an improvement over the earlier one. However, it still evolves a recursive process: the chain of deferred squarings and multiplications grows with y, so its space usage is O(log y) rather than constant. Hence the following exercise:
Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt.
NB: I clearly did not come up with this exercise; I have simply summarized an exercise from the famous book mentioned above. I will post my solution, as well as links to other solutions found online, tonight.