# LebGeeks

A community for technology geeks in Lebanon.

You are not logged in.

## #1 August 6 2012

Joe
Member

### [Exercise] Improving performance through caching

The goal of today's exercise is to implement a basic caching system in order to improve performance of a function. We will be working on a simple recursive version of the Fibonacci Sequence calculator.

Simple recursive function

Consider the following code in Python:

```python
def fib(n):
    if n <= 1:
        return n
    else:
        return fib(n-1)+fib(n-2)
```

Let's forget for a second that the Python interpreter has a limit on recursion depth (a limit that can easily be changed).
There is a major problem with the performance of the above code, and it becomes apparent when looking at the execution stack.

Without looking at the order of execution of the recursive calls, here's what the interpreter will have to calculate:

```
fib(10)
= fib(9) + fib(8)
= fib(8) + fib(7) + fib(7) + fib(6)
= fib(7) + fib(6) + fib(6) + fib(5) + fib(6) + fib(5) + fib(5) + fib(4)
= ...
```

As you can see, many calculations are repeated several times: fib(6), for instance, ends up being calculated 5 separate times. The repetition gets far worse for larger arguments.

However, fib is a function that consistently returns the same value for a given argument: fib(6) always returns 8, no matter how many times you execute it. Such functions are called side-effect free.

Instead of recalculating fib(6) every time, we can optimize the function by making it remember which arguments it has already handled: it manages an internal database of key/value pairs, and for each call it either returns the stored value immediately or calculates it once and stores it.
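As a rough sketch of the idea (not yet the reusable version the exercise asks for; the names `fib_memo` and `db` are mine), the internal database can simply be a dictionary that survives between calls:

```python
# Sketch of the idea: keep a dict of already-computed values and
# consult it before recursing.
def fib_memo(n, db={}):
    if n in db:
        return db[n]  # already calculated: return it immediately
    if n <= 1:
        result = n
    else:
        result = fib_memo(n - 1) + fib_memo(n - 2)
    db[n] = result    # remember the answer for next time
    return result

print(fib_memo(10))  # 55
```

The mutable default argument is deliberate here: the same db dict persists across calls. The drawback is that the memoization logic is tangled into the function itself.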

A reusable caching system

Before you get to coding, there are a few more requirements for the exercise.
We want a caching system that is somewhat reusable throughout the code. It doesn't need to be very complex or performant, but we should be able to add this behavior to any function in the program.

Basically what I'm saying is that you should implement the caching system without modifying the above fib() function.

There are several ways to do this, more or less obvious depending on the language you are using. I will show you how to do it using Python decorators; we could also explore solutions in other languages.

A short presentation of Python decorators

Decorators are a lot less scary than they look. To understand them, you only need to understand one simple concept:
In Python, functions are objects just like any other.

This means that functions can be passed as arguments to other functions, returned from functions, and declared inside other functions, just like any other regular object. For instance:

```python
>>> a = abs # 'a' and 'abs' refer to the same function object.
>>> a
<built-in function abs>
>>> abs
<built-in function abs>
>>>
>>> print a(-4)
4
```

Once you understand that, you understand decorators, which are nothing more than syntactic sugar for the following. Imagine you have defined a function called foo() that takes a function as argument and returns a function. The following two snippets are equivalent:

```python
@foo
def bar(n):
    return n+1
```

or

```python
def bar(n):
    return n+1

bar = foo(bar)
```
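As a concrete stand-in for the hypothetical foo(), here is a small decorator (my example, not from the original post) that counts how many times the wrapped function is called:

```python
def counted(f):
    # 'counted' takes a function and returns a new function that
    # increments a counter before delegating to the original.
    def wrapper(n):
        wrapper.calls += 1
        return f(n)
    wrapper.calls = 0
    return wrapper

@counted            # equivalent to: bar = counted(bar)
def bar(n):
    return n + 1

bar(1)
bar(2)
print(bar.calls)  # 2
```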

The exercise, then, is to create a @cached decorator that modifies (decorates) the way our fib function executes in order to add caching to it.

I will be working on this and post my solution as soon as I'm done.
Feel free to work in any language you want using any construct you want. I gave decorators as one example of achieving this, I'm sure there are others.

## #2 August 6 2012

xterm
Moderator

### Re: [Exercise] Improving performance through caching

Give this a try. I didn't test it much (I wrote it in the browser):

```python
def cached(f):
    cache = {}
    def _decoed(*args):
        if args in cache:
            return cache[args]
        else:
            cache[args] = f(*args)
            return cache[args]
    return _decoed
```

It could easily be a one-liner too; I'll change it later.
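As a quick sanity check (mine, not from the post), applying the decorator to the fib from the first post makes even large arguments return instantly:

```python
def cached(f):
    cache = {}
    def _decoed(*args):
        if args in cache:
            return cache[args]
        else:
            cache[args] = f(*args)
            return cache[args]
    return _decoed

@cached
def fib(n):
    if n <= 1:
        return n
    else:
        return fib(n - 1) + fib(n - 2)

# The naive version would make an astronomical number of calls here;
# the cached one computes each argument exactly once.
print(fib(100))  # 354224848179261915075
```

Note that the recursive calls inside fib go through the module-level name fib, which is now bound to the decorated wrapper, so the intermediate results are cached too.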

## #3 August 6 2012

Joe
Member

### Re: [Exercise] Improving performance through caching

xterm, your solution is similar to mine with one notable difference:

```python
def cached(fn):
    fn.db = {}
    def cachedfn(n):
        # If the answer is already in the db, return this value.
        if n in fn.db:
            return fn.db[n]
        else:
            # Else calculate it and save the value in the db
            fn.db[n] = fn(n)
            return fn.db[n]
    return cachedfn
```

I thought the cache dictionary had to be an attribute of the function object. I tried your solution and it works, but I am not sure how your cache dictionary remains persistent across executions of your function. I must be missing something.

Also, your use of *args is better than my cheap single-argument version.

## #4 August 6 2012

xterm
Moderator

### Re: [Exercise] Improving performance through caching

rahmu wrote:

I am not sure how your cache dictionary remains persistent throughout every execution of your function. I must be missing something.

Actually, yours may be the proper way of doing it (setting the cache on the function itself). Mine may conflict with other functions if the args are the same!

Try it again on another function (factorial) and see if it conflicts when both are running one after the other:

```python
fib(10)
factorial(10)
```

As for how it's persisting, I think it's in globals(); I'll check tonight.

## #5 August 6 2012

xterm
Moderator

### Re: [Exercise] Improving performance through caching

OK, the persistence seems straightforward: the decorator creates the inner function the first time, then simply reuses it every time fib(n) is called. That's why the cache lingers.

I'll update this when i test if it conflicts.

Edit:

Apparently there's no conflict: each function that gets decorated gets its own returned function with its own scope.
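That can be checked directly; here is a quick test (mine) using the cached decorator from post #2 on two functions that receive the same argument:

```python
def cached(f):
    cache = {}  # a fresh dict is created on every call to cached()
    def _decoed(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return _decoed

@cached
def fib(n):
    return n if n <= 1 else fib(n - 1) + fib(n - 2)

@cached
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

# Same argument, separate closures: no conflict between the caches.
print(fib(10))        # 55
print(factorial(10))  # 3628800
```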

## #6 August 6 2012

geek
Member

### Re: [Exercise] Improving performance through caching

Excuse the Mathematica, but this is one of the many reasons I love this language:

```
Fib[0] = 0;
Fib[1] = 1;
Fib[n_ /; n > 1] := Fib[n] = Fib[n - 1] + Fib[n - 2];
```

By the way, this particular form of caching is called memoization.

Last edited by geek (August 6 2012)

## #7 August 6 2012

Joe
Member

### Re: [Exercise] Improving performance through caching

@geek: can you please at least explain that piece of code?

Also, thank you for pointing out the name of the technique. It allowed me to find this code, using a decorator class instead of a function.
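The linked code isn't reproduced here, but a class-based decorator for the same idea might look something like this sketch (the names are mine):

```python
class Cached:
    """Instances are callable and keep a per-function cache dict."""
    def __init__(self, f):
        self.f = f
        self.cache = {}

    def __call__(self, *args):
        if args not in self.cache:
            self.cache[args] = self.f(*args)
        return self.cache[args]

@Cached  # fib is now a Cached instance wrapping the original function
def fib(n):
    return n if n <= 1 else fib(n - 1) + fib(n - 2)

print(fib(30))         # 832040
print(len(fib.cache))  # 31 (one entry per argument from 0 to 30)
```

One practical difference from the closure version: the cache is an ordinary attribute, so it can be inspected or cleared from the outside.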


## #9 August 6 2012

xterm
Moderator

### Re: [Exercise] Improving performance through caching

Here we go. It took me a while, but I was finally able to compact it (mind you, I'd prefer the previous version over this one):

```python
def cached(f, cache={}):
    return lambda *a: cache[a] if a in cache else cache.update({a: f(*a)}) or cache[a]
```

This is the same as the previous implementation; the notable differences are:
1. The returned function is no longer named.
2. In Python, all mutator functions should return None. So if a parameter isn't in the cache yet, cache.update() updates the dictionary and returns None, and None or cache[a] then evaluates to cache[a].
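The or trick in point 2 can be illustrated in isolation (my example):

```python
cache = {}
a = (3,)

# dict.update mutates the dict in place and returns None,
# so "update(...) or cache[a]" evaluates to cache[a].
value = cache.update({a: 2}) or cache[a]
print(value)  # 2
```

One caveat with the one-liner itself: the cache={} default is evaluated only once, when cached is defined, so every function decorated by this version shares a single cache dictionary, whereas the earlier version created a fresh cache per decorated function.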

## #10 August 7 2012

Ayman
Member

### Re: [Exercise] Improving performance through caching

Interesting exercise; self-caching programs (memoization) are indeed really useful. I did an implementation similar to the ones above, but in JavaScript; I used the decorator pattern too. I'm still trying to think of a way that doesn't necessarily use decorators.

```javascript
/* Fibonacci calculation function */
function fib(n)
{
    if(n < 2)
        return n;
    else
        return fib(n-1)+fib(n-2);
}

/* Memoization function */
function memoize(f)
{
    var cache = {};

    /* puts arguments into the cache and forwards the call to f */
    function wrapper(args)
    {
        if(!(args in cache))
        {
            cache[args] = f.call(this, args);
        }
        return cache[args];
    }

    /* method to clear the cache when needed */
    wrapper.clearCache = function()
    {
        cache = {};
    };

    return wrapper;
}

/* Usage */
fib = memoize(fib);
console.log(fib(10));
```

@rahmu: regarding your Fibonacci function, as far as I know fib(0) = 0 and fib(1) = 1; in yours, fib(0) would return 1 and result in a larger final answer.

Last edited by Ayman (October 17 2012)

## #11 August 7 2012

jsaade
Member

### Re: [Exercise] Improving performance through caching

Just for fun (it does not do the job, but it's a nice C++ template + macro exercise, VS2010):

```cpp
#include <iostream>
#include <map>

using namespace std;

template <class T, typename R>
class CacheThis{
    map<R, R> cMap;
    T* _f;
public:
    CacheThis(T *func){
        _f = func;
    }
    R Call(R a){
        typename map<R, R>::iterator it = cMap.find(a);
        if(it == cMap.end()){
            cMap[a] = (*_f)(a);
        }
        return cMap[a];
    }
};

template<typename R>
struct Function
{
    typedef R (*FPTR)(R);
};
typedef Function<unsigned int>::FPTR UINT_FUN_PTR;

#define CALL_FN(F,val) CacheThis<UINT_FUN_PTR,unsigned int> v(&F); cout<<v.Call(val);

unsigned int fib(unsigned int n)
{
    if(n==0) return 0;
    else if(n==1) return 1;
    else return fib(n-1)+fib(n-2);
}

int main()
{
    UINT_FUN_PTR myfunctionptr = &fib;
    CALL_FN(myfunctionptr,40);
    return 0;
}
```

Last edited by jsaade (August 8 2012)

## #12 August 7 2012

Joe
Member

### Re: [Exercise] Improving performance through caching

@Ayman: Nice catch! I fixed it.

@xterm: The cache variable lingers on because of the creation of the closure. And to the best of my knowledge (and that of the guy who very nicely answered my question) there's no way of accessing the variable from outside the _decoed function.

Also, what does this mean?

2- In python all mutator functions should return None so if the cache of a parameter doesn't exist, cache.update() is updating the dictionary and returning None. None or cache[a] will return cache[a].

## #13 August 7 2012

xterm
Moderator

### Re: [Exercise] Improving performance through caching

rahmu wrote:

Also, what does this mean?

It means that in Python, functions that mutate the object they are called on should return None, not another object or the object itself.

cache.update() is changing cache, so it returns None.

```python
result = cache.update(...)
assert result is None
```

If update didn't change cache, it would instead return a new object with the update applied. E.g.:

```python
newcache = cache.update({'a': 1})

assert 'a' in newcache # would pass
assert 'a' not in cache # would pass
```

P.S.: This is not a language feature or anything, simply a convention followed when building such functions.

Edit: what you quoted should be reformatted, sorry:

2. If the cache of a parameter doesn't exist, cache.update() updates the dictionary and returns None; None or cache[a] will return cache[a]. (In Python, all mutator functions should return None.)

## #14 August 7 2012

saeidw
Member

### Re: [Exercise] Improving performance through caching

This was an interesting exercise. I ended up on the Haskell wiki, and they have this really elegant solution:

```haskell
memoized_fib :: Int -> Integer
memoized_fib = ((map fib' [0..]) !!)
    where fib' 0 = 0
          fib' 1 = 1
          fib' n = memoized_fib (n - 1) + memoized_fib (n - 2)
```

The list generated by map fib' [0..] is available to the fib' function as a cache. If you ask for memoized_fib n, then at the point where fib' n is calculated the list already contains [fib' 0, ..., fib' (n - 1)], so the computation of fib' n is a simple indexing plus an addition.

Using a list for the cache isn't the most efficient way of doing it, though.

## #15 August 8 2012

arithma
Member

### Re: [Exercise] Improving performance through caching

@saeidw: This is indeed an interesting one. What does the !! do?
Is there any way to know how the computation is carried out?
Since this returns an indexable list, will the computation run from 0 up to the indexed number, or will it only compute the necessary numbers?
Imagine another series where:
S[0] = 1
S[1] = .5
S[n] = S[n-2] * 2

If we need to compute S[16] we only need to compute S[i] where i is even and 0 <= i < 16.
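A memoized Python version of that series (my sketch, not Haskell) makes it easy to check which indices actually get computed:

```python
computed = []  # records every index that is actually evaluated

def s(n, cache={}):
    if n in cache:
        return cache[n]
    computed.append(n)
    if n == 0:
        result = 1.0   # S[0] = 1
    elif n == 1:
        result = 0.5   # S[1] = .5
    else:
        result = s(n - 2) * 2  # S[n] = S[n-2] * 2
    cache[n] = result
    return result

print(s(16))             # 256.0
print(sorted(computed))  # [0, 2, 4, 6, 8, 10, 12, 14, 16]: only even indices
```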

## #16 August 20 2012

geek
Member

### Re: [Exercise] Improving performance through caching

@rahmu: Functions That Remember Values They Have Found
@arithma: !! returns the nth element of a list, e.g., `[0 ..] !! n` returns n. Only the necessary _expressions_ are evaluated (lazy evaluation), i.e., only `s i` where i is even and less than 16, for `s 16`.

## #17 January 2 2013

Joe
Member

### Re: [Exercise] Improving performance through caching

rahmu wrote:

The cache variable lingers on because of the creation of the closure. And to the best of my knowledge (and that of the guy who very nicely answered my question) there's no way of accessing the variable from outside the _decoed function.

Today I found a way to access these variables in Python: each function object has a __closure__ attribute that holds them.
The __closure__ attribute is a tuple of cell objects.

##### Case study

Let's consider the code at hand:

```python
def cached(f):
    cache = {}
    def _decoed(*args):
        if args in cache:
            return cache[args]
        else:
            cache[args] = f(*args)
            return cache[args]
    return _decoed

@cached
def fib(n):
    if n <= 1:
        return n
    else:
        return fib(n-1)+fib(n-2)
```
##### Let's read inside the closure

We can access the cache dictionary of the function like this:

```python
>>> fib.__closure__
(<cell at 0x14b5210: dict object at 0x14ad7e0>, <cell at 0x14b5328: function object at 0x7fbc9fa9f628>)
```

As you can see, cell 0 holds a dict object. We can inspect the variable it contains using the cell_contents attribute:

```python
>>> fib.__closure__[0].cell_contents
{}
```

Of course the dict is still empty. If we call fib(5) we expect to find 6 values inserted:

```python
>>> fib(5)
5
>>> fib.__closure__[0].cell_contents
{(0,): 0, (1,): 1, (2,): 1, (3,): 2, (4,): 3, (5,): 5}
```
##### Let's write inside the closure

This one scares me a bit: you can write and modify the values inside the cells.
EDIT: It turns out you can modify the values inside the dict, because dicts are mutable, but you cannot change the content of a cell itself.
Like this:

```python
>>> fib.__closure__[0].cell_contents[(5,)] = -3 # modifying the value inside the cache dict
>>> fib(5)
-3
>>> fib(6)
0
```