Memoization is an optimization technique that speeds up programs by storing the results of previous function calls. This allows subsequent calls with the same arguments to reuse the cached results, avoiding time-consuming recalculation.
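As a minimal sketch of the idea (the memoize helper below is purely illustrative, handles a single argument, and is not a library function), a dictionary can hold each result the first time it is computed:

def memoize(func):
    # Illustrative memoizer: results are keyed by the function's argument.
    results = {}
    def wrapper(arg):
        if arg not in results:
            results[arg] = func(arg)  # compute once, reuse on every later call
        return results[arg]
    return wrapper

The decorators from functools introduced below generalize this pattern and handle the bookkeeping automatically.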
Memoization is commonly used in dynamic programming, where problems can be broken down into simpler sub-problems. One such dynamic programming problem is calculating the nth Fibonacci number.
F(0) = 0
F(1) = 1
F(n) = F(n - 1) + F(n - 2)
A function that calculates the nth Fibonacci number is often implemented recursively, following the definition above.
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
The function calls of fibonacci(4) can be visualized with a recursion tree. Notice that the function is called with the same input multiple times; in particular, fibonacci(2) is calculated from scratch twice. As the input increases, the running time grows exponentially.
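To make the duplicated work concrete, here is an illustrative variant of the naive function (the call counter is an addition for demonstration, not part of the implementation above) that records how many times it is invoked for each argument:

from collections import Counter

calls = Counter()  # demonstration only: number of invocations per argument

def fibonacci(n):
    calls[n] += 1
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

fibonacci(4)
print(calls)  # fibonacci(2) runs twice and fibonacci(1) three times for one fibonacci(4) call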
This is suboptimal and can be improved significantly using memoization. Python 3 makes it incredibly easy to memoize functions. The functools module included in Python's standard library provides two useful decorators for memoization: lru_cache (new in Python 3.2) and cache (new in Python 3.9). Both wrap a function with a cache of previously computed results; lru_cache keeps its entries in order of use and discards the least recently used items to make room for new ones once the cache is full.

To avoid costly repeated function calls, fibonacci can be wrapped by lru_cache, which saves values that have already been calculated. The size limit of lru_cache can be specified with maxsize, which has a default value of 128.

from functools import lru_cache
@lru_cache(maxsize=64)
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
Now the recursion tree for fibonacci(4) does not have any nodes that occur more than twice. The running time now grows linearly, which is much faster than exponential growth.
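This reuse can be observed directly: functions wrapped by lru_cache expose a cache_info() method that reports hits and misses. After a single call to fibonacci(4), it should show one miss for each distinct argument and a hit for each repeated one, along these lines:

fibonacci(4)
print(fibonacci.cache_info())
# Expected output of the form: CacheInfo(hits=2, misses=5, maxsize=64, currsize=5)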
The cache decorator is equivalent to lru_cache(maxsize=None).

from functools import cache
@cache
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
Since it does not need to discard least recently used items, cache is both smaller and faster than lru_cache with a size limit.
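As a rough way to observe that difference (the function names and repetition count below are illustrative, and absolute timings depend on the machine and Python version), both variants can be warmed once and then timed on cached lookups:

from functools import cache, lru_cache
from timeit import timeit

@lru_cache(maxsize=64)
def fib_lru(n):
    return n if n <= 1 else fib_lru(n - 1) + fib_lru(n - 2)

@cache
def fib_cache(n):
    return n if n <= 1 else fib_cache(n - 1) + fib_cache(n - 2)

# Warm both caches once so the loops below measure only cached lookups.
fib_lru(30)
fib_cache(30)

print("lru_cache:", timeit(lambda: fib_lru(30), number=1_000_000))
print("cache:    ", timeit(lambda: fib_cache(30), number=1_000_000))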
Python 3 makes it easy to implement memoization. Simply import lru_cache or cache from functools and apply the decorator.