📖 Algorithms and Complexity#
Writing programs that work fast#
Example: evaluation of polynomials
Task: evaluate a polynomial of the form \(P(x) = q_0 + q_1 x + q_2 x^2 + \dots + q_N x^N\) at a given \(x\)
def calc_polynomial(qs=[0,], x=0.0):
    '''Evaluates the polynomial given by coefficients qs at given x.
    First coefficient qs[0] is a constant, last coefficient is for highest power.
    '''
    res = 0.0
    for k in range(len(qs)):
        xpw = x**k              # power of x recomputed from scratch at every step
        res += qs[k] * xpw
    return res
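For example, with made-up coefficients (my own, not from the lecture), \(1 + 2x + 3x^2\) at \(x=2\) gives \(1+4+12=17\):

```python
print(calc_polynomial([1, 2, 3], 2.0))   # prints 17.0
```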
Explain the operation of this program.
Can you make this algorithm more efficient?
Example
Here is a better approach!
def calc_polynomial_faster(qs=[0,], x=0.0):
    '''Evaluates the polynomial given by coefficients qs at given x.
    First coefficient qs[0] is a constant, last coefficient is for highest power.
    Faster than before!
    '''
    res, xpw = qs[0], x             # init result and power of x
    for i in range(1, len(qs)):     # start with second coefficient
        res += xpw * qs[i]
        xpw *= x                    # update the power of x incrementally
    return res
Why is this algorithm faster? What is the difference?
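One crude way to get a feel for the difference before doing Task 1.1 properly is a single-run timing (the coefficient vector here is made up for illustration):

```python
import time

qs = [1.0] * 10000   # hypothetical polynomial with 10000 coefficients
for f in (calc_polynomial, calc_polynomial_faster):
    t0 = time.perf_counter()
    f(qs, 0.99)
    print(f'{f.__name__}: {time.perf_counter() - t0:.6f} sec')
```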
Practical Task 1.1: Evaluating the run time of the two polynomial algorithms.
Complete the coding assignment in the exercises repo in the Jupyter notebook 1_algorithms/task1.1polynomials_pre.ipynb
An algorithm is a method of solving a class of problems on a computer.
sequence of steps/commands for the computer to run
Relevant questions:
How much time does it take to run?
How much memory does it need?
What other resources may be limiting? (storage, communication, etc)
A smart algorithm is a lot more important than a fast computer
“a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later – in 2003 – this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms!”
Algorithms are behind any computation done in economics
Macro simulation models (growth, heterogeneous agents, overlapping generations, etc.)
Computationally heavy econometrics (Bayesian, MCMC, multi-dimensional fixed effects, etc.)
Structural estimation with the need to resolve the model solution many thousand times
Counterfactual analysis, sensitivity analysis and uncertainty quantification
Estimation of static and dynamic games is one of the areas of structural econometrics requiring quick computation \(\implies\) smart algorithms
Algorithms with different complexity#
Complexity of an algorithm is the cost, measured in running time or in storage requirement, of using the algorithm to solve one of the problems in the relevant class.
Let’s look at some particular algorithms
Parity of a number#
Check whether an integer is odd or even.
Algorithm:
Convert the number to binary
Check whether the last digit is 0 (number is even) or 1 (number is odd)
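The parity function used in the snippet below lives in the course notebook; a minimal sketch consistent with the algorithm above, using the bitwise AND:

```python
def parity(n):
    '''Return 1 if n is odd and 0 if n is even, by checking the last binary digit.'''
    return n & 1   # bitwise AND with 1 keeps only the lowest bit
```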
# check parity of various numbers
for n in [2,4,7,32,543,671,780]:
    print('n = {0:5d} ({0:08b}), parity={1:d}'.format(n,parity(n)))
n = 2 (00000010), parity=0
n = 4 (00000100), parity=0
n = 7 (00000111), parity=1
n = 32 (00100000), parity=0
n = 543 (1000011111), parity=1
n = 671 (1010011111), parity=1
n = 780 (1100001100), parity=0
Some details on bitwise operations
Bitwise operations in Python
&: bitwise AND
|: bitwise OR
^: bitwise XOR
~: bitwise NOT (including sign bit!)
>>: right shift
<<: left shift (without overflow!)
Bitwise AND, OR and XOR
7       = 0 1 1 1
4       = 0 1 0 0
7 AND 4 = 0 1 0 0 = 4

7       = 0 1 1 1
4       = 0 1 0 0
7 OR 4  = 0 1 1 1 = 7

7       = 0 1 1 1
4       = 0 1 0 0
7 XOR 4 = 0 0 1 1 = 3
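These identities are easy to verify directly in Python:

```python
print(7 & 4)   # bitwise AND: 0b0100 = 4
print(7 | 4)   # bitwise OR:  0b0111 = 7
print(7 ^ 4)   # bitwise XOR: 0b0011 = 3
```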
Bit shifts in Python
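A couple of toy examples of the shift operators:

```python
x = 5              # binary 101
print(x << 1)      # left shift by one bit:  1010 = 10 (multiplies by 2)
print(x >> 1)      # right shift by one bit:   10 = 2  (integer division by 2)
```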

Finding max/min of a list#
Find max or min in an unsorted list of values
Algorithm:
cycle through the list once saving the current extremum value
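A minimal sketch of this single pass for the maximum (the minimum is symmetric):

```python
def maximum(values):
    '''Return the largest element of a non-empty list in a single pass.'''
    best = values[0]          # current extremum
    for v in values[1:]:
        if v > best:
            best = v          # update the running extremum
    return best
```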
Binary search in finite set#
Finding a discrete element between given boundaries
Example
Think of number between 1 and 100
How many guesses are needed to locate it if the only answers are “below” and “above”?
What is the optimal sequence of questions?
Explain the operation of the code below
Inputs: sorted list of numbers, and a value to find
Algorithm:
1. Find the middle point of the current interval
2. If the sought value is below the value at the middle point, reduce the search to the lower half
3. If the sought value is above, reduce the search to the upper half
4. Repeat until the value at the middle point equals the sought value
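The binary_search function called in the snippet below is written in the notebook; a minimal sketch consistent with the algorithm above (the keyword arguments grid and val are taken from the call below):

```python
def binary_search(grid, val):
    '''Return the index of val in the sorted array grid.'''
    i, j = 0, len(grid) - 1    # current search boundaries
    while i < j:
        m = (i + j) // 2       # middle point
        if val > grid[m]:
            i = m + 1          # sought value is in the upper half
        else:
            j = m              # sought value is in the lower half (or at m)
    return i
```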
import numpy as np
N = 10
# random sorted sequence of integers up to 100
x = np.random.choice(100,size=N,replace=False)
x = np.sort(x)
# random choice of one number/index
k0 = np.random.choice(N,size=1)
k1 = binary_search(grid=x,val=x[k0])
print(f'Index of x{k0}={x[k0]} in {x} is {k1}')
Index of x[7]=[84] in [ 6 15 21 34 35 61 80 84 91 92] is 7
Rate of growth and big-O notation#
A very useful way to talk about the rate of growth \(\leftrightarrow\) the complexity of algorithms
Definition
\(f(x) = O\big(g(x)\big)\) if and only if there exist a constant \(C>0\) and a threshold \(x_0\) such that \(|f(x)| \leq C|g(x)|\) for all \(x > x_0\)
In words, \(f(x) = O\big(g(x)\big)\) simply means that as \(x\) increases, \(f(x)\) certainly does not grow at a faster rate than \(g(x)\)
In measuring solution time we may distinguish performance in
best (easiest to solve) case
average case
worst case (\(\leftarrow\) the focus of the theory!)
Constants and lower-order terms are ignored because we are only interested in the order of growth
Classes of algorithm complexity#
\(O(1)\) constant time
\(O(\log_{2}(n))\) logarithmic time
\(O(n)\) linear time
\(O(n \log_{2}(n))\) quasi-linear time
\(O(n^{k}), k>1\) quadratic, cubic, etc. polynomial time \(\uparrow\) Tractable
\(O(2^{n})\) exponential time \(\downarrow\) Curse of dimensionality
\(O(n!)\) factorial time
How many operations as function of input size?#
Parity: Just need to check the lowest bit, does not depend on input size \(\Rightarrow O(1)\)
Maximum element: Need to loop through elements once: \(\Rightarrow O(n)\)
Binary search: Divide the problem in 2 each step \(\Rightarrow O(\log(n))\)
Divide-and-conquer algorithms#
Divide-and-conquer structure is what typically marks an excellent algorithm
Example
Examples of DAC algorithms:
Binary search
Quicksort and merge sort
Fast Fourier transform (FFT) algorithm
Karatsuba fast multiplication algorithm
Curse of dimensionality#
Example of a bad algorithm?
Definition
The term curse of dimensionality refers to this exponential complexity: the cost of the algorithm grows exponentially with the size (dimension) of the input.
Example
Many board games (checkers, chess, shogi, go) in their \(n\)-by-\(n\) generalizations
Traveling salesman problem (TSP)
Many problems in economics are subject to curse of dimensionality 😢
Allocation of discrete good#
Maximize welfare \(W(x_1,x_2,\dots,x_n)\) subject to \(\sum_{i=1}^{n}x_i = A\), where \(A\) is a discrete good that is only divisible in steps of \(\Lambda\).
Let \(M=A/\Lambda \in \mathbb{N}\) and let \(p_i \in \{0,1,\dots,M\}\) be such that \(\sum_{i=1}^{n}p_i = M\).
Then the problem is equivalent to maximizing \(W(\Lambda p_1,\Lambda p_2,\dots,\Lambda p_n)\) subject to the same constraint.
\((p_1,p_2,\dots,p_n)\) is a composition of the number \(M\) into \(n\) parts.
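The compositions generator used below is not listed on this page; a minimal recursive sketch that reproduces the output order shown:

```python
def compositions(M, n):
    '''Generate all compositions of integer M into n non-negative parts.'''
    if n == 1:
        yield [M]
        return
    for first in range(M + 1):                     # value of the first part
        for rest in compositions(M - first, n - 1):
            yield [first] + rest
```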
# example of compositions generation
for c in compositions(5,3): print(c)
[0, 0, 5]
[0, 1, 4]
[0, 2, 3]
[0, 3, 2]
[0, 4, 1]
[0, 5, 0]
[1, 0, 4]
[1, 1, 3]
[1, 2, 2]
[1, 3, 1]
[1, 4, 0]
[2, 0, 3]
[2, 1, 2]
[2, 2, 1]
[2, 3, 0]
[3, 0, 2]
[3, 1, 1]
[3, 2, 0]
[4, 0, 1]
[4, 1, 0]
[5, 0, 0]
Hint
What to do with models that are heavy to compute?
Design of better solution algorithms
Analyze special classes of problems + rely on problem structure
Speed up the code (low level language, compilation to machine code)
Parallelize the computations
Bound the problem to maximize model usefulness while keeping it tractable
Wait for innovations in computing technology (quantum computing, etc.)
Classes of computational complexity in theoretical computer science
Thinking of all the problems there are:
P: can be solved in polynomial time
NP: solution can be checked in polynomial time, even if finding it may require an exponential algorithm
NP-hard: at least as complex as any NP problem (including exponential and combinatorial problems)
NP-complete: both NP and NP-hard (tied to each other via reductions)
NP stands for non-deterministic polynomial time \(\leftrightarrow\) ‘magic’ guess algorithm
P vs. NP
Unresolved question of whether P = NP or P \(\ne\) NP (US$1 million prize by the Clay Mathematics Institute)

Recursion#
Definition
A recursive algorithm is an algorithm that calls itself in order to solve a problem
Surprisingly powerful technique in scientific programming!
Example
Fibonacci sequence defined as \(F_n = F_{n-1} + F_{n-2}\) with \(F_0 = F_1 = 1\)
Imagine a program that computes Fibonacci numbers using this definition, and calls itself in the process
def fibonacci(n):
    if n == 0:
        return 1
    elif n == 1:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)

for i in range(10):
    print(fibonacci(i), end=' ')
1 1 2 3 5 8 13 21 34 55
Is this an efficient algorithm? Why or why not?
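It is not: the two recursive calls recompute the same values over and over, so the number of calls grows exponentially in \(n\). A standard remedy, not part of the lecture code, is to cache computed values (memoization):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci_cached(n):
    '''Fibonacci with caching: each value is computed only once.'''
    if n < 2:
        return 1
    return fibonacci_cached(n - 1) + fibonacci_cached(n - 2)
```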
Towers of Hanoi problem#
Classic puzzle: given a board with three pegs, move a stack of disks of different sizes from the left-most peg to the right-most peg, moving one disk at a time and following the rule that no larger disk can be placed on top of a smaller one.
The problem can be solved nicely by breaking it into small parts using the following algorithm:
def move(frm, to):
    print(f'move one disk {frm} --> {to}')

def move_via(frm, via, to):    # helper illustrating composition of moves
    move(frm, via)
    move(via, to)

def main_algorithm(n, source, aux, target):
    '''
    Inputs: number of disks n
            source peg
            auxiliary peg
            target peg
    '''
    if n == 0:
        return                                    # nothing to move
    if n == 1:
        move(source, target)                      # single disk: move it directly
        return
    main_algorithm(n-1, source, target, aux)      # move n-1 smaller disks out of the way
    move(source, target)                          # move the largest disk
    main_algorithm(n-1, aux, source, target)      # move n-1 disks on top of it
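Running the algorithm for three disks (peg labels are arbitrary) prints the \(2^3-1=7\) moves:

```python
main_algorithm(3, 'left', 'middle', 'right')
```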
Practical Task 1.2
Code up the recursive solution using the algorithm above in the exercises repo in the Jupyter notebook 1_algorithms/task1.2_hanoi_pre.ipynb
Solution for 4 disks requires \(2^4-1=15\) steps:
Bisection method#
The first of two very important classic algorithms for equation solving
Solve equations of the form \(f(x)=0\) for \(x \in [a,b]\) with \(f(a)f(b)<0\) (focus on the scalar case today)
The latter condition requires that the function \(f(x)\) takes different signs at the endpoints \(a\) and \(b\)
Algorithm is similar to binary search, but in continuous space
Input: function f(x)
brackets [a,b] such that f(a)f(b)<0
convergence tolerance epsilon
maximum number of iterations max_iter
Algorithm:
step 0: ensure all conditions are satisfied
step 1: compute the sign of the function at (a+b)/2
step 2: replace a with (a+b)/2 if f(a)f((a+b)/2)>0, otherwise replace b with (a+b)/2
step 3: repeat steps 1-2 until |a-b|< epsilon, or max_iter number of iterations is reached
step 4: return (a+b)/2
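A direct transcription of these steps into Python, as a sketch only since Task 1.3 below asks for your own implementation:

```python
def bisection(f, a, b, eps=1e-8, max_iter=100):
    '''Find a root of f on [a,b] by bisection, assuming f(a)*f(b) < 0.'''
    if f(a) * f(b) > 0:                  # step 0: check the bracketing condition
        raise ValueError('f(a) and f(b) must have opposite signs')
    for _ in range(max_iter):
        m = (a + b) / 2                  # step 1: middle point
        if f(a) * f(m) > 0:
            a = m                        # step 2: root is in [m, b]
        else:
            b = m                        #         root is in [a, m]
        if abs(b - a) < eps:             # step 3: check convergence
            break
    return (a + b) / 2                   # step 4

print(bisection(lambda x: x**2 - 2, 0, 2))   # approx. 1.41421356 (sqrt of 2)
```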
Practical Task 1.3: Implementing the bisection method
Complete the coding assignment in the exercises repo in the Jupyter notebook 1_algorithms/task1.3_bisections_pre.ipynb
Newton-Raphson method#
The second of the two classic methods for solving an equation \(f(x)=0\), gradient based
General form of the Newton step: \(x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}\)
Equation solving: \(f(x)=0\)
Finding a maximum/minimum of \(Q(x)\) based on the first order conditions: then \(f(x)=Q'(x)\)
Derivation for Newton method using Taylor series expansion#
\(f(x) = f(x_0) + f'(x_0)(x-x_0) + \frac{1}{2}f''(x_0)(x-x_0)^2 + \dots\)
Take the first two terms, assume \(x\) is the solution so that \(f(x)=0\), and let \(x_0=x_i\) and \(x=x_{i+1}\), which gives \(0 = f(x_i) + f'(x_i)(x_{i+1}-x_i)\), i.e. \(x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}\)
The main idea of the Newton-Raphson method is to iterate on this equation starting from some \(x_0\)
Applicable to systems of equations, in which case \(x\in\mathbb{R}^n\) and \(f: \mathbb{R}^n \to \mathbb{R}^n\)
Input: function f(x)
gradient function f'(x)
Algorithm:
1. Start with some good initial value
2. Update x using Newton step above
3. Iterate until convergence
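Again as a sketch only, since Task 1.4 asks for a full implementation:

```python
def newton(f, fprime, x0, eps=1e-8, max_iter=50):
    '''Find a root of f by Newton-Raphson steps starting from x0.'''
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)    # Newton step f(x)/f'(x)
        x -= step
        if abs(step) < eps:        # converged
            return x
    raise RuntimeError('no convergence in max_iter iterations')

print(newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0))   # approx. 1.41421356
```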
Practical Task 1.4: Implementing Newton-Raphson method
Complete the coding assignment in the exercises repo in the Jupyter notebook 1_algorithms/task1.4_newton_pre.ipynb
Practical Task 1.5: Multivariate Newton method [optional]
Complete the coding assignment in the exercises repo in the Jupyter notebook 1_algorithms/task1.5_multivariate_pre.ipynb
Measuring complexity of Newton and bisection methods#
What is the size of input \( n \)?
Desired precision of the solution!
Thus, we pay attention to how the error in the solution shrinks as the algorithm proceeds
The rate of convergence is part of the computational complexity of these algorithms
Computational complexity
Calculating a root of a function \(f(x)\) with \(n\)-digit precision, provided that a good initial approximation is known, is \(O\big((\log n)F(n)\big)\), where \(F(n)\) is the cost of calculating \(f(x)/f'(x)\) with \(n\)-digit precision.
References and Additional Resources#
📖 Wilf [2002] “Algorithms and Complexity”, pdf of the book: https://www2.math.upenn.edu/~wilf/AlgoComp.pdf
Complexity classes and P vs. NP
📺 Lecture on algorithm complexity by Erik Demaine, MIT (lecture recording, 50 min)
Big-O cheat sheet https://www.bigocheatsheet.com
Bitwise operations post on Geeksforgeeks link
On computational complexity of Newton method link
“Improved convergence and complexity analysis of Newton’s method for solving equations” link
📺 Oscar Veliz videos on Newton method and its domains of attraction