Simple Compile-Time Dynamic Programming in Modern C++ -- Andrew Drakeford
Compile-time code can be very efficient. Andrew Drakeford demonstrates how to compute efficient orderings for chains of matrix multiplications.
Simple Compile-Time Dynamic Programming in Modern C++
by Andrew Drakeford
From the article:
Modern C++ enables us to solve mathematical optimisation problems at compile time. With the expanded constexpr capabilities [Fertig21, Turner18, Turner19, Wu24], we can now write clear and efficient optimisation logic that runs during compilation. Fixed-size containers such as std::array fit naturally into these routines. Even standard algorithms, such as std::sort and std::lower_bound, are now constexpr, enabling more straightforward code and more powerful compile-time computations. Additionally, compile-time optimisation generates constant results, which enables the compiler to create even more efficient code. We will use the matrix chain multiplication problem as our worked example.

Matrix chain multiplication
Matrix chain multiplication is a classic dynamic programming problem [Corman22, Das19, Mount]. It aims to determine the most efficient method for multiplying a sequence of matrices. Since matrix multiplication is associative, the order of grouping does not affect the result. However, the number of scalar multiplications involved can vary depending on the grouping.
Consider the three matrices A₁ (10×100), A₂ (100×5), and A₃ (5×50), multiplied in a chain, A₁ × A₂ × A₃.
There are two ways to multiply them:
- Grouping as (A₁ × A₂) × A₃ first computes a 10×5 matrix, then multiplies that with A₃. This results in 5,000 operations for the first multiplication, and another 2,500 for the second – a total of 7,500 scalar multiplications.
- Grouping as A₁ × (A₂ × A₃) first computes a 100×50 matrix from A₂ × A₃, then multiplies A₁ with that result. This requires 25,000 operations for the first step and 50,000 for the second – a total of 75,000, which is clearly worse.