Hadamard Matrices #1: A Motivation

Inês Guimarães, Author
David Carvalho, Editor

In this mini-series of blog posts, we are going to expose you to a piece of Mathematics that transcends pure formality and can have real implications in various domains. Starting from an intuitive motivation, we will get you to understand the underpinnings of the theory behind Hadamard matrices and how this class of matrices is highly relevant in Combinatorial Optimization.

However simple to make sense of, these objects have made us ask questions for which no answers exist to this day. It goes without saying that this field (and those lingering questions) has a lot to gain from Machine Learning-driven methods. We will get to that soon.

We are chuffed to be collaborating with a friend of Inductiva: Inês Guimarães, a.k.a. MathGurl, who is responsible for some of the installments of this series. If you understand Portuguese, go check out her impressive and extensive repertoire of videos about (mostly) Mathematics on YouTube - and don’t be afraid to subscribe to her channel!

Without further ado, our journey starts here: welcome to the realm of Hadamard matrices.

A painting game

As with most things in life, let us start with chess. A typical game of chess is played on an \(8 \times 8\) board whose square colors alternate between black and white, as shown in Fig. 1. The chessboard pattern is a simple one: given any pair of rows, the colors either agree everywhere (a total match) or disagree everywhere (a total mismatch). Indeed, two rows match completely whenever their positions share the same parity (e.g. rows \(4\) and \(6\)), and mismatch completely when the parities differ (e.g. rows \(2\) and \(3\)).

Fig. 1: A chessboard (and related pattern). Credits: Inês Guimarães

Now let us abstract a tad and think about a more exotic black-and-white pattern. Could we paint an \(8 \times 8\) board in such a way that, given any two rows, exactly half their squares are matched? It may take you a while to come up with such a board, but it is indeed possible, as Fig. 2 shows! For example, rows \(2\) and \(5\) agree on positions \(1\), \(3\), \(4\), \(8\), and disagree on positions \(2\), \(5\), \(6\), \(7\).

Fig. 2: A half-pattern on a chessboard. Credits: Inês Guimarães

In order to obtain these “half-match” patterns, each row must certainly have an even number of squares - otherwise the positions could not split evenly into matches and mismatches. We can ask more questions, though. For instance, can such patterns exist on a \(6 \times 6\) board? What about a \(10 \times 10\) or even a \(1000 \times 1000\) board? We challenge the reader to come up with such a board (or to die trying).

Now: if they do exist, how can we “cook” them up? Surprisingly (given the simplicity of the idea), it is terribly hard to construct these kinds of patterns for boards of arbitrary size (as we shall see).

Let’s formalize this idea more mathematically. Spoiler alert: it will help us immensely.

Matrices to the rescue

In order to tackle this problem, we need to frame it in a more abstract and structured fashion, so this is the moment Math kicks in. Instead of a black-and-white board, we consider a matrix whose entries are either \(1\) or \(-1\) (which we call a \(\{\pm 1\}\)-matrix), mapping the two colors (black/white) to the values \(1\) and \(-1\), respectively. The board depicted in Fig. 2 is then represented by the matrix:

$$ \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & -1 & 1 & -1 & -1 & -1 \\ 1 & 1 & -1 & -1 & -1 & 1 & -1 & 1 \\ 1 & -1 & 1 & 1 & -1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 & -1 & 1 & 1 & -1 \\ 1 & -1 & -1 & -1 & 1 & -1 & 1 & 1 \\ 1 & 1 & -1 & 1 & -1 & -1 & 1 & -1 \\ 1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 \end{pmatrix}. $$

What do we gain from switching to this viewpoint? It may not be as pretty, but it allows us to perform calculations. In this context, a “half-match” pattern on a board corresponds to a \(\{\pm 1\}\)-matrix in which, for any two rows, the number of positions where the entries share the same sign equals the number of positions where they have opposite signs. So, if we multiply two such rows entry by entry, half of the products yield \(1\) and half yield \(-1\), canceling each other perfectly.

In other words: if we think of each row as a vector, the half-match condition means that the dot product of any two different row vectors equals \(0\). Consequently, the rows of a “half-match” matrix are pairwise orthogonal.

Take a look again, for instance, at rows \(2\) and \(5\). A quick computation of the dot product shows:

$$ \begin{align*} &\begin{pmatrix} \textcolor{OliveGreen}{1} & \textcolor{Red}{1} & \textcolor{OliveGreen}{1} & \textcolor{OliveGreen}{-1} & \textcolor{Red}{1} & \textcolor{Red}{-1} & \textcolor{Red}{-1} & \textcolor{OliveGreen}{-1} \end{pmatrix} \cdot \begin{pmatrix} \textcolor{OliveGreen}{1} & \textcolor{Red}{-1} & \textcolor{OliveGreen}{1} & \textcolor{OliveGreen}{-1} & \textcolor{Red}{-1} & \textcolor{Red}{1} & \textcolor{Red}{1} & \textcolor{OliveGreen}{-1} \end{pmatrix} \\ = & \, \textcolor{OliveGreen}{1 \times 1} + \textcolor{Red}{1 \times (-1)} + \textcolor{OliveGreen}{1 \times 1} + \textcolor{OliveGreen}{(-1) \times (-1)} + \textcolor{Red}{1 \times (-1)} + \textcolor{Red}{(-1) \times 1} + \textcolor{Red}{(-1) \times 1} + \textcolor{OliveGreen}{(-1) \times (-1)}\\ = & \, \textcolor{OliveGreen}{1} \, \textcolor{Red}{- \, 1} \, \textcolor{OliveGreen}{+ \, 1} \, \textcolor{OliveGreen}{+ \, 1} \, \textcolor{Red}{- \, 1} \, \textcolor{Red}{- \, 1} \, \textcolor{Red}{- \, 1} \, \textcolor{OliveGreen}{+ \, 1} \\ = & \, 0 \end{align*} $$

As we can see, the products in positions \(1\), \(3\), \(4\) and \(8\) (in green) are equal to \(1\), since these correspond to squares of the same color, whereas the products in positions \(2\), \(5\), \(6\) and \(7\) (in red) are equal to \(-1\), since these correspond to squares with opposite colors.
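Instead of checking each pair of rows by hand, we can verify them all at once. Below is a minimal computational sketch (assuming NumPy is available): it stores the matrix above as `H` and checks that \(H H^{\mathsf{T}} = 8I\), which holds exactly when every pair of distinct rows is orthogonal - the diagonal entries are each row dotted with itself (always \(8\)), while the off-diagonal entries are dot products between distinct rows (all \(0\)).

```python
import numpy as np

# The {±1}-matrix corresponding to the board in Fig. 2 (copied from above).
H = np.array([
    [ 1,  1,  1,  1,  1,  1,  1,  1],
    [ 1,  1,  1, -1,  1, -1, -1, -1],
    [ 1,  1, -1, -1, -1,  1, -1,  1],
    [ 1, -1,  1,  1, -1, -1, -1,  1],
    [ 1, -1,  1, -1, -1,  1,  1, -1],
    [ 1, -1, -1, -1,  1, -1,  1,  1],
    [ 1,  1, -1,  1, -1, -1,  1, -1],
    [ 1, -1, -1,  1,  1,  1, -1, -1],
])

n = H.shape[0]
# Rows are pairwise orthogonal exactly when H @ H.T equals n times the identity.
print(np.array_equal(H @ H.T, n * np.eye(n, dtype=int)))  # True
```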

These “half-match” matrices have a particular relevance in Mathematics - they are known as Hadamard matrices, named after the French mathematician Jacques Hadamard (1865-1963). Hadamard proved that, among all matrices with entries between \(-1\) and \(1\), none can have a larger determinant (in absolute value) than a Hadamard matrix of the same size.

This relates to another problem, whose starting point was a paper published in 1893 by Hadamard himself (and which was eventually named after him) - Hadamard’s maximal determinant problem. Here, one wants to know the largest possible determinant of an \(n \times n\) \(\{\pm 1\}\)-matrix. Amazingly, no definitive answer is known for general \(n\): Hadamard only managed to provide the bound \(|\det(M)| \leq n^{n/2}\), which is attained precisely when a Hadamard matrix of order \(n\) exists.
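As a quick numerical illustration (a sketch, not a proof), we can reuse the matrix `H` from the snippet above and confirm that its determinant does reach Hadamard’s bound for \(n = 8\), namely \(8^{4} = 4096\):

```python
import numpy as np

# Reusing H, the 8x8 {±1}-matrix defined in the previous snippet.
n = H.shape[0]
bound = n ** (n / 2)               # Hadamard's bound: 8^(8/2) = 4096.0

det = abs(np.linalg.det(H))        # determinant computed in floating point
print(round(det), round(bound))    # 4096 4096 - the bound is attained
```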

These questions are hard to answer because the space of all possible matrices (and their associated determinants) grows extremely fast as the size increases - and, with it, so does the difficulty of satisfying the constraints of the problem (for instance, keeping all rows pairwise orthogonal).

To see this, note that, even for a comically small \(3 \times 3\) matrix

\[\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix}\]

there are \(2^{3^2} = 512\) different possible entry combinations, since each of the nine entries can independently be \(1\) or \(-1\). For an \(n \times n\) board, this count, \(2^{n^2}\), blows up astronomically fast!

This fact alone should convince you that we are dealing with some rather special creatures…
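To get a feel for how quickly brute force becomes hopeless - and how rare these patterns are - here is a small sketch (again assuming NumPy) that enumerates all \(2^{4^2} = 65{,}536\) sign patterns of a \(4 \times 4\) board and counts how many satisfy the half-match condition. Only a tiny fraction do, and already at \(5 \times 5\) the search space exceeds 33 million matrices.

```python
import itertools
import numpy as np

n = 4  # 2**(n*n) = 65,536 candidate matrices; n = 5 already means ~33 million

count = 0
for signs in itertools.product([-1, 1], repeat=n * n):
    M = np.array(signs).reshape(n, n)
    # Half-match condition: every pair of distinct rows is orthogonal,
    # i.e. M @ M.T equals n times the identity matrix.
    if np.array_equal(M @ M.T, n * np.eye(n, dtype=int)):
        count += 1

print(f"{count} out of {2 ** (n * n)} sign patterns are half-match boards")
```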

But who cares about Hadamard matrices, anyway? Are they even useful outside “Math Lab”? You bet they are!

Great theory but greater applications

In 1971, the robotic spacecraft Mariner 9 was launched towards Mars from Cape Canaveral Air Force Station, in Florida. It was the first spacecraft to orbit another planet - what an amazing time to be alive! Everybody was eager to see some nice photos of the “Red Planet”. But how were the pictures sent from the orbiter back to Earth? In the first few months, dust storms shrouding the planet made it impossible to obtain clear photos. Even without adverse meteorological phenomena, the data took several minutes to reach our planet, and along the way the information got slightly garbled.

In fact, any spacecraft (or computing system, for that matter) stores pictures as long sequences of \(0\)s and \(1\)s - the so-called bits - and, due to external interference, some of the \(0\)s are accidentally received as \(1\)s and vice-versa. Therefore, in order to retrieve the correct information, we resort to error-correcting codes. Such schemes add redundancy to the data, allowing the original signal to be recovered even when some bits are corrupted.

Fig. 3: A photo of Mars taken by Mariner 9 and likely reconstructed via Hadamard codes. Credits: Wikipedia

Because of their amazing mathematical properties, Hadamard codes, which are themselves based on Hadamard matrices, were used in the Mariner 9 mission to transmit clear photos of Mars back to our humble planet. So, yes - Hadamard matrices are very useful.
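To give a flavour of why orthogonality helps against noise, here is a deliberately simplified toy sketch - not the actual scheme flown on Mariner 9, which was considerably larger and more sophisticated. It reuses the \(8 \times 8\) matrix `H` defined earlier: each of eight message symbols is encoded as one row of `H`, and the receiver decodes a noisy word by correlating it with every row and keeping the best match. Because distinct rows are orthogonal, the correct row still stands out after a bit gets flipped.

```python
import numpy as np

# Reusing H, the 8x8 Hadamard matrix defined earlier.
# Toy scheme: message symbol k (0..7) is transmitted as row k of H.

def encode(symbol: int) -> np.ndarray:
    return H[symbol].copy()

def decode(received: np.ndarray) -> int:
    # Correlate the received ±1 word with every codeword and keep the best match.
    # The transmitted row scores 8 with no noise and still scores 6 after one
    # flipped entry, while every other row scores only +2 or -2.
    return int(np.argmax(H @ received))

word = encode(5)
word[6] *= -1                 # simulate one sign flip caused by interference
print(decode(word) == 5)      # True: the original symbol is recovered
```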

We could mention several other applications, from statistics to spectrometry, but we shall get familiar with some basic properties of Hadamard matrices first. So, next up: Hadamard Matrices #2: An Introduction.

Be prepared for the exciting journey that lies ahead!

