Section 11.2 Boolean matrices

In this chapter we will work with Boolean matrices. To some authors, a “Boolean matrix” is a matrix whose entries are elements of a Boolean algebra. To most authors in computer science, a Boolean matrix is what those other authors call a “logical matrix”—a matrix whose entries are bits. So, our Boolean matrices will have bits for entries.

Subsection 11.2.1 General matrix definitions

Definition 11.2.1.

An \(m \times n\) matrix is a function \(A:\{1,2,\ldots,m\}\times\{1,2,\ldots,n\} \to X\text{,}\) where \(X\) is a set with certain algebraic structure (for us, bits or numbers). The entry \(A(i,j)\) in the \(i\)-th row and \(j\)-th column is written \(A_{i,j}\text{.}\) To specify the entries of the matrix we sometimes write \(A=(A_{i,j})\text{.}\) We say \(A\) has \(m\) rows and \(n\) columns, and we can represent the matrix visually as

\begin{equation*} A = \left( \begin{array}{cccc} A_{1,1} \amp A_{1,2} \amp \cdots \amp A_{1,n} \\ A_{2,1} \amp A_{2,2} \amp \cdots \amp A_{2,n} \\ \vdots \amp \vdots \amp \ddots \amp \vdots \\ A_{m,1} \amp A_{m,2} \amp \cdots \amp A_{m,n} \end{array} \right) \end{equation*}
Definition 11.2.2.

A Boolean matrix is a matrix whose entries are bits.

Consider the Boolean matrix

\begin{equation*} A = \left( \begin{array}{ccc} 1 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \end{array} \right). \end{equation*}

This is a matrix with \(2\) rows and \(3\) columns, so we would say it is a \(2 \times 3\) matrix. The entry \(A_{1,3}=1\text{,}\) the entry \(A_{2,1}=0\text{,}\) and the entry \(A_{3,1}\) does not exist for this matrix.
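Although the text develops Boolean matrices purely mathematically, it can help to see the definitions in a program. Here is a minimal Python sketch (the representation as a list of rows of bits is our choice, not something the text prescribes):

```python
# A Boolean matrix stored as a list of rows; every entry is the bit 0 or 1.
# Python indexes from 0, so the mathematical entry A_{i,j} is A[i-1][j-1].
A = [[1, 0, 1],
     [0, 1, 1]]

rows, cols = len(A), len(A[0])

print(rows, cols)        # 2 3 -- a 2x3 matrix
print(A[1 - 1][3 - 1])   # A_{1,3} = 1
print(A[2 - 1][1 - 1])   # A_{2,1} = 0
```

Asking for `A[3 - 1][1 - 1]` raises an `IndexError`, matching the fact that \(A_{3,1}\) does not exist for this matrix.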

What follows are some worthwhile definitions for studying matrices.

Definition 11.2.4.

An \(m \times n\) matrix is square if \(m=n\text{.}\)

Definition 11.2.5.

Let \(A\) be a square \(n \times n\) matrix. The entries \(A_{i,j}\) where \(i=j\) are called diagonal entries and entries that are not diagonal are off-diagonal entries.

A square matrix whose only nonzero entries are diagonal is a diagonal matrix.

Definition 11.2.6.

The identity matrix \(I_n\) of dimension \(n\) is the \(n\times n\) diagonal matrix whose diagonal entries are all \(1\text{.}\)

The matrix

\begin{equation*} A = \left( \begin{array}{ccc} 1 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \\ 1 \amp 0 \amp 0\end{array} \right) \end{equation*}

is square but not diagonal, the matrix

\begin{equation*} B =\left( \begin{array}{ccc} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 0\end{array} \right) \end{equation*}

is diagonal but not \(I_3\text{,}\) and

\begin{equation*} I_3=\left( \begin{array}{ccc} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 0 \amp 0 \amp 1 \end{array} \right)\text{.} \end{equation*}
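These definitions translate directly into code. The following Python sketch (the helper names `identity` and `is_diagonal` are ours) builds \(I_n\) and tests whether a square matrix, stored as a list of rows of bits, is diagonal:

```python
def identity(n):
    """The n x n identity matrix I_n: 1 on the diagonal, 0 everywhere else."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def is_diagonal(A):
    """True when every off-diagonal entry of the square matrix A is 0."""
    n = len(A)
    return all(A[i][j] == 0 for i in range(n) for j in range(n) if i != j)

B = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
print(is_diagonal(B))    # True, even though B is not I_3
print(B == identity(3))  # False
print(identity(3))       # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```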

Subsection 11.2.2 Boolean matrix operations and relations

In this section, we will learn relations and operations specific to Boolean matrices. It may surprise you, but one Boolean matrix can be “less than” another. (In a few chapters we will learn about all kinds of ways different objects can be “less than” each other.) Basically, a Boolean matrix \(A\) is less than a matrix \(B\) of the same size if \(A\) never has a one where \(B\) doesn't:

Definition 11.2.8.

Let \(A=(A_{i,j})\) and \(B=(B_{i,j})\) both be \(m \times n\) Boolean matrices. We say \(A \le B\) if \(A_{i,j} \le B_{i,j}\) (as numbers) for all \(i\) and \(j\text{.}\) Equivalently, we can say that \(A \le B\) if \(B\) never has a \(0\) in a position where \(A\) has a \(1\text{.}\)

For instance,

\begin{equation*} \left( \begin{array}{cc} 1 \amp 0\\ 0 \amp 1 \\ 1 \amp 0\end{array} \right) \le \left( \begin{array}{cc} 1 \amp 0\\ 1 \amp 1 \\ 1 \amp 1\end{array} \right) \end{equation*}

but

\begin{equation*} \left( \begin{array}{cc} 1 \amp 0\\ 0 \amp 1 \\ 1 \amp 0\end{array} \right) \not\le \left( \begin{array}{cc} 0 \amp 1\\ 0 \amp 1 \\ 1 \amp 1\end{array} \right). \end{equation*}
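In code, the comparison checks every position at once. Here is a Python sketch of the relation (the name `leq` is ours), with matrices stored as lists of rows of bits:

```python
def leq(A, B):
    """A <= B when A never has a 1 where B has a 0 (A and B the same size)."""
    return all(a <= b for row_a, row_b in zip(A, B)
                      for a, b in zip(row_a, row_b))

A = [[1, 0], [0, 1], [1, 0]]
print(leq(A, [[1, 0], [1, 1], [1, 1]]))  # True
print(leq(A, [[0, 1], [0, 1], [1, 1]]))  # False: A has a 1 in position 1,1
                                         # where the other matrix has a 0
```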

Boolean matrices inherit the meet, join, and complement operations from their underlying Boolean algebra.

Definition 11.2.10.

Let \(A\) and \(B\) be \(m\times n\) Boolean matrices. Their meet (or join) is the \(m\times n\) matrix \(A \wedge B = (A_{i,j} \wedge B_{i,j})\) (or \(A \vee B=(A_{i,j} \vee B_{i,j})\)) whose \(i,j\)-th entry is the meet (join) of the \(i,j\)-th entries of \(A\) and \(B\text{.}\)

Here are some examples of the meet and join operations. Notice that the matrices must be the same size.

\begin{equation*} \left (\begin{array}{ccc} 0 \amp 1 \amp 0 \\ 1 \amp 0 \amp 1 \end{array} \right) \wedge \left (\begin{array}{ccc} 0 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \end{array} \right) = \left (\begin{array}{ccc} 0 \amp 0 \amp 0 \\ 0\amp 0 \amp 1 \end{array} \right) \end{equation*}
\begin{equation*} \left (\begin{array}{ccc} 0 \amp 1 \amp 0 \\ 1 \amp 0 \amp 1 \end{array} \right) \vee \left (\begin{array}{ccc} 0 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \end{array} \right) = \left (\begin{array}{ccc} 0 \amp 1 \amp 1 \\ 1 \amp 1 \amp 1 \end{array} \right) \end{equation*}
Definition 11.2.12.

Let \(A=(A_{i,j})\) be an \(m\times n\) Boolean matrix. Its complement is the \(m\times n\) Boolean matrix \(\neg A = (\neg A_{i,j})\text{.}\)

Here is the complement of a matrix.

\begin{equation*} \neg \left (\begin{array}{ccc} 0 \amp 0 \amp 1 \\ 0 \amp 1 \amp 1 \end{array} \right) = \left (\begin{array}{ccc} 1 \amp 1 \amp 0 \\ 1 \amp 0 \amp 0 \end{array} \right) \end{equation*}
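Because all three operations act entrywise, they are one-liners in code. Here is a Python sketch (the names `meet`, `join`, and `complement` are ours), using `&` and `|` on the bits `0` and `1`:

```python
def meet(A, B):
    """Entrywise meet: the i,j-th entry is A_{i,j} AND B_{i,j}."""
    return [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def join(A, B):
    """Entrywise join: the i,j-th entry is A_{i,j} OR B_{i,j}."""
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def complement(A):
    """Entrywise complement: flip every bit."""
    return [[1 - a for a in row] for row in A]

A = [[0, 1, 0], [1, 0, 1]]
B = [[0, 0, 1], [0, 1, 1]]
print(meet(A, B))     # [[0, 0, 0], [0, 0, 1]]
print(join(A, B))     # [[0, 1, 1], [1, 1, 1]]
print(complement(B))  # [[1, 1, 0], [1, 0, 0]]
```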

The last operation we need to look at in this section is transposition, where a matrix's rows and columns are interchanged.

Definition 11.2.14.

Let \(A=(A_{i,j})\) be an \(m\times n\) Boolean matrix. Its transpose is the \(n \times m\) Boolean matrix \(A^T = (A_{j,i})\text{.}\)

Here is a matrix being transposed:

\begin{equation*} \left (\begin{array}{cc} 1 \amp 1 \\ 1 \amp 0 \\ 0 \amp 1 \end{array} \right)^T = \left (\begin{array}{ccc} 1\amp1\amp0 \\ 1\amp0\amp1 \end{array} \right) \end{equation*}
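With matrices stored as lists of rows, transposition is a classic idiom in Python via `zip`. A sketch (the name `transpose` is ours):

```python
def transpose(A):
    """Interchange rows and columns: the i,j-th entry of A^T is A_{j,i}."""
    return [list(col) for col in zip(*A)]

A = [[1, 1], [1, 0], [0, 1]]  # 3 x 2
print(transpose(A))           # [[1, 1, 0], [1, 0, 1]] -- now 2 x 3
```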

Subsection 11.2.3 Boolean matrix multiplication

Matrix multiplication, whether the entries are real numbers or bits like ours, is an interesting operation. It is not straightforward, but there is a good reason for that: matrices allow us to “encode” abstract mathematical objects in a concrete way (as rectangular arrays of numbers or bits). We will see that Boolean matrices represent relations, and that the product of two Boolean matrices corresponds to the composition of those two relations.

Consider the following example.

On the universe \(\{1,2,3,4,5\}\text{,}\) let \(A(k)\) be the predicate “\(k\) is prime.” The predicate \(A(k)\) evaluates to a true statement when \(k=2\text{,}\) \(3\text{,}\) or \(5\text{.}\) We can “encode” the statement \(A(k)\) as a \(1 \times 5\) Boolean matrix:

\begin{equation*} A = (\begin{array}{ccccc} 0 \amp 1 \amp 1 \amp 0 \amp 1 \end{array}) \end{equation*}

where a \(1\) in the \(k\)-th entry represents that the statement \(A(k)\) is true.

Consider also the predicate \(B(k)\) meaning “\(k\) is even”. We can encode \(B(k)\) as a \(5 \times 1\) matrix:

\begin{equation*} B = \left( \begin{array}{c} 0 \\ 1 \\ 0 \\ 1 \\ 0 \end{array} \right) \end{equation*}

Suppose we wanted to answer the question, “Is there a number that is both even and prime?” The answer to this question is yes if both \(A\) and \(B\) have a \(1\) in the same position. In other words, we can look at the join of the meets of the corresponding entries of the matrices. If both matrices ever share a \(1\) in the same spot, then the meet will be a \(1\text{;}\) and \(1\) join anything is \(1\text{.}\) If there are no shared \(1\)s, then the join will be \(0\text{.}\) Let's see:

\begin{align*} (0 \wedge 0) \vee (1 \wedge 1) \vee (1 \wedge 0) \vee (0 \wedge 1) \vee (1 \wedge 0) \amp = 0 \vee 1 \vee 0 \vee 0 \vee 0\\ \amp = 1 \end{align*}

We got a single \(1 \wedge 1\) in the calculation because \(2\) is both prime and even. We only need a single \(1\) for the join to be \(1\text{,}\) telling us that yes, there is some element of the universe satisfying both predicates.

We can extrapolate this calculation to an operation between a row matrix and column matrix.

Definition 11.2.17.

Let

\begin{equation*} A = (\begin{array}{cccc}A_1 \amp A_2 \amp \ldots \amp A_n\end{array}) \end{equation*}

be a \(1\times n\) Boolean row matrix and

\begin{equation*} B = \left(\begin{array}{c}B_1 \\ B_2 \\ \vdots \\ B_n\end{array}\right) \end{equation*}

be a \(n\times 1\) Boolean column matrix. Their Boolean dot product is the bit

\begin{equation*} A \odot B = (A_1 \wedge B_1) \vee (A_2 \wedge B_2) \vee \cdots \vee (A_n \wedge B_n). \end{equation*}

We may also write this in “big vee” notation:

\begin{equation*} A \odot B = \bigvee_{i=1}^n (A_i \wedge B_i). \end{equation*}

Let

\begin{equation*} A=\left(\begin{array}{cccc} 0 \amp 1 \amp 0 \amp 1\end{array}\right) \end{equation*}

and

\begin{equation*} B=\left( \begin{array}{c} 1 \\ 1 \\ 0 \\ 0 \end{array} \right). \end{equation*}

Then,

\begin{equation*} A \odot B = (0 \wedge 1) \vee (1 \wedge 1) \vee (0 \wedge 0) \vee (1 \wedge 0). \end{equation*}

Evaluating each of the conjunctions, we have

\begin{equation*} A \odot B = 0 \vee 1 \vee 0 \vee 0. \end{equation*}

Since \(A \odot B\) is a join with at least one \(1\text{,}\) \(A \odot B = 1.\) In fact, \(A\odot B\) is only \(1\) because there is an entry (the second one) where both \(A_i\) and \(B_i\) are 1. If there are no such entries, the Boolean dot product will always be 0.
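The calculation above can be sketched in Python (the name `bdot` is ours). Since the dot product is \(1\) exactly when the row and column share a \(1\) somewhere, `any` over the entrywise meets computes it directly:

```python
def bdot(row, col):
    """Boolean dot product: the join of the entrywise meets."""
    return int(any(a & b for a, b in zip(row, col)))

print(bdot([0, 1, 0, 1], [1, 1, 0, 0]))        # 1: both have a 1 in position 2
print(bdot([0, 1, 1, 0, 1], [0, 1, 0, 1, 0]))  # 1: "is some number even and prime?"
print(bdot([1, 0], [0, 1]))                    # 0: no shared 1s
```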

This observation (the Boolean dot product is \(1\) exactly when the row and the column share a \(1\) in some position) is highly useful for quickly calculating Boolean products.

From the logical perspective, a Boolean matrix is a family of statements. Given two matrices of compatible sizes, we can construct a third matrix whose entries answer the question, “Is there a subject for which the statement represented by this row and the statement represented by that column are both true simultaneously?” In other words, we have a matrix whose entries are the Boolean dot products of all the rows of the first matrix with all the columns of the second.

Definition 11.2.20.

Let \(A=(A_{i,j})\) be an \(m\times k\) Boolean matrix and \(B=(B_{i,j})\) be a \(k \times n\) Boolean matrix. Their Boolean product is the \(m\times n\) matrix

\begin{equation*} A\odot B = \left( \bigvee_{r=1}^k (A_{i,r} \wedge B_{r,j})\right); \end{equation*}

that is, the \(i,j\)-th entry of \(A \odot B\) is the Boolean dot product of the \(i\)-th row of \(A\) with the \(j\)-th column of \(B\text{.}\)

Note that \(A\) must have exactly as many columns as \(B\) has rows, else the Boolean dot products will be undefined.

Let

\begin{equation*} A = \left(\begin{array}{cc} 0 \amp 1 \\ 0 \amp 1\\ 1 \amp 0 \end{array}\right) \end{equation*}
and
\begin{equation*} B = \left(\begin{array}{ccc} 1 \amp 1 \amp 0 \\ 0 \amp 1 \amp 0 \end{array}\right). \end{equation*}

Their Boolean product \(A \odot B\) is the matrix whose entries are the Boolean dot products of the appropriate rows of \(A\) with the corresponding columns of \(B\text{.}\)

For example, the entry in the first row and first column is

\begin{equation*} (0 \wedge 1) \vee (1 \wedge 0) = 0 \vee 0 = 0, \end{equation*}

as evidenced by the fact that the first row \((0,1)\) of \(A\) and the first column \((1,0)^T\) of \(B\) do not have any \(1\)'s in the same spot. Meanwhile, the \(1,2\)-th entry of the product is \(1\text{,}\) because the first row and the second column both have a \(1\) in the second position. Here is the full product:

\begin{equation*} A \odot B = \left( \begin{array}{ccc} 0 \amp 1 \amp 0 \\ 0 \amp 1 \amp 0 \\ 1 \amp 1 \amp 0 \end{array} \right) \end{equation*}
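The full product can be checked in a few lines of Python. Here is a sketch (the name `bprod` is ours) that forms the Boolean dot product of each row of \(A\) with each column of \(B\text{,}\) after confirming the sizes are compatible:

```python
def bprod(A, B):
    """Boolean product: entry i,j is the Boolean dot product of
    row i of A with column j of B."""
    k = len(B)  # number of rows of B
    assert all(len(row) == k for row in A), "columns of A must match rows of B"
    return [[int(any(A[i][r] & B[r][j] for r in range(k)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[0, 1], [0, 1], [1, 0]]
B = [[1, 1, 0], [0, 1, 0]]
print(bprod(A, B))  # [[0, 1, 0], [0, 1, 0], [1, 1, 0]]
```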

As an exercise, compute \(B \odot A\) and verify that it is not equal to \(A \odot B\text{;}\) indeed, the two products do not even have the same size.

Verify for yourself that

\begin{equation*} \left( \begin{array}{cccc} 1 \amp 0 \amp 1 \amp 0 \\ 1 \amp 1 \amp 1 \amp 0 \end{array} \right) \odot \left( \begin{array}{ccc} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 1 \amp 0 \amp 0 \\ 1 \amp 0 \amp 1 \end{array} \right) \end{equation*}

is

\begin{equation*} \left( \begin{array}{ccc} 1 \amp 0 \amp 0 \\ 1 \amp 1 \amp 0 \end{array}\right). \end{equation*}

However,

\begin{equation*} \left( \begin{array}{ccc} 1 \amp 0 \amp 0 \\ 0 \amp 1 \amp 0 \\ 1 \amp 0 \amp 0 \\ 1 \amp 0 \amp 1 \end{array} \right) \odot \left( \begin{array}{cccc} 1 \amp 0 \amp 1 \amp 0 \\ 1 \amp 1 \amp 1 \amp 0 \end{array} \right) \end{equation*}

is not defined, because the left-hand matrix has \(3\) columns while the right-hand matrix has only \(2\) rows.