# Adjugate matrix

In linear algebra, the **adjugate** or **classical adjoint** of a square matrix is the transpose of its cofactor matrix.^{[1]} It is also occasionally known as the **adjunct matrix**,^{[2]}^{[3]} though this nomenclature appears to have declined in usage.

The adjugate^{[4]} has sometimes been called the "adjoint",^{[5]} but today the "adjoint" of a matrix normally refers to its corresponding adjoint operator, which is its conjugate transpose.

In more detail, suppose *R* is a commutative ring and **A** is an *n* × *n* matrix with entries from *R*. The (*i*, *j*)-*minor* of **A**, denoted **M**_{ij}, is the determinant of the (*n* − 1) × (*n* − 1) matrix that results from deleting row *i* and column *j* of **A**. The cofactor matrix of **A** is the *n* × *n* matrix **C** whose (*i*, *j*) entry is the (*i*, *j*) *cofactor* of **A**, which is the (*i*, *j*)-minor times a sign factor:

**C**_{ij} = (−1)^{i+j} **M**_{ij}.

The adjugate of **A** is the transpose of **C**, that is, the *n* × *n* matrix whose (*i*, *j*) entry is the (*j*, *i*) cofactor of **A**:

adj(**A**)_{ij} = **C**_{ji} = (−1)^{i+j} **M**_{ji}.

The adjugate is defined so that the product of **A** with its adjugate yields a diagonal matrix whose diagonal entries are the determinant det(**A**). That is,

**A** adj(**A**) = adj(**A**) **A** = det(**A**) **I**,

where **I** is the *n*×*n* identity matrix. This is a consequence of the Laplace expansion of the determinant.
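
The defining identity **A** adj(**A**) = det(**A**) **I** can be checked directly. The following sketch (not part of the original text; function names are illustrative) computes the adjugate from cofactors using exact integer arithmetic:

```python
# Compute the adjugate from cofactors and verify A * adj(A) = det(A) * I.
# Illustrative sketch using exact integer arithmetic.

def minor(A, i, j):
    """Submatrix of A with row i and column j deleted."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    """Determinant by Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def adjugate(A):
    """The (i, j) entry of adj(A) is the (j, i) cofactor of A."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
d = det(A)                       # -3 for this matrix
product = matmul(A, adjugate(A))
print(product)                   # det(A) times the identity matrix
```

Because only integer arithmetic is involved, the product comes out exactly as det(**A**) **I**, with no rounding error.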

The above formula implies one of the fundamental results in matrix algebra, that **A** is invertible if and only if det(**A**) is an invertible element of *R*. When this holds, the equation above yields

adj(**A**) = det(**A**) **A**^{−1},   **A**^{−1} = det(**A**)^{−1} adj(**A**).

In this case, it is also true that det(adj(**A**)) = det(**A**)^{n−1} and hence that adj(adj(**A**)) = det(**A**)^{n−2} **A**.

As an example, consider the 3 × 3 matrix **A** with rows (−3, 2, −5), (−1, 0, −2), and (3, −4, 1). Its determinant is −6, and its adjugate is the matrix with rows (−8, 18, −4), (−5, 12, −1), and (4, −6, 2). It is easy to check that the adjugate is the inverse times the determinant, −6.

The −1 in the second row, third column of the adjugate was computed as follows. The (2, 3) entry of the adjugate is the (3, 2) cofactor of **A**. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix **A**, namely the 2 × 2 matrix with rows (−3, −5) and (−1, −2). The (3, 2) cofactor is a sign times the determinant of this submatrix:

(−1)^{3+2} ((−3)(−2) − (−5)(−1)) = −(6 − 5) = −1,

and this is the (2, 3) entry of the adjugate.
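
This single-cofactor computation can be replayed in code. The sketch below assumes the 3 × 3 example matrix with rows (−3, 2, −5), (−1, 0, −2), (3, −4, 1), which is consistent with the determinant −6 and the (3, 2) cofactor −1 discussed here:

```python
# Replay the (3,2) cofactor computation for the example matrix.
# The matrix below is an assumption, consistent with the text's
# stated determinant (-6) and cofactor (-1).

A = [[-3, 2, -5],
     [-1, 0, -2],
     [3, -4, 1]]

# Delete row 3 and column 2 (1-based indices) to get the submatrix.
sub = [[A[r][c] for c in (0, 2)] for r in (0, 1)]   # [[-3, -5], [-1, -2]]

# 2x2 determinant, then the sign factor (-1)^(3+2).
cofactor_32 = (-1) ** (3 + 2) * (sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0])
print(cofactor_32)  # -1, the (2, 3) entry of adj(A)
```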

For any *n* × *n* matrix **A**, elementary computations show that adjugates enjoy the following properties:

- adj(**I**) = **I**, where **I** is the identity matrix.
- adj(*c***A**) = *c*^{n−1} adj(**A**) for any scalar *c*.
- adj(**A**^T) = adj(**A**)^T.
- det(adj(**A**)) = det(**A**)^{n−1}.

The adjugate is also multiplicative in reverse order: adj(**AB**) = adj(**B**) adj(**A**). This can be proved in three ways. One way, valid for any commutative ring, is a direct computation using the Cauchy–Binet formula. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices **A** and **B**,

adj(**B**) adj(**A**) = det(**B**) **B**^{−1} det(**A**) **A**^{−1} = det(**AB**) (**AB**)^{−1} = adj(**AB**).

Because every non-invertible matrix is the limit of invertible matrices, continuity of the adjugate then implies that the formula remains true when one of **A** or **B** is not invertible.
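
The multiplicativity identity adj(**AB**) = adj(**B**) adj(**A**) can be spot-checked with a cofactor-based implementation even when one factor is singular. This is an illustrative sketch, not part of the source:

```python
# Check adj(AB) == adj(B) adj(A), including a singular factor B.

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[1, 2], [2, 4]]          # singular: det(B) == 0
lhs = adjugate(matmul(A, B))
rhs = matmul(adjugate(B), adjugate(A))
print(lhs == rhs)  # True even though B is not invertible
```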

A corollary of the previous formula is that, for any non-negative integer *k*,

adj(**A**^k) = adj(**A**)^k.
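
This corollary, adj(**A**^k) = adj(**A**)^k, can be exercised for a small integer matrix. A minimal sketch, using the closed form of the 2 × 2 adjugate (swap the diagonal, negate the off-diagonal):

```python
# Check adj(A^k) == adj(A)^k for a 2x2 integer matrix.

def adj2(M):
    """Adjugate of a 2x2 matrix: [[a, b], [c, d]] -> [[d, -b], [-c, a]]."""
    (a, b), (c, d) = M
    return [[d, -b], [-c, a]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(M, k):
    R = [[1, 0], [0, 1]]  # identity
    for _ in range(k):
        R = matmul(R, M)
    return R

A = [[1, 2], [3, 4]]
k = 3
print(adj2(matpow(A, k)) == matpow(adj2(A), k))  # True
```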

Suppose that **A** commutes with **B**. Multiplying the identity **AB** = **BA** on the left and right by adj(**A**) proves that

det(**A**) **B** adj(**A**) = det(**A**) adj(**A**) **B**.

If **A** is invertible, this implies that adj(**A**) also commutes with **B**. Over the real or complex numbers, continuity implies that adj(**A**) commutes with **B** even when **A** is not invertible.
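
The commuting property can be illustrated with a singular matrix. In the sketch below (an assumed example, not from the source), **A** is singular and **B** = **A** + 2**I** commutes with it, yet adj(**A**) still commutes with **B**:

```python
# If A commutes with B, then adj(A) commutes with B, even for singular A.

def adj2(M):
    (a, b), (c, d) = M
    return [[d, -b], [-c, a]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 0]]          # singular (det = 0)
B = [[3, 1], [0, 2]]          # B = A + 2I, so AB = BA
assert matmul(A, B) == matmul(B, A)

print(matmul(adj2(A), B) == matmul(B, adj2(A)))  # True
```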

Finally, there is a more general proof than the second proof, which only requires that an *n* × *n* matrix have entries over a field with at least 2*n* + 1 elements (e.g. a 5 × 5 matrix over the integers mod 11). det(**A** + *t***I**) is a polynomial in *t* of degree at most *n*, so it has at most *n* roots. Note that the *ij*th entry of adj((**A** + *t***I**)**B**) is a polynomial in *t* of degree at most *n*, and likewise for adj(**A** + *t***I**) adj(**B**). These two polynomials at the *ij*th entry agree on at least *n* + 1 points, as we have at least *n* + 1 elements of the field where **A** + *t***I** is invertible, and we have proven the identity for invertible matrices. Polynomials of degree *n* which agree on *n* + 1 points must be identical (subtract them from each other and you have *n* + 1 roots for a polynomial of degree at most *n*, a contradiction unless their difference is identically zero). As the two polynomials are identical, they take the same value for every value of *t*. Thus, they take the same value when *t* = 0.

Using the above properties and other elementary computations, it is straightforward to show that if **A** has one of the following properties, then adj(**A**) does as well:

- upper or lower triangular,
- diagonal,
- orthogonal,
- unitary,
- symmetric,
- Hermitian,
- normal.

If **A** is invertible, then, as noted above, there is a formula for adj(**A**) in terms of the determinant and inverse of **A**. When **A** is not invertible, the adjugate satisfies different but closely related formulas.

Let **b** be a column vector of size *n*. Fix 1 ≤ *i* ≤ *n* and consider the matrix formed by replacing column *i* of **A** by **b**; denote it by (**A** ←_{i} **b**). Laplace expand the determinant of this matrix along column *i*. The result is entry *i* of the product adj(**A**)**b**. Collecting these determinants for the different possible *i* yields an equality of column vectors:

adj(**A**)**b** = (det(**A** ←_{1} **b**), …, det(**A** ←_{n} **b**))^T.

This formula has the following concrete consequence. Consider the linear system of equations

**Ax** = **b**.

Assume that **A** is non-singular. Multiplying this system on the left by adj(**A**) and dividing by the determinant yields

**x** = adj(**A**)**b** / det(**A**).

Combined with the column-replacement expression for adj(**A**)**b** above, this is Cramer's rule.
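
Cramer's rule is easy to exercise with exact rational arithmetic. A minimal sketch (the system below is an assumed example) using Python's `fractions` module:

```python
# Solve A x = b by Cramer's rule:
#   x_i = det(A with column i replaced by b) / det(A).
# Exact rational arithmetic; the system is an illustrative example.
from fractions import Fraction

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def replace_col(A, i, b):
    """A with column i replaced by the vector b."""
    return [[b[r] if c == i else A[r][c] for c in range(len(A))]
            for r in range(len(A))]

A = [[2, 1], [1, 3]]
b = [3, 5]
d = det2(A)  # 5
x = [Fraction(det2(replace_col(A, i, b)), d) for i in range(2)]
print(x)  # [Fraction(4, 5), Fraction(7, 5)]

# Sanity check: A x == b
assert all(sum(Fraction(A[r][c]) * x[c] for c in range(2)) == b[r]
           for r in range(2))
```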

Let *p*(*t*) = det(*t***I** − **A**) be the characteristic polynomial of **A**. The first divided difference of *p* is a symmetric polynomial of degree *n* − 1,

Δ*p*(*s*, *t*) = (*p*(*s*) − *p*(*t*)) / (*s* − *t*).

Multiply *s***I** − **A** by its adjugate. Since *p*(**A**) = **0** by the Cayley–Hamilton theorem, some elementary manipulations reveal

adj(*s***I** − **A**) = Δ*p*(*s***I**, **A**).

The adjugate also appears in Jacobi's formula for the derivative of the determinant. If **A**(*t*) is continuously differentiable, then

d/d*t* det(**A**(*t*)) = tr(adj(**A**(*t*)) **A**′(*t*)).

It follows that the total derivative of the determinant is the transpose of the adjugate:

d(det(**A**)) / d**A** = adj(**A**)^T.
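
Jacobi's formula can be verified exactly on a simple one-parameter family. The family below is an assumption chosen for illustration, with **A**′(*t*) = **I** so the trace term simplifies:

```python
# Verify Jacobi's formula  d/dt det(A(t)) = tr(adj(A(t)) A'(t))
# for the family A(t) = [[1 + t, 2], [3, 4 + t]], where A'(t) = I.
# Here det A(t) = t^2 + 5t - 2, so the derivative is 2t + 5.

def adj2(M):
    (a, b), (c, d) = M
    return [[d, -b], [-c, a]]

def A(t):
    return [[1 + t, 2], [3, 4 + t]]

for t in range(-3, 4):
    lhs = 2 * t + 5                   # derivative of det A(t)
    adj_t = adj2(A(t))
    rhs = adj_t[0][0] + adj_t[1][1]   # tr(adj(A(t)) * I)
    assert lhs == rhs
print("Jacobi's formula verified on sample points")
```

Since both sides are degree-1 polynomials in *t* and agree on seven points, they agree identically for this family.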

Let *p*_{A}(*t*) be the characteristic polynomial of **A**. The Cayley–Hamilton theorem states that

*p*_{A}(**A**) = **0**.

Separating the constant term and multiplying the equation by adj(**A**) gives an expression for the adjugate that depends only on **A** and the coefficients of *p*_{A}(*t*). These coefficients can be explicitly represented in terms of traces of powers of **A** using complete exponential Bell polynomials. The resulting formula is

adj(**A**) = Σ_{s=0}^{n−1} **A**^{s} Σ_{k_1, k_2, …, k_{n−1}} Π_{ℓ=1}^{n−1} ((−1)^{k_ℓ + 1} / (ℓ^{k_ℓ} k_ℓ!)) tr(**A**^{ℓ})^{k_ℓ},

where *n* is the dimension of **A**, and the sum is taken over *s* and all sequences of *k*_{ℓ} ≥ 0 satisfying the linear Diophantine equation

*s* + Σ_{ℓ=1}^{n−1} ℓ *k*_{ℓ} = *n* − 1.

The same formula follows directly from the terminating step of the Faddeev–LeVerrier algorithm, which efficiently determines the characteristic polynomial of **A**.
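
A minimal sketch of the Faddeev–LeVerrier iteration, assuming the standard recurrence **M**_{1} = **I**, **M**_{k+1} = **AM**_{k} + *c*_{n−k}**I** with *c*_{n−k} = −tr(**AM**_{k})/*k*; in this convention the last iterate gives adj(**A**) = (−1)^{n−1} **M**_{n}:

```python
# Faddeev-LeVerrier iteration: computes the characteristic-polynomial
# coefficients and the adjugate together. adj(A) = (-1)^(n-1) * M_n
# in this convention. A minimal sketch using exact rational arithmetic.
from fractions import Fraction

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def faddeev_leverrier_adjugate(A):
    n = len(A)
    F = [[Fraction(x) for x in row] for row in A]
    I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    M = [[Fraction(0)] * n for _ in range(n)]       # M_0 = 0
    c = Fraction(1)                                 # c_n = 1
    for k in range(1, n + 1):
        # M_k = A M_{k-1} + c_{n-k+1} I
        AM = matmul(F, M)
        M = [[AM[i][j] + c * I[i][j] for j in range(n)] for i in range(n)]
        # c_{n-k} = -tr(A M_k) / k
        AMk = matmul(F, M)
        c = -sum(AMk[i][i] for i in range(n)) / k
    sign = (-1) ** (n - 1)
    return [[sign * M[i][j] for j in range(n)] for i in range(n)]

A = [[-3, 2, -5], [-1, 0, -2], [3, -4, 1]]
print(faddeev_leverrier_adjugate(A))
```

For this matrix the iteration terminates with the adjugate whose rows are (−8, 18, −4), (−5, 12, −1), (4, −6, 2), matching the cofactor computation, and the final coefficient *c*_{0} equals (−1)^{n} det(**A**).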

The adjugate can be viewed in abstract terms using exterior algebras. Let *V* be an *n*-dimensional vector space. The exterior product defines a bilinear pairing

*V* × Λ^{n−1} *V* → Λ^{n} *V*,  (**v**, α) ↦ **v** ∧ α.

This pairing induces an isomorphism *φ* : *V* → Hom(Λ^{n−1} *V*, Λ^{n} *V*).

Suppose that *T* : *V* → *V* is a linear transformation. Pullback by the (*n* − 1)st exterior power of *T* induces a morphism of Hom spaces. The **adjugate** of *T* is the composite

*V* → Hom(Λ^{n−1} *V*, Λ^{n} *V*) → Hom(Λ^{n−1} *V*, Λ^{n} *V*) → *V*,

where the outer maps are *φ* and *φ*^{−1} and the middle map is pullback by Λ^{n−1} *T*.

If *V* is endowed with an inner product and a volume form, then the map *φ* can be decomposed further. In this case, *φ* can be understood as the composite of the Hodge star operator and dualization. Specifically, if ω is the volume form, then it, together with the inner product, determines an isomorphism

ω^{∨} : Λ^{n} *V* → **R**.

By the definition of the Hodge star operator, this linear functional is dual to ***v**. That is, ω^{∨} ∘ φ equals **v** ↦ ***v**^{∨}.

where σ(*I*) and σ(*J*) are the sums of the elements of *I* and *J*, respectively.

Iteratively taking the adjugate of an invertible matrix **A** *k* times yields

adj^{k}(**A**) = det(**A**)^{((n−1)^{k} − (−1)^{k})/n} **A**^{(−1)^{k}}.
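
The *k* = 2 case of this iterated-adjugate behavior, adj(adj(**A**)) = det(**A**)^{n−2} **A**, can be confirmed for a 3 × 3 integer matrix (the matrix below is an assumed example):

```python
# Check the k = 2 iterate for n = 3: the exponent is
# ((n-1)^2 - (-1)^2)/n = (4 - 1)/3 = 1, so adj(adj(A)) = det(A) * A.

def minor(A, i, j):
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def adjugate(A):
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

A = [[2, 0, 1], [1, 3, 0], [0, 1, 4]]
d = det(A)                       # 25, so A is invertible
twice = adjugate(adjugate(A))
print(twice == [[d * x for x in row] for row in A])  # True
```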