Proofs Ch.10: Derivatives of Quadratic Forms

This chapter proves differentiation formulas for the quadratic form xᵀAx and its variants. Quadratic forms arise extensively as objective functions in optimization problems, including squared error loss functions in machine learning, regularization terms (L2 regularization), risk computation in portfolio optimization, and cost functions in Kalman filters. Obtaining closed-form expressions for the gradient and Hessian directly enables implementation of Newton's method and IRLS (Iteratively Reweighted Least Squares).

Prerequisites: Chapter 2 (Scalar-by-Vector Derivatives), Chapter 4 (Basic Matrix Differentiation Formulas). Related chapters: Chapter 12 (Norm Derivatives), Chapter 15 (Special Matrix Derivatives).

Derivatives of Quadratic Forms

Assumptions for This Chapter
Unless otherwise noted, the formulas in this chapter hold under the following conditions:
  • All formulas are based on the denominator layout
  • The derivative of a scalar $f$ with respect to a matrix $\boldsymbol{X} \in \mathbb{R}^{N \times N}$ yields $\dfrac{\partial f}{\partial \boldsymbol{X}} \in \mathbb{R}^{N \times N}$
  • In the quadratic form $\boldsymbol{x}^\top \boldsymbol{A} \boldsymbol{x}$, the matrix $\boldsymbol{A}$ is not necessarily symmetric (symmetrization is noted where required)
Theoretical Background on Quadratic Forms and Higher-Order Derivatives

The Hessian matrix $\boldsymbol{H} = \dfrac{\partial^2 f}{\partial \boldsymbol{x} \partial \boldsymbol{x}^\top}$ of any $C^2$ (twice continuously differentiable) function is symmetric by Schwarz's theorem (Clairaut's theorem); since the quadratic form $f(\boldsymbol{x}) = \boldsymbol{x}^\top \boldsymbol{A} \boldsymbol{x}$ is a polynomial and hence $C^\infty$, the theorem always applies:

$\dfrac{\partial^2 f}{\partial x_i \partial x_j} = \dfrac{\partial^2 f}{\partial x_j \partial x_i}$

For the quadratic forms in this chapter, $\boldsymbol{H} = \boldsymbol{A} + \boldsymbol{A}^\top$, and when $\boldsymbol{A}$ is symmetric, $\boldsymbol{H} = 2\boldsymbol{A}$.
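
The claim $\boldsymbol{H} = \boldsymbol{A} + \boldsymbol{A}^\top$ can be verified numerically. The following is a minimal sketch assuming NumPy is available; the finite-difference scheme and all variable names are ours, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))   # deliberately non-symmetric
x = rng.standard_normal(n)

f = lambda x: x @ A @ x           # f(x) = x^T A x

# Mixed second-order central differences approximate H_ij = d^2 f / (dx_i dx_j)
eps = 1e-4
I = np.eye(n)
H = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        H[i, j] = (f(x + eps * (I[i] + I[j])) - f(x + eps * (I[i] - I[j]))
                   - f(x - eps * (I[i] - I[j])) + f(x - eps * (I[i] + I[j]))) / (4 * eps**2)

assert np.allclose(H, H.T, atol=1e-4)      # symmetric, per Schwarz's theorem
assert np.allclose(H, A + A.T, atol=1e-4)  # equals A + A^T
```

Because $f$ is quadratic, the mixed central differences are exact up to floating-point rounding.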

Extension to infinite dimensions: in Banach and Hilbert spaces, these results generalize with the derivative interpreted as a Fréchet derivative (or the weaker Gâteaux derivative), and the finite-dimensional formulas carry over with matrices replaced by bounded linear operators.

We derive differentiation formulas for quadratic forms.

10.1 Derivative of the Matrix Quadratic Form $\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}$

Formula: $\dfrac{\partial}{\partial \boldsymbol{X}} (\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}) = \boldsymbol{a} \boldsymbol{a}^\top$
Conditions: $\boldsymbol{a}$ is an $N$-dimensional constant vector, $\boldsymbol{X}$ is an $N \times N$ matrix
Proof

Expand the quadratic form $\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}$ in components. First, the $i$-th component of $\boldsymbol{X} \boldsymbol{a}$ is

\begin{equation}(\boldsymbol{X} \boldsymbol{a})_i = \sum_{j=0}^{N-1} X_{ij} a_j \label{eq:10-1-1}\end{equation}

Multiplying from the left by $\boldsymbol{a}^\top$ gives

\begin{equation}\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a} = \sum_{i=0}^{N-1} a_i (\boldsymbol{X} \boldsymbol{a})_i \label{eq:10-1-2}\end{equation}

Substituting $\eqref{eq:10-1-1}$ into $\eqref{eq:10-1-2}$:

\begin{equation}\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a} = \sum_{i=0}^{N-1} a_i \sum_{j=0}^{N-1} X_{ij} a_j = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} a_i X_{ij} a_j \label{eq:10-1-3}\end{equation}

Differentiating $\eqref{eq:10-1-3}$ with respect to the matrix entry $X_{mn}$. Since $a_i$ and $a_j$ are constants, only $X_{ij}$ is subject to differentiation.

\begin{equation}\dfrac{\partial}{\partial X_{mn}} (\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}) = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} a_i \dfrac{\partial X_{ij}}{\partial X_{mn}} a_j \label{eq:10-1-4}\end{equation}

Since the entries of $\boldsymbol{X}$ are independent variables, $\dfrac{\partial X_{ij}}{\partial X_{mn}} = 1$ when $(i, j) = (m, n)$, and 0 otherwise.

\begin{equation}\dfrac{\partial X_{ij}}{\partial X_{mn}} = \delta_{im} \delta_{jn} \label{eq:10-1-5}\end{equation}

Substituting $\eqref{eq:10-1-5}$ into $\eqref{eq:10-1-4}$:

\begin{equation}\dfrac{\partial}{\partial X_{mn}} (\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}) = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} a_i \delta_{im} \delta_{jn} a_j \label{eq:10-1-6}\end{equation}

Since $\delta_{im}$ equals 1 only when $i = m$, only the $i = m$ term survives in the sum over $i$.

\begin{equation}\sum_{i=0}^{N-1} a_i \delta_{im} = a_m \label{eq:10-1-7}\end{equation}

Similarly, $\delta_{jn}$ equals 1 only when $j = n$, so only the $j = n$ term survives in the sum over $j$.

\begin{equation}\sum_{j=0}^{N-1} \delta_{jn} a_j = a_n \label{eq:10-1-8}\end{equation}

Applying $\eqref{eq:10-1-7}$ and $\eqref{eq:10-1-8}$ to $\eqref{eq:10-1-6}$:

\begin{equation}\dfrac{\partial}{\partial X_{mn}} (\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}) = a_m a_n \label{eq:10-1-9}\end{equation}

Computing the $(m, n)$ entry of the outer product $\boldsymbol{a} \boldsymbol{a}^\top$. Since $\boldsymbol{a}$ is a column vector and $\boldsymbol{a}^\top$ is a row vector, their product is a matrix.

\begin{equation}(\boldsymbol{a} \boldsymbol{a}^\top)_{mn} = a_m a_n \label{eq:10-1-10}\end{equation}

Comparing $\eqref{eq:10-1-9}$ and $\eqref{eq:10-1-10}$, for all $(m, n)$ we have $\dfrac{\partial}{\partial X_{mn}} (\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}) = (\boldsymbol{a} \boldsymbol{a}^\top)_{mn}$. In matrix form:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{X}} (\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}) = \boldsymbol{a} \boldsymbol{a}^\top \label{eq:10-1-11}\end{equation}

Remark: The resulting matrix $\boldsymbol{a} \boldsymbol{a}^\top$ is symmetric and has rank 1 (when $\boldsymbol{a} \neq \boldsymbol{0}$).
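
As a numerical sanity check (a sketch assuming NumPy; helper names such as `num_grad` are ours), a finite-difference gradient of $\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}$ matches the outer product $\boldsymbol{a} \boldsymbol{a}^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4
a = rng.standard_normal(N)
X = rng.standard_normal((N, N))

f = lambda X: a @ X @ a  # scalar quadratic form a^T X a

# Central-difference gradient with respect to each entry X_mn
eps = 1e-6
num_grad = np.zeros_like(X)
for m in range(N):
    for n in range(N):
        E = np.zeros_like(X)
        E[m, n] = eps
        num_grad[m, n] = (f(X + E) - f(X - E)) / (2 * eps)

analytic = np.outer(a, a)  # a a^T, the claimed derivative
assert np.allclose(num_grad, analytic, atol=1e-6)
```

Since $\boldsymbol{a}^\top \boldsymbol{X} \boldsymbol{a}$ is linear in $\boldsymbol{X}$, the central differences are exact up to rounding.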

10.2 Derivative of the Squared Component Sum

Formula: $\dfrac{\partial}{\partial X_{ij}} \left(\sum_{k,l} X_{kl}\right)^2 = 2 \sum_{k,l} X_{kl}$
Conditions: $\boldsymbol{X}$ is an arbitrary $M \times N$ matrix
Proof

Let $s$ denote the sum of all entries of the matrix $\boldsymbol{X}$.

\begin{equation}s = \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} X_{kl} \label{eq:10-2-1}\end{equation}

where $\boldsymbol{X}$ is an $M \times N$ matrix.

Differentiating $s^2$ with respect to $X_{ij}$. Since $s^2$ is a composite function of $s$, we apply the chain rule (1.26).

\begin{equation}\dfrac{\partial s^2}{\partial X_{ij}} = \dfrac{d(s^2)}{ds} \cdot \dfrac{\partial s}{\partial X_{ij}} \label{eq:10-2-2}\end{equation}

Differentiating the outer function $f(s) = s^2$ with respect to $s$:

\begin{equation}\dfrac{d(s^2)}{ds} = 2s \label{eq:10-2-3}\end{equation}

Differentiating the inner function $s = \sum_{k,l} X_{kl}$ with respect to $X_{ij}$. Since each $X_{kl}$ is an independent variable, only the $(k, l) = (i, j)$ term contributes a derivative of 1; all others are 0.

\begin{equation}\dfrac{\partial s}{\partial X_{ij}} = \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} \dfrac{\partial X_{kl}}{\partial X_{ij}} = \sum_{k=0}^{M-1} \sum_{l=0}^{N-1} \delta_{ki} \delta_{lj} = 1 \label{eq:10-2-4}\end{equation}

Substituting $\eqref{eq:10-2-3}$ and $\eqref{eq:10-2-4}$ into $\eqref{eq:10-2-2}$:

\begin{equation}\dfrac{\partial s^2}{\partial X_{ij}} = 2s \cdot 1 = 2s \label{eq:10-2-5}\end{equation}

Substituting $\eqref{eq:10-2-1}$ into $\eqref{eq:10-2-5}$ to obtain the final result:

\begin{equation}\dfrac{\partial}{\partial X_{ij}} \left(\sum_{k,l} X_{kl}\right)^2 = 2 \sum_{k,l} X_{kl} \label{eq:10-2-6}\end{equation}

Remark: The result is a constant independent of $(i, j)$. This reflects the fact that $s^2$ depends symmetrically on all entries of $\boldsymbol{X}$.
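
This, too, is easy to confirm numerically (a sketch assuming NumPy; the names `g`, `num` are ours): perturbing any single entry of $\boldsymbol{X}$ changes $s^2$ at the rate $2s$.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 4
X = rng.standard_normal((M, N))

s = X.sum()                      # sum of all entries of X
g = lambda X: X.sum() ** 2       # (sum of all entries)^2

# Central difference with respect to one entry; the choice of (i, j)
# does not matter, since the derivative is the same for every entry.
eps = 1e-6
i, j = 1, 2
E = np.zeros_like(X); E[i, j] = eps
num = (g(X + E) - g(X - E)) / (2 * eps)

assert np.isclose(num, 2 * s, atol=1e-5)
```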

10.3 Derivative of the Gram Matrix Bilinear Form $\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{X} \boldsymbol{c}$

Formula: $\dfrac{\partial}{\partial \boldsymbol{X}} (\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{X} \boldsymbol{c}) = \boldsymbol{X} (\boldsymbol{b} \boldsymbol{c}^\top + \boldsymbol{c} \boldsymbol{b}^\top)$
Conditions: $\boldsymbol{b}$, $\boldsymbol{c}$ are constant vectors, $\boldsymbol{X}$ is a matrix
Proof

Let $\boldsymbol{X} \in \mathbb{R}^{M \times N}$ and $\boldsymbol{b}, \boldsymbol{c} \in \mathbb{R}^N$. We expand the bilinear form of the Gram matrix $\boldsymbol{X}^\top \boldsymbol{X} \in \mathbb{R}^{N \times N}$ in components.

First, compute the $(i, k)$ entry of $\boldsymbol{X}^\top \boldsymbol{X}$:

\begin{equation}(\boldsymbol{X}^\top \boldsymbol{X})_{ik} = \sum_{j=0}^{M-1} (\boldsymbol{X}^\top)_{ij} X_{jk} = \sum_{j=0}^{M-1} X_{ji} X_{jk} \label{eq:10-3-1}\end{equation}

Next, expand $\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{X} \boldsymbol{c}$ in components:

\begin{equation}\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{X} \boldsymbol{c} = \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} b_i (\boldsymbol{X}^\top \boldsymbol{X})_{ik} c_k \label{eq:10-3-2}\end{equation}

Substituting $\eqref{eq:10-3-1}$ into $\eqref{eq:10-3-2}$:

\begin{equation}\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{X} \boldsymbol{c} = \sum_{i=0}^{N-1} \sum_{k=0}^{N-1} \sum_{j=0}^{M-1} b_i X_{ji} X_{jk} c_k \label{eq:10-3-3}\end{equation}

Differentiating $\eqref{eq:10-3-3}$ with respect to $X_{mn}$. Since $b_i$ and $c_k$ are constants, only $X_{ji}$ and $X_{jk}$ are subject to differentiation. Applying the product rule (1.25) to the product $X_{ji} X_{jk}$:

\begin{equation}\dfrac{\partial}{\partial X_{mn}} (X_{ji} X_{jk}) = \dfrac{\partial X_{ji}}{\partial X_{mn}} X_{jk} + X_{ji} \dfrac{\partial X_{jk}}{\partial X_{mn}} \label{eq:10-3-4}\end{equation}

$\dfrac{\partial X_{ji}}{\partial X_{mn}} = \delta_{jm} \delta_{in}$, which equals 1 only when $(j, i) = (m, n)$.

\begin{equation}\dfrac{\partial X_{ji}}{\partial X_{mn}} = \delta_{jm} \delta_{in} \label{eq:10-3-5}\end{equation}

Similarly, $\dfrac{\partial X_{jk}}{\partial X_{mn}} = \delta_{jm} \delta_{kn}$.

\begin{equation}\dfrac{\partial X_{jk}}{\partial X_{mn}} = \delta_{jm} \delta_{kn} \label{eq:10-3-6}\end{equation}

Substituting $\eqref{eq:10-3-5}$ and $\eqref{eq:10-3-6}$ into $\eqref{eq:10-3-4}$:

\begin{equation}\dfrac{\partial}{\partial X_{mn}} (X_{ji} X_{jk}) = \delta_{jm} \delta_{in} X_{jk} + X_{ji} \delta_{jm} \delta_{kn} \label{eq:10-3-7}\end{equation}

Computing the derivative of $\eqref{eq:10-3-3}$ by substituting $\eqref{eq:10-3-7}$:

\begin{equation}\dfrac{\partial}{\partial X_{mn}} (\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{X} \boldsymbol{c}) = \sum_{i,k,j} b_i (\delta_{jm} \delta_{in} X_{jk} + X_{ji} \delta_{jm} \delta_{kn}) c_k \label{eq:10-3-8}\end{equation}

Separating into the first and second terms:

\begin{equation}\dfrac{\partial}{\partial X_{mn}} (\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{X} \boldsymbol{c}) = \sum_{i,k,j} b_i \delta_{jm} \delta_{in} X_{jk} c_k + \sum_{i,k,j} b_i X_{ji} \delta_{jm} \delta_{kn} c_k \label{eq:10-3-9}\end{equation}

Computing the first term. $\delta_{jm}$ retains only $j = m$, and $\delta_{in}$ retains only $i = n$.

\begin{equation}\sum_{i,k,j} b_i \delta_{jm} \delta_{in} X_{jk} c_k = b_n \sum_{k=0}^{N-1} X_{mk} c_k = b_n (\boldsymbol{X} \boldsymbol{c})_m \label{eq:10-3-10}\end{equation}

Computing the second term. $\delta_{jm}$ retains only $j = m$, and $\delta_{kn}$ retains only $k = n$.

\begin{equation}\sum_{i,k,j} b_i X_{ji} \delta_{jm} \delta_{kn} c_k = c_n \sum_{i=0}^{N-1} b_i X_{mi} = c_n (\boldsymbol{X} \boldsymbol{b})_m \label{eq:10-3-11}\end{equation}

Substituting $\eqref{eq:10-3-10}$ and $\eqref{eq:10-3-11}$ into $\eqref{eq:10-3-9}$:

\begin{equation}\dfrac{\partial}{\partial X_{mn}} (\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{X} \boldsymbol{c}) = b_n (\boldsymbol{X} \boldsymbol{c})_m + c_n (\boldsymbol{X} \boldsymbol{b})_m \label{eq:10-3-12}\end{equation}

Writing $\eqref{eq:10-3-12}$ in matrix form. The $(m, n)$ entry of the outer product $\boldsymbol{u} \boldsymbol{v}^\top$ is $u_m v_n$.

\begin{equation}(\boldsymbol{X} \boldsymbol{c})_m b_n = (\boldsymbol{X} \boldsymbol{c} \boldsymbol{b}^\top)_{mn}, \quad (\boldsymbol{X} \boldsymbol{b})_m c_n = (\boldsymbol{X} \boldsymbol{b} \boldsymbol{c}^\top)_{mn} \label{eq:10-3-13}\end{equation}

The final result in matrix form:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{X}} (\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{X} \boldsymbol{c}) = \boldsymbol{X} (\boldsymbol{b} \boldsymbol{c}^\top + \boldsymbol{c} \boldsymbol{b}^\top) \label{eq:10-3-14}\end{equation}

Remark: When $\boldsymbol{b} = \boldsymbol{c}$, we have $\boldsymbol{b} \boldsymbol{c}^\top + \boldsymbol{c} \boldsymbol{b}^\top = 2 \boldsymbol{b} \boldsymbol{b}^\top$, yielding $\dfrac{\partial}{\partial \boldsymbol{X}} \|\boldsymbol{X} \boldsymbol{b}\|^2 = 2 \boldsymbol{X} \boldsymbol{b} \boldsymbol{b}^\top$.
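
A finite-difference check of the formula (a sketch assuming NumPy; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 5, 3
X = rng.standard_normal((M, N))
b = rng.standard_normal(N)
c = rng.standard_normal(N)

f = lambda X: b @ X.T @ X @ c  # bilinear form of the Gram matrix

# Central-difference gradient with respect to each entry of X
eps = 1e-6
num = np.zeros_like(X)
for m in range(M):
    for n in range(N):
        E = np.zeros_like(X); E[m, n] = eps
        num[m, n] = (f(X + E) - f(X - E)) / (2 * eps)

analytic = X @ (np.outer(b, c) + np.outer(c, b))  # X (b c^T + c b^T)
assert np.allclose(num, analytic, atol=1e-5)
```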

10.4 Derivative of the General Quadratic Form $\boldsymbol{u}^\top \boldsymbol{C} \boldsymbol{v}$

Formula: $\dfrac{\partial}{\partial \boldsymbol{x}} (\boldsymbol{u}^\top \boldsymbol{C} \boldsymbol{v}) = \boldsymbol{B}^\top \boldsymbol{C} (\boldsymbol{D}\boldsymbol{x} + \boldsymbol{d}) + \boldsymbol{D}^\top \boldsymbol{C}^\top (\boldsymbol{B}\boldsymbol{x} + \boldsymbol{b})$
Conditions: $\boldsymbol{u} = \boldsymbol{B}\boldsymbol{x} + \boldsymbol{b}$, $\boldsymbol{v} = \boldsymbol{D}\boldsymbol{x} + \boldsymbol{d}$; $\boldsymbol{B}$, $\boldsymbol{C}$, $\boldsymbol{D}$, $\boldsymbol{b}$, $\boldsymbol{d}$ are constants
Proof

Let $\boldsymbol{u} = \boldsymbol{B}\boldsymbol{x} + \boldsymbol{b}$ and $\boldsymbol{v} = \boldsymbol{D}\boldsymbol{x} + \boldsymbol{d}$. Expand the scalar function $f = \boldsymbol{u}^\top \boldsymbol{C} \boldsymbol{v}$ in components:

\begin{equation}f = \boldsymbol{u}^\top \boldsymbol{C} \boldsymbol{v} = \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} u_i C_{ij} v_j \label{eq:10-4-1}\end{equation}

where $\boldsymbol{u} \in \mathbb{R}^m$, $\boldsymbol{v} \in \mathbb{R}^n$, and $\boldsymbol{C} \in \mathbb{R}^{m \times n}$.

Differentiating $f$ with respect to $x_k$. Since $C_{ij}$ are constants, the chain rule (1.26) requires differentiating $u_i$ and $v_j$:

\begin{equation}\dfrac{\partial f}{\partial x_k} = \sum_{i,j} \dfrac{\partial u_i}{\partial x_k} C_{ij} v_j + \sum_{i,j} u_i C_{ij} \dfrac{\partial v_j}{\partial x_k} \label{eq:10-4-2}\end{equation}

From $\boldsymbol{u} = \boldsymbol{B}\boldsymbol{x} + \boldsymbol{b}$, we have $u_i = \sum_l B_{il} x_l + b_i$. Differentiating with respect to $x_k$:

\begin{equation}\dfrac{\partial u_i}{\partial x_k} = \sum_l B_{il} \dfrac{\partial x_l}{\partial x_k} = \sum_l B_{il} \delta_{lk} = B_{ik} \label{eq:10-4-3}\end{equation}

Similarly, from $\boldsymbol{v} = \boldsymbol{D}\boldsymbol{x} + \boldsymbol{d}$, we have $v_j = \sum_l D_{jl} x_l + d_j$. Differentiating with respect to $x_k$:

\begin{equation}\dfrac{\partial v_j}{\partial x_k} = D_{jk} \label{eq:10-4-4}\end{equation}

Substituting $\eqref{eq:10-4-3}$ and $\eqref{eq:10-4-4}$ into $\eqref{eq:10-4-2}$:

\begin{equation}\dfrac{\partial f}{\partial x_k} = \sum_{i,j} B_{ik} C_{ij} v_j + \sum_{i,j} u_i C_{ij} D_{jk} \label{eq:10-4-5}\end{equation}

Computing the first term. Summing over $j$ first: $\sum_j C_{ij} v_j = (\boldsymbol{C} \boldsymbol{v})_i$.

\begin{equation}\sum_{i,j} B_{ik} C_{ij} v_j = \sum_i B_{ik} (\boldsymbol{C} \boldsymbol{v})_i \label{eq:10-4-6}\end{equation}

Writing $\eqref{eq:10-4-6}$ as a matrix product. Noting that $B_{ik} = (\boldsymbol{B}^\top)_{ki}$:

\begin{equation}\sum_i B_{ik} (\boldsymbol{C} \boldsymbol{v})_i = \sum_i (\boldsymbol{B}^\top)_{ki} (\boldsymbol{C} \boldsymbol{v})_i = (\boldsymbol{B}^\top \boldsymbol{C} \boldsymbol{v})_k \label{eq:10-4-7}\end{equation}

Computing the second term. Summing over $i$ first: $\sum_i u_i C_{ij} = (\boldsymbol{u}^\top \boldsymbol{C})_j = (\boldsymbol{C}^\top \boldsymbol{u})_j$.

\begin{equation}\sum_{i,j} u_i C_{ij} D_{jk} = \sum_j (\boldsymbol{C}^\top \boldsymbol{u})_j D_{jk} \label{eq:10-4-8}\end{equation}

Writing $\eqref{eq:10-4-8}$ as a matrix product. Noting that $D_{jk} = (\boldsymbol{D}^\top)_{kj}$:

\begin{equation}\sum_j (\boldsymbol{C}^\top \boldsymbol{u})_j D_{jk} = \sum_j (\boldsymbol{D}^\top)_{kj} (\boldsymbol{C}^\top \boldsymbol{u})_j = (\boldsymbol{D}^\top \boldsymbol{C}^\top \boldsymbol{u})_k \label{eq:10-4-9}\end{equation}

Substituting $\eqref{eq:10-4-7}$ and $\eqref{eq:10-4-9}$ into $\eqref{eq:10-4-5}$:

\begin{equation}\dfrac{\partial f}{\partial x_k} = (\boldsymbol{B}^\top \boldsymbol{C} \boldsymbol{v})_k + (\boldsymbol{D}^\top \boldsymbol{C}^\top \boldsymbol{u})_k \label{eq:10-4-10}\end{equation}

Writing $\eqref{eq:10-4-10}$ in vector form:

\begin{equation}\dfrac{\partial f}{\partial \boldsymbol{x}} = \boldsymbol{B}^\top \boldsymbol{C} \boldsymbol{v} + \boldsymbol{D}^\top \boldsymbol{C}^\top \boldsymbol{u} \label{eq:10-4-11}\end{equation}

Substituting $\boldsymbol{u} = \boldsymbol{B}\boldsymbol{x} + \boldsymbol{b}$ and $\boldsymbol{v} = \boldsymbol{D}\boldsymbol{x} + \boldsymbol{d}$ to obtain the final result:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{x}} (\boldsymbol{u}^\top \boldsymbol{C} \boldsymbol{v}) = \boldsymbol{B}^\top \boldsymbol{C} (\boldsymbol{D}\boldsymbol{x} + \boldsymbol{d}) + \boldsymbol{D}^\top \boldsymbol{C}^\top (\boldsymbol{B}\boldsymbol{x} + \boldsymbol{b}) \label{eq:10-4-12}\end{equation}

Remark: When $\boldsymbol{u} = \boldsymbol{v} = \boldsymbol{x}$ ($\boldsymbol{B} = \boldsymbol{D} = \boldsymbol{I}$, $\boldsymbol{b} = \boldsymbol{d} = \boldsymbol{0}$) and $\boldsymbol{C}$ is symmetric, we obtain $\dfrac{\partial}{\partial \boldsymbol{x}} (\boldsymbol{x}^\top \boldsymbol{C} \boldsymbol{x}) = 2\boldsymbol{C}\boldsymbol{x}$.
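
The general formula can be checked with non-square $\boldsymbol{B}$, $\boldsymbol{C}$, $\boldsymbol{D}$ (a sketch assuming NumPy; dimensions and names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p = 4, 3, 5                      # u in R^m, v in R^n, x in R^p
B = rng.standard_normal((m, p)); b = rng.standard_normal(m)
D = rng.standard_normal((n, p)); d = rng.standard_normal(n)
C = rng.standard_normal((m, n))
x = rng.standard_normal(p)

f = lambda x: (B @ x + b) @ C @ (D @ x + d)  # u^T C v with affine u, v

# Central-difference gradient with respect to x
eps = 1e-6
num = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                for e in np.eye(p)])

analytic = B.T @ C @ (D @ x + d) + D.T @ C.T @ (B @ x + b)
assert np.allclose(num, analytic, atol=1e-5)
```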

10.5 Component Derivative of the Matrix Quadratic Form $\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X}$

Formula: $\dfrac{\partial (\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X})}{\partial X_{ij}} = \boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{J}^{ij} + \boldsymbol{J}^{ji} \boldsymbol{B} \boldsymbol{X}$
Conditions: $\boldsymbol{X}$ is a matrix, $\boldsymbol{B}$ is a constant matrix, $\boldsymbol{J}^{ij}$ is the matrix with 1 only at position $(i,j)$
Proof

Let $\boldsymbol{X} \in \mathbb{R}^{M \times N}$ and $\boldsymbol{B} \in \mathbb{R}^{M \times M}$. We compute the $(k, l)$ entry of the resulting matrix $\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X} \in \mathbb{R}^{N \times N}$.

First, compute the $(p, l)$ entry of $\boldsymbol{B} \boldsymbol{X}$:

\begin{equation}(\boldsymbol{B} \boldsymbol{X})_{pl} = \sum_{q=0}^{M-1} B_{pq} X_{ql} \label{eq:10-5-1}\end{equation}

Next, compute the $(k, l)$ entry of $\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X}$. Note that $(\boldsymbol{X}^\top)_{kp} = X_{pk}$.

\begin{equation}(\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X})_{kl} = \sum_{p=0}^{M-1} (\boldsymbol{X}^\top)_{kp} (\boldsymbol{B} \boldsymbol{X})_{pl} = \sum_{p=0}^{M-1} X_{pk} (\boldsymbol{B} \boldsymbol{X})_{pl} \label{eq:10-5-2}\end{equation}

Substituting $\eqref{eq:10-5-1}$ into $\eqref{eq:10-5-2}$:

\begin{equation}(\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X})_{kl} = \sum_{p=0}^{M-1} X_{pk} \sum_{q=0}^{M-1} B_{pq} X_{ql} = \sum_{p=0}^{M-1} \sum_{q=0}^{M-1} X_{pk} B_{pq} X_{ql} \label{eq:10-5-3}\end{equation}

Differentiating $\eqref{eq:10-5-3}$ with respect to $X_{ij}$. Since $B_{pq}$ are constants, only $X_{pk}$ and $X_{ql}$ are subject to differentiation. Applying the product rule (1.25):

\begin{equation}\dfrac{\partial}{\partial X_{ij}} (X_{pk} B_{pq} X_{ql}) = \dfrac{\partial X_{pk}}{\partial X_{ij}} B_{pq} X_{ql} + X_{pk} B_{pq} \dfrac{\partial X_{ql}}{\partial X_{ij}} \label{eq:10-5-4}\end{equation}

$\dfrac{\partial X_{pk}}{\partial X_{ij}}$ equals 1 only when $(p, k) = (i, j)$, and 0 otherwise.

\begin{equation}\dfrac{\partial X_{pk}}{\partial X_{ij}} = \delta_{pi} \delta_{kj} \label{eq:10-5-5}\end{equation}

Similarly, $\dfrac{\partial X_{ql}}{\partial X_{ij}}$ equals 1 only when $(q, l) = (i, j)$.

\begin{equation}\dfrac{\partial X_{ql}}{\partial X_{ij}} = \delta_{qi} \delta_{lj} \label{eq:10-5-6}\end{equation}

Substituting $\eqref{eq:10-5-5}$ and $\eqref{eq:10-5-6}$ into $\eqref{eq:10-5-4}$:

\begin{equation}\dfrac{\partial}{\partial X_{ij}} (X_{pk} B_{pq} X_{ql}) = \delta_{pi} \delta_{kj} B_{pq} X_{ql} + X_{pk} B_{pq} \delta_{qi} \delta_{lj} \label{eq:10-5-7}\end{equation}

Computing the derivative of $\eqref{eq:10-5-3}$ by substituting $\eqref{eq:10-5-7}$ and summing:

\begin{equation}\dfrac{\partial (\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X})_{kl}}{\partial X_{ij}} = \sum_{p,q} \delta_{pi} \delta_{kj} B_{pq} X_{ql} + \sum_{p,q} X_{pk} B_{pq} \delta_{qi} \delta_{lj} \label{eq:10-5-8}\end{equation}

Computing the first term. $\delta_{pi}$ retains only $p = i$, and $\delta_{kj}$ equals 1 only when $k = j$.

\begin{equation}\sum_{p,q} \delta_{pi} \delta_{kj} B_{pq} X_{ql} = \delta_{kj} \sum_{q} B_{iq} X_{ql} = \delta_{kj} (\boldsymbol{B} \boldsymbol{X})_{il} \label{eq:10-5-9}\end{equation}

Computing the second term. $\delta_{qi}$ retains only $q = i$, and $\delta_{lj}$ equals 1 only when $l = j$.

\begin{equation}\sum_{p,q} X_{pk} B_{pq} \delta_{qi} \delta_{lj} = \delta_{lj} \sum_{p} X_{pk} B_{pi} \label{eq:10-5-10}\end{equation}

Writing $\eqref{eq:10-5-10}$ as a matrix product: $\sum_p X_{pk} B_{pi} = \sum_p (\boldsymbol{X}^\top)_{kp} B_{pi} = (\boldsymbol{X}^\top \boldsymbol{B})_{ki}$.

\begin{equation}\delta_{lj} \sum_{p} X_{pk} B_{pi} = \delta_{lj} (\boldsymbol{X}^\top \boldsymbol{B})_{ki} \label{eq:10-5-11}\end{equation}

Substituting $\eqref{eq:10-5-9}$ and $\eqref{eq:10-5-11}$ into $\eqref{eq:10-5-8}$:

\begin{equation}\dfrac{\partial (\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X})_{kl}}{\partial X_{ij}} = \delta_{kj} (\boldsymbol{B} \boldsymbol{X})_{il} + \delta_{lj} (\boldsymbol{X}^\top \boldsymbol{B})_{ki} \label{eq:10-5-12}\end{equation}

$\boldsymbol{J}^{ij}$ is the single-entry matrix with 1 only at position $(i, j)$ and 0 elsewhere. Its transpose $\boldsymbol{J}^{ji}$ has 1 only at position $(j, i)$.

Computing $(\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{J}^{ij})_{kl}$. The $l$-th column of $\boldsymbol{J}^{ij}$ is $\boldsymbol{e}_i$ (the $i$-th standard basis vector) when $l = j$, and the zero vector otherwise.

\begin{equation}(\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{J}^{ij})_{kl} = \sum_{m} (\boldsymbol{X}^\top \boldsymbol{B})_{km} J^{ij}_{ml} = (\boldsymbol{X}^\top \boldsymbol{B})_{ki} \delta_{lj} \label{eq:10-5-13}\end{equation}

Computing $(\boldsymbol{J}^{ji} \boldsymbol{B} \boldsymbol{X})_{kl}$. The $k$-th row of $\boldsymbol{J}^{ji}$ is $\boldsymbol{e}_i^\top$ when $k = j$, and the zero vector otherwise.

\begin{equation}(\boldsymbol{J}^{ji} \boldsymbol{B} \boldsymbol{X})_{kl} = \sum_{m} J^{ji}_{km} (\boldsymbol{B} \boldsymbol{X})_{ml} = \delta_{kj} (\boldsymbol{B} \boldsymbol{X})_{il} \label{eq:10-5-14}\end{equation}

Comparing $\eqref{eq:10-5-12}$, $\eqref{eq:10-5-13}$, and $\eqref{eq:10-5-14}$, the entries match for all $(k, l)$:

\begin{equation}\dfrac{\partial (\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X})}{\partial X_{ij}} = \boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{J}^{ij} + \boldsymbol{J}^{ji} \boldsymbol{B} \boldsymbol{X} \label{eq:10-5-15}\end{equation}

Remark: $\boldsymbol{J}^{ij}$ is called the single-entry matrix and can also be written as the outer product $\boldsymbol{J}^{ij} = \boldsymbol{e}_i \boldsymbol{e}_j^\top$. When $\boldsymbol{B}$ is symmetric, the two terms on the right-hand side are transposes of each other, so the derivative is a symmetric matrix, as expected since $\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X}$ is then itself symmetric.
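
Since the derivative here is matrix-valued, the check compares the entrywise finite difference of $\boldsymbol{X}^\top \boldsymbol{B} \boldsymbol{X}$ against the single-entry-matrix expression (a sketch assuming NumPy; names like `J_ij` are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 4, 3
X = rng.standard_normal((M, N))
B = rng.standard_normal((M, M))
i, j = 2, 1                        # the entry of X we perturb

F = lambda X: X.T @ B @ X          # N x N matrix-valued function

# Central difference of every entry of F with respect to X_ij
eps = 1e-6
E = np.zeros((M, N)); E[i, j] = eps
num = (F(X + E) - F(X - E)) / (2 * eps)

J_ij = np.zeros((M, N)); J_ij[i, j] = 1.0    # single-entry matrix e_i e_j^T
analytic = X.T @ B @ J_ij + J_ij.T @ B @ X   # J^{ji} = (J^{ij})^T
assert np.allclose(num, analytic, atol=1e-5)
```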

10.6 Derivative of the Vector Quadratic Form $\boldsymbol{x}^\top \boldsymbol{B} \boldsymbol{x}$

Formula: $\dfrac{\partial \boldsymbol{x}^\top \boldsymbol{B} \boldsymbol{x}}{\partial \boldsymbol{x}} = (\boldsymbol{B} + \boldsymbol{B}^\top)\boldsymbol{x}$
Conditions: $\boldsymbol{x} \in \mathbb{R}^n$, $\boldsymbol{B} \in \mathbb{R}^{n \times n}$ (general square matrix)
Proof

Expand the scalar function $f = \boldsymbol{x}^\top \boldsymbol{B} \boldsymbol{x}$ in components. First, the $i$-th component of $\boldsymbol{B} \boldsymbol{x}$ is

\begin{equation}(\boldsymbol{B} \boldsymbol{x})_i = \sum_{j=0}^{n-1} B_{ij} x_j \label{eq:10-6-1}\end{equation}

Multiplying from the left by $\boldsymbol{x}^\top$:

\begin{equation}f = \boldsymbol{x}^\top \boldsymbol{B} \boldsymbol{x} = \sum_{i=0}^{n-1} x_i (\boldsymbol{B} \boldsymbol{x})_i \label{eq:10-6-2}\end{equation}

Substituting $\eqref{eq:10-6-1}$ into $\eqref{eq:10-6-2}$:

\begin{equation}f = \sum_{i=0}^{n-1} x_i \sum_{j=0}^{n-1} B_{ij} x_j = \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} x_i B_{ij} x_j \label{eq:10-6-3}\end{equation}

Differentiating $\eqref{eq:10-6-3}$ with respect to $x_k$. Since $B_{ij}$ are constants, only $x_i$ and $x_j$ are subject to differentiation. Applying the product rule (1.25) to the product $x_i B_{ij} x_j$:

\begin{equation}\dfrac{\partial}{\partial x_k} (x_i B_{ij} x_j) = \dfrac{\partial x_i}{\partial x_k} B_{ij} x_j + x_i B_{ij} \dfrac{\partial x_j}{\partial x_k} \label{eq:10-6-4}\end{equation}

$\dfrac{\partial x_i}{\partial x_k}$ equals 1 only when $i = k$, and 0 otherwise.

\begin{equation}\dfrac{\partial x_i}{\partial x_k} = \delta_{ik} \label{eq:10-6-5}\end{equation}

Similarly, $\dfrac{\partial x_j}{\partial x_k} = \delta_{jk}$.

\begin{equation}\dfrac{\partial x_j}{\partial x_k} = \delta_{jk} \label{eq:10-6-6}\end{equation}

Substituting $\eqref{eq:10-6-5}$ and $\eqref{eq:10-6-6}$ into $\eqref{eq:10-6-4}$:

\begin{equation}\dfrac{\partial}{\partial x_k} (x_i B_{ij} x_j) = \delta_{ik} B_{ij} x_j + x_i B_{ij} \delta_{jk} \label{eq:10-6-7}\end{equation}

Computing the derivative of $\eqref{eq:10-6-3}$ by substituting $\eqref{eq:10-6-7}$ and summing:

\begin{equation}\dfrac{\partial f}{\partial x_k} = \sum_{i,j} \delta_{ik} B_{ij} x_j + \sum_{i,j} x_i B_{ij} \delta_{jk} \label{eq:10-6-8}\end{equation}

Computing the first term. $\delta_{ik}$ retains only $i = k$:

\begin{equation}\sum_{i,j} \delta_{ik} B_{ij} x_j = \sum_{j} B_{kj} x_j = (\boldsymbol{B} \boldsymbol{x})_k \label{eq:10-6-9}\end{equation}

Computing the second term. $\delta_{jk}$ retains only $j = k$:

\begin{equation}\sum_{i,j} x_i B_{ij} \delta_{jk} = \sum_{i} x_i B_{ik} \label{eq:10-6-10}\end{equation}

Writing $\eqref{eq:10-6-10}$ as a matrix product: $\sum_i x_i B_{ik} = \sum_i B_{ik} x_i = \sum_i (\boldsymbol{B}^\top)_{ki} x_i = (\boldsymbol{B}^\top \boldsymbol{x})_k$.

\begin{equation}\sum_{i} x_i B_{ik} = (\boldsymbol{B}^\top \boldsymbol{x})_k \label{eq:10-6-11}\end{equation}

Substituting $\eqref{eq:10-6-9}$ and $\eqref{eq:10-6-11}$ into $\eqref{eq:10-6-8}$:

\begin{equation}\dfrac{\partial f}{\partial x_k} = (\boldsymbol{B} \boldsymbol{x})_k + (\boldsymbol{B}^\top \boldsymbol{x})_k \label{eq:10-6-12}\end{equation}

Combining the right-hand side of $\eqref{eq:10-6-12}$. Components of a matrix sum equal the sum of the components.

\begin{equation}\dfrac{\partial f}{\partial x_k} = ((\boldsymbol{B} + \boldsymbol{B}^\top) \boldsymbol{x})_k \label{eq:10-6-13}\end{equation}

Since $\eqref{eq:10-6-13}$ holds for all $k = 0, \ldots, n-1$, in vector form:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{x}} (\boldsymbol{x}^\top \boldsymbol{B} \boldsymbol{x}) = (\boldsymbol{B} + \boldsymbol{B}^\top) \boldsymbol{x} \label{eq:10-6-14}\end{equation}

Remark: When $\boldsymbol{B}$ is symmetric ($\boldsymbol{B} = \boldsymbol{B}^\top$), we have $\boldsymbol{B} + \boldsymbol{B}^\top = 2\boldsymbol{B}$, yielding $\dfrac{\partial}{\partial \boldsymbol{x}} (\boldsymbol{x}^\top \boldsymbol{B} \boldsymbol{x}) = 2\boldsymbol{B}\boldsymbol{x}$.
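
A quick numerical confirmation with a deliberately non-symmetric $\boldsymbol{B}$ (a sketch assuming NumPy; names are ours):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
B = rng.standard_normal((n, n))    # non-symmetric on purpose
x = rng.standard_normal(n)

f = lambda x: x @ B @ x            # x^T B x

# Central-difference gradient with respect to x
eps = 1e-6
num = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                for e in np.eye(n)])

analytic = (B + B.T) @ x
assert np.allclose(num, analytic, atol=1e-5)
```

Note that `B @ x` alone would be wrong unless `B` is symmetric; the transpose term is essential in the general case.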

10.7 Derivative of the Generalized Bilinear Gram Form

Formula: $\dfrac{\partial \boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{D} \boldsymbol{X} \boldsymbol{c}}{\partial \boldsymbol{X}} = \boldsymbol{D}^\top \boldsymbol{X} \boldsymbol{b} \boldsymbol{c}^\top + \boldsymbol{D} \boldsymbol{X} \boldsymbol{c} \boldsymbol{b}^\top$
Conditions: $\boldsymbol{X} \in \mathbb{R}^{m \times n}$, $\boldsymbol{D} \in \mathbb{R}^{m \times m}$, $\boldsymbol{b}, \boldsymbol{c} \in \mathbb{R}^n$
Proof

Let $\boldsymbol{u} = \boldsymbol{X}\boldsymbol{b} \in \mathbb{R}^m$ and $\boldsymbol{v} = \boldsymbol{X}\boldsymbol{c} \in \mathbb{R}^m$. Expand the scalar function $f = \boldsymbol{u}^\top \boldsymbol{D} \boldsymbol{v}$ in components.

First, the $p$-th component of $\boldsymbol{D} \boldsymbol{v}$ is

\begin{equation}(\boldsymbol{D} \boldsymbol{v})_p = \sum_{q=0}^{m-1} D_{pq} v_q \label{eq:10-7-1}\end{equation}

Multiplying from the left by $\boldsymbol{u}^\top$:

\begin{equation}f = \boldsymbol{u}^\top \boldsymbol{D} \boldsymbol{v} = \sum_{p=0}^{m-1} u_p (\boldsymbol{D} \boldsymbol{v})_p = \sum_{p=0}^{m-1} \sum_{q=0}^{m-1} u_p D_{pq} v_q \label{eq:10-7-2}\end{equation}

From $\boldsymbol{u} = \boldsymbol{X}\boldsymbol{b}$, the component form of $u_p$ is

\begin{equation}u_p = (\boldsymbol{X}\boldsymbol{b})_p = \sum_{r=0}^{n-1} X_{pr} b_r \label{eq:10-7-3}\end{equation}

Similarly, from $\boldsymbol{v} = \boldsymbol{X}\boldsymbol{c}$, the component form of $v_q$ is

\begin{equation}v_q = (\boldsymbol{X}\boldsymbol{c})_q = \sum_{s=0}^{n-1} X_{qs} c_s \label{eq:10-7-4}\end{equation}

Differentiating $f$ with respect to $X_{kl}$. Since $D_{pq}$, $b_r$, $c_s$ are constants, the chain rule (1.26) requires differentiating $u_p$ and $v_q$:

\begin{equation}\dfrac{\partial f}{\partial X_{kl}} = \sum_{p,q} \dfrac{\partial u_p}{\partial X_{kl}} D_{pq} v_q + \sum_{p,q} u_p D_{pq} \dfrac{\partial v_q}{\partial X_{kl}} \label{eq:10-7-5}\end{equation}

From $\eqref{eq:10-7-3}$, differentiating $u_p = \sum_r X_{pr} b_r$ with respect to $X_{kl}$:

\begin{equation}\dfrac{\partial u_p}{\partial X_{kl}} = \sum_{r=0}^{n-1} \dfrac{\partial X_{pr}}{\partial X_{kl}} b_r = \sum_{r=0}^{n-1} \delta_{pk} \delta_{rl} b_r = \delta_{pk} b_l \label{eq:10-7-6}\end{equation}

Similarly, from $\eqref{eq:10-7-4}$, differentiating $v_q = \sum_s X_{qs} c_s$ with respect to $X_{kl}$:

\begin{equation}\dfrac{\partial v_q}{\partial X_{kl}} = \sum_{s=0}^{n-1} \dfrac{\partial X_{qs}}{\partial X_{kl}} c_s = \sum_{s=0}^{n-1} \delta_{qk} \delta_{sl} c_s = \delta_{qk} c_l \label{eq:10-7-7}\end{equation}

Substituting $\eqref{eq:10-7-6}$ and $\eqref{eq:10-7-7}$ into $\eqref{eq:10-7-5}$:

\begin{equation}\dfrac{\partial f}{\partial X_{kl}} = \sum_{p,q} \delta_{pk} b_l D_{pq} v_q + \sum_{p,q} u_p D_{pq} \delta_{qk} c_l \label{eq:10-7-8}\end{equation}

Computing the first term. $\delta_{pk}$ retains only $p = k$:

\begin{equation}\sum_{p,q} \delta_{pk} b_l D_{pq} v_q = b_l \sum_{q} D_{kq} v_q = b_l (\boldsymbol{D} \boldsymbol{v})_k \label{eq:10-7-9}\end{equation}

Substituting $\boldsymbol{v} = \boldsymbol{X}\boldsymbol{c}$:

\begin{equation}b_l (\boldsymbol{D} \boldsymbol{v})_k = b_l (\boldsymbol{D} \boldsymbol{X} \boldsymbol{c})_k \label{eq:10-7-10}\end{equation}

Computing the second term. $\delta_{qk}$ retains only $q = k$:

\begin{equation}\sum_{p,q} u_p D_{pq} \delta_{qk} c_l = c_l \sum_{p} u_p D_{pk} \label{eq:10-7-11}\end{equation}

Writing $\eqref{eq:10-7-11}$ as a matrix product: $\sum_p u_p D_{pk} = \sum_p D_{pk} u_p = \sum_p (\boldsymbol{D}^\top)_{kp} u_p = (\boldsymbol{D}^\top \boldsymbol{u})_k$.

\begin{equation}c_l \sum_{p} u_p D_{pk} = c_l (\boldsymbol{D}^\top \boldsymbol{u})_k \label{eq:10-7-12}\end{equation}

Substituting $\boldsymbol{u} = \boldsymbol{X}\boldsymbol{b}$:

\begin{equation}c_l (\boldsymbol{D}^\top \boldsymbol{u})_k = c_l (\boldsymbol{D}^\top \boldsymbol{X} \boldsymbol{b})_k \label{eq:10-7-13}\end{equation}

Substituting $\eqref{eq:10-7-10}$ and $\eqref{eq:10-7-13}$ into $\eqref{eq:10-7-8}$:

\begin{equation}\dfrac{\partial f}{\partial X_{kl}} = (\boldsymbol{D} \boldsymbol{X} \boldsymbol{c})_k b_l + (\boldsymbol{D}^\top \boldsymbol{X} \boldsymbol{b})_k c_l \label{eq:10-7-14}\end{equation}

Writing $\eqref{eq:10-7-14}$ in matrix form. The $(k, l)$ entry of the outer product $\boldsymbol{u} \boldsymbol{v}^\top$ is $u_k v_l$:

\begin{equation}(\boldsymbol{D} \boldsymbol{X} \boldsymbol{c})_k b_l = (\boldsymbol{D} \boldsymbol{X} \boldsymbol{c} \boldsymbol{b}^\top)_{kl} \label{eq:10-7-15}\end{equation}

\begin{equation}(\boldsymbol{D}^\top \boldsymbol{X} \boldsymbol{b})_k c_l = (\boldsymbol{D}^\top \boldsymbol{X} \boldsymbol{b} \boldsymbol{c}^\top)_{kl} \label{eq:10-7-16}\end{equation}

Combining $\eqref{eq:10-7-15}$ and $\eqref{eq:10-7-16}$ for the final result in matrix form:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{X}} (\boldsymbol{b}^\top \boldsymbol{X}^\top \boldsymbol{D} \boldsymbol{X} \boldsymbol{c}) = \boldsymbol{D}^\top \boldsymbol{X} \boldsymbol{b} \boldsymbol{c}^\top + \boldsymbol{D} \boldsymbol{X} \boldsymbol{c} \boldsymbol{b}^\top \label{eq:10-7-17}\end{equation}

Remark: When $\boldsymbol{D} = \boldsymbol{I}$, we have $\boldsymbol{D}^\top = \boldsymbol{D} = \boldsymbol{I}$, so the result reduces to the Gram matrix bilinear form result of 10.3: $\boldsymbol{X} (\boldsymbol{b} \boldsymbol{c}^\top + \boldsymbol{c} \boldsymbol{b}^\top)$.
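The closed form $\eqref{eq:10-7-17}$ can be sanity-checked against central finite differences. A minimal sketch, assuming NumPy is available; the dimensions, seed, and tolerance are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
X = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))   # deliberately non-symmetric
b = rng.standard_normal(n)
c = rng.standard_normal(n)

def f(X):
    # f = b^T X^T D X c
    return b @ X.T @ D @ X @ c

# closed form: D^T X b c^T + D X c b^T
analytic = D.T @ X @ np.outer(b, c) + D @ X @ np.outer(c, b)

# entrywise central differences over X
eps = 1e-6
numeric = np.zeros_like(X)
for k in range(m):
    for l in range(n):
        E = np.zeros_like(X)
        E[k, l] = eps
        numeric[k, l] = (f(X + E) - f(X - E)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-6)
```

Since $f$ is quadratic in the entries of $\boldsymbol{X}$, the central difference is exact up to floating-point roundoff.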

10.8 Derivative of the Affine Quadratic Form

Formula: $\dfrac{\partial}{\partial \boldsymbol{X}} (\boldsymbol{X}\boldsymbol{b} + \boldsymbol{c})^\top \boldsymbol{D} (\boldsymbol{X}\boldsymbol{b} + \boldsymbol{c}) = (\boldsymbol{D} + \boldsymbol{D}^\top)(\boldsymbol{X}\boldsymbol{b} + \boldsymbol{c})\boldsymbol{b}^\top$
Conditions: $\boldsymbol{X} \in \mathbb{R}^{m \times n}$, $\boldsymbol{b} \in \mathbb{R}^n$, $\boldsymbol{c} \in \mathbb{R}^m$, $\boldsymbol{D} \in \mathbb{R}^{m \times m}$
Proof

Let $\boldsymbol{u} = \boldsymbol{X}\boldsymbol{b} + \boldsymbol{c} \in \mathbb{R}^m$. Consider the scalar function $f = \boldsymbol{u}^\top \boldsymbol{D} \boldsymbol{u}$.

Computing the $i$-th component of $\boldsymbol{u}$: since $(\boldsymbol{X}\boldsymbol{b})_i = \sum_{j=0}^{n-1} X_{ij} b_j$,

\begin{equation}u_i = (\boldsymbol{X}\boldsymbol{b})_i + c_i = \sum_{j=0}^{n-1} X_{ij} b_j + c_i \label{eq:10-8-1}\end{equation}

Differentiating $u_i$ with respect to $X_{kl}$. Since $c_i$ is a constant, its derivative is 0.

\begin{equation}\dfrac{\partial u_i}{\partial X_{kl}} = \sum_{j=0}^{n-1} \dfrac{\partial X_{ij}}{\partial X_{kl}} b_j = \sum_{j=0}^{n-1} \delta_{ik} \delta_{jl} b_j = \delta_{ik} b_l \label{eq:10-8-2}\end{equation}

By the result of 10.6, the gradient of $f = \boldsymbol{u}^\top \boldsymbol{D} \boldsymbol{u}$ with respect to $\boldsymbol{u}$ is

\begin{equation}\dfrac{\partial f}{\partial \boldsymbol{u}} = (\boldsymbol{D} + \boldsymbol{D}^\top) \boldsymbol{u} \label{eq:10-8-3}\end{equation}

Applying the chain rule (1.26). Differentiating $f$ with respect to $X_{kl}$:

\begin{equation}\dfrac{\partial f}{\partial X_{kl}} = \sum_{i=0}^{m-1} \dfrac{\partial f}{\partial u_i} \dfrac{\partial u_i}{\partial X_{kl}} \label{eq:10-8-4}\end{equation}

From $\eqref{eq:10-8-3}$, $\dfrac{\partial f}{\partial u_i} = ((\boldsymbol{D} + \boldsymbol{D}^\top) \boldsymbol{u})_i$. Substituting $\eqref{eq:10-8-2}$ into $\eqref{eq:10-8-4}$:

\begin{equation}\dfrac{\partial f}{\partial X_{kl}} = \sum_{i=0}^{m-1} ((\boldsymbol{D} + \boldsymbol{D}^\top) \boldsymbol{u})_i \cdot \delta_{ik} b_l \label{eq:10-8-5}\end{equation}

The Kronecker delta $\delta_{ik}$ retains only the $i = k$ term:

\begin{equation}\dfrac{\partial f}{\partial X_{kl}} = ((\boldsymbol{D} + \boldsymbol{D}^\top) \boldsymbol{u})_k \cdot b_l \label{eq:10-8-6}\end{equation}

Writing $\eqref{eq:10-8-6}$ in matrix form. The $(k, l)$ entry of the outer product $\boldsymbol{v} \boldsymbol{w}^\top$ is $v_k w_l$:

\begin{equation}((\boldsymbol{D} + \boldsymbol{D}^\top) \boldsymbol{u})_k \cdot b_l = ((\boldsymbol{D} + \boldsymbol{D}^\top) \boldsymbol{u} \boldsymbol{b}^\top)_{kl} \label{eq:10-8-7}\end{equation}

Substituting $\boldsymbol{u} = \boldsymbol{X}\boldsymbol{b} + \boldsymbol{c}$ to obtain the final result:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{X}} (\boldsymbol{X}\boldsymbol{b} + \boldsymbol{c})^\top \boldsymbol{D} (\boldsymbol{X}\boldsymbol{b} + \boldsymbol{c}) = (\boldsymbol{D} + \boldsymbol{D}^\top)(\boldsymbol{X}\boldsymbol{b} + \boldsymbol{c})\boldsymbol{b}^\top \label{eq:10-8-8}\end{equation}

Remark: This formula frequently appears in deriving normal equations for least squares. When $\boldsymbol{D} = \boldsymbol{I}$, we have $\boldsymbol{D} + \boldsymbol{D}^\top = 2\boldsymbol{I}$, yielding $\dfrac{\partial}{\partial \boldsymbol{X}} \|\boldsymbol{X}\boldsymbol{b} + \boldsymbol{c}\|^2 = 2(\boldsymbol{X}\boldsymbol{b} + \boldsymbol{c})\boldsymbol{b}^\top$.
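The formula $\eqref{eq:10-8-8}$ admits the same kind of finite-difference check. A sketch assuming NumPy; sizes and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
X = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))   # deliberately non-symmetric
b = rng.standard_normal(n)
c = rng.standard_normal(m)

# f = (X b + c)^T D (X b + c)
f = lambda X: (X @ b + c) @ D @ (X @ b + c)

# closed form: (D + D^T)(X b + c) b^T
analytic = np.outer((D + D.T) @ (X @ b + c), b)

eps = 1e-6
numeric = np.zeros_like(X)
for k in range(m):
    for l in range(n):
        E = np.zeros_like(X)
        E[k, l] = eps
        numeric[k, l] = (f(X + E) - f(X - E)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-6)
```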

10.9 Symmetric Matrix Quadratic Form ($\boldsymbol{x}$ derivative)

Formula: $\dfrac{\partial}{\partial \boldsymbol{x}} (\boldsymbol{x} - \boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{s}) = 2\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{s})$
Conditions: $\boldsymbol{x}, \boldsymbol{s} \in \mathbb{R}^n$, $\boldsymbol{W} \in \mathbb{R}^{n \times n}$ is symmetric ($\boldsymbol{W} = \boldsymbol{W}^\top$)
Proof

Let $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{s} \in \mathbb{R}^n$. Consider the scalar function $f = \boldsymbol{u}^\top \boldsymbol{W} \boldsymbol{u}$.

The $i$-th component of $\boldsymbol{u}$ is

\begin{equation}u_i = x_i - s_i \label{eq:10-9-1}\end{equation}

Differentiating $u_i$ with respect to $x_k$. Since $s_i$ is a constant independent of $\boldsymbol{x}$:

\begin{equation}\dfrac{\partial u_i}{\partial x_k} = \dfrac{\partial x_i}{\partial x_k} = \delta_{ik} \label{eq:10-9-2}\end{equation}

In vector form, this is $\dfrac{\partial \boldsymbol{u}}{\partial \boldsymbol{x}} = \boldsymbol{I}$ (the identity matrix).

By the result of 10.6, the gradient of $f = \boldsymbol{u}^\top \boldsymbol{W} \boldsymbol{u}$ with respect to $\boldsymbol{u}$ is

\begin{equation}\dfrac{\partial f}{\partial \boldsymbol{u}} = (\boldsymbol{W} + \boldsymbol{W}^\top) \boldsymbol{u} \label{eq:10-9-3}\end{equation}

Since $\boldsymbol{W}$ is symmetric ($\boldsymbol{W} = \boldsymbol{W}^\top$):

\begin{equation}\boldsymbol{W} + \boldsymbol{W}^\top = \boldsymbol{W} + \boldsymbol{W} = 2\boldsymbol{W} \label{eq:10-9-4}\end{equation}

Substituting $\eqref{eq:10-9-4}$ into $\eqref{eq:10-9-3}$:

\begin{equation}\dfrac{\partial f}{\partial \boldsymbol{u}} = 2\boldsymbol{W} \boldsymbol{u} \label{eq:10-9-5}\end{equation}

Applying the chain rule (1.26). Differentiating $f$ with respect to $x_k$:

\begin{equation}\dfrac{\partial f}{\partial x_k} = \sum_{i=0}^{n-1} \dfrac{\partial f}{\partial u_i} \dfrac{\partial u_i}{\partial x_k} \label{eq:10-9-6}\end{equation}

Substituting $\eqref{eq:10-9-2}$ and $\eqref{eq:10-9-5}$ into $\eqref{eq:10-9-6}$:

\begin{equation}\dfrac{\partial f}{\partial x_k} = \sum_{i=0}^{n-1} (2\boldsymbol{W} \boldsymbol{u})_i \cdot \delta_{ik} = (2\boldsymbol{W} \boldsymbol{u})_k \label{eq:10-9-7}\end{equation}

Substituting $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{s}$ to obtain the final result in vector form:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{x}} (\boldsymbol{x} - \boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{s}) = 2\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{s}) \label{eq:10-9-8}\end{equation}
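A quick numerical check of $\eqref{eq:10-9-8}$, sketched with NumPy (an assumption); $\boldsymbol{W}$ is symmetrized explicitly, as the conditions require:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal(n)
s = rng.standard_normal(n)
M = rng.standard_normal((n, n))
W = M + M.T                      # symmetric W

# f = (x - s)^T W (x - s), as a function of x
f = lambda x: (x - s) @ W @ (x - s)
analytic = 2 * W @ (x - s)

# central differences along each coordinate direction
eps = 1e-6
numeric = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(n)])
assert np.allclose(analytic, numeric, atol=1e-6)
```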

10.10 Symmetric Matrix Quadratic Form ($\boldsymbol{s}$ derivative)

Formula: $\dfrac{\partial}{\partial \boldsymbol{s}} (\boldsymbol{x} - \boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{s}) = -2\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{s})$
Conditions: $\boldsymbol{x}, \boldsymbol{s} \in \mathbb{R}^n$, $\boldsymbol{W} = \boldsymbol{W}^\top$
Proof

Let $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{s} \in \mathbb{R}^n$. Consider the scalar function $f = \boldsymbol{u}^\top \boldsymbol{W} \boldsymbol{u}$.

The $i$-th component of $\boldsymbol{u}$ is

\begin{equation}u_i = x_i - s_i \label{eq:10-10-1}\end{equation}

Differentiating $u_i$ with respect to $s_k$. Since $x_i$ is a constant independent of $\boldsymbol{s}$:

\begin{equation}\dfrac{\partial u_i}{\partial s_k} = -\dfrac{\partial s_i}{\partial s_k} = -\delta_{ik} \label{eq:10-10-2}\end{equation}

In vector form, this is $\dfrac{\partial \boldsymbol{u}}{\partial \boldsymbol{s}} = -\boldsymbol{I}$.

As in 10.9, since $\boldsymbol{W}$ is symmetric:

\begin{equation}\dfrac{\partial f}{\partial \boldsymbol{u}} = 2\boldsymbol{W} \boldsymbol{u} \label{eq:10-10-3}\end{equation}

Applying the chain rule (1.26). Differentiating $f$ with respect to $s_k$:

\begin{equation}\dfrac{\partial f}{\partial s_k} = \sum_{i=0}^{n-1} \dfrac{\partial f}{\partial u_i} \dfrac{\partial u_i}{\partial s_k} \label{eq:10-10-4}\end{equation}

Substituting $\eqref{eq:10-10-2}$ and $\eqref{eq:10-10-3}$ into $\eqref{eq:10-10-4}$:

\begin{equation}\dfrac{\partial f}{\partial s_k} = \sum_{i=0}^{n-1} (2\boldsymbol{W} \boldsymbol{u})_i \cdot (-\delta_{ik}) = -(2\boldsymbol{W} \boldsymbol{u})_k \label{eq:10-10-5}\end{equation}

Substituting $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{s}$ to obtain the final result in vector form:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{s}} (\boldsymbol{x} - \boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{s}) = -2\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{s}) \label{eq:10-10-6}\end{equation}
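The sign flip in $\eqref{eq:10-10-6}$ relative to 10.9 can be confirmed numerically; a NumPy sketch (an assumption), differentiating with respect to $\boldsymbol{s}$ this time:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal(n)
s = rng.standard_normal(n)
M = rng.standard_normal((n, n))
W = M + M.T                      # symmetric W

# same quadratic form, now viewed as a function of s
f = lambda s: (x - s) @ W @ (x - s)
analytic = -2 * W @ (x - s)

eps = 1e-6
numeric = np.array([(f(s + eps * e) - f(s - eps * e)) / (2 * eps)
                    for e in np.eye(n)])
assert np.allclose(analytic, numeric, atol=1e-6)
```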

10.11 Affine Quadratic Form ($\boldsymbol{x}$ derivative)

Formula: $\dfrac{\partial}{\partial \boldsymbol{x}} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}) = 2\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})$
Conditions: $\boldsymbol{x} \in \mathbb{R}^m$, $\boldsymbol{s} \in \mathbb{R}^n$, $\boldsymbol{A} \in \mathbb{R}^{m \times n}$, $\boldsymbol{W} = \boldsymbol{W}^\top$
Proof

Let $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{A}\boldsymbol{s} \in \mathbb{R}^m$. Consider the scalar function $f = \boldsymbol{u}^\top \boldsymbol{W} \boldsymbol{u}$.

Computing the $i$-th component of $\boldsymbol{u}$: since $(\boldsymbol{A}\boldsymbol{s})_i = \sum_{j=0}^{n-1} A_{ij} s_j$,

\begin{equation}u_i = x_i - (\boldsymbol{A}\boldsymbol{s})_i = x_i - \sum_{j=0}^{n-1} A_{ij} s_j \label{eq:10-11-1}\end{equation}

Differentiating $u_i$ with respect to $x_k$. Since $\boldsymbol{A}\boldsymbol{s}$ is a constant vector independent of $\boldsymbol{x}$:

\begin{equation}\dfrac{\partial u_i}{\partial x_k} = \dfrac{\partial x_i}{\partial x_k} = \delta_{ik} \label{eq:10-11-2}\end{equation}

In vector form, this is $\dfrac{\partial \boldsymbol{u}}{\partial \boldsymbol{x}} = \boldsymbol{I}$.

Since $\boldsymbol{W}$ is symmetric, as in 10.9:

\begin{equation}\dfrac{\partial f}{\partial \boldsymbol{u}} = 2\boldsymbol{W} \boldsymbol{u} \label{eq:10-11-3}\end{equation}

Applying the chain rule (1.26). Differentiating $f$ with respect to $x_k$:

\begin{equation}\dfrac{\partial f}{\partial x_k} = \sum_{i=0}^{m-1} \dfrac{\partial f}{\partial u_i} \dfrac{\partial u_i}{\partial x_k} \label{eq:10-11-4}\end{equation}

Substituting $\eqref{eq:10-11-2}$ and $\eqref{eq:10-11-3}$ into $\eqref{eq:10-11-4}$:

\begin{equation}\dfrac{\partial f}{\partial x_k} = \sum_{i=0}^{m-1} (2\boldsymbol{W} \boldsymbol{u})_i \cdot \delta_{ik} = (2\boldsymbol{W} \boldsymbol{u})_k \label{eq:10-11-5}\end{equation}

Substituting $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}$ to obtain the final result in vector form:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{x}} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}) = 2\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}) \label{eq:10-11-6}\end{equation}
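As before, $\eqref{eq:10-11-6}$ can be checked with central differences. A NumPy sketch (an assumption) with rectangular $\boldsymbol{A}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
x = rng.standard_normal(m)
s = rng.standard_normal(n)
A = rng.standard_normal((m, n))
M = rng.standard_normal((m, m))
W = M + M.T                      # symmetric W

# f = (x - A s)^T W (x - A s), as a function of x
f = lambda x: (x - A @ s) @ W @ (x - A @ s)
analytic = 2 * W @ (x - A @ s)

eps = 1e-6
numeric = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                    for e in np.eye(m)])
assert np.allclose(analytic, numeric, atol=1e-6)
```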

10.12 Affine Quadratic Form ($\boldsymbol{s}$ derivative)

Formula: $\dfrac{\partial}{\partial \boldsymbol{s}} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}) = -2\boldsymbol{A}^\top\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})$
Conditions: $\boldsymbol{x} \in \mathbb{R}^m$, $\boldsymbol{s} \in \mathbb{R}^n$, $\boldsymbol{A} \in \mathbb{R}^{m \times n}$, $\boldsymbol{W} = \boldsymbol{W}^\top$
Proof

Let $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{A}\boldsymbol{s} \in \mathbb{R}^m$. Consider the scalar function $f = \boldsymbol{u}^\top \boldsymbol{W} \boldsymbol{u}$.

Computing the $i$-th component of $\boldsymbol{u}$:

\begin{equation}u_i = x_i - (\boldsymbol{A}\boldsymbol{s})_i = x_i - \sum_{j=0}^{n-1} A_{ij} s_j \label{eq:10-12-1}\end{equation}

Differentiating $u_i$ with respect to $s_k$. Since $x_i$ is a constant independent of $\boldsymbol{s}$:

\begin{equation}\dfrac{\partial u_i}{\partial s_k} = -\sum_{j=0}^{n-1} A_{ij} \dfrac{\partial s_j}{\partial s_k} = -\sum_{j=0}^{n-1} A_{ij} \delta_{jk} = -A_{ik} \label{eq:10-12-2}\end{equation}

Since $\boldsymbol{W}$ is symmetric:

\begin{equation}\dfrac{\partial f}{\partial \boldsymbol{u}} = 2\boldsymbol{W} \boldsymbol{u} \label{eq:10-12-3}\end{equation}

Applying the chain rule (1.26). Differentiating $f$ with respect to $s_k$:

\begin{equation}\dfrac{\partial f}{\partial s_k} = \sum_{i=0}^{m-1} \dfrac{\partial f}{\partial u_i} \dfrac{\partial u_i}{\partial s_k} \label{eq:10-12-4}\end{equation}

Substituting $\eqref{eq:10-12-2}$ and $\eqref{eq:10-12-3}$ into $\eqref{eq:10-12-4}$:

\begin{equation}\dfrac{\partial f}{\partial s_k} = \sum_{i=0}^{m-1} (2\boldsymbol{W} \boldsymbol{u})_i \cdot (-A_{ik}) = -2 \sum_{i=0}^{m-1} (\boldsymbol{W} \boldsymbol{u})_i A_{ik} \label{eq:10-12-5}\end{equation}

Writing $\eqref{eq:10-12-5}$ as a matrix product: $\sum_i A_{ik} (\boldsymbol{W} \boldsymbol{u})_i = \sum_i (\boldsymbol{A}^\top)_{ki} (\boldsymbol{W} \boldsymbol{u})_i = (\boldsymbol{A}^\top \boldsymbol{W} \boldsymbol{u})_k$.

\begin{equation}\dfrac{\partial f}{\partial s_k} = -2 (\boldsymbol{A}^\top \boldsymbol{W} \boldsymbol{u})_k \label{eq:10-12-6}\end{equation}

Substituting $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}$ to obtain the final result in vector form:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{s}} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}) = -2\boldsymbol{A}^\top\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}) \label{eq:10-12-7}\end{equation}
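The extra $\boldsymbol{A}^\top$ in $\eqref{eq:10-12-7}$, coming from the chain rule through $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}$, can likewise be verified; a NumPy sketch (an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
x = rng.standard_normal(m)
s = rng.standard_normal(n)
A = rng.standard_normal((m, n))
M = rng.standard_normal((m, m))
W = M + M.T                      # symmetric W

# the same quadratic form, now viewed as a function of s
f = lambda s: (x - A @ s) @ W @ (x - A @ s)
analytic = -2 * A.T @ W @ (x - A @ s)

eps = 1e-6
numeric = np.array([(f(s + eps * e) - f(s - eps * e)) / (2 * eps)
                    for e in np.eye(n)])
assert np.allclose(analytic, numeric, atol=1e-6)
```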

10.13 Affine Quadratic Form ($\boldsymbol{A}$ derivative)

Formula: $\dfrac{\partial}{\partial \boldsymbol{A}} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}) = -2\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})\boldsymbol{s}^\top$
Conditions: $\boldsymbol{x} \in \mathbb{R}^m$, $\boldsymbol{s} \in \mathbb{R}^n$, $\boldsymbol{A} \in \mathbb{R}^{m \times n}$, $\boldsymbol{W} = \boldsymbol{W}^\top$
Proof

Let $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{A}\boldsymbol{s} \in \mathbb{R}^m$. Consider the scalar function $f = \boldsymbol{u}^\top \boldsymbol{W} \boldsymbol{u}$.

Computing the $i$-th component of $\boldsymbol{u}$:

\begin{equation}u_i = x_i - (\boldsymbol{A}\boldsymbol{s})_i = x_i - \sum_{j=0}^{n-1} A_{ij} s_j \label{eq:10-13-1}\end{equation}

Differentiating $u_i$ with respect to $A_{kl}$. Since $x_i$ and $s_j$ are constants independent of $\boldsymbol{A}$:

\begin{equation}\dfrac{\partial u_i}{\partial A_{kl}} = -\sum_{j=0}^{n-1} \dfrac{\partial A_{ij}}{\partial A_{kl}} s_j = -\sum_{j=0}^{n-1} \delta_{ik} \delta_{jl} s_j = -\delta_{ik} s_l \label{eq:10-13-2}\end{equation}

Since $\boldsymbol{W}$ is symmetric:

\begin{equation}\dfrac{\partial f}{\partial \boldsymbol{u}} = 2\boldsymbol{W} \boldsymbol{u} \label{eq:10-13-3}\end{equation}

Applying the chain rule (1.26). Differentiating $f$ with respect to $A_{kl}$:

\begin{equation}\dfrac{\partial f}{\partial A_{kl}} = \sum_{i=0}^{m-1} \dfrac{\partial f}{\partial u_i} \dfrac{\partial u_i}{\partial A_{kl}} \label{eq:10-13-4}\end{equation}

Substituting $\eqref{eq:10-13-2}$ and $\eqref{eq:10-13-3}$ into $\eqref{eq:10-13-4}$:

\begin{equation}\dfrac{\partial f}{\partial A_{kl}} = \sum_{i=0}^{m-1} (2\boldsymbol{W} \boldsymbol{u})_i \cdot (-\delta_{ik} s_l) \label{eq:10-13-5}\end{equation}

The Kronecker delta $\delta_{ik}$ retains only the $i = k$ term:

\begin{equation}\dfrac{\partial f}{\partial A_{kl}} = -2 (\boldsymbol{W} \boldsymbol{u})_k s_l \label{eq:10-13-6}\end{equation}

Writing $\eqref{eq:10-13-6}$ in matrix form. The $(k, l)$ entry of the outer product $\boldsymbol{v} \boldsymbol{w}^\top$ is $v_k w_l$:

\begin{equation}(\boldsymbol{W} \boldsymbol{u})_k s_l = (\boldsymbol{W} \boldsymbol{u} \boldsymbol{s}^\top)_{kl} \label{eq:10-13-7}\end{equation}

Substituting $\boldsymbol{u} = \boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}$ to obtain the final result in matrix form:

\begin{equation}\dfrac{\partial}{\partial \boldsymbol{A}} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})^\top \boldsymbol{W} (\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s}) = -2\boldsymbol{W}(\boldsymbol{x} - \boldsymbol{A}\boldsymbol{s})\boldsymbol{s}^\top \label{eq:10-13-8}\end{equation}
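Finally, the matrix derivative $\eqref{eq:10-13-8}$ can be checked entrywise; a NumPy sketch (an assumption) with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
x = rng.standard_normal(m)
s = rng.standard_normal(n)
A = rng.standard_normal((m, n))
M = rng.standard_normal((m, m))
W = M + M.T                      # symmetric W

# f = (x - A s)^T W (x - A s), as a function of A
f = lambda A: (x - A @ s) @ W @ (x - A @ s)

# closed form: -2 W (x - A s) s^T
analytic = -2 * np.outer(W @ (x - A @ s), s)

eps = 1e-6
numeric = np.zeros_like(A)
for k in range(m):
    for l in range(n):
        E = np.zeros_like(A)
        E[k, l] = eps
        numeric[k, l] = (f(A + E) - f(A - E)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-6)
```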

Remark: Formulas 10.9–10.13 appear frequently in least squares and statistics. In each of them, the symmetry of $\boldsymbol{W}$ gives $\boldsymbol{W} + \boldsymbol{W}^\top = 2\boldsymbol{W}$, which is the source of the factor of 2 in the results.

Advanced Topics

Tensor Representation of the Hessian and Flattening

The second-order derivative (Hessian) of a quadratic form is represented as a tensor whose order depends on the type of variables:

  • Hessian of a scalar function $f: \mathbb{R}^n \to \mathbb{R}$: 2nd-order tensor ($n \times n$ matrix)
  • Hessian of a vector function $\boldsymbol{f}: \mathbb{R}^n \to \mathbb{R}^m$: 3rd-order tensor ($m \times n \times n$)
  • Hessian of a matrix function: 4th-order tensor

To treat higher-order tensors as matrices, one uses a flattening map:

$\phi: \mathbb{R}^{m \times n \times n} \to \mathbb{R}^{m \times n^2}$

This map can be constructed explicitly using the vec operator and Kronecker products. This site adopts a consistent convention based on the denominator layout, and the flattened matrices follow the same layout rules.
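In code, such a flattening is a reshape. A small NumPy illustration (an assumption); note that NumPy's default reshape is row-major, whereas a column-major vec convention would pass order="F":

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 3
T = rng.standard_normal((m, n, n))    # e.g. the Hessian of a vector function

Phi = T.reshape(m, n * n)             # the flattening map phi
# row i of Phi is the row-major vec of the i-th n x n slice
assert np.array_equal(Phi[1], T[1].ravel())

# a column-major vec convention would instead flatten each slice with order="F"
Phi_F = np.stack([T[i].ravel(order="F") for i in range(m)])
assert Phi_F.shape == (m, n * n)
```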

Tensor Transformation Laws Under Change of Coordinates

The rigorous definition of a "tensor" is characterized by its transformation law under changes of coordinates. Under a basis change $\boldsymbol{x}' = \boldsymbol{P}\boldsymbol{x}$, a $k$-th order covariant tensor $T$ transforms as:

$T'_{i_1 \cdots i_k} = \sum_{j_1, \ldots, j_k} P_{i_1 j_1} \cdots P_{i_k j_k} T_{j_1 \cdots j_k}$

In matrix form, the transformation of a 2nd-order tensor (matrix) $\boldsymbol{A}$ is $\boldsymbol{A}' = \boldsymbol{P} \boldsymbol{A} \boldsymbol{P}^\top$. That the Hessian matrix obeys this transformation law can be verified from the chain rule.
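For an orthogonal change of basis (where $\boldsymbol{P}^{-1} = \boldsymbol{P}^\top$, so $\boldsymbol{x} = \boldsymbol{P}^\top \boldsymbol{x}'$), this law can be verified numerically for the quadratic-form Hessian; a NumPy sketch (an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
H = A + A.T                          # Hessian of f(x) = x^T A x
P, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal P

# f expressed in the new coordinates y = P x, i.e. x = P^T y
g = lambda y: (P.T @ y) @ A @ (P.T @ y)

# central-difference Hessian of g (exact up to roundoff for a quadratic)
eps = 1e-3
y0 = rng.standard_normal(n)
I = np.eye(n)
Hg = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        Hg[i, j] = (g(y0 + eps * (I[i] + I[j])) - g(y0 + eps * (I[i] - I[j]))
                    - g(y0 - eps * (I[i] - I[j])) + g(y0 - eps * (I[i] + I[j]))) / (4 * eps**2)

# the transformation law: H' = P H P^T
assert np.allclose(Hg, P @ H @ P.T, atol=1e-6)
```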

Connection to Automatic Differentiation

The quadratic form differentiation formulas in this chapter are mathematically equivalent to automatic differentiation (AD) in deep learning frameworks:

  • Forward mode: Corresponds to pushforward of tangent vectors. Computes Jacobian-vector products (JVP).
  • Reverse mode (backpropagation): Corresponds to pullback of cotangent vectors. Computes vector-Jacobian products (VJP).

In differential geometric terms, forward mode is a map on the tangent bundle $TM$, while reverse mode is a map on the cotangent bundle $T^*M$. The gradient $(\boldsymbol{A} + \boldsymbol{A}^\top)\boldsymbol{x}$ of the quadratic form $\boldsymbol{x}^\top \boldsymbol{A} \boldsymbol{x}$ is efficiently computed via reverse mode AD.
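The reverse-mode computation of this gradient can be sketched with hand-written VJPs (a minimal illustration, not a real AD framework): writing $f$ as the composition $\boldsymbol{u} = \boldsymbol{A}\boldsymbol{x}$, $f = \boldsymbol{x} \cdot \boldsymbol{u}$, the backward pass accumulates one direct term and one term pulled back through $\boldsymbol{A}^\top$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

# forward pass: u = A x, then f = x . u  (so f = x^T A x)
u = A @ x
f = x @ u

# reverse pass, seeding the output cotangent fbar = 1:
# f = x . u contributes u to xbar directly and x to ubar;
# the VJP of u = A x pulls ubar back as A^T ubar
ubar = x                  # = fbar * x
xbar = u + A.T @ ubar     # direct term + pulled-back term

# agrees with the closed form (A + A^T) x
assert np.allclose(xbar, (A + A.T) @ x)
```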

References