# Numerical Shadow

The web resource on numerical range and numerical shadow


# Perturbation of unitary matrix numerical range

We are given an arbitrary unitary matrix $U$. The numerical range $W(U)$ is the convex hull of the eigenvalues of $U$, $W(U)=\text{conv}(\lambda(U))$. If $V$ denotes a unitary matrix arbitrarily close to $U$, then the numerical range $W(V)$ should differ from $W(U)$ only slightly. This follows from the fact that the function $U \to \lambda(U)$, which for a given unitary matrix returns the vector of its eigenvalues, is continuous. The question is: can we actually predict how small changes of the matrix affect the numerical range?

In the first setup we take two unitary matrices: a matrix $U \in U_d$ and its perturbation $V \in U_d$, i.e. for a given constant $0 < c \ll 1$ we have $\| U - V \| \le c$. Then we fix a continuous parametric curve $U(t) \in U_d$ for $t \in [0,1]$ that connects the matrices $U=U(0)$ and $V=U(1)$. Which curve should we take?

We start our considerations by taking the most natural curve between $U$ and $V$, which is the shortest one: the geodesic. The geodesic between unitary matrices is well known [1] and in our case it is given by the formula $$t \to U \exp(itH(U^\dagger V)),$$ where the function $H$ for an arbitrary unitary matrix $U$ has the form $H(U)=-i \text{Log}(U) \in H_d$, with the convention that $\lambda(H(U)) \subset (-\pi, \pi]$.
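As a quick sanity check, the geodesic formula can be evaluated numerically. A minimal sketch with NumPy/SciPy; the specific random matrices and the perturbation size are illustrative assumptions, not part of the construction:

```python
import numpy as np
from scipy.linalg import expm, logm

def H(M):
    """H(M) = -i Log(M): Hermitian, with eigenvalues in (-pi, pi]."""
    return -1j * logm(M)

def geodesic(U, V, t):
    """Point U(t) = U exp(i t H(U^dagger V)) on the geodesic from U to V."""
    return U @ expm(1j * t * H(U.conj().T @ V))

# two nearby random unitaries (QR of a complex Gaussian matrix)
rng = np.random.default_rng(0)
Z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(Z)
K = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
K = (K + K.conj().T) / 2            # Hermitian direction
V = U @ expm(0.05j * K)             # small perturbation of U
```

By construction $U(0)=U$ and $U(1)=U\exp(\text{Log}(U^\dagger V))=V$, and every point of the curve is unitary.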

We can simplify the formulation of our problem to investigating $W(U \exp(itH))$, where $\|H\|_{\infty}\le \pi$. Without loss of generality we can assume that $H$ is a diagonal matrix. Because the global phase does not matter ($W(U) \sim W(\lambda U)$) and we are especially interested in arbitrarily small $t$, we take a matrix $H$ for which $\lambda(H)$ is a probability vector.
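The reduction to a probability vector can be seen directly: shifting the diagonal $H$ by a multiple of the identity only multiplies $U\exp(itH)$ by a global phase, which rotates the numerical range rigidly. A small numerical illustration (the specific matrix and diagonal are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(Z)              # a random unitary
h = np.array([0.7, -0.2, 0.4])      # eigenvalues of a diagonal H
t = 0.3

A = U @ np.diag(np.exp(1j * t * h))
# shifting H -> H - h_min * 1 multiplies A by the global phase e^{-i t h_min}
B = U @ np.diag(np.exp(1j * t * (h - h.min())))
assert np.allclose(B, A * np.exp(-1j * t * h.min()))
```

After the shift the eigenvalues of $H$ are nonnegative; rescaling $t$ then normalizes their sum to one.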

## Theorem

Let $U \in U_d$ be a unitary matrix of dimension $d$ and denote $S_y^M=\{\ket{x}: (y\mathbb{1}_d-M)\ket{x}=0, \|x\|=1\}$ for a matrix $M$. Assume that $\lambda$ is an eigenvalue of $U$, $p=\{p_i\}_{i=1}^d$ is a probability vector, and define the matrix $V(t) = \sum_{i=1}^{d} e^{i p_i t} \ket{i}\bra{i} \in U_d$ for $t \geq 0$. Then:

• Each eigenvalue of the product $UV(t)$ moves counterclockwise as $t$ grows towards $2 \pi$, or stays at its initial position from $t=0$.
• If $\dim(S_\lambda^U)=k$ and the eigenvalues of $UV(t)$ whose initial position was $\lambda$ are $\{\lambda_{t,j}\}_{j=1}^k$, then for small enough $t \geq 0$,

$$\lambda_{t,1} \approx \lambda \exp\left( i t \min\limits_{\ket{x} \in S_\lambda^U} \sum\limits_{i=1}^d\ p_i |\braket{i}{x}|^2 \right),$$ $$\lambda_{t,k} \approx \lambda \exp\left( i t \max\limits_{\ket{x} \in S_\lambda^U} \sum\limits_{i=1}^d\ p_i |\braket{i}{x}|^2 \right),$$

• The solutions $\ket{x_1}$ of $\min\limits_{\ket{x} \in S_\lambda^U} \sum\limits_{i=1}^d p_i |\braket{i}{x}|^2$ and $\ket{x_k}$ of $\max\limits_{\ket{x} \in S_\lambda^U} \sum\limits_{i=1}^d p_i |\braket{i}{x}|^2$ are orthogonal.
• If $\min\limits_{\ket{x} \in S_\lambda^U} \sum\limits_{i=1}^d p_i |\braket{i}{x}|^2=0$, then $\lambda_{t,1}= \lambda$.
• If $\dim(S_\lambda^U)=k$ and $|\{p_i: p_i>0\}|=l<k$, then $\lambda$ is an eigenvalue of $UV(t)$ and $\dim(S_\lambda^{UV(t)}) \geq k-l$.
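The first-order formula can be checked numerically for a generic (hence nondegenerate) unitary. A hedged sketch, with an assumed random $U$ and probability vector $p$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
U, _ = np.linalg.qr(Z)                 # random unitary
p = np.array([0.5, 0.3, 0.2, 0.0])     # probability vector
t = 1e-5                               # small enough t

w0, X = np.linalg.eig(U)               # unperturbed eigenpairs
wt = np.linalg.eigvals(U @ np.diag(np.exp(1j * p * t)))

# each nondegenerate eigenvalue moves, to first order in t, with angular
# velocity sum_i p_i |<i|x>|^2 determined by its unit eigenvector x
for k in range(d):
    x = X[:, k] / np.linalg.norm(X[:, k])
    v = np.sum(p * np.abs(x) ** 2)
    predicted = w0[k] * np.exp(1j * t * v)
    assert np.min(np.abs(wt - predicted)) < 1e-9
```

For unitary (normal) matrices the right eigenvector $x$ also serves as the left one, so $\frac{d}{dt}\lambda_t\big|_{t=0} = \bra{x} iU\,\mathrm{diag}(p)\ket{x} = i\lambda \sum_i p_i |\braket{i}{x}|^2$, which is exactly the exponent above.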

## Illustration of the above theorem

For each eigenvalue $\lambda(t)$ of the matrix $UV(t)$ we mark its instantaneous velocity, given by the formula $\sum\limits_{i=1}^d p_i |\braket{i}{x(t)}|^2$, where $\ket{x(t)}$ is the corresponding eigenvector. Red denotes instantaneous velocity equal to one, and blue corresponds to instantaneous velocity equal to zero.

#### Example 1

The diagonal matrix $$U=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \ii & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -\ii \end{pmatrix}.$$ We act on two subspaces with probabilities $p_1 = 1/3$ and $p_2 = 2/3$. The eigenvalues $-1$ and $-\ii$ stay at their initial positions, because they lie in the subspace orthogonal to the one we act on.
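Interpreting "acting on two subspaces" as the probability vector $p=(1/3,2/3,0,0)$ in the standard basis (an assumption consistent with the description), the fixed eigenvalues are easy to verify:

```python
import numpy as np

U = np.diag([1, 1j, -1, -1j])
p = np.array([1/3, 2/3, 0, 0])   # zero weight on the -1 and -i eigenvectors
for t in (0.3, 1.0, 2.5):
    w = np.linalg.eigvals(U @ np.diag(np.exp(1j * p * t)))
    # eigenvalues with zero weight never move
    assert np.min(np.abs(w - (-1))) < 1e-12
    assert np.min(np.abs(w - (-1j))) < 1e-12
```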

#### Example 2

A random matrix $U$ of dimension $3 \times 3$. The probability vector is $p = (1,0,0)$. All eigenvalues have nonzero velocity, so no eigenvalue stays at its initial position.
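A sketch of the corresponding computation (the seed and matrix are assumptions): for a Haar-random $U$ every eigenvector generically has a nonzero overlap with $\ket{1}$, so with $p=(1,0,0)$ every velocity is strictly positive.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(Z)                 # random 3x3 unitary
p = np.array([1.0, 0.0, 0.0])

_, X = np.linalg.eig(U)                # columns are unit eigenvectors
velocities = np.sum(p[:, None] * np.abs(X) ** 2, axis=0)   # |<1|x_k>|^2
assert np.all(velocities > 1e-10)      # generically strictly positive
```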

#### Example 3

A matrix $U$ of dimension $5\times 5$ whose eigenbasis is given by the Fourier matrix. The probability vector is $p = (3/8,1/4,1/4,1/8,0)$. Because the standard basis and the Fourier basis are mutually unbiased, all velocities are equal, which implies that the shape of the numerical range changes only slightly in time.
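Mutual unbiasedness makes every velocity exactly $1/5$, independently of the eigenvalues of $U$, since $|\braket{i}{f_k}|^2 = 1/5$ for every Fourier vector $\ket{f_k}$. A minimal check:

```python
import numpy as np

d = 5
# Fourier matrix: columns are the (unit) Fourier basis vectors
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)
p = np.array([3/8, 1/4, 1/4, 1/8, 0])

# |<i|f_k>|^2 = 1/5 for all i, k, hence every velocity equals sum_i p_i / 5
velocities = np.array([np.sum(p * np.abs(F[:, k]) ** 2) for k in range(d)])
assert np.allclose(velocities, 1 / d)
```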

#### Example 4

A matrix $U$ with eigenvalues $(1,e^{\ii \pi/6}, \ii, \ii, \ii, -1, -1)$. We act on two subspaces with probabilities $p_1 = 1/3$ and $p_2 = 2/3$. In this case the eigenvalue $\ii$ is threefold degenerate, so by the last point of the theorem it remains an eigenvalue of $UV(t)$.
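Here $k=3$ eigenvectors belong to $\ii$ while only $l=2$ weights are nonzero, so the last statement of the theorem guarantees that $\ii$ survives with multiplicity at least $k-l=1$. A numerical sketch with an assumed random eigenbasis:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 7
Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Q, _ = np.linalg.qr(Z)                          # random eigenbasis
evals = np.array([1, np.exp(1j * np.pi / 6), 1j, 1j, 1j, -1, -1])
U = Q @ np.diag(evals) @ Q.conj().T

p = np.zeros(d)
p[0], p[1] = 1/3, 2/3                           # l = 2 nonzero weights, k = 3 for i
for t in (0.4, 1.1):
    w = np.linalg.eigvals(U @ np.diag(np.exp(1j * p * t)))
    # i remains an eigenvalue of U V(t), with multiplicity >= k - l = 1
    assert np.min(np.abs(w - 1j)) < 1e-8
```

The reason is that the 3-dimensional eigenspace of $\ii$ must intersect the codimension-2 subspace fixed by $V(t)$, and any vector in the intersection stays an eigenvector of $UV(t)$ with eigenvalue $\ii$.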

## References

1. Jorge Antezana, Gabriel Larotonda, Alejandro Varela, 2014. Optimal paths for symmetric actions in the unitary group. Communications in Mathematical Physics, 328, Springer, pp. 481–497.