\documentclass{psapm-l}%
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}%
\setcounter{MaxMatrixCols}{30}
\providecommand{\U}{\protect\rule{.1in}{.1in}}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{xca}[theorem]{Exercise}
\theoremstyle{remark}
\newtheorem{remark}[theorem]{Remark}
\numberwithin{equation}{section}
\theoremstyle{plain}
\newtheorem{acknowledgement}{Acknowledgement}
\newtheorem{algorithm}{Algorithm}
\newtheorem{axiom}{Axiom}
\newtheorem{case}{Case}
\newtheorem{claim}{Claim}
\newtheorem{conclusion}{Conclusion}
\newtheorem{condition}{Condition}
\newtheorem{conjecture}{Conjecture}
\newtheorem{corollary}{Corollary}
\newtheorem{criterion}{Criterion}
\newtheorem{exercise}{Exercise}
\newtheorem{notation}{Notation}
\newtheorem{problem}{Problem}
\newtheorem{proposition}{Proposition}
\newtheorem{solution}{Solution}
\newtheorem{summary}{Summary}
\copyrightinfo{2001}{enter name of copyright holder}
\begin{document}
\title{\textbf{Riemannian Geometry of Quantum Computation}}
\author{Howard E. Brandt}
\address{U.S. Army Research Laboratory, Adelphi, MD}
\email{hbrandt@arl.army.mil}
\thanks{The author wishes to thank Samuel Lomonaco for the invitation to present this lecture. He also thanks John Myers for reading the manuscript, suggesting improvements, and checking equations.}
\thanks{This research was supported by the Director's Research Initiative of the U.S.
Army Research Laboratory.}
\subjclass{Primary 81P68, 81-01, 81-02, 53B20, 53B50, 22E60, 22E70, 03D15, 53C22; Secondary 22D10, 43A75, 51N30, 20C35, 81R05}
\keywords{quantum computing, quantum circuits, quantum complexity, differential geometry, Riemannian geometry, geodesics, Lax equation, Jacobi fields}
\begin{abstract}
An introduction is given to some recent developments in the differential geometry of quantum computation for which the quantum evolution is described by the special unitary unimodular group $SU(2^{n})$. Using the Lie algebra $su(2^{n})$, detailed derivations are given of a useful Riemannian geometry of $SU(2^{n})$, including the connection, curvature, the geodesic equation for minimal-complexity quantum computations, and the lifted Jacobi equation.
\end{abstract}
\maketitle

\section{INTRODUCTION}

Any quantum computation can be ideally represented by a unitary transformation acting in the Hilbert space of the computational degrees of freedom of the quantum computer, and any unitary transformation can be faithfully represented by a network of universal quantum gates, such as two-qubit controlled-NOT gates and single-qubit gates. This is the basis of the quantum circuit model of quantum computation \cite{NC}. An important measure of the difficulty of performing a quantum computation is the number of quantum gates needed. A quantum algorithm is considered efficient if the number of required gates scales only polynomially (not exponentially) with the size of the problem. Quantum circuit networks are usually analyzed using discrete methods; however, potentially powerful continuous differential-geometric methods are under development, using sub-Riemannian \cite{Mont}-\cite{Moseley 0}, Riemannian \cite{Nielsen 1}-\cite{Nielsen 4}, Finsler \cite{Nielsen 1}, and sub-Finsler \cite{Moseley} geometries. Since unitary transformations are themselves continuous, this is perhaps not a surprising development.
Using these differential geometric methods, optimal paths may be sought in Hilbert space for executing a quantum computation. A new innovative approach to the differential geometry of quantum computation and quantum circuit complexity was recently introduced by Nielsen and collaborators \cite{Nielsen 1}-\cite{Nielsen 4}. A Riemannian metric was formulated on the special unitary unimodular group manifold of multi-qubit unitary transformations, such that the metric distance between the identity and the desired unitary operator, representing the quantum computation, is equivalent to the number of quantum gates needed to represent that unitary operator, thereby providing a measure of the complexity associated with the corresponding quantum computation. The Riemannian metric was defined as a positive-definite bilinear form expressed in terms of the multi-qubit Hamiltonian. The analytic form of the metric was chosen to penalize all directions on the manifold not easily simulated by local gates. In this way, basic differential geometric concepts such as the Levi-Civita connection, geodesic path, Riemannian curvature, Jacobi fields, and conjugate points can be associated with quantum computation. Equations for the Levi-Civita connection on the Riemannian manifold can be obtained, as well as the characteristic curvature of the manifold. In accord with the Schr\"{o}dinger equation, the unitary transformation expressing the quantum evolution is an exponential involving the Hamiltonian. The Hamiltonian can be expressed in terms of tensor products of the Pauli matrices which act on the qubits. The Riemann curvature tensor can then be constructed from the Christoffel symbols and their ordinary partial derivatives. The geodesic equation on the manifold follows from the connection and determines the local optimal Hamiltonian evolution corresponding to the unitary transformation representing the desired quantum computation. 
The optimal unitary evolution may then be obtained by solving the geodesic equation. Useful upper and lower bounds on the associated quantum circuit complexity may also be obtained. Such differential-geometric approaches to quantum computation are currently preliminary, and many details remain to be worked out. The present work is an expository review of the Riemannian geometry of the special unitary unimodular group manifold associated with quantum computation, with detailed derivations of a suitable connection, curvature, and geodesic equation expressed in terms of the tensor products of Pauli matrices appearing in the Hamiltonian and representing gate operations. Examples of some solutions to the geodesic equation are elaborated. Jacobi fields are also addressed, and the Jacobi equation and the so-called lifted Jacobi equation are derived.

\section{METRIC}

A Riemannian metric is first chosen on the manifold of the Lie group $SU(2^{n})$ (special unitary group) of $n$-qubit unitary operators with unit determinant \cite{Hou}-\cite{Cornwell}. The traceless Hamiltonian serves as a tangent vector to a point on the group manifold of the $n$-qubit unitary transformation $U$. The Hamiltonian $H$ is an element of the Lie algebra $su(2^{n})$ of traceless $2^{n}\times2^{n}$ Hermitian matrices \cite{Pfeifer}-\cite{Cornwell} and is tangent to the evolutionary curve $e^{-iHt}U$ at $t=0$. (Here and throughout, units are chosen such that Planck's constant divided by $2\pi$ is $\hbar=1$.) Independent of $U$, the Riemannian metric (inner product) $\left\langle .,.\right\rangle$ is taken to be a right-invariant positive-definite bilinear form $\left\langle H,J\right\rangle$ defined on tangent vectors (Hamiltonians) $H$ and $J$. Right invariance of the metric means that all right translations are isometries. It follows that the Levi-Civita connection is also right invariant.
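As a quick numerical aside (ours, not part of the original text; NumPy assumed, with illustrative function names), one can check that a traceless Hermitian $H$ indeed generates unitaries of unit determinant, since $\det e^{-iHt}=e^{-it\,\mathrm{Tr}H}=1$ when $\mathrm{Tr}\,H=0$:

```python
import numpy as np

def random_traceless_hermitian(d, rng):
    """A random traceless Hermitian matrix, i.e. a candidate element of su(d)."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (A + A.conj().T) / 2                      # Hermitian part
    return H - (np.trace(H) / d) * np.eye(d)      # remove the trace

def evolve(H, t):
    """U(t) = e^{-iHt}, computed from the spectral decomposition of H."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T
```

For any traceless Hermitian $H$, `evolve(H, t)` is unitary with determinant one, so the evolution stays on the $SU(2^{n})$ manifold.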
Following \cite{Nielsen 4}, the $n$-qubit Hamiltonian $H$ can be divided into two parts $P(H)$ and $Q(H)$, where $P(H)$ contains only one and two-body terms, and $Q(H)$ contains more than two-body terms. Thus:% \begin{equation} \ H=P(H)+Q(H), \end{equation} in which $P$ and $Q$ are superoperators acting on $H$, and obey the following relations:% \begin{equation} \ \ P+Q=I,\ \ \ PQ=QP=0,\ \ \ P^{2}=P,\ \ \ Q^{2}=Q,\ \ \end{equation} where $I$ is the identity. Letting $H_{m}$ denote the $m$-body part of $H$, then% \begin{equation} P(H)=H_{1}+H_{2}, \end{equation} and \begin{equation} Q(H)=% %TCIMACRO{\dsum \limits_{m=3}^{n}}% %BeginExpansion {\displaystyle\sum\limits_{m=3}^{n}} %EndExpansion H_{m}. \end{equation} For example, in the case of a 3-qubit Hamiltonian, for Pauli matrices $\sigma_{1}$, $\sigma_{2}$, and $\sigma_{3}$ (see Appendix A) \cite{NC}, one has \begin{align} P(H) & =x^{1}\sigma_{1}\otimes I\otimes I+x^{2}\sigma_{2}\otimes I\otimes I+x^{3}\sigma_{3}\otimes I\otimes I\nonumber\\ & +\ x^{4}I\otimes\sigma_{1}\otimes I+x^{5}I\otimes\sigma_{2}\otimes I+x^{6}I\otimes\sigma_{3}\otimes I\nonumber\\ & +\ x^{7}I\otimes I\otimes\sigma_{1}+x^{8}I\otimes I\otimes\sigma_{2}% +x^{9}I\otimes I\otimes\sigma_{3}\nonumber\\ & +\ x^{10}\sigma_{1}\otimes\sigma_{2}\otimes I+x^{11}\sigma_{1}\otimes I\otimes\sigma_{2}+x^{12}I\otimes\sigma_{1}\otimes\sigma_{2}\nonumber\\ & +\ x^{13}\sigma_{2}\otimes\sigma_{1}\otimes I+x^{14}\sigma_{2}\otimes I\otimes\sigma_{1}+x^{15}I\otimes\sigma_{2}\otimes\sigma_{1}\nonumber\\ & +\ x^{16}\sigma_{1}\otimes\sigma_{3}\otimes I+x^{17}\sigma_{1}\otimes I\otimes\sigma_{3}+x^{18}I\otimes\sigma_{1}\otimes\sigma_{3}\nonumber\\ & +\ x^{19}\sigma_{3}\otimes\sigma_{1}\otimes I+x^{20}\sigma_{3}\otimes I\otimes\sigma_{1}+x^{21}I\otimes\sigma_{3}\otimes\sigma_{1}\nonumber\\ & +\ x^{22}\sigma_{2}\otimes\sigma_{3}\otimes I+x^{23}\sigma_{2}\otimes I\otimes\sigma_{3}+x^{24}I\otimes\sigma_{2}\otimes\sigma_{3}\nonumber\\ & +\ 
x^{25}\sigma_{3}\otimes\sigma_{2}\otimes I+x^{26}\sigma_{3}\otimes I\otimes\sigma_{2}+x^{27}I\otimes\sigma_{3}\otimes\sigma_{2}\nonumber\\ & +\ x^{28}\sigma_{1}\otimes\sigma_{1}\otimes I+x^{29}\sigma_{2}\otimes \sigma_{2}\otimes I+x^{30}\sigma_{3}\otimes\sigma_{3}\otimes I\nonumber\\ & +\ x^{31}\sigma_{1}\otimes I\otimes\sigma_{1}+x^{32}\sigma_{2}\otimes I\otimes\sigma_{2}+x^{33}\sigma_{3}\otimes I\otimes\sigma_{3}\nonumber\\ & +\ x^{34}I\otimes\sigma_{1}\otimes\sigma_{1}+x^{35}I\otimes\sigma _{2}\otimes\sigma_{2}+x^{36}I\otimes\sigma_{3}\otimes\sigma_{3}, \end{align} in which $\otimes$ denotes the tensor product \cite{NC}, \cite{Steeb}, and the $n$ in $x^{n}$ serves as an index,% \begin{align} Q(H) & =x^{37}\sigma_{1}\otimes\sigma_{2}\otimes\sigma_{3}+x^{38}\sigma _{1}\otimes\sigma_{3}\otimes\sigma_{2}\nonumber\\ & +\ x^{39}\sigma_{2}\otimes\sigma_{1}\otimes\sigma_{3}+x^{40}\sigma _{2}\otimes\sigma_{3}\otimes\sigma_{1}\ \ \ \ \nonumber\\ & +\ x^{41}\sigma_{3}\otimes\sigma_{1}\otimes\sigma_{2}+x^{42}\sigma _{3}\otimes\sigma_{2}\otimes\sigma_{1}\nonumber\\ & +\ x^{43}\sigma_{1}\otimes\sigma_{1}\otimes\sigma_{2}+x^{44}\sigma _{1}\otimes\sigma_{2}\otimes\sigma_{1}+\ x^{45}\sigma_{2}\otimes\sigma _{1}\otimes\sigma_{1}\nonumber\\ & +\ x^{46}\sigma_{1}\otimes\sigma_{1}\otimes\sigma_{3}+\ x^{47}\sigma _{1}\otimes\sigma_{3}\otimes\sigma_{1}+x^{48}\sigma_{3}\otimes\sigma _{1}\otimes\sigma_{1}\nonumber\\ & +\ x^{49}\sigma_{2}\otimes\sigma_{2}\otimes\sigma_{1}+x^{50}\sigma _{2}\otimes\sigma_{1}\otimes\sigma_{2}+\ x^{51}\sigma_{1}\otimes\sigma _{2}\otimes\sigma_{2}\nonumber\\ & +\ x^{52}\sigma_{2}\otimes\sigma_{2}\otimes\sigma_{3}+\ x^{53}\sigma _{2}\otimes\sigma_{3}\otimes\sigma_{2}+x^{54}\sigma_{3}\otimes\sigma _{2}\otimes\sigma_{2}\nonumber\\ & +\ x^{55}\sigma_{3}\otimes\sigma_{3}\otimes\sigma_{1}+x^{56}\sigma _{3}\otimes\sigma_{1}\otimes\sigma_{3}+\ x^{57}\sigma_{1}\otimes\sigma _{3}\otimes\sigma_{3}\nonumber\\ & +\ x^{58}\sigma_{3}\otimes\sigma_{3}\otimes\sigma_{2}+\ 
x^{59}\sigma _{3}\otimes\sigma_{2}\otimes\sigma_{3}+x^{60}\sigma_{2}\otimes\sigma _{3}\otimes\sigma_{3}\nonumber\\ & +\ x^{61}\sigma_{1}\otimes\sigma_{1}\otimes\sigma_{1}+x^{62}\sigma _{2}\otimes\sigma_{2}\otimes\sigma_{2}+\ x^{63}\sigma_{3}\otimes\sigma _{3}\otimes\sigma_{3}. \end{align} Here, all possible tensor products having one and two-qubit Pauli matrix operators on three qubits appear in $P(H)$, and analogously, all possible tensor products having three-qubit operators appear in $Q(H)$. Tensor products including only the identity are excluded because the Hamiltonian is taken to be traceless. Each of the terms in Eqs. (2.5) and (2.6) is an 8$\times$8 matrix. The various tensor products of Pauli matrices such as those appearing in Eqs. (2.5) and (2.6) are referred to as generalized Pauli matrices. In the case of an $n$-qubit Hamiltonian, there are $4^{n}-1$ possible traceless tensor products (corresponding to the dimension of the $SU(2^{n})$ tangent space $T_{U}SU(2^{n})$ and the $su(2^{n})$ algebra), and each term is a $2^{n}\times2^{n}$ matrix. The right-invariant \cite{Hou}-\cite{Lee} Riemannian metric for tangent vectors $H$ and $J$ is given by \cite{Nielsen 4}% \begin{equation} \left\langle H,J\right\rangle \equiv\frac{1}{2^{n}}\text{Tr}\left[ HP(J)+qHQ(J)\right] . \end{equation} Here $q$ is a large penalty parameter which taxes many-body ($m>2$) terms. Justification for the form of the metric, Eq. (2.7), is given in references \cite{Nielsen 1}, \cite{Nielsen 4}. The length $l$ of an evolutionary path on the $SU(2^{n})$ manifold is given by the integral over time $t$ from an initial time $t_{i}$ to a final time $t_{f}$, namely,% \begin{equation} l=% %TCIMACRO{\dint \limits_{t_{i}}^{t_{f}}}% %BeginExpansion {\displaystyle\int\limits_{t_{i}}^{t_{f}}} %EndExpansion dt\left( \left\langle H(t),H(t)\right\rangle \right) ^{1/2}, \end{equation} and is a measure of the cost of applying a control Hamiltonian $H(t)$ along the path. 
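The decomposition of Eqs. (2.1)-(2.6) and the metric of Eq. (2.7) can be realized concretely in code. The following sketch (ours, not the paper's; NumPy assumed, function names illustrative) builds the generalized Pauli basis for $n$ qubits and implements $P$, $Q$, and $\left\langle H,J\right\rangle$:

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices; index 0 is the identity.
PAULI = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

def pauli_basis(n):
    """All 4^n - 1 generalized Pauli matrices on n qubits, tagged with
    their weight (number of non-identity tensor factors)."""
    basis = []
    for idx in itertools.product(range(4), repeat=n):
        if not any(idx):
            continue  # skip I x ... x I: the Hamiltonian is traceless
        m = PAULI[idx[0]]
        for i in idx[1:]:
            m = np.kron(m, PAULI[i])
        basis.append((sum(i != 0 for i in idx), m))
    return basis

def body_part(H, basis, n, pred):
    """Sum of the components of H whose weight w satisfies pred(w)."""
    d = 2 ** n
    out = np.zeros((d, d), dtype=complex)
    for w, s in basis:
        if pred(w):
            out += (np.trace(H @ s) / d) * s  # Pauli coordinate of H times sigma
    return out

def P(H, basis, n):  # one- and two-body part, Eq. (2.3)
    return body_part(H, basis, n, lambda w: w <= 2)

def Q(H, basis, n):  # three-and-more-body part, Eq. (2.4)
    return body_part(H, basis, n, lambda w: w > 2)

def metric(H, J, basis, n, q):
    """Right-invariant metric <H,J> = Tr[H P(J) + q H Q(J)] / 2^n, Eq. (2.7)."""
    d = 2 ** n
    return np.trace(H @ (P(J, basis, n) + q * Q(J, basis, n))).real / d
```

For $n=3$ this reproduces the $4^{3}-1=63$ terms of Eqs. (2.5)-(2.6), and the superoperator relations of Eq. (2.2) ($P+Q=I$, $P^{2}=P$, $Q^{2}=Q$, $PQ=QP=0$) and the positivity of $\left\langle H,H\right\rangle$ for $q>0$ can be verified directly.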
The Riemannian distance between an initial point and a final point in the manifold is the infimum of the length of all curves connecting those points \cite{Lee}. A geodesic curve is in general only locally minimizing \cite{Lee}.

\section{LEVI-CIVITA CONNECTION}

In order to obtain the Levi-Civita connection, it is necessary to exploit the Lie algebra $su(2^{n})$ associated with the group $SU(2^{n})$. Because of the right-invariance of the metric, if the Christoffel symbols are calculated at the origin, the same expression applies everywhere on the manifold. Following \cite{Nielsen 4}, consider the unitary transformation
\begin{equation}
U=e^{-iX}%
\end{equation}
in the neighborhood of the identity $I\in SU(2^{n})$ (or equivalently in the neighborhood of the origin of the tangent space manifold) with
\begin{equation}
X=x\cdot\sigma\equiv
{\displaystyle\sum\limits_{\sigma}}
x^{\sigma}\sigma,
\end{equation}
which expresses symbolically terms like those in Eqs. (2.5) and (2.6) generalized to $2^{n}$ dimensions. In Eqs. (3.1) and (3.2), $X$ is defined using the standard branch of the logarithm with a cut along the negative real axis. In Eq. (3.2), for the general case of $n$ qubits, $x^{\sigma}$ represents the set of $4^{n}-1$ real coefficients of the generalized Pauli matrices $\sigma$ which represent all of the $n$-fold tensor products. It follows from Eq. (3.2) that the factor $x^{\sigma}$ multiplying a particular generalized Pauli matrix $\sigma$ is given by
\begin{equation}
x^{\sigma}=\frac{1}{2^{n}}\text{Tr}(X\sigma).
\end{equation}
These are so-called Pauli coordinates. (In the neighborhood of the origin, $X$ will be represented as $X=\Delta x^{\mu}\mu$ for infinitesimal $\Delta x^{\mu}$ and generalized Pauli matrix $\mu$, where the Einstein sum convention summing over $\mu$ is to be understood; see Eq. (3.32) below.)
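The coordinate extraction of Eq. (3.3) rests on the trace orthogonality $\text{Tr}(\sigma\tau)=2^{n}\delta_{\sigma\tau}$ of the generalized Pauli matrices. A brief numerical check (ours, not from the text; NumPy assumed):

```python
import itertools
import numpy as np

PAULI = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def generalized_paulis(n):
    """The 4^n - 1 traceless n-fold tensor products of Pauli matrices."""
    out = []
    for idx in itertools.product(range(4), repeat=n):
        if any(idx):
            m = PAULI[idx[0]]
            for i in idx[1:]:
                m = np.kron(m, PAULI[i])
            out.append(m)
    return out

def pauli_coordinates(X, sigmas, n):
    """x^sigma = Tr(X sigma) / 2^n, Eq. (3.3)."""
    return np.array([np.trace(X @ s).real / 2 ** n for s in sigmas])
```

Because distinct Pauli strings are trace-orthogonal, a matrix built from real coordinates $x^{\sigma}$ is recovered exactly by `pauli_coordinates`.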
Consider a curve $e^{-iHt}e^{-iX}$ in the $SU(2^{n})$ group manifold, evolving from a point $U=e^{-iX}$ and representing a system with initial action $X$ acted on by a control Hamiltonian $H$. For the point $U$ on the $SU(2^{n})$ manifold with tangent vector $H$ to the curve, one has in the neighborhood of the identity,
\begin{equation}
e^{-iHt}e^{-iX}=e^{-i(X+Jt)}+O(t^{2})
\end{equation}
to second order in the time $t$. This follows from the Baker-Campbell-Hausdorff formula \cite{Hall}, \cite{Naimark}, \cite{Weigert}-\cite{Hausdorff}. The right side of Eq. (3.4) contains the resulting total action $(X+Jt)$. Explicitly, the matrix $J$, the so-called Pauli representation of the tangent vector in the Pauli-coordinate representation in the tangent space $T_{U}SU(2^{n})$, is related to $H$, the Hamiltonian representation of the tangent vector, by
\begin{equation}
H=E_{X}(J),
\end{equation}
in which the linear superoperator $E_{X}$ is given by
\begin{equation}
E_{X}=i\text{ad}_{X}^{-1}(e^{-i\text{ad}_{X}}-I),
\end{equation}
where $I$ is the identity, a power series expansion is to be understood since the operator ad$_{X}$ is not invertible, and ad$_{X}(Y)$ is the Lie bracket, defined by the ordinary matrix commutator
\begin{equation}
\text{ad}_{X}(Y)\equiv\lbrack X,Y].
\end{equation}
The power series expansion of $E_{X}$ is
\begin{equation}
E_{X}=
{\displaystyle\sum\limits_{j=0}^{\infty}}
\frac{(-i\text{ad}_{X})^{j}}{(j+1)!}.
\end{equation}
Near the origin, $E_{X}$ is invertible, and one has
\begin{equation}
J\equiv D_{X}(H)=E_{X}^{-1}(H).
\end{equation}
It then follows from Eqs. (3.8) and (3.9) near the origin that
\begin{equation}
E_{X}=I-\frac{i}{2}\text{ad}_{X}+O(X^{2}),
\end{equation}
and
\begin{equation}
D_{X}=I+\frac{i}{2}\text{ad}_{X}+O(X^{2}).
\end{equation} One also has the adjoint relations with respect to the trace inner product (see Appendix B):% \begin{equation} E_{X}^{\dag}=E_{-X}, \end{equation}% \begin{equation} D_{X}^{\dag}=D_{-X}. \end{equation} Next, the right-invariant metric, Eq. (2.7), in the so-called Hamiltonian representation can be written as% \begin{equation} \left\langle H,J\right\rangle =\frac{1}{2^{n}}\text{Tr}(HG(J)), \end{equation} in which the positive self-adjoint superoperator $G$ is given by% \begin{equation} G=P+qQ. \end{equation} It is also useful to define a Hermitian matrix $L$, dual to the Hamiltonian $H$, \begin{equation} L=G(H), \end{equation} so that Eq. (3.14) can also be written as% \begin{equation} \left\langle H,J\right\rangle \equiv\frac{1}{2^{n}}\text{Tr}(LJ). \end{equation} Now consider the metric $\left\langle Y,Z\right\rangle$ for tangent vector fields $Y$ and $Z$ in the neighborhood of the origin at point $U=e^{-iX}$ (See Eq. (3.32)). By Eq. (3.5), the so-called Hamiltonian representations $\left\{ Y^{H},Z^{H}\right\}$ of the vector fields are related to their so-called Pauli representations $\left\{ Y^{P},Z^{P}\right\}$ by \begin{equation} Y^{H}=E_{X}(Y^{P}),\ \ \ \ \ Z^{H}=E_{X}(Z^{P}). \end{equation} Substituting Eqs. (3.18) in Eq. (3.14), one obtains% \begin{equation} \left\langle Y,Z\right\rangle =\frac{1}{2^{n}}\text{Tr}(Y^{H}G(Z^{H}% ))\ =\frac{1}{2^{n}}\text{Tr}(E_{X}(Y^{P})G\circ E_{X}(Z^{P})), \end{equation} or% \begin{equation} \ \ \left\langle Y,Z\right\rangle =\frac{1}{2^{n}}\text{Tr}(Y^{P}E_{X}^{\dag }\circ G\circ E_{X}(Z^{P})). \end{equation} Equivalently,% \begin{equation} \ \ \left\langle Y,Z\right\rangle =\frac{1}{2^{n}}\text{Tr}(Y^{P}G_{X}% (Z^{P})), \end{equation} where% \begin{equation} \ G_{X}\equiv E_{X}^{\dag}\circ G\circ E_{X}. \end{equation} The metric can be rewritten in the familiar Riemannian tensor form $g_{\sigma\tau}$, in a coordinate basis, as follows. 
The vectors $Y^{P}$ and $Z^{P}$ in the Pauli representation can be written as
\begin{equation}
Y^{P}=
{\displaystyle\sum\limits_{\sigma}}
y^{\sigma}\sigma,\ \ \ \ Z^{P}=
{\displaystyle\sum\limits_{\sigma}}
z^{\sigma}\sigma
\end{equation}
with Pauli coordinates $y^{\sigma}$ and $z^{\sigma}$. Here $\sigma$, as an index, is used to refer to a particular tensor product appearing in the generalized Pauli matrix $\sigma$. This index notation, used throughout, is a convenient abbreviation for the actual numerical indices (e.g. in Eq. (2.5), the number $22$ appearing in $x^{22}$, the coefficient of $\sigma_{2}\otimes\sigma_{3}\otimes I$). Then substituting Eqs. (3.23) in Eq. (3.21), one obtains
\begin{equation}
\left\langle Y,Z\right\rangle =\frac{1}{2^{n}}\text{Tr}(
{\displaystyle\sum\limits_{\sigma}}
y^{\sigma}\sigma G_{X}(
{\displaystyle\sum\limits_{\tau}}
z^{\tau}\tau)),
\end{equation}
or
\begin{equation}
\left\langle Y,Z\right\rangle =
{\displaystyle\sum\limits_{\sigma\tau}}
g_{\sigma\tau}y^{\sigma}z^{\tau},
\end{equation}
in which the Pauli-coordinate representation of the metric tensor $g_{\sigma\tau}$ is given by
\begin{equation}
g_{\sigma\tau}=\frac{1}{2^{n}}\text{Tr}(\sigma G_{X}(\tau)).
\end{equation}
According to Eqs.\ (3.22), (3.10), and Eq. (13.13) of Appendix B, one has in the neighborhood of the origin:
\begin{equation}
G_{X}=\left( 1+\frac{i}{2}\text{ad}_{X}\right) \circ G\circ\left( 1-\frac{i}{2}\text{ad}_{X}\right) +O(X^{2}),
\end{equation}
or equivalently,
\begin{equation}
G_{X}=G+\frac{i}{2}\left[ \text{ad}_{X},G\right] +O(X^{2}).
\end{equation} Next one has for the partial derivative of $g_{\sigma\tau}$ with respect to $x^{\mu}$:% \begin{equation} \ g_{\sigma\tau,\mu}=\underset{\Delta x\rightarrow0}{\lim}\frac{g_{\sigma\tau }(x+\Delta x^{\mu})-g_{\sigma\tau}(x)}{\Delta x^{\mu}}, \end{equation} where the comma followed by $\mu$ denotes the partial derivative $\partial/\partial x^{\mu}$. Using Eqs. (3.26) and (3.28), then Eq. (3.29) becomes% \begin{equation} g_{\sigma\tau,\mu}=\underset{\Delta x\rightarrow0}{\lim}\frac{1}{2^{n}% }\text{Tr}\frac{\sigma\left( G+\frac{i}{2}[\text{ad}_{X},G]\right) (\tau)-\sigma G(\tau)}{\Delta x^{\mu}}, \end{equation} or \begin{equation} \ g_{\sigma\tau,\mu}\ \ \ =\underset{\Delta x\rightarrow0}{\lim}\frac {i}{2^{n+1}}\text{Tr}\frac{\sigma\lbrack X,G(\tau)]-\sigma\lbrack G(\tau ),X]}{\Delta x^{\mu}}. \end{equation} In the neighborhood of the origin, one has for infinitesimals $\Delta x^{\mu}%$, using the Einstein sum convention for repeated upper and lower indices, \begin{equation} X=\Delta x^{\mu}\mu, \end{equation} and when the $\mu$-component is substituted in Eq. (3.31), one obtains% \begin{equation} \ g_{\sigma\tau,\mu}=\frac{i}{2^{n+1}}\text{Tr}\left( \sigma\lbrack\mu ,G(\tau)]-\sigma\lbrack G(\tau),\mu]\right) .\ \ \end{equation} Next expanding the commutators, and using the cyclic property of the trace, one obtains% \begin{equation} \ g_{\sigma\tau,\mu}\ =\frac{i}{2^{n+1}}\text{Tr}\left( 2(G(\tau)\sigma \mu-\sigma G(\tau)\mu)\right) , \end{equation} or equivalently,% \begin{equation} \ \ g_{\sigma\tau,\mu}=\frac{i}{2^{n+1}}\text{Tr}\left( 2[G(\tau),\sigma ]\mu\right) . \end{equation} However because any Riemannian metric tensor is symmetric, one has% \begin{equation} \ \ g_{\sigma\tau,\mu}=\frac{1}{2}\left( g_{\sigma\tau,\mu}+g_{\tau\sigma ,\mu}\right) , \end{equation} and substituting Eq. (3.35) in Eq. 
(3.36), one obtains \cite{Nielsen 4}% \begin{equation} \ g_{\sigma\tau,\mu}=\frac{i}{2^{n+1}}\text{Tr}\left\{ \left( [G(\sigma ),\tau]+[G(\tau),\sigma]\right) \mu\right\} . \end{equation} The familiar form of the Levi-Civita connection of Riemannian geometry, in a coordinate basis, is given by the Christoffel symbols of the first kind, namely, \cite{Lee},\cite{Petersen} \begin{equation} \ \Gamma_{\mu\sigma\tau}=\frac{1}{2}(g_{\mu\sigma,\tau}+g_{\mu\tau,\sigma }-g_{\sigma\tau,\mu}). \end{equation} Substituting Eq. (3.37) in Eq. (3.38), one obtains% \begin{align} \Gamma_{\mu\sigma\tau} & =\frac{1}{2}\frac{i}{2^{n+1}}\text{Tr}\left( \left( [G(\mu),\sigma]+[G(\sigma),\mu]\right) \tau\right. \nonumber\\ & \ +\ \left( [G(\mu),\tau]+[G(\tau),\mu]\right) \sigma\nonumber\\ & -\left. \left( [G(\sigma),\tau]+[G(\tau),\sigma]\right) \mu\right) , \end{align} and expanding the commutators, using the cyclic property of the trace, and simplifying, this becomes% \begin{equation} \Gamma_{\mu\sigma\tau}=\frac{i}{2^{n+1}}\text{Tr}((\tau G(\sigma )-G(\sigma)\tau)\mu\ +\ (\sigma G(\tau)-G(\tau)\sigma)\mu), \end{equation} or% \begin{equation} \Gamma_{\mu\sigma\tau}\ =\frac{i}{2^{n+1}}\text{Tr}(([\sigma,G(\tau )]+[\tau,G(\sigma)])\mu), \end{equation} and again using the cyclic property of the trace, one obtains \begin{equation} \Gamma_{\mu\sigma\tau}=\frac{i}{2^{n+1}}\text{Tr}(\mu([\sigma,G(\tau )]+[\tau,G(\sigma)])). \end{equation} The inverse metric is given by (see Appendix C):% \begin{equation} g^{\sigma\tau}=\frac{1}{2^{n}}\text{Tr}(\sigma F(\tau)). \end{equation} It then follows that the Christoffel symbols of the second kind \cite{Petersen} are given by (see Appendix D) \cite{Nielsen 4}% \begin{equation} \ \Gamma_{\sigma\tau}^{\rho}=\frac{i}{2^{n+1}}\text{Tr}\left( F(\rho)\left( [\sigma,G(\tau)]+[\tau,G(\sigma)]\right) \right) , \end{equation} in which one defines% \begin{equation} F(\rho)\equiv G^{-1}(\rho). 
\end{equation}
Next, for a generic Riemannian connection $\Gamma_{kl}^{j}$ and vectors $Z$ and $Y$, written in a coordinate basis, one has the familiar equation for the covariant derivative of $Z$ along $Y$:
\begin{equation}
(\nabla_{Y}Z)^{j}=\frac{\partial z^{j}}{\partial x^{k}}y^{k}+\Gamma_{kl}^{j}\ y^{k}z^{l},
\end{equation}
in which the Einstein convention of summing over repeated indices is implicit. Replacing indices $(j,k,l)$ by $(\sigma,\tau,\lambda)$, multiplying both sides of Eq. (3.46) by $\sigma$, and summing over $\sigma$ yields
\begin{equation}
{\displaystyle\sum\limits_{\sigma}}
\sigma(\nabla_{Y}Z)^{\sigma}=
{\displaystyle\sum\limits_{\sigma\tau}}
y^{\tau}\sigma\frac{\partial z^{\sigma}}{\partial x^{\tau}}+
{\displaystyle\sum\limits_{\sigma\tau\lambda}}
\sigma\Gamma_{\tau\lambda}^{\sigma}y^{\tau}z^{\lambda},
\end{equation}
and substituting Eqs. (3.23) and (3.44) in Eq. (3.47), one obtains
\begin{equation}
(\nabla_{Y}Z)^{P}\equiv
{\displaystyle\sum\limits_{\tau}}
y^{\tau}\frac{\partial Z^{P}}{\partial x^{\tau}}+
{\displaystyle\sum\limits_{\sigma\tau\lambda}}
\sigma\frac{i}{2^{n+1}}\text{Tr}\left\{ F(\sigma)\left( [\tau,G(\lambda)]+[\lambda,G(\tau)]\right) \right\} y^{\tau}z^{\lambda}.
\end{equation}
The following identity is true (see Appendix E):
\begin{equation}
{\displaystyle\sum\limits_{\sigma}}
\sigma\text{Tr}\left\{ F(\sigma)[\tau,G(\lambda)]\right\} =2^{n}F([\tau,G(\lambda)]),
\end{equation}
so that Eq.
(3.48) becomes
\begin{equation}
(\nabla_{Y}Z)^{P}\equiv
{\displaystyle\sum\limits_{\tau}}
y^{\tau}\frac{\partial Z^{P}}{\partial x^{\tau}}+
{\displaystyle\sum\limits_{\tau\lambda}}
\frac{i}{2^{n+1}}2^{n}\left( F([\tau,G(\lambda)])+F([\lambda,G(\tau)])\right) y^{\tau}z^{\lambda}.
\end{equation}
Then substituting Eqs. (3.23) in Eq. (3.50), and using the Einstein sum convention, one obtains the Pauli representation of the connection evaluated at the origin with the vector fields given in the Pauli representation, namely, \cite{Nielsen 4}:
\begin{equation}
(\nabla_{Y}Z)^{P}\equiv y^{\tau}\frac{\partial Z^{P}}{\partial x^{\tau}}+\frac{i}{2}\left( F([Y^{P},G(Z^{P})])+F([Z^{P},G(Y^{P})])\right) .
\end{equation}
To obtain the Hamiltonian representation of the connection, one has according to Eqs. (3.9) and (3.11) near the origin,
\begin{equation}
Z^{P}=D_{X}(Z^{H})=\left( 1+\frac{i}{2}\text{ad}_{X}\right) (Z^{H}).
\end{equation}
Also, clearly,
\begin{equation}
\frac{\partial Z^{P}}{\partial x^{\sigma}}=\underset{\Delta x\rightarrow0}{\lim}\frac{Z^{P}(x+\Delta x)-Z^{P}(x)}{\Delta x^{\sigma}},
\end{equation}
and substituting Eq. (3.52) in Eq. (3.53), then
\begin{equation}
\frac{\partial Z^{P}}{\partial x^{\sigma}}=\underset{\Delta x\rightarrow0}{\lim}\frac{\left( 1+\frac{i}{2}\text{ad}_{X}\right) (Z^{H}(x+\Delta x))-Z^{H}}{\Delta x^{\sigma}},
\end{equation}
or substituting Eqs. (3.7) and (3.32), and dropping the Einstein sum convention here only, then
\begin{equation}
\frac{\partial Z^{P}}{\partial x^{\sigma}}=\underset{\Delta x\rightarrow0}{\lim}\frac{\frac{i}{2}\left[ \Delta x^{\sigma}\sigma,Z^{H}\right] +\frac{\partial Z^{H}}{\partial x^{\sigma}}\Delta x^{\sigma}}{\Delta x^{\sigma}}=\frac{i}{2}\left[ \sigma,Z^{H}\right] +\frac{\partial Z^{H}}{\partial x^{\sigma}}.
\end{equation} Thus% \begin{equation} Z_{,\sigma}^{P}=\frac{i}{2}\left[ \sigma,Z^{H}\right] +Z_{,\sigma}^{H}, \end{equation} or multiplying by $y^{\sigma}$, using Eq. (3.23), and restoring the Einstein sum convention, one has \begin{equation} y^{\sigma}Z_{,\sigma}^{P}=y^{\sigma}Z_{,\sigma}^{H}+\frac{i}{2}\left[ Y^{P},Z^{H}\right] . \end{equation} Next substituting Eq. (3.57) in Eq. (3.51), one obtains% \begin{equation} \left( \nabla_{Y}Z\right) ^{P}=y^{\sigma}Z_{,\sigma}^{H}+\frac{i}{2}\left[ Y^{P},Z^{H}\right] +\frac{i}{2}F\left( \left[ Y^{P},G(Z^{P})\right] +\left[ Z^{P},G(Y^{P})\right] \right) . \end{equation} But at the origin, it is true that% \begin{equation} (\nabla_{Y}Z)^{H}=(\nabla_{Y}Z)^{P},\ \ \ \ Y^{H}=Y^{P},\ \ \ \ Z^{H}=Z^{P}, \end{equation} and the components $y^{\sigma}$ of $Y$ are the same in both representations. Therefore using Eqs. (3.58) and (3.59), one obtains the Hamiltonian representation of the connection at the origin \cite{Nielsen 4}:% \begin{equation} \left( \nabla_{Y}Z\right) ^{H}=y^{\sigma}Z_{,\sigma}^{H}+\frac{i}{2}\left( \left[ Y^{H},Z^{H}\right] +F\left( \left[ Y^{H},G(Z^{H})\right] +\left[ Z^{H},G(Y^{H})\right] \right) \right) , \end{equation} in which $Z_{,\sigma}^{H}\equiv\frac{\partial Z^{H}}{\partial x^{\sigma}}$. Equation (3.60) gives the covariant derivative of the vector $Z^{H}$ along the vector $Y^{H}$. \section{GEODESIC EQUATION} Next consider \ a curve passing through the origin with tangent vector $Y^{H}$ having components $y^{\sigma}=dx^{\sigma}/dt$. Then according to Eq. (3.60) and the chain rule, the covariant derivative along the curve in the Hamiltonian representation is given by% \begin{equation} (D_{t}Z)^{H}\equiv(\nabla_{Y}Z)^{H}=\frac{dZ^{H}}{dt}+\frac{i}{2}\left( \left[ Y^{H},Z^{H}\right] +F\left( \left[ Y^{H},G(Z^{H})\right] +\left[ Z^{H},G(Y^{H})\right] \right) \right) . \end{equation} Because of the right-invariance of the metric, Eq. (4.1) is true on the entire manifold. 
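Since the metric and the connection are both right-invariant, Eq. (4.3) (below) admits a direct numerical test: for constant (right-invariant) fields, $\left\langle Y,Z\right\rangle$ is constant, so the Levi-Civita property requires $\left\langle \nabla_{W}Y,Z\right\rangle +\left\langle Y,\nabla_{W}Z\right\rangle =0$. The following sketch (ours, not the paper's; NumPy assumed, $n=3$, with an illustrative penalty value) implements $G$, $F$, the metric, and the covariant derivative of Eq. (4.1) with $dZ^{H}/dt=0$:

```python
import itertools
import numpy as np

PAULI = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

N = 3          # qubits
D = 2 ** N
QPEN = 10.0    # penalty parameter q (illustrative value)

BASIS = []     # (weight, generalized Pauli matrix) pairs
for idx in itertools.product(range(4), repeat=N):
    if any(idx):
        m = PAULI[idx[0]]
        for i in idx[1:]:
            m = np.kron(m, PAULI[i])
        BASIS.append((sum(i != 0 for i in idx), m))

def weight_filter(A, pred):
    """Keep only the Pauli components of A whose weight satisfies pred."""
    out = np.zeros((D, D), dtype=complex)
    for w, s in BASIS:
        if pred(w):
            out += (np.trace(A @ s) / D) * s
    return out

def G(A):  # G = P + q Q, Eq. (3.15)
    return weight_filter(A, lambda w: w <= 2) + QPEN * weight_filter(A, lambda w: w > 2)

def F(A):  # F = G^{-1} = P + q^{-1} Q
    return weight_filter(A, lambda w: w <= 2) + weight_filter(A, lambda w: w > 2) / QPEN

def inner(A, B):
    """<A,B> = Tr(A G(B)) / 2^n, Eq. (3.14)."""
    return np.trace(A @ G(B)).real / D

def nabla(W, Y):
    """(nabla_W Y)^H for right-invariant fields W, Y, Eq. (4.3)."""
    c = lambda A, B: A @ B - B @ A
    return 0.5j * (c(W, Y) + F(c(W, G(Y)) + c(Y, G(W))))
```

The compatibility sum vanishes identically because $F$ and $G$ are self-adjoint with respect to the trace inner product and the remaining commutator traces cancel cyclically.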
Furthermore, for a right-invariant vector field $Z^{H}$, one has
\begin{equation}
\frac{dZ^{H}}{dt}=0,
\end{equation}
and substituting Eq. (4.2) in Eq. (4.1), one obtains
\begin{equation}
(\nabla_{Y}Z)^{H}=\frac{i}{2}\left\{ \left[ Y^{H},Z^{H}\right] +F\left( \left[ Y^{H},G(Z^{H})\right] +\left[ Z^{H},G(Y^{H})\right] \right) \right\} ,
\end{equation}
which is also true everywhere on the manifold. One can next proceed to obtain the geodesic equation. A geodesic in $SU(2^{n})$ is a curve $U(t)$ with tangent vector $H(t)$ parallel transported along the curve, namely,
\begin{equation}
D_{t}H=0.
\end{equation}
However, according to Eq. (4.1) with $Y^{H}=Z^{H}=H$, one has
\begin{equation}
D_{t}H=\frac{dH}{dt}+\frac{i}{2}([H,H]+F\left( \left[ H,G(H)\right] +\left[ H,G(H)\right] \right) ),
\end{equation}
which, upon substituting Eq. (4.4), becomes \cite{Nielsen 4}
\begin{equation}
\frac{dH}{dt}=-iF\left( \left[ H,G(H)\right] \right) .
\end{equation}
One can rewrite Eq. (4.6) by first using Eqs. (3.16) and (3.45) to write
\begin{equation}
L\equiv G(H)=F^{-1}(H),
\end{equation}
and then noting that
\begin{equation}
\frac{dL}{dt}=\frac{d}{dt}\left( F^{-1}(H)\right) =F^{-1}\left( \frac{dH}{dt}\right) .
\end{equation}
Thus substituting Eq. (4.6) in Eq. (4.8), one obtains
\begin{equation}
\frac{dL}{dt}=-iF^{-1}\left( F([H,G(H)])\right) ,
\end{equation}
or
\begin{equation}
\frac{dL}{dt}=-i[H,G(H)],
\end{equation}
and again using Eq. (4.7), Eq. (4.10) becomes
\begin{equation}
\frac{dL}{dt}=-i[H,L]=i[L,H].
\end{equation}
Furthermore, again using Eq. (4.7) in Eq. (4.11), one obtains the sought geodesic equation \cite{Nielsen 4}:
\begin{equation}
\frac{dL}{dt}=i[L,F(L)].
\end{equation}
Equation (4.12) is a Lax equation, a well-known nonlinear matrix differential equation, and $L$ and $iF(L)$ form a Lax pair \cite{Lax 1}-\cite{Lax 4}. An alternative form for the geodesic equation can be obtained by first substituting Eq. (3.15) in Eq. (3.45), obtaining
\begin{equation}
F=P+q^{-1}Q.
\end{equation}
Equation (4.13) follows since, according to Eqs. (4.7), (3.15), and (4.13), one then has
\begin{equation}
G^{-1}G=FG=\left( P+q^{-1}Q\right) (P+qQ),
\end{equation}
or
\begin{equation}
G^{-1}G=P^{2}+qPQ+q^{-1}QP+Q^{2},
\end{equation}
and using Eqs.\ (2.2), this becomes
\begin{equation}
G^{-1}G=I,
\end{equation}
as it must. Then substituting Eq. (4.13) in Eq. (4.12), one obtains
\begin{equation}
\frac{dL}{dt}=i[L,P(L)+q^{-1}Q(L)].
\end{equation}
Using Eq. (2.2) in Eq. (4.17), one has
\begin{equation}
\frac{dL}{dt}=i\left[ L,P(L)+q^{-1}(L-P(L))\right] ,
\end{equation}
or
\begin{equation}
\frac{dL}{dt}=iq^{-1}[L,L]+i(1-q^{-1})[L,P(L)].
\end{equation}
Finally, Eq. (4.19) becomes \cite{Nielsen 4}
\begin{equation}
\frac{dL}{dt}=i(1-q^{-1})[L,P(L)].
\end{equation}
It follows that if $[L,P(L)]=0$, or equivalently if $H_{1}$ and $H_{2}$ commute with $H_{m}$ for $m\geq3$, then $dL/dt=0$, or using Eq. (4.7), then also $dH/dt=0$, namely the Hamiltonian is constant, and therefore, using the Schr\"{o}dinger equation, it follows that the geodesic path becomes simply $e^{-iHt}$. The latter is also the case if only one- and two-body terms appear in the Hamiltonian. Yet another useful form for the geodesic equation follows by first defining
\begin{equation}
M=(1-q^{-1})L,\ \ \ \ q\neq1.
\end{equation}
Then
\begin{equation}
\frac{dM}{dt}=(1-q^{-1})\frac{dL}{dt},
\end{equation}
and substituting Eqs. (4.20) and (4.21) in Eq. (4.22), then
\begin{equation}
\frac{dM}{dt}=i(1-q^{-1})^{2}\left[ L,\frac{1}{(1-q^{-1})}P(M)\right] ,
\end{equation}
or equivalently \cite{Nielsen 4},
\begin{equation}
\frac{dM}{dt}=i[M,P(M)],\ \ \ \ \ \ q\neq1,
\end{equation}
independent of $q$, provided $q\neq1$. Equations (4.7) and (4.21) imply
\begin{equation}
H=G^{-1}(L)=\frac{1}{(1-q^{-1})}G^{-1}(M),
\end{equation}
and therefore solving Eq. (4.24) for $M$ yields the Hamiltonian $H$ producing the geodesic path.
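The Lax form of Eq. (4.24) makes one qualitative feature of the geodesic flow easy to check numerically: $dM/dt=i[M,P(M)]$ has the form $dM/dt=[M,A]$ with $A=iP(M)$ anti-Hermitian, so $M(t)$ evolves by unitary conjugation and its spectrum is conserved. A minimal sketch follows (assumed toy setup, not from the text: $n=3$ qubits, a random normalized initial $M$, classical fourth-order Runge-Kutta integration):

```python
# Numerical sketch: the Lax-form geodesic equation dM/dt = i[M, P(M)]
# (Eq. (4.24)) generates an isospectral flow.  Assumed toy setup: n = 3
# qubits, random Hermitian M(0), classical RK4 integration.
import itertools
import numpy as np

PAULI = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
n = 3
strings = [s for s in itertools.product(range(4), repeat=n) if any(s)]

def tensor(s):
    m = PAULI[s[0]]
    for k in s[1:]:
        m = np.kron(m, PAULI[k])
    return m

B = np.array([tensor(s) for s in strings])
keep = np.array([sum(k != 0 for k in s) <= 2 for s in strings], dtype=float)

def P(H):
    # Superoperator P: project onto one- and two-body Pauli components
    c = np.einsum('kij,ji->k', B, H) / 2**n
    return np.einsum('k,kij->ij', c * keep, B)

def rhs(M):
    PM = P(M)
    return 1j * (M @ PM - PM @ M)   # i[M, P(M)]

rng = np.random.default_rng(2)
M = np.einsum('k,kij->ij', rng.standard_normal(len(strings)), B)
M /= np.linalg.norm(M)              # normalize for a tame integration
M0 = M.copy()
eig0 = np.sort(np.linalg.eigvalsh(M))

dt = 1e-3
for _ in range(2000):               # integrate to t = 2 with RK4
    k1 = rhs(M)
    k2 = rhs(M + 0.5 * dt * k1)
    k3 = rhs(M + 0.5 * dt * k2)
    k4 = rhs(M + dt * k3)
    M = M + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

drift = np.max(np.abs(np.sort(np.linalg.eigvalsh(M)) - eig0))
moved = np.linalg.norm(M - M0)
print(drift, moved)  # eigenvalues conserved although M itself evolves
```

Conservation of the spectrum is the standard hallmark of a Lax pair; the matrix-valued constant of the motion is derived analytically in the next section.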
\section{CONSTANTS OF MOTION}

Constants of the motion for the geodesic Eq. (4.12) are readily obtained as follows. For arbitrary constant $L_{0}$ and unitary transformation $U(t)$, define the function $L(t)$ by
\begin{equation}
L(t)=U(t)L_{0}U^{\dag}(t).
\end{equation}
Then
\begin{equation}
\frac{dL(t)}{dt}=\frac{dU(t)}{dt}L_{0}U^{\dag}(t)+U(t)L_{0}\frac{dU^{\dag}(t)}{dt}.
\end{equation}
Also, for a state $\left\vert \Psi\right\rangle$ given by
\begin{equation}
\left\vert \Psi\right\rangle =U(t)\left\vert \Psi_{0}\right\rangle ,
\end{equation}
one has
\begin{equation}
\frac{d\left\vert \Psi\right\rangle }{dt}=\frac{dU(t)}{dt}\left\vert \Psi_{0}\right\rangle .
\end{equation}
But the Schr\"{o}dinger equation is
\begin{equation}
i\hbar\frac{d\left\vert \Psi\right\rangle }{dt}=H\left\vert \Psi\right\rangle =HU\left\vert \Psi_{0}\right\rangle ,
\end{equation}
and substituting Eqs. (5.3) and (5.4) in Eq. (5.5), one obtains (setting $\hbar=1$):
\begin{equation}
\frac{dU(t)}{dt}=\frac{1}{i}HU(t),
\end{equation}
and therefore
\begin{equation}
\frac{dU^{\dag}(t)}{dt}=-\frac{1}{i}U^{\dag}(t)H.
\end{equation}
Next substituting Eqs. (5.6) and (5.7) in Eq. (5.2), one has
\begin{equation}
\frac{dL(t)}{dt}=\frac{1}{i}HU(t)L_{0}U^{\dag}(t)-\frac{1}{i}U(t)L_{0}U^{\dag}(t)H,
\end{equation}
or
\begin{equation}
\frac{dL(t)}{dt}=\frac{1}{i}[H,U(t)L_{0}U^{\dag}(t)].
\end{equation}
Also, according to Eq. (4.7), one has
\begin{equation}
H=G^{-1}(L)=F(L).
\end{equation}
Next, substituting Eqs. (5.1) and (5.10) in Eq. (5.9), one obtains
\begin{equation}
\frac{dL}{dt}=\frac{1}{i}[F(L),L],
\end{equation}
or
\begin{equation}
\frac{dL}{dt}=i[L,F(L)].
\end{equation}
Thus $L(t)$ given by Eq. (5.1) satisfies the geodesic equation, Eq. (4.12), for any $L_{0}=G(H_{0})$, in which $H_{0}$ is some constant Hamiltonian. Next it follows from Eq.
(5.1) that
\begin{equation}
U^{\dag}(t)L(t)U(t)=U^{\dag}(t)\ U(t)\ L_{0}U^{\dag}(t)U(t),
\end{equation}
which by unitarity,
\begin{equation}
U^{\dag}(t)U(t)=1,
\end{equation}
becomes
\begin{equation}
U^{\dag}(t)L(t)U(t)=L_{0},
\end{equation}
a matrix-valued constant of geodesic motion which completely determines the system geodesics. One can also show that one-body terms are constants of the motion. Let $S(X)$ map an $n$-qubit matrix $X$ onto its one-body terms. Then Eq. (4.20) implies:
\begin{equation}
\frac{dS(L)}{dt}=S\frac{dL}{dt}=i(1-q^{-1})S([L,P(L)]),
\end{equation}
or using Eq. (2.2), then
\begin{equation}
\frac{dS(L)}{dt}=i(1-q^{-1})S([P(L)+Q(L),P(L)]),
\end{equation}
or
\begin{equation}
\frac{dS(L)}{dt}=i(1-q^{-1})S([Q(L),P(L)]).
\end{equation}
Next letting $T$ map into two-body terms, one has
\begin{equation}
\lbrack Q(L),P(L)]=[Q(L),S(L)+T(L)]=[Q(L),S(L)]+[Q(L),T(L)],
\end{equation}
but the commutator of $Q(L)$ with one-body terms in $P(L)$ yields three- or more-body terms. For example,
\begin{align}
\lbrack Q(L),S(L)] & \backsim\ [\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{k},I\otimes I\otimes\sigma_{l}]\nonumber\\
& =\sigma_{i}\otimes\sigma_{j}\otimes(\sigma_{k}\sigma_{l})-\sigma_{i}\otimes\sigma_{j}\otimes(\sigma_{l}\sigma_{k})\nonumber\\
& =i\varepsilon_{klm}\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{m}-i\varepsilon_{lkm}\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{m}\nonumber\\
& =2i\varepsilon_{klm}\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{m}.
\end{align}
Thus the commutator consists of three- or more-body terms, since $Q(L)$ generally contains three- or more-body terms. Also in Eq. (5.19), the commutator of $Q(L)$ with two-body terms in $P(L)$ yields two- and more-body terms.
For example,
\begin{align}
\lbrack Q(L),T(L)] & \backsim\ [\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{k},I\otimes\sigma_{j}\otimes\sigma_{i}]\nonumber\\
& =\sigma_{i}\otimes I\otimes i\varepsilon_{kim}\sigma_{m}-\sigma_{i}\otimes I\otimes i\varepsilon_{ikm}\sigma_{m}\nonumber\\
& =2i\varepsilon_{kim}\sigma_{i}\otimes I\otimes\sigma_{m}.
\end{align}
Thus the commutator consists of two- or more-body terms. So one has
\begin{equation}
S([Q(L),P(L)])=0,
\end{equation}
and substituting Eq. (5.22) in Eq. (5.18), one obtains
\begin{equation}
\frac{dS(L)}{dt}=0,
\end{equation}
and one concludes that
\begin{equation}
S(L)=S_{0}
\end{equation}
is constant.

\section{GEODESICS FOR CONSTANT HAMILTONIAN}

Using the geodesic equation in the form given by Eq. (4.20), it is also evident that when $q=1$, $L$ is constant along geodesics, namely,
\begin{equation}
L(t)=L_{0},
\end{equation}
where $L_{0}$ is a constant matrix. It then follows from Eqs. (5.10) and (6.1) that the Hamiltonian is also constant, namely,
\begin{equation}
H=G^{-1}(L_{0})\equiv H_{0}.
\end{equation}
Also, for large $q$, Eq. (4.20) becomes
\begin{equation}
\frac{dL}{dt}=i[L,P(L)].
\end{equation}
For $q\neq1$, the right-hand sides of Eqs. (4.20) and (6.3) vanish if
\begin{equation}
\lbrack L,P(L)]=0,
\end{equation}
or equivalently, using Eq. (2.2),
\begin{equation}
\lbrack Q(L),P(L)]=0.
\end{equation}
One again concludes that if one- and two-body terms commute with three- and more-body terms, or if the Hamiltonian $H$ contains only one- and two-body terms, then Eqs. (6.1) and (6.2) again hold, namely the Hamiltonian is constant. It then follows from the Schr\"{o}dinger equation that the corresponding unitary evolution is given by the geodesic
\begin{equation}
U(t)=e^{-iHt},
\end{equation}
as one might expect.

\section{THREE-QUBIT GEODESICS}

Next consider the three-qubit case.
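Before proceeding, the three-qubit Pauli bookkeeping behind Eqs. (5.20)-(5.22) — that $[Q(L),P(L)]$ is generically nonzero yet contains no one-body terms, so that $S(L)$ is conserved — can be spot-checked numerically. A minimal sketch (assumed toy setup, not from the text: a random three-qubit $L$ with standard-normal Pauli coefficients):

```python
# Numerical spot-check of Eq. (5.22): for a random three-qubit L, the
# commutator [Q(L), P(L)] is nonzero but has no one-body Pauli component,
# hence S(L) is a constant of the geodesic motion.  Assumed toy setup.
import itertools
import numpy as np

PAULI = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
n = 3
strings = [s for s in itertools.product(range(4), repeat=n) if any(s)]

def tensor(s):
    m = PAULI[s[0]]
    for k in s[1:]:
        m = np.kron(m, PAULI[k])
    return m

B = np.array([tensor(s) for s in strings])
w = np.array([sum(k != 0 for k in s) for s in strings])

def part(H, mask):
    # Keep only the Pauli components selected by `mask` over body numbers
    # (S: w == 1, T: w == 2, P: w <= 2, Q: w >= 3).
    c = np.einsum('kij,ji->k', B, H) / 2**n
    return np.einsum('k,kij->ij', c * mask, B)

rng = np.random.default_rng(3)
L = np.einsum('k,kij->ij', rng.standard_normal(len(strings)), B)
QL = part(L, (w >= 3).astype(float))
PL = part(L, (w <= 2).astype(float))
C = QL @ PL - PL @ QL                      # [Q(L), P(L)]
S_of_C = part(C, (w == 1).astype(float))   # one-body part of the commutator
print(np.linalg.norm(S_of_C), np.linalg.norm(C))  # first ~ 0, second nonzero
```

The same projection machinery, refined to separate one-, two-, and three-body parts, underlies the superoperators $S$, $T$, and $Q$ introduced next.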
For this case define
\begin{equation}
G_{3}^{s}\equiv sS+T+qQ,
\end{equation}
in which $S$, $T$, and $Q$ are superoperators (matrices) mapping onto the subspaces of three-qubit Hamiltonians containing only one-, two-, and three-body terms, respectively. Also in Eq. (7.1), $s$ is a parameter weighting the cost of one-body terms (the limit $s\rightarrow0$ corresponds to negligible cost for single-qubit operations). One has the following commutation relations between the matrix subspaces \textbf{S}, \textbf{T}, and \textbf{Q}:
\begin{equation}
\lbrack\text{\textbf{S}},\text{\textbf{T}}]\subseteq\text{\textbf{T}},
\end{equation}
\begin{equation}
\lbrack\text{\textbf{S}},\text{\textbf{Q}}]\subseteq\text{\textbf{Q}},
\end{equation}
\begin{equation}
\lbrack\text{\textbf{T}},\text{\textbf{Q}}]\subseteq\text{\textbf{T}}.
\end{equation}
Examples supporting Eqs. (7.2)-(7.4) are as follows:
\begin{align}
\lbrack\text{\textbf{S}},\text{\textbf{T}}] & \backsim\ [\sigma_{1}\otimes I\otimes I,\sigma_{2}\otimes\sigma_{3}\otimes I]=(\sigma_{1}\sigma_{2})\otimes\sigma_{3}\otimes I-(\sigma_{2}\sigma_{1})\otimes\sigma_{3}\otimes I\nonumber\\
& =2i\sigma_{3}\otimes\sigma_{3}\otimes I\subseteq\text{\textbf{T}},
\end{align}
\begin{align}
\lbrack\text{\textbf{S}},\text{\textbf{Q}}] & \backsim\ [I\otimes I\otimes\sigma_{l},\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{k}]\nonumber\\
& =\sigma_{i}\otimes\sigma_{j}\otimes(\sigma_{l}\sigma_{k})-\sigma_{i}\otimes\sigma_{j}\otimes(\sigma_{k}\sigma_{l})\nonumber\\
& =2i\varepsilon_{lkm}\sigma_{i}\otimes\sigma_{j}\otimes\sigma_{m}\subseteq\text{\textbf{Q}},
\end{align}
\begin{align}
\lbrack\text{\textbf{T}},\text{\textbf{Q}}] & \backsim\ [\sigma_{n}\otimes\sigma_{m}\otimes I,\sigma_{1}\otimes\sigma_{2}\otimes\sigma_{3}]\nonumber\\
& =(\sigma_{n}\sigma_{1})\otimes(\sigma_{m}\sigma_{2})\otimes\sigma_{3}-(\sigma_{1}\sigma_{n})\otimes(\sigma_{2}\sigma_{m})\otimes\sigma_{3}\nonumber\\
& =i\varepsilon_{n1p}\sigma_{p}\otimes i\varepsilon_{m2q}\sigma_{q}\otimes\sigma_{3}-i\varepsilon_{1np}\sigma_{p}\otimes i\varepsilon_{2mq}
\sigma_{q}\otimes\sigma_{3}=0,
\end{align}
and
\begin{align}
\lbrack\text{\textbf{T}},\text{\textbf{Q}}] & \backsim\ [\sigma_{1}\otimes\sigma_{3}\otimes I,\sigma_{1}\otimes\sigma_{2}\otimes\sigma_{3}]\nonumber\\
& =(\sigma_{1}\sigma_{1})\otimes(\sigma_{3}\sigma_{2})\otimes\sigma_{3}-(\sigma_{1}\sigma_{1})\otimes(\sigma_{2}\sigma_{3})\otimes\sigma_{3}\nonumber\\
& =I\otimes i\varepsilon_{32p}\sigma_{p}\otimes\sigma_{3}-I\otimes i\varepsilon_{23p}\sigma_{p}\otimes\sigma_{3}\nonumber\\
& =2i\varepsilon_{32p}I\otimes\sigma_{p}\otimes\sigma_{3}\subseteq\text{\textbf{T}}.
\end{align}
Next define
\begin{equation}
L=(S+T+Q)(L),
\end{equation}
and in much of the following, at the risk of an ambiguous but convenient notation,
\begin{equation}
S\equiv S(L),\ \ \ T\equiv T(L),\ \ Q\equiv Q(L).
\end{equation}
It then follows from the geodesic equation (5.12) and Eqs. (7.1), (5.10), and (7.2)-(7.4) that
\begin{align}
\frac{dS(L)}{dt} & =iS([(S+T+Q)(L),s^{-1}S(L)+T(L)+q^{-1}Q(L)])\nonumber\\
& =iS([S,T]+q^{-1}[S,Q]\nonumber\\
& +s^{-1}[T,S]+q^{-1}[T,Q]+s^{-1}[Q,S]+[Q,T])\nonumber\\
& =iS(\{\subseteq\text{\textbf{T}}\}+q^{-1}\{\subseteq\text{\textbf{Q}}\}+s^{-1}\{\subseteq\text{\textbf{T}}\}+q^{-1}\{\subseteq\text{\textbf{T}}\}+s^{-1}\{\subseteq\text{\textbf{Q}}\}+\{\subseteq\text{\textbf{T}}\})\nonumber\\
& =0,
\end{align}
\begin{align}
\frac{dT(L)}{dt} & =iT([(S+T+Q)(L),s^{-1}S(L)+T(L)+q^{-1}Q(L)])\nonumber\\
& =iT([S,T]+q^{-1}[S,Q]+s^{-1}[T,S]\nonumber\\
& +q^{-1}[T,Q]+s^{-1}[Q,S]+[Q,T])\nonumber\\
& =iT(\{\subseteq\text{\textbf{T}}\}+q^{-1}\{\subseteq\text{\textbf{Q}}\}+s^{-1}\{\subseteq\text{\textbf{T}}\}+q^{-1}\{\subseteq\text{\textbf{T}}\}+s^{-1}\{\subseteq\text{\textbf{Q}}\}+\{\subseteq\text{\textbf{T}}\})\nonumber\\
& =i(([S,T])+0+s^{-1}([T,S])\nonumber\\
& +q^{-1}([T,Q])+0+([Q,T]))\nonumber\\
& =i((1-s^{-1})[S,T]+(1-q^{-1})[Q,T]),
\end{align}
and
\begin{align}
\frac{dQ(L)}{dt} &
=iQ([S,T]+q^{-1}[S,Q]+s^{-1}[T,S]\nonumber\\ & +q^{-1}[T,Q]+s^{-1}[Q,S]+[Q,T])\nonumber\\ & =i(q^{-1}[S,Q]+s^{-1}[Q,S])\nonumber\\ & =i(q^{-1}-s^{-1})[S,Q]. \end{align} Thus one has \cite{Nielsen 4} \begin{equation} \frac{dS(L)}{dt}=0, \end{equation} \qquad% \begin{equation} \frac{dT}{dt}=i([((1-s^{-1})S+(1-q^{-1})Q),T]), \end{equation} and% \begin{equation} \frac{dQ}{dt}=i(q^{-1}-s^{-1})[S,Q]. \end{equation} From Eq. (7.14), it follows that% \begin{equation} S(t)=S_{0}, \end{equation} where $S_{0}$ is a constant matrix. Next substituting Eq. (7.17) in Eq. (7.16), and defining \begin{equation} k=i(q^{-1}-s^{-1}), \end{equation} one obtains \begin{equation} \frac{dQ}{dt}=k[S_{0},Q], \end{equation} or equivalently, \begin{equation} \frac{dQ}{dt}=kS_{0}Q-kQS_{0}. \end{equation} Next, in order to solve Eq. (7.20) for $Q(t)$, one may make the ansatz:% \begin{equation} Q(t)=f(S_{0},t)\overline{Q}g(S_{0},t), \end{equation} in which $f(S_{0},t)$ and $g(S_{0},t)$ are matrix functions to be determined, and $\overline{Q}$ is a constant matrix. Then one has \begin{equation} \frac{dQ(t)}{dt}=\frac{d}{dt}f(S_{0},t)\overline{Q}g(S_{0},t)+f(S_{0}% ,t)\overline{Q}\frac{d}{dt}g(S_{0},t), \end{equation} or equivalently, \begin{equation} \frac{dQ(t)}{dt}=\frac{d}{dt}f(S_{0},t)f(S_{0},t)^{-1}f(S_{0},t)\overline {Q}g(S_{0},t)+f(S_{0},t)\overline{Q}g(S_{0},t)g(S_{0},t)^{-1}\frac{d}% {dt}g(S_{0},t), \end{equation} and substituting Eq. (7.21) in Eq. (7.23), one obtains \begin{equation} \frac{dQ(t)}{dt}=\frac{d}{dt}f(S_{0},t)f(S_{0},t)^{-1}Q(t)+Q(t)g(S_{0}% ,t)^{-1}\frac{d}{dt}g(S_{0},t). \end{equation} Next comparing Eqs. (7.20) and (7.24), it follows that \begin{equation} \frac{d}{dt}f(S_{0},t)f(S_{0},t)^{-1}=-g(S_{0},t)^{-1}\frac{d}{dt}% g(S_{0},t)=kS_{0}, \end{equation} and therefore \begin{equation} f(S_{0},t)=c_{f}e^{kS_{0}t}, \end{equation} and \begin{equation} g(S_{0},t)=c_{g}e^{-kS_{0}t}, \end{equation} where $c_{f}$ and $c_{g}$ are constants. Therefore, substituting Eqs. 
(7.26), (7.27), and (7.18) in Eq. (7.21), using Eq. (7.17), and defining $Q_{0}\equiv c_{f}c_{g}\overline{Q}=Q(0)$, one obtains \cite{Nielsen 4}% \begin{equation} Q(t)=e^{it(q^{-1}-s^{-1})S_{0}}Q_{0}e^{-it(q^{-1}-s^{-1})S_{0}}. \end{equation} A check that Eq. (7.28) does indeed satisfy Eq. (7.16) is given in Appendix F. Next substituting Eqs. (7.17), (7.18), and (7.28) in Eq. (7.15), and defining \begin{equation} k_{1}=i(1-s^{-1}) \end{equation} and \begin{equation} k_{2}=i(1-q^{-1}), \end{equation} one obtains \begin{equation} \frac{dT}{dt}=k_{1}[S_{0},T]+k_{2}[e^{kS_{0}t}Q_{0}e^{-kS_{0}t},T], \end{equation} or \begin{equation} \frac{dT}{dt}=(k_{1}S_{0}+k_{2}e^{kS_{0}t}Q_{0}e^{-kS_{0}t})T-T(k_{1}% S_{0}+k_{2}e^{kS_{0}t}Q_{0}e^{-kS_{0}t}). \end{equation} Next, making the ansatz \begin{equation} T(t)=a(S_{0},Q_{0},t)\overline{T}b(S_{0},Q_{0},t), \end{equation} in which $a(S_{0},Q_{0},t)$ and $b(S_{0},Q_{0},t)$ are matrix functions to be determined, and $\overline{T}$ is a constant matrix, then \begin{equation} \frac{dT}{dt}=\frac{da(S_{0},Q_{0},t)}{dt}\overline{T}b(S_{0},Q_{0}% ,t)+a(S_{0},Q_{0},t)\overline{T}\frac{db(S_{0},Q_{0},t)}{dt}, \end{equation} or equivalently \begin{align} \frac{dT}{dt} & =\frac{da(S_{0},Q_{0},t)}{dt}a^{-1}(S_{0},Q_{0}% ,t)a(S_{0},Q_{0},t)\overline{T}b(S_{0},Q_{0},t)\nonumber\\ & +a(S_{0},Q_{0},t)\overline{T}b(S_{0},Q_{0},t)b^{-1}(S_{0},Q_{0}% ,t)\frac{db(S_{0},Q_{0},t)}{dt}. \end{align} But substituting Eq.\ (7.33) in Eq. (7.35), one obtains \begin{equation} \frac{dT}{dt}=\frac{da(S_{0},Q_{0},t)}{dt}a^{-1}(S_{0},Q_{0},t)T(t)+T(t)b^{-1}% (S_{0},Q_{0},t)\frac{db(S_{0},Q_{0},t)}{dt}. \end{equation} Comparing Eqs. (7.32) and (7.36), then \begin{equation} \frac{da(S_{0},Q_{0},t)}{dt}a^{-1}(S_{0},Q_{0},t)=-b^{-1}(S_{0},Q_{0}% ,t)\frac{db(S_{0},Q_{0},t)}{dt}=k_{1}S_{0}+k_{2}e^{kS_{0}t}Q_{0}e^{-kS_{0}t}. \end{equation} To solve Eq. 
(7.37), one makes the ansatz:
\begin{equation}
a(S_{0},Q_{0},t)=c_{a}e^{k_{3}S_{0}t}e^{(k_{4}S_{0}+k_{5}Q_{0})t},
\end{equation}
where $c_{a}$ is a constant. Then one has
\begin{equation}
\frac{da}{dt}=k_{3}S_{0}a+a(k_{4}S_{0}+k_{5}Q_{0}),
\end{equation}
and multiplying from the right with $a^{-1}$, one obtains
\begin{equation}
\frac{da}{dt}a^{-1}=k_{3}S_{0}+a(k_{4}S_{0}+k_{5}Q_{0})a^{-1}.
\end{equation}
Next substituting Eq. (7.38) in Eq. (7.40), one obtains
\begin{align}
\frac{da}{dt}a^{-1} & =k_{3}S_{0}+e^{k_{3}S_{0}t}e^{(k_{4}S_{0}+k_{5}Q_{0})t}(k_{4}S_{0}+k_{5}Q_{0})e^{-(k_{4}S_{0}+k_{5}Q_{0})t}e^{-k_{3}S_{0}t}\nonumber\\
& =k_{3}S_{0}+e^{k_{3}S_{0}t}(k_{4}S_{0}+k_{5}Q_{0})e^{-k_{3}S_{0}t}\nonumber\\
& =k_{3}S_{0}+k_{4}S_{0}+k_{5}e^{k_{3}S_{0}t}Q_{0}e^{-k_{3}S_{0}t}\nonumber\\
& =(k_{3}+k_{4})S_{0}+k_{5}e^{k_{3}S_{0}t}Q_{0}e^{-k_{3}S_{0}t}.
\end{align}
Comparing Eqs. (7.37) and (7.41), it follows that
\begin{equation}
k_{3}+k_{4}=k_{1},
\end{equation}
\begin{equation}
k_{5}=k_{2},
\end{equation}
and
\begin{equation}
k_{3}=k.
\end{equation}
From Eqs. (7.42), (7.44), (7.29), (7.30), and (7.18), it follows that
\begin{equation}
k_{4}=k_{1}-k=i(1-s^{-1})-i(q^{-1}-s^{-1})=i(1-q^{-1})=k_{2},
\end{equation}
and Eq. (7.38) becomes
\begin{equation}
a(S_{0},Q_{0},t)=c_{a}e^{i(q^{-1}-s^{-1})S_{0}t}e^{i(1-q^{-1})(S_{0}+Q_{0})t}.
\end{equation}
Next make the ansatz:
\begin{equation}
b(S_{0},Q_{0},t)=c_{b}e^{(k_{6}S_{0}+k_{7}Q_{0})t}e^{k_{8}S_{0}t},
\end{equation}
where $c_{b}$ is a constant. Then
\begin{equation}
\frac{db}{dt}=(k_{6}S_{0}+k_{7}Q_{0})b+bk_{8}S_{0},
\end{equation}
and therefore
\begin{equation}
b^{-1}\frac{db}{dt}=k_{8}S_{0}+b^{-1}(k_{6}S_{0}+k_{7}Q_{0})b.
\end{equation}
Then substituting Eq. (7.47) in Eq.
(7.49), one obtains \begin{align} b^{-1}\frac{db}{dt} & =k_{8}S_{0}+e^{-k_{8}S_{0}t}e^{-(k_{6}S_{0}+k_{7}% Q_{0})t}(k_{6}S_{0}+k_{7}Q_{0})e^{(k_{6}S_{0}+k_{7}Q_{0})t}e^{k_{8}S_{0}% t}\nonumber\\ & =k_{8}S_{0}+k_{6}S_{0}+k_{7}e^{-k_{8}S_{0}t}Q_{0}e^{k_{8}S_{0}t}\nonumber\\ & =(k_{8}+k_{6})S_{0}+k_{7}e^{-k_{8}S_{0}t}Q_{0}e^{k_{8}S_{0}t}. \end{align} Comparing Eqs. (7.50) and (7.37), then \begin{equation} k_{8}+k_{6}=-k_{1}, \end{equation}% \begin{equation} k_{7}=-k_{2}, \end{equation}% \begin{equation} k_{8}=-k, \end{equation} and using Eqs. (7.45), (7.51), and (7.53), one also has \begin{equation} k_{6}=-k_{1}+k=-k_{2}, \end{equation} so that Eq. (7.47) becomes \begin{equation} b(S_{0},Q_{0},t)=c_{b}e^{-k_{2}(S_{0}+Q_{0})t}e^{-kS_{0}t}. \end{equation} Then substituting Eqs. (7.30) and (7.18) in Eq. (7.55), one obtains \begin{equation} b(S_{0},Q_{0},t)=c_{b}e^{-i(1-q^{-1})(S_{0}+Q_{0})t}e^{-i(q^{-1}-s^{-1}% )S_{0}t}. \end{equation} Finally substituting Eqs. (7.56) and (7.46) in Eq. (7.33), and defining $T_{0}\equiv c_{a}c_{b}\overline{T}=T(0)$, one concludes that the solution to Eq. (7.15) is \cite{Nielsen 4}% \begin{equation} T(t)=e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}% e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}}. \end{equation} A check that Eq. (7.57) does indeed satisfy Eq. (7.15) is given in Appendix F. Next using Eqs. (7.1) and (7.10), the Hamiltonian is given by% \begin{equation} H(t)=\left( G_{3}^{s}\right) ^{-1}(L)=s^{-1}S(t)+T(t)+q^{-1}Q(t), \end{equation} or substituting Eqs. 
(7.17), (7.28), and (7.57), one obtains the locally optimal Hamiltonian,
\begin{align}
H(t) & =s^{-1}S_{0}+e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}\\
& \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\
& +q^{-1}e^{it(q^{-1}-s^{-1})S_{0}}Q_{0}e^{-it(q^{-1}-s^{-1})S_{0}}.\nonumber
\end{align}

\section{SOLUTION FOR LARGE PENALTY FACTOR}

In this section, the solution to the geodesic equation is obtained for a large penalty parameter $q$. One can assume the following normalization:
\begin{equation}
\left\langle H(t),H(t)\right\rangle =1,
\end{equation}
and using Eqs. (3.14) and (8.1), one obtains (again for the three-qubit case)
\begin{equation}
\frac{1}{2^{3}}\text{Tr}(H(t)G_{3}^{s}(H(t)))=1.
\end{equation}
Next using Eqs. (3.16), (3.17), (7.9), (7.10), and (7.1) in Eq. (8.2), one has
\begin{align}
1 & =\frac{1}{2^{3}}\text{Tr}(L(t)H(t))=\frac{1}{2^{3}}\text{Tr}(L(t)\left( G_{3}^{s}\right) ^{-1}(L(t)))\nonumber\\
& =\frac{1}{2^{3}}\text{Tr}\{(S(L)+T(L)+Q(L))(s^{-1}S(L)+T(L)+q^{-1}Q(L))\}\nonumber\\
& =\frac{1}{2^{3}}\text{Tr}(s^{-1}S^{2}+ST+q^{-1}SQ+s^{-1}TS+T^{2}\nonumber\\
& +q^{-1}TQ+s^{-1}QS+QT+q^{-1}Q^{2})\nonumber\\
& \geq\frac{1}{2^{3}}\text{Tr}(s^{-1}S^{2}),
\end{align}
or equivalently,
\begin{equation}
\frac{1}{2^{3}}\text{Tr}(S^{2})\leq s.
\end{equation}
Analogously,
\begin{equation}
1\geq\frac{1}{2^{3}}\text{Tr}(T^{2}),
\end{equation}
or
\begin{equation}
\frac{1}{2^{3}}\text{Tr}(T^{2})\leq1.
\end{equation}
Also analogously, one has
\begin{equation}
1\geq\frac{1}{2^{3}}\text{Tr}(q^{-1}Q^{2}),
\end{equation}
or
\begin{equation}
\frac{1}{2^{3}}\text{Tr}(Q^{2})\leq q.
\end{equation}
According to Eq. (8.8), one has that $q^{-1}Q\ \backsim\ O(q^{-1}q^{1/2})\ \backsim\ O(q^{-1/2})$, and therefore $q^{-1}Q$ can be neglected for large $q$. According to Eq. (7.58), the resulting error in $U(t)$ is $O(tq^{-1/2})$. Then using Eq.
(7.57) and neglecting $q^{-1}$ terms, one has
\begin{equation}
T(t)\underset{q\rightarrow\infty}{\longrightarrow}e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}T_{0}e^{-it(S_{0}+Q_{0})}e^{its^{-1}S_{0}}.
\end{equation}
The resultant error in $T(t)$ is $\leq t(q^{-1}s^{1/2}+q^{-1}q^{1/2})\ \backsim\ t(q^{-1}s^{1/2}+q^{-1/2})$, so that the resulting error in $U(t)$ is $\leq t^{2}(q^{-1}s^{1/2}+q^{-1/2})$. Using Eq. (7.58), one therefore has
\begin{equation}
H(t)=s^{-1}S(t)+T(t)+q^{-1}Q(t)\approx\widetilde{H}(t)\equiv s^{-1}S(t)+T(t).
\end{equation}
It then follows that the approximate Hamiltonian $\widetilde{H}(t)$ for large $q$ is given by
\begin{equation}
\widetilde{H}(t)\equiv s^{-1}S_{0}+e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}T_{0}e^{-it(S_{0}+Q_{0})}e^{its^{-1}S_{0}}.
\end{equation}
The resulting approximate solution $\widetilde{U}(t)$ to the Schr\"{o}dinger equation then satisfies $||U(t)-\widetilde{U}(t)||\leq O(tq^{-1/2}+t^{2}(s^{1/2}q^{-1}+q^{-1/2}))$. Next make a change of variables to
\begin{equation}
\widetilde{V}\equiv e^{-it(S_{0}+Q_{0})}e^{its^{-1}S_{0}}\widetilde{U}.
\end{equation}
Then
\begin{equation}
\frac{d\widetilde{V}}{dt}=-i(S_{0}+Q_{0})\widetilde{V}+is^{-1}e^{-it(S_{0}+Q_{0})}S_{0}e^{its^{-1}S_{0}}\widetilde{U}+e^{-it(S_{0}+Q_{0})}e^{its^{-1}S_{0}}\frac{d\widetilde{U}}{dt}.
\end{equation}
But using Eq. (5.6), one has
\begin{equation}
\frac{d\widetilde{U}}{dt}\approx\frac{dU(t)}{dt}=\frac{1}{i}HU\approx\frac{1}{i}\widetilde{H}\widetilde{U},
\end{equation}
so substituting Eqs. (8.14), (8.12), and (8.11) in Eq.
(8.13), and simplifying, it follows that
\begin{align}
\frac{d\widetilde{V}}{dt} & \approx-i(S_{0}+Q_{0})\widetilde{V}+is^{-1}e^{-it(S_{0}+Q_{0})}S_{0}e^{its^{-1}S_{0}}e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}\widetilde{V}\nonumber\\
& +\frac{1}{i}e^{-it(S_{0}+Q_{0})}e^{its^{-1}S_{0}}\nonumber\\
& \times\left( s^{-1}S_{0}+e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}T_{0}e^{-it(S_{0}+Q_{0})}e^{its^{-1}S_{0}}\right) \nonumber\\
& \times e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}\widetilde{V}\nonumber\\
& =-i(S_{0}+Q_{0})\widetilde{V}+is^{-1}e^{-it(S_{0}+Q_{0})}S_{0}e^{it(S_{0}+Q_{0})}\widetilde{V}\nonumber\\
& +\frac{1}{i}e^{-it(S_{0}+Q_{0})}e^{its^{-1}S_{0}}\nonumber\\
& \times\left( s^{-1}S_{0}e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}+e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}T_{0}\right) \widetilde{V}\nonumber\\
& =-i(S_{0}+Q_{0})\widetilde{V}+is^{-1}e^{-it(S_{0}+Q_{0})}\nonumber\\
& \times\left( S_{0}e^{it(S_{0}+Q_{0})}-S_{0}e^{it(S_{0}+Q_{0})}\right) \widetilde{V}\nonumber\\
& +\frac{1}{i}T_{0}\widetilde{V}\nonumber\\
& =-i(S_{0}+T_{0}+Q_{0})\widetilde{V}.
\end{align}
Next integrating Eq. (8.15), and noting that $\widetilde{V}(0)=\widetilde{U}(0)=1$, one obtains
\begin{equation}
\widetilde{V}=e^{-it(S_{0}+T_{0}+Q_{0})},
\end{equation}
and substituting Eq. (8.16) in Eq. (8.12), one obtains for large $q$:
\begin{equation}
\widetilde{U}=e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}\widetilde{V}=e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}e^{-it(S_{0}+T_{0}+Q_{0})},\ \ q\gg1.
\end{equation}
For the case $s\rightarrow0$, there is negligible cost for single-qubit unitary operations, so the $S_{0}$ term in the last two exponents in Eq. (8.17) can be neglected, and Eq. (8.17) becomes
\begin{equation}
\widetilde{U}=e^{-its^{-1}S_{0}}e^{itQ_{0}}e^{-it(T_{0}+Q_{0})},\ \ q\gg1.
\end{equation}
In the case of general $s$, according to Eq. (8.17), one has
\begin{equation}
\widetilde{U}=e^{-its^{-1}S_{0}}e^{it(S_{0}+Q_{0})}e^{-it(S_{0}+T_{0}+Q_{0})},\ \ q\gg1.
\end{equation}
One might expect that $S_{0}+Q_{0}\gg T_{0}$ and that $S_{0}+Q_{0}$ is nondegenerate. Then first-order perturbation theory implies \cite{Nielsen 4}
\begin{equation}
\widetilde{U}\approx e^{-its^{-1}S_{0}}e^{-itR_{S_{0}+Q_{0}}(T_{0})},\ \ q\gg1,
\end{equation}
in which $R_{S_{0}+Q_{0}}(T_{0})$ is the matrix that remains when $T_{0}$ is expressed in the eigenbasis of $S_{0}+Q_{0}$ and its off-diagonal terms are removed. If $Q_{0}$ is nondegenerate as $s\rightarrow0$, then one obtains
\begin{equation}
\widetilde{U}(t)\approx e^{-its^{-1}S_{0}}e^{-itR_{Q_{0}}(T_{0})},\ \ q\gg1.
\end{equation}
It is important to emphasize that in general the solution described here is only locally geodesic, and is an optimal Hamiltonian evolution from some initial value over only a limited interval of time. Global geodesics are not addressed here.

\section{RIEMANN CURVATURE TENSOR}

For a right-invariant vector field $Z$, one has, after substituting
\begin{equation}
Z=\sum_{\tau}z^{\tau}\tau,\ \ \ Y=\sum_{\sigma}y^{\sigma}\sigma
\end{equation}
in Eq. (4.3) (here the Hamiltonian representation is to be understood):
\begin{equation}
\sum_{\sigma\tau}\nabla_{\sigma}\tau\,y^{\sigma}z^{\tau}=\frac{i}{2}\sum_{\sigma\tau}\left( [\sigma,\tau]+F([\sigma,G(\tau)]+[\tau,G(\sigma)])\right) y^{\sigma}z^{\tau},
\end{equation}
and therefore
\begin{equation}
\nabla_{\sigma}\tau=\frac{i}{2}\left( [\sigma,\tau]+F([\sigma,G(\tau)]+[\tau,G(\sigma)])\right) .
\end{equation}
Evidently, using Eqs.
(3.15), (14.3) and (14.4) of Appendix C, one has
\begin{equation}
\lbrack\sigma,G(\tau)]=\left\{
\begin{array}
[c]{c}
\lbrack\sigma,\tau],\ \ \tau\in S_{12}\cup S_{0}\\
q[\sigma,\tau],\ \ \tau\notin S_{12}\cup S_{0}
\end{array}
\right. ,
\end{equation}
and therefore
\begin{equation}
F\left( [\sigma,G(\tau)]\right) =\left\{
\begin{array}
[c]{c}
F([\sigma,\tau]),\ \ \tau\in S_{12}\cup S_{0}\\
qF([\sigma,\tau]),\ \ \tau\notin S_{12}\cup S_{0}
\end{array}
\right. ,
\end{equation}
or using Eq. (4.13), this becomes
\begin{equation}
F\left( [\sigma,G(\tau)]\right) =\left\{
\begin{array}
[c]{c}
\frac{1}{q_{[\sigma,\tau]}}[\sigma,\tau],\ \ \tau\in S_{12}\cup S_{0}\\
\frac{q}{q_{[\sigma,\tau]}}[\sigma,\tau],\ \ \tau\notin S_{12}\cup S_{0}
\end{array}
\right. ,
\end{equation}
where
\begin{equation}
q_{[\sigma,\tau]}=1\ \text{if }[\sigma,\tau]=0,\ \ q_{[\sigma,\tau]}=q_{\lambda}\ \text{if }[\sigma,\tau]\propto\lambda,\ \text{and }q_{[\sigma,\tau]}=q_{[\tau,\sigma]},
\end{equation}
and $q_{\lambda}$ is defined by Eq. (14.8) of Appendix C. Equation (9.6) can be written as
\begin{equation}
F\left( [\sigma,G(\tau)]\right) =\frac{q_{\tau}}{q_{[\sigma,\tau]}}[\sigma,\tau].
\end{equation}
Next substituting Eq. (9.8) in Eq. (9.3), one obtains
\begin{equation}
\nabla_{\sigma}\tau=\frac{i}{2}\left( \left( 1+\frac{q_{\tau}}{q_{[\sigma,\tau]}}\right) [\sigma,\tau]+\frac{q_{\sigma}}{q_{[\tau,\sigma]}}[\tau,\sigma]\right) ,
\end{equation}
or equivalently, using Eq. (9.7), this becomes
\begin{equation}
\nabla_{\sigma}\tau=\frac{i}{2}\left( 1+\frac{q_{\tau}-q_{\sigma}}{q_{[\sigma,\tau]}}\right) [\sigma,\tau],
\end{equation}
or
\begin{equation}
\nabla_{\sigma}\tau=ic_{\sigma,\tau}[\sigma,\tau],
\end{equation}
where
\begin{equation}
c_{\sigma,\tau}=\frac{1}{2}\left( 1+\frac{q_{\tau}-q_{\sigma}}{q_{[\sigma,\tau]}}\right) .
\end{equation}
The Riemann curvature tensor with the inner-product (metric) Eq.
(3.14) is given by \cite{Hall},\cite{MTW}
\begin{equation}
R(W,X,Y,Z)=\left\langle \nabla_{W}\nabla_{X}Y-\nabla_{X}\nabla_{W}Y-\nabla_{i[W,X]}Y,Z\right\rangle ,
\end{equation}
and after substituting the vector fields, expressed in terms of a basis of right-invariant frame fields $\rho$, $\sigma$, $\tau$, and $\mu$,
\begin{equation}
W=\sum_{\rho}w^{\rho}\rho,\ \ \ X=\sum_{\sigma}x^{\sigma}\sigma,\ \ \ Y=\sum_{\tau}y^{\tau}\tau,\ \ Z=\sum_{\mu}z^{\mu}\mu,
\end{equation}
Eq. (9.13) becomes
\begin{equation}
R_{\rho\sigma\tau\mu}=\left\langle \nabla_{\rho}\nabla_{\sigma}\tau-\nabla_{\sigma}\nabla_{\rho}\tau-\nabla_{i[\rho,\sigma]}\tau,\mu\right\rangle .
\end{equation}
Next, for three right-invariant vector fields $X$, $Y$, and $Z$, one has
\begin{equation}
0=\nabla_{Y}\left\langle X,Z\right\rangle =\left\langle X,\nabla_{Y}Z\right\rangle +\left\langle \nabla_{Y}X,Z\right\rangle ,
\end{equation}
or
\begin{equation}
\left\langle X,\nabla_{Y}Z\right\rangle =-\left\langle \nabla_{Y}X,Z\right\rangle ,
\end{equation}
and substituting Eqs. (9.14) in Eq. (9.17), one then has
\begin{equation}
\left\langle \sigma,\nabla_{\tau}\mu\right\rangle =-\left\langle \nabla_{\tau}\sigma,\mu\right\rangle .
\end{equation}
Therefore
\begin{equation}
\left\langle \nabla_{\rho}\nabla_{\sigma}\tau,\mu\right\rangle =-\left\langle \nabla_{\sigma}\tau,\nabla_{\rho}\mu\right\rangle ,
\end{equation}
and
\begin{equation}
\left\langle \nabla_{\sigma}\nabla_{\rho}\tau,\mu\right\rangle =-\left\langle \nabla_{\rho}\tau,\nabla_{\sigma}\mu\right\rangle .
\end{equation}
Then substituting Eqs. (9.19) and (9.20) in Eq.
(9.15), and interchanging the first and second terms, one obtains
\begin{equation}
R_{\rho\sigma\tau\mu}=\left\langle \nabla_{\rho}\tau,\nabla_{\sigma}\mu\right\rangle -\left\langle \nabla_{\sigma}\tau,\nabla_{\rho}\mu\right\rangle -\left\langle \nabla_{i[\rho,\sigma]}\tau,\mu\right\rangle .
\end{equation}
Also, clearly
\begin{equation}
\nabla_{iY}Z=i\nabla_{Y}Z,
\end{equation}
so Eq. (9.21) can also be written as
\begin{equation}
R_{\rho\sigma\tau\mu}=\left\langle \nabla_{\rho}\tau,\nabla_{\sigma}\mu\right\rangle -\left\langle \nabla_{\sigma}\tau,\nabla_{\rho}\mu\right\rangle -i\left\langle \nabla_{\lbrack\rho,\sigma]}\tau,\mu\right\rangle .
\end{equation}
Next substituting Eq. (9.11) in Eq. (9.23), one obtains the following useful form for the Riemann curvature tensor \cite{Nielsen 4}, \cite{Milnor}-\cite{Arnold 2}:
\begin{equation}
R_{\rho\sigma\tau\mu}=c_{\rho,\tau}c_{\sigma,\mu}\left\langle i[\rho,\tau],i[\sigma,\mu]\right\rangle -c_{\sigma,\tau}c_{\rho,\mu}\left\langle i[\sigma,\tau],i[\rho,\mu]\right\rangle -c_{[\rho,\sigma],\tau}\left\langle i[i[\rho,\sigma],\tau],\mu\right\rangle .
\end{equation}
The Riemannian curvature is important in determining the Jacobi field and the Jacobi equation (see Section 11), and in investigations of the global characteristics of geodesic paths in the group manifold \cite{Berger}.

\section{SECTIONAL CURVATURE}

The sectional curvature spanned by orthonormal right-invariant vector fields $X$ and $Y$ is defined by \cite{Lee}
\begin{equation}
K(X,Y)\equiv\frac{R(X,Y,Y,X)}{|X|^{2}|Y|^{2}-\left\langle X,Y\right\rangle ^{2}}=R(X,Y,Y,X).
\end{equation}
From Eqs. (9.14) and (9.21), it immediately follows that
\begin{equation}
R(W,X,Y,Z)=\left\langle \nabla_{W}Y,\nabla_{X}Z\right\rangle -\left\langle \nabla_{X}Y,\nabla_{W}Z\right\rangle -\left\langle \nabla_{i[W,X]}Y,Z\right\rangle ,
\end{equation}
and substituting Eq. (10.2) in Eq.
(10.1), one obtains
\begin{equation}
K(X,Y)=\left\langle \nabla_{X}Y,\nabla_{Y}X\right\rangle -\left\langle \nabla_{Y}Y,\nabla_{X}X\right\rangle -\left\langle \nabla_{i[X,Y]}Y,X\right\rangle .
\end{equation}
Next it is useful to define
\begin{equation}
B(X,Y)=F(i[G(X),Y]),
\end{equation}
and using Eqs. (3.14) and (10.4), one obtains
\begin{equation}
\left\langle B(X,Y),Z\right\rangle =\left\langle F(i[G(X),Y]),Z\right\rangle ,
\end{equation}
or equivalently, using Eq. (3.14), then
\begin{equation}
\left\langle B(X,Y),Z\right\rangle =\frac{1}{2^{n}}\text{Tr}(F(i[G(X),Y])G(Z)).
\end{equation}
Because the superoperator $G$ is Hermitian, Eq. (10.6) can also be written as
\begin{equation}
\left\langle B(X,Y),Z\right\rangle =\frac{1}{2^{n}}\text{Tr}(GF(i[G(X),Y])Z),
\end{equation}
but according to Eq. (3.45) one has
\begin{equation}
GF=I,
\end{equation}
and therefore Eq. (10.7) becomes
\begin{equation}
\left\langle B(X,Y),Z\right\rangle =\frac{1}{2^{n}}\text{Tr}(i[G(X),Y]Z).
\end{equation}
Next expanding the commutator, Eq. (10.9) becomes
\begin{equation}
\left\langle B(X,Y),Z\right\rangle =\frac{i}{2^{n}}\text{Tr}(G(X)YZ-YG(X)Z),
\end{equation}
and using the cyclic property of the trace, one obtains
\begin{equation}
\left\langle B(X,Y),Z\right\rangle =\frac{i}{2^{n}}\text{Tr}(G(X)YZ-G(X)ZY),
\end{equation}
or, equivalently,
\begin{equation}
\left\langle B(X,Y),Z\right\rangle =\frac{i}{2^{n}}\text{Tr}(G(X)[Y,Z]).
\end{equation}
But since the superoperator $G$ is Hermitian, Eq. (10.12) can also be written as
\begin{equation}
\left\langle B(X,Y),Z\right\rangle =\frac{i}{2^{n}}\text{Tr}(XG([Y,Z])),
\end{equation}
or equivalently, using Eq. (3.14), this becomes
\begin{equation}
\left\langle X,i[Y,Z]\right\rangle =\left\langle B(X,Y),Z\right\rangle .
\end{equation}
But according to Eq.
(3.14), it follows that for vectors $X$ and $Y$ one has% \begin{equation} \left\langle X,Y\right\rangle =\frac{1}{2^{n}}\text{Tr}(XG(Y)), \end{equation} and because the superoperator $G$ is Hermitian, this can also be written as% \begin{equation} \left\langle X,Y\right\rangle =\frac{1}{2^{n}}\text{Tr}(G(X)Y), \end{equation} which by the cyclic invariance of the trace becomes% \begin{equation} \left\langle X,Y\right\rangle =\frac{1}{2^{n}}\text{Tr}(YG(X)), \end{equation} or equivalently using Eq. (3.14), it follows that% \begin{equation} \left\langle X,Y\right\rangle =\left\langle Y,X\right\rangle , \end{equation} consistent with the symmetry of the Riemannian metric. Next, for a right-invariant field $Y$, one has, using Eq. (4.3),% \begin{equation} \nabla_{X}Y=\frac{1}{2}\left( i[X,Y]+F(i[X,G(Y)])+F(i[Y,G(X)])\right) , \end{equation} or% \begin{equation} \nabla_{X}Y=\frac{1}{2}\left( i[X,Y]-F(i[G(Y),X])-F(i[G(X),Y])\right) . \end{equation} Then substituting Eq. (10.4) in Eq. (10.20), one obtains% \begin{equation} \nabla_{X}Y=\frac{1}{2}\left( i[X,Y]-B(X,Y)-B(Y,X)\right) . \end{equation} Next, according to Eq. (10.3), it follows that% \begin{equation} K(X,Y)=R(X,Y,Y,X)=\left\langle \nabla_{X}Y,\nabla_{Y}X\right\rangle -\left\langle \nabla_{Y}Y,\nabla_{X}X\right\rangle -i\left\langle \nabla_{\lbrack X,Y]}Y,X\right\rangle . \end{equation} According to Eq. (10.21), one has% \begin{equation} \nabla_{\lbrack X,Y]}Y=\frac{1}{2}\left( i[[X,Y],Y]-B([X,Y],Y)-B(Y,[X,Y])\right) . \end{equation} Also using Eq.
(10.21), one obtains% \begin{equation} \left\langle \nabla_{X}Y,\nabla_{Y}X\right\rangle =\frac{1}{4}\left( \left\langle i[X,Y]-B(X,Y)-B(Y,X),i[Y,X]-B(Y,X)-B(X,Y)\right\rangle \right) , \end{equation} or, equivalently,% \begin{equation} \left\langle \nabla_{X}Y,\nabla_{Y}X\right\rangle =\frac{1}{4}\left( \left\langle i[X,Y]-B(X,Y)-B(Y,X),-i[X,Y]-B(X,Y)-B(Y,X)\right\rangle \right) , \end{equation} or% \begin{align} \left\langle \nabla_{X}Y,\nabla_{Y}X\right\rangle & =\frac{1}{4}\left\langle -i[X,Y],i[X,Y]\right\rangle \nonumber\\ & -\frac{1}{4}\left\langle i[X,Y],B(X,Y)+B(Y,X)\right\rangle +\frac{1}% {4}\left\langle B(X,Y)+B(Y,X),i[X,Y]\right\rangle \nonumber\\ & +\frac{1}{4}\left\langle B(X,Y)+B(Y,X),B(X,Y)+B(Y,X)\right\rangle . \end{align} Next using Eq. (10.18) in Eq. (10.26), then% \begin{equation} \left\langle \nabla_{X}Y,\nabla_{Y}X\right\rangle =-\frac{1}{4}\left\langle i[X,Y],i[X,Y]\right\rangle +\frac{1}{4}\left\langle B(X,Y)+B(Y,X),B(X,Y)+B(Y,X)\right\rangle . \end{equation} Also, one has% \begin{align} \left\langle \nabla_{Y}Y,\nabla_{X}X\right\rangle & =\frac{1}{4}% (\left\langle -i[Y,Y],i[X,X]\right\rangle -\left\langle i[Y,Y],2B(X,X)\right\rangle +\left\langle 2B(Y,Y),i[X,X]\right\rangle \nonumber\\ & +4\left\langle B(Y,Y),B(X,X)\right\rangle ). \end{align} But, according to Eq. (10.18), one has% \begin{equation} \left\langle B(Y,Y),B(X,X)\right\rangle =\left\langle B(X,X),B(Y,Y)\right\rangle . \end{equation} Then simplifying Eq. (10.28), one obtains% \begin{equation} \left\langle \nabla_{Y}Y,\nabla_{X}X\right\rangle =\left\langle B(X,X),B(Y,Y)\right\rangle . \end{equation} Next substituting Eqs. (10.27), (10.30), and (10.23) in Eq.
(10.22), one has% \begin{align} K(X,Y) & =-\frac{1}{4}\left\langle i[X,Y],i[X,Y]\right\rangle +\frac{1}% {4}\left\langle B(X,Y)+B(Y,X),B(X,Y)+B(Y,X)\right\rangle \nonumber\\ & -\frac{i}{2}\left\langle i[[X,Y],Y],X\right\rangle +\frac{i}{2}\left\langle B([X,Y],Y),X\right\rangle +\frac{i}{2}\left\langle B(Y,[X,Y]),X\right\rangle \nonumber\\ & -\left\langle B(X,X),B(Y,Y)\right\rangle . \end{align} Expanding the third term of Eq. (10.31), one has, using Eq. (3.14),% \begin{align} -\frac{i}{2}\left\langle i[[X,Y],Y],X\right\rangle & =\frac{1}{2}\left( \frac{1}{2^{n}}\right) \text{Tr}\left( [[X,Y],Y]G(X)\right) \nonumber\\ & =\frac{1}{2}\left( \frac{1}{2^{n}}\right) \text{Tr}\left( ([X,Y]Y-Y[X,Y])G(X)\right) , \end{align} and using the cyclic invariance of the trace, then% \begin{align} -\frac{i}{2}\left\langle i[[X,Y],Y],X\right\rangle & =\frac{1}{2}\left( \frac{1}{2^{n}}\right) \text{Tr}([X,Y]YG(X)-[X,Y]G(X)Y)\nonumber\\ & =\frac{1}{2}\left( \frac{1}{2^{n}}\right) \text{Tr}(i[X,Y]i[G(X),Y]). \end{align} Next using Eqs. (10.4), (3.14), and (3.45) in Eq. (10.33), one obtains% \begin{equation} -\frac{i}{2}\left\langle i[[X,Y],Y],X\right\rangle =\frac{1}{2}\left\langle i[X,Y],B(X,Y)\right\rangle . \end{equation} Next, in the fourth term of Eq. (10.31) one has, using Eq. (10.14),% \begin{equation} \left\langle B([X,Y],Y),X\right\rangle =\left\langle [X,Y],i[Y,X]\right\rangle =i\left\langle i[X,Y],i[X,Y]\right\rangle . \end{equation} In the fifth term of Eq. (10.31), using Eq. (10.14), one has% \begin{equation} \left\langle B(Y,[X,Y]),X\right\rangle =\left\langle Y,i[[X,Y],X]\right\rangle , \end{equation} or equivalently,% \begin{equation} \left\langle B(Y,[X,Y]),X\right\rangle =\left\langle Y,i[X,[Y,X]]\right\rangle , \end{equation} and using Eq. (10.14), this becomes% \begin{equation} \left\langle B(Y,[X,Y]),X\right\rangle =-\left\langle B(Y,X),[X,Y]\right\rangle . \end{equation} Next using Eq. (10.18), Eq.
(10.38) becomes \begin{equation} \left\langle B(Y,[X,Y]),X\right\rangle =-\left\langle [X,Y],B(Y,X)\right\rangle . \end{equation} Next, substituting Eqs. (10.34), (10.35), and (10.39) in Eq. (10.31), one has \begin{align} K(X,Y) & =-\frac{1}{4}\left\langle i[X,Y],i[X,Y]\right\rangle +\frac{1}% {4}\left\langle B(X,Y)+B(Y,X),B(X,Y)+B(Y,X)\right\rangle \nonumber\\ & +\frac{1}{2}\left\langle i[X,Y],B(X,Y)\right\rangle -\frac{1}% {2}\left\langle i[X,Y],i[X,Y]\right\rangle -\frac{1}{2}\left\langle i[X,Y],B(Y,X)\right\rangle \nonumber\\ & -\left\langle B(X,X),B(Y,Y)\right\rangle , \end{align} and combining terms, this becomes \cite{Nielsen 4}, \cite{Milnor}-\cite{Arnold 2}% \begin{align} K(X,Y) & =-\frac{3}{4}\left\langle i[X,Y],i[X,Y]\right\rangle +\frac{1}% {4}\left\langle B(X,Y)+B(Y,X),B(X,Y)+B(Y,X)\right\rangle \nonumber\\ & +\frac{1}{2}\left\langle i[X,Y],B(X,Y)-B(Y,X)\right\rangle -\left\langle B(X,X),B(Y,Y)\right\rangle . \end{align} \section{JACOBI FIELD} Consider a one-parameter family of geodesics \begin{equation} x^{j}=x^{j}(s,t), \end{equation} in which the parameter $s$ distinguishes a particular geodesic in the family, and $t$ is the usual curve parameter which can be taken to be time. The Riemannian geodesic equation in a coordinate representation is given by \cite{Lee} \begin{equation} \frac{\partial^{2}}{\partial t^{2}}x^{j}(s,t)+\ \Gamma_{kl}^{j}(s)\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}=0, \end{equation} in which, according to Eq. (3.38) and the symmetry of the metric in its indices, \begin{equation} \ \Gamma_{kl}^{j}(s)=\frac{1}{2}g^{jm}(s)(g_{km,l}(s)+g_{lm,k}(s)-g_{kl,m}% (s)), \end{equation} for metric $g_{ij}(s,x)\equiv g_{ij}(s)$. Let $x^{j}(0,t)$ be the base geodesic, and define the lifted Jacobi field along the base geodesic by \cite{Nielsen 4}% \begin{equation} J^{j}(t)=\frac{\partial}{\partial s}x^{j}(s,t)_{|s=0}, \end{equation} describing how the base geodesic changes as the parameter $s$ is varied.
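As a concrete numerical illustration of this definition (an assumed toy example on the round two-sphere, not part of the $SU(2^{n})$ construction), one can finite-difference a one-parameter family of great circles in $s$ to obtain the lifted Jacobi field; along the base geodesic its magnitude is the familiar geodesic-deviation factor $\sin t$:

```python
import numpy as np

# One-parameter family of great circles on the unit two-sphere, embedded in R^3:
#   x(s, t) = cos(t) e1 + sin(t) (cos(s) e2 + sin(s) e3),
# i.e. geodesics through e1 whose initial direction is rotated by the angle s.
# The lifted Jacobi field along the base geodesic (s = 0) is
#   J^j(t) = d x^j(s, t) / d s |_{s=0},
# approximated below by a central difference; its magnitude should equal sin(t).

def x(s, t):
    return np.array([np.cos(t), np.sin(t) * np.cos(s), np.sin(t) * np.sin(s)])

ds = 1.0e-6
for t in np.linspace(0.1, 3.0, 8):
    J = (x(ds, t) - x(-ds, t)) / (2 * ds)
    assert abs(np.linalg.norm(J) - np.sin(t)) < 1e-8
print("|J(t)| = sin(t) along the base geodesic, as geodesic deviation predicts")
```

The same finite-difference construction applies in any coordinate patch; the sphere is chosen only because the answer is known in closed form.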
Using a Taylor series expansion, one has for small $\Delta s$ in the neighborhood of the base geodesic,% \begin{equation} x^{j}(\Delta s,t)=x^{j}(0,t)+\Delta sJ^{j}(t)+O(\Delta s^{2}). \end{equation} Here $x^{j}(\Delta s,t)$ satisfies the geodesic equation with the metric $g_{ij}(\Delta s)$. Operating on the geodesic equation, Eq. (11.2) with $\partial_{s}\equiv\frac{\partial}{\partial s}$ and substituting Eqs. (11.4) and (11.5), one obtains for $\Delta s\rightarrow0,$% \begin{align} 0 & =\frac{\partial^{2}}{\partial t^{2}}\underset{\Delta s\rightarrow 0}{\text{Lim}}\frac{\Delta sJ^{j}(t)}{\Delta s}+\ \Gamma_{kl,m}^{j}% (s)_{|s=0}\underset{\Delta s\rightarrow0}{\text{Lim}}\frac{\Delta sJ^{m}% (t)}{\Delta s}\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}+\partial_{s}\Gamma_{kl}^{j}(s)_{|s=0}\frac{\partial x^{k}}{\partial t}% \frac{\partial x^{l}}{\partial t}\nonumber\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\Gamma_{kl}^{j}(0)\left\{ \frac{\partial}{\partial t}\left( \underset{\Delta s\rightarrow0}{\text{Lim}% }\frac{\Delta sJ^{k}(t)}{\Delta s}\right) \frac{\partial x^{l}}{\partial t}+\frac{\partial x^{k}}{\partial t}\frac{\partial}{\partial t}\underset {\Delta s\rightarrow0}{\text{Lim}}\frac{\Delta sJ^{l}(t)}{\Delta s}\right\} , \end{align} in which $g_{ij}(0)\equiv g_{ij}$ is the base metric and $\Gamma_{kl}% ^{j}(0)\equiv\Gamma_{kl}^{j}$ is the base connection. Equation (11.6) then becomes \begin{align} 0 & =\frac{\partial^{2}J^{j}(t)}{\partial t^{2}}+\ \Gamma_{kl,m}% ^{j}(s)_{|s=0}J^{m}(t)\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}% }{\partial t}\nonumber\\ & +\partial_{s}\Gamma_{kl}^{j}(s)_{|s=0}\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}+\Gamma_{kl}^{j}\left( \frac{\partial J^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}+\frac{\partial x^{k}% }{\partial t}\frac{\partial J^{l}}{\partial t}\right) . 
\end{align} Taking account of dummy indices summed over, it is clearly true that% \begin{equation} -\Gamma_{lq}^{j}\Gamma_{ik}^{q}\frac{\partial x^{i}}{\partial t}\frac{\partial x^{l}}{\partial t}J^{k}+\Gamma_{kp}^{j}\Gamma_{mn}^{p}\frac{\partial x^{k}% }{\partial t}\frac{\partial x^{m}}{\partial t}J^{n}=0. \end{equation} One also has \begin{equation} -\Gamma_{ik,l}^{j}\frac{\partial x^{i}}{\partial t}\frac{\partial x^{l}% }{\partial t}J^{k}+\Gamma_{kp,m}^{j}\frac{\partial x^{m}}{\partial t}% \frac{\partial x^{k}}{\partial t}J^{p}=0. \end{equation} Also, using the geodesic equation, Eq. (11.2), one has% \begin{equation} \Gamma_{kp}^{j}\frac{\partial^{2}x^{k}}{\partial t^{2}}J^{p}=-\Gamma_{kp}% ^{j}\Gamma_{iq}^{k}\frac{\partial x^{i}}{\partial t}\frac{\partial x^{q}% }{\partial t}J^{p}, \end{equation} or renaming dummy indices on the right hand side, it follows that \begin{equation} \Gamma_{kp}^{j}\frac{\partial^{2}x^{k}}{\partial t^{2}}J^{p}+\Gamma_{qk}% ^{j}\Gamma_{il}^{q}\frac{\partial x^{i}}{\partial t}\frac{\partial x^{l}% }{\partial t}J^{k}=0. \end{equation} Next adding Eqs. 
(11.7)-(11.9) and (11.11), one obtains% \begin{align} 0 & =\frac{\partial^{2}J^{j}(t)}{\partial t^{2}}+\ \Gamma_{kl,m}^{j}% J^{m}(t)\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}\nonumber\\ & +\partial_{s}\Gamma_{kl}^{j}(s)_{|s=0}\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}+\Gamma_{kl}^{j}\left( \frac{\partial J^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}+\frac{\partial x^{k}% }{\partial t}\frac{\partial J^{l}}{\partial t}\right) \nonumber\\ & -\Gamma_{lq}^{j}\Gamma_{ik}^{q}\frac{\partial x^{i}}{\partial t}% \frac{\partial x^{l}}{\partial t}J^{k}+\Gamma_{kp}^{j}\Gamma_{mn}^{p}% \frac{\partial x^{k}}{\partial t}\frac{\partial x^{m}}{\partial t}% J^{n}\nonumber\\ & -\Gamma_{ik,l}^{j}\frac{\partial x^{i}}{\partial t}\frac{\partial x^{l}% }{\partial t}J^{k}+\Gamma_{kp,m}^{j}\frac{\partial x^{m}}{\partial t}% \frac{\partial x^{k}}{\partial t}J^{p}+\Gamma_{kp}^{j}\frac{\partial^{2}x^{k}% }{\partial t^{2}}J^{p}+\Gamma_{qk}^{j}\Gamma_{il}^{q}\frac{\partial x^{i}% }{\partial t}\frac{\partial x^{l}}{\partial t}J^{k}, \end{align} or equivalently,% \begin{align} \frac{\partial^{2}J^{j}(t)}{\partial t^{2}} & =-\ \Gamma_{kl,m}^{j}% \frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}J^{m}% +\Gamma_{lq}^{j}\Gamma_{ik}^{q}\frac{\partial x^{i}}{\partial t}\frac{\partial x^{l}}{\partial t}J^{k}\nonumber\\ & -\Gamma_{kp}^{j}\Gamma_{mn}^{p}\frac{\partial x^{k}}{\partial t}% \frac{\partial x^{m}}{\partial t}J^{n}-\Gamma_{qk}^{j}\Gamma_{il}^{q}% \frac{\partial x^{i}}{\partial t}\frac{\partial x^{l}}{\partial t}J^{k}% +\Gamma_{ik,l}^{j}\frac{\partial x^{i}}{\partial t}\frac{\partial x^{l}% }{\partial t}J^{k}-\Gamma_{kp}^{j}\frac{\partial^{2}x^{k}}{\partial t^{2}% }J^{p}\nonumber\\ & -\Gamma_{kl}^{j}\left( \frac{\partial J^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}+\frac{\partial x^{k}}{\partial t}\frac{\partial J^{l}% }{\partial t}\right) -\partial_{s}\Gamma_{kl}^{j}(s)_{|s=0}\frac{\partial x^{k}}{\partial
t}\frac{\partial x^{l}}{\partial t}-\Gamma_{kp,m}^{j}% \frac{\partial x^{m}}{\partial t}\frac{\partial x^{k}}{\partial t}J^{p}. \end{align} Rearranging terms, then% \begin{align} \frac{\partial^{2}J^{j}(t)}{\partial t^{2}} & =\Gamma_{ik,l}^{j}% \frac{\partial x^{i}}{\partial t}\frac{\partial x^{l}}{\partial t}% J^{k}-\ \Gamma_{kl,m}^{j}\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}J^{m}+\Gamma_{lq}^{j}\Gamma_{ik}^{q}\frac{\partial x^{i}% }{\partial t}\frac{\partial x^{l}}{\partial t}J^{k}\nonumber\\ & -\Gamma_{kp}^{j}\Gamma_{mn}^{p}\frac{\partial x^{k}}{\partial t}% \frac{\partial x^{m}}{\partial t}J^{n}-\Gamma_{kp,m}^{j}\frac{\partial x^{m}% }{\partial t}\frac{\partial x^{k}}{\partial t}J^{p}-\Gamma_{kp}^{j}% \frac{\partial^{2}x^{k}}{\partial t^{2}}J^{p}\nonumber\\ & -\Gamma_{kl}^{j}\frac{\partial x^{k}}{\partial t}\frac{\partial J^{l}% }{\partial t}-\Gamma_{kl}^{j}\frac{\partial x^{l}}{\partial t}\frac{\partial J^{k}}{\partial t}\nonumber\\ & -\Gamma_{qk}^{j}\Gamma_{il}^{q}\frac{\partial x^{i}}{\partial t}% \frac{\partial x^{l}}{\partial t}J^{k}-\partial_{s}\Gamma_{kl}^{j}% (s)_{|s=0}\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}. \end{align} Noting that \begin{equation} \Gamma_{qp}^{j}=\Gamma_{pq}^{j}, \end{equation} and renaming dummy indices, Eq. 
(11.14) becomes% \begin{align} \frac{\partial^{2}J^{j}}{\partial t^{2}} & =\left( \Gamma_{ik,l}% ^{j}-\ \Gamma_{il,k}^{j}+\Gamma_{lq}^{j}\Gamma_{ik}^{q}-\Gamma_{kp}^{j}% \Gamma_{li}^{p}\right) \frac{\partial x^{i}}{\partial t}\frac{\partial x^{l}% }{\partial t}J^{k}\nonumber\\ & -\Gamma_{kp,m}^{j}\frac{\partial x^{m}}{\partial t}\frac{\partial x^{k}% }{\partial t}J^{p}-\Gamma_{kp}^{j}\frac{\partial^{2}x^{k}}{\partial t^{2}% }J^{p}-\Gamma_{kl}^{j}\frac{\partial x^{k}}{\partial t}\frac{\partial J^{l}% }{\partial t}\nonumber\\ & -\Gamma_{pk}^{j}\frac{\partial x^{k}}{\partial t}\left( \frac{\partial J^{p}}{\partial t}+\Gamma_{mn}^{p}\frac{\partial x^{m}}{\partial t}% J^{n}\right) -\partial_{s}\Gamma_{kl}^{j}(s)_{|s=0}\frac{\partial x^{k}% }{\partial t}\frac{\partial x^{l}}{\partial t}. \end{align} Next, using the expression for the covariant derivative, one has% \begin{align} \frac{D^{2}J^{j}}{Dt^{2}} & =\frac{\partial}{\partial t}\left( \frac {DJ^{j}}{Dt}\right) +\Gamma_{kp}^{j}\frac{\partial x^{k}}{\partial t}% \frac{DJ^{p}}{Dt}\nonumber\\ & =\frac{\partial}{\partial t}\left( \frac{\partial J^{j}}{\partial t}+\Gamma_{kp}^{j}\frac{\partial x^{k}}{\partial t}J^{p}\right) +\Gamma _{kp}^{j}\frac{\partial x^{k}}{\partial t}\frac{DJ^{p}}{Dt}, \end{align} or% \begin{align} \frac{D^{2}J^{j}}{Dt^{2}} & =\frac{\partial^{2}J^{j}}{\partial t^{2}}% +\Gamma_{kp,m}^{j}\frac{\partial x^{m}}{\partial t}\frac{\partial x^{k}% }{\partial t}J^{p}+\Gamma_{kp}^{j}\frac{\partial^{2}x^{k}}{\partial t^{2}% }J^{p}+\Gamma_{kp}^{j}\frac{\partial x^{k}}{\partial t}\frac{\partial J^{p}% }{\partial t}\nonumber\\ & +\Gamma_{kp}^{j}\frac{\partial x^{k}}{\partial t}\left( \frac{\partial J^{p}}{\partial t}+\Gamma_{mn}^{p}\frac{\partial x^{m}}{\partial t}% J^{n}\right) . \end{align} Also it is known that the Riemann curvature tensor is given by \cite{MTW} \begin{equation} R_{ikl}^{j}=\Gamma_{il,k}^{j}-\Gamma_{ik,l}^{j}+\Gamma_{kp}^{j}\Gamma_{li}% ^{p}-\Gamma_{lq}^{j}\Gamma_{ik}^{q}. 
\end{equation} Substituting Eqs. (11.16) and (11.19) in Eq. (11.18), one obtains the so-called lifted Jacobi equation \cite{Nielsen 4},% \begin{equation} \frac{D^{2}J^{j}}{Dt^{2}}+R_{ikl}^{j}\frac{\partial x^{i}}{\partial t}% \frac{\partial x^{l}}{\partial t}J^{k}+\partial_{s}\Gamma_{kl}^{j}% (s)_{|s=0}\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}=0. \end{equation} This equation is useful for investigations of the global behavior of geodesics and their extrapolation to nonvanishing values of the parameter $s$ \cite{Nielsen 4}. For $g_{ij}$ independent of $s$, one has% \begin{equation} \partial_{s}\Gamma_{kl}^{j}(s)_{|s=0}=0, \end{equation} the last term of Eq. (11.20) then vanishes, and one obtains the standard Jacobi equation for the Jacobi vector $J^{j}$ \cite{Lee},% \begin{equation} \frac{D^{2}J^{j}}{Dt^{2}}+R_{ikl}^{j}\frac{\partial x^{i}}{\partial t}% \frac{\partial x^{l}}{\partial t}J^{k}=0. \end{equation} Equation (11.22) is also known as the equation of geodesic deviation \cite{Wasserman},\cite{MTW}, measuring the local convergence or divergence of neighboring geodesics, and it is useful in the determination of geodesic conjugate points \cite{Lee}, \cite{Nielsen 4}. Next consider the factor in the last term of the lifted Jacobi equation, Eq. (11.20),% \begin{equation} L_{kl}^{j}\equiv\partial_{s}\Gamma_{kl}^{j}(s)_{|s=0}. \end{equation} Substituting Eq. (11.3) in Eq. (11.23), one has% \begin{equation} L_{kl}^{j}\equiv\left\{ \partial_{s}\left[ \frac{1}{2}g^{jm}(s)(g_{km,l}% (s)+g_{lm,k}(s)-g_{kl,m}(s))\right] \right\} _{|s=0}, \end{equation} or, carrying out the differentiation, \begin{equation} L_{kl}^{j}\equiv\frac{\partial g^{jm}(s)}{\partial s}_{|s=0}\Gamma_{mkl}% +\frac{1}{2}g^{jm}(g_{km,l}^{\prime}+g_{lm,k}^{\prime}-g_{kl,m}^{\prime}), \end{equation} in which% \begin{equation} g_{km}^{\prime}\equiv\partial_{s}g_{km}(s)_{|s=0}.
\end{equation} Next, the covariant derivative of $g_{km}^{\prime}$ is given by \cite{MTW}% \begin{equation} g_{km;l}^{\prime}=g_{km,l}^{\prime}-g_{ki}^{\prime}\Gamma_{ml}^{i}% -g_{mi}^{\prime}\Gamma_{kl}^{i}. \end{equation} Then substituting Eq. (11.27) in Eq. (11.25) and using Eq. (11.15), one obtains% \begin{align} L_{kl}^{j} & \equiv\frac{\partial g^{jm}(s)}{\partial s}_{|s=0}\Gamma _{mkl}+\frac{1}{2}g^{jm}(g_{km;l}^{\prime}+g_{ki}^{\prime}\Gamma_{ml}% ^{i}+g_{mi}^{\prime}\Gamma_{kl}^{i}\nonumber\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +g_{lm;k}% ^{\prime}+g_{li}^{\prime}\Gamma_{mk}^{i}+g_{mi}^{\prime}\Gamma_{kl}% ^{i}\nonumber\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -g_{kl;m}% ^{\prime}-g_{ki}^{\prime}\Gamma_{lm}^{i}-g_{li}^{\prime}\Gamma_{km}^{i}), \end{align} or% \begin{align} L_{kl}^{j} & \equiv\frac{1}{2}g^{jm}(g_{km;l}^{\prime}+g_{lm;k}^{\prime }-g_{kl;m}^{\prime})\nonumber\\ & +\frac{\partial g^{jm}(s)}{\partial s}_{|s=0}\Gamma_{mkl}+g^{jm}% g_{mi}^{\prime}\Gamma_{kl}^{i}. \end{align} Next, one notes that% \begin{equation} (g^{jm}g_{mi})^{\prime}=(\delta_{i}^{j})^{\prime}=0, \end{equation} and therefore% \begin{equation} g^{jm}(0)\left( \frac{\partial}{\partial s}g_{mi}(s)\right) _{|s=0}=-\left( \frac{\partial g^{jm}(s)}{\partial s}\right) _{|s=0}g_{mi}(0). \end{equation} Multiplying both sides of Eq. (11.31) by $\Gamma_{kl}^{i}$, one obtains% \begin{equation} g^{jm}g_{mi}^{\prime}\Gamma_{kl}^{i}=-\left( \frac{\partial g^{jm}% (s)}{\partial s}\right) _{|s=0}\Gamma_{mkl}, \end{equation} so that Eq. (11.29) reduces to% \begin{equation} L_{kl}^{j}\equiv\frac{1}{2}g^{jm}(g_{km;l}^{\prime}+g_{lm;k}^{\prime}% -g_{kl;m}^{\prime}). \end{equation} Finally then combining Eqs.
(11.20), (11.23) and (11.33), one obtains% \begin{equation} \frac{D^{2}J^{j}}{Dt^{2}}+R_{ikl}^{j}\frac{\partial x^{i}}{\partial t}% \frac{\partial x^{l}}{\partial t}J^{k}+\frac{1}{2}g^{jm}(g_{km;l}^{\prime }+g_{lm;k}^{\prime}-g_{kl;m}^{\prime})\frac{\partial x^{k}}{\partial t}% \frac{\partial x^{l}}{\partial t}=0. \end{equation} Next define the vector field,% \begin{equation} C^{j}\equiv\frac{1}{2}g^{jm}(g_{km;l}^{\prime}+g_{lm;k}^{\prime}% -g_{kl;m}^{\prime})\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}% }{\partial t}, \end{equation} which is independent of the Jacobi field $J^{j}.$ Equivalently, by symmetry, Eq. (11.35) can also be written as% \begin{equation} C^{j}\equiv\frac{1}{2}g^{jm}(2g_{km;l}^{\prime}-g_{kl;m}^{\prime}% )\frac{\partial x^{k}}{\partial t}\frac{\partial x^{l}}{\partial t}. \end{equation} Substituting Eq. (11.35) in Eq. (11.34), one obtains the second-order differential equation,% \begin{equation} \frac{D^{2}J^{j}}{Dt^{2}}+R_{ikl}^{j}\frac{\partial x^{i}}{\partial t}% \frac{\partial x^{l}}{\partial t}J^{k}+C^{j}=0, \end{equation} the so-called `lifted Jacobi equation' \cite{Nielsen 4}. Nielsen and Dowling used the lifted Jacobi equation, Eq. (11.37), to deform geodesics from the value $q=1$ for the penalty parameter to much larger values, and this enabled them to define a so-called geodesic derivative and to deform a geodesic as the penalty parameter is varied without changing the fixed values $U=1$ and $U=U_{f}$ of the initial and final unitary transformations corresponding to the quantum computation \cite{Nielsen 4}.
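The coordinate formulas of this section are straightforward to exercise numerically on a familiar test case. The following sketch is an illustrative assumption (the round two-sphere with metric $g=\mathrm{diag}(1,\sin^{2}\theta)$, not the $SU(2^{n})$ metric): it builds the Christoffel symbols of Eq. (11.3) by finite-differencing the metric, assembles the curvature component $R^{\theta}{}_{\phi\theta\phi}$ from the formula of Eq. (11.19), and checks it against the known value $\sin^{2}\theta$:

```python
import numpy as np

h = 1.0e-5  # finite-difference step in theta

def g(theta):
    # Round two-sphere metric in coordinates (x^0, x^1) = (theta, phi).
    return np.diag([1.0, np.sin(theta) ** 2])

def gamma(theta):
    # Gamma^j_kl = (1/2) g^{jm} (g_km,l + g_lm,k - g_kl,m), as in Eq. (11.3),
    # with the metric derivatives taken by central differences.
    ginv = np.linalg.inv(g(theta))
    d = np.zeros((2, 2, 2))                                # d[i, j, m] = g_ij,m
    d[:, :, 0] = (g(theta + h) - g(theta - h)) / (2 * h)   # g_ij,1 = 0 (no phi dependence)
    G = np.zeros((2, 2, 2))
    for j in range(2):
        for k in range(2):
            for l in range(2):
                G[j, k, l] = 0.5 * sum(
                    ginv[j, m] * (d[k, m, l] + d[l, m, k] - d[k, l, m])
                    for m in range(2))
    return G

theta = 1.1
G = gamma(theta)
dG = (gamma(theta + h) - gamma(theta - h)) / (2 * h)       # Gamma^j_kl,0
# R^j_ikl = Gamma^j_il,k - Gamma^j_ik,l + Gamma^j_kp Gamma^p_li - Gamma^j_lq Gamma^q_ik,
# evaluated for j = 0 (theta), i = 1 (phi), k = 0, l = 1; phi-derivative terms vanish.
R = dG[0, 1, 1] + G[0, 0, :] @ G[:, 1, 1] - G[0, 1, :] @ G[:, 1, 0]
assert abs(R - np.sin(theta) ** 2) < 1e-5
print("R^theta_{phi theta phi} = sin^2(theta) verified")
```

Feeding this curvature component into Eq. (11.22) reproduces the standard geodesic-deviation behavior of great circles; the same finite-difference scaffolding carries over to any $s$-dependent metric, where the extra term of Eq. (11.20) no longer vanishes.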
\section{APPENDIX A: PAULI MATRICES} The Pauli matrices are defined by \cite{NC}% \begin{equation} \ \sigma_{0}\equiv I\equiv\left[ \begin{array} [c]{cc}% 1 & 0\\ 0 & 1 \end{array} \right] ,\ \ \sigma_{1}\equiv X\equiv\left[ \begin{array} [c]{cc}% 0 & 1\\ 1 & 0 \end{array} \right] ,\ \sigma_{2}\equiv Y\equiv\left[ \begin{array} [c]{cc}% 0 & -i\\ i & 0 \end{array} \right] ,\ \ \sigma_{3}\equiv Z\equiv\left[ \begin{array} [c]{cc}% 1 & 0\\ 0 & -1 \end{array} \right] . \end{equation} They are Hermitian,% \begin{equation} \sigma_{i}=\sigma_{i}^{\dag}\ ,\ \ \ \ \ i=0,1,2,3, \end{equation} and, except for $\sigma_{0}$, they are traceless,% \begin{equation} \text{Tr}\sigma_{i}=0,\ \ \ i\neq0. \end{equation} Their products are given by% \begin{equation} \sigma_{i}^{2}=I, \end{equation} and, using the Einstein sum convention for repeated indices (both lower in this case), \begin{equation} \sigma_{i}\sigma_{j}=\delta_{ij}I+i\varepsilon_{ijk}\sigma_{k},\ \ \ i,j,k\neq0, \end{equation} expressed in terms of the totally antisymmetric Levi-Civita symbol with $\varepsilon_{123}=1.$ Quantum gates can be expressed in terms of tensor products of Pauli matrices. For example, the CNOT gate \cite{NC} can be expressed as follows:% \begin{equation} CNOT=\left[ \begin{array} [c]{cccc}% 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 \end{array} \right] =\frac{1}{2}(I\otimes I+I\otimes\sigma_{1}+\sigma_{3}\otimes I-\sigma_{3}\otimes\sigma_{1}). \end{equation} \section{APPENDIX B: SUPEROPERATOR ADJOINTS} The purpose of this Appendix is to derive Eqs. (3.12) and (3.13) for the adjoint superoperators $E_{X}^{\dag}$ and $D_{X}^{\dag}$ (at least to first order). For vectors $X,Y,Z$, consider the trace inner product defined by% \begin{equation} \left( Y,\text{ad}_{X}Z\right) \equiv\text{Tr}\left( Y^{\dag}\text{ad}% _{X}Z\right) , \end{equation} or using Eq.
(3.7), this becomes% \begin{equation} \left( Y,\text{ad}_{X}Z\right) \equiv\text{Tr}\left( Y^{\dag}[X,Z]\right) , \end{equation} or expanding the commutator, then% \begin{equation} \left( Y,\text{ad}_{X}Z\right) \equiv\text{Tr}\left( Y^{\dag}XZ-Y^{\dag }ZX\right) . \end{equation} Next using the cyclic property of the trace, Eq. (13.3) becomes% \begin{equation} \left( Y,\text{ad}_{X}Z\right) \equiv\text{Tr}\left( Y^{\dag}XZ-XY^{\dag }Z\right) , \end{equation} or equivalently, again using the trace inner product, \begin{equation} \left( Y,\text{ad}_{X}Z\right) \equiv-\text{Tr}\left( [X,Y^{\dag}]Z\right) =-\left( [X,Y^{\dag}]^{\dag},Z\right) , \end{equation} or% \begin{equation} \left( Y,\text{ad}_{X}Z\right) \equiv-\left( [Y,X^{\dag}],Z\right) =\left( [X^{\dag},Y],Z\right) . \end{equation} But by definition of the adjoint, one has% \begin{equation} \left( Y,\text{ad}_{X}Z\right) \equiv\left( \text{ad}_{X}^{\dag}Y,Z\right) , \end{equation} and comparing Eqs. (13.6) and (13.7), one obtains% \begin{equation} \text{ad}_{X}^{\dag}Y=[X^{\dag},Y]. \end{equation} Next, if $X$ is Hermitian, then $X=X^{\dag}$, and Eq. (13.8) becomes% \begin{equation} \text{ad}_{X}^{\dag}Y=[X,Y]=\text{ad}_{X}Y, \end{equation} and it follows that ad$_{X}$ is Hermitian,% \begin{equation} \text{ad}_{X}^{\dag}=\text{ad}_{X}. \end{equation} Also, Eq. (3.7) implies% \begin{equation} \text{ad}_{X}Y=[X,Y]=-[-X,Y]=-\text{ad}_{-X}Y, \end{equation} and therefore% \begin{equation} \text{ad}_{X}=-\text{ad}_{-X}. \end{equation} Next, Eqs. (3.10) and (13.10) imply% \begin{equation} E_{X}^{\dag}=I+\frac{i}{2}\text{ad}_{X}+O(X^{2}). \end{equation} Also according to Eqs. (3.10) and (13.12), one has% \begin{equation} E_{-X}=I+\frac{i}{2}\text{ad}_{X}+O(X^{2}). \end{equation} Comparing Eqs. (13.13) and (13.14), one has (at least to first order),% \begin{equation} E_{X}^{\dag}=E_{-X}. \end{equation} Also, according to Eqs.
(3.9) and (3.10), \begin{equation} D_{X}=E_{X}^{-1}=I+\frac{i}{2}\text{ad}_{X}+O(X^{2}). \end{equation} Then \begin{equation} D_{X}^{\dag}=I-\frac{i}{2}\text{ad}_{X}^{\dag}+O(X^{2}), \end{equation} and substituting Eq.\ (13.10) and using Eq. (13.12), then% \begin{equation} D_{X}^{\dag}=I+\frac{i}{2}\text{ad}_{-X}+O(X^{2}). \end{equation} Also, Eq. (13.16) implies that% \begin{equation} D_{-X}=I+\frac{i}{2}\text{ad}_{-X}+O(X^{2}). \end{equation} Comparing Eqs. (13.18) and (13.19), then (at least to first order) one has% \begin{equation} D_{X}^{\dag}=D_{-X}. \end{equation} \section{APPENDIX C: INVERSE METRIC} From Eq. (12.3) and the property of the trace of a tensor product \cite{Steeb}, it follows that for a generalized Pauli matrix $\sigma \notin\{I\otimes I\otimes...\}$, one has% \begin{equation} \ \text{Tr}(\sigma)=\text{Tr}(I\otimes..\sigma_{i}\otimes..\sigma_{j}% \otimes..)=\text{Tr}(I)..\text{Tr}(\sigma_{i})..\text{Tr}(\sigma_{j})..=0. \end{equation} Also, using Eqs. (2.7) and (3.15), one has% \begin{equation} \left\langle \sigma,\tau\right\rangle \equiv\frac{1}{2^{n}}\text{Tr}\left( \sigma G(\tau)\right) =\frac{1}{2^{n}}\left\{ \text{Tr}\left( \sigma P(\tau)\right) +q\text{Tr}\left( \sigma Q(\tau)\right) \right\} . \end{equation} Next, denoting $S_{0}$ (not to be confused with $S_{0}$ in Section 7) as the set of generalized Pauli matrices containing only tensor products of the identity, and $S_{12}$ as the set of generalized Pauli matrices containing only one and two body terms, that is% \begin{equation} S_{0}\ \equiv\{I\otimes I\otimes...\}, \end{equation} and% \begin{equation} S_{12}=\{I\otimes I\otimes...\sigma_{i}\otimes I..,...\}\cup\{I\otimes I\otimes...\sigma_{i}\otimes I..\sigma_{j}\otimes I..,...\}, \end{equation} then Eq.
(14.2) becomes \begin{equation} \left\langle \sigma,\tau\right\rangle =\frac{1}{2^{n}}\left\{ \begin{array} [c]{c}% \text{Tr}\left( IP(\tau)\right) +q\text{Tr}\left( IQ(\tau)\right) ,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sigma\in S_{0}% \ \ \ \ \ \ \ \\ \text{Tr}\left( P(\sigma)P(\tau)\right) +q\text{Tr}\left( P(\sigma )Q(\tau)\right) ,\ \ \ \ \ \ \ \ \ \ \ \sigma\in S_{12}\ \ \ \ \ \ \\ \text{Tr}\left( Q(\sigma)P(\tau)\right) +q\text{Tr}\left( Q(\sigma )Q(\tau)\right) ,\ \ \ \ \ \ \ \sigma\notin S_{0},S_{12}\ \ \ \end{array} \right. , \end{equation} and using Eqs. (12.3)-(12.5), (14.1), and the Kronecker delta $\delta _{\sigma\tau}$, then Eq. (14.5) becomes \begin{equation} \left\langle \sigma,\tau\right\rangle =\left\{ \begin{array} [c]{c}% 0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sigma\in S_{0}\ \ \ \ \ \ \ \ \ \ \\ \delta_{\sigma\tau},\ \ \ \ \ \ \ \ \ \ \ \sigma\in S_{12}\ \ \ \ \ \ \ \ \ \\ q\delta_{\sigma\tau},\ \ \ \ \ \ \ \ \ \sigma\notin S_{0}\cup S_{12}\ \ \ \end{array} \right. , \end{equation} or equivalently% \begin{equation} \left\langle \sigma,\tau\right\rangle =q_{\sigma}\delta_{\sigma\tau}, \end{equation} where% \begin{equation} q_{\sigma}\equiv\left\{ \begin{array} [c]{c}% 0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sigma\in S_{0}\ \ \ \ \ \ \ \ \ \ \\ 1,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \sigma\in S_{12}\ \ \ \ \ \ \ \ \ \\ q,\ \ \ \ \ \ \ \ \ \ \ \ \sigma\notin S_{0}\cup S_{12}\ \ \ \end{array} \right. . \end{equation} \bigskip Next, according to Eqs. (3.26), (3.28), (14.2) and (14.7), the metric at the origin is given by% \begin{equation} g_{\tau\lambda}=\frac{1}{2^{n}}\text{Tr}(\tau G(\lambda))=\left\langle \tau,\lambda\right\rangle =q_{\tau}\delta_{\tau\lambda}=q_{\lambda}% \delta_{\tau\lambda}. \end{equation} The inverse metric is given by% \begin{equation} g^{\sigma\tau}=\frac{1}{2^{n}}\text{Tr}(\sigma G^{-1}(\tau)), \end{equation} which is justified by the following. Substituting Eqs. (3.45) and (4.13) in Eq. 
(14.10), then% \begin{equation} g^{\sigma\tau}=\frac{1}{2^{n}}\left\{ \text{Tr}(\sigma P(\tau))+\frac{1}% {q}\text{Tr}(\sigma Q(\tau))\right\} , \end{equation} which, analogously to Eqs. (14.2), (14.5) and (14.6), becomes (ignoring $S_{0}$ which was assumed to be excluded)% \begin{equation} g^{\sigma\tau}=\left\{ \begin{array} [c]{c}% \delta^{\sigma\tau},\ \ \ \ \ \ \sigma\in S_{12}\ \ \ \\ \ \ \frac{1}{q}\delta^{\sigma\tau},\ \ \ \ \sigma\notin S_{0}\cup S_{12}% \end{array} \right. , \end{equation} or equivalently, \begin{equation} g^{\sigma\tau}=\frac{1}{q_{\sigma}}\delta^{\sigma\tau}. \end{equation} Equations (14.13) and (14.9) with the Einstein convention, summing over repeated upper and lower indices, imply% \begin{equation} g^{\sigma\tau}g_{\tau\lambda}=% %TCIMACRO{\dsum \limits_{\tau}}% %BeginExpansion {\displaystyle\sum\limits_{\tau}} %EndExpansion \frac{1}{q_{\sigma}}\delta^{\sigma\tau}q_{\lambda}\delta_{\tau\lambda}% =\delta_{\lambda}^{\sigma}, \end{equation} so in fact Eq. (14.10) is the valid inverse metric. Using Eq. (3.45), the inverse metric Eq. (14.10) can also be written as \cite{Nielsen 4} \begin{equation} g^{\sigma\tau}=\frac{1}{2^{n}}\text{Tr}(\sigma F(\tau)). \end{equation} \section{APPENDIX D: RAISED CHRISTOFFEL SYMBOLS} The raised form of the Christoffel symbols (Christoffel symbols of the second kind) \cite{Lee},\cite{Petersen} is obtained by substituting Eqs. (14.15) and (3.42) in% \begin{equation} \Gamma_{\sigma\tau}^{\rho}=g^{\rho\lambda}\Gamma_{\lambda\sigma\tau},% \end{equation} and summing over $\lambda$ (Einstein sum convention). Thus% \begin{equation} \Gamma_{\sigma\tau}^{\rho}=% %TCIMACRO{\dsum \limits_{\lambda}}% %BeginExpansion {\displaystyle\sum\limits_{\lambda}} %EndExpansion \frac{1}{2^{n}}\text{Tr}(\rho F(\lambda))\frac{i}{2^{n+1}}\text{Tr}\left( \lambda\left( \lbrack\sigma,G(\tau)]+[\tau,G(\sigma)]\right) \right) . \end{equation} Next one notes using Eq.
(4.13) that% \begin{equation} F(\lambda)\text{Tr}(\lambda...)=\left( P+\frac{1}{q}Q\right) (\lambda )\text{Tr}(\lambda...), \end{equation} or% \begin{equation} F(\lambda)\text{Tr}(\lambda...)=\left\{ \begin{array} [c]{c}% P(\lambda)\text{Tr}(P(\lambda)...),\ \ \ \ \ \ \ \ \ \ \ \lambda\in S_{12}\ \ \ \ \ \ \\ \frac{1}{q}Q(\lambda)\text{Tr}\left( Q(\lambda)...\right) ,\ \ \ \ \ \ \ \lambda\notin S_{0},S_{12}\ \ \ \end{array} \right. , \end{equation} or equivalently, taking account of Eq. (2.2) and using Eq. (4.13), then% \begin{equation} F(\lambda)\text{Tr}(\lambda...)=\lambda\text{Tr}((P+\frac{1}{q}Q)(\lambda )...)=\lambda\text{Tr}(F(\lambda)...). \end{equation} Then using Eq. (15.5) in Eq. (15.2), one obtains% \begin{equation} \Gamma_{\sigma\tau}^{\rho}=% %TCIMACRO{\dsum \limits_{\lambda}}% %BeginExpansion {\displaystyle\sum\limits_{\lambda}} %EndExpansion \frac{1}{2^{n}}\text{Tr}(\rho\lambda)\frac{i}{2^{n+1}}\text{Tr}\left( F(\lambda)\left( [\sigma,G(\tau)]+[\tau,G(\sigma)]\right) \right) . \end{equation} From Eqs. (12.3)-(12.5) it follows that the generalized Pauli matrices are orthogonal with respect to the trace inner product, namely,% \begin{equation} \text{Tr}(\rho\lambda)=2^{n}\delta^{\rho\lambda}, \end{equation} and substituting Eq. (15.7) in Eq. (15.6), then% \begin{equation} \Gamma_{\sigma\tau}^{\rho}=% %TCIMACRO{\dsum \limits_{\lambda}}% %BeginExpansion {\displaystyle\sum\limits_{\lambda}} %EndExpansion \frac{1}{2^{n}}2^{n}\delta^{\rho\lambda}\frac{i}{2^{n+1}}\text{Tr}\left( F(\lambda)\left( [\sigma,G(\tau)]+[\tau,G(\sigma)]\right) \right) , \end{equation} or finally,% \begin{equation} \Gamma_{\sigma\tau}^{\rho}=\frac{i}{2^{n+1}}\text{Tr}\left( F(\rho)\left( [\sigma,G(\tau)]+[\tau,G(\sigma)]\right) \right) . \end{equation} \section{APPENDIX E: AN IDENTITY} In this Appendix, the identity given by Eq. (3.49) is derived. It follows from Eq.
(12.4) and (12.5) that for some coefficients $a_{\tau\lambda\sigma}$ and $b_{\tau\lambda}$ one has% \begin{equation} \lbrack\tau,G(\lambda)]=\ %TCIMACRO{\dsum \limits_{\sigma}}% %BeginExpansion {\displaystyle\sum\limits_{\sigma}} %EndExpansion a_{\tau\lambda\sigma}\sigma+b_{\tau\lambda}I. \end{equation} Then multiplying both sides of Eq. (16.1) by $\kappa$ and taking the trace, one obtains% \begin{equation} \text{Tr}\left( \kappa\lbrack\tau,G(\lambda)]\right) =% %TCIMACRO{\dsum \limits_{\sigma}}% %BeginExpansion {\displaystyle\sum\limits_{\sigma}} %EndExpansion a_{\tau\lambda\sigma}\text{Tr}(\kappa\sigma)+b_{\tau\lambda}\text{Tr}(\kappa). \end{equation} Next substituting Eqs. (12.3) and (15.7) in Eq. (16.2), the latter becomes% \begin{equation} \text{Tr}\left( \kappa\lbrack\tau,G(\lambda)]\right) =% %TCIMACRO{\dsum \limits_{\sigma}}% %BeginExpansion {\displaystyle\sum\limits_{\sigma}} %EndExpansion a_{\tau\lambda\sigma}2^{n}\delta_{\kappa\sigma}=2^{n}a_{\tau\lambda\kappa}, \end{equation} and therefore% \begin{equation} a_{\tau\lambda\sigma}=\frac{1}{2^{n}}\text{Tr}\left( \sigma\lbrack \tau,G(\lambda)]\right) . \end{equation} Next operating on both sides of Eq. (16.1) with $F$, and using Eq. (4.13), one has% \begin{equation} F\left( [\tau,G(\lambda)]\right) =% %TCIMACRO{\dsum \limits_{\sigma}}% %BeginExpansion {\displaystyle\sum\limits_{\sigma}} %EndExpansion a_{\tau\lambda\sigma}F(\sigma), \end{equation} and substituting Eq. (16.4) in Eq. (16.5), one obtains% \begin{equation} F\left( [\tau,G(\lambda)]\right) =% %TCIMACRO{\dsum \limits_{\sigma}}% %BeginExpansion {\displaystyle\sum\limits_{\sigma}} %EndExpansion F(\sigma)\frac{1}{2^{n}}\text{Tr}\left( \sigma\lbrack\tau,G(\lambda)]\right) , \end{equation} or% \begin{equation} 2^{n}F\left( [\tau,G(\lambda)]\right) =% %TCIMACRO{\dsum \limits_{\sigma}}% %BeginExpansion {\displaystyle\sum\limits_{\sigma}} %EndExpansion F(\sigma)\text{Tr}\left( \sigma\lbrack\tau,G(\lambda)]\right) . \end{equation} Then substituting Eq. 
(15.5) in Eq. (16.7), one obtains%
\begin{equation}
2^{n}F\left( [\tau,G(\lambda)]\right) =\sum_{\sigma}\sigma\text{Tr}\left( F(\sigma)[\tau,G(\lambda)]\right) ,
\end{equation}
or equivalently,%
\begin{equation}
\sum_{\sigma}\sigma\text{Tr}\left( F(\sigma)[\tau,G(\lambda)]\right) =2^{n}F\left( [\tau,G(\lambda)]\right) ,
\end{equation}
which is Eq. (3.49).

\section{APPENDIX F: CHECKS OF THREE-QUBIT SOLUTIONS}

In this Appendix, checks are given of the solution, Eq. (7.28), to Eq. (7.16),%
\begin{equation}
Q(t)=e^{it(q^{-1}-s^{-1})S_{0}}Q_{0}e^{-it(q^{-1}-s^{-1})S_{0}},
\end{equation}
and of the solution, Eq. (7.57), to Eq. (7.15),%
\begin{equation}
T(t)=e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}}.
\end{equation}
A check that Eq. (17.1) does indeed satisfy Eq. (7.16) proceeds by first calculating%
\begin{align}
\frac{dQ}{dt} & =i(q^{-1}-s^{-1})S_{0}Q(t)-i(q^{-1}-s^{-1})Q(t)S_{0}\\
& =i(q^{-1}-s^{-1})[S_{0},Q(t)],\nonumber
\end{align}
which agrees with Eqs. (7.16) and (7.17). Proceeding to check that Eq. (17.2) satisfies Eq. (7.15), it follows from Eq. (17.2) that%
\begin{align}
\frac{dT}{dt} & =i(q^{-1}-s^{-1})S_{0}T(t)-i(q^{-1}-s^{-1})T(t)S_{0}+e^{it(q^{-1}-s^{-1})S_{0}}\nonumber\\
& \times\frac{d}{dt}\left( e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}e^{-it(1-q^{-1})(S_{0}+Q_{0})}\right) e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\
& =i(q^{-1}-s^{-1})[S_{0},T(t)]+e^{it(q^{-1}-s^{-1})S_{0}}\{i(1-q^{-1})\nonumber\\
& \times\lbrack(S_{0}+Q_{0}),e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}e^{-it(1-q^{-1})(S_{0}+Q_{0})}]\}\nonumber\\
& \times e^{-it(q^{-1}-s^{-1})S_{0}}.
\end{align}
The following commutator identity holds:%
\begin{equation}
\lbrack A,BC]=ABC-BCA=BAC-BCA+ABC-BAC=B[A,C]+[A,B]C,
\end{equation}
and using Eq.
(17.5) twice, and noting that $(S_{0}+Q_{0})$ commutes with itself, Eq. (17.4) becomes% \begin{align} \frac{dT}{dt} & =i(q^{-1}-s^{-1})[S_{0},T(t)]+e^{it(q^{-1}-s^{-1})S_{0}% }i(1-q^{-1})\nonumber\\ & \times\left\{ e^{it(1-q^{-1})(S_{0}+Q_{0})}[(S_{0}+Q_{0}),T_{0}% e^{-it(1-q^{-1})(S_{0}+Q_{0})}]\right. \nonumber\\ & +\left. [(S_{0}+Q_{0}),e^{it(1-q^{-1})(S_{0}+Q_{0})}]T_{0}e^{-it(1-q^{-1}% )(S_{0}+Q_{0})}\right\} \nonumber\\ & \times e^{-it(q^{-1}-s^{-1})S_{0}}, \end{align} or \begin{align} \frac{dT}{dt} & =i(q^{-1}-s^{-1})[S_{0},T(t)]+e^{it(q^{-1}-s^{-1})S_{0}% }i(1-q^{-1})\nonumber\\ & \times\left\{ e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}[(S_{0}+Q_{0}% ),e^{-it(1-q^{-1})(S_{0}+Q_{0})}]\right. \nonumber\\ & +\left. e^{it(1-q^{-1})(S_{0}+Q_{0})}[(S_{0}+Q_{0}),T_{0}]e^{-it(1-q^{-1}% )(S_{0}+Q_{0})}\right\} \nonumber\\ & \times e^{-it(q^{-1}-s^{-1})S_{0}}. \end{align} Equivalently then,% \begin{align} \frac{dT}{dt} & =i(q^{-1}-s^{-1})[S_{0},T(t)]\nonumber\\ & +e^{it(q^{-1}-s^{-1})S_{0}}i(1-q^{-1})e^{it(1-q^{-1})(S_{0}+Q_{0})}% [(S_{0}+Q_{0}),T_{0}]\nonumber\\ & \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}}, \end{align} or% \begin{align} \frac{dT}{dt} & =i(q^{-1}-s^{-1})[S_{0},T(t)]+i(1-q^{-1})e^{it(q^{-1}% -s^{-1})S_{0}}\nonumber\\ & \times(S_{0}+Q_{0})e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}e^{-it(1-q^{-1}% )(S_{0}+Q_{0})}\nonumber\\ & \times e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\ & -i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}% T_{0}\nonumber\\ & \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}\nonumber\\ & \times(S_{0}+Q_{0})e^{-it(q^{-1}-s^{-1})S_{0}}. 
\end{align} This becomes% \begin{align} \frac{dT}{dt} & =i(q^{-1}-s^{-1})[S_{0},T(t)]+i(1-q^{-1})e^{it(q^{-1}% -s^{-1})S_{0}}\nonumber\\ & \times Q_{0}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}e^{-it(1-q^{-1})(S_{0}% +Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\ & -i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}% T_{0}\nonumber\\ & \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}Q_{0}e^{-it(q^{-1}-s^{-1})S_{0}% }\nonumber\\ & +i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}\nonumber\\ & \times\lbrack S_{0},e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}e^{-it(1-q^{-1}% )(S_{0}+Q_{0})}]e^{-it(q^{-1}-s^{-1})S_{0}}. \end{align} Next, the right side of Eq. (7.15), using Eqs. (7.17), (7.28), and (7.57), becomes% \begin{align} & i[((1-s^{-1})S+(1-q^{-1})Q),T]\nonumber\\ & =i(1-s^{-1})[S,T]+i(1-q^{-1})[Q,T]\nonumber\\ & =i[S_{0},T]-is^{-1}[S_{0},T]+i(1-q^{-1})[Q,T]\nonumber\\ & =i[S_{0},T]-is^{-1}[S_{0},T]+i(1-q^{-1})\nonumber\\ & \times\lbrack e^{it(q^{-1}-s^{-1})S_{0}}Q_{0}e^{-it(q^{-1}-s^{-1})S_{0}% },e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}\nonumber\\ & \times T_{0}e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}% }]\nonumber\\ & =i(1-q^{-1})[S_{0},T]+i(q^{-1}-s^{-1})[S_{0},T]\nonumber\\ & +i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}Q_{0}e^{it(1-q^{-1})(S_{0}+Q_{0}% )}T_{0}\nonumber\\ & \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\ & -i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}% T_{0}\nonumber\\ & \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}Q_{0}e^{-it(q^{-1}-s^{-1})S_{0}% }\nonumber\\ & =i(q^{-1}-s^{-1})[S_{0},T]\nonumber\\ & +i(1-q^{-1})[S_{0},e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0}% )}T_{0}\nonumber\\ & \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}% }]\nonumber\\ & +i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}Q_{0}e^{it(1-q^{-1})(S_{0}+Q_{0}% )}T_{0}\nonumber\\ & \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\ & 
-i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}\nonumber\\
& \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}Q_{0}e^{-it(q^{-1}-s^{-1})S_{0}}.
\end{align}
Equivalently then,%
\begin{align}
& i[((1-s^{-1})S+(1-q^{-1})Q),T]\nonumber\\
& =i(q^{-1}-s^{-1})[S_{0},T]\nonumber\\
& +i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}Q_{0}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}\nonumber\\
& \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\
& -i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}\nonumber\\
& \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}Q_{0}e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\
& +i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}\nonumber\\
& \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}[S_{0},e^{-it(q^{-1}-s^{-1})S_{0}}]\nonumber\\
& +i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}[S_{0},e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}\nonumber\\
& \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}]e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\
& =i(q^{-1}-s^{-1})[S_{0},T]\nonumber\\
& +i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}Q_{0}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}\nonumber\\
& \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\
& -i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}\nonumber\\
& \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}Q_{0}e^{-it(q^{-1}-s^{-1})S_{0}}\nonumber\\
& +i(1-q^{-1})e^{it(q^{-1}-s^{-1})S_{0}}[S_{0},e^{it(1-q^{-1})(S_{0}+Q_{0})}T_{0}\nonumber\\
& \times e^{-it(1-q^{-1})(S_{0}+Q_{0})}]e^{-it(q^{-1}-s^{-1})S_{0}}.
\end{align}
In passing to the final equality, the term containing $[S_{0},e^{-it(q^{-1}-s^{-1})S_{0}}]$ was dropped, since $S_{0}$ commutes with any exponential of itself. Comparing Eqs. (17.10) and (17.12), one concludes that the left and right sides of Eq. (7.15) agree, so that Eq. (17.2) does indeed satisfy Eq. (7.15).

\begin{thebibliography}{99}
\bibitem {NC}M. A. Nielsen and I. L. Chuang, \textit{Quantum Computation and Quantum Information} (Cambridge University Press, 2000).
\bibitem {Mont}R. Montgomery, \textit{A Tour of Sub-Riemannian Geometries, Their Geodesics and Applications}, Vol.
91 of \textit{Mathematical Surveys and Monographs} (American Mathematical Society, Providence, Rhode Island, 2002).
\bibitem {Khaneja}N. Khaneja, S. J. Glaser, and R. Brockett, "Sub-Riemannian Geometry and Time Optimal Control of Three Spin Systems: Quantum Gates and Coherence Transfer," Phys. Rev. A \textbf{65}, 032301(1-11) (2002).
\bibitem {Moseley 0}C. G. Moseley, "Geometric control of quantum spin systems," in \textit{Quantum Information and Computation II}, edited by E. Donkor, A. R. Pirich, and H. E. Brandt, Proc. SPIE Vol. 5436, pp. 319-323, SPIE, Bellingham, WA (2004).
\bibitem {Nielsen 1}M. A. Nielsen, "A Geometric Approach to Quantum Circuit Lower Bounds," Quantum Information and Computation \textbf{6}, 213-262 (2006).
\bibitem {Nielsen 2}M. A. Nielsen, M. R. Dowling, M. Gu, and A. C. Doherty, "Optimal Control, Geometry, and Quantum Computing," Phys. Rev. A \textbf{73}, 062323(1-7) (2006).
\bibitem {Nielsen 3}M. A. Nielsen, M. R. Dowling, M. Gu, and A. C. Doherty, "Quantum Computation as Geometry," Science \textbf{311}, 1133-1135 (2006).
\bibitem {Nielsen 4}M. R. Dowling and M. A. Nielsen, "The Geometry of Quantum Computation," Quantum Information and Computation \textbf{8}, 0861-0899 (2008).
\bibitem {Moseley}J. N. Clelland and C. G. Moseley, "Sub-Finsler Geometry in Dimension Three," Differential Geometry and its Applications \textbf{24}, 628-651 (2006).
\bibitem {Hou}Bo-Yu Hou and Bo-Yuan Hou, \textit{Differential Geometry for Physicists} (World Scientific, Singapore, 1997).
\bibitem {Sagle}A. A. Sagle and R. E. Walde, \textit{Introduction to Lie Groups and Lie Algebras}, Academic Press, New York (1973).
\bibitem {Conlon}L. Conlon, \textit{Differentiable Manifolds}, 2nd Edition, Birkh\"{a}user, Boston (2001).
\bibitem {Lee}J. M. Lee, \textit{Riemannian Manifolds: An Introduction to Curvature}, Springer, New York (1997).
\bibitem {Berger}M. Berger, \textit{A Panoramic View of Riemannian Geometry}, Springer-Verlag, Berlin (2003).
\bibitem {Hall}B. C.
Hall, \textit{Lie Groups, Lie Algebras, and Representations}, Springer, New York (2004).
\bibitem {Farout}J. Faraut, \textit{Analysis on Lie Groups}, Cambridge University Press, Cambridge, UK (2008).
\bibitem {Postnikov}M. M. Postnikov, \textit{Geometry VI: Riemannian Geometry, Encyclopedia of Mathematical Sciences}, vol. 91, Springer-Verlag, Berlin (2001).
\bibitem {Petersen}P. Petersen, \textit{Riemannian Geometry}, 2nd Edition, Springer, New York (2006).
\bibitem {Jost}J. Jost, \textit{Riemannian Geometry and Geometric Analysis}, 5th Edition, Springer-Verlag, Berlin (2008).
\bibitem {Wasserman}R. Wasserman, \textit{Tensors and Manifolds}, 2nd Edition, Oxford University Press, Oxford, UK (2004).
\bibitem {Naimark}M. A. Naimark and A. I. Stern, \textit{Theory of Group Representations}, Springer-Verlag, New York (1982).
\bibitem {Sepen}M. R. Sepanski, \textit{Compact Lie Groups}, Springer (2007).
\bibitem {Pfeifer}W. Pfeifer, \textit{The Lie Algebras su(N)}, Birkh\"{a}user, Basel (2003).
\bibitem {Stillwell}J. Stillwell, \textit{Naive Lie Theory}, Springer, New York (2008).
\bibitem {Cornwell Intro}J. F. Cornwell, \textit{Group Theory in Physics: An Introduction}, Academic Press, San Diego, CA (1997).
\bibitem {Cornwell}J. F. Cornwell, \textit{Group Theory in Physics}, Vol. 2, Academic Press, London (1984).
\bibitem {Steeb}W. Steeb and Y. Hardy, \textit{Problems and Solutions in Quantum Computing and Quantum Information}, 2nd Edition, World Scientific, New Jersey (2006).
\bibitem {Weigert}S. Weigert, "Baker-Campbell-Hausdorff Relation for Special Unitary Groups SU(N)," J. Phys. A: Math. Gen. \textbf{30}, 8739-8749 (1997).
\bibitem {Reut}C. Reutenauer, \textit{Free Lie Algebras}, Clarendon Press, Oxford (1993).
\bibitem {Dynkin}E. Dynkin, "Calculation of the coefficients in the Campbell-Hausdorff formula," Dokl. Akad. Nauk \textbf{57}, 323-326 (1947).
\bibitem {Baker}H. F.
Baker, "Alternants and continuous groups," Proc. London Math. Soc. (2) \textbf{3}, 24-47 (1905).
\bibitem {Campbell 1}J. E. Campbell, "On a law of combination of operators bearing on the theory of continuous transformation groups," Proc. London Math. Soc. (1) \textbf{28}, 381-390 (1897).
\bibitem {Campbell}J. E. Campbell, "On a law of combination of operators," Proc. London Math. Soc. (1) \textbf{29}, 14-32 (1898).
\bibitem {Hausdorff}F. Hausdorff, "Die symbolische Exponentialformel in der Gruppentheorie," Leipziger Berichte \textbf{58}, 19-48 (1906).
\bibitem {MTW}C. W. Misner, K. S. Thorne, and J. A. Wheeler, \textit{Gravitation}, pp. 223, 224, 310, W. H. Freeman and Company, New York (1973).
\bibitem {Lax 1}P. D. Lax, "Integrals of Nonlinear Equations of Evolution and Solitary Waves," Communications on Pure and Applied Math. \textbf{21}, 467-490 (1968).
\bibitem {Abraham}R. Abraham and J. E. Marsden, \textit{Foundations of Mechanics}, 2nd Edition, AMS Chelsea Publishing, American Mathematical Society, Providence, Rhode Island (2008).
\bibitem {Lax 1b}D. Zwillinger, \textit{Handbook of Differential Equations}, 3rd Edition, Academic Press, San Diego, CA (1998).
\bibitem {Lax 2}R. S. Kaushal and D. Parashar, \textit{Advanced Methods of Mathematical Physics}, CRC Press, Boca Raton, FL (2000).
\bibitem {Lax 2b}T. Miwa, M. Jimbo, and E. Date, \textit{Solitons}, Cambridge University Press, Cambridge, UK (2000).
\bibitem {Lax 3}L. Debnath, \textit{Nonlinear Partial Differential Equations}, Birkh\"{a}user, Boston, MA (1997).
\bibitem {Lax 4}E. Zeidler, \textit{Nonlinear Functional Analysis and its Applications IV: Applications to Mathematical Physics}, Springer-Verlag, New York, NY (1997).
\bibitem {Milnor}J. Milnor, "Curvatures of Left Invariant Metrics on Lie Groups," Advances in Mathematics \textbf{21}, 293-329 (1976).
\bibitem {Arnold 1}V. I.
Arnold, \textit{Mathematical Methods of Classical Mechanics}, 2nd Edition, Springer-Verlag, New York (1989). \bibitem {Arnold 2}V. I. Arnold and B. A. Khesin, \textit{Topological Methods in Hydrodynamics}, Springer, New York (1999). \end{thebibliography} \end{document}