TOPICS IN QUANTUM INFORMATION -- CONTINUOUS QUANTUM MEASUREMENTS AND QUANTUM WALKS

by Martin Varbanov

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (PHYSICS)

December 2009

Copyright 2009 Martin Varbanov

Dedication

To my parents, Mladenka Varbanova and Valentin Varbanov, my brother, Lyubomir Varbanov, and my grandmother, Veska Mineva.

Acknowledgements

I owe my deepest gratitude to my advisor, Todd Brun, for being an amazing advisor and mentor. I am thankful for his help with my first steps in the field of quantum information science. His continual guidance and thoughtful insights helped me tremendously with my research over the years. I thank him for always finding time for me and for his patience and support for the direction of my research.

I am also thankful to Igor Devetak for his thoughtful introduction to quantum communication theory. As a part of our research group he always sparked exciting discussions leading to great insights into quantum error correction and quantum communication protocols.

Special thanks go to Stephan Haas, my graduate advisor, for his constant help and guidance and his always cheerful attitude. His advice and support throughout the years are indispensable.

I also thank Betty Byers, Beverly Ferguson and Mary Beth Hicks, from the Physics department, and Milly Montenegro, Mayumi Thrasher and Gerrelyn Ramos, from the Engineering department, for their help with the multitude of administrative issues and the pleasant working environment they created. I also thank Gökhan Esirgen for making teaching the physics labs a wonderful and fun experience.

I especially want to thank all the graduate students in Todd Brun's group. I thank my collaborator Hari Krovi for his insights and for enhancing my understanding in the area of quantum walks.
My gratitude to Ognyan Oreshkov, for being an amazing friend and for the many hours of discussions in the field of quantum information and beyond, cannot be expressed by mere words. I am grateful for the engaging and fun discussions I had with Shesha Raghunathan. Bilal Shaw, Min-Hsiu Hsieh and Zhicheng Luo were an amazing group of people to share so many discussions and group meetings with. Big thanks to all my friends who were always there for me when I needed them: Ogy, Veselin, Nikolay, Iskra, Denitza, Elena, Justin, Renaldo, Stephanie and Venci.

My gratitude goes to my family for their love and support. I thank my mom and dad, Mladenka and Valentin, my grandma Veska, and my brother Lyubo and his wife Tanya.

Table of Contents

Dedication
Acknowledgements
List of Figures
Abstract
Chapter 1: Introduction
  1.1 Quantum mechanics, quantum information and quantum computation
  1.1.1 The laws of quantum mechanics
  1.1.2 Quantum computers and models for quantum computation
  1.2 Outline
Chapter 2: Decomposing generalized measurements into continuous stochastic processes
  2.1 Preliminaries
  2.2 Repeated weak measurements for projective measurements
  2.3 Continuous process for projective measurements
  2.4 Continuous processes for generalized measurements
  2.5 Discussion
Chapter 3: Hitting time for the continuous quantum walk
  3.1 Preliminaries
  3.2 Hitting time definition for the continuous quantum walk
  3.3 Conditions for existence of infinite hitting times
  3.4 Examples
  3.4.1 Example 1
  3.4.2 Example 2
  3.4.3 Example 3
  3.4.4 Example 4
  3.4.5 Examples 5 and 6
  3.5 Infinite hitting times for graphs with non-connected complementary graph
  3.6 Discussion
Chapter 4: Quantum scattering theory on graphs with tails
  4.1 Preliminaries
  4.2 Energy eigenstates
  4.2.1 Propagating states
  4.2.2 Bound states
  4.3 Orthogonality of propagating and bound states
  4.4 Properties of the S-matrix
  4.5 Vertex state basis and energy eigenvalue basis
  4.6 Cutting a Tail
  4.6.1 Leaving a Stump
  4.6.2 Cutting k Tails
  4.7 Attaching a Tail
  4.8 Connecting two tails to form an edge
  4.8.1 Composition of unitary gates
  4.9 Unitarity preservation in cutting and connecting tails
  4.10 Discussion
Chapter 5: Conclusion
Bibliography

List of Figures

1. Two families of geodesics on $\Delta_3$ with the metric $g_{ij}(x)$. Each blue geodesic is orthogonal, at the point of intersection, to the central red one.
2. The scalar curvature on $\Delta_3$ with the metric $g_{ij}(x)$; $\ln(R)$ is plotted on the $z$ axis.
3. Graph examples (with assigned vertex labels).

Abstract

The topics presented in this thesis have continuous quantum measurements and quantum walks at their core. The first topic centers on simulating a generalized measurement with a finite number of outcomes using a continuous measurement process with a continuous measurement history. We provide conditions under which it is possible to prove that such a process exists and that at long times it faithfully simulates the generalized measurement. We give the stochastic equations governing the feedback between the measurement history and the instantaneous weak measurements. The second topic examines a definition of "hitting time" for continuous-time quantum walks. A crucial component of such a definition is the use of weak measurements. Several methods using alternative but equivalent definitions of weak, continuous measurements are employed to derive a formula for the hitting time. The behavior of the hitting time thus defined is then studied, both in general and for specific graphs. The last topic explores continuous-time quantum walks on graphs with infinite tails. The equations for propagating and bound states are derived and the S-matrix is defined. Their properties, such as the orthogonality of the propagating and bound states and the unitarity of the S-matrix, are discussed. Formulas for the S-matrix under the operations of cutting, adding or connecting tails are derived.
Chapter 1: Introduction

1.1 Quantum mechanics, quantum information and quantum computation

The foundations of quantum information theory and quantum computation were laid in the early decades of the twentieth century by the development of quantum mechanics by Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Albert Einstein and, later on, by John von Neumann and Richard Feynman. The pioneers in the field were surprised by the counterintuitive nature of the new theory. New theoretical results following from it seemed contradictory, like the famous thought experiment, the EPR paradox, that questioned the validity of the new theory [27]. The unsettling feeling with which it leaves anyone who starts delving into it has been a constant. Despite that, quantum mechanics has proven many a time that it describes the microscopic world amazingly well. Its predictions have been invariably confirmed by experiment. The now classical results explaining the photoelectric effect, black-body radiation, and the hydrogen atom and its spectrum, of which every physics student is well aware, are just a few examples of the successes of the theory. One of the most accurately measured physical quantities, the gyromagnetic ratio of the electron, has been predicted using the rules of quantum electrodynamics. Fields such as condensed matter physics, solid-state physics, atomic physics, molecular physics, computational chemistry, quantum chemistry, and nuclear and particle physics have their foundations in quantum mechanics.

Thus it is no surprise that, despite its counterintuitive nature, quantum mechanics has many applications. Surprisingly, one of the more recently developed scientific fields which has quantum mechanics at its roots has turned its paradoxical character on its head. It was well known that the need for faster classical computers, driving further miniaturization, would hit a barrier when their components become so small that the laws governing them would no longer apply.
In that sense the laws of quantum mechanics were seen as an impediment to progress in computer technology. That prompted physicists to ponder whether one could imagine a scenario where the reverse is possible. In 1982 Richard Feynman suggested that controlled quantum systems could be useful not only for simulating other quantum systems but also for computation. In 1985 David Deutsch gave the first description of a quantum computer [24]. The first indication that quantum computers can be more powerful than their classical counterparts appeared in 1992, when David Deutsch and Richard Jozsa proposed a quantum algorithm more efficient than any classical algorithm [25]. Though the algorithm solved an artificial problem with no practical application, it proved that quantum computers could outperform classical ones for certain problems. A year later Peter Shor published a paper proposing an algorithm that efficiently solves a nontrivial problem: the factoring of large integers [76]. In 1996, Lov Grover developed another quantum algorithm that solves the practical problem of searching an unordered database faster than any classical algorithm [40]. These first algorithms piqued interest in the possibilities for computation that the rules of quantum mechanics afford and established the fields of quantum information theory and quantum computation.

Along with the discovery of new quantum algorithms, the paradoxical nature of the rules of quantum mechanics proved beneficial in a different kind of scenario. When quantum systems are used for communication, certain advantages emerge. With the introduction of Peter Shor's factoring algorithm, the security of the RSA cryptosystem that is currently used for secure transactions over the internet has been compromised, at least in theory. But nearly a decade before the invention of the aforementioned algorithm, in 1984, a quantum protocol for secure key distribution was developed by Charles Bennett and Gilles Brassard [10].
Quantum teleportation is another famous protocol that uses the paradoxical EPR pairs to its advantage [11]. Since then a multitude of quantum communication protocols have been devised that explore the possibilities and interchangeability of the new resources quantum mechanics gives us, quantum channels and quantum entanglement, together with the old classical channels, establishing the field of quantum communication.

1.1.1 The laws of quantum mechanics

The mathematical framework of quantum mechanics was developed from the beginning of the twentieth century until the 1930s, with the first complete formulation given by John von Neumann in his book Mathematical Foundations of Quantum Mechanics in 1932 [81].

Each quantum system is assigned a Hilbert space $\mathcal{H}$, a separable complex linear space with an inner product. The inner product of states $|\psi\rangle$ and $|\phi\rangle$ is a complex function which is antilinear in the first and linear in the second argument, and is denoted by $(\psi,\phi)$, or by $\langle\psi|\phi\rangle$ in Dirac notation. The inner product naturally leads to a unique function on operators acting on the Hilbert space called the trace. It is uniquely determined by its three characteristic properties: linearity, the cyclic property, and normalization:

$$\mathrm{tr}(\alpha A + \beta B) = \alpha\,\mathrm{tr}(A) + \beta\,\mathrm{tr}(B), \quad \mathrm{tr}(AB) = \mathrm{tr}(BA), \quad \mathrm{tr}(|\psi\rangle\langle\phi|) = \langle\phi|\psi\rangle. \qquad (1)$$

The state of the system is described either by a vector of unit length in the Hilbert space, in which case the state is called pure, or by a positive linear operator $\rho$ with trace 1 acting on $\mathcal{H}$, the density matrix of the system. If the density matrix cannot be represented as

$$\rho = |\psi\rangle\langle\psi| \qquad (2)$$

for any unit vector $|\psi\rangle \in \mathcal{H}$, the state of the system is mixed. Naturally, every pure state can be represented by a density matrix via the above formula. A composite quantum system consisting of two or more quantum subsystems is associated with the Hilbert space which is the tensor product of the Hilbert spaces of its subsystems.
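The trace property in (1) and the pure-state density matrix (2) are easy to check numerically. The following sketch (added for illustration with randomly chosen qubit states; it is not part of the thesis) verifies $\mathrm{tr}(|\psi\rangle\langle\phi|) = \langle\phi|\psi\rangle$ and the defining properties of $\rho = |\psi\rangle\langle\psi|$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random normalized qubit states |psi>, |phi>
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)
phi /= np.linalg.norm(phi)

# Normalization property in (1): tr(|psi><phi|) = <phi|psi>
outer = np.outer(psi, phi.conj())                     # |psi><phi|
assert np.isclose(np.trace(outer), np.vdot(phi, psi))  # vdot conjugates its first argument

# Pure-state density matrix (2): trace 1 and rho^2 = rho (rank-1 projector)
rho = np.outer(psi, psi.conj())
assert np.isclose(np.trace(rho), 1.0)
assert np.allclose(rho @ rho, rho)
```

Note that `np.vdot` conjugates its first argument, which matches the antilinear-in-the-first-slot convention used above.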
The evolution of a closed quantum system is given by a unitary transformation of its state. Thus for two states, $|\psi_1\rangle$ and $|\psi_2\rangle$, that describe the system at two different times, there exists a unitary operator $U$ (a unitary operator satisfies $(U\psi, U\phi) = (\psi,\phi)$ for any two $|\psi\rangle, |\phi\rangle \in \mathcal{H}$) such that

$$|\psi_2\rangle = U|\psi_1\rangle. \qquad (3)$$

Any unitary operator $U$ can be represented as $e^{i\tilde{H}}$ where $\tilde{H}$ is a Hermitian operator. A Hermitian operator $H$ is by definition equal to its Hermitian conjugate $H^\dagger$, defined through the equation $(H^\dagger\psi, \phi) = (\psi, H\phi)$ for any two $|\psi\rangle, |\phi\rangle \in \mathcal{H}$. If the time that elapsed between the moment when the system was in state $|\psi_1\rangle$ and the moment when it was in the state $|\psi_2\rangle$ is $\Delta t$, then the operator

$$H = \frac{\hbar\tilde{H}}{\Delta t} \qquad (4)$$

is called the Hamiltonian of the system. An equivalent definition of the time evolution of the state $|\psi\rangle$ of a quantum system can be given by first introducing its Hamiltonian $H$, a Hermitian operator acting on $\mathcal{H}$, and the Schrödinger equation:

$$i\hbar\frac{d|\psi\rangle}{dt} = H|\psi\rangle. \qquad (5)$$

In the above, $\hbar$ is Planck's constant, which is usually set equal to 1 by adjusting the units of length, time and mass appropriately, a convention that we will adopt as well.

When a closed quantum system interacts with an external system that is classical, e.g. measurement equipment, the system's evolution is no longer described by unitary evolution. We say that a measurement has been performed on the system. It is described by a set of operators $M_m$ on the Hilbert space $\mathcal{H}$, where the index $m$ stands for the different outcomes of the measurement.
If the state of the system before the measurement is $|\psi\rangle$, its state after the measurement is given by

$$|\psi_m\rangle = \frac{M_m|\psi\rangle}{\sqrt{\langle\psi|M_m^\dagger M_m|\psi\rangle}}. \qquad (6)$$

If the state of the system is initially given by a density matrix $\rho$, the following formula gives the final state:

$$\rho_m = \frac{M_m \rho M_m^\dagger}{\mathrm{tr}(M_m \rho M_m^\dagger)}. \qquad (7)$$

The outcome of the measurement is random, and the probability to obtain outcome $m$ for the above two cases is

$$p_m = \langle\psi|M_m^\dagger M_m|\psi\rangle \qquad (8)$$

or

$$p_m = \mathrm{tr}(M_m \rho M_m^\dagger). \qquad (9)$$

From the requirement that the probabilities sum to 1 and the fact that the initial state can be chosen arbitrarily, we obtain the following condition for the operators $M_m$:

$$\sum_m M_m^\dagger M_m = I \qquad (10)$$

where $I$ is the identity operator on $\mathcal{H}$. If all measurement operators are projective, $M_m = P_m$ with $P_m$ satisfying $P_m = P_m^\dagger = (P_m)^2$, the measurement is called projective, or a von Neumann measurement. Its characteristic property is that if the system is measured again using the same projective measurement, we will obtain the same outcome and its state will not change. For a more extensive overview of the laws of quantum mechanics see [67].

1.1.2 Quantum computers and models for quantum computation

The physical implementation of the abstract idea of a universal quantum computer requires certain criteria to be met. DiVincenzo proposed a set of five criteria [26, 65]:

(1) A scalable physical system with well characterized qubits;
(2) The ability to initialize the qubits in a simple fiducial state;
(3) Long decoherence times, much longer than the gate operation time;
(4) A "universal" set of quantum gates;
(5) A qubit-specific measurement capability.

Two other criteria are appended when one needs to employ a network of quantum computers:

(6) The ability to interconvert stationary and flying qubits;
(7) The ability to faithfully transmit flying qubits between separate locations.

Many physical systems have been considered for the purposes of quantum computing, each with its advantages and disadvantages.
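The measurement rule of Eqs. (6), (8) and the completeness condition (10) above can be sketched numerically. The two measurement operators below are a hypothetical example chosen only so that they satisfy (10); this is an illustrative sketch, not a construction from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-outcome generalized (non-projective) measurement on a qubit.
# M_0, M_1 are hypothetical operators satisfying M_0^dag M_0 + M_1^dag M_1 = I.
theta = 0.3
M = [np.diag([np.cos(theta), 1.0]).astype(complex),
     np.diag([np.sin(theta), 0.0]).astype(complex)]

# Completeness condition (10)
S = sum(Mm.conj().T @ Mm for Mm in M)
assert np.allclose(S, np.eye(2))

def measure(psi):
    """Sample an outcome m with probability (8), then collapse the state via (6)."""
    probs = [np.vdot(Mm @ psi, Mm @ psi).real for Mm in M]
    m = rng.choice(len(M), p=probs)
    post = M[m] @ psi / np.sqrt(probs[m])
    return m, post

psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
m, post = measure(psi)
assert np.isclose(np.linalg.norm(post), 1.0)  # post-measurement state is normalized
```

Because (10) holds, the outcome probabilities in `measure` automatically sum to 1 for any input state.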
Some of the candidates are based on the following physical systems or phenomena: optical photons, optical cavities, optical lattices, ion traps, nuclear magnetic resonance, and superconductors. They all have their advantages and disadvantages when we look at their properties keeping in mind the list of criteria above. The most promising system so far is the ion trap, which satisfies all five criteria for a quantum computer.

The software for a quantum computer comes in many different forms. In the standard, circuit, model the qubits are represented by horizontal quantum wires, where the direction of the computation is from left to right [67]. The unitary transformations, or gates, are given by boxes that overlap or connect the wires they act on. A quantum algorithm or computation is thus represented as a sequence of unitary gates that act on one or more of the qubits. On the left side the qubits start in some standard initial state. The overall unitary that needs to be implemented can be decomposed into a sequence of gates drawn from a set of gates. A universal set of gates has the property that each unitary can be implemented or approximated by such a sequence. One such set of gates consists of all single-qubit gates and the CNOT gate. Another, discrete, universal set includes the Hadamard, phase, CNOT and $\pi/8$ gates. The measurements needed to extract the classical information after the quantum evolution has finished can be performed at the end of the sequence of unitaries by measuring each qubit.

Another model of computation is the so-called adiabatic quantum computation. In this model the solution to the problem is represented as the ground state of a Hamiltonian which is usually nontrivial and quite complex. The physical system, with a variable Hamiltonian, is initialized in its ground state. This is done for some standard choice of the variables in the Hamiltonian for which both the Hamiltonian and its ground state have a simple form.
The quantum computation consists of slowly changing the variables until the Hamiltonian changes to the one that solves the problem. The time needed for this process is chosen to be long enough that the probability for the system to leave its ground state during the whole computation is small. This time is usually estimated using the adiabatic theorem. It has been shown that this model is equivalent to the circuit model and can be efficiently simulated by it [61].

Quantum walks are a third model of quantum computation. They are devised as an analog of classical random walks. But in contrast, the quantum walk evolution is, as usual, unitary and thus reversible. The walk's space can be a directed or undirected graph, with or without loops, in discrete or continuous time. The continuous-time quantum walk on an undirected graph with infinite tails has been shown to be a universal model for quantum computation [20]. Studying quantum walks and their properties is motivated by their usefulness for developing new quantum algorithms.

1.2 Outline

The thesis explores two topics in quantum information theory: measurement theory and quantum walks.

In chapter 2, we examine the concept of weak measurement continuous in time. The measurement record of such a continuous measurement scheme is a classical continuous stochastic process. Its connection to the standard definition of measurement in quantum mechanics is explored. The simulation of a generalized measurement with $n$ outcomes is performed by using a feedback continuous measurement scheme. The classical stochastic process lives on an $(n-1)$-dimensional Riemannian manifold. The feedback procedure involves estimating the current position of the stochastic process on the manifold. The instantaneous measurement depends only on the current estimate of the position and not on the previous history of the measurement process.
We give the exact formulas for the feedback and the Riemannian metric of the manifold, and prove that the measurement scheme converges to one of the outcomes of the generalized measurement to be simulated, with the right probabilities, in the limit of time going to infinity. The coupled stochastic equations for the quantum state of the system and the position of the point of the outcome process are given in the case of a generalized measurement with commuting positive measurement operators.

In chapter 3, we give a definition of the hitting time for the continuous-time quantum walk. A Poisson process is used to generate the times at which the unitary evolution of the quantum walk is interrupted by a measurement which checks whether the final state has been reached or not. The hitting time is the average of the hitting times over all trajectories of the Poisson process. An analytical formula for the hitting time is derived. An alternative version using weak measurements is given and proven to produce the same result as the above one. The definition of the hitting time permits us to investigate whether, and under what conditions, infinite hitting times exist. We explore the above definition and its consequences for the hitting time behavior, the quantum Zeno effect, and infinite hitting times for several simple graphs. In the final subsection we consider graphs whose complementary graph is not connected. We show that this condition is sufficient for the existence of infinite hitting times for the quantum walk on the original graph.

In chapter 4, we examine the model of continuous-time quantum walks on graphs that are not finite, but where infinitely long tails are attached to some of the vertices of a finite graph.
The Hamiltonian on such graphs naturally gives rise to propagating and bound eigenstates, and the equations that define them are derived. A notable feature is that, in contrast to the usual continuous-in-space models from quantum mechanics, the bound states split into two classes: one of states which decay exponentially on the infinite tails of the graph, and another for which the states turn out to be zero on all tails. We prove the orthogonality of the propagating and bound states, define the S-matrix, and explore its properties. The transformation between the vertex state and energy eigenstate bases is given as well. In the last sections of this chapter we provide formulas for the S-matrix under different operations on the graph, such as cutting a tail, attaching a tail, or connecting two tails to form an edge. We prove that the S-matrix remains unitary under such operations.

Chapter 2: Decomposing generalized measurements into continuous stochastic processes

2.1 Preliminaries

The first and simplest definition of measurement that one learns in a first course on quantum mechanics involves projection operators which sum up to the identity [70]. Usually, these projective operators come from a resolution of the identity generated by a particular observable, represented by a Hermitian operator. A generalization is needed when composite systems are considered: a projective measurement on the whole system is not usually a projective measurement on a part of the system. Every generalized measurement can be performed by adding an ancilla to our given system, applying an appropriate unitary transformation to both system and ancilla, followed by a projective measurement on the ancilla [67]. The dimension of the ancilla must be at least equal to the number of outcomes of the measurement and the number of orthogonal projectors on the ancilla. A projective measurement can be carried out as a sequence of weak generalized measurements [67].
Here, weak means that after each step in the sequence, the system is disturbed by a small amount, and yields only a small amount of information. Each step also takes a certain finite amount of time, and thus so will the final strong projective measurement. This picture of a measurement, as a continuous sequence of infinitesimal steps taking a finite amount of time, contrasts with the usual assumptions we make about abstract measurements: that they are instantaneous and thus strong.

While this construction works for projective measurements, it cannot be immediately adapted to strong generalized measurements. We wish to construct a measurement procedure that is continuous in time, but that at long times produces the same result as a given strong measurement: that is, in the end the system is in one of the correct final states with the correct probabilities. We will prove that such a continuous decomposition exists for any generalized measurement, and give an explicit construction.

Such a continuous procedure can serve a variety of purposes. For certain problems, a continuous description is useful: we are able to take derivatives and use other analytical tools of calculus [68, 69], and to apply methods from quantum filtering theory and quantum feedback control. On the level of physical reality, it is difficult to argue whether one particular description of a measurement is more fundamental (if any of them is). For much of the time since the discovery of quantum mechanics it made sense to treat measurements as instantaneous, because those were the kind of measurements we were able to perform. With more recent advances in quantum optics and atomic physics, where quantum systems can be continuously monitored, new ways of describing the evolution of the system had to be developed: models such as quantum trajectories or decoherent histories [18, 83, 41, 45, 37, 38, 13, 12].
There are numerous ways to think about measurements in quantum mechanics, which are not necessarily contradictory, but on the contrary are in some sense complementary.

In the following we construct a quantum continuous stochastic process, a family of consecutive weak measurements, which governs the evolution of the state of the quantum system, and for which the final result (at long times) is the same as that of a specified (strong) generalized measurement. We will give an explicit construction of such a process, and prove that it does indeed have the correct long-time behavior. As we will see, in general this requires that the choice of measurements at later times depend on the measurement results at earlier times, so this continuous measurement procedure must include feedback.

2.2 Repeated weak measurements for projective measurements

Let's consider a projective measurement on the system. The measurement operators are denoted by $\hat{P}_i$ and they have the following properties:

$$\hat{P}_i^\dagger = \hat{P}_i = (\hat{P}_i)^2, \quad \hat{P}_i\hat{P}_j = \delta_{ij}\hat{P}_i, \quad \sum_{i=1}^n \hat{P}_i = \hat{I}. \qquad (11)$$

We also assume that initially our system is in state $|\psi_0\rangle$. After we perform the projective measurement (11) on the system, its state collapses to one of the following states

$$|\psi_i\rangle = \frac{\hat{P}_i|\psi_0\rangle}{\sqrt{p_i^0}} \qquad (12)$$

with probability $p_i^0 = \langle\psi_0|\hat{P}_i|\psi_0\rangle$, respectively.

We now give the discrete procedure which decomposes the projective measurement into a sequence of weak measurements. We denote by $\Delta_n$ the classical state space

$$\Delta_n = \left\{ x \in (0,1)^n \subset \mathbb{R}^n \;\middle|\; \sum_{i=1}^n x^i = 1 \right\}$$

and by $\bar{\Delta}_n$ the closure of $\Delta_n$,

$$\bar{\Delta}_n = \left\{ x \in [0,1]^n \subset \mathbb{R}^n \;\middle|\; \sum_{i=1}^n x^i = 1 \right\}.$$

Let $\{x_{(k)} \in \Delta_n,\ k = 1,\ldots,n\}$ be $n$ fixed points in $\Delta_n$ with the property

$$\sum_{k=1}^n x_{(k)} = n e \qquad (13)$$

where $e = (\tfrac{1}{n},\ldots,\tfrac{1}{n}) \in \Delta_n$. These $n$ points $x_{(k)}$ will represent the outcomes of the weak measurements that we perform, and we will refer to them as the fundamental steps of our discrete stochastic process.
The measurement operators that we construct out of them are

$$\hat{N}_{(k)} = \sum_{i=1}^n \sqrt{x^i_{(k)}}\, \hat{P}_i \qquad (14)$$

with the completeness condition satisfied:

$$\sum_{k=1}^n \hat{N}_{(k)}^\dagger \hat{N}_{(k)} = \sum_{k,i,j=1}^n \sqrt{x^i_{(k)}}\sqrt{x^j_{(k)}}\,\hat{P}_i\hat{P}_j = \sum_{k,i,j=1}^n \sqrt{x^i_{(k)} x^j_{(k)}}\,\delta_{ij}\hat{P}_i = \sum_{k,i=1}^n x^i_{(k)}\hat{P}_i = n\sum_{i=1}^n e^i\,\hat{P}_i = \hat{I}.$$

This means that the measurement operators (14) define a complete quantum measurement. If we measure the system using the above measurement operators over and over again, after a long enough time (strictly speaking, in the limit of time going to infinity) the system is bound to end up in one of the states (12) with the right probability.

After every measurement we get a measurement outcome $(k)$, $k \in \{1,\ldots,n\}$, so after $s$ steps we have a sequence $(k_1, k_2, \ldots, k_s)$ of outcomes. The state of the system at that time has changed to

$$|\psi_s\rangle = |\psi_{(k_s,\ldots,k_1)}\rangle = \frac{1}{\sqrt{p_{k_s\ldots k_1}}}\,\hat{N}_{(k_s)}\cdots\hat{N}_{(k_1)}|\psi_0\rangle = \frac{1}{\sqrt{p_{k_s\ldots k_1}}}\sum_{i=1}^n \sqrt{x^i_{(k_s)}\cdots x^i_{(k_1)}}\,\hat{P}_i|\psi_0\rangle = \sum_{i=1}^n \sqrt{\frac{x^i_{(k_s)}\cdots x^i_{(k_1)}\, p_i^0}{p_{k_s\ldots k_1}}}\,|\psi_i\rangle \qquad (15)$$

where

$$p_{k_s\ldots k_1} = \langle\psi_0|\hat{N}_{(k_1)}^\dagger\cdots\hat{N}_{(k_s)}^\dagger\hat{N}_{(k_s)}\cdots\hat{N}_{(k_1)}|\psi_0\rangle = \sum_{i=1}^n x^i_{(k_s)}\cdots x^i_{(k_1)}\,\langle\psi_0|\hat{P}_i|\psi_0\rangle = \sum_{i=1}^n x^i_{(k_s)}\cdots x^i_{(k_1)}\, p_i^0.$$

Equation (15) can be rewritten as

$$|\psi_s\rangle = \sum_{i=1}^n \sqrt{\tilde{x}^i_s}\,|\psi_i\rangle \qquad (16)$$

with

$$\tilde{x}^i_s = \frac{x^i_{(k_s)}\cdots x^i_{(k_1)}\, p_i^0}{p_{k_s\ldots k_1}}. \qquad (17)$$

Because

$$\sum_{i=1}^n \tilde{x}^i_s = \frac{\sum_{i=1}^n x^i_{(k_s)}\cdots x^i_{(k_1)}\, p_i^0}{p_{k_s\ldots k_1}} = 1,$$

it follows that $\tilde{x}_s \in \Delta_n$. In these new coordinates the initial state of the system is just $|\psi_0\rangle = \sum_{i=1}^n \sqrt{\tilde{x}^i_0}\,|\psi_i\rangle$ where $\tilde{x}^i_0 = p_i^0$. Another way of writing equation (15) is

$$|\psi_s\rangle = \sum_{i=1}^n \sqrt{\frac{x^i_s\, p_i^0}{\sum_{l=1}^n x^l_s\, p_l^0}}\,|\psi_i\rangle \qquad (18)$$

with

$$x^i_s = \frac{x^i_{(k_s)}\cdots x^i_{(k_1)}}{\sum_{l=1}^n x^l_{(k_s)}\cdots x^l_{(k_1)}}. \qquad (19)$$

Here again $x_s \in \Delta_n$. Then $|\psi_0\rangle$ is given by (18) with $x^i_0 = \frac{1}{n}$.
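The convergence claimed above, namely that repeating the weak measurements (14) drives the system to one of the states (12) with probabilities $p_i^0$, can be illustrated with a small classical simulation of the weight vector $\tilde{x}_s$ of (16)-(17). The fundamental steps below are an illustrative choice satisfying (13) for $n = 2$; the simulation is a sketch added for illustration, not code from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two fundamental steps on Delta_2 satisfying (13): x_(1) + x_(2) = 2e, e = (1/2, 1/2)
steps = np.array([[0.6, 0.4],
                  [0.4, 0.6]])

def run(p0, n_steps=300):
    """Repeated weak measurements (14); xt tracks the weights in (16)-(17)."""
    xt = p0.copy()
    for _ in range(n_steps):
        pk = steps @ xt                 # outcome probabilities p_k = sum_i x^i_(k) xt^i
        k = rng.choice(len(steps), p=pk)
        xt = steps[k] * xt / pk[k]      # one step of the update (17)
    return np.argmax(xt)                # after many steps, xt is (nearly) a vertex

# Initial weights are the projective-measurement probabilities p_i^0
p0 = np.array([0.25, 0.75])
outcomes = [run(p0) for _ in range(300)]
freq1 = np.mean(np.array(outcomes) == 1)
# Long-run frequency of outcome 1 should approach p0[1] = 0.75
assert 0.65 < freq1 < 0.85
```

The update keeps `xt` normalized and makes each weight a bounded martingale, which is why the trajectory converges to a vertex of the simplex with exactly the projective probabilities.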
Let us stress here that both formulas (16) and (18) imply that the state of the system changes after each weak measurement generated by (14), and that there is a correspondence between the evolving vector $\tilde{x}_s = (\tilde{x}^1_s, \ldots, \tilde{x}^n_s)$, or $x_s = (x^1_s, \ldots, x^n_s)$, the time evolution of each given by formula (17) or (19) respectively, and the evolving state of the system $|\psi_s\rangle$.

Another observation is that instead of keeping track of the sequence of outcomes $(k_1, k_2, \ldots)$ we can equally well use the sequence of points $(\tilde{x}_1, \tilde{x}_2, \ldots)$ or $(x_1, x_2, \ldots)$ in $\Delta_n$. There is a one-to-one correspondence between sequences of outcomes and sequences of permissible points in $\Delta_n$. By a "permissible point" we mean a point for which there exists at least one sequence of outcomes $(k_1, k_2, \ldots, k_s)$ that generates that point, by formula (17) or (19) respectively, given some initially chosen and fixed fundamental steps $\{x_{(k)}\}$. To prove that all three types of sequences carry the same information we use the fact that $\Delta_n$ can be given a group structure:

Definition 1. $\Delta_n$ is an abelian group with multiplication $\star : \Delta_n \times \Delta_n \to \Delta_n$, where $x \star y$ for $x, y \in \Delta_n$ is defined as

$$(x\star y)^i = \frac{x^i y^i}{\sum_{k=1}^n x^k y^k}. \qquad (20)$$

We also define the Hadamard product $x \circ y$ of two vectors $x$ and $y$ of the same dimension by

$$(x\circ y)^i = x^i y^i,$$

and the trace $\mathrm{Tr}(x)$ of a vector $x$, which is defined as

$$\mathrm{Tr}(x) = \sum_{i=1}^n x^i.$$

Using these notations, equation (20) reads as

$$x\star y = \frac{x\circ y}{\mathrm{Tr}(x\circ y)}.$$

The identity $e$ of the group $\Delta_n$ is $e = (1/n, \ldots, 1/n)$, and the inverse $x^{-1}$ of an element $x$ is $\frac{\bar{x}}{\mathrm{Tr}(\bar{x})}$ where $\bar{x} = \left(\frac{1}{x^1}, \ldots, \frac{1}{x^n}\right)$. Now it is easy to see that, given a sequence of permissible points $(x_1, x_2, \ldots, x_s)$ generated by the measurement procedure with some pre-fixed fundamental steps, every element of the sequence $(x_1,\ x_2 \star x_1^{-1},\ \ldots,\ x_s \star x_{s-1}^{-1})$ is one of the fundamental steps $x_{(k)}$ for some $k$. Here $x^{-1}$ is the inverse of $x$ with respect to the group multiplication.
So $(x_1,\,x_2\star x_1^{-1},\dots,\,x_s\star x_{s-1}^{-1}) = (x^{(k_1)},x^{(k_2)},\dots,x^{(k_s)})$ for some appropriately chosen $(k_1,\dots,k_s)$, which is exactly the sequence of outcomes. It is just as easy to prove the same for a sequence $(\tilde x_1,\tilde x_2,\dots,\tilde x_s)$.

The value of keeping track of sequences of permissible points is that we need only the last element $x_s$ of the sequence and the last outcome $i_{s+1}$ from the $(s+1)$-th measurement in order to find the state of the system $|\psi_{s+1}\rangle$ at time $s+1$: it is given by formula (18) with
$$x_{s+1} = x_s\star x^{(i_{s+1})} = \frac{x_s\circ x^{(i_{s+1})}}{\mathrm{Tr}\big(x_s\circ x^{(i_{s+1})}\big)}. \qquad (21)$$

We can make use of this result by making a correspondence between the measurement procedure and a classical discrete stochastic process. Let $(\Omega,\mathcal F,P)$ be a probability space. We define the stochastic process $x:\mathbb N_0\times\Omega\to P_n$ by giving its distribution law. Here $P_n$ is the set of permissible points for a concrete choice of fundamental steps $x^{(k)}$, and $\mathbb N_0 = \{0,1,2,\dots\}$ is the time set. Usually we will denote $x(s,\omega)$ as $x_s(\omega)$ for $\omega\in\Omega$. The probability distribution of $x_s$, thought of as a random variable, is $d_s(x) = \sum p_{k_1,\dots,k_s}$, where the sum is over all sequences of outcomes $(k_1,\dots,k_s)$ for which
$$x^i = \frac{x_i^{(k_s)}\cdots x_i^{(k_1)}}{\sum_{l=1}^n x_l^{(k_s)}\cdots x_l^{(k_1)}}, \qquad (22)$$
or in other words, for which the last entry in the sequence with $s$ elements is exactly $x$. If there is no such sequence with $s$ elements, then $d_s(x) = 0$. The reinterpretation of formula (21) in these new terms is that the process $x$ is Markovian.

2.3 Continuous process for projective measurements

We will now construct a continuous quantum measurement process which decomposes the strong projective measurement (11). Unlike the discrete case, we are now not restricted to performing the same measurement at every time step (in a sense, the fundamental steps can be chosen to be different). These measurements can be arbitrary, but they must depend smoothly on the state of the system at the current moment and not explicitly on the time.
(This will be vital when we take the next step to decomposing generalized measurements, where the measurements must be dierent at dierent steps.) Let's make these ideas precise. As in the discrete case, we will concentrate on the evolution of the classical state analog of the system X t 2 n rather than its 20 quantum statej t i. The measurement on the system is now given by a Lesbegue- measurable setD(x;) n , of vectors which is a neighborhood ofe2 n , and satises a condition analogous to (13): Z D(x;) zdh ? (z) =e Z D(x;) dh ? (z) =eA(x;) (23) withA(x;) being the (n 1)-dimensional volume ofD(x;) andh ? (x) the Haar measure on n . The other condition needed is lim !0 d(x;) = lim !0 sup z2D(x;) d(z;e) = 0 (24) where d(z;x) is the standard distance function on n thought of as a subset of R n . The setD(x;) depends on the current state of the systemx and on a small positive parameter (0 1) which at some point we will let go to zero, thus getting a continuous stochastic process. The measurement operators are given as in (14): ^ N(z;x;) = n A 1=2 n X i=1 p z i ^ P i with z2D(x;) (25) where the completeness condition is given by Z D(x;) ^ N y (z;x) ^ N(z;x)dh ? (z) = n A n X i;j=1 ^ P i ^ P j Z D(x;) p z i p z j dh ? (z) = n A n X i=1 ^ P i Z D(x;) z i dh ? (z) = n A n X i=1 ^ P i e i A = ^ I: 21 When we perform a measurement we get an outcome z o from the set D(x;). The new state of the system z is given by formula (21): z =x?z o = xz o Tr(xz o ) : The probability density for the system to be in state z2D ? x (x;) is (z) =h t j ^ N y (z o ;x;) ^ N(z o ;x;)j t i (26) wherej t i is given by (18) and D ? u (x;) is the left translation of D(x;) by u (sometimes the notation u?D(x;) could also be used) dened as D ? 
u (x;) =fy2 n jy =u?y o ; for some y o 2D(x;)g: Expanding (26) we get (z) = n A n X k;l=1 s x k p 0 k P n m=1 x m p 0 m s x l p 0 l P n m=1 x m p 0 m n X i;j=1 p (z o ) i p (z o ) j h k j ^ P i ^ P j j l i = n A n X i=1 x i p 0 i (z o ) i P n m=1 x m p 0 m = n A Tr(xp 0 z o ) Tr(xp 0 ) = n A Tr(xp 0 (x 1 ?z)) Tr(xp 0 ) (27) and if z2 n nD ? x (x;) then (z) = 0. We want to emphasize that performing a measurement with operators (25) on the system at time t gives the new state of the system at a later time t +". For the stochastic process to be continuous we want that "! 0 as ! 0. 22 The easiest way to derive the stochastic dierential equation for the process X requires the concept of stopping time. To give a proper denition we need to consider not just a probability space but a ltered probability space. More information on probability spaces, stochastic processes, stochastic dierential equations and It o's calculus can be found in [9, 47, 48, 30, 46]. Denition 2 Let ( ;F;P) be a probability space. A family fF(t)g t0 of - algebras F(t) F is a ltration on the probability space ( ;F;P) if F(s) F(t) for st. We are going to call ( ;F(t)g t0 ;F;P) a ltered probability space. In this paper we will always assume that the ltered probability space satises all the needed hypotheses. F isP-complete, F t contains all the null sets in F, F t is right-continuous. Also, all the stochastic processes are to be adapted to the ltration. Denition 3 A process X :R + ! V (with (V;V) a measurable space) is adapted if X t =X(t) :=X(t;) is F t -measurable for all t2R + . Denition 4 A process X : R + ! V (with (V;) a topological space) is continuous if X(t;!) is a continuous function of t for all !2 . Now we can give the denition of a stopping time: Denition 5 A random variable : ! 
[0;1] is called a stopping time with respect tofF t g t0 provided ftg = 1 ([0;t])2F t for all t 0: 23 Denition 6 Given a stochastic process X :R + !V and a stopping time the stopped process X is dened as X t =X minft;g The stopping times that we will deal with are called hitting times. Lemma 1 Let X :R + !V be an adapted stochastic process and SV a measurable set. Then := infft 0jX t 2Sg is a stopping time. The hitting time is just the rst time when the stochastic process touches the set S. Now we can derive the stochastic dierential equation for our stochastic pro- cess X t . Let S 2 n and B @S be a part of the boundary @S of S. Let P X B (X 0 ) be the probability that, when the stochastic processX with time evolu- tion generated by the measurement operators (25) and initial condition X 0 hits the boundary, @S, it hits it in the subset B rather than in @SnB. Let's denote by @S the hitting time of the process of the the boundary @S. What we want is to nd a second-order dierential equation forP X B (x). Let's assume that at time t the system state is localized at the point x; X t = x. Performing the measurement with operators (25) gives us the state of the system X t+" at timet+" which is a random variable with probability density distribution given by (27). Now we need the following simple theorem: 24 Theorem 2 Let X t be the value of the stochastic process dened above at some time t< @S . Then if X ! is the stochastic process with time evolution generated by the measurement operators (25) but with initial condition X ! 0 =X t we have P X B (X 0 ) =P X ! B (X t ): The statement of the theorem is trivial because X ! t =X t+T . From the theorem follows that P X B (x) =P X B (X t+" ): (28) P X B (X 0 ) is a conditional probability|it's the probability for hittingB given that the initial condition is X 0 , and thus it satises the following property P X B (X 0 ) = Z n P X B (z)P(X 1 0 (dh ? (z))) (29) = Z n P X B (z) X 0 (dh ? (z)) = Z n P X B (z) X 0 (z)dh ? 
(z) (30) where X 0 (z) =P(X 1 0 (z)) is the probability distribution of X 0 and X 0 (z) the probability density with respect to the Haar measure h ? . Substituting this in (28) we have P X B (x) =P X B (X t+" ) = Z D ? x (x;) P X B (z) X t+" (z)dh ? (z) (31) 25 with X t+" (z) given by (27) X t+" (z) = n A Tr(xp 0 (x 1 ?z)) Tr(xp 0 ) : After changing the coordinatesz!x?z, given that the Haar measure is invariant under left translations (h(A) =h(x?A) for8x2 n andA n ), (31) becomes P X B (x) = Z D(x;) P X B (x?z) ~ X t+" (z)dh ? (z) (32) with ~ X t+" (z) = n A Tr(xp 0 z) Tr(xp 0 ) : (33) Equations (32) and (33) are the starting point for deriving the dierential equa- tion for P X B (x). We rst do an integral estimate that is needed to take the limit ! 0 in (32). We want to estimate the integral Z i 1 ;:::;i k k (x;) = Z D(x;) ((ze) k ) i 1 ;:::;i k dh ? (z) = Z D(x;) (ze) i 1 :::(ze) i k dh ? (z): Using the notation in (24) for thei k -th component ofze thought of as a vector inR n we get (ze) i k d(x;) for z2D(x;) 26 and it follows immediately that (ze) i 1 :::(ze) i k 2O(d k ()) as ! 0: In the end Z i 1 ;:::;i k k (x;)d k (x;) Z D(x;) dh ? (z) =d k (x;)A(x;)2O(d k ()A()) as ! 0: From this it is a trivial consequence that Z i 1 ;:::;i k k (x;) d 2 ()A() 2O(d k2 ()) as ! 0: (34) Now we can take the limit! 0 of (32), dividing it rst on both sides byd 2 (). For this purpose we rst expand the integrand in Taylor series. Because of (34), all terms of order three or more in the series go to zero as ! 0, and so we need to keep only terms of order zero, one and two. To simplify the notation we denote p(z) =P X B (z), ~ (z) = ~ X t+" (z) and y =x?z. We have p(x) =p(y)j z=e + @p(y) @z z=e (ze) + 1 2 (ze) @ 2 p(y) @z 2 z=e (ze) +O((ze) 3 ) =p(x) + @p(y) @y y=x @y @z z=e (ze) + 1 2 (ze) @y @z z=e @ 2 p(y) @y 2 y=x @y @z z=e (ze) + 1 2 @p(y) @y y=x (ze) @ 2 y @z 2 z=e (ze) +O((ze) 3 ): 27 Above the dot denotes the usual dot product in R n . 
Substituting this in (32) while recalling that Z D(x;) ~ X t+" (z)dh ? (z) = 1; we get p(x) = p(y) + @p(y) @y @y @z T 1 (x;) + 1 2 Tr @y @z @ 2 p(y) @y 2 @y @z T 2 (x;) + 1 2 @p(y) @y Tr @ 2 y @z 2 T 2 (x;) y=x;z=e +O(d 3 ()A()); (35) where T k (x;) = Z D(x;) (ze) k ~ (z)dh ? (z): (36) Expanding ~ (z) in a Taylor series, ~ (z) = ~ (e) + @ ~ (z) @z z=e (ze) = 1 A + n A (x?p 0 ) (ze); and substituting the series into (36), we get T k (x;) = 1 A Z k (x;) + n A (x?p 0 )Z k+1 (x;): (37) 28 Now we can simplify (35), and after dividing both sides of the equation by d 2 () we obtain 0 = @p(y) @y y=x @y @z z=e Z 1 (x;) d 2 ()A() + @p(y) @y y=x @y @z z=e nZ 2 (x;) d 2 ()A() (x?p 0 ) + 1 2 Tr @y @z z=e @ 2 p(y) @y 2 y=x @y @z z=e Z 2 (x;) d 2 ()A() ! + 1 2 @p(y) @y y=x Tr @ 2 y @z 2 z=e Z 2 (x;) d 2 ()A() +O(d()A()): (38) The term in the rst line of (38) is zero because (23) can be rewritten as Z D(x;) (ze)dh ? (z) = 0 =Z 1 (x;): Now we can take the limit ! 0. As Z 2 (x;) d 2 ()A() 2O(1) as ! 0 it has a nite limit which we denote by (x). From its denition it follows that this matrix is positive with at least one eigenvalue equal to zero, because n X i=1 Z D(x;) (ze) i (ze) k dh ? (z) = Z D(x;) k X i=1 (z i e i )(ze) k dh ? (z) = 0: (39) 29 We will assume that all other eigenvalues of (x) are dierent from zero. Be- causeO(d()A())! 0 as ! 
0, we nally derive the second-order dierential equation for p(x): n @p @x @y @z z=e (x) (x?p 0 ) + 1 2 Tr @y @z z=e @ 2 p @x 2 @y @z z=e (x) + 1 2 @p @x Tr @ 2 y @z 2 z=e (x) = 0: (40) Dierentiating y =x?z we get @y i @z j z=e =nx i ( i j x j ) (41) and @ 2 y i @z j @z k z=e =n 2 x i (2x j x k x j i k x k i j ): (42) The matrix (41) has a pseudo-inverse given by 1 nx i ( k i 1 n ) and so it satises n X i=1 @y i @z j z=e 1 nx i ( k i 1 n ) = n X i=1 ( i j x j ) k i 1 n = k j 1 n : Substituting these in (40) we get the nal simple form of our equation 1 2 g ij (x) @ 2 p @x i @x j +b i (x)g ij (x) @p @x j = 0 (43) where g ij (x) = 1 n 2 @y i @z z=e (x) @y j @z z=e (44) 30 and b j (x) = (x?p 0 ) j x j = p 0 j Tr(xp 0 ) : (45) It's simple to derive the stochastic equations for the process X t having equa- tion (43) for the probabilityP X B . We use the following theorems to this purpose. Theorem 3 (It o's formula for stopping times) Given a stochastic process satis- fyingdX i t = i (X;t)dt+ P n j=1 i j (X;t)dW j , a stopping time and aC 2;1 -function u(x;t), it follows that u(X t ;t)j 0 = Z 0 @u @t +Lu t=s ds + Z 0 @udW (46) where Lu = 1 2 n X i;j=1 ij @ 2 u @x i @x j + n X i=1 i @u @x i , ij = n X k=1 i k j k : (47) Taking expectations of (46) we obtain E(u(X ;))E(u(X 0 ; 0)) =E 0 @ Z 0 @u @t +Lu t=s ds 1 A : (48) Theorem 4 Given the conditions in Theorem 3, the C 2 -function u(x) dened on some smooth, bounded domain U given by u(x) =E(h(X )) with X 0 =x 31 satises the boundary value problem Lu = 0 in U; u =h on @U: We can now apply this to our case: LP X B = 0 in S; P X B = 1 on @B; P X B = 0 on @SnB (49) with Lu = 1 2 g ij (x) @ 2 u @x i @x j +b i (x)g ij (x) @u @x j : (50) From these theorems we see that if we require that the stochastic process satises the following stochastic dierential equation, it will satisfy equations (49) and (50) as well, dx i t = n X j=1 g ij (x t )b j (x t )dt + n X k=1 a i k (x t )dW k (51) where a(x) is the unique square root of g(x): g 
ij = n X k=1 a i k a j k : (52) (51) is the equation we consider for governing the evolution of the continuous measurement process. We want to point out the form of the equation is not 32 accidental. Such equation arise when one considers stochastic processes in local coordinates on Riemannian manifolds equipped with an adapted (but not torsion- free) connection. When one considers Brownian motion on manifolds the same equation appears but the term in front ofdt involves the Levy-Civita connection associated with the metric g. For further reading on stochastic processes on manifolds one can turn to references [44, 28, 29, 71, 57]. Now we prove that if the metricg(x) is chosen properly, the stochastic process reproduces the strong measurement at long times. Theorem 5 Let the metric g be invariant under the action of n on itself and under the action of the symmetric group S n . Then the stochastic process sat- isfying (51) starting from X 0 = e ends in one of the vertices v (k) of n with probability p 0 k in the limit t!1. The action of S n on n is dened as follows | for 2S n and x2 n , (x) i =x (i) : The components of the vertices v i (k) are equal to i k . Proof. The conditions for invariance of the metric g together with condition (39) x the form of the metric very stringently | the metric is unique up to a constant conformal factor and has the following form: g ij (x) = n X ;=1 x i ( i x ) x j ( j x ); (53) 33 Figure 1: Two families of geodesics on 3 with the metric g ij (x). Each blue geodesic is orthogonal, at the point of intersection, to the central red one. where the matrix is constant and equal to the projector fromR n to n : = 1 n : (54) Two families of geodesics are given in Fig. 1. It is apparent that the metric is not at. The scalar curvature for this metric is also given in Fig. 2. The scalar curvature at a certain point becomes larger in absolute value the closer the point is to the boundaries of n . Thus on the boundaries the curvature is innite. 
Obviously, the manifold with that metric is non-compact although we use the x coordinates to visualize n . The stochastic equation (51) will have the form dx i = n X ;=1 x i ( i x ) x j ( j x )b j (x)dt + n X =1 x i ( i x ) k dW k ; (55) 34 Figure 2: The scalar curvature on 3 with the metric g ij (x). ln(R) is plotted on the z axis. with is the unique square root of : k = k 1 n : (56) If we change the coordinates from x to ~ x, dened by ~ x =x?p 0 ; (57) the equation in these new coordinates looks simple: d~ x i = n X =1 ~ x i ( i ~ x ) k dW k = ~ x i n X k=1 ( i k ~ x k )dW k : (58) From this it immediately follows that the stochastic process X t in these coordi- nates is a local martingale; and as it takes values in the bounded set n , it is a 35 martingale. As the process is aL p -integrable martingale for anyp 1, it follows by the martingale convergence theorem that there exists a random variable ~ x 1 which is the limit of ~ x t ast!1. Now we will prove that the limit ~ x 1 is localized on the vertices of n . Let's denote the m-th moment of ~ x m m (~ x) = P n i=1 (~ x i ) m . Then using It o's calculus we get dm 2 (~ x) = n X i;k=1 ~ x i ( i k ~ x k )~ x i ( i k ~ x k )dt + 2 n X i;k=1 (~ x i ) 2 ( i k ~ x k )dW k = m 2 (~ x) 2m 3 (~ x) +m 2 2 (~ x) dt + 2 n X i;k=1 (~ x i ) 2 ( i k ~ x k )dW k : Taking the expectation value, we arrive at an ordinary dierential equation for E(m 2 ): dE(m 2 ) dt =E(m 2 ) 2E(m 3 ) +E m 2 2 : (59) As the limit of ~ x t as t!1 exists, so does the limit ofE(m 2 (~ x t )). This means that the time derivative ofE(m 2 (~ x t )) goes to zero as time goes to innity: E(m 2 (~ x 1 )) 2E(m 3 (~ x 1 )) +E m 2 2 ~ x 1 )) = 0: (60) This last equation implies that the range of the random variable ~ x 1 is a subset of the set of all zeros of the function (m 2 2m 3 +m 2 2 ) (x). 
It is easy to show 36 that the roots of this function are exactly the vertices of n by considering the inequality m 2 2m 3 +m 2 2 (x) = n X i=1 " (~ x i ) 2 1 2~ x i + n X k=1 (~ x k ) 2 !# n X i=1 (~ x i ) 2 (1 2~ x i + (~ x i ) 2 ) = n X i=1 (~ x i ) 2 (1 ~ x i ) 2 0 (61) with equality when x2fv (1) ;:::;v (n) g. This proves that the stochastic process ends at one of the vertices of n when time goes to innity. Let's denote the probability distribution function of ~ x 1 by M(x). Then M(x) = 8 > > < > > : q k ; for x =v (k) 0; for x6=v (k) for8k = 1;:::;n (62) for some q = (q 1 ;:::;q n )2 n . As ~ x t is a martingale it follows that E(~ x 0 ) =E(~ x t ) =E(~ x 1 ): (63) In thex-coordinates the process starts fromx 0 =e. By (57) the initial condition in the ~ x-coordinates is ~ x 0 =p 0 . Then p 0 i =E(~ x i 0 ) =E(~ x i 1 ) = n X k=1 q k v i (k) = n X k=1 q k i k =q i : (64) This proves the theorem. 37 Now we can describe the evolution of the state of the system when it is subjected to our feedback control scheme. The equations are a special case of equations (72) with M j =P j : dj t i = 1 8 n X j=1 ^ P j D ^ P j E ^ P j D ^ P j E j t idt + 1 2 n X i=1 ^ P i D ^ P i E j t idW i ; (65) dx i =g ij D ^ P j E dt +a i dW ; (66) with D ^ P j E =h t j ^ P j j t i. The notable thing in these equations, that distinguishes them from the equations describing the the decomposition of a generalized mea- surement, is that feedback is not needed. The state of the systemj t i evolves in accordance with equation (65) which has no dependance on the vector x. As we will see below, that is not true for generalized measurement. In that case the equations forj t i and x are coupled in a nontrivial way, and feedback is necessary. 
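The convergence statement of Theorem 5 can be checked numerically with a simple Euler-Maruyama integration of Eq. (58), $d\tilde x^i = \tilde x^i\sum_k(\delta^i_k-\tilde x^k)\,dW^k$. In the sketch below the step size, time horizon, ensemble size, and initial point are illustrative choices; the check is that trajectories end near vertices of the simplex and that the empirical absorption frequencies reproduce $\tilde x_0 = p^0$, as the martingale property requires.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3
p0 = np.array([0.2, 0.3, 0.5])      # initial point tilde-x_0 = p^0
n_traj, n_steps, dt = 400, 2000, 0.02

wins = np.zeros(n)
max_sum = 0.0
for _ in range(n_traj):
    x = p0.copy()
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n)
        # Component form of Eq. (58): d x_i = x_i (dW_i - sum_k x_k dW_k).
        x = x + x * (dW - x @ dW)
        x = np.clip(x, 1e-12, None)
        x /= x.sum()                # keep the point on the simplex
    wins[np.argmax(x)] += 1
    max_sum += x.max()

freq = wins / n_traj
print(np.round(freq, 2))            # should be close to p0
```

The update is linear in the noise, so the Euler-Maruyama scheme preserves the martingale property in expectation; by the long horizon almost every trajectory has been absorbed at a vertex, and the frequency of landing on vertex $k$ approximates $p^0_k$.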
2.4 Continuous processes for generalized measurements

Building from the above process on the classical state space $\Delta_n$, it is now easy to prove the existence of a stochastic process on the quantum state space that decomposes any kind of generalized measurement, given by measurement operators $\hat M_j$ ($j=1,\dots,n$) satisfying the usual completeness relation $\sum_{j=1}^n \hat M_j^\dagger\hat M_j = \hat I$, into continuous measurements. (It is not necessary that these measurement operators commute.) To that purpose we will use the fact that homomorphisms of stochastic differential equations (the transformations that preserve the structure of the equations) must be at least twice-differentiable maps. We will give an example of a map $\hat{\mathcal M}$ taking points in $\Delta_n$ and mapping them to operators acting on the state space of our quantum system. This map should satisfy the following three requirements: it should be twice differentiable on $\Delta_n$, equal to the identity $\hat I$ at the identity $e$ of $\Delta_n$, and equal to the measurement operators at the vertices of $\Delta_n$.

Let $\hat M_j = \hat U_j\hat L_j$ be the left polar decomposition of $\hat M_j$, so the $\hat L_j$ are positive and the $\hat U_j$ are unitary. The map $\hat{\mathcal M}$ will be the product of two maps $\hat\Theta(x)$ and $\hat\Lambda(x)$: the first involving the unitaries $\hat U_j$, and the second the positive operators $\hat L_j$. If the Hamiltonians corresponding to the unitaries $\hat U_j$ are $\hat H_j$, then we can choose the map $\hat\Theta(x)$ to be
$$\hat\Theta(x) = \exp\!\left(\frac{in}{n-1}\sum_{j=1}^n x^j\Big(x^j-\frac1n\Big)\hat H_j\right). \qquad (67)$$
We can readily see that, as constructed, the map has all the required properties, and also has the property that $\hat\Theta(x)$ is unitary for all $x\in\Delta_n$. $\hat\Lambda(x)$ is given by
$$\hat\Lambda(x) = f^{\frac12}(x)\left(\sum_{j=1}^n x^j\,\hat L_j^\dagger\hat L_j\right)^{\!\frac12}, \qquad (68)$$
with $f(x) = 1+n\sum_{j=1}^n x^j(1-x^j)$. The square root above is well defined because the expression in parentheses is a positive operator for every $x\in\Delta_n$. We also note that $\hat\Lambda(x)$ is invertible for every $x\in\Delta_n$. This follows from the completeness property satisfied by the $\{\hat L_j\}$: $\sum_{j=1}^n \hat L_j^\dagger\hat L_j = \hat I$.
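The three defining properties of the two factors, and of their product, can be verified directly. In the sketch below the particular measurement operators (built from an assumed rotation angle and assumed outcome weights) are illustrative; the checks are that the composed map equals $\hat I$ at $e$ and equals the $\hat M_j$ at the vertices.

```python
import numpy as np

def sqrtm_psd(X):
    """Square root of a positive semidefinite Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def expm_iherm(H):
    """exp(iH) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w)) @ V.conj().T

n = 2
sy = np.array([[0, -1j], [1j, 0]])
# Hamiltonians generating the unitary parts U_j = exp(i H_j) (illustrative choices).
H_ops = [0.7 * sy, np.zeros((2, 2))]
U_ops = [expm_iherm(H) for H in H_ops]
# Positive parts L_j with sum_j L_j† L_j = I.
L_ops = [np.diag([np.sqrt(0.3), np.sqrt(0.8)]),
         np.diag([np.sqrt(0.7), np.sqrt(0.2)])]
M_ops = [U @ L for U, L in zip(U_ops, L_ops)]

def Theta(x):
    # Eq. (67): exp( i n/(n-1) sum_j x_j (x_j - 1/n) H_j )
    Hsum = sum(xj * (xj - 1.0 / n) * Hj for xj, Hj in zip(x, H_ops))
    return expm_iherm(n / (n - 1.0) * Hsum)

def Lam(x):
    # Eq. (68): f(x)^{1/2} (sum_j x_j L_j† L_j)^{1/2}, f(x) = 1 + n sum_j x_j(1-x_j)
    f = 1.0 + n * sum(xj * (1.0 - xj) for xj in x)
    S = sum(xj * Lj.conj().T @ Lj for xj, Lj in zip(x, L_ops))
    return np.sqrt(f) * sqrtm_psd(S)

def Mmap(x):
    return Theta(x) @ Lam(x)

e = np.full(n, 1.0 / n)
v = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

assert np.allclose(Mmap(e), np.eye(n), atol=1e-10)   # identity at e
assert all(np.allclose(Mmap(v[j]), M_ops[j], atol=1e-10) for j in range(n))
print("map conditions verified")
```

At $e$ the exponent of $\hat\Theta$ vanishes and $f(e)=n$ exactly cancels the $1/\sqrt n$ from the operator square root; at a vertex $v^{(j)}$ the exponent reduces to $i\hat H_j$ and $f=1$, so the product is $\hat U_j\hat L_j = \hat M_j$.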
The map ^ M(x) is given by ^ (x) ^ (x). 39 The quantum state will respectively evolve according to the following for- mula t = ^ M(x t ) 0 ^ M y (x t ) Tr( ^ M y (x t ) ^ M(x t ) 0 ) : (69) It is obvious that the constructed map is far from unique|there are innitely many maps that satisfy the three required conditions. This shows that there are numerous ways to decompose a strong generalized measurement into weak measurements of the type that we are considering. In the very special case where the measurement operators are positive and commuting, we will now show that the state evolves according to a generalized quantum state diusion equation. As the measurement operators are positive, the unitaries in the their polar decomposition are all equal to the identity oper- ator. Thus the map ^ M(x) is just ^ (x). A pure state will evolve by the analog of (69): j t i = ^ (x t )j 0 i h 0 j ^ 2 (x t )j 0 i 1 2 : (70) As the measurement operators commute, it is easy to take derivatives, and then use the It o rule to get the quantum diusion equation forj t i. In the following we use the Einstein summation rule over repeated upper and lower indices: dj t i = @j t i @x i dx i + 1 2 @ 2 j t i @x j @x k dx j dx k ; @j t i @x i = 1 2 0 @ ^ M 2 i x m ^ M 2 m D ^ M 2 i E 0 D x m ^ M 2 m E 0 1 A j t i; 40 @ 2 j t i @x j @x k = 1 4 0 @ ^ M 2 k ^ M 2 j (x m ^ M 2 m ) 2 + ^ M 2 j D ^ M 2 k E 0 x m ^ M 2 m D x l ^ M 2 l E 0 + D ^ M 2 j E 0 ^ M 2 k D x m ^ M 2 m E 0 x l ^ M 2 l 3 D ^ M 2 j E 0 D ^ M 2 k E 0 D x m ^ M 2 m E 2 0 1 C A j t i; dx i =g ij b j dt +a i dW =g ij D ^ M 2 j E 0 D x m ^ M 2 m E 0 dt +a i dW ; dx j dx k =g jk dt; with D ^ M 2 j E 0 =h 0 j ^ M 2 j j 0 i =h 0 j ^ M y j ^ M j j 0 i. It's easy to see that h ^ M 2 j i 0 hx m ^ M 2 mi 0 = ^ M 2 j x m ^ M 2 m where D ^ M 2 j E =h t j ^ M 2 j j t i. 
Denote
$$\hat A_i = \frac{\hat M_i^2}{x^m\hat M_m^2}. \qquad (71)$$
Putting this all together, we arrive at the following coupled stochastic differential equations:
$$d|\psi_t\rangle = -\frac18 g^{jk}\big(\hat A_j-\langle\hat A_j\rangle\big)\big(\hat A_k-\langle\hat A_k\rangle\big)|\psi_t\rangle\,dt + \frac12\big(\hat A_i-\langle\hat A_i\rangle\big)|\psi_t\rangle\,a^i_k\,dW^k,$$
$$dx^i = g^{ij}\langle\hat A_j\rangle\,dt + a^i_k\,dW^k. \qquad (72)$$
From here we can easily derive an equation for $\rho_t = |\psi_t\rangle\langle\psi_t|$:
$$d\rho_t = g^{jk}\Big(\hat Q_j\rho_t\hat Q_k - \frac12\big\{\hat Q_k\hat Q_j,\rho_t\big\}\Big)dt + \big\{\hat Q_j,\rho_t\big\}\,a^j_k\,dW^k,$$
$$dx^i = g^{ij}\,\mathrm{Tr}\big(\hat A_j\rho_t\big)\,dt + a^i_k\,dW^k, \qquad (73)$$
where
$$\hat Q_j(\rho_t,x) = \frac12\Big(\hat A_j(x) - \mathrm{Tr}\big(\hat A_j(x)\rho_t\big)\Big). \qquad (74)$$

2.5 Discussion

In this chapter we have constructed stochastic processes that are continuous decompositions of a discrete, instantaneous measurement. The state of the system evolves in accordance with stochastic differential equations, which in the case of generalized measurements involve feedback based on the measurement history. We have proven that in the long-time limit this process arrives at the same final states, with the same probabilities, as the strong measurement. This gives us the ability to think in terms of this continuous process when we consider measurements, and a natural way to express statements involving measurements in the language of stochastic calculus. For example, we can easily differentiate with respect to the process using Itô's calculus. One application of this idea has already been made to entanglement monotones, giving new differential conditions for them, as shown in [68].

A related and still unsolved question concerns the converse of the problem considered in this paper: namely, if we are able to perform a certain restricted class of operations, or if we have control over certain parameters of a quantum system, to what extent can we utilize this freedom to set up an experiment in which the system's evolution is governed by the model considered above? Given some class of weak measurements that can be experimentally performed, what class of generalized measurements can we generate?
This question has relevance when one starts exploring an experimental realization of the proposed type of continuous measurement. Investigation in this direction could potentially also lead to a better understanding of the structure of the set of local operations and classical communication, which is important for entanglement manipulation.

Chapter 3: Hitting time for the continuous quantum walk

3.1 Preliminaries

There are two main types of quantum walks: continuous-time and discrete-time quantum walks. Discrete-time quantum walks evolve by the application of a unitary evolution operator at discrete time intervals, and continuous-time walks evolve under a (usually time-independent) Hamiltonian. Continuous-time quantum walks were defined by Farhi and Gutmann in [31] as a quantized version of continuous-time classical random walks. Classical random walks are used in computer science to design probabilistic algorithms for computational problems, most notably for 3-satisfiability (3-SAT) [64]. In a similar vein, quantum walks provide a framework for the design of quantum algorithms, and as such have been used in many quantum algorithms, such as element distinctness [5], matrix product verification [17], triangle finding [59] and group commutativity testing [58]. Recently, a quantum algorithm for evaluating NAND trees has been proposed which uses a quantum walk as part of the algorithm [32]. In order to understand how to better use quantum walks for algorithms, we need to study the properties of these walks. There have been many papers which study the behavior of quantum walks on various graphs. For example, quantum walks on the line have been examined for the continuous-time case in Refs. [21, 31, 22] and for the discrete-time case in [66, 8, 16, 14, 15]. The $N$-cycle is treated in [?, 79], and the hypercube in [75, 63, 50, 54, 55]. Quantum walks on general undirected graphs are defined in [51, 3], and on directed graphs in [62].
Kendon [52] has a recent review of the work done in this field so far, focusing mainly on decoherence. Other reviews include an introductory review by Kempe [49], and a review from the perspective of algorithms by Ambainis [3].

Different quantitative characterizations of quantum walks are defined by analogy with the classical ones, such as mixing times, hitting (absorbing) times, correlation times, etc. [2]. Often for this purpose the evolution of the quantum walk must be modified to include not only the Hamiltonian evolution, but also a measurement process to extract information about the current state of the walk. There is a natural way to introduce such a measurement process in the discrete case: namely, a measurement is made after each step of unitary evolution. The outcome of this measurement is used in defining the characteristic time scale in question. In the case of the continuous-time walk, such a natural definition of a measured walk does not exist; there is no intrinsic time step after which we can perform the measurement. Classically this is not a difficulty, because measurements do not disturb the state of the system. The quantum case is quite different. If we choose the measurement times arbitrarily, they can be either too long or too short with respect to the unitary evolution of the quantum walk. We can either miss important details of the evolution by measuring too infrequently, or overly distort the unitary evolution by measuring too often. In the limiting case, we can completely freeze the evolution by the quantum Zeno effect [60].

Hitting times for discrete-time quantum walks have been defined and analyzed in [50, 55]. The effect on mixing times of making random measurements, and its possible algorithmic applications, has been studied in [53, 73, 74].
Below, we introduce a measurement process for the continuous-time quantum walk which gives rise to a definition of, and an analytical formula for, the hitting time as a function of the measurement rate (or, equivalently, the measurement strength). We explore the limits of measuring too weakly or too strongly, and show that the hitting time diverges in either case. This suggests the existence of an optimal rate of measurement, which depends on the unitary dynamics of the particular walk. We also show another difference from hitting times for classical random walks. In the classical case, a random walk on a finite connected graph always leads to a finite hitting time for any vertex. This is not true in the quantum case. The existence of infinite hitting times has been argued for discrete-time and continuous-time quantum walks in [55]; in this paper we show this explicitly for continuous-time quantum walks based on the definition of hitting time that we give, and derive conditions for the existence of infinite hitting times. Another sufficient condition that we prove is that if the complementary graph is not connected, this automatically leads to infinite hitting times for the continuous-time quantum walk on the original graph.

3.2 Hitting time definition for the continuous quantum walk

We want to define a hitting time for the continuous unitary evolution on an undirected graph $\Gamma(V,E)$, where $V$ is the set of vertices and $E$ is the set of edges. Two
When we turn to the quantum case the walk on the graph is not dened as a stochastic process on the vertices of the graph but as the unitary evolution of a closed quantum system with a Hilbert space dened as above. In order to be able to tell when the quantum walk has reached a vertex, we need to measure the system in order to gain information about the current state of the system. There are several reasonable ways to do that in the continuous-time case. We could perform strong measurements periodically with some xed but arbitrary period T . This is not unlike in the discrete case, in which the period T is given naturally by the walk itself: a measurement is performed after each unitary evolution step. This way to perform a measurement in our case is unsatisfactory. We have no way to know how to choose T . If we choose it too small we could introduce too much decoherence, eectively masking the unitary evolution of the walk, or even worse, freezing it. IfT is too large then we can miss the moment when the walk actually reaches the nal vertex. And in general, the unitary transformation between measurements can be complicated and dicult to work out (unlike the discrete-time case). Another way to measure the system is through strong measurements but per- formed at random times. The measurement times are chosen according to some probability distribution with some measurement rate. The advantage is that we 47 don't introduce an articial periodicity into the dynamics, and it allows one to calculate averaged eects over dierent measurement patterns. The disadvan- tage is that it is still necessary to introduce a time scale for the measurements, this time given by the rate at which measurements are performed. A third way to measure the system is using \weak" measurements, analogous to the way photodetection is described. 
In a small time period t we perform a measurement which either allows the system to evolve unitarily with a probability 1, or performs a measurement to determine whether it has reached the nal state with a probability . In this case, the evolution is unitary for most of the time, with jumps at random time when the measurement is performed. This case and the previous one are actually equivalent|the values of t and determine the measurement rate |but they give a somewhat dierent intuition about how to look at the measurement procedure. In the second case, it is natural to describe the random times is by a Poisson process with a given rate. In the third case the role of a rate is played by the strength of the \weak" measurement. In his work on mixing times [74], Peter Richter argued that the qualitative and even quantitative behavior of the system is not too sensitive to the exact details of the measurement scheme. This suggests that the rst choice above might be as good as the the other two, but it still introduces a discrete structure which is not desirable in dealing with continuous evolution. In the following, we explore a measurement scheme described in the second and third cases. We do a measurement to check whether the system is in the nal statejv f i, given by the measurement operatorsfP f ;Q f g where P f =jv f ihv f j, Q f = I P f . We have to specify the times when we perform the measurements. We will measure the system at random times distributed according to a Poisson 48 process X t with rate > 0. Each time we observe a jump in the Poisson process we measure the system. Between the moments at which we perform the measurements the system evolves unitarily with a HamiltonianH = L, where L is the discrete version of the continuous Laplacianr 2 . In our case it is given byL =AD, whereD is a diagonal matrix in the basis spanned by the vertex states with the degree of each vertex along the diagonal, and A is the adjacency matrix of the graph [31, 23]. 
In this paper we take the proportionality constant to be 1, so that H = D - A. If the degree of the vertex v_n is d_n, then we have the following representations of D and A:

D = \sum_n d_n |v_n\rangle\langle v_n|, \qquad (75)

A = \sum_{n,m} a_{nm} |v_n\rangle\langle v_m|, \qquad (76)

where

a_{nm} = \begin{cases} 1, & \{v_n, v_m\} \in E, \\ 0, & \{v_n, v_m\} \notin E \end{cases} \qquad (77)

is the adjacency matrix of the graph (V, E). Let \omega = (t_1, t_2, \ldots) be a sequence of random times at which the jumps of the Poisson process are observed, with t_n \in \mathbb{R} and 0 < t_1 < t_2 < \cdots. For convenience we will take t_0 = 0. The sequences \omega belong to a probability space (\Omega, \mathcal{F}, P) on which the Poisson process X_t is defined:

X : \mathbb{R}_+ \times \Omega \to \{0, 1, 2, \ldots\}, \qquad X_t(\omega) = n \ \text{ if } \ t \in [t_n, t_{n+1}). \qquad (78)

Here \Omega is the set of all sequences of random times \omega, and \mathcal{F} is the \sigma-algebra generated by the Poisson process. The probability measure P on \Omega is the one induced by the Poisson process. (For reference, see [41, 42].) For each sequence \omega \in \Omega we define the hitting time as

\tau_\omega = \sum_{n=1}^{\infty} t_n p_n, \qquad (79)

where p_n is the probability to find the system in the final state at time t_n, given that the system was not measured to be in the final state at any of the previous times t_{n-1}, \ldots, t_1. (For reference, see [55, 84].) Define the intervals between jumps t_{j-1} and t_j as \delta t_j = t_j - t_{j-1}. We want to average \tau_\omega over all possible trajectories \omega of the Poisson process, and take this as our definition of the hitting time:

\tau_h = E_P(\tau_\omega) = \int_\Omega \tau_\omega \, dP(\omega). \qquad (80)

We would like to find an analytical formula for \tau_h. From the definition of p_n, we have

p_n = \mathrm{Tr}\left\{ P_f \left( \prod_{m=1}^{n-1} e^{-i(t_{m+1}-t_m)H} Q_f \right) e^{-it_1 H} \rho_i\, e^{it_1 H} \left( \overrightarrow{\prod_{m=1}^{n-1}} Q_f\, e^{i(t_{m+1}-t_m)H} \right) \right\}. \qquad (81)

The arrows above the products signify whether the operators entering the products are ordered from left to right or vice versa; in other words, \prod_{m=1}^{n} U_m = U_n U_{n-1} \cdots U_1 and \overrightarrow{\prod}_{m=1}^{n} U_m = U_1 U_2 \cdots U_n.
We now introduce superoperators \mathcal{U}_{\delta t} and \mathcal{Q}_f, defined by

\mathcal{U}_{\delta t}(X) = e^{-i\delta t H} X e^{i\delta t H}, \qquad (82)

\mathcal{Q}_f(X) = Q_f X Q_f, \qquad (83)

and use them to rewrite (81) as

p_n = \mathrm{Tr}\{ P_f\, \mathcal{U}_{\delta t_n} \mathcal{Q}_f \mathcal{U}_{\delta t_{n-1}} \mathcal{Q}_f \cdots \mathcal{U}_{\delta t_1}(\rho_i) \}. \qquad (84)

We want to express the sum (79) as a function of the \{\delta t_n\}. We do that by adding, subtracting and rearranging terms in the sum:

\tau_\omega = t_1 p_1 + t_2 p_2 + t_3 p_3 + t_4 p_4 + \cdots
 = t_1 p_1 + t_2 p_2 - t_1 p_2 + t_1 p_2 + t_3 p_3 - t_2 p_3 + t_2 p_3 - t_1 p_3 + t_1 p_3 + t_4 p_4 - \cdots
 = t_1 (p_1 + p_2 + p_3 + \cdots) + (t_2 - t_1)(p_2 + p_3 + \cdots) + (t_3 - t_2)(p_3 + p_4 + \cdots) + \cdots
 = \sum_{k=1}^{\infty} \delta t_k \sum_{n=k}^{\infty} p_n. \qquad (85)

Because the \{t_n\} are the event times of a Poisson process, the interval times \{\delta t_n\} are independent and identically distributed random variables, exponentially distributed with parameter \gamma and probability density function

f_{\delta t_n}(\delta t) = \begin{cases} \gamma e^{-\gamma \delta t}, & \delta t \geq 0, \\ 0, & \delta t < 0. \end{cases} \qquad (86)

Knowing that, we re-express formula (80):

\tau_h = \left( \prod_{l=1}^{\infty} \int_0^{\infty} d\delta t_l \, \gamma e^{-\gamma \delta t_l} \right) (\tau_\omega). \qquad (87)

Then

\tau_h = \sum_{k=1}^{\infty} \sum_{n=k}^{\infty} \left( \prod_{l=1}^{\infty} \int_0^{\infty} d\delta t_l \, \gamma e^{-\gamma \delta t_l} \right) (\delta t_k \, p_n). \qquad (88)

In the above expression there are two types of integrals:

\mathcal{A}(X) = \int_0^{\infty} d\delta t \, \gamma e^{-\gamma \delta t} \, \mathcal{U}_{\delta t}(X), \qquad (89)

\mathcal{B}(X) = \int_0^{\infty} d\delta t \, \gamma e^{-\gamma \delta t} \, \delta t \, \mathcal{U}_{\delta t}(X). \qquad (90)

Integrating by parts, we get the following equations for the operators A = \mathcal{A}(X) and B = \mathcal{B}(X):

A + \frac{i}{\gamma}[H, A] = X, \qquad (91)

B + \frac{i}{\gamma}[H, B] = \frac{1}{\gamma}\mathcal{A}(X), \qquad (92)

where \mathcal{A}(X) is the solution to the first equation. (We will prove below that this solution exists.) Defining the superoperator

\mathcal{L}_\gamma(X) = X + \frac{i}{\gamma}[H, X],

we rewrite these equations as

\mathcal{L}_\gamma(A) = X, \qquad (93)

\mathcal{L}_\gamma(B) = \frac{1}{\gamma}\mathcal{A}(X). \qquad (94)

We want to prove that the superoperator \mathcal{L}_\gamma is invertible when \gamma is a real number. For this we need to know how the adjoint of a superoperator is defined with respect to the Hilbert-Schmidt inner product for operators,

\langle X, Y \rangle_{HS} = \mathrm{Tr}(X^\dagger Y). \qquad (95)

Using this inner product, we see that if \mathcal{C}(X) = \sum_n c_n C_n X D_n^\dagger, the adjoint of \mathcal{C}(X) is given by \mathcal{C}^\dagger(X) = \sum_n \bar{c}_n C_n^\dagger X D_n. From that it follows that \mathcal{L}_\gamma is a normal superoperator:

\mathcal{L}_\gamma^\dagger \mathcal{L}_\gamma - \mathcal{L}_\gamma \mathcal{L}_\gamma^\dagger = 0. \qquad (96)

This means that \mathcal{L}_\gamma is diagonalizable.
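Both claims above, the normality of \mathcal{L}_\gamma and the solvability of (91), can be checked numerically by representing superoperators as n^2-by-n^2 matrices (a sketch; the row-major vectorization convention, where X \mapsto AXB becomes the matrix A \otimes B^T, is an implementation choice):

```python
import numpy as np

# Numerical check of two claims about L_gamma(X) = X + (i/gamma)[H, X]:
# (a) it is a normal superoperator, Eq. (96); (b) A = L_gamma^{-1}(X)
# solves A + (i/gamma)[H, A] = X, Eq. (91).  Row-major vec convention:
# the map X -> A X B has matrix kron(A, B.T).

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.diag(A.sum(1)) - A            # L_3 Hamiltonian, H = D - A
gamma, n = 1.0, 3
I = np.eye(n)
comm = np.kron(H, I) - np.kron(I, H.T)          # matrix of X -> [H, X]
Lsup = np.eye(n * n) + (1j / gamma) * comm      # matrix of L_gamma

# (a) normality: the matrix adjoint is the superoperator adjoint under <.,.>_HS
norm_defect = Lsup @ Lsup.conj().T - Lsup.conj().T @ Lsup

# (b) solve L_gamma(A) = X and verify Eq. (91) directly
X = np.diag([1., 0., 0.])                       # arbitrary test operator
Amat = np.linalg.solve(Lsup, X.reshape(-1)).reshape(n, n)
residual = Amat + (1j / gamma) * (H @ Amat - Amat @ H) - X
print(np.abs(norm_defect).max(), np.abs(residual).max())
```

Both residuals vanish to machine precision, since \mathcal{L}_\gamma = \mathcal{I} + i\mathcal{C} with \mathcal{C} Hermitian is automatically normal.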
If X_n is an eigenvector of \mathcal{L}_\gamma, then X_n is an eigenvector of the Hermitian and anti-Hermitian parts of \mathcal{L}_\gamma separately, which are given by

\mathcal{L}_H(X) = \frac{1}{2}(\mathcal{L}_\gamma + \mathcal{L}_\gamma^\dagger)(X) = \mathcal{I}(X) = X

and

\mathcal{L}_A(X) = \frac{1}{2}(\mathcal{L}_\gamma - \mathcal{L}_\gamma^\dagger)(X) = \frac{i}{\gamma}[H, X],

respectively. Let us denote the eigenvalue of \mathcal{L}_A corresponding to X_n by i x_n (with x_n \in \mathbb{R}, because \mathcal{L}_A is anti-Hermitian). Then \mathcal{L}_\gamma(X_n) = (1 + i x_n) X_n \neq 0. This proves that each eigenvalue of \mathcal{L}_\gamma is nonzero, and that \mathcal{L}_\gamma is invertible. The solutions to equations (91) and (92) are

A = \mathcal{L}_\gamma^{-1}(X), \qquad (97)

B = \frac{1}{\gamma}\mathcal{L}_\gamma^{-2}(X). \qquad (98)

Substituting these in (88) we get

\tau_h = \sum_{k=1}^{\infty} \sum_{n=k}^{\infty} \frac{1}{\gamma} \mathrm{Tr}\left\{ P_f \left(\mathcal{L}_\gamma^{-1}\mathcal{Q}_f\right)^{n-k} \mathcal{L}_\gamma^{-2} \left(\mathcal{Q}_f \mathcal{L}_\gamma^{-1}\right)^{k-1}(\rho_i) \right\}
 = \sum_{k=1}^{\infty} \sum_{n=k}^{\infty} \frac{1}{\gamma} \mathrm{Tr}\left\{ P_f \left(\mathcal{L}_\gamma^{-1}\mathcal{Q}_f\right)^{n-k} \mathcal{L}_\gamma^{-1} \left(\mathcal{L}_\gamma^{-1}\mathcal{Q}_f\right)^{k-1} \mathcal{L}_\gamma^{-1}(\rho_i) \right\}
 = \sum_{k=0}^{\infty} \sum_{l=0}^{\infty} \frac{1}{\gamma} \mathrm{Tr}\left\{ P_f \left(\mathcal{L}_\gamma^{-1}\mathcal{Q}_f\right)^{l} \mathcal{L}_\gamma^{-1} \left(\mathcal{L}_\gamma^{-1}\mathcal{Q}_f\right)^{k} \mathcal{L}_\gamma^{-1}(\rho_i) \right\}. \qquad (99)

Note that the eigenvalues of \mathcal{L}_\gamma are always greater than or equal to 1 in absolute value, from which it follows that the eigenvalues of \mathcal{Q}_f \mathcal{L}_\gamma^{-1} are all less than or equal to 1 in absolute value. If all the eigenvalues are strictly less than 1 in absolute value, the following sum exists:

\sum_{k=0}^{\infty} \left(\mathcal{L}_\gamma^{-1}\mathcal{Q}_f\right)^k = \left(\mathcal{I} - \mathcal{L}_\gamma^{-1}\mathcal{Q}_f\right)^{-1}.

Substituting this into (99) and denoting \mathcal{N}_\gamma = \mathcal{L}_\gamma - \mathcal{Q}_f, we get the following formula for the hitting time:

\tau_h = \frac{1}{\gamma} \mathrm{Tr}\left\{ P_f \mathcal{N}_\gamma^{-2}(\rho_i) \right\}. \qquad (100)

This formula is closely analogous to the formula for the hitting time derived in [55, 54] for the case of a discrete-time quantum walk. If \mathcal{Q}_f \mathcal{L}_\gamma^{-1} has any eigenvalues equal to 1, the inverse in the above formula should be thought of as a pseudoinverse. Another quantity that may be defined is the total probability to ever hit the final vertex [84, 55]:

p_h = \sum_{n=1}^{\infty} p_n. \qquad (101)

We can derive a formula similar to (100) for p_h:

p_h = \mathrm{Tr}\left\{ P_f \mathcal{N}_\gamma^{-1}(\rho_i) \right\}. \qquad (102)

When the hitting time is not infinite, or equivalently when the superoperator \mathcal{L}_\gamma - \mathcal{Q}_f is invertible, p_h = 1. In the case of infinite hitting times, we replace the inverse with a pseudoinverse, as before.
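Formulas (100) and (102) are straightforward to evaluate once the superoperators are vectorized (a sketch, again with the row-major vec convention; the graph and initial state are illustrative choices):

```python
import numpy as np

# Closed-form hitting quantities for the path graph L_3 with final vertex
# v_1 and initial state |v_2><v_2|, gamma = 1:
#   p_h   = Tr{P_f N^{-1}(rho_i)}        (Eq. 102)
#   tau_h = (1/gamma) Tr{P_f N^{-2}(rho_i)}  (Eq. 100)
# with N = L_gamma - Q_f as an n^2 x n^2 matrix, kron(A, B.T) <-> X -> A X B.

def vec(X):
    return X.reshape(-1)            # row-major vectorization

n = 3
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.diag(A.sum(1)) - A
gamma = 1.0

f = 0                               # final vertex v_1
Pf = np.zeros((n, n)); Pf[f, f] = 1.0
Qf = np.eye(n) - Pf

I = np.eye(n)
comm = np.kron(H, I) - np.kron(I, H.T)          # X -> [H, X]
Nmat = np.eye(n * n) + (1j / gamma) * comm - np.kron(Qf, Qf)

rho_i = np.zeros((n, n)); rho_i[1, 1] = 1.0     # start at v_2
Ninv = np.linalg.inv(Nmat)
p_h = np.real(vec(Pf) @ Ninv @ vec(rho_i))
tau_h = np.real(vec(Pf) @ Ninv @ Ninv @ vec(rho_i)) / gamma
print(p_h, tau_h)
```

Since every eigenvector of this H overlaps v_1, the superoperator N is invertible here, so p_h comes out exactly 1 and \tau_h is a finite positive number, as the text asserts for the finite-hitting-time case.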
There is another way to derive formulas (100) and (102): by looking at this procedure as an iterated weak measurement (case 3 discussed above). Weak measurements have been considered in the literature [1, 13]; one can think of them as measurements that disturb the state of the system by a small amount and thus give little information about the state of the system. There are two types of weak measurement. The first type leaves the system in a state close to the initial one no matter what outcome is observed. The second type can change the state of the system dramatically for some outcomes, but those outcomes are observed with very small probability, so that on average the change of the state is small. Below we consider a measurement of the second type. Instead of summing over all trajectories of the Poisson process, at each time period \delta t we perform a generalized measurement with measurement operators

M_0 = \sqrt{1-\varepsilon^2}\, e^{-i\delta t H},
M_1 = \varepsilon Q_f, \qquad (103)
M_2 = \varepsilon P_f.

These operators form a complete measurement, as \sum_{i=0}^{2} M_i^\dagger M_i = I. The measurement is weak (in the particular sense of giving little information about the system on average, noted above) when \varepsilon \ll 1. Let us define a positive matrix \rho_c describing the state of the system at time t, conditioned on the assumption that outcome "2" has not occurred up to this time. We measure the system repeatedly at intervals of time \delta t, using the same measurement operators, and if we don't observe outcome "2" the state of the system is described by the following matrix:

\rho_c(t + \delta t) = \sum_{i=0}^{1} M_i \rho_c(t) M_i^\dagger. \qquad (104)

We expand in powers of the small parameter \varepsilon and take the limit \varepsilon \to 0 and \delta t \to 0, keeping the ratio \varepsilon/\sqrt{\delta t} constant, and obtain a master equation for \rho_c(t). This gives the connection between the strength of the measurement \varepsilon and the measurement rate \gamma = \lim_{\delta t \to 0} \varepsilon^2/\delta t.
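Before carrying out the expansion, the limit just described can be checked numerically (a sketch; the parameter values are illustrative choices): iterate the no-detection update \rho_c \mapsto (1-\varepsilon^2)\, e^{-i\delta t H}\rho_c\, e^{i\delta t H} + \varepsilon^2 Q_f \rho_c Q_f with \gamma = \varepsilon^2/\delta t held fixed, and watch the survival probability \mathrm{Tr}\,\rho_c converge as \varepsilon \to 0.

```python
import numpy as np

# Iterated-weak-measurement sketch: apply the no-detection update
#   rho <- (1 - eps^2) U rho U^dag + eps^2 Qf rho Qf,  U = e^{-i dt H},
# with dt = eps^2/gamma, so gamma stays fixed while eps shrinks.
# Graph L_3, start v_2, final vertex v_1; values are illustrative.

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.diag(A.sum(1)) - A
evals, V = np.linalg.eigh(H)
Qf = np.eye(3) - np.diag([1., 0., 0.])
gamma, t_total = 1.0, 5.0

def survival(eps):
    dt = eps**2 / gamma
    u = V @ np.diag(np.exp(-1j * evals * dt)) @ V.conj().T
    rho = np.diag([0., 1., 0.]).astype(complex)
    for _ in range(int(round(t_total / dt))):
        rho = (1 - eps**2) * (u @ rho @ u.conj().T) + eps**2 * (Qf @ rho @ Qf)
    return np.real(np.trace(rho))       # prob. of no detection up to t_total

s = [survival(e) for e in (0.2, 0.1, 0.05)]
print(s)
```

The successive differences shrink as \varepsilon decreases, consistent with the survival probabilities approaching a continuous-measurement limit at fixed \gamma.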
After we expand to second order in \varepsilon we get

\rho_c(t+\delta t) = (1-\varepsilon^2)(1 - i\delta t H)\rho_c(1 + i\delta t H) + \varepsilon^2 Q_f \rho_c Q_f + O(\varepsilon^3)
 = \rho_c(t) - i\delta t [H, \rho_c] - \varepsilon^2\left(\rho_c(t) - Q_f \rho_c Q_f\right) + O(\varepsilon^3). \qquad (105)

Taking the limit \delta t \to 0 and using the facts that

\lim_{\delta t \to 0} \frac{\rho_c(t+\delta t) - \rho_c(t)}{\delta t} = \frac{d\rho_c}{dt}, \qquad \lim_{\delta t \to 0} \frac{\varepsilon^2}{\delta t} = \gamma,

we arrive at the master equation for \rho_c:

\frac{d\rho_c}{dt} = -i[H, \rho_c] - \gamma\left(\rho_c - Q_f \rho_c Q_f\right) = -\gamma \mathcal{N}_\gamma(\rho_c). \qquad (106)

Note that \rho_c is positive but not normalized; the trace of \rho_c is the probability that measurement result "2" has not been seen up until time t. The total probability to hit the final vertex, p_h, and the hitting time, \tau_h, are given by

p_h = \gamma \int_0^{\infty} \mathrm{Tr}\{P_f \rho_c(t)\}\, dt, \qquad (107)

\tau_h = \gamma \int_0^{\infty} t\, \mathrm{Tr}\{P_f \rho_c(t)\}\, dt. \qquad (108)

Substituting the solution of (106), \rho_c(t) = e^{-\gamma t \mathcal{N}_\gamma}(\rho_c(0)), in (107) and (108) and integrating by parts, we obtain formulas (100) and (102) for the total probability to hit and the hitting time.

3.3 Conditions for existence of infinite hitting times

We want to prove that the existence of infinite hitting times is equivalent to the non-invertibility of the superoperator \mathcal{L}_\gamma - \mathcal{Q}_f. We will need the following definitions [36, 39].

Definition 7. A matrix pencil A + sB (where A and B are n \times n matrices and s is a complex number) is said to be regular if there exists at least one complex s for which the pencil is nonsingular.

Definition 8. A complex number s_0 is a finite eigenvalue of the regular matrix pencil A + sB if \det(A + s_0 B) = 0.

Definition 9. The regular matrix pencil A + sB is said to have an infinite eigenvalue if B is a singular matrix.

In discrete-time quantum walks, infinite hitting times were observed for specific graphs and initial states [54, 55]. This occurs when, starting from the initial state, the total probability to ever find the walk at the final vertex is less than 1. It has been argued in the papers above that infinite hitting times occur given that the graph has a symmetry group that leads to a degenerate Hamiltonian of the quantum walk.
The symmetry group of the graph splits the Hilbert space into subspaces invariant under the action of the group, on each of which the group acts with one of its irreducible representations. The evolution operator of the quantum walk leaves these subspaces invariant, because the symmetry group of the graph is necessarily a symmetry group of the Hamiltonian. Thus, if the final state lies in one invariant subspace but the initial state does not lie entirely in the same subspace, there will be a nonzero probability to never hit the final state. For such a situation to occur, the final state, which we always assume to be a vertex state, must lie entirely in one of those invariant subspaces. A sufficient condition for that in the case of a discrete-time quantum walk on a regular graph is the presence in the Hamiltonian of an irreducible representation of the symmetry group with a dimension larger than the dimension of the coin space. For the continuous-time quantum walk, the equivalent condition is the presence of an irreducible representation with dimension larger than one. As Abelian groups have only one-dimensional irreducible representations, we might expect that in order to have infinite hitting times we need a symmetry group that is not Abelian. We will show that this is not true. The symmetry of the graph can lead to infinite hitting times even if the Hamiltonian is not degenerate, as is the case when the symmetry group of the graph is Abelian. Having a non-Abelian symmetry group is a sufficient, but not a necessary, condition.

Another condition for the existence of infinite hitting times is connected to the invertibility of the superoperator \mathcal{N}_\gamma which enters formulas (100) and (102). Consider all operators X such that [H, X] = 0 and P_f X = X P_f = 0, and denote the projector on the linear subspace of all such operators by \mathcal{P}. We will prove that \mathcal{P} \neq 0 if and only if \mathcal{N}_\gamma = \mathcal{L}_\gamma - \mathcal{Q}_f is not regular. If \mathcal{P} \neq 0, then choose \bar{X} such that \mathcal{P}(\bar{X}) = \bar{X} \neq 0.
Then [H, \bar{X}] = 0, and from P_f \bar{X} = \bar{X} P_f = 0 it follows that \mathcal{Q}_f(\bar{X}) = \bar{X}. Thus \mathcal{N}_\gamma(\bar{X}) = 0, which means that \mathcal{N}_\gamma is singular for every \gamma and thus not regular.

If \mathcal{N}_\gamma is not regular, then it is non-invertible for every \gamma. Let us fix \gamma to be real and different from 0. There exists an \bar{X} \neq 0 such that \mathcal{N}_\gamma(\bar{X}) = 0. We have already proven that \mathcal{L}_\gamma is invertible for real \gamma. Then

\mathcal{N}_\gamma(\bar{X}) = \mathcal{L}_\gamma\left(\mathcal{I} - \mathcal{L}_\gamma^{-1}\mathcal{Q}_f\right)(\bar{X}) = 0 \qquad (109)

and thus

\mathcal{L}_\gamma^{-1}\mathcal{Q}_f(\bar{X}) = \bar{X}. \qquad (110)

Taking into account that all eigenvalues of \mathcal{L}_\gamma are greater than or equal to 1 in absolute value, the above equality is true only if

\mathcal{L}_\gamma^{-1}(\bar{X}) = \bar{X}, \qquad (111)

\mathcal{Q}_f(\bar{X}) = \bar{X}, \qquad (112)

which are equivalent to

[H, \bar{X}] = 0, \qquad (113)

P_f \bar{X} = \bar{X} P_f = 0. \qquad (114)

This means that \mathcal{P} is nonzero.

Let's explicitly give the form of the projective superoperator \mathcal{P} in terms of the projectors on the eigenspaces of the Hamiltonian H. For this purpose we have to define the intersection operation \cap for orthogonal projections. For any two orthogonal projection operators P_1 and P_2, P_1 \cap P_2 will denote the orthogonal projector on the subspace which is the intersection of the subspaces onto which P_1 and P_2 project. Let the Hamiltonian H have the decomposition

H = \sum_{i=1}^{r} E_i P_i,

where the E_i are the eigenvalues of H (E_i \neq E_j for i \neq j) and the P_i are the projectors onto the eigenspaces of H corresponding to the eigenvalues E_i. Since H is Hermitian, P_i P_j = \delta_{ij} P_i, and \mathrm{Tr}\, P_i = d_i is the multiplicity of the eigenvalue E_i. The projector \mathcal{P} is then

\mathcal{P}(X) = \sum_{i=1}^{r} (P_i \cap Q_f)\, X\, (P_i \cap Q_f). \qquad (115)

From this form it is easy to see that \mathcal{P} is a completely positive superoperator. As such, if it is different from 0, then there must exist a density matrix \bar{\rho} such that \mathcal{P}(\bar{\rho}) = \bar{\rho}. If the walk begins in such a state \bar{\rho}, it will never arrive at the final vertex.

Now we will prove that if \mathcal{N}_\gamma is regular as a matrix pencil, then all its eigenvalues lie on the imaginary axis. Let's assume that \mathcal{N}_\gamma is invertible for some \gamma \neq 0 and \mathcal{N}_{\gamma_0} is non-invertible for some \gamma_0 \neq 0, \gamma_0 \neq \gamma.
Then \mathcal{N}_\gamma(X_0) \neq 0 for all X_0 \neq 0, and there exists an X_0 \neq 0 such that \mathcal{N}_{\gamma_0}(X_0) = 0. Then (\mathcal{N}_\gamma - \mathcal{N}_{\gamma_0})(X_0) = i(1/\gamma - 1/\gamma_0)[H, X_0] \neq 0, and therefore [H, X_0] \neq 0. Analogously, (\gamma\mathcal{N}_\gamma - \gamma_0\mathcal{N}_{\gamma_0})(X_0) = (\gamma - \gamma_0)(\mathcal{I} - \mathcal{Q}_f)(X_0) \neq 0, and therefore (\mathcal{I} - \mathcal{Q}_f)(X_0) \neq 0. As \mathcal{I} - \mathcal{Q}_f is a projector, it follows that

\langle X_0, (\mathcal{I} - \mathcal{Q}_f)(X_0) \rangle_{HS} \neq 0. \qquad (116)

Taking into account that \mathcal{I} - \mathcal{Q}_f and \mathcal{H}(\cdot) = [H, \cdot] are both Hermitian superoperators, \langle X_0, (\mathcal{I} - \mathcal{Q}_f)(X_0)\rangle_{HS} and \langle X_0, \mathcal{H}(X_0)\rangle_{HS} are both real numbers. Denoting r_0 = \mathrm{Re}(1/\gamma_0) and i_0 = \mathrm{Im}(1/\gamma_0), we have

\langle X_0, (\mathcal{I} - \mathcal{Q}_f - i_0 \mathcal{H})(X_0) \rangle_{HS} + i\, r_0 \langle X_0, \mathcal{H}(X_0) \rangle_{HS} = 0. \qquad (117)

This equality is only possible if both the real and imaginary parts vanish, implying that r_0 = 0 and hence \mathrm{Re}(\gamma_0) = 0. (If r_0 \neq 0, the imaginary part would force \langle X_0, \mathcal{H}(X_0)\rangle_{HS} = 0, and the real part would then contradict (116).)

Using this result, we will now prove that if \mathcal{N}_\gamma is a regular matrix pencil, then the hitting time \tau_h behaves regularly as a function of \gamma on the real line: it does not diverge for any real \gamma except as \gamma goes to 0 or to infinity. Physically, this means that only when we measure either very weakly or very strongly do we fail to find the particle at the final vertex. The first limit is easy to understand: if we never measure, we will never find the particle anywhere. The second limit, \gamma going to infinity, corresponds to the quantum Zeno effect, in which the evolution of the system is restricted to a subspace orthogonal to the final vertex. To prove this conclusion, we represent superoperators as matrices using the following isomorphism:

\Phi : \mathcal{C}(\cdot) = \sum_n c_n C_n (\cdot) D_n^\dagger \;\longmapsto\; \Phi(\mathcal{C}) = \sum_n c_n\, C_n \otimes \bar{D}_n. \qquad (118)

Now we represent the superoperator pencil \mathcal{N}_\gamma by the matrix pencil

N_\gamma = \Phi(\mathcal{N}_\gamma) = I \otimes I - Q_f \otimes \bar{Q}_f + \frac{i}{\gamma}\left(H \otimes I - I \otimes \bar{H}\right). \qquad (119)

By assumption this matrix pencil is regular.
Every regular matrix pencil A + \lambda B has the following canonical form:

A + \lambda B = T\, \mathrm{diag}\left(N^{(m_1)}_\lambda, \ldots, N^{(m_p)}_\lambda,\; J^{(n_1)}(\lambda - \lambda_1), \ldots, J^{(n_q)}(\lambda - \lambda_q)\right) S, \qquad (120)

where T and S are invertible matrices, constant with respect to \lambda, and \mathrm{diag}(N^{(m_1)}_\lambda, \ldots, N^{(m_p)}_\lambda, J^{(n_1)}, \ldots, J^{(n_q)}) is a block-diagonal matrix with the matrices N^{(m_1)}_\lambda, \ldots, N^{(m_p)}_\lambda, J^{(n_1)}, \ldots, J^{(n_q)} on the diagonal. The blocks N^{(m)}_\lambda and J^{(n)} are square matrices of order m and n respectively, of the form

N^{(m)}_\lambda = \begin{pmatrix} 1 & \lambda & 0 & \cdots & 0 \\ 0 & 1 & \lambda & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & \lambda \\ 0 & \cdots & 0 & 0 & 1 \end{pmatrix}, \qquad (121)

J^{(n)}(\lambda - \lambda_l) = \begin{pmatrix} \lambda - \lambda_l & 1 & 0 & \cdots & 0 \\ 0 & \lambda - \lambda_l & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & \lambda - \lambda_l & 1 \\ 0 & \cdots & 0 & 0 & \lambda - \lambda_l \end{pmatrix}, \qquad (122)

or more succinctly N^{(m)}_\lambda = I^{(m)} + \lambda K^{(m)} and J^{(n)}(\lambda - \lambda_l) = (\lambda - \lambda_l) I^{(n)} + K^{(n)}, where I^{(m)} is the identity matrix of order m and K^{(m)} is an m \times m matrix with 1s immediately above the diagonal and 0s everywhere else. The N^{(m)}_\lambda blocks are present when the matrix pencil has infinite eigenvalues, and the J^{(n)} blocks correspond to finite eigenvalues.

We want to examine the behavior of the inverse of a regular matrix pencil when \lambda approaches one of its eigenvalues. Assume that \lambda_0 is a finite eigenvalue of the pencil, and that the corresponding n \times n block for that eigenvalue is J^{(n)}(\lambda - \lambda_0). The inverse of this block is given by

\left[J^{(n)}(\lambda - \lambda_0)\right]^{-1} = \sum_{j=0}^{n-1} \frac{(-K)^j}{(\lambda - \lambda_0)^{j+1}}. \qquad (123)

In the above, K^0 = I and K = K^{(n)}. The inverses of all blocks that do not correspond to the eigenvalue \lambda_0 behave regularly when \lambda approaches \lambda_0. We also want to examine the behavior of N^{(n)}_\lambda when \lambda goes to infinity. Analogously, the inverse of this block is given by

\left(N^{(n)}_\lambda\right)^{-1} = \sum_{j=0}^{n-1} (-\lambda K)^j. \qquad (124)

The inverses of the blocks corresponding to finite eigenvalues behave regularly when \lambda approaches infinity.
As the matrix pencil

\tilde{N}_\lambda = N_{\gamma = 1/\lambda} = I \otimes I - Q_f \otimes \bar{Q}_f + i\lambda\left(H \otimes I - I \otimes \bar{H}\right) \qquad (125)

is regular and has both finite (\lambda = 0) and infinite eigenvalues (because both matrices I \otimes I - Q_f \otimes \bar{Q}_f and i(H \otimes I - I \otimes \bar{H}) are singular), both types of blocks, N^{(k)}_\lambda and J^{(l)}, are present in its normal form. If we express formula (100) in terms of matrices and vectors, with \lambda = 1/\gamma, we get the analogous expression

\tau_h = \lambda\, P_{v_f}^\dagger \tilde{N}_\lambda^{-2}\, v_i, \qquad (126)

where P_{v_f} and v_i are the vectorized versions of the matrices P_f and \rho_i. When \lambda goes to 0, we can see from formula (123) with \lambda_0 = 0 that the asymptotic behavior of \tau_h is given by

\tau_h = r_h(\lambda) + \frac{1}{\lambda}\, P_{v_f}^\dagger \left(S^{-1} P_0 T^{-1}\right)^2 v_i + O\!\left(\frac{1}{\lambda^2}\right), \qquad (127)

where r_h(\lambda) is a function which is regular in a neighborhood of \lambda = 0, T and S are the invertible matrices in the canonical form (120) of the matrix pencil \tilde{N}_\lambda, and P_0 is the projector on the eigenspace with eigenvalue \lambda = 0. Now, as long as P_{v_f}^\dagger (S^{-1} P_0 T^{-1})^2 v_i \neq 0, \tau_h will go to infinity when \lambda goes to 0 (\gamma going to infinity), no matter whether terms of higher order, O(1/\lambda^2), are present or not. Analogously, when \lambda goes to infinity, \tau_h has the asymptotic behavior

\tau_h = \lambda\, P_{v_f}^\dagger \left(S^{-1} P_\infty T^{-1}\right)^2 v_i + O(\lambda^2), \qquad (128)

where P_\infty is the projector on the eigenspace with infinite eigenvalue. Here again, as long as P_{v_f}^\dagger (S^{-1} P_\infty T^{-1})^2 v_i \neq 0, \tau_h will go to infinity when \lambda goes to infinity (\gamma going to 0), no matter whether terms of higher order, O(\lambda^2), are present or not.

[Figure 3: Graph examples K_2, L_3, K_3, L_4, KL_{3,1} and S_4, with assigned vertex labels v_1, v_2, ....]

As we shall see in the next section, the hitting time in the examples given below has the following form as a function of \gamma:

\tau_h = \frac{\alpha^{(1)}}{\gamma} + \beta^{(1)} \gamma, \qquad (129)

where the constants \alpha^{(1)} and \beta^{(1)} depend on the particular graph.

3.4 Examples

In this section, we will consider as examples the graphs in Fig. 3, using the labeling of the vertices given in the figure when necessary.
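The two-sided divergence of \tau_h can be seen directly on the simplest graph K_2 by evaluating formula (100) at several measurement rates (a sketch; the rate values and helper names are illustrative choices):

```python
import numpy as np

# tau_h(gamma) for K_2 (start v_2, final vertex v_1) from Eq. (100),
# tau_h = (1/gamma) Tr{P_f N^{-2}(rho_i)}, with N = L_gamma - Q_f
# vectorized row-major (kron(A, B.T) <-> X -> A X B).  tau_h diverges
# both as gamma -> 0 (no measurements) and gamma -> infinity (Zeno).

H = np.array([[1., -1.], [-1., 1.]])     # K_2 Hamiltonian
Pf = np.diag([1., 0.])
Qf = np.eye(2) - Pf
rho_i = np.diag([0., 1.])                # start at v_2

def tau_h(gamma, n=2):
    I = np.eye(n)
    comm = np.kron(H, I) - np.kron(I, H.T)
    N = np.eye(n * n) + (1j / gamma) * comm - np.kron(Qf, Qf)
    Ninv = np.linalg.inv(N)
    return np.real(Pf.reshape(-1) @ Ninv @ Ninv @ rho_i.reshape(-1)) / gamma

print(tau_h(0.01), tau_h(1.0), tau_h(100.0))
```

Both extreme rates give a far larger hitting time than the intermediate one, matching the 1/\gamma and \gamma terms of (129).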
We can see examples of infinite hitting times (p_h < 1) for the graphs L_3, K_3, KL_{3,1} and S_4. We can describe the probability to hit, p_h, and the hitting time, \tau_h, in another way, by specifying two operators P_{(\Gamma, v_f)} and H_{(\Gamma, v_f)} and calculating their expectations in the initial state:

p_h = \mathrm{Tr}\{ P_{(\Gamma, v_f)}\, \rho_i \}, \qquad (130)

\tau_h = \mathrm{Tr}\{ H_{(\Gamma, v_f)}\, \rho_i \}. \qquad (131)

Here \rho_i is the density matrix describing the initial state of the system. These equations follow from formulas (100) and (102), respectively. By using the definition of the Hilbert-Schmidt inner product, we derive the following formulas for P_{(\Gamma, v_f)} and H_{(\Gamma, v_f)}:

P_{(\Gamma, v_f)} = \left[(\mathcal{L}_\gamma - \mathcal{Q}_f)^{-1}\right]^\dagger (P_f), \qquad (132)

H_{(\Gamma, v_f)} = \frac{1}{\gamma}\left[(\mathcal{L}_\gamma - \mathcal{Q}_f)^{-2}\right]^\dagger (P_f). \qquad (133)

In the following, we will show the operators P_{(\Gamma, v_f)} and H_{(\Gamma, v_f)} in the vertex state basis for each of the graphs in Fig. 3, and give a brief discussion of the quantum walk on each graph. It is useful to describe the hitting probability and time in terms of these matrices, because they give the result for any starting state.

3.4.1 Example 1

The graph K_2 has the symmetry group C_2. As this group is Abelian, the Hamiltonian of this graph is nondegenerate. The two eigenvectors have nonzero overlap with both vertex states, and therefore there can be no infinite hitting times, as we can see from the matrices P and H:

P_{(K_2, v_1)} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad (134)

H_{(K_2, v_1)} = \begin{pmatrix} \dfrac{2}{\gamma} & \dfrac{i}{\gamma} \\[1ex] -\dfrac{i}{\gamma} & \dfrac{2}{\gamma} + \dfrac{\gamma}{2} \end{pmatrix}. \qquad (135)

The dependence of the hitting time for vertex v_1 can include two terms: the 2/\gamma term diverges as \gamma \to 0, which simply represents the increasing time it takes to find the particle as the measurement rate goes to zero; if the system starts at vertex v_2, there is also a \gamma/2 term, which diverges as \gamma \to \infty because of the quantum Zeno effect: as the measurement rate increases, we can "freeze" the system's evolution.

3.4.2 Example 2

The situation is different in the case of the L_3 graph. The graph again has symmetry group C_2, and the Hamiltonian has no degeneracies.
Despite that, however, one of the three energy eigenstates has zero overlap with the v_2 vertex: (1/\sqrt{2}, 0, -1/\sqrt{2}). This means that even without degeneracy, there is an infinite hitting time for the final vertex v_2. This is not accidental; the symmetry of the graph is still responsible for the existence of this infinite hitting time. Under the action of C_2, each energy eigenstate |e_i\rangle has to be either symmetric or anti-symmetric. The Hilbert space thus splits into a symmetric and an anti-symmetric subspace, which are orthogonal to each other. As the vertex state |v_2\rangle is obviously symmetric under the action of the group, it is orthogonal to the anti-symmetric subspace. This is what leads to an infinite hitting time. This is a general observation for any graph that has C_2 as a symmetry group and vertex states that are left invariant under the action of the group. There are no infinite hitting times for reaching vertices v_1 and v_3. We can see all these properties by examining the matrices P and H:

P_{(L_3, v_1)} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad (136)

H_{(L_3, v_1)} = \begin{pmatrix} \dfrac{3}{\gamma} & -\dfrac{1}{2} + \dfrac{i}{\gamma} & -\dfrac{1}{2} - \dfrac{i\gamma}{2} \\[1ex] -\dfrac{1}{2} - \dfrac{i}{\gamma} & \dfrac{4}{\gamma} + \dfrac{\gamma}{2} & -\dfrac{1}{2} + \dfrac{i\gamma}{2} \\[1ex] -\dfrac{1}{2} + \dfrac{i\gamma}{2} & -\dfrac{1}{2} - \dfrac{i\gamma}{2} & \dfrac{3}{2\gamma} + \dfrac{3\gamma}{2} \end{pmatrix}. \qquad (137)

The existence of an infinite hitting time for reaching v_2 can easily be seen from the P matrix for v_2:

P_{(L_3, v_2)} = \begin{pmatrix} \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 1 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \end{pmatrix}, \qquad (138)

H_{(L_3, v_2)} = \begin{pmatrix} \dfrac{\gamma}{8} + \dfrac{9}{8\gamma} & -\dfrac{1}{4} - \dfrac{i\gamma}{4} & \dfrac{\gamma}{8} + \dfrac{9}{8\gamma} \\[1ex] -\dfrac{1}{4} + \dfrac{i\gamma}{4} & \dfrac{2}{\gamma} & -\dfrac{1}{4} + \dfrac{i\gamma}{4} \\[1ex] \dfrac{\gamma}{8} + \dfrac{9}{8\gamma} & -\dfrac{1}{4} - \dfrac{i\gamma}{4} & \dfrac{\gamma}{8} + \dfrac{9}{8\gamma} \end{pmatrix}. \qquad (139)

As P_{(L_3, v_2)} is not the identity, there must be initial states that result in a probability of less than 1 to hit v_2. For example, this will be true for any superposition of the states |v_1\rangle and |v_3\rangle with a nonzero component along the anti-symmetric combination (|v_1\rangle - |v_3\rangle)/\sqrt{2}.

3.4.3 Example 3

The graph K_3 has symmetry group D_3, and its Hamiltonian is degenerate. It has infinite hitting times for hitting any vertex.
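The formal criterion of Eq. (115), that infinite hitting times exist exactly when the projector \mathcal{P} built from eigenspace intersections is nonzero, can be checked directly on L_3 (a sketch; the `intersect` construction via the eigenvalue-2 eigenspace of P_1 + P_2 and the tolerances are illustrative implementation choices):

```python
import numpy as np

# Numerical evaluation of Eq. (115) for the path graph L_3: build each
# eigenspace projector P_i of H, intersect it with Q_f, and sum the norms.
# A nonzero total signals P != 0, i.e. infinite hitting times: this
# happens for final vertex v_2 but not for v_1.

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
H = np.diag(A.sum(1)) - A
evals, V = np.linalg.eigh(H)

def intersect(P1, P2):
    # projector onto range(P1) ∩ range(P2): the eigenvalue-2 eigenspace of P1+P2
    w, U = np.linalg.eigh(P1 + P2)
    cols = U[:, w > 2 - 1e-9]
    return cols @ cols.conj().T

def P_strength(f):
    Pf = np.zeros((3, 3)); Pf[f, f] = 1.0
    Qf = np.eye(3) - Pf
    total = 0.0
    for E in np.unique(np.round(evals, 9)):
        idx = np.abs(evals - E) < 1e-9
        Pi = V[:, idx] @ V[:, idx].T            # eigenspace projector
        total += np.linalg.norm(intersect(Pi, Qf))
    return total

print(P_strength(0), P_strength(1))   # v_1: no infinite hitting; v_2: yes
```

For v_2 the only contribution comes from the anti-symmetric eigenvector (1, 0, -1)/\sqrt{2}, which lies entirely inside the range of Q_f; for v_1 every eigenvector overlaps the final vertex and all intersections are empty.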
If we calculate the P and H matrices for this graph, we discover a new property of these hitting-time and hitting-probability matrices. The graph K_3 is not isomorphic to L_3, but its P and H matrices for any vertex of K_3 are also given by (138) and (139) (or their appropriate cyclic permutations). This is because the Hamiltonians of the L_3 and K_3 graphs commute. We will observe the same kind of behavior below for other graphs with commuting Hamiltonians.

3.4.4 Example 4

The quantum walk on the graph L_4 has the same qualitative behavior as the walk on L_2. They both have the same symmetry group, C_2, as does the L_3 graph. But in the case of L_4, as in the case of L_2, there are no infinite hitting times.

3.4.5 Examples 5 and 6

We will examine the graphs KL_{3,1} and S_4 together, because it turns out that their behavior is closely related. The graph KL_{3,1} again has C_2 for its symmetry group. The Hamiltonian is nondegenerate, but there are infinite hitting times for the vertices v_1 and v_2, due to the existence of an eigenvector which vanishes on those two vertices: (0, 0, 1/\sqrt{2}, -1/\sqrt{2}). This is quite analogous to the case of the graph L_3. In general, graphs with C_2 symmetry will have infinite hitting times for hitting vertices that are fixed points under the action of the symmetry group. The graph S_4 has D_3 as a symmetry group. Its Hamiltonian is degenerate, and it has infinite hitting times for hitting any vertex.
It turns out that the matrices P and H for hitting vertices v_1 and v_2 coincide with the same matrices for the graph KL_{3,1}:

P_{(KL_{3,1}, v_1)} = P_{(S_4, v_1)} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix}, \qquad (140)

H_{(KL_{3,1}, v_1)} = H_{(S_4, v_1)} = \begin{pmatrix} \dfrac{3}{\gamma} & -1 + \dfrac{i}{\gamma} & -\dfrac{3}{4} + \dfrac{i\gamma}{2} & -\dfrac{3}{4} + \dfrac{i\gamma}{2} \\[1ex] -1 - \dfrac{i}{\gamma} & \dfrac{13}{2\gamma} + \dfrac{\gamma}{2} & -1 + \dfrac{i\gamma}{4} & -1 + \dfrac{i\gamma}{4} \\[1ex] -\dfrac{3}{4} - \dfrac{i\gamma}{2} & -1 - \dfrac{i\gamma}{4} & \dfrac{15}{8\gamma} + \dfrac{15\gamma}{8} & \dfrac{15}{8\gamma} + \dfrac{15\gamma}{8} \\[1ex] -\dfrac{3}{4} - \dfrac{i\gamma}{2} & -1 - \dfrac{i\gamma}{4} & \dfrac{15}{8\gamma} + \dfrac{15\gamma}{8} & \dfrac{15}{8\gamma} + \dfrac{15\gamma}{8} \end{pmatrix}, \qquad (141)

P_{(KL_{3,1}, v_2)} = P_{(S_4, v_2)} = \begin{pmatrix} \frac{1}{3} & 0 & \frac{1}{3} & \frac{1}{3} \\ 0 & 1 & 0 & 0 \\ \frac{1}{3} & 0 & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{3} & 0 & \frac{1}{3} & \frac{1}{3} \end{pmatrix}, \qquad (142)

H_{(KL_{3,1}, v_2)} = H_{(S_4, v_2)} = \begin{pmatrix} \dfrac{\gamma}{18} + \dfrac{8}{9\gamma} & -\dfrac{1}{3} - \dfrac{i\gamma}{6} & \dfrac{\gamma}{18} + \dfrac{8}{9\gamma} & \dfrac{\gamma}{18} + \dfrac{8}{9\gamma} \\[1ex] -\dfrac{1}{3} + \dfrac{i\gamma}{6} & \dfrac{2}{\gamma} & -\dfrac{1}{3} + \dfrac{i\gamma}{6} & -\dfrac{1}{3} + \dfrac{i\gamma}{6} \\[1ex] \dfrac{\gamma}{18} + \dfrac{8}{9\gamma} & -\dfrac{1}{3} - \dfrac{i\gamma}{6} & \dfrac{\gamma}{18} + \dfrac{8}{9\gamma} & \dfrac{\gamma}{18} + \dfrac{8}{9\gamma} \\[1ex] \dfrac{\gamma}{18} + \dfrac{8}{9\gamma} & -\dfrac{1}{3} - \dfrac{i\gamma}{6} & \dfrac{\gamma}{18} + \dfrac{8}{9\gamma} & \dfrac{\gamma}{18} + \dfrac{8}{9\gamma} \end{pmatrix}. \qquad (143)

Just as with graphs L_3 and K_3, these two graphs have the same matrices because the Hamiltonians of the graphs KL_{3,1} and S_4 commute; and, as we saw above, this produces similar dynamics when we measure the walk at the corresponding final vertices v_1 and v_2. This is not the case, however, when the final vertex is v_3 or v_4 for these graphs. For S_4, the P and H matrices for v_3 and v_4 can be obtained from those above by interchanging v_1 with v_3 or v_4. For KL_{3,1}, however, the matrices are

P_{(KL_{3,1}, v_3)} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad (144)

H_{(KL_{3,1}, v_3)} = \begin{pmatrix} \gamma + \dfrac{5}{2\gamma} & -1 - \dfrac{i\gamma}{2} & 0 & -\dfrac{\gamma}{2} - \dfrac{i}{2} \\[1ex] -1 + \dfrac{i\gamma}{2} & \dfrac{5\gamma}{2} + \dfrac{7}{\gamma} & -1 - \dfrac{3i\gamma}{2} & -1 + \dfrac{i}{2} \\[1ex] 0 & -1 + \dfrac{3i\gamma}{2} & \dfrac{4}{\gamma} & -1 - \dfrac{i}{2} \\[1ex] -\dfrac{\gamma}{2} + \dfrac{i}{2} & -1 - \dfrac{i}{2} & -1 + \dfrac{i}{2} & \gamma + \dfrac{4}{\gamma} \end{pmatrix}. \qquad (145)

The matrices P_{(KL_{3,1}, v_4)} and H_{(KL_{3,1}, v_4)} can be found by interchanging v_3 and v_4 in the matrices above. The infinite hitting times for the L_3 and KL_{3,1} graphs can be understood to arise because the Hamiltonian of those graphs commutes with the Hamiltonian of a more symmetric graph. As we shall see in the next section, this fact can lead to infinite hitting times under certain circumstances.
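All of the example matrices above come from formulas (132) and (133), which are easy to evaluate mechanically (a sketch for K_2; the helper name and the row-major vec convention are implementation choices):

```python
import numpy as np

# Evaluate P_(Gamma,v_f) and H_(Gamma,v_f) of Eqs. (132)-(133):
#   P = [(L_gamma - Q_f)^{-1}]^dag (P_f),
#   H = (1/gamma) [(L_gamma - Q_f)^{-2}]^dag (P_f),
# so that p_h = Tr{P rho_i} and tau_h = Tr{H rho_i}.  Under row-major
# vectorization the superoperator adjoint is the conjugate transpose.

def hit_operators(Hm, f, gamma=1.0):
    n = Hm.shape[0]
    I = np.eye(n)
    Pf = np.zeros((n, n)); Pf[f, f] = 1.0
    Qf = I - Pf
    comm = np.kron(Hm, I) - np.kron(I, Hm.T)       # X -> [H, X]
    N = np.eye(n * n) + (1j / gamma) * comm - np.kron(Qf, Qf)
    Ninv = np.linalg.inv(N)
    P_op = (Ninv.conj().T @ Pf.reshape(-1)).reshape(n, n)
    H_op = ((Ninv @ Ninv).conj().T @ Pf.reshape(-1)).reshape(n, n) / gamma
    return P_op, H_op

HK2 = np.array([[1., -1.], [-1., 1.]])             # K_2 Hamiltonian
P_op, H_op = hit_operators(HK2, f=0)
print(np.round(P_op, 8))
print(np.round(H_op, 8))
```

Since K_2 has no infinite hitting times, P_op reproduces the identity of Eq. (134), and H_op is Hermitian with positive diagonal entries, as a hitting-time operator must be.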
3.5 Infinite hitting times for graphs with non-connected complementary graph

We will now look in a little more detail at infinite hitting times, which are one of the most surprising differences between classical random walks and quantum walks. In the classical case, if the graph is finite and connected, the probability to reach any vertex starting from any other is always 1. That is not the case for quantum walks. A sufficient condition for infinite hitting times was given in [56]: if the graph's Hamiltonian is sufficiently degenerate, infinite hitting times will always exist. For continuous-time walks, any degeneracy at all is sufficient. (This is a sufficient but not a necessary condition because, as we've shown above, even graphs with nondegenerate Hamiltonians may have infinite hitting times.) We will now show that another sufficient condition for a continuous-time quantum walk to have an infinite hitting time is the non-connectedness of the complementary graph. Consider a graph \Gamma with n vertices and Hamiltonian H_\Gamma given by the usual expressions Eq. (75) and Eq. (76). The complete graph K_n with n vertices has the following Hamiltonian (in the basis spanned by the vertex states):

H_{K_n} = \begin{pmatrix} n-1 & -1 & \cdots & -1 \\ -1 & n-1 & \cdots & -1 \\ \vdots & & \ddots & \vdots \\ -1 & -1 & \cdots & n-1 \end{pmatrix}. \qquad (146)

This can be rewritten more succinctly as

H_{K_n} = n\left(I - |\psi_0\rangle\langle\psi_0|\right) = n\,\tilde{P}_0, \qquad (147)

where |\psi_0\rangle = \frac{1}{\sqrt{n}}\sum_{k=1}^{n} |k\rangle and \tilde{P}_0 = I - |\psi_0\rangle\langle\psi_0|. The complementary graph \Gamma^c of a graph \Gamma is obtained by connecting the vertices that are not connected in the original graph \Gamma, and removing the edges that are present in the original graph. Then it is easy to see that the Hamiltonian of \Gamma^c is

H_{\Gamma^c} = H_{K_n} - H_\Gamma. \qquad (148)

Another observation is that the Hamiltonian of every graph commutes with the Hamiltonian of the complete graph with the same number of vertices:

[H_\Gamma, H_{K_n}] = 0. \qquad (149)

This follows from the observation that |\psi_0\rangle is always an eigenvector of H_\Gamma with eigenvalue 0. As

H_\Gamma = \tilde{P}_0 H_\Gamma \tilde{P}_0, \qquad (150)

(149) is obvious.
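Both (147) and (149) are quick to verify numerically for an arbitrary graph (a sketch; the random graph, its size, and the seed are illustrative choices):

```python
import numpy as np

# Check of Eqs. (147) and (149): H_{K_n} = n(I - |psi_0><psi_0|), and
# [H_Gamma, H_{K_n}] = 0 for any graph Gamma on n vertices (here a random
# undirected graph), using the convention H = D - A with unit energy scale.

rng = np.random.default_rng(0)
n = 6
M = (rng.random((n, n)) < 0.4).astype(float)
A = np.triu(M, 1); A = A + A.T                 # random undirected graph
H_G = np.diag(A.sum(1)) - A                    # H_Gamma

A_K = np.ones((n, n)) - np.eye(n)              # complete graph K_n
H_K = np.diag(A_K.sum(1)) - A_K

psi0 = np.ones(n) / np.sqrt(n)                 # uniform state |psi_0>
print(np.abs(H_K - n * (np.eye(n) - np.outer(psi0, psi0))).max())
print(np.abs(H_G @ H_K - H_K @ H_G).max())
```

The commutator vanishes exactly because every row and column of H_\Gamma = D - A sums to zero, so H_\Gamma annihilates the all-ones matrix from both sides.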
We can see from (148) that [H_\Gamma, H_{\Gamma^c}] = 0. Let us assume that the graph \Gamma is connected but the complementary graph \Gamma^c is not, and consider the quantum walk on \Gamma^c. Because \Gamma^c is not connected, there are initial states that never reach a particular final vertex: namely, any initial state supported only on vertices which are not connected to the final vertex. Let's consider an initial state |\psi_i\rangle that contains only vertex states belonging to one of the connected components of \Gamma^c. Let us further assume that

\langle\psi_0|\psi_i\rangle = \frac{1}{\sqrt{n}}\sum_{k=1}^{n}\langle k|\psi_i\rangle = 0,

which is always possible if the connected component has more than one vertex. Since |\psi_i\rangle is orthogonal to |\psi_0\rangle, it immediately follows that it is an eigenstate of H_{K_n} with eigenvalue n. If the final state |\psi_f\rangle contains only vertices belonging to a different connected component of \Gamma^c, the probability to ever reach the final state is 0:

\langle\psi_f| e^{-itH_{\Gamma^c}} |\psi_i\rangle = 0 \quad \forall t. \qquad (151)

Putting all this together, we can now see that

\langle\psi_f| e^{-itH_\Gamma} |\psi_i\rangle = \langle\psi_f| e^{-it(H_{K_n} - H_{\Gamma^c})} |\psi_i\rangle = e^{-itn} \langle\psi_f| e^{itH_{\Gamma^c}} |\psi_i\rangle = 0. \qquad (152)

This proves the existence of an infinite hitting time for the original graph \Gamma. We note that similar considerations may apply in some cases if the complete graph is replaced by a symmetric graph whose Hamiltonian commutes with H_\Gamma. An example of this is the similarity of the dynamics of the KL_{3,1} and S_4 graphs. Finally, we note that this sufficient condition for infinite hitting times is not particularly strong. For a graph with a large number of vertices, the complementary graph is almost always connected.

3.6 Discussion

We have examined continuous-time quantum walks and studied natural definitions for the hitting time. After considering different possibilities for introducing a measurement scheme, one emerges as natural for the continuous case: measuring the presence or absence of the particle at the final vertex at Poisson-distributed random times, with an adjustable rate \gamma.
This is exactly equivalent to performing a particular type of weak measurement at frequent intervals, in the limit yielding continuous monitoring with time resolution 1/\gamma. Using this measurement scheme, we derived an analytical formula for the hitting time which closely resembles the formula for the discrete-time case. This formula enables us to find a necessary and sufficient condition for the existence of quantum walks with infinite hitting times, namely that a certain superoperator pencil is not regular. In the case of finite hitting times, the dependence of the hitting time on the rate of the measurement was studied, and the intuitive expectation for its behavior in the limits of weak and strong measurement rates was confirmed. In particular, as the measurement rate goes to infinity, the hitting time can diverge due to the quantum Zeno effect. As in the discrete case, the symmetry of the graph plays a very strong role in the emergence of infinite hitting times. The graph symmetry group, if big enough, causes degeneracies in the eigenspectrum of the Hamiltonian, which in turn lead to the emergence of infinite hitting times for certain vertices. But this is not the only way in which symmetry can lead to infinite hitting times. Even when no degeneracy is present, symmetry can cause some eigenvectors of the Hamiltonian to have zero overlap with some vertex states, as in the case of the L_3 and KL_{3,1} graphs examined in section 3.4. This can be attributed to the fact that under the action of the group C_2, the Hilbert space splits into symmetric and antisymmetric subspaces, and some eigenvectors from the antisymmetric subspace can have zero overlap with certain vertex states. A further study exploring this idea is necessary to see if similar effects can occur for other symmetry groups. Finally, in section 3.5 we show another condition for infinite hitting times.
We have shown that the quantum walk on a connected graph can have infinite hitting times if the complementary graph is disconnected. This is in sharp contrast with the classical case, where every random walk on a connected graph will hit any vertex with probability 1 at long times. While this new condition is rather specific, it is possible that it can be generalized by replacing the completely connected graph with some other highly symmetric graph whose Hamiltonian still commutes with the Hamiltonian of the original graph. It may be possible to explain all infinite hitting times on any graph in this way, giving a unifying view of the whole subject. It is clear that many questions remain, and that hitting times for continuous-time quantum walks are a very fruitful area of research.

Chapter 4: Quantum scattering theory on graphs with tails

4.1 Preliminaries

Quantum walks have become important tools for modeling and analyzing the behavior of quantum systems in quantum information theory. Two seemingly different types of quantum walks have been defined: discrete-time and continuous-time quantum walks. Discrete-time quantum walks come in several flavors: on a regular undirected graph equipped with a coin space in [82], on undirected graphs in [3, 51], on the edges of an undirected graph in [43, 33], or corresponding to a classical Markov chain in [77]. Continuous-time quantum walks first appeared in [31]. Despite the similarities in their behavior, a way to obtain one as a limit of the other, something easily done in the classical case, was not known. This problem was addressed in [19]. It was also established that continuous-time quantum walks on sparse, low-degree graphs are universal for quantum computation [20]. Quantum walks, similarly to random walks in the classical case, are very useful for developing quantum algorithms.
Some of the first algorithms discovered using this approach are element distinctness [5], matrix product verification [17], triangle finding [59], and group commutativity testing [58]. The quantum algorithm for the glued-trees graph is based on a continuous-time quantum walk and was proven to be exponentially faster than its classical counterpart in [21]. These examples of quantum algorithms are based on the fact that the quantum walk "hits" a special vertex polynomially or exponentially faster when compared with the classical walk or with the fastest known classical algorithm for the problem in question. Thus the question of a proper definition of a "hitting time" for quantum walks arises. In the discrete-time case, definitions for the hitting time were given and explored in [50, 54, 55], and for the continuous-time case in [80]. An optimal quantum algorithm for evaluating a balanced NAND tree, proposed in [32], is another example of a continuous-time quantum algorithm, for which the value of the NAND tree depends on whether a certain part of the graph is hit or not. The premise for the graph is somewhat different, though: it involves a graph with infinite tails attached to it. This result spurred a plethora of quantum algorithms for evaluating formulas [7, 72]. Scattering theory for discrete-time quantum walks was proposed in [33, 35]. Another type of quantum-walk-based algorithm involves generating a sample from the uniform distribution over a graph [2, 6, 66]. Thus one needs to define a "mixing time" for quantum walks. Properties of mixing times and lower bounds are proven in [73, 74, 53]. In this paper we study the model considered in [32]. There, two tails of infinite length are connected to a finite graph that encodes the values of the variables that enter the NAND formula. A wave with fixed energy (or at least with a narrowly peaked energy spectrum) is sent along a tail.
After a time $T$ proportional to the square root of the number of variables $N$, the walk is measured to be found on one of the two tails, as the wave is either totally reflected or totally transmitted for the selected energy. As there is a direct correspondence between the value of the reflection coefficient and the value of the NAND formula, we deduce the value of the formula. The discrete-time scattering theory model [33] uses infinite tails attached to a finite graph. The graph is directed, and to each different oriented edge there corresponds a state in an orthonormal basis of the Hilbert space of the quantum walk. In the case we explore, the graph is undirected and the walk is defined on the vertices of the graph instead of the edges. In the discrete-time case each tail has a specific flavor: it is either an incoming or an outgoing tail, which means that a state confined to just one of these tails will be moving freely in just one direction. This is a difference from what happens in the continuous-time case, as we shall see below, because of the nature of the Hamiltonian used to define the quantum walk: both incoming and outgoing states can propagate on any tail. Another difference between the discrete-time and continuous-time cases is the type of bound states that exist. In discrete time, only bound states that are confined to the finite graph can exist. In continuous time, bound states that decay exponentially on the tails are present as well. As we already noted, [20] proves that continuous-time quantum walks on graphs with infinite tails are universal for quantum computation. In this model, as in the previous one, the energy of the waves incoming along the tails is fixed. An arbitrary state of a qubit is represented by a superposition of waves incoming along two tails. Thus $N$ qubits are represented by $2N$ tails. A one-qubit (two-qubit) gate is a graph that has 2 (4) incoming and 2 (4) outgoing tails.
Universality follows from the existence of graphs that implement the controlled-not, phase, and Hadamard gates.

4.2 Energy eigenstates

We want to introduce scattering on an undirected graph in continuous time. With each vertex $v$ of a graph $G$ we associate a normalized state $|v\rangle$ in a Hilbert space $\mathcal{H}_G$, the graph Hilbert space, with the property that states corresponding to different vertices are orthogonal to each other: $\langle v_i|v_j\rangle = \delta_{ij}$. The structure of the graph determines the unitary evolution in the graph Hilbert space. The Hamiltonian we consider is a Hermitian operator on the graph Hilbert space which, in the basis of vertex states, is given by minus the adjacency matrix of the graph [32]. In order to investigate the scattering properties of graphs, we need to connect "tails" to a finite graph. A tail is a semi-infinite linear graph with its end connected to one vertex of the graph. More than one tail can be attached to any one vertex of the graph. The vertices of each tail are numbered from 1 to infinity, and the vertex to which the tail is attached is labeled 0. To identify different tails we will sometimes label them with the vertex of the finite graph to which they are attached. If more than one tail is attached to the same vertex, we can use a second index to differentiate between them. The Hamiltonian on a tail is defined as above, as minus the adjacency matrix of the tail. Explicitly,

$$H_{\mathrm{tail}} = -\sum_{n=1}^{\infty} \big(|n\rangle\langle n+1| + |n+1\rangle\langle n|\big). \quad (153)$$

Let us consider a linear graph infinite in both directions. It is easy to see that the energy eigenstates $|k\rangle$ are given by waves propagating in the positive or negative direction: $\langle n|k\rangle = e^{ikn}$ with $k \in (-\pi,\pi)$. States that correspond to $k = 0$ or $k = \pi$ do not behave exactly like propagating states and need to be analyzed separately. The energy corresponding to a state with absolute value of its momentum $|k|$, regardless of propagation direction, is $E_k = -2\cos k$.
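As a quick numerical illustration (my own aside, not part of the original text), one can check that minus the adjacency matrix of a finite path of $N$ vertices has eigenvalues $-2\cos\big(j\pi/(N+1)\big)$, $j = 1, \dots, N$, which densely fill the band $[-2, 2] = \{-2\cos k\}$ of the infinite line as $N$ grows. A minimal sketch:

```python
import numpy as np

# Minus the adjacency matrix of a path with N vertices:
# the finite-size analogue of the tail Hamiltonian (153).
N = 50
H = np.zeros((N, N))
for n in range(N - 1):
    H[n, n + 1] = H[n + 1, n] = -1.0

evals = np.sort(np.linalg.eigvalsh(H))

# Exact eigenvalues of the path graph: -2 cos(j*pi/(N+1)), j = 1..N.
j = np.arange(1, N + 1)
expected = np.sort(-2.0 * np.cos(j * np.pi / (N + 1)))

assert np.allclose(evals, expected)
```

All eigenvalues lie strictly inside $(-2, 2)$ and approach the band edges as $N \to \infty$.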
4.2.1 Propagating states

We want to find the continuous energy spectrum and the propagating energy eigenstates of a graph with tails. We start from the usual stationary Schrödinger equation:

$$H_{\tilde G}|\psi\rangle = E|\psi\rangle. \quad (154)$$

The Hamiltonian is given by

$$H_{\tilde G} = H_G + \sum_v \sum_{m=1}^{m_v} \Big( H^{vm}_{\mathrm{tail}} - |0_v\rangle\langle 1_{vm}| - |1_{vm}\rangle\langle 0_v| \Big). \quad (155)$$

In the above equation, the first sum is over all vertices of $G$ to which tails are connected, and the second sum is over all tails connected to a given vertex $v$. The number of tails connected to vertex $v$ is denoted by $m_v$ and the total number of tails by $\tilde m = \sum_v m_v$. The notation $|n_{vm}\rangle$ refers to the $n$-th vertex of the $m$-th tail connected to the vertex $v$ of the graph $G$, and as noted above $|0_v\rangle$ is the state associated to the vertex $v$; in other words, $|0_v\rangle = |v\rangle$.

Let us denote by $|k,vm\rangle$, with $k \in (0,\pi)$, the propagating energy eigenstate of the above Hamiltonian which has an incoming component only on the $m$-th tail connected to the vertex $v$ of the graph $G$. We denote the restriction of this state to the graph $G$ by $|k,vm\rangle_G$. Then

$$\langle n_{vm}|k,vm\rangle = e^{-ikn} + r_{vm}(k)\, e^{ikn}. \quad (156)$$

On all of the remaining tails this state will look like

$$\langle n_{v'm'}|k,vm\rangle = t_{v'm',vm}(k)\, e^{ikn}. \quad (157)$$

Explicitly,

$$|k,vm\rangle = |k,vm\rangle_G + \sum_{n=1}^{\infty} \big(e^{-ikn} + r_{vm}(k)\, e^{ikn}\big)|n_{vm}\rangle + \sum{}'_{(v',m')} t_{v'm',vm}(k) \sum_{n=1}^{\infty} e^{ikn} |n_{v'm'}\rangle. \quad (158)$$

The prime on the sum means that we sum over all ordered pairs $(v',m')$ different from $(v,m)$, a convention we will use throughout the paper. The state $|k,vm\rangle_G$ denotes the restriction of $|k,vm\rangle$ to the finite graph $G$. We note that these states are not normalizable, as is expected for propagating states.
Substituting the above expression into (154) and taking into account that the energy corresponding to this state is $E_k = -2\cos k$, after some cancellations we obtain

$$H_G|k,vm\rangle_G + (1 + r_{vm})|1_{vm}\rangle - (e^{-ik} + r_{vm} e^{ik})|0_v\rangle + \sum{}'_{(v',m')} t_{v'm',vm}\big(|1_{v'm'}\rangle - e^{ik}|0_{v'}\rangle\big) - \sum_{(v',m')} \langle 0_{v'}|k,vm\rangle_G\, |1_{v'm'}\rangle = -2\cos k\, |k,vm\rangle_G. \quad (159)$$

As the terms that live on the tails must cancel, it follows that

$$r_{vm} + 1 = \langle 0_v|k,vm\rangle_G, \qquad t_{v'm',vm} = \langle 0_{v'}|k,vm\rangle_G. \quad (160)$$

Substituting these back into equation (159), it becomes

$$H_G|k,vm\rangle_G + (e^{ik} - e^{-ik})|0_v\rangle - e^{ik}|0_v\rangle\langle 0_v|k,vm\rangle_G - e^{ik}\sum{}'_{(v',m')} |0_{v'}\rangle\langle 0_{v'}|k,vm\rangle_G = -2\cos k\, |k,vm\rangle_G. \quad (161)$$

Finally, the equation that $|k,vm\rangle_G$ satisfies is

$$\Big(H_G + 2\cos k - e^{ik}\sum_{v'} m_{v'} |0_{v'}\rangle\langle 0_{v'}|\Big)|k,vm\rangle_G = (e^{-ik} - e^{ik})|0_v\rangle. \quad (162)$$

We can get a more compact form of the equation if we use the variable $z = e^{ik}$. Then

$$\big(I + zH_G + z^2 Q\big)|z,vm\rangle_G = (1 - z^2)|0_v\rangle, \quad (163)$$

where we introduce the following operators:

$$R = \sum_{v'} m_{v'} |0_{v'}\rangle\langle 0_{v'}|, \quad (164)$$

$$Q = I - R. \quad (165)$$

For convenience we denote $A(z) = I + zH_G + z^2 Q$. Later, in section 4.3, we will prove that when $z$ lies on the unit circle in the complex plane, equation (163) always has a nonzero solution, and thus a propagating state exists, with reflection and transmission coefficients defined by (160). Here we want to note that when $k \in (-\pi, 0)$, the states $|k,vm\rangle = |{-|k|},vm\rangle$ are defined exactly as above. They can be expressed as linear combinations of the states $\big||k|,vm\big\rangle$. The formula for this will be given in section 4.5. Another thing we want to note is that the operator $A(z)$, and the solution to (163) associated with it, is an instance of the so-called quadratic eigenvalue problem. For a good overview of the subject, with many applications and examples, see [78]. The equation for the bound states derived in the next section is exactly the equation for the right eigenvectors of $A(z)$. The two cases $k = 0$ and $k = \pi$ need to be analyzed separately.
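Equation (163) is easy to solve numerically for a small example. The sketch below (my own illustration; the triangle graph $K_3$ with tails on two of its vertices is an assumed test case, not one from the text) builds $A(z)$, solves for $|z,vm\rangle_G$, and reads off $r$ and $t$ via (160). Flux conservation, $|r|^2 + |t|^2 = 1$, which follows from the unitarity of the S-matrix proved in section 4.4, serves as a consistency check:

```python
import numpy as np

# Triangle graph K_3; H_G is minus its adjacency matrix.
H = -np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])

# One tail on vertex 0 and one on vertex 1:
# R = sum_v m_v |0_v><0_v|, Q = I - R  (eqs. (164)-(165)).
R = np.diag([1., 1., 0.])
Q = np.eye(3) - R

k = 0.7                      # momentum of the incoming wave
z = np.exp(1j * k)           # z = e^{ik}

# A(z) = I + z H_G + z^2 Q, and eq. (163):
# A(z)|z,vm>_G = (1 - z^2)|0_v> for a wave incoming on the tail at vertex 0.
A = np.eye(3) + z * H + z**2 * Q
rhs = (1 - z**2) * np.array([1., 0., 0.])
psi = np.linalg.solve(A, rhs)

# Reflection and transmission coefficients from (160).
r = psi[0] - 1.0
t = psi[1]

# Probability flux conservation: |r|^2 + |t|^2 = 1.
assert np.isclose(abs(r)**2 + abs(t)**2, 1.0)
```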
The states corresponding to those values of $k$ still need to satisfy equation (163), for $z = 1$ and $z = -1$ respectively. Thus

$$(I \pm H_G + Q)\,|{\pm}1\rangle^{vm}_G = 0. \quad (166)$$

This equation may or may not have solutions, in contrast with the other propagating states, which always exist, as we shall prove below. If a solution $|\epsilon\rangle_{vm}$ with $\epsilon = \pm 1$ exists and $Q|\epsilon\rangle^G_{vm} \neq 0$, then such a state will be nonzero on some of the tails, and thus it will not be square-summable. A difference from the rest of the propagating states is that there may not be linearly independent solutions for each tail. The notion of incoming and outgoing waves is also lost when it comes to these states. Thus reflection and transmission coefficients cannot be defined properly. In some respects these states behave more like bound states, in the sense that they satisfy an equation identical to the bound-state equation (170). In the case when $Q|\epsilon\rangle^G_{vm} = 0$, the full state $|\epsilon\rangle_{vm}$ will be zero on each tail, and thus will be a bound state of the second kind, defined below.

4.2.2 Bound states

Bound states are solutions to (154) which are normalizable. This is possible if and only if the amplitudes of such a bound state on the tails of the graph are either exponentially decaying or zero. Thus two kinds of bound states exist. The first are states for which there is at least one tail on which the state is non-zero. The second kind of bound states are zero on all tails of the graph. We will see that the energies of these two kinds of bound states are qualitatively different.

4.2.2.1 Bound states of the first kind

Let us denote by $|\kappa_b\rangle$ a solution of (154), normalized such that on the tails of the graph it has the form

$$\langle n_{vm}|\kappa_b\rangle \propto \alpha_{vm}(\kappa_b)\, e^{-\kappa_b n}. \quad (167)$$

As we want this to be a bound state of the first kind, $\alpha_{vm} \neq 0$ for at least one pair $(v,m)$. We have the following expression for the bound state:

$$|\kappa_b\rangle = N_{\kappa_b}\Big(|\kappa_b\rangle_G + \sum_v \sum_{m=1}^{m_v} \alpha_{vm}(\kappa_b) \sum_{n=1}^{\infty} e^{-\kappa_b n}\,|n_{vm}\rangle\Big), \quad (168)$$

with $N_{\kappa_b}$ being a normalization factor.
Here again $|\kappa_b\rangle_G$ is the part of the state living on the finite graph $G$. By applying the tail Hamiltonian (153) to a tail of this state for which $\alpha_{vm} \neq 0$, we find its energy $E_{\kappa_b} = -(e^{\kappa_b} + e^{-\kappa_b})$. For the energy to be a real number, $e^{\kappa_b}$ should be either a real number or a complex number that belongs to the unit circle. From the requirement that the state is normalized,

$$1 = |N_{\kappa_b}|^2 \Big({}_G\langle\kappa_b|\kappa_b\rangle_G + \sum_v \sum_{m=1}^{m_v} |\alpha_{vm}|^2 \sum_{n=1}^{\infty} \big|e^{-\kappa_b n}\big|^2\Big), \quad (169)$$

it follows that the infinite sum $\sum_{n=1}^{\infty} |e^{-\kappa_b n}|^2$ is convergent, which leads to the condition $|e^{-\kappa_b}|^2 < 1$. This means that for bound states of the first kind, $z_b = e^{-\kappa_b}$ should be a real number with absolute value strictly less than 1, or in other words $\Re(\kappa_b) > 0$ and $\Im(\kappa_b) = 0$ or $\pi$. Inserting $|\kappa_b\rangle$ into (154) leads to the following equations:

$$\big(H_G + 2\cosh\kappa_b - e^{-\kappa_b} R\big)|\kappa_b\rangle_G = 0, \qquad \alpha_{vm}(\kappa_b) = \langle 0_{vm}|\kappa_b\rangle_G.$$

If we again use the change of variables $z_b = e^{-\kappa_b}$, the equations take the form

$$\big(I + z_b H_G + z_b^2 Q\big)|z_b\rangle_G = 0, \quad (170)$$

$$\alpha_{vm}(z_b) = \langle 0_{vm}|z_b\rangle_G. \quad (171)$$

We note that the operator $A(z)$, defining the propagating states through (163), appears in the definition of the bound states of the first kind in the above equation, but with $z = z_b$ real and satisfying $|z_b|^2 < 1$. To determine the normalization factor $N_{z_b}$, we consider equation (169) in the new variable, taking into account (171). After some simplifications we obtain

$$|N_{z_b}|^2 = \frac{1 - z_b^2}{{}_G\langle z_b|\,I - z_b^2 Q\,|z_b\rangle_G}. \quad (172)$$

4.2.2.2 Bound states of the second kind

In contrast with the bound states of the first kind, the bound states of the second kind have zero overlap with every vertex on every tail. They live entirely on the graph $G$. From this it easily follows that such a bound state $|\phi\rangle$ has the property $\langle v|\phi\rangle = 0$ for any vertex $v \in G$ to which a tail is attached. Because of that, equation (154) for such a bound state reduces to

$$H_G|\phi\rangle = E_\phi|\phi\rangle. \quad (173)$$

We see that such a state is an energy eigenstate of the finite graph $G$ with energy $E_\phi$.
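A bound state of the first kind can be located numerically by finding a real root $z_b$ of $\det A(z)$ with $|z_b| < 1$, per equation (170). The sketch below (my own example; a triangle with a single tail is an assumed test case) brackets such a root by bisection:

```python
import numpy as np

# Triangle K_3 with a single tail attached to vertex 0.
H = -np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])
Q = np.diag([0., 1., 1.])    # Q = I - R with R = |0><0|

def detA(z):
    # det of A(z) = I + z H_G + z^2 Q; a real root with |z| < 1
    # signals a bound state of the first kind, eq. (170).
    return np.linalg.det(np.eye(3) + z * H + z * z * Q)

# Bisect for a sign change of det A(z) on (0, 1).
lo, hi = 0.0, 1.0
assert detA(lo) * detA(hi) < 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if detA(lo) * detA(mid) <= 0:
        hi = mid
    else:
        lo = mid
z_b = 0.5 * (lo + hi)

# Bound-state energy E = -(z_b + 1/z_b) lies below the band [-2, 2].
E_b = -(z_b + 1.0 / z_b)
assert abs(z_b) < 1 and E_b < -2
```

For this graph $\det A(0) = 1 > 0$ while $\det A(1) < 0$, so a root exists in $(0,1)$ and the corresponding energy lies below the continuum band.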
We will show in the next section that if such bound states exist and their energy is less than 2 in absolute value, this will lead to non-invertibility of the operator $I + zH_G + z^2 Q$ for some $z$ on the unit circle. This will necessitate the redefinition of the propagating states for that value of $z$.

4.3 Orthogonality of propagating and bound states

In this section we will prove the existence of propagating states for any $k \in (0,\pi)$. The bound states of the first kind are obviously orthogonal to any propagating state, because their energy, $E_{\kappa_b} = -2\cosh\kappa_b$, is always greater in absolute value than 2, while the energy of the propagating states, $E_k = -2\cos k$, is always less than or equal in absolute value to 2 (it is well known that eigenstates of a Hermitian operator with different eigenvalues are orthogonal to each other).

Let us assume now that the operator $A(z) = I + zH_G + z^2 Q$ is not invertible for some $z = z_0$ on the unit circle. Then there is a normalized state $|u\rangle$, living entirely on the finite graph, such that

$$(I + z_0 H_G + z_0^2 Q)|u\rangle = 0. \quad (174)$$

Multiplying the above equation on the left by $\langle u|$ and denoting $\langle u|H_G|u\rangle = h \in \mathbb{R}$ and $\langle u|Q|u\rangle = 1 - \langle u|\big(\sum_v m_v |0_v\rangle\langle 0_v|\big)|u\rangle = 1 - r \in \mathbb{R}$, we get the following equation for $z_0$:

$$1 + z_0 h + z_0^2 (1 - r) = 0. \quad (175)$$

Considered as an equation with respect to $z_0$, its solution can be a complex number on the unit circle only if $r = \langle u|\big(\sum_v m_v |0_v\rangle\langle 0_v|\big)|u\rangle = 0$. As the operator $R = \sum_v m_v |0_v\rangle\langle 0_v| = \sum_v m_v P_v$ is positive, being a sum of positive operators, it follows that $P_v|u\rangle = |0_v\rangle\langle 0_v|u\rangle = 0$ for all $v \in G$ to which tails are attached. From this it is obvious that the state $|u\rangle$ is a bound state of the second kind. Equation (174) reduces to

$$\big((1 + z_0^2)I + z_0 H_G\big)|u\rangle = 0, \quad (176)$$

which can be rewritten as

$$H_G|u\rangle = -\Big(z_0 + \frac{1}{z_0}\Big)|u\rangle. \quad (177)$$

Comparing this to (173), we find the energy of the state $|u\rangle$ to be $E_u = -(z_0 + 1/z_0)$.
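This degenerate situation is easy to exhibit numerically (my own example, not from the original text). Take the path $1$-$0$-$2$, with vertex 0 in the center carrying a tail: the antisymmetric state $(|1\rangle - |2\rangle)/\sqrt 2$ vanishes at vertex 0, has $H_G$-eigenvalue $E_u = 0$, and therefore renders $A(z_0)$ singular at $z_0 = \pm i$, since $-(z_0 + 1/z_0) = 0$ there:

```python
import numpy as np

# Path graph 1 - 0 - 2 (vertex 0 is the center), with a tail on vertex 0.
H = -np.array([[0., 1., 1.],
               [1., 0., 0.],
               [1., 0., 0.]])
Q = np.diag([0., 1., 1.])            # Q = I - R, R = |0><0|

# Bound state of the second kind: vanishes on vertex 0, H_G eigenvalue 0.
u = np.array([0., 1., -1.]) / np.sqrt(2.)
assert np.allclose(H @ u, 0.0 * u)

# E_u = -(z0 + 1/z0) = 0 gives z0 = +/- i on the unit circle.
z0 = 1j
A0 = np.eye(3) + z0 * H + z0**2 * Q

# A(z0) annihilates |u>, so it is singular on the unit circle.
assert np.allclose(A0 @ u, 0.0)
assert np.linalg.svd(A0, compute_uv=False).min() < 1e-12
```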
We want to prove that even when $A(z_0)$ is not invertible, as in the case described above, equation (163) still has a well-defined non-zero solution, which in addition is orthogonal to all bound states. First we prove that if $|u\rangle$ satisfies equation (174), then it satisfies $A(z_0^*)|u\rangle = 0$ as well:

$$A(z_0^*)|u\rangle = \big(I + z_0^* H_G + (z_0^*)^2 (I - R)\big)|u\rangle = \big(1 + z_0^* E_u + (z_0^*)^2\big)|u\rangle = \Big(1 - z_0^*\Big(z_0 + \frac{1}{z_0}\Big) + (z_0^*)^2\Big)|u\rangle = 0. \quad (178)$$

The last equality is true because $z_0$ lies on the unit circle. Let us denote by $K_{z_0}$ the orthogonal projector onto the kernel of the operator $A(z_0)$ in the Hilbert space of $G$; in other words, $K_{z_0}$ is the projector onto the linear subspace spanned by all $|u\rangle$ which satisfy (174). Thus $K_{z_0}$ has the properties $A(z_0)K_{z_0} = 0$ and, as we saw, $K_{z_0}P_v = P_v K_{z_0} = 0$. From the above proof we also see that $A(z_0^*)K_{z_0} = 0$. Then

$$K_{z_0} A(z_0) = \big(A^\dagger(z_0) K^\dagger_{z_0}\big)^\dagger = \big(A(z_0^*) K_{z_0}\big)^\dagger = 0. \quad (179)$$

From this it follows that $I - K_{z_0}$ is the orthogonal projector onto the image of $A(z_0)$, and as $(I - K_{z_0})P_{v'} = P_{v'}(I - K_{z_0}) = P_{v'}$, we see that $|0_{v'}\rangle$ is in the image of $A(z_0)$. Thus a solution to (163) exists. To define the propagating state for $z = z_0$, we choose the unique such solution that is orthogonal to the kernel of $A(z_0)$, which ensures its orthogonality to all bound states of the second kind. To do this we consider the pseudo-inverse of $A(z_0)$, defined by

$$A^{-1}(z_0) = \big(A(z_0) + K_{z_0}\big)^{-1} - K_{z_0}, \quad (180)$$

which we can use to define a solution to (163) with the necessary properties:

$$|z_0,vm\rangle = (1 - z_0^2)\,A^{-1}(z_0)|0_v\rangle = (1 - z_0^2)\big(A(z_0) + K_{z_0}\big)^{-1}|0_v\rangle. \quad (181)$$

This concludes the proof of the orthogonality of propagating states and bound states.

4.4 Properties of the S-matrix

The S-matrix is defined as

$$s_{vm,vm} = r_{vm}, \quad (182)$$

$$s_{vm,v'm'} = t_{vm,v'm'}, \quad (183)$$

where $r_{vm}$ and $t_{vm,v'm'}$ are given by (160). The dimension of the S-matrix is equal to the number of tails attached to the graph $G$.
To simplify the notation, instead of using double indices for the S-matrix elements we will use just a single one, $\sigma$, which stands for the ordered pair $(v,m)$. We will also use the notation $|\sigma\rangle$ for the state to which the tail labeled by $\sigma = (v,m)$ is connected: $|\sigma\rangle = |0_v\rangle = |v\rangle$. From (163) it follows that the elements of the S-matrix are given by

$$s_{\sigma'\sigma}(z) = (1 - z^2)\big[A(z)^{-1}\big]_{\sigma'\sigma} - \delta_{\sigma'\sigma}. \quad (184)$$

When $A(z)$ is not invertible, $A(z)^{-1}$ should be understood as the pseudo-inverse in the sense of the previous section. To prove the unitarity of the S-matrix, we need to introduce the quantity

$$j_{v_2 v_1}(|\psi\rangle, |\varphi\rangle) = i\big(\langle\psi|v_2\rangle\langle v_1|\varphi\rangle - \langle\psi|v_1\rangle\langle v_2|\varphi\rangle\big), \quad (185)$$

where $|\psi\rangle, |\varphi\rangle \in \mathcal{H}_{\tilde G}$ and the two vertices $v_1, v_2 \in \tilde G$ are connected to each other by an edge. An obvious property of $j_{v_2 v_1}$ that will be used later is its antisymmetry in the vertices: $j_{v_2 v_1} = -j_{v_1 v_2}$. Let us consider the Schrödinger equation for the graph with the Hamiltonian given by (155),

$$i\,\frac{d|\psi\rangle}{dt} = H_{\tilde G}|\psi\rangle. \quad (186)$$

Any two of its solutions, $|\psi(t)\rangle$ and $|\varphi(t)\rangle$, satisfy the following "conservation" equation:

$$\frac{d\,\langle\psi|v\rangle\langle v|\varphi\rangle}{dt} + \sum_{v'} j_{v'v}(|\psi\rangle, |\varphi\rangle) = 0, \quad (187)$$

where the sum is over all vertices connected to $v$. If we define $\psi_v = \langle v|\psi\rangle$ and take $|\psi(t)\rangle = |\varphi(t)\rangle$, the equation takes the more standard form

$$\frac{d\,\psi^*_v \psi_v}{dt} + \sum_{v'} j_{v'v}(|\psi\rangle, |\psi\rangle) = 0. \quad (188)$$

The quantity $\psi^*_v \psi_v$ is just the probability at vertex $v$ at time $t$, and thus $j_{v'v}(|\psi\rangle, |\psi\rangle)$ can be thought of as the probability current at time $t$ flowing from vertex $v$ to vertex $v'$. Let us assume now that $|\psi(0)\rangle$ and $|\varphi(0)\rangle$ are eigenstates of $H_{\tilde G}$ with equal energy $E$. Then

$$\frac{d\,\psi^*_v \varphi_v}{dt} = 0, \quad (189)$$

and so

$$\sum_{v'} j_{v'v}(|\psi\rangle, |\varphi\rangle) = 0 \quad (190)$$

for any vertex $v \in \tilde G$. Summing the formula above over all vertices in $G$, we have

$$\sum_{v \in G} \sum_{v'} j_{v'v}(|\psi\rangle, |\varphi\rangle) = 0. \quad (191)$$

In the above sum, for each term $j_{v'v}(|\psi\rangle, |\varphi\rangle)$ there exists a term $j_{vv'}(|\psi\rangle, |\varphi\rangle)$ whenever $v, v' \in G$.
Because of the antisymmetry of $j_{v'v}$, every such pair of terms cancels, leaving us with the terms for which one of the vertices is on a tail:

$$\sum_{v \in G} \sum_{v'} j_{v'v}(|\psi\rangle, |\varphi\rangle) = \sum_{\sigma} j_{1_\sigma 0_\sigma}(|\psi\rangle, |\varphi\rangle) = 0. \quad (192)$$

If we choose $|\psi\rangle = |\varphi\rangle = |z,\sigma\rangle$ (the solution to equation (154) whose restriction to $\mathcal{H}_G$ satisfies equation (163)), the above equation reduces to

$$|s_{\sigma\sigma}|^2 + \sum{}'_{\sigma'} |s_{\sigma'\sigma}|^2 = 1. \quad (193)$$

Here we use the convention that a primed sum signifies summation over all $\sigma' \neq \sigma$. If we choose $|\psi\rangle = |z,\sigma_1\rangle$ and $|\varphi\rangle = |z,\sigma_2\rangle$, we obtain

$$\sum_{\sigma'} s^*_{\sigma'\sigma_1}\, s_{\sigma'\sigma_2} = 0. \quad (194)$$

This proves the unitarity of the S-matrix. Another property that we will need is

$$s_{\sigma'\sigma}(z^*) = (s^\dagger)_{\sigma'\sigma}(z) = s^*_{\sigma\sigma'}(z). \quad (195)$$

Equation (195) is easy to prove using (184):

$$s_{\sigma'\sigma}(z^*) = \big(1 - (z^*)^2\big)\big[A(z^*)^{-1}\big]_{\sigma'\sigma} - \delta_{\sigma'\sigma} = (1 - z^2)^*\big(\big[A(z)^{-1}\big]_{\sigma\sigma'}\big)^* - \delta_{\sigma\sigma'} = \Big((1 - z^2)\big[A(z)^{-1}\big]_{\sigma\sigma'} - \delta_{\sigma\sigma'}\Big)^* = s^*_{\sigma\sigma'}(z) = (s^\dagger)_{\sigma'\sigma}(z),$$

where we used $A(z^*) = A(z)^\dagger$.

4.5 Vertex state basis and energy eigenvalue basis

In this section we present formulas for the spectral decomposition of the identity corresponding to the Hamiltonian $H_{\tilde G}$. It is easy to obtain after determining its eigenstates: the propagating states together with the bound states of the first and second kind. For $v, v' \in \tilde G$ we have

$$\delta_{vv'} = \sum_b \psi_b(v)\psi^*_b(v') + \sum_\phi \phi(v)\phi^*(v') + \frac{1}{2\pi}\int_0^\pi \sum_\sigma \psi_{k,\sigma}(v)\psi^*_{k,\sigma}(v')\,dk, \quad (196)$$

where

$$\psi_b(v) = \langle v|\kappa_b\rangle, \qquad \phi(v) = \langle v|\phi\rangle, \qquad \psi_{k,\sigma}(v) = \langle v|k,\sigma\rangle.$$

In Dirac notation the formula gives the resolution of the identity operator $I_{\mathcal{H}_{\tilde G}}$ for the Hilbert space $\mathcal{H}_{\tilde G}$:

$$I_{\mathcal{H}_{\tilde G}} = \sum_b |\kappa_b\rangle\langle\kappa_b| + \sum_\phi |\phi\rangle\langle\phi| + \frac{1}{2\pi}\int_0^\pi \sum_\sigma |k,\sigma\rangle\langle k,\sigma|\,dk. \quad (197)$$

Using this formula we can easily obtain a simple expression for a vertex state $|n_\sigma\rangle$, corresponding to a vertex on a tail of the graph $\tilde G$, in terms of the energy eigenstates:

$$|n_\sigma\rangle = \sum_b \langle\kappa_b|n_\sigma\rangle\,|\kappa_b\rangle + \sum_\phi \langle\phi|n_\sigma\rangle\,|\phi\rangle + \frac{1}{2\pi}\int_0^\pi \sum_{\sigma'} \langle k,\sigma'|n_\sigma\rangle\,|k,\sigma'\rangle\,dk. \quad (198)$$

This can be further simplified. As we have already proven, the bound states of the second kind have zero overlap with any vertex that lies on a tail: $\langle\phi|n_\sigma\rangle = 0$. Thus the second sum in the above expression is zero.
Formula (167) also implies that $\langle\kappa_b|n_\sigma\rangle = e^{-\kappa_b n}\langle\kappa_b|0_\sigma\rangle$. From (156) and (157) we see that

$$\langle k,\sigma'|n_\sigma\rangle = \delta_{\sigma'\sigma}\, e^{ikn} + s^*_{\sigma\sigma'}(k)\, e^{-ikn}, \quad (199)$$

or in terms of the $z$ variable,

$$\langle z,\sigma'|n_\sigma\rangle = \delta_{\sigma'\sigma}\, z^n + s_{\sigma'\sigma}(z^*)\, z^{-n}. \quad (200)$$

Here we need the following identity:

$$\sum_{\sigma'} s_{\sigma'\sigma}(k)\,|{-k},\sigma'\rangle = |k,\sigma\rangle, \quad (201)$$

or in terms of the $z$ variable,

$$\sum_{\sigma'} s_{\sigma'\sigma}(z)\,|z^*,\sigma'\rangle = |z,\sigma\rangle. \quad (202)$$

First we prove this for the restriction of the propagating states to the graph $G$:

$$\sum_{\sigma'} |z^*,\sigma'\rangle_G\, s_{\sigma'\sigma}(z) = \sum_{\sigma'} \big(1 - (z^*)^2\big) A^{-1}(z^*)|\sigma'\rangle \Big((1 - z^2)\big[A^{-1}(z)\big]_{\sigma'\sigma} - \delta_{\sigma'\sigma}\Big)$$
$$= \frac{z^2 - 1}{z^2}\, A^{-1\dagger}(z)\,\big((1 - z^2)\, R\, A^{-1}(z) - I\big)|\sigma\rangle$$
$$= (1 - z^2)\, A^{-1\dagger}(z)\, \frac{1}{z^2}\big(A(z) - (1 - z^2)R\big)\, A^{-1}(z)|\sigma\rangle$$
$$= (1 - z^2)\, A^{-1\dagger}(z)\, \frac{1}{z^2}\big(z^2 I + z H_G + I - R\big)\, A^{-1}(z)|\sigma\rangle$$
$$= (1 - z^2)\, A^{-1\dagger}(z)\, A^\dagger(z)\, A^{-1}(z)|\sigma\rangle = (1 - z^2)\, A^{-1}(z)|\sigma\rangle = |z,\sigma\rangle_G. \quad (203)$$

In the above proof we used that $|z| = 1$. All inverses above are well defined as well, because they act on $|\sigma\rangle$, which lies in the image of $A(z)$. Now we need to prove the identity for any vertex lying on a tail. Taking the overlap of (202) with $\langle n_{\sigma'}|$, we arrive at the following formula, which we need to prove:

$$\sum_{\sigma''} s_{\sigma''\sigma}(z)\,\langle n_{\sigma'}|z^*,\sigma''\rangle = \langle n_{\sigma'}|z,\sigma\rangle.$$

It is easily proven using (200) and (195):

$$\sum_{\sigma''} s_{\sigma''\sigma}(z)\,\langle n_{\sigma'}|z^*,\sigma''\rangle = \sum_{\sigma''} s_{\sigma''\sigma}(z)\,\langle z^*,\sigma''|n_{\sigma'}\rangle^* = \sum_{\sigma''} s_{\sigma''\sigma}(z)\big(\delta_{\sigma''\sigma'}\, z^n + s^*_{\sigma''\sigma'}(z)\, z^{-n}\big) = s_{\sigma'\sigma}(z)\, z^n + \delta_{\sigma'\sigma}\, z^{-n} = \langle n_{\sigma'}|z,\sigma\rangle, \quad (204)$$

where the last sum was evaluated using the unitarity of the S-matrix. This proves (201). Now we can simplify the sum under the integral in (198), with the help of (199), (201), and the unitarity of the S-matrix:

$$\sum_{\sigma'} \langle k,\sigma'|n_\sigma\rangle\,|k,\sigma'\rangle = \sum_{\sigma'} \big(\delta_{\sigma'\sigma}\, e^{ikn} + s^*_{\sigma\sigma'}(k)\, e^{-ikn}\big)|k,\sigma'\rangle = e^{ikn}|k,\sigma\rangle + e^{-ikn}|{-k},\sigma\rangle.$$

Substituting this back into (198) leads to

$$|n_\sigma\rangle = \sum_b \langle\kappa_b|n_\sigma\rangle\,|\kappa_b\rangle + \frac{1}{2\pi}\int_0^\pi \big(e^{ikn}|k,\sigma\rangle + e^{-ikn}|{-k},\sigma\rangle\big)\,dk = \sum_b \langle z_b|n_\sigma\rangle\,|z_b\rangle + \frac{1}{2\pi}\int_{-\pi}^\pi e^{ikn}|k,\sigma\rangle\,dk. \quad (205)$$

In the $z$ variable this formula takes a very appealing form:

$$|n_\sigma\rangle = \sum_b z_b^n\,\langle z_b|0_\sigma\rangle\,|z_b\rangle + \frac{1}{2\pi i}\oint_C z^n\,|z,\sigma\rangle\,\frac{dz}{z}, \quad (206)$$

where $C$ is the unit circle.

4.6 Cutting a Tail

In this section we want to investigate the effect of cutting a tail on the S-matrix of the graph. We will show that the S-matrix of the new, pruned graph can be expressed in terms of the elements of the S-matrix of the old graph.
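Before turning to tail surgery, the identity (202) can be sanity-checked numerically (my own aside; the triangle with tails on vertices 0 and 1 is an assumed test case). The sketch computes $|z,\sigma\rangle_G = (1-z^2)A^{-1}(z)|\sigma\rangle$ from (163) and the S-matrix from (184), then verifies both unitarity and the relation $\sum_{\sigma'} s_{\sigma'\sigma}(z)\,|z^*,\sigma'\rangle_G = |z,\sigma\rangle_G$:

```python
import numpy as np

# Triangle K_3 with tails on vertices 0 and 1.
H = -np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])
Q = np.diag([0., 0., 1.])            # Q = I - R, R = diag(1, 1, 0)
tails = [0, 1]                        # vertices carrying the tails

def A(z):
    return np.eye(3) + z * H + z**2 * Q

def psi(z, sigma):
    # |z,sigma>_G = (1 - z^2) A(z)^{-1} |sigma>, eq. (163).
    e = np.zeros(3)
    e[tails[sigma]] = 1.0
    return (1 - z**2) * np.linalg.solve(A(z), e)

def smat(z):
    # s_{sigma' sigma}(z) = (1 - z^2)[A^{-1}]_{sigma' sigma} - delta, eq. (184).
    Ainv = np.linalg.inv(A(z))
    return (1 - z**2) * Ainv[np.ix_(tails, tails)] - np.eye(2)

z = np.exp(0.7j)
s = smat(z)

# Unitarity of the S-matrix, eqs. (193)-(194).
assert np.allclose(s.conj().T @ s, np.eye(2))

# Identity (202): sum_{sigma'} s_{sigma' sigma}(z) |z*,sigma'>_G = |z,sigma>_G.
for sigma in range(2):
    lhs = sum(s[sp, sigma] * psi(z.conjugate(), sp) for sp in range(2))
    assert np.allclose(lhs, psi(z, sigma))
```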
Thus we are given a graph $G$ to which $n$ tails are attached, and we assume that we know its S-matrix. Without loss of generality we can choose to cut the first tail, which we denote by $\sigma_c$. For the unpruned and pruned graphs, the elements of the S-matrix are given by (184), with

$$A(z) = I + zH_G + z^2 Q, \quad (207)$$

$$A_c(z) = I + zH_G + z^2 Q_c = A + z^2 P_c = A + z^2 |\sigma_c\rangle\langle\sigma_c|, \quad (208)$$

respectively. As can be seen from (184), the problem reduces to expressing the matrix elements of $A_c^{-1}$ in terms of the matrix elements of $A^{-1}$:

$$\langle\sigma|A_c^{-1}|\sigma'\rangle = \langle\sigma|(A + z^2 P_c)^{-1}|\sigma'\rangle = \langle\sigma|(I + z^2 A^{-1} P_c)^{-1} A^{-1}|\sigma'\rangle = \langle\sigma|\sum_{j=0}^{\infty} \big({-z^2} A^{-1} P_c\big)^j A^{-1}|\sigma'\rangle$$
$$= \langle\sigma|A^{-1}|\sigma'\rangle - z^2\,\langle\sigma|A^{-1}|\sigma_c\rangle \sum_{j=1}^{\infty} \big({-z^2}\langle\sigma_c|A^{-1}|\sigma_c\rangle\big)^{j-1} \langle\sigma_c|A^{-1}|\sigma'\rangle = \langle\sigma|A^{-1}|\sigma'\rangle - \frac{z^2\,\langle\sigma|A^{-1}|\sigma_c\rangle\langle\sigma_c|A^{-1}|\sigma'\rangle}{1 + z^2\,\langle\sigma_c|A^{-1}|\sigma_c\rangle}.$$

Multiplying both sides of this expression by $1 - z^2$ and using (184), we get

$$s^c_{\sigma\sigma'} = s_{\sigma\sigma'} - \frac{z^2\, s_{\sigma\sigma_c}\, s_{\sigma_c\sigma'}}{1 + z^2\, s_{\sigma_c\sigma_c}}. \quad (209)$$

4.6.1 Leaving a Stump

Instead of cutting the tail $\sigma_c$ at the root, we now want to leave a stump of length $L$. This case can easily be subsumed in the previous one. We define a new tail $\tilde\sigma_c$, which coincides with $\sigma_c$ but whose beginning is at the $L$-th vertex of the tail $\sigma_c$, $|L_{\sigma_c}\rangle$. Thus $|0_{\tilde\sigma_c}\rangle = |L_{\sigma_c}\rangle$. The propagating solution to (154) in this case will be equal, up to a phase, to the solution when we think of the tail as being attached to the root $|0_{\sigma_c}\rangle$. The conditions that the reflection and transmission coefficients need to satisfy are given by (156) and (157), which is only possible when the phase between the two solutions is chosen appropriately: $|k\rangle_{\tilde\sigma_c} = e^{ikL}|k\rangle_{\sigma_c}$. Thus

$$\langle n_{\tilde\sigma_c}|k\rangle_{\tilde\sigma_c} = e^{ikL}\langle (n+L)_{\sigma_c}|k\rangle_{\sigma_c} = e^{-ikn} + e^{2ikL}\, r_{\sigma_c}\, e^{ikn} = z^{-n} + z^{2L+n}\, r_{\sigma_c},$$
$$\langle n_\sigma|k\rangle_{\tilde\sigma_c} = e^{ikL}\langle n_\sigma|k\rangle_{\sigma_c} = e^{ikL}\, t_{\sigma\sigma_c}\, e^{ikn} = z^{L+n}\, t_{\sigma\sigma_c}.$$

For energy eigenstates with the same energy but incoming on a different tail, we get

$$\langle n_{\tilde\sigma_c}|k,\sigma\rangle = \langle (n+L)_{\sigma_c}|k,\sigma\rangle = t_{\sigma_c\sigma}\, e^{ik(n+L)} = z^{L+n}\, t_{\sigma_c\sigma}.$$

All other such conditions that do not involve the tail being cut are satisfied automatically.
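Formula (209) can be tested numerically: compute the S-matrix of a graph, cut one tail using (209), and compare against a direct computation on the pruned graph. A sketch (my own test case: the triangle with a tail on each vertex, cutting the tail at vertex 2):

```python
import numpy as np

# Triangle K_3 with one tail on each vertex; cut the tail at vertex 2.
H = -np.array([[0., 1., 1.],
               [1., 0., 1.],
               [1., 1., 0.]])
Q = np.zeros((3, 3))                 # R = I: a tail on every vertex
z = np.exp(0.4j)

def smatrix(Qop, idx):
    # S-matrix elements via (184), restricted to the tails in idx.
    Ainv = np.linalg.inv(np.eye(3) + z * H + z**2 * Qop)
    return (1 - z**2) * Ainv[np.ix_(idx, idx)] - np.eye(len(idx))

S = smatrix(Q, [0, 1, 2])            # full 3x3 S-matrix
c = 2                                # index of the tail being cut

# Prediction of (209) for the two surviving tails.
S_pred = np.array([[S[i, j] - z**2 * S[i, c] * S[c, j] / (1 + z**2 * S[c, c])
                    for j in range(2)] for i in range(2)])

# Direct computation on the pruned graph: Q_c = Q + |sigma_c><sigma_c|.
Qc = Q + np.diag([0., 0., 1.])
S_direct = smatrix(Qc, [0, 1])

assert np.allclose(S_pred, S_direct)
```

The pruned S-matrix is again unitary, as proved in section 4.9.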
From the above equations and the definitions of the reflection and transmission coefficients (160), we see that the elements of the S-matrix for the graph with tail $\tilde\sigma_c$ satisfy

$$\tilde s_{\tilde\sigma_c\tilde\sigma_c} = z^{2L}\, s_{\sigma_c\sigma_c}, \qquad \tilde s_{\sigma\tilde\sigma_c} = z^{L}\, s_{\sigma\sigma_c}, \qquad \tilde s_{\tilde\sigma_c\sigma} = z^{L}\, s_{\sigma_c\sigma}, \qquad \tilde s_{\sigma\sigma'} = s_{\sigma\sigma'}.$$

Plugging these into (209), we finally get

$$s^c_{\sigma\sigma'} = s_{\sigma\sigma'} - \frac{z^{2(L+1)}\, s_{\sigma\sigma_c}\, s_{\sigma_c\sigma'}}{1 + z^{2(L+1)}\, s_{\sigma_c\sigma_c}}. \quad (210)$$

4.6.2 Cutting k Tails

It is obvious that if we cut more than one tail, the order in which we cut them does not matter. Let us assume that the S-matrix has the following block form:

$$S = \begin{pmatrix} T_{(\tilde m-k)\times(\tilde m-k)} & U_{(\tilde m-k)\times k} \\ V_{k\times(\tilde m-k)} & W_{k\times k} \end{pmatrix}. \quad (211)$$

Here $\tilde m$, as defined in section 4.2, is the total number of tails, and $k$ is the number of tails that will be cut. Thus we want to cut the tails corresponding to the last $k$ entries in the S-matrix. We obtain the following formula for the new S-matrix:

$$S_c = T - z^2\, U\,(I + z^2 W)^{-1} V. \quad (212)$$

(The identity matrix in the above formula is a $k \times k$ matrix.) It is obvious that formula (209) follows from the above.

4.7 Attaching a Tail

Let us assume that we want to attach one more tail to a vertex that already has at least one tail attached to it. We want to express the S-matrix of the new graph in terms of the old one. Of course, it is also possible to add a tail to a vertex that does not already have one, but in this case the new S-matrix cannot be derived simply from the S-matrix of the original graph. Let us denote the new tail being attached by $\sigma_a$, the vertex we attach it to by $\tilde v$, and a tail already attached to the vertex $\tilde v$ by $\sigma_{\tilde v}$. We note that $|\sigma_{\tilde v}\rangle = |\sigma_a\rangle = |\tilde v\rangle$. By an argument similar to the one we made in section 4.6, we get

$$\langle\sigma|A_a^{-1}|\sigma'\rangle = \langle\sigma|A^{-1}|\sigma'\rangle + \frac{z^2\,\langle\sigma|A^{-1}|\sigma_{\tilde v}\rangle\langle\sigma_{\tilde v}|A^{-1}|\sigma'\rangle}{1 - z^2\,\langle\sigma_{\tilde v}|A^{-1}|\sigma_{\tilde v}\rangle},$$

where

$$A_a = A - z^2 P_{\tilde v} = A - z^2 |\sigma_{\tilde v}\rangle\langle\sigma_{\tilde v}|.$$

In order to re-express everything in terms of the elements of the old S-matrix, we consider different cases depending on the first and second indices of the S-matrix. In the formulas below, $\sigma$ and $\sigma'$ signify tails that are different from both $\sigma_a$ and $\sigma_{\tilde v}$.
After some simplification we get

$$s^a_{\sigma\sigma'} = s_{\sigma\sigma'} + \frac{z^2\, s_{\sigma\sigma_{\tilde v}}\, s_{\sigma_{\tilde v}\sigma'}}{1 - z^2\big(2 + s_{\sigma_{\tilde v}\sigma_{\tilde v}}\big)},$$
$$s^a_{\sigma\sigma_{\tilde v}} = s^a_{\sigma\sigma_a} = \frac{(1 - z^2)\, s_{\sigma\sigma_{\tilde v}}}{1 - z^2\big(2 + s_{\sigma_{\tilde v}\sigma_{\tilde v}}\big)},$$
$$s^a_{\sigma_{\tilde v}\sigma} = s^a_{\sigma_a\sigma} = \frac{(1 - z^2)\, s_{\sigma_{\tilde v}\sigma}}{1 - z^2\big(2 + s_{\sigma_{\tilde v}\sigma_{\tilde v}}\big)}, \quad (213)$$
$$s^a_{\sigma_{\tilde v}\sigma_a} = s^a_{\sigma_a\sigma_{\tilde v}} = \frac{(1 - z^2)\big(1 + s_{\sigma_{\tilde v}\sigma_{\tilde v}}\big)}{1 - z^2\big(2 + s_{\sigma_{\tilde v}\sigma_{\tilde v}}\big)},$$
$$s^a_{\sigma_{\tilde v}\sigma_{\tilde v}} = s^a_{\sigma_a\sigma_a} = \frac{z^2 + s_{\sigma_{\tilde v}\sigma_{\tilde v}}}{1 - z^2\big(2 + s_{\sigma_{\tilde v}\sigma_{\tilde v}}\big)}.$$

4.8 Connecting two tails to form an edge

The setup is as in the previous sections, but the goal now is to connect two tails: in other words, to cut two of the tails connected to the graph and replace them with an edge between the vertices to which they were connected. Again, without loss of generality, we connect the first and second tails, $\sigma_1$ and $\sigma_2$. When we do that, the Hamiltonian and the operator $Q$ change to

$$\tilde H_G = H_G - X^{(2)}, \qquad \tilde Q = Q + P^{(2)},$$

where we have used the notation

$$X^{(2)} = |\sigma_1\rangle\langle\sigma_2| + |\sigma_2\rangle\langle\sigma_1|, \qquad P^{(2)} = |\sigma_1\rangle\langle\sigma_1| + |\sigma_2\rangle\langle\sigma_2|.$$

This leads to the following expression for $\tilde A(z)$:

$$\tilde A = I + z\tilde H_G + z^2 \tilde Q = A - zX^{(2)} + z^2 P^{(2)}.$$

For convenience we denote $B = z^2 P^{(2)} - zX^{(2)}$. Again we look at the matrix elements of $\tilde A^{-1}$:

$$\langle\sigma|\tilde A^{-1}|\sigma'\rangle = \langle\sigma|(A + B)^{-1}|\sigma'\rangle = \langle\sigma|(I + A^{-1}B)^{-1} A^{-1}|\sigma'\rangle = \langle\sigma|\sum_{j=0}^{\infty} \big({-A^{-1}B}\big)^j A^{-1}|\sigma'\rangle.$$

From the definition of $B$ it follows that

$$P^{(2)} B P^{(2)} = P^{(2)} B = B P^{(2)} = B, \quad (214)$$

from which we see that

$$\langle\sigma|\tilde A^{-1}|\sigma'\rangle - \langle\sigma|A^{-1}|\sigma'\rangle = \sum_{j=1}^{\infty} \langle\sigma|\big({-A^{-1} P^{(2)} B P^{(2)}}\big)^j A^{-1}|\sigma'\rangle$$
$$= -\langle\sigma|A^{-1} P^{(2)} \sum_{j=1}^{\infty} B\big({-P^{(2)} A^{-1} P^{(2)} B}\big)^{j-1} P^{(2)} A^{-1}|\sigma'\rangle$$
$$= -\langle\sigma|A^{-1} P^{(2)} B\big(P^{(2)} + P^{(2)} A^{-1} P^{(2)} B\big)^{-1} P^{(2)} A^{-1}|\sigma'\rangle$$
$$= -\langle\sigma|A^{-1} P^{(2)} \big(B^{-1} + P^{(2)} A^{-1} P^{(2)}\big)^{-1} P^{(2)} A^{-1}|\sigma'\rangle. \quad (215)$$

In the above, whenever necessary, the inverses should be understood as pseudo-inverses. We will again use a block-matrix representation, almost identical to the one used before,

$$S = \begin{pmatrix} T_{n\times n} & U_{n\times 2} \\ V_{2\times n} & W_{2\times 2} \end{pmatrix}, \quad (216)$$

but now the last two entries in the S-matrix correspond to the tails that are to be connected. Multiplying (215) by $1 - z^2$, after some simplifications we get

$$\tilde S = T - z\, U\,(zW - X)^{-1} V, \quad (217)$$

with $X$ being just the Pauli $X$ matrix.
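Formula (217) can likewise be checked numerically. The sketch below (my own test case, not from the original text) takes a 4-cycle with one tail on each vertex, connects the tails at the non-adjacent vertices 1 and 3, and compares (217) with a direct computation using $\tilde A = A - zX^{(2)} + z^2 P^{(2)}$:

```python
import numpy as np

# 4-cycle 0-1-2-3-0 with one tail on each vertex.
Adj = np.array([[0., 1., 0., 1.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [1., 0., 1., 0.]])
H = -Adj
Q = np.zeros((4, 4))                 # R = I: a tail on every vertex
z = np.exp(0.9j)

def smatrix(Hop, Qop, idx):
    # S-matrix elements via (184), restricted to the tails in idx.
    Ainv = np.linalg.inv(np.eye(4) + z * Hop + z**2 * Qop)
    return (1 - z**2) * Ainv[np.ix_(idx, idx)] - np.eye(len(idx))

S = smatrix(H, Q, [0, 2, 1, 3])      # connected tails (1 and 3) listed last
T, U = S[:2, :2], S[:2, 2:]
V, W = S[2:, :2], S[2:, 2:]

# Formula (217): S~ = T - z U (zW - X)^{-1} V, X the Pauli X matrix.
X = np.array([[0., 1.], [1., 0.]])
S_pred = T - z * U @ np.linalg.solve(z * W - X, V)

# Direct computation: connecting the two tails adds the edge (1,3)
# and removes both tails, so H~ = H - X^(2), Q~ = Q + P^(2).
X2 = np.zeros((4, 4)); X2[1, 3] = X2[3, 1] = 1.0
P2 = np.diag([0., 1., 0., 1.])
S_direct = smatrix(H - X2, Q + P2, [0, 2])

assert np.allclose(S_pred, S_direct)
```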
To give an explicit formula for the elements of the new S-matrix, we need the pseudo-inverse of $B^{-1} + P^{(2)} A^{-1} P^{(2)}$:

$$\big(B^{-1} + P^{(2)} A^{-1} P^{(2)}\big)^{-1} = \frac{1}{D(z)}\Big(B - z^2(1 - z^2)\, C^{(2)}\Big), \quad (218)$$

where

$$D(z) = 1 - (a_{12} + a_{21})\,z + (a_{11} + a_{22})\,z^2 - (a_{11}a_{22} - a_{12}a_{21})\,z^2(1 - z^2),$$
$$C^{(2)} = a_{22}|\sigma_1\rangle\langle\sigma_1| - a_{12}|\sigma_1\rangle\langle\sigma_2| - a_{21}|\sigma_2\rangle\langle\sigma_1| + a_{11}|\sigma_2\rangle\langle\sigma_2|,$$

and the $a_{ij}$ are defined through the following formula:

$$P^{(2)} A^{-1} P^{(2)} = a_{11}|\sigma_1\rangle\langle\sigma_1| + a_{12}|\sigma_1\rangle\langle\sigma_2| + a_{21}|\sigma_2\rangle\langle\sigma_1| + a_{22}|\sigma_2\rangle\langle\sigma_2|. \quad (219)$$

Substituting (218) into (215), and again using the definition of the elements of the S-matrix (184), we find

$$\tilde s_{\sigma\sigma'} = s_{\sigma\sigma'} + \frac{z\big(s_{\sigma\sigma_1} s_{\sigma_2\sigma'} + s_{\sigma\sigma_2} s_{\sigma_1\sigma'}\big) - z^2\big(s_{\sigma_1\sigma_2} s_{\sigma\sigma_1} s_{\sigma_2\sigma'} + s_{\sigma_2\sigma_1} s_{\sigma\sigma_2} s_{\sigma_1\sigma'}\big) + z^2\big(s_{\sigma_2\sigma_2} s_{\sigma\sigma_1} s_{\sigma_1\sigma'} + s_{\sigma_1\sigma_1} s_{\sigma\sigma_2} s_{\sigma_2\sigma'}\big)}{1 - \big(s_{\sigma_1\sigma_2} + s_{\sigma_2\sigma_1}\big)z - \big(s_{\sigma_1\sigma_1} s_{\sigma_2\sigma_2} - s_{\sigma_1\sigma_2} s_{\sigma_2\sigma_1}\big)z^2}. \quad (220)$$

4.8.1 Composition of unitary gates

The universality of quantum walks in continuous time was proved in [20]. In the model presented there, a quantum wire corresponds to a set of tails connected to a graph. Specifically, the state of a qudit is represented by a linear superposition of incoming waves, all with the same energy, on $d$ of the tails. Thus to each tail there corresponds one of the orthogonal states of the qudit. The graph to which the tails are connected implements the quantum gate. Another set of $d$ tails connected to the graph represents the quantum wire which carries the state of the qudit with the quantum gate applied to it. In order for this to represent a quantum gate, a wave coming in on any incoming tail must scatter onto outgoing tails only, at the fixed energy at which the computation is performed (the scattering matrix of a graph cannot satisfy this condition for every energy). Thus it is easy to see that, for that particular energy, the S-matrix for this graph needs to have the block form

$$S = \begin{pmatrix} 0 & S_{io} \\ S_{oi} & 0 \end{pmatrix}, \quad (221)$$

where the first $d$ entries stand for incoming tails and the last $d$ for outgoing ones.
Thus the quantum gate being implemented is given by the unitary matrix $S_{oi}$. We want to show that if we connect two graphs using the rules for connecting tails, we end up with a gate that is the composition of the quantum gates corresponding to each graph. Before we have connected the two graphs, the S-matrix is just the direct sum of their S-matrices:

$$S = \begin{pmatrix} 0 & S^1_{io} & 0 & 0 \\ S^1_{oi} & 0 & 0 & 0 \\ 0 & 0 & 0 & S^2_{io} \\ 0 & 0 & S^2_{oi} & 0 \end{pmatrix}. \quad (222)$$

We will use a generalization of formula (217) to find the S-matrix after the connections are made. The formula retains the same form even if we connect more than two tails, as long as we arrange the tails being connected to correspond to entries in the lower right corner of the S-matrix. The generalization $\tilde X$ of $X$ in formula (217) is given by either

$$\tilde X = \begin{pmatrix} X & 0 & \cdots & 0 \\ 0 & X & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & X \end{pmatrix} \quad (223)$$

or

$$\tilde X = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix}, \quad (224)$$

depending on how the tails being connected are arranged in the matrix $W$. Thus, by permuting the entries of (222), we obtain

$$S' = \begin{pmatrix} 0 & 0 & 0 & S^1_{io} \\ 0 & 0 & S^2_{oi} & 0 \\ 0 & S^2_{io} & 0 & 0 \\ S^1_{oi} & 0 & 0 & 0 \end{pmatrix}. \quad (225)$$

Thus for the block matrices $T$, $U$, $V$, and $W$ we have

$$T = 0, \qquad U = \begin{pmatrix} 0 & S^1_{io} \\ S^2_{oi} & 0 \end{pmatrix}, \quad (226)$$

$$V = \begin{pmatrix} 0 & S^2_{io} \\ S^1_{oi} & 0 \end{pmatrix}, \qquad W = 0, \quad (227)$$

and $\tilde X$ needs to be of the second form. For the new S-matrix we obtain

$$\tilde S = T - z\, U\,(zW - \tilde X)^{-1} V \quad (228)$$
$$= z \begin{pmatrix} 0 & S^1_{io} \\ S^2_{oi} & 0 \end{pmatrix} \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix}^{-1} \begin{pmatrix} 0 & S^2_{io} \\ S^1_{oi} & 0 \end{pmatrix} \quad (229)$$
$$= z \begin{pmatrix} 0 & S^1_{io} S^2_{io} \\ S^2_{oi} S^1_{oi} & 0 \end{pmatrix}. \quad (230)$$

We see that the unitary being implemented is given by $z\, S^2_{oi} S^1_{oi}$, which is exactly the composition of the two unitaries, up to the phase $z$. The edges that connect the two graphs, formed when the tails are connected, lead to the introduction of this phase.

4.9 Unitarity preservation in cutting and connecting tails

In this section we prove that the above operations preserve the unitarity of the S-matrix.
We need the following lemma.

Lemma 6. Given a unitary matrix $S$ with block-form representation

$$S = \begin{pmatrix} T_{n\times n} & U_{n\times m} \\ V_{m\times n} & W_{m\times m} \end{pmatrix},$$

and a unitary matrix $G_{m\times m}$ such that $G + W$ is invertible, the matrix

$$\tilde{S} = T - U(G+W)^{-1}V \qquad (231)$$

is unitary.

Proof. The unitarity of $S$ implies that

$$TT^\dagger = I - UU^\dagger, \qquad T^\dagger T = I - V^\dagger V,$$
$$T^\dagger U = -V^\dagger W, \qquad U^\dagger T = -W^\dagger V,$$
$$TV^\dagger = -UW^\dagger, \qquad VT^\dagger = -WU^\dagger,$$
$$VV^\dagger = I - WW^\dagger, \qquad U^\dagger U = I - W^\dagger W. \qquad (232)$$

In the above formulas, although not indicated, the identity $I$ is to be understood as having the appropriate dimension in each equation. Then

$$\begin{aligned}
\tilde{S}\tilde{S}^\dagger &= \left(T - U(G+W)^{-1}V\right)\left(T - U(G+W)^{-1}V\right)^\dagger \\
&= TT^\dagger - U(G+W)^{-1}VT^\dagger - TV^\dagger(G^\dagger+W^\dagger)^{-1}U^\dagger + U(G+W)^{-1}VV^\dagger(G^\dagger+W^\dagger)^{-1}U^\dagger \\
&= I - UU^\dagger + U(G+W)^{-1}WU^\dagger + UW^\dagger(G^\dagger+W^\dagger)^{-1}U^\dagger \\
&\quad + U(G+W)^{-1}\left(I - WW^\dagger\right)(G^\dagger+W^\dagger)^{-1}U^\dagger \\
&= I - U\left[I - (G+W)^{-1}W - W^\dagger(G^\dagger+W^\dagger)^{-1} - (G+W)^{-1}(G^\dagger+W^\dagger)^{-1} + (G+W)^{-1}WW^\dagger(G^\dagger+W^\dagger)^{-1}\right]U^\dagger \\
&= I - U\left[\left(I - (G+W)^{-1}W\right)\left(I - W^\dagger(G^\dagger+W^\dagger)^{-1}\right) - (G+W)^{-1}(G^\dagger+W^\dagger)^{-1}\right]U^\dagger \\
&= I - U\left[(G+W)^{-1}GG^\dagger(G^\dagger+W^\dagger)^{-1} - (G+W)^{-1}(G^\dagger+W^\dagger)^{-1}\right]U^\dagger \\
&= I, \qquad (233)
\end{aligned}$$

where in the last two steps we used $I - (G+W)^{-1}W = (G+W)^{-1}G$ and $GG^\dagger = I$. This proves the lemma.

In the case of cutting a tail, from formula (212) we see that $G = -I/z^2$, which is unitary because $|z| = 1$. In the case of connecting two tails, from formula (217) we see that $G = -X/z$, which is obviously unitary as well. This proves the unitarity of the S-matrix after an operation of cutting or connecting tails.

4.10 Discussion

Quantum walks in their many varieties have proven immensely useful in the area of quantum computation, as a tool for developing new algorithms and as an abstract model for studying the behavior of quantum systems. Since the model we are concerned with in this paper involves quantum walks on infinite graphs, it is natural to develop scattering theory for these systems. We have studied both propagating and bound states, and how they depend on the structure of the finite graph to which the tails are attached. The equations that define the propagating and bound states are an example of the quadratic eigenvalue problem.
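Lemma 6 above can be spot-checked numerically. The following sketch (not part of the thesis; the block sizes `n`, `m`, the seed, and the QR-based random-unitary construction are arbitrary choices) draws a random unitary $S$, splits it into the four blocks, picks a random unitary $G$, and verifies that $\tilde{S} = T - U(G+W)^{-1}V$ is again unitary:

```python
import numpy as np

def random_unitary(d, rng):
    # A unitary matrix from the QR decomposition of a complex Gaussian matrix
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(a)
    return q

rng = np.random.default_rng(42)
n, m = 4, 3

S = random_unitary(n + m, rng)
T, U = S[:n, :n], S[:n, n:]           # block form of the unitary S
V, W = S[n:, :n], S[n:, n:]
G = random_unitary(m, rng)            # unitary G; G + W is invertible for
                                      # generic random choices

# Lemma 6: S~ = T - U (G + W)^{-1} V is again unitary
S_tilde = T - U @ np.linalg.inv(G + W) @ V
assert np.allclose(S_tilde @ S_tilde.conj().T, np.eye(n))
assert np.allclose(S_tilde.conj().T @ S_tilde, np.eye(n))
```

Since the singular choices of $G$ form a measure-zero set, a random draw makes $G + W$ invertible with probability one, matching the hypothesis of the lemma.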
The definition of the S-matrix, its unitarity, and the conservation of probability density and current were addressed, as well as the orthogonality of propagating and bound states. We were able to derive formulas for the S-matrix of a graph obtained by operations on the tails in terms of the S-matrix of the original graph.

In future work, we would like to pursue the definition of hitting time for such graphs. In previous work [50, 54, 55, 80], properties of the hitting time for finite graphs were explored. We would like to give an appropriate definition of hitting time for graphs with tails. One such definition may follow in the footsteps of [80], where the quantum walk starts from an arbitrary state and we are interested in the probability and average time to hit any of the tails. A different approach could be followed if we consider a wave packet that is incoming along one of the tails with an energy in a narrow band. Then a hitting time may be defined as the time it takes for the packet to reach a different outgoing tail, compared with some standard graph. For example, if we have a graph with just two tails, we can measure the time for which a packet travels through the graph from one of the tails to the other and compare it with the time that the same packet would take to move along the infinite line.

Defining the hitting time will permit us to explore new ideas and paradigms for quantum algorithms. In [55] infinite hitting times were discovered for the discrete-time case, and later the same phenomenon was observed for the continuous-time case in [80]. The existence of infinite hitting times can lead to possible quantum algorithms. Such algorithms can be based on the simple observation that the quantum walk may or may not hit a certain set of vertices, depending on the existence of an infinite hitting time. This would serve as the final result in our quantum computation, much like the evaluation of the NAND tree is based on whether the quantum walk hits the second tail or not.
Both infinite hitting times and exponentially fast quantum algorithms, as in [21], are based on the idea that the quantum walk is constrained to evolve on a subspace of the whole Hilbert space because of some special property of the graph. Symmetry of the graph is one property that can lead to these effects, but other properties may do so as well. This is an open problem, and an area of active research.

Chapter 5: Conclusion

This thesis discussed topics centered around weak measurements and continuous-time quantum walks. We have obtained results based on feedback control of weak measurements, defined hitting times for continuous-time quantum walks through the use of weak continuous measurements, and developed a model of continuous-time quantum walks on graphs with tails.

The result of a weak measurement is two-fold: a small change in the state of the quantum system being measured, and a classical signal giving information about the effected change. We showed that a discrete measurement procedure can simulate a projective measurement without the use of feedback. The classical signal obtained lives on a grid on a suitable manifold and, in a sense, can be thought of as a classical random walk on that grid. We used that idea to construct a feedback weak measurement process that in the long-time limit can simulate any strong generalized measurement. As in the simpler projective case, the classical signal lives on a curved Riemannian manifold with an appropriately chosen metric. The feedback that changes the instantaneous weak measurement is based on a filtered classical signal history. The final result is a closed set of stochastic equations for the measurement process and the filtered classical signal. This kind of construction could potentially be used to better understand the structure of the set of all generalized measurements, not unlike the idea of studying Lie algebras to characterize Lie groups.
Whether such a procedure could be implemented experimentally depends greatly on how much control we have over the system, or in other words, on what kinds of operations we can perform on it.

Weak measurements were again used to properly define hitting times for continuous-time quantum walks. The strength of the measurement can be varied, and the dependence of the hitting time on it has been examined. In both cases, of the strength going to zero or to infinity, we saw that the hitting time goes to infinity, which naturally leads to a value of the strength for which the hitting time achieves a minimum. We also observed that infinite hitting times exist when the quantum walk, starting from a particular state, is naturally confined to evolve in a subspace of the whole Hilbert space, provided the Hilbert space can be represented as a direct sum of such orthogonal subspaces. The transition between classical discrete-time and continuous-time random walks is well understood, in contrast with the quantum case. This work opens the possibility of comparing hitting times for continuous-time classical random walks and quantum walks.

In the last topic, a model of a continuous-time quantum walk on graphs with infinite tails was developed. Propagating states and two types of bound states were defined and their orthogonality proven. In contrast with quantum scattering theory on $L^2(\mathbb{R}^n)$, for example, where there is only one type of bound state, here we observe the existence of bound states that do not decay exponentially on the infinite tails. The S-matrix for this model was defined and its properties explored. Finally, we gave formulas for the S-matrix under the operations of cutting, adding or connecting tails, and proved its unitarity. This suggests that any graph could be constructed from smaller, simpler graphs and its S-matrix calculated using the formulas we derived.

Bibliography

[1] Y. Aharonov, D. Z. Albert and L. Vaidman, Phys. Rev. Lett. 60, 1351 (1988).
[2] D. Aharonov, A. Ambainis, J. Kempe and U. Vazirani, Proc. 33rd Annual ACM Symposium on Theory of Computing (STOC 2001), 50, Assoc. for Comp. Machinery, New York, 2001.
[3] A. Ambainis, Int. J. Quantum Info. 1, No. 4, 507 (2003).
[4] A. Ambainis, Proc. FOCS 22 (2004).
[5] A. Ambainis, Quantum walk algorithm for element distinctness, in Proc. of 45th FOCS, 22 (2004).
[6] A. Ambainis, E. Bach, A. Nayak, A. Vishwanath and J. Watrous, Proc. 33rd Annual ACM Symposium on Theory of Computing, 37, 2001.
[7] A. Ambainis, A. M. Childs, B. W. Reichardt, R. Špalek and S. Zhang, Proc. 48th IEEE Symposium on Foundations of Computer Science, 363 (2007).
[8] E. Bach, S. Coppersmith, M. Goldschen, R. Joynt and J. Watrous, Journal of Computer and System Sciences 69, 562 (2004).
[9] A. Bain, Stochastic Calculus, http://www.chiark.greenend.org.uk/~alanb/stoc-calc.pdf (2006).
[10] C. H. Bennett and G. Brassard, Quantum Cryptography: Public key distribution and coin tossing, Proc. of the IEEE International Conference on Computers, Systems, and Signal Processing, Bangalore, 175 (1984).
[11] C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres and W. K. Wootters, Teleporting an Unknown Quantum State via Dual Classical and Einstein-Podolsky-Rosen Channels, Phys. Rev. Lett. 70, 1895 (1993).
[12] T. A. Brun, Continuous measurements, quantum trajectories, and decoherent histories, Phys. Rev. A 61, 042107 (2000).
[13] T. A. Brun, A simple model of quantum trajectories, Am. J. Phys. 70, 719 (2002).
[14] T. A. Brun, H. A. Carteret and A. Ambainis, Phys. Rev. A 67, 032304 (2003).
[15] T. A. Brun, H. A. Carteret and A. Ambainis, Phys. Rev. A 67, 052317 (2003).
[16] T. A. Brun, H. A. Carteret and A. Ambainis, Phys. Rev. Lett. 91, 130602 (2003).
[17] H. Buhrman and R. Špalek, Quantum verification of matrix products, in Proc. of 17th ACM-SIAM SODA, 880 (2006).
[18] H. J. Carmichael, An Open Systems Approach to Quantum Optics, Springer, Berlin, 1993.
[19] A. M. Childs, e-print arXiv:0810.0312 [quant-ph] (2009).
[20] A. M. Childs, Phys. Rev. Lett. 102, 180501 (2009).
[21] A. M. Childs, R. Cleve, E. Deotto, E. Farhi, S. Gutmann and D. A. Spielman, Proc. 35th ACM Symposium on Theory of Computing, 59 (2003).
[22] A. M. Childs, E. Farhi and S. Gutmann, Quantum Information Processing 1, 35 (2002).
[23] A. M. Childs and J. Goldstone, Phys. Rev. A 70, 022314 (2004).
[24] D. Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer, Proc. R. Soc. Lond. A 400, 97 (1985).
[25] D. Deutsch and R. Jozsa, Rapid solutions of problems by quantum computation, Proc. R. Soc. Lond. A 439, 553 (1992).
[26] D. P. DiVincenzo, The Physical Implementation of Quantum Computation, e-print arXiv:quant-ph/0002077 (2000).
[27] A. Einstein, B. Podolsky and N. Rosen, Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?, Physical Review 47, 777 (1935).
[28] K. D. Elworthy, Stochastic Differential Equations on Manifolds, Cambridge University Press, Cambridge, 1982.
[29] M. Émery, Stochastic Calculus in Manifolds, Springer-Verlag, Berlin, 1989.
[30] L. C. Evans, An Introduction to Stochastic Differential Equations, Version 1.2, http://math.berkeley.edu/~evans/SDE.course.pdf, Department of Mathematics, UC Berkeley.
[31] E. Farhi and S. Gutmann, Phys. Rev. A 58, 915 (1998).
[32] E. Farhi, J. Goldstone and S. Gutmann, e-print arXiv:quant-ph/0702144v2 (2007).
[33] E. Feldman and M. Hillery, Phys. Lett. A 324, 277 (2004).
[34] E. Feldman and M. Hillery, Quantum walks on graphs and quantum scattering theory, in Coding Theory and Quantum Computing, edited by D. Evans, J. Holt, C. Jones, K. Klintworth, B. Parshall, O. Pfister and H. Ward, Contemporary Mathematics 381, 71 (2005).
[35] E. Feldman and M. Hillery, J. Phys. A: Math. Theor. 40, 11343 (2007).
[36] F. R. Gantmakher, The Theory of Matrices, Vols. I & II, Chelsea Pub. Co., New York, 1959.
[37] N. Gisin, Quantum Measurements and Stochastic Processes, Phys. Rev. Lett. 52, 1657 (1984).
[38] N. Gisin and I. C. Percival, Quantum State Diffusion: from Foundations to Applications, e-print arXiv:quant-ph/9701024 (1997).
[39] S. K. Godunov, Modern Aspects of Linear Algebra, American Mathematical Society, Providence, R.I., 1998.
[40] L. K. Grover, A fast quantum mechanical algorithm for database search, Proc. of the 28th Annual ACM Symposium on the Theory of Computing, 212 (1996).
[41] M. Guţă, L. Bouten and H. Maassen, Stochastic Schrödinger equation, e-print arXiv:quant-ph/0309205 (2003).
[42] A. Guichardet, Symmetric Hilbert Spaces and Related Topics, Lect. Notes Math., Vol. 261, Springer, 1972.
[43] M. Hillery, J. Bergou and E. Feldman, Phys. Rev. A 68, 032314 (2003).
[44] E. P. Hsu, Stochastic Analysis on Manifolds, American Mathematical Society, Providence, R.I., 2002.
[45] L. P. Hughston, Geometry of Stochastic State Vector Reduction, Proceedings: Mathematical, Physical and Engineering Sciences 452, issue 1947, 953 (1996).
[46] N. Ikeda and S. Watanabe, Stochastic Differential Equations and Diffusion Processes, North-Holland Pub. Co., Amsterdam, 1981.
[47] D. Kannan, An Introduction to Stochastic Processes, North Holland Pub. Co., New York, 1979.
[48] I. Karatzas, A Tutorial Introduction to Stochastic Analysis and its Applications, http://www.math.columbia.edu/~ik/tutor.pdf (1988).
[49] J. Kempe, Contemp. Phys. 44, 307 (2003).
[50] J. Kempe, Proc. of 7th International Workshop on Randomization and Approximation Techniques in Computer Science (RANDOM 2003), edited by S. Arora, K. Jansen, J. D. P. Rolim and A. Sahai, 354, Springer, Berlin, 2003.
[51] V. Kendon, Int. J. Quantum Info. 4, No. 5, 791 (2006).
[52] V. Kendon, e-print arXiv:quant-ph/0606016 (2006).
[53] V. Kendon and B. Sanders, Phys. Rev. A 71, 022307 (2004).
[54] H. Krovi and T. A. Brun, Phys. Rev. A 73, 032341 (2006).
[55] H. Krovi and T. A. Brun, Phys. Rev. A 74, 042334 (2006).
[56] H. Krovi and T. A. Brun, Phys. Rev. A 75, 062332 (2007).
[57] L. Schwartz, Semimartingales and Their Stochastic Calculus on Manifolds, edited by I. Iscoe, Presses de l'Université de Montréal, Montreal, Que., 1984.
[58] F. Magniez and A. Nayak, Proc. ICALP, Vol. 1770 of Lecture Notes in Computer Science, 1312 (2005).
[59] F. Magniez, M. Santha and M. Szegedy, Quantum algorithms for the triangle problem, in Proc. of 16th SODA, 1109 (2005).
[60] B. Misra and E. C. G. Sudarshan, J. Math. Phys. 18, 756 (1977).
[61] A. Mizel, D. A. Lidar and M. Mitchell, Phys. Rev. Lett. 99, 070502 (2007).
[62] A. Montanaro, e-print arXiv:quant-ph/0504116 (2005).
[63] C. Moore and A. Russell, Proc. of 6th International Workshop on Randomization and Approximation Techniques in Computer Science (RANDOM 2002), Vol. 2483 of LNCS, 16 (2002).
[64] R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press, Cambridge, 1995.
[65] M. Nakahara, Physical Implementation of Quantum Computing: Are the DiVincenzo criteria fulfilled in 2004?, World Scientific (2004).
[66] A. Nayak and A. Vishwanath, DIMACS Technical Report 2000-43 (2000).
[67] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, Cambridge, 2000.
[68] O. Oreshkov and T. A. Brun, Weak measurements are universal, Phys. Rev. Lett. 95, 110409 (2005).
[69] O. Oreshkov and T. A. Brun, Infinitesimal local operations and differential conditions for entanglement monotones, Phys. Rev. A 73, 042314 (2006).
[70] A. Peres, Quantum Theory: Concepts and Methods, Kluwer, Dordrecht, 1993.
[71] M. M. Rao, Real and Stochastic Analysis: New Perspectives, Birkhäuser, Boston, 2004.
[72] B. W. Reichardt and R. Špalek, Proc. 40th ACM Symposium on Theory of Computing, 103 (2008).
[73] P. C. Richter, New J. Phys. 9, 72 (2007).
[74] P. C. Richter, Phys. Rev. A 76, 042306 (2007).
[75] N. Shenvi, J. Kempe and K. Birgitta Whaley, Phys. Rev. A 67, 052307 (2003).
[76] P. W. Shor, SIAM J. Comp. 26, 1484 (1997).
[77] M. Szegedy, Proc. 45th IEEE Symposium on Foundations of Computer Science, 32, 2004.
[78] F. Tisseur and K. Meerbergen, SIAM Review 43, No. 2, 235 (2001).
[79] B. Tregenna, W. Flanagan, R. Maile and V. Kendon, New J. Phys. 5, 83 (2003).
[80] M. Varbanov, H. Krovi and T. A. Brun, Phys. Rev. A 78, 022324 (2008).
[81] J. von Neumann, The Mathematical Foundations of Quantum Mechanics, Princeton University Press (1996).
[82] J. Watrous, Proc. 33rd Symposium on the Theory of Computing, p. 60, ACM Press, New York, 2001.
[83] H. M. Wiseman, Quantum Trajectories and Quantum Measurement Theory, Quantum Semiclass. Opt. 8, 205 (1996).
[84] T. Yamasaki, H. Kobayashi and H. Imai, Phys. Rev. A 68, 012302 (2003).
Abstract
The topics presented in this thesis have continuous quantum measurements and quantum walks at their core. The first topic centers around simulating a generalized measurement with a finite number of outcomes using a continuous measurement process with a continuous measurement history. We provide conditions under which it is possible to prove that such a process exists and that at long times it faithfully simulates the generalized measurement. We give the stochastic equations governing the feedback between the measurement history and the instantaneous weak measurements. The second topic examines a definition of hitting time for continuous-time quantum walks. A crucial component of such a definition is the use of weak measurements. Several methods using alternative but equivalent definitions of weak, continuous measurements are employed to derive a formula for the hitting time. The behavior of the hitting time thus defined is studied subsequently, both in general and for specific graphs. The last topic explores continuous-time quantum walks on graphs with infinite tails. The equations for propagating and bound states are derived and the S-matrix is defined. Their properties, such as the orthogonality of the propagating and bound states and the unitarity of the S-matrix, are discussed. Formulas for the S-matrix under the operations of cutting, adding or connecting tails are derived.
Asset Metadata
Creator: Varbanov, Martin (author)
Core Title: Topics in quantum information -- Continuous quantum measurements and quantum walks
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Physics
Publication Date: 11/23/2009
Defense Date: 11/02/2009
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tags: continuous measurements, graphs, hitting times, OAI-PMH Harvest, scattering theory, weak measurements
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Brun, Todd A. (committee chair), Caire, Giuseppe (committee member), Haas, Stephan (committee member), Johnson, Clifford (committee member), Zanardi, Paolo (committee member)
Creator Email: varbanov@usc.edu, varbanovmartin@yahoo.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-m2758
Unique Identifier: UC1214690
Identifier: etd-Varbanov-3387 (filename), usctheses-m40 (legacy collection record id), usctheses-c127-279352 (legacy record id), usctheses-m2758 (legacy record id)
Legacy Identifier: etd-Varbanov-3387.pdf
Dmrecord: 279352
Document Type: Dissertation
Rights: Varbanov, Martin
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Repository Name: Libraries, University of Southern California
Repository Location: Los Angeles, California
Repository Email: cisadmin@lib.usc.edu