DISSIPATION AS A RESOURCE FOR QUANTUM INFORMATION PROCESSING: HARNESSING THE POWER OF OPEN QUANTUM SYSTEMS

by Jeffrey Marshall

supervised by Dr. Paolo Zanardi and Dr. Lorenzo Campos Venuti

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (PHYSICS & ASTRONOMY)

May 2019

Acknowledgements

There are many people to whom I wish to express my sincerest thanks for helping me through this process. Firstly, to Emily, for joining me and moving thousands of miles to the US. You gave up a lot to make this work, and now, at the end, I think we both agree that it has been worth it. We've made many amazing new friends and had some truly fantastic experiences. Without your support, I don't know how this could have all worked out so perfectly. I cannot express how appreciative I am.

Of course, I owe an immense amount of gratitude to my "co-advisors", Paolo Zanardi and Lorenzo Campos Venuti, who have been extremely generous with their time throughout (often for several consecutive hours!), and who allowed me lots of freedom to pursue my own ideas and projects. You were both always willing to listen to new ideas and critique them objectively, but also to help expand upon them. Your approach to science has perhaps had the most direct impact on me, and will stay with me for the rest of my career. Moreover, I thank you for being extremely supportive of me pursuing additional projects and lecture courses not directly in my field of research, which has helped to broaden my knowledge base considerably.

I would like to thank Itay Hen, with whom I also had the pleasure of working on several projects, for providing me with many new learning and research opportunities. I learned a lot in the work we conducted together, in particular about computer simulations and data analysis. These skills, which I would not otherwise have developed in such detail, have been extremely valuable, and I will continue to use them after completing this degree. You were also very supportive of me pursuing my own ideas, actively provided lots of your own input during the research process, and introduced me to several other collaborators.

I thank Eleanor G. Rieffel and Davide Venturelli for allowing me to work with them during this past summer. I really enjoyed interacting with your group, discussing new papers, and conducting exciting research, and I look forward to continuing to work with you.

To all of my friends and family from the UK: you all made this process as easy as possible, always encouraging, and never questioning the choice to move abroad. I thank, in particular, my parents, Karen and Ian, and my brother Kenny, as it must have been most difficult for you. You were always very supportive of me, and visited us many times over the last few years, which has been most appreciated. Similarly, to all of the friends who journeyed (often multiple times!) across the pond: your visits really helped us feel less estranged. I give special thanks to Emily's family (Anne, Des, Vic, Matt, Bobbie) for their enthusiastic support and encouragement, and also for visiting us as often as possible. You've always been available when needed, and have helped make the distances feel smaller.
Lastly, I thank all of the friends we've made whilst in Los Angeles. By name I would like to mention, from the Physics department, Georgios, Jared, Malte, Mischi, Pancham, Pete, and Roelof. We have had many enlightening conversations, both formal and informal, about different aspects of physics and philosophy, which have undoubtedly impacted my outlook on science, and life.

Abstract

Dissipation, noise, and the associated decoherence are typically regarded as detrimental from the perspective of performing quantum information processing tasks. It is therefore remarkable that dissipative quantum processes can in fact be harnessed and used themselves as a computational resource, to enact quantum information primitives and prepare quantum states. In this work we explore several recent developments in this area, in particular: i) the use of strongly dissipative processes to achieve universal quantum computation within a scalable framework, ii) classifying quantum data according to dissipation alone, iii) suppressing errors using noise of a non-Markovian nature, and iv) utilizing thermalization to improve the performance of quantum annealers.

List of Publications

J. Marshall, L. Campos Venuti, and P. Zanardi, Quantum data classification by dissipation, arXiv:1811.03175 (2018)

J. Marshall, D. Venturelli, I. Hen, and E. G. Rieffel, The power of pausing: advancing understanding of thermalization in experimental quantum annealers, arXiv:1810.05881 (2018)

L. Barash, J. Marshall, M. Weigel, and I. Hen, Estimating the Density of States of Frustrated Spin Systems, arXiv:1808.04340 (2018)

J. Marshall, L. Campos Venuti, and P. Zanardi, Noise suppression via generalized-Markovian processes, Phys. Rev. A 96, 052113 (2017)

J. Marshall, E. G. Rieffel, and I. Hen, Thermalization, freeze-out and noise: deciphering experimental quantum annealers, Phys. Rev. Applied 8, 064025 (2017)

J. Marshall, L. Campos Venuti, and P. Zanardi, Modular quantum information processing by dissipation, Phys. Rev. A 94, 052339 (2016)

J. Marshall, V. Martin-Mayor, and I. Hen, Practical engineering of hard spin-glass instances, Phys. Rev. A 94, 012320 (2016)

P. Zanardi, J. Marshall, and L. Campos Venuti, Dissipative universal Lindbladian simulation, Phys. Rev. A 93, 022312 (2016)

Contents

1 Introduction
  1.1 Historical Prelude
  1.2 Why Quantum Computers?
  1.3 Principles of Quantum Computation
  1.4 Challenges of the Age
  1.5 Fighting Decoherence
  1.6 Dissipative Technologies
  1.7 Contributions of this Thesis

2 Theory and Overview: Open Quantum Systems
  2.1 Basics and Notation
  2.2 Interaction with an Environment
  2.3 Quantum Maps and Dynamics
  2.4 Approximations and Master Equations
    2.4.1 The Markovian regime and the Lindblad master equation
    2.4.2 Spectral properties of the Lindblad master equation
    2.4.3 Beyond the Markovian regime
3 Dissipative Universal Simulation and Computation
  3.1 A Perturbative Approach to Dynamics
    3.1.1 Time dependence
    3.1.2 Higher order effects
  3.2 Simulating Markovian Systems
    3.2.1 Scaling of Resources
    3.2.2 Simulating collective amplitude damping
  3.3 A Dissipative Computational Network
    3.3.1 Coherent dynamics in DGMs
    3.3.2 Incoherent DGMs: coherence and entanglement
      3.3.2.1 Dephasing in the computational basis
    3.3.3 Robustness
      3.3.3.1 Hamiltonian Errors
      3.3.3.2 Lindbladian Errors
  3.4 Discussion

4 Dissipative Quantum Circuits and Data Classification
  4.1 Computing Boolean Formulas by Dissipation
    4.1.1 Dissipative evaluation of 3-CNF clause
    4.1.2 Dissipative evaluation of arbitrary 3-CNF
  4.2 Applications
    4.2.1 A dissipative quantum data classifier
      4.2.1.1 Quantum machine learning the class of conjunctions
    4.2.2 Probabilistic preparation of quantum states
      4.2.2.1 Preparation of entangled states
      4.2.2.2 Generating a superposition of all solutions to 3-SAT problem
  4.3 Discussion

5 Quantum Error Suppression via Non-Markovianity
  5.1 An Integro-Differential Master Equation
    5.1.1 Derivation of a generalized-Markovian master equation
  5.2 Protecting Qubits from Decoherence
    5.2.1 Dephasing noise
      5.2.1.1 Purely decaying correlations
      5.2.1.2 Modulated decaying correlations
    5.2.2 Thermalization
      5.2.2.1 Thermal spectral theorem
  5.3 Alternative Master Equations
  5.4 Summary

6 Quantum Annealing Exploiting Thermalization: an Experimental Foray
    6.0.1 Quantum annealing primer
  6.1 Open System Quantum Annealing
  6.2 Probing Thermalization
    6.2.1 Reverse annealing with a pause
    6.2.2 Correlation with the minimum gap
    6.2.3 Quantum Boltzmann distribution
      6.2.3.1 Computing the quantum Boltzmann distribution
    6.2.4 Classical Boltzmann distribution
  6.3 Improving Performance via Thermalization

7 Concluding Remarks

Appendices
  A Numerical Methods: Vectorization
  B Appendices for Chapter 3
    B.1 Effective unitary dynamics for time-dependent Hamiltonian
    B.2 Proof of second order effective generator [Eq. (3.7)]
    B.3 Derivation of Eq. (3.33)
    B.4 Derivation of Eq. (3.40)
  C Appendices for Chapter 5
    C.1 Derivation of Eqs. (5.17) and (5.18)
    C.2 Derivation of (t) [Eq. (5.24)]
    C.3 Conditions for complete positivity
      C.3.1 The case $\omega = i|\omega|$
    C.4 Modulated decay noise - derivations
  D Generation of Problem Instances for Quantum Annealing

Bibliography

Chapter 1
Introduction

1.1 Historical Prelude

Computation has historically seemed to follow developments in mathematics, physics, astronomy and technology, for use in solving the numerical problems of the age. The earliest devices were simple counting mechanisms (such as the abacus), appearing alongside the invention of number systems and basic algebra [1]. The next step in the history of computers was the development of more advanced and intricate mechanical devices for use in astronomy. This step was extremely important because these devices, in contrast to counting devices in which the human plays an integral role in the computation, operated essentially independently of the user (other than to initialize and wind up any spring mechanisms). The impetus to create such devices, stemming from advances in astronomy and their use in areas such as naval navigation, was made possible by certain technological developments, such as the gear. From these early designs, philosophers and scientists (such as Pascal, Leibniz and Napier) made progress constructing more and more elaborate devices, along the lines of what we would today most likely call mechanical calculators [1].

It was not until Babbage, in the pre- and early Victorian era, that plans were created (and some small-scale models built) detailing a device very similar to the modern digital computer, built entirely mechanically using the technology of the time (e.g. gears, cogs, and powered by steam) [2]. His analytical engine was unfortunately not built during his lifetime, but this work paved the way for advances in the next century. His contribution took the ideas behind special-purpose mechanical calculators and upgraded them to general-purpose computers.

The next significant developments took place during the Second World War. In some sense these evolved out of necessity, with large electro-mechanical (and eventually fully electronic) devices used to aid in the war efforts. These included the Bombe and Colossus, which were designed to decrypt messages from Enigma and Lorenz machines [3]. The American contingent also made progress, completing the ENIAC in 1945 [2].
Following the war, the stage was set to build general-purpose electronic computers. At this time, Turing had already outlined much of the basic theory of modern computer science, and the devices constructed during the war had developed the technology to the point where electronic digital general-purpose computers were realizable on a more commercial scale. After the creation of the transistor, and the start of the silicon age, computers increased exponentially in power, according to Moore's law. At this point, computer science had become a well-defined subject in its own right.

1.2 Why Quantum Computers?

With access to devices capable of performing millions of calculations per second, scientists and engineers started to rely heavily on computers, and new sub-fields such as computational physics and chemistry emerged. The underlying goal behind this research is to understand how molecules and atoms interact at the quantum level. This has implications in areas such as human biology, cancers, drug design and materials science, where it is necessary to understand what occurs at the sub-microscopic level. This in principle involves computing the time evolution of a quantum system, for example via Schrödinger's equation. Unfortunately, the exact simulation of such systems is not possible, except for those containing only a few atoms, due to the exponential size of the Hilbert space; i.e. for $N$ $d$-level atoms, one needs to keep track of, in principle, $d^N$ complex numbers. Even the most powerful supercomputers today cannot perform this task for even a modest number of atoms, around 50. Approximations can be made, however, resulting in polynomially scaling computational techniques. These include density functional theory (DFT) and variants thereof, which can be used to make accurate calculations about the electronic structure of many-body systems [4, 5]. Despite the success of these approaches, they have a limited range of applicability, and there are many processes which cannot be explained at all by such methods [6].

Feynman, in the early 1980s, is typically credited with the idea of using quantum mechanics itself to overcome such problems [7]. In particular, if one had access to a device which operated inherently according to the principles of quantum mechanics, and which could be used to simulate any other quantum system, then one need not try to emulate quantum physics on a `classical' computer with the exponential overhead; one could run it directly on this specialist quantum device itself, using polynomial resources. These ideas were put on a rigorous footing by David Deutsch in 1985 [8], who generalized the work of Turing by describing a quantum Turing machine, i.e., a quantum computer.

However, it took until 1992 for the introduction of what could be considered the first true quantum algorithm [9]; that is, an algorithm which relies on the unique properties of quantum mechanics. Though of no practical use (the algorithm decides whether a Boolean function is balanced or constant), the Deutsch-Jozsa algorithm demonstrated that a quantum computer may offer advantages in solving problems typically thought of as `classical' in nature. In 1994, the start of a new era began, when Shor introduced a quantum algorithm for efficient integer factorization [10], with complexity $O(n^2)$ (suppressing some additional logarithmic factors for convenience), as compared to the best known classical algorithm, which is subexponential in the number of bits $n$.
This was the first time a practical application could be found for a quantum computer, i.e. solving a relevant problem which is entirely classical in nature. Since then, several other algorithms have been developed [11-15], and the field of quantum computation is now growing rapidly, spawning sub-fields itself (e.g. quantum machine learning, artificial intelligence, and cryptography, to name a few), all promising to show advantages over traditional, classical computer systems.

1.3 Principles of Quantum Computation

The basic operating principles of quantum computation are fairly simple, and can be described in a way analogous to classical computation. A quantum computer consists of a set of two-level quantum systems, which are manipulated according to the laws of quantum mechanics, i.e. unitarily. These two-level systems are called `qubits', some examples of which are two isolated energy levels of an ion or atom [16], electron spins in nitrogen-vacancy centers [17], superconducting flux qubits [18], and the polarization state of a photon [19]. We denote these two levels as $|0\rangle, |1\rangle$, which are also referred to as the computational basis states. Thus the state of an $N$-qubit quantum computer at any instant is the state of $N$ qubits, which, classically, is a complex vector of length $2^N$.

A quantum computer, like a classical computer, must be initialized in a well-defined, easy to prepare state, e.g. the product state $|\psi_0\rangle = |0\rangle^{\otimes N}$. The computation is then nothing more than a unitary operation, i.e. $|\psi_f\rangle = U|\psi_0\rangle$, where $|\psi_f\rangle$ is the output quantum state of the computation, which is then measured to obtain the result. Typically this computation will be performed several times to collect statistics, due to the probabilistic nature of quantum mechanics. Of course, constructing a $U$ to perform a desired computation is a non-trivial task; the field of quantum algorithms is devoted to finding such unitaries!

This picture can be simplified even more, since any desired $U$ can in fact be built out of local unitary operators, acting on just one or two qubits at a time. In this way, $U$ is decomposed into a sequence of unitary operations, $U = \prod_i U_i$, where each $U_i$ acts non-trivially on at most two qubits at a time. Moreover, to construct any $U$ to arbitrary precision, one only requires that the $U_i$ are drawn from a restricted set of `universal quantum gates', that is, a finite number of one- and two-qubit unitary operators [20] (an example of a universal set of quantum gates is the single-qubit Hadamard gate, for creating superposition, a rotation by $\pi/4$ about the computational basis axis, and a two-qubit entangling gate, CNOT). Note, a universal set of quantum gates is also a universal set of classical gates (a universal set of classical gates being e.g. AND, OR, NOT); thus quantum computation generalizes classical computation. The converse is of course not true. The terminology used to describe this framework, the `quantum gate model', should now be clear: any quantum computation is defined by a sequence of quantum gates, drawn from a small set of one- and two-qubit unitary operators.
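As a concrete (if tiny) illustration of the gate picture above, the following minimal numpy sketch, not taken from the thesis, builds the standard Hadamard and CNOT matrices and applies them to $|00\rangle$ to prepare a Bell state; the register size and variable names are illustrative choices only.

```python
import numpy as np

# Standard single-qubit Hadamard and two-qubit CNOT gates (computational basis).
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
I2 = np.eye(2)

# Initialize |psi_0> = |00> and apply U = CNOT (H (x) I), a two-gate circuit.
psi0 = np.zeros(4)
psi0[0] = 1.0
U = CNOT @ np.kron(H, I2)
psi_f = U @ psi0

# Measurement statistics in the computational basis: |<x|psi_f>|^2.
for x, amp in zip(["00", "01", "10", "11"], psi_f):
    print(f"P({x}) = {abs(amp)**2:.3f}")
# P(00) = P(11) = 0.5, P(01) = P(10) = 0, i.e. the Bell state (|00>+|11>)/sqrt(2).
```

The matrix product applied here is exactly the decomposition $U = \prod_i U_i$ referred to above, with each factor acting non-trivially on at most two qubits.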
1.4 Challenges of the Age

Advances in quantum hardware may mean we soon reach the stage where quantum devices exist which can perform certain tasks that are impossible on any classical supercomputer. This is a necessary threshold to pass and is often referred to as `quantum supremacy' [21]. However, there are still fundamental barriers which must be overcome, from both the theoretical and experimental sides, before achieving the end goal of large-scale universal quantum computation. Indeed, physical systems cannot be described as ideally as in the previous section, as there will inevitably be a coupling of the qubits in the device to the environment, causing entanglement between the system of interest and uncontrollable degrees of freedom. The resulting sub-system dynamics of the qubits in the quantum computer are therefore in general non-unitary and irreversible, which causes a loss of coherence (the hallmark property distinguishing a quantum system from a classical one) on some time-scale $\tau_r$. If $\tau_r$ is shorter than the time $\tau_c$ required for the computation to take place, then the actual final state of the quantum computer will be far from the desired one. In fact, one will in general end up with a mixed state, that is, a classical probabilistic distribution of quantum states. This is addressed by the DiVincenzo criteria, a set of general requirements typically thought of as necessary for the successful implementation of a quantum computer [22], one of which calls for `long' coherence times, i.e. $\tau_r \gg \tau_c$.

In the early days of quantum computation (and in fact still today, though we are now more optimistic) it was not at all clear whether one could ever construct a quantum device robust enough to errors and capable of preserving coherence for the times required to perform non-trivial computations [23]. For example, using Shor's algorithm on a quantum computer to factor an integer large enough that it is impossible to do so classically would require hundreds of thousands of quantum gates. Therefore, if each gate operating time is around 50 ns, as is possible in current superconducting qubit technology [24], exceptionally long coherence times are required, around 100 ms or longer, which is about 1000 times longer than the typical coherence times in these qubits. (Different qubit implementations may boast longer coherence times, but typically the associated gate time is also longer [25], and there can be additional challenges in scaling up the technology beyond the two-qubit testing phase.)

Moreover, for a quantum circuit of $M$ gates, even if each gate operates to precision $1-\epsilon$ (the precision of a gate can be defined as the fidelity between the intended state and the actual state after the gate application), the accuracy over the full quantum circuit, $(1-\epsilon)^M$, decreases exponentially in $M$, seemingly negating any promised exponential algorithmic speed-up, and thus making non-error-corrected quantum computing wholly implausible. Note, the same argument can, of course, be applied to classical computing. However, there are classical error correcting codes which can be easily implemented to avoid this problem (modern digital computers are in fact so reliable that these codes are not even typically required). As is shown in the next section, the situation is more complex in the quantum case.
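To put rough numbers on the $(1-\epsilon)^M$ scaling above, the short calculation below (not from the thesis; the gate counts and error rate are illustrative assumptions) shows how quickly circuit-level fidelity collapses without error correction.

```python
import numpy as np

eps = 1e-3                             # assumed per-gate error (gate fidelity 1 - eps)
for M in [1_000, 10_000, 100_000]:     # illustrative gate counts
    fidelity = (1 - eps) ** M          # accuracy of the whole circuit
    print(f"M = {M:>7d} gates -> circuit fidelity ~ {fidelity:.2e}")
# With eps = 1e-3, a 100,000-gate circuit retains a fidelity of only ~4e-44,
# i.e. essentially zero probability of obtaining the intended output state.
```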
1.5 Fighting Decoherence

For these reasons, dissipation, noise, and the associated decoherence have typically been thought of as the nemesis holding back experimental progress in the field of quantum computation. One may think to use similar techniques as in classical computer systems to mitigate such problems and protect quantum data, such as redundancy, where the computing state is essentially copied many times, so that (for example) by majority voting one can recover any lost information, assuming the noise level is below some threshold. For three major reasons, however, this is not possible. Firstly, due to the no-cloning theorem, it is impossible to copy an arbitrary unknown quantum state (that is, there is no universal unitary operator which can copy an arbitrary quantum state, i.e., no $U$ such that $U|\psi, 0\rangle = |\psi, \psi\rangle$ for all $|\psi\rangle$). Secondly, due to the destructive nature of quantum measurements, one cannot easily ascertain whether an error has occurred whilst still preserving the quantum state that may be required for subsequent computations. Lastly, errors in quantum systems are continuous in nature (e.g. adding a random local phase), not discrete.

As a result, many new theoretical techniques have been developed to help preserve coherence and combat noise in quantum systems. For example, as early as 1995 (again, by Shor) basic quantum error correcting schemes were introduced [26]. These, though inspired by classical error correction, work in a somewhat different manner to overcome the difficulties mentioned above. A quantum state can be distributed over many qubits (not copied), and thanks to the linearity of quantum mechanics, one can detect, and hence correct, arbitrary single-qubit errors. For example, a quantum state $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ can be encoded, not in a single qubit but in many (for example, 9 in Shor's code), as $|\bar\psi\rangle = \alpha|\bar 0\rangle + \beta|\bar 1\rangle$, where the overbar represents a `logical qubit'; i.e. the states $|\bar 0\rangle, |\bar 1\rangle$ actually consist of several qubits constructed in clever ways so that errors can be detected by projective measurements. To date, there are many quantum error correction schemes, which can be used to correct against different types of error, using differing amounts of additional resources [27-30].

Other `active' forms of error correction also exist, whereby well-defined periodic pulses of a control Hamiltonian can be applied to a system, with the result of reversing some of the effects of decoherence, typically falling under the term dynamical decoupling, or `bang-bang' protocols [31, 32]. Alongside these developments, a deeper understanding of the noise afflicting quantum systems allowed for more passive protection schemes, such as the use of decoherence-free subspaces (or more generally, noiseless subsystems), which utilize certain symmetries in the way the noise itself acts, resulting in naturally noise-protected regions of the Hilbert space [33-36].

These techniques all have different ranges of applicability and are often more suited to one technology over another. Though they theoretically demonstrate great promise, and will most likely be a requisite component of future quantum devices, there are today still fundamental difficulties in implementing them due to the extra constraints they put on the experimental system.
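The sketch below illustrates the basic error-correction logic described above on the simplest possible example, the 3-qubit bit-flip repetition code (a toy warm-up rather than the 9-qubit Shor code cited in the text): the state is distributed over three qubits, and parity-check (stabilizer) measurements reveal which qubit flipped without disturbing the encoded amplitudes. All names and parameters are illustrative.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# Encode a|0> + b|1>  ->  a|000> + b|111>  (distributed, not copied).
a, b = 0.6, 0.8
encoded = np.zeros(8, dtype=complex)
encoded[0b000], encoded[0b111] = a, b

# Introduce a bit-flip error on one qubit (unknown to the decoder).
error_qubit = 1
ops = [I2, I2, I2]; ops[error_qubit] = X
corrupted = kron3(*ops) @ encoded

# Parity checks Z1Z2 and Z2Z3: the corrupted state is an eigenstate of each,
# so the eigenvalues (+-1) can be read off as expectation values (the syndrome).
ZZ12 = kron3(Z, Z, I2)
ZZ23 = kron3(I2, Z, Z)
s1 = int(round(np.real(corrupted.conj() @ ZZ12 @ corrupted)))
s2 = int(round(np.real(corrupted.conj() @ ZZ23 @ corrupted)))

# Syndrome table for single bit-flips: which qubit (if any) to correct.
syndrome_to_fix = {(1, 1): None, (-1, 1): 0, (-1, -1): 1, (1, -1): 2}
fix = syndrome_to_fix[(s1, s2)]
if fix is not None:
    ops = [I2, I2, I2]; ops[fix] = X
    corrupted = kron3(*ops) @ corrupted

print("syndrome:", (s1, s2), "-> corrected qubit", fix)
print("recovered encoded state?", np.allclose(corrupted, encoded))
```

Note that the syndrome depends only on parities, never on the amplitudes $a, b$, which is why the encoded superposition survives the diagnosis.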
1.6 Dissipative Technologies

It is therefore a remarkable realization, given the pursuit of highly coherent quantum systems, that noise itself can be harnessed and used as a resource for performing quantum information processing tasks typically associated with unitary dynamics. The general idea and motivation underlying this framework is that one can in fact exploit the unique features of non-unitary dynamics to process information, or prepare quantum states, in a manner which has an inherent robustness to it, and which therefore may actually offer advantages over `traditional' unitary-based computation. In particular, dissipative processes typically have associated steady states; that is, no matter which state the system is initialized in, in the `long' time limit the system will evolve (arbitrarily close) to a steady state. If one can engineer the dissipative process so that the steady states are resource states (e.g. entangled states), or even encode computations themselves, then the dissipation will naturally drive the system towards the desired output. Since these processes happen somewhat indiscriminately, there is a built-in robustness to errors in preparing the initial state, or even to perturbations which occur during the system's evolution. In this context the environment itself induces a type of Zeno effect, which continuously projects the system back into the desired code space [37]. This is somewhat different from the case of unitary dynamics, where errors accumulate with the run time (e.g. the number of quantum gates applied), eventually degrading the system beyond repair.

The earliest insights into the potential use of noisy processes came in 1999 and 2000 from Refs. [38, 39], which demonstrated that atoms inside a lossy cavity can be entangled, actively utilizing the cavity decay. In particular, via an external laser field driving transitions from ground to excited states in the atoms, which are themselves each coupled to the cavity modes via a Jaynes-Cummings type interaction (and hence indirectly coupled to each other), the detection of a photon outside of the cavity indicates whether or not the atoms have become entangled.

After this, it was shown that any pure state can, in principle, be prepared using quantum dissipative processes, by designing the noise so that the desired state is the unique steady state of the dissipative process [40]. Following these results, the full power of dissipative systems started to become understood, being used to enact quantum simulations [41, 42], and even universal quantum computation [43, 44], thus seemingly defying some of the DiVincenzo criteria, and meaning that quantum computation may be realizable under less stringent conditions than previously thought.

1.7 Contributions of this Thesis

In this thesis the ideas and techniques mentioned above are built upon and extended, providing yet more applications and examples where dissipation can be utilized as a true resource in the context of quantum computation and information processing. Most of the work contained in this thesis can be found in the corresponding cited publications; however, it also includes some unpublished results.

After briefly reviewing some of the relevant theory in Chapter 2, we introduce in Chapter 3 a perturbation theory technique, originally described in Refs. [45-47], where the noise is assumed to be the dominant process, and a control Hamiltonian the perturbation. In this strongly dissipative regime, it is shown that one can simulate arbitrary Markovian quantum processes, and in fact perform universal quantum computation, in a potentially scalable manner.
One aspect of this technique which sets it apart from others is that it does not rely on uncontrollable approximations (such as the Born-Markov approximation) and therefore results in a rigorous error bound on the simulation of open quantum systems, which moreover decreases with increasing noise strength.

After this, in Chapter 4 (Ref. [48]), we describe a purely dissipative way of classifying quantum data. Inspired by certain classical recurrent neural networks, such as the Hopfield model, we demonstrate how to construct a network of qubits which interact entirely according to a strongly dissipative Markov process. The time evolution of this network naturally partitions the Hilbert space into decoherence-free subspaces (DFSs), which are used to classify quantum data. We demonstrate this can be used to prepare entangled states, and provide examples of how it can be used in quantum machine learning.

In Chapter 5, following Ref. [49], a novel way to reduce the rate at which information is lost from a quantum system is proposed. This technique actively uses noise which has a non-Markovian character to reduce the rate at which errors accumulate over a noisy evolution; that is, by introducing an additional, but tailored, type of noise, one can observe a reduction in the rate at which coherence and entanglement are lost. This, we will show, actively relies on the non-Markovian nature of the added noise, by exploiting the associated memory effects.

Before some concluding remarks in Chapter 7, a more experimental approach is taken in Chapter 6 (from Refs. [50, 51]). Here an open system treatment is applied to the paradigm of quantum annealing, a technique inspired by simulated annealing and the adiabatic theorem of quantum mechanics. Thermalization typically degrades the performance of a quantum annealer, since it drives transitions out of the ground state, a process typically most dominant in the region of the minimum energy gap between the ground and first excited state. However, as is demonstrated by a phenomenological model, and shown experimentally, one can exploit this same thermalization mechanism to re-populate the ground state at a later point during the anneal, provided one has the ability to make minor adjustments to the way in which the system is evolved (the `annealing schedule'). This can result in several orders of magnitude improvement in performance.

Chapter 2
Theory and Overview: Open Quantum Systems

We start by discussing the relevant theory and key concepts for the work presented in the subsequent chapters. The goal is to provide a concise, but nevertheless consistent, framework from which to work. We will introduce some ideas and notions pertaining to the theory of open quantum systems, and touch upon information theory. In particular, we hope to demonstrate which mechanisms can lead to a loss of coherence and information in a system coupled to an environment, and how to describe them mathematically. Most of the material covered which is not explicitly referenced can be found in any graduate level text book, e.g. Ref. [52].

2.1 Basics and Notation

We begin by introducing the notion of a `closed' quantum system, the relevant physics of which is in principle captured entirely by a `Hamiltonian' $H_S$. Of course, finding an appropriate Hamiltonian to describe a physical system may be a non-trivial exercise in itself.
The Hamiltonian describes the relevant degrees of freedom of the system (position, electronic structure, spin, etc.). Given such a Hamiltonian, the dynamics are then captured with high precision (ignoring relativistic effects) by the Schrödinger equation:

$$i\hbar \frac{d}{dt}|\psi(t)\rangle = H_S(t)|\psi(t)\rangle. \qquad (2.1)$$

In this equation, $|\psi(t)\rangle$ describes the state of the system at time $t$, $i$ is the imaginary unit, and $\hbar$ is the reduced Planck constant. Mathematically the state is a vector in a vector space over the field of complex numbers.

In quantum physics, notions of `overlap' between states and `length' become important, and as such we endow this vector space with an inner product, upgrading it to an inner product space. In fact, one also requires that the space is complete, making it a Hilbert space (in finite dimensions these complex inner product spaces are Hilbert spaces; the distinction becomes more apparent in infinite dimensions, where issues of convergence can become problematic [52]). We will typically denote a Hilbert space by $\mathcal{H}$.

This inner product is represented conveniently via `bra-ket' notation, where for a ket $|\psi\rangle \in \mathcal{H}$, and a bra $\langle\phi|$ which is an element of the dual space, the inner product between vectors $|\phi\rangle, |\psi\rangle \in \mathcal{H}$ is given by $\langle\phi|\psi\rangle = \sum_{i=1}^{d} \phi_i^* \psi_i$, where $d = \dim\mathcal{H}$ (the dimension of $\mathcal{H} = \mathbb{C}^d$), and $\phi_i, \psi_i \in \mathbb{C}$ are the entries of the respective vectors. In quantum systems of infinite dimension, the sum is replaced by an integral. Throughout this thesis we will only be considering systems of finite dimension, and thus will not mention infinite-dimensional systems henceforth.

Within this framework, the Hamiltonian (or indeed any `observable') is a Hermitian operator over the Hilbert space, where the Hermitian conjugate is with respect to the inner product just described. (We will not use the `hat' convention $\hat H$ above operators where the meaning is clear.) Defining a basis of $\mathcal{H} = \mathrm{span}\{|i\rangle\}_{i=1}^{d}$, with $\langle i|j\rangle = \delta_{ij}$, allows one to describe a quantum state as $|\psi\rangle = \sum_{i=1}^{d} c_i |i\rangle$, where the $c_i \in \mathbb{C}$ are known as the `probability amplitudes', and square-sum to 1. That is, $\|\psi\|^2 := \langle\psi|\psi\rangle = 1$, since physically $|c_i|^2$ represents the probability of finding the system in state $|i\rangle$ upon `measurement' in this basis. Throughout, when we talk about measurement we will mean a projective measurement, precisely of the form just mentioned. A more thorough description of measurements can be found in most graduate level text books (e.g. Ref. [52]). To distinguish them from the more general `mixed' states introduced later, we will often call states as described above `pure' states.

2.2 Interaction with an Environment

The general solution to Eq. (2.1) is given by a time-ordered exponential,

$$U(t) := \mathcal{T}\exp\left(-\frac{i}{\hbar}\int_0^t d\tau\, H_S(\tau)\right) = \sum_{n=0}^{\infty}\left(\frac{-i}{\hbar}\right)^n \int_0^t dt_1 \int_0^{t_1} dt_2 \cdots \int_0^{t_{n-1}} dt_n\, H_S(t_1)\cdots H_S(t_n), \qquad (2.2)$$

known as the `evolution operator', which is by construction unitary, $UU^\dagger = U^\dagger U = I$. Note, if $H_S$ is time-independent, this becomes the regular matrix exponential $U(t) = e^{-\frac{it}{\hbar}H_S}$. If the state of the system is initially some $|\psi(0)\rangle \in \mathcal{H}$, the time-evolved state is given by $|\psi(t)\rangle = U(t)|\psi(0)\rangle \in \mathcal{H}$.
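For the time-independent case just mentioned, the propagator is simply a matrix exponential, which is easy to evaluate numerically for small systems. The sketch below (illustrative only, with $\hbar = 1$ and an arbitrarily chosen single-qubit Hamiltonian) evolves a qubit under $U(t) = e^{-iHt}$ and reproduces the expected Rabi oscillation of the excited-state population.

```python
import numpy as np
from scipy.linalg import expm

# hbar = 1; drive a single qubit with H = (Omega/2) * sigma_x (Rabi problem).
Omega = 2 * np.pi          # illustrative Rabi frequency
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
H = 0.5 * Omega * sigma_x

psi0 = np.array([1, 0], dtype=complex)   # start in |0>

for t in np.linspace(0, 1, 5):
    U = expm(-1j * H * t)                # U(t) = exp(-i H t / hbar)
    psi_t = U @ psi0
    p1 = abs(psi_t[1]) ** 2              # population of |1>
    # Analytic result for comparison: sin^2(Omega t / 2).
    print(f"t = {t:.2f}  P(|1>) = {p1:.4f}  (expected {np.sin(Omega*t/2)**2:.4f})")
```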
If the system of interest is non-negligibly coupled to some external degrees of freedom, ones which we as an experimenter have no control over, this closed-system description does not suffice, and must be replaced by an `open' system description, where we take into account this interaction with the `environment'. The universe is separated into two pieces: the part we as an experimenter have control over and can manipulate (the system), and everything else (the environment). In fact, any quantum system is an open system, since one can never completely decouple a system from the environment. The extent to which a system can be accurately described as a closed system depends entirely on the type of experiment one is performing, and the time-scales involved.

At its most basic, an open system treatment is still entirely described by an equation of the form of Eq. (2.1), where the Hamiltonian is replaced by a more general Hamiltonian, typically written $H = H_S + H_B + H_I$, containing three terms which act on the system, the environment (or `bath'), and both (interactions). The Hilbert space is now a tensor product of the system and environment, i.e. $\mathcal{H} = \mathcal{H}_S \otimes \mathcal{H}_B$, and similarly the Hamiltonian operators can be written in the form

$$H_S = h_s \otimes I_b, \qquad H_B = I_s \otimes h_b, \qquad H_I = \sum_k A_k \otimes B_k, \qquad (2.3)$$

where $h_s, h_b, A_k, B_k$ are Hermitian operators over their respective Hilbert spaces (possibly depending on time), and $I_{s,b}$ is the identity operator over $\mathcal{H}_{S,B}$ respectively.

Since, as an experimenter, we only know and care about the state of the system, we would like a way to describe this state on its own, without regard to the state of the environment. To do this, we must first generalize the notion of a quantum state, allowing for classical probabilistic mixtures of pure states, which are very typical in the open system setting. Such an object will be known as a `mixed state' (or, for convenience later, simply a state) and is a linear operator over the Hilbert space, typically denoted $\rho \in \mathcal{L}(\mathcal{H})$, and referred to as the density operator (or matrix). (Note, the vector space of (bounded) linear operators over $\mathcal{H}$, $\mathcal{L}(\mathcal{H})$, is also a Hilbert space, of dimension $d^2$ over $\mathbb{C}$, once equipped with the Hilbert-Schmidt inner product $\langle A, B\rangle = \mathrm{Tr}\,A^\dagger B$, where $A, B \in \mathcal{L}(\mathcal{H})$.) The density operator has several important properties, namely it is Hermitian, $\rho = \rho^\dagger$, positive semi-definite, $\rho \ge 0$, and has unit trace, $\mathrm{Tr}\,\rho = 1$. These conditions arise directly from the interpretation of $\rho$ as a probability distribution. As such, $\rho$ can always be diagonalized to the form $\rho = \sum_{i=1}^{d} p_i |i\rangle\langle i|$, from which we can interpret it as a statistical mixture of pure states $|i\rangle$, each occurring with probability $p_i$. The time evolution of a density operator is described by the von Neumann equation (i.e. replacing the Schrödinger equation):

$$\frac{d}{dt}\rho(t) = -\frac{i}{\hbar}[H(t), \rho(t)]. \qquad (2.4)$$

Within this construction we can now refer to the state of the system alone as $\rho_S$, which in general is not a pure state. To arrive at $\rho_S$ from the system-environment picture, one needs to take the partial trace, $\rho_S = \mathrm{Tr}_B[\rho] := \sum_i (I_s \otimes \langle i|)\,\rho\,(I_s \otimes |i\rangle)$, where $\rho$ is the entire state of the system and environment together, and the $|i\rangle$ form a basis over $\mathcal{H}_B$. (The bath is often in fact infinite dimensional, in which case the partial trace would require an integral representation; we write it as a sum here for demonstrative purposes only.) The partial trace operation is used since it is the unique operation which preserves expectation values when considering system observables $O$ (i.e. Hermitian operators over $\mathcal{H}_S$) extended over the environment, i.e. $O \otimes I_B$. That is, $\langle O\rangle := \mathrm{Tr}[\rho_S O] = \mathrm{Tr}[\rho\,(O \otimes I_B)]$, where in both cases Tr means the trace over the full (respective) spaces. In general, this tracing-out operation will result in non-unitary dynamics of the system (on $\mathcal{H}_S$).
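A minimal numerical illustration of the partial trace (not from the thesis; the Bell-state example and variable names are mine): tracing the second qubit out of a maximally entangled pure state leaves the first qubit in the maximally mixed state, which is exactly the loss of coherence discussed next.

```python
import numpy as np

# Two-qubit Bell state |Phi+> = (|00> + |11>)/sqrt(2), as a density operator.
phi = np.zeros(4, dtype=complex)
phi[0b00] = phi[0b11] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())                 # pure state: Tr[rho^2] = 1

# Partial trace over the environment (second qubit):
# index rho as [s, b, s', b'] and sum over b = b'.
rho_S = np.einsum('ibjb->ij', rho.reshape(2, 2, 2, 2))

print("rho_S =\n", rho_S.real)                  # -> I/2, the maximally mixed state
print("purity Tr[rho_S^2] =", np.trace(rho_S @ rho_S).real)   # 0.5 < 1: mixed
```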
Indeed, tracing out the environment does not typically leave the system in a pure state; it becomes mixed. This general idea is responsible for the loss of quantum coherence, which typically tends to make the system behave more `classically'. An extreme case of this is the complete loss of coherence in the particular basis of interest, leaving the density operator completely diagonal in this basis: a classical probabilistic mixture of states.

An important, analytically solvable model in this context is the spin-boson model, describing the coupling of a two-level quantum system (`qubit') to an infinite array of quantum harmonic oscillators at non-zero temperature. In this model, populations (i.e. diagonal elements of $\rho$) are left unaffected, whereas coherences (off-diagonal elements) decay exponentially on time-scales related to $1/k_B T$, where $T$ is the temperature of the environment, and $k_B$ is the Boltzmann constant. Despite the success of this model, general solutions describing a system's evolution in contact with an environment are rare, and typically one needs to make many approximations. There are many other types of dissipation which can occur in such systems, and finding accurate mathematical descriptions of these remains a central research focus today. In the next section we will introduce some core mathematical notions necessary for representing the dynamics (i.e. time evolution) of quantum systems in general. After this we will finish this chapter by discussing perhaps the most useful technique available to theoretical open-system physicists, the master equation approach.

2.3 Quantum Maps and Dynamics

Given a quantum system, one can define the set of all quantum states, $S := \{\rho \in \mathcal{L}(\mathcal{H}) : \rho = \rho^\dagger \ge 0,\ \mathrm{Tr}\,\rho = 1\}$. The boundary of this set consists of all quantum states having at least one zero eigenvalue [53]. Indeed, the pure states are a subset of the entire boundary, and are precisely those satisfying $\rho^2 = \rho$ (they are rank-1 projectors).

If a system evolves in time (e.g. as represented by some dynamical equation, such as Eq. (2.4)), then there exists an evolution operator $\mathcal{E}_t$ which describes this dynamics. Writing this down explicitly, as mentioned above, may be extremely challenging. In this case, an initial state $\rho_0$ will be mapped to $\rho(t) = \mathcal{E}_t\rho_0$ (where $t$ is the time of the evolution). One can, however, ask a more general question: what type of maps are permitted to act on $S$, consistent with the laws of quantum mechanics?

The name given to such maps is `quantum maps' or `quantum operations', and mathematically they are completely positive and trace preserving linear maps (`CPTP' maps). A positive map $\Phi$ maps positive operators to positive operators: $\Phi(\rho) \ge 0$ if $\rho \ge 0$. Thus, the `positive' and `trace preserving' parts simply guarantee that density operators are mapped to density operators. Complete positivity, related to the discussion of the partial trace above, arises when considering the system of interest as a subsystem of a larger system. That is, a quantum map $\Phi$ must be positive, but so must $\Phi \otimes \mathrm{Id}_k$ for all $k \ge 1$ (where Id is the identity map, $\mathrm{Id}(X) = X$ for all $X$). It is known that any quantum map can be written in `Kraus form' (or `operator sum representation') as $\Phi(X) = \sum_i K_i X K_i^\dagger$, where the $K_i \in \mathcal{L}(\mathcal{H})$ are known as Kraus operators, satisfying $\sum_i K_i^\dagger K_i = I$ [52]. Note that a unitary map is the case where there is just one Kraus operator; when there is more than one, it is called a non-unitary map or evolution.
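As a concrete sketch of a CPTP map in Kraus form (illustrative, not from the thesis), the single-qubit amplitude-damping channel below uses the standard two Kraus operators; the code checks the completeness relation and shows both population decay and loss of coherence.

```python
import numpy as np

gamma = 0.3   # illustrative damping probability per application of the channel

# Standard Kraus operators for single-qubit amplitude damping (|1> -> |0>).
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
kraus = [K0, K1]

# Completeness: sum_i K_i^dagger K_i = I  (trace preservation).
print(np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2)))   # True

def channel(rho):
    """Apply Phi(rho) = sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

# Start from |+><+|, which has both excited population and coherence.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
for n in range(4):
    print(f"after {n} uses: P(|1>) = {rho[1,1].real:.3f}, |coherence| = {abs(rho[0,1]):.3f}")
    rho = channel(rho)
```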
Given a quantum map $\Phi : S \to S$, since $S$ is compact and convex (by construction, any convex combination of states $\rho_i$ is also a quantum state, i.e. $\sum_i p_i \rho_i \in S$ for probabilities $p_i$; compactness comes from viewing the vector space $\mathcal{L}(\mathcal{H})$ as a real vector space of dimension $2d^2$ and using the Heine-Borel theorem [54]), Brouwer's fixed point theorem guarantees the existence of a fixed point, or steady state, $\bar\rho \in S$ such that $\Phi(\bar\rho) = \bar\rho$. We refer to the set of steady states as $\bar S_\Phi := \{\bar\rho \in S : \Phi(\bar\rho) = \bar\rho\}$, and when the meaning is clear, we will drop the dependence on $\Phi$ and just write $\bar S$. This set plays a central role in the theory of open systems, as it typically characterizes the long-time dynamics of a system, and allows us to understand the mechanisms leading to dissipation and decoherence, and the time-scales upon which they occur.

2.4 Approximations and Master Equations

Though it is typically not possible to obtain a concise description of the evolution of the system alone (i.e. upon tracing out the environment), certain approximations can be made which not only capture the relevant physics associated with dissipation, but also allow one to write down a dynamical `master equation' explicitly. The goal of the next part is not to derive master equations from scratch (indeed, there are books devoted to this), but to provide the basic ideas behind such derivations, and the physics which can be understood from them (e.g., relevant time-scales). In particular, we will outline the steps in what some may call the `conventional' Markovian master equation derivation, where, from first principles, we will arrive at a dynamical equation for the system alone in the interaction picture, after invoking the Born and Markov approximations, as well as the rotating wave approximation. There are many other derivations [55-58] involving different types of approximation which work in different limits; however, the final basic form will be the same: the Lindblad form. We will mostly follow the weak-coupling derivation of Ref. [59]. We finish this section by briefly discussing the non-Markovian regime.

2.4.1 The Markovian regime and the Lindblad master equation

As discussed above, assuming a general system-environment coupling, described by a (possibly time-dependent) Hamiltonian $H = H_S + H_B + H_I$ over $\mathcal{H} = \mathcal{H}_S \otimes \mathcal{H}_B$ (see Eq. (2.3)), the time evolution of the entire system and environment is given by a unitary operator $U(t)$ of the form of Eq. (2.2). Typically one moves to the interaction picture, writing $U(t) = U_0(t)\tilde U(t)$, where $U_0$ solves the Schrödinger equation $i\hbar\,\frac{dU_0(t)}{dt} = H_0(t) U_0(t)$, with $H_0(t) := H_S(t) + H_B(t)$. It is then easy to show that

$$i\hbar \frac{d}{dt}\tilde U(t) = \tilde H_I(t)\tilde U(t), \qquad \tilde U(0) = I_{SB}, \qquad (2.5)$$

with $\tilde H_I(t) = U_0^\dagger(t) H_I(t) U_0(t) = \sum_k \tilde A_k(t) \otimes \tilde B_k(t)$, where $\tilde A_k, \tilde B_k$ are the rotated versions of $A_k, B_k$ respectively (i.e. with respect to $u_{s,b}$, which solve the Schrödinger equations for $h_{s,b}$ from Eq. (2.3)). (Here we use that by construction $[H_S(t), H_B(t')] = 0$ for all $t, t' > 0$, so $U_0 = U_S U_B = U_B U_S = u_s \otimes u_b$, suppressing the explicit time argument for convenience, where $U_{S,B}$ and $u_{s,b}$ solve the Schrödinger equation for $H_{S,B}$ and $h_{s,b}$ respectively; see Eq. (2.3).) In this interaction picture, then, $\tilde\rho(t) := \tilde U(t)\rho(0)\tilde U^\dagger(t)$ evolves according to a von Neumann equation with Hamiltonian $\tilde H_I$,

$$i\hbar \frac{d}{dt}\tilde\rho(t) = [\tilde H_I(t), \tilde\rho(t)], \qquad \tilde\rho(0) = \rho(0). \qquad (2.6)$$

We would like to find the state of the system by tracing out the environmental degrees of freedom, obtaining a time evolution of the system alone, $\tilde\rho_S(t) = \mathrm{Tr}_B[\tilde{\mathcal{U}}(t)\rho(0)]$, where $\tilde{\mathcal{U}}(t)X = \tilde U(t) X \tilde U^\dagger(t)$ is the `superoperator' version of the unitary operator. (We will often refer to superoperators when referring to any linear operator acting on the space of linear operators over the Hilbert space, i.e. an element of $\mathcal{L}(\mathcal{L}(\mathcal{H}))$; that is, all quantum maps, and, as will be seen below, generators of quantum maps.)
We may also be interested in the dynamical equation describing $\tilde\rho_S$ itself, which can be written as

$$\frac{d}{dt}\tilde\rho_S(t) = -\frac{i}{\hbar}\mathrm{Tr}_B[\tilde H_I(t), \tilde\rho(t)]. \qquad (2.7)$$

(Note that $\tilde\rho_S$ is now the system's density operator with respect to the rotating frame $u_s(t)$: since $\tilde U(t) = U_0^\dagger(t)U(t)$, and $U_0(t) = U_B(t)U_S(t) = u_s(t)\otimes u_b(t)$, we have $\tilde\rho_S(t) = \mathrm{Tr}_B[\tilde U(t)\rho(0)\tilde U^\dagger(t)] = u_s^\dagger(t)\,\mathrm{Tr}_B[U_B^\dagger(t)\rho(t)U_B(t)]\,u_s(t) = u_s^\dagger(t)\,\mathrm{Tr}_B[\rho(t)]\,u_s(t) = u_s^\dagger(t)\rho_S(t)u_s(t)$.) This is usually the starting point for the derivation of master equations (at least for `microscopic' derivations). Though different derivations may have validity in different regimes (e.g. weak versus singular coupling limits), the general form of the final result will be similar.

We begin by noting that Eq. (2.6) can be formally solved,

$$\tilde\rho(t) = \rho(0) - \frac{i}{\hbar}\int_0^t d\tau\,[\tilde H_I(\tau), \tilde\rho(\tau)], \qquad (2.8)$$

and inserted back into the von Neumann equation (where it is convenient to change the integration variable $\tau \to t - \tau$),

$$\frac{d}{dt}\tilde\rho(t) = -\frac{i}{\hbar}[\tilde H_I(t), \rho(0)] - \frac{1}{\hbar^2}\int_0^t d\tau\,[\tilde H_I(t), [\tilde H_I(t-\tau), \tilde\rho(t-\tau)]]. \qquad (2.9)$$

The first approximation one makes is known as the Born approximation, which, loosely speaking, assumes that the environment is `large' and unchanging compared to the `small' system, so that $\tilde\rho(t) \approx \tilde\rho_S(t)\otimes\rho_B$ (i.e., to the degree of approximation in all of our equations, the correlations between the system and environment are negligible). This weak-coupling approximation makes it possible to perform the partial trace operation in the above equations, upon resolving $\tilde H_I$ explicitly. In fact, one of the most important terms is the two-point bath correlation function, $B_{kj}(\tau) := \langle \tilde B_k(t)\tilde B_j(t-\tau)\rangle$, which is assumed, based on general physical arguments, to decay (exponentially) on some time-scale $\tau_B$.

The Markov approximation can be made if the time-scale $\tilde\tau$ on which $\tilde\rho_S$ evolves is much longer than this, $\tau_B \ll \tilde\tau$ (where e.g. $\tilde\tau \sim 1/\|\tilde H_I\|$). That is, the system evolves slowly compared to the correlations of the environment. The benefits of this assumption are that the upper limits in the integrals of Eq. (2.9) can be set to $\infty$ (assuming $t \gg \tau_B$), and crucially, the Markov approximation can be employed [to error $O((\tau_B/\tilde\tau)^2)$], allowing one to replace $\tilde\rho_S(t-\tau)$ by $\tilde\rho_S(t)$ (intuitively, on short time-scales, $O(\tau_B)$, the system state is relatively unchanged, so that $\tilde\rho_S(t-\tau) \approx \tilde\rho_S(t)$, while for larger $\tau > \tau_B$ the contribution to the integral is marginal due to the assumed exponential decay $|B_{kj}(\tau)| \sim e^{-\tau/\tau_B}$), leaving the integrand effectively a function only of the $\tilde A_k(\tau)$ and the $B_{kj}(\tau)$. Note that, by these approximations, dynamics on time-scales of the order $\tau_B$ will therefore not be resolved by the final master equation. Typically this process is referred to as `coarse-graining'.

The resulting equation, which is now local in time (we now see explicitly why the terminology `Markovian' is used: in Eq. (2.10) the evolution to the next time step depends only on the current system state, with no `memory' of previous states), can eventually, after writing the operators $A_k$ in the eigen-basis of $h_s$ (so that they now appear with an $\omega$ dependence), and making the rotating wave approximation (the specifics of this part of the derivation are cumbersome and beyond the scope of this overview, so we do not give any more detail here; see e.g. Ref. [59]), be written in the following form:

$$\frac{d}{dt}\tilde\rho_S(t) = (\mathcal{K}_{LS} + \mathcal{D})\,\tilde\rho_S(t) =: \mathcal{L}\tilde\rho_S(t), \qquad (2.10)$$

$$\mathcal{K}_{LS}X := -i\sum_\omega\sum_{k,j} S_{kj}(\omega)\,[\tilde A_k^\dagger(\omega)\tilde A_j(\omega), X], \qquad (2.11)$$

$$\mathcal{D}X := \sum_\omega\sum_{k,j}\gamma_{kj}(\omega)\left(\tilde A_j(\omega)\, X\, \tilde A_k^\dagger(\omega) - \frac{1}{2}\{\tilde A_k^\dagger(\omega)\tilde A_j(\omega), X\}\right), \qquad (2.12)$$
where $\gamma_{kj}(\omega) \ge 0$, and the $S_{kj}(\omega) \in \mathbb{R}$ are defined by the two-point bath correlation function (see e.g. Ref. [59]). The Bohr frequencies, $\omega$, are differences between the eigenvalues of $h_s$, and the $\gamma_{kj}(\omega)$ are to be thought of as transition rates between the energy levels $k, j$. This is the final form of the interaction picture master equation, and we see it is composed of two parts: a generator of unitary dynamics, $\mathcal{K}_{LS}$, and one of non-unitary (dissipative) dynamics, $\mathcal{D}$. The unitary part, also known as the Lamb shift, adds a correction to the system energy levels caused by the coupling with the environment.

Though many approximations have been made, this model in fact describes some very fundamental physics. In particular, assuming the state of the bath is the thermal (also known as Gibbs) state $\rho_B \propto \exp(-\beta h_b)$, it can be shown (via the `KMS condition' [60, 61]) that the detailed balance condition holds, $\gamma_{kj}(-\omega) = e^{-\beta\omega}\gamma_{jk}(\omega)$, and that the thermal state of the system, $\rho_S = \frac{1}{Z}\exp(-\beta h_s)$, is a steady state of Eq. (2.10) (where $Z = \mathrm{Tr}[\exp(-\beta h_s)]$). Generators of this type are known as Davies generators [62].

The associated time evolution operator generated by $\mathcal{L}$ is guaranteed by construction to be a genuine quantum map (i.e. a CPTP map). In fact, any generator of the form

$$\mathcal{L}\rho = -i[H, \rho] + \sum_k \left(L_k \rho L_k^\dagger - \frac{1}{2}\{L_k^\dagger L_k, \rho\}\right) \qquad (2.13)$$

is guaranteed to generate genuine quantum dynamics (a CPTP map). (Note, it is always possible to re-write the sum in Eq. (2.12) as a sum over a single index, i.e. to match the `diagonal' form of Eq. (2.13).) This is the most general form of a generator of Markovian quantum dynamics; that is, Eq. (2.13) is the most general generator of a quantum dynamical (one-parameter) semi-group, $\mathcal{E}_t := e^{t\mathcal{L}}$. (In the case where the $L_k$, $H$ are time-dependent, the evolution operator $\mathcal{E}_{t',t} := \mathcal{T}\exp\big(\int_t^{t'} d\tau\,\mathcal{L}(\tau)\big)$ is a CPTP map provided $t' \ge t$, with the CP-divisibility property $\mathcal{E}_{t'',t'}\,\mathcal{E}_{t',t} = \mathcal{E}_{t'',t}$ for $t'' \ge t' \ge t$ [59, 63, 64].) It is often referred to as a master equation in Lindblad form (or just, the Lindblad master equation) [65, 66]. Here the $L_k \in \mathcal{L}(\mathcal{H})$ are known as the Lindblad (or jump) operators, and $H = H^\dagger \in \mathcal{L}(\mathcal{H})$. $\mathcal{L} \in \mathcal{L}(\mathcal{L}(\mathcal{H}))$ itself may also be referred to as the `Lindbladian'. Processes of this type can be used to describe fundamental dissipative processes such as decoherence, dephasing, and thermalization (when we talk of decoherence, we typically mean dephasing in the eigen-basis one is interested in, e.g. the computational basis, thus preserving the diagonal elements of the density matrix but causing a loss of coherence in that basis; thermalization in this context is also known as amplitude damping), and provide a very convenient formalism with which to work. Eq. (2.13) is therefore often the starting point for modeling physical systems, or deriving certain types of dissipative dynamics, and this will be the typical approach we take throughout this work. In Appendix A we discuss numerical techniques pertaining to this master equation approach, which we also use throughout this work.

2.4.2 Spectral properties of the Lindblad master equation

The superoperator of Eq. (2.13) can be block diagonalized like any other linear operator; as such, we can write it in Jordan normal form. Following the presentation of Ref. [67], we have

$$\mathcal{L} = \sum_i \left(\lambda_i P_i + D_i\right), \qquad (2.14)$$

where the eigen-projectors $P_i$ ($\sum_i P_i = 1$) and eigen-nilpotents $D_i$ satisfy $P_i P_j = \delta_{i,j} P_i$ and $D_i P_j = P_j D_i = \delta_{i,j} D_i$ [with $D_i D_j = \delta_{i,j} D_i^2$]. Also, there is an integer $m_i \ge 0$ such that $D_i^{m_i} = 0$ [and $D_i^{m_i-1} \ne 0$, when $m_i > 0$].
This is so far completely general, but if indeed $\mathcal{L}$ is of the Lindblad form, we also have that $\mathrm{Re}\,\lambda_i \le 0$, and there is guaranteed to be at least one zero eigenvalue (with no eigen-nilpotent part) [54, 68, 69]. The zero-eigenvalue states span the `steady state space' (that is, the steady state vector space is the span of the set of steady states $\bar S$ introduced above), which is therefore of dimension at least 1. Assuming time-independence of $\mathcal{L}$, one can now write the exact form of the evolution (super)operator, $\mathcal{E}_t := e^{t\mathcal{L}}$,

$$\mathcal{E}_t = \sum_i \left(P_i + \sum_{k=1}^{m_i-1}\frac{t^k D_i^k}{k!}\right)e^{\lambda_i t}. \qquad (2.15)$$

From this very general equation, one can start to understand the dynamics of an open quantum system. If the non-zero eigenvalues have negative real parts (i.e., are not purely imaginary), the steady state manifold is attractive, and the evolution over infinite time brings any initial state to the steady state space. The approach to the steady state space is exponentially fast, and the time-scale upon which this happens is determined by the slowest decay rate, i.e. $\tau := 1/\min_{i>0}|\mathrm{Re}\,\lambda_i|$ (on time-scales $O(\tau)$, one is guaranteed to be `close' to a steady state). Since this happens, in general, indiscriminately, the final (infinite time) state is completely independent of the initial state; thus any (quantum) information contained in the initial state is lost to the environment. The process by which this occurs, which is non-unitary, is irreversible.
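The spectral picture of Eqs. (2.13)-(2.15) is easy to probe numerically by `vectorizing' the Lindbladian into an ordinary matrix (the approach discussed in Appendix A). The sketch below is an illustrative reconstruction, not the thesis's own code: it builds the superoperator for a single qubit undergoing amplitude damping, confirms that all eigenvalues have non-positive real part, extracts the steady state from the zero eigenvector, and reads off the slowest decay rate.

```python
import numpy as np

d = 2
I = np.eye(d, dtype=complex)
sz = np.diag([1, -1]).astype(complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # |0><1|: decay |1> -> |0>

omega, gamma = 1.0, 0.2                          # illustrative parameters
H = 0.5 * omega * sz
Ls = [np.sqrt(gamma) * sm]                       # single jump operator

# Row-major vectorization: vec(A X B) = (A kron B^T) vec(X), with vec(X) = X.reshape(-1).
def lindbladian_matrix(H, Ls):
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        L += np.kron(Lk, Lk.conj())
        L -= 0.5 * (np.kron(LdL, I) + np.kron(I, LdL.T))
    return L

Lmat = lindbladian_matrix(H, Ls)
evals, evecs = np.linalg.eig(Lmat)
print("Re(lambda_i) <= 0 for all i:", np.all(evals.real < 1e-10))

# Steady state: eigenvector with eigenvalue ~ 0, reshaped and trace-normalized.
i0 = np.argmin(np.abs(evals))
rho_ss = evecs[:, i0].reshape(d, d)
rho_ss /= np.trace(rho_ss)
print("steady state:\n", np.round(rho_ss.real, 6))           # -> |0><0| for pure decay

# Slowest decay rate sets the relaxation time-scale tau = 1/min|Re lambda_i| (i > 0).
nonzero = np.abs(evals.real)[np.abs(evals) > 1e-10]
print("relaxation time-scale tau =", 1.0 / nonzero.min())     # 2/gamma here
```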
One approach to simplify the description is to make the equation time-local. This results in the time-convolutionless (TCL) master equation, where one introduces a backwards propagator so that $\rho(s) = G(t,s)\rho(t)$. Of course, the resulting equation, although now time-local, has shifted the complication to integrating the non-trivial superoperator $G$. The main benefit of the TCL approach, however, is that it allows one to perturbatively expand the resulting master equation in powers of the system-environment coupling strength $\alpha$, where low-degree expansions can be numerically computed, e.g.
$$\frac{d}{dt}\mathcal{P}\rho(t) = \sum_{n>0} \alpha^n\, \mathcal{K}_n(t)\,\mathcal{P}\rho(t) \qquad (2.17)$$
where the $\mathcal{K}_n(t)$ can (in principle) be computed (there is often an inhomogeneous term on the right-hand side of Eq. (2.17), of the form $\mathcal{I}(t)\mathcal{Q}\rho(0)$, which we have absorbed into the $\mathcal{K}_n$). For example, the so-called TCL-2 equation (the TCL master equation expanded to second order in $\alpha$) coincides with the equation obtained by making the Born-Markov approximations (before sending the upper integral limit to infinity). Higher orders of the TCL equation in general lead to non-Markovian dynamics.
Moving beyond these essentially exact descriptions, various approximations can be made to arrive at a non-Markovian (integro-differential) master equation of the general form
$$\frac{d}{dt}\rho(t) = \int_0^t ds\, \mathcal{K}(t-s)\,\rho(s), \qquad (2.18)$$
where now $\rho$ is the system state. Typically such equations are not guaranteed to generate genuine quantum dynamics (first demonstrated in Ref. [72]), and their range of applicability is therefore limited. For example, an early approach, Ref. [73], investigated stochastic Hamiltonians of the form $H = \sum_k \gamma_k(t)\, h_k$ ($h_k = h_k^\dagger$, $\gamma_k \in \mathbb{R}$), which, assuming certain conditions on the noise variables $\gamma_k$, leads to an equation of motion of the form $\dot\rho(t) = \sum_k \mathcal{L}_k \int_0^t ds\, K(t,s)\rho(s)$, where $\mathcal{L}_k$ is a Lindbladian with Lindblad operator $h_k$, and $K(t,s)$ is a correlation function (e.g. $e^{-|t-s|/\tau}$). Only under certain conditions is this equation guaranteed to generate a CPTP map. We will in fact study similar equations in more detail in Chapter 5. Many other similar phenomenological models have been employed (e.g. see Refs. [74-77] for a few examples), as well as microscopic `collision'-based models [78-80], which provide a much less coarse-grained view of the environmental action on the system. More recently, large classes of master equations with memory kernels have been shown to generate genuine quantum dynamics, and these contain some of the aforementioned models as special cases [81, 82].
Though these frameworks can be trickier to work with in practice than the Lindblad form (both numerically and analytically), they in general give rise to a much richer dynamics. Numerous connections between open-system dynamics of this type and information theory have been made, linking the `flow of information' between the system and the environment to the degree of `non-Markovianity' [83-86]. Indeed, an active research area is the broad classification of dynamics, trying to measure the amount of non-Markovianity and to understand what this physically means. An interesting question is whether non-Markovianity offers any advantages for dissipative quantum computation or machine learning (over the Markov formalism), and whether it can be viewed as a resource in this context. We will discuss some of these ideas in Chapter 5.
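As a concrete companion to the master-equation formalism above (and to the spectral picture of Sect. 2.4.2), the short sketch below builds the matrix representation of a Lindblad generator of the form Eq. (2.13) in the vectorized (superoperator) picture, checks that all eigenvalues have non-positive real part, extracts the steady state from the kernel, and estimates the relaxation time $\tau := 1/\min_{i>0}|\mathrm{Re}\,\lambda_i|$. This is not the code behind the thesis figures (cf. Appendix A); the single-qubit model and its rates are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

# Single-qubit operators, basis ordering (|0>, |1>)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^- = |0><1|

def lindblad_superop(H, jump_ops):
    """Matrix of L(rho) = -i[H, rho] + sum_k (L_k rho L_k^dag - {L_k^dag L_k, rho}/2)
    acting on column-stacked rho, using vec(A X B) = (B^T kron A) vec(X)."""
    d = H.shape[0]
    I = np.eye(d, dtype=complex)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for Lk in jump_ops:
        LdL = Lk.conj().T @ Lk
        L += np.kron(Lk.conj(), Lk) - 0.5 * np.kron(I, LdL) - 0.5 * np.kron(LdL.T, I)
    return L

# Example: a driven qubit with amplitude damping and dephasing (arbitrary rates)
H = 0.5 * sx
Lsup = lindblad_superop(H, [np.sqrt(0.3) * sm, np.sqrt(0.1) * sz])

evals, evecs = np.linalg.eig(Lsup)
assert np.all(evals.real < 1e-10)        # Re(lambda_i) <= 0 for a Lindblad generator

# Steady state: the (here unique) kernel vector, reshaped and trace-normalized
k = np.argmin(np.abs(evals))
rho_ss = evecs[:, k].reshape(2, 2, order='F')
rho_ss /= np.trace(rho_ss)

# Relaxation time tau = 1 / min_{i>0} |Re lambda_i| sets the approach to steady state
tau = 1.0 / np.min(np.abs(evals.real[np.abs(evals) > 1e-10]))

# Sanity check: e^{tL} drives any initial state to rho_ss for t >> tau
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
rho_t = (expm(20 * tau * Lsup) @ rho0.reshape(-1, 1, order='F')).reshape(2, 2, order='F')
print(np.allclose(rho_t, rho_ss, atol=1e-6))     # True
```

The same vectorization is what makes the exact propagator $\mathcal{E}_t = e^{t\mathcal{L}}$ of Eq. (2.15) directly computable for small systems.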
19 Chapter 3 Dissipative Universal Simulation and Computation One of the main driving forces, and indeed the original proposal by Feynman [7], behind the pursuit for universal quantum computation is the ability to simulate quantum systems. For example, in chemistry, one may be interested in the unitary evolution of some molecule whilst interacting in a solvent. Exact simulation on a `classical' computer is impossible (except for tiny systems) due to the exponential size of the Hilbert space. Although many numerical techniques such as DFT [4, 5] have been developed and heavily optimized over the years giving (with errors), O(N 3 ) computation time (or even O(N) in some cases [87, 88]) for certain electronic-structure calculations, it is typically thought that a universal quantum simulator (computer) would massively accelerate and advance our understanding in many areas, including human biology, drug design, and cancers, to name a few. In this chapter, following work originally published in Refs. [45{47], we will show how using strong Markovian noise and a control Hamiltonian, one can engineer a system to eectively evolve, within the steady-state space (SSS) of the noise, according to a unitary evolution, or indeed an eective non-unitary Markovian evolution. That is, in principle, these techniques allow one to engineer by dissipation a simulator of any type of time- independent Markovian (or pure unitary) dynamics. In this sense therefore, one can turn a dissipative process into a true resource. The key idea behind this work, detailed in Ref. [45], is a simple one: once the system is prepared in the SSS the fast dissipative processes adiabatically decouple non steady- states away while at the same time strongly renormalize the system Hamiltonian in such a way that the SSS remains invariant under this projected dynamics. This phenomenon can be thought of as a sort of environment-induced quantum Zeno eect [37, 89] at the superoperator space level [90]. In the case in which the dissipation-projected Hamiltonian is vanishing, higher order virtual dissipative processes give rise, in a suitable limit, to an eective Liouvillian generator 20 CHAPTER 3. 3.1. A PERTURBATIVE APPROACH TO DYNAMICS that leaves the SSS invariant. However, at variance with the case studied in [45] this eective generator is no longer Hamiltonian: a slow irreversible process unfolds within the SSS. We will show how this mechanism can be exploited to the end of the simulation of any Markovian dynamics. More precisely, we will show that by suitably coupling a quantum system to a structured reservoir comprising multiple qubits undergoing fast amplitude damping one can implement an eective Liouvillian generator in general Lindblad form. As a basic illustration, Fig. 3.1 shows how an a priori closed system which is (unitarily) coupled to some dissipative degrees of freedom can be engineered so that it evolves according to a Markovian evolution with tunable decay rates. We will moreover show a practical construction of a network of `dissipation-generated modules' (DGMs), which will allow one to construct a general simulator capable of perform- ing universal quantum computation, as well as simulate arbitrary Markovian dynamics, in a scalable manner. g 1 g 2 g M τ 1 −1 τ 2 −1 τ M −1 g 1 2 τ 1 ≃ g 2 2 τ 2 g M 2 τ M … … S S Figure 3.1: A quantum system S (blue ball) is coupled with coupling strengths g i to M qubits (yellow balls). Each of these qubits is subject to amplitude damping with rates 1 i . 
We show that in the limit of small i the qubits can be adiabatically decoupled and the eective dynamics of S is described by M Lindblad operators of strength g 2 i i 3.1 A Perturbative Approach to Dynamics Let us consider a general Markovian evolution described by a time-independent generator (in the Lindblad form) composed of two partsL =L 0 +K whereKX =i[K;X] is a generator of unitary dynamics (K =K y ). We assume the dimension of the Hilbert space H is nite. We denote byP 0 (Q 0 := 1P 0 ) the spectral projection over the kernel ofL 0 , KerL 0 (the complementary subspace of KerL 0 ). We will assume that the set of steady states 21 3.1. A PERTURBATIVE APPROACH TO DYNAMICS CHAPTER 3. corresponding toL 0 , S 0 , is attractive, i.e., the non-zero eigenvalues i>0 ofL 0 are such that Re i>0 < 0, and thereforeP 0 = lim t!1 E (0) t , whereE (0) t :=e tL 0 . We now treatK as a perturbation to the dominant dissipative dynamicsL 0 , by writing K = 1 T ~ K, with ~ K =O(1), and whereT , which should be thought of as the total evolution time of the system, will be `large' (in the appropriate sense given below). Thus, in this limit, upon initializing the system in a steady state ofL 0 , one expects on time scales much greater than the relaxation time a r associated withL 0 , T r , the system state should remain inS 0 with high likelihood. The state itself will of course change non-trivially (when the dimension of the steady-state space is non-trivial), and as we will show, will be governed internally by an eective unitary (or, in some cases) Lindbladian evolution within S 0 . To understand this dynamics, note that we can formally solve the equation _ (t) =L(t) as E t =e tL 0 1 + Z t 0 de L 0 KE ; (3.1) which can then be expanded in higher orders ofK, E t =e tL 0 +e tL 0 Z t 0 de L 0 Ke L 0 +::: (3.2) We wish to study the dynamics within S 0 , thus consider, to rst order inK, E t P 0 =P 0 +tP 0 KP 0 +e tL 0 Z t 0 de L 0 Q 0 KP 0 +::: (3.3) where we use, by denition,P 0 L 0 =L 0 P 0 = 0, and resolution of the identityP 0 +Q 0 = 1 inside the integral. Note, of the three terms shown on the right hand side of this equation, only the third drives transitions outside of S 0 , and moreover, it can be shown that this third term is O( r =T ). Therefore, at this level of expansion in the Dyson equation Eq. (3.1), for an evolution over time t =T r , to error r =T , the system state remains inside S 0 , having evolved by an eective (dimensionless) Hamiltonian which at the superoperator level is given by b ~ K e :=P 0 ~ KP 0 . Indeed, in Ref. [45], it is shown that in the limit T r , kE T P 0 e ~ K e P 0 k =O( r =T ): (3.4) 3.1.1 Time dependence If the `control' Hamiltonian K is time-dependent, K = K(t) (where K(t) = 1 T ~ K(t)), one expects Eq. (3.4) to be replaced by a time-ordered exponential c kE T P 0 Te R 1 0 ds ~ K e (s) P 0 k =O( r =T ) (3.5) a 1 r := mini>0 Rei sets the energy scale associated withL0. b ~ K :=i[ ~ K;]. c In the integral we have dened s =t=T . 22 CHAPTER 3. 3.2. SIMULATING MARKOVIAN SYSTEMS where nowE t =Te R t 0 dL() . Though we do not provide a rigorous proof of this, nor does one exist in any publication to date, a proof sketch is provided in Appendix B.1. Simulations also seem to agree with this result, as in the time-independent case (not included in this work however). 3.1.2 Higher order eects Perhaps a more interesting question is what happens whenP 0 KP 0 = 0, in which case one should consider higher order terms in Eq. (3.3). 
Since we now strictly are interested in terms to second order inK, we set the scale of K by K = 1 p rT ~ K. The third term on the right hand side of Eq. (3.3) is therefore O( p r =T ). Looking to the second order d , in K, inserting the identity asK = (P 0 +Q 0 )K, gives four terms, only one of which survives whenP 0 PK 0 = 0, and T!1, E T P 0 =P 0 P 0 ~ KS ~ KP 0 +::: (3.6) whereS := Q 0 L 0 is a pseduo-inverse ofL 0 , where we note that after projecting outside ofS 0 , L 0 has a well dened inverse e . That is, to this level of expansion in the Dyson equation, the dynamics are governed by an eective generator, ~ L e :=P 0 ~ KS ~ KP 0 . In fact, as shown in Ref. [46], and also in Appendix B.2, this can be rigorously bound as kE T P 0 e ~ L e P 0 k =O( p r =T ): (3.7) 3.2 Simulating Markovian Systems Let us consider a system S coupled to a system B via the general Hamiltonian K = M X i=1 L i B i ; (3.8) where the tensor ordering follows that of the total Hilbert space,H =H S H B and, without loss of generality, we assumeB y i =B i ; (i = 1;:::;M). We also assume that the dissipative term is of the formL 0 = 1 S L B , such thatL B ( 0 ) = 0 where 0 is by assumption the unique steady state ofL B . The set of steady states ofL 0 , S 0 , is given by all the states of the form 0 and it is isomorphic to the full-state space of S: In this case one has P 0 (X) = Tr B (X) 0 andP 0 KP 0 () =i[K e ;] with K e = Tr B (K 0 ) 1 B [45]. Let S B be the the pseudo-inverse ofL B , as in footnote e. d The second order in K term can be written as e tL 0 R t 0 dt1e t 1 L 0 Ke t 1 L 0 R t 1 0 dt2e t 2 L 0 Ke t 2 L 0 , whereK can then be replaced by (P0 +Q0)K. e The slight abuse of notation in the denition ofS should be understood formally asS = R 1 0 dte tL 0 Q0. 23 3.2. SIMULATING MARKOVIAN SYSTEMS CHAPTER 3. Proposition 3.1. If K e = 0 thenL e =L (S) e 1 B ; L (S) e () =i[H e ;] + M X i;j=1 2 ij (L i L j 1 2 fL j L i ;g) (3.9) where := ( (A) + (A)y )=2,H e = 1 2i P M i;j=1 ( (A) (A)y ) i;j L j L i and (A) ij =Tr (S B (B i 0 )B j ): Proof. We directly compute the second order eective generatorL e :=P 0 KSKP 0 by acting on some state X, such thatP 0 (X) = 0 . Following Eq. (3.8) one has SKP 0 (X) =i M X i=1 (L i S B (B i 0 )L i S B ( 0 B i )); (3.10) where we have introduced notationS = 1 S S B , which only acts non-trivially on system B for this set-up, i.e. S B = R 1 0 e tL B (following fromS = R 1 0 e tL 0 Q 0 ). Acting with P 0 K on this we can see that: L e (X) = 8 < : M X i;j=1 (A) ij (L i L j L j L i ) + M X i;j=1 (B) ij (L j L i L i L j ) 9 = ; 0 ; (3.11) where (A) ij =Tr (S B (B i 0 )B j ), and (B) ij =Tr (S B ( 0 B i )B j ). Now observe that since S B is a Hermitian-preserving map, we have (A) = (B) . It then follows L (S) e () =i[H e ;] + M X i;j=1 2 ij (L i L j 1 2 fL j L i ;g) (3.12) where := ( (A) + (A)y )=2,H e = 1 2i P M i;j=1 ( (A) (A)y ) i;j L j L i and (A) ij =Tr (S B (B i 0 )B j ): This is Eq. (3.9), as required. Notice that H y e = H e and that Eq. (3.9) describes a truly Lindbladian dynamics i 0: One of our main results now follows as a particular case of Prop. 3.1 above. Let us consider a d-dimensional system S coupled to a system B comprising M qubits, by the HamiltonianK = P M i=1 g i (L y i i +h:c:); where theL i 's are given operators acting on the system state-space only. 
Let us also supposeL B = P M i=1 L i where each of the M qubits independently dissipates according to the local Liouvillian L i () = 1 i ( i + i 1 2 f + i i ; g): (3.13) The unique steady state ofL B is 0 =j0ih0j M and since Tr( i 0 ) = 0 (8i) one has K e = 0: 24 CHAPTER 3. 3.2. SIMULATING MARKOVIAN SYSTEMS Proposition 3.2.L e =L (S) e 1 B where L (S) e () = 4 M X i=1 g 2 i i (L i L y i 1 2 fL y i L i ; g): (3.14) Proof. To obtain Eq. (3.14) from Eq. (3.9), re-write the B i ;L i in (3.8) such that K = P M i=1 g i (L y i i + h:c:). Remembering that 0 =j0ih0j M , andL B as in Eq. (3.13), we recover Eq. (3.14) as required by direct evaluation of the matrix (A) in Prop. 3.1. Notice now that one can add any HamiltonianK 1 = T 1 ~ K 1 [k ~ K 1 k = O(1)] acting on the system S only ()P 0 KP 0 =P 0 K 1 P 0 ). This will result in ~ L e 7! ~ L e + ~ K 1 : Therefore we see that Prop. 3.1 shows that in principle any Liouvillian in the Lindblad form i.e., the most general generator of semi-groups of Markovian CP maps, can be obtained given the availability ofM auxiliary qubits (one for each Lindblad operator) subject to an amplitude damping channel and the ability to enact the Hamiltonian K. Dissipation turns into a resource that allows one to simulate a general Lindbladian evolution. We would like to make a few important remarks: 1) One might think of obtaining the Lindbladian dynamics Eq. (3.14) directly coupling the system S to some reservoir with an interaction Hamiltonian of type (3.8) and then using the standard Born Markov approximation. The point is that the latter involves uncontrolled approximations whereas Eq. (3.7) has a uniform and controlled error O( p R =T ): This means that the eective dynamics of S becomes exactly Lindbladian, with generator (3.14), for T!1: Of course this is true as long as the auxiliary qubits are exactly described by the Lindbladian in Eq. (3.13) i.e., their genuine Markovianity is a key resource in our universal simulation protocol along with the ability of switching on the the Hamiltonian in Eq. (3.8). 2) In view of physical applications, we stress that the eective dynamics in Eq. (3.14) still holds if the M qubits are replaced by M bosonic modes subject to amplitude damping i.e., the i in Eq. (3.13) are replaced by annihilation operators a i . 3) A comparison of the complexity of our analog simulation technique with \digital" ones [44] is not directly viable. A meaningful way to assess quantitatively the eciency of our proposal is to see how the required resources { the number of auxiliary qubits L and dissipation strength R = 1 R (energy) { scale with the numberN of qubits for a general Lindbladian simulation. Directly below it is shown that the dissipation rate fullls R = O( 1 LJ) where J := max i kL i k is independent of N and 1 is the simulation error (for a xed time). Furthermore for a k-local Lindbladian [44, 91] one has L =O(2 k N k ): This shows that to simulate any physically reasonable Lindbladian, resources scale polynomially with N: 3.2.1 Scaling of Resources From the denitionL e :=P 0 KSKP 0 it follows the \action" t :=kL e kt is of the order t R kKk 2 : By xing t =O(1) in the error bound derived in Appendix B.2, Eq. (B.22), one 25 3.2. SIMULATING MARKOVIAN SYSTEMS CHAPTER 3. obtains t R kKk(C 1 + t C 2 ) =:C R kKk; (3.15) with C = O(1): Now,kKk 2kKk and, from Eq. (3.8) (replacing M by L),kKk L max i kL i kkB i k :=LJ; with J =O(1): From Eq. 
(3.15) one has t R (2CJ)L whence R := 1 R 1 (2CJ)L implies t : The only quantity which (potentially) scales with the number of qubits is the total number L of Lindblad operators we have in the Liouvillian to be simulated (equal to the number of dissipating ancilla qubits required in our scheme). For the physically meaningful class of k-local Liouvillians considered in [44] one has L = O(N k 2 k ) (see Eqs. (3) and (4) in [44]). This proves the lower bound of the dissipation R rate mentioned above. 3.2.2 Simulating collective amplitude damping Here we use our general result Eq. (3.14) to simulate qubits subject to collective damping. This type of symmetric noise is interesting as it admits decoherence-free subspaces [33, 34, 92] and can be used to dissipatively prepare entangled states. Let us consider a system of N qubits coupled to a bosonic mode e.g., N atoms coupled to a cavity EM mode, via a (collective) Jaynes-Cummings Hamiltonian K = g (S a y +S + a). Moreover we assume that the system dissipates according to the LiouvillianL 0 = 1 S L B where L B () =i! [a y a; ] + 1 R (aa y 1 2 fa y a; g): (3.16) Using Eq. (3.14) with L 1 =S one nds the eective generator L (S) e () = 4g 2 R (S S + 1 2 fS + S ; g) (3.17) where is just theN-qubit state as the generator is trivial in the bosonic degrees of freedom (frozen atj0i): Prop. 3.1 shows that one can consider for the auxiliary qubits a Liouvillian that is more general than Eq. (3.13) (as long as its steady state is unique). We illustrate this fact by considering the thermalization of an auxiliary qubit at non-zero temperature. Namely, we add to Eq. (3.13) an excitation Liouvillian, such that now L B () = 1 ( + 1 2 f + ; g) + 1 + ( + 1 2 f + ; g): (3.18) By explicit computation of the matrix in Prop. 3.1 one can check that the new eective generator (in the system S sector) is L (S) e () = X = 1 e; (S S y 1 2 fS y S ; g) (3.19) 26 CHAPTER 3. 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK 1/ √ T "(E T − e ˜ L eff )P 0 " 0.000 0.008 0.016 0.024 0.032 0.000 0.006 0.012 0.018 0.024 Figure 3.2: Distance from the exact evolution (E T :=e TL T ) and eective one with Liouvil- lian (3.19), as a function of 1= p T . N = 3; + = 2; = 1, and g = ( R T ) 1=2 (where the relaxation time is R = + + + ). The linear t is obtained using the least squares tting on all of the data points, and the norm is the maximum singular value of the maps realized as matrices. Time is measured in units of R . where 1 e; = 4g 2 + ( + + ) 2 . A numerical check of the validity of Eq. (3.7) is shown in Fig. 3.2. Following a similar set-up as the above but now with a Hamiltonian of the form K = gS x x , the eective generator becomes that of collective dephasing along thex-direction L (S) e () = 4g 2 + + + (S x S x 1 2 fS x S x ;g): (3.20) In Fig. 3.3 we plot the distance between the actual and the eective evolution as a function of t for dierent time-scales T . According to Eq. (3.7) by changing T ! cT (c> 1), we expect the distance to fall by a factor of p c (cf. in Fig. 3.3 the maximum error falls from the dash to solid line by a factor of p 10). In the limit of T!1, the exact evolution becomes identical to the eective one8t: 3.3 A Dissipative Computational Network We propose a new computation/simulation paradigm using the dissipation projected dy- namics as discussed above. In particular, we construct a network of so called dissipation- generated modules (DGMs), where the full dynamics within the network is determined by couplings between these dissipative modules. 27 3.3. 
A DISSIPATIVE COMPUTATIONAL NETWORK CHAPTER 3. −2 −10 1 2 3 4 0.00 0.02 0.04 0.06 0.08 0.10 !(E t − e tL eff )P 0 ! log 10 (t) T =10 3 T =10 4 Figure 3.3: Distance from the exact evolution (E t :=e tL T ) and eective one with Liovillian Eq. (3.20), as a function of log 10 (t). N = 1; + = 2; = 1, and g = ( R T ) 1=2 ( R = + + + ). Note that for the dashed line we have extended t past T , purely for convenience. The norm is the maximum singular value of the maps realized as matrices. Time is mea- sured in units of R . The Hilbert space of a single dissipation-generated module is of the formH =H n 1=2 H a , whereH 1=2 is the Hilbert space of a single qubit, andH a the Hilbert space of dissipative ancillary resources (e.g. qubits, bosonic modes, etc.). By allowing a coupling between dierent modules, one can construct a `DGM-network', which denes an undirected graph G := (V;E), where the vertices V are distinct modules, and the edges E, are dened by the couplings between modules. The full Hilbert space is thereforeH = i2V H i , whereH i has the same structure as above. This structure implies that the full Hamiltonian can be written in general as K = X (i;j)2E K (i;j) : (3.21) Moreover, each module is assumed to be dissipative, such that the unperturbed dynamics of the system may be described by a Liouvillian superoperator of the formL 0 = P i2V L (i) 0 , whereL (i) 0 only acts on the ith module. With eachL (i) 0 we associate a dissipative time- scale, i . The overall, dissipative, timescale of the whole network is given by := max i ( i ). We illustrate a four module patch of a two-dimensional DGM-network in Fig. 3.4. Since the dissipation acts, by denition, separately on each module, it is clear that the full SSS projection operator over an entire DGM network is simply of the formP 0 = i2V P (i) 0 , whereP (i) 0 is the projection operator associated with the i-th module. The Hamiltonian superoperator,K = P (i;j)2E K (i;j) , gives rise to rst-order eective dynamics 28 CHAPTER 3. 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK … … … … … … … … … … Figure 3.4: A four-module patch of a two-dimensional network of coupled DGMs. Thei-th module dissipates according the LiouvillianL (i) 0 and the bare inter-module coupling are given by theK (i;j) =i[K (i;j) ;]. Yellow thick arrows represent strong dissipation whereas blue wavy lines represent Hamiltonian interactions. as dened by Eq. (3.4), given by Ke = X (i;j)2E (P (i) 0 P (j) 0 )K (i;j) (P (i) 0 P (j) 0 ): (3.22) Errors associated with this eective dynamics, as per Eq. (3.4), are of the order = O(J MAX jEj), where J MAX is the maximum inter-module coupling strength, andjEj the number of edges in the network f . This error presents a trade-o between the graph complexity (as measured byjEj), and the global dissipation rate, 1 . This trade-o stems from the observation that simple networks of N nodes, e.g. a tree withjEj = O(N), may require a greater number of elementary gates to perform some computation as compared to a fully connected graph withjEj = O(N 2 ), indicating there will be an optimal graph congurationG for a given simulation. For arbitrary computations/simulations, one needs to be able to connect any two nodes inG, where the cost of doing so is equal to the distance between these nodes (i.e. this is the order of the number of gates one would need to apply). Finally note that the dissipation required for these modules can in principle be engi- neered using the second-order dissipation lemma as given by Eq. (3.7). 
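Before turning to the coherent dynamics inside the modules, the second-order mechanism invoked here (Prop. 3.2 and the bound of Eq. (3.7)) can be checked numerically in its smallest instance: one system qubit coupled to a single amplitude-damped ancilla. The sketch below is a rough, self-contained check in the spirit of Fig. 3.2, not the thesis' own simulation code; the choice $L = \sigma^-$, the initial state, and the values of $T$ are arbitrary.

```python
import numpy as np
from scipy.linalg import expm

si = np.eye(2, dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^- = |0><1|
sp = sm.conj().T

def superop(H, jumps, d):
    """Column-stacked matrix of -i[H, .] plus the Lindblad dissipators."""
    I = np.eye(d, dtype=complex)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for Lk in jumps:
        LdL = Lk.conj().T @ Lk
        L += np.kron(Lk.conj(), Lk) - 0.5 * np.kron(I, LdL) - 0.5 * np.kron(LdL.T, I)
    return L

def evolve(Lsup, t, rho):
    d = rho.shape[0]
    return (expm(t * Lsup) @ rho.reshape(-1, 1, order='F')).reshape(d, d, order='F')

def ptrace_ancilla(rho):                         # partial trace over the second qubit
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

tau = 1.0                                        # ancilla relaxation time
Ls = sm                                          # system Lindblad operator to simulate
rho_s = np.array([[0.3, 0.2 - 0.1j], [0.2 + 0.1j, 0.7]])    # arbitrary system state
rho_a = np.diag([1.0, 0.0]).astype(complex)      # ancilla frozen in its steady state |0><0|

for T in [100.0, 400.0, 1600.0]:
    g = 1.0 / np.sqrt(tau * T)                   # scaling used in Fig. 3.2
    K = g * (np.kron(Ls.conj().T, sm) + np.kron(Ls, sp))        # Eq. (3.8)-type coupling
    L_full = superop(K, [np.kron(si, sm) / np.sqrt(tau)], 4)    # coupling + fast damping
    rho_T = ptrace_ancilla(evolve(L_full, T, np.kron(rho_s, rho_a)))
    # Prop. 3.2: effective system-only Lindbladian with rate 4 g^2 tau
    L_eff = superop(np.zeros((2, 2), dtype=complex), [2 * g * np.sqrt(tau) * Ls], 2)
    rho_eff = evolve(L_eff, T, rho_s)
    print(T, np.abs(rho_T - rho_eff).max())      # deviation shrinks as T grows
```

With $g = 1/\sqrt{\tau T}$ the effective rate $4g^2\tau$ gives a fixed action $4g^2\tau T = 4 = O(1)$, and the printed deviation should shrink roughly like $\sqrt{\tau/T}$, in line with Eq. (3.7).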
f This estimate is a worst case scenario which assumes that all the couplings K (i;j) are always on 29 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK CHAPTER 3. 3.3.1 Coherent dynamics in DGMs If the type of noise acting on the system admits a Decoherence-Free Subspace (DFS) in each module, Eq. (3.22) becomes Ke = X i;j ( i j )K (i;j) ( i j ); (3.23) where i is the orthogonal projector over the DFS in module i. As an example, assume two connected DGMs, of N qubits, each module undergoing collective amplitude damping of the form L (i) 0 () = 1 i (S S + 1 2 fS + S ;g) (3.24) where i = 1; 2 indicates the DGM index. The Hilbert space for N qubits breaks down into separate angular momentum sectors asH = J n J H J , where n J is the multiplicity of H J = spanfjJ;mig J m=J . Under this type of dissipation, there is a DFS containing the lowest-weight angular momentum vectors in each sector (i.e.jJ;Ji). Let us concentrate on the case N = 2 in the following. In this case there is a two-dimensional DFS (i.e a qubit), spanned by the singlet,j 0i :=j0; 0i, and thej 1i :=j1;1i triplet state g . The projectors introduced above are then given by i =j 0ih 0j +j 1ih 1j. This set-up in fact allows us to generate eective, logical, single qubit gates, as well as entangling gates. Consider the two-local Hamiltonian K =g p 2I x . From Eq. (3.23) one obtains K e =g x :=g(j 1ih 0j +j 0ih 1j); (3.25) i.e., a logical, eective, Pauli x Hamiltonian. In a similar manner, K = g z z can be used to generate a logical Pauli z, z := j 1ih 1jj 0ih 0j. y , of course, can be generated using the commutation relations h . Hence using this type of dissipation, one can generate all of the single qubit gates required for universal quantum computation. We now study the coupling between the two DGMs dened above. In particular consider the following interaction K =g( + 2 1 + 2 + 1 ); (3.26) where the tensor product is the product between the DGMs and the sub-index labels the qubit inside each DGM. According to Eq. (3.23) the above term induces the following eective Hamiltonian K e = g 2 (j 0 1ih 1 0j +j 1 0ih 0 1j): (3.27) g j0; 0i = 1 p 2 (j01ij10i);j1;1i =j00i. h Consider the HamiltonianK = z z . Clearly the action of this onj 1i is the identity, whereas results in a minus sign onj 0i. Therefore, this will result in a logical z Hamiltonian. x is derived in a similar manner. 30 CHAPTER 3. 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK The interaction in Eq. (3.27) can be used to generate entanglement between DGMs. In fact, this can be used to create a `square root ofi swap' gate, SQiSW, which in turn can be used to generate a CNOT gate [93]. Given that we can, in a similar fashion, generate all of the single qubit gates, this method can be used to enact universal QIP over the logical of the modules. To demonstrate these ideas we explicitly construct a CNOT gate dissipatively. Let us denote the Hamiltonian Eq. (3.27) by K swap (g), where the parameter g is the strength of the Hamiltonian. Then, in the (ordered) basisfj 0 0i;j 0 1i;j 1 0i;j 1 1ig, it is clear that (eective) evolution under K swap ( 2 ), generates the SQiSW=:S i gate: S i = 0 B B B @ 1 0 0 0 0 1 p 2 i p 2 0 0 i p 2 1 p 2 0 0 0 0 1 1 C C C A : (3.28) We dene qubit rotations by about x;y respectively as X =e i 2 x ;Y =e i 2 y . This ability to generate SQiSW, X , and Y enables us to create an eective CNOT gate [93]. In particular, CNOT =Y (1) 2 X (2) 2 X (1) 2 S i X (1) S i Y (1) 2 ; (3.29) where the superscript index indicates which DGM we are operating on. 
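As a quick sanity check of this logical encoding (not part of the published numerics), the projected Hamiltonians of Eqs. (3.25)-(3.27) can be computed directly by sandwiching the bare couplings between the DFS projectors. In the sketch below the phase convention for the singlet and the assignment of which physical qubit each operator acts on are choices made here for illustration; they only affect overall signs of the logical matrix elements, not their magnitudes.

```python
import numpy as np

# Single-qubit operators, basis ordering (|0>, |1>)
si = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^-
sp = sm.conj().T                                  # sigma^+

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# DFS of one two-qubit module under collective amplitude damping:
# |0_L> = singlet (phase convention chosen here), |1_L> = |00>
e = {s: np.eye(4)[i] for i, s in enumerate(['00', '01', '10', '11'])}
zero_L = (e['01'] - e['10']) / np.sqrt(2)
one_L = e['00']
P = np.outer(zero_L, zero_L) + np.outer(one_L, one_L)    # DFS projector Pi

def logical(op, basis):
    """Matrix elements of op in an orthonormal logical basis."""
    return np.array([[b1.conj() @ op @ b2 for b2 in basis] for b1 in basis])

g = 1.0
# Eq. (3.25): K = g*sqrt(2) (I x sigma_x) projects to a logical sigma_x of strength g
K1 = g * np.sqrt(2) * kron(si, sx)
print(np.round(logical(P @ K1 @ P, [zero_L, one_L]), 6))

# Eqs. (3.26)-(3.27): coupling qubit 2 of module A to qubit 1 of module B
K2 = g * (kron(si, sp, sm, si) + kron(si, sm, sp, si))
P2 = np.kron(P, P)
basis2 = [np.kron(a, b) for a in (zero_L, one_L) for b in (zero_L, one_L)]
# logical ordering: |0_L 0_L>, |0_L 1_L>, |1_L 0_L>, |1_L 1_L>
print(np.round(logical(P2 @ K2 @ P2, basis2), 6))
# -> an exchange coupling of magnitude g/2 between |0_L 1_L> and |1_L 0_L>,
#    with an overall sign set by the singlet phase convention
```

One finds a logical $\bar\sigma_x$ of strength $g$ for the single-module term and an exchange coupling of magnitude $g/2$ between $|\bar 0\,\bar 1\rangle$ and $|\bar 1\,\bar 0\rangle$ for the inter-module term, consistent with Eqs. (3.25) and (3.27) up to these convention-dependent signs.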
To simulate, for example,X (1) , we evolve the two modules (each containing 2 qubits), which are dissipating via Eq. (3.24), with HamiltonianK = p 2 x 2 I. As is requisite for this technique, we assume K is scaled by a factor of 1=T . Joining together several operations of this type (as given by Eq. (3.29)), we can in fact construct the CNOT gate, assuming the leakage out of the SSS is negligible (or at least, controllable) at each step. Since the errors at each step are always O(1=T ), we expect the scaling, for large T , to in fact still be linear in 1=T . To show our scheme is eective at preparing such a gate, and hence entangled states, we illustrate the ability to prepare the maximally entangled statej + i := 1 p 2 (j 0 0i +j 1 1i), from an initial product state, with arbitrarily small error, in Fig. 3.5. We see for larger and largerT , the error bound is better approximated as a linear function in 1=T , as expected. Fig. 3.5 shows that despite the 7 separate evolutions required to prepare the CNOT gate, where errors will accumulate at each step, by tuning the total evolution time, T , one can arbitrarily control the overall error in the system. Given this ability to perform CNOT and the single qubit operations means within a DGM network one can simulate, with arbitrary accuracy, any information processing task. 3.3.2 Incoherent DGMs: coherence and entanglement In Refs. [94, 95] it was observed that by coupling a qubit in a (lossy) cavity interacting with a bosonic mode, to an empty cavity supporting another bosonic mode, one can extend the 31 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK CHAPTER 3. 0 0. 5 1 1. 5 1= T # 1 0 ! 3 0 0. 0 5 0 . 1 0. 1 5 k E T ( ; 0 ) ! j A + i h A + j k Figure 3.5: Distance between exact evolution, and the target maximally entangled Bell state,j + i (dened in main text). We initialize the system in the product state 0 = j 0 ih 0 j, wherej 0 i = j 0i+j 1i p 2 j 1i. The exact evolutionE T =E (7) T=7 E (6) T=7 :::E (1) T=7 , where each E (i) T=7 generates one of the 7 gates of Eq. (3.29), as described in the main text. The dissipative time-scales for the two DGMs were set to 1 = 2 = 1 arb. units for this simulation. The norm is the maximum singular value of the maps realized as matrices. The linear t is on the 10 largest T values. Time is measured in arbitrary units. coherence time of the qubit. By further adding an atom (qubit) in the extra cavity, it was also shown that the entanglement between the atoms survives for longer times. Similar results have also been found in [96]. We follow a more general set-up, of which the system described in [95] is a special case. We are able to show that by increasing the coupling strength between the modes, we can increase the eective dissipative time-scale of the qubits, hence increasing the time window where purely quantum eects can be observed. We consider N qubits ( = A;B) in each cavity, collectively coupled to a single bosonic mode in each cavity via a Jaynes-Cummings term. The unperturbed generator for two modules is given by (see Fig. 3.6 for reference) L 0 =i[H AB ;] + X =A;B L () 0 ; (3.30) where L () 0 () = 1 (c c y 1 2 fc y c ;g); (3.31) and c (c y ) is the annihilation (creation) operator for mode in cavity = A;B. The coupling Hamiltonian is given by H AB = J(c y A c B +c A c y B ). The unique steady state in 32 CHAPTER 3. 3.3. 
A DISSIPATIVE COMPUTATIONAL NETWORK … … Figure 3.6: Two modules, each containing qubits (yellow/light gray spheres, top) connected to a bosonic mode (blue/dark gray sphere, bottom), which dissipate at rate 1 . In turn, the two modules are coherently coupled via the bosonic modes, with strength J. the bosonic sector is the joint vacuum state =j0ih0j j0ih0j. The decay of the qubits is assumed to be mediated by the usual Jaynes-Cummings interaction between the qubits and modes, i.e., K =K 0 + X =A;B g (c S + +c y S ); (3.32) where K 0 = P =A;B (! q S z +! c y c ). One can check the second term in Eq. (3.32) vanishes at rst order. For the sake of simplicity we set ! =! q = 0 (see Appendix B.3). This system is equivalent, up to arbitrary controllable errors to two coupled modules containing N qubits per module (bosons frozen atj0i). The eective dynamics are gov- erned by Liouvillian L e =i[K e ;] + X =A;B L e ; (3.33) with K e =J e (S + A S B +S A S + B ); (3.34) and L e ( ) = 1 e; (S S + 1 2 fS + S ;g); (3.35) where is the N -qubit state. The eective dissipative rate is found to be (see Ap- pendix B.3 for derivation), 1 e; := 4 1 + 4(J) 2 g 2 ; (3.36) and the coupling strength, J e := 4Jg A g B 2 1 + 4(J) 2 : (3.37) 33 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK CHAPTER 3. We observe immediately that the eective dissipative timescale is controlled by a factor of J 2 . Also note that if we set J = 0, we recover the result in Eq. (5) of [46]. If we allow the second DGM to contain no qubit (e.g.g B = 0), and place a single qubit in system A, we have the exact case as discussed in Ref. [95] (see Fig. 1 of [95]). The amount of coherence in a state , in a given basis, can be captured by C = P i6=j j ij j. It can be shown that i , in the standard basis, using the eective dynamics, one obtains C =e T 1 e;A =2 [note that this result is correct up to O( p =T )]. A plot ofC as a function of J for dierent dissipations, is given in Fig. 3.7. 0 1 2 3 4 5 J 0 0 . 2 0 . 4 0 . 6 0 . 8 1 C = = 0: 5 = = 1 = = 2 Figure 3.7: Coherence of the evolved eective dynamics, Eq. (3.33), as a function ofJ for a single qubit, for three values of (see legend). Initial qubit statej 0 i = 1 p 2 (j0i +j1i). We set Tg 2 A = 1 arb. units. This result is valid only in the regime T. Time is measured in arbitrary units (J;g are inverse time). We also consider the dynamics of the entanglement of the two atoms between modules. The entanglement can be quantied by the concurrence given by C() = max(0; 1 2 3 4 ), where i are the eigenvalues in decreasing order of p p ~ p where ~ = y y y y . We consider the eective dynamics for an initially maximally entangled two-qubit system, evolving via Eq. (3.33). The eect of this dissipative evolution, of course will result in a degradation of the entanglement. Remarkably however, such degradation can be controlled by increasing the coupling strength J. We also nd a similar eect, as expected, by increasing (i.e. decreasing the dissipative rate), see Fig. 3.8. This result is i One can calculate the functional form of the coherence, in the computational basis (jii, i = 0; 1), by noting thatL e (jiihjj) = 1 eff; 2 jiihjj, fori6=j, andL e (j1ih1j) = 1 e; (j0i0jj1ih1j). Note,j0ih0j is a steady state. The subscript on the states just indicate the system =A;B. 34 CHAPTER 3. 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK also accurate up to O( p =T ). 0 2 4 6 8 1 0 J 0 0 . 2 0 . 4 0 . 6 0 . 
8 1 C ( ; ) = = 0: 1 = = 0: 2 = = 0: 5 Figure 3.8: Dynamics of entanglement, given by the concurrenceC, as a function of the boson-boson coupling strength,J, for a two qubit system evolving via Eq. (3.33), for dier- ent dissipation timescales (see legend). The two qubits are initialized in the maximally entangled state,j 0 i = 1 p 2 (j00i +j11i). The qubit-boson coupling parameters were chosen such that Tg 2 A =Tg 2 B = 1 arb. units. This result is valid only in the regime T. Time is measured in arbitrary units (J;g are inverse time). 3.3.2.1 Dephasing in the computational basis By altering the type of noise acting on the system, we can enact completely dierent dynamics. In particular, dephasing in thez direction, can be used to prepare mixed states. Consider two coupled DGMs, each containingN physical qubits, undergoing dephasing in the z direction, that is, with Lindbladian L 0 () = 1 (S z S z 1 2 fS z S z ;g): (3.38) The SSS contains all states (J;m) := jJ;mihJ;mj, wherejJ;mi 2 H J . Consider the Hamiltonian K =g(S + S +S S + ); (3.39) where the tensor ordering respects the ordering of the two DGMs. The projector over the SSS isP 0 (X) = P J;m (J;m) X (J;m) , (for m =J;J + 1;:::;J), for quantum states X2 L(H N ). 35 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK CHAPTER 3. Now, as an example, take a pair of coupled modules, with N = 1. The steady states in each DGM are of the forma (1=2;1=2) +(1a) (1=2;1=2) , for anya2 [0; 1]. Consider initial- izing the system in the (steady) state 0 =j0ih0j j1ih1j, wherej0i :=j1=2;1=2i;j1i := j1; 2; 1=2i. The rst non-zero eective term actually enters at second-order (i.e. Eq. (3.7)), and one can show that the eective dynamics are then given by (see Appendix B.4), E e ( 0 ) = 1 +e 2Tg 2 2 j0ih0j j1ih1j + 1e 2Tg 2 2 j1ih1j j0ih0j; (3.40) whereE e := e TL e , where Tg 2 = O(1) (and it is assumed that T ). In particular, this is the state of the system, up to arbitrarily small errors, after the evolution under Eq. (3.38), and (3.39). We see that by varying the parameters g; we can transfer, and tune, the populations between two modules, that is, we can dissipatively generate a one dimensional family of correlated two-qubit states. 3.3.3 Robustness The techniques outlined so far are robust with respect to a certain class of errors. For example, by Eq. (3.4), any perturbation to the control Hamiltonian K, say K!K +K 0 , results in the same eective dynamics provided thatP 0 K 0 P 0 = 0. Clearly an analogous result holds also for Lindbladian perturbations with similar properties [90]. 3.3.3.1 Hamiltonian Errors We demonstrate in the setting of DGMs, that this robustness holds throughout the network by considering errors to Eq. (3.21) of the form K (i;j) !K (i;j) +V (i;j) ; (3.41) where V (i;j) represents a Hamiltonian encoding error between the modules i;j, satisfying P 0 V (i;j) P 0 = 0, where V = i[V;]. Collectively then, as these terms only enter the dynamics at most at second order, they give rise to the same eective dynamics. Consider, as an illustrative example, two DGMs, each with two qubits collectively dissipating as per Eq. (3.24). As discussed above, each DGM has a two-dimensional DFS = spanfj 0i;j 1ig. We denote the remaining two basis states of the full Hilbert space of each DGM byje 0 i :=j1; 0i;je 1 i :=j1; 1i, using the angular momentum notation. Errors to the encoding Hamiltonian (e.g. Eq. 
(3.26)) of the form V := X ;=e 0 ;e 1 i;j=0;1 ij jih ij jih jj +H:c:; (3.42) will be projected out at rst order (since 12 V 12 = 0, where 12 = , and is dened as in Sect. 3.3.1). The tensor ordering respects that of the Hilbert space for the two DGMs. We call the `error matrix'. 36 CHAPTER 3. 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK Note that, because of Eq. (3.4), such Hamiltonian errors give rise to the same eective dynamics accurate up to O(=T ). We demonstrate the dynamics under such an error in Fig. 3.9. We notice two important properties of this gure. 1) The dynamics with non- zero error matrix still result in overall error with respect to the eective evolution that is linear in 1=T . 2) With 6= 0, the overall error,k(E T e ~ K e )P 0 k, is strictly greater than the case = 0, as expected. 0 0 . 2 0. 4 0. 6 0. 8 1 1= T # 1 0 ! 4 0 0 . 5 1 1 . 5 2 2 . 5 3 k ( E T ! e ~ K e f f ) P 0 k # 1 0 ! 3 1 = 0 k 1 k = 0. 9 k 1 k = 1. 1 Figure 3.9: Robustness to Hamiltonian errors. We consider an unperturbed dissipation given by Eq. (3.24) plus a control Hamiltonian Eq. (3.26) and an error term of the form Eq. (3.42). The error matrix has random complex entries (for two xed, non-zero mag- nitudes). We plot the distance between the exact dynamics (i.e. the dynamics with error terms), to the eective dynamics as governed by Eq. (3.27). Blue data points are without error. The dissipative time-scales are xed at 1 = 2 = 1 arb. units, and we set Tg = 2. Norms are calculated using the maximum singular value of the maps realized as matrices. Time is measured in arbitrary units. 3.3.3.2 Lindbladian Errors This analysis can be extended to include the case where there are errors to the Lindblad operators, L i , dening a Lindbladian of the form L 0 () = X i (L i L y i 1 2 fL y i L i ;g): (3.43) 37 3.3. A DISSIPATIVE COMPUTATIONAL NETWORK CHAPTER 3. Consider an error of the form L i ! L i + i , where the i are assumed to be O(1=T ). To O(1=T ) then, we haveL 0 !L 0 +L 1 , where L 1 () = X i ( i L y i 1 2 fL y i i ;g) +H:c: (3.44) As in the previous subsection, all perturbations of this form such thatP 0 L 1 P 0 = 0 give rise to the same eective dynamics. We illustrate this by following the same example in the previous subsection withL 0 given by Eq. (3.24). It is easy to verify that by choosing i = V in Eq. (3.44) [V as in Eq. (3.42)] one obtainsP 0 L 1 P 0 = 0 j . We numerically validate this in Fig. 3.10, where we do indeed see extra O(1=T ) errors resulting from the Lindbladian perturbations. 0 0 . 2 0. 4 0. 6 0. 8 1 1= T # 1 0 ! 4 0 0 . 5 1 1 . 5 2 k ( E T ! e ~ K e f f ) P 0 k # 1 0 ! 3 1 = 0 k 1 k = 2. 7 k 1 k = 4. 8 Figure 3.10: Robustness to Lindbladian errors. We consider an unperturbed dissipation given by Eq. (3.24) plus a Lindbladian error term of the form Eq. (3.44) with i = V as in Eq. (3.42). We show results for the unperturbed case, = 0 (blue points), and two non-zero perturbations. The dissipative time-scales are xed at 1 = 2 = 1 arb. units, and we set Tg = 2. Norms are calculated using the maximum singular value of the maps realized as matrices. Time is measured in arbitrary units. j From Sect. 3.3.1, states in the DFS are linear combinations of terms of the formss :=j n0ih n1j j n2ih n3j, ni2f0; 1g. There is a single Lindblad operator for each DGM, S . Non-zero terms ofL1(ss), are of the form X jeiih njj, orjeiih njj X, for some X (or Hermitian conjugate). This is sucient to see that P0L1(ss) = 0 (regardless of X), henceP0L1P0 = 0 38 CHAPTER 3. 3.4. 
DISCUSSION 3.4 Discussion In this chapter, building on a perturbation theory at the superoperator level rst outlined in Ref. [45], we demonstrated the ability to (in principle) simulate any type of Markovian quantum dynamics, both unitary and non-unitary; that is, a dynamics dened according to a quantum dynamical master equation of the Lindblad form. This relies on coupling ones system to dissipative degrees of freedom, which results in an eective, re-normalized dynamics unfolding within the set of steady states. In this sense, strongly dissipative systems, such as bosonic modes in a lossy cavity, or qubits subject to thermalization, are in fact resources and can be used to enact universal quantum computation and simulation within the system of interest. Using these ideas, we constructed a computational network of dissipation-generated modules (DGMs), which is a scalable architecture that directly uses strong dissipation of the form mentioned above to enact quantum information processing primitives, entangle quantum states, and simulate dissipative dynamics. Moreover, the amount of resources required for an arbitraryN qubit simulation, we showed, scales polynomially in the number of qubits. This module based scheme is attractive for two main reasons. 1) There is some built in robustness to errors occurring to the underlying Hamiltonian, as well as the source of noise itself (i.e. the LindbladianL 0 ). 2) Under a nite time evolution of the network (T <1), the errors associated with the simulation have well dened controllable errors (which go as 1=T or 1= p T depending on the type of simulation one performs). This is in contrast with the `typical' case where one makes uncontrollable approximations (such as the Born-Markov approximations), and it is not clear the extent to which the actual dynamics are described by the approximated ones. Conceptually, the scalability and computational universality of our dissipation-assisted modular networks show yet another way in which dissipation can be turned into a powerful resource for quantum manipulations. In the next chapter we consider another type of computational network based entirely on dissipation, where the computations one wishes to perform are carried out directly by the dissipation itself (i.e., not under an eective evolution). 39 Chapter 4 Dissipative Quantum Circuits and Data Classication It has been shown that the dissipative model of quantum computation, described by Ref. [43], is equivalent in computational power to the gate model [43, 44]. In fact, since there is some built-in robustness to errors within this framework of dissipative quantum circuits [43, 46, 47], it provides a potential route to large scale universal quantum compu- tation. Moreover, since quantum states can be constructed to be attractive xed points of such a dissipative dynamical system, state preparation is a very natural application of these circuits [40]. Another key area where quantum technology may prove to be bene- cial is the eld of machine learning [14, 97{99], several techniques of which actively exploit dissipative quantum dynamics [100{104]. In this chapter (also see Ref. [48]) we describe a dissipative quantum network capa- ble of computing Conjunctive-Normal-Form (CNF) formulas, which connects all three of the aforementioned topics of interest: computation, state preparation, machine learning. 
The circuit construction we introduce preserves quantum coherence between pure diagonal states in the computational basis { that is, `classical' states { which have the same evalu- ation on the clauses in the CNF. As such, this provides a natural way to classify quantum data, i.e., according to the resulting coherent partitions of the Hilbert space, upon evolving the dissipative network. These partitions are in fact decoherence-free subspaces (DFSs) of the underlying dissipative dynamics [33, 34]. The inspiration for this comes partly from the classical Hopeld recurrent neural net- work. In this classical model, data is classied according to a set of `memories' or `patterns', to which the network dynamically evolves towards as local minima, i.e. attractive xed points of the dynamics. In this manner, the space of classical bit strings is partitioned according to the set of memory states. In our model, which is morally similar, data is clas- sied according to the xed points of a dissipative quantum evolution, and the full space is partitioned into sectors according to DFSs. Our model however in fact goes further in 40 CHAPTER 4. 4.1. COMPUTING BOOLEAN FORMULAS BY DISSIPATION the sense that it can in addition classify quantum data; superpositions of `classical' states. We then use our model to generalize certain classical notions of data classication, providing a consistent framework in which to classify quantum data. We provide a basic outline as to how this can be used in quantum machine learning, and provide a concise example. Another use of this partitioning of the Hilbert space is in the preparation quantum states { such as entangled states { by dissipation alone. This technique is somewhat dierent to previous dissipative state preparation techniques, such as in Refs. [40, 47], which eectively use dissipation to enact the desired unitary. Our method relies on nding a CNF which will split the space into DFSs containing the desired states. In fact, one could prepare an ensemble of quantum states in this manner. As with other dissipative techniques of this type, there will be some in built robustness. We discuss and bound errors within this network, which shows it actually benets from the source of the dissipation being as strong as possible, i.e., we are working in the strongly dissipative regime. After brie y introducing some well known results, we will discuss our general framework, and two `claims' from which all of the applications discussed above are essentially special cases. Indeed, the possibility for additional techniques arising from this methodology is promising. 4.1 Computing Boolean Formulas by Dissipation Consider the space of all bit strings of length n,X :=f0; 1g n . We denote the binary variables associated with x2X by x i 2f0; 1g, where x = (x 1 ;:::;x n ). A CNF formulaC is a conjunction (^) of clauses, where each of the clauses are disjunc- tions (_) of literals,l. Each literall is eitherx j or its negation:x j , for some 1jn. For example, C =C 1 ^^C N , with C i =l (i) 1 __l (i) k i , where k i 2 [n]. We will throughout use the notation [n] :=f1; 2;:::;ng. We state some standard results pertaining to CNF formulas and Boolean functions (see e.g. Ref. [105]): i) Any Boolean function,f :X!f0; 1g can be represented by a CNF formula (although the number of clauses may be exponential in n). ii) Any functionf :X!f0; 1g m can be represented usingm CNF formulas; one formula for each bit of f(x). 
iii) A clauseC i of a CNF formula containingk i literals can be represented, usingO(k i ) = O(n) additional variables, by an equivalent, equisatisable CNF formula where each of the clauses are at most size 3. Such a formula is also known as a `3-CNF'. 41 4.1. COMPUTING BOOLEAN FORMULAS BY DISSIPATION CHAPTER 4. Throughout we will make use of the above three points, and as such will focus much of our attention on computing Boolean functions (m = 1) with a 3-CNF representation, C. We will refer to the explicit evaluation of C on some particular x2X , using the notation C(x)2f0; 1g. Such a 3-CNF formula C =^ i C i can be used to classify classical data. There are several possible ways one could do this. We take the following approach: Denition 4.1. x;y2X , belong to the same `partition' ofX i C i (x) = C i (y);8i. In particular, ifx;y agree on all of the clauses ofC, they are `classied' equivalently, according to the partition, or `sector', to which they belong. For a CNF with N clauses, this denes a partitioning of the spaceX into at most 2 N sectors. Note that x;y belonging to the same sector of course implies C(x) = C(y), but the converse will not necessarily be true. We will upgrade these general notions to dene partitions over the Hilbert space, where it will be shown that Def. 4.1 provides a natural starting point for dening quantum data classication by dissipation. Another, coarser manner in which to classify classical data is according to the evaluation of C itself. That is, x;y are classied equivalently i C(x) =C(y). We will in a similar manner extend the denition of the boolean formula C over classical bit strings, and give it a meaning over the Hilbert space, allowing us to write such quantities as C( ) wherej i is a quantum state. To map the above into the quantum realm, instead of usingn binary variables to encode data, one considers a system of n qubits. The Hilbert spaceH X = spanfjxig x2X =C n 2 is therefore of dimension 2 n . Herejxi := n i=1 jx i i, withj0i;j1i eigen-states of the Pauli z operator z :=j1ih1jj0ih0j. We represent an arbitrary pure state inH X asj i = P x2X a x jxi, for complex coecients (amplitudes)a x which are normalized P x2X ja x j 2 = 1. Occasionally we will refer explicitly to density operators 2 L(H X ) which are Hermitian, positive-semidenite, trace one, linear operators acting over the Hilbert space. 4.1.1 Dissipative evaluation of 3-CNF clause Central to the dissipative network which is to be described, is the ability to dissipatively evaluate a clause of 3-variables, i.e., C =l 1 _l 2 _l 3 , where the l i are either x k or:x k , for some k2 [n]. That is, given somejxi2H X (with x2X ), we will compute C(x). Let us assume the only three bits of x involved in C are the i 1 ;i 2 ;i 3 -th bits, where i j 2 [n]. Without loss of generality we will assume these three bits are distinct. In the quantum setting, this corresponds to three qubits. One can construct a small dissipative network, coupling these three qubits to an additional (or `auxiliary') qubit, a, via a 4-local Lindbladian, as in Fig. 4.1. We dene this Linbladian by a single Lindblad (jump) operator which acts on these 4 42 CHAPTER 4. 4.1. COMPUTING BOOLEAN FORMULAS BY DISSIPATION Figure 4.1: Three input qubits (yellow, top), coupled dissipatively to an additional qubit (blue, bottom), via a 4-local Lindbladian (represented by the four solid lines, and the solid dot in the center). 
qubits as
$$L = \Pi_{\neg} \otimes \sigma^-_a, \qquad \Pi_{\neg} := |\neg l_1, \neg l_2, \neg l_3\rangle\langle \neg l_1, \neg l_2, \neg l_3|, \qquad (4.1)$$
where the projector $\Pi_{\neg}$ projects out any state not of the form $|l_1 = 0, l_2 = 0, l_3 = 0\rangle$. The second term, $\sigma^-_a$, in the tensor product of $L$ acts on the additional qubit $a$ ($\sigma^-_a = |0\rangle\langle 1|$ is defined in the eigen-basis of $\sigma_z = |1\rangle\langle 1| - |0\rangle\langle 0|$). The notation should be interpreted as follows. If, for example, $C = x_i \vee \neg x_j \vee x_k$, the projector will be given by $\Pi_{\neg} = |0,1,0\rangle\langle 0,1,0|$, acting on qubits $i,j,k$. The initial state of the additional qubit is assumed to be $|a\rangle = |1\rangle$. We make the following claim.
Claim 4.1. Let $\mathcal{E}_t := e^{t\mathcal{L}}$ be the evolution operator, under the Lindbladian $\mathcal{L}$ defined by the single Lindblad operator Eq. (4.1). Then the evolution of a matrix element $|x\rangle\langle y| \otimes |1\rangle\langle 1|$, where $x,y \in \mathcal{X}$, and the second term in the tensor product refers to the additional qubit $a$, is
$$\mathcal{E}_t\big(|x\rangle\langle y| \otimes |1\rangle\langle 1|\big) = |x\rangle\langle y| \otimes \begin{cases} e^{-t}|1\rangle\langle 1| + (1 - e^{-t})|c\rangle\langle c| & \text{if } C(x) = c = C(y) \\ e^{-t/2}|1\rangle\langle 1| & \text{if } C(x) \neq C(y). \end{cases} \qquad (4.2)$$
Note, if $C(x) = 1 = C(y)$, there is strictly no evolution, since in this case, by construction, $L|x,1\rangle = L|y,1\rangle = 0$.
Proof. Consider a disjunction of three literals, $C = l_1 \vee l_2 \vee l_3$, where $l_j$ is either $x_{i_j}$ or $\neg x_{i_j}$, for some $i_j \in [n]$. That is, $C(x)$, for $x \in \mathcal{X}$, is fully determined by the (distinct) bits $i_j$, for $j = 1,2,3$, in $x$. Let $\mathcal{L}$ be a Lindbladian, $\mathcal{L}\rho = L\rho L^\dagger - \frac{1}{2}\{L^\dagger L, \rho\}$, with $L$ given by Eq. (4.1) acting on (distinct) qubits $i_1, i_2, i_3$, and an additional qubit. Let $x,y \in \mathcal{X}$. Then
$$\mathcal{L}\big(|x\rangle\langle y| \otimes |1\rangle\langle 1|\big) = |x\rangle\langle y| \otimes \begin{cases} |0\rangle\langle 0| - |1\rangle\langle 1| & \text{if } C(x) = 0 = C(y) \\ -\tfrac{1}{2}|1\rangle\langle 1| & \text{if } C(x) \neq C(y) \\ 0 & \text{if } C(x) = 1 = C(y). \end{cases} \qquad (4.3)$$
The conditions on the RHS of the above all come directly from the form of $L$:
i) If $C(x) = 0 = C(y)$, then $x_{i_j} = \neg l_j = y_{i_j}$, for $j = 1,2,3$. Hence $L|x,1\rangle = |x,0\rangle$ and $L|y,1\rangle = |y,0\rangle$. Similarly, $L^\dagger L|x,1\rangle = |x,1\rangle$ and $L^\dagger L|y,1\rangle = |y,1\rangle$.
ii) If $C(x) \neq C(y)$, either $L|x,1\rangle = 0$ and $L^\dagger L|y,1\rangle = |y,1\rangle$, or $L^\dagger L|x,1\rangle = |x,1\rangle$ and $L|y,1\rangle = 0$.
iii) If $C(x) = 1 = C(y)$, then $\exists j$ such that $x_{i_j} = l_j$, and similarly for $y$. Hence $L|x,1\rangle = 0 = L|y,1\rangle$.
Moreover, since
$$\mathcal{L}\big(|x\rangle\langle y| \otimes |0\rangle\langle 0|\big) = 0, \qquad (4.4)$$
the form of Eq. (4.2) is clear.
We provide three additional comments on this result:
1) Upon measuring the additional qubit for input state $|x\rangle$ ($x \in \mathcal{X}$), in order to correctly classify $x$ according to $C(x)$ with probability $1 - \epsilon$, one should evolve the dissipative network for time $t > \log\frac{1}{\epsilon}$. That is, in order to guarantee
$$\langle C(x)|\, \mathrm{Tr}_X\, \mathcal{E}_t(\rho_x)\, |C(x)\rangle \geq 1 - \epsilon, \qquad (4.5)$$
one requires $t > \log\frac{1}{\epsilon}$, where $\rho_x := |x\rangle\langle x| \otimes |1\rangle\langle 1|$, and $\mathrm{Tr}_X$ is the partial trace, tracing out $\mathcal{H}_X$.
2) Coherences $|x\rangle\langle y|$ ($x \neq y$) are preserved, unless $C(x) \neq C(y)$.
3) If one wishes to compute a clause of just one or two literals (instead of three), the construction and general result are exactly the same, except that the projector $\Pi_{\neg}$ of the Lindblad operator Eq. (4.1) acts on the appropriate one or two qubits respectively.
Point 2 in particular implies the existence of decoherence-free subspaces, defined according to
$$\mathrm{DFS}_c := \mathrm{span}\{|x\rangle : C(x) = c\}, \qquad (4.6)$$
for $c = 0,1$. That is, for $|\psi\rangle \in \mathrm{DFS}_c$, one has $\mathrm{Tr}_a[\mathcal{E}_t(|\psi\rangle\langle\psi| \otimes |1\rangle\langle 1|)] = |\psi\rangle\langle\psi|$, where the partial trace $\mathrm{Tr}_a$ traces out the additional qubit $a$. Moreover, these decoherence-free subspaces define a natural way in which to classify quantum states, according to $C(\psi) = c$ iff $|\psi\rangle \in \mathrm{DFS}_c$. We will provide a more rigorous definition below in Sect. 4.1.2.
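To make Claim 4.1 concrete, the following sketch builds the jump operator of Eq. (4.1) for the example clause $C = x_1 \vee \neg x_2 \vee x_3$, verifies the case analysis used in the proof on all eight computational basis states, and reproduces the readout probability $1 - e^{-t}$ behind Eq. (4.5). It is an illustration only: the clause, the evolution time, and the assumption that the clause acts on the three input qubits in order are choices made here for simplicity.

```python
import numpy as np
from scipy.linalg import expm
from itertools import product

# Clause C = x1 OR (NOT x2) OR x3, listed as (input-qubit index, negated?) pairs;
# for simplicity the clause acts on the three input qubits in order.
clause = [(0, False), (1, True), (2, False)]

def C_of(x):                                     # classical evaluation of the clause
    return int(any(x[i] != neg for (i, neg) in clause))

ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
sm = np.outer(ket0, ket1)                        # sigma^- = |0><1| on the auxiliary qubit

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Projector onto the unique unsatisfying assignment (all literals false), Eq. (4.1)
proj = [np.outer(ket1, ket1) if neg else np.outer(ket0, ket0) for (_, neg) in clause]
L_jump = kron(*proj, sm)                         # Pi_not (x) sigma^-_a

def basis(x, a):                                 # |x1 x2 x3> (x) |a>
    v = np.array([1.0 + 0j])
    for b in list(x) + [a]:
        v = np.kron(v, ket1 if b else ket0)
    return v

# Case analysis in the proof of Claim 4.1: L|x,1> = |x,0> if C(x)=0, and 0 if C(x)=1
for x in product([0, 1], repeat=3):
    out = L_jump @ basis(x, 1)
    assert np.allclose(out, basis(x, 0) if C_of(x) == 0 else 0 * out)

# Readout statistics, Eqs. (4.2)/(4.5): after time t the auxiliary qubit holds C(x)
# with probability 1 - e^{-t} (only unsatisfied clauses need to relax)
I = np.eye(16, dtype=complex)
LdL = L_jump.conj().T @ L_jump
Lsup = np.kron(L_jump.conj(), L_jump) - 0.5 * np.kron(I, LdL) - 0.5 * np.kron(LdL.T, I)
t = 3.0
for x in [(0, 1, 0), (1, 1, 0)]:                 # a C(x)=0 and a C(x)=1 example
    v = basis(x, 1)
    rho_t = (expm(t * Lsup) @ np.outer(v, v.conj()).reshape(-1, 1, order='F')
             ).reshape(16, 16, order='F')
    p = (basis(x, C_of(x)).conj() @ rho_t @ basis(x, C_of(x))).real
    print(x, C_of(x), round(p, 4))               # ~0.9502 (= 1 - e^{-3}) and ~1.0
```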
One can also now see how the Hilbert space is partitioned according to the mapE t when given an arbitrary quantum input 2 L(H X ), the xed point (steady state) of such dynamics being 1 = X c2f0;1g c c jcihcj (4.7) where c := P x:C(x)=c jxihxj is a projector onto DFS c (note P c2f0;1g c =I). That is, the Hilbert spaceH x contains two `coherent sectors' (DFSs) dened by C. States belonging to these sectors are preserved under mapE t , and coherences between the sectors decay away exponentially as e t=2 . We lastly comment, that in the innite time limit, upon tracing out the additional qubit, the state of the systemH X , is the same as under a purely dephasing Lindbladian dened by non-local jump operator L = P x C(x)jxihxj. 4.1.2 Dissipative evaluation of arbitrary 3-CNF Consider now the more general case where one wishes to evaluate a 3-CNF formula con- sisting of N clauses C =^ N i=1 C i . We show how to construct a dissipative network to achieve this, which uses N additional qubits, each of which is coupled to at most 3 of the input qubits, and evolves in a similar manner as Eq. (4.2). The full Hilbert space, of the `input' qubits, and `auxiliary' qubits is therefore of the formH =H X H a , whereH a is of dimension 2 N . We provide an illustration of this set-up in Fig. 4.2. The structure is now much richer, and we will see the Hilbert space is actually parti- tioned into (at most) 2 N DFSs, according to the output of each of the N clauses. The full Lindbladian acting on the system ofn+N qubits is now given byL = P N i=1 L i , whereL i is a Lindblad generator dened by a single jump operator, of the form Eq. (4.1), which acts on the qubits dened by the i-th clause in C. Claim 4.2. ForE t := e tL , withL dened as above, the innite time evolution of matrix elementjxihyj j1ih1j, with x;y2X , is given by lim t!1 E t jxihyj j1ih1j N = jxihyj N N i=1 jc i ihc i j if8iC i (x) =c i =C i (y) 0 if9i s.t. C i (x)6=C i (y): (4.8) Moreover, under a nite time evolution of input statejxi (x2X ), upon measuring the N additional qubits, to successfully obtain C i (x);8i with probability 1, it is sucient to evolve the network for time tO(log N ). Proof. Given a 3-CNF, C =^ N i=1 C i , we construct a 4-local LindbladianL = P N i=1 L i , whereL i is dened for each clause, with single jump operator of the form Eq. (4.1). Here, L i acts on the three qubits dened by clause C i , and the ith additional qubit. 45 4.1. COMPUTING BOOLEAN FORMULAS BY DISSIPATION CHAPTER 4. The full evolution is in fact easy to calculate exactly, since the the individual Lindbla- dians,L i all commute with each other. Moreover, as can be seen from Eq. (4.2), the action of a singleL i only changes the state of the ith additional qubit. These two facts and Claim 4.1 imply E t jxihyj j1ih1j N = N Y i=1 E (i) t jxihyj j1ih1j N =jxihyj N O i=1 e t j1ih1j + (1e t )jc i ihc i j if C i (x) =c i =C i (y) e t=2 j1ih1j if C i (x)6=C i (y) (4.9) whereE t :=e t P N i=1 L i , andE (i) t :=e tL i . Taking t!1 gives precisely Eq. (4.8). We comment on errors associated with nite-time evolution, t<1. In particular, for input of the formjxihxj (x2X ), the probability of correctly obtaining via a projective measurement outcome C i (x) for the i-th clause is 1e t . This is precisely the same calculation as in Claim 4.1 . Since theseN clauses are all independent, in order to correctly evaluate C(x) =^ N i=1 C i (x) with probability 1, one requires (1e t ) N > 1: (4.10) Noting that e > 1, we can instead bound (1e t ) N > e . 
Using the identity $-\log(1-x) < \frac{x}{1-x}$ with $x = e^{-t}$, this is implied by $\frac{N e^{-t}}{1 - e^{-t}} \leq \epsilon$, or equivalently $e^{t} \geq \frac{N}{\epsilon} + 1$. For $N/\epsilon > 1$, it is enough to set $e^{t} \geq \frac{2N}{\epsilon}$, and hence $t \geq \log 2 + \log\frac{N}{\epsilon}$, which for convenience we write $t > O(\log\frac{N}{\epsilon})$. If the strength of the noise is $\min_i \|\mathcal{L}_i\| = \gamma$, the bound becomes $t > O(\gamma^{-1}\log\frac{N}{\epsilon})$.

One can see from Eq. (4.9) that this construction completely generalizes the results of the previous subsection (i.e. where $N = 1$), and therefore provides a richer structure than previously described. In particular, there are now (at most) $2^N$ decoherence-free subspaces, defined according to the binary representation
$$ \vec{C}(x) := (C_1(x), \ldots, C_N(x)), \qquad (4.11) $$
which uniquely determines the DFS to which $|x\rangle$ ($x \in \mathcal{X}$) belongs. In particular, the generalization of (4.6) is
$$ \mathrm{DFS}_{\vec{C}} := \mathrm{span}\big\{|x\rangle : \vec{C}(x) = \vec{C}\big\}. \qquad (4.12) $$
If indeed $\vec{C}(x) = \vec{C}(y)$, then coherences of the form $|x\rangle\langle y|$ will be preserved under the time evolution of the dissipative quantum network, and in particular for $|\psi\rangle \in \mathrm{DFS}_{\vec{C}}$,
$$ \mathrm{Tr}_a\big[\mathcal{E}_t\big(|\psi\rangle\langle\psi| \otimes |1\rangle\langle 1|^{\otimes N}\big)\big] = |\psi\rangle\langle\psi|, \qquad (4.13) $$
where $\mathrm{Tr}_a$ traces out the $N$ additional qubits.

Figure 4.2: Illustration of a dissipative computational network, capable of computing a 3-CNF with $N$ clauses, on $n$ variables. The $x_i$ represent input qubits, and the $a_i$ are output qubits (where the $i$-th clause is evaluated). In this particular example, we show explicitly the clauses $C_1 = l_1 \vee l_2 \vee l_k$ and $C_N = l_k \vee l_j \vee l_n$, where $l_i$ is either $x_i$ or $\neg x_i$.

Similarly, for an arbitrary input $\rho \in \mathcal{L}(\mathcal{H}_X)$, the infinite time evolved state is of the form
$$ \rho_\infty = \sum_{\vec{C} \in \{0,1\}^N} \Pi_{\vec{C}}\, \rho\, \Pi_{\vec{C}} \otimes |\vec{C}\rangle\langle\vec{C}|, \qquad \Pi_{\vec{C}} := \sum_{x \in \mathcal{X} :\, \vec{C}(x) = \vec{C}} |x\rangle\langle x|, \qquad (4.14) $$
where $\Pi_{\vec{C}}^2 = \Pi_{\vec{C}}$ are projectors ($\sum_{\vec{C} \in \{0,1\}^N} \Pi_{\vec{C}} = I$) over $\mathcal{H}_X$.

From Eq. (4.14) it is apparent that, upon evolving an initial state $|\psi\rangle \in \mathrm{DFS}_{\vec{C}}$ (i.e. where $\Pi_{\vec{C}}|\psi\rangle = |\psi\rangle$), one can determine $\vec{C}$ without directly measuring or disturbing the sub-system $\mathcal{H}_X$ itself. That is, one can learn $\vec{C}$ passively, whilst still retaining the state $|\psi\rangle$. In this case one can classify a quantum state $|\psi\rangle \in \mathrm{DFS}_{\vec{C}}$ according to $\vec{C}(|\psi\rangle) = \vec{C}$. The definition is said to be `consistent' in the sense that $\vec{C}(|x\rangle) = \vec{C}(x)$, for $x \in \mathcal{X}$. As such we will also write $\vec{C}(|\psi\rangle) = \vec{C}(\psi)$. One can generalize this notion as follows:

Definition 4.2. Let $|\psi\rangle \in \mathcal{H}_X$. Then the `DFS-classification' of state $|\psi\rangle$ is given by a function $\tilde{C} : \mathcal{H}_X \to [0,1]^N$, defined by
$$ \tilde{C}(|\psi\rangle) \equiv \tilde{C}(\psi) := \sum_{\vec{C} \in \{0,1\}^N} \big\| \Pi_{\vec{C}} |\psi\rangle \big\|^2\, \vec{C}, \qquad (4.15) $$
where $\Pi_{\vec{C}}$ is given in Eq. (4.14).

This motivates the straightforward observation:

Lemma 4.1. Let $|\psi\rangle \in \mathcal{H}_X$. Then $\tilde{C}(\psi) = \vec{C} \in \{0,1\}^N$ iff $|\psi\rangle \in \mathrm{DFS}_{\vec{C}}$.

The interpretation of Def. 4.2 and Lemma 4.1 should be clear: each DFS in the Hilbert space corresponds to a unique vertex of the hypercube $[0,1]^N$. Non-vertex points correspond to states $|\psi\rangle$ which are in general superpositions between various DFSs, which, according to Eq. (4.14), are the states that become mixed during the time evolution of the associated dissipative quantum circuit. In this way, Def. 4.2 completely generalizes Def. 4.1; $x, y \in \mathcal{X}$ belong to the same partition of $\mathcal{X}$ according to Def. 4.1, iff $\tilde{C}(x) = \tilde{C}(y)$ in Def. 4.2.

One can now also lift the definition of the Boolean function $C$ itself over classical bit strings to a function over the Hilbert space:

Definition 4.3. Let $|\psi\rangle \in \mathcal{H}_X$. The function $\hat{C} : \mathcal{H}_X \to [0,1]$ is defined by
$$ \hat{C}(|\psi\rangle) \equiv \hat{C}(\psi) := \big\| \Pi_{\vec{e}}\, |\psi\rangle \big\|^2, \qquad (4.16) $$
where $\vec{e} := (1, \ldots, 1)$.

Def. 4.3 is also fully consistent with the definition of $C$, and therefore generalizes this Boolean function. In particular, for $x \in \mathcal{X}$, $\hat{C}(x) = C(x)$.
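Definitions 4.2 and 4.3 can be evaluated classically for any state vector, since the projectors $\Pi_{\vec C}$ are diagonal in the computational basis. The following short sketch (an illustration added here, not taken from the thesis) computes $\tilde C(\psi)$ and $\hat C(\psi)$ for a CNF encoded as tuples of signed variable indices; no dissipative dynamics is simulated.

```python
import numpy as np

def clause_value(clause, bits):
    """Evaluate an OR-clause given as signed 1-based indices, e.g. (1, -2, 3) = x1 OR NOT x2 OR x3."""
    return int(any(bits[abs(l) - 1] == (1 if l > 0 else 0) for l in clause))

def clause_vector(cnf, x, n):
    bits = [(x >> (n - 1 - k)) & 1 for k in range(n)]        # big-endian bit order
    return np.array([clause_value(c, bits) for c in cnf])

def dfs_classification(psi, cnf, n):
    """~C(psi) of Def. 4.2: probability-weighted average of clause-outcome vectors."""
    probs = np.abs(psi) ** 2
    return sum(p * clause_vector(cnf, x, n) for x, p in enumerate(probs))

def c_hat(psi, cnf, n):
    """C^(psi) of Def. 4.3: weight of psi inside DFS_(1,...,1)."""
    probs = np.abs(psi) ** 2
    return sum(p for x, p in enumerate(probs) if clause_vector(cnf, x, n).all())

# Example: C = (x1 OR NOT x2 OR x3) AND (NOT x1 OR x2 OR x3) on n = 3 qubits
cnf = [(1, -2, 3), (-1, 2, 3)]
psi = np.zeros(8); psi[[0b111, 0b001]] = 1 / np.sqrt(2)      # (|111> + |001>)/sqrt(2)
print(dfs_classification(psi, cnf, 3), c_hat(psi, cnf, 3))   # both basis states lie in DFS_(1,1)
```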
In this section, we have shown how one can lift classical notions pertaining to data classication in a natural manner to the quantum case. Moreover, we provided a practical construction for implementing this physically, dening a local dissipative computational network, which naturally evaluates such classiers. As mentioned at the start of this section, this construction also holds in the more general case where C(x)2f0; 1g m . We will now provide some examples showing the potential use of this framework, pri- marily in the area of quantum machine learning. We will also see this provides a novel manner in which to prepare quantum states. 4.2 Applications 4.2.1 A dissipative quantum data classier A common classical machine learning task is, given labeled samples (x;f(x)) wherex2X , and f :X!f0; 1g m , to build a classier C :X!f0; 1g m , such that given a new sample, y, that C(y) =f(y) with high probability. There are many approaches which attempt to solve this problem classically, including simple linear regression models, Bayesian networks and articial neural networks [106]. The task is more complicated in the quantum case, and as a result there are much fewer techniques available and in fact no general consensus on how to build such classication networks for quantum data [102]. 48 CHAPTER 4. 4.2. APPLICATIONS We will show by example how using our general construction outlined above, one can build a trainable quantum network, which can be used to classify quantum data. At a high level, the prescription of using our scheme to learn quantum data consists of four basic steps: i) Initialize network conguration: dene a system of 4-local Lindbladians which each act on up to three input qubits, and one unique auxiliary qubit, as described by Eq. (4.1). ii) Evolve the network: Given an input quantum state, run for time O(log N ), whereN is the number of auxiliary qubits, and the error associated with nite time evolution as in Claim 4.2. iii) Measurement: Perform post processing of the quantum data, e.g. by performing a POVM. iv) Update: Based on the measurement result in step iii), update the network congura- tion and return to step ii). If no update is required and the network has converged, subsequent evaluations of the network will correctly classify new quantum data (to error ). We demonstrate these general ideas by a simple and well known classical example (see e.g. Ref. [105]), where a priori the space of possible functions from which one is learning is of exponential size 2 2n , but where the problem class can be learned eciently, and moreover will require at most 2n additional qubits in our construction. Note that in the example below, the update procedure we use in step iv) is entirely classical in nature. An interesting question is whether one can gain performance advantages using an algorithm which is inherently quantum. 4.2.1.1 Quantum machine learning the class of conjunctions Consider the task of learning the function f :X !f0; 1g where f is guaranteed to be a pure conjunction. That is, f is of the form f =f 1 ^^f k , where the clauses f i contain just a single literal l i that are of the formx j or:x j forj2 [n]. Let us also write for f the corresponding vector of clauses f = (f 1 ;:::;f k ). Then f(x) = (f 1 (x);:::;f k (x)) describes the evaluation of the k clauses on input x2X . This is in fact an easy problem, and the classical algorithm to solve it is to start with the hypothesis C = x 1 ^:x 1 ^^x n ^:x n (i.e. 
a conjunction of all possible literals), and whenever a positive labeled sample $(y, 1)$ is observed, for all $y_i = 1$ remove the literal $\neg x_i$ from $C$, and similarly for all $y_i = 0$ remove the literal $x_i$ from $C$.

We will demonstrate the above in a dissipative quantum network, by utilizing $2n$ additional qubits, and therefore $3n$ qubits in total. The initial CNF defining the network is as in the previous paragraph, from which one constructs the 2-local Lindbladian $\mathcal{L} = \sum_{i=1}^{2n} \mathcal{L}_i$, where $\mathcal{L}_i$ acts on input qubit $\lfloor i/2 \rfloor$ and on the $i$-th auxiliary qubit. These $\mathcal{L}_i$ are defined in a similar manner as in Eq. (4.1), instead now with just 2 qubits.

Let us first consider the case where labeled states are promised to be of the form $(|x\rangle, f(x))$, where $x \in \mathcal{X}$. One evolves the network under $\mathcal{L}$, and measures the $2n$ output qubits, obtaining $C_i(x)$, where $i$ labels each auxiliary qubit. We will assume that the network is evolved for a sufficiently long time at each step so that the probability of incorrectly computing $C_i(x)$ is negligible. Given a positive sample, $f(y) = 1$, one can delete any $\mathcal{L}_i$ from the network which results in the $i$-th auxiliary qubit measuring 0, since this would imply $C(y) = 0$. The network is then re-set and run again, possibly now with fewer additional qubits. Repeating this process guarantees the convergence of $C$ to $f$. The total number of times one must update the network in this manner is upper bounded by $2n$. We show an example of the network training process in Fig. 4.3.

It is at this point interesting to note that one can perform the exact same algorithm if one receives labeled quantum states $(|\psi\rangle, f(\psi))$, where $|\psi\rangle$ is guaranteed to be a superposition of classical states all with the same evaluation on the clauses of $f$. In this case, following Def. 4.3, $f(\psi) \in \{0,1\}$, and the input state is of the general form
$$ |\psi\rangle = \sum_{x :\, \vec{f}(x) = \vec{f}'} a_x |x\rangle, \qquad (4.17) $$
for some $\vec{f}' \in \{0,1\}^k$ such that $f(\psi) = f'_1 \wedge \cdots \wedge f'_k$, and $a_x$ arbitrary normalized complex amplitudes.

Measuring the auxiliary qubits performs a projection $\Pi_{\vec{C}}$ on $|\psi\rangle$ (as in Eq. (4.14)), where $\vec{C} = (C_1, \ldots)$ is defined by the current state of the dissipative network. If $C(\psi) = C_1(\psi) \wedge \cdots$ disagrees with $f(\psi)$, one updates the network as above, by deleting the conflicting $\mathcal{L}_i$ which define $C$. Once the network is fully trained, given an unlabeled state $|\psi\rangle$ as above, one can evaluate $f(\psi)$ without directly measuring $|\psi\rangle$ itself, i.e., one only needs to observe the auxiliary qubits. Moreover, since by assumption an input state $|\psi\rangle \in \mathrm{DFS}_{\vec{f}'}$ for some $\vec{f}'$, and therefore $\Pi_{\vec{f}'}|\psi\rangle = |\psi\rangle$, the initial state of the input qubits is the same as the final state; that is, one can passively learn $f(\psi)$, whilst still retaining the quantum state $|\psi\rangle$. One can therefore learn $f(\psi)$ and still use $|\psi\rangle$ for subsequent computations. This is quite a unique occurrence, since typically, to obtain any information about a quantum state, at least some destructive measurement must take place.
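The update rule just described is entirely classical, so its convergence can be illustrated without simulating any Lindblad dynamics. The sketch below is added here for illustration only: the hidden conjunction, the sample count, and the assumption of noiseless clause read-out are all hypothetical choices. It mimics step iv): every positive sample deletes each single-literal clause (each $\mathcal{L}_i$) whose auxiliary qubit would have reported 0.

```python
import numpy as np
rng = np.random.default_rng(0)

n = 5
target = {+1, -3}                    # hidden conjunction f = x1 AND (NOT x3), literals as signed indices

def literal_true(lit, x):
    return x[abs(lit) - 1] == (1 if lit > 0 else 0)

def f(x):                            # label of a classical sample
    return int(all(literal_true(l, x) for l in target))

# Initial hypothesis: all 2n single-literal clauses (one auxiliary qubit / Lindbladian each)
hypothesis = {l for i in range(1, n + 1) for l in (+i, -i)}

for _ in range(200):                 # training samples
    x = rng.integers(0, 2, n)
    if f(x) == 1:                    # positive sample: keep only clauses whose auxiliary
        hypothesis = {l for l in hypothesis if literal_true(l, x)}   # qubit would report 1

print(sorted(hypothesis))            # typically converges to the target literals, here [-3, 1]
```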
Figure 4.3: Example of a network learning the conjunction $p = x_1 \wedge \neg x_3$, for $n = 5$. The initial network (top) is configured to compute $x_1 \wedge \neg x_1 \wedge \cdots \wedge x_n \wedge \neg x_n$, as described by the learning algorithm of the main text. The top row of auxiliary qubits (blue) compute the literals $x_i$, and the bottom row the negations $\neg x_i$. After training for a sufficiently long time so that the network has converged, the final network is given by the bottom diagram, which now has just two auxiliary qubits (blue), corresponding to the computation of $x_1 \wedge \neg x_3$. The Hilbert space therefore has 4 associated DFSs. Input (data) qubits are shown in yellow.

4.2.2 Probabilistic preparation of quantum states

By Eq. (4.14), upon measuring the state $|\vec{C}\rangle$ of the additional qubits, the resulting time-evolved input state is, with probability 1, $\Pi_{\vec{C}}\,\rho\,\Pi_{\vec{C}}/\mathrm{Tr}(\Pi_{\vec{C}}\rho)$, where $\rho$ was the initial state. One can use this to prepare quantum states. In particular, to prepare a state $|\psi\rangle$, one requires a suitable, easy to prepare input state $|\psi_0\rangle$, and an appropriate CNF formula such that, for some $\vec{C} \in \{0,1\}^N$, one has $\Pi_{\vec{C}}|\psi_0\rangle \propto |\psi\rangle$. Then, upon measuring $\vec{C}$ in the additional qubits, which occurs with probability $\langle\psi_0|\Pi_{\vec{C}}|\psi_0\rangle$, one has prepared the state $|\psi\rangle$. One could also prepare an ensemble of states in this manner.

4.2.2.1 Preparation of entangled states

Consider for simplicity two qubits, which we wish to entangle, that are initially prepared in the product state $|\psi_0\rangle = |+\rangle \otimes |+\rangle$. Here, in the $z$-eigenbasis, $|+\rangle := \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$. We wish to prepare the Bell state $|\Phi^+\rangle := \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$. This procedure will be effectively the same as performing a projective measurement on $|\psi_0\rangle$, although we will in fact not measure the state $|\psi_0\rangle$ itself. This example is interesting therefore for two reasons: 1) dissipation is used directly to create entanglement, and 2) it provides a novel manner in which to actually perform a measurement of a quantum system.

To achieve this task, we take the 2-CNF, $C = (x_1 \vee \neg x_2) \wedge (\neg x_1 \vee x_2) =: C_1 \wedge C_2$, with associated projectors
$$ \Pi_{(0,0)} = 0, \quad \Pi_{(0,1)} = |01\rangle\langle 01|, \quad \Pi_{(1,0)} = |10\rangle\langle 10|, \quad \Pi_{(1,1)} = |00\rangle\langle 00| + |11\rangle\langle 11|. \qquad (4.18) $$
From this, utilizing two auxiliary qubits that are initialized in the $|1\rangle$ state, one can construct the following 3-local Lindbladian, $\mathcal{L} = \mathcal{L}_1 + \mathcal{L}_2$. These Lindbladians are defined by the Lindblad operators $L_1 = |01\rangle\langle 01| \otimes \sigma_1^-$ and $L_2 = |10\rangle\langle 10| \otimes \sigma_2^-$, where the second term in the tensor product acts on a single auxiliary qubit, labeled by 1, 2 respectively. See the inset of Fig. 4.4 for a schematic of the network.

This evolution will partition the space into three DFSs, and in particular, the infinite time evolved state is
$$ \rho_\infty = \sum_{c_1, c_2 \in \{0,1\}} P_{c_1,c_2}\, |\psi_{c_1,c_2}\rangle\langle\psi_{c_1,c_2}| \otimes |c_1,c_2\rangle\langle c_1,c_2|, \qquad (4.19) $$
with the probability of being in each sector after measurement of the two auxiliary qubits $P_{\vec{C}} = \langle\psi_0|\Pi_{\vec{C}}|\psi_0\rangle$:
$$ P_{0,0} = 0, \qquad P_{0,1} = \tfrac{1}{4} = P_{1,0}, \qquad P_{1,1} = \tfrac{1}{2}. \qquad (4.20) $$
The states in each sector are given by
$$ |\psi_{0,1}\rangle = \frac{\Pi_{(0,1)}|\psi_0\rangle}{\sqrt{P_{0,1}}} = |01\rangle, \qquad |\psi_{1,0}\rangle = \frac{\Pi_{(1,0)}|\psi_0\rangle}{\sqrt{P_{1,0}}} = |10\rangle, \qquad |\psi_{1,1}\rangle = \frac{\Pi_{(1,1)}|\psi_0\rangle}{\sqrt{P_{1,1}}} = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle). \qquad (4.21) $$
Note, the $(c_1, c_2) = (0,0)$ sector is empty, since at least one of the two clauses in $C$ must be satisfied by construction. That is, in this case, there is no such state $|\psi_{0,0}\rangle$.

After evolving the network for a sufficiently long time, one can post-select on the additional qubits with success probability $\tfrac{1}{2}$ to pick out the Bell state:
$$ \rho_{1,1}(t) := \mathrm{Tr}_a\big[\tilde{\Pi}_{1,1}\, \rho(t)\, \tilde{\Pi}_{1,1}\big] \;\xrightarrow{t\to\infty}\; |\Phi^+\rangle\langle\Phi^+| \qquad (4.22) $$
(normalized by the post-selection probability), where
$$ \rho(t) := \mathcal{E}_t\big(|+,+\rangle\langle +,+| \otimes |1,1\rangle\langle 1,1|\big), \qquad \tilde{\Pi}_{1,1} := I_4 \otimes |1,1\rangle\langle 1,1|. \qquad (4.23) $$

Figure 4.4: Distance to the maximally entangled Bell state $|\Phi^+\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$, under dissipative evolution for time $t$ (in arbitrary units). Here, $\rho_{1,1}$ is the state after post-selecting on measuring 1 in both additional qubits, Eq. (4.22). The initial state is the product state $|+,+\rangle$. The norm is the maximum singular value. Inset: the connectivity of the 2+2 qubit dissipative network.

In Fig. 4.4 we provide a numerical verification of this scheme, in which the state of the qubits, upon post-selecting on $(1,1)$ on the two additional qubits, converges exponentially quickly to the desired maximally entangled state.
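Since the post-selected outcomes are fixed entirely by the projectors of Eq. (4.18), the sector probabilities and states of Eqs. (4.20)–(4.21) can be checked with a few lines of NumPy (added here for illustration; the dissipative evolution itself is not simulated):

```python
import numpy as np

ket = {'0': np.array([1.0, 0.0]), '1': np.array([0.0, 1.0])}
def basis(s):                           # two-qubit computational basis state, e.g. '01'
    return np.kron(ket[s[0]], ket[s[1]])

plus = (ket['0'] + ket['1']) / np.sqrt(2)
psi0 = np.kron(plus, plus)              # initial product state |+,+>

# Projectors of Eq. (4.18) for C = (x1 OR NOT x2) AND (NOT x1 OR x2)
proj = {
    (0, 1): np.outer(basis('01'), basis('01')),
    (1, 0): np.outer(basis('10'), basis('10')),
    (1, 1): np.outer(basis('00'), basis('00')) + np.outer(basis('11'), basis('11')),
}

for label, P in proj.items():
    prob = psi0 @ P @ psi0              # Eq. (4.20): 1/4, 1/4, 1/2
    state = P @ psi0 / np.sqrt(prob)    # Eq. (4.21)
    print(label, round(prob, 3), np.round(state, 3))
# The (1,1) outcome, occurring with probability 1/2, leaves the Bell state (|00> + |11>)/sqrt(2).
```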
4.2.2.2 Generating a superposition of all solutions to a 3-SAT problem

3-SAT problems are NP-hard optimization problems, where one must find the solution(s) $C(x) = 1$ to a 3-CNF formula $C$.

Given a 3-SAT problem $C$ over $n$ literals, one can construct a dissipative network (of 4-local Lindbladians) with $N$ output qubits, where $N$ is the number of clauses in the 3-SAT problem. The input state to the network is initialized in the maximal superposition state, $|\psi\rangle = \frac{1}{2^{n/2}}\sum_{i=1}^{2^n}|i\rangle$. After evolving the network for time $t \sim O(\log\frac{N}{\epsilon})$, upon measurement of $|1\rangle^{\otimes N}$ of the output qubits, the resulting state of the input qubits is guaranteed (with probability $1 - \epsilon$) to be
$$ |S\rangle = \frac{1}{\sqrt{N_s}}\sum_{s=1}^{N_s}|s\rangle, \qquad (4.24) $$
where each classical bit string $s$ is such that $C(s) = 1$. Moreover, there are no other bit strings with this property. That is, $|S\rangle$ is the uniform superposition of all solutions to the 3-SAT problem. We strongly stress however that in general it is exponentially unlikely to observe such a state; we are not claiming to have an efficient algorithm to solve SAT problems.

Nevertheless, this general methodology shows potential promise in obtaining fair sampling for certain problems, which is typically challenging on current quantum optimization devices [107].

4.3 Discussion

We have shown how to evaluate 3-CNF clauses (and hence arbitrary Boolean functions) using dissipative quantum dynamics. We did this by providing a way to construct a dissipative network consisting of 4-local Lindbladians, where upon measuring a subset of the qubits in the network, one can evaluate the CNF. We also showed that errors associated with a finite time evolution can be made arbitrarily small. The CNF structure naturally partitions the Hilbert space into sectors in which coherence is preserved, i.e., decoherence-free subspaces. This provides a route to generalize classical notions of data classification, lifting the definitions to the case where the data is quantum in nature. In particular, we achieve this task by classifying quantum data according to the partition, or DFS, to which a particular state belongs, which itself is defined by the dissipative network.

This has applications in state preparation and, perhaps more interestingly, machine learning. We provided one such example, but hopefully have demonstrated the general applicability of these techniques. Indeed, one can make a connection between the work presented here and a Hopfield recurrent neural network, which we now comment on. A Hopfield network is a classical pattern recognition and classification machine learning model, where the `patterns' or `memories' with which one wishes to associate new data are local minima of a pre-defined energy function. In particular, these patterns act like fixed points of the total space $\mathcal{X}$ of a dynamical system (where the dynamics is described entirely by the energy function). We make an analogy between this classical dynamical system and the quantum dynamical systems described in this work, defined entirely by dissipative dynamics with attractive fixed points. In our model, which is morally similar, data is classified according to the DFSs of the computational network, which are analogous to the patterns of the corresponding Hopfield network.
However, not only can the dissipative networks presented in this work classify classical data, e.g. compute a function C(x)2f0; 1g for x2X , they can also classify quantum data in a consistent manner. Moreover, given a quantum statej i associated to a particular memory pattern (i.e. a DFS), one can evaluate C( ) passively, without disturbing the statej i. This work provides a well dened and consistent starting point for quantum machine learning via dissipation, of which numerous applications could benet. We highlighted simple examples for demonstrative purposes, but hope it is clear that the general protocols can be modied in a multitude of ways to be made applicable for dierent applications. It would be interesting for example to apply these techniques to learning quantum data in the PAC framework [105, 108]. 55 Chapter 5 Quantum Error Suppression via Non-Markovianity One of the biggest challenges in the experimental quantum computing community is de- signing devices which are robust against environmental noise [23, 109]. Indeed, as discussed in Chapter 2, any physical quantum system is an open quantum system, resulting from the inevitable coupling to uncontrollable degrees of freedom (i.e. the environment). The re- sulting non-unitary dynamics typically degrades the coherent characteristics of the system which one wishes to manipulate in order to perform (for example) computations. Com- bating such noise has become itself a eld of research, and has led to the development of pioneering techniques, broadly referred to as quantum error correction or error suppres- sion [30]. Many of these techniques are quite involved requiring the precise application of `control' Hamiltonians (e.g. dynamical decoupling), and often necessitating additional resources (e.g. qubits). Recently however, it has become clear that noise itself can in fact be exploited to the end of performing quantum information processing (QIP) tasks. The early work in this area focused on encoding entangled states [40] and even the output of a computation [43] in the steady state of a dissipative dynamics. Since then other results have appeared which show how one can enact simulations of quantum systems, both open and closed [41, 45, 46, 110, 111], and even perform general computations (robust to certain types of error) in the presence of strong dissipation [47]. In this chapter we describe a passive quantum error suppression scheme, whereby the time-scale upon which the system remains coherent is extended (i.e., the interaction of the system and environment is eectively reduced). We describe this as a passive scheme since one does not require the ability to precisely manipulate the system, nor are any additional qubits required. This work (originally presented in Ref. [49]), partly motivated by the recent progress in simulating non-Markov systems [112{116], introduces a reservoir engineering technique 56 CHAPTER 5. 5.1. AN INTEGRO-DIFFERENTIAL MASTER EQUATION [95, 117{122] whereby so-called generalized-Markovian dissipative processes (studied var- iously by e.g. [75, 81, 123]) can be exploited to the end of reducing the rate at which errors accumulate over a dissipative Markovian evolution. 
We will show that upon adding generalized-Markovian noise on top of an assumed background Markovian channel, the rate at which the system approaches the steady state can be reduced; that is, it will take longer for the system to relax to the steady state, and one can, for example, preserve quantum information encoded in arbitrary states for longer times.

5.1 An Integro-Differential Master Equation

We start by assuming we have some noisy `background' quantum channel which is to a good approximation described by a time-independent master equation of the Lindblad type (i.e. the channel is Markovian). We write $\dot\rho(t) = \mathcal{L}_0[\rho(t)]$, where $\mathcal{L}_0$ is a generator of Markovian dynamics$^a$. We will assume that the dimension of the Hilbert space of the system is finite.

$^a$ $\dot X := dX/dt$.

It is convenient to introduce the spectral (Jordan) decomposition of $\mathcal{L}_0$ as in Chapter 2 (Sect. 2.4.2), $\mathcal{L}_0 = \sum_i (\lambda_i P_i + D_i)$, where the $\lambda_i$ are the eigenvalues. The $P_i$ are projectors, and the $D_i$ nilpotents (with $D_i^{m_i} = 0$). Using this form we can write the evolution superoperator, $\Phi_0(t) := e^{t\mathcal{L}_0}$, and the resolvent, $R(z) := (z - \mathcal{L}_0)^{-1}$, as
$$ \Phi_0(t) = \sum_i \left( P_i + \sum_{k=1}^{m_i - 1} \frac{t^k D_i^k}{k!} \right) e^{\lambda_i t} \qquad (5.1) $$
and
$$ R(z) = \sum_i \left( \frac{P_i}{z - \lambda_i} + \sum_{k=1}^{m_i - 1} \frac{D_i^k}{(z - \lambda_i)^{k+1}} \right). \qquad (5.2) $$
From Eq. (5.1) the decay rates of the channel $\Phi_0(t)$ are determined by the real part of the eigenvalues; in particular, $1/\tau_{0,i} := |\mathrm{Re}\,\lambda_i|$ defines the decay time in the $i$-th block. Our goal is to engineer a channel as close as possible to the identity channel (given the above fixed background). As time increases, a channel of the form of $\Phi_0(t)$ departs (in a possibly non-monotonic way) from the ideal channel at $t = 0$. In this sense we see that it is the short-time dynamics that are important for our purposes. In other words, the behavior we are interested in is characterized by the shortest time scale $\tau_0 = \min_i \tau_{0,i}$. This is to be contrasted with another typical situation, where one is interested in the approach to the steady state, which is instead dictated by the longest time scale.

To quantify how much a channel departs from the ideal one, we use a fidelity based measure: given a quantum channel $\Phi$, we define the `minimum channel fidelity' $f$ as [124]
$$ f(\Phi) := \min_{\rho} F(\rho, \Phi(\rho)), \qquad (5.3) $$
where $F(\rho, \sigma) := \left(\mathrm{Tr}\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\right)^2$ is the fidelity between the states $\rho, \sigma$. This essentially tells us the worst-case performance of this channel over all states. Note, by convexity this minimization can be carried out over pure states. In general, one would like to set some minimum error threshold such that only channels satisfying $f(\Phi) \geq 1 - \delta$, for some $\delta > 0$, are tolerated. However, given some fixed background channel (e.g., as above), finding necessary and sufficient conditions to increase $f$ is a very complicated task.

On top of the Markovian, dissipative background we now add, at the master equation level, a secondary form of noise, which we refer to as `generalized-Markovian' noise. The dynamics are now given by the following master equation
$$ \dot\rho(t) = \mathcal{L}_0\rho(t) + \mathcal{L}_1 \int_0^t k(t - t')\,\rho(t')\,dt', \qquad (5.4) $$
where $\mathcal{L}_1$ is time-independent, and also of the Lindblad form. We refer to $k(t)$ as the memory kernel. For convenience we also define $\mathcal{K}$ such that $(\mathcal{K}X)(t) = \int_0^t k(t - t')X(t')\,dt'$. Purely Markovian (Lindbladian) dynamics are recovered if the kernel is of the form $k(t - t') = \delta(t - t')$. It is known (see e.g., Ref. [72]) that it is possible to find kernels such that the resulting evolution operator is not completely positive (CP). Here we require that Eq. (5.4) is such that the generated dynamics are CP for all $t \geq 0$.
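For readers who want to experiment with Eq. (5.4) directly, the following sketch integrates it numerically for a single qubit with a forward-Euler step and a trapezoidal memory sum. It is an illustration added here, not part of the thesis: the dephasing generators and the exponential kernel anticipate the example of Sect. 5.2.1, and the low-order integrator is chosen for transparency rather than accuracy.

```python
import numpy as np

# Illustrative single-qubit dephasing generators: L0[X] = gamma*(Z X Z - X), L1[X] = Z X Z - X
Z = np.diag([1.0, -1.0]).astype(complex)
gamma, B, tau_k = 1.0, 1.0, 25.0

def L0(r): return gamma * (Z @ r @ Z - r)
def L1(r): return Z @ r @ Z - r                      # coupling absorbed into the kernel strength B
def kernel(t): return B**2 * np.exp(-t / tau_k)      # exponentially decaying memory

def evolve(rho0, T, dt=0.005):
    """Forward-Euler integration of Eq. (5.4) with a trapezoid rule for the memory term."""
    steps = int(round(T / dt))
    hist = np.zeros((steps + 1, 2, 2), dtype=complex)
    hist[0] = rho0
    for n in range(steps):
        t = n * dt
        if n == 0:
            mem = np.zeros((2, 2), dtype=complex)
        else:
            w = kernel(t - dt * np.arange(n + 1))    # kernel weights k(t - t_m)
            w[0] *= 0.5; w[-1] *= 0.5                # trapezoid end-point weights
            mem = dt * np.tensordot(w, hist[:n + 1], axes=1)
        hist[n + 1] = hist[n] + dt * (L0(hist[n]) + L1(mem))
    return hist[-1]

rho_plus = 0.5 * np.ones((2, 2), dtype=complex)      # |+><+|
print(evolve(rho_plus, T=1.0).round(4))              # off-diagonals track Lambda(t) of Eq. (5.24)
```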
The examples we provide below all fulfill this important criterion. On physical grounds we also assume that $\mathcal{L}_0$ and $\mathcal{L}_1\mathcal{K}$ originate from separate processes, and therefore we require that $\mathcal{L}_1\mathcal{K}$ must also generate a genuine quantum (CP) map alone. With this constraint we are not allowed to fulfill our goal by simply taking, e.g., $\mathcal{L}_1 = -\mathcal{L}_0$, with $k(t) = \delta(t)$. As is well known, if the Lindblad operators for $\mathcal{L}_1$ are self-adjoint, a master equation of the form of Eq. (5.4) can be obtained by coupling a suitable Hamiltonian to a (classical) stochastic noise term (see for example Ref. [73]). In this approach the kernel $k(t)$ originates as the autocorrelation function of the classical, stochastic field. We provide a brief reminder of this approach below (Sect. 5.1.1).

We solve Eq. (5.4) by taking the Laplace transform:
$$ \tilde\rho(s) = \frac{1}{s - \mathcal{L}_0 - \tilde k(s)\mathcal{L}_1}\,\rho(0) =: \tilde\Phi(s)\rho(0), \qquad (5.5) $$
where
$$ \tilde f(s) = \mathcal{L}[f(t)](s) = \int_0^\infty e^{-st} f(t)\,dt \qquad (5.6) $$
is the notation for the Laplace transformation of $f$. At this point we make the important assumption that $\mathcal{L}_0$ and $\mathcal{L}_1$ have the same spectral decomposition. Note that this is in principle not a necessary requirement for the success of our scheme (as will be shown below), however it provides a useful insight into its mechanism. In this case, using Eq. (5.2), we can write
$$ \tilde\Phi(s) = \sum_i \left[ \tilde\Lambda_i(s) P_i + \sum_{k=1}^{m_i - 1} \big(\tilde\Lambda_i(s)\big)^{k+1} D_i^k \right], \qquad (5.7) $$
with
$$ \tilde\Lambda_i(s) = \frac{1}{s - \lambda_i - \tilde k(s)\mu_i}, \qquad (5.8) $$
where $\lambda_i, \mu_i$ are the eigenvalues of $\mathcal{L}_0, \mathcal{L}_1$ respectively, associated with the $i$-th eigenspace. The evolution operator is then given by $\Phi(t) = \mathcal{L}^{-1}[\tilde\Phi(s)](t)$.

Consider for example the case where $\tilde k(s) = p(s)/q(s)$ is a rational function with polynomials $p, q$. This corresponds to a large class of kernels which are (finite) linear combinations of functions of the form $t^n e^{at}$ for complex $a$ and integer $n$. In this case one can write $\tilde\Lambda_i(s) = P_i(s)/Q_i(s)$ (with no common roots between $P_i$ and $Q_i$). Note by construction we have $\deg(P_i) < \deg(Q_i)$, so one can always write $\tilde\Lambda_i(s)$ as a partial fraction decomposition
$$ \tilde\Lambda_i(s) = \sum_j \sum_{n_j=1}^{N_j^{(i)}} \frac{c_{n_j}^{(i)}}{\big(s - s_j^{(i)}\big)^{n_j}}, \qquad (5.9) $$
where the roots $s_j^{(i)}$ of $Q_i(s)$ occur with multiplicity $N_j^{(i)}$, and the $c$ are constants. Laplace transforming Eq. (5.9) back, we obtain:
$$ \Lambda_i(t) = \sum_j \left( \sum_{n_j=1}^{N_j^{(i)}} c_{n_j}^{(i)} \frac{t^{n_j - 1}}{(n_j - 1)!} \right) e^{s_j^{(i)} t}. \qquad (5.10) $$
This function, in the absence of the nilpotent terms, completely specifies the full map $\Phi(t)$. The real parts of the roots $s_j^{(i)}$ therefore determine the rate of decay of the system. These roots will depend not only on the eigenvalues $\lambda_i, \mu_i$, but also on the specific nature of the integral kernel $k$. The key observation we make is that for certain choices of $k$, the decay rate of the `combined' system can in fact be lower than that of the original `background' system.

In the $i$-th eigenspace, for $k(t) = 0$, the decay is simply of the form $\Lambda_i(t) = e^{\lambda_i t}$. We see that if we can guarantee $|\mathrm{Re}(s_j^{(i)})| < |\mathrm{Re}(\lambda_i)|$, $\forall j$, then the rate of decay associated with this subspace will have effectively been reduced. This is equivalent to $1/\tau_i < 1/\tau_{0,i}$, where $1/\tau_i := \max_j |\mathrm{Re}(s_j^{(i)})|$. This can therefore result in an increase in the minimum channel fidelity over some fixed evolution time (as will be illustrated below). We would like to remark here that if we set $k(t) = \delta(t)$, then under the same conditions as above it is not possible to reduce the decay rates $1/\tau_{0,i}$. The non-trivial form of the memory kernel $k$ is completely central to this technique.
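The root-shifting mechanism behind Eqs. (5.8)–(5.10) is easy to explore numerically for a rational kernel. The sketch below (added for illustration; the parameter values are arbitrary) takes the exponential kernel used in the next section, $\tilde k(s) = B^2/(s + 1/\tau_k)$ with the coupling absorbed into $B$, finds the roots of the resulting quadratic, and compares the combined decay rate with the background one.

```python
import numpy as np

def combined_rate(lam, tau_k, B):
    """Roots of s - lam - mu*k~(s) = 0 for k~(s) = B^2/(s + 1/tau_k), with the coupling
    absorbed into B (so the memory term contributes +B^2/(s + 1/tau_k)).
    Returns 1/tau = max_j |Re s_j|, the decay rate of the combined channel in this block."""
    # (s - lam)(s + 1/tau_k) + B^2 = 0
    roots = np.roots([1.0, 1.0 / tau_k - lam, -lam / tau_k + B**2])
    return max(abs(roots.real))

gamma = 1.0                  # background dephasing rate; the relevant eigenvalue is lam = -2*gamma
lam = -2 * gamma
tau_k = 25.0                 # kernel memory time, chosen longer than tau_0 = 1/(2*gamma)
B = 2.0
print("background rate:", abs(lam))
print("combined rate  :", combined_rate(lam, tau_k, B))   # ~ (1/tau_0 + 1/tau_k)/2 < 1/tau_0
```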
Before providing some relevant examples to illustrate this scheme, we show one potential route to experimentally generating a noise source according to Eq. (5.4). 59 5.1. AN INTEGRO-DIFFERENTIAL MASTER EQUATION CHAPTER 5. 5.1.1 Derivation of a generalized-Markovian master equation Let us consider adding a stochastic Hamiltonian, H(t) = B(t)h, on top of a background dissipative dynamics so that the time evolution is described by _ (t) =L 0 (t)i[H(t);(t)] (5.11) where h = h y is time-independent, and B(t) 2 R is a stochastic variable (we use the convention that ~ = 1). We assume the statistics governing the underlying stochastic process is such thathB(t)i = 0, andhB(t)B(t 0 )i = k(tt 0 ), where the angle brackets indicate the expectation value b . We will average out the stochastic noise to arrive at a noise-averaged description of the dynamics - we closely follow the derivation in Ref. [73]. First note that one can formally solve Eq. (5.11) as (t) =(0) +L 0 Z t 0 (t 0 )dt 0 i Z t 0 [H(t 0 );(t 0 )]dt 0 ; (5.12) which can be re-inserted into the right hand side of Eq. (5.11): _ (t) =L 0 (t)iB(t)[h;(0)]iL 0 Z t 0 B(t)[h;(t 0 )]dt 0 Z t 0 B(t)B(t 0 )[h; [h;(t 0 )]]dt 0 (5.13) If we assume that the state is suciently decorrelated from the random variables c , which can be formally justied in the weak coupling limit d [126, 127], performing the averaging as above, we get an equation for the noise-averaged density operator (we drop the angle bracket notation on ): _ (t) =L 0 (t) +L 1 Z t 0 k(tt 0 )(t 0 )dt 0 ; (5.14) whereL 1 X = 2hXhfh 2 ;Xg. We note that the termL 1 is in Lindblad form, with self-adjoint Lindblad jump operator h. Note that taking a sumH(t) = P B i (t)h i , withhB i (t)B j (t 0 )i = ij k(tt 0 ) allows one to generate a sum of such Lindbladian generators (each with self-adjoint Lindblad operators). b This type of process is known as wide-sense stationary, an example of which is `random telegraph noise' [125]. c hB(t)(t 0 )ihB(t)ih(t 0 )i, andhB(t)B(t 0 )(t 0 )ihB(t)B(t 0 )ih(t 0 )i, for tt 0 . d That is, the external eld coupling strengthjBj !0, where in the Sch odinger picture !0 is the strength of the (unperturbed) system Hamiltonian. 60 CHAPTER 5. 5.2. PROTECTING QUBITS FROM DECOHERENCE 5.2 Protecting Qubits from Decoherence 5.2.1 Dephasing noise We consider the dynamics of an N qubit generalization of the standard single qubit Pauli channel e . We mention that this is an important class of noise since it is known that quantum error correction techniques can correct against arbitrary errors given the ability to correct against such dephasing errors [30]. With this in mind, we take our background Markovian channel to be dephasing in the k-direction (where k = 1; 2; 3), via the Markovian generator L 0 [X] = (A k XA k X); (5.15) whereA k = N k , k is thek-th Pauli matrix f and > 0. The solution of this dynamics is given by the following quantum map 0 (t)[X] = (1p 0 (t))X +p 0 (t)A k XA k ; (5.16) where p 0 (t) = 1 2 (1e 2 t ) is the probability of dephasing (i.e., with probability p 0 (t), the stateX will becomeA k XA k ). The minimum delity of this channel isf 0 (t) :=f( 0 (t)) = 1 2 (1 +e t= 0 ), with the associated decay rate 1 0 = 2 . In this case, there are no eigen-nilpotents, and one can write the spectral projection as L 0 = X n n P n (5.17) where the sum is over all strings n = (n 1 ;:::;n N ), with n i 2f0; 1; 2; 3g. 
The projectors are given by (see Appendix C.1)
$$ P_{\mathbf{n}}(X) = \frac{1}{2^N}\,\mathrm{Tr}(X\sigma_{\mathbf{n}})\,\sigma_{\mathbf{n}}, \qquad (5.18) $$
with $\sigma_{\mathbf{n}} := \bigotimes_{i=1}^N \sigma_{n_i}$ (and $\sigma_0 = I$, the $2\times 2$ identity matrix), while the eigenvalues are either $0$ or $-2\gamma$. The evolution operator can therefore be written as $\Phi_0(t) = \sum_{\mathbf{n}} e^{\lambda_{\mathbf{n}} t} P_{\mathbf{n}}$. We define the projection onto the steady state of the dynamics (i.e. the infinite time limit of the evolution) as
$$ \mathcal{P}_0 := \lim_{t\to\infty} \Phi_0(t) = \sum_{\mathbf{n}:\, \lambda_{\mathbf{n}} = 0} P_{\mathbf{n}}. \qquad (5.19) $$
Note that, for all quantum states $\rho$, the corresponding state $\mathcal{P}_0\rho$ is steady in the sense that it does not evolve under $\mathcal{L}_0$; one can check (e.g. using Eq. (5.17)) that $\mathcal{L}_0\mathcal{P}_0 = 0$. We will exploit this in our scheme, as will be seen more explicitly below.

$^e$ One can think of this as a model for a classically correlated noisy channel (if an error occurs, it occurs to all qubits simultaneously). See e.g. Ref. [128] for a two-qubit version.
$^f$ We will occasionally make use of the spectral decomposition of $\sigma_3 = |1\rangle\langle 1| - |0\rangle\langle 0|$. Throughout, when we write $|0\rangle, |1\rangle$, it is in this $z$-eigenbasis.

5.2.1.1 Purely decaying correlations

To this background channel, we add generalized-Markovian noise as described above. We take $\mathcal{L}_1 \propto \mathcal{L}_0$ (i.e., equal up to a positive constant). We first take the memory kernel to be of the form $k(t - t') = B^2 e^{-|t - t'|/\tau_k} \implies \tilde k(s) = \frac{B^2}{s + 1/\tau_k}$ (note, $\tau_k > 0$). One can think of $\tau_k$ as the characteristic time over which the memory associated with the added noise persists. We will absorb the coupling constant of $\mathcal{L}_1$ into the kernel strength $B$, to avoid introducing a redundant parameter. For the eigenvalue $-2\gamma$, Eq. (5.8) gives
$$ \tilde\Lambda(s) = \frac{1}{s + 2\gamma + \frac{B^2}{s + 1/\tau_k}} = \frac{c_+}{s - s_+} + \frac{c_-}{s - s_-}, \qquad (5.20) $$
for constants given by
$$ c_\pm = \frac{1}{2}\left(1 \mp i\,\frac{\tau_k/\tau_0 - 1}{2\omega\tau_k}\right), \qquad (5.21) $$
$$ s_\pm = -\frac{1}{\tau} \pm i\omega, \qquad (5.22) $$
$$ \omega = \sqrt{B^2 - \left[\frac{1}{2}\left(\frac{1}{\tau_0} - \frac{1}{\tau_k}\right)\right]^2}. \qquad (5.23) $$
Taking the inverse Laplace transform of Eq. (5.20), we obtain
$$ \Lambda(t) = e^{-t/\tau}\,\frac{\cos(\omega t + \phi)}{\cos\phi}, \qquad (5.24) $$
where the new decay rate is $1/\tau = \frac{1}{2}(1/\tau_0 + 1/\tau_k)$ and $\phi$ satisfies $\cos\phi = 2\omega\tau_k/\sqrt{(2\omega\tau_k)^2 + (\tau_k/\tau_0 - 1)^2}$. Note that for $\omega = 0$ the solution is slightly different (see Appendix C.2 for more details). Note that if $B = 0$ (background channel alone), we recover $\Lambda(t) = e^{-t/\tau_0}$.

The solution of this dynamics is therefore
$$ \Phi(t)[X] = (1 - p(t))X + p(t)A_k X A_k, \qquad (5.25) $$
where $p(t) = \frac{1}{2}(1 - \Lambda(t))$. We show in Appendix C.3 that $0 \leq p(t) \leq 1$ for all values of the parameters and $t \geq 0$, i.e., this indeed generates a CP map. However, we will focus on the case where $\omega \in \mathbb{R}_+$, corresponding to the condition $2|B| > |1/\tau_0 - 1/\tau_k|$.

We note, importantly, that the decay rate of the new system, $1/\tau$, can in fact be less than the decay rate for the original (Markov) system alone, $1/\tau_0$. This occurs when $\tau_k > \tau_0$, so that $1/\tau < 1/\tau_0$. When this is the case, we find for certain times along the evolution that $p(t) < p_0(t)$, i.e. the probability of a dephasing error occurring is reduced. This is equivalent to an increase in the minimum channel fidelity $f$, see Fig. 5.1.
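A quick numerical check of Eqs. (5.23)–(5.25) (added here for illustration) reproduces the error-probability reduction quoted in the discussion of Figs. 5.1–5.2 below: with $\gamma = 1$, $\tau_k = 25$ and $B$ chosen so that $\omega T = 2\pi$ at $T = 1$, the dephasing probability drops from $p_0 \approx 0.43$ to $p \approx 0.32$.

```python
import numpy as np

gamma, tau_k, T = 1.0, 25.0, 1.0
tau0 = 1.0 / (2 * gamma)                                    # background decay time, 1/tau0 = 2*gamma

# choose B so that omega*T = 2*pi, as for Fig. 5.1
omega = 2 * np.pi / T
B = np.sqrt(omega**2 + 0.25 * (1 / tau0 - 1 / tau_k)**2)    # invert Eq. (5.23)

tau = 2.0 / (1 / tau0 + 1 / tau_k)                          # combined decay time of Eq. (5.24)
phi = -np.arctan((1 / tau0 - 1 / tau_k) / (2 * omega))      # equivalent form of the cos(phi) relation

def Lam(t): return np.exp(-t / tau) * np.cos(omega * t + phi) / np.cos(phi)   # Eq. (5.24)
def p(t):   return 0.5 * (1 - Lam(t))                       # dephasing probability, combined channel
def p0(t):  return 0.5 * (1 - np.exp(-t / tau0))            # dephasing probability, background channel

print(p0(T), p(T))   # ~0.43 vs ~0.32, matching the values quoted in the text
```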
t =T is xed (equivalently,p 0 is xed), we have shown that upon the addition of this generalized-Markovian noise to the system, the new channel will have error probability p = 1 2 (1 (T )). If we pick k > 0 , i.e., the kernel decay time is longer than the bare channel decay time, upon choosing B such that !T = 2, then we have p = 1 2 (1e T= )< 1 2 (1e T= 0 ) =p 0 (5.26) or equivalently, f > f 0 . Note that since our map is CP for all parameter choices, we can always nd such a choice for B. In Fig. 5.1 we give an explicit example of this. We plot the delity over the dynamical evolution of the background channel, and also the combined channel. We see that along certain points of the evolution, the delity of the combined system surpasses that of the background [in the gure f(T ) 0:1]. Fig. 5.1 also shows that the minimum channel delity of the background channel (blue) at time t = T , is (approximately) equal to the delity of the combined channel (red) at timet = 2T . In other words adding a noise term with a non-trivial kernel can be benecial for performing quantum information processing tasks, for example, by allowing quantum states to be stored as a memory for a longer time (in this case, for nearly twice as long, whilst achieving the same minimum channel delity). In Fig. 5.2, we plot the dierence in delity F between the background channel (x- dephasing), and the channel assisted by the generalized-Markovian noise, for all single qubit pure states, at a xed time t = T (that is, we map F at some xed instance in time of the evolution, where each initial state is a single qubit pure state, as dened by the coordinates in the gure). This shows that the delity always increases apart from the steady statesj+i;ji of the dynamics (which have maximal delity of 1 by denition, under both channels). For our choice of parameters, the error probability decreases from p 0 43% top 32% (i.e. the probability of an error occurring over the channel is reduced by more than 10%). We also consider the eect of sending a single qubit of an entangled pair down the channel Eq. (5.25). We quantify the success of the channel at preserving the entanglement by computing the concurrence [129, 130],C() = max(0; 1 2 3 4 ) where i are the eigenvalues in decreasing order of p p Y Y p , where Y := y y . From Fig. 5.3, we again see that, for certain time intervals, the entanglement of the channel with the added non-trivial memory kernel outperforms the background channel. As before, we nd that we can double the channel length, whilst still achieving the same 63 5.2. PROTECTING QUBITS FROM DECOHERENCE CHAPTER 5. Figure 5.1: Minimum channel delity f(t) = 1 2 (1 + (t)), as a function of time for the purely Markov (background channel) evolution (blue/dash, B = 0), and for the combined system aided by the generalized-Markovian noise (red/solid, B > 0). We also plot the delity of the background channel, with rescaled to 1 2 ( + 1=2 k ) (yellow/dot-dash), which intersects the B > 0 curve at times t = 2 ! (n)and 2n=!. Here, = 1; k = 25, and we setB such that!T = 2 (forT = 1). Note, the axis of dephasing is not important here; f is the same for each direction (for xed parameters). Time is measured in units of 1= . level of entanglement. At this point, before looking to more examples, we brie y discuss, in the context of this example, the physical mechanism which allows this type of generalized-Markovian noise to protect our system (on some time-scales). Recall from Eq. 
(5.19) that the `innite time' state P 0 (8) does not evolve (hence decohere) under action ofL 0 (sinceL 0 P 0 = 0). Moreover, from Eq. (5.16) we can see that P 0 X = 1 2 (X +A k XA k ): (5.27) When we include generalized-Markovian noise in our system, the dynamics now gov- erned by Eq. (5.25), in fact periodically generates such a `protected' state, (t n ) =P 0 , where (nite)t n is such thatp(t n ) = 1=2, or equivalently (t n ) = 0 (i.e. cos(!t n +) = 0). In other words, given some arbitrary initial state 0 , the time evolved state (t) = (t) 0 , is such thatL 0 (t n ) = 0. This shows the system is periodically driven through the steady state ofL 0 . At, and close to these times, the Markov part of the dynamics (L 0 alone) has no, or little, eect. 64 CHAPTER 5. 5.2. PROTECTING QUBITS FROM DECOHERENCE Figure 5.2: Dierence in the delity F :=FF 0 for all pure states (single qubit), where F 0 is the delity of the `background' channel, and F the delity the `combined' channel, at timet =T [cf. Fig. 5.1]. The background channel here corresponds to dephasing in the x direction. (Left); axes correspond to positions on the Bloch sphere for a single qubit: j i = cos(=2)j0i+e i sin(=2)j1i. (Right) Spherical plot of F (i.e. on the Bloch sphere). We see for all non-stationary initial states, the delity increases. Parameters: T = 1; = 1 (hence p 0 43%), and k = 25, with B chosen so that !T = 2 (hence p 32%). In particular, at these times, the system is essentially decoupled from the environmental noise, which allows the system to exhibit a lower leading decay rate as compared to a purely Markov evolution, which is subject to the full eects of the decoherence induced by L 0 for all nite t. 5.2.1.2 Modulated decaying correlations In this section we brie y study another type of generalized-Markovian process for il- lustrative purposes, where the long-time decay of the kernel has an additional modu- lation (i.e., we generalize the previous example). In other words we take k(t t 0 ) = B 2 e jtt 0 j= k cos((tt 0 )) with Laplace transform given by ~ k(s) =B 2 s + 1 k (s + 1 k ) 2 + 2 : (5.28) In order to nd the partial fraction decomposition of Eq. (5.9) we need to nd the roots of a third order polynomial. One can show (see Appendix C.4) that taking B 2 = 2 9 (2 1= k ) 2 + 2 2 ; (5.29) 65 5.2. PROTECTING QUBITS FROM DECOHERENCE CHAPTER 5. Figure 5.3: Concurrence as a function of time, for an initially maximally entangled pure 2-qubit state,j i = 1 p 2 (j00i +j11i). Here the dephasing is in the z-direction acting on one of the qubits. One can showC(t) =j(t)j. We plot for both the background channel (blue/dash), and for the channel assisted by the generalized-Markovian noise (red/solid). We see the peaks for the assisted channel surpass the background channel. Here, = 1, B = 5, k = 5. Time is measured in units of 1= . these three roots are given by s = 1 ; 1 i!, where 1 = 1 3 ( 1 0 + 2 1 k ); (5.30) ! = r 3 2 1 9 (2 1 k ) 2 : (5.31) Note, one is not restricted to taking this choice of B, however it is convenient to work with as the decay rate associated with each root is identical ( 1 ). In fact, we have (see Appendix C.4) (t) =e t= c 0 + 1c 0 cos cos(!t +) ; (5.32) with c 0 = 1 9! 2 (2 1 k ) 2 + 9 2 ; (5.33) and cos = 1c 0 q (1c 0 ) 2 + 4 9! 2 (2 1 k ) 2 : (5.34) 66 CHAPTER 5. 5.2. PROTECTING QUBITS FROM DECOHERENCE Figure 5.4: Minimum channel delity against time for cosine memory kernel. 
We plot for the background channel (blue/dash, B = 0), the combined channel (red/solid, B > 0), and for the background channel with replaced by 1 3 ( + 1= k ) (yellow/dot-dash). Here = 0:5; = 10; k = 25. For this parameter choice, the peaks occur approximately at the times 2n=!. We numerically verify the generated map is CP for all t 0 (including the case when we set = 0). Time is measured in units of 1= . Since the rate of decay is otherwise given by 2 (underL 0 alone), assuming parameters are chosen so that ! 2 R, the rate of decay can be reduced by up to a factor which approaches 3 in the limit k !1. In fact after evolving the combined channel for times t n = 2n=! (n = 0; 1; 2;::: ), the system is exactly as it would be under evolution ofL 0 alone, with ! 1 3 ( + 1 k ) (see Appendix C.4). We illustrate this in Fig. 5.4, where we plot the minimum channel delity against time. We have also numerically veried that the generated dynamics are completely positive for our parameter choices (see Appendix C.4). 5.2.2 Thermalization We provide a further example of our scheme, where the Lindblad operators dening the Markovian `background' noise are not self-adjoint. In the interest of providing an example which is potentially experimentally veriable, we restrict our generalized-Markovian noise to the dephasing type i.e. with self-adjoint Lindblad operators. Note however this makes the analytic solution more complicated (sinceL 0 andL 1 have dierent spectra), and therefore we just provide numerics here. If one does not make such restrictions, then a similar analysis as in the previous examples can be carried out. Explicitly, we consider an exponential memory kernel k(t) = e t= k , with andL 0 = 67 5.2. PROTECTING QUBITS FROM DECOHERENCE CHAPTER 5. L +L + , andL 1 =L z , which act as L X = X 1 2 f ;Xg (5.35) L z X = z ( 3 X 3 X) (5.36) where := 1 2 ( 1 i 2 ). Note that evolution underL 0 alone (i.e. z = 0) indeed generates a (unique) thermal (Gibbs) state (in the innite time limit) G = 1 Z e H , at inverse temperature for Hamil- tonian H = g 3 , where we identify + = = e g (with normalization Z = Tr[e H ]). We provide a derivation of this below in Sect. 5.2.2.1. Also note thatL z G = 0 (thermal steady state is a steady state ofL z ). We study this system numerically, solving it by taking the Laplace transformation (see Eq. (5.5)). Note, we check the resulting map t is a genuine quantum map for our parameter choices by checking positivity of the Choi matrix g [54]. We consider the time evolution under t of an initial maximal superposition state, and again nd that one can reduce the leading decay rate, e.g., see Fig. 5.5, which shows revivals in the delity surpassing the background channel. Moreover, in light of our discussion above, we plot in the inset the distance of the state at time t under the full evolution (i.e. with the added generalized-Markovian noise) to the corresponding (unique) steady state of the background dynamicsL 0 =L +L + , as dened by the projectorP 0 := lim t!1 e tL 0 (see Eq. (5.19)). Since the system periodically passes close to the steady state of the background dynamics, periodically the system is eectively decoupled from this thermal noise. 5.2.2.1 Thermal spectral theorem We consider the steady state dynamics of the thermal dissipative model introduced above (Eq. (5.35)). Similar to the Pauli channel, this has no eigen-nilpotents, and can therefore be written L 0 = P 3 i=0 i P i (and i 2 R 0 ). That is, t := e tL 0 = P 3 i=0 e i t P i . 
We can therefore introduce the leftL i and rightR i eigenvectors ofL 0 , which form an orthonormal basis (see below). Dening := + + , x := + = , one can show P i [X] = Tr[L i X]R i (5.37) g Choi matrix for map : C = P i;j jiihjj (jiihjj). 68 CHAPTER 5. 5.2. PROTECTING QUBITS FROM DECOHERENCE Figure 5.5: Fidelity overlap with initial pure state 0 =j 0 ih 0 j as a function of time for thermal qubit, with (red/solid, z = 2) and without (blue/dash, z = 0) added protection by generalized Markovian noise. Parameters: = 1; + = 1=2; k = 5. Initial state: j 0 i = 1 p 2 (j0i +j1i). We numerically verify the dynamics are CP (using the Choi matrix). Inset: Distance of the state at time t to the corresponding steady state ofL 0 . We see periodically the system passes close to the steady state (when z 6= 0). When the distance k( t P 0 ) 0 k 0, the system is essentially decoupled from the thermal noise. We use the maximum singular value norm. Time is measured in units of 1= . 69 5.3. ALTERNATIVE MASTER EQUATIONS CHAPTER 5. where f i g =f0; 1 2 ; 1 2 ;g fR i g =f 0 1x 1 +x 3 ; ; + ; 1 2 3 g fL i g =f 1 2 0 ; + ; ; x 1 x + 1 0 3 )g: (5.38) One can check orthonormality in the sense Tr[L i R j ] = i;j . We use this to study the innite time dynamics. For any quantum state , we have 1 := t!1 =P 0 = 1 2 R 0 . More concisely, the unique steady state of the dynamics is 1 = 1 1 +x (j0ih0j +xj1ih1j): (5.39) This unique steady state can be written as a thermal (Gibbs) state, where we identify an inverse temperature such that 1 = 1 Z e H , where H =g 3 (and Z = Tr[e H ]), with x =e g . 5.3 Alternative Master Equations We lastly mention, for completeness, that it may of course be possible to achieve similar { or better { results using alternative non-Markovian master equations (i.e. of a dierent form to Eq. (5.4)). We point out however it is not so trivial a task to demonstrate this. For example, consider the master equation used in Ref. [86], which has received a fair amount of attention due to the fact it is non-Markovian in nature, and is guaranteed to generate a CPTP map. This master equation is given by _ (t) = (t)L(t) whereL is time-independent and of Lindblad form. The time dependent decay rate (t), which can be negative for some intervals of time, is such that (t) = R t 0 ()d 0;8t, to ensure complete positivity. Let us then consider, as an example, _ (t) = ( + (t))L(t) =:L 0 (t)(t); (5.40) where 0. That is, represents the background Markovian noise. Note, as per our discussion above, we require both parts to independently generate a CPTP map, and so we still require (t) 0;8t. One can solve this master equation exactly; the corresponding quantum map is h E t =e (t+(t))L : (5.41) h Note, there is of course no need for time ordering, since here L 0 (t) commutes for all times: [L 0 (t1);L 0 (t2)] = 0;8t1;2 0, and thereforeTe R t 0 d(+ ())L =e (t+(t))L . 70 CHAPTER 5. 5.4. SUMMARY Now let us consider Pauli dephasing of a single qubit (i.e., the main example for our scheme above). We can see straightforwardly [e.g. by spectral theorem: 12 (t) = e 2(t+(t)) 12 (0)] that the probability of dephasing under this channel is p(t) = 1 2 (1e 2(t+(t)) )p 0 (t) = 1 2 (1e 2t ): (5.42) That is, the amount of dephasing at timet is increased. This is also mirrored in the obser- vation that, in contrast to the generalized-Markovian master equation we use (Eq. (5.4)), Eq. (5.40) does not drive the system through the steady state space, hence does not `peri- odically decouple' the system from the background noise. 
5.4 Summary Researchers into quantum information and open quantum systems are realizing that in some situations, noise can in fact be used to aid in information processing tasks. In the previous two chapters (and references from which they arose from) we introduced several new techniques within this vein, strictly in the Markov setting, showing explicitly how Markovianity can be viewed as a resource. This work added to a growing list of quantum Markov process assisted quantum computational schemes. In this chapter, conversely, we have introduced a method whereby noise of a fundamentally dierent character can be utilized for similar purposes. In particular, a generalized type of Markovian quantum process was introduced to aid in the preservation of quantum information. Prior to this, there was very little literature on the subject of utilizing the unique properties of non- Markovian dynamics, to the end of aiding in quantum information processing tasks. We saw that upon adding generalized-Markovian noise on top of an assumed background Markovian dynamics, the rate at which the system decays can in fact be reduced. The mechanism behind this completely relies on the appearance of the non-trivial memory kernel describing the generalized-Markovian dynamics. One possible way of engineering such dynamics would be by introducing a Hamiltonian coupled to a classical stochastic eld whose correlation is given by the memory kernel. We demonstrated this method by considering a Pauli channel, which we analytically solve, and saw how an exponentially decaying memory kernel can be used to eectively double the length of the channel, whilst still preserving the same threshold for errors, while a cosine-type of kernel has even greater error suppressing capabilities. Moreover, we nd similar results for a qubit in a thermal environment. We discussed a possible physical mechanism governing these dynamics whereby the system is periodically driven to (or close to) a steady state of the background dynamics (L 0 ), at which times, the state is essentially decoupled from the background noise. This decoupling, it seems, allows the qubit to remain coherent for longer times, thus exhibiting a type of quantum memory, which may nd application in quantum computing. 71 5.4. SUMMARY CHAPTER 5. Remarkably, we have found that the act of adding a certain class of noise to an already dissipative system, can in fact result in less decoherence. This particular technique opens new avenues of study into both dissipation as a resource, and into open systems in general; in particular, at the interface of Markov and non-Markov dynamics, of which there are still many unanswered questions. 72 Chapter 6 Quantum Annealing Exploiting Thermalization: an Experimental Foray Inspired by thermal annealing and by the adiabatic theorem of quantum mechanics, quan- tum annealers are designed to make use of diminishing quantum uctuations in order to eciently explore the solution space of particular discrete optimization problems. In the last few years, chip sizes have grown in accordance with Moore's law, and current devices contain on the order of 2000 superconducting qubits, potentially allowing for relatively large scale problems to be solved. Though progress has been made [131, 132], whether quan- tum annealing provides a speedup [133, 134] over conventional approaches to optimization remains open. One challenge for the experimental community is that the devices seem to sample from a non-trivial distribution (of solutions). 
It is clear the current generation of quantum annealers suffer from both `internal' errors, due to the devices being analogue in nature [135, 136], and from `external' errors resulting from a non-negligible coupling to the environment (despite being at a `low' temperature of around 10 mK) [51, 137]. In particular, it is evident that thermal effects do play a key role in the output of these devices, as starkly shown in Fig. 6.1. For these reasons, researchers have suggested that quantum annealers may be useful for thermal sampling tasks [107, 137-139], such as the NP-hard problem of sampling from a Boltzmann distribution$^a$, which has application in machine learning and artificial intelligence. Early results and theory exist [138-140], but the extent to which thermalization occurs in quantum annealing remains a "hotly" contested issue.

$^a$ Note, since Boltzmann sampling is in the class NP-hard, one does not expect a quantum annealer/computer to be able to solve this problem efficiently. It is possible there could be some advantages to be gained, however.

In this chapter we will present results originally appearing in Refs. [50, 51]. After introducing the basics, we will highlight relevant theory results pertaining to open system quantum annealing; in particular, we will consider (as in Ref. [57]) the weak coupling of the qubits each to their own independent environment (which is consistent with experiments [141]), which results in a time-dependent Lindblad master equation which induces thermalization (i.e. a Davies generator), from which the time-scale upon which thermalization occurs can be estimated. This will allow us to build a qualitative picture of the relevant time-scales involved in open system quantum annealing. From this model, we will present results from an experimental quantum annealer, the D-Wave 2000Q$^b$, and will argue the extent to which this model holds. In fact, we will observe an intriguing phenomenon, largely explainable within this qualitative picture, where one can actually exploit thermal effects in order to achieve an increase in the performance of the device (e.g. increasing the probability of successfully arriving at the solution to the aforementioned optimization problems, often by several orders of magnitude). Moreover, we will open up several new questions to be addressed by the community for future experiments, and describe necessary features required for the next generation of experimental quantum annealer, should we wish to understand them as fully as possible.

6.0.1 Quantum annealing primer

Transverse-field quantum annealing evolves the system over rescaled time $s = t/t_a \in [0,1]$, where $t$ is the time and $t_a \in [1, 2000]\,\mu$s the total run-time$^c$. We will occasionally refer to the rate of the anneal $ds/dt$, which on the D-Wave 2000Q can be set to zero during a pause (see below) or take values in the interval $ds/dt \in [0.0005, 1]\,\mu$s$^{-1}$ otherwise. The time-dependent Hamiltonian is of the form
$$ H(s) = A(s)H_d + B(s)H_p, \qquad H_p = \sum_{\langle i,j\rangle} J_{ij}\sigma^z_i\sigma^z_j + \sum_i h_i\sigma^z_i, \qquad H_d = -\sum_{i=1}^N \sigma^x_i, \qquad (6.1) $$
where $H_p$ is the programmable Ising spin-glass problem (the final Hamiltonian) to be sampled, defined by the parameters $\{J_{ij}, h_i\}$, and $H_d$ is a transverse-field (or `driver') Hamiltonian which provides the quantum fluctuations (the initial Hamiltonian). Here $N$ is the total number of qubits in the problem, and $\langle i,j\rangle$ indicates the sum is only over coupled qubits, defined by the hardware `chimera' graph, as shown in Fig. 6.2.
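For a concrete feel for Eq. (6.1), the sketch below (an illustration added here) builds $H(s)$ for a small 4-qubit Ising ring and prints the instantaneous spectrum. The linear schedule $A(s) = 1-s$, $B(s) = s$ is only a stand-in for the actual D-Wave schedule of Fig. 6.3, and the instance is not meant to represent a chimera-graph problem.

```python
import numpy as np

def pauli(op, k, n):
    """Tensor-product embedding of a single-qubit Pauli at site k (0-indexed) of n qubits."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.diag([1.0, -1.0])
    out = np.array([[1.0]])
    for m in range(n):
        out = np.kron(out, (sx if op == 'x' else sz) if m == k else np.eye(2))
    return out

# Small illustrative instance: a 4-qubit antiferromagnetic ring with J = +1 and h = 0
n = 4
J = {(i, (i + 1) % n): 1.0 for i in range(n)}
H_p = sum(Jij * pauli('z', i, n) @ pauli('z', j, n) for (i, j), Jij in J.items())
H_d = -sum(pauli('x', i, n) for i in range(n))

def H(s, A=lambda s: 1 - s, B=lambda s: s):   # linear stand-in for the A(s), B(s) of Fig. 6.3
    return A(s) * H_d + B(s) * H_p

for s in np.linspace(0, 1, 6):
    evals = np.linalg.eigvalsh(H(s))
    print(f"s = {s:.1f}  ground = {evals[0]: .3f}  gap = {evals[1] - evals[0]:.3f}")
```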
The device we use, the D-Wave 2000Q contains 1616 unit cells each containing 8 qubits, thus having a max- imum of 2048 qubits. However, because of some faulty qubits/couplers, the actual number b Occasionally we will refer to work presented using an older model of the D-Wave, the D-Wave Two (which contained 512 qubits). In such cases we will make it clear the device the data was collected from. c The specic numbers mentioned in the main text all refer to the D-Wave 2000Q. 74 CHAPTER 6. Figure 6.1: Probability of success (how often the ground state energy is correctly identied) for many problem instances of dierent sizes, compared on two D-Wave Two machines, diering (a priori) only in the temperature. The `hot' machine P H , at around 16mK, is shown on the x-axis, and the `cold' machine P C , at around 13mK, on the y-axis. Number of qubits given by the legend. It is clear the colder machine outperforms the hotter one. 75 CHAPTER 6. Figure 6.2: The D-Wave 2000Q `chimera' graph which we conducted our experiments on (the machine is housed at NASA Ames Research Center). There are 1616 unit cells each containing 8 qubits. Dead (malfunctioning) qubits are not shown on the graph. Top left bordered in red is an example of a square subgraph of side-length SL= 4. of operating qubits is 2031. These `dead' qubits appear randomly dispersed throughout the hardware graph. The initial state is xed as the ground state of H d , j (0)i = j+i N wherej+i = 1 p 2 (j0i +j1i) (dened in the computational basis via z =j1ih1jj0ih0j). The manner in which the Hamiltonian is evolved in time is determined by the annealing schedule (i.e. the time dependence of A;B). The default schedule for the D-Wave 2000Q is shown in Fig. 6.3. After an annealing run, the system is measured in the computational basis. Performing many such runs allows statistics about the annealer to be collected; useful measures include the probability of successfully nding the ground-state of H p (which is the solution to a classical optimization problem) which we denote asP 0 , or the average energy returnedhEi. 76 CHAPTER 6. Figure 6.3: Annealing schedule in GHz (units of h = 1). The operating temperature (T = 12:1 mK, or equivalently 0:25 GHz) of the chip is also shown (black-dashed line). One way to provide more robust statistics, is by changing the `gauge' of the prob- lem. This is a trivial re-mapping of the problem so as to avoid certain biases which may be present for certain couplers/qubits (e.g., some couplers may have fewer analog con- trol errors associated with programming in J = +1 as compared to J =1, or certain qubits may be more likely to align with +z as compared toz even in the absence of any elds). The mapping involves changing the couplings/elds as J ij ! J ij r i r j ;h i ! h i r i , where ! r = (r 1 ;:::;r N ) is a vector of random entries r i 2 f1; 1g. Notice any con- guration ! s = (s 1 ;:::;s N ) has a corresponding conguration of the mapped problem ! s 0 = (r 1 s 1 ;:::;r N s N ) with the same cost, thus the problem itself is exactly the same. The current generation of hardware, the D-Wave 2000Q, allows users to tweak the default schedule in various ways. In particular this gives one the ability to: 1) Pause the evolution at some intermediate point s p < 1 in the anneal, for time t p . 2) Perform reverse annealing, where the system is initialized in a classical conguration at times = 1, evolved backwards to an earlier times p < 1 before evolving the system back to time s = 1 where a read-out occurs. 
Additionally, a pause can be inserted between the two evolutions. 3) Speed up or slow down the schedule during intermediate intervals of the anneal. 4) Provide per-qubit annealing offsets.

Figure 6.4: Example of the annealing parameter s as a function of time t for an anneal with a pause, for both forward and reverse annealing. Here a 1 μs pause (t_p = 1 μs) is inserted into the annealing schedule at s_p = 0.5 (i.e. at t = 0.5 μs), which otherwise has a total anneal time of t_a = 1 μs.

Fig. 6.4 shows an example of an annealing schedule with a pause, and also an example of a reverse annealing schedule. Based on these features, new methods of sampling from an annealer have been developed and proposed, such as exploiting reverse annealing to explore the energy landscape in a novel manner [142-145]. Moreover, performance advantages have been observed by offsetting the fields of some of the qubits, allowing one to evade spurious transitions which occur during the minimum gap [146, 147]. This chapter focuses mostly on the first capability listed above, where we embed a pause in the default annealing schedule, i.e., the Hamiltonian is fixed at H(s_p) for lengths of time chosen by the user. This approach enables us to study key mechanisms determining the output of the annealer, such as thermalization, and to identify key regimes of the anneal.

6.1 Open System Quantum Annealing

An important quantity identified in Ref. [51] is the ratio between the strength of the driving Hamiltonian and the problem Hamiltonian, Q(s) := A(s)/B(s), shown in Fig. 6.5 for the D-Wave 2000Q schedule. Also important are the classical thermal fluctuations, which are governed by the quantity C(s) := k_B T / B(s), where T is the temperature of the annealer. Observing the relative scales of the characteristic energies associated with the driving terms (i.e. transverse field, environmental bath) against the energy of the problem Hamiltonian allows us to infer the existence of different regimes where a given process becomes energetically dominant.

Figure 6.5: Dimensionless annealing schedule. We plot the ratio Q(s) := A(s)/B(s), and the ratio of the operating temperature (T = 12.1 mK) to the strength of the problem Hamiltonian, C(s) := k_B T / B(s).

In particular: i) at early times, when Q, C > 1, the system mostly remains in the ground state of H; ii) when Q, C ~ O(1), non-trivial dynamics occur, with H_d driving various transitions between computational basis states; and iii) when Q, C ≪ 1, the Hamiltonian is mostly diagonal (in the computational basis) and little population transfer occurs between the eigenstates through diabatic transitions (the `frozen' region). Thermal transitions could still occur, but those also depend on the strength of the coupling to the thermal bath (see below). Naively, one might expect the final distribution at the end of the anneal to be a classical Boltzmann distribution for the problem Hamiltonian H_p at the operating temperature of the device, specifically ∝ exp(-βB(1)H_p), where β = 1/k_B T, with T the operating temperature of the annealer (of the order of 10-20 mK in the various generations of D-Wave annealers). But it has long been known that this is not the case.
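As a small illustration of how Q(s) and C(s) partition the anneal into the regimes i)-iii) above, the sketch below tabulates both ratios along the anneal; the schedule functions A(s), B(s) are illustrative placeholders (the true schedule is the one shown in Figs. 6.3 and 6.5), while T is the 12.1 mK operating temperature.

kB = 20.836619          # Boltzmann constant in GHz/K (k_B/h)
T = 12.1e-3             # operating temperature in K (so k_B T ~ 0.25 GHz)

# Placeholder schedules in GHz; substitute the published schedule of Fig. 6.3 here.
A = lambda s: 6.0 * (1.0 - s) ** 2
B = lambda s: 12.0 * s

print("   s    Q(s)=A/B   C(s)=k_B T/B")
for s in [0.02, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]:
    print(f"{s:5.2f} {A(s)/B(s):10.3f} {kB*T/B(s):12.3f}")

Both ratios decrease monotonically as the anneal progresses, which is what produces the early/intermediate/frozen structure discussed above.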
The freeze-out hypothesis due to Amin [137] suggests that while early in the anneal the system thermalizes, later in the anneal, at an instance-dependent but temperature-independent freeze-out point, the transverse-field strength has diminished and the gap between eigenvalues increased to the point that the transition matrix elements are so small that essentially no dynamics happen after this "freeze-out" point. This hypothesis predicts that, for instances with well-defined freeze-out points, the final distribution would indeed be a classical Boltzmann distribution for H_p, but at a higher "effective temperature" corresponding to the freeze-out point. More specifically, the theory hypothesizes a freeze-out point s* that occurs once Q(s*) and C(s*) are so `small' that the time scale on which the transverse field drives transitions between eigenstates of H_p is much longer than the system evolution time, hence little population transfer occurs. The expected distribution would then be close to ∝ exp(-βB(s*)H_p) [51, 137]. The paper proposing this hypothesis [137] recognized that a well-defined freeze-out point will only exist under certain circumstances, with more recent debate as to how typically those circumstances hold.

Taking inspiration from work that addressed an open-system treatment in the weak-coupling regime for general problems [57], and in the non-perturbative regime for specific problems [148, 149], we look at transitions between instantaneous energy levels E_i(s) > E_j(s) that are governed by the Fermi's Golden Rule rate Γ_ij (transitions from E_i → E_j):

\Gamma_{ij}(s) \propto J(|\omega_{ij}(s)|) \sum_{k,\alpha} g_\alpha^2 \frac{|\langle E_i(s)|\sigma^\alpha_k|E_j(s)\rangle|^2}{1 - \exp(-\beta|\omega_{ij}(s)|)},   \omega_{ij}(s) = E_i(s) - E_j(s),   (6.2)

where g_\alpha is the environment coupling strength to the \alpha = x, y, z component of the system spins, and \sigma^\alpha_k is the Pauli operator acting on the k-th spin. The spectral density function J, which is not known precisely for the current hardware, is sometimes modeled by, e.g., an Ohmic bath spectral function (with cutoff ω_c), J(ω) ∝ ω e^{-ω/ω_c} (where the constant of proportionality has dimensions of time squared). The explanation of freeze-out in this picture is that, as s → 1, energy gaps |E_i - E_j| open up, and the matrix elements ⟨E_i(s)|σ^z|E_j(s)⟩ → 0 (σ^z being typically the dominant environment-spin coupling [149-151]) as the Hamiltonian becomes more diagonal in the z-basis; thus the transition rates dramatically slow down late in the anneal. Therefore, the two strongest (possibly competing) effects determining the relaxation rate Eq. (6.2) are the instantaneous energy gap and Q(s) (via the matrix element).

In Ref. [51] annealers operating at different temperatures were used to corroborate the freeze-out hypothesis, finding consistency in the freeze-out point location for instances with well-defined freeze-out points late in the anneal. On the negative side, however, this study showed that most instances did not fall into this category, leaving open whether one would even expect the final distribution to be a classical Boltzmann distribution for H_p in the typical case. By utilizing the novel features of the current hardware, it is possible to probe these ideas further, i.e. to understand in more detail what occurs during an anneal with regard to thermalization. We will now discuss a qualitative theory pertaining to this model, where we incorporate the possibility of pausing the anneal at intermediate points.
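To make Eq. (6.2) concrete, here is a minimal, self-contained sketch that evaluates the relaxation rate from the first excited state to the ground state of H(s) for a toy instance, assuming an Ohmic spectral density and only a σ^z coupling per qubit (g_x = g_y = 0). The coupling strength, cutoff, prefactor and schedule values are illustrative, not calibrated device parameters.

import numpy as np

sx = np.array([[0., 1.], [1., 0.]]); sz = np.diag([1., -1.]); I2 = np.eye(2)

def op_on(op, k, n):
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == k else I2)
    return out

n = 3
J = {(0, 1): -1.0, (1, 2): 0.5}            # toy instance
A, B = 0.8, 4.0                            # placeholder schedule values at some s (GHz)
Hp = sum(Jij * op_on(sz, i, n) @ op_on(sz, j, n) for (i, j), Jij in J.items())
Hd = -sum(op_on(sx, i, n) for i in range(n))
H = A * Hd + B * Hp

def ohmic_J(w, eta=1.0, wc=8.0):
    return eta * w * np.exp(-w / wc)       # illustrative Ohmic spectral density

beta = 1.0 / 0.25                          # 1/(k_B T) in 1/GHz (k_B T ~ 0.25 GHz at 12.1 mK)
gz = 0.1                                   # illustrative sigma^z coupling strength
evals, evecs = np.linalg.eigh(H)
w10 = evals[1] - evals[0]
matel = sum(abs(evecs[:, 1] @ op_on(sz, k, n) @ evecs[:, 0]) ** 2 for k in range(n))
rate = ohmic_J(abs(w10)) * gz ** 2 * matel / (1.0 - np.exp(-beta * abs(w10)))
print("Gamma_{1->0} ~", rate, "(arbitrary rate units)")

The two competing dependencies discussed above are visible directly in the code: the gap w10 enters through both J and the detailed-balance factor, while the σ^z matrix elements shrink as the transverse field (here the placeholder A) is reduced.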
The key time scales we consider are the pause time scale t p , the length of the pause inserted into the annealing schedule, the relaxation time scale t r , related to the inverse of Eq. (6.2), and 80 CHAPTER 6. 6.2. PROBING THERMALIZATION the Hamiltonian evolution time scale,t H (s)t a d ~ H(s) ds 1 , which is the characteris- tic time upon which the system Hamiltonian changes. This quantity depends on both t a and the parametersA(s) andB(s). For dimensionality reasons, we introduced the dimensionless Hamiltonian ~ H :=H=A(0). Fig. 6.6 provides a schematic illustration of four regions within an anneal which one would expect have qualitatively dierent dynamics. In the gure, we plot t H as a straight line only for convenience. Early in the anneal, the eigenvalues are spread, with E i E 0 0, and with the system starting the ground state state, one expects little in the way of dynamics and the probability that the system is in the instantaneous ground state to be high, P 0 (s) 1. Once the eigengaps decrease suciently to be within the Hamiltonian evolution time scale, we expect to see thermal excitations, with sucient dynamics that the system instantaneously thermalizes. As we leave this region, the energy gaps open, and the strength of the driver Hamiltonian reduces so that instantaneous thermalization no longer takes place, but transitions still occur. This region is relatively narrow since t r increases exponentially as the gaps open up. In this region we may expect a pause to aid thermalization, allowing the system to relax back down to the ground state, re-populating it. Near the end of the anneal, with the gap distances becoming very large, and where very little transverse eld is applied, one would expect that the dynamics are eectively frozen. 6.2 Probing Thermalization We consider the following simple adaptation to the standard annealing schedule. Allow the system to run as normal to some (re-scaled) time s p 2 [0; 1], upon which we pause the system for timet p 2 [0; 2000]s, after which we continue the evolution as per normal. The problem instances we use for this study fall into two categories: 1. To study large problems, we work with problems of the planted-solution type [152], such as the instance in Fig. 6.7. We chose these problems, because recent techniques enable us to know in advance the general analytic form of the spectrum of H p , including the ground state, as well as certain information about the degeneracy of the energy levels. 2. To analyze problems with respect to spectral properties of the full Hamiltonian,H(s), we used 12-qubit problems, where J ij 2 [1; 1] (uniformly random). In both cases, we used zero local elds, h i = 0. We discuss the generation of problem instances in more detail in Appendix D, in particular providing some more details about point 1 above. Fig. 6.7 shows that this pause dramatically eects the samples returned from the an- nealer. While Fig. 6.7 is for one instance, almost all instances we tested exhibit this behavior, including a strong peak in the success probability when a pause is inserted into 81 6.2. PROBING THERMALIZATION CHAPTER 6. Figure 6.6: A cartoon example illustrating time-scales relevant to analyzing a quantum annealer with coupling to a thermal bath. We indicate the thermal relaxation time-scale t r by the blue solid line, the Hamiltonian evolution time-scale t H , by the red (horizontal) solid line, and the pause time scale, t p , by the black dash line. 
The diagram focuses on the part of the anneal where the relaxation time scale is shortest, around the location of the minimum gap of the problem (e.g. around s = 0:44 in this example). We show four characteristic regions during the anneal. 1) [yellow] at early times, when the ground state is separated by a large gap from any excited state, and the population in the ground state is approximately 1. 2) [green] As the energy gap closes, and the relaxation rate increases rapidly, thermal excitations may occur. 3) [purple] As the energy gaps open up, the relaxation time scale increases (exponentially). Oncet r >t H instantaneous thermalization can no longer occur. If however t r .t p , a pause may allow eective thermalization, which could lead to a signicantly larger population in the ground state. We indicate the point where t r t p by s opt p . Finally, in region 4) [blue], where t r t p is the longest time scale, very little population transfer will occur (even if one pauses the system for time t p ). The dynamics are eectively frozen. 82 CHAPTER 6. 6.2. PROBING THERMALIZATION the regular annealing schedule within a narrow band of values ofs p . Fig. 6.7 shows that the corresponding average energy returned is also signicantly reduced. We dene the `optimal pause point', s opt p , as the point in the anneal for which a pause returns the lowest average energy returned from many samples d (just after s p = 0:4 in this example). Fig. 6.8 shows that the longer the pause, the greater the increase in the success prob- ability. Here, we see that the success probability, for a pause at re-scaled time s p = 0:44, increases from the baseline ( 0:01%) to over 10% for pause lengths longer than around 500s, and approaches 20% as the pause length approaches 2ms (the longest allowable pause length on the D-Wave machine), although saturating around 1ms (in the logarith- mic regime). That is, an increase of around three orders of magnitude. This behavior gives us new insight into the time-scales involved in these annealers. It shows that even a 10s pause (inserted within a default schedule witht a = 1s) can dramatically eect the nature of the samples returned from the annealer. These observations are consistent with the thermalization picture mentioned in the previous section, and the cartoon in Fig. 6.6; we attribute the purple region in Fig. 6.6 to the region where the huge spike in success probability is observed, since the system can eectively relax back to the low lying energy levels on a time-scale comparable with the pause length. After this (e.g. the blue region in Fig. 6.6), the eect is much weaker (dropping o exponentially) as the relaxation time scale increases (notice in Fig. 6.7 that late in the anneal, the relative increase in success probability is much less, or non-existent, as compared to during the region around s p = 0:4). We observe similar phenomena for the second problem class we study (with J ij 2 [1; 1]), as in Figs. 6.9, 6.10. In these gures we show the eect of changing the pause length, and the total anneal time respectively. An eect we observe upon increasing the pause length is that the width of the peak increases, as shown in Fig. 6.9. Notice that in this gure all curves start to show an increase in success probability at the same pause point s p (just after 0.4), but come back to the baseline probability at later points for longer pause lengths. That is, the region of interest is slightly extended to the right. This also ts in with the model discussed in Sect. 
6.1 and the cartoon picture Fig. 6.6, where increasing t p increases the size of the purple region by extending it to the right (i.e. the region where t H < t r . t p ). The location of the peak s opt p we posit to be around the point when t r t p , the last point in the anneal for which thermalization can eectively occur during a pause of length t p . Indeed we experimentally observe (in Fig. 6.9) that increasing the pause length shifts the peak to later in the anneal (and also increases in size P 0 in accordance with this picture e ). d The optimal pause points opt p is evaluated by nding the lowest average energy (from many thousands of anneals) returned from the D-Wave device. In principle one could use the ground state success probability as a metric, however this is often not practical since 1) some hard problems are not solved even once by the D-Wave, and 2) for randomly generated instances (i.e., not of the planted-solution type), the ground state energy is typically unknown. e For a given problem, the later one can thermalize, the greater the ground state success probability, 83 6.2. PROBING THERMALIZATION CHAPTER 6. Figure 6.7: Forward annealing with a pause for a single 783 qubit (planted-solution) prob- lem instance. The top gure shows the success probability with respect to pause length t p , and the location of the pause s p . The total evolution time (aside from the pause) is 1s. The corresponding bottom gure shows the average energy (in arbitrary units) returned by the annealer. Each data point is averaged from 10000 anneals with 5 dierent choice of gauge. In the absence of a pause, P 0 10 4 . 84 CHAPTER 6. 6.2. PROBING THERMALIZATION Figure 6.8: Dependence of success probabilityP 0 on pause lengtht p , for the same problem considered from Fig. 6.7, where we x s p = 0:44 (corresponding to the peak in Fig. 6.7). We see increasing the pause length corresponds to a larger success probability (although it mostly saturates around 500s). In the absence of a pause the success probability is P 0 10 4 , which increases by several orders of magnitude to around 20%. Red solid line is linear t to tail end (t p > 10 1:5 s) Inset: Same as main gure, but with linear scale on t p -axis. Each data point is averaged from 10000 anneals with 5 dierent choice of gauge. If one instead increases the anneal time (hence t H ), the peak narrows (and attens), and eventually disappears, as observed in Fig. 6.10. Note, in accordance with Fig. 6.6, the location ofs opt p does not change upon increasingt a (since this should reduce the size of the purple region from the left). 6.2.1 Reverse annealing with a pause Before proceeding with a statistical analysis we brie y present some relevant results using the reverse annealing protocol, with a pause, the general protocol of which is demonstrated graphically in Fig. 6.4. This allows us to identify some of the key regions during an anneal, which, as explained above, depend on the ratios of the various energy scales involved and associated time-scales. We show some of our ndings in Fig. 6.11 where we identify 4 regions of interest. 1) s p < s gap . The system has been evolved (from s = 1) past the minimum gap, and been paused at a point where Q>O(1), allowing for mixing between energy levels in the computational basis. There is no memory of the initial conguration. 2) In the region just after the minimum gap, up to arounds opt p , where the lowest energy solutions are found, and corresponding to the purple region in Fig. 6.6. 
Here there is no memory of the initial state, assuming the gap !01 opens up so that e.g. the ratio P1=P0/ exp(!01) is reduced. 85 6.2. PROBING THERMALIZATION CHAPTER 6. Figure 6.9: Eect of changing the pause length for a 12-qubit problem instance. Each data point is from 10000 annealing runs using 5 dierent gauges. The anneal time t a = 1s for all data sets shown. Figure 6.10: Eect of changing the anneal time for the same problem considered in Fig. 6.9. Each data point is from 10000 annealing runs using 5 dierent gauges. The pause length t p = 100s for all data sets shown. 86 CHAPTER 6. 6.2. PROBING THERMALIZATION Figure 6.11: Reverse annealing with pause at s p (solid lines), for a four dierent initial congurations for a 12-qubit problem. We plot the average energy (arbitrary units) re- turned from 5000 anneals, evolved at rate ds=dt = 1s 1 , and t p = 100s. We consider ground and rst excited state congurations, as well as two highly excited energy levels. The energy level E i is indicated on the right hand side. We also show the corresponding forward anneal curve (black-dash line). This problem has a minimum gap at s = 0:44 indicated in the gure. and no clear dierence between forward and reverse annealing. We expect thermalization is able to occur eectively on times scales comparable with t p , i.e., the transition rates between energy levels ij 1=t p . 3) After s opt p , where there is a clear dierence between forward annealing and reverse annealing, there is `memory' of the initial conguration. Thus the state returned by the annealer at s = 1 depends heavily on the system state at the pause points p suggesting dierent time-scales and transition rates are important here. In this region, as Q! 0 andhE i j z jE j i! 0, the g x;y couplings may play more of a role, leading to qualitatively dierent thermalization mechanisms and time-scales. Here some transition rates ij may be comparable to 1=t p , whereas others much less. 4) Very late in the anneal, with Q C 1, almost no dynamics occur (the state returned from the annealer is almost always the same as the one initialized), i.e. ij 1=t p . We mention that these general observations seem to be fairly generic, and not specic to this particular example. With this in mind, we proceed with a statistical analysis, demonstrating to what extent the picture outlined in Fig. 6.6 holds. 6.2.2 Correlation with the minimum gap Typical folklore of (open system) quantum annealing dictates that around the location of the minimum gap, thermal excitations from the ground state to excited energy levels may occur, and that after the gap, thermal relaxation will allow some of the excited population 87 6.2. PROBING THERMALIZATION CHAPTER 6. Figure 6.12: Correlation between the location of the minimum gap, s gap , and the optimal pause point s opt p for 100 problems of size 12 qubits. These problems have J ij 2 [1; 1] (uniformly random) and h i = 0. We divided the data into two groups based on their minimum gap min (see legend), and we also indicate by the dash-dot (vertical) line the location corresponding to Q(s gap ) = 0:1. We xed the pause length to t p = 1000s, and total anneal time (excluding the pause) to t a = 1s. Data from the annealer is averaged over 10000 anneals, with 10 dierent choices of gauge. to fall back to the ground state [153] (of course, this is heavily dependent on the nature of the minimum gap, and hence on H p itself, as well as the temperature and annealing schedule). This general idea is also demonstrated in Fig. 6.6. 
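For the 12-qubit instances used in this analysis the full spectrum of H(s) can be computed directly, and s_gap located by exact diagonalization. The sketch below does this for a smaller random instance with a placeholder schedule; note that the instances in the text have h_i = 0, whereas small random fields are added here purely to lift the trivial end-of-anneal ground-state degeneracy, and the grid stops short of s = 1.

import numpy as np

sx = np.array([[0., 1.], [1., 0.]]); sz = np.diag([1., -1.]); I2 = np.eye(2)

def op_on(op, k, n):
    out = np.array([[1.0]])
    for i in range(n):
        out = np.kron(out, op if i == k else I2)
    return out

A = lambda s: 6.0 * (1.0 - s) ** 2          # placeholder schedules (GHz)
B = lambda s: 12.0 * s

rng = np.random.default_rng(1)
n = 6                                        # small stand-in for the 12-qubit instances
J = {(i, j): rng.uniform(-1, 1) for i in range(n)
     for j in range(i + 1, n) if rng.random() < 0.5}
h = rng.uniform(-0.2, 0.2, size=n)           # small symmetry-breaking fields (see note above)
Hp = sum(Jij * op_on(sz, i, n) @ op_on(sz, j, n) for (i, j), Jij in J.items())
Hp = Hp + sum(hi * op_on(sz, i, n) for i, hi in enumerate(h))
Hd = -sum(op_on(sx, i, n) for i in range(n))

s_grid = np.linspace(0.01, 0.99, 99)
gaps = []
for s in s_grid:
    ev = np.linalg.eigvalsh(A(s) * Hd + B(s) * Hp)
    gaps.append(ev[1] - ev[0])
i_min = int(np.argmin(gaps))
print(f"s_gap ~ {s_grid[i_min]:.2f}, minimum gap ~ {gaps[i_min]:.3f} GHz")

It is s_gap obtained in this way that is correlated against the optimal pause point in Fig. 6.12 below.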
This framework would suggest that a pause in the annealing schedule some (problem dependent) time after the minimum gap may lead to an increase in the success probability (that is, the population in the ground state at time s = 1). Working with 12-qubit problems withJ ij 2 [1; 1] (uniformly random), we indeed nd such a correlation between the location of the minimum gap, and the optimal pause point, s opt p , where for over 90 of the 100 problems tested the best place to pause is after the minimum gap. This is demonstrated in Fig. 6.12, where on average s opt p s gap + 0:14. We comment brie y on the few outliers (e.g. withs gap <s opt p , or not in the main cluster of points) in the data set. For some of these 12-qubit problems, they are solved almost 100% of the time by the D-Wave annealer (i.e. they are extremely easy optimization problems). These are problems that have large minimum gaps min > 1GHz, and we indicate them in red in the plot. For these instances, we typically do not observe a well dened optimal 88 CHAPTER 6. 6.2. PROBING THERMALIZATION pause point; since the gap is so large for all s, we expect very little thermal excitation to occur at all, hence pausing has little eect. We see these red points have a fairly random spread in the s opt p -axis. The second set of outliers are (some of the) instances which have minimum gaps rel- atively late in the anneal. We observe these instances do not have well dened optimal pause points, and expect this is due to quirks of their individual spectra. In particular when the minimum gap occurs late in the anneal (e.g. after the point when Q < 0:1), either the transition time scale t r may already be too large for eective thermalization to occur during a pause (since the z i matrix elements driving the transitions in Eq. (6.2), governed by Q, are negligible), or the gap doesn't open up enough before the end of the anneal to transfer enough population out of the excited states. It is interesting to note that the observation of instances without well dened optimal pause points appears to be a small size eect, since for all large problems we tested con- taining over 100 qubits (e.g. as in Fig. 6.19 below), exhibited well dened optimal pause points. Nevertheless, the overall trend is clear, with the majority of problem instances exhibit- ing an optimal pause point in a narrow region shortly after the location of the minimum gap. We similarly study the same problems where we re-scale the problem Hamiltonian by a constant factor (2,4,8). This has two eects; 1) it shifts the location of the minimum gap to later in the anneal, and it also reduces the size of the minimum gap. We indeed see that correspondingly, the location of the optimal pause point shifts to later in the anneal (see inset of Fig. 6.13). Interestingly, we also observe that the location of s opt p concentrates (thus becoming less correlated with s gap ) upon reducing the energy range of the problem; notice how in Fig. 6.13, the purple points (H p =8) are almost perfectly aligned close to s opt p = 0:8. We also see this by noting that the error bars (standard deviation) in the s opt p -axis decrease (inset of gure). We explain this behavior by pointing out that by dividingH p by a large enough factor, ! 01 (s) < 1 for s > s gap where ! 01 (s) := E 1 (s)E 0 (s) is the gap between the ground and rst excited state, and the inverse temperature of the annealer. This implies the system should be able to eectively thermalize until very late in the anneal, until the matrix elements in Eq. 
(6.2) become small enough (determined by Q being small enough). This picture means that the start of the region where the dynamics become frozen (purple region in Fig. 6.6) is determined not so much by the problem itself (i.e. the exact spectrum as a function of s) as by the annealing schedule (i.e. Q(s)). Thus different problems may exhibit very similar optimal pause points. It would be worthwhile to explore this in more detail to confirm this hypothesis.

Figure 6.13: Effect of reducing the problem energy scale (for the same 100 problems studied in Fig. 6.12). We divide the problem Hamiltonian H_p by 1, 2, 4, 8 (see legend). Inset: mean data point for each group of 100 instances upon dividing H_p by 1, 2, 4, 8 (see legend). The dash-dot line is a least-squares fit to the median data points. Error bars are the standard deviation.

6.2.3 Quantum Boltzmann distribution

For a set of 10 problem instances of size 12 qubits with J_ij ∈ [-1, 1] (uniformly random), each with well-defined optimal pause points, with Δ_min < 1 GHz and Q(s_gap) > 0.1, we compare the returned statistics to the instantaneous quantum Boltzmann distribution ∝ exp(-β(T)H(s)) (projected onto the z-eigenbasis), for various choices of T; we vary the temperature T from (1/4)T_DW to 4T_DW, in increments of (1/4)T_DW. Here T_DW = 12.1 mK is the operating temperature of the annealer. We outline these calculations below (Sect. 6.2.3.1). We study these problems with a very long pause length of t_p = 1000 μs, to allow enough time to thermalize, and run with a short anneal time (excluding the pause time) of t_a = 1 μs, i.e. the quickest possible rate ds/dt, so that we can try to isolate the effect of the pause by `quenching' the system.

An example of this analysis for a single instance is shown in Fig. 6.14; we note, however, that the pattern looks much the same for all of the instances studied. Here, we focus on the distribution returned from the annealer with a pause at the optimal pause point s_p^opt, and compare this to a distribution of the form exp(-β(T)H(s)), leaving (s, T) as parameters to optimize over. It is interesting to note that we do indeed seem to observe a correspondence between the projected quantum Boltzmann distribution at (s_p^opt, T_DW) and the data sampled from the experimental device. In fact, over all of the instances D_KL(s_p^opt, T_DW) = 0.076 ± 0.065, where s_p^opt = 0.57 ± 0.05.

Having said this, however, we observe that the experimental distribution is best described by a hotter Boltzmann distribution, at a later point in the anneal than the optimal pause point. In particular, for these problems the optimal parameters (s*, T*), such that D_KL is minimized, correspond to s* = 0.78 ± 0.10 and T* = 26.1 ± 8.8 mK (up to 4 times the physical temperature). The corresponding optimal KL-divergence over all problems is D_KL^min = 0.016 ± 0.015, seemingly markedly better than at (s_p^opt, T_DW).

We comment that at values of s* this late in the anneal, with Q(s*) ~ 10^{-3}, the distribution would be expected to be effectively a classical Boltzmann distribution (of H_p); that is, of the form ∝ exp(-β̃H_p), where β̃ is an effective inverse temperature. This is demonstrated explicitly in Fig. 6.15. Here we compare the D-Wave distribution of a single instance, sampled from an anneal with a pause at s_p^opt, to the optimum found over all (s, T) in exp(-β(T)H(s)). We plot on a logarithmic scale to show the similarity to a classical Boltzmann distribution of H_p, for which ln(p_i/g_i) = -β̃E_i - ln Z (i.e.
a straight line on this graph). We notice that indeed, both the experimental data and the computed Boltzmann dis- tribution seem to t reasonably well to a linear t. This indicates classical thermalization is occurring to some extent, but also mention it is not clear whether the two distributions correspond to one and the same; there are clearly some large divergences (the y axis is a logarithmic scale). These general observations explain the diagonal `streak' in Fig. 6.14: If the returned experimental data does indeed t to a Boltzmann distribution at a late point in the anneal, 91 6.2. PROBING THERMALIZATION CHAPTER 6. when Q(s ) 1, then the distribution exp((T )H(s )) exp((T )B(s )H p ), i.e. a classical Boltzmann distribution. Therefore there are a range of values (s ;T ) which give a similar distribution, i.e., so long asB(s )=k B T is constant. This means that ifs (hence B(s )) is larger, T should also larger to compensate, as we see in this gure. This explains, to some extent, the variability in the values of (s ;T ) mentioned above. Most likely there are many values of (s ;T ) which provide a very good t to the exper- imental data. It is perhaps therefore not meaningful to speak of a single optimal value (s ;T ). Having said this however, we do universally nd the data does t better to a hotter Boltzmann distribution, at a later point in the anneal as compared to (s opt p ;T DW ), sug- gesting perhaps there are non-trivial dynamics occurring after the pause at s opt p , and that therefore we are not appropriately quenching the dynamics. That is, even though we may hope to observe directly a projected quantum distribution, dynamics up until later in the anneal may allow the system to exhibit some classical thermalization still. It is also interesting to note that in the absence of a pause we do not see any clear cor- respondence between the D-Wave distribution and a Boltzmann distribution. Performing the same analysis as above with an anneal with no pause, the KL-divergence values vary wildly, with D KL min = 0:19 0:15. That is, over the range of (s;T ) for which we computed exp((T )H(s)), there is typically no good choice of (s;T ), and it is not clear what the distribution is. If the system is not able to thermalize appropriately during the anneal (e.g. because the anneal time t a is too quick), there is no expectation for the D-Wave distribution to be close to any Boltzmann distribution. We conjecture that after the optimal pause point non-trivial dynamics do indeed still occur. The intermediate pause helps the D-Wave distribution equilibriate to the instanta- neous thermal distribution, and after this, although the thermalization time-scale is `large' (relative to t p;H ), some dynamics, will inevitably occur. Though we have presented direct evidence suggesting the pause does indeed provide a key role in allowing the system to thermalize, more work is however required to understand precisely the distribution one is sampling from at the optimal pause point. The main constraint in our experiments prohibiting us to probe these details further is the maximum annealing rate ds dt which is limited to 1s 1 on the present annealer. That is, even though we hope to be approximately quenching the system from after the optimal pause point (i.e. measuring in the middle of the anneal), in reality non-trivial internal dynamics, driven by Q, may still occur before late in the anneal. 
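Before returning to the quench issue, note that the near-degeneracy of fitting parameters discussed above (any pair (s, T) with the same value of B(s)/k_B T yields essentially the same classical distribution) is simple to verify numerically. A toy sketch, with a placeholder B(s) and an invented spectrum, neither of which corresponds to a real instance:

import numpy as np

B = lambda s: 12.0 * s            # placeholder problem-Hamiltonian scale (GHz)
kB = 20.836619                    # GHz/K
T_DW = 12.1e-3                    # K

E = np.array([-10.0, -9.0, -8.0, -7.0])    # toy H_p levels (units of J_max)
g = np.array([2, 6, 12, 20])               # toy degeneracies

def classical_dist(s, T):
    """p_i proportional to g_i exp(-B(s) E_i / k_B T) for the toy spectrum above."""
    w = g * np.exp(-B(s) * E / (kB * T))
    return w / w.sum()

# Two (s, T) pairs with equal B(s)/k_B T give the same distribution ...
p1 = classical_dist(0.40, T_DW)
p2 = classical_dist(0.80, 2 * T_DW)
print(np.allclose(p1, p2))        # True: only the ratio B(s)/k_B T matters
# ... whereas changing the ratio changes the distribution.
print(np.allclose(p1, classical_dist(0.80, T_DW)))   # False

This is why a whole `streak' of (s*, T*) values can fit the data comparably well once the distribution has become effectively classical.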
The ability to perform a true quench should allow us to verify precisely the picture presented here: if we could find a region where the system is predicted to thermalize to a quantum Boltzmann distribution while Q(s) is still large (e.g. around the region of the minimum gap, or the green region in Fig. 6.6), one should observe unique, unambiguous optimal fitting values (s*, T*). We provide some more intriguing evidence in the next section, where we focus on a set of large-scale problem instances which strongly suggest classical thermalization is occurring, that is, thermalization to a Boltzmann distribution of H_p.

Figure 6.14: KL divergence D_KL between data from the annealer, P_DW, and the Boltzmann distribution P_QBM (projected into the computational basis) for various choices of (s, T), for a single 12-qubit instance. The D-Wave data is sampled from a pause of length t_p = 1000 μs at s_p^opt, with t_a = 1 μs (from 10000 anneals and 10 choices of gauge). We indicate in the plot three key parameters: the physical temperature T_DW = 12.1 mK, the location of the minimum gap s_gap, and the optimal pause point s_p^opt. The white diamond corresponds to the minimum value of D_KL over all (s, T), and is equal to D_KL^min = 0.01 bits of information. Note, to be able to distinguish the features in the plot, we set the upper limit of the color scale to D_KL = 0.2 (any value above this is mapped to this color).

Figure 6.15: Comparison of the D-Wave probability distribution and the closest fit (as measured by KL-divergence) to a quantum Boltzmann (QBM) distribution of the form exp(-β(T)H(s)), where the fit is over the parameters (s, T), for a 12-qubit problem. Optimal values for this problem are (s*, T*) = (0.76, 18.5 mK) (corresponding to D_KL^min = 0.03). Here p_i is the probability of observing a configuration with energy E_i, and g_i is the degeneracy of that energy level. The solid and dash-dot lines are least-squares fits to the annealer data and the QBM data respectively. The annealer data is from a schedule with a pause at the optimal pause point (s_p^opt = 0.59), over 50 samples, each of 10000 anneals (and 10 gauges), with t_a = 1 μs and t_p = 1000 μs. Error bars are the standard deviation over the 50 samples.

6.2.3.1 Computing the quantum Boltzmann distribution

We wish to compare the distribution returned from the D-Wave annealer, i.e. the probability P_DW(E) that a configuration with energy E is returned, to what would be expected if the annealer were instantaneously thermalizing to the quantum Boltzmann distribution ρ(s,T) := (1/Z) e^{-βH(s)}, where Z = Tr e^{-βH(s)} and β = 1/k_B T, with T a temperature parameter. We note that the D-Wave annealer can only measure in the computational (z) basis, and so, assuming the system can be quenched appropriately, to compare the probability distributions we compute

P_{QBM}^{(s,T)}(E) = \sum_{z \,:\, H_p|z\rangle = E|z\rangle} \langle z|\rho(s,T)|z\rangle   (6.3)

for s ∈ [0, 1] (steps of 0.01) and T ∈ [1/4, 4] T_DW (steps of (1/4)T_DW).

6.2.4 Classical Boltzmann distribution

The extent to which annealers sample from a classical Boltzmann distribution at some point s* late in the anneal is a hotly contested issue [51, 107, 137-139]. Sampling from a Boltzmann distribution is NP-hard, so annealers would not be expected to solve this problem efficiently across the board, but they could have an advantage over the best classical methods.
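Referring back to Sect. 6.2.3.1, the following is a minimal sketch of the calculation behind Eq. (6.3): build ρ(s,T) for a small toy system, project onto the computational basis, aggregate by H_p energy, and scan an (s, T) grid for the best fit to a sampled distribution. The toy diagonal H_p, the placeholder schedules, the coarse grid and the `p_dw' data are all illustrative stand-ins for the real instances and device data.

import numpy as np

sx = np.array([[0., 1.], [1., 0.]]); I2 = np.eye(2)

def transverse_field(n):
    H = np.zeros((2 ** n, 2 ** n))
    for k in range(n):
        term = np.array([[1.0]])
        for i in range(n):
            term = np.kron(term, sx if i == k else I2)
        H -= term
    return H

n = 4
rng = np.random.default_rng(2)
Ep = rng.integers(-6, 0, size=2 ** n).astype(float)    # toy H_p spectrum (diagonal, degenerate)
Hp, Hd = np.diag(Ep), transverse_field(n)
A = lambda s: 6.0 * (1.0 - s) ** 2                     # placeholder schedules (GHz)
B = lambda s: 12.0 * s
kB, T_DW = 20.836619, 12.1e-3                          # GHz/K, K

def projected_qbm(s, T):
    """Eq. (6.3): diagonal of rho(s,T) in the z-basis, aggregated by H_p energy."""
    w, V = np.linalg.eigh(A(s) * Hd + B(s) * Hp)
    boltz = np.exp(-(w - w[0]) / (kB * T))             # shift by E_0 for numerical stability
    pz = np.real(np.diag((V * boltz) @ V.T))
    pz /= pz.sum()
    levels = np.unique(Ep)
    return levels, np.array([pz[Ep == E].sum() for E in levels])

def kl_bits(p, q):
    p, q = p / p.sum(), np.maximum(q / q.sum(), 1e-300)
    m = p > 0
    return float(np.sum(p[m] * np.log2(p[m] / q[m])))

levels, _ = projected_qbm(0.5, T_DW)
p_dw = rng.dirichlet(np.ones(len(levels)))             # placeholder for sampled annealer data
best = min(((s, T, kl_bits(p_dw, projected_qbm(s, T)[1]))
            for s in np.arange(0.30, 1.001, 0.05)      # coarser than the 0.01 grid in the text
            for T in np.arange(0.25, 4.001, 0.25) * T_DW),
           key=lambda t: t[2])
print("best fit (s*, T*, D_KL in bits):", best)

For the real 12-qubit instances the same procedure is simply run with the actual H(s), the published schedule, and the measured P_DW.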
Quantum computing is known to have advantages over classical computing for sampling from certain distributions [15, 154], including Gibbs distributions [155], but whether quantum annealing has an advantage remains an open question. If problems for which machine learning is applicable (e.g. in the restricted Boltzmann machine paradigm) freeze out at a point late in the anneal, when Q ≪ 1, then annealers may have an advantage over classical samplers [138-140].

With the advent of a new entropic sampling technique based on population annealing [156], we were able to accurately estimate the degeneracies for 225 planted-solution instances containing 501 qubits (that is, the estimated ground and first excited state degeneracies are within 5% of the known values found by planting). For more information on these techniques see Refs. [51, 156] and Appendix D.

Due to the large size of these problems, we are of course not able to compute the Boltzmann distribution of the full Hamiltonian H(s) as we did in the previous section. However, having accurate values for the degeneracies allows us to calculate the classical (problem Hamiltonian) Boltzmann distribution ∝ exp(-β̃H_p), where β̃ is an instance-dependent effective inverse temperature, i.e. a fitting parameter, which depends on the physical temperature and on the strength of the problem Hamiltonian B(s) (and in principle anything else affecting the distribution returned from the annealer, such as various noise sources). If the distribution returned from the D-Wave machine is indeed a classical Boltzmann distribution at a freeze-out point s* late in the anneal (when Q(s*) ≪ 1), then one would expect (ignoring any other noise sources) to find β̃ = βB(s*)/J_max, where β = 1/k_B T is the physical inverse temperature, B(s*) is the problem Hamiltonian strength at the freeze-out point, and J_max = max|J_ij| is a normalization parameter (since the J_ij programmed into the quantum annealer are restricted to the range [-1, 1]).

We make two key observations. 1) All of the problems tested exhibit a strong peak in the success probability in a fairly narrow region s_p^opt ∈ [0.35, 0.46] (i.e. much less varied than for the 12-qubit instances studied above). 2) The data returned from the annealer, for all of these problems, matches closely a classical Boltzmann distribution for H_p, but at a higher temperature than the operating temperature of the annealer (at least 1.5 times higher). This is in accordance with the results of Ref. [51], where the calculated freeze-out points for most large problems were very early in the annealing schedule (equivalent to a higher than expected temperature), although in that work a pause during the anneal, which probes thermalization more directly, was not available. We demonstrate these points below.

First consider Fig. 6.16, where for a single instance we plot ln(p_i/g_i) against E_i, where p_i is the probability of observing a configuration with energy E_i, and g_i is the degeneracy of that energy level. One can see that the data returned from the annealer corresponds closely to a linear fit (for all problems, and all pause points s_p, we find the R^2 (coefficient of determination) value is greater than 0.97, and up to 0.9999). That is, the data from the annealer seems to fit p_i = (g_i/Z) e^{-β̃E_i}, for some constants Z and β̃ (which can be determined by least-squares fitting). Though the results are clear, the correct interpretation of them is not.
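A sketch of the fitting procedure just described: given sampled level probabilities p_i and degeneracies g_i, fit ln(p_i/g_i) = -β̃E_i - ln Z by least squares and report β̃ and R^2. The numbers below are hypothetical placeholders, not data from the device.

import numpy as np

def fit_effective_beta(E, p, g):
    """Least-squares fit of ln(p_i/g_i) vs E_i; returns (beta_eff, lnZ, R^2)."""
    E = np.asarray(E, float)
    y = np.log(np.asarray(p, float) / np.asarray(g, float))
    slope, intercept = np.polyfit(E, y, 1)
    y_hat = slope * E + intercept
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    return -slope, -intercept, r2

# Hypothetical level data (energies in units of J_max, probabilities, degeneracies):
E = np.array([-10.0, -9.0, -8.0, -7.0])
g = np.array([2, 6, 12, 20])
p = np.array([0.52, 0.31, 0.13, 0.04])
beta_eff, lnZ, r2 = fit_effective_beta(E, p, g)
print(f"beta_eff = {beta_eff:.3f}, R^2 = {r2:.4f}")

The slope of the fit is the effective inverse temperature β̃, and R^2 quantifies how close the returned samples are to a classical Boltzmann distribution of H_p.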
For example, if we obtain the eective inverse temperature of the distribution ~ from the least squares tting, and set it equal to ~ = B(s )=J max , with knowledge of = 1=k B T , and J max , we can calculate B(s ). If one does this however, the value returned corresponds to an extremely early point during the anneal, even earlier than s opt p (e.g. with Q 1, or equivalently s 0:35). If one however assumes the thermalization picture presented in Sect. 6.1, which suggests the dynamics should freeze when the relaxation time scale is longer than the system time scale, i.e. approximately around the optimal pause point, s opt p = 0:42 0:01 for these problems, the temperature required for the t is > 1:5 times higher than the physical temperatureT = 19:81:1mK (compared toT DW = 12:11:4mK). It is however not clear whether exp(H(s )) with s = 0:42 and Q(s ) 0:5 would indeed correspond to a classical Boltzmann distribution (ofH p ) since the o diagonal driver is still relatively strong in magnitude. Indeed, this is a somewhat similar result as from the previous section where we observed the optimal parameter value for s was in fact slightly after s opt p (and T larger than the physical temperature). If there are in-fact still dynamics afters opt p , the dynamics will be frozen somewhat later in the anneal (whenQ(s) is smaller), 96 CHAPTER 6. 6.2. PROBING THERMALIZATION Figure 6.16: Fitting experimental data to linear t for a single 501 qubit instance. We show data for three dierent pause points (and from the standard annealing schedule). We see the standard schedule is almost indistinguishable from the case where the anneal is paused ats = 0:2; 0:8 (slope ~ 0:35), however, when pausing ats =s opt p = 0:44 for this instance, we see a change in the distribution returned (purple, with slope ~ = 0:44). We also plot the corresponding classical Boltzmann distribution expected (black-dash line) if the system were thermalizing toH p ats =s opt p , at temperature 12.1mK (with slope ~ = 0:61). Note, freeze-out at s = 1 would give ~ = 2:52. Inset: Average energy returned by the annealer as a function of pause point for the same instance. The curve has a minimum value at s opt p = 0:44. Experimental data obtained from 10000 anneals with 5 choices of gauge, with t a = 1s, and t p = 100s. 97 6.2. PROBING THERMALIZATION CHAPTER 6. Figure 6.17: Quantifying accuracy of linear regression usingR 2 for ln p i g i as a function ofE i (as demonstrated in Fig. 6.16) for a single 501 qubit instance. For each data point shown, we obtain a least-squares tting from the distribution returned by the annealer from which we calculate the R 2 value. The solid line (red, right-axis) is the average energy returned from the annealer. We see the peak inR 2 corresponds to the region arounds opt p (just after s p = 0:4). Each data point is obtained from 10000 anneals with 5 choices of gauge, with t a = 1s, and t p = 100s. and the associated temperature of the t will be larger. We discuss some implications of this below. For now we compare the samples from the optimal pause point to those from outside of it. In Fig. 6.17 we plot theR 2 value found by the least squares tting for a typical instance as a function of pause point s p . We see that the peak corresponds closely to the optimal pause point under the pause. 
Moreover, we see a very similar trend for all of the problems we tested, with the pause point for which the largest R 2 value is observed diering by at most 3% of the annealing schedule from the optimal pause point; s max R 2 = s opt p 0:03. This indicates that the data returned from a pause at this critical point, ts better to a classical Boltzmann distribution, as compared to the rest of the data (or indeed, from the distribution returned in the absence of a pause in the schedule). If indeed the problems are thermalizing to a classical Boltzmann distribution, this work shows that by pausing the anneal at a particular (instance-dependent) point allows a more complete thermalization to occur. This result is similar to that found in the previous section. We performed a similar analysis for three other problem sizes (N = 31; 125; 282 qubits), and nd that increasing the problem size in general increases the mean R 2 value for a t to a classical Boltzmann distribution. For N = 31; 125; 282; 501, the corresponding values arehR 2 i = 0:911; 0:994; 0:995; 0:997, where the average (median) is over all instances and all pause points s p tested. Moreover, the correlation between s opt p and s max R 2 seems to also increase with problem size, as demonstrated in Fig. 6.18, which shows the variation between 98 CHAPTER 6. 6.3. IMPROVING PERFORMANCE VIA THERMALIZATION Figure 6.18: Average dierence between the pause points p for which the maximalR 2 value is found (i.e., the closest to a Boltzmann distribution), and the optimal pause point, with problem size. The problems are dened on square sub-graphs of the full D-Wave chimera with side-length SL. The corresponding number of qubits is given in the legend. The data points are an average of (at least) 55 instances each. Error bars represent standard deviation. To collect our data we used 10000 anneals with 5 choices of gauge, witht a = 1s, and t p = 100s. dierent instances decreases with problem size, and seems to suggest that s opt p s max R 2 , for large N (i.e., the optimal pause point and the pause location for which the best t to a Boltzmann distribution is observed, coincide for large problems). 6.3 Improving Performance via Thermalization Though it is clear experimental quantum annealers suer in performance due to a cou- pling with the environment (i.e. as compared to the closed system case), this chapter has hopefully demonstrated that thermalization can itself actually be used and exploited to improve performance over the generic case. In particular, by inserting a pause in the annealing schedule can result in many orders of magnitude improvement in performance, an eect we related to thermalization (i.e. relaxation) after the minimum gap. It is remarkable that the peak which one observes in success probability (as a function of pause point, as in Fig. 6.7) is extremely well dened, occurs in such a concentrated region, and exists for almost all problems we studied (the only exception being some small problem instances, as we discussed). For problems of the planted-solution type, there seems to be little dependence on problem size for the position of this peak. Fig. 6.19 plots the optimal pause point, s opt p , which does not vary much with problem size and, in 99 6.3. IMPROVING PERFORMANCE VIA THERMALIZATION CHAPTER 6. Figure 6.19: The optimal pause point s opt p as a function of problem size, for two dierent problem classes (see legend). 
The problems were generated on a square subgraph of the chimera with `side-length' SL, consisting of SLSL unit cells each containing 8 qubits (see e.g. Fig. 6.2). Each SL shown [4; 8; 12; 16] corresponds to (taking account for dead qubits) N = [127; 507; 1141; 2031] respectively. Each data point shown is an average over (at least) 50 instances. Error bars represent the standard deviation. Each instance (for each s p tested) is averaged from 10000 anneals with 5 dierent choice of gauge, with t a = 1s (not including the pause time), and t p = 100s. addition, the deviation (i.e., the error bars in the gure) in the samples appears more or less constant. For problems generated withJ ij 2 [1; 1] (uniformly random), we see a mild eect with increasing problem size, in which the optimal pause point seems to decrease, and concentrate in location (i.e. the error bars in the gure are decreasing with problem size). This eect would presumably saturate with large enough N. Moreover in Fig. 6.20, we see the width of the peak only depends very weakly (or not at all) on the problem size; this suggests that for most problems, regardless of size, there is a fairly large window in which one can pause and observe an increase in success probability. The increase in the success probability can be dramatic, as shown in Fig. 6.21, where for many of the (planted-solution) problems tested, the average success probability is increased by up to four orders of magnitude. These simple observations demonstrate that one may be able to design more ecient annealing schedules by annealing very quickly, and pausing for a relatively short time, as compared to running the default annealing schedule for a long time (which has a logarithmic increase in success probability with annealing time). This makes use of the fact that 1) the optimal pause point is in a fairly narrow region across a problem class and problem size so can be isolated with ease, 2) the width of the peak is fairly consistent for dierent problems of a particular class, and crucially does not e.g. close exponentially as for example 100 CHAPTER 6. 6.3. IMPROVING PERFORMANCE VIA THERMALIZATION Figure 6.20: The width of the peak on a graph ofjhEij Vs. s p , as a function of problem size. The `full-width-at-half-maximum' (FWHM) is the width of the peak in the energy curve jhEij as a function of s p , at (jE(s opt p )j +jE BG j)=2 where E BG is the background average energy (i.e., the average energy returned from the annealer in the absence of a pause), and E(s opt p ) the (mean) energy returned by the annealer at the optimal pause point. Note, modulus is used since all energies observed negative. The problems are the same as in Fig. 6.19. the minimum gap does, and 3) even pausing for a `short' time of say 10s can cause a signicant increase in ground state probability. This relative robustness across instances within a problem class therefore suggests that good heuristics for the pause length and location may be derivable from empirical evaluation of a few instances within a problem class. We propose the following simple algorithm for demonstrative purposes: 1: procedure OptimizePauseSchedule(N problem instances of some class) 2: Randomly select MN of the input instances 3: Pick interval ds (e.g. 
ds = 0.01)
4: Pick annealing parameters: number of anneals n_a, anneal time t_a, pause time t_p
5: Initialize S_p = [ ] (empty array)
6: for instance = 1 : M do
7:   Initialize ⟨E_min⟩ = ∞, s_p^opt = 0
8:   for s = 0 : ds : 1 do
9:     Run instance with n_a anneals, for anneal time t_a, with a pause at s_p = s for time t_p
10:    Record the average energy of the solutions, ⟨E(s_p)⟩, from the n_a anneals
11:    if ⟨E(s_p)⟩ < ⟨E_min⟩ then
12:      Update: ⟨E_min⟩ = ⟨E(s_p)⟩, s_p^opt = s_p
13:    end if
14:  end for
15:  Append s_p^opt to S_p
16: end for
17: Return median(S_p)
18: end procedure

Figure 6.21: For the planted problems of Fig. 6.19 (50 instances per side-length), we plot the success probability under the default annealing schedule (with t_a = 1 μs), P_0, as compared to the success probability for an anneal with a pause (t_p = 100 μs) at the optimal pause point, P_0(s_p^opt). We typically see orders of magnitude improvement for the hard problems. Note, the problems which are plotted vertically on the y-axis have P_0 = 0 (i.e. were not solved once over all anneals). Note, some problems which were not solved once, even under the pause, are not included.

This algorithm takes advantage of the three points above, in particular that the optimal pause point across a particular problem class is fairly robust (and that a pause does not seem to hinder performance). The above algorithm attempts to find a good heuristic pause point to use across an entire problem class. In the initialization step 4, it is best to take `small' values of t_a, t_p and n_a so that the algorithm can be run efficiently; the total run time is O(M n_a (t_a + t_p)/ds). The algorithm returns the predicted best pause point to use for the problem class (of course, the algorithm can be modified in many ways). If one wishes to solve many thousands of problems (N), it may be most efficient to first run the above algorithm (for some judicious choice of M), and then test the problems with this adapted schedule. Even though this a priori adds to the total solve time of the problems, since it should increase the performance of the annealer in the average case, for N large enough the total solve time may actually decrease (i.e. one requires fewer anneals per instance to solve it, hence may require less time overall).

Obtaining a deeper understanding of the mechanism behind the phenomena reported in this chapter will further aid in developing such adaptive annealing schedules. To probe this further, it would be imperative to allow for a more appropriate quench, that is, to allow for an annealing rate ds/dt ≫ 1 μs^{-1}. This would allow us to compare more precisely the instantaneous D-Wave distribution to the predicted thermal distribution. Moreover, it would be useful to have better (real-time) temperature estimates of the qubits in the device, for the same purpose.

Also important is to understand better how analogue errors affect the device. For example, since the couplings J_ij are analogue in nature, the value of J_ij is actually normally distributed, and thus the one programmed into the annealer may differ somewhat from the one intended. The result is that even if the annealer performs perfectly, it may solve the wrong, or malformed, problem, in general resulting in a lower success probability. It is in fact known that larger problems are more susceptible to control errors, as indicated in Fig. 6.22.
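For concreteness, a minimal Python rendering of the OptimizePauseSchedule procedure above; run_anneals is a hypothetical callback standing in for the submission of n_a anneals (with a pause of length t_p at s_p) to the device and the return of their average energy, since the actual device API is not reproduced here.

import random
import statistics

def optimize_pause_schedule(instances, run_anneals, M=5, ds=0.01,
                            n_a=1000, t_a=1.0, t_p=10.0):
    """Heuristic optimal pause point for a problem class (cf. steps 1-18 above).

    run_anneals(instance, n_a, t_a, s_p, t_p) -> average energy of the returned
    samples (hypothetical interface to the annealer).
    """
    subset = random.sample(instances, min(M, len(instances)))   # step 2
    S_p = []                                                    # step 5
    for instance in subset:                                     # step 6
        E_min, s_opt = float("inf"), 0.0                        # step 7
        steps = int(round(1.0 / ds))
        for k in range(steps + 1):                              # step 8: s = 0, ds, ..., 1
            s = k * ds
            E_avg = run_anneals(instance, n_a, t_a, s, t_p)     # steps 9-10
            if E_avg < E_min:                                   # steps 11-13
                E_min, s_opt = E_avg, s
        S_p.append(s_opt)                                       # step 15
    return statistics.median(S_p)                               # step 17

The total run time scales as O(M n_a (t_a + t_p)/ds), so small M, n_a and t_p are preferable, exactly as noted above. Such a heuristic only tunes where to pause; it does not mitigate the analogue control errors quantified in Fig. 6.22.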
This gure shows that the larger the problem size, the more variability in performance per instance. Since the thermal uctuations are consistent across all problem sizes, it is most likely that some sort of control error is causing this variability. This is in agreement with the fact that larger problems, which have more couplings dening them, have a higher likelihood of some couplings varying signicantly from the ones intended (i.e. from the tail end of the normal distribution on the J ij ). This observation alone may make it infeasible to use quantum annealers of the current construction for any practical purposes (i.e. solving large-scale problems), and should therefore be addressed in future 103 6.3. IMPROVING PERFORMANCE VIA THERMALIZATION CHAPTER 6. Figure 6.22: Typical spread of eective inverse temperature as a function of problem size. Our measure for spread is the 95th to 5th percentile mean ratio e averaged over instances of each problem size. The data is from two D-Wave Two quantum annealers, one at 16mK (`hot', in red), one at 13mK (`cold', in blue), for planted-solution instances. e is computed by measuring the freeze-out point, and using the known operating temperatures (for more information, see Ref. [51]). We take the ratio to overcome any bias from the cold chip recording higher values of e (see inset). We nd both devices follow a nearly identical trend: uctuations increase and e decreases with problem size. Inset: Median e for each problem size. 95% condence interval error bars obtained by bootstrapping over the instances of each particular N. annealer designs. 104 Chapter 7 Concluding Remarks The last few years have shown huge promise in the pursuit for quantum computers. Each passing year produces new technologies, and with investment from the private sector and governments, progress is increasing rapidly. There are many universities and research cen- ters devoted to the eld, in addition to `quantum start-up' companies a and well-established `traditional' computing companies driving progress. Small scale devices { around 50 qubits { have been produced, and have demonstrated basic proof of principle operation. In fact, some of these are even accessible over the cloud, for use by the general public b . The next benchmark to pass is `quantum supremacy' [21], which, although in and of itself, not particularly scientically useful, will demonstrate to society, unambiguously, that quantum mechanics can oer advantages over traditional computing techniques. This benchmark will most likely be passed using `noisy intermediate scale quantum devices' [157]; devices with no error correction, but gate error rates low enough, and coherence times just long enough so that to a good enough approximation, they are error free. Moving forward from this stage is somewhat daunting and requires a signicant revamp in the current technology. Indeed, as discussed in the Introduction, error correction will be required in order to build a large-scale device, and most likely much longer coherence times will be necessary. Whilst basic error correction techniques have already been demonstrated [158, 159], they result in a signicant overhead, and therefore there are many practical issues to overcome before starting to scale-up the technology. Some of these problems can be tackled from the experimental side, but others seem to be more fundamental, and require an advance in our theoretical understanding in how errors and noise aect quantum systems during a computation. 
It is not clear which of the many proposed designs for future quantum computers will prove ultimately successful (if we are allowed to be optimistic), but it seems clear whichever ones are will rely on a host of quantum error mitigation techniques. a 1QBit, D-Wave Systems Inc., IonQ, Qulab Inc., and Rigetti Computing to name a few. b For example, the IBM Q Experience. 105 CHAPTER 7. One possibility is using the dissipative model of quantum computing. Though this has had in comparison to the circuit model much less attention, it overcomes some of the diculties inherent to this framework. Despite the equivalence in computational power between the two methodologies [44], some tasks may be practically easier to achieve and implement in the dissipative paradigm. As a simple example, in Chapter 3 we demonstrated a scalable architecture capable of performing universal quantum computation. In this framework, there is an in-built robustness to certain types of error, and therefore requires no error correction for these. Considering the numerous techniques introduced in this thesis, it is possible that a hybrid quantum device may prove practical, giving the best of both worlds; that is, a device implementing both unitary and dissipative technologies. In Chapter 5 we showed a basic mechanism to improve quantum memory, by dissipation. This could, for example, be used to implement a type of quantum RAM [160], storing the output state of a unitary computation for use later, whilst in an otherwise decohering environment. This work has attempted to tackle some of the fundamental challenges currently fac- ing the quantum computing community, from two complementary points of view. 1) We have shown directly how one can exploit the unique features of dissipation to aid in quan- tum information processing tasks. In addition to the above mentioned aspects, Chapter 4 demonstrates a very natural manner in which quantum machine learning can be imple- mented through dissipation. 2) We have added to the understanding of how errors aict quantum systems through the use of novel open system models. In fact, in Chapter 6 we went further and not only outlined a model of thermalization, we also experimentally veri- ed some of the predictions made by the model in a current generation quantum annealer. This then allowed us to design more ecient annealing protocols, relating back to the rst point above. The techniques outlined here add to a growing list of technologies and applications for quantum computing. Only time will tell the ultimate fate of the eld. As more precise experiments are performed, and as better theoretical models and tools are developed, we will hopefully gain the keys to unlock this elusive technology. 106 Appendices A Numerical Methods: Vectorization A `superoperator'L2 L(L(H)) acting on the space of linear operators L(H) over some Hilbert spaceH may be hard to compute with numerically. For example, an object of the formE t := e tL does not typically lend directly to numerical simulation, whereLX = P k A k XB k , for some A k ;B k ;X2L(H) (e.g. the Lindblad equation is of this form). Note however sinceL is linear, we can represent it as a matrix (after xing a basis). To do this we can use a tool known as vectorization, where the superoperator action on an operator is reduced to the action of an operator on a vector, but in a larger space. In particular, if d = dimH,L can be thought of as an operator over a vector space of dimension d 2 (that is,L can be represented as a d 2 d 2 matrix). 
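A minimal numerical sketch of this idea (the stacking convention, and hence the precise matrix form, is fixed in what follows): represent the superoperator L[X] = Σ_k A_k X B_k by the d^2 × d^2 matrix Σ_k B_k^T ⊗ A_k acting on the column-stacked vector |X⟩⟩, and check it against the direct action on a test operator. The random matrices below are purely illustrative.

import numpy as np
from scipy.linalg import expm

def vec(X):
    """Column-stacking vectorization |X>> (the first convention of Eq. (A.1) below)."""
    return X.flatten(order="F")

def unvec(v, d):
    return v.reshape((d, d), order="F")

def superop_matrix(terms):
    """Matrix of L[X] = sum_k A_k X B_k, i.e. L_hat = sum_k kron(B_k^T, A_k)."""
    return sum(np.kron(B.T, A) for A, B in terms)

d = 4
rng = np.random.default_rng(0)
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))
X = rng.standard_normal((d, d))
L_hat = superop_matrix([(A, B)])
t = 0.1

# e^{tL}[X] computed two ways: vectorized propagator vs a truncated series of L.
lhs = unvec(expm(t * L_hat) @ vec(X), d)
rhs, term = X.copy(), X.copy()
for n in range(1, 30):
    term = t * (A @ term @ B) / n      # accumulates t^n A^n X B^n / n!
    rhs = rhs + term
print(np.allclose(lhs, rhs))           # True

In practice this is how the Lindblad propagators appearing throughout the thesis are evaluated numerically for small systems.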
In this way we will change the notation of $\mathcal{L}X$ to $\hat{\mathcal{L}}|X\rangle\rangle$ (where the double angle bracket indicates that $X$ is now in vector form, and the `hat' that $\mathcal{L}$ is now a regular operator). There is no unique way to do this. For example (when viewed as a matrix with respect to a basis $|i\rangle$), matrix elements can be mapped as $|i\rangle\langle j| \to |j\rangle\otimes|i\rangle$, or, equivalently, $|i\rangle\langle j| \to |i\rangle\otimes|j\rangle^*$ (where $*$ represents the complex conjugate). The difference between the two, respectively, is shown below:
$$\begin{pmatrix} a & b\\ c & d\end{pmatrix} \to \begin{pmatrix} a\\ c\\ b\\ d\end{pmatrix}, \qquad \begin{pmatrix} a & b\\ c & d\end{pmatrix} \to \begin{pmatrix} a\\ b\\ c\\ d\end{pmatrix}. \tag{A.1}$$
Once the convention is fixed (we pick the former for demonstrative purposes), one can rewrite $\mathcal{L}$. Consider vectorizing the operator $AXB \to |AXB\rangle\rangle$. Writing $X$ in the above basis (with coefficients $x_{ij}$) gives
$$AXB = \sum_{i,j}x_{ij}\,A|i\rangle\langle j|B = \sum_{ij}x_{ij}\,|A_i\rangle\langle B^\dagger_j| \;\to\; |AXB\rangle\rangle = \sum_{ij}x_{ij}\,(B^\dagger|j\rangle)^*\otimes A|i\rangle = \sum_{ij}x_{ij}\,B^T|j\rangle\otimes A|i\rangle = \sum_{ij}x_{ij}\,(B^T\otimes A)(|j\rangle\otimes|i\rangle) = (B^T\otimes A)|X\rangle\rangle, \tag{A.2}$$
where $|A_i\rangle := A|i\rangle$ and $|B^\dagger_j\rangle := B^\dagger|j\rangle$ (also $B^T$ represents the (non-conjugate) transpose of $B$). Noting that vectorization is linear, we can write the general matrix form of a superoperator $\mathcal{L}[X] = \sum_k A_kXB_k$ as $\hat{\mathcal{L}} = \sum_k(B_k^T\otimes A_k)$. That is, all superoperator actions can be reduced to matrix multiplication, e.g., $e^{t\mathcal{L}}[X]\to e^{t\hat{\mathcal{L}}}|X\rangle\rangle$ (this follows simply by noting $|\mathcal{L}^{(n)}[X]\rangle\rangle = \hat{\mathcal{L}}|\mathcal{L}^{(n-1)}[X]\rangle\rangle = \cdots = \hat{\mathcal{L}}^n|X\rangle\rangle$, where $\mathcal{L}^{(n)}$ means apply the superoperator $\mathcal{L}$ $n$ times).

B Appendices for Chapter 3

B.1 Effective unitary dynamics for time-dependent Hamiltonian

Here we let $\mathcal{L}(t) = \mathcal{L}_0 + \mathcal{K}(t)$ depend on $t$ through $\mathcal{K}$ and provide a proof sketch of Eq. (3.5), i.e.
$$\left\|\mathcal{E}_T\mathcal{P}_0 - \mathcal{T}e^{\int_0^1 ds\,\tilde{\mathcal{K}}_e(s)}\mathcal{P}_0\right\| = O(1/T). \tag{B.1}$$
We will show this by first stating two claims. The `proof' of the first claim, I readily admit, is not the easiest to follow and could do with an upgrade in its presentation. It is essentially copied from a collection of notes I made over two years ago (which never made it into a formal publication), and is therefore a bit muddled. The idea however is fairly simple; it is a recursive bound on an integral. One should perhaps therefore think of this as more of a `sketch proof', since there may be some small details not included.

First note that at the superoperator level one has $\dot{\mathcal{E}}_t = \mathcal{L}(t)\mathcal{E}_t$. Expanding this as a Dyson series gives
$$\mathcal{E}_t = e^{t\mathcal{L}_0}\left(1 + \int_0^t dt_1\, e^{-t_1\mathcal{L}_0}\mathcal{K}(t_1)\mathcal{E}_{t_1}\right) =: \sum_{n=0}^\infty\mathcal{E}^{(n)}_t, \tag{B.2}$$
where $\mathcal{E}^{(n)}_t = e^{t\mathcal{L}_0}\int_0^t dt_1\cdots\int_0^{t_{n-1}}dt_n\,\hat{\mathcal{K}}(t_1)\cdots\hat{\mathcal{K}}(t_n)$, with $\hat{\mathcal{K}}(t) := e^{-t\mathcal{L}_0}\mathcal{K}(t)e^{t\mathcal{L}_0}$.

Now consider the series $\mathcal{E}_t\mathcal{P}_0$, with the identity insertion $e^{t\mathcal{L}_0} = \mathcal{P}_0 + e^{t\mathcal{L}_0}\mathcal{Q}_0$. Each $\mathcal{E}^{(n)}_t\mathcal{P}_0$ term is a sum of $2^n$ different terms, containing different combinations of the $\mathcal{P}_0$ and $\mathcal{Q}_0$. One can describe such terms by a binary string $\sigma = (\sigma_1,\ldots,\sigma_n)$, where $\sigma_i\in\{0,1\}$ indicates whether at position $i$ it is a $\mathcal{P}_0$ or, respectively, a $\mathcal{Q}_0$. We denote such a term in the sum as $\Lambda^{(n)}_\sigma$, i.e.
$$\mathcal{E}_t\mathcal{P}_0 = \sum_{n=0}^\infty\sum_\sigma\Lambda^{(n)}_\sigma, \tag{B.3}$$
where the sum over $\sigma$ is over all the $2^n$ bit strings. If one considers those only containing $\mathcal{P}_0$, i.e. $|\sigma| = 0$, we see this gives exactly $\mathcal{T}e^{\int_0^T\mathcal{P}_0\mathcal{K}(t)\mathcal{P}_0\,dt}\mathcal{P}_0 = \mathcal{T}e^{\int_0^1 ds\,\tilde{\mathcal{K}}_e(s)}\mathcal{P}_0$.

We now seek to bound the remaining terms, i.e. for each $n$, those with $|\sigma| > 0$. That is:
$$E := \left\|\mathcal{T}e^{\int_0^T\mathcal{L}(t)\,dt}\mathcal{P}_0 - \mathcal{T}e^{\int_0^T\mathcal{P}_0\mathcal{K}(t)\mathcal{P}_0\,dt}\mathcal{P}_0\right\| \le \sum_{n=1}^\infty\sum_{\sigma:|\sigma|>0}\|\Lambda^{(n)}_\sigma(T)\|. \tag{B.4}$$
We make two claims which prove the statement.

Claim B.1. $\|\Lambda^{(n)}_\sigma(T)\| \le c^n\sup_{t\in[0,T]}\|\mathcal{K}(t)\|^n\,\frac{T^{n-|\sigma|}}{(n-|\sigma|)!}$, for some constant $c$.

Since, for fixed $n$ and $|\sigma|$, there are $\binom{n}{|\sigma|}$ such terms, we can rewrite our above bound as a sum:
$$E \le M := \sum_{n=1}^\infty c^n\sup_{t\in[0,T]}\|\tilde{\mathcal{K}}(t)\|^n\sum_{k=1}^n\frac{T^{-k}}{(n-k)!}\binom{n}{k}, \tag{B.5}$$
where $\mathcal{K} = \tilde{\mathcal{K}}/T$. We rewrite this in powers of $T$, after defining $D := c\sup_t\|\tilde{\mathcal{K}}(t)\|$, as
$$M = \sum_{n=1}^\infty\frac{1}{T^n}\sum_{m=n}^\infty\frac{D^m}{(m-n)!}\binom{m}{n}. \tag{B.6}$$

Claim B.2. $M = O(1/T)$ for large $T$.

Assuming these claims (proof to follow), we have indeed $E\le O(1/T)$ as required (for large $T$).

Proof of Claim B.1: We wish to bound $\|\Lambda^{(n)}_\sigma(T)\|$. There are essentially four types of term which one needs to integrate over, arising from the insertion of the identity element $e^{t\mathcal{L}_0} = \mathcal{P}_0 + \mathcal{Q}_0e^{t\mathcal{L}_0}$ (and using $\mathcal{P}_0 = \mathcal{P}_0^2$, $1-\mathcal{P}_0 =: \mathcal{Q}_0 = \mathcal{Q}_0^2$, and that $\mathcal{Q}_0\mathcal{L}_0 = \mathcal{L}_0\mathcal{Q}_0$ [c]). Define these as: $\mathcal{K}_{PP}(t) := \mathcal{P}_0\mathcal{K}(t)\mathcal{P}_0$, $\mathcal{K}_{QP}(t) := e^{-t\mathcal{L}_0}\mathcal{Q}_0\mathcal{K}(t)\mathcal{P}_0$, $\mathcal{K}_{PQ}(t) := \mathcal{P}_0\mathcal{K}(t)e^{t\mathcal{L}_0}\mathcal{Q}_0$, and $\mathcal{K}_{QQ}(t) = e^{-t\mathcal{L}_0}\mathcal{Q}_0\mathcal{K}(t)e^{t\mathcal{L}_0}\mathcal{Q}_0$. An example of the form of $\Lambda^{(n)}_\sigma$ is
$$\Lambda^{(n)}_\sigma(T) = e^{T\mathcal{L}_0}\int_0^T dt_1\int_0^{t_1}dt_2\cdots\int_0^{t_{n-1}}dt_n\,\big(\mathcal{K}_{QP}(t_1)\cdots\mathcal{K}_{PQ}(t_{i-1})\mathcal{K}_{QQ}(t_{i+1})\cdots\mathcal{K}_{PP}(t_n)\big)\mathcal{P}_0, \tag{B.7}$$
where, if one counts the number of occurrences of $Q$ on the first index of the $\mathcal{K}_{XY}(t_i)$ (where $X, Y\in\{P,Q\}$), i.e., the $X$ index, there are a total of $|\sigma|$. Note there will be no consecutive terms for which the $Y$ index on the first term is different from the $X$ index on the second (as $\mathcal{P}_0\mathcal{Q}_0 = 0$). Note the first $e^{T\mathcal{L}_0}$ on the left will be absorbed into $e^{t_1\mathcal{L}_0}$ (see below).

[c] E.g. $\mathcal{Q}_0 = \mathcal{S}\mathcal{L}_0 = \mathcal{L}_0\mathcal{S} \implies \mathcal{Q}_0\mathcal{L}_0 = \mathcal{L}_0\mathcal{S}\mathcal{L}_0 = \mathcal{L}_0\mathcal{Q}_0$.

We bound each integral (i.e. such as Eq. (B.7)) using $\|\int_0^t dt'\,A_{t'}B_{t'}\| \le \int_0^t dt'\,\|A_{t'}\|\|B_{t'}\|$ recursively. To do this, one must be able to bound the integrals of the four $\mathcal{K}_{XY}$. The main idea is that each of these integrals can be bounded by some constant, multiplied by the time parameter to some power (which can then be integrated in the next bound, etc., leading to a $T^n/n!$ overall factor in the $|\sigma| = 0$ case).

Firstly we have $I_{PP} := \int_0^t\|\mathcal{K}_{PP}(\tau)\|\,\tau^k\,d\tau \le \frac{t^{k+1}}{k+1}\sup_{\tau\in[0,t]}\|\mathcal{K}(\tau)\|$. Note, the factor $\tau^k$ is important. It arises from the multiple integrations over the different ranges $(T, t_1, t_2,\ldots,t_{n-1})$, and the bounds therein.

We bound $e^{t\mathcal{L}_0}\int_0^t\mathcal{K}_{QP}(\tau)\,d\tau$ using the results of Appendix D of Ref. [161]. The $e^{t\mathcal{L}_0}$ is brought through from the next integral up, which is necessarily of the form $\mathcal{K}_{XQ}$, and can therefore be written $e^{-t\mathcal{L}_0}X\mathcal{K}(t)\mathcal{Q}_0e^{t\mathcal{L}_0}$, where $X$ is one of $\mathcal{Q}_0, \mathcal{P}_0$. We are interested in bounding $I_{QP} := \int_0^t\|e^{t\mathcal{L}_0}\mathcal{K}_{QP}(\tau)\,\tau^p\|\,d\tau$ (again, $\tau^p$ arises as explained in the previous paragraph, with $p$ related to the `recursive depth'). We use the spectral projection of $\mathcal{L}_0$, noting that $t-\tau > 0$ (and using a trivial change of variables, $t-\tau\to\tau$), so that
$$I_{QP} \le \sup_{\tau\in[0,t]}\|\mathcal{K}(\tau)\|\sum_{h>0}\int_0^t d\tau\,e^{\tau\operatorname{Re}\lambda_h}\left(\|P_h\| + \sum_{k=1}^{m_h-1}\frac{\tau^k\|D_h^k\|}{k!}\right)(t-\tau)^p. \tag{B.8}$$
Note, $\operatorname{Re}\lambda_{h>0} < 0$ by assumption, and the $\mathcal{Q}_0$ kills any $\mathcal{P}_0$ term (hence $h > 0$). We consider the integrals separately (by putting the norms on each term, using the triangle and multiplicative inequalities), such that we must calculate $\int_0^t d\tau\,e^{\tau\operatorname{Re}\lambda_h}\tau^k(t-\tau)^p$. Now,
$$\int_0^t d\tau\,\tau^k e^{\tau\operatorname{Re}\lambda_h}(t-\tau)^p \le t^p\int_0^\infty d\tau\,\tau^k e^{\tau\operatorname{Re}\lambda_h} = t^p\,\frac{k!}{|\operatorname{Re}\lambda_h|^{k+1}}. \tag{B.9}$$
Therefore, we have $I_{QP} \le c\,t^p\sup_{\tau\in[0,t]}\|\mathcal{K}(\tau)\|$, where
$$c := \sum_{h>0}\left(\frac{\|P_h\|}{|\operatorname{Re}\lambda_h|} + \sum_{k=1}^{m_h-1}\frac{\|D_h^k\|}{|\operatorname{Re}\lambda_h|^{k+1}}\right). \tag{B.10}$$
Note that $I_{PQ}$ can be given the same bound as $I_{PP}$, as the $e^{t\mathcal{L}_0}$ has been bounded by the preceding term. In a similar vein, $I_{QQ}$ can be bounded in the same way as $I_{QP}$ above. That is, each term $I_{PQ}$, $I_{PP}$ provides an extra factor of $t$, and $I_{QP}$, $I_{QQ}$ do not. The total number of powers of $T$ becomes $T^{n-|\sigma|}$, since the total number of integrals $I_{XY}$ with $P$ on the first index is, by definition, $n-|\sigma|$. Similarly, from these integrations, one gets a factor of $(n-|\sigma|)!$ in the denominator. All together then this gives
$$\|\Lambda^{(n)}_\sigma(T)\| \le C^n\sup_{t\in[0,T]}\|\tilde{\mathcal{K}}(t)\|^n\,\frac{T^{-|\sigma|}}{(n-|\sigma|)!}, \tag{B.11}$$
as required. Note the $T^{-n}$ in $\mathcal{K} = \tilde{\mathcal{K}}/T$ cancels that from $T^{n-|\sigma|}$. $C$ is some constant (e.g. $\max\{1, c\}$).

Proof of Claim B.2: One can rewrite Eq. (B.6) using the Laguerre polynomials. We do this in two steps. First, by simple rearrangement from Eq. (B.6) we get
$$M = \sum_{n=1}^\infty\frac{D^n}{T^n}\sum_{p=0}^\infty\frac{D^p}{p!}\binom{p+n}{n} = \sum_{n=1}^\infty\frac{D^n}{T^n}\sum_{p=0}^\infty\frac{D^p}{p!}\frac{\Gamma(p+n+1)}{\Gamma(p+1)\Gamma(n+1)} = \sum_{n=1}^\infty\frac{D^n}{T^n}\sum_{p=0}^\infty\frac{D^p}{p!}\frac{(n+1)_p}{(1)_p}, \tag{B.12}$$
where $(a)_x$ is the Pochhammer symbol. This is in fact the definition of the Kummer confluent hypergeometric function $_1F_1$, which can be written as a Laguerre polynomial [d]:
$$M = \sum_{n=1}^\infty\frac{D^n}{T^n}\,{}_1F_1(1+n;1;D) = e^D\sum_{n=1}^\infty\frac{D^n}{T^n}L_n(-D). \tag{B.13}$$
Now, a well-known relation [e] is $\sum_{n=0}^\infty t^nL_n(x) = \frac{e^{-tx/(1-t)}}{1-t}$. Working with this gives
$$M = e^D\left(\frac{e^{D^2/(T-D)}}{1-D/T} - 1\right). \tag{B.14}$$
Therefore, to $O(1/T)$, we have $M = e^D\,\frac{D(D+1)}{T} = O(1/T)$.

[d] A standard identity is $_1F_1(a;1;z) = e^zL_{a-1}(-z)$.
[e] This is the generating function for the Laguerre polynomials.

B.2 Proof of second order effective generator [Eq. (3.7)]

We have $\mathcal{L} = \mathcal{L}_0 + \mathcal{K}$, with $\|\mathcal{K}\| = O(1/\sqrt{T})$, and consider a perturbative expansion in $1/\sqrt{T}$. For small (but non-zero) $T^{-1}$, there will in general be non-zero eigenvalues $\lambda^{(k)}_0$ of $\mathcal{L}$ which originate from $\lambda = 0$ of $\mathcal{L}_0$ (that is, the degeneracy is lifted upon the perturbation). We denote by $\mathcal{P}$ the projection associated with these eigenvalues (for $T^{-1} = 0$ we have $\mathcal{P} = \mathcal{P}_0$). Since, because of the Lindblad structure, there is no nilpotent term associated with the zero eigenvalue, the perturbation theory reads, as shown in Ref. [67], for small $\|\mathcal{K}\|$, i.e., large $T$,
$$\mathcal{P} - \mathcal{P}_0 = -\mathcal{P}_0\mathcal{K}\mathcal{S} - \mathcal{S}\mathcal{K}\mathcal{P}_0 + O(\|\mathcal{K}\|^2),$$
$$\mathcal{P}\mathcal{L}_T\mathcal{P} = \mathcal{P}_0\mathcal{K}\mathcal{P}_0 - \mathcal{P}_0\mathcal{K}\mathcal{S}\mathcal{K}\mathcal{P}_0 - \mathcal{P}_0\mathcal{K}\mathcal{P}_0\mathcal{K}\mathcal{S} - \mathcal{S}\mathcal{K}\mathcal{P}_0\mathcal{K}\mathcal{P}_0 + O(\|\mathcal{K}\|^3). \tag{B.15}$$
From the first equation it now follows (for sufficiently large $T$)
$$\|\mathcal{P} - \mathcal{P}_0\| = O(\tau_R\|\mathcal{K}\|) \le C_1'\,\tau_R\|\mathcal{K}\|, \tag{B.16}$$
where $C_1'$ is a suitable constant (notice that $\|\mathcal{P}_0\| = 1$). On the other hand, using $\mathcal{P}_0\mathcal{K}\mathcal{P}_0 = 0$ and the definition $\mathcal{L}_e := -\mathcal{P}_0\mathcal{K}\mathcal{S}\mathcal{K}\mathcal{P}_0$ for the dimensionful effective generator, from the last equation in (B.15) it follows that
$$\|\mathcal{P}\mathcal{L}\mathcal{P} - \mathcal{L}_e\| = O(\|\mathcal{K}\|^3), \tag{B.17}$$
whence (for small $\|\mathcal{K}\|$) $\|\mathcal{P}\mathcal{L}\mathcal{P}\| \le C_3\|\mathcal{L}_e\|$. Since $e^{t\mathcal{L}}\mathcal{P} = e^{t\mathcal{P}\mathcal{L}\mathcal{P}}\mathcal{P}$ one can write
$$(e^{t\mathcal{L}} - e^{t\mathcal{P}\mathcal{L}\mathcal{P}})\mathcal{P}_0 = -(e^{t\mathcal{L}} - e^{t\mathcal{P}\mathcal{L}\mathcal{P}})(\mathcal{P} - \mathcal{P}_0), \tag{B.18}$$
$$e^{t\mathcal{L}} - e^{t\mathcal{P}\mathcal{L}\mathcal{P}} = (e^{t\mathcal{L}} - e^{t\mathcal{L}_e}) + (e^{t\mathcal{L}_e} - e^{t\mathcal{P}\mathcal{L}\mathcal{P}}). \tag{B.19}$$
Using $\|e^X - e^{X+Y}\| \le \|Y\|e^{\|X\|+\|Y\|}$ with $X := t\mathcal{L}_e$ and $Y = t(\mathcal{P}\mathcal{L}\mathcal{P} - \mathcal{L}_e)$, and the bounds above, it also follows that, for $0\le t\le T$,
$$\|e^{t\mathcal{L}_e} - e^{t\mathcal{P}\mathcal{L}\mathcal{P}}\| \le C_2'\,t\|\mathcal{K}\|^3, \tag{B.20}$$
where $C_2'$ is a constant that, for dimensional reasons, is $O(\tau_R^2)$, i.e., $C_2' \le C_2\tau_R^2$. From (B.19), using $\|e^{t\mathcal{L}}\| = 1$ and standard operator norm inequalities, one finds
$$\Delta_t := \|(e^{t\mathcal{L}} - e^{t\mathcal{L}_e})\mathcal{P}_0\| \le \|(e^{t\mathcal{L}_e} - e^{t\mathcal{P}\mathcal{L}\mathcal{P}})\mathcal{P}_0\| + (\|e^{t\mathcal{L}}\| + \|e^{t\mathcal{P}\mathcal{L}\mathcal{P}}\|)\|\mathcal{P} - \mathcal{P}_0\| \le \|e^{t\mathcal{L}_e} - e^{t\mathcal{P}\mathcal{L}\mathcal{P}}\| + (1 + e^{t\|\mathcal{P}\mathcal{L}\mathcal{P}\|})\|\mathcal{P} - \mathcal{P}_0\|. \tag{B.21}$$
Now using the bounds (B.16), (B.17), (B.20) and $0\le t\le T$ ($T > 0$) one finds
$$\Delta_t \le \tau_R\|\mathcal{K}\|\,(C_1 + C_2\,t\,\tau_R\|\mathcal{K}\|^2), \tag{B.22}$$
where $C_1 \le C_1'(1 + e^{C_3\|\tilde{\mathcal{L}}_e\|})$. By moving to the dimensionless Hamiltonian, such that $\|\mathcal{K}\| = (\tau_RT)^{-1/2}\|\tilde{\mathcal{K}}\|$, the inequality (B.22) becomes $\Delta_t \le \sqrt{\tau_R/T}\,\|\tilde{\mathcal{K}}\|\left(C_1 + \frac{t}{T}C_2\|\tilde{\mathcal{K}}\|^2\right)$. Notice that the requirement of $\|\mathcal{K}\|$ being sufficiently small, used repeatedly in the above, now translates into the "adiabatic criterion" of $T$ being sufficiently large. Finally, by taking the supremum for $t\in[0,T]$, one obtains $\sup_t\Delta_t \le \sqrt{\tau_R/T}\,(C_1 + C_2)$ (absorbing the factors of $\|\tilde{\mathcal{K}}\|$ into the constants). That is, $\Delta_T = O(\sqrt{\tau_R/T})$, which is precisely Eq. (3.7).

B.3 Derivation of Eq. (3.33)

We assume a Hamiltonian given by Eq. (3.32), with dissipation occurring according to Eq. (3.30).
As such we write the full Hilbert space as $\mathcal{H} = \bigotimes_\kappa\mathcal{H}_{q,\kappa}\otimes\mathcal{H}_{1,\kappa}$, where the subscript $(q,\kappa)$ refers to the qubit sector in cavity $\kappa = A, B$, and $(1,\kappa)$ the corresponding bosonic sector.

Since $H_{AB}$ conserves the number of photons in the joint system, the dissipative term (Eq. (3.31)), which accounts for leakage, guarantees the final state under evolution of Eq. (3.30) is the joint vacuum state. For simplicity we set $K_0 = 0$. We expand on this comment at the end of this section, and consider the effect of a non-zero $K_0$. The second term in equation (3.32) only appears at second order in our approximation, hence must be scaled by $1/\sqrt{T}$.

Following the tensor ordering of the Hilbert space, we consider the action of the effective Liouvillian, $\mathcal{L}_{\rm eff} := -\mathcal{P}_0\mathcal{K}\mathcal{S}\mathcal{K}\mathcal{P}_0$, on a state of the form $\rho_A\otimes|0\rangle\langle 0|\otimes\rho_B\otimes|0\rangle\langle 0|$, where the $\rho_\kappa$ are qubit states. The action of the first Hamiltonian superoperator ($\mathcal{K}$) results in
$$\mathcal{K}\mathcal{P}_0(\rho) = -i\sum_{\kappa=A,B}g_\kappa\left(S^-_{q,\kappa}\,\rho\otimes|1\rangle_\kappa\langle 0| - \rho\,S^+_{q,\kappa}\otimes|0\rangle_\kappa\langle 1|\right), \tag{B.23}$$
where identity operations have been ignored for clarity. We define $\hat{K} := i\,\mathcal{K}\mathcal{P}_0(\rho)$. As in Refs. [45, 46], we calculate $\mathcal{S}$ using the integral form $\mathcal{S} = \int_0^\infty dt\,e^{t\mathcal{L}_0}\mathcal{Q}_0$, where $\mathcal{Q}_0 = 1 - \mathcal{P}_0$. Since $\hat{K}$ has already been projected out of the SSS, we just need to consider the action of $e^{t\mathcal{L}_0}$, which fortunately can be simplified, noting that $\mathcal{H}_{AB} := -i[H_{AB},\,\cdot\,]$ and $\mathcal{L}_{AB} := \sum_{\kappa=A,B}\mathcal{L}_0^{(\kappa)}$ commute when acting on $\hat{K}$. In fact, the action of $\mathcal{L}_{AB}$ on $\hat{K}$, or indeed on $\mathcal{H}_{AB}\hat{K}$, is to simply pull out a factor of $-\frac{1}{2\tau}$ (hence the exponential gives $e^{-t/2\tau}$). Therefore, the task is to calculate $e^{t\mathcal{H}_{AB}}\hat{K}$. Define $\hat{K}' := \frac{i}{J}\mathcal{H}_{AB}\hat{K}$. One can see that $\mathcal{H}_{AB}\hat{K}' = -iJ\hat{K}$ (i.e. applying $\mathcal{H}_{AB}$ twice is the identity, up to an overall $-J^2$ factor). Thus,
$$e^{t\mathcal{H}_{AB}}(\hat{K}) = \hat{K} - itJ\hat{K}' - \frac{(tJ)^2}{2!}\hat{K} + i\frac{(tJ)^3}{3!}\hat{K}' + \cdots = \hat{K}\cos(Jt) - i\hat{K}'\sin(Jt). \tag{B.24}$$
Combining these two results allows us to explicitly perform this integration, and hence calculate $\mathcal{S}\mathcal{K}\mathcal{P}_0$. We get:
$$\mathcal{S}\hat{K} = \hat{K}\int_0^\infty e^{-t/2\tau}\cos(Jt)\,dt - i\hat{K}'\int_0^\infty e^{-t/2\tau}\sin(Jt)\,dt = \frac{2\tau}{1+4(J\tau)^2}\hat{K} - \frac{4iJ\tau^2}{1+4(J\tau)^2}\hat{K}'. \tag{B.25}$$
Applying $\mathcal{K}$ to the above essentially just brings out another factor of $g_A$ or $g_B$, with the appropriate $S$ operator. Projecting back into the steady state results in
$$\mathcal{L}_{\rm eff}(\rho) = -i\,\frac{4Jg_Ag_B\tau^2}{1+4(J\tau)^2}\left[S^-_AS^+_B + S^+_AS^-_B,\,\rho\right] + \frac{4\tau}{1+4(J\tau)^2}\sum_{\kappa=A,B}g_\kappa^2\left(S^-_\kappa\rho S^+_\kappa - \tfrac{1}{2}\{S^+_\kappa S^-_\kappa,\rho\}\right), \tag{B.26}$$
where the state $\rho$ is a steady state. This is the form as quoted in the main text. Note that if we allow $K_0\neq 0$, and instead scale this Hamiltonian by $1/T$ (as it has a non-vanishing effective first order term), the above result is the same, with the addition that the effective Lindbladian, Eq. (B.26), acquires an extra term, $-i[\omega^q_AS^z_A + \omega^q_BS^z_B,\,\rho]$, where, as in the above, $\rho$ is a steady state.

B.4 Derivation of Eq. (3.40)

For a single qubit undergoing dephasing in the $z$-direction, steady states are of the form $\rho_{ss}(a) := a|0\rangle\langle 0| + (1-a)|1\rangle\langle 1|$ (where $|0\rangle, |1\rangle$ are defined in Sect. 3.3.2.1). One can easily compute
$$\mathcal{K}\mathcal{P}_0(X) = ig(a-b)\left[\,|1\rangle\langle 0|\otimes|0\rangle\langle 1| - \mathrm{H.c.}\,\right], \tag{B.27}$$
where $\mathcal{P}_0(X) = \rho_{ss}(a)\otimes\rho_{ss}(b)$, and $\mathcal{K}$ is defined by the Hamiltonian of Eq. (3.39). The tensor ordering respects the order of the DGMs. It is clear that this projects to zero at first order (i.e. $\mathcal{P}_0\mathcal{K}\mathcal{P}_0 = 0$), so we consider the second order effective generator. One can check that
$$\mathcal{L}_{\rm eff}(X) = g^2(a-b)\left[\,|0\rangle\langle 0|\otimes|1\rangle\langle 1| - |1\rangle\langle 1|\otimes|0\rangle\langle 0|\,\right]. \tag{B.28}$$
This allows us to calculate $e^{T\mathcal{L}_{\rm eff}}(X)$, for some steady state $X$. In particular, in the main text, we pick $X = |0\rangle\langle 0|\otimes|1\rangle\langle 1|$, which results in Eq. (3.40).

C Appendices for Chapter 5
C.1 Derivation of Eqs. (5.17) and (5.18)

The easiest way to see the spectral projection for the generator of Eq. (5.15) is to note that $\mathcal{L}_0$ acts on the space of linear operators defined over the joint Hilbert space $\mathcal{H}^{\otimes N}_{1/2}$, where $\mathcal{H}_{1/2}\simeq\mathbb{C}^2$, and as such we can represent an operator as $X = \sum_n\alpha_n\sigma_n$, where $\alpha_n = \frac{1}{2^N}\mathrm{Tr}[X\sigma_n]$ (we use the same notation as in the main text). Then, by linearity,
$$\mathcal{L}_0[X] = \sum_n\alpha_n\mathcal{L}_0[\sigma_n] = \frac{1}{2^N}\sum_n\lambda_n\mathrm{Tr}[X\sigma_n]\,\sigma_n =: \sum_n\lambda_n\mathcal{P}_n[X], \tag{C.1}$$
where in the second line we have used that $\sigma_n$ is an eigenstate of $\mathcal{L}_0$ (with eigenvalue $\lambda_n\in\{0, -2\gamma\}$). In the last step we defined the projector which acts as $\mathcal{P}_n[X] = \frac{1}{2^N}\mathrm{Tr}[X\sigma_n]\,\sigma_n$.

We now show $\mathcal{P}_n$ is indeed a genuine projector. We take $X\in L(\mathcal{H}^{\otimes N}_{1/2})$ as above, an arbitrary linear operator over the joint Hilbert space. First,
$$\mathcal{P}_n\mathcal{P}_m[X] = \frac{1}{2^N}\sum_p\alpha_p\,\mathcal{P}_n\big[\mathrm{Tr}[\sigma_m\sigma_p]\,\sigma_m\big] = \frac{1}{2^N}\alpha_m\mathrm{Tr}[\sigma_m\sigma_n]\,\sigma_n = \delta^n_m\,\alpha_n\sigma_n = \delta^n_m\,\mathcal{P}_n[X], \tag{C.2}$$
where we used $\mathrm{Tr}[\sigma_m\sigma_n] = 2^N\delta^n_m$. Since $X$ is arbitrary, we have $\mathcal{P}_n\mathcal{P}_m = \delta^n_m\mathcal{P}_n$. Second,
$$\sum_m\mathcal{P}_m[X] = \sum_{m,n}\alpha_n\mathcal{P}_m[\sigma_n] = \frac{1}{2^N}\sum_{m,n}\alpha_n\mathrm{Tr}[\sigma_m\sigma_n]\,\sigma_m = \sum_n\alpha_n\sigma_n = X, \tag{C.3}$$
so that $\sum_m\mathcal{P}_m = \mathcal{I}$.

C.2 Derivation of $\phi(t)$ [Eq. (5.24)]

We assume $\omega\in\mathbb{R}_{>0}$, i.e., that $\omega$ as defined in the main text is real and non-zero. Apart from the steady states, the eigenvalues of $\mathcal{L}_0$ are $-2\gamma$. Recall we have $\mathcal{L}_1\propto\mathcal{L}_0$ (the same up to a positive constant), and we absorb the (magnitude of the) non-zero eigenvalue of $\mathcal{L}_1$ into $B$ (so as to avoid introducing a redundant parameter). We compute $\tilde\phi(s)$ (as in Eq. (5.7)) for these eigenvalues:
$$\tilde\phi(s) = \frac{1}{s + 2\gamma + \frac{B^2}{s + 1/\tau_k}} = \frac{s + 1/\tau_k}{(s - s_+)(s - s_-)} = \frac{c_+}{s - s_+} + \frac{c_-}{s - s_-}, \tag{C.4}$$
where $s_\pm = -1/\tau\pm i\omega$, and $c_\pm = \frac{1}{2}\left(1\pm i\,\frac{2\gamma\tau_k - 1}{2\omega\tau_k}\right)$. Therefore, $\phi(t) = c_+e^{s_+t} + c_-e^{s_-t}$. We can write $c_\pm = |c|e^{\pm i\theta}$, which gives
$$\phi(t) = |c|e^{-t/\tau}\left(e^{i\theta}e^{i\omega t} + e^{-i\theta}e^{-i\omega t}\right) = 2|c|e^{-t/\tau}\cos(\omega t + \theta), \tag{C.5}$$
where $2|c| = \sqrt{1 + (2\gamma\tau_k - 1)^2/(2\omega\tau_k)^2} = 1/\cos\theta$. Note, by expanding the cosine function, this can also be written as
$$\phi(t) = e^{-t/\tau}\left(\cos(\omega t) - \tan\theta\,\sin(\omega t)\right), \tag{C.6}$$
where $\tan\theta = \frac{\gamma - 1/(2\tau_k)}{\omega}$. One can in fact use the form Eq. (C.6) to easily derive $\phi$ in the limit $\omega\to 0$, or when $\omega = i|\omega|$. Note that at times $T = \frac{2\pi}{\omega}n$ and $\frac{2}{\omega}(\pi n - \theta)$ [$n = 1, 2,\ldots$], we have $\phi(T) = e^{-T/\tau}$, and therefore the evolution operator is
$$\Phi^0_T = \sum_n e^{\lambda'_nT}\mathcal{P}_n, \tag{C.7}$$
where $\lambda'_n = \frac{1}{2}(\lambda_n - 1/\tau_k)$ for the decaying sectors ($\lambda_n = -2\gamma$), and $\lambda'_n = 0$ for the steady-state sector. We see that the evolution of this system (for time $T$) is equivalent to evolution under the background channel $\mathcal{L}_0$ alone, with $\gamma$ replaced by $\frac{1}{2}(\gamma + 1/(2\tau_k))$. We demonstrate this in the main text in Fig. 5.1.

C.3 Conditions for complete positivity

For the map of Eq. (5.25) to be CP, we require $0\le p(t)\le 1$, and therefore $-1\le\phi(t)\le 1$, $\forall t\ge 0$. First we consider $\omega\in\mathbb{R}_{>0}$ (see below for the imaginary case) [g'], and therefore we have $\phi(t) = e^{-t/\tau}\cos(\omega t + \theta)/\cos\theta$. Differentiating this shows that at the turning points $\hat t$ we have $\cos(\omega\hat t + \theta) = \pm\frac{\omega\tau}{\sqrt{1 + (\omega\tau)^2}}$, and therefore
$$\frac{|\cos(\omega\hat t + \theta)|}{\cos\theta} = \sqrt{\frac{1 + \alpha^2}{1 + \alpha^2 + \frac{2\gamma}{\tau_k\omega^2}}} < 1, \tag{C.8}$$
where $\alpha = \frac{2\gamma\tau_k - 1}{2\omega\tau_k}$. Thus, $|\phi(\hat t)|\le 1$. Since also $\phi(0) = 1$ and $\phi(\infty) = 0$, it is clear that $|\phi(t)|\le 1$ for all parameters, and all $t\ge 0$.

[g'] Note, if in fact $\omega = 0$, one can easily see directly that $|\phi(t)|\le 1$.

C.3.1 The case $\omega = i|\omega|$

We define for convenience $\kappa := 2\gamma\tau_k$. If $\omega$ is not real, then it is purely imaginary, of the form $\omega = i|\omega|$ (this occurs when $|B| < \gamma|1 - 1/\kappa|$). In this case the analysis is simple since we can see that, from Eq. (C.6) above,
$$\phi(t) = e^{-t/\tau}\left(\cosh|\omega|t + \frac{1/(2\tau_k) - \gamma}{|\omega|}\sinh|\omega|t\right), \tag{C.9}$$
and therefore
$$|\phi(t)| \le \psi(t) := e^{-t/\tau}\left(\cosh|\omega|t + \frac{1}{\tau|\omega|}\sinh|\omega|t\right). \tag{C.10}$$
We see
$$\frac{d\psi}{dt} = e^{-t/\tau}\,|\omega|\sinh(|\omega|t)\left(1 - \frac{1}{|\omega|^2\tau^2}\right) \le 0, \tag{C.11}$$
where the inequality comes from the observation that $|\omega|^2 = \gamma^2(1 - 1/\kappa)^2 - B^2 < \gamma^2(1 + 1/\kappa)^2 = 1/\tau^2$. Since $\psi(0) = 1$, and $\psi$ is decreasing for all times, it is clear that $\psi(t)\le 1$, and hence $|\phi(t)|\le 1$, $\forall t\ge 0$.
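As a quick numerical sanity check of the complete-positivity condition just established (a sketch, not part of the original analysis), the snippet below evaluates $\phi(t)$ of Eq. (C.6), with $1/\tau$, $\omega$ and $\tan\theta$ taken from Sect. C.2, on a dense time grid and verifies $|\phi(t)|\le 1$. The parameter values are arbitrary and chosen so that $\omega$ is real.

```python
import numpy as np

def phi(t, gamma, tau_k, B):
    """phi(t) of Eq. (C.6) for real omega: a damped oscillation with rate 1/tau = gamma + 1/(2 tau_k)."""
    delta = gamma - 1.0 / (2.0 * tau_k)
    omega2 = B**2 - delta**2
    assert omega2 > 0, "parameters chosen so that omega is real (see Sect. C.2)"
    omega = np.sqrt(omega2)
    inv_tau = gamma + 1.0 / (2.0 * tau_k)
    tan_theta = delta / omega
    return np.exp(-inv_tau * t) * (np.cos(omega * t) - tan_theta * np.sin(omega * t))

# arbitrary illustrative parameters (gamma, tau_k, B) with omega real
gamma, tau_k, B = 1.0, 0.7, 2.0
t = np.linspace(0.0, 20.0, 200001)

# |phi(t)| <= 1 for all t, as shown analytically above (maximum is attained at t = 0)
assert np.max(np.abs(phi(t, gamma, tau_k, B))) <= 1.0 + 1e-12
print(np.max(np.abs(phi(t, gamma, tau_k, B))))
```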
C.4 Modulated decay noise - derivations

As described in the main text, we have
$$\tilde{k}(s) = B^2\,\frac{s + 1/\tau_k}{(s + 1/\tau_k)^2 + \Omega^2}. \tag{C.12}$$
Note, as before, we can absorb any redundant (positive) constants into the definition of $B$. Therefore, the poles of $\tilde\phi(s)$ are the roots of
$$s^3 + 2s^2\left(\gamma + \frac{1}{\tau_k}\right) + s\left(\Omega^2 + B^2 + \frac{4\gamma}{\tau_k} + \frac{1}{\tau_k^2}\right) + 2\gamma\left(\frac{1}{\tau_k^2} + \Omega^2\right) + \frac{B^2}{\tau_k}. \tag{C.13}$$
We take $B^2 = \frac{2}{9}(2\gamma - 1/\tau_k)^2 + 2\Omega^2$, which means the roots of Eq. (C.13) are simply
$$s = -\frac{1}{\tau_1},\quad -\frac{1}{\tau_1}\pm i\omega, \tag{C.14}$$
where $\tau_1$ and $\omega$ are given in the main text. Therefore, we can write, using partial fractions,
$$\tilde\phi(s) = \frac{c_0}{s + 1/\tau_1} + \frac{c_1}{s + 1/\tau_1 - i\omega} + \frac{\bar c_1}{s + 1/\tau_1 + i\omega}, \tag{C.15}$$
for constants
$$c_0 = \frac{1}{9\omega^2}\left[(2\gamma - 1/\tau_k)^2 + 9\Omega^2\right],\qquad c_1 = \frac{1 - c_0}{2\cos\theta}\,e^{i\theta},\qquad \cos\theta = \frac{1 - c_0}{\sqrt{(1 - c_0)^2 + \frac{4}{9\omega^2}(2\gamma - 1/\tau_k)^2}}. \tag{C.16}$$
Inverting this, one gets
$$\phi(t) = e^{-t/\tau_1}\left(c_0 + c_1e^{i\omega t} + \bar c_1e^{-i\omega t}\right) = e^{-t/\tau_1}\left(c_0 + \frac{1 - c_0}{\cos\theta}\cos(\omega t + \theta)\right). \tag{C.17}$$
We note that the dynamics generate a genuine quantum map if $|\phi(t)|\le 1$, $\forall t\ge 0$. For a given parameter set, one can numerically check this by, for example, differentiating Eq. (C.17) to find the first minima and maxima of $\phi(t)$ (subsequent minima/maxima will be lower/upper bounded by the first, due to the exponential). If at these turning points $|\phi(t)|\le 1$, the map is CP for all times. Note, as per the main text, for a choice of $\gamma, \tau_k$, one must also check that setting $\Omega = 0$ still generates a CP map. Lastly, notice that at times $T = \frac{2\pi}{\omega}n$ and $\frac{2}{\omega}(\pi n - \theta)$ [$n = 1, 2,\ldots$], then $\phi(T) = e^{-T/\tau_1}$, and the resulting evolution (operator) is equivalent to that under $\mathcal{L}_0$ alone, with $\gamma$ replaced by $\frac{1}{3}(\gamma + 1/\tau_k)$. For large $\tau_k$, we can reduce the decay rate by nearly a factor of three (as compared to a factor of two with the purely exponential kernel).

D Generation of Problem Instances for Quantum Annealing

Generating problem instances to test on experimental quantum annealing devices, for benchmarking purposes and otherwise, has evolved somewhat into a field of study itself [152, 162-166]. Arguably the most important development is the use of planted-solution instances, first outlined in Ref. [152]. These are instances constructed around a (randomly chosen) configuration, consisting of small sub-Hamiltonian loops, so that $H_p = \sum_l H_l$, and where the $H_l$ are chosen such that the configuration minimizes each $H_l$ individually; hence the ground state of $H_p$ is the configuration the problem is constructed around. By constructing the $H_l$ over loops (e.g. of size 4 or 6 qubits) on the graph the problem is defined over, one can add frustration to the problem itself [f], thus allowing for a degree of tunability in the problem hardness. This set of problems is therefore extremely useful for benchmarking annealing devices, since the problems are 1) easy to generate, 2) can be varied in `difficulty', and 3) the ground state is known in advance.

Another use was found for this problem set, which we rely on heavily in Chapter 6. This is the fact that one can actually numerically compute the exact values of the ground and excited state degeneracies of planted-solution instances. This is essentially a counting problem, where one must count individual solutions of each $H_l$ which are consistent with one another. This algorithm is detailed in Ref. [107]. From work originally presented in Refs. [50, 51], and highlighted in Chapter 6 (Sect. 6.2.4), we used this to compute the classical Boltzmann distribution of large (up to 500 bit) spin-glass instances.
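To make the construction above concrete, the following is a minimal illustrative sketch (not the generator used in Refs. [50, 51, 152]) of a planted-solution Ising instance: it plants a random configuration on a small grid graph, builds frustrated 4-qubit loop terms that the planted configuration minimizes individually, and verifies by brute force that the planted configuration is a ground state of $H_p = \sum_l H_l$. The grid graph, unit-magnitude couplings, and single frustrated edge per loop are simplifying assumptions for demonstration only.

```python
import random
import itertools

def planted_instance(n_side=4, n_loops=20, seed=0):
    """Toy planted-solution Ising instance on an n_side x n_side grid.

    Returns (J, s): couplings J[(u, v)] accumulated over loops, and the planted
    configuration s, which by construction minimizes every loop term H_l."""
    rng = random.Random(seed)
    nodes = [(x, y) for x in range(n_side) for y in range(n_side)]
    s = {v: rng.choice([-1, 1]) for v in nodes}   # planted configuration

    J = {}
    for _ in range(n_loops):
        # pick a random unit square of the grid (a length-4 loop)
        x, y = rng.randrange(n_side - 1), rng.randrange(n_side - 1)
        loop = [(x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)]
        edges = list(zip(loop, loop[1:] + loop[:1]))
        frustrated = rng.randrange(len(edges))    # exactly one frustrated edge per loop
        for k, (u, v) in enumerate(edges):
            sign = +1 if k == frustrated else -1  # J = -s_u s_v makes the edge satisfied
            key = tuple(sorted((u, v)))
            J[key] = J.get(key, 0) + sign * s[u] * s[v]
    return J, s

def energy(J, sigma):
    return sum(Juv * sigma[u] * sigma[v] for (u, v), Juv in J.items())

J, s_planted = planted_instance()
E_planted = energy(J, s_planted)

# brute-force check on this small example: no configuration beats the planted one
nodes = list(s_planted)
E_min = min(energy(J, dict(zip(nodes, cfg)))
            for cfg in itertools.product([-1, 1], repeat=len(nodes)))
assert E_min == E_planted
print(E_planted, E_min)
```

Since the planted configuration minimizes every $H_l$, it minimizes their sum, which is exactly what the brute-force check confirms on this toy example.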
By computing the ground and first excited state degeneracies as outlined above, we are able to accurately assess whether an entropic sampler has correctly converged; this is highly non-trivial, as without this guarantee it is not at all clear whether the estimated values (from sampling) are anywhere near the true values. In Ref. [51] we used this to accurately estimate the degeneracies for problems of up to 300 qubits using a standard implementation of the Wang-Landau algorithm, as originally described in Refs. [167, 168]. We considered an entropic sampling as `successful' when neither the ground nor the first excited state degeneracy differed by more than 5% from the values known by planting. In Ref. [156] we describe an entropic algorithm based on population annealing, which, for large problems, was found to be much more accurate than `traditional' techniques (e.g. Wang-Landau). This algorithm was used to accurately estimate the degeneracies for larger problems, up to 500 qubits, which we take advantage of in Sect. 6.2.4. For more information on the specific implementation details regarding the problems used in Chapter 6, please see the accompanying papers, Refs. [50, 51].

[f] Frustration is a property of a spin system where, due to competing edges connecting a spin, it cannot simultaneously satisfy all of its constraints; thus the ground state energy of the problem is unaffected by its value. The spin is said to be frustrated.

We mention at this point, for completeness, another technique which was derived in order to increase the hardness of planted-solution problem instances, as detailed in Ref. [165]. Note, this was not used in Refs. [50, 51]. Here we gave a simple heuristic optimization algorithm, which aims to maximize the time-to-solution (TTS) of a problem instance [g]. The general idea is shown below in Algorithm LAO (standing for loop adaptive optimization):

LAO
1: procedure GenerateFrustratedProblem
2:   Generate (random) planted solution
3:   Pick update parameter β
4:   Place M random loops on graph, each respecting the planted solution
5:   Calculate GS energy
6:   for step = 1 to NSTEP do
7:     Remove random loop from current instance
8:     Pick new random loop and add, respecting planted solution
9:     Get new TTS
10:    if TTS increases then
11:      Accept change, update GS energy
12:    else
13:      Accept with probability e^{-β|ΔTTS|}
14:      Update GS energy if accepted
15:    end if
16:    Update β if required
17:  end for
18: end procedure

Here, one starts with an initial random planted-solution instance, and adds/removes loops conditionally, depending on whether the TTS increases or decreases. At each step the new loops are chosen to respect the planted solution, so that at the end of the algorithm the original planted solution is still the minimizing one. Remarkably, this rather simple algorithm is quite effective, and was shown to render much more difficult problems than cherry-picking from a large random problem set (as was often done prior to this technique being introduced). Fig. D.1 shows the distribution of problem hardness before and after the algorithm has run over 2000 steps (NSTEP = 2000). One sees up to an order of magnitude increase in problem hardness. Similar algorithms were also demonstrated for other problem types, which can result in an even greater increase in problem difficulty.
[g] The TTS is defined as the average `time' (e.g. CPU time or number of algorithmic steps) it takes the solver of choice to successfully obtain a minimizing configuration of the optimization problem.

Figure D.1: Histogram of the ratio of final to mean initial $t_{\rm HFS}$ for 100 planted-solution instances (defined over the D-Wave Two graph; see Ref. [165]). We compare the LAO algorithm after 2000 steps (red) to the 100 random initial (blue) planted-solution instances, each containing 350 random loops. Updates consisted of adding and removing random loops, where size 4 (6) loops have a probability of 0.1 (0.9) of being chosen, with integer loop weight chosen randomly in the range [1, 5]. We plot the histogram of the ratio of the final HFS TTS, $t^{(f)}_{\rm HFS}$, after 0 (blue) and 2000 (red) LAO steps, to the mean initial TTS, $\langle t^{(i)}_{\rm HFS}\rangle$. The mean TTS ratio after the 2000 steps is 4.3, and the maximum final TTS is 0.15 s.

Bibliography

[1] J. Falk, "Things that Count," e-book available online (2014).
[2] B. J. Copeland, "The Modern History of Computing," in The Stanford Encyclopedia of Philosophy (Stanford University, 2017).
[3] M. Smith, The Secrets of Station X (Biteback Publishing, 2011).
[4] P. Hohenberg and W. Kohn, "Inhomogeneous Electron Gas," Phys. Rev. 136, B864 (1964).
[5] W. Kohn and L. J. Sham, "Self-Consistent Equations Including Exchange and Correlation Effects," Phys. Rev. 140, A1133 (1965).
[6] A. J. Cohen, P. Mori-Sánchez, and W. Yang, "Insights into Current Limitations of Density Functional Theory," Science 321, 792 (2008).
[7] R. P. Feynman, "Simulating physics with computers," Int. J. Theor. Phys. 21, 467 (1982).
[8] D. Deutsch, "Quantum theory, the Church-Turing principle and the universal quantum computer," Proc. R. Soc. London Ser. A 400, 97 (1985).
[9] D. Deutsch and R. Jozsa, "Rapid solution of problems by quantum computation," Proc. R. Soc. London Ser. A 439, 553 (1992).
[10] P. W. Shor, "Algorithms for quantum computation: discrete logarithms and factoring," in Proc. 35th Annual Symposium on Foundations of Computer Science (1994) pp. 124-134.
[11] A. Y. Kitaev, "Quantum measurements and the Abelian Stabilizer Problem," arXiv:quant-ph/9511026 (1995).
[12] L. K. Grover, "A fast quantum mechanical algorithm for database search," in Proc. Twenty-eighth Annual ACM Symposium on Theory of Computing (1996) pp. 212-219.
[13] D. S. Abrams and S. Lloyd, "Quantum Algorithm Providing Exponential Speed Increase for Finding Eigenvalues and Eigenvectors," Phys. Rev. Lett. 83, 5162 (1999).
[14] A. W. Harrow, A. Hassidim, and S. Lloyd, "Quantum Algorithm for Linear Systems of Equations," Phys. Rev. Lett. 103, 150502 (2009).
[15] S. Aaronson and A. Arkhipov, "The Computational Complexity of Linear Optics," in Proc. Forty-third Annual ACM Symposium on Theory of Computing (2011) pp. 333-342.
[16] C. J. Ballance, T. P. Harty, N. M. Linke, M. A. Sepiol, and D. M. Lucas, "High-Fidelity Quantum Logic Gates Using Trapped-Ion Hyperfine Qubits," Phys. Rev. Lett. 117, 060504 (2016).
[17] L. Childress, M. V. Gurudev Dutt, J. M. Taylor, A. S. Zibrov, F. Jelezko, J. Wrachtrup, P. R. Hemmer, and M. D. Lukin, "Coherent Dynamics of Coupled Electron and Nuclear Spin Qubits in Diamond," Science 314, 281 (2006).
[18] I. Chiorescu, Y. Nakamura, C. J. P. M. Harmans, and J. E.
Mooij, "Coherent Quantum Dynamics of a Superconducting Flux Qubit," Science 299, 1869 (2003).
[19] N. A. Peters, J. T. Barreiro, M. E. Goggin, T.-C. Wei, and P. G. Kwiat, "Remote State Preparation: Arbitrary Remote Control of Photon Polarization," Phys. Rev. Lett. 94, 150502 (2005).
[20] C. M. Dawson and M. A. Nielsen, "The Solovay-Kitaev algorithm," arXiv:quant-ph/0505030 (2005).
[21] J. Preskill, "Quantum computing and the entanglement frontier," arXiv:1203.5813 (2012).
[22] D. P. DiVincenzo, "The Physical Implementation of Quantum Computation," arXiv:quant-ph/0002077 (2000).
[23] R. Landauer, "Is quantum mechanics useful?" Phil. Trans. R. Soc. Lond. A 353, 367 (1995).
[24] R. Barends, J. Kelly, A. Megrant, A. Veitia, D. Sank, E. Jeffrey, T. C. White, J. Mutus, A. G. Fowler, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, C. Neill, P. O'Malley, P. Roushan, A. Vainsencher, J. Wenner, A. N. Korotkov, A. N. Cleland, and J. M. Martinis, "Superconducting quantum circuits at the surface code threshold for fault tolerance," Nature 508, 500 (2014).
[25] V. M. Schäfer, C. J. Ballance, K. Thirumalai, L. J. Stephenson, T. G. Ballance, A. M. Steane, and D. M. Lucas, "Fast quantum logic gates with trapped-ion qubits," Nature 555, 75 (2017).
[26] P. W. Shor, "Scheme for reducing decoherence in quantum computer memory," Phys. Rev. A 52, R2493(R) (1995).
[27] A. Steane, "Multiple-particle interference and quantum error correction," Proc. R. Soc. London Ser. A 452, 2551 (1996).
[28] R. Laflamme, C. Miquel, J. P. Paz, and W. H. Zurek, "Perfect Quantum Error Correcting Code," Phys. Rev. Lett. 77, 198 (1996).
[29] D. Gottesman, "Stabilizer Codes and Quantum Error Correction," arXiv:quant-ph/9705052 (1997).
[30] D. A. Lidar and T. A. Brun, eds., Quantum Error Correction (Cambridge University Press, 2013).
[31] L. Viola and S. Lloyd, "Dynamical suppression of decoherence in two-state quantum systems," Phys. Rev. A 58, 2733 (1998).
[32] K. Khodjasteh and D. A. Lidar, "Fault-Tolerant Quantum Dynamical Decoupling," Phys. Rev. Lett. 95, 180501 (2005).
[33] P. Zanardi and M. Rasetti, "Noiseless Quantum Codes," Phys. Rev. Lett. 79, 3306 (1997).
[34] D. A. Lidar, I. L. Chuang, and K. B. Whaley, "Decoherence-Free Subspaces for Quantum Computation," Phys. Rev. Lett. 81, 2594 (1998).
[35] E. Knill, R. Laflamme, and L. Viola, "Theory of Quantum Error Correction for General Noise," Phys. Rev. Lett. 84, 2525 (2000).
[36] J. Kempe, D. Bacon, D. A. Lidar, and K. B. Whaley, "Theory of decoherence-free fault-tolerant universal quantum computation," Phys. Rev. A 63, 042307 (2001).
[37] G. A. Paz-Silva, A. T. Rezakhani, J. M. Dominy, and D. A. Lidar, "Zeno Effect for Quantum Computation and Control," Phys. Rev. Lett. 108, 080501 (2012).
[38] M. B. Plenio, S. F. Huelga, A. Beige, and P. L. Knight, "Cavity-loss-induced generation of entangled atoms," Phys. Rev. A 59, 2468 (1999).
[39] A. Beige, D. Braun, B. Tregenna, and P. L. Knight, "Quantum Computing Using Dissipation to Remain in a Decoherence-Free Subspace," Phys. Rev. Lett. 85, 1762 (2000).
[40] B. Kraus, H. P. Büchler, S. Diehl, A. Kantian, A. Micheli, and P. Zoller, "Preparation of entangled states by quantum Markov processes," Phys. Rev. A 78, 042307 (2008).
[41] J. T. Barreiro, M. Müller, P. Schindler, D. Nigg, T. Monz, M. Chwalla, M. Hennrich, C. F. Roos, P. Zoller, and R. Blatt, "An open-system quantum simulator with trapped ions," Nature 470, 486 (2011).
[42] P. Schindler, M. Müller, D. Nigg, J. T.
Barreiro, E. A. Martinez, M. Hennrich, T. Monz, S. Diehl, P. Zoller, and R. Blatt, "Quantum simulation of dynamical maps with trapped ions," Nat. Phys. 9, 361 (2013).
[43] F. Verstraete, M. M. Wolf, and J. Ignacio Cirac, "Quantum computation and quantum-state engineering driven by dissipation," Nat. Phys. 5, 633 (2009).
[44] M. Kliesch, T. Barthel, C. Gogolin, M. Kastoryano, and J. Eisert, "Dissipative Quantum Church-Turing Theorem," Phys. Rev. Lett. 107, 120501 (2011).
[45] P. Zanardi and L. Campos Venuti, "Coherent Quantum Dynamics in Steady-State Manifolds of Strongly Dissipative Systems," Phys. Rev. Lett. 113, 240406 (2014).
[46] P. Zanardi, J. Marshall, and L. Campos Venuti, "Dissipative universal Lindbladian simulation," Phys. Rev. A 93, 022312 (2016).
[47] J. Marshall, L. Campos Venuti, and P. Zanardi, "Modular quantum-information processing by dissipation," Phys. Rev. A 94, 052339 (2016).
[48] J. Marshall, L. Campos Venuti, and P. Zanardi, "Quantum data classification by dissipation," arXiv:1811.03175 (2018).
[49] J. Marshall, L. Campos Venuti, and P. Zanardi, "Noise suppression via generalized-Markovian processes," Phys. Rev. A 96, 052113 (2017).
[50] J. Marshall, D. Venturelli, I. Hen, and E. G. Rieffel, "The power of pausing: advancing understanding of thermalization in experimental quantum annealers," arXiv:1810.05881 (2018).
[51] J. Marshall, E. G. Rieffel, and I. Hen, "Thermalization, Freeze-out, and Noise: Deciphering Experimental Quantum Annealers," Phys. Rev. Applied 8, 064025 (2017).
[52] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2010).
[53] I. Bengtsson and K. Życzkowski, Geometry of Quantum States: An Introduction to Quantum Entanglement (Cambridge University Press, 2006).
[54] M. M. Wolf, "Quantum Channels & Operations: Guided Tour," Lecture notes available online (2012).
[55] C. Majenz, T. Albash, H.-P. Breuer, and D. A. Lidar, "Coarse graining can beat the rotating-wave approximation in quantum Markovian master equations," Phys. Rev. A 88, 012103 (2013).
[56] D. A. Lidar, Z. Bihary, and K. B. Whaley, "From completely positive maps to the quantum Markovian semigroup master equation," Chem. Phys. 268, 35 (2001).
[57] T. Albash, S. Boixo, D. A. Lidar, and P. Zanardi, "Quantum adiabatic Markovian master equations," New J. of Phys. 14, 123016 (2012).
[58] P. Pearle, "Simple derivation of the Lindblad equation," Eur. J. Phys. 33, 805 (2012).
[59] H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford University Press, 2002).
[60] R. Kubo, "Statistical-Mechanical Theory of Irreversible Processes. I. General Theory and Simple Applications to Magnetic and Conduction Problems," J. Phys. Soc. Japan 12, 570 (1957).
[61] P. C. Martin and J. Schwinger, "Theory of Many-Particle Systems. I," Phys. Rev. 115, 1342 (1959).
[62] E. B. Davies, "Markovian master equations," Comm. Math. Phys. 39, 91 (1974).
[63] H.-P. Breuer, "Genuine quantum trajectories for non-Markovian processes," Phys. Rev. A 70, 012106 (2004).
[64] H.-P. Breuer, E.-M. Laine, J. Piilo, and B. Vacchini, "Colloquium: Non-Markovian dynamics in open quantum systems," Rev. Mod. Phys. 88, 021002 (2016).
[65] G. Lindblad, "On the generators of quantum dynamical semigroups," Comm. Math. Phys. 48, 119 (1976).
[66] V. Gorini, A. Kossakowski, and E. C. G. Sudarshan, "Completely positive dynamical semigroups of N-level systems," J. Math. Phys. 17, 821 (1976).
[67] T.
Kato, Perturbation Theory for Linear Operators (Springer-Verlag, 1995).
[68] L. Campos Venuti, T. Albash, D. A. Lidar, and P. Zanardi, "Adiabaticity in open quantum systems," Phys. Rev. A 93, 032118 (2016).
[69] V. V. Albert and L. Jiang, "Symmetries and conserved quantities in Lindblad master equations," Phys. Rev. A 89, 022118 (2014).
[70] S. Nakajima, "On Quantum Theory of Transport Phenomena: Steady Diffusion," Prog. Theor. Phys. 20, 948 (1958).
[71] R. Zwanzig, "Ensemble Method in the Theory of Irreversibility," J. Chem. Phys. 33, 1338 (1960).
[72] S. M. Barnett and S. Stenholm, "Hazards of reservoir memory," Phys. Rev. A 64, 033808 (2001).
[73] S. Daffer, K. Wódkiewicz, J. D. Cresser, and J. K. McIver, "Depolarizing channel as a completely positive map with memory," Phys. Rev. A 70, 010304 (2004).
[74] A. A. Budini, "Stochastic representation of a class of non-Markovian completely positive evolutions," Phys. Rev. A 69, 042107 (2004).
[75] A. Shabani and D. A. Lidar, "Completely positive post-Markovian master equation via a measurement approach," Phys. Rev. A 71, 020101 (2005).
[76] H.-P. Breuer and B. Vacchini, "Quantum Semi-Markov Processes," Phys. Rev. Lett. 101, 140402 (2008).
[77] B. Vacchini, "Non-Markovian master equations from piecewise dynamics," Phys. Rev. A 87, 030101 (2013).
[78] V. Giovannetti and G. M. Palma, "Master Equations for Correlated Quantum Channels," Phys. Rev. Lett. 108, 040401 (2012).
[79] F. Ciccarello, G. M. Palma, and V. Giovannetti, "Collision-model-based approach to non-Markovian quantum dynamics," Phys. Rev. A 87, 040103 (2013).
[80] S. Lorenzo, F. Ciccarello, and G. M. Palma, "Class of exact memory-kernel master equations," Phys. Rev. A 93, 052111 (2016).
[81] B. Vacchini, "Generalized Master Equations Leading to Completely Positive Dynamics," Phys. Rev. Lett. 117, 230401 (2016).
[82] D. Chruściński and A. Kossakowski, "Generalized semi-Markov quantum evolution," Phys. Rev. A 95, 042131 (2017).
[83] H.-P. Breuer, E.-M. Laine, and J. Piilo, "Measure for the Degree of Non-Markovian Behavior of Quantum Processes in Open Systems," Phys. Rev. Lett. 103, 210401 (2009).
[84] M. M. Wolf, J. Eisert, T. S. Cubitt, and J. I. Cirac, "Assessing Non-Markovian Quantum Dynamics," Phys. Rev. Lett. 101, 150402 (2008).
[85] R. Vasile, S. Maniscalco, M. G. A. Paris, H.-P. Breuer, and J. Piilo, "Quantifying non-Markovianity of continuous-variable Gaussian dynamical maps," Phys. Rev. A 84, 052118 (2011).
[86] B. Bylicka, D. Chruściński, and S. Maniscalco, "Non-Markovianity and reservoir memory of quantum channels: a quantum information theory perspective," Sci. Rep. 4, 5720 (2014).
[87] S. Goedecker, "Linear scaling electronic structure methods," Rev. Mod. Phys. 71, 1085 (1999).
[88] A. M. N. Niklasson, S. M. Mniszewski, C. F. A. Negre, M. J. Cawkwell, P. J. Swart, J. Mohd-Yusof, T. C. Germann, M. E. Wall, N. Bock, E. H. Rubensson, and H. Djidjev, "Graph-based linear scaling electronic structure theory," J. Chem. Phys. 144, 234101 (2016).
[89] P. Facchi and S. Pascazio, "Quantum Zeno Subspaces," Phys. Rev. Lett. 89, 080401 (2002).
[90] P. Zanardi and L. Campos Venuti, "Geometry, robustness, and emerging unitarity in dissipation-projected dynamics," Phys. Rev. A 91, 052324 (2015).
[91] T. Barthel and M. Kliesch, "Quasilocality and Efficient Simulation of Markovian Quantum Dynamics," Phys. Rev. Lett. 108, 230504 (2012).
[92] D. Kielpinski, V. Meyer, M. A. Rowe, C. A. Sackett, W. M. Itano, C. Monroe, and D. J.
Wineland, "A Decoherence-Free Quantum Memory Using Trapped Ions," Science 291, 1013 (2001).
[93] R. C. Bialczak, M. Ansmann, M. Hofheinz, E. Lucero, M. Neeley, A. D. O'Connell, D. Sank, H. Wang, J. Wenner, M. Steffen, A. N. Cleland, and J. M. Martinis, "Quantum process tomography of a universal entangling gate implemented with Josephson phase qubits," Nat. Phys. 6, 409 (2010).
[94] E. A. Sete, J. M. Martinis, and A. N. Korotkov, "Quantum theory of a bandpass Purcell filter for qubit readout," Phys. Rev. A 92, 012325 (2015).
[95] Z.-X. Man, Y.-J. Xia, and R. Lo Franco, "Cavity-based architecture to preserve quantum coherence and entanglement," Sci. Rep. 5, 13843 (2015).
[96] C. González-Gutiérrez, E. Villaseñor, C. Pineda, and T. H. Seligman, "Stabilizing coherence with nested environments: a numerical study using kicked Ising models," Phys. Scr. 91, 083001 (2016).
[97] E. Bernstein and U. Vazirani, "Quantum complexity theory," in Proc. Twenty-fifth Annual ACM Symposium on Theory of Computing (1993) pp. 11-20.
[98] S. Lloyd, S. Garnerone, and P. Zanardi, "Quantum algorithms for topological and geometric analysis of data," Nat. Commun. 7, 10138 (2016).
[99] P. Rebentrost, T. R. Bromley, C. Weedbrook, and S. Lloyd, "Quantum Hopfield neural network," Phys. Rev. A 98, 042308 (2018).
[100] A. Monras, A. Beige, and K. Wiesner, "Hidden Quantum Markov Models and non-adaptive read-out of many-body states," arXiv:1002.2337 (2010).
[101] M. Schuld, I. Sinayskiy, and F. Petruccione, "Quantum walks on graphs representing the firing patterns of a quantum neural network," Phys. Rev. A 89, 032333 (2014).
[102] M. Schuld, I. Sinayskiy, and F. Petruccione, "The quest for a Quantum Neural Network," Quantum Inf. Process. 13, 2567 (2014).
[103] P. Rotondo, M. Marcuzzi, J. P. Garrahan, I. Lesanovsky, and M. Müller, "Open quantum generalisation of Hopfield neural networks," J. Phys. A 51, 115301 (2018).
[104] J.-ichi Inoue, "Pattern-recalling processes in quantum Hopfield networks far from saturation," J. Phys. 297, 012012 (2011).
[105] M. J. Kearns and U. V. Vazirani, An Introduction to Computational Learning Theory (MIT Press, 1994).
[106] S. Russel and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. (Pearson, 2010).
[107] B. H. Zhang, G. Wagenbreth, V. Martin-Mayor, and I. Hen, "Advantages of Unfair Quantum Ground-State Sampling," Sci. Rep. 7, 1044 (2017).
[108] L. G. Valiant, "A Theory of the Learnable," Commun. ACM 27, 1134 (1984).
[109] W. G. Unruh, "Maintaining coherence in quantum computers," Phys. Rev. A 51, 992 (1995).
[110] H. Wang, S. Ashhab, and F. Nori, "Quantum algorithm for simulating the dynamics of an open quantum system," Phys. Rev. A 83, 062317 (2011).
[111] T. Barthel and M. Kliesch, "Quasilocality and Efficient Simulation of Markovian Quantum Dynamics," Phys. Rev. Lett. 108, 230504 (2012).
[112] A. Chiuri, C. Greganti, L. Mazzola, M. Paternostro, and P. Mataloni, "Linear Optics Simulation of Quantum Non-Markovian Dynamics," Sci. Rep. 2, 968 (2012).
[113] P. C. Cárdenas, M. Paternostro, and F. L. Semião, "Non-Markovian qubit dynamics in a circuit-QED setup," Phys. Rev. A 91, 022122 (2015).
[114] F. Brito and T. Werlang, "A knob for Markovianity," New J. of Phys. 17, 072001 (2015).
[115] R. Sweke, M. Sanz, I. Sinayskiy, F. Petruccione, and E. Solano, "Digital quantum simulation of many-body non-Markovian dynamics," Phys. Rev. A 94, 022317 (2016).
[116] R. Di Candia, J. S. Pedernales, A. del Campo, E. Solano, and J.
Casanova, "Quantum Simulation of Dissipative Processes without Reservoir Engineering," Sci. Rep. 5, 9981 (2015).
[117] J. F. Poyatos, J. I. Cirac, and P. Zoller, "Quantum Reservoir Engineering with Laser Cooled Trapped Ions," Phys. Rev. Lett. 77, 4728 (1996).
[118] A. R. R. Carvalho, P. Milman, R. L. de Matos Filho, and L. Davidovich, "Decoherence, Pointer Engineering, and Quantum State Protection," Phys. Rev. Lett. 86, 4988 (2001).
[119] B. Bellomo, R. Lo Franco, S. Maniscalco, and G. Compagno, "Entanglement trapping in structured environments," Phys. Rev. A 78, 060302 (2008).
[120] J. Tan, T. H. Kyaw, and Y. Yeo, "Non-Markovian environments and entanglement preservation," Phys. Rev. A 81, 062119 (2010).
[121] S. G. Schirmer and X. Wang, "Stabilizing open quantum systems by Markovian reservoir engineering," Phys. Rev. A 81, 062306 (2010).
[122] Q.-J. Tong, J.-H. An, H.-G. Luo, and C. H. Oh, "Mechanism of entanglement preservation," Phys. Rev. A 81, 052330 (2010).
[123] D. Chruściński and A. Kossakowski, "Generalized semi-Markov quantum evolution," Phys. Rev. A 95, 042131 (2017).
[124] E. Knill and R. Laflamme, "Theory of quantum error-correcting codes," Phys. Rev. A 55, 900 (1997).
[125] J. H. Eberly, K. Wódkiewicz, and B. W. Shore, "Noise in strong laser-atom interactions: Phase telegraph noise," Phys. Rev. A 30, 2381 (1984).
[126] J. I. Costa-Filho, R. B. B. Lima, R. R. Paiva, P. M. Soares, W. A. M. Morgado, R. L. Franco, and D. O. Soares-Pinto, "Enabling quantum non-Markovian dynamics by injection of classical colored noise," Phys. Rev. A 95, 052126 (2017).
[127] D. Zhou and R. Joynt, "Noise-induced looping on the Bloch sphere: Oscillatory effects in dephasing of qubits subject to broad-spectrum noise," Phys. Rev. A 81, 010103 (2010).
[128] C. Addis, G. Karpat, C. Macchiavello, and S. Maniscalco, "Dynamical memory effects in correlated quantum channels," Phys. Rev. A 94, 032121 (2016).
[129] S. Hill and W. K. Wootters, "Entanglement of a Pair of Quantum Bits," Phys. Rev. Lett. 78, 5022 (1997).
[130] W. K. Wootters, "Entanglement of Formation of an Arbitrary State of Two Qubits," Phys. Rev. Lett. 80, 2245 (1998).
[131] T. Albash and D. A. Lidar, "Demonstration of a Scaling Advantage for a Quantum Annealer over Simulated Annealing," Phys. Rev. X 8, 031016 (2018).
[132] V. S. Denchev, S. Boixo, S. V. Isakov, N. Ding, R. Babbush, V. Smelyanskiy, J. Martinis, and H. Neven, "What is the Computational Value of Finite-Range Tunneling?" Phys. Rev. X 6, 031015 (2016).
[133] T. F. Rønnow, Z. Wang, J. Job, S. Boixo, S. V. Isakov, D. Wecker, J. M. Martinis, D. A. Lidar, and M. Troyer, "Defining and detecting quantum speedup," Science 345, 420 (2014).
[134] S. Mandrà, Z. Zhu, W. Wang, A. Perdomo-Ortiz, and H. G. Katzgraber, "Strengths and weaknesses of weak-strong cluster problems: A detailed overview of state-of-the-art classical heuristics versus quantum approaches," Phys. Rev. A 94, 022337 (2016).
[135] V. Martin-Mayor and I. Hen, "Unraveling Quantum Annealers using Classical Hardness," Sci. Rep. 5, 15324 (2015).
[136] T. Albash, V. Martin-Mayor, and I. Hen, "Analog Errors in Ising Machines," arXiv:1806.03744 (2018).
[137] M. H. Amin, "Searching for quantum speedup in quasistatic quantum annealers," Phys. Rev. A 92, 052323 (2015).
[138] M. H. Amin, E. Andriyash, J. Rolfe, B. Kulchytskyy, and R. Melko, "Quantum Boltzmann Machine," Phys. Rev. X 8, 021050 (2018).
[139] M. Benedetti, J. Realpe-Gómez, R. Biswas, and A.
Perdomo-Ortiz, "Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning," Phys. Rev. A 94, 022308 (2016).
[140] S. H. Adachi and M. P. Henderson, "Application of Quantum Annealing to Training of Deep Neural Networks," arXiv:1510.06356 (2015).
[141] T. Lanting, R. Harris, J. Johansson, M. H. S. Amin, A. J. Berkley, S. Gildert, M. W. Johnson, P. Bunyk, E. Tolkacheva, E. Ladizinsky, N. Ladizinsky, T. Oh, I. Perminov, E. M. Chapple, C. Enderud, C. Rich, B. Wilson, M. C. Thom, S. Uchaikin, and G. Rose, "Cotunneling in pairs of coupled flux qubits," Phys. Rev. B 82, 060512 (2010).
[142] A. D. King, J. Carrasquilla, J. Raymond, I. Ozdan, E. Andriyash, A. Berkley, M. Reis, T. Lanting, R. Harris, F. Altomare, K. Boothby, P. I. Bunyk, C. Enderud, A. Fréchette, E. Hoskinson, N. Ladizinsky, T. Oh, G. Poulin-Lamarre, C. Rich, Y. Sato, A. Y. Smirnov, L. J. Swenson, M. H. Volkmann, J. Whittaker, J. Yao, E. Ladizinsky, M. W. Johnson, J. Hilton, and M. H. Amin, "Observation of topological phenomena in a programmable lattice of 1,800 qubits," Nature 560, 456 (2018).
[143] M. Ohkuwa, H. Nishimori, and D. A. Lidar, "Reverse annealing for the fully connected p-spin model," Phys. Rev. A 98, 022314 (2018).
[144] D. Ottaviani and A. Amendola, "Low Rank Non-Negative Matrix Factorization with D-Wave 2000Q," arXiv:1808.08721 (2018).
[145] D. Venturelli and A. Kondratyev, "Reverse Quantum Annealing Approach to Portfolio Optimization Problems," arXiv:1810.08584 (2018).
[146] T. Lanting, A. D. King, B. Evert, and E. Hoskinson, "Experimental demonstration of perturbative anticrossing mitigation using nonuniform driver hamiltonians," Phys. Rev. A 96, 042322 (2017).
[147] J. I. Adame and P. L. McMahon, "Inhomogeneous driving in quantum annealers can result in orders-of-magnitude improvements in performance," arXiv:1806.11091 (2018).
[148] V. N. Smelyanskiy, D. Venturelli, A. Perdomo-Ortiz, S. Knysh, and M. I. Dykman, "Quantum annealing via environment-mediated quantum diffusion," Phys. Rev. Lett. 118, 066802 (2017).
[149] S. Boixo, V. N. Smelyanskiy, A. Shabani, S. Isakov, M. Dykman, V. S. Denchev, M. H. Amin, A. Y. Smirnov, M. Mohseni, and H. Neven, "Computational multiqubit tunnelling in programmable quantum annealers," Nat. Commun. 7, 10327 (2016).
[150] A. J. Leggett, S. Chakravarty, A. T. Dorsey, M. P. A. Fisher, A. Garg, and W. Zwerger, "Dynamics of the dissipative two-state system," Rev. Mod. Phys. 59, 1 (1987).
[151] R. Hanson, V. V. Dobrovitski, A. E. Feiguin, O. Gywat, and D. D. Awschalom, "Coherent Dynamics of a Single Spin Interacting with an Adjustable Spin Bath," Science 320, 352 (2008).
[152] I. Hen, J. Job, T. Albash, T. F. Rønnow, M. Troyer, and D. A. Lidar, "Probing for quantum speedup in spin-glass problems with planted solutions," Phys. Rev. A 92, 042325 (2015).
[153] N. G. Dickson, M. W. Johnson, M. H. Amin, R. Harris, F. Altomare, A. J. Berkley, P. Bunyk, J. Cai, E. M. Chapple, P. Chavez, F. Cioata, T. Cirip, P. deBuen, M. Drew-Brook, C. Enderud, S. Gildert, F. Hamze, J. P. Hilton, E. Hoskinson, K. Karimi, E. Ladizinsky, N. Ladizinsky, T. Lanting, T. Mahon, R. Neufeld, T. Oh, I. Perminov, C. Petro, A. Przybysz, C. Rich, P. Spear, A. Tcaciuc, M. C. Thom, E. Tolkacheva, S. Uchaikin, J. Wang, A. B. Wilson, Z. Merali, and G. Rose, "Thermally assisted quantum annealing of a 16-qubit problem," Nat. Commun. 4, 1903 (2013).
[154] M. J. Bremner, R. Jozsa, and D.
J. Shepherd, "Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy," Proc. R. Soc. London Ser. A 467, 459 (2011).
[155] F. G. S. L. Brandao and K. Svore, "Quantum Speed-ups for Semidefinite Programming," arXiv:1609.05537 (2016).
[156] L. Barash, J. Marshall, M. Weigel, and I. Hen, "Estimating the Density of States of Frustrated Spin Systems," arXiv:1808.04340 (2018).
[157] J. Preskill, "Quantum Computing in the NISQ era and beyond," Quantum 2, 79 (2018).
[158] J. Kelly, R. Barends, A. G. Fowler, A. Megrant, E. Jeffrey, T. C. White, D. Sank, J. Y. Mutus, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, I.-C. Hoi, C. Neill, P. J. J. O'Malley, C. Quintana, P. Roushan, A. Vainsencher, J. Wenner, A. N. Cleland, and J. M. Martinis, "State preservation by repetitive error detection in a superconducting quantum circuit," Nature 519, 66 (2015).
[159] R. Harper and S. Flammia, "Fault tolerance in the IBM Q Experience," arXiv:1806.02359 (2018).
[160] V. Giovannetti, S. Lloyd, and L. Maccone, "Quantum Random Access Memory," Phys. Rev. Lett. 100, 160501 (2008).
[161] P. Zanardi and L. Campos Venuti, "Coherent Quantum Dynamics in Steady-State Manifolds of Strongly Dissipative Systems," arXiv:1404.4673v2 (2014).
[162] H. G. Katzgraber, F. Hamze, and R. S. Andrist, "Glassy Chimeras Could Be Blind to Quantum Speedup: Designing Better Benchmarks for Quantum Annealing Machines," Phys. Rev. X 4, 021008 (2014).
[163] V. Martin-Mayor and I. Hen, "Unraveling quantum annealers using classical hardness," Sci. Rep. 5, 15324 (2015).
[164] D. Venturelli, S. Mandrà, S. Knysh, B. O'Gorman, R. Biswas, and V. Smelyanskiy, "Quantum Optimization of Fully Connected Spin Glasses," Phys. Rev. X 5, 031040 (2015).
[165] J. Marshall, V. Martin-Mayor, and I. Hen, "Practical engineering of hard spin-glass instances," Phys. Rev. A 94, 012320 (2016).
[166] W. Wang, S. Mandrà, and H. G. Katzgraber, "Patch-planting spin-glass solution for benchmarking," Phys. Rev. E 96, 023312 (2017).
[167] F. Wang and D. P. Landau, "Efficient, Multiple-Range Random Walk Algorithm to Calculate the Density of States," Phys. Rev. Lett. 86, 2050 (2001).
[168] F. Wang and D. P. Landau, "Determining the density of states for classical statistical models: A random walk algorithm to produce a flat histogram," Phys. Rev. E 64, 056101 (2001).
Abstract
Dissipation, noise, and the associated decoherence are typically regarded as detrimental from the perspective of performing quantum information processing tasks. Therefore, it is remarkable that dissipative quantum processes can in fact be harnessed and used themselves as a computational resource, to enact quantum information primitives and prepare quantum states. In this work we explore several recent developments in this area, in particular, i) the use of strongly dissipative processes to achieve universal quantum computation within a scalable framework, ii) classifying quantum data according to dissipation alone, iii) suppressing errors using noise of a non-Markovian nature, and iv) utilizing thermalization to improve the performance of quantum annealers.