Applications of Quantum Error-correcting Codes to Quantum Information Processing by Kung-Chuan Hsu A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL UNIVERSITY OF SOUTHERN CALIFORNIA In Partial Fulllment of the Requirements of the Degree DOCTOR OF PHILOSOPHY (Electrical Engineering) December 2016 Copyright 2016 Kung-Chuan Hsu Dedicated to my parents Shuenn-Jyi Sheu and Feng-Chu Chen and to my wife Alice F. Cheung ii Acknowledgements First and foremost, I would like to express my appreciation to my advisor, Todd Brun, for his mentorship. He has always been helpful and patient when it comes to guidance in research. He has the broad and profound knowledge to explain any problem or concept in a way that is comprehensive yet easy to understand. He is always available for meeting and discussion, and I really appreciate all the time and eort that he has spent. His support and kindness have provided me a comfortable learning and research environment, without having to go through unnecessary worries and stress. I also look up to his diverse interests that are not limited to the academia, from Shakesperean literature and acting to being a celebrity speaker on the scientic television series \Through the Wormhole". Todd is a role model in many perspectives, and I have learned so much from him. I am really grateful to have known him and for being his student. I next would like to address a special thanks to Professor Solomon Golomb for his support and assistance, especially when I rst came to USC. His contributions to the academia, to USC, and to many individuals are of great impact. He will be sorely missed. I am grateful to Professors Daniel Lidar, Paolo Zanardi, and Ben Reichardt for being on my qualifying exam and dissertation committees. I appreciate their participation and valuable feedback regarding my dissertation. I also thank them for teaching many \Special Topics" classes on quantum computing not regularly oered at USC. These classes both broadened and sharpened my knowledge and have been useful for my research. iii I thank Professors Alexander Sawchuk and Douglas Burke for accepting me as their teaching assistant. It was a valuable experience preparing and teaching weekly lectures. It was also very enjoyable for being able to assist and guide students in learning. I thank Professors Giuseppe Caire and Gerhard Kramer for their teaching while they were still at USC. Their advise inspired me and helped me a lot in learning information theory and error-correcting codes. I would like to thank my colleagues, Ching-Yi Lai, Yi-Cong Zheng, Shengshi Pang, Jan Florjanczyk, Jose Raul Gonzalez Alonso, Christopher Cantwell, and Scout Kingery, for the many enlightening discussions, and the comradeship they provided during the conference trips. I also thank the sta members at USC for their timely assistance and kind support, especially Diane Demetras, Gerrielyn Ramos, Corine Wong, Anita Fung, Tim Boston, Ted Low, and Susan Wiedem. Last but not least, I would like to thank my family. I thank my parents for their unconditional love and support throughout my life. I would not have walked that far without their guidance and encouragement. I am indenitely indebted to them. I thank my sister for always being positive, caring, and cheerful. Her encouragement has helped me overcome many diculties that I encountered. Most importantly, I thank my wife for all her love. I thank her for always taking care of me and being patient, especially of my bad habits. This Ph.D. 
journey would not have been so exciting and enjoyable without her. iv Table of Contents Abstract x I Preliminaries 1 I.1 Basics of Quantum Mechanics . . . . . . . . . . . . . . . . . . . . . . . . . 1 I.2 Quantum Stabilizer Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 I.3 Quantum Stabilizer Subsystem Codes . . . . . . . . . . . . . . . . . . . . 5 I.4 Calderbank-Shor-Steane (CSS) Codes . . . . . . . . . . . . . . . . . . . . 6 II AFamilyofFiniteGeometryLDPCCodesforQuantumKeyExpansion 8 II.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 II.2 Quantum Key Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 II.2.1 Code construction . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 II.2.2 Luo and Devetak's quantum key expansion protocol . . . . . . . . 12 II.2.3 Analysis of QKE post-processing . . . . . . . . . . . . . . . . . . . 14 II.2.4 Improving QKE post-processing . . . . . . . . . . . . . . . . . . . 16 II.2.5 Summary of the improved QKE protocol . . . . . . . . . . . . . . 18 II.3 Finite Geometry LDPC Codes . . . . . . . . . . . . . . . . . . . . . . . . 20 II.3.1 Euclidean geometry (EG) LDPC codes . . . . . . . . . . . . . . . . 21 II.3.2 Projective geometry (PG) LDPC codes . . . . . . . . . . . . . . . 22 II.3.3 Extension of nite geometry LDPC codes by column and row splitting 24 II.4 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 II.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 IIIMethod for Quantum-jump Continuous-time Quantum Error Correc- tion 32 III.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 III.2 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 III.2.1 Structure of stabilizer codes . . . . . . . . . . . . . . . . . . . . . . 33 III.2.2 The model of quantum-jump CTQEC . . . . . . . . . . . . . . . . 38 III.3 Quantum-jump CTQEC with minimal ancillas . . . . . . . . . . . . . . . 39 III.3.1 Preliminary derivation . . . . . . . . . . . . . . . . . . . . . . . . . 41 III.3.2 Coherent measurement and the ancillary system . . . . . . . . . . 44 v III.3.3 A protocol for quantum-jump CTQEC requiring minimal number of ancillas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 III.4 Comparison with other CTQEC protocols . . . . . . . . . . . . . . . . . . 54 III.4.1 Oreshkov's protocol . . . . . . . . . . . . . . . . . . . . . . . . . . 54 III.4.2 ADL protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 III.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68 IVTeleportation-basedFault-tolerantQuantumComputationinMulti-qubit Large Block Codes 70 IV.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 IV.1.1 Universal FTQC . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 IV.2 Steane Syndrome Extraction . . . . . . . . . . . . . . . . . . . . . . . . . 74 IV.3 Measuring logical operators . . . . . . . . . . . . . . . . . . . . . . . . . . 74 IV.4 Logical teleportation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 IV.5 Logical Cliord gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 IV.5.1 Error model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 IV.6 Estimate of the logical error rate . . . . . . . . . . . . . . . . . . . . . . . 80 IV.7 3D Gauge Color Codes as C p . . . . . . . . . . . . . . . 
. . . . . . . . . . 82 V Iterative Decoder for Three-dimensional Gauge Color Codes 84 V.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 V.2 3D gauge color codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 V.2.1 Topological lattice of a 3D gauge color code . . . . . . . . . . . . . 86 V.2.2 3D gauge color code . . . . . . . . . . . . . . . . . . . . . . . . . . 88 V.2.3 Construction of the code lattice . . . . . . . . . . . . . . . . . . . . 89 V.2.4 Decoding problem for 3D gauge color codes . . . . . . . . . . . . . 90 V.3 Iterative decoder for 3D gauge color codes . . . . . . . . . . . . . . . . . . 92 V.3.1 Vertex decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94 V.3.2 Edge decoder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 V.4 Simulation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108 V.5 Closing Remark . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 VIConclusion 112 References 114 vi List of Figures II.1 Bit error rate of the keys generated by the original QKE protocol with selected codes from EG1(2; 5;c sp ;r sp ). . . . . . . . . . . . . . . . . . . . 26 II.2 Net key rate of the improved QKE protocol with selected codes from EG1(2; 5;c sp ;r sp ) and error threshold = 10 6 . . . . . . . . . . . . . . . 26 II.3 Bit error rate of the keys generated by the original QKE protocol with selected codes from PG1(2; 5;c sp ;r sp ). . . . . . . . . . . . . . . . . . . . 27 II.4 Net key rate of the improved QKE protocol with selected codes from PG1(2; 5;c sp ;r sp ) and error threshold = 10 6 . . . . . . . . . . . . . . . 27 II.5 Net key rate of the improved QKE protocol with selected codes from both EG1(2; 5;c sp ;r sp ) andPG1(2; 5;c sp ;r sp ) that perform well in the various channel error regions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 III.1 Comparison of CTQEC performance between optimal and constant cor- rection for Oreshkov's device using the three-qubit bit- ip code under independent bit- ip channels with error rate and correcting parameter = 100. The thin and thick dashed red lines are, respectively, the correctable overlap and codeword delity of the case with optimal correc- tion. The thin and thick solid blue lines are, respectively, the correctable overlap and codeword delity of the case with constant correction. . . . 60 III.2 Plot ofw 1 (t) in the case of optimal correction for Oreshkov's device with = 100. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61 III.3 Comparison of CTQEC performance between optimal and constant cor- rection for Oreshkov's protocol using the ve-qubit \perfect" code un- der independent depolarizing channels with error rate and correcting parameter is = 100. The thin and thick dashed red lines are, re- spectively, the correctable overlap and codeword delity of the case with optimal correction. The thin and thick solid blue lines are, respectively, the correctable overlap and codeword delity of the case with constant correction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 vii III.4 The CTQEC performance of our protocol and ADL protocol using three- qubit bit- ip code under independent bit- ip channels with error rate . The parameters of the ADL protocol are 2 = 64 and 2 = 128, and the parameter of our protocol is = 7:684764. The solid and dashed blue lines are, respectively, the correctable overlap and the codeword delity of our protocol. 
The filled and unfilled red circles are, respectively, the correctable overlap and the codeword fidelity of ADL's simulation of their protocol [2], sampled at time increments of 0.01 in units of the inverse error rate.
IV.1 The framework of our teleportation-based FTQC scheme.
IV.2 The circuit for Steane syndrome extraction and its effective error model.
IV.3 Diagram of the logical T gate using logical state teleportation. The red blocks represent joint measurements of logical qubits, and the blue one represents bitwise T or T† applied to the processor block.
IV.4 The logical error rate of the memory blocks for the [[2047,23,77]] code (blue), [[2921,57,77]] code (red), and [[5865,143,105]] code (green) versus physical error rate p. The number of samples for each point is up to 4 × 10^8. The dashed lines are from extrapolation of linear fitting.
IV.5 The logical error rate of the logical T gate performed on the concatenated [[15,1,3]] code of two levels (blue) and three levels (red), using up to 3 × 10^7 samples for each point. The green point represents a numerical upper bound for three levels when the physical error rate is p = 7 × 10^{-4}. The dashed lines are from extrapolation of linear fitting.
V.1 Syndromes disappear within an error cluster due to modulo-2 arithmetic.
V.2 An error that the greedy algorithm fails to correct most of the time.
V.3 An incorrect decoding of the error in Fig. V.2.
V.4 An example showing the diffusion algorithm alone is not enough.
V.5 The first type of trap in a greedy edge decoder.
V.6 The second type of trap in a greedy edge decoder.
V.7 The blue, red, and green lines are the performances of our 2-diffusion and 2-neighborhood selection vertex decoder for Bombín's construction with code parameters 11, 9, and 7, respectively. The channel is modeled as a binary symmetric channel of the corresponding type of error with error rate p. The simulation is sampled at channel error rates p = 0.01, 0.008, 0.006, 0.004, 0.002, 0.001, with 10^6 samples run at each error rate.
V.8 The blue and red lines represent the performances of Bombín's 7th and 9th codes, respectively. The solid lines are the performances of the 2-diffusion and 2-neighborhood selection vertex decoder. The dashed lines are the performances of the 2-diffusion and weight diffusion selection vertex decoder. The channel is modeled as a binary symmetric channel of the corresponding type of error with error rate p. The simulation is sampled at channel error rates p = 0.01, 0.008, 0.006, 0.004, 0.002, 0.001, with 10^6 samples run at each error rate.
V.9 The blue, red, and green lines are the performances of our edge decoder for Bombín's construction with code parameters 11, 9, and 7, respectively. The channel is modeled as a binary symmetric channel of the corresponding type of error with error rate p. The simulation is sampled at channel error rates p = 0.01, 0.008, 0.006, 0.004, 0.002, 0.001, with 10^6 samples run at each error rate. Vanishing sample points mean that no errors were observed.

Abstract

Quantum computers are susceptible to decoherence. Quantum error correction is therefore an important component of quantum computation. Quantum error-correcting codes are the tools used to realize quantum error correction. In this dissertation, I propose applications of quantum error-correcting codes in different quantum information processing settings. The dissertation is divided into six chapters. Chapter I introduces the fundamental knowledge on quantum error-correcting codes. In Chapter II, I propose the use of finite geometry low-density parity-check codes in a quantum key expansion protocol. A typical quantum key distribution protocol makes use of a pair of dual-containing classical codes; however, the dual-containing property is no longer required in a quantum key expansion protocol. Therefore, any error-correcting code that has an efficient decoder and good performance, for example a finite geometry low-density parity-check code, may be used. In Chapter III, I propose a method to derive protocols performing quantum-jump continuous-time quantum error correction. In these protocols, weak measurements and weak corrections are rapidly applied to a quantum state in order to protect it from decoherence. Chapter IV proposes a scheme for fault-tolerant quantum computation by encoding logical qubits into computational code blocks. In order to achieve universal quantum computation in this scheme, we require an error-correcting code that allows transversal implementation of an operation outside the Clifford group. One class of error-correcting codes with this property is the "gauge color code". In Chapter V, I propose the first efficient decoder for three-dimensional gauge color codes. These codes are topologically defined on a pure simplicial 3-complex, and figuring out how to decode them is an interesting problem. The conclusion is given in Chapter VI.

Chapter I
Preliminaries

I.1 Basics of Quantum Mechanics

The space of states of a quantum system is a Hilbert space. The "quantum bit", or "qubit", is the unit of quantum information, and its state space is the two-dimensional Hilbert space $\mathcal{H}$. Given an orthonormal basis
\[ \left\{\, |0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix},\; |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\} \]
of $\mathcal{H}$, a qubit state $|\psi\rangle$ is
\[ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \tag{I.1} \]
where $\alpha,\beta \in \mathbb{C}$ and $|\alpha|^2 + |\beta|^2 = 1$. The space of linear operators on $\mathcal{H}$ has a basis formed by the Pauli matrices $I$, $X$, $Y$ and $Z$, where
\[ I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\quad X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},\quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \tag{I.2} \]
The state space of $n$ qubits is the tensor product of $n$ single-qubit state spaces, $\mathcal{H}^{\otimes n} = \mathcal{H} \otimes \cdots \otimes \mathcal{H}$. Letting $|\psi_i\rangle$ be the state of the $i$-th qubit for all $i$, the state of the $n$ qubits is $|\psi_1\rangle \otimes \cdots \otimes |\psi_n\rangle$. Sometimes we write $|\psi_i \psi_j\rangle$ in place of $|\psi_i\rangle \otimes |\psi_j\rangle$ for simplicity. Also useful when we discuss stabilizer codes in the next section, the $n$-fold Pauli group $\mathcal{G}_n$ is defined to be
\[ \mathcal{G}_n = \Big\{\, i^{j} \textstyle\bigotimes_{l=1}^{n} O_l \;:\; j \in \{0,1,2,3\},\; O_l \in \{I,X,Y,Z\}\ \forall l \,\Big\}. \tag{I.3} \]
The evolution of a closed quantum system can be described by a unitary transformation $U$. The resulting state, after a unitary $U$ is applied to an initial state $|\psi\rangle$, is $U|\psi\rangle$. Measurements of a closed quantum system can be described by a set of measurement operators $\{M_m\}$. The number of possible outcomes of the measurement is the number of measurement operators. Starting with a state $|\psi\rangle$, the measurement gives outcome $m$ with probability $p_m = \| M_m|\psi\rangle \|^2$, and the state after the measurement is $M_m|\psi\rangle/\sqrt{p_m}$. Often one does not know the exact state a quantum system is in, but has information about how likely the system is to be in each of some set of states $\{|\psi_j\rangle\}$.
In this case, the system is said to be in a "mixed" state, and a "density matrix" describes this mixed state as
\[ \rho = \sum_j p_j |\psi_j\rangle\langle\psi_j|, \tag{I.4} \]
where $p_j$ is the probability that the system is in the state $|\psi_j\rangle$. If the state of a quantum system is exactly known to be $|\psi\rangle$, the system is in a "pure state", and its density matrix is simply $\rho = |\psi\rangle\langle\psi|$.
The evolution of a closed quantum system is described by a unitary operator $U$. When this operator $U$ acts on an initial state $\rho$, the state becomes $U\rho U^\dagger$. Usually, the target system we are considering belongs to a larger system. The target system may no longer be closed, and we may not be able to describe its evolution by a single unitary operator. However, any such quantum operation can be described in the form of an "operator-sum representation" (or "Kraus representation") with a set of "operator elements" (or "Kraus operators") $\{E_j\}$ satisfying the completeness relation $\sum_j E_j^\dagger E_j = I$. The state after the evolution becomes $\sum_j E_j \rho E_j^\dagger$.
Quantum measurement is an important type of quantum operation that is used to observe a quantum system. A quantum measurement can be described by a set of "measurement operators" $\{M_m\}$ acting on the system that is to be measured. If $\rho$ is the initial state of the system, the measurement gives outcome $m$ with probability $p_m = \mathrm{Tr}(M_m^\dagger M_m \rho)$, and the state after the measurement is $M_m \rho M_m^\dagger / \mathrm{Tr}(M_m^\dagger M_m \rho)$. Here, $\mathrm{Tr}(\cdot)$ is the trace operation.

I.2 Quantum Stabilizer Codes

Let $C$ be a stabilizer code that protects $k$ qubits by encoding them into $n$ qubits. We say that $C$ is an $[[n,k]]$ code. Associated to $C$ is an Abelian subgroup $S$ of $\mathcal{G}_n$, often called the "stabilizer group", that does not contain $-I^{\otimes n}$. Furthermore, there exists a set of $n-k$ independent operators, $G = \{g_1, g_2, \ldots, g_{n-k}\} \subset \mathcal{G}_n$, that generates the group $S$. The elements of $G$ are called the "stabilizer generators". We write
\[ S = \langle g_1, \ldots, g_{n-k} \rangle. \tag{I.5} \]
The codespace of $C$, which we denote $C_S$, is the $2^k$-dimensional subspace of $\mathcal{H}^{\otimes n}$ that is fixed by $S$, that is,
\[ C_S = \big\{ |\psi\rangle \in \mathcal{H}^{\otimes n} : g|\psi\rangle = |\psi\rangle,\ \forall g \in S \big\}. \tag{I.6} \]
The projector onto the codespace $C_S$ is
\[ P_C = \prod_{j=1}^{n-k} \frac{I^{\otimes n} + g_j}{2}. \tag{I.7} \]
For an error operator $E \in \mathcal{G}_n$, the error "syndrome" is a binary string of length $n-k$:
\[ s = s_1 \cdots s_{n-k}, \tag{I.8} \]
where $s_j \in \mathbb{Z}_2 = \{0,1\}$, and $s_j = 1$ if and only if $E$ anticommutes with $g_j$, that is, $E g_j = -g_j E$. Associated with each syndrome string is a "syndrome subspace" $\mathcal{H}_s$. The projector onto $\mathcal{H}_s$ is
\[ P_{C,s} = \prod_{j=1}^{n-k} \frac{I^{\otimes n} + (-1)^{s_j} g_j}{2}, \tag{I.9} \]
and
\[ \mathcal{H}^{\otimes n} = \bigoplus_{s \in \mathbb{Z}_2^{n-k}} \mathcal{H}_s. \tag{I.10} \]
The "normalizer group" of $S$ in $\mathcal{G}_n$, denoted $N_S$, is the subgroup of elements of $\mathcal{G}_n$ that commute with $S$, in the sense that conjugation by an element of $N_S$ maps every element of $S$ to an element of $S$. For any stabilizer group $S$, it can be shown that its normalizer group is the same as its centralizer group, which consists of the operators that commute with every element of $S$. Therefore,
\[ N_S = \{ h \in \mathcal{G}_n : h g h^\dagger \in S,\ \forall g \in S \} = \{ h \in \mathcal{G}_n : hg = gh,\ \forall g \in S \}. \tag{I.11} \]
The code can correct the set of errors $E = \{E_j\}$ where $E_j^\dagger E_l \in S \cup (\mathcal{G}_n - N_S)$ for all $j,l$. This last condition is equivalent to $E_j^\dagger E_l \notin N_S \setminus S$ for all $j,l$. The minimum distance $d$ of a stabilizer code is the minimum weight, or minimum number of non-trivial qubit operators, of an element in $N_S \setminus S$. The code is capable of correcting any error of weight less than or equal to $\lfloor (d-1)/2 \rfloor$. We then call this stabilizer code an $[[n,k,d]]$ code. For more information regarding quantum stabilizer codes, please see [25].
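As a small illustration of the syndrome defined in Eq. (I.8), the following sketch computes syndrome bits by checking anticommutation with each stabilizer generator, using the three-qubit bit-flip code (the same generators, ZZI and ZIZ, appear in Example 1 of Chapter III). The dense-matrix representation and the function names are our own choices for a tiny example; practical decoders would instead work with a binary symplectic representation of Pauli operators.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron(*ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def syndrome(error, generators):
    """Syndrome bits s_j = 1 iff the error anticommutes with g_j (Eq. (I.8))."""
    return [int(np.allclose(error @ g, -g @ error)) for g in generators]

# Stabilizer generators of the three-qubit bit-flip code.
gens = [kron(Z, Z, I2), kron(Z, I2, Z)]

for name, E in [("X1", kron(X, I2, I2)),
                ("X2", kron(I2, X, I2)),
                ("X3", kron(I2, I2, X))]:
    # Distinct syndromes for the three single-qubit bit flips -> all are correctable.
    print(name, syndrome(E, gens))
```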
I.3 Quantum Stabilizer Subsystem Codes In the theory of stabilizer codes, we see that the system's Hilbert space is partitioned into H n =H 0 H ? 0 . The thoery of stabilizer subsystem codes considers the case whereH 0 is further partitioned as H 0 =A B; (I.12) where the information is encoded on the A subsystem only, and B is called the \gauge" subsystem. The idea is that A B and A 0 B are considered having the same information even if 6= 0 . 5 In the construction of a subsystem code, a \gauge group" G of operators are required besides the stabilizer group S =hg 1 ; ;g nk i. Futhermore, GS, and G =hg 1 ; ;g nk ;X nk+1 ; ;X nk+r ;Z nk+1 ; ;Z nk+r i; (I.13) where for all j2f1; ;rg, X nk+j anticommutes with Z nk+j , and all other pairs of generators commute. Similar to the usual stabilizer formalism, the subsystem code is capable of correcting the set of errors E =fE j g where E y j E l 62N S nG8 j;l. The distanced of the subsystem code is dened as the minimum weight of an element in N S nG. For more information regarding quantum subsystem codes, please see [58, 75]. I.4 Calderbank-Shor-Steane (CSS) Codes The Calderbank-Shor-Steane (CSS) codes are a class of stabilizer codes, that have each of their stabilizer generators being a tensor product of either X's andI's (X-type) orZ's and I's (Z-type). A CSS code can be described by two classical codes C 1 and C 2 with the same code length n satisfying the \twisted property": C ? 1 C 2 ; (I.14) where C ? 1 is the dual code of C 1 . Let H 1 and H 2 be the parity-check matrices of C 1 and C 2 , respectively. A necessary and sucient condition for C ? 1 C 2 is H 2 H T 1 = 0. Let k 1 and k 2 be the rank of C 1 and C 2 , respectively. The quantum code has rank 6 n(nk 1 )(nk 2 ) =k 1 +k 2 n, that is, it encodes an information block ofk 1 +k 2 n qubits into a code block of n qubits. Hence, given two valid classical codes C 1 and C 2 , the corresponding CSS code is an [[n;k 1 +k 2 n]] code. H 1 andH 2 are used to dene theX-type andZ-type stabilizer generators, respectively. Each parity-check vector, or row, of H 1 denes an X-type stabilizer generator being a tensor product ofX's at the 1-location of the parity-check vector. Each parity-check vector ofH 2 denes anZ-type stabilizer generator being a tensor product ofZ's at the 1-location of the parity-check vector. Since X and Z commutes with themselves and anticommute with each other, the X-type stabilizer generators are used to produce syndromes for Z errors, and the Z-type stabilizer generators are used to produce syndromes for X errors. If bothC 1 andC 2 are capable of correcting any t errors, the CSS code can correct errors on any t qubits. 7 Chapter II A Family of Finite Geometry LDPC Codes for Quantum Key Expansion II.1 Introduction A quantum key expansion (QKE) protocol allows two parties, Alice and Bob, to expand a shared secret key by using one-way quantum communication and public classical com- munication. Luo and Devetak [45] demonstrated a QKE protocol, which is derived from the standard BB84 quantum key distribution (QKD) protocol with post-processing steps involving the use of entanglement-assisted Calderbank-Shor-Steane (CSS) codes. The protocol is provably secure from an eavesdropper, Eve, based on a result by Shor and Preskill [66]. The QKE protocol has a potential advantage over QKD, in that the original pair of classical codes considered need not have the dual-containing property. The cost is that the parties involved have to pre-share a secret key. 
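As a concrete illustration of the dual-containing (twisted) property that standard CSS constructions require and that QKE relaxes, the sketch below checks the condition $H_2 H_1^T = 0$ over GF(2) from Sec. I.4, using the [7,4] Hamming code, whose parity-check matrix is self-orthogonal (this classical code underlies the Steane [[7,1,3]] code). The matrix and the helper function are our own illustrative choices, not taken from the dissertation.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (illustrative choice).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def twisted_condition(H1, H2):
    """True if H2 @ H1^T = 0 over GF(2), i.e. the dual of C1 is contained in C2."""
    return not np.any((H2 @ H1.T) % 2)

# Using the same code for C1 and C2: the condition holds, so this pair gives a
# standard (non-entanglement-assisted) CSS code and needs no pre-shared key.
print(twisted_condition(H, H))   # True
```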
The classical codes correspond to entanglement-assisted quantum error-correcting code (EAQECC). The EAQECC con- struction is described by the formalism given by Brun, Devetak and Hsieh [12]. 8 In the CSS construction of Luo and Devetak's QKE protocol, a pair of classical linear codes with good error-correcting performance is needed. LDPC codes are classical linear codes that have sparse parity-check matrices, and many families of LDPC codes have been studied and claimed to give good performance (see, e.g., [40, 48, 15, 29, 60, 30, 31]). There were several recent studies on the performance of LDPC codes used for QKD [53, 22]. In our work, LDPC codes constructed from nite geometry (FG) are considered [40, 31], and methods to incorporate them into the QKE protocol are proposed and explained. For simplicity, the quantum channel is modeled by the depolarizing channel. Given a tolerable bit error threshold for the generated keys, the goal is to search for codes that maximize the net key rate for given channel error parameters. This chapter is organized as follows. In Sec. II.2, We rst introduce the QKE protocol of Luo and Devetak. We then discuss our proposal of modications to the post-processing steps to improve performance. In Sec. II.3, we discuss families of LDPC codes generated by nite geometry. In Sec. II.4, we discuss simulation results using our improved QKE protocol from Sec. II.2 and the codes from Sec. II.3, and we analyze their performance. In Sec. II.5, we give conclusions. The one-dimensional vectors appearing in this chapter should always be considered as column vectors. The vectors are denoted with underline, and the matrices are denoted with boldface. The operations + and are dened respectively as component-wise addition and addition modulo 2. 9 II.2 Quantum Key Expansion The QKE protocol discussed in this chapter is derived from the BB84 quantum key distribution protocol, using CSS codes for error correction and privacy amplication. The CSS code used for a BB84 QKD protocol is derived from a pair of \twisted" classical linear codes as described in Chapter I.4. The QKE protocol, however, does not require the pair of classical codes have the twisted property. The idea is to interpret the code as an entanglement-assisted code rather than a standard quantum code, and the cost is that the two parties involved must have a pre-shared secret key that is expanded by the protocol. In Sec. II.2.1, the structure of entanglement-assisted code will be introduced, as well as the notation that will be used throughout the paper. Sec. II.2.2 reviews the steps of the QKE protocol proposed by Luo and Devetak [45]. In Sec. II.2.3 and II.2.4, I analyze the post-processing steps of the QKE protocol, and propose improvements. In Sec. II.2.5, I summarize the improvements of II.2.4 and give a QKE protocol with enhanced performance compared to the original QKE protocol. II.2.1 Code construction This subsection summarizes the entanglement-assisted CSS code construction and the matrix structures involved. The notation mentioned here will be used throughout this chapter. For i = 1; 2, let C i be a classical [n;k i ;d] code with parity-check matrix H i of size (nk i )n. Based on the given pair of classical codes, an [[n;k 1 +k 2 n +c;d;c]] entanglement-assisted quantum CSS code can be constructed, where c = rank(H 1 H T 2 ) 10 is the number of ebits (or entangled pairs of qubits) needed. This code can protect m = k 1 +k 2 n +c qubits from error. 
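Since the number of ebits is just the GF(2) rank of $H_1 H_2^T$, the construction above is easy to sanity-check numerically. The sketch below computes $c$ and $m = k_1 + k_2 - n + c$ for a pair of binary parity-check matrices; the function names and the tiny example matrices are ours and purely illustrative, and $k_i$ is computed as $n$ minus the GF(2) rank of $H_i$ so full row rank is not assumed.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    A = (np.array(M, dtype=int) % 2).copy()
    rows, cols = A.shape
    rank, col = 0, 0
    while rank < rows and col < cols:
        pivot = np.nonzero(A[rank:, col])[0]
        if pivot.size == 0:
            col += 1
            continue
        A[[rank, rank + pivot[0]]] = A[[rank + pivot[0], rank]]   # move pivot row up
        below = np.nonzero(A[rank + 1:, col])[0] + rank + 1
        A[below] = (A[below] + A[rank]) % 2                       # clear entries below pivot
        rank += 1
        col += 1
    return rank

def eaqecc_parameters(H1, H2):
    """Return (n, m, c) of the [[n, m; c]] entanglement-assisted CSS code from H1, H2."""
    n = H1.shape[1]
    k1 = n - gf2_rank(H1)
    k2 = n - gf2_rank(H2)
    c = gf2_rank((H1 @ H2.T) % 2)        # number of ebits needed
    m = k1 + k2 - n + c                  # number of protected qubits
    return n, m, c

# Example with two tiny illustrative parity-check matrices:
H1 = np.array([[1, 1, 0, 0], [0, 0, 1, 1]])
H2 = np.array([[1, 0, 1, 0], [0, 1, 0, 1]])
print(eaqecc_parameters(H1, H2))   # -> (4, 1, 1), i.e. a [[4, 1; 1]] EA-CSS code
```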
After this process, we end up with two twisted classical codes C 0 1 and C 0 2 with \augmented" parity check matrices H 0 1 and H 0 2 . The derivation ofH 0 i fromH i is as follows: For a given pair of H 1 and H 2 , there always exist nonsingular matrices T 1 and T 2 such that T 1 H 1 H T 2 T T 2 = 0 B @ 0 (nk 1 c)(nk 2 c) 0 (nk 1 c)c 0 c(nk 2 c) I c 1 C A: (II.1) H 0 i can thus be constructed as follows to assure that the new codes satisfy the twisted property,H 0 1 H 0T 2 =0. H 0 i = (T i H i J i ), whereJ i = 0 B @ 0 (nk i c)c I c 1 C A: (II.2) Suppose H 0 1 and H 0 2 are constructed. There exist binary matrices E 1 , F 1 , E 2 , and F 2 such that the following four requirements are satised: 1. The rows ofH 0 1 andE 1 form a basis for C 0 2 . 2. The rows ofH 0 2 andE 2 form a basis for C 0 1 . 3. N 1 = 0 B B B B B @ H 0 1 E 1 F 1 1 C C C C C A andN 2 = 0 B B B B B @ F 2 E 2 H 0 2 1 C C C C C A are full rank matrices. 4. N 1 N T 2 =I. The new parity-check matrices H 0 i have more columns than the original H i . These columns correspond to additional qubits on the receiver's side. Before decoding, the 11 sender (Alice) and the receiver (Bob) share c entangled pairs. Since Bob's half of these pairs do not pass through the channel, they are noise-free. The syndrome of an error is dened as the error vector multiplied by the parity-check matrix of the code. For the codeC 0 1 in our case, the syndrome corresponding to the error vector e is s =H 0 1 e. The set of codewords of the code is the set of all vectors with zero syndromes. The decoder for the LDPC codes considered in this work is an SPA decoder [46] that identies a probable error corresponding to each syndrome. Based on the decoder, the error set correctable by the code can be dened. For the codeC 0 1 with parity-check matrix H 0 1 , one may dene such a set asE 0 1 =fF T 2 s +E T 2 (s) +H 0T 2 0 (s) :s2Z nk 1 2 g, where () : Z nk 1 2 ! Z m 2 and 0 () : Z nk 1 2 ! Z nk 2 2 are mappings xed by the decoder. For every syndrome s2Z nk 1 2 , the decoder givesF T 2 s +E T 2 (s) +H 0T 2 0 (s) as the probable error. The receiver then corrects this error on the received codeword to retrieve the original message. II.2.2 Luo and Devetak's quantum key expansion protocol Let Alice and Bob be the sender and receiver utilizing the QKE protocol proposed in [45]. The steps of the protocol are: 1) Alice generates a binary string a consisted of (2 + 3)n random bits. 2) Alice generates another binary string consisted of (2 + 3)n random bits, and she prepares each bit in a in the Z or X basis according to the corresponding bit in . For example, Alice may prepare the bit in a in the Z basis if the corresponding bit in is 0, and in the X basis otherwise. 3) Alice sends the prepared qubits to Bob. 12 4) Bob receives the qubits, and he generates a binary string consisting of (2 + 3)n random bits. Bob then uses to determine in which bases to measure the received qubits. To be consistent with the example in 2), Bob measures the received qubit in the Z basis if the corresponding bit in is 0 and measures in theX basis otherwise. Let the resulting bit string be b. 5) Alice announces, and Bob discards the bits inb where the corresponding bits in and don't match, that is, the bit locations where they prepare and measure in dierent bases. Bob announces which bits he discards. With high probability, there are at least (1 +)n bits left; if not, they abort and restart the protocol. 
6) Alice randomly chooses n bits and announces the bit locations for Bob to extract the corresponding bits. Let Alice's resulting string be ^ a, and Bob's be ^ b. There are at leastn pairs of bits left, and those pairs are used for channel estimation. Alice and Bob announce those bits to each other and count the fraction that do not match. If there are too many errors, they abort and restart the protocol. 7) Alice attaches the length-c pre-shared bit string to ^ a. She rst computes s A = H 0 1 0 B @ ^ a 1 C A and announces it to Bob. She then computes her part of the generated key, k A =E 1 0 B @ ^ a 1 C A. 8) Bob computes s B = H 0 1 0 B @ ^ b 1 C A, and his part of the generated key is k B = E 1 0 B @ ^ b 1 C A(s A s B ). 13 II.2.3 Analysis of QKE post-processing Consider the procedure of Luo and Devetak's QKE protocol formalized in the previous subsection. The error correction is performed at the last step 8) where Bob computes (s A s B ). In this case, s A s B is the syndrome that initializes the decoding. To understand how the function () is computed, we need to examine its denition and the matrix structure of the code. Suppose we start with two LDPC codes with parity-check matricesH 1 andH 2 of sizes (nk 1 )n and (nk 2 )n, and c = rank(H 1 H T 2 ). The formalism in Sec. II.2.1 gives two (n +c) (n +c) full rank matricesN 1 andN 2 , each formed by 3 block-matricesH 0 i , E i , andF i of sizes (nk i )(n+c); (k 1 +k 2 n+c)(n+c), and (nk (1+i mod2) )(n+c) respectively. H 0 1 and H 0 2 are dened as the parity check matrices of the newly formed entanglement-assisted CSS code. Note that the two new parity-check matrices need not be low-density and thus the performance will be poor if one uses them to run the SPA decoder. However, as seen in Sec. II.2.1, since the matrix operations transforming H i toH 0 i are reversible, the error syndrome with respect to the original parity-check matrix H i can be retrieved by doing inverse matrix operations on the corresponding syndrome with respect to H 0 i . That is, given a syndrome corresponding to H 0 i , we can nd the corresponding syndrome for H i . As a result, the errors can be decoded by the SPA decoder with LDPC matrixH i . The details follow. The function (), which includes the process of error correction, comes into the pic- ture when the error setE 1 correctable by the codeH 0 1 is dened. Recall from Sec. II.2.1, E 1 =fF T 2 s +E T 2 (s) +H 0T 2 0 (s) :s2Z nk 1 2 g. Since the matrixN 2 formed byH 0 2 ,E 2 , andF 2 is a full rank matrix inZ 2 , the error string corresponding to a particular syndrome 14 s can be retrieved by the following steps: i) Compute s 0 =T 1 1 s. ii) Run the SPA decoder using the original LDPC matrix H 1 with the syndrome s 0 . The decoded string is the estimated error, and we denote it by ^ e. iii) Attach c 0's to ^ e and compute (s) =E 1 0 B @ ^ e 0 c1 1 C A. In the above steps i) and ii), the error message can be decoded usingH 1 instead ofH 0 1 since the last c bits of the message are pre-shared by Alice and Bob, and thus the error message from those bits should always be a string of 0's. The syndrome is then totally determined by the rst n bits of the error message. This allows us to use the original low-density parity-check matrices for decoding and thus the error-correcting performance is maintained. The last step may not be trivial, and we explain it in the following. 
Using our notation, if 0 B @ ^ e 0 c1 1 C A is correctable byH 0 1 with syndromes, it is in the setE 1 and can be written in the form 0 B @ ^ e 0 c1 1 C A =N T 2 0 B B B B B @ s (s) 0 (s) 1 C C C C C A : (II.3) SinceN 1 N T 2 =I, it is obvious thatN T 2 =N 1 1 . N 1 can then be multiplied to both sides of the above equation. As a result, 15 0 B B B B B @ s (s) 0 (s) 1 C C C C C A =N 1 0 B @ ^ e 0 c1 1 C A = 0 B B B B B @ H 0 1 E 1 F 1 1 C C C C C A 0 B @ ^ e 0 c1 1 C A: (II.4) It should now be clear that step iii) is valid. II.2.4 Improving QKE post-processing A very important observation based on our simulations is that in the cases where the channel error rates are not small, the bit error rates of the resulting keys are signicant whenever the estimated errors 0 B @ ^ e 0 c1 1 C A are erroneous. Specically, the bit error rates of the keys are about half the block error rates for suciently large channel error proba- bilities. Since () is equivalent to multiplying by a matrix, E 1 , this observation implies thatE 1 is generally not sparse. Given a block error, it is likely that each row of E 1 and the block error have overlapping non-zero elements, which on average contributes to a signicant number of errors in the key. In other words, when a block error occurs the resulting key is almost totally randomized. From the observation above, we can apply two useful improvements to the protocol. Improvement1 is to check the syndrome following the decoder's output. This allows the detection of not-yet-converged messages from the SPA decoder. These messages must have block errors. Aborting the protocol after detecting those erroneous messages greatly improves the error performance of the generated key, at the cost of modestly reducing the key rate, since the information sent through the channel in the prior stages is wasted. 16 Improvement 2 is to check the generated keys directly. Let the block error rate and bit error rate of the generated keys be denoted by R blk and R bit . Since block errors of the keys result in a large fraction of the bits being erroneous in each block, checking several randomly chosen bits allows a large probability of detecting those block errors. Let us assume the relationship R bit = qR blk , such that, on average, a block error yields a bit error rate of q. Suppose each time the protocol is processed, a number of bits are chosen randomly from the key, and are used for a check between the sender and the receiver. The bit error rate of the generated key, ^ R bit , can then be calculated as ^ R bit =R bit (1q) 1R blk + (1q) R blk R bit f: (II.5) The bit error rate is scaled by the factor f. For xed R blk , f decreases dramatically as increases. This means that not many bits need be checked to greatly improve the error performance of the key. To determine , we nd the smallest satisfying ^ R bit <, where is the desired threshold for the bit error rate of the nal key. That is, = 8 > < > : dlog (1q) ( (1R blk ) (q)R blk )e if q>, 0 otherwise. (II.6) Since those randomly chosen bits from the key are revealed, the tradeo in using this method would be to reduce key rate by an amount n . A problem arises here, in that the pre-shared key bits are consumed even if the protocol fails, which could even result in the net key rate being negative. However, there is a way to get around this problem. 
17 In the original QKE protocol, Alice announces to Bob the message s A =H 0 1 0 B @ ^ a 1 C A, and Bob corrects the errors using the syndrome s =s A H 0 1 0 B @ ^ b 1 C A =H 0 1 0 B @ ^ a ^ b 0 1 C A. This syndrome can also be computed by Bob if Alice sends the message ^ s A =H 0 1 0 B @ ^ a 0 1 C A instead. In this case, Bob just computes s = ^ s A H 0 1 0 B @ ^ b 0 1 C A =H 0 1 0 B @ ^ a ^ b 0 1 C A. Thus, instead of comparing the keys k A = E 1 0 B @ ^ a 1 C A and k B = E 1 0 B @ ^ b 1 C A (s A s B ) and consuming the pre-shared key , it is sucient for the two parties to compare ^ k A =E 1 0 B @ ^ a 0 1 C A and ^ k B =E 1 0 B @ ^ b 0 1 C A(^ s A ^ s B ). In this way, we can post- pone the consumption of the pre-shared keys until after the check is performed. Note that, Alice and Bob must discard the bits from the nal key corresponding to the ones they compare, since information about those bits is publicly revealed. II.2.5 Summary of the improved QKE protocol In this subsection, we will combine the two improvements from the previous subsection and assess the improved performance of the QKE protocol. We consider the case where Improvement 1 is performed rst, and then Improvement 2 is performed if the check in Improvement 1 is successful. Letp 1 be the failure rate of the check inImprovement1. Conditioned on passing the check in Improvement 1, let p 2 be the rate of bit errors in the generated keys followed 18 by the remaining block errors. Also, let R blk be the block error rate of the LDPC code and be the error threshold that is desired for QKE. The values, R blk , p 1 andp 2 , can be determined by simulation. After Improvement 2 is performed, the bit error rate of the generated key, ^ R bit , can then be calculated: ^ R bit =p 2 (1p 2 ) (R blk p 1 ) 1R blk + (1p 2 ) (R blk p 1 ) : (II.7) To determine , we nd the smallest satisfying ^ R bit <. That is, = 8 > < > : dlog (1p 2 ) ( (1R blk ) (p 2 )(R blk p 1 ) )e if p 2 >, 0 otherwise. (II.8) . We now outline the improved QKE protocol. Referring to the original QKE protocol in Sec. II.2.2, the procedure up to step 6) will be the same. The steps beyond 7) are modied as follows: 7) Alice computes ^ s A =H 0 1 0 B @ ^ a 0 1 C A and announces it to Bob. 8) Bob rst computes ^ s B =H 0 1 0 B @ ^ b 0 1 C A, and then he runs the SPA decoder using the original LDPC matrixH 1 with the syndrome s 0 =T 1 1 (^ s A ^ s B ). Let the decoded error string be ^ e. 9) Bob checks if H 1 ^ es 0 is the all-zero string. If not, the protocol is aborted and they start over. This is a result of Improvement 1. 19 10) Alice randomly chooses bits from ^ k A =E 1 0 B @ ^ a 0 1 C A and announces them to Bob. Bob checks if the corresponding bits from ^ k B = E 1 0 B @ ^ b ^ e 0 1 C A match the ones sent by Alice. If the strings do not completely match, the protocol is aborted and they start over. This is a result of Improvement 2. 11) Alice computes her part of the generated key as k A = ^ k A E 1 0 B @ 0 1 C A, excluding the bits corresponding to the ones they have compared in the previous step. Bob also computes his part of the generated key as k B = ^ k B E 1 0 B @ 0 1 C A, excluding the bits similarly. The pre-shared key is only used in the last step. Therefore, the pre-shared key will not be consumed if the protocol is aborted in steps 10) or 11). The net key rate of this improved QKE protocol is R net = (1R blk + (1p 2 ) (R blk p 1 )) mc n : (II.9) We will see how well this does in simulations below. 
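To make the post-processing bookkeeping concrete, the sketch below evaluates the number of compared key bits (written here as alpha) from Eq. (II.8) and the resulting net key rate, reading Eq. (II.9) as the success probability times (m - c - alpha)/n since the alpha compared bits are discarded from the final key. The symbol names and all numerical inputs in the example are placeholders of ours, not simulation results from this chapter.

```python
import math

def num_check_bits(R_blk, p1, p2, eps):
    """Smallest alpha giving a residual key bit error rate below eps, per Eq. (II.8)."""
    if p2 <= eps or R_blk <= p1:
        return 0
    ratio = eps * (1.0 - R_blk) / ((p2 - eps) * (R_blk - p1))
    return max(0, math.ceil(math.log(ratio, 1.0 - p2)))

def net_key_rate(n, m, c, R_blk, p1, p2, eps):
    """Net key rate of the improved QKE protocol, following Eq. (II.9)."""
    alpha = num_check_bits(R_blk, p1, p2, eps)
    success = 1.0 - R_blk + (1.0 - p2) ** alpha * (R_blk - p1)
    return success * (m - c - alpha) / n

# Placeholder numbers purely for illustration:
print(net_key_rate(n=5115, m=4091, c=1022, R_blk=1e-3, p1=5e-4, p2=0.4, eps=1e-6))
```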
II.3 Finite Geometry LDPC Codes Finite geometry (FG) LDPC codes were formalized by Kou, Lin and Fossorier [40]. There are four families of FG LDPC codes: type-1 Euclidean geometry (EG1) LDPC codes, type-2 Euclidean geometry (EG2) LDPC codes, type-1 projective geometry (PG1) LDPC codes, and type-2 projective geometry (PG2) LDPC codes. These classical FG LDPC 20 codes were used by Hsieh, Yen and Hsu to construct EAQECCs with good performance that use relatively little entanglement [31]. In this section, we brie y restate the results from [40] and [31] and introduce the construction of FG LDPC codes. II.3.1 Euclidean geometry (EG) LDPC codes Let EG(p; 2 s ) be anp-dimensional Euclidean geometry over the Galois eld GF(2 s ), where p;s2N. This geometry consists of 2 ps points, where each is anp-tuple over GF(2 s ). The all-zerop-tuple is dened as the origin. Those points form an p-dimensional vector space over GF(2 s ). A line in EG(p; 2 s ) is a coset of a one-dimensional subspace of EG(p; 2 s ), and each line consists of 2 s points. There are 2 (p1)s (2 ps 1)=(2 s 1) lines. Each line has 2 (p1)s 1 lines parallel to it. Each point is intersected by (2 ps 1)=(2 s 1) lines. Let GF(2 ps ) be the extension eld of GF(2 s ). Each element in GF(2 ps ) can be rep- resented as an p-tuple over GF(2 s ), and hence a point in EG(p; 2 s ). Therefore, GF(2 ps ) may be regarded as the Euclidean geometry EG(p; 2 s ). Let be a primitive element of GF(2 ps ). Then 0; 0 ; 1 ; 1 ;:::; 2 ps 2 represent the 2 ps points of EG(p; 2 s ). LetH EG1 (p;s) be a matrix over GF(2). The rows of H EG1 (p;s) are the incidence vectors of all the lines in EG(p; 2 s ) not passing through the origin. The columns of H EG1 (p;s) are the 2 ps 1 non-origin points of EG(p; 2 s ), and theith column corresponds to the point i1 . ThenH EG1 (p;s) consists of n = 2 ps 1 columns and J = (2 (p1)s 1)(2 ps 1)=(2 s 1) rows, and it has the following structure: 1. Each row has weight r = 2 s . 2. Each column has weight c = (2 ps 1)=(2 s 1) 1. 3. Any two columns have at most one \1-component" in common. 4. Any two rows have at most one \1-component" in common. 21 The density of H EG1 (p;s) is 2 s =(2 ps 1), which is small for p or s large. Then H EG1 (p;s) is a low-density matrix. The LDPC code with parity-check matrix H EG1 (p;s) is called a type-1 Euclidean geometry LDPC code, and we denote it by EG1(p;s). Let H EG2 (p;s) = H EG1 (p;s) T . Then H EG2 (p;s) is a matrix with 2 ps 1 rows and (2 (p1)s 1)(2 ps 1)=(2 s 1) columns. The rows ofH EG2 (p;s) are the non-origin points of EG(p; 2 s ), and the columns are the lines in EG(p; 2 s ) not passing through the origin, and it has the following structure: 1. Each row has weight r = (2 ps 1)=(2 s 1) 1. 2. Each column has weight c = 2 s . 3. Any two columns have at most one \1-component" in common. 4. Any two rows have at most one \1-component" in common. The LDPC code with parity-check matrix H EG2 (p;s) is called a type-2 Euclidean geometry LDPC code, and we denote it by EG2(p;s). II.3.2 Projective geometry (PG) LDPC codes Let GF(2 (p+1)s ) be the extension eld of GF(2 s ). Let be a primitive element of GF(2 (p+1)s ). Let n = (2 (p+1)s 1)=(2 s 1) and = n . Then has order 2 s 1, and the 2 s elements 0; 0 ; 1 ; 2 ;:::; 2 s 2 form all the elements of GF(2 s ). Consider the set f 0 ; 1 ; 2 ;:::; n1 g, and partition the non-zero elements of GF(2 (m+1)s ) into n disjoint subsetsf i ; i ; 2 i ;:::; 2 s 2 i g, for i2f0; 1;:::;n 1g. Each such set is represented by its rst element ( i ), for i2f0; 1;:::;n 1g. 
If each element in GF(2 (p+1)s ) is represented as a (p + 1)-tuple over GF(2 s ), then ( i ) consists of 2 s 1 (p + 1)-tuples over GF(2 s ). The (p + 1)-tuple over GF(2 s ) that 22 represents ( i ) can be regarded as a point in a nite geometry over GF(2 s ). Then the points ( 0 ); ( 1 ); ( 2 );:::; ( n1 ) form a p-dimensional projective geometry over GF(2 s ), denoted PG(p; 2 s ). (Note that a projective geometry does not have an origin.) LetH PG1 (p;s) be a matrix over GF(2). The rows ofH PG1 (p;s) are the incidence vectors of all the lines in PG(p; 2 s ). The columns of H PG1 (p;s) are the n points of PG(p; 2 s ), and theith column corresponds to the point ( i1 ). ThenH PG1 (p;s) consists ofn = (2 (p+1)s 1)=(2 s 1) columns andJ = (2 ps +:::+2 s +1)(2 (p1)s +:::+2 s +1)=(2 s +1) rows, and it has the following structure: 1. Each row has weight r = 2 s + 1. 2. Each column has weight c = (2 ps 1)=(2 s 1). 3. Any two columns have at most one \1-component" in common. 4. Any two rows have at most one \1-component" in common. The density of H PG1 (p;s) is (2 2s 1)=(2 (p+1)s 1), which is small for p or s large. ThenH PG1 (p;s) is a low-density matrix. The LDPC code with parity-check matrix H PG1 (p;s) is called a type-1 projective geometry LDPC code, and we denote it by PG1(p;s). Let H PG2 (p;s) = H PG1 (p;s) T . Then H PG2 (p;s) is a matrix with (2 (p+1)s 1)=(2 s 1) rows and (2 ps +::: + 2 s + 1)(2 (p1)s +::: + 2 s + 1)=(2 s + 1) columns. The rows of H PG2 (p;s) are the points of PG(p; 2 s ), and the columns are the lines in PG(p; 2 s ), and it has the following structure: 1. Each row has weight r = (2 ps 1)=(2 s 1). 2. Each column has weight c = 2 s + 1. 3. Any two columns have at most one \1-component" in common. 4. Any two rows have at most one \1-component" in common. 23 The LDPC code with parity-check matrix H PG2 (p;s) is called a type-2 projective geometry LDPC code, and we denote it by PG2(p;s). II.3.3 Extension of nite geometry LDPC codes by column and row splitting A nite geometry LDPC code with n columns and J rows can be extended by splitting each column of its parity-check matrix H into multiple columns. If the splitting is done properly, very good extended nite geometry LDPC codes can be obtained. Let g 1 ;g 2 ;:::;g n be the columns of H. Let c sp be the column splitting factor, c sp 2 f1; 2;:::; c g. Then the column splitting can be done by splitting each g i intoc sp columns g i;1 ;g i;2 ;:::;gi;c sp , and distribute the ones of the original column among the new columns accordingly. So that the columns g i;1 ;g i;2 ;:::;gi; c c sp b c csp c have weights c csp + 1, and the other columns have weights c csp . After column splitting, we can proceed with row splitting, that is, determine a row splitting factor r sp 2f1; 2;:::; r g and follow similarly the process of column splitting. We denote EG1(p;s;c sp ;r sp ) as the LDPC code constructed by an EG1(p;s) LDPC code with column and row splitting factors c sp and r sp . The codes EG2(p;s;c sp ;r sp ), PG1(p;s;c sp ;r sp ), PG2(p;s;c sp ;r sp ) are dened similarly. II.4 Simulation Results In this section, we provide simulation results of our QKE protocol with FG codes. We use the same LDPC code for both C 1 and C 2 in constructing the entanglement-assisted CSS code for our QKE protocol. The channel for quantum communication is assumed to be a 24 depolarizing channel, and the channel error probability P e in the simulation corresponds to that of the equivalent classical binary-symmetric channel (BSC). 
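Before the Monte Carlo results, here is one way to implement the column- and row-splitting construction of Sec. II.3.3 that produces the extended codes simulated below. The ones of each column are distributed round-robin among the new columns, which reproduces the weight pattern stated in Sec. II.3.3; the section does not specify which ones go to which new column, so that assignment order, the function names, and the toy matrix are our own choices.

```python
import numpy as np

def split_columns(H, c_sp):
    """Split every column of H into c_sp columns, distributing its ones round-robin.

    A column of weight gamma yields (gamma mod c_sp) new columns of weight
    floor(gamma/c_sp) + 1 and the rest of weight floor(gamma/c_sp), matching
    the weight pattern described in Sec. II.3.3.
    """
    rows, cols = H.shape
    new_cols = []
    for j in range(cols):
        ones = np.flatnonzero(H[:, j])
        group = [np.zeros(rows, dtype=int) for _ in range(c_sp)]
        for idx, r in enumerate(ones):          # round-robin assignment of the ones
            group[idx % c_sp][r] = 1
        new_cols.extend(group)
    return np.column_stack(new_cols)

def split_rows(H, r_sp):
    """Row splitting: split the columns of the transpose, then transpose back."""
    return split_columns(H.T, r_sp).T

# Toy illustration (not an actual finite-geometry matrix):
H = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]])
H_ext = split_rows(split_columns(H, 2), 2)
print(H_ext.shape)   # (6, 6)
```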
We use Monte Carlo simulation with sample sizes of 200,000. We allow the SPA decoder to iterate a maximum of 100 times. The channel error probabilities range from 2% to 8% in steps of 0.5%. Since many codes perform well when P_e is small, we are mostly interested in codes that have good performance for higher P_e, such as might occur in realistic experiments. Let [[n, m; c]] be the parameters of the entanglement-assisted code, and R_net be the original net key rate of QKE using that code; that is, R_net = (m - c)/n. This means that the QKE protocol expands a key of length c to a key of length m. For a code to serve the purpose of performing key "expansion," one requires R_net to be positive. Table II.1 lists all possible EG1(2, 5, c_sp, r_sp) codes with positive R_net that have block length n ≤ 11000. In Fig. II.1, we show the QKE performance of the original protocol, in terms of bit error rate, for some codes from Table II.1. In Fig. II.2, we set the generated keys' bit error threshold to ε = 10^{-6}, and simulate QKE with the improved QKE protocol from Sec. II.2. We present the performance, in terms of net key rate, using some codes from Table II.1. Table II.2 lists all possible PG1(2, 5, c_sp, r_sp) codes with positive R_net that have block length n ≤ 11000. In Fig. II.3, we present the QKE performance of the original protocol, in terms of bit error rate, for some codes from Table II.2. In Fig. II.4, we set the generated keys' bit error threshold to ε = 10^{-6} and simulate QKE with the improved QKE protocol proposed in Sec. II.2. We present the performance, in terms of net key rate, using some codes from Table II.2.

Figure II.1: Bit error rate of the keys generated by the original QKE protocol with selected codes from EG1(2, 5, c_sp, r_sp).

Figure II.2: Net key rate of the improved QKE protocol with selected codes from EG1(2, 5, c_sp, r_sp) and error threshold ε = 10^{-6}.

Figure II.3: Bit error rate of the keys generated by the original QKE protocol with selected codes from PG1(2, 5, c_sp, r_sp).

Figure II.4: Net key rate of the improved QKE protocol with selected codes from PG1(2, 5, c_sp, r_sp) and error threshold ε = 10^{-6}.

Table II.1: EG1(2, 5, c_sp, r_sp) codes with positive net key rates that have block length n ≤ 11000.
[[n, m; c]]             c_sp   r_sp   R_net
[[1023, 571; 32]]         1      1    0.5269
[[2046, 452; 450]]        2      1    0.0010
[[3069, 2045; 1022]]      3      1    0.3333
[[4092, 3068; 1020]]      4      1    0.5005
[[4092, 2038; 2034]]      4      2    0.0010
[[5115, 4091; 1022]]      5      1    0.6000
[[5115, 3067; 2044]]      5      2    0.2000
[[6138, 5114; 1022]]      6      1    0.6667
[[6138, 4090; 2044]]      6      2    0.3333
[[7161, 6137; 1022]]      7      1    0.7143
[[7161, 5115; 2046]]      7      2    0.4286
[[7161, 4092; 3069]]      7      3    0.1429
[[8184, 7152; 1012]]      8      1    0.7502
[[8184, 6138; 2042]]      8      2    0.5005
[[8184, 5115; 3067]]      8      3    0.2502
[[8184, 4094; 4082]]      8      4    0.0015
[[9207, 8181; 1020]]      9      1    0.7778
[[9207, 7161; 2046]]      9      2    0.5556
[[9207, 6134; 3065]]      9      3    0.3333
[[9207, 5115; 4092]]      9      4    0.1111
[[10230, 9202; 1018]]    10      1    0.8000
[[10230, 8182; 2044]]    10      2    0.6000
[[10230, 7160; 3068]]    10      3    0.4000
[[10230, 6132; 4086]]    10      4    0.2000

Note that for channel error rates less than 2%, we may consider the code PG1(2, 5, 9, 2), which has a net key rate of about 0.5556. Considering channel error rates much lower than 2%, we can use other codes in the family which have even larger net key rates. In Fig. II.5, we set the generated keys' bit error threshold to ε = 10^{-6}, and we present the QKE net rate using the codes from both Tables II.1 and II.2 that perform the best in each channel error region. As can be seen, quite reasonable key rates can be achieved even for error probabilities above 7%.

Table II.2: PG1(2, 5, c_sp, r_sp) codes with positive net key rates that have block length n ≤ 11000.

[[n, m; c]]             c_sp   r_sp   R_net
[[1057, 570; 1]]          1      1    0.5383
[[2114, 490; 488]]        2      1    0.0009
[[3171, 2112; 1055]]      3      1    0.3333
[[4228, 3172; 1056]]      4      1    0.5005
[[4228, 2114; 2112]]      4      2    0.0005
[[5285, 4227; 1056]]      5      1    0.6000
[[5285, 3171; 2114]]      5      2    0.2000
[[6342, 5284; 1056]]      6      1    0.6667
[[6342, 4228; 2114]]      6      2    0.3333
[[7399, 6341; 1056]]      7      1    0.7143
[[7399, 5285; 2114]]      7      2    0.4286
[[7399, 4227; 3170]]      7      3    0.1429
[[8456, 7399; 1055]]      8      1    0.7502
[[8456, 6342; 2112]]      8      2    0.5002
[[8456, 5286; 3170]]      8      3    0.2502
[[8456, 4229; 4227]]      8      4    0.0002
[[9513, 8455; 1056]]      9      1    0.7778
[[9513, 7399; 2114]]      9      2    0.5556
[[9513, 6342; 3171]]      9      3    0.3333
[[9513, 5284; 4227]]      9      4    0.1111
[[10570, 9511; 1055]]    10      1    0.8000
[[10570, 8456; 2114]]    10      2    0.6000
[[10570, 7399; 3171]]    10      3    0.4000
[[10570, 6342; 4228]]    10      4    0.2000

Figure II.5: Net key rate of the improved QKE protocol with selected codes from both EG1(2, 5, c_sp, r_sp) and PG1(2, 5, c_sp, r_sp) that perform well in the various channel error regions.

It is worthwhile comparing our results to the recent work by Elkouss, Leverrier, Alléaume and Boutros [22]. In their work, a set of 9 irregular LDPC codes were found for QKD based on the BB84 protocol. With a bit error rate threshold of the generated keys on the same order as ours (1.5 × 10^{-6} in their case), their net key rate performance exceeds ours by roughly 15-20% over the same channel error regions. However, this is not too surprising, since they considered LDPC codes with very large block sizes (on the order of 10^6 bits), while ours have much more modest block sizes (on the order of 10^3). We believe the sizes of our codes are reasonable for practical use.
Given much greater computing resources for postprocessing, it should be easy to construct very large codes in our family of LDPC codes that would have better net key rates. 30 II.5 Conclusion In this chapter, we demonstrate our research in proposing a protocol for QKE that is an improved version of the protocol proposed by Luo and Devetak. The modications are done to lter out block errors, which allows us to greatly reduce the bit error rate of QKE with only a small reduction in the net key rate. In addition, we have studied a family of LDPC codes based on nite geometry that are capable of protecting the QKE protocol from errors even when the channel is moderately noisy. The gures in the previous section show clearly which codes one should choose to eciently expand the keys. 31 Chapter III Method for Quantum-jump Continuous-time Quantum Error Correction III.1 Introduction Continuous-time quantum error correction (CTQEC) studies quantum error correction when both decoherence and error correction are treated as continuous in time. One of the rst approaches to CTQEC was proposed by Paz and Zurek [57], where the process of error correction is modeled as a weak quantum-jump operation taking a small time t. In the limit oft going to zero, the evolution of the code system can be described by a continuous- time master equation. In [54], Oreshkov proposed a quantum-jump CTQEC protocol and reviewed various topics on CTQEC not restricted to the quantum-jump model. Another scheme for CTQEC was proposed by Ahn, Doherty, and Landahl (ADL) [2] involving the method of feedback control by estimation [20]. Several studies based on this scheme have been published since then (see, e.g., [3, 16]). Other work or resource on CTQEC and continuous-time quantum feedback control includes [73, 19, 11, 4, 33, 63, 64, 35, 55, 49, 32, 74, 34]. 32 In Sec. III.2, we introduce the technical background needed for the rest of this chap- ter. In Sec. III.3, we discuss a class of protocols that perform quantum-jump CTQEC. Furthermore, we show the minimal requirement on the size of the ancillary system and propose a protocol that meets this requirement. In Sec. III.4, we compare this protocol to those of Oreshkov and ADL. In addition, we investigate whether optimizing the pa- rameter in the error-correction process of Oreshkov yields any signicant improvement in performance, and we show that the improvement is negligibly small in general. III.2 Preliminaries In this section, we review background knowledge and notation for quantum stabilizer codes and quantum-jump CTQEC sucient for the reader to understand the content of this chapter. III.2.1 Structure of stabilizer codes Let C be a stabilizer code that protects k qubits by encoding them into n qubits. Asso- ciated with C is a set of nk stabilizer generators G =fg 1 ;g 2 ;:::;g nk gG n , where G n =fi j n l=1 O l : j 2f0; 1; 2; 3g;O l 2fI;X;Y;Zg;8lg is the n-fold Pauli group, and X;Y;Z are the Pauli operators. The stabilizer group S is an Abelian subgroup ofG n that is generated by the stabilizer generators and does not containI n . The codespace is dened to be the space C S =fj i2H n :gj i =j i; 8 g2Sg, whereH n is the 2 n - dimensional Hilbert space of n qubits. The code can correct any set of correctable errors E =fE j g j2J whereE j E l 2S[(G n N(S))8j;l2J, whereJ is an index set andN(S) is the normalizer of S. 33 The above outlines the structure of stabilizer codes. For more detail, see [25]. 
In the following, we will introduce three bases ofH n that are useful in analyzing stabilizer codes: 1. \Physical basis": The physical basis is the simplest basis, in the sense that physical errors act on the state that is represented in this basis. In this basis, the codespace is C S as dened above. 2. \Encoded basis": The state represented in the encoded basis has a set of k infor- mation qubits that are to be protected, and the remaining nk qubits are the syndrome qubits that represent the redundancies used in the error-correcting code. Without loss of generality, we will assign the rstk qubits in this basis as the information subsystem and the lastnk qubits as the syndrome subsystem. In this basis, a state is in the codespace if and only if the syndrome qubits are in the statej0i nk . There exists an encoding unitary U E that takes the operators and states represented in the encoded basis to the physical basis. 3. \Corrected basis": The state in the corrected basis results from applying a correct- ing unitary to the state in the encoded basis such that any correctable error acting on a codeword state leaves the information subsystem invariant. The correcting unitary, U leaves the syndrome subsystem unchanged, and it applies a unitary to the information system conditioned on which syndrome subspace the state is in. It can be written in the following form in terms of the set of standard basis states of the syndrome subsystem, jji , and some set of unitary operators,fU ;j g, where j2f0; 1;:::; 2 nk 1g: U = 2 nk 1 X j=0 U ;j jjihjj: (III.1) 34 Specically, eachjji is dened as a tensor product of standard basis statesj0i andj1i in the following way: jji =jj nk i jj nk1 i :::jj 1 i (III.2) =jj nk j nk1 ::: j 1 i; (III.3) where j l 2f0; 1g satises j = nk X l=1 j l 2 l1 : (III.4) In other words, j nk j nk1 ::: j 1 is the binary representation of j with the most signicant digit on the left, and an underline is used to dierentiate the index as a decimal number from its binary representation. The following illustrates the contents in this section using the three-qubit bit- ip code. Example 1. The three-qubit bit- ip code encodes k = 1 qubit into n = 3 qubits. The nk = 2 stabilizer generators may be chosen as: g 1 =Z Z I; g 2 =Z I Z: (III.5) A correctable set of errors for this code contains the trivial error, E 0 =I 3 , and the bit- ip errors on each of the n = 3 qubits: E 1 =X I I; E 2 =I X I; E 3 =I I X: (III.6) 35 An encoding unitary for this code is: U E =j0ih0j I I +j1ih1j X X: (III.7) Now, by applying the inverse encoding unitary, U 1 E , we can represent the operators in the encoded basis. Adding the subscript \(E)" to dierentiate them from their original form in the physical basis, they are: g 1(E) =I Z I; g 2(E) =I I Z: (III.8) and E 0(E) =I 3 ; E 1(E) =X X X; (III.9) E 2(E) =I X I ; E 3(E) =I I X: (III.10) In the encoded basis, just like the physical basis, the correctable error E 0(E) leaves any state invariant. So U ;0 =I. On the other hand, the correctable error E 1(E) ips every qubit in the system. When the state was previously in a codeword state, E 1(E) brings the syndrome qubits from the j0i-subspace to thej3i-subspace. Since the information qubit has been ipped as well, the corresponding unitary correction is to apply a bit- ip operator so that the information qubit is ipped back, that is, U ;3 =X. Similarly, one may nd that U ;1 =I; U ;2 =I. 
36 Therefore, the correcting unitary dened earlier in this example is: U = 3 X j=0 U ;j jjihjj (III.11) =I (j0ih0j +j1ih1j +j2ih2j) +X j3ih3j: (III.12) To represent the operators in the corrected basis, one applies the unitary U U 1 E to the operators in the physical basis. We denote the operators in the corrected basis by adding the subscript \(C)," and they are: g 1(C) =I Z I; g 2(C) =I I Z; (III.13) and E 0(C) =I 3 ; (III.14) E 1(C) =I (j0ih3j +j3ih0j) +X (j1ih2j +j2ih1j); (III.15) E 2(C) =I (j0ih2j +j2ih0j) +X (j1ih3j +j3ih1j); (III.16) E 3(C) =I (j0ih1j +j1ih0j) +X (j2ih3j +j3ih2j): (III.17) Any correctable error leaves the information qubit of a codeword state invariant. The only eect of a correctable error acting on a codeword state is to bring the syndrome system from thej0i-subspace to a nontrivial syndrome subspace. As will be seen, we will mainly work in the corrected basis since it simplies the analysis of a CTQEC protocol. However, some caution is required. As written, the corrected basis looks like a tensor product of bases on each of the qubits, but it is not. 37 In the example, the statesj0i andj1i for the system qubit are dierent in syndromej3i than in syndromesj0i,j1i andj2i. We will therefore also use the operators represented in the physical basis for practical purposes. III.2.2 The model of quantum-jump CTQEC In conventional quantum error correction, the system undergoes an error-correcting op- eration, which is characterized by a CPTP mappingR (), consisting of a syndrome mea- surement followed by a unitary correction operation based on the measurement result. In quantum-jump CTQEC, a weak version of the error-correcting operation is applied repeatedly on the system. Let t be the small time step of a single instance of the weak error-correcting operation. The innitesimal error correction map is: t ! (1t) +tR(); (III.18) where is the rate of error correction. In the limit oft! 0, the above innitesimal error-correcting operation approaches a master equation describing the system evolution under error correction: d dt = (R()): (III.19) In addition, the system undergoes evolution caused by decoherence. We model the decoherence as Markovian with the system HamiltonianH and a set of Lindblad operators fL j g. Then the full master equation becomes d dt =L() + (R()); (III.20) 38 where L() =i [H;] + X j L j L y j 1 2 L y j L j 1 2 L y j L j : (III.21) In this study, we will use quantum error-correcting codes and error models based on standard quantum error correction. Our main goal in exploring quantum-jump CTQEC is to nd a method to realize the weak mapping given by Eq. (III.18). A general way of realizing this weak mapping involves the following steps: (1) Couple the primary system to an ancillary system. (2) Apply a weak unitary to both systems. (3) Weakly measure and then discard the ancillary system. (4) Apply a weak unitary correction, conditioned on the measurement result, to the system. In this chapter, we will propose a specic realization of CTQEC based on the above scheme, which will require analysis on the ancillary system used in the coherent measure- ment steps. Throughout this chapter, we will denote a basis state of the ancillary system asjj a i, where the subscript \a" signies that the state is of the ancillary system, and the index j runs from 0 to the dimension of the ancillary system minus one. 
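The corrected-basis bookkeeping of Example 1 can be verified directly in matrix form. The sketch below (a numerical illustration with my own qubit ordering and variable names, not part of the thesis) builds U_E and U_* from Eqs. (III.7) and (III.12), maps a corrupted codeword into the corrected basis, and checks that every correctable error leaves the information qubit untouched, moving only the syndrome subsystem out of its trivial state.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])

def kron(*ops):
    out = np.array([[1.]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Encoding unitary U_E = |0><0| (x) I (x) I + |1><1| (x) X (x) X   (Eq. III.7)
P0 = np.array([[1., 0.], [0., 0.]])
P1 = np.array([[0., 0.], [0., 1.]])
U_E = kron(P0, I2, I2) + kron(P1, X, X)

# Correcting unitary U_* = I (x) (|0><0|+|1><1|+|2><2|) + X (x) |3><3|   (Eq. III.12)
P3 = np.zeros((4, 4)); P3[3, 3] = 1.
U_star = np.kron(I2, np.eye(4) - P3) + np.kron(X, P3)

V = U_star @ U_E.conj().T        # takes physical-basis states to the corrected basis

# Correctable errors in the physical basis (Eq. III.6), including the identity
errors = [kron(I2, I2, I2), kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]

# A random information qubit, encoded as the physical codeword U_E (|psi> |00>)
rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
syn0 = np.zeros(4); syn0[0] = 1.
codeword = U_E @ np.kron(psi, syn0)

for E in errors:
    phi = V @ (E @ codeword)      # corrupted state, expressed in the corrected basis
    M = phi.reshape(2, 4)         # rows: information qubit, columns: syndrome index
    rho_info = M @ M.conj().T     # reduced state of the information qubit
    assert np.allclose(rho_info, np.outer(psi, psi.conj()))
print("all correctable errors leave the information qubit invariant")
```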
III.3 Quantum-jump CTQEC with minimal ancillas In [54], a quantum-jump CTQEC protocol was proposed that requires the size of the ancillary system to be the dimension of the syndrome space minus one. That is, for a syndrome system of size nk, an ancillary system of size 2 nk 1 is required. This exponential overhead in the size of the ancillary system would be a very expensive resource in practice. In this section, we propose a quantum-jump CTQEC protocol that requires 39 only linear overhead, nk + 1 to be exact, of ancillary qubits to perform the same task. We will also show that nk + 1 is in fact optimal. The main idea is as the following. Consider CTQEC using an arbitrary [[n;k;d]] stabilizer code. In the corrected basis, without loss of generality, we assume the information subsystem is the rstk qubits. The lastnk qubits form the syndrome subsystem which absorbs the eect of any correctable error on the code. The syndrome system is initially prepared in the trivial statej0i nk . When a correctable error happens, some syndrome qubits may be ipped toj1i, while the information qubits are left invariant. The goal of error correction is obviously to ip the syndrome qubits back to the trivial statej0i nk so that the syndrome subsystem is capable of absorbing any new correctable error that may occur. Such an error-correcting operation may be described in the corrected basis as a CPTP mapR(:): !R() = 2 nk 1 X j=0 R j R y j ; (III.22) where R j =I k j0ihjj ; 8 j2f0; 1;:::; 2 nk 1g; (III.23) and is the density operator of the encoded system. With the given error-correcting mapR (), the weak error-correcting map is ! 1" 2 +" 2 R(): (III.24) Note that, as in Eq. (III.18), if the map takes time t we dene a rate by " = p t: (III.25) 40 In Sec. III.3.1, we rst give the structure of a class of methods that implements the CTQEC map in Eq. (III.24). In Sec. III.3.2, we discuss the requirements on the ancillary system in the coherent measurement steps, and we show that the size of the ancillary system must be at least nk + 1. In Sec. III.3.3, we propose a CTQEC protocol for any [[n;k;d]] stabilizer code, requiring nk + 1 ancillary qubits. III.3.1 Preliminary derivation In this section, we derive requirements to construct CTQEC protocols based on a given [[n;k;d]] stabilizer code. Recall that, in the corrected basis, the CPTP map we wish to achieve is given by ! 1" 2 +" 2 2 nk 1 X j=0 R j R y j ; (III.26) where R j =I k j0ihjj; 8j2f0; 1;:::; 2 nk 1g: (III.27) The above map is already in a Kraus decomposition with Kraus operators n p 1" 2 I n ;"R j :j = 0; 1;:::; 2 nk 1 o : (III.28) However, this set of Kraus operators is not useful to us, since we are performing continuous-time correction and we need each of the Kraus operatorsK to be weak, in the sense that it has the form K =c I n +O(") ; (III.29) where c is some constant andO(") is some term upper-bounded by the order of ". That is, each of the Kraus operators is close to the identity. 41 Now, letfK l g l2S be a set of non-zero weak Kraus operators andS =f0; 1;:::;jSj 1g be an index set that is to be determined. The following lemma (Theorem 8.2 from [52]) will be useful from here on. Lemma 1. Let M;N2f0g[N, and letfE j g M j=0 andfF l g N l=0 be sets of Kraus operators for CPTP mappingE andF, repectively. If M6=N, by appending zero operators to the shorter list of the two sets of Kraus operators we may ensure that M =N. 
ThenE =F if and only if there exists an (M + 1) (M + 1) unitary matrix U, with the j;l-th element being u j;l , such that E j = M P l=0 u j;l F l for all j2f0;:::;Mg. The following Theorem then follows from Lemma 1. Theorem 2. LetM;N2f0g[N, and letfE j g M j=0 andfF l g N l=0 be sets of non-zero Kraus operators for the same CPTP mappingE. IffF l g N l=0 is a linearly independent set of operators, then MN. Proof. Suppose M < N. Adding NM zero operators, E M+1 = ::: = E N = 0, to the setfE j g M j=0 , we have a new set of Kraus operatorsfE j g N j=0 . Then by Lemma 1, 0 =E N = N P l=0 u N;l F l , whereu j;l 's are elements of an unitary matrix U. Now, since theF l 's are linearly independent,u N;l = 0 for alll. This implies that a row of U has elements all equal to 0. Hence, U does not have full rank and therefore cannot be unitary. The contradiction implies that MN. 42 It is easy to check that the set of Kraus operators for the weak CPTP mapping given by Eq. (III.28) is linearly independent. Therefore, by Theorem 2, jSj 2 nk + 1: (III.30) Hence, each of the Kraus operators K l may be written K l = 2 nk 1 X j=0 u l;j "R j +u l;2 nk p 1" 2 I n (III.31) =u l;2 nk p 1" 2 I n + 2 nk 1 X j=0 u l;j " I k j0ihjj (III.32) wherefu l;k g l;k2S are elements of an unitary matrix. The next step requires the polar decomposition of each K l . For all l2S, K l can be decomposed as K l =U C;l M l ; (III.33) where U C;l is unitary and M l is positive-semidenite. The operatorsfM l g l2S describe a positive-operator valued measurement (POVM). Each of theM y l M l is positive-semidenite, and the completeness relation is satised: X l2S M y l M l = X l2S U y C;l K l y U y C;l K l (III.34) = X l2S K y l K l =I n : (III.35) To implement the weak correction map given by Eq. (III.26), we recall the four steps of CTQEC that were introduced in Sec. III.2.2: 43 (1) Couple the primary system to an ancillary system. (2) Apply a weak unitary to both systems. (3) Weakly measure and then discard the ancillary system. (4) Apply a weak unitary correction, conditioned on the measurement result, to the system. The polar decomposition gives a set of weak measurement operatorsfM l g l2S and a set of weak unitary operatorsfU C;l g l2S . The mapping dened by the Kraus operators can be treated as rst measuring withfM l g l2S and then applying a unitary correction conditioned on the measurement outcome. Therefore, the corrections in step 4 are given by the setfU C;l g l2S , and steps 1 to 3 implement the measurement given by the set of measurement operatorsfM l g l2S . The remaining question is how to carry out the three steps for the measurement. We shall answer this question in the rest of this section. III.3.2 Coherent measurement and the ancillary system We begin with the required ancillary system. In step 3, ancillary system is measured. Since each outcome of the measurement on the ancillary system should result in a unique measurement operator fromfM l g l2S acting on the primary system, there must exist a set ofjSj mutually orthogonal states in the ancillary system. So the dimension of the Hilbert space of the ancillary system must be no less thanjSj. If the ancillary system hass A 2N qubits then 2 s A jSj ) s A log 2 jSj: (III.36) 44 Therefore, by Eq. (III.30), the minimum size of the ancilla, s A , is s A =dlog 2 2 nk + 1 e =nk + 1: (III.37) Since we are investigating the general setup, we do not require the ancillary system to be of minimum size. 
Let s A log 2 jSj be the size of the ancillary system, and let the set S A =f0; 1;:::; 2 s A 1g. We denote a set of standard basis states for this system as fjl a ig l2S A ; (III.38) where the underline notation is dened as in Sec. III.2, and the subscript \a" indicates that the basis state is in the ancillary Hilbert space of dimension 2 s A . Throughout this paper, we will label the Hilbert space of the primary systemH S and the Hilbert space of the ancillaH A . The initial state of the ancilla,jA 0 i2H A , can be chosen such that each of the standard basis statesfjj a ig j2S is equally weighted: jA 0 i 1 p jSj X j2S jj a i: (III.39) Note that SS A , sojA 0 i6=j+i s A in general. Let U M be the desired weak unitary operator on the combined Hilbert space of both the primary and ancillary systems,H S H A . Suppose the primary system is initially 45 in an arbitrary statej i2H S . Then Eq. (III.39) implies that the initial state for the combined system isj i jA 0 i. We choose U M to satisfy U M (j i jA 0 i) = X j2S M j j i jj a i: (III.40) After measuring the ancillary system in the basisfjj a ig j2S A , the state of the primary system after measurement result j2S is M j j i, and any other outcomes in S A nS have zero probability. To nd the weak unitary U M , let U M =e i"H M =I n+s A +i"H M 1 2 " 2 H 2 M +O(" 3 ); (III.41) where H M is Hermitian. We can always write H M in the following form: H M = X j;l2S A H j;l jj a ihl a j; (III.42) where H j;l 's act on the primary system and satisfy H j;l =H y l;j since H M is Hermitian. Up to second order in ", Eq. (III.40) implies that for all j2S, 1 p jSj 0 @ I n+s A +i" X l2S A H j;l 1 2 " 2 X l;m2S A H j;l H l;m 1 A =M j ; (III.43) and for all j2SnS A , 1 p jSj 0 @ I n+s A +i" X l2S A H j;l 1 2 " 2 X l;m2S A H j;l H l;m 1 A = 0: (III.44) 46 Solving the above equations may be dicult in general. However, we show in the next section a special case where those equations can be solved. III.3.3 A protocol for quantum-jump CTQEC requiring minimal num- ber of ancillas In this section, we propose a specic CTQEC protocol based on the formalism in the previous sections. This protocol requires a minimal ancillary system of size s A = s A = nk + 1, and the weak operators involved are in a form such that the weak Hermitian operator associated with the coherent measurement can be easily found. The target error-correcting map is given by Eq. (III.24). LetS =f0; 1;:::; 2 nk+1 1g, and consider the following set of Kraus operatorsfK j g j2S for this mapping: K j = 1 p 2 nk+1 I k p 1" 2 I nk +i" p 2 nk j0ihjj ; (III.45) K 2 nk +j = 1 p 2 nk+1 I k p 1" 2 I nk i" p 2 nk j0ihjj ; (III.46) for all j2f0; 1;:::; 2 nk 1g. Note that in this case S A =S. Following our formalism, we rst nd the polar decomposition of these Kraus opera- tors. This results in a unique set of POVM operatorsfM j g j2S and a corresponding set of 47 correcting unitariesfU C;j g j2S satisfying K j =U C;j M j for all j2S. Up to second order in ", these operators are listed below: M 0 = 1 p 2 nk+1 I k 1 1 2 " 2 I nk + 2 nk1 " 2 j0ih0j ; (III.47) U C;0 =I k I nk + i p 2 nk " 2 nk1 " 2 j0ih0j ; (III.48) M 2 nk = 1 p 2 nk+1 I k 1 1 2 " 2 I nk + 2 nk1 " 2 j0ih0j ; (III.49) U C;2 nk =I k I nk + i p 2 nk " 2 nk1 " 2 j0ih0j ; (III.50) 48 and8 j2f1; 2;:::; 2 nk 1g, M j = 1 p 2 nk+1 I k 1 1 2 " 2 ! I nk 2 nk3 " 2 j0ih0j + 3 2 nk3 " 2 jjihjj +i p 2 nk2 " j0ihjjjjih0j ; (III.51) U C;j =I k I nk 2 nk3 " 2 j0ih0j +jjihjj +i p 2 nk2 " j0ihjj +jjih0j ; (III.52) M 2 nk +j = 1 p 2 nk+1 I k 1 1 2 " 2 ! 
I nk 2 nk3 " 2 j0ih0j + 3 2 nk3 " 2 jjihjj i p 2 nk2 " j0ihjjjjih0j ; (III.53) U C;2 nk +j =I k I nk 2 nk3 " 2 j0ih0j +jjihjj i p 2 nk2 " j0ihjj +jjih0j ; (III.54) Note that the correcting unitaries may be written in the form U C;j = e i"H C;j , where 8j2f0; 1;:::; 2 nk 1g, 49 H C;j =I k p 2 nk 2 j0ihjj +jjih0j ; (III.55) H C;2 nk +j =I k p 2 nk 2 j0ihjj +jjih0j ; (III.56) Now, as we have claimed, the ancillary system has size s A =nk + 1: (III.57) In this case, S A =f0; 1;:::; 2 nk+1 1g = S. Following Eq. (III.39), the ancillary system is prepared in the state jA 0 i = 1 p 2 nk+1 X j2S jj a i =j+i nk+1 : (III.58) Finding U M is equivalent to nding the corresponding Hermitian operator H M . We have mentioned that one may regard this task as solving Eqs. (III.43) and (III.44), and in fact, solving only Eq. (III.43) is required since S =S A . Since the M j 's are known, by collecting the terms of each order in ", then solving the following Eqs. (III.59){(III.63) is equivalent to solving Eq. (III.43) X j2S H 0;j = X j2S H 2 nk ;j = 0; (III.59) X j;l2S H 0;j H j;l = X j;l2S H 2 nk ;j H j;l =I k I nk 2 nk j0ih0j ; (III.60) 50 and8 j2f1; 2;:::; 2 nk 1g, X l2S H j;l = X l2S H 2 nk +j;l =I k p 2 nk 2 j0ihjjjjih0j ; (III.61) X l;m2S H j;l H l;m = X l;m2S H 2 nk +j;l H l;m ; =I k I nk + 2 nk 4 j0ih0j 3 2 nk 4 jjihjj ; (III.62) and8 j;l2S, H j;l =H y l;j : (III.63) These equations potentially have many solutions. We solved the equations for a small problem size, and then generalized our solution to arbitrary problem size. The small problem we consider is three-qubit bit- ip code, where n = 3, k = 1, and therefore s A = nk + 1 = 3. In this case, one can solve for thefH j;l g straightforwardly by hand. Generalizing to arbitrary size involves assigning parameters as elements of theH j;l 's similar to the solution to the three-qubit code case, and then solving for the parameters by plugging everything back into Eq. (III.43). Here is this solution for the general case: H 0;0 =H 2 nk ;2 nk = 2 p 2 nk I k 2 nk 1 X l=1 (j0ihlj +jlih0j); (III.64) H 0;2 nk =H 2 nk ;0 = 0; (III.65) 51 and8 j2f1; 2;:::; 2 nk 1g, H 0;j =H j;0 = 2 p 2 nk I k j0ihjj +jjih0j ; (III.66) H 0;2 nk +j =H 2 nk +j;0 = 0; (III.67) H j;j =H 2 nk +j;2 nk +j = 2 (1 2 nk ) p 2 nk I k j0ihjj +jjih0j ; (III.68) H 2 nk ;j =H j;2 nk = 0; (III.69) H 2 nk ;2 nk +j =H 2 nk +j;2 nk = 2 p 2 nk I k j0ihjj +jjih0j ; (III.70) H j;2 nk +j =H 2 nk +j;j = p 2 nk 2 I k j0ihjjjjih0j ; (III.71) and8 j;l2f1; 2;:::; 2 nk 1g where j6=l, H j;l =H 2 nk +j;2 nk +l = 1 p 2 nk I k j0ihjj +jjih0j +j0ihlj +jlih0j ; (III.72) H j;2 nk +l =H 2 nk +j;l = 1 p 2 nk I k j0ihjj +jjih0jj0ihljjlih0j : (III.73) 52 The desired weak unitary is therefore U M =e i"H M , where H M = X j;l2S H j;l jj a ihl a j: (III.74) Example2. In this example, we demonstrate the protocol for the three-qubit bit- ip code. The primary system is rst coupled to an ancillary system of s A = nk + 1 = 3 qubits: jA 0 i =j+i 3 : (III.75) Then the unitary operation U M =e i"H M is applied to the combined system, in which H M = 7 X j;l=0 H j;l jj a ihl a j (III.76) = 3 X j=1 I h j0ihjj ju j ihv j j +jv j ihw j j jjih0j (jv j ihu j j +jw j ihv j j) i ; (III.77) where ju j i =j+i 0 B B @ j0i 3 X l=1 j<l jli 1 C C A ; jv j i =ji jji; (III.78) and jw j i =j+i 0 B B @ j0i + 3 X l=1 l6=j jli 2jji 1 C C A : (III.79) Finally, the correcting unitary, U C;j , corresponding to each measurement result j2 f0; 1; 2;:::; 7g is given by Eqs. (III.48), (III.50), (III.52) and (III.54). 
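The Kraus operators of Eqs. (III.45)-(III.46) can be checked numerically against the target weak map of Eq. (III.24). In the sketch below (illustrative only; the choice n - k = 2, the value of ε, and the variable names are mine), the completeness relation is verified and the channel's action on a random density matrix is compared with (1 - ε²)ρ + ε²R(ρ); the two agree to machine precision, not merely to second order in ε.

```python
import numpy as np

k, s = 1, 2                      # one information qubit, n - k = 2 syndrome qubits
d_syn = 2 ** s                   # syndrome dimension
dim = 2 ** k * d_syn
eps = 0.05                       # weak-measurement strength

Ik = np.eye(2 ** k)
Is = np.eye(d_syn)

def ket(j):
    v = np.zeros((d_syn, 1)); v[j, 0] = 1.0
    return v

# Kraus operators of Eqs. (III.45)-(III.46): one +i and one -i copy for each syndrome j
kraus = []
for sign in (+1, -1):
    for j in range(d_syn):
        A = np.sqrt(1 - eps**2) * Is + sign * 1j * eps * np.sqrt(d_syn) * ket(0) @ ket(j).T
        kraus.append(np.kron(Ik, A) / np.sqrt(2 * d_syn))

# Completeness relation: sum_j K_j^dagger K_j = I
S = sum(K.conj().T @ K for K in kraus)
assert np.allclose(S, np.eye(dim))

# Strong correction map R(rho) = sum_j (I (x) |0><j|) rho (I (x) |j><0|)   (Eq. III.22)
R_ops = [np.kron(Ik, ket(0) @ ket(j).T) for j in range(d_syn)]

rng = np.random.default_rng(0)
G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = G @ G.conj().T
rho /= np.trace(rho)             # random density matrix

channel = sum(K @ rho @ K.conj().T for K in kraus)
target = (1 - eps**2) * rho + eps**2 * sum(R @ rho @ R.conj().T for R in R_ops)
print("max deviation:", np.abs(channel - target).max())   # ~1e-16
```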
53 III.4 Comparison with other CTQEC protocols In this section, we will compare the proposed CTQEC protocol with two previous ones: those of Oreshkov [54] and of Ahn, Doherty, and Landahl (ADL) [2]. In particular, for Oreshkov's method, we have investigated whether feedback strategy can improve its performance. However, we will show results that suggest that any improvement in per- formance may be minimal. For a fair comparison, we set the error-correcting \strength" of the protocols equal to each other. The measure of strength used is the diamond norm [71, 72] of the dierence between each weak map and the identity. The contents related to Oreshkov's and ADL's schemes will be addressed in Sec. III.4.1 and Sec. III.4.2, respectively. III.4.1 Oreshkov's protocol In [54], Oreshkov presents a method that does not perform the precise quantum-jump CTQEC map in Eq. (III.26); however, the protocol gives the map when the primary system is initially in the code space and the error model has all correctable errors of the code occur as Poisson processes with the same rate . The procedure is very similar to that in this paper. One notable dierence is that Oreshkov's protocol requires 2 nk 1 ancillary qubits; the result in the previous section shows that this number can be reduced (whennk> 2) to as low asnk +1, which is the minimum number of ancillas possible! One might argue that each operator of the interaction Hamiltonian in Oreshkov's method is easier to realize, as it involves interaction between the primary system and a single 54 ancilla qubit. However, each such operator is still a multi-body interaction, and the Hamiltonian includes exponentially many such terms. Since Oreshkov's protocol and ours produce the same eective CTQEC mapping, the performances of both devices are the same. One question about Oreshkov's protocol that we have investigated is whether feeding back all results from previous measurements would allow better correction. In general, the answer should be positive, but how much greater is the benet? To answer this question, we rst give a brief description of the protocol. The procedure also follows the four steps (of coherent measurement and unitary cor- rection) listed near the end of Sec. III.3.1. In step 1, the ancillary system is prepared in the state j O i =j+i 2 nk 1 : (III.80) In step 2, a joint weak unitary, U M;O =e i"H M;O , is done, with Hamiltonian H M;O = 1 2 I k 2 nk 1 X j=1 X j Y a j ; (III.81) where Y a j is the single Pauli-Y operator acting on the j-th ancillary qubit, and X j =jjih0j +j0ihjj: (III.82) 55 In step 3, a Z measurement is performed on each of the ancillary qubits. If the measurement result on the j-th qubit is m j 2f0; 1g, then the unitary correction in step 4 is U C;O =e i"H C;O with H C;O = 1 2 I k 2 nk 1 X j=1 (1) m j Y j a ; (III.83) where Y j =i jjih0jj0ihjj : (III.84) In this framework, the correcting operation is xed. However, the parameter" inU C;O is allowed to vary with time, U C;O (t) = e i(t)H C;O for some function (t). Introducing this freedom can improve the performance, but we will see shortly that results for simple codes show minimal gain in performance, which suggests that a xed correcting parameter already provides nearly optimal error-correcting capability. Take the case of the three-qubit bit- ip code for example. Let I be the density matrix of the information system that is to be protected from error. 
The initial state of the primary system in the corrected basis (and also the encoded basis) is then (0) = I j0ih0j: (III.85) Suppose the error model is independent bit- ip Poisson processes on each qubit with rate . Then the density matrix of the primary system at time t is (t) = 3 X j=0 w j (t) j ; (III.86) 56 where the j 's are normalized density matrices corresponding toj dierent errors applied to the initial code state: 0 =(0); (III.87) 1 = 1 3 3 X j=1 E j(C) (0)E j(C) ; (III.88) 2 = 1 3 3 X j;l=1 j<l E j(C) E l(C) (0)E j(C) E l(C) ; (III.89) 3 =E 1(C) E 2(C) E 3(C) (0)E 1(C) E 2(C) E 3(C) ; (III.90) where the E j(C) 's are the bit- ip errors represented in the corrected basis (as dened in Example 1). The w j (t)'s are real functions that form a set of probabilities at each time t. Note that w 0 (0) = 1. Suppose at time t the density matrix of the primary system is given by Eq. (III.86). Then by applying the CTQEC map in Eq. (III.24) to this density matrix, w 0 (t)!w 0 (t) 1 3("(t)) 2 4 +w 1 (t) (" +(t)) 2 4 ; (III.91) w 1 (t)!w 1 (t) 1 (" +(t)) 2 4 +w 0 (t) 3("(t)) 2 4 ; (III.92) w 2 (t)!w 2 (t) 1 (" +(t)) 2 4 +w 3 (t) 3("(t)) 2 4 (III.93) w 3 (t)!w 3 (t) 1 3("(t)) 2 4 +w 2 (t) (" +(t)) 2 4 : (III.94) 57 Since the goal of error correction is to map the state back to thej0i-subspace, the parameter in this CTQEC map is chosen to maximize the updated w 0 (t). Hence, the optimal strategy would be to choose (t) as follows: (t) = arg max w 0 (t) 1 3(") 2 4 +w 1 (t) (" +) 2 4 = 3w 0 (t) +w 1 (t) 3w 0 (t)w 1 (t) ": (III.95) This is optimal correction. Now let " = p t, in order to realize the innitesimal CTQEC map in Eq. (III.18) and let t! 0, the rst-order dierential equations for the w j (t)'s including both the error process and error correction are w 0 0 =3w 0 +w 1 + 3w 0 w 1 3w 0 w 1 ; w 0 1 = 3w 0 3w 1 + 2w 2 3w 0 w 1 3w 0 w 1 ; w 0 2 = 3w 3 3w 2 + 2w 1 9w 2 0 w 2 3w 2 1 w 3 (3w 0 w 1 ) 2 ; w 0 3 =3w 3 +w 2 + 9w 2 0 w 2 3w 2 1 w 3 (3w 0 w 1 ) 2 : (III.96) The last terms in Eq. (III.96) arew 1 ,w 1 ,w 2 andw 2 , respectively, if one takes (t) =" to be a xed value as Oreshkov considered. Fig. III.1 compares the performances of optimal and constant correction in Oreshkov's protocol in terms of codeword delity (i.e., w 0 (t)) and correctable overlap (i.e., w 0 (t) + w 1 (t), which is the codeword delity after a strong error correction is applied) between the cases of optimal and constant correction in Oreshkov's device. The rate of error- correction in this comparison is = 100. We can see that the dierence in performance is negligibly small. In fact, we can see from Figs. III.1 and III.2 that w 1 (t) is very small 58 compared tow 0 (t) at any timet. This should not be surprising, since the correction terms in Eq. (III.96) drive the weightsw 1 andw 2 towardsw 0 andw 3 , respectively, and the rate of this driving process is much larger than the rate of the error process (). If we use the relationship w 1 w 0 to approximate the dierential equations given by Eq. (III.96), we get w 0 0 3w 0 +w 1 +w 1 + w 2 1 3w 0 ; (III.97) w 0 1 3w 0 3w 1 + 2w 2 w 1 w 2 1 3w 0 ; (III.98) w 0 2 3w 3 3w 2 + 2w 1 w 2 w 1 (2w 0 w 2 w 1 w 3 ) (3w 2 0 ) 2 ; (III.99) w 0 3 3w 3 +w 2 +w 2 + w 1 (2w 0 w 2 w 1 w 3 ) (3w 2 0 ) 2 : (III.100) The last terms in the above approximation are the result of optimal correction. Since w 1 w 0 , the correction is dominated by the second to last terms, and the eect of the extra terms is negligibly small. 
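The behaviour summarized in Fig. III.1 can be reproduced by integrating the rate equations (III.96) directly. The sketch below is my own reconstruction and not from the thesis: the symbols lam (the bit-flip rate) and kappa (the correction rate), the simple Euler integrator, and the printed times are choices made here. It evolves the weights w_0, ..., w_3 under both the constant correction and the optimal correction of Eq. (III.95), and prints the codeword fidelity w_0 and the correctable overlap w_0 + w_1.

```python
import numpy as np

lam = 1.0          # bit-flip rate per qubit (time measured in units of 1/lam)
kappa = 100.0      # error-correction rate, 100x the error rate as in Fig. III.1
dt = 1e-4

def derivs(w, optimal):
    w0, w1, w2, w3 = w
    # Error process: independent bit flips on three qubits
    dw = lam * np.array([-3*w0 + w1,
                          3*w0 - 3*w1 + 2*w2,
                          2*w1 - 3*w2 + 3*w3,
                          w2 - 3*w3])
    if optimal:
        # Correction terms of Eq. (III.96), with the optimal parameter of Eq. (III.95)
        a = 3*w0*w1 / (3*w0 - w1)
        b = (9*w0**2*w2 - 3*w1**2*w3) / (3*w0 - w1)**2
    else:
        # Constant correction: weight flows w1 -> w0 and w2 -> w3 at rate kappa
        a, b = w1, w2
    return dw + kappa * np.array([a, -a, -b, b])

for optimal in (False, True):
    w = np.array([1.0, 0.0, 0.0, 0.0])
    label = "optimal " if optimal else "constant"
    for step in range(1, int(5.0 / dt) + 1):
        w += dt * derivs(w, optimal)
        if step % int(1.0 / dt) == 0:
            t = step * dt
            print(f"{label}: t = {t:.0f}/lam,  w0 = {w[0]:.4f},  w0 + w1 = {w[0] + w[1]:.4f}")
```

The printed numbers make the point of the figures quantitatively: the constant and optimal runs differ only in the fourth decimal place, while w_1 stays small compared to w_0 throughout.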
The extra effort to track the state and apply precise corrections yields little benefit. In Fig. III.3, we show a more complicated example using the five-qubit "perfect" code that can correct any single-qubit error. The correcting parameter is again chosen to be 100 times the error rate λ. We see that the benefit of optimal correction is still very small. Note that, in practice, the correction rate will probably be larger than 100λ, and the benefit of optimal correction will be even smaller. Therefore, we conclude that the constant correction method considered in Oreshkov's protocol has good enough error-correcting performance.

[Plot: codeword fidelity and correctable overlap versus time, in units of 1/λ.]
Figure III.1: Comparison of CTQEC performance between optimal and constant correction for Oreshkov's device using the three-qubit bit-flip code under independent bit-flip channels with error rate λ and the correcting parameter set to 100λ. The thin and thick dashed red lines are, respectively, the correctable overlap and codeword fidelity of the case with optimal correction. The thin and thick solid blue lines are, respectively, the correctable overlap and codeword fidelity of the case with constant correction.

[Plot: w_1(t) versus time, in units of 1/λ.]
Figure III.2: Plot of w_1(t) in the case of optimal correction for Oreshkov's device with the correcting parameter set to 100λ.

[Plot: codeword fidelity and correctable overlap versus time, in units of 1/λ.]
Figure III.3: Comparison of CTQEC performance between optimal and constant correction for Oreshkov's protocol using the five-qubit "perfect" code under independent depolarizing channels with error rate λ and the correcting parameter set to 100λ. The thin and thick dashed red lines are, respectively, the correctable overlap and codeword fidelity of the case with optimal correction. The thin and thick solid blue lines are, respectively, the correctable overlap and codeword fidelity of the case with constant correction.

III.4.2 ADL protocol

In [2], Ahn, Doherty, and Landahl proposed a CTQEC protocol that weakly measures the stabilizer generators. The correcting Hamiltonian is a sum of correctable errors with tuning parameters that are controlled by the measurement results. This protocol results in a stochastic master equation giving the quantum trajectories of the system. Note that the master equation of the system in our CTQEC protocol given by Eq. (III.19) does not have diffusive terms, which may arise in general in a quantum trajectory, since it is a map that averages over all measurement outcomes. The averaged map makes analysis and simulation easier, and it should accurately model the protocol since it represents the "expected" evolution of the system. Therefore, it is reasonable to use the averaged map of the ADL protocol in the comparison.

To compare the performance between different protocols, we should adjust the parameters in both protocols so that the comparison is "fair" in the sense that the strengths of the correcting maps are comparable. One commonly measures the strength of a quantum map Φ by the diamond norm ||·||_◊ [71, 72] of the difference between Φ and the identity map I. Let this function be denoted by D(·), where for all quantum maps Φ,

D(Φ) ≡ ||Φ - I||_◊.    (III.101)

Now, we will compare our protocol to the ADL device for the three-qubit bit-flip code. Let Φ be the averaged CTQEC map for our protocol over an arbitrarily small time Δt. Then for any density matrix ρ,

Φ(ρ) = (1 - κΔt)ρ + κΔt R(ρ),    (III.102)
Over the same time t, given c as the estimated density matrix based on previous measurement results, let ADL;c be the averaged CTQEC map for the ADL protocol that takes as input the density matrix . ADL;c () = + 2 ~ L(g 1 ) + ~ L(g 2 ) + ~ L(g 1 g 2 ) t 2 i F 1 ( c ); 2 ( c ); 3 ( c ) ; t; (III.103) where [;] is the commutator, ~ L() is a superoperator where for any operators g and , ~ L(g) is dened as ~ L(g)gg y 1 2 g y g 1 2 g y g; (III.104) and the feedback Hamiltonian F ( 1 ( c ); 2 ( c ); 3 ( c )) 3 X j=1 j ( c )E j ; (III.105) where j (;)2f1;1g. We do not specify the functionsf j (;)g as we will see that they are irrelevant in the case to be considered. Note that theg j 's andE j 's are the stabilizer generators and correctable errors dened in Example 1. 64 We consider the ADL case given as an example in [2]. In this case, 2 = 64 and 2 = 128, where is again the rate of the i.i.d. bit- ip error processes. If we dene the mapping (j;l;m) ADL , for any operator and any j;l;m2f1;1g, as (j;l;m) ADL () + 2 ~ L(g 1 ) + ~ L(g 2 ) + ~ L(g 1 g 2 ) t 2 i [F (j;l;m);]t: (III.106) Using semidenite programming for computing diamond norm [71, 27], we nd that for all j;l;m2f1;1g, D( (j;l;m) ADL ) = 2 7:6847 2 t: (III.107) Hence, the map ADL;c would have D( ADL;c ) =D( ( 1 (c); 2 (c); 3 (c)) ADL ) = 2 7:6847 2 t: (III.108) On the other hand, calculation shows D() = 2t: (III.109) Therefore, a fair comparison would be to choose such thatD() =D( ADL;c ), which implies that should be = 7:6847 2 = 7:6847 64: (III.110) 65 Figure III.4: The CTQEC performance of our protocol and ADL protocol using three- qubit bit- ip code under independent bit- ip channels with error rate . The parameters of the ADL protocol are 2 = 64 and 2 = 128, and the parameter of our protocol is = 7:6847 64. The solid and dashed blue lines are, respectively, the correctable overlap and the codeword delity of our protocol. The lled and unlled red circles are, respectively, the correctable overlap and the codeword delity of ADL's simulation of their protocol [2] sampled at time increments in 0:01 (1=). 66 Given the initial state 0 =j0ih0j =j000ih000j, Fig. III.4 shows the codeword delity and correctable overlap (i.e., the codeword delity after a strong error correction is ap- plied) of our protocol when = 7:6847 64 and ADL's simulation of their protocol [2]. We see that both the code delity and correctable overlap are much better using our pro- tocol. The reason for this improvement is not entirely obvious. One reason we suspect for the improvement is that the intuitive approach in ADL { weakly measuring the stabilizer generators { is not optimal. We can see this in the simple example of protecting a single qubit statej0i from bit- ip errors. The error-correcting code in this case has a single stabilizer generator Z. On the Bloch sphere, the bit- ip error process gradually brings the state from +1 towards 1 on the z-axis. The most obvious approach using strong error correction would measure the stabilizer Z and then apply a correction X if necessary. However, this is not the only approach: one could actually measure along any axis of the Bloch sphere, followed by a unitary correction conditioned on the measurement outcome. For strong error-correction, all of these protocols work equally well. In the limit of weak measurements, however, the weak limit of these protocols are no longer equivalent. 
It can be shown that weak measurement along a direction perpendicular to the z-axis, followed by a weak unitary correction, will yield the greatest averaged increase in the purity of the state [33]. This implies that for continuous-time quantum error correction, instead of weakly measuring the stabilizerZ which is along the direction parallel to thez-axis, one should measure along a direction on thex-y plane. For example, one could weakly measure X or Y to yield the most averaged increase in the purity of the state, followed by a weak rotation about the Y or X axis to maximize the delity withj0i. Going from this simple example to a more general code suggests that weakly 67 measuring the stabilizer generators will not increase the overlap with the code space as much as weakly measuring a complementary set of operators. III.5 Conclusion In this chapter, we investigate and formalize the structure of quantum-jump CTQEC protocol involving weak measurements and weak unitary corrections. We show that for a given [[n;k;d]] quantum stabilizer code the minimum required size of the ancillary system is nk + 1 qubits, and we propose a method achieving this minimal requirement. We compare the performance of the proposed protocol to other known CTQEC protocols by Oreshkov and ADL. Our protocol is eectively equivalent to Oreshkov's quantum- jump CTQEC protocol, and hence the performance is the same. However, we reduce the number of ancillas required by an exponential factor. We also consider an improved protocol by optimizing the error-correcting parameter at each step, and we conclude after some analysis that this does not yield much benet in general. For the comparison with the ADL protocol, we use the diamond norm to equalize the error-correcting strengths of the two maps and then compare their performances for the same code and noise model. Our protocol signicantly outperforms the ADL protocol using the three-qubit bit- ip code against X noise. A critical issue that requires future investigation is that the Hamiltonian terms cou- pling the primary system to the ancillas are in general multi-body operators, which would be hard to implement experimentally. One possible approach uses the fact that our pro- posed formalism actually gives a family of quantum-jump CTQEC protocols. It would be 68 interesting to see if there are members of the family where the terms of the Hamiltonian are low weight while still using only a modest number of ancillas. 69 Chapter IV Teleportation-based Fault-tolerant Quantum Computation in Multi-qubit Large Block Codes IV.1 Introduction Quantum computers are extremely vulnerable to errors during computation. Theory has shown that if all types of errors are suciently local, and their rates are small enough to fall below a threshold, it is possible to realize quantum computations of arbitrary size with arbitrarily small error by so-called \fault-tolerant" methods, using quantum error-correcting codes (QECCs) [18, 24, 44]. Since the threshold theorems [1] have been established, several fault-tolerant quantum computation (FTQC) schemes have been pro- posed, including but not limited to those in Refs. [25, 26, 38, 61, 62, 77]. Practically speaking, we say an FTQC scheme is \fault-tolerant" if the logical error rates for each elementary logical operation are suciently low that a quantum algorithm can be executed with a high probability of success. 
Typically, a practical quantum algo- rithm usingK qubits andQ elementary steps hasKQ 10 10 , so the logical error rate for each logical operation should be much less than 10 10 [51]. However, most FTQC schemes 70 require enormous overhead to achieve this rate, by increasing either the \concatenation levels" for concatenated codes or the \distances" for topological codes. Consequently, a logical qubit is encoded in thousands of physical qubits [38, 23, 51, 42]. Steane observed more than a decade ago that multi-qubit block codes can achieve signicantly higher code rates for comparable error protection ability, but logical gates in these codes are quite dicult to implement [69, 70]. In this chapter, I will discuss a scheme we have proposed in [13] to exploit the advantages of multi-qubit block codes. Similar to von Neumann architecture, our scheme has three components as shown in Fig. IV.1: a \memory" array of [[n;k;d]] CSS code blocks C m [14, 67] with k 1; a \processor" array of an [[n 0 ; 1;d 0 ]] quantum code blocks C p that support a transversal T (=8) gate (or other non-Cliord gate); and an \ancilla factory" that continuously produces fresh logical ancillas for error correction, teleportation, and logical operator measurement. It is known that universality cannot be achieved in a single code by transversal gates alone [21], (see also [76, 17]). Universality is usually achieved with the help of \magic states", which dominate the overhead of most FTQC schemes [23]. An important feature of our scheme is that magic state distillation is unnecessary. FTQC with the same feature has been shown in [56, 36, 5]. Those schemes are restricted to a limited class of codes [39], and may sacrice some error-correcting ability to this goal. Our scheme, however, applies to all CSS codes, without loss of error-correcting power. In the rest of this chapter, I discuss the detail of each component in our FTQC scheme. In IV.1.1, the framework of the scheme is introduced. 71 IV.1.1 Universal FTQC Quantum information is stored in the memory array, and error corrections using Steane syndrome extraction [68] are constantly performed. Logical Cliord operations are done by measuring sequences of logical operators on the C m blocks. We will show that mea- suring logical Pauli operators of C m can be combined with error correction if particular ancillas are available. To perform a logicalT gate on a particular logical qubit, that qubit is teleported to a C p block, where a transversal T gate is performed, and then teleported back to its original memory block. Again, given suitable ancillas, one can measure joint logical operators between the C m and C p blocks and do logical teleportation. Thus, uni- versal quantum computation is achieved. This scheme needs only ancilla preparation, transversal circuits, and single-qubit measurements, and is intrinsically fault-tolerant. Details are given below. Many clean ancillas of various types are required in this scheme. Fortunately, these ancillas are stabilizer states, and they can be prepared using quantum circuits consisting only of Cliord gates and then distilled [43]. The distillation procedure is more like entanglement distillation [6] than magic state distillation. Magic state distillation requires postselection and may take several iterations to achieve a low enough error rate, while in principle, stabilizer states can be prepared and distilled in a single step, using only Cliord gates and Pauli measurements. 
We have not analyzed ancilla distillation in our work, but just assume that the ancilla factory is capable of preparing all required ancillas with high fidelity.

[Diagram: a memory array of blocks M, a processor array of blocks P, and an ancilla factory supplying ancilla blocks A_m and A_p to them.]
Figure IV.1: The framework of our teleportation-based FTQC scheme.

[Diagram: the Steane syndrome extraction circuit, with transversal CNOTs between the data block and the Z and X ancillas followed by Z- and X-basis measurements, and its effective error model with an input error E_i and an output error E_f.]
Figure IV.2: The circuit for Steane syndrome extraction and its effective error model.

IV.2 Steane Syndrome Extraction

Steane syndrome extraction is used to measure the stabilizer generators of C_m (or C_p). The procedure is as follows (shown as the left circuit in Fig. IV.2):
1) Prepare two ancillas of C_m where all logical qubits are set to the states |0>_L^k and |+>_L^k (called the X and Z ancillas respectively), for Z and X error syndrome measurements, respectively.
2) Do a transversal CNOT from the information block to the Z ancilla.
3) Do a transversal CNOT from the X ancilla to the information block.
4) Do single-qubit measurements on the X and Z ancillas in the X and Z bases, respectively.
Collecting the measurement outcomes and multiplying together the correct subset of ±1 results reveals the eigenvalue of each stabilizer generator, and hence the error syndrome.

IV.3 Measuring logical operators

Steane showed that measuring a logical X^u or Z^u can be combined with the recovery operation in Steane syndrome extraction [69], where u is a binary k-tuple indicating which logical qubits are acted on. If logical X^u is to be measured, the X ancilla |0>_L at step 1) is replaced with |0>_L + |u>_L, which is also a stabilizer state. The rest of the steps are the same. For example, suppose we wish to measure X_i X_j. Logical qubits i and j of the X ancilla are prepared in the state (1/√2)(|0_i 0_j>_L + |1_i 1_j>_L), which is a joint +1 eigenstate of X_i X_j and Z_i Z_j, and the other logical qubits are prepared in the state |0>_L. This logical operator measurement is protected by the classical error-correcting codes from which C_m (or C_p) is constructed. We now generalize this method to products of logical X and Z operators. To illustrate, we show how to measure logical operators of the form X_i Z_j on logical qubits i and j. This will allow us to do any Clifford gate, as we shall see. If i ≠ j, logical qubit i of the X ancilla and logical qubit j of the Z ancilla at step 1) are prepared in the entangled state (1/2)(|0_i 0_j>_L + |0_i 1_j>_L + |1_i 0_j>_L - |1_i 1_j>_L), which is the joint +1 eigenstate of X_i Z_j and Z_i X_j, while the other logical qubits of the X or Z ancillas are prepared in |0>_L and |+>_L, respectively. If i = j, the ancilla is prepared in a joint +1 eigenstate of Y_i Z_i and Z_i X_i. It is possible to measure any logical operator X^u Z^v by preparing more complicated ancillas. Note that these measurements are combined with Steane syndrome extraction.
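As an illustration of the classical post-processing in Steane syndrome extraction (collecting the single-qubit outcomes and multiplying the appropriate ±1 subsets), the sketch below mimics the Z-basis record of the Z ancilla for the [[7, 1, 3]] code built from the [7, 4, 3] Hamming code. If no other faults occur, that record is a random Hamming codeword XORed with the X-error pattern on the data block, so its classical parity checks reproduce the X-error syndrome. The specific parity-check matrix and the bit-level model are my own illustrative choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(7)

# Parity-check matrix of the [7,4,3] Hamming code (one conventional choice)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Generator matrix: rows span the null space of H over GF(2)
G = np.array([[1, 1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0, 0],
              [0, 1, 0, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])
assert not np.any(H @ G.T % 2)   # every generator row satisfies the parity checks

def random_codeword():
    return G.T @ rng.integers(0, 2, size=4) % 2

# An X error on one data qubit
error = np.zeros(7, dtype=int)
error[rng.integers(7)] = 1

# Z-basis record of the Z ancilla after the transversal CNOT from the data:
# a random codeword (from the |+>_L ancilla) XORed with the data's X-error pattern
record = (random_codeword() + error) % 2

# Classical post-processing: take the parities given by the rows of H;
# this reproduces the syndrome of the X error on the data block
syndrome = H @ record % 2
assert np.array_equal(syndrome, H @ error % 2)
print("measured syndrome:", syndrome, " matches the error syndrome")
```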
The red blocks represent joint measurements of logical qubits, and the blue one represents bitwise T or T y applied to the processor block. Pauli operators. We reserve logical qubit 1 as a buer qubit used in teleportation. The logical operators of C p are labeled ( X 0 , Z 0 ). To teleport logical qubit j from C m to C p : 1) Measure the operators X 0 X 1 and Z 0 Z 1 . This prepares a logical Bell state between the processor block and the buer qubit. 2) Measure the operators X 1 X j and Z 1 Z j . This does a logical Bell measurement on the buer qubit and qubit j, and teleports qubit j to the processor block. 3) If necessary, apply a logical Pauli operator to the processor block to correct the state. 4) Apply the logical T gate by a transversal circuit on the processor block. 5) Measure the operators X 0 X 1 and Z 0 Z 1 . This does a logical Bell measurement on the processor block and the buer qubit, and teleports the transformed qubit back to logical qubit j of the memory block. 6) If desired, apply a logical Pauli operator to the memory block to correct. The procedure is illustrated in Fig. IV.3 for a logical T gate. Note that the X and Z ancillas are prepared in an entangled state between the C m and C p blocks for the joint measurements. 76 IV.5 Logical Cliord gates Logical Cliord gates can be performed within a memory block solely by measuring log- ical operators, which is like a simplied logical teleportation. Suppose we wish to do a Hadamard gate on logical qubit i of a C m block in the statej i L . Logical qubit 1 is a buer qubit in statej0i L . We do the following measurements: 1) measure X 1 Z i . 2) measure X i . Logical qubit 1 is left in the state Hj i L , up to a Pauli correction X 1 , Y 1 or Z 1 , while logical qubit i is left inji L and can be reset toj0i as a new buer qubit, if desired. Similarly, we can perform a CNOT from logical qubit i to j of a C m block, again using logical qubit 1 as a buer qubit in statej0 L : 1) Measure X 1 X j . 2) Measure Z i Z j . 3) Measure X i . This does a CNOT from logical qubits i to j (up to a Pauli correction), shifts them to logical qubits j and 1, respectively, and moves the buer qubit to qubit i in state ji L . Similar procedures for the Phase and SWAP gates are given in the supplementary material. We can build any Cliord unitary from Hadamard, Phase, CNOT, and SWAP gates. But a complicated Cliord unitary can also be done directly by measuring more complicated combinations of logical operators with the help of a particular ancilla. A tradeo exists between the eciency of enabling a larger set of Cliord operations and the complexity of having to prepare and distribute more kinds of ancillas. 77 IV.5.1 Error model Steane syndrome extraction and its variations are used throughout the scheme. At least four kinds of errors exist: memory errors in the code blocks, physical gate errors, faulty ancilla preparation, and measurement errors. We model errors in physical gates, ancilla preparations, and measurements by treating them as perfect operations followed or pre- ceded by Pauli errors. Herein we model physical noise as depolarizing errors. At each time step, every physical qubit \independently" undergoes a Pauli error X, Y or Z with probability=3, or remains unchanged with probability 1, where is the memory error rate. We treat ancilla preparation as perfect followed by independent depolarizing noise on each qubit with rate r. 
Similarly, we treat each single-qubit gate as perfect, followed by a depolarizing error with rate p g 1 for single-qubit gates. A two-qubit gate is modeled as a perfect gate followed by one of the 15 possible one- or two-qubit errors from IX,IY , IZ,XI,XX,XY ,XZ,YI,YX,YY ,YZ,ZI,ZX,ZY , andZZ with equal probability p g 2 =15, or no error with probability 1p g 2 . Finally, the measurement of a single physical qubit has a classical bit- ip error with probability p m . (Note that we do not expect the \form" of errors to greatly aect the performance; but the assumption of independence across qubits is very important.) Measurement outcomes in Steane syndrome extraction can be erroneous due to either imperfect measurements or errors during the circuit. This makes error analysis dicult. Traditionally this is handled by repeated syndrome measurements. We now show that syndrome measurement can be done in a \single shot." All errors during the syndrome extraction process can be mapped to errors occurring on the data qubits before and after the process, so that the ancillas, gates and qubit measurements in the circuit can be 78 regarded as error-free, as illustrated in Fig. IV.2. This is formally stated as the following theorem. Theorem 3. (Eective error.) During imperfect Steane syndrome extraction and its variations, if errors in the same block (memory, processor, or ancilla) are uncorrelated, then the errors are equivalent to eective errors acting only on the \data" qubits before and after the process. The idea is to commute errors forward or backward in the circuit. Consequently we only have to decode the eective error on the data qubits at each syndrome measurement. Since syndromes are measured constantly, this greatly reduces the time overhead for syndrome extraction and implementing logical gates. On the other hand, it can also eliminate potential errors caused by repeated measurements. The estimated eective error will be a Pauli operator and can be corrected instantly, or kept track of in the cumulative Pauli frame on the data qubits. After every round, there will generally be a residual error that has not yet been detected, and that diers from the current error estimate. However, this is not a problem so long as its weight is always small compared to the code distance. Note that this theorem is applicable to quite general independent noise models and not just the depolarizing channel. At the same time, the analysis of error propagation is greatly simplied. If we assume =p m =p g 1 =p g 2 ,p, the eective model on the data block at each time step can be approximated by E tot [] (1 11p) + 71 15 pXX + 71 15 pZZ + 23 15 pYY: (IV.1) 79 For simplicity, we choose the error model to be a depolarizing error with rate p e = (71=5)p in the following simulations, where p is the underlying physical error rate. IV.6 Estimate of the logical error rate The error rate for each logical step is determined by the failure rate of decoding for the eective error process. The performance and the physical resources of our scheme depend heavily on the choice of quantum codes for C m and C p . For C m , desirable properties include: 1) Have high distance. 2) Have good code rate. 3) There exists an ecient decoding algorithm. As preliminary examples, we study three large block codes obtained by concatenating a medium-sized block code with a high-distance single-qubit code. 
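The depolarizing rate quoted just below can be related to Eq. (IV.1) by a short arithmetic check. The sketch is only illustrative; the reading that p_e is chosen so that p_e/3 matches the largest Pauli weight in Eq. (IV.1) is my own interpretation rather than a statement from the text.

```python
from fractions import Fraction as F

# Pauli error weights in the effective channel of Eq. (IV.1), in units of p
w_X, w_Z, w_Y = F(71, 15), F(71, 15), F(23, 15)

# Trace preservation: the total error probability is 11p
assert w_X + w_Z + w_Y == 11

# Depolarizing approximation: each Pauli occurs with rate p_e/3, with p_e/3
# set to the largest weight above so that no error rate is underestimated
p_e_over_p = 3 * max(w_X, w_Z, w_Y)
print(p_e_over_p)            # 71/5, i.e. p_e = (71/5) p
assert all(p_e_over_p / 3 >= w for w in (w_X, w_Z, w_Y))
```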
By concatenating the [[89; 23; 9]], [[127; 57; 11]], and [[255; 143; 15]] quantum BCH codes [28] with the [[23; 1; 7]] quantum Golay code, we obtain CSS codes with parameters [[2047; 23; 63]], [[2921; 57; 77]] and [[5865; 143; 105]], respectively. All three block codes on average encode a single logical qubit in less than 100 physical qubits. The Golay code is decoded using the Kasami error-trapping decoder [37], and the BCH codes are decoded using the Berlekamp-Massey algorithm [7, 50]. Fig. IV.4 shows the estimated logical error rate for a memory block using Monte-Carlo simulation. However, the simulation complexity is too high to go beyond p e . 10 2 . Therefore, we use linear extrapolation to estimate that region, and nd that at p e = 0:007 (corresponding to p = 5 10 4 ) the logical error rates are less than 10 15 for all three codes. We see that the [[5865; 143; 105]] code stands out because 80 0.0010 0.0020 0.0030 0.0015 10 -23 10 -19 10 -15 10 -11 10 -7 0.001 p Block error rate Figure IV.4: The logical error rate of the memory blocks for the [[2047,23,77]] code (blue), [[2921,57,77]] code (red) and [[5865,143,105]] code (green) versus physical error ratep. The number of samples for each point is up to 410 8 . The dashed lines are from extrapolation of linear tting. of its high code rate and extremely low logical error rate, making it a very promising code in practice. For C p , we need a CSS code that allows a transversal non-Cliord gate such as the concatenated [[15; 1; 3]] shortened RM code [39] or the 3D gauge color code [8]. Here we simulate the concatenated [[15; 1; 3]] code, which allows a transversal T gate. This code can correct almost all bit- ip errors when p e is small. There exists an optimal ecient soft-decision decoder for concatenated codes [59], which diagnoses the error syndromes using a message-passing algorithm [47]. We performed Monte-Carlo simulations of the concatenated [[15; 1; 3]] code of two and three levels with the soft-decision decoder. The results are plotted in Fig. IV.5. For three levels of concatenation, the logical error rate drops to less than 2 10 12 atp e = 0:007 (by extrapolation). Hence, when the physical error rate is less than 5 10 4 , the logical error rate is below 2 10 12 for a single round of syndrome extraction and all logical gates. In this case, the error rate for each logical 81 0.00100 0.00050 0.00200 0.00030 0.00150 0.00070 10 -14 10 -11 10 -8 10 -5 0.01 p Block error rate Figure IV.5: The logical error rate of the logical T gate performed on the concatenated [[15; 1; 3]] code of two levels (blue) and three levels (red) using up to 3 10 7 samples for each point. The green point represents a numerical upper bound for three levels when the physical error rate is p = 7 10 4 . The dashed lines are from extrapolation of linear tting. operation is well below 10 10 , which will allow interesting quantum algorithms. Nor do we have any reason to believe this is optimal: in all likelihood, better codes exist for both storage and processor blocks. Note that we have not attempted to prove a threshold theorem for this scheme: the goal was to quantify how large a computation can be done with a xed choice of code. However, the crossing point in Fig. IV.4 is intriguingly similar to what one would expect with a threshold of around 0.3%. The existence of a threshold for this scheme is an open question. IV.7 3D Gauge Color Codes as C p As seen in Sec. IV.1, an open problem of our FTQC scheme is searching for good CSS codes for C m and C p . 
The task for C p is especially challenging since not many known family of codes permits trasversal implementation ofT gate. TheC p we considered in our 82 paper is the three-level and two-level concatenated [[15; 1; 3]] quantum shortened Reed- Muller (RM) codes [39]. Bomb n recently proposed the family of 3D gauge color codes [8] that permit transversal T gate. The [[15; 1; 3]] RM code is the smallest member in this family, and therefore exploring into the 3D gauge color codes seems promising. In [41], the authors presented a rigorous verication of Bomb n's proposal. In [10], the authors proposed a particular construction of 3D gauge color code and analyzed the performance of the code using a clustering decoder [9] for topological quantum codes. There have been several publications on 3D gauge color codes and, up to our knowledge, two code constructions [8, 10] are currently known; yet none of the existing publications claim that they have found an ecient decoder for this family of codes. We have looked at some possibilities of eciently decoding these codes, and I will present our result in the next chapter. 83 Chapter V Iterative Decoder for Three-dimensional Gauge Color Codes V.1 Introduction In fault-tolerant quantum computation, quantum information is encoded using error- correcting codes, so that it can be protected from decoherence. An important require- ment on any allowed logical or encoded operation is that it can be transversally imple- mented [65]. In order to achieve universal fault-tolerant quantum computation, it would be very helpful to have codes that permit transversal implementations for each of the gates of a universal set. Although, no single code allows transversal implementation for the entire set. This requirement of transversality restricts the number of possible code candidates. In particular, not many codes are known to have transversal implementation of logical non-Cliord gates, and color codes [8] are a class of them. Color codes are topological codes that are dened on high-dimensional lattices. The least dimension of a color code that has transversal implementation of a logical non-Cliord gate is three. Three-dimensional (3D) color codes allow transveral implementation of the logical 8 -gate, 84 or T-gate. In fact, color codes belong to a broader class of topological codes called the gauge color codes [8]. The gauge color codes are generally subsystem codes, while the color codes are members that have no gauge degree of freedom. It has been shown that transversal implementation of a universal gate set is available within gauge color codes, by switching between dierent codes using the gauge-xing technique [56]. Recent papers on gauge color codes include [41] and [10]. In [41], a collection of rigorous proofs regarding gauge color codes was provided. In [10], Brown et al. studied the threshold of a class of 3D gauge color codes. However, no ecient decoder is known for this class of codes. In this chapter, we analyze the decoding problem and propose an ecient decoder for the 3D gauge color codes. The rest of this chapter is organized as follows. In Sec. V.2, we review fundamental knowledge of 3D gauge color codes. In Sec. V.3, we propose a decoder for 3D gauge color codes. We reason and give detail explanation of our approach. In Sec. V.4, we show the performance of the decoder. We give closing remark in Sec. V.5 discussing potential work in the near future. 
V.2 3D gauge color codes In this section, we review background knowledge and dene notation regarding 3D gauge color codes. In Sec. V.2.1, we dene the topological structure of a 3D gauge color code. In Sec. V.2.2, we explain how to dene a 3D gauge color code using the topological structure. Sec. V.2.3 discusses a straightforward method to generate such a topological structure. In Sec. V.2.4, we introduce the decoding problem of 3D gauge color codes. 85 V.2.1 Topological lattice of a 3D gauge color code In this section, we introduce the set of topological lattices in R 3 on which the 3D gauge color codes are dened. Such a latticeL has to satisfy two properties: Condition 1. L is a homogeneous simplicial 3-complex that is a triangulation of a 3- simplex. We clarify the details of this condition as follows. In real Euclidean space, ak-simplex is ak-dimensional polytope that is the convex hull of a set ofk + 1 anely independent verticesfv 0 ;v 1 ;:::;v k g: =f k X j=0 j v j j j 08j; k X j=0 j = 1g: (V.1) The ane independency guarantees that the polytope is exactly dimension k and not lower. Therefore, a 0-simplex is a vertex, a 1-simplex is an edge, a 2-simplex is a triangle, and a 3-simplex is a tetrahedron. For a k-simplex , the convex hull of any non-empty subset of vertices of is itself a simplex, which is called a \face" of the simplex . Specically, a face of with l + 1 vertices is called an l-face of . Therefore, a k-simplex has k+1 l+1 l-faces. For example, a 3-simplex (tetrahedron) has four 2-faces (triangles), six 1-faces (edges), and four 0-faces (vertices). Given a k-simplex , for lk, we denote the set of all its l-faces as l . A simplicial complexK is a set of simplices that satises the following two conditions: 1) If 2K is a k-simplex, then l K8lk. 2) If 1 ; 2 2K, then either 1 \ 2 =;, or 1 \ 2 2 l 1 and 1 \ 2 2 l 2 . 86 In other words, K includes all the faces of its elements, and any two simplices in K can only be connected through a mutual face of both simplices. The second condition prohibits the connection of two simplices in K through faces of dierent dimensions. For example, a vertex cannot appear in the middle of an edge. A simplicial k-complex K is a simplicial simplex where the largest dimension of sim- plices in K is k. Given a simplicial k-complex K, for l k, we denote the set of all l-simplices in K as l K . Notice the similarity of this notation to the earlier one that was dened for a simplex. A homogeneous simplicial k-complex K is a simplicial k-complex where each simplex in K is a face of a k-simplex in K. Therefore, a homogeneous simplicial 3-complex can be understood as a set of tetra- hedra together with all of their vertices, edges, and triangles, such that two dierent tetrahedra can only be connected through a common face of both tetrahedra. LetK be a homogeneous simplicialk-complex, anl-face is a boundary face if it belongs to a single k-simplex of K. The boundary of K, denoted @K, is the subcomplex of K including all boundary (k 1)-faces ofK together with all their faces. The interior of K, denotedK , isK =Kn@K. We will also denote the subset of l-simplices inK that are in the interior of K as l K l K \K , with an underline below . As we will see, this subset of simplices of a certain dimension is important when dening the stabilizer and gauge groups of a gauge color code. A triangulation of a k-simplex is a division of the k-simplex into a homogeneous simplicial k-complex K, such that @K = @. Therefore, the latticeL in condition C 1. 
is consisted of a collection of tetrahedra packed together by sharing vertices, edges, or 87 triangles, such that the space they occupy is that of a tetrahedron, and the lattice is seen as a tetrahedron from outside with 4 vertices, 6 edges, and 4 triangles. Condition 2. The latticeL is 4-colorable. For a given lattice including a set of vertices and a set of edges connecting the vertices, the lattice is said to be m-colorable if there exists an assignment of m dierent colors to the vertices, such that no two vertices connected through an edge are assigned the same color. V.2.2 3D gauge color code A latticeL satisfying conditions 1 and 2 can be used to construct three dierent 3D gauge color codes. Adapting the notation from [41], we denote these codes as CC L (x;z), where the two integer parameters x and z satisfy x +z 1 and x;z 0. As a result, the three gauge color codes are CC L (0; 0), CC L (0; 1), and CC L (1; 0). As we will see, only CC L (0; 0) is a subsystem code. The other two of them, CC L (0; 1) and CC L (1; 0), are actually stabilizer codes that have no gauge degrees of freedom, and we will call them \3D color codes". 3D color codes are of particular interest to us since they are the ones that permit the transversal implementation of the logical T gate. Nevertheless, we will explore decoding algorithms for all three types of codes in later sections. Physical qubits of these codes are located on the 3-simplices, or tetrahedra, of the latticeL. From Sec. II.1, we may identify the set of physical qubits as Q L = 3 L = 3 L . Color codes are CSS codes. Each stabilizer or gauge generator consists of either X (Pauli-X) or Z (Pauli-Z) operators supported on a set of qubits. 88 Let A Q L and O be an operator acting on a single qubit, we denote O A as the tensor operator that is supported on qubits inA. Furthermore, for a simplex, we denote Q L () as the set of all 3-simplices, or tetrahedra, ofL that have as their faces. Then, we say that an operator is \O-operators located on a simplex " if it isO Q L () , which is the tensor operator that is supported on qubits that have as their faces. For the codeCC L (x;z), theX-type stabilizer generators are located on the interiorx- simplices ofL, and theZ-type stabilizer generators are located on the interiorz-simplices ofL. The X-type gauge generators are located on the interior (2z)-simplices ofL, and theZ-type gauge generators are located on the interior (2x)-simplices ofL. As an example, the set of all X-type stabilizer generators isfX Q L () j2 x L g. It can now be seen that for (x;z) = (0; 1) or (x;z) = (1; 0), the set of stabilizer generators is equal to the set of gauge generators, and therefore there is no gauge degree of freedom and the code is a stabilizer code. As with all stabilizer codes, the code space is the simultaneous +1-eigenspace of all stabilizer generators. V.2.3 Construction of the code lattice Two dierent constructions of 3D gauge color codes are proposed in [8] and [10]. These constructions lead to families of 3D gauge color codes. They are both constructed using a similar method. This method starts with a careful stacking of tetrahedra such that: 1) the lattice is a homogeneous simplicial 3-complex. 2) Let each 2-simplex be assigned a color that is the complement color of its three vertices. Then the boundary surface of the lattice can be divided into four simply-connected regions 89 (or simply regions with no holes), where each region consists of 2-simplices of a single unique color. 
After the above lattice is constructed, four boundary 0-simplices, or vertices, of dif- ference colors are then introduced. Six 1-simplices are created through all pairs of the boundary 0-simplices. For each boundary 0-simplex of color c, 1-simplices are created by connecting it to other 0-simplices that belong to 2-simplices of the same color c. With all these new 1-simplices (edges), new 2-simplices (triangles) and new 3-simplices (tetrahe- dra) are created in a similar manner. In our study, we use Bomb n's construction [8]. V.2.4 Decoding problem for 3D gauge color codes The design of CSS codes allows us to separately correct bit- ip X errors and phase- ip Z errors in two error correction processes. To detect and correct X(Z) errors, we use the syndrome inferred from measuring the Z(X)-type stabilizer generators. Therefore, the decoding process can be treated as two classical error correction processes. Given the code CC L (x;z), we start with a state in the code space. When an X(Z) error is applied to a qubit located on a 3-simplex, the state changes from inside the +1- eigenspaces to the1-eigenspaces of the Z(X)-type stabilizer generators Z Q L () ; 82 z \K (X Q L () ; 82 x \K ). Subsequent errors will ip the eigenvalues of the corresponding stabilizer generators between +1 and1. Therefore, a measurement result of +1(1) from an X-type stabilizer generator lo- cated on the simplex implies that an even(odd) number of qubits inQ L () haveZ errors. Similar argument goes for Z-type stabilizer generators. 90 It is convenient to dene an invertible map from the measurement results to two binary bit-strings, one for the X-type stabilizer generators, and one for the Z-type stabilizer generators, such that +1 is mapped to 0, and1 is mapped to 1. In this way, a new X(Z) error at 3-simplex changes this measurement result simply by adding a bit string supported on x ( z ) under modulo-2 arithmetic. In order to dene this map, we need to introduce a one-to-one correspondence between a simplex and its location in the bit-string. For all k2f0; 1; 2; 3g, let f k : k L !f1; 2;:::;j k L jg be a one-to-one function that denes an ordering of the k-simplices in K . Next, we denotem X andm Z as the binary column vectors for the measurement result. They are often called the \syndromes" of the error. Their i-th elements are: m X (i) = 8 > > > < > > > : 0; if + 1 is result of measuring X Q L (f 1 x (i)) ; 1; if 1 is result of measuring X Q L (f 1 x (i)) : (V.2) m Z (i) = 8 > > > < > > > : 0; if + 1 is result of measuring Z Q L (f 1 z (i)) ; 1; if 1 is result of measuring Z Q L (f 1 z (i)) : (V.3) The X(Z) parity-check matrix H X (H Z ) is aj x L jjQ L j(j z L jjQ L j) matrix with each row representing an X(Z) stabilizer generator. Their (i;j)-th elements are: H X (i;j) = 8 > > > < > > > : 0; if f 1 3 (j) = 2f 1 x (i); 1; if f 1 3 (j)2f 1 x (i): (V.4) 91 H Z (i;j) = 8 > > > < > > > : 0; if f 1 3 (j) = 2f 1 z (i); 1; if f 1 3 (j)2f 1 z (i): (V.5) The decoding problem of gauge color codes can be understood as: Given m X and m Z , nd two lowest-weighted binary column vectors c X and c Z such that m X =H X c Z ; m Z =H Z c X The correction operator is then X c X times Z c Z . V.3 Iterative decoder for 3D gauge color codes In this section, we explore a method of decoding 3D gauge color codes by iteratively applying single-qubit errors. The decoder consists of two sub-decoders, one for decoding X errors and the other for decoding Z errors. 
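Before describing the two sub-decoders, the bookkeeping of Sec. V.2 can be made concrete with a short sketch in Python. It is only an illustration under our own assumptions: the lattice is taken to be a plain list of tetrahedra (4-tuples of vertex indices), the toy complex below is a placeholder rather than one of the actual code lattices, and all function and variable names are ours.

from itertools import combinations
from collections import defaultdict
import numpy as np

# Toy lattice: a list of tetrahedra given as 4-tuples of vertex indices.
# (A real code lattice from Bombin's construction would be loaded here instead.)
tetrahedra = [(0, 1, 2, 3), (1, 2, 3, 4)]

def faces(simplex, l):
    """All l-faces of a simplex, as sorted vertex tuples."""
    return [tuple(sorted(f)) for f in combinations(simplex, l + 1)]

# For every l-simplex (l = 0, 1, 2), record which tetrahedra (qubits) contain it.
contains = {l: defaultdict(list) for l in range(3)}
for q, tet in enumerate(tetrahedra):
    for l in range(3):
        for f in faces(tet, l):
            contains[l][f].append(q)

# Boundary 2-faces belong to exactly one tetrahedron; the boundary subcomplex
# consists of those triangles together with all of their edges and vertices.
boundary = {f for f, qs in contains[2].items() if len(qs) == 1}
for tri in list(boundary):
    for l in (0, 1):
        boundary.update(faces(tri, l))

def parity_check(l):
    """One row per interior l-simplex, one column per qubit (Eqs. V.4-V.5):
    entry (i, j) is 1 iff tetrahedron j has that simplex as a face."""
    interior = [s for s in contains[l] if s not in boundary]
    H = np.zeros((len(interior), len(tetrahedra)), dtype=np.uint8)
    for i, s in enumerate(interior):
        H[i, contains[l][s]] = 1
    return H

# For the stabilizer code CC_L(0,1): X-type checks on interior vertices (l = 0)
# and Z-type checks on interior edges (l = 1).  The toy complex above is too
# small to have any interior simplices, so these matrices are empty there.
H_X, H_Z = parity_check(0), parity_check(1)
# Syndromes of Sec. V.2.4:  m_X = (H_X @ e_Z) % 2  and  m_Z = (H_Z @ e_X) % 2.

On a real code lattice, the matrices produced this way are exactly the parity-check matrices of Eqs. V.4 and V.5, and the X-type syndrome reveals Z errors while the Z-type syndrome reveals X errors, which is the decoding problem stated above.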
Since the parameters x;z2f0; 1g for 3D gauge color codesCC L (x;z), two types of decoders are required - a \vertex decoder" when the parameter is 0, and an \edge decoder" when the parameter is 1. The name \vertex decoder" comes from the fact that the stabilizer generators are located on the interior 0-simplices, or vertices, of the code lattice. The syndrome can be viewed as the set of vertices with measurement result 1 being highlighted on the lattice, which gives clues on the error in local areas. A similar argument applies to the \edge decoder". In general, the edge decoder should perform better than the vertex decoder, since the syndrome on the vertices can be inferred from the syndrome on the edges. 92 Suppose we are decoding the syndromes located on k-simplices. The iterative algo- rithm starts with a syndrome vector s that is equal to the measurement result, and an estimated error vector ^ e initiated as all-zero. In each single iteration of our decoders, an algorithm is used to determine a 3-simplex based on the current syndrome. A bit- ip is then applied at the location of , which is the f k ()-th element, in ^ e. The syndrome s is then updated to re ect the contribution of this additional error, by ipping the bits at all k-simplices that are faces of . At this iteration, we have: 8 > > > > < > > > > : ^ e !^ e +e[f 3 ()]; s !s + P j2 k e[f k (j)]; (V.6) where e[j] is the standard basis vector in the j-th dimension. The decoding algorithm then proceeds to the next iteration if the s is not all-zero. Usually there is some stopping criterion telling the algorithm to terminate when there is a trap, or the running time is too long such that nding a good solution is unlikely. One important thing to note is that in order to correctly decode an error string e, the estimated error ^ e need not exactly match the error. In the case of decoding X-errors, a correct decoding only requires that X ^ e be dierent from X e up to a stabilizer element or a gauge operator, which is some product of theX-type stabilizer generators andX-gauge generators. We will discuss the details in later sections. In the rest of this section, we discuss our vertex decoder in Sec. V.3.1 and edge decoder in Sec. V.3.2. 93 Figure V.1: Syndromes disappear within an error cluster due to modulo-2 arithmetic. V.3.1 Vertex decoder When the syndrome at a vertex is 1, there is denitely some error neighboring that vertex. We call those vertices with a syndrome of 1 being \highlighted". A highlighted vertex gives straightforward clues about errors in its vicinity. The primary diculty in decoding gauge color codes is the disappearance of syn- dromes within an error cluster due to modulo-2 arithmetic. Fig. V.1 is an example of this phenomena. The colored tetrahedra have errors on them. The faces shared by the tetrahedra show no signs of their vertices being ipped. The other major diculty in decoding gauge color codes has to do with the boundary of the lattice. The four boundary vertices do not represent stabilizer generators, therefore 94 their syndrome values, if they had any, are not known. Our solution is to run two instances of the decoder using two settings of syndromes for the boundary vertices, where one of them is guaranteed to be the correct syndrome conguration. The reason is that when an error happens at a tetrahedron, the syndrome values of all its vertices of four dierent colors all ip at once. 
Therefore, for any error, a condition that should always be satised is that the numbers of highlighted vertices of the four dierent colors should all be odd or all be even. Since the four boundary vertices are of dierent colors, we can assign them two sets of values satisfying the condition we just discussed. These two sets of values dier by just ipping all the bits. In this way, we extend the function f 0 () to include the boundary vertices in the domain. We also include additional rows to the parity-check matrix to re ect the parity- checks of those boundary vertices. The syndrome string s is also extended to include the syndromes of the boundary vertices. We believe the above way of assigning syndrome values to the boundary vertices is benecial, since one of the assignments is guaranteed to be correct, and in that case, we have more information about the error. However, due to the construction of the code lattice as mentioned in Sec. V.2.3, each boundary vertex belongs to a large number of tetrahedra, which is a much larger number than for any interior vertex. Therefore, as we will see later in our analysis, this information on the boundary is only useful to a limited extent. In a vertex decoder, the most important component is the algorithm that decides in each iteration which tetrahedron to apply an error on. A natural candidate for this algorithm would be the greedy algorithm. In the simplest greedy algorithm, a random 95 Figure V.2: An error that the greedy algorithm fails to correct most of the time. tetrahedron that has the most highlighted vertices is picked. This is due to the straight- forward guess that such a tetrahedron is likely to have an error on it, indicated by its highlighted vertices. However, Fig. V.2 shows a very simple type of error that this greedy algorithm fails to correct. In this case, each of the tetrahedra around the highlighted ver- tices has one highlighted vertex. A random one of those tetrahedra has a high chance of not being one of the two tetrahedra with errors, as shown in Fig. V.3, therefore increasing the weight of the error, as well as the size of the error cluster. To solve this problem, we propose a method called \defect diusion". From a defect, some weights are assigned to its neighbors. In this way, the vertices in the middle of a cluster are likely to receive multiple weight assignments. Then, the tetrahedra within an 96 Figure V.3: An incorrect decoding of the error in Fig. V.2. error cluster are much more likely to be picked, and the size of the error cluster would less likely to increase. The details of this method are as follows: First, we dene a distance metric on the set of vertices 0 L ,d : 0 L 0 L !f0; 1; 2;:::g. For all 1 ; 2 2 0 L , a \path" from 1 to 2 is a sequence of edges connecting a sequence of vertices starting at 1 and ending at 2 . Then d( 1 ; 2 ) is dened as the shortest path from 1 to 2 . Let 2 0 L and k be a non-negative integer, the \k-neighborhood of ",N k (), is dened as the set of vertices that have a distance of less than or equal to k from : N k () = 2 0 L :d(;)k : (V.7) 97 We call each of these vertices a k-neighbor of . Next, we dene a weight functionw 0 : 0 L ! [0;1) that assigns a non-negative weight to each vertex. We also dene a weight functionw 3 : 3 L ! [0;1) for a tetrahedron as: w 3 () = X 2 0 w 0 (): (V.8) In the end, the tetrahedron with the largest weight is selected. If more than one tetrahedron has the largest weight, one of them is chosen at random. 
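As a small illustration of the selection rule just described (Eqs. V.7 and V.8), the following Python sketch assumes the vertex weights w_0 are stored in a dictionary keyed by vertex index and the lattice is again a list of tetrahedra; the function names are ours.

import random

def w3(tet, w0):
    """Eq. V.8: the weight of a tetrahedron is the sum of its vertex weights."""
    return sum(w0.get(v, 0.0) for v in tet)

def pick_tetrahedron(tetrahedra, w0, rng=random):
    """Select a tetrahedron of largest weight, breaking ties uniformly at random."""
    weights = [w3(t, w0) for t in tetrahedra]
    best = max(weights)
    return rng.choice([t for t, w in zip(tetrahedra, weights) if w == best])

# w0 would typically put weight 1 on each highlighted vertex plus small diffused
# weights on nearby vertices, as described next.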
We will now show a simple implementation of this idea to help correct the error in Fig. V.2. In this implementation, we assign each highlighted vertex a unit weight of 1, and we assign each 1-neighbor of a highlighted vertex an additional positive weight"< 1. Each vertex of the triangle shared by the two colored tetrahedra has weight of 2", which is larger than any other 1-neighbors of the two highlighted vertices. Therefore, the two colored tetrahedra have largest weight 1 + 6", and applying a random selection between both of them eliminates an error. In fact, we can extend this method to take care of a string of three errors by as- signing an additional smaller weights, " 2 for example, to 2-neighbors of the highlighted vertices. The assignments of weights resembles a process where information about a defect is \diused" to its neighborhood. However, this kind of implementation is risky when the information is diused through the boundary vertices. This is because a boundary vertex has many more 1-neighbors than an interior vertex, and its 1-neighbors are not in a geometrically local region. Local information can be carried by boundaries to another local region, and this may cause misidentication of errors in other local regions. This suggests that the information that 98 is carried through the boundary vertices should be weighted less, since it is unreliable. Below is a modication of the weight-diusion process to reduce this problem. We dene aj 0 L jj 0 L j diusion matrix "D, where "< 1 is a positive number, and the (i;j)th element of D is D(i;j) = 8 > > > > > > > < > > > > > > > : 1 jN 1 (f 1 k (j))j ; if f 1 k (i)2N 1 (f 1 k (j)); 1; if i =j; 0; otherwise: (V.9) Note that we have previously extended the functionsf k () to include the boundary vertices in the domain. For a syndrome vector s, let us analyze the vector s 1 = Ds. When the j-th vertex is highlighted, i.e. s(j) = 1, all locations of 1-neighbors of the j-th vertex in s 1 are assigned an additional value of 1 jN 1 (f 1 k (j))j ", while these weights are taken from the j-th vertex. s 1 then represents a rst-order diusion, where a xed weight " is equally distributed from each highlighted vertices to its 1-neighbors. The diused weight to each 1-neighbor is inversely proportional to the number of 1-neighbors where the weight is diused from. In this way, the uncertainty of information being diused through boundary vertices is re ected in s 1 . A generalization of this implementation is to consider the matrices ("D) j . When a rst-order weight reaches a 1-neighbor of a highlighted vertex, another small portion, ", of this weight is then distributed to this 1-neighbor's neighbors. This repetition of diusion can be implemented several times, and we call the implementation that includes 99 k diusion operations the \k-diusion protocol". This protocol has the diusion matrix being: D k = k X j=0 ("D) k ; (V.10) where ("D) 0 I is the identity matrix. Note that we usually choose " << 1. The primary goal of introducing the diused weights is to give a direction when a group of local tetrahedra all have the same number of highlighted vertices. That is, we do not need" to be large to have achieve this goal, and also, large" could result in the undesirable eect of making the diusion more signicant than the defects themselves. Figure V.4: An example showing the diusion algorithm alone is not enough. 100 After the diusion process, it seems straightforward to select a random tetrahedron with the largest diused weight. 
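The diffusion step can be written down directly from Eqs. V.9 and V.10. In the sketch below, neighbors[j] is assumed to hold the 1-neighborhood N_1 of vertex j (which, by the definition above, contains j itself), and the exponent inside the sum of Eq. V.10 is taken to be j, so that the zeroth-order term is the identity; everything else is our own naming.

import numpy as np

def diffusion_matrix(n, neighbors):
    """Eq. V.9: D[i, j] = 1/|N_1(v_j)| if v_i is a 1-neighbor of v_j (i != j),
    D[j, j] = 1, and 0 otherwise."""
    D = np.eye(n)
    for j, nbrs in neighbors.items():
        for i in nbrs:
            if i != j:
                D[i, j] = 1.0 / len(nbrs)
    return D

def k_diffusion_matrix(D, eps, k):
    """Eq. V.10: D_k = sum_{j=0}^{k} (eps * D)^j, with the j = 0 term the identity."""
    Dk = np.zeros_like(D)
    power = np.eye(D.shape[0])
    for _ in range(k + 1):
        Dk += power
        power = power @ (eps * D)
    return Dk

# The diffused weights of a syndrome vector s (Sec. V.3.1) are then d_s = D_k @ s.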
In fact, since " << 1, any tetrahedron with a high- lighted vertex has much signicant weight compered to one without a highlighted vertex. Therefore, we can just go through all the highlighted vertices and check the weights of the tetrahedra that neighbor them. However, this is not a good strategy, and we show in Fig. V.4 an example that it fails. In Fig. V.4, the two errors are not adjacent to each other, but they both share some vertices with another tetrahedron. The current strategy will decide to apply a ip on this tetrahedron, which then result in an error string just like the one in Fig. V.3. One might notice that this string of three errors can be corrected if we proceed with the same 2-diusion protocol; however, if there are some other highlighted vertices near the end of this error string, a tetrahedron outside of this string may be chosen, and we could end up with a long string of errors that is no longer correctable. This suggests that a tetrahedron of highest weight may not have an error, and in some situations applying such a tetrahedron could be disastrous. We propose that a ltering of the highlighted vertices could be able to solve this problem. When there is a cluster of errors, we have shown that applying a tetrahedron in the center of the cluster could result in all defects in the center being erased, and the long string of errors may become harder to correct since the defects at the two ends are distant from each other. This suggests that we would do better by applying a tetrahedron near the ends to the cluster. We need a way to recognize such a tetrahedron, and this may begin with recognizing a highlighted vertex near the ends of an error cluster, and after that selecting a tetrahedron neighboring that highlighted vertex. The highlighted vertices that are near the ends of error clusters tend to have fewer defects in their vicinity, 101 and we may design schemes for selection of highlighted vertices to satisfy this property. We denote the set of highlighted vertices as S =f2 0 L :s(f 1 0 ()) = 1g, where s(j) is the j-th element of s. In the end, we have a set, ^ S S, of \better" highlighted vertices for the tetrahedron-selection stage. One scheme is simply selecting a highlighted vertex with the least diused weight. Suppose we are using the k-diusion protocol, ^ S = arg min 2S (D k s)(f 0 ()); (V.11) where (D k s)(j) is the j-th element of D k s. We call this scheme the \weight diusion selection" of highlighted vertices. Another scheme selects a highlighted vertex with the least number of highlighted vertices in its k-neighborhood. In this scheme, ^ S = arg min 2S jN k ()\Sj: (V.12) We call this scheme the \k-neighborhood selection" of highlighted vertices. As we will see, we observe from our simulations that the performance of the decoder is sensitive to the scheme for selection of highlighted vertices. Given a scheme for selecting the highlighted vertices, we now specify the algorithm for the vertex decoder using a k-diusion protocol. Let s be the syndrome vector, " << 1, and M be a positive integer that sets a limit on the number of iterations that can be carried out in one decoding instance. The decoded error ^ e is initialized as the all-zero vector, that is, ^ e =0. s and ^ e will change throughout the decoding process, and whenever 102 s =0, the decoder terminates and return ^ e as the decoded error. The diused vector d s is initialized as d s = D k s. For any vertex 2 0 L , its weight is w 0 () d s (f 0 ()), the f 0 ()-th element of d s . 
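The two selection schemes can be summarized in a few lines. In this sketch, S is the set of highlighted vertices, f0 maps a vertex to its index in the syndrome ordering, d_s = D_k s is the vector of diffused weights, and nbhd_k[v] is a precomputed set holding the k-neighborhood N_k(v); these names are our own shorthand.

def weight_diffusion_selection(S, d_s, f0):
    """Eq. V.11: highlighted vertices whose diffused weight is minimal."""
    best = min(d_s[f0[v]] for v in S)
    return {v for v in S if d_s[f0[v]] == best}

def k_neighborhood_selection(S, nbhd_k):
    """Eq. V.12: highlighted vertices with the fewest highlighted k-neighbors."""
    best = min(len(nbhd_k[v] & S) for v in S)
    return {v for v in S if len(nbhd_k[v] & S) == best}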
Each decoding iteration goes through the following steps: Step 1) The set ^ S is computed, and a random highlighted vertex v2 ^ S is selected. Step 2) A random tetrahedron t is selected from the set arg max 2Q L (v) w 3 (). Recall that w 3 () = P 2 0 w 0 () is the sum of the diused weights of all 's vertices. Step 3) ^ e, s, d s , and w 0 ()'s are updated re ecting the application of t. As we have mentioned before, the estimated error from the output of the decoder does not need to completely match the error that has happened. There is no logical error if they dier up to a set of operators which we call an \equivalence group". This property plays an important role in the success of our decoder. First of all, we discuss what is the equivalence group. In fact, this depends on the type of 3D color code. For the stabilizer code CC L (0; 1), Z errors are decoded with the vertex decoder, since the X-stabilizer generators are located on the interior vertices. In this case, the Z-stabilizer generators are located on the interior edges. Therefore, the group of operators generated by theZ-stabilizers is the equivalence group. We may treat this problem as consideringZ errors alone, and write it in the binary vector format. Then in order for there to be no logical errors, the dierence between the decoded error vector and the actual error vector must be in the row space of H Z . That is, if we dene an edge operator on the edge as bit- ip operators acting onQ L (), then there is no logical error as long as the dierence between the decoded error vector and the actual error vector is a product of some interior edge operators. A similar argument applies to the stabilizer code CC L (1; 0). For the subsystem code CC L (0; 0), the X and Z gauge operators are located on the interior edges. Any product of gauge operators is not a logical operator, 103 therefore, there is no logical error as long as the dierence is a product of some interior edge operators. Before we explain why this property is important for our decoder to work properly, we need the following theorem. Theorem 4.jQ L ()j is even for all 2 1 L . This theorem is a result of the 4-colorable property as stated in condition 2. It says that any interior edge belongs to an even number of tetrahedra. The consequence of Theorem 4 is that the syndrome of an edge operator is zero. For any set of tetrahedra A around an edge, the complementary set of tetrahedra around the same edge, Q L ()nA, yields the same syndrome as A. See Fig. as an example. In general, most interior edges have Q L () small. It is therefore frequent that the decoder applies the complement set of tetrahedra, but this does not introduce a logical error since the dierence with the actual error is an interior edge operator! V.3.2 Edge decoder We extend the denition from the previous section and call an edge with syndrome of 1 a \highlighted edge". Unlike highlighted vertices, the highlighted edges are less likely to disappear in an error cluster. Three out of four highlighted vertices of a tetrahedron may disappear when another tetrahedron sharing a mutual face is applied, leaving the remaining defects distant from each other. This is not the case for edge defects. No matter how two neighboring tetrahedra are applied, there is always highlighted edges from both tetrahedra that are connected by a single vertex. 104 Based on our simulation, a simple greedy algorithm with some minor improvements is sucient to yield good performance. 
We here specify the rst part of the algorithm, and we introduce the improvements later in this section. Similar to the vertex decoder, there are two possible syndrome conguration for the boundary edges based on the syndrome of the interior edges. We therefore run two instances of decoding corresponding to these two congurations. We also extend the sets and vectors to include the boundary simplices. Let s be the syndrome vector and S 1 L be the corresponding set of highlighted edges, and M be a positive integer that sets a limit on the number of iterations that can be carried out in one decoding instance. A vector, u, is created to store the numbers of highlighted edges on the tetrahedra, where its j-th element is: u(j) =jS\ 1 f 1 3 (j) j; (V.13) where we recall that 1 f 1 3 (j) is the set of edges on the j-th tetrahedron f 1 3 (j). The decoded error ^ e is initialized as the all-zero vector, that is, ^ e = 0. ^ e, s, S, and u will change throughout the decoding process, and whenever s = 0, the decoder terminates and return ^ e as the decoded error. Each decoding iteration goes through the following steps: Step 1) A random tetrahedron t is selected from the set arg max 2 3 L u(f 3 ()). Step 2) e, s, S, and u are updated re ecting the application of t. A common problem of greedy algorithms is that there could be local traps. Once the algorithm reaches such a trap, it will apply a set of tetrahedra over and over again, and it can longer advance. We found that there are two types of local traps as shown in Fig. V.5 105 Figure V.5: The rst type of trap in a greedy edge decoder. and Fig. V.6. In Fig. V.5, all four tetrahedra around an interior edge have errors. The red edges are the highlighted edges. Whenever a tetrahedron is applied at an iteration, that tetrahedron will be applied at the next iteration, since it will have the most highlighted edges compared to other tetrahedra. In Fig. V.6, there are seven tetrahedra, and one of them is located in the center where it owns the two dashed edges. Three tetrahedra share a dashed edge with the center tetrahedron, and the other three tetrahedra share the other dashed edge with the center tetrahedron. In this case, all tetrahedra except the center one have errors. The red edges are the highlighted edges. Just like the case in Fig. V.5, a tetrahedron that is applied at an iteration will be applied again at the next iteration. To deal with these kinds of traps, the following algorithm is used: 106 Figure V.6: The second type of trap in a greedy edge decoder. Algorithm 1. 1) Check if applying the last tetrahedron again will reduce the number of highlighted edges, if yes, apply that tetrahedron. 2) Search for sets of local edge defects that form loops. 3) For each of the loops, locate a new edge that belongs to a tetrahedron with each of the highlighted edges in the loop. Apply all tetrahedra around this new edge. In this algorithm, 1) ensures that the structure is brought back to that of Fig. V.5 or Fig. V.6. 2) locates either the edge shared by all tetrahedra in Fig. V.5 or the dashed highlighted edges in Fig. V.6. Applying all tetrahedra around those edges solves the problem. 107 We therefore introduce the following improvement to the greedy algorithm. This improvement applies Algorithm 1 whenever the greedy algorithm terminates while failing to haves6=0. After this, ifs is still not0, the greedy algorithm is run again starting with the current state. 
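For concreteness, a single iteration of the greedy stage can be sketched as follows. The sketch assumes the syndrome has already been extended to the boundary edges as described above, so that S is simply the set of highlighted edges (stored as sorted vertex pairs) and e_hat is the estimated error, a list of bits indexed by tetrahedron; the trap handling of Algorithm 1 and the restart logic described next are omitted, and the names are ours.

import random
from itertools import combinations

def greedy_edge_step(tetrahedra, S, e_hat, rng=random):
    """One iteration of the greedy edge decoder: pick a tetrahedron with the most
    highlighted edges (Eq. V.13), flip it in the estimated error, and toggle the
    syndrome on its six edges."""
    def edges_of(tet):
        return [tuple(sorted(e)) for e in combinations(tet, 2)]

    u = [sum(e in S for e in edges_of(t)) for t in tetrahedra]
    best = max(u)
    j = rng.choice([idx for idx, val in enumerate(u) if val == best])

    e_hat[j] ^= 1                          # flip the estimated error on tetrahedron j
    for e in edges_of(tetrahedra[j]):      # a flipped qubit toggles all of its edge checks
        S.symmetric_difference_update({e})
    return j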
A fixed positive integer parameter N is set as the maximum number of times we may run the greedy algorithm. A good setting of N depends on the number of error clusters. Typically, a small N suffices when the lattice is not large, and one can increase it for a larger lattice if the performance is not good. When the greedy algorithm terminates and s ≠ 0, the following steps are taken:
Step 3) If the greedy algorithm has been run N times, terminate; otherwise, run Algorithm 1.
Step 4) Run the greedy algorithm with the current e, s, S, and u.
V.4 Simulation Results
In this section, we show simulation results of the proposed decoders. Three codes are used in our simulation. They are from Bombín's construction [8] with code parameters 7, 9, and 11; these codes have sizes 1695, 3439, and 6095, respectively. We first examine the performance of the vertex decoder. Without loss of generality, let the channel be a bit-flip X error channel. Fig. V.7 shows the performance of our vertex decoder with 2-diffusion and 2-neighborhood selection, as introduced in Sec. V.3.1. The choice of diffusion and neighborhood selection is optimized for Bombín's 9th code; therefore, that code has the best performance of the three. As a side note, we observe in Fig. V.8 that 2-neighborhood selection yields better performance than weight diffusion selection for the 9th code, while the reverse is true for the 7th code. Therefore, it would be interesting to study the effect of different diffusion and neighborhood selection settings on the performance of the decoder. This should be an interesting problem to look at in the near future.
Figure V.7: The blue, red, and green lines are the performances of our 2-diffusion and 2-neighborhood selection vertex decoder for Bombín's construction with code parameters 11, 9, and 7, respectively. The channel is modeled as a binary symmetric channel of the corresponding type of error with error rate p. The simulation is sampled at channel error rates p = 0.01, 0.008, 0.006, 0.004, 0.002, 0.001, with 10^6 samples run at each error rate.
Figure V.8: The blue and red curves represent the performances of Bombín's 7th and 9th codes, respectively. The solid lines are the performances of the 2-diffusion and 2-neighborhood selection vertex decoder. The dashed lines are the performances of the 2-diffusion and weight diffusion selection vertex decoder. The channel is modeled as a binary symmetric channel of the corresponding type of error with error rate p. The simulation is sampled at channel error rates p = 0.01, 0.008, 0.006, 0.004, 0.002, 0.001, with 10^6 samples run at each error rate.
Next we examine the performance of the edge decoder. Without loss of generality, let the channel be a bit-flip X error channel. Fig. V.9 shows the performance of our edge decoder. The performance is quite good overall, and without doubt it performs much better than the vertex decoder.
Figure V.9: The blue, red, and green lines are the performances of our edge decoder for Bombín's construction with code parameters 11, 9, and 7, respectively. The channel is modeled as a binary symmetric channel of the corresponding type of error with error rate p.
The simulation is sampled at channel error rates p = 0.01, 0.008, 0.006, 0.004, 0.002, 0.001, with 10^6 samples run at each error rate. The vanishing sample points indicate that no errors were observed.
If we model the channel as a depolarizing channel, the performance will look similar to Fig. V.7, since the vertex decoder performs much worse than the edge decoder, and at least one type of error is decoded using the vertex decoder.
V.5 Closing Remarks
In this chapter, we look at some existing challenges in decoding 3D color codes. We propose a type of decoder that provides a partial solution to these challenges. There are several open problems that relate to this new decoder. One problem is that the decoder tends to fail at decoding some low-weight errors. Most of these errors have a common property: they are on, or at least close to, the boundary of the code lattice. Although the effect of the boundary decreases as the size of the code lattice grows, these errors are local errors and still persist. However, based on our observations, the structures of most of these low-weight uncorrectable errors are similar to each other. Therefore, it would be beneficial to recognize these errors and design a particular decoder for them. Yet this raises another question of how to recognize this kind of error. Since they do not create traps, it would be hard to recognize them as we did for our edge decoder. However, when these errors happen, the decoder tends to apply long strings of errors to make up for the misidentifications at its earlier stages. Therefore, at low channel error rates, we can tell immediately that something is wrong when the decoder has applied long strings of errors. Another problem worth investigating is the effect of different diffusion and vertex-selection techniques in the vertex decoder. We have shown in the previous section that the choices of those techniques have a noticeable effect on the performance of the decoder. Also, it would be convenient to provide a table that gives a good set of configurations for each code that we wish to use, for example the codes from Bombín's family.
Chapter VI
Conclusion
Quantum error-correcting codes are brilliant and useful designs for quantum error correction. Despite the simplicity of the principle behind quantum error-correcting codes, the actual implementations of the codes depend on the tasks and are often not trivial. In this dissertation, I present a selection of original research regarding the use of quantum error-correcting codes in different quantum tasks. In quantum cryptography, quantum error-correcting codes may be used to enhance the accuracy and security of the secret keys generated between the parties. We see in Chapter II that low-density parity-check codes are suitable candidates for carrying out a quantum key expansion protocol, as they have efficient decoders and records of good performance. We develop a technique to improve the net key rate of the protocol, and this should provide insight into how quantum key expansion protocols should be designed. In continuous-time quantum error correction, protocols involving precise weak measurements and weak corrections are needed to reflect a specific strong error correction over the course of time. In Chapter III, we present a recipe for constructing continuous-time quantum error correction protocols that perform error correction for stabilizer codes.
This 112 nding could be a leading point for quantum theorists to search for exciting continuous- time quantum error correction protocols of certain required properties. In fault-tolerant quantum computation, we address the benet to quantum computa- tion by encoding and teleporting logical qubits between code blocks. The proposed frame- work would provide a new direction to the task of building reliable quantum computers. We are hopeful that the framework would be extensively researched in the near future, so that it is ready to put into use when the associated quantum devices are experimentally mature. We also look at the problem of decoding the topological three-dimensional color codes. We propose a collection of ecient decoders that take advantage of the lattice structures of the codes. Topological codes are popular since locality is a desired property, and it would be no surprise that more and more interesting topological codes of high dimensions are being discovered. Many ideas in our proposal do not only serve for color codes. In fact, they may suit other types of topological lattices when proper adjustments are being made. We are in an exciting era where experiments have been showing breakthroughs in presenting reliable quantum computing resources. The importance of quantum error correction is more or more notable as the size of these resources grow. In the past decades, we have seen an extensive growth in the eld of quantum error correction accompanied by many fascinating research results. A growing amount of applications regarding the use of quantum error-correcting codes have been discovered, and many quantum computing tasks have been fullled with those applications. I hope the research conducted in this dissertation is able to provide evidence on how useful quantum error-correcting codes can be. I am also hopeful that the results in this dissertation would shed light on the paths to other potentially intriguing topics in quantum information processing. 113 References [1] Dorit Aharonov and Michael Ben-Or. Fault-Tolerant Quantum Computation with Constant Error Rate, 1999. http://arxiv.org/abs/quant-ph/9906129. [2] Charlene Ahn, Andrew C. Doherty, and Andrew J. Landahl. Continuous quantum error correction via quantum feedback control. Phys. Rev. A, 65(032340), 2002. [3] Charlene Ahn, H. M. Wiseman, and G. J. Milburn. Quantum error correction for continuously detected errors. Phys. Rev. A, 67:052310, 2003. [4] Charlene Ahn, Howard M. Wiseman, and Kurt Jacobs. Quantum error correction for continuously detected errors with any number of error channels per qubit. Phys. Rev. A, 70:024302, 2004. [5] Jonas T. Anderson, Guillaume Duclos-Cianci, and David Poulin. Fault-tolerant conversion between the steane and reed-muller quantum codes. Phys. Rev. Lett., 113(080501), 2014. [6] Charles H. Bennett, Gilles Brassard, Sandu Popescu, Benjamin Schumacher, John A. Smolin, and William K. Wootters. Purication of noisy entanglement and faithful teleportation via noisy channels. Phys. Rev. Lett., 76(722), 1996. [7] Elwyn R. Berlekamp. Algebraic coding theory. McGraw-Hill, New York, 1968. [8] H ector Bomb n. Gauge Color Codes: Optimal Transversal Gates and Gauge Fixing in Topological Stabilizer Codes. New J. Phys., 17(083002), 2015. http://arxiv.org/abs/1311.0879. [9] S. Bravyi and J. Haah. Quantum Self-Correction in the 3D Cubic Code Model. Phys. Rev. Lett., 111(200501), 2013. [10] Benjamin J. Brown, Naomi H. Nickerson, and Dan E. Browne. 
Fault-tolerant error correction with the gauge color code, 2015. http://arxiv.org/abs/1503.08217. [11] Todd A. Brun. A simple model of quantum trajectories. Am. J. Phys., 70:719, 2002. 114 [12] Todd A. Brun, Igor Devetak, and Min-Hsiu Hsieh. Correcting Quantum Errors with Entanglement. Science, 314(5798):436{439, Oct. 2006. [13] Todd A. Brun, Yi-Cong Zheng, Kung-Chuan Hsu, Joshua Job, and Ching-Yi Lai. Teleportation-based Fault-tolerant Quantum Computation in Multi-qubit Large Block Codes, 2015. http://arxiv.org/abs/1504.03913. [14] A. R. Calderbank and Peter W. Shor. Good quantum error-correcting codes exist. Phys. Rev. A, 54(1098), 1996. [15] Thomas Camara, Harold Ollivier, and Jean-Pierre Tillich. Construc- tions and Performance of Classes of Quantum LDPC Codes, 2005. http://arxiv.org/abs/quant-ph/0502086. [16] B. A. Chase, A. J. Landahl, and J. M. Geremia. Ecient feedback controllers for continuous-time quantum error correction. Phys. Rev. A, 77(032304), 2008. [17] Xie Chen, Hyeyoun Chung, Andrew W. Cross, Bei Zeng, and Isaac L. Chuang. Subsystem stabilizer codes cannot have a universal set of transversal gates for even one encoded qudit. Phys. Rev. A, 78(012353), 2008. [18] David P. DiVincenzo and Peter W. Shor. Fault-tolerant error correction with ecient quantum codes. Phys. Rev. Lett., 77(3260), 1996. [19] Andrew C. Doherty, Salman Habib, Kurt Jacobs, Hideo Mabuchi, and Sze M. Tan. Quantum feedback control and classical control theory. Phys. Rev. A, 62:012105, 2000. [20] Andrew C. Doherty and K. Jacobs. Feedback control of quantum systems using continuous state estimation. Phys. Rev. A, 60:2700, 1999. [21] Bryan Eastin and Emanuel Knill. Restrictions on transversal encoded quantum gate sets. Phys. Rev. Lett., 102(110502), 2009. [22] David Elkouss, Anthony Leverrier, Romain All eaume, and Joseph J. Boutros. 2009. [23] Austin G. Fowler, Matteo Mariantoni, John M. Martinis, and Andrew N. Cleland. Surface codes: Towards practical large-scale quantum computation. Phys. Rev. A, 86(032324), 2012. [24] Frank Gaitan. Quantum Error Correction and Fault Tolerant Quantum Computing. CRC Press, Boca Raton, FL, 2008. [25] Daniel Gottesman. Stabilizer codes and quantum error correction, 1997. http://arxiv.org/abs/quant-ph/9705052. [26] Daniel Gottesman. Theory of fault-tolerant quantum computation. pra, 57, 1998. 115 [27] Michael Grant and Stephen Boyd. CVX: Matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx, March 2014. [28] M. Grassl and T. Beth. Quantum bch codes. Proceedings Xth International Sympo- sium on Theoretical Electrical Engineering, Magdeburg, 1999. [29] Manabu Hagiwara and Hideki Imai. Quantum quasi-cyclic ldpc codes. Proceedings of ISIT 2007, Jun. 2007. [30] Min-Hsiu Hsieh, Todd A. Brun, and Igor Devetak. Entanglement-assisted quantum quasi-cyclic low-density parity-check codes. Phys. Rev. A, 79(032340), Mar. 2009. [31] Min-Hsiu Hsieh, Wen-Tai Yen, and Li-Yi Hsu. High performance entanglement- assisted quantum ldpc codes need little entanglement. IEEE Trans. Inf. Theory, 57(3):1761{1769, Mar. 2011. [32] Matteo Ippoliti, Leonardo Mazza, Matteo Rizzi, and Vittorio Giovannetti. Perturba- tive approach to continuous-time quantum error correction. Phys. Rev. A, 91:042322, 2015. [33] Kurt Jacobs. Optimal feedback control for rapid preparation of a qubit. Proc. SPIE, 5468:355, 2004. [34] Kurt Jacobs. Quantum Measurement Theory and its Applications. Cambridge Uni- versity Press, Cambridge, United Kingdom, 2014. [35] Kurt Jacobs and Daniel A. Steck. 
A straightforward introduction to continuous quantum measurement. 47:279, 2006. [36] Tomas Jochym-OConnor and Raymond La amme. Using concatenated quantum codes for universal fault-tolerant quantum gates. Phys. Rev. Lett., 112(010505), 2014. [37] T. Kasami. A decoding procedure for multiple-error-correcting cyclic codes. ieeeit, 10:134{139, 1964. [38] E. Knill. Quantum computing with realistically noisy devices. Nature, 434, 2005. [39] E. Knill, R. La amme, and W. Zurek. Threshold accuracy for quantum computation, 1996. http://arxiv.org/abs/quant-ph/9610011. [40] Yu Kou, Shu Lin, and Marc Fossorier. Low-density parity-check codes based on nite geometries: A rediscovery and new results. IEEE Trans. Inf. Theory, 47(7):2711{ 2736, Nov. 2001. [41] Aleksander Kubica and Michael E. Beverland. Universal transversal gates with color codes: A simplied approach. Phys. Rev. A, 91(032330), Mar. 2015. 116 [42] Ching-Yi Lai, Gerardo Paz, Martin Suchara, and Todd A. Brun. Performance and error analysis of knill's postselection scheme in a two-dimensional architecture. Quan- tum Inf. Comput., 14(9& 10):807{822, 2014. [43] Ching-Yi Lai, Yi-Cong Zheng, and Todd A. Brun. Ancilla preparation for steane syndrome extraction by classical error-correcting codes, 2015. in preparation. [44] Daniel A. Lidar and Todd A. Brun. Quantum Error Correction. Cambridge Univer- sity Press, Cambridge, United Kingdom, 2013. [45] Zhicheng Luo and Igor Devetak. Eciently implementable codes for quantum key expansion. Phys. Rev. A, 75(1):010303, Jan. 2007. [46] David J. C. MacKay. Good error-correcting codes based on very sparse matrices. IEEE Trans. Inf. Theory, 45(2):399{431, Mar. 1999. [47] David J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cam- bridge University Press, Cambridge, United Kingdom, 2003. [48] David J. C. MacKay, Graeme Mitchison, and Paul L. McFadden. Sparse-graph codes for quantum error correction. IEEE Trans. Inf. Theory, 50(10):2315{2330, Oct. 2004. [49] Eduardo Mascarenhas, Breno Marques, Marcelo Terra Cunha, and Marcelo Fran ca Santos. Continuous quantum error correction through local operations. Phys. Rev. A, 82:032327, 2010. [50] J. L. Massey. Shift-register synthesis and bch decoding. IEEE Trans. Inf. Theory, 15(122), 1969. [51] Rodney Van Meter and Clare Horsman. A blueprint for building a quantum com- puter. Commun. ACM, 56(10):84{93, 2013. [52] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, United Kingdom, 2000. [53] Maki Ohata and Kanta Matsuura. Constructing CSS Codes with LDPC Codes for the BB84 Quantum Key Distribution Protocol, 2007. http://arxiv.org/abs/quant-ph/0702184. [54] Ognyan Oreshkov. Continuous-time quantum error correction. In Daniel A. Lidar and Todd A. Brun, editors, Quantum Error Correction, chapter 8, pages 201{228. Cambridge University Press, Cambridge, United Kingdom, 2013. [55] Ognyan Oreshkov and Todd A. Brun. Continuous quantum error correction for non- markovian decoherence. Phys. Rev. A, 76(022318), 2007. 117 [56] Adam Paetznick and Ben W. Reichardt. Universal fault-tolerant quantum computa- tion with only transversal gates and error correction. Phys. Rev. Lett., 111(090505), 2013. [57] Juan Pablo Paz and Wojciech Hubert Zurek. Continuous error correction. Proc. Roy. Soc. London Ser. A, 454(1969):355, 1998. [58] David Poulin. Stabilizer formalism for operator quantum error correction. Phys. Rev. Lett., 95(230504), 2005. [59] David Poulin. 
Optimal and ecient decoding of concatenated quantum block codes. Phys. Rev. A, 74(052333), 2006. [60] David Poulin and Yeojin Chung. On the iterative decoding of sparse quantum codes. Quantum Inf. Comput., 8(10):987, 2008. [61] Robert Raussendorf and Jim Harrington. Fault-tolerant quantum computation with high threshold in two dimensions. Phys. Rev. Lett., 98(190504), May. 2007. [62] Robert Raussendorf, Jim Harrington, and Kovid Goyal. Topological fault-tolerance in cluster state quantum computation. New J. Phys., 9(199), Jun. 2007. [63] Mohan Sarovar, Charlene Ahn, Kurt Jacobs, and Gerard J. Milburn. Practical scheme for error control using feedback. Phys. Rev. A, 69:052324, 2004. [64] Mohan Sarovar and G. J. Milburn. Continuous quantum error correction by cooling. Phys. Rev. A, 72(012306), 2005. [65] Peter W. Shor. Fault-tolerant quantum computation, 1997. https://arxiv.org/abs/quant-ph/9605011. [66] Peter W. Shor and John Preskill. Simple proof of security of the bb84 quantum key distribution protocol. Phys. Rev. Lett., 85(2):441{444, Jul. 2000. [67] Andrew M. Steane. Error correcting codes in quantum theory. Phys. Rev. Lett., 77(793), 1996. [68] Andrew M. Steane. Active stabilization, quantum computation, and quantum state synthesis. Phys. Rev. Lett., 78(2252), 1997. [69] Andrew M. Steane. Ecient fault-tolerant quantum computing. Nature, 399:124{126, 1999. [70] Andrew M. Steane. Overhead and noise threshold of fault-tolerant quantum error correction. Phys. Rev. A, 68(042322), 2003. [71] John Watrous. Semidenite programs for completely bounded norms. Theory Com- put., 5:217, 2009. 118 [72] John Watrous. CS 766/QIC 820 Theory of Quantum Information (Fall 2011), 2011. https://cs.uwaterloo.ca/ watrous/LectureNotes.html. [73] H. M. Wiseman. Quantum theory of continuous feedback. Phys. Rev. A, 49(2133), 1994. [74] Howard M. Wiseman and Gerard J. Milburn. Quantum Measurement and Control. Cambridge University Press, Cambridge, United Kingdom, 2014. [75] Paolo Zanardi, Daniel A. Lidar, and Seth Lloyd. Quantum tensor product structures are observable induced. Phys. Rev. Lett., 92(060402), 2004. [76] Bei Zeng, Andrew Cross, and Isaac L. Chuang. Transversality versus universality for additive quantum codes. IEEE Trans. Inf. Theory, 57:6272{6284, 2011. [77] Yi-Cong Zheng and Todd A. Brun. Fault-tolerant scheme of holonomic quantum computation on stabilizer codes with robustness to low-weight thermal noise. Phys. Rev. A, 89(032317), 2014. 119
Abstract
Quantum computers are susceptible to decoherence. Therefore, quantum error correction is an important component of quantum computation, and quantum error-correcting codes are a tool for realizing it. In this dissertation, I propose applications of quantum error-correcting codes in different quantum information processing settings.
The dissertation is divided into six chapters. Chapter I introduces fundamental knowledge of quantum error-correcting codes. In Chapter II, I propose the use of finite-geometry low-density parity-check codes in a quantum key expansion protocol. A typical quantum key distribution protocol makes use of a pair of dual-containing classical codes; however, the dual-containing property is no longer required in a quantum key expansion protocol. Therefore, any error-correcting code that has an efficient decoder and good performance, for example a finite-geometry low-density parity-check code, may be used. In Chapter III, I propose a method for deriving protocols that perform quantum-jump continuous-time quantum error correction. In these protocols, weak measurements and weak corrections are rapidly applied to a quantum state in order to protect it from decoherence. Chapter IV proposes a scheme for fault-tolerant quantum computation by encoding logical qubits into computational code blocks. In order to achieve universal quantum computation in this scheme, we require an error-correcting code that allows transversal implementation of an operation outside the Clifford group. One class of error-correcting codes with this property is the "gauge color codes". In Chapter V, I propose the first efficient decoder for three-dimensional gauge color codes. These codes are topologically defined on a pure simplicial 3-complex, and figuring out how to decode them is an interesting problem. The conclusion is given in Chapter VI.