Quantum Computation in Wireless Networks

Doctoral Dissertation

Chi Wang
Ming Hsieh Department of Electrical Engineering
Viterbi School of Engineering

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Electrical Engineering)

Degree Conferral Date: Aug. 6, 2016

To the future.

Acknowledgement

I would like to first and foremost thank my advisor, Professor Edmond Jonckheere, who wholeheartedly guided me all the way for four years. He supported all my major academic decisions and backed me up without second thoughts when I faced setbacks. Back in 2012, when I was about to begin an industry job after finishing my master's degree, it was my advisor who welcomed me into his group as a Ph.D. student. This later transformed my whole mindset about science and technology to a point I could never have reached as a master's student, and opened a whole new world for me that I had never seen before. I could not have started this journey without him, and could not have finished it without him. It is my greatest luck to have met him.

I am also grateful to Professor Todd Brun, with whom I took my first course in quantum computation. It is that course that inspired my interest in quantum computing, and the teaching style of that course that fueled this interest: I was asking a large number of beginners' questions at the time, and all of them were answered professionally and patiently. I also feel lucky to have learned a great deal of wisdom from his character, such as humility, patience and, more importantly, humor.

I would also like to thank Professor Bhaskar Krishnamachari, a leading expert in wireless networking, for his open-mindedness about my unorthodox research combining wireless networking and quantum computation. Many thanks also go to Professor Aiichiro Nakano, who rubber-stamped my research with the computer scientist's seal of approval.
Lastly, I would like to thank ISI for providing me with the essential piece in my research, the actual D-Wave quantum computer. I am grateful to Dr. Robert Lucas for such unparalleled research conditions. I would also like to extend many thanks to Professor Daniel Lidar and his group, with whom I had many insightful discussions and from whom I drew a great deal of inspiration. Special thanks to my colleague and friend Huo Chen, with whom I collaborated in many academic endeavors; to my office folks, Laith, Eugenio, Roberto, Edwin, Reza, Hanie, with whom I share the same vision (though different research objectives); and finally, to Mr. Goodoff, who makes our time in the EEB building so much fun. Thank you, with love.

List of Figures

1 Proposed workflow of the quantum wireless application. The workflow features a standard 3-step process of weighting, scheduling and forwarding, discussed in detail in section 3, with the scheduling part done on D-Wave instead of classical heuristics.

2 The big picture of where we stand in the quantum computation problem. Blue indicates our focus and green indicates our current progress. Quantumness and speedup are not currently focused on, but demonstrating an application with an advantage over classical heuristics would contribute to those topics.

3 Concept of optimal transport and curvature, where a mass distributed over a ball $B_x$ is transported to the ball centered at $y$. The comparison between (i) the piecemeal process of transporting every single mass element by parallel transport of $xx'$ and (ii) the process of lumping the total mass of $B_x$ at its center and transporting it in one move to the center of the other ball for redistribution across the ball is quantified by the fraction $\frac{\int_{B_x} d(x', T(x'))\,dx'}{d(x,y)\,\mathrm{vol}(B_x)}$. This measure is greater than 1 if and only if $\mathrm{Ricci}[x,y] < 0$ in a negatively curved Riemannian manifold, or less than 1 if $\mathrm{Ricci}[x,y] > 0$ in a positively curved space.
4 Steady-state node-to-node delay of various routing protocols in different network topologies. LPR, BP and HD stand for Least Path Routing, Backpressure and Heat Diffusion, respectively.

5 Steady-state routing energy of various routing protocols in different network topologies.

6 Performance and sensitivity of different routing protocols in varying topology.

7 Evolution of average queue occupancy with time under uniform arrival rate.

8 Evolution of average queue occupancy with time under different arrival rates.

9 Adaptive system switching to the best protocol given the current topology and the desired performance metric.

10 Decision-tree-based switching logic of the adaptive system of Fig. 9. Each arrow color represents a different categorization of curvature.

11 Proposed pre-processor for the D-Wave adiabatic quantum computing platform. The arbitrary graph could be either "embeddable" or "unembeddable," but this is not known. The "preprocessor" acts only on the "unembeddable" graphs, as it only checks a necessary condition for embeddability, in polynomial time, in order to save embedding trial time. Note that even if the necessary-condition test passes, the graph could still be unembeddable, so that embedding trials remain necessary.

12 Experimental results on wall-clock running time for the embedding heuristic. A set of 40 problems, all with 40 nodes, generated in accordance with the Erdős–Rényi model, is tested. On the left plot, in order to test success probability, the embedding trials do not stop if a successful embedding is found before the end of 40 embedding trials. On the right graph, to simulate the real scenario, a default 10-trial process is run and stops once an embedding is found, in which case the graph is positively determined to be embeddable. On the other hand, if after a great many runs no embedding is found, the graph is declared unembeddable (although it might be embeddable if the trials missed the embedding). The orange part of the left pie is the time it takes the heuristic to declare the problem not embeddable, whereas the blue part of the left pie is the time it takes to succeed at constructing an embedding. The numbers in the pie chart are in seconds. Parameter loading and sampling times are provided in [70]. Determining embeddability takes much less time, since the process stops once an embedding is found, and one is usually found within the first few trials.

13 The conceptual connection among four related invariants in graph theory. We provide a strong numerical connection between Ollivier-Ricci curvature and treewidth in random arbitrary graphs; the latter connects to graph minors, and hence curvature could determine unembeddability. It is worth noting that, for engineering purposes, among the four topological invariants only the Ollivier-Ricci curvature has polynomial-time complexity.

14 Scatter plots of Ollivier-Ricci curvature versus treewidth (coefficient of determination $R^2$: 0.965 (left) and 0.889 (right)). The left plot is performed on a set of 40 random graphs of the same order using the Erdős–Rényi random graph model with increasing probability of connection (hence increasing curvature). The right plot is performed on a set of 250 random graphs, with the order of the graphs varying from 10 to 34; moreover, each order case contains random graphs with different probabilities of connection.

15 Accuracy analysis of Ollivier-Ricci curvature estimation and other well-known classical heuristics. Accuracy is estimated relative to the exact solution, so that the exact solution has an accuracy index of 1. Bars show the 25th and 75th percentiles of the estimation. The left graph shows the accuracy index of 60 graphs of the same order but with varying topology. The right plot shows the accuracy index of graphs with similar topology but with order varying from 10 to 40. Accuracy larger than one means the estimated treewidth is larger than the actual treewidth. The LibTW library [95] is used throughout the computation.

16 Wall-clock time cost of classical exact treewidth algorithms and the non-classical curvature estimate. On the left panel, the abscissa is consistent with increasing treewidth on a set of 60 graphs of the same order. On the right panel, the abscissa is the order of the graph in a set of graphs with similar topological characteristics in the sense of normalized treewidth. Both simulations are performed in the same environment as before. No parallelization is added. Note that the time is in log scale.

17 An example of network-to-conflict-graph conversion, QUBO generation, and minor embedding to the Chimera architecture. The conflict graph is generated based on the 1-hop interference model. Only 4 cells of the $K_{4,4}$ structure of the Chimera architecture are shown. On the D-Wave Two platform, 64 such cells are available, giving 512 available physical qubits. Quantum annealing is performed on this architecture to theoretically give the ground state of the configuration, and thus the network scheduling solution to the original problem. The numerical data show the conflict graph weight structure $G_{\mathrm{qubo}}$ and how it is mapped to the Ising Hamiltonian characterized by $h_{\mathrm{Ising}}$ and $J_{\mathrm{Ising}}$. The off-diagonal entries of $G_{\mathrm{qubo}}$ are chosen as 0.1 larger than the minimum of the weights of the two corresponding vertices of the conflict graph.

18 Energy compression.

19 Network delay.

20 D-Wave 2X noise improvement compared to D-Wave II. The noise reduction is sufficient to demonstrate a quantum advantage over SA after gap expansion. The experiment is performed on the same set of graphs as those used in Figure 19, while the SA curve is slightly different due to the randomness of SA when re-running it.

21 Network delay.

22 ST99, defined as $\frac{\log(1-0.99)}{\log(1-P_{\mathrm{OPT}})}$, relative to the required optimality specified by the network, before and after gap expansion on four test graphs. Note that the probability is calculated based on all solutions returned by D-Wave; thus an ST99 of $10^3$ corresponds to one set of annealing runs, which in our case costs 20 ms. The curve for Graph 2 ends at 0.9 optimality because there is no solution that satisfies that optimality after a total of 120,000 annealing runs. Also note that the left figure is run on D-Wave II while the right figure is run on D-Wave 2X.

23 Complete picture of the proposed quantum network scheduling framework.

List of Tables

1 Numerical values of the sensitivity of four protocols. $S_q$ is on the scale of $10^3$; $S_R$ is in the range of $10^7$.

2 Ollivier-Ricci curvature versus treewidth agreement at the limits of their respective ranges of values ($n$ denotes the order of the graph).

3 Search range for the best parameters for simulated annealing.

4 Problem size of the random graphs used in the experiment. The third column represents the actual physical qubits used after minor embedding.

5 Performance after gap expansion for Graphs 2, 3, 4.

6 Penalty weights with different setups and the resulting quality measure of the returned solution, averaged over 120 timeslots. Graph 2 is a larger instance than Graph 1. Delay refers to the average network delay in steady state, and NC refers to the non-convergent case where steady state is not reached within the tested time span.

Abstract

This thesis proposes a unique application framework for adiabatic quantum computation (AQC), specifically, its application to the optimization of wireless interference-constrained networks. The framework relies on recently developed programmable quantum annealers and, through the specific wireless networking problem, identifies a set of real-world challenges in quantum computation. This thesis identifies paths to be followed and suggests methods to be used in future quantum applications. Among the paths to be followed, one will mention the "gap expansion" technique that pulls those optimal solutions satisfying the interference constraints to the bottom of the energy spectrum, so that even in the case of a diabatic evolution the returned, possibly suboptimal, solution has a good chance of satisfying the interference constraints. In addition, it is shown that the specific wireless network constrained optimization might provide a benchmark situation where some definite advantages of quantum annealing over simulated annealing emerge. Finally, we provide a patch to a problem that has been plaguing AQC: the minor embedding of the problem graph in the architecture graph.
Instead of using time-consuming heuristics that might run for a long time before declaring that the problem is not minor-embeddable in the architecture, we propose an AQC "preprocessor" that runs the differential-geometric Ollivier-Ricci curvature test on both the problem and the architecture graphs to rule out cases where the embedding is impossible, thereby saving a substantial amount of time before the annealing runs are started.

Contents

List of Figures
List of Tables
1 Introduction
  1.1 The Landscape
  1.2 Obstacles to Quantum Applications
  1.3 Proposed Solution
2 Literature Review
  2.1 Scheduling and Capacity Region
    2.1.1 Network interference
    2.1.2 Centralized TDMA scheduling
  2.2 Simulated Annealing
  2.3 Quantum annealing
  2.4 Minor embedding
  2.5 Adiabatic Quantum Applications
    2.5.1 Year 2012
    2.5.2 Year 2013
    2.5.3 Year 2014
    2.5.4 Year 2015
    2.5.5 Speedup
3 Wireless Network Setup
  3.1 Introduction
    3.1.1 Topology and Least Path Routing in Wired Networks
    3.1.2 Heat Diffusion Routing and Dirichlet Routing
  3.2 Ollivier-Ricci Curvature
  3.3 Protocol Performance under Different Network Topology
    3.3.1 Simulation Setup
    3.3.2 Node-to-Node Delay Performance
    3.3.3 Routing Energy Performance
    3.3.4 Varying Topology and Sensitivity
    3.3.5 Summary of Topological Impacts
  3.4 Network Capacity Region versus Ollivier-Ricci Curvature
  3.5 Curvature driven adaptive control
4 Embedding Preprocessor
  4.1 Minor Embeddability and Problem Formulation
    4.1.1 Treewidth and Minor Embeddability
    4.1.2 Problem Formulation
  4.2 Ollivier-Ricci Curvature and Treewidth
    4.2.1 Proof of lower bound
  4.3 Simulation Results
    4.3.1 Correlation
    4.3.2 Accuracy
    4.3.3 Time cost
5 Mapping Preprocessor
  5.1 Ising Formulation
    5.1.1 Overview
    5.1.2 Wireless Protocol
    5.1.3 Mapping to QUBO
    5.1.4 Measurement of quality
  5.2 Gap Expansion, Energy Compression
  5.3 Experimental Results
    5.3.1 Experimental Setup
    5.3.2 Average network delay
    5.3.3 Throughput optimality and ST99[OPT]
    5.3.4 Summary
6 Conclusion
7 References

1 Introduction

1.1 The Landscape

'It promises to solve some of humanity's most complex problems' - Time Magazine (Feb. 2014)

Since the first proposal of the quantum Turing machine by David Deutsch in 1985 [58], quantum computation has seen fast evolution in recent decades. It has provided a revolutionary transformation of the notion of feasible computability, the most celebrated examples being Shor's factoring algorithm [59] and Grover's search algorithm [60]. Adiabatic Quantum Computation (AQC), first proposed in the year 2000 [54], has been a very promising candidate among future quantum computation models. Resembling the classical metaheuristic of simulated annealing [42], AQC aims to solve hard optimization problems with the promise of exploiting quantum tunneling effects so as to explore the search space more efficiently. With the introduction of the world's first programmable quantum annealing platform (known as D-Wave) in 2011, more research effort has been put into this endeavor, including benchmarking, error correction, quantumness tests and applications, all of which led to the following big questions:

• Is it quantum?
The short answer is 'most likely yes'. In Apr. 2013, experiments were performed on D-Wave with random instances of different levels of hardness [44, 45], and the results were compared with the classical models of simulated annealing, spin dynamics and quantum Monte Carlo; a unique bimodal distribution of the success probability of quantum annealing was observed to be in agreement with quantum Monte Carlo. This rejects the simulated annealing and spin dynamics models. In May 2014, Lanting et al. [61] experimentally determined the existence of entanglement inside D-Wave. In Nov. 2014, evidence of quantum tunneling was experimentally observed [62].

• Does it have a speedup over classical computers?

The short answer is 'no conclusive and convincing speedup has been measured thus far'. The definition of the term 'speedup' is more subtle than it appears, and speed itself is very problem dependent. Efforts have been made at showing a quantum advantage [44–46], but a general speedup has not yet been detected. Katzgraber et al. gave a possible reason why speedup has not been detected [63]: random Ising problems might be too easy, and the 20 µs annealing time might be too long. With the introduction of the D-Wave 2X with 1152 qubits in Aug. 2015, D-Wave released benchmark results claiming that the time-to-target measure on D-Wave is 8 to 600 times faster than competing algorithms on all input cases tested [64]. In Dec. 2015, an arXiv preprint by Google claimed that, with the new D-Wave 2X, a speedup as high as a factor of $10^8$ had been observed, but this work has not been well received in academia, mainly because the problem appears to be quite artificial, and because the annealing time might be too long for easier cases, hiding the exponential scaling. At the time of writing this dissertation, whether this speedup represents a 'true quantum speedup' is still under debate.

• Can improvements be made?

The short answer is 'yes'. In Jul. 2013, Pudenz et al. proposed quantum annealing correction on D-Wave [65], using multiple physical qubits to represent one logical qubit and properly setting the penalty weights between such redundant qubits, and observed significant improvements in experiments. Unlike traditional quantum error correction, where many-body interactions are required and the overhead is usually too large to be implementable on D-Wave, the Pudenz code is specifically designed for D-Wave's Chimera architecture. In Jul. 2015, Vinci et al. proposed error correction in conjunction with minor embedding [66]. By solving encoded problems experimentally, significant improvements on minor-embedded instances were detected. There also exist other error correction efforts in adiabatic quantum computing, in particular Vinci [67] and Mishra [68], both in late 2015.

• What can be done on D-Wave?

The not-too-short answer is 'current applications all have limited practical significance, including the one proposed in this thesis'. A detailed review, particularly of applications, is given in section 2.5. There exist several hard obstacles, discussed in section 1.2. In general, unlike gate-based quantum computation with proven universality, D-Wave is a specialized solver for combinatorial problems. Solving real-world problems is significantly different from solving theoretical problems, as will be explained later. It is important to map real-world problems in a proper way for the application to be useful, something that will be one of the topics of this thesis.

We propose a framework for solving real-world problems, with a network scheduling problem as a pilot problem. We propose to contribute mainly to the last problem listed above ('What can be done on D-Wave'), and also to contribute to improving the relevance of returned solutions by providing a set of improved solutions particularly designed to cope with real-world problems. This thesis is organized as follows:

• What is the background?
In section 1.2, we identify obstacles in designing a real-world application; in section 1.3 we give the big picture of the proposed solution.

• What are other groups doing?

We start our thesis by giving a detailed literature review in section 2.

• What have we done?

We introduce what we have done so far in wireless networking (section 3), the embedding preprocessor (section 4), and quantum network scheduling (section 5), which are three key concepts in the entire setup.

1.2 Obstacles to Quantum Applications

Coping with real-world applications is very different from tackling research-oriented problems, where the problem instance is deliberately constructed to demonstrate a research point. It is important to identify several obstacles that have to be overcome in real-world applications:

1. Real-world problems have arbitrary structure: In Feb. 2015, Wu [69] named minor embedding as the No. 1 challenge in applying AQC to practical problems. The hardware graph, known as the 'Chimera' architecture, is a fixed graph composed of a lattice of $K_{4,4}$ bipartite cells; thus, when mapping a real-world arbitrary graph onto the Chimera architecture, the NP-hard process of minor embedding has to be performed. This greatly increases the overall complexity of solving the problem.

2. Real-world problems have very good heuristics: It is of overriding importance to find a problem that could at least potentially justify the use of a quantum annealer. Compared to classical methods, including exact algorithms, metaheuristics and problem-specific heuristics, the quantum annealer should at least have a practical advantage, if not be faster.

3. Real-world problems have to be sizable: Even exact solvers for many NP-hard problems can solve small instances very fast. Problem size matters if one wishes to demonstrate a scaling advantage of a quantum computer. Due to the limited number of available qubits, mapping practical problems to the architecture becomes a challenge. Too much use of error correction would severely limit the size of the problem that can be solved. Thus, it is of great interest that applications utilize as many available qubits as possible.

4. Real-world problems have arbitrary parameters: The weights of the Ising Hamiltonian are problem-defined and can be anything. King et al. [70] claimed that the analog control errors on $h$ and $J$ follow Gaussian distributions with $\sigma_h \approx 0.05$ and $\sigma_J \approx 0.035$ on the available scale of $[-2, 2]$ on D-Wave II. These errors are believed to be significantly reduced on the new D-Wave 2X, as shown in our results in Section 4.3. It is also this error that has caused current efforts to find a convincing speedup to fail. Thus, for real-world problems with problem-defined weights, it is highly likely that two states become indistinguishable.

Due to the obstacles identified above, the current D-Wave is experiencing difficulty in its application to real-world challenges. The applications we have seen so far 1) are Chimera-architecture specific; 2) are of the class where classical heuristics can do better; 3) are of the class that solves only small instances; or 4) are of the class of idealized parameters. Thus, it is important to further bridge the gap between theory and applications to solve the following problem: there is currently no complete solution for solving practical problems on D-Wave.

1.3 Proposed Solution

We propose a unifying framework to solve real-world problems and cope with the challenges listed in section 1.2. In search of a 'good quantum problem' of practical significance, we first define a real-world problem to be of the following nature:

• It has a strict time constraint for the application.

• The problem is hard to solve exactly, and current heuristics have relatively poor performance under this time constraint.

• The optimal solution is good, but suboptimal solutions are acceptable, subject to the time constraint. There is only ONE optimal solution but SEVERAL suboptimal solutions.
• Suboptimal solutions have neither been considered nor carefully assessed.

We found that the wireless network scheduling problem satisfies all of these criteria and could become a 'good quantum problem'. The network scheduling problem features most of the scenarios found in a typical real-world problem, including: 1) time-varying problem structure; 2) protocol-defined problem parameters; 3) a strict time constraint and a dynamic nature, in the sense that the quality of the next timeslot's solution depends on the quality of the previous one. In section 2.1, we give a more detailed introduction to this problem. However, it is important that we nail down the structure of the framework, as shown in Figure 1.

Figure 1: Proposed workflow of the quantum wireless application. The workflow features a standard 3-step process of weighting, scheduling and forwarding, discussed in detail in section 3, with the scheduling part done on D-Wave instead of classical heuristics.

We intend to use the embedding preprocessor to partially solve the minor embedding challenge described in section 1.2; the mapping preprocessor to greatly improve the solution quality of quantum annealing by gap expansion; the rescaling preprocessor to cope with the conflict between real-world parameters and hardware precision; and the whole setup to demonstrate the real-world use of quantum computing, providing a reference for future application designs, thus demonstrating that quantum annealers can be justified and paving the way toward demonstrating a quantum advantage. Details of the wireless setup are introduced in section 3; details of the embedding preprocessor in section 4; details of the mapping preprocessor in section 5.

We also summarize the intent of this chapter in Figure 2. Note that we do not try to show the quantumness of D-Wave, nor do we intend to demonstrate a conclusive speedup. However, we believe that our application has shown an advantage over classical heuristics in a particular setup of the problem, and thus could contribute to the endeavor of demonstrating such a speedup.

Figure 2: The big picture of where we stand in the quantum computation problem. Blue indicates our focus and green indicates our current progress. Quantumness and speedup are not currently focused on, but demonstrating an application with an advantage over classical heuristics would contribute to those topics.

2 Literature Review

2.1 Scheduling and Capacity Region

2.1.1 Network interference

One of the fundamental problems in multi-hop wireless networks is network scheduling. The Signal-to-Interference-plus-Noise Ratio (SINR) has to be maintained above a certain threshold to ensure successful decoding of information at the destination. For example, IEEE 802.11b requires a minimum SINR of 4 to 10 dB, depending on whether the channel runs at 1 or 11 Mbps [22]. Consequently, only a subset of the edges in a network can be activated at the same time, since every link transmission causes interference with nearby link transmissions. Different networks and protocols use different interference models. The most commonly used model is the 1-hop interference model (node-exclusive model), in which only one link attached to a given node can be activated in the same timeslot, with the restriction applied at every node.
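As a concrete illustration, the node-exclusive rule above can be sketched in a few lines of Python (the link encoding and function name here are our own illustration, not from the thesis):

```python
import itertools

def conflict_graph_1hop(links):
    """Build the conflict graph for the 1-hop (node-exclusive) interference
    model: each wireless link becomes a vertex, and two links conflict
    (get an edge) exactly when they share an endpoint, since a node can
    take part in at most one transmission per timeslot."""
    conflicts = set()
    for a, b in itertools.combinations(links, 2):
        if set(a) & set(b):  # shared endpoint -> cannot be scheduled together
            conflicts.add(frozenset({a, b}))
    return conflicts

# A 4-node path 1-2-3-4: adjacent links conflict, node-disjoint links do not.
links = [(1, 2), (2, 3), (3, 4)]
cg = conflict_graph_1hop(links)
assert frozenset({(1, 2), (2, 3)}) in cg
assert frozenset({(1, 2), (3, 4)}) not in cg
```

An independent set in this conflict graph is then exactly a set of links that may be activated in the same timeslot.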
2.1.2 Centralized TDMA scheduling

Here we consider centralized Time Division Multiple Access (TDMA) scheduling, where a centralized scheduler is aware of the global network topology and performs best-effort scheduling on a timeslot basis in order to 1) maximize the number of simultaneous transmissions from non-interfering stations and 2) minimize the average network delay.

The fundamental mechanism in IEEE 802.11 uses the Distributed Coordination Function (DCF), which attempts to access the wireless medium in a distributed way and backs off for a random time following an exponential distribution. Despite its simplicity of implementation, DCF has serious drawbacks, mainly its poor throughput [23, 24].

There are two major models for analyzing network interference: one is the graph-based model, which solves a weighted maximum independent set problem on a conflict graph [17–21]; the other optimizes the geometry-based SINR [12–15]. The former is sometimes argued to rest on an overly idealistic assumption; however, the Maximum Independent Set (MIS) problem is still involved in the latter model [16] and is of interest to our dissertation; thus we base our abstraction on solving the MIS in a centralized scheduler in the Medium Access Control (MAC) layer.

To give formal definitions on a graph $G = (V, E)$, let $w_l$, $l \in E$, be the wireless networking link weight (related to the queue differential) and let $d_S(x, y)$ denote the hop distance between $x, y \in V$. Consider edges $e_u, e_v \in E$ and let $\partial e_u = \{u_1, u_2\}$, $\partial e_v = \{v_1, v_2\}$. We then define

$$d(e_u, e_v) = \min_{i,j \in \{1,2\}} d_S(u_i, v_j) \quad (1)$$

to be the distance between edges. Similar to the definition in [11], a subset of edges $E'$ is said to be valid subject to the $K$-hop interference model if, for all $e_1, e_2 \in E'$ with $e_1 \neq e_2$, we have $d(e_1, e_2) \geq K$. Let $S_K$ denote the set of $K$-hop valid edge subsets.
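Definition (1) and the $K$-hop validity test translate directly into code. The sketch below (our own helper names; BFS supplies the hop distance $d_S$) is illustrative only:

```python
import itertools

def hop_distance(adj, x, y):
    """Hop distance d_S(x, y) via breadth-first search; adj maps each node
    to its neighbor list."""
    if x == y:
        return 0
    seen, frontier, d = {x}, [x], 0
    while frontier:
        d += 1
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v == y:
                    return d
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return float("inf")  # disconnected

def edge_distance(adj, eu, ev):
    """d(e_u, e_v): minimum hop distance over endpoint pairs, as in Eq. (1)."""
    return min(hop_distance(adj, u, v) for u in eu for v in ev)

def is_k_hop_valid(adj, schedule, K):
    """A set of edges is K-hop valid if every distinct pair is >= K apart."""
    return all(edge_distance(adj, e1, e2) >= K
               for e1, e2 in itertools.combinations(schedule, 2))

# Path 1-2-3-4: links (1,2) and (3,4) can coexist under K = 1 but not K = 2.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert edge_distance(adj, (1, 2), (2, 3)) == 0   # shared node 2
assert edge_distance(adj, (1, 2), (3, 4)) == 1   # nodes 2 and 3 are adjacent
assert is_k_hop_valid(adj, [(1, 2), (3, 4)], K=1)
assert not is_k_hop_valid(adj, [(1, 2), (3, 4)], K=2)
```

Note that two edges sharing a node have distance 0, so $K = 1$ validity coincides with the matching condition of the node-exclusive model.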
Then the network scheduling problem under the $K$-hop interference model is

maximize \sum_{l \in E'} w_l, subject to E' \in \mathcal{S}_K,    (2)

where the gist is to optimize under the interference constraint that no two links within $K$ hops of each other can be activated at the same time. In the $K = 1$ case, the problem is a max-weight matching problem and thus has a polynomial-time solution (Edmonds' blossom algorithm [41]). However, for the case $K > 1$, the problem is proved to be NP-hard and non-approximable [11]. In most cases, the network scheduling problem has to be solved in every timeslot during network operation; thus, the time complexity of the exact scheduling problem becomes critical.

Max-Weight scheduling by solving a weighted maximum independent set problem is a proved classical throughput-optimal algorithm [25]; that is, the scheduling set can stabilize all arrival traffic if the arrival rates are within the capacity region. However, in real-world applications, such an algorithm is unrealistic due to its NP-hardness and time constraints. Instead, heuristics are widely used and well studied, such as the greedy-style Longest-Queue-First (LQF) algorithm [1–6], random access algorithms [7], and classical probabilistic algorithms including simulated annealing, genetic algorithms, etc., discussed in more detail in Section 2.2. The LQF algorithm, in particular, has been claimed to achieve satisfactory throughput optimality in the $K$-hop interference model [4, 6], and is guaranteed to achieve at least $1/6$ of the optimal throughput for $K$-hop and $1/4$ of it for 2-hop [4] with fewer than 20 nodes.

2.2 Simulated Annealing

Among all classical heuristics, simulated annealing (SA) is the most important one. First proposed by Kirkpatrick et al. [42], SA emulates the metallurgical process of first melting a solid by heating it up and then solidifying it by slowly cooling it down to a low-energy state so as to obtain a pure lattice structure.
SA is designed as a general probabilistic algorithm for arbitrary optimization problems, with the cost function being the algorithmic counterpart of the energy in physical metallurgy. SA is formulated as follows:

p_{k+1} = 1 if f(x_{k+1}) < f(x_k); p_{k+1} = \exp\{-\frac{f(x_{k+1}) - f(x_k)}{t}\} otherwise,    (3)

where $t$ denotes the temperature, $f(x)$ denotes the cost function, $k$ denotes the iteration step, and $p_{k+1}$ denotes the probability of accepting the neighbor $x_{k+1}$ at iteration $k+1$. A properly chosen set of parameters is essential to obtain good results from SA, with the cooling schedule, which describes the change in temperature, being one of the most important. SA has been well studied both for the MIS problem alone [26, 27] and for applications in wireless networks [28, 29]. It is claimed that simulated annealing is superior to other competing methods on experimental instances of up to 70,000 nodes [26]. Several other classical heuristics have also been applied to the MIS problem, including neural networks [30–32], genetic algorithms [33–35], greedy randomized adaptive search [36], and Tabu search [37–40]. However, throughout this thesis, SA will be our primary concern.

2.3 Quantum annealing

Adiabatic quantum computation, a subcategory of quantum computing first proposed in [54] and later physically implemented by D-Wave, maps a QUBO problem, defined as

\min_x E(x_1, x_2, \ldots, x_N) = c_0 + \sum_{i=1}^{N} c_i x_i + \sum_{i<j}^{N} c_{ij} x_i x_j, \quad x_i \in \{0, 1\},    (4)

to an Ising formulation, defined as

H_{\mathrm{Ising}} = \sum_{i=1}^{N} h_i \sigma_i^z + \sum_{1 \le i < j \le N} J_{ij} \sigma_i^z \sigma_j^z,    (5)

with $\sigma_i^z$ being the z-Pauli operator of spin $i$, whose eigenvalues are $-1$ and $1$.
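The passage from the QUBO form (4) to the Ising form (5) is a linear change of variables. A minimal sketch using the substitution $x_i = (1 + z_i)/2$, $z_i \in \{-1, +1\}$ (the function name and data layout are ours):

```python
def qubo_to_ising(c0, c, cij):
    """Map E(x) = c0 + sum_i c[i] x_i + sum_{i<j} cij[(i,j)] x_i x_j with x in {0,1}
    to H(z) = offset + sum_i h[i] z_i + sum_{i<j} J[(i,j)] z_i z_j with z in {-1,+1},
    via the substitution x_i = (1 + z_i) / 2."""
    h = [ci / 2.0 for ci in c]
    J, offset = {}, c0 + sum(c) / 2.0
    for (i, j), q in cij.items():
        # x_i x_j = (1 + z_i + z_j + z_i z_j) / 4 distributes q over four terms
        J[(i, j)] = q / 4.0
        h[i] += q / 4.0
        h[j] += q / 4.0
        offset += q / 4.0
    return offset, h, J
```

By construction, the Ising energy (up to the constant offset, which annealing hardware ignores) agrees term by term with the QUBO cost on every assignment, so the ground state of (5) encodes the minimizer of (4).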
The annealer prepares an initial transverse magnetic field, whose ground state is an equal superposition of the $2^N$ computational basis states:

H_{\mathrm{trans}} = -\sum_{k=1}^{N} \sigma_k^x.    (6)

During the adiabatic evolution, the Hamiltonian evolves smoothly from $H_{\mathrm{trans}}$ to $H_{\mathrm{Ising}}$, with $s$ monotonically increasing from 0 to 1:

H(t) = (1 - s) H_{\mathrm{trans}} + s H_{\mathrm{Ising}}, \quad s \in [0, 1].    (7)

Thus, by the adiabatic theorem, if the evolution is "slow enough," the system remains in its ground state; the solution of the original QUBO problem can then be obtained, with a certain probability of success, by measurement on the Ising problem.

QUBO is widely studied and applied in many research fields that feature optimization, graphical models, Bayesian networks, etc. One specific area is the computer vision approach that involves minimizing energy functions. Felzenszwalb [43] provides an insightful survey of the applications of QUBO in computer vision. QUBO is proved to be NP-hard. There is some evidence that the D-Wave quantum computer gives a modest speed-up over classical solvers for QUBO problems, and may provide a large speed-up for some instances of QUBO problems [45]. Recently, on the D-Wave 2X with 1152 qubits, the speedup reached up to three orders of magnitude for a subset of scenarios in multiple query optimization problems [47].

2.4 Minor embedding

The general idea behind minor embedding is to embed a QUBO problem (a problem graph) into a hardware graph so that it can be solved by adiabatic evolution. Due to physical architectural considerations, such as limited qubit fan-out, minimization of coupler strengths, 2D chip integration, etc. [96], the current hardware architecture is designed using $K_{4,4}$ bipartite cells interconnected in a 2D lattice structure, named the "Chimera topology." For a problem graph to be embedded directly into a hardware graph, it is required that the problem graph be a subgraph of the architecture graph. In most cases, this is a very strong requirement for general problems, since the hardware graph is fixed.
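To make the subgraph requirement concrete: a problem graph $G$ embeds into a hardware graph $H$ as a subgraph iff there is an injective vertex map under which every edge of $G$ lands on an edge of $H$. A brute-force sketch, exponential in $|V(G)|$ and meant for tiny illustrative instances only (names are ours):

```python
from itertools import permutations

def subgraph_embeddable(g_edges, g_nodes, h_edges, h_nodes):
    """True iff the problem graph G is (isomorphic to) a subgraph of the hardware
    graph H, by brute force over injective vertex maps. Tiny inputs only."""
    h_set = {frozenset(e) for e in h_edges}
    for image in permutations(h_nodes, len(g_nodes)):
        phi = dict(zip(g_nodes, image))  # candidate injective map V(G) -> V(H)
        if all(frozenset((phi[u], phi[v])) in h_set for u, v in g_edges):
            return True
    return False
```

For instance, a triangle is a subgraph of $K_4$ but not of a 3-node path, which is the kind of obstruction that motivates the relaxation to minor embedding below.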
In the D-Wave architecture, minor embedding rather than subgraph embedding is used, allowing 1-to-many vertex mappings [92]. By properly adjusting the coupling strengths of particular edges and nodes [93], more than one physical qubit can represent the same logical qubit, thus greatly increasing the range of graphs that can be minor-embedded into a fixed hardware graph, at the cost of using more resources (more physical qubits).

The definition of minor embedding is as follows. Let $H$ be a fixed hardware graph. Given a problem graph $G$, a minor embedding of $G$ is a map $\phi: G \to H$ such that (i) each vertex $v \in V(G)$ is mapped to a connected subtree $T_v$ of $H$; (ii) there exists a map $V(G) \times V(G) \to E(H)$ such that for each $vw \in E(G)$, there are corresponding $i_v \in V(T_v)$ and $i_w \in V(T_w)$ with $i_v i_w \in E(H)$. Minor embedding relaxes the original requirement of subgraph embedding, provided that the resources (number of physical qubits) are adequate.

Crucially related to minor embedding is the concept of a tree decomposition $T$ of $G$: each vertex $i \in I$ of the tree $T$ abstracts a subset $V_i$, called a "bag," of vertices of $G$ such that (i) $\cup_{i \in I} V_i = V(G)$; (ii) for any $vw \in E(G)$, there is an $i \in I$ such that $v, w \in V_i$; (iii) for any $v \in V$, the set $\{i \in I : v \in V_i\}$ forms a connected subtree of $T$. The width of a tree decomposition is $\max_i (|V_i| - 1)$. The treewidth (tw) is the minimum width over all tree decompositions. The treelength is the minimum over all decompositions of the maximum of the diameters of the bags.

2.5 Adiabatic Quantum Applications

2.5.1 Year 2012

In May 2011, D-Wave announced D-Wave One with 128 physical qubits; an application to protein folding was later implemented on this annealer and utilized as many as 81 physical qubits, after reduction of many-body to two-local interactions [43]. The success rate varied from 80.7% for the 8-physical-qubit case to 0.013% for the 81-physical-qubit case on one annealing run. In the experiment, no comparison was made with simulated annealing and no speedup was claimed.
An adiabatic quantum machine learning application was also proposed [126], but not experimentally implemented.

2.5.2 Year 2013

A group at USC benchmarked D-Wave One [44] and concluded that, in comparison with a classical heuristic optimization algorithm (simulated annealing), no speedup had been detected; however, quantum annealing was believed likely to demonstrate an advantage upon scaling to thousands of qubits. In September, an application to determining Ramsey numbers [53] was run on D-Wave One. The application still required many-body reduction, and the authors reported that analog control error and the precision of $h$ and $J$ caused serious difficulties. As many as 28 computational qubits were used, transformed into 84 physical qubits, with a probability of success of 64.5% over 100,000 annealing runs. They also claimed to beat simulated annealing in terms of time to solution (TTS), but did not claim a general speedup.

2.5.3 Year 2014

D-Wave launched D-Wave Two with 512 physical qubits. Two applications of quantum annealing emerged, both led by a group from NASA Ames.

One such application is Bayesian network structure learning [49], where for the first time the sufficiency of suboptimal solutions was proposed, with the claim that the global optimum is not required. They also studied penalty weights and pointed to the probable problem of analog control error caused by precision constraints. However, since they found that only 7 logical qubits could be embedded, no experimental results were shown.

The other application is an operational planning problem [48]. For the first time, three high-level research challenges were identified, namely (1) finding appropriate hard problems suitable for quantum annealing, (2) mapping to QUBO with good choices of parameters, and (3) minor embedding onto hardware. They used qubits in the range of 8–16, with expected annealing runs in the range of 10–10,000 to reach the ground state with 99% certainty.
In the same year, the MAX 2-SAT problem was experimentally tested on D-Wave One by a USC group with up to 108 physical qubits [127], and the results were compared with classical exact algorithms. The USC group claimed that D-Wave scales more favorably with problem size, rather than with problem hardness.

2.5.4 Year 2015

The same NASA Ames group applied quantum annealing to electric power system fault detection [50] on D-Wave Two, and again emphasized that analog control error is a major obstacle in working with real-world problem-defined graphs. They utilized as many as 340 physical qubits, with an ST99 measure (expected number of repetitions to reach the ground state with 99% certainty) of typically 9000. They compared results with a classical exact solver and claimed a comparative speedup. However, a general quantum speedup was not claimed.

A group at USC also applied the graph isomorphism problem to D-Wave Two [51], which involves reducing a baseline Hamiltonian to a more compact Hamiltonian. The solved problem size is as large as 18 logical qubits, with an ST99 time of around 1 second in the 18-qubit case.

D-Wave also launched D-Wave 2X with 1152 physical qubits. The first application on D-Wave 2X was database optimization [47]; the authors claimed to have found a subset of problems that demonstrated a speedup of up to three orders of magnitude, and for the first time used over 1000 qubits. However, another group at NASA, testing the new machine on a deep learning application [52], reported that no speedup was found compared to competing classical heuristics.

Interestingly, in late 2015, Google announced a result [125] claiming a $10^8$ speedup on D-Wave 2X compared to state-of-the-art classical heuristics, based on carefully crafted "artificial problems."
However, this claim has not been well received in academia and is still under debate, for the reasons mentioned before: 1) the problems' artificial nature, and 2) suboptimal annealing times on easier cases may be hiding the exponential scaling.

2.5.5 Speedup

Note that, of all the applications we are aware of, including those listed here, most were solved with small logical instance sizes due to hardware constraints, and none demonstrated a conclusive speedup compared to classical methods. The Google result may be the first application that might qualify as a success story, but its validity is still controversial. In fact, demonstrating quantum speedup is more subtle than it appears to be. There has recently been a lot of insightful work on this endeavor [44–46]. One conclusive remark is that no evidence for a general speedup has been found, but a potential speedup might exist for certain problems.

3 Wireless Network Setup

In this chapter, we give a detailed introduction to our work on wireless network curvature, congestion, and capacity region.

3.1 Introduction

A Wireless Sensor Network (WSN) typically consists of a large number of low-power, computation-capable autonomous nodes. Unlike wired networks, where node-to-node delay is usually the only optimization objective, in WSNs power conservation is also a major concern. Sensor nodes usually carry generally irreplaceable power sources and are densely deployed within a frequently changing network topology [71]. This is also the reason why a power-conservative routing protocol is usually preferred in sensor networks over network flooding. However, certain applications still require lower delay, which generally entails a trade-off with power consumption. SEAD (Scalable Energy-Efficient Asynchronous Dissemination [72]) is an example of a protocol that proposes a trade-off between node-to-node delay and energy saving.
However, our focus here is to develop numerical connections between network topology and delay/routing energy. We develop a quantitative relationship between network topology and performance, aimed at a control scheme that dynamically selects the protocol in different network topological environments. Since a WSN is subject to frequent network topology changes, we also analyze the impact of topology change on routing performance, and develop a measure of protocol robustness to varying topology.

3.1.1 Topology and Least Path Routing in Wired Networks

Network congestion control is an important challenge in large and wide-area networks. It is shown in [73] that, under the Least Path Routing (LPR) protocol, where each node forwards packets in accordance with a global routing table, congestion of the network arises as a combination of the LPR protocol and the negative curvature of the network. It is proved that, in a negatively curved space in the sense of Gromov, geodesics between uniformly distributed (source, destination) pairs concentrate in the "centroid" of the network. To define the term "centroid," a quantitative betweenness centrality metric is defined accordingly. Asymptotic congestion estimates in terms of the size of the network in Gromov hyperbolic graphs and Euclidean lattices are also developed in [74]. It is further shown in [75] that a network with uniform degree can have significant traffic congestion when the degree is larger than 6, which is an indication of negative curvature. As we will show in later sections, network "congestion" in wireless networks is consistent with these wireline results, provided that we use a better, wireless-specific curvature metric.

3.1.2 Heat Diffusion Routing and Dirichlet Routing

Least Path Routing is simple to implement, but subject to heavy congestion at the centroid of the network if the network is negatively curved.
Moreover, in wireless sensor networks, global topology information is not generally available to every node, so access to a global routing table is no longer a valid assumption. Thus, a dynamic routing protocol referred to as Backpressure (BP) routing [76] has been proposed. It achieves maximum throughput in the presence of varying network topology without knowing either arrival rates or global topology.

The Heat Diffusion (HD) protocol, originally proposed in [77], is also a dynamic routing protocol, with the unique feature that it mimics the discrete heat diffusion process using information only from neighboring nodes. It is proved that it stabilizes the network for any rate matrix in the interior of the capacity region. The Heat Diffusion protocol is briefly formulated as follows: at timeslot $k$, let $Q_i^{(d)}(k)$ denote the number of $d$-packets (those packets bound to destination $d$ in the node set) queued at the network layer in node $i$, and let $f_{ij}^{(d)}(k)$ denote the actual number of $d$-packets transmitted via link $ij$, constrained by the link capacity $\mu_{ij}(k)$. HD is designed along the same 3-stage process as BP: weighting, scheduling, forwarding.

• HD Weighting: At each timeslot $k$ and for each link $(i, j)$, the algorithm first finds the optimal class of $d$-packets to transmit:

q_{ij}^{(d)}(k) = \max\{0, \; q_i^{(d)}(k) - q_j^{(d)}(k)\}, \qquad d_{ij}^*(k) = \arg\max_d q_{ij}^{(d)}(k).    (8)

To attribute a weight to each link, the HD algorithm then computes

\hat{f}_{ij}(k) = \min\{\lceil \tfrac{1}{2} q_{ij}^{(d^*)}(k) \rceil, \; q_i^{(d^*)}(k), \; \mu_{ij}(k)\},    (9)

w_{ij}(k) = \hat{f}_{ij}(k) \, q_{ij}^{(d^*)}(k).    (10)

• HD Scheduling: After assigning the optimal weight (10) to each link, the scheduling matrix $S(k)$ is chosen from a scheduling set $\mathcal{S}$, where the scheduling set represents the set of activated links satisfying the hop constraint defined in Section 2.1.

• HD Forwarding: Subsequent to the scheduling stage, each activated link transmits $\hat{f}_{ij}(k)$ packets in accordance with (9).
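The HD weighting stage, Eqs. (8)–(10), can be sketched for a single link as follows (a rough sketch; the function name and the dictionary representation of per-class queues are ours):

```python
import math

def hd_weight(q_i, q_j, mu_ij):
    """HD weighting for one link (i, j). q_i and q_j map class d -> queue length at
    nodes i and j; mu_ij is the link capacity. Returns (d_star, f_hat, weight)
    following Eqs. (8)-(10): positive queue differential, optimal class,
    half-differential transmission capped by queue and capacity, and link weight."""
    dq = {d: max(0, q_i[d] - q_j.get(d, 0)) for d in q_i}       # Eq. (8), differential
    d_star = max(dq, key=dq.get)                                # Eq. (8), optimal class
    f_hat = min(math.ceil(dq[d_star] / 2), q_i[d_star], mu_ij)  # Eq. (9)
    weight = f_hat * dq[d_star]                                 # Eq. (10)
    return d_star, f_hat, weight
```

Note how the ceiling of half the differential is what limits HD to transmitting only about half of the queue imbalance per timeslot, the anti-looping "laziness" discussed later.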
The Dirichlet protocol (a variant of HD) was originally proposed in [78]. The Dirichlet routing energy is defined as the sum of the squares of the link packet transmissions, with a $d$-class packet weighted by the link cost factor $\sigma_{ij}^{d}$. It is proposed that the routing cost can be minimized if the routing protocol follows Dirichlet's principle. The Dirichlet protocol is briefly reviewed as follows:

• Dirichlet Weighting: A set $D_{ij}(k)$ is defined such that $q_{ij}^{(d)}(k) > 0$ for all $d \in D_{ij}(k)$. The individual link cost, which represents the cost of transmitting a packet of class $d$ via the link, is denoted $\sigma_{ij}^{d}$. The Dirichlet weighting consists in solving the optimization problem of finding the $f_{ij}^{(d)}(k)$ minimizing

\sum_{d \in D_{ij}(k)} \left( (\sigma_{ij}^{(d)}(k))^{-1} Q_{ij}^{(d)}(k) - f_{ij}^{(d)}(k) \right)^2    (11)

subject to

\sum_{d \in D_{ij}(k)} f_{ij}^{(d)}(k) \le \mu_{ij}(k) \quad \text{and} \quad 0 \le f_{ij}^{(d)}(k) \le q_{ij}^{(d)}(k).    (12)

The notation is similar to that of the HD protocol. A weight is then assigned to each class $d \in D_{ij}(k)$, given by

w_{ij}^{(d)}(k) := 2 \, (\sigma_{ij}^{(d)}(k))^{-1} q_{ij}^{(d)}(k) \, f_{ij}^{(d)}(k) - (f_{ij}^{(d)}(k))^2,    (13)

and the final link weight is

w_{ij}(k) = \sum_{d \in D_{ij}(k)} w_{ij}^{(d)}(k).    (14)

• Dirichlet Scheduling: The same as HD scheduling, with the 1-hop interference model.

• Dirichlet Forwarding: Subsequent to the scheduling stage, each activated link transmits a number $f_{ij}^{(d)}(k)$ of $d$-packets.

3.2 Ollivier-Ricci Curvature

The relatively new Ollivier-Ricci curvature [79–81] is a graph version of the well-known Ricci curvature in differential geometry and, as already argued in [89], it is a fundamental tool in discrete heat calculus. This can be traced back to the Ricci curvature on manifolds providing an upper bound on the size of the heat kernel [83, Sec. 3], [82, Sec. 5.6]. The Ollivier-Ricci curvature has already been used to anticipate congestion in a wireless network under the purely thermodynamical Heat Diffusion protocol [89].
The theoretical fact that underpins this observation is that the Ricci curvature regulates the flow of heat on a Riemannian manifold, in somewhat the same way that the sectional curvature regulates geodesics. Recent applications of the Ollivier-Ricci curvature include formulating Hamiltonian Monte Carlo [107], unraveling Internet topology [110], modeling the robustness of cancer networks [108], and analyzing market fragility and systemic risk [109]. Here, however, our interest in the Ollivier-Ricci curvature rather stems from its definition in terms of a "transportation cost," which can be linked to queue occupancy, routing energy, and even the time to reach steady state.

Consider a weighted graph $((V, E), \sigma)$. On this graph, for each vertex $i$, we define a probability measure $m_i$ on the neighboring nodes $N(i)$ as follows:

m_i(j) = \frac{\sigma_{ij}}{\sum_{j' \in N(i)} \sigma_{ij'}} \text{ if } ij \in E, \qquad m_i(j) = 0 \text{ otherwise.}

In full agreement with the recent work on network traffic versus graph curvature [73, 75], the Ollivier-Ricci curvature is defined in terms of the transport properties of the graph:

Definition. The Ollivier-Ricci curvature of the graph $((V, E), \sigma)$ endowed with the set of probability measures $\{m_i : i \in V\}$ is defined, along the path $[i, j]$, as

\kappa([i, j]) = 1 - \frac{W_1(m_i, m_j)}{d(i, j)},    (15)

where $W_1(m_i, m_j)$ is the first Wasserstein distance between the probability measures $m_i$ and $m_j$ defined on $N(i)$ and $N(j)$, respectively,

W_1(m_i, m_j) = \inf_{\xi_{ij}} \sum_{(k, \ell) \in N(i) \times N(j)} d(k, \ell) \, \xi_{ij}(k, \ell),

where the infimum is taken over all "coupling" measures $\xi_{ij}$ defined on $N(i) \times N(j)$ and projecting on the first (second) factor as $m_i$ ($m_j$), that is,

\sum_{\ell \in N(j)} \xi_{ij}(k, \ell) = m_i(k), \qquad \sum_{k \in N(i)} \xi_{ij}(k, \ell) = m_j(\ell),

and $d(i, j)$ is the usual metric emanating from the edge weights.

Figure 3: Concept of optimal transport and curvature, where a mass distributed over a ball $B_x$ is being transported to the ball centered at $y$.
The comparison is between (i) the piecemeal process of transporting every single mass element by parallel transport of $xx'$ and (ii) the process of lumping the total mass of $B_x$ at its center and transporting it in one move to the center of the other ball for redistribution across that ball; it is quantified by the fraction $\int_{B_x} d(x', T(x'))\, dx' \,/\, (d(x, y)\,\mathrm{vol}(B_x))$. This measure is greater than 1 if and only if $\mathrm{Ricci}[x, y] < 0$ in a negatively curved Riemannian manifold, and less than 1 if $\mathrm{Ricci}[x, y] > 0$ in a positively curved space.

More intuitively, $\xi_{ij}(k, \ell)$ is called the transference plan. It tells us how much of the mass at $k \in N(i)$ is transferred to $\ell \in N(j)$, but it does not tell us the actual path that the mass has to follow. The first Wasserstein distance is one class of shortest transportation distance between two probability distributions; for details of this concept, see [84, 85]. Bauer [81] developed a general sharp inequality for undirected, weighted, connected, finite (multi)graphs $G = (V, E)$ of $N$ vertices. Note that it is computationally viable to solve for the exact curvature instead of bounds: the optimal coupling can be found via linear programming. The calculation of the 1st Wasserstein distance is commonly referred to as the Earth Mover's Distance in computer science applications, such as pattern recognition.

3.3 Protocol Performance under Different Network Topology

3.3.1 Simulation Setup

The Backpressure protocol, the Heat Diffusion protocol, the Dirichlet protocol, and the Least Path protocol were programmed and run in MATLAB 2015a. Link capacities are set to infinity unless otherwise specified. The centralized scheduling was implemented using Edmonds' blossom algorithm [41], which runs in $O(n^2 m)$ steps, where $n$ and $m$ represent the number of nodes and edges of the graph, respectively.
However, the actual wall-clock time of the MATLAB implementation of this algorithm is still extremely high in practice compared to the other processing stages, typically consuming more than 95% of the processing time in every timeslot. For this reason, we propose, in Section 5, an Ising formulation of the scheduling problem under the $K$-hop interference model for arbitrary $K$, which is proved to be NP-hard; the problems are then solved efficiently on D-Wave.

All simulations are multi-class: each node sends packets to every other node. The packet arrivals follow a Poisson distribution with $\lambda \in [2, 8]$. For the sake of the comparison analysis, Least Path Routing is assumed to be implementable in wireless sensor networks. Although this is not a fair comparison, since the Least Path Routing protocol (with its global routing table) has access to more information than the other protocols, it is still considered in order to show that the results are consistent with the congestion results in wired networks in the sense developed in [73].

The Ollivier-Ricci curvature is calculated locally between the pair of endpoint nodes $\partial e$ of each edge. Even though the Ollivier-Ricci curvature is a local concept, we nevertheless give it a global significance by taking its numerical average over all edges. This process is similar to the Gauss-Bonnet theorem, where the local curvature is integrated to give a global Euler characteristic. Unfortunately, such a theorem for the Ollivier-Ricci curvature (rather than the sectional curvature) on discrete graphs has yet to be developed.

3.3.2 Node-to-Node Delay Performance

The average network delay is formulated as

\bar{Q} = \limsup_{\tau \to \infty} \frac{1}{\tau} \sum_{k=0}^{\tau-1} \mathbb{E}\left\{ \sum_{i \in V} \sum_{d \in D} q_i^{(d)}(k) \right\}    (16)

and, since Poisson arrivals are used in the simulation, by Little's theorem the expected time-averaged total queue congestion is proportional to the long-term-averaged node-to-node network delay. Thus, it is sufficient to deal with the average queue occupancy over all nodes in the network. As shown in Fig.
4, as the curvature of the underlying network becomes more positive, the node-to-node delay generally decreases for all four protocols in the comparison. It is worth noting the following:

• General performance: Generally, the Heat Diffusion protocol performs worst in the sense of node-to-node delay; this is due to its unique packet forwarding mechanism, where only half of the queue differential is transmitted. This feature is designed to prevent packet looping [77].

• Dirichlet vs. Least Path: Although Least Path Routing has access to more information (the global routing table), in more negatively curved networks it still performs worse than Dirichlet routing in the sense of average delay. This is significant since Dirichlet routing is a dynamic protocol where each node has information only about its neighboring nodes.

• Dirichlet routing energy: Note that the Dirichlet protocol proposes to minimize routing energy while ensuring queue stability for all stabilizable traffic [78]; however, in simulation it still possesses relatively high routing energy. This is not a contradiction of the original theory. As shown in [89], protocols with higher steady-state delay generally fail to stabilize as fast when the network capacity region slowly shrinks, and the Dirichlet protocol has the lowest queue occupancy among the three dynamic protocols in the comparison.

Figure 4: Steady-state node-to-node delay of various routing protocols in different network topologies. LPR, BP, and HD stand for Least Path Routing, Backpressure, and Heat Diffusion, respectively.

3.3.3 Routing Energy Performance

The total routing energy is formulated as

R(k) = \sum_{ij \in E} \sum_{d \in D} \sigma_{ij}^{d} \, (f_{ij}^{(d)}(k))^2.    (17)

The routing energies of the four protocols under comparison are plotted in Fig. 5. As shown, Dirichlet routing generally has higher total routing energy than the other protocols, but performs relatively better than Backpressure in more negatively curved networks.
It is worth noting the following:

• General performance: As the network becomes more positively curved, the total routing energy decreases. Note that the routing energy is a summation measure rather than an average measure.

• Sensitivity to topology: As can be seen, the routing energy of the Backpressure protocol is much more sensitive to changes of topology than that of the other protocols. This will be analyzed in further detail later.

Figure 5: Steady-state routing energy of various routing protocols in different network topologies.

3.3.4 Varying Topology and Sensitivity

Due to the nature of wireless sensor networks, the performance of protocols under changing topology is an important issue, and a low sensitivity of delay and routing cost can be of value in some applications. Topology control in wireless networks has also arisen to cope with constantly changing network topology, such as in [86]. We believe that topology control can be achieved by curvature control in wireless networks; curvature control has already been developed in wired networks [73, 75].

Here we would like to quantitatively analyze how varying topology affects network performance. The simulation of varying topology is done by randomly deleting/adding nodes and links, and by putting a Gaussian random weight on each link in every timeslot. The process is done in a controlled manner so that the overall curvature decreases during the first half of the process and then increases during the second half. As shown in Fig. 6, different routing protocols perform significantly differently under varying topology, but interestingly, both the delay and the routing energy of the Dirichlet protocol remain very steady throughout the variation of the curvature.

Sensitivity Analysis: As could be seen from Figs.
4 and 6, the node-to-node delay and the routing energy increase significantly as the topology of the network becomes more negatively curved under Least Path Routing, Backpressure, and Heat Diffusion, but remain relatively stable for the Dirichlet protocol.

Figure 6: Performance and sensitivity of different routing protocols under varying topology.

We define measures of protocol sensitivity to topology variation as

S_q = \frac{d q_{\mathrm{average}}}{d \kappa_{\mathrm{average}}}, \qquad S_R = \frac{d R_{\mathrm{average}}}{d \kappa_{\mathrm{average}}},    (18)

that is, the sensitivity of a protocol's node-to-node delay and routing energy, respectively. Here $q_{\mathrm{average}}$ denotes the total queue occupancy averaged over 20 timeslots (rather than over infinitely many timeslots as in Eq. (16)) to smooth over the transients of the network dynamics; $R_{\mathrm{average}}$ denotes the total routing energy (17) averaged over 20 timeslots; and $\kappa_{\mathrm{average}}$ is the Ollivier-Ricci curvature averaged over 20 timeslots. We propose to use these as metrics for network routing protocols. Also, $1/S$ could be used to denote the protocol's robustness to changing topology.

Table 1: Numerical values of the sensitivity of the four protocols. $S_q$ is on the scale of $10^3$; $S_R$ is on the scale of $10^7$.

The numerical values of the sensitivities of the different protocols are summarized in Table 1. The Dirichlet protocol has the lowest sensitivity, and thus the highest robustness to varying topology, in the sense of node-to-node delay, while Heat Diffusion has the highest robustness in the sense of routing energy. Subject to particular application needs, different protocols can be chosen using the sensitivity metric.

3.3.5 Summary of Topological Impacts

It has been shown via several numerical examples that the performance of different routing protocols varies significantly across networks with different topological characteristics. The Dirichlet protocol and Backpressure generally consume more routing energy than Least Path Routing and Heat Diffusion, but also keep delay reasonably low.
The Heat Diffusion protocol, on the other hand, consumes as little routing energy as possible, while incurring much higher delay. Its routing energy is both the lowest and the least sensitive to changing topology, while its delay is both the highest and the most sensitive. This is believed to be a result of the "laziness" feature designed to prevent packet looping. Note that, compared to Dirichlet routing and Heat Diffusion, Backpressure does not have an apparent strength in terms of delay or routing energy; rather, it performs mediocrely on both metrics. This demonstrates again the delay-energy tradeoff discussed in [88]. Thus, different protocols can be chosen according to different application needs, or a control approach can be developed to switch routing protocols in a graph-curvature feedback scheme.

3.4 Network Capacity Region versus Ollivier-Ricci Curvature

In this simulation, every graph is subjected to different arrival rates. Previously, all nodes had an arrival rate of 1 PpT (packet per timeslot). Here, the arrival rate is progressively increased to 2 PpT, 3 PpT, 5 PpT, and 10 PpT, with the change applied to every node. The link capacity is uniformly set to 1500 PpT for every link.

In order to compare the capacity regions under different arrival rates and different Ollivier-Ricci curvatures, Figure 7 first shows the curve of average queue occupancy versus timeslot, with the link capacity still infinite and the arrival rate still 1 PpT. From Figure 7, we can see that for graphs with more negative curvature, it takes longer for the networks to converge to their steady states. The steady-state queue occupancy is also higher in networks with more negative curvature, as likewise shown in Figure 7.

Figure 7: Evolution of average queue occupancy with time under uniform arrival rate.

Figure 8: Evolution of average queue occupancy with time under different arrival rates.
However, no matter how long it takes to converge, they still converge within a finite number of timeslots because the link capacity is infinite. This is not the case for networks with finite link capacity. Figure 8 shows the performance with 1500 PpT link capacity under arrival rates of 2 PpT, 3 PpT, 5 PpT, and 10 PpT, respectively. We can make several observations from Figure 8:
1) As the arrival rate goes higher, the graphs with more negative curvature tend to lose convergence more rapidly than those with more positive curvature.
2) Under a given arrival rate, if a graph with more positive curvature cannot converge to steady state, then graphs with curvature less than that graph's do not converge either. This demonstrates, although it is not a proof, that as the curvature of a graph decreases, the capacity region also becomes smaller.
3) No matter what the arrival rate is, the time needed to reach steady state and the steady-state average queue occupancy follow the same pattern as before; that is, they increase with decreasing curvature.
4) For any particular graph, if the arrival rate is increased, so is the time needed to reach steady state.
Careful inspection of the various graphs being generated shows that the curvature increases with the number of edges, and trivially the capacity region increases. Simulations at a constant number of edges still show that the capacity region increases with the curvature. However, the important feature is that the number of edges is only one of the factors contributing to the curvature; among the other factors, one can mention the combinatorics of the edges and the link cost factors. In other words, increasing the number of edges is only one way to increase the curvature.
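The role of finite link capacity in bounding the capacity region can be illustrated with a deliberately simplified single-queue model (a toy, not the dissertation's network simulator): the backlog stays bounded when the arrival rate is at most the service capacity and grows without bound otherwise.

```python
# Toy single-queue illustration of stability vs. capacity: arrivals a per
# timeslot, departures up to capacity c per timeslot.

def queue_trajectory(a, c, timeslots=100):
    q, hist = 0, []
    for _ in range(timeslots):
        q = max(q + a - c, 0)   # serve up to c packets each timeslot
        hist.append(q)
    return hist

stable = queue_trajectory(a=3, c=5)      # a <= c: backlog stays bounded
unstable = queue_trajectory(a=10, c=5)   # a > c: backlog grows linearly
print(stable[-1], unstable[-1])
```

In the network setting, interference constraints and topology determine the effective service rate of each link, which is how the curvature enters the picture.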
3.5 Curvature-driven adaptive control
It is sometimes desirable for a network to be capable of multi-protocol switching, such as the RFID reader system in [87], especially under a potentially significant curvature change that could degrade network performance below specifications if a single protocol were maintained. We hereby propose an adaptive control scheme that takes switching decisions on the basis of network curvature identification data to keep the network performance as desired. Fig. 9 shows the overall architecture of the adaptive protocol. Instead of a classical adaptation law depending on a numerical estimate of the curvature, we propose a decision-tree adaptation law that in addition utilizes preferences on specifications, as shown in Fig. 10. The switching decision process can be carried out either every timeslot or, preferably, only once over an arbitrary number of timeslots, since topology variation is most likely much slower than the timeslot.

Also, it is important to note that the implementation of such a control scheme requires a global gateway that sends the global curvature information to every node. Since the Ollivier-Ricci curvature is by itself a local measure, it is worth investigating in the future how control based on local curvature would affect the global performance, by assuming that each node could operate under its own protocol in some synchronous manner.

Figure 9: Adaptive system switching to the best protocol given the current topology and the desired performance metric.
Figure 10: Decision-tree-based switching logic of the adaptive system of Fig. 9. Each arrow color represents a different categorization of curvature.

4 Embedding Preprocessor
In this chapter, we give a detailed review of our work on the embedding preprocessor.
4.1 Minor Embeddability and Problem Formulation
4.1.1 Treewidth and Minor Embeddability
To determine whether a problem graph has a chance to be minor-embeddable into a hardware graph, one needs to check whether the treewidth of the problem graph, defined in Section 2.4, is not larger than that of the hardware graph, following a well-known graph minor theorem discussed in more detail in [94, 97, 99, 100]:

Theorem 4.1 If $G$ is a minor of $H$, then $\mathrm{tw}(G) \le \mathrm{tw}(H)$.

The problem is that the above condition is necessary but not sufficient for minor embeddability. For regular graphs, the estimation of the treewidth is both standard and efficient. For $F(m,c)$, the graph of an $m \times m$ array of cells with each cell a bipartite graph $K_{c,c}$ (the D-Wave architecture), the treewidth has very tight lower and upper bounds, quantified as [94]

$$ cm \le \mathrm{tw}(F(m,c)) \le cm + c - 1. \qquad (19) $$

For the 128-qubit D-Wave "Chimera" architecture, consisting of a $4 \times 4$ array of $K_{4,4}$ bipartite graph cells, the treewidth approximation is in the range $[16, 19]$, and it is exactly 17 if one needs the accurate treewidth.

However, as noted in [69], the biggest challenge in solving practical problems by quantum adiabatic computation is the minor embedding of general graphs. The major problem here is not to find the treewidth of a fixed architecture graph $H$, but the treewidth of an arbitrary problem graph $G$. In order to rule out embeddability, that is, the case where $\mathrm{tw}(G) > \mathrm{tw}(H)$, a fast approximation of the treewidth is still needed. In general, determining the treewidth of an arbitrary graph is NP-hard [97, 98]. Although a linear-time algorithm to determine whether a graph has treewidth at most $k$ is proposed in [97], the constants in the algorithm are extremely large and grow exponentially with $k$, which makes it impractical for most graphs [94]. Thus, heuristics are needed for practical considerations.
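The bound (19) is immediate to evaluate; for the Chimera case quoted above:

```python
# Treewidth bounds (19) for the D-Wave Chimera graph F(m, c), an m x m
# array of K_{c,c} cells: c*m <= tw(F(m, c)) <= c*m + c - 1.

def chimera_treewidth_bounds(m, c):
    return c * m, c * m + c - 1

lo, hi = chimera_treewidth_bounds(m=4, c=4)   # 128-qubit Chimera
print(lo, hi)   # 16 19, matching the range [16, 19] quoted in the text
```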
4.1.2 Problem Formulation
There exist several recent quantum annealing applications running on D-Wave, including database optimization [47], graph isomorphism [51], power-system fault detection [50], Bayesian network structure learning [49], and operational planning [48]. In these applications, however, the problem size is relatively small and solvable by the minor-embedding heuristics proposed in [111] and implemented on D-Wave. Embeddability is not generally pre-determined, but decided on a trial-and-error basis. Also, in most cases, the embedding only needs to be determined once and is used throughout the entire annealing process. However, this is not practical if the topology of the QUBO (i.e., the topology of the original graph problem) is not fixed. The latter is precisely the case when solving the network scheduling problem in wireless networks [121], which we take as a prototype problem for more general cases of problems with time-varying QUBO topology.

Figure 11: Proposed pre-processor for the D-Wave adiabatic quantum computing platform. The arbitrary graph could be either "embeddable" or "unembeddable," but this is not known a priori. The "preprocessor" acts only on the "unembeddable" graphs, as it only checks a necessary condition for embeddability, in polynomial time, in order to save embedding trial time. Note that even if the necessary-condition test passes, the graph could still be unembeddable, so that embedding trials are still necessary.
The D-Wave minor-embedding heuristic reduces the time complexity of the current best exact algorithm, $O(2^{(2k+1)\log k}\,|n_H|^{2k}\,2^{2|n_H|^2}\,|e_G|)$ [101], where $k$ is the branchwidth of $G$ and $n_H$ and $n_G$ are the orders of the graphs $H$ and $G$, respectively, to $O(n_H n_G e_H (e_G + n_G \log n_G))$, at the cost of the following:
• Success guaranteed only with a certain probability, and re-tries on failure;
• No attempt to prove minor exclusion.
We propose a polynomial-time preprocessor, as shown in Figure 11, which identifies unembeddable problems by treewidth estimation, thus significantly reducing the embedding trial time for arbitrary problems. The embedding trial time cannot be ignored in time-sensitive problems. Figure 12 shows simulations on a set of random graphs, performed in MATLAB 2015b on an Intel 4960x with 32 GB DDR3 RAM running Ubuntu 14.04, without parallelization. The embedding API is provided by D-Wave. The testing environment in the following sections remains the same unless otherwise specified. Note that Figure 12 only intends to show a typical process where embeddability is unknown, i.e., the order of the test input exceeds the treewidth of the D-Wave architecture. The computing time depends heavily on the computing hardware, the test set, and the orders of both the problem graph and the hardware graph; however, the time/cost breakdown would remain roughly the same for random graphs of arbitrary order: namely, the time spent on embedding trials using heuristics is much more significant than the actual annealing time. The latter is especially true if the problem graph is not minor-embeddable in the Chimera architecture graph, as it may take many unsuccessful trial runs before the graph is "declared" not embeddable. Clearly, a quick way to rule out embeddability would alleviate this situation, and this is the main motivation for this work.

Figure 12: Experimental results on wall-clock running time for the embedding heuristic.
A set of 40 problems, all with 40 nodes, generated in accordance with the Erdős–Rényi model, is tested. On the left plot, in order to test the success probability, the embedding trials do not stop if a successful embedding is found before the end of 40 embedding trials. On the right graph, to simulate the real scenario, a default 10-trial process is run and stops once an embedding is found, in which case the graph is positively determined to be embeddable. On the other hand, if after a great many runs no embedding is found, the graph is declared unembeddable (although it might be embeddable if the trials missed the embedding). The orange part of the left pie is the time it takes the heuristics to declare the problem not embeddable, whereas the blue part of the left pie is the time it takes to succeed at constructing an embedding. The numbers in the pie chart are in seconds. Parameter loading and sampling times are provided in [70]. Determining embeddability takes a much shorter time, since the process stops once an embedding is found, and one is usually found within the first few trials.

There exist several methods for both bound estimation and exact solution of treewidth, such as polynomial-time bound estimations based on vertex degree (lower bound) [105], triangulation (lower bound) [102], vertex deletion (lower bound) [104], edge contraction (lower bound) [103], greedy degree (upper bound) [122], and Breadth First Search (BFS, upper bound) [123], and exponential-time exact algorithms including QuickBB [117], using the branch-and-bound method, and TreewidthDP [106], using dynamic programming. Most of these algorithms have issues with either 1) occasionally large error or 2) impractical time complexity. Note that some heuristics give very tight upper or lower bounds, including greedy, which slightly outperforms our Ollivier-Ricci algorithm in terms of accuracy for varying-order graphs (see Fig. 15).
However, to the best of our knowledge, there is no algorithm that gives direct estimates, rather than bounds, on the treewidth. In Section 4.3, we compare our methods with the best heuristics and exact algorithms to demonstrate that our result is applicable in most scenarios.

4.2 Ollivier-Ricci Curvature and Treewidth
At this stage, there is no theoretical proof that the Ollivier-Ricci curvature of a graph is related to its "tree-likeness." However, it is the main purpose of this section to show that a strong correlation exists and to develop a proof that a lower bound on the treewidth is an increasing function of the curvature. Clear agreement between curvature and treewidth at the limits of their ranges of values is shown in Table 2. To better visualize the bigger picture of network topology and embeddability, the relationships among the most researched topics in network topology are provided in Figure 13.

Table 2: Ollivier-Ricci curvature versus treewidth agreement at the limits of their respective ranges of values (n denotes the order of the graph)

                               treewidth         Ollivier-Ricci curvature
  range of possible values     [1, n-1]          (-2, 1)
  lower bound reached for      tree              tree
  upper bound reached for      complete graph    complete graph

There exist weak links between Gromov hyperbolicity and embeddability [112], between Gromov hyperbolicity and treewidth [113], and between treewidth and treelength [114]; and strong links between treelength and Gromov hyperbolicity [116], and between treewidth and graph minors, and thus embeddability, as discussed before. We will show a strong correlation, if not a strong theoretical link, between Ollivier-Ricci curvature and treewidth; thus, by calculating the curvature using a linear-programming interior-point method, one can estimate the treewidth in polynomial time.
Figure 13: The conceptual connection among four related invariants in graph theory (Ollivier-Ricci curvature, treewidth, treelength, and Gromov hyperbolicity). We provide a strong numerical connection between Ollivier-Ricci curvature and treewidth in random arbitrary graphs; the latter connects to graph minors, and hence curvature can determine unembeddability.

It is worth noting that, for engineering purposes, among the four topological invariants, only the Ollivier-Ricci curvature has polynomial-time complexity. As an aside, note that the Ollivier-Ricci curvature has been used to anticipate congestion in a wireless network under the specific Heat Diffusion communication protocol [89]. The theoretical fact that underpins this observation is that the Ricci curvature regulates the flow of heat on a Riemannian manifold, in somewhat the same way that the sectional curvature regulates geodesics. The Ollivier-Ricci curvature has also seen recent applications including formulating Hamiltonian Monte Carlo [107], Internet topology [110], modeling the robustness of cancer networks [108], and analyzing market fragility and systemic risk [109].

4.2.1 Proof of lower bound
We show that the Ollivier-Ricci curvature serves as a lower bound for the treewidth of an arbitrary graph $G$. This result is obtained by combining two known results: one that relates the lower bound on the spectrum of the graph Laplacian to the treewidth [115], and one that relates the spectrum lower bound to the Ollivier-Ricci curvature [81].

Proof. It is known that

$$ \mathrm{tw}(G) > \left\lfloor \frac{3n}{4}\,\frac{\lambda_1}{\Delta + 2\lambda_1} \right\rfloor - 1 =: f(\lambda_1), \qquad (20) $$

where $n$ and $\Delta$ denote the order and the maximum degree, respectively, of the graph and $\lambda_1$ denotes the second smallest eigenvalue of the graph Laplacian. The other result to be utilized is

$$ 1 - (1-k[t])^{1/t} \le \lambda_1 \le \cdots \le \lambda_{N-1} \le 1 + (1-k[t])^{1/t}, \qquad (21) $$

where $k[t]$ denotes the lower bound on the Ollivier-Ricci curvature of the $t$-fold neighborhood graph as defined in [81].
By convention, $t = 1$ corresponds to the original graph, so that $k[1] = k$ is the lower bound on the Ollivier-Ricci curvature of the original graph. It is easily seen that $f(\cdot)$ as defined by (20) is a monotonically increasing hyperbola in $\lambda_1$. Hence, for $k \le \lambda_1$, we have $f(\lambda_1) \ge f(k)$ and therefore

$$ \mathrm{tw}(G) \ge \left\lfloor \frac{3n}{4}\,\frac{k}{\Delta + 2k} \right\rfloor - 1. \qquad (22) $$

For the case considered in Fig. 14, left, this lower bound is not very useful, as it is negative for some $k$'s. However, its relevance lies in an important trend it reveals: clearly, as the lower bound $k$ on the Ollivier-Ricci curvature increases, the lower bound $f(k)$ on the treewidth increases.

Bauer et al. [81] showed that as $t$ increases, the lower bound on $\lambda_1$ in (21) always gets nontrivially tighter. Therefore, a tighter bound would be

$$ \mathrm{tw}(G) \ge \left\lfloor \frac{3n}{4}\,\frac{1-(1-k[t])^{1/t}}{\Delta + 2\left(1-(1-k[t])^{1/t}\right)} \right\rfloor - 1. \qquad (23) $$

Note that generating neighborhood graphs significantly increases the computational cost. One could, however, construct $G[t]$ by simulating the random walk on the original graph $G$ and adding the edge $xy$ if the walk starting at $x \in V(G)$ reaches $y \in V(G)$ after $t$ steps (see [81, Secs. 2.1 and 2.2]).

Even though (23) is an improvement over (22), it could still go negative for some networks. The reason is that, even though the bound (21) can be tight, the treewidth bound (20) can be very conservative and can even be negative. This happens especially when the maximum degree of the graph is high, as is the case for scale-free graphs with heavy-tailed degree distributions. When the right-hand side of Equation (23) becomes negative, the bound unfortunately becomes useless in the preprocessor of Fig. 11 for treewidth computation. If, on the other hand, invoking Bauer's claim of tightness, we assume that

$$ \mathrm{tw}(G) \ge [\mathrm{RHS}\,(23)]_{\mathrm{problem}} > [\mathrm{RHS}\,(23)]_{\mathrm{Chimera}}, $$

then we may be able to conclude that $\mathrm{tw}(G) > \mathrm{tw}(\mathrm{Chimera})$, in which case we can positively conclude that $G$ is not embeddable in the architecture, while the heuristics would attempt to construct an embedding when there is no hope for one.
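The $t = 1$ bound (22) is cheap to evaluate in practice; a minimal sketch, assuming the reconstruction $\mathrm{tw}(G) \ge \lfloor (3n/4)\,k/(\Delta+2k) \rfloor - 1$ and using illustrative inputs:

```python
# Curvature-based treewidth lower bound (22), as reconstructed here:
# tw(G) >= floor((3n/4) * k / (Delta + 2k)) - 1, with n the order of G,
# Delta its maximum degree, and k the Ollivier-Ricci curvature lower bound.

import math

def tw_lower_bound(n, max_degree, k):
    if k <= 0:
        return -1   # the bound is trivially satisfied (uninformative)
    return math.floor((3 * n / 4) * k / (max_degree + 2 * k)) - 1

print(tw_lower_bound(n=100, max_degree=6, k=0.5))
print(tw_lower_bound(n=100, max_degree=6, k=-0.3))
```

The bound is informative only for positively curved graphs with moderate maximum degree, which matches the discussion of its failure on heavy-tailed scale-free graphs.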
This improved procedure is not considered here, as our results indicate that $t = 1$ already yields good results. An assessment of the extra effort needed to compute $k[t]$ versus the benefit of a tighter bound is left for further research.

4.3 Simulation Results
We set up our simulation experiments on the hardware discussed in Section 2. In the following, we examine three evaluation criteria of our curvature-based estimate of the treewidth: namely, 1) the correlation between treewidth and Ollivier-Ricci curvature, 2) the accuracy of the estimate, and 3) the wall-clock time cost of the estimate compared with other estimation methods and the exact solution.

4.3.1 Correlation
In Figure 14, we show a strong correlation between the Ollivier-Ricci curvature and the treewidth of Erdős–Rényi random graphs. The curvature of the graphs is controlled by the probability-of-connection parameter $p$ in the Erdős–Rényi generator. In general, as $p$ increases, so does the curvature. We performed the experiment on two sets of graphs: those with the same order and those with varying order. Since, for varying order, the treewidth can go as high as $|V(G)|-1$, a normalization is needed. We propose

$$ \mathrm{tw}_{\mathrm{norm}}(G) = \frac{\mathrm{tw}(G)}{|V(G)|}. \qquad (24) $$

This is the normalization used in the simulation. Fig. 14 reveals a linear estimate of the treewidth of the form $\mathrm{tw}(G) \approx a\,\kappa(G) + c$, where $\kappa(G)$ denotes the curvature of the graph. Even though Fig. 14 is restricted to Erdős–Rényi graphs, it is shown in our previous work [124] that data points of trees and scale-free graphs lie on, or close to, the same interpolation line. (More accurate results could be obtained by polynomial interpolation.)

4.3.2 Accuracy
We perform a linear interpolation-based treewidth estimation on randomly generated graphs of nontrivial treewidth computation. Since the exact solution has exponential complexity, the size of the test graphs is taken to be small enough to permit exact calculation.
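The linear estimate $\mathrm{tw}(G) \approx a\,\kappa(G) + c$ amounts to a least-squares fit on graphs of known treewidth, then a prediction from the curvature alone. The (curvature, treewidth) training pairs below are invented for illustration; the dissertation fits its own data from Fig. 14.

```python
# Sketch of the interpolation-based treewidth estimate: fit (a, c) in
# tw ~ a*kappa + c by least squares, then predict for a new graph.
# The training pairs are synthetic, for illustration only.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx          # slope, intercept

kappa = [-1.0, -0.5, 0.0, 0.5]     # Ollivier-Ricci curvatures (training)
tw = [3, 8, 13, 18]                # exact treewidths (e.g., via QuickBB)

a, c = fit_line(kappa, tw)
estimate = a * (-0.25) + c         # estimate for a new graph's curvature
print(a, c, estimate)
```

Since the curvature itself is computable in polynomial time (via linear programming), the whole estimation pipeline stays polynomial.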
We estimate the treewidth based on several classical heuristics discussed in Section 4.1, in addition to the nonclassical curvature estimate. We compare the results with the exact solution, as shown in Figure 15, where the exact solution is normalized to 1. Under this normalization, any other returned treewidth estimate is called an accuracy index, with the idea that the closer the accuracy index is to 1, the more accurate the heuristic or Ollivier-Ricci curvature estimate is.

Figure 14: Scatter plots of Ollivier-Ricci curvature versus treewidth (coefficient of determination $R^2$: 0.965 (left) and 0.889 (right)). The left plot is performed on a set of 40 random graphs of the same order using the Erdős–Rényi random graph model with increasing probability of connection (hence increasing curvature). The right plot is performed on a set of 250 random graphs, with the order of the graphs varying from 10 to 34; moreover, each order case contains random graphs with different probabilities of connection.

Out of the several heuristics for upper and lower bound computation discussed in the previous section, we choose BFS and Greedy for upper bound estimation and Min Degree for lower bound estimation. It turns out that the curvature estimate has the statistically best accuracy on sample graph sets of fixed order but with varying connectivity, with the greedy algorithm coming next (left panel of Fig. 15). Note, however, that on the varying-order test set (right panel of Fig. 15), the accuracy of the curvature estimate is not as good as that of the greedy algorithm, but still better than that of the remaining two algorithms. This may be improved by a more carefully thought-out definition of "normalized treewidth" or by higher-order polynomial interpolation.

Student's t-test is utilized to test the significance of the observed Ollivier-Ricci curvature accuracy index being closer to the ideal value of 1 than the other estimates.
The t-test would assume that the distribution of the accuracy indexes returned by the various trials of the various methods (Greedy, Ollivier-Ricci, BFS, Min Degree) is Gaussian. It should be noted that the Greedy and BFS algorithms give only upper bounds on the accuracy index, while the Min Degree algorithm gives a lower bound. Consequently, the distribution of the accuracy indexes for the Greedy, Min Degree, and BFS estimates is skewed and cannot be Gaussian. On the other hand, the sample distribution of the accuracy indexes returned by the Ollivier-Ricci curvature is symmetric and appears to be Gaussian with mean 1. So the t-test cannot be applied "as is" to prove that the various means differ. In fact, on theoretical grounds, the mean of the Min Degree index is less than 1, while the means of the Greedy and BFS indexes are larger than 1. So the only statistical test to be conducted is a t-test on the mean of the Ollivier-Ricci curvature index being 1. We used the Matlab t-test implementation with the null hypothesis that the mean of the Ollivier-Ricci index equals 1; the t-test does not reject the null hypothesis, with a p-value of 0.3979 for the fixed-order case and 0.1415 for the varying-order case.

Figure 15: Accuracy analysis of the Ollivier-Ricci curvature estimate and other well-known classical heuristics. Accuracy is estimated relative to the exact solution, so that the exact solution has an accuracy index of 1. Bars show the 25th and 75th percentiles of the estimation. The left graph shows the accuracy index of 60 graphs of the same order but with varying topology. The right plot shows the accuracy index of graphs with similar topology but with order varying from 10 to 40. Accuracy larger than one means the estimated treewidth is larger than the actual treewidth. The LibTW library [95] is used throughout the computation.
4.3.3 Time cost
The classical treewidth heuristics used in the accuracy comparison all have polynomial-time complexity and ran smoothly in the experiments; thus, their speed is not assessed in this section. We mainly compare the curvature estimate with the two best-known exact treewidth algorithms: QuickBB and TreewidthDP. The former, discussed in detail in [117], treats the treewidth problem as a linear ordering problem and finds a perfect elimination scheme of such an ordering. The latter, discussed in detail in [106], performs a search over permutations of the vertices. QuickBB generally runs much more slowly than TreewidthDP; however, for memory-constrained systems, QuickBB utilizes the address space in a better way. The comparison is shown in Figure 16. As one notes from the left figure, curvature estimation takes more time as the curvature and treewidth grow larger along the abscissa. Note that the time complexity of QuickBB scales as $O(n^{\,n-\mathrm{tw}(G)})$ [117, Sec. 8], where $n$ represents the order of the graph. Therefore, QuickBB favors graphs of higher treewidth, while curvature estimation favors graphs of lower treewidth. One may utilize this fact to develop "smart algorithms" that efficiently predetermine graph connectivity and decide which algorithm to choose. From the right plot of Figure 16, it follows that TreewidthDP scales exponentially with the order of the graph in general, while the curvature estimation appears subexponential, as its curve is not quite a line. Note that it is also not possible in practice to scale the problem to a larger size, as the time cost for TreewidthDP is already approaching the practical limit.

Figure 16: Wall-clock time cost of classical exact treewidth algorithms and the nonclassical curvature estimate. On the left panel, the abscissa is consistent with increasing treewidth on a set of 60 graphs with the same order.
On the right panel, the abscissa is the order of the graph in a set of graphs with similar topological characteristics in the sense of normalized treewidth. Both simulations are performed in the same environment as before; no parallelization is added. Note that the time axis is on a log scale.

5 Mapping Preprocessor
In this chapter, we give a detailed review of our work on the mapping preprocessor and of related experimental results performed on D-Wave.

5.1 Ising Formulation
5.1.1 Overview
We intend to design a quantum annealing scheduler to solve the WMIS problem and to benchmark the results obtained by realistic network simulation against simulated annealing. The complete benchmarking is summarized in the following steps.

Conflict Graph: The original scheduling problem on the original graph is reformulated as a QUBO problem on a conflict graph: the nodes of the conflict graph are the edges of the original graph with weights $w_l$ ($= c_l$ in the QUBO), while the edges of the conflict graph are pairs of edges of the original graph in scheduling conflict, as illustrated in Fig. 17, left. As such, subject to a proper choice of the $J_{kl}$'s, the QUBO problem on the conflict graph, that is, the WMIS problem, provides a solution to the K-hop interference problem (2) on the original graph.

Gap Expansion: In the WMIS problem, a scaling of the $J_{kl}$'s is introduced, as shown in (28), to open up the gap of the Ising model, resulting in fewer scheduling violations, especially in the QA approach.

Embedding to Hardware: The QUBO problem has to be mapped to the Ising Hamiltonian via $x_i \to (\sigma_i^z + I_{2\times 2})/2$, with the constant energy shift ignored. We perform minor embedding into the D-Wave Chimera architecture, as illustrated in Fig. 17, right, or decide that the problem is not embeddable. Figure 17 gives a simple numerical example of how a network is mapped to the D-Wave architecture.
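The substitution $x_i \to (\sigma_i^z + I_{2\times 2})/2$ amounts to simple coefficient bookkeeping, sketched below for a two-link toy example with vertex weights 5 and 7 and coupling $5.1 = \min(5,7) + 0.1$, mirroring the choice described for Fig. 17. The sign convention $c_k = -w_k$, so that minimizing the QUBO rewards heavy vertices, is our assumption for this illustration.

```python
# Hedged sketch of the QUBO -> Ising conversion: substituting
# x_i = (s_i + 1)/2, spins s_i in {-1, +1}, into the QUBO and dropping the
# constant energy shift yields fields h and couplings J'.
# Assumed sign convention for this toy example: c_k = -w_k.

def qubo_to_ising(c, J):
    """c: {node: linear QUBO weight}; J: {(k, l): quadratic weight}."""
    h = {k: ck / 2 for k, ck in c.items()}
    Jp = {}
    for (k, l), jkl in J.items():
        Jp[(k, l)] = jkl / 4   # from x_k x_l = (s_k s_l + s_k + s_l + 1)/4
        h[k] += jkl / 4
        h[l] += jkl / 4
    return h, Jp

h, Jp = qubo_to_ising({0: -5.0, 1: -7.0}, {(0, 1): 5.1})
print(h, Jp)
```

One can check that the spin assignment minimizing the resulting Ising energy recovers the QUBO minimizer up to the dropped constant.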
Measurement and Post-processing: We perform the annealing run a certain number of times on D-Wave, find the lowest-energy configuration, and use that configuration as the scheduling solution. Some commonly used algorithmic techniques in adiabatic quantum computation, such as gauge transformation and majority vote on possibly broken chains, are not performed. See the work of King et al. [70] for more discussion.

5.1.2 Wireless Protocol
Let the original wireless network graph be $G = (V, E)$, with vertices corresponding to actual physical nodes in the wireless network (such as routers or sensors) and edges being their possible physical connections. Each node receives packets in a Poisson random fashion. Packets are classified according to their destination $d \in D$, where $D \subseteq V$ is a subset of sink (destination) nodes. The $d$-packet backlog in node $i$ at time $k$ is denoted $q_i^{(d)}(k)$, and $q_{ij}^{(d)}(k) := q_i^{(d)}(k) - q_j^{(d)}(k)$ denotes the queue backlog differential along link $ij \in E$.

Both the original Backpressure and the recent Heat-Diffusion protocols operate on the same weighting-scheduling-forwarding strategy. The weighting assigns a quality factor to every link, originally meant to be the queue differential $q_{ij}^{(d)}(k)$, since links with the highest queue differentials at timeslot $k$ should be those that move packets at timeslot $k$. The scheduling finds a combination of links that maximizes an aggregated weight subject to the interference constraints. The forwarding phase sends packets along those links that are activated by the scheduling process. The experimental simulation of network scheduling based on quantum annealing can be roughly divided into the following steps:

• Weighting (Classical): For each link $ij \in E$, give a weight $w_{ij}^{(d)}(k)$ to the link $ij$ relative to $d$-class packets. The edge weight can be defined arbitrarily as long as it incorporates the intent of the protocol. Here, the weighting follows the Dirichlet principle [78] as defined in Section 3.1.
The weighting yields a weighted graph at the end of this step; this graph is undirected in our particular setup.

• Preprocessing for Scheduling (Classical): Convert the edge-weighted graph to the conflict graph and generate the QUBO problem according to Equation (25). This conversion has polynomial time-complexity. However, the minor-embedding heuristics can take more than 10 seconds.

• Scheduling by D-Wave (Quantum): The problem, formulated originally in Equation (2), is submitted to the D-Wave II and D-Wave 2X at USC ISI. Each annealing run takes 20 µs. For each scheduling instance, 1000 annealing runs are performed, and the lowest-energy configuration is selected. However, we do not check independence (lack of interference) or optimality at this point; we give a detailed discussion of this issue in a later section.

• Forwarding (Classical): On each activated link $ij \in E'$, an amount $f_{ij}^{(d)}(k)$ of $d$-packets is forwarded.

All four steps are done within one timeslot of the simulation run, as shown in Figure 1. The size of the timeslot is typically on the scale of milliseconds in practice. It would take thousands of timeslots to reach steady state under a uniform packet arrival rate in a small network comprising no more than 100 nodes. Thus, one might naturally expect that, if an error occurs during one timeslot of quantum annealing, due to either decoherence or calibration errors, that error would accumulate over time. However, this is not the case, as will be shown in Section 5.2.

The definition of the edge weight makes the whole difference among many wireless network protocols. We set up our simulation based on the multi-class Dirichlet wireless protocol [78], following a network setup similar to that in [118]. The edge weight for the $d$-class (destination $d \in D$) packets is defined in Section 3.1. It is proved that this protocol, with an exact scheduling solution, is throughput optimal [78].
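The queue-differential weighting idea common to Backpressure and Heat Diffusion can be sketched as follows (the Dirichlet weights of [78] differ in detail; this is only the basic backlog-differential form):

```python
# Minimal backpressure-style weighting step: each link ij gets, per
# destination class d, the weight q_i^(d) - q_j^(d). The Dirichlet
# weighting used in the dissertation refines this basic idea.

def link_weights(queues, edges, destinations):
    """queues[d][node] = backlog of d-packets; returns w[(i, j, d)]."""
    w = {}
    for (i, j) in edges:
        for d in destinations:
            w[(i, j, d)] = queues[d][i] - queues[d][j]
    return w

queues = {"sink": {0: 7, 1: 3, 2: 0}}
w = link_weights(queues, edges=[(0, 1), (1, 2)], destinations=["sink"])
print(w)   # links with a larger backlog differential should move packets
```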
It is worth noting that this weighting depends entirely on the network running status and the traffic arrival rate, which is one of the biggest challenges to cope with in real-world applications of quantum annealing.

5.1.3 Mapping to QUBO
Given $G = (V, E)$, the corresponding conflict graph $G_C = (V_E, E_C)$ has its vertex set equal to the edge set of $G$ and its edge set consisting of those pairs $(e_u, e_v)$ such that their distance (1) in $G$ is at most $K$. Precisely, if $m \in V_E$ and $i, j \in V$, let $m_{ij}$ denote the edge in $G$ corresponding to the node $m$ in $G_C$. Thus the vertex set of $G_C$ is defined by $m \in V_E$ if and only if $m_{ij} \in E$ for some $i, j \in V$. The edge set of $G_C$ is defined by $e_{kl} \in E_C$ if and only if $d(m_{ij}, n_{kl}) \le K$. Also, we set the vertex weight $w_m = w_{m_{ij}}$ and the edge weight $w_{e_{kl}} = 1$. The time complexity of this conversion process is $O(|E|^2)$. Now, by solving the WMIS of the conflict graph, we solve the network scheduling problem of the original graph.

As first proposed in [92], the WMIS problem can be formulated as the QUBO problem of finding the minimum of the binary function

$$ f(x_1, \ldots, x_N) = \sum_{k \in V_E} c_k x_k + \sum_{kl \in E_C} J_{kl}\, x_k x_l, \qquad (25) $$

or the corresponding Ising Hamiltonian obtained by converting $\sigma_i = 2x_i - 1$ for all $i$ and ignoring the constant energy shift. It is proved in [92] that a sufficient condition for the ground state of the Hamiltonian to be the optimal solution to the WMIS problem is that $J_{kl} > \min(c_k, c_l)$ for $kl \in E_C$ and $c_k = w_k$. The binary variable $x_k = 1$ if node $k$ is in the WMIS and 0 otherwise. Thus, the energy spectrum of the QUBO problem relates to the spectrum of the corresponding Ising formulation in Equation (5), with the ground-state energy of the Ising problem giving the solution to the network scheduling problem.

Note that the network scheduling problem has an Ising Hamiltonian with natural two-body interactions. Thus, unlike some other applications [43, 53], additional reduction is not needed.
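The original-graph to conflict-graph conversion of this section can be sketched as below. The edge-distance convention (minimum endpoint-to-endpoint hop distance, with a conflict declared when it is < K, so that K = 1 reduces to "sharing an endpoint", the 1-hop interference model) is our reading of Eq. (1):

```python
# Sketch of the G -> conflict-graph step: conflict-graph nodes are the
# edges of G; two conflict when they are within K hops (assumed reading
# of the distance convention of Eq. (1)).

from itertools import combinations

def hop_dist(adj, src, targets):
    """BFS hop distance from src to the nearest node in targets."""
    seen, frontier, d = {src}, [src], 0
    while frontier:
        if any(v in targets for v in frontier):
            return d
        frontier = [u for v in frontier for u in adj[v]
                    if u not in seen and not seen.add(u)]
        d += 1
    return float("inf")

def conflict_graph(edges, adj, K=1):
    conflicts = [(e, f) for e, f in combinations(edges, 2)
                 if min(hop_dist(adj, u, set(f)) for u in e) < K]
    return list(edges), conflicts

# Path network 0-1-2-3 under the 1-hop interference model.
edges = [(0, 1), (1, 2), (2, 3)]
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
nodes, conflicts = conflict_graph(edges, adj, K=1)
print(conflicts)   # the middle link conflicts with both outer links
```

Each pairwise check is a BFS of cost O(|V| + |E|), consistent with the polynomial (quadratic in |E|) conversion cost stated above.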
Such many-body reductions would incur sizable overhead, forcing the practical problem size to be very small given the hardware constraints.

Figure 17: An example of network-to-conflict-graph conversion, QUBO generation, and minor-embedding onto the Chimera architecture. The conflict graph is generated based on the 1-hop interference model. Only 4 cells of the K_{4,4} structure of the Chimera architecture are shown; on the D-Wave Two platform, 64 such cells are available, giving 512 physical qubits. Quantum annealing is performed on this architecture to theoretically give the ground state of the configuration, and thus the network scheduling solution of the original problem. The numerical data show the conflict-graph weight structure G_qubo and how it is mapped to the Ising Hamiltonian characterized by h_Ising and J_Ising. The off-diagonal entries of G_qubo are chosen to be 0.1 larger than the minimum of the weights of the two corresponding vertices of the conflict graph.

5.1.4 Measurement of quality

Since the quantum scheduling is applied to a real network setup and performs an annealing at every timeslot, we define three benchmarking quality measures to capture a more complete quality picture: an overall quality measure, the average network delay, and two independent measures, throughput optimality and ST99.

Among the wireless network protocols that have been demonstrated to be throughput optimal (e.g., Backpressure and Heat-Diffusion), network delay emerges as a parameter that can be optimized subject to throughput optimality.

(Average Network Delay) The average network delay is originally defined in Eq. (16). Since a Poisson arrival rate is commonly assumed in wireless network studies, by Little's theorem the expected time-averaged total queue congestion is proportional to the long-term averaged node-to-node network delay.
Thus, it is sufficient to work with the average queue occupancy over all nodes in the network. The network throughput at each timeslot is defined as Σ_{ij∈E} Σ_{d∈D} f_{ij}^{(d)}(k), where f_{ij}^{(d)}(k) is the number of d-packets actually forwarded (if interference constraints allow) along the edge ij at timeslot k. Since the exact solver for the protocol is proved to be throughput optimal, the throughput of the D-Wave solution is always less than or equal to that of the exact solver.

(Extended Throughput Optimality) If the returned solution is independent (no interference-constraint violations), we define the quality factor F_quantum ∈ [0, 1] of the returned solution as the ratio of the network throughputs that result from the quantum and the classical solver, respectively. Since the classical solver is exact, the quality factor of the quantum solution has a maximum of 1. But the quantum solver might not return an independent solution due to errors; thus, in the case of non-independence (interference-constraint violation), we define the quality factor as follows:

    F_quantum = Σ_{ij∈E'_S} f_ij / Σ_{kl∈E'_opt} f_kl,   if the quantum scheduling set S has no violations,
    F̃_quantum = Σ_{ij∈E''_S} f_ij / Σ_{kl∈E'_opt} f_kl,   if the quantum scheduling set S has violations,   (26)

where f denotes the forwarding amount as defined in Section 5.1.2, E'_S denotes the set of edges in the scheduling set S computed by QUBO without violations and E''_S the same set but with violations, and E'_opt denotes the set of edges in the optimal scheduling set solved by the exact solver, which has no violations. If F̃_quantum > 1, there are violations that improve the throughput; if F̃_quantum < 1, there are violations but they do not even improve the throughput. In later sections, 'throughput optimality' and 'optimality' are used interchangeably.

(ST99[OPT]) We also compare quantum annealing with simulated annealing results.
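The ST99[OPT] measure, formally given in Eq. (27), translates directly into a few lines of code; a small sketch, in which the per-run probability P_OPT would in practice be estimated empirically from the returned solutions:

```python
import math

def st99(p_opt):
    """Expected number of repetitions needed to observe a solution of at
    least OPT optimality with 99% certainty (Eq. 27), given the per-run
    probability p_opt of reaching that optimality level."""
    return math.log(1 - 0.99) / math.log(1 - p_opt)

st99(0.99)  # one repetition suffices
st99(0.5)   # roughly 6.6 repetitions
```

A 99% per-run success rate needs a single repetition, while a coin-flip success rate needs about 6.6 repetitions to reach 99% confidence.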
In line with other benchmarking methods [44–46], we define a slight variant of the speed measure, ST99[OPT], as the expected number of repetitions needed to reach at least a given optimality level OPT with 99% certainty. With P_OPT the probability of reaching a state with at least OPT optimality,

    ST99[OPT] = log(1 − 0.99) / log(1 − P_OPT).   (27)

Note that this is of practical significance for time-sensitive problems like wireless network scheduling, where enough time might not be available within a timeslot for the quantum annealer to reach the ground state; quantum annealing could be of practical value if ST99[OPT] beats all currently available heuristics. It is also worth noting that quantum annealing delivers its suboptimal solutions with no additional time overhead.

5.2 Gap Expansion, Energy Compression

We try to make the problem insensitive to quantum annealing errors by properly setting the h_i and J_ij terms in Eq. (5) so as to expand the final energy gap. Note that this gap is the energy gap between the eigenvalues of the Ising form at the end of the adiabatic evolution, not to be confused with the gap between the ground and first excited states during the evolution. We introduce a scaling factor α_kl that multiplies the quadratic part of the QUBO formulation and thereby puts more penalty weight on the independence constraints:

    f(x_1, ..., x_N) = −Σ_{k∈V_E} c_k x_k + Σ_{e_kl∈E_C} α_kl J_kl x_k x_l,   (28)

where V_E and E_C denote the vertex set and edge set, respectively, of the conflict graph, and α_kl is chosen uniform. In theory, α_kl = 1 would suffice as long as J_kl > min(c_k, c_l), as the ground state has already encoded the correct solution of the WMIS problem [92].
However, since measuring the ground state correctly is not guaranteed, increasing α_kl becomes necessary to enforce the independence constraints, so that the energy spectrum of the non-independent states is pushed into the upper part of the spectrum while the feasible energy states are compressed into the lower part. Figure 18 illustrates the effect of this feasible-energy compression on a small random graph.

A natural question regarding α is: how large is large enough? Interestingly, this is another optimization problem in itself. The D-Wave II is subject to an Internal Control Error (ICE) that induces Gaussian errors with standard deviations σ_h ≈ 0.05 and σ_J ≈ 0.035 [70]. Putting too large a penalty α_kl would incur two problems: 1) the local fields would become indistinguishable, and 2) the minimum evolution gap would become too small. Accordingly, a few parameter values have been tried out; the results, shown in Table 6, corroborate the experimental results of Section 5.3 indicating that gap expansion significantly influences the optimality of the returned solution.

Several strategies exist for setting heavier penalty weights to expand the gap. We compare two approaches:

    Global adjustment: let J_kl = max(c_k, c_l) and J_max = max_{kl∈E_C} J_kl; set α_kl J_kl = α_global J_max for kl ∈ E_C.
    Local adjustment: let J_kl = max(c_k, c_l), with α_kl > 1.   (29)

In the local adjustment [92], the constraint on J_kl depends only on the fields at ∂kl, whereas in the global adjustment, contrary to [92], J_kl depends on all fields.

Figure 18: Hamiltonian energy-level evolution with H(s) = (1 − s) H_trans + s H_Ising, demonstrating the feasible-energy compression on a small, randomly generated test wireless graph with 7 nodes in the conflict graph, plotting all 128 possible energy states both before and after gap expansion.
Both Hamiltonians are rescaled to a maximum energy level of 2 for both the coupling strengths and the local fields. Note that for a typical desktop computer, such plots become impossible to construct once the size of the conflict graph exceeds 12.

Table 3: Search ranges for the best simulated-annealing parameters.

    Parameter                Range
    Number of sweeps         400-10000
    Number of repetitions    300-5000
    Initial temperature      0.1-3
    Final temperature        3-13
    Schedule type            linear/exponential

5.3 Experimental Results

5.3.1 Experimental Setup

In general, simulated annealing (SA) is regarded as a strong competitor of quantum annealing (QA), and their performances are often compared in the literature [44–46]. It is also known that SA can boost the overall performance of classical heuristic algorithms for WMIS problems [26]. Thus, it is worth comparing our QA results with the performance of an SA algorithm. Here, we adapt a highly optimized simulated annealing code (an_ss_ge_fi_vdeg) from reference [120], compiling the C++ source with gcc 4.8.4 through the MATLAB C-mex API. Additionally, a wide range of parameters was tested to ensure near-optimal performance of the algorithm within a reasonable run time. All the non-time-critical tests in this section were run on the servers of the USC Center for High-Performance Computing.

Table 4: Problem sizes of the random graphs used in the experiments. The third column gives the number of physical qubits actually used after minor embedding.

            Problem size   QUBO size   Physical qubits
    Graph 1      15            31            164
    Graph 2      20            57            405
    Graph 3      25            68            759
    Graph 4      30            84            783

We performed experiments on both the D-Wave II and the D-Wave 2X, and show that QA has an advantage over SA after gap expansion, hence a potential general quantum speedup. We also show that performance improves significantly on the D-Wave 2X compared to its predecessor, thanks to its better noise control.
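For orientation only, here is a toy single-repetition simulated annealer over a QUBO, with a linear inverse-temperature schedule and a final greedy quench; it is a pedagogical stand-in for, not a reimplementation of, the optimized an_ss_ge_fi_vdeg code used in the experiments, and the small instance at the bottom is invented for illustration:

```python
import math
import random

def sa_qubo(Q, n, sweeps=400, beta0=0.1, beta1=3.0, seed=0):
    """One repetition of simulated annealing on the QUBO {(i, j): Q_ij},
    finishing with a greedy quench so the result is a local minimum."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]

    def delta(i):
        # Energy change caused by flipping bit i.
        d = Q.get((i, i), 0.0)
        for (a, b), v in Q.items():
            if a != b and i in (a, b):
                d += v * x[b if a == i else a]
        return d * (1 - 2 * x[i])

    for s in range(sweeps):
        beta = beta0 + (beta1 - beta0) * s / max(sweeps - 1, 1)
        for i in range(n):
            d = delta(i)
            if d <= 0 or rng.random() < math.exp(-beta * d):
                x[i] ^= 1
    changed = True
    while changed:  # greedy quench: accept only strictly downhill flips
        changed = False
        for i in range(n):
            if delta(i) < 0:
                x[i] ^= 1
                changed = True
    return tuple(x)

def energy(Q, x):
    return sum(v * x[a] * x[b] for (a, b), v in Q.items())

# Tiny WMIS-style instance: three conflicting links on a path; the
# ground state is (1, 0, 1) with energy -9.
Q = {(0, 0): -5, (1, 1): -7, (2, 2): -4, (0, 1): 5.1, (1, 2): 4.1}
best = min((sa_qubo(Q, 3, seed=s) for s in range(5)),
           key=lambda v: energy(Q, v))
```

Taking the best of several seeded repetitions mirrors the repetition counts reported for SA in this section; the quench guarantees that whatever is returned is at least a local minimum.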
Table 4 lists the parameters of the four random graphs tested on the D-Wave II and D-Wave 2X. The "problem size" is the order of the problem graph; the "QUBO size" is |E| and thus generally scales as O(n^2); the "physical qubits" entry is the number of qubits after minor embedding. Because of minor embedding, a significantly large proportion of all available qubits is utilized (402 out of 502 on the D-Wave II for Graph 2, and 783 out of 1098 on the D-Wave 2X for Graph 4).

5.3.2 Average network delay

Figure 19 shows that, after gap expansion, the overall optimality improves significantly. As one would expect, the network delay also improves and gets closer to that of the exact classical solution. How to handle non-independent solutions returned by the D-Wave is a rather subtle issue. In our experiment, we skip the transmission for a timeslot if the lowest-energy solution does not conform to the K-hop interference model. It can make sense to perform a polynomial-time greedy matching when the returned result is non-independent, in which case the approach is more error-insensitive and thus more scalable; here, however, we consider the worst-case scenario in which no transmission is allowed whenever the result is non-independent.

Graph 2 is the only graph on which we compare the performance of the D-Wave II and the D-Wave 2X, because the D-Wave II became unavailable after the introduction of the D-Wave 2X. Graph 3 and Graph 4 can only be solved on the latter platform because of their problem size. Figure 20 shows that, under exactly the same test conditions, the D-Wave 2X gains an advantage over SA in terms of average network delay after gap expansion in both test cases; this advantage was non-existent on its predecessor.

5.3.3 Throughput optimality and ST99[OPT]

Based on our assumptions in the previous section, it is of vital importance that the solutions returned by the solver satisfy the independence constraints. Here we use the optimality measure defined in Section 5.1.4.
The optimality measure may exceed 1 because solutions that violate the independence condition may have a larger total weight. We refer to solutions that do not satisfy the independence condition as violations. From Table 5 we can see that energy compression directly results in fewer violations of the independence condition in the case of QA. This explains the large performance improvement seen in Figure 19 and Figure 21.

Figure 19: D-Wave II: network delay of the classical exact algorithm, quantum annealing, quantum annealing after gap expansion, and simulated annealing on two randomly generated networks. The left panel is a simpler case with 15 nodes and 31 edges, where quantum annealing reaches a solution almost as good as the exact one in terms of network delay; the right panel is a harder case with 20 nodes and 57 edges, both randomly generated using the Erdős–Rényi model. On the harder graph, QA even with gap expansion does not quite reach the performance of the exact solver.

Figure 20: D-Wave 2X noise improvement compared to the D-Wave II. The noise reduction is sufficient to demonstrate a quantum advantage over SA after gap expansion. The experiment is performed on the same set of graphs as in Figure 19; the SA curve differs slightly because of the randomness of SA across re-runs.

The test results of SA under a particular set of parameters are shown in Table 5.
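A sketch of how the violation check and the quality factor of Eq. (26) interact under the 1-hop interference model (names are illustrative; f maps each link to its forwarding amount):

```python
def violations(schedule):
    """Pairs of scheduled links sharing an endpoint (1-hop interference);
    an empty list means the scheduling set is independent."""
    return [(a, b) for i, a in enumerate(schedule)
                   for b in schedule[i + 1:] if set(a) & set(b)]

def quality_factor(schedule, optimal, f):
    """F_quantum (or tilde-F_quantum when violations exist) of Eq. (26):
    throughput of the returned set over that of the exact solver's set."""
    return sum(f[e] for e in schedule) / sum(f[e] for e in optimal)

opt = [(0, 1), (2, 3)]                 # exact, violation-free schedule
f = {(0, 1): 3, (2, 3): 2, (1, 2): 4}  # per-link forwarding amounts
ok = [(0, 1)]                          # independent: F = 3/5
bad = [(0, 1), (1, 2)]                 # one violation: tilde-F = 7/5 > 1
```

The `bad` set illustrates the case discussed above: a violating schedule whose raw throughput exceeds that of the exact solver, so its quality factor exceeds 1.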
The ranges of the tested parameters are listed in Table 3; what is reported is the best SA result found in our tests. The following observations are of interest:

• QA benefits more from gap expansion than SA does.

• SA either finds the exact ground state or gets trapped in very 'deep' local minima.

• QA scales better than SA.

Figure 21: D-Wave 2X: network delay of the classical exact algorithm, quantum annealing, and quantum annealing after gap expansion with mild and strong strategies (representing the local and the global adjustment, respectively); simulated annealing, also with mild and strong strategies, is run on the same networks. The left panel is a graph with 25 nodes and 68 edges, the right one a graph with 30 nodes and 84 edges, both randomly generated using the Erdős–Rényi model.

There is a relatively large number of violations in the SA results even after gap expansion. This agrees with the conventional understanding that SA is designed to find the ground state and is not optimized to search for suboptimal results. This is especially true for the larger Graph 4, on which SA violates the independence constraint in almost all cases (146 out of 150). In addition, Table 5 gives the exact figures for the performance of SA and QA after gap expansion. From the optimality data returned by the D-Wave, we can plot ST99 against the level of optimality, as shown in Figure 22.
Note that our ST99 definition relies on an optimality level, which could typically be 80% or 90% depending on the user's needs. In a network setting, the optimality achieved by a classical heuristic depends heavily on the network topology and on the traffic-rate model.

5.3.4 Summary

Although gap expansion is in itself a classical method, we showed that it helps QA improve its results, and thus helps demonstrate a potential quantum advantage over SA. Table 6 shows how setting the penalty weight via the local or the global approach affects the quality of the returned solutions. We found that setting α_global = 1 yields the best performance so far. We do not have a theoretical explanation for the wrong solutions; potential explanations on small problems have been discussed in [44–46]. Intuitively, as the problem size grows it becomes much harder even to find states close to the ground state; indeed, as the penalty weight grows too large, the local fields begin to vanish, making the problem effectively harder since all weights have to be scaled into [−2, +2]. The problem of quantitatively connecting the quality measure, network stability, and throughput optimality remains open.
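The trade-off just described can be sketched in code: one function sets the quadratic penalties in the spirit of Eq. (29), and a second applies the uniform rescaling into the programmable range (a single [−2, +2] interval is assumed here for both h and J), which is what shrinks the local fields when the penalties grow; names are illustrative:

```python
def penalty_weights(conflict_edges, c, alpha=1.0, mode="global"):
    """Quadratic penalties alpha_kl * J_kl of Eq. (28).
    local:  J_kl = max(c_k, c_l) for each conflict edge, scaled by alpha;
    global: every penalty is alpha times the largest such J_kl."""
    base = {kl: max(c[kl[0]], c[kl[1]]) for kl in conflict_edges}
    if mode == "local":
        return {kl: alpha * j for kl, j in base.items()}
    top = alpha * max(base.values())
    return {kl: top for kl in base}

def rescale(h, J, limit=2.0):
    """Uniform scaling so every |h_i| and |J_ij| fits in [-limit, limit]:
    a large penalty forces a large scale factor, pushing the local fields
    down toward the ICE noise level (sigma_h around 0.05)."""
    largest = max(abs(v) for v in list(h.values()) + list(J.values()))
    s = max(largest / limit, 1.0)
    return ({k: v / s for k, v in h.items()},
            {k: v / s for k, v in J.items()})

c = {0: 5.0, 1: 7.0, 2: 9.0}
J = penalty_weights([(0, 1), (1, 2)], c, alpha=2.0, mode="global")  # all 18.0
h, J = rescale({0: 1.0, 1: -0.5, 2: 0.25}, J)  # scale factor 9 shrinks h
```

After rescaling, the couplers sit at the hardware limit of 2.0 while the largest local field has dropped to about 0.11, only a couple of noise standard deviations above zero, which is exactly the indistinguishability problem noted above.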
Table 5: Performance after gap expansion for Graphs 2, 3, and 4.

    Graph 2            Violations (out of 120)   Avg. optimality (without violations)
    QA on D-Wave II              13                          0.8129
    QA on D-Wave 2X               0                          0.8746
    SA                           31                          0.9935

    Graph 3            Violations (out of 120)   Avg. optimality (without violations)
    QA on D-Wave 2X               0                          0.8737
    SA                           98                          0.7434

    Graph 4            Violations (out of 150)   Avg. optimality (without violations)
    QA on D-Wave 2X               0                          0.8714
    SA                          146                          0.7613

Figure 22: ST99, defined as log(1 − 0.99)/log(1 − P_OPT), versus the optimality level required by the network, before and after gap expansion, on the four test graphs. The probability is computed over all solutions returned by the D-Wave; thus an ST99 of 10^3 corresponds to one set of annealing runs, which in our case costs 20 ms. The curve for Graph 2 ends at 0.9 optimality because no solution satisfying that optimality level was found after a total of 120,000 annealing runs. Also note that the left panel was run on the D-Wave II and the right panel on the D-Wave 2X.

Table 6: Penalty weights in different setups and the resulting quality measures of the returned solutions, averaged over 120 timeslots. Graph 2 is a larger instance than Graph 1. Delay refers to the average network delay in steady state; NC marks the non-convergent cases in which steady state is not reached within the tested time span.

                      Avg. optimality G1   Delay G1   Avg. optimality G2   Delay G2
    α_kl = 1+               0.781            384.9          0.077             NC
    α_kl = 2                0.928            376.3          0.312            1213
    α_global = 1            0.974            268            0.741            566.1
    α_global = 1.5         -0.165             NC           -0.331             NC
    α_global = 2           -0.185             NC           -0.356             NC

                      Avg. optimality G3   Delay G3   Avg. optimality G4   Delay G4
    α_kl = 1+               0.8585           743            0.8478          1168.9
    α_global = 1            0.8737           762.9          0.8714           975.8

6 Conclusion

We have presented a quantum framework for solving the wireless network scheduling problem; the complete picture of the framework is shown in Figure 23. Each component functions properly, as demonstrated in the previous sections.

By the same token, we have also presented the first experimental D-Wave application to network scheduling, a problem widely investigated, in the classical sense, in the field of wireless networking. The problem is reduced to a WMIS problem, itself reformulated as finding the minimum energy level of an Ising Hamiltonian, itself mapped onto the Chimera architecture by means of a differential-geometric approach [124] that immediately rules out cases where minor embedding is impossible. We increased the rate at which the quantum annealer and the simulated annealer return violation-free solutions by classically increasing the energy gap, so that the range of acceptable solutions is greatly enlarged. This framework, which targets WMIS problems in general, can be trivially applied to other problems involving a WMIS.

By comparing QA with SA, though omitting Quantum Monte Carlo (QMC), we found a potential comparison point at which QA outperforms SA, namely the benefit drawn from gap expansion, as seen in the previous sections. Although, as seen in Table 5, SA has an optimality advantage in the violation-free cases of test graph G2, it is interesting to observe that among suboptimal solutions QA has fewer violations than SA. For the larger graphs tested on the D-Wave 2X, SA loses this lone advantage in optimality, while QA retains almost identical optimality upon scaling.
As far as speed is concerned, we note that on Graph 2, disregarding all overhead, the 1000 runs of QA took a total of 1000 × 20 µs = 20 ms on the D-Wave II to render the solution used in the performance analysis. For SA, on the other hand, 5000 runs, totaling about 1 minute in the adopted simulation environment, were needed to obtain solutions comparable with those of QA. This comparison might, however, be misleading, as there is room for considerable speed improvement of SA on specialized architectures, such as a High-Performance Computer (HPC). It has also been argued [45, 46] that even 20 µs might be too long for particular problems; however, this is the minimum programmable annealing time on the D-Wave platform. Despite the encouraging results, given the very limited experimental data we cannot positively assert a general sizable advantage of quantum annealing over simulated annealing. It is our hope, however, that the present study can inspire future demonstrations of a general speedup. It is also important to note that the scaling on the D-Wave 2X has improved significantly: in Table 5, the optimality level for test graphs ranging in size (number of physical qubits) from 405 to 783 remains very close to 0.87, and all of these test cases come with zero-violation solutions.

Figure 23: Complete picture of the proposed quantum network scheduling framework.

The classical wireless protocols (Backpressure, Dirichlet, and Heat Diffusion) return solutions in which the interference constraints are always satisfied, so that there is forwarding at every timeslot. Here, the QA and SA interference-problem solvers allow solutions that violate the interference constraints in the interest of speed. Unfortunately, should an interference constraint be violated even at a remote corner of the network, our simulation stops the transmission, so that the Dirichlet protocol as it now stands cannot take advantage of the proposed algorithmic speedup.
Whether the Dirichlet protocol can be tweaked to take advantage of the faster QA solution to the interference problem remains wide open.

7 References

[1] A. Dimakis & J. Walrand, "Sufficient Conditions for Stability of Longest-Queue-First Scheduling: Second-Order Properties Using Fluid Limits", Advances in Applied Probability, vol. 38, no. 2, pp. 505–521, 2006.
[2] C. Joo, X. Lin & N. Shroff, "Understanding the Capacity Region of the Greedy Maximal Scheduling Algorithm in Multi-Hop Wireless Networks", INFOCOM'08, Phoenix, Arizona, Apr. 2008.
[3] G. Zussman, A. Brzezinski & E. Modiano, "Multihop Local Pooling for Distributed Throughput Maximization in Wireless Networks", IEEE INFOCOM'08, Phoenix, Arizona, Apr. 2008.
[4] M. Leconte, J. Ni & R. Srikant, "Improved Bounds on the Throughput Efficiency of Greedy Maximal Scheduling in Wireless Networks", MOBIHOC'09, May 2009.
[5] B. Li, C. Boyaci & Y. Xia, "A Refined Performance Characterization of Longest-Queue-First Policy in Wireless Networks", ACM MOBIHOC, pp. 65–74, New York, NY, USA, 2009.
[6] A. Brzezinski, G. Zussman & E. Modiano, "Distributed Throughput Maximization in Wireless Mesh Networks via Pre-Partitioning", IEEE/ACM Trans. Netw., 16(6):1406–1419, December 2008.
[7] A. Proutiere, Y. Yi & M. Chiang, "Throughput of Random Access without Message Passing", Conference on Information Sciences and Systems, Princeton, NJ, USA, Mar. 2008.
[8] D. J. Baker, J. Wieselthier & A. Ephremides, "A Distributed Algorithm for Scheduling the Activation of Links in a Self-Organizing Mobile Radio Network", IEEE ICC, pp. 2F.6.1–2F.6.5, 1982.
[9] B. Miller & C. Bisdikian, "Bluetooth Revealed: The Insider's Guide to an Open Specification for Global Wireless Communications", Prentice Hall, 2000.
[10] H. Balakrishnan, C. Barrett, V. Kumar, M. Marathe & S. Thite, "The Distance-2 Matching Problem and its Relationship to the MAC-Layer Capacity of Ad Hoc Wireless Networks", IEEE JSAC, 22(6):1069–1079, Aug. 2004.
[11] G. Sharma, R. Mazumdar & N. Shroff, "On the Complexity of Scheduling in Wireless Networks", MobiCom'06, Proceedings of the 12th Annual International Conference on Mobile Computing and Networking, pp. 227–238.
[12] D. M. Blough, G. Resta & P. Sant, "Approximation Algorithms for Wireless Link Scheduling with SINR-Based Interference", IEEE/ACM Transactions on Networking, vol. 18, no. 6, Dec. 2010.
[13] D. Chafekar, V. S. Anil Kumar, M. V. Marathe, S. Parthasarathy & A. Srinivasan, "Capacity of Wireless Networks under SINR Interference Constraints", INFOCOM'08, Phoenix, AZ, USA, Apr. 2008.
[14] T. Moscibroda, R. Wattenhofer & A. Zollinger, "Topology Control Meets SINR: The Scheduling Complexity of Arbitrary Topologies", MobiHoc'06, ACM, Florence, Italy, May 2006.
[15] P. Gupta & P. R. Kumar, "The Capacity of Wireless Networks", IEEE Transactions on Information Theory, 46(2):388–404, Mar. 2000.
[16] M. Andrews & M. Dinitz, "Maximizing Capacity in Arbitrary Wireless Networks in the SINR Model: Complexity and Game Theory", INFOCOM'09, Rio de Janeiro, Apr. 2009.
[17] K. Jain, J. Padhye, V. N. Padmanabhan & L. Qiu, "Impact of Interference on Multi-Hop Wireless Network Performance", MobiCom'03, ACM, San Diego, California, USA, Sep. 2003.
[18] M. Alicherry, R. Bhatia & L. E. Li, "Joint Channel Assignment and Routing for Throughput Optimization in Multiradio Wireless Mesh Networks", IEEE Journal on Selected Areas in Communications, vol. 24, no. 11, pp. 1960–1971, 2006.
[19] M. Kodialam & T. Nandagopal, "Characterizing the Capacity Region in Multi-Radio Multi-Channel Wireless Mesh Networks", MobiCom'05, ACM, Cologne, Germany, Sep. 2005.
[20] S. S. Sanghavi, L. Bui & R. Srikant, "Distributed Link Scheduling with Constant Overhead", ACM SIGMETRICS, vol. 35, no. 1, pp. 313–324, 2007.
[21] P. J. Wan, "Multiflows in Multihop Wireless Networks", MobiHoc'09, pp. 85–94, New Orleans, USA, May 2009.
[22] Official homepage of the IEEE 802.11 working group, http://www.ieee802.org/11
[23] G. Bianchi, "Performance Analysis of the IEEE 802.11 Distributed Coordination Function", IEEE Journal on Selected Areas in Communications, vol. 18, pp. 535–547, Mar. 2000.
[24] F. Cali, "Dynamic Tuning of the IEEE 802.11 Protocol to Achieve a Theoretical Throughput Limit", IEEE/ACM Transactions on Networking, vol. 8, pp. 785–799, Dec. 2000.
[25] L. Tassiulas & A. Ephremides, "Stability Properties of Constrained Queueing Systems and Scheduling Policies for Maximal Throughput in Multihop Radio Networks", IEEE Trans. Autom. Control, 37(12):1936–1948, Dec. 1992.
[26] S. Homer & M. Peinado, "Experiments with Polynomial-Time Clique Approximation Algorithms on Very Large Graphs", in Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, volume 26 of DIMACS Series, American Mathematical Society, Providence, RI, 1996.
[27] X. Xu, J. Ma & H. W. An, "Improved Simulated Annealing Algorithm for the Maximum Independent Set Problem", Intelligent Computing, volume 4113 of Lecture Notes in Computer Science, pp. 822–831.
[28] Y. G. Kim & M. J. Lee, "Scheduling Multi-Channel and Multi-Timeslot in Time-Constrained Wireless Sensor Networks via Simulated Annealing and Particle Swarm Optimization", IEEE Communications Magazine, vol. 52, issue 1, pp. 122–129.
[29] M. Mappar, A. M. Rahmani & A. H. Ashtari, "A New Approach for Sensor Scheduling in Wireless Sensor Networks Using Simulated Annealing", Fourth International Conference on Computer Sciences and Convergence Information Technology (ICCIT '09), Seoul, Korea, 2009.
[30] T. Grossman, "Applying the INN Model to the Max Clique Problem", in Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, volume 26 of DIMACS Series, American Mathematical Society, Providence, RI, 1996.
[31] A. Jagota, "Approximating Maximum Clique with a Hopfield Network", IEEE Trans. Neural Networks, 6:724–735, 1995.
[32] A. Jagota, L. Sanchis & R. Ganesan, "Approximately Solving Maximum Clique Using Neural Networks and Related Heuristics", in Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, volume 26 of DIMACS Series, American Mathematical Society, Providence, RI, 1996.
[33] T. N. Bui & P. H. Eppley, "A Hybrid Genetic Algorithm for the Maximum Clique Problem", Proceedings of the 6th International Conference on Genetic Algorithms, pp. 478–484, Pittsburgh, PA, 1995.
[34] M. Hifi, "A Genetic Algorithm-Based Heuristic for Solving the Weighted Maximum Independent Set and Some Equivalent Problems", J. Oper. Res. Soc., 48:612–622, 1997.
[35] E. Marchiori, "Genetic, Iterated and Multistart Local Search for the Maximum Clique Problem", in Applications of Evolutionary Computing, volume 2279 of Lecture Notes in Computer Science, pp. 112–121, Springer-Verlag, Berlin, 2002.
[36] T. A. Feo & M. Resende, "A Greedy Randomized Adaptive Search Procedure for Maximum Independent Set", Operations Research, 42:860–878, 1994.
[37] R. Battiti & M. Protasi, "Reactive Local Search for the Maximum Clique Problem", Algorithmica, 29:610–637, 2001.
[38] C. Friden, A. Hertz & D. de Werra, "Stabulus: A Technique for Finding Stable Sets in Large Graphs with Tabu Search", Computing, 42:35–44, 1989.
[39] C. Mannino & E. Stefanutti, "An Augmentation Algorithm for the Maximum Weighted Stable Set Problem", Computational Optimization and Applications, 14:367–381, 1999.
[40] P. Soriano & M. Gendreau, "Tabu Search Algorithms for the Maximum Clique Problem", in Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, volume 26 of DIMACS Series, American Mathematical Society, Providence, RI, 1996.
[41] J. Edmonds, "Paths, Trees, and Flowers", Canad. J. Math., 17:449–467, doi:10.4153/CJM-1965-045-4.
[42] S. Kirkpatrick, C. D. Gelatt Jr. & M. P. Vecchi, "Optimization by Simulated Annealing", Science, 220(4598):671–680.
[43] P. F. Felzenszwalb, "Dynamic Programming and Graph Algorithms in Computer Vision", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, issue 4, pp. 721–740, 2011.
[44] S. Boixo, T. F. Rønnow, S. V. Isakov, Z. Wang, D. Wecker, D. A. Lidar, J. M. Martinis & M. Troyer, "Quantum Annealing with More than One Hundred Qubits", Nature Phys. 10, 218 (2014).
[45] T. F. Rønnow, Z. Wang, J. Job, S. Boixo, S. V. Isakov, D. Wecker, J. M. Martinis, D. A. Lidar & M. Troyer, "Defining and Detecting Quantum Speedup", Science 345, 420 (2014).
[46] I. Hen, J. Job, T. Albash, T. F. Rønnow, M. Troyer & D. A. Lidar, "Probing for Quantum Speedup in Spin Glass Problems with Planted Solutions", arXiv:1502.01663.
[47] I. Trummer & C. Koch, "Multiple Query Optimization on the D-Wave 2X Adiabatic Quantum Computer", arXiv:1501.06437.
[48] E. G. Rieffel, D. Venturelli, B. O'Gorman, M. B. Do, E. Prystay & V. N. Smelyanskiy, "A Case Study in Programming a Quantum Annealer for Hard Operational Planning Problems", Quantum Information Processing, vol. 14, issue 1, pp. 1–36, Jan. 2015.
[49] B. O'Gorman, R. Babbush, A. Perdomo-Ortiz, A. Aspuru-Guzik & V. Smelyanskiy, "Bayesian Network Structure Learning Using Quantum Annealing", The European Physical Journal Special Topics, vol. 224, issue 1, pp. 163–188, Feb. 2015.
[50] A. Perdomo-Ortiz, J. Fluegemann, S. Narasimhan, R. Biswas & V. N. Smelyanskiy, "A Quantum Annealing Approach for Fault Detection and Diagnosis of Graph-Based Systems", The European Physical Journal Special Topics, vol. 224, issue 1, pp. 131–148, Feb. 2015.
[51] K. M. Zick, O. Shehab & M. French, "Experimental Quantum Annealing: Case Study Involving the Graph Isomorphism Problem", Scientific Reports, DOI: 10.1038/srep11168, Jun. 2015.
[52] M. Benedetti, J. Realpe-Gómez, R. Biswas & A. Perdomo-Ortiz, "Estimation of Effective Temperatures in a Quantum Annealer and its Impact in Sampling Applications: A Case Study Towards Deep Learning Applications", arXiv:1510.07611.
[53] Z. Bian, F. Chudak, W. G. Macready, L. Clark & F. Gaitan, "Experimental Determination of Ramsey Numbers", Phys. Rev. Lett. 111, 130505, Sep. 2013.
[54] E. Farhi, J. Goldstone, S. Gutmann & M. Sipser, "Quantum Computation by Adiabatic Evolution", arXiv:quant-ph/0001106.
[55] V. Choi, "Minor-Embedding in Adiabatic Quantum Computation: I. The Parameter Setting Problem", Quantum Information Processing, volume 7, pp. 193–209, 2008.
[56] V. Choi, "Minor-Embedding in Adiabatic Quantum Computation: II. Minor-Universal Graph Design", Quantum Information Processing, volume 10, issue 3, pp. 343–353, 2011.
[57] P. I. Bunyk et al., "Architectural Considerations in the Design of a Superconducting Quantum Annealing Processor", arXiv:1401.5504.
[58] D. Deutsch, "Quantum Theory, the Church–Turing Principle and the Universal Quantum Computer", Proceedings of the Royal Society of London A, 400:97–117.
[59] P. Shor, "Algorithms for Quantum Computation: Discrete Logarithms and Factoring", SIAM Journal on Computing, volume 26, no. 5, pp. 1484–1509 (1997); quant-ph report no. 9508027.
[60] L. Grover, "A Fast Quantum Mechanical Algorithm for Database Search", Proceedings of the 28th Annual ACM Symposium on the Theory of Computing, pp. 212–219 (1996); quant-ph report no. 9605043.
[61] T. Lanting et al., "Entanglement in a Quantum Annealing Processor", Phys. Rev. X 4, 021041, May 2014.
[62] S. Boixo et al., "Computational Role of Collective Tunneling in a Quantum Annealer", arXiv:1411.4036.
[63] H. Katzgraber, F. Hamze & R. Andrist, "Glassy Chimeras Could Be Blind to Quantum Speedup: Designing Better Benchmarks for Quantum Annealing Machines", Phys. Rev. X 4, 021008.
[64] J. King et al., "Benchmarking a Quantum Annealing Processor with the Time-to-Target Metric", arXiv:1508.05087.
[65] K. Pudenz, T. Albash & D. A. Lidar, "Error-Corrected Quantum Annealing with Hundreds of Qubits", Nature Comm. 5, 3243.
[66] W. Vinci et al., "Quantum Annealing Correction with Minor Embedding", Phys. Rev. A 92, 042310.
[67] W. Vinci, T. Albash & D. A. Lidar, "Nested Quantum Annealing Correction", arXiv:1511.07084.
[68] A. Mishra, T. Albash & D. A. Lidar, "Performance of Two Different Quantum Annealing Correction Codes", Quantum Information Processing, vol. 15, issue 2, pp. 609–636, Feb. 2016.
[69] K. J. Wu, "Solving Practical Problems with Quantum Computing Hardware", ASCR Workshop on Quantum Computing for Science, February 2015.
[70] A. D. King & C. C. McGeoch, "Algorithm Engineering for a Quantum Annealing Platform", arXiv:1410.2628, Oct. 2014.
[71] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam & E. Cayirci, "A Survey on Sensor Networks", IEEE Communications Magazine, vol. 40, no. 8, pp. 102–114, Aug. 2002.
[72] B. Karp & H. T. Kung, "GPSR: Greedy Perimeter Stateless Routing for Wireless Networks", Proceedings of ACM MobiCom'00, Boston, MA, pp. 243–254, Aug. 2000.
[73] E. Jonckheere, M. Lou, F. Bonahon & Y. Baryshnikov, "Euclidean versus Hyperbolic Congestion in Idealized versus Experimental Networks", Internet Mathematics, vol. 7, no. 1, pp. 1–27, March 2011.
[74] O. Narayan & I. Saniee, "The Large Scale Curvature of Networks", arXiv:0907.1478v1 [cond-mat.stat-mech], July 2009.
[75] M. Lou, E. Jonckheere, Y. Baryshnikov, F. Bonahon & B. Krishnamachari, "Load Balancing by Network Curvature Control", International Journal of Computers, Communications and Control (IJCCC), vol. 6, no. 1, pp. 134–149, March 2011. ISSN 1841-9836.
[76] L. Tassiulas & A. Ephremides, "Stability Properties of Constrained Queueing Systems and Scheduling Policies for Maximum Throughput in Multi-Hop Radio Networks", IEEE Trans. Automatic Control, vol. 37, no. 12, pp. 1936–1949, 1992.
[77] R. Banirazi, E. Jonckheere & B. Krishnamachari, "Heat Diffusion Algorithm for Resource Allocation and Routing in Multihop Wireless Networks", GLOBECOM, Anaheim, California, USA, December 3–7, 2012, pp. 5915–5920, Session WN16: Routing and Multicasting.
[78] R. Banirazi, E. Jonckheere & B. Krishnamachari, "Dirichlet's Principle on Multiclass Multihop Wireless Networks: Minimum Cost Routing Subject to Stability", ACM International Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Montreal, Canada, September 21–26, 2014.
[79] Y. Ollivier, "Ricci Curvature of Markov Chains on Metric Spaces", J. Funct. Anal., 256 (2009).
[80] Y. Ollivier, "Ricci Curvature of Metric Spaces", CNRS, UMPA, École normale supérieure de Lyon, 46, allée d'Italie, 69007 Lyon, France.
[81] F. Bauer, J. Jost & S. Liu, "Ollivier–Ricci Curvature and the Spectrum of the Normalized Graph Laplace Operator", arXiv:1105.3803v1 [math.CO], 19 May 2011.
[82] A. Grigor'yan, "Heat Kernels on Weighted Manifolds and Applications", Cont. Math., 398, pp. 93–191, 2006.
[83] J. Cheeger, M. Gromov & M. Taylor, "Finite Propagation Speed, Kernel Estimates for Functions of the Laplace Operator, and the Geometry of Complete Riemannian Manifolds", J. Differential Geometry, 17:15–53, 1982.
[84] C. Villani, "Topics in Optimal Transportation", volume 58 of Graduate Studies in Mathematics, American Mathematical Society, Providence, RI, 2003.
[85] C. Villani, "Optimal Transport, Old and New", volume 338 of Grundlehren der Mathematischen Wissenschaften, Springer-Verlag, Berlin, 2009.
[86] J. Pan, Y. Hou, L. Cai, Y. Shi & S. Shen, "Topology Control for Wireless Sensor Networks", Proceedings of the 9th Annual International Conference on Mobile Computing and Networking, pp. 286–299.
[87] M. Lewis & K. W. Hsu, "TelosRFID: An Ad-Hoc Wireless Networking Capable Multi-Protocol RFID Reader System", 2010 IEEE Radio and Wireless Symposium (RWS), 10–14 Jan. 2010.
[88] A. Durresi, V. Paruchuri & L. Barolli, "Delay-Energy Aware Routing Protocol for Sensor and Actor Networks", Proceedings of the 11th International Conference on Parallel and Distributed Systems, 20–22 July 2005.
[89] C. Wang, E. Jonckheere & R.
Banirazi, “Wireless network capacity versus Ollivier-Ricci curvature under Heat Diffusion (HD) protocol”, American Control Conference (ACC 2014), Portland, OR, June 04-06, 2014; [90] A. Lucas, “Ising formulations of many NP problems”, Frontiers in Physics 2, 5 (2014) [91] P. Felzenszwalb, “Dynamic programming and graph algorithms in computer vision, pattern analysis and machine intelligence”, IEEE Transactions on, V olume 33, Issue 4, pp. 721-740, 2011. [92] V . Choi, “Minor-embedding in adiabatic quantum computation: I. The parameter setting problem”, Quantum Information Processing, V olume 7, pp. 193–209, 2008. [93] V . Choi, “Minor-embedding in adiabatic quantum computation: II. Minor-universal graph design”, Quantum Information Processing: V olume 10, Issue 3, pp. 343-353, 2011. 50 [94] Ch. Klymko, B. D. Sullivan & T. S. Humble, “Adiabatic quantum programming: Minor embedding with hard faults”, arXiv:1210.8395 [quant-ph]. [95] Th. van Dijk, J-P van den Heuvel & W. Slob, “Computing treewidth with LibTW”. (http://www.treewidth.com/docs/LibTW.pdf) [96] P.I. Bunyk et al., “Architectural considerations in the design of a superconducting quantum annealing processor”, arXiv: 1401.5504 [97] H. Bodlaender, “A linear-time algorithm for finding tree-decompositions of small treewidth”. SIAM J. Comput. 25, 6 (1996), 1305-1317 [98] H. Bodlaender & A. Koster, “Treewidth Computation II. Lower Bounds”, Technical Report UU-CS-2010-22, Dept. of Information and Computing Sciences, Utrecht University (2010). [99] T. Kloks, “Treewidth. Computations and approximations”. No. 842 in Lecture Notes in Computer Science. Springer Verlag, 1994. [100] N. Robertson & P. Seymour, “Graph minors - XIII: The disjoint paths problem”. J. Comb. Theory, Ser. B 63, 1 (1995), 65-110. [101] I. Adler, F. Dorn, F.V . Fomin, I. Sau & D.M. Thilikos, “Faster parameterized algorithms for minor containment”. Theor. Comput. Sci, 412(50):7018-7028, 2011. [102] A. Berry, J. Blair & P. 
Heggernes, “Maximum cardinality search for computing minimal triangulations”, 28th International Workshop on Graph-Theoretic Concepts in Computer Science, pp 1-2, London, UK, 2002, Springer- Verlag. [103] V . Gogate & R. Dechter, “A complete anytime algorithm for treewidth”, Proceedings of the 20th conference on Uncertainty in artificial intelligence, pp 201-208, Arlington, Virginia, 2004. AUAI Press. [104] A. Koster, H. Bodlaender, S. van Hoesel, “Treewidth: computational experiments”. Research Memoranda 001, Maastricht: METEOR, 2002. [105] S. Ramachandramurthi, “The structure and number of obstructions to treewidth” SIAM Journal on Discrete Mathematics, 10(1):146-157, 1997. [106] H. Bodlaender, F. Fomin, A. Koster, “On exact algorithms for treewidth”, Technical Report UU-CS-2006-032, Institute of Information and Computing Sciences, Utrecht University, 2006. [107] S. Holmes, S. Rubinstein-Salzedo & C. Seiler, “Curvature and Concentration of Hamiltonian Monte Carlo in High Dimensions”, arXiv:1407.1114, May. 2015 [108] A. Tannenbaum et al., “Graph curvature and the robustness of cancer networks”, Scientific Reports, doi:10.1038/srep12323, Jul. 2015 [109] R. Sandhu, T. Georgiou & A. Tannenbaum, “Market fragility, systemic risk, and Ricci curvature”, arXiv: 1505.05182, May. 2015 [110] C. Ni, Y . Lin, J. Gao, X. Gu & E. Saucan, “Ricci curvature of the Internet topology”, INFOCOM 2015, Hong Kong, Apr. 2015 [111] J. Cai, W. Macready & A. Roy, “A practical heuristic for finding graph minors”, arXiv:1406.2741, Jun. 2014 [112] W. Carballosa, J. Rodrguez, O. Rosario, & J. Sigarreta, “Gromov hyperbolicity of minor graphs”, arXiv:1506.06047, Jun. 2015 51 [113] F. Montgolfier, M. Soto & L. Viennot, “Treewidth and Hyperbolicity of the Internet”, DOI: 10.1109/NCA.2011.11 Conference: Network Computing and Applications (NCA), 2011 10th IEEE International Symposium on. [114] D. 
Lokshtanov, “On the complexity of computing treelength”, Mathematical Foundations of Computer Science 2007, V ol. 4708, pp 276-287 [115] L. Chandrana & C. Subramanian, “A spectral lower bound for the treewidth of a graph and its consequences”, Information Processing Letters, V ol. 87, Issue 4, 31 Aug. 2003, pp 195200 [116] V . Chepoi, F. Dragan, B. Estellon, M. Habib & Y . Vaxs, “Diameters, centers, and approximating trees of delta-hyperbolic geodesic spaces and graphs”. In Proceedings of the Twenty-Fourth Annual Symposium on Com- putational Geometry (pp. 59-68). ACM. [117] V . Gogate & R. Dechter, “A complete anytime algorithm for treewidth”, 20th Conference on Uncertainty in Artificial Intelligence (UAI), 2004. [118] C. Wang, E. Jonckheere & R. Banirazi, “Control and Performance Analysis of Network Topology in Wireless Sensor Networks”, submitted to American Control Conference, 2016 [119] C. Bron & J. Kerbosch, “Algorithm 457: finding all cliques of an undirected graph”, Commun. ACM (ACM) 16 (9): 575577 [120] S.V . Isakov, I.N. Zintchenko, T.F. Rnnow, M. Troyer, ’Optimized simulated annealing for Ising spin glasses’, arXiv:1401.1084 [121] C. Wang, H. Chen & E. Jonckheere, “Quantum versus simulated annealing in wireless interference network optimization”, Scientific Reports, DOI: 10.1038/srep25797, May. 2016 [122] H. Markowitz, “The elimination form of the inverse and its application to linear programming”, Manage. Sci. 3, 255-269, 1957. [123] D. Rose, R. Tarjan & G. Lueker, “Algorithmic aspects of vertex elimination on graphs”, SIAM J. Comput. 5, 266-283, 1976. [124] C. Wang, E. Jonckheere, & T. Brun, “Ollivier-Ricci curvature and fast approximation to tree-width in embed- dability of QUBO problems”. ISCCSP IEEE, Athens, Greece. doi: 10.1109/ISCCSP.2014.6877946 (2014, May 21-23). [125] V . Denchev et al., “What is the Computational Value of Finite Range Tunneling?” arXiv:1512.02206 [126] K. Pudenz & D.A. 
Lidar, “Quantum adiabatic machine learning”, Quantum Information Processing, DOI 10.1007/s11128-012-0506-4, 2012 [127] S. Santra, G. Quiroz, G. Steeg & D.A. Lidar, “Max 2-SAT with up to 108 qubits”, New Journal of Physics, V olume 16, April 2014 52