Optimization of the Combinatoric Closely Spaced Objects Resolution Algorithm with Adiabatic Quantum Annealing

by

John J. Tran

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(Computer Science)

May 2020

Copyright 2020 John J. Tran

Abstract

One of the challenges of automated target recognition and tracking on a two-dimensional focal plane is the ability to resolve closely spaced objects (CSO). To date, one of the best-known CSO-Resolution algorithms first subdivides a cluster of image pixels into equally spaced gridpoints. It then conjectures that an unknown number of targets are located at the centers of those subpixels and, for each set of such locations, calculates the associated irradiance values to minimize the sum of squares of the residuals. The set of target locations that leads to the minimal residual becomes the starting point of a non-linear least-squares fit (e.g. Levenberg-Marquardt, Nelder-Mead, expectation-maximization, or trust-region), which completes the estimation. The overall time complexity is exponential. Although numerous strides have been made over the years, vis-à-vis heuristic optimization techniques, the CSO-Resolution problem remains largely intractable, due to its combinatoric nature. We propose a novel approach to address this computational obstacle. By mapping the CSO-Resolution algorithm to a quantum annealing model, a CSO-Resolution algorithm can then be programmed on an adiabatic quantum optimization device. This will effectively demonstrate that a combinatoric problem, once held to be intractable, might soon be attainable with the proper hardware platform. Before such an implementation can be attempted, the theoretical and mathematical foundation for such an effort must be established. This dissertation is the description and submission of that analysis.

Dedicated to the memory of Dr. Frank J. Macchiarola and Dr. Patrick E. Small.

Acknowledgements

First and foremost, I would like to express my deepest and sincere gratitude to my advisor, teacher, boss, and friend, Dr. Robert F. Lucas, without whom I could not have accomplished this research. Bob, I can never thank you enough for all that you have done for me and my family.

I would like to thank the thesis committee members: Dr. Aiichiro Nakano, Dr. Stephan Haas, Dr. Thomas Gottschalk, and Dr. Kevin Scully. I am very appreciative of the helpful comments from Dr. Jacqueline Curiel, Dr. Kevin Scully, Dr. Darren Semmen, and CDR Dan Davis (USN Ret). Your amazing edits and feedback added much needed clarity to the thesis.

To my friends and colleagues at ISI & USC, thank you for your support and encouragement. Aiichiro, Lizsl, Ken-ichi, Suzana, Greg, Federico, Itay, Ke-Thia, Dan, and Gene, I appreciate the friendship we've built over our time together at USC.

To my friends and colleagues at Aerospace, Shelley, Kent, Don, JD, Carlton, Eugene, Gabe, Nemesio, and Jonathan, thank you for the intellectual and stimulating conversations. Thank you especially to Tom, Kevin and Darren for the rigorous, but amazing, discussions on CSOs and combinatoric algorithms.

To my godmother, Mrs. Mary Macchiarola: through the years, you have been a spiritual tour de force supporting me in the pursuit of academic goals that are congruent to the formation of my faith. Thank you.

To my family, I am grateful to my parents for teaching me the importance of hard work and perseverance in the face of adversity. To my mom, more than just giving me life, you have been a role model for kindness, love and sacrifice. Last but not least, a heartfelt thanks to Johnny, you've inspired me to finish the race, and to Jackie, thank you for your unconditional love through the years; I am forever yours.
RESPICE STELLAM, VOCA MARIAM

Table of Contents

List Of Tables
List Of Figures
List of Algorithms
Chapter 1: Introduction
  1.1 Space Surveillance Overview
    1.1.1 Detection and Data Generation
    1.1.2 Tracking
    1.1.3 Data Fusion
    1.1.4 Resource Scheduling
  1.2 Space Surveillance Research
    1.2.1 Detection
    1.2.2 Tracking
      1.2.2.1 Multiple Hypothesis Tracking
      1.2.2.2 Assignment Problems
    1.2.3 Fusion and Sensor Schedule
  1.3 Computational Complexity Research
    1.3.1 NP Problems
    1.3.2 Quantum Algorithms
    1.3.3 Quantum Gates Model
    1.3.4 Adiabatic Quantum Annealing Model
  1.4 Research Motivation
Chapter 2: The CSO Framework
  2.1 CSO-Resolution as a Prelude to Tracking
    2.1.1 Image Acquisition and Processing
    2.1.2 Detection Data Handover to Tracking Component
    2.1.3 Tracking and Estimation
  2.2 CSO Physical Model
    2.2.1 Focal Plane Model
    2.2.2 Point Source Model
      2.2.2.1 Spread Function
      2.2.2.2 Optimization of Energy Spread Function
      2.2.2.3 Noise Model
    2.2.3 Focal Plane Construction Algorithm
  2.3 CSO-Resolution Problem
  2.4 CSO-Resolution Solution
    2.4.1 Traditional Approach to CSO-Resolution
      2.4.1.1 Continuous Clusters of Pixels
      2.4.1.2 Construction of a Subgrid Search Space
      2.4.1.3 Point Source Search
      2.4.1.4 CSO-Resolution Algorithm
      2.4.1.5 Refinement
    2.4.2 Advanced CSO-Resolution Techniques
  2.5 Complexity Analysis
  2.6 CSO-Resolution with Parallel Computing Models
    2.6.1 Data Streaming Model
    2.6.2 MapReduce Model
    2.6.3 Analysis of Parallel Algorithms
  2.7 Summary
Chapter 3: CSO-Resolution is NP-Complete
  3.1 Background and Definitions
  3.2 Similar NP Problems
    3.2.1 Closest Vector Problem (CVP)
    3.2.2 Nearest Subspace Problem (NSP)
    3.2.3 Multidimensional Knapsack Problem (MKP)
  3.3 Formal Proof that CSO-Resolution is NP-Complete
    3.3.1 CSO-Resolution Problem Definitions
    3.3.2 Polynomial Transformation MKP to CSO-Resolution
    3.3.3 CSO-Resolution Reduction to Decision Problem
  3.4 Algorithmic Progresses of Related NP Problems
  3.5 Summary
Chapter 4: CSO-Resolution Quantum Annealing Algorithm
  4.1 Background
    4.1.1 Quantum Models
    4.1.2 Quantum Problems
    4.1.3 Optimization and Probabilistic Problems
  4.2 Optimization via Quantum Annealing
    4.2.1 Probabilistic Confidence
    4.2.2 Ising Model
    4.2.3 Adiabatic Quantum Optimization
    4.2.4 Graph Partitioning Example
  4.3 CSO-Resolution with Quantum Annealing
    4.3.1 Mapping the Knapsack Problem to an Ising Spin Glass
    4.3.2 CSO-Resolution Quantum Annealing Estimation Model
    4.3.3 CSO-Resolution Quantum Annealing Algorithm
  4.4 Programming the CSO-Resolution on an Annealer
    4.4.1 Ising Parameters Extraction
    4.4.2 Limitations of Chimera Graph Architecture
  4.5 Analysis of the CSO-Resolution Quantum Algorithm
    4.5.1 CSO-Resolution Quantum Algorithm Complexity
    4.5.2 Minimum CSO-Resolution Configuration
    4.5.3 CSO-Resolution Quantum Algorithm Representation
  4.6 Improvements of the Quantum Representation
  4.7 Summary
Chapter 5: Conclusion
  5.1 Contributions
  5.2 Shortcomings
  5.3 Future Work
Reference List

List Of Tables

2.1 Key OSM information
2.2 CSO problems complexity growth
3.1 CSO-Resolution problem variables
3.2 Comparison of NP problems

List Of Figures

1.1 Space surveillance program
2.1 Space surveillance flow
2.2 Post-processed image data
2.3 Multi-stage track initialization procedure
2.4 Focal plane image and radar scan comparison
2.5 Focal plane in detail
2.6 Optimization of Energy Spread Function
2.7 Focal plane image with two CSO targets
2.8 Focal plane with two point sources
2.9 Focal plane with targets resolved
2.10 CSOs as spatial clusters
3.1 Complexity classification
4.1 Graph partitioning
4.2 Chimera graph
4.3 Graph partitioning

List of Algorithms

1 Focal Plane Generator
2 Traditional Approach to CSO-Resolution
3 CSO-Resolution Algorithm
4 Recursive Expansion of Combinations
5 CSO-Data Streaming
6 CSO-MapReduce
7 CSO-Resolution Quantum Algorithm

Preface

There are many problems in space technology with computational requirements that cannot realistically be met by traditional digital computers, even massively parallel computers. Many of these problems must be solved in very short periods of time to be useful in detecting, identifying and characterizing fast-moving objects with very little sensory input. Failure to do so in the time allotted could have catastrophic consequences.

Advances in sensor technology and the continuing evolution of space vehicles continue to produce an increasing amount of data. Additionally, researchers have witnessed the complexity of data analysis growing proportionally with the exponential growth of the amount of data being collected. The ability of traditional computers to manage this data, as postulated by Moore's Law [Moo65], will approach its limit in the near future. Moreover, many of the underlying combinatoric algorithms driving these space applications are nondeterministic polynomial (NP) and, in some cases, exponential.
Among them is the class of multitarget-multisensor tracking algorithms, whose solutions cannot be discovered in polynomial time on conventional computing platforms. To this end, this dissertation claims that quantum annealers might be a suitable platform for overcoming these challenges.

Quantum computing algorithms, such as Grover's search algorithm [Gro96], the Quantum Fourier Transform [Cop02] and Shor's algorithm [Sho97], could permit the processing of increasingly large sets of data in asymptotic polynomial time. While the appreciation for quantum computing is well represented in a number of research areas, e.g. cryptography, artificial intelligence, etc., the application of these algorithms to space systems is, to date, largely absent from the literature.

This thesis investigates the use of quantum computing to solve one of the most computationally intensive algorithms found in space systems: the resolution of closely spaced objects (CSO). In particular, it examines the use of open-system adiabatic quantum annealing, a form of quantum-enhanced optimization. The research methodology of this thesis includes an in-depth theoretical examination of the CSO-Resolution algorithm's complexity and a novel approach to tackle the CSO-Resolution problem. The document is organized as follows:

Ch 1 Introduction. This chapter introduces the CSO-Resolution problem, the motivation for this research, and related works.

Ch 2 Physical CSO Model. This chapter provides a theoretical and practical background on the challenges of the CSO-Resolution problem.

Ch 3 CSO-Resolution Problem is NP-complete. This chapter presents a theoretical proof that the CSO-Resolution problem belongs to a family of NP problems and is, in fact, NP-complete.

Ch 4 CSO-Resolution Quantum Annealing Algorithm. This chapter introduces a quantum algorithm to solve the CSO-Resolution problem, bounds the problem size, and discusses the techniques for optimizing the quantum representation of the solution.

Ch 5 Conclusion. This chapter summarizes the research with an optimistic outlook: although current quantum annealing systems lack a sufficient number of qubits and connectivity to solve CSO-Resolution problems, when such systems do become available, we will already have laid the foundation and shown the plausibility of a practical approach to this challenge.

This thesis provides an analysis and substantiation of a proof-of-concept approach demonstrating that an important, pragmatic and complex space surveillance and tracking algorithm can be reformulated and executed on an adiabatic quantum optimization system in a timely manner. By bridging the gap between theoretical quantum computing and a real (yet currently intractable) problem, the research addresses the space surveillance community's ongoing demand for higher-fidelity modeling and simulation architectures [STSQ14] and faster times to solution. Finally, this effort could pave the way for extensibility by creating a template for developing algorithms to solve other challenging problems using the adiabatic quantum annealing optimization paradigm.

John J. Tran
Los Angeles, California
Spring 2020

Chapter 1: Introduction

The field of space surveillance research touches many science and engineering disciplines. These include, but are not limited to, exploring deep space, tracking satellites, and performing missile detection. As such, space surveillance research impacts national security, aeronautic safety, and space exploration. Although the field of research is broad, the focus here is on four areas: detection, tracking, sensor fusion, and sensor scheduling.
Figure 1.1: A space surveillance program can be broken down into four distinct areas: (a) detection, (b) tracking, (c) fusion, and (d) sensor scheduling.

Figure 1.1 illustrates what can be thought of as a pipelined work flow of subsystems. The first area focuses on the detection of objects. The second area concerns the tracking of the detected targets. Once targets of interest are detected and tracked, the prediction of their future locations becomes important. In the case of national defense applications, such as missile detection, the ability to correctly detect and track lethal objects rapidly is of vital importance. The fusion of detection and tracking windows provides the "big picture" situation awareness of the operational theater. Finally, research on scheduling deals with resource management and allocation. Together, each of the areas works synergistically to support the overall space surveillance research program.

1.1 Space Surveillance Overview

All of the tasks described below are interrelated, as data are pipelined from one stage to the next. A complete space surveillance system is generally effective if all components are accurate and work together in a cohesive fashion. We find that the underlying algorithms for each of the stages have their own set of challenges. Over the years, numerous heuristic and optimization techniques have been developed and have achieved reasonably satisfactory results; these have been measured by comparing the predictive models against Monte Carlo simulations [Got14]. We elaborate on each of the stages in the following sections.

1.1.1 Detection and Data Generation

The target detection phase, also sometimes referred to as data generation, is arguably the most important aspect of space surveillance. The detection of potential targets in space often begins with the capturing of a static two-dimensional image. The methods of acquisition vary depending on the type of sensor and the acquisition technology, e.g. infrared or radar. For targets that are sufficiently spaced apart, it is simple to capture an image and identify the objects in an automated fashion. It is not as simple to acquire targets that are closely spaced, at least not in an automated fashion.¹

¹ In fact, even with human-in-the-loop methods, the task remains challenging.

1.1.2 Tracking

Targets, e.g. satellites or missiles, are of interest when moving, and tracking their position, velocity and other kinematic properties can pose a challenge to researchers. Tracking the kinematic properties of targets has been of interest in the space surveillance arena for years, e.g. radar systems that track airplane motion or missile detection platforms that track missile launches and predict their final destination. Needless to say, implementing and sustaining accurate metrics of velocity and acceleration are essential for the accurate prediction of destination location.

1.1.3 Data Fusion

Data fusion is the process of aggregating sensor data from multiple viewing windows and sources to increase fidelity. We consider two types of sensor systems: radar and infrared. In general, radar systems generate a tremendous amount of energy to acquire images of their targets. For example, the Cobra Dane, one of the largest military radar systems, generates 900 kW of average radiated energy [Del16]. It is neither feasible nor sustainable to emit unlimited energy into open space. Therefore, it is often necessary for sensors to re-aim the narrow field of view of their focal plane. In contrast, infrared sensors, which are passive in nature, consume much less energy and can provide a wider field of view. However, infrared sensors do not offer depth information and therefore they fail to adequately convey key information of their targets [Sch19].
For this reason, the ability to fuse together detection and tracking data from multiple sensor sources and viewing windows is essential to enhance tracking systems and provide a more comprehensive picture.

1.1.4 Resource Scheduling

Because sensor resources are limited, the scheduling of their "staring" (or aiming) window should be done in an efficient manner. This is a balance between availability and coverage; in an optimal configuration, the system would minimize loss of coverage using an efficient scheduling algorithm. The scheduling of sensor focal plane placement is akin to the well-known traveling salesperson problem.

1.2 Space Surveillance Research

Over the years, a significant amount of research has been devoted to the field of target tracking. This section summarizes a brief survey of relevant topics. Albeit not exhaustive, it is a representative sample of past studies and active research in this area.

1.2.1 Detection

The earliest and most often cited work in the field of CSO-Resolution is the seminal paper by M.J. Tsai, who proposed a model for detection and estimation of closely spaced objects. Unresolved targets' position and intensity are estimated using the Akaike Information Criterion (AIC) and a maximum likelihood estimator [Tsa80]. In a later paper, Tsai and Rogal employed a "staring" sensor, as opposed to the traditional scanning sensors, to estimate the position of targets on a detector array [TR89].

Nowakowski, et al, postulated that temperature discrimination is difficult for passive sensors, and their work focused on how temperature can impact sensor measurements. Their research paper also provides an illuminating background on multispectral passive sensor detectors, which are an important component of object detection on a focal plane [NWK90].

Reagan and Abatzoglou described a model-based maximum likelihood estimation technique for the estimation of the intensities and positions of CSO [RA93]. Bartolac, et al, discussed an alternative approach to the CSO-Resolution problem. Theirs was an attempt to show that a complex neural network can approximate an arbitrary number of CSO targets with varying accuracy. For this, they built a training neural network to approximate a multi-dimensional mapping from the detector to the CSO parameter space [BA94].

Kirubarajan, et al, studied closely spaced cells and built a corresponding tracking model for observation of tracks. Their approach relies on the multi-assignment algorithm and modifies the cost function for decreasing target size [KBSP01].

Salmond, et al, proposed a different approach using a Bayesian recursive filter. Theirs included the presence of clutter in the scenario and employed a bootstrap filter [SFG97]. Lilo and Schulenburg also proposed a Bayesian CSO-Resolution approach [LS94, LS02]. Their approach, like many others, does not address the large-scale computational cost, and it sacrifices accuracy by using probabilistic resolutions.

Korn, et al, presented two approaches for solving midcourse CSO-Resolution problems using an infrared (IR) focal plane. Their first approach estimated the targets' coordinates on the focal plane one frame at a time. Their second approach examined a sequence of frames. Both approaches are based on measuring the radiant intensity and employ least-squares minimization [KHF04].

Macumber, et al, studied unresolved closely spaced objects (UCSO) problems. In particular, they focused on how to hierarchically resolve these objects using standard noise reduction techniques [MGFP05].

Lin, et al, tracked CSO targets using sequences of images with the Probability Hypothesis Density (PHD) filter and employed the multi-assignment tracking method. Furthermore, the authors made good use of the quantum-behaved particle swarm optimization (QPSO) method² to optimize the objective function [LXA+11b, LXX+11, LXA11a].
² Despite its namesake, the algorithm and implementation have nothing to do with quantum information processing.

Most recently, recognizing that CSO-Resolution problems are combinatoric in nature, Tran, et al, are studying the CSO-Resolution problem by performing a theoretical mapping of the problem to the adiabatic quantum annealing optimization model [TSSL14, STSQ14].

1.2.2 Tracking

1.2.2.1 Multiple Hypothesis Tracking

Reid introduced an important paper on tracking multiple targets in a cluttered environment [Rei79]. Here the author developed Multiple Hypothesis Tracking (MHT), which is different from Bar-Shalom's probabilistic association [CK08]. Cox and Hingorani provided an efficient (linear time) implementation of Reid's MHT algorithm. Their approach is suitable for motion video [CH96]. Gadaleta, et al, grouped CSO targets into clusters and tracked these clusters using the multiframe assignment (MFA) technique. Their idea was to extend the MFA technique into multiframe cluster tracking [GPRS04]. Blackman summarized the MHT method, which is used for solving data association problems. He analyzed the MHT's advantages over the single hypothesis method (SHM). Blackman's paper provides an informative background on tracking problems in general [Bla04]. Danchick and Newman reformulated Reid's Multiple Hypothesis Tracking method with a linear time tracking algorithm for CSO clusters [DN06].

In summary, MHT is a useful and simplified approach to tracking. Its primary drawbacks include an operating environment whose low-resolution dataset detracts from the needed fidelity. What's more, MHT implementers often concede defeat to combinatoric explosion.

1.2.2.2 Assignment Problems

An assignment problem is an approach to tracking algorithms that tries to correlate potential tracks with a weighted scoreboard. The algorithm retains potential tracks that have a high score and eliminates those with a low score from future consideration. The biggest drawback to this approach is the accrual of "mis-associations" due to incorrect initial guesses or a bad CSO-Resolution implementation, which can render the detection system effectively useless.

Poore, et al, formulated tracking as an NP-hard assignment problem. This paper goes into detail describing data association as a combinatorial association problem [Poo94]. Chen, et al, examined the idea of new tracks being spawned in the process of estimation. Furthermore, their paper provided an excellent background on the three approaches to tracking: nearest neighbor (NN), strongest neighbor (also known as JPDA or Joint Probabilistic Data Association), and MHT [CK08].

1.2.3 Fusion and Sensor Schedule

Unlike the tracking portions of space surveillance, data fusion and sensor schedules are less impacted by the detection (in particular the CSO-Resolution) portion. Therefore, they are not included in the literature survey.

1.3 Computational Complexity Research

1.3.1 NP Problems

While there are many papers devoted to NP problems and complexity, the following two papers highlight the subtle nuances of problems that are theoretically combinatoric but in practice do not exhibit combinatoric properties. When studying CSO-Resolution, we consider the metrics and analyses they provide to properly determine the complexity of CSO-Resolution algorithms. In other words, is the problem at hand a worthy candidate for research?

Cheeseman, et al, showed that a set of critical parameters can sometimes tip the scale between easy and hard instances of NP problems. The idea is that NP cases of hard problems are the boundary cases and that, for most problems, the solutions are achievable in polynomial time. However, there are degenerate cases which tip the running time into NP space. So the question becomes: what are the parameters that elevate a problem from easy to hard?
[CKT91]

Fomin and Kaski present a quick survey of the concept of Exact Exponential Algorithms (EEA). Here they contrast the EEA approach with parameterized algorithms and approximation algorithms. They used the canonical MAX-2-SAT, graph coloring, and Hamiltonian path examples to illustrate their findings [FK13].

1.3.2 Quantum Algorithms

There is a rich body of literature on quantum information processing (QIP) research. We highlight here two major works in the advancement of QIP from an algorithm and hardware perspective.

1.3.3 Quantum Gates Model

In terms of advancements in quantum computing algorithm development, we find the research done by Shor on a quantum factoring algorithm [Sho97] and Grover on a quantum database access algorithm [Gro97] to be foundational. Their contributions include proof-of-concept demonstrations for utilizing a generalized quantum circuit to attack NP problems.

1.3.4 Adiabatic Quantum Annealing Model

Conceptually, Kato was said to be one of the first to formalize the adiabatic theorem, back in 1950 [Kat50]. In 2000, Farhi, et al, authored the often-cited paper that introduced the adiabatic theorem [FGGS00] as a tool for quantum optimization. Mizel, Lidar, Mitchell, and, separately, Aharonov, et al, made important advancements in Adiabatic Quantum Computation (AQC) by theoretically proving that the AQC model is equivalent to the Generalized Quantum Circuit (GQC) model [MLM07, AvDK+08].

The first commercially available quantum computing device was introduced by D-Wave Systems in 2011. More precisely, it is an open-system adiabatic quantum annealer: a specialized optimization device.
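For reference, the adiabatic optimization such devices implement interpolates between an easy-to-prepare driver Hamiltonian and a problem Hamiltonian that encodes the cost function. The standard textbook form (following the schedule introduced in [FGGS00]; the notation here is ours, not a formula from this dissertation) is

```latex
H(s) = (1 - s)\, H_B + s\, H_P, \qquad s = t/T \in [0, 1],
\quad\text{with}\quad
H_B = -\sum_i \sigma_i^x,
\qquad
H_P = \sum_i h_i \sigma_i^z \;+\; \sum_{i<j} J_{ij}\, \sigma_i^z \sigma_j^z .
```

The adiabatic theorem guarantees the system stays near its instantaneous ground state, and hence ends in the minimum of the Ising cost encoded by $H_P$, provided the total anneal time $T$ is large compared to the inverse square of the minimum spectral gap of $H(s)$.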
D-Wave's claim to be a quantum device was certainly not without controversy. After its announcement, numerous contentious arguments ensued in both academia and social media.³ In 2012, Spedalieri, et al, made a breakthrough, moving one step closer to proving entanglement on the D-Wave system [Spe12]. They showed that the D-Wave is in fact exploiting entanglement, a quantum property.⁴ Most recently, Lanting, et al, experimentally measured the value of a witness for entanglement, thereby providing "an encouraging sign that quantum annealing [on the D-Wave] is a viable technology for large-scale quantum computing" [LPS+14]. With all that said, we acknowledge that, to date, this platform has not yet been observed to provide any hope for "quantum" speed-up.

³ The most notable and vocal critic of D-Wave was Scott Aaronson, the self-declared D-Wave Chief Critic, whose blog http://www.scottaaronson.com/blog/ vocalizes the controversy.

⁴ Their work demonstrated, with numerical tests, a way to determine the entanglement or separability of a state even if there is not enough information to determine a density matrix.

Despite the controversies surrounding D-Wave, we believe a quantum optimization device for combinatoric problems has promising potential, including renewed efforts by researchers to reframe classical problems such as Ising spin glass optimization problems. Max-SAT combinatorial problems using adiabatic optimization have been studied exhaustively from a theoretical standpoint [Cho08]. Work by McGeoch and, separately, Boixo and Santra experimentally measured and evaluated the D-Wave system's performance as a platform for combinatorial optimization [SQSL14, MW13]. Lucas provided Ising formulations for Karp's 21 NP problems, with derivations and mappings of these problems to Ising spin glass problems [Luc13].
These mappings are proofs of principle and rely on a number of generalized assumptions, for example, overlooking the challenges of actually programming any specific device.

1.4 Research Motivation

Most studies of the CSO-Resolution problem have been devoted to improving sensor technology, better capturing image resolution, and developing heuristics to optimize local solutions. To our knowledge, no effort has been devoted to complexity analysis of algorithms for the CSO-Resolution problem or to developing algorithms that specifically address the exponential complexity. In fact, we suspect that the prohibitive computational cost may constitute just the type of problem that is too difficult to be solved efficiently and in a timely manner on a traditional non-quantum computer. To that end, we investigate a mapping of the CSO-Resolution problem to a form suitable for solution by the adiabatic quantum annealing optimization model, such as the ones implemented by the D-Wave devices.

Chapter 2
The CSO Framework

In this chapter, we offer a treatment of the CSO-Resolution framework model similar to that of Korn [KHF04], Macumber [MGFP05], and Gottschalk [Got14], but also expand on their work with our own contributions. We start with a background discussion by offering context and insight into how the CSO-Resolution framework fits into the larger space surveillance system. We then delve into a detailed physical description of the focal plane and follow that with the point source model. These two concepts together make up the building blocks for the CSO-Resolution application, and with them we develop an estimation model. We analyze two common parallel computing strategies to potentially tackle the CSO-Resolution problem. Finally, we conclude with a computational complexity analysis to quantify the computational cost of the commonly accepted standard approach to solving the CSO-Resolution problem.
2.1 CSO-Resolution as a Prelude to Tracking

The goal of this thesis, as previously stated, is to address one of the challenges of the first stage of space surveillance: the detection of targets in a low-fidelity environment. It is, however, important to frame the CSO-Resolution discourse in relation to space surveillance target-tracking applications. For the big-picture perspective, let us refer to the notional surveillance processing flow (Figure 2.1). While philosophically obvious, it is still worth noting that the projection (or estimation) of the targets' path and position is only realistically possible if they have been properly tracked for a period of time, and tracking is only possible if the same targets have been correctly detected and identified.

Figure 2.1: Space surveillance flow: 1) detection of targets, 2) tracking of targets, and 3) projection of targets. The resolution of CSO targets is the third step of the detection stage. [Stages shown: Image Acquisition, Image Processing, CSO-Resolution, Target Characterization, CSO Data Hand-Over, Track Initiation, Filtering, Projection; grouped as DETECTION, TRACKING, ESTIMATION.]

2.1.1 Image Acquisition and Processing

The detection of targets begins with the acquisition of the raw image data, then the application of image processing algorithms, and finally the resolution of the targets of interest. From these three steps, we can then accurately characterize the targets. Target characterization is the process of assigning attributes and matching profiles of a target type to a detection. An "image" of a scenario can be acquired from a sensor, such as a camera or a radar. For simulation purposes, an image can also be generated, for example, using a recursive fractal generator [Tra98] or by placing targets on a grid with fixed dimensions. The placement location of the targets is chosen using a random draw that is statistically based on noise and measurement models of the sensors. The noise models vary amongst the different sensors.
For example, an image that has been acquired using stereo telescopes will have a more accurate profile of the target's energy dissipation than that of a single telescope. Using various image processing procedures, sets of Object Sighting Messages (OSMs) are then generated from the raw data of an image. Ideally, these OSMs are detections of active real target objects. False detections could initiate a chain of misinformation in which errors would propagate further down the workflow, with no opportunity for recovery [Got14].

2.1.2 Detection Data Handover to Tracking Component

The datasets for the current problem can be regarded as streams of single-time snapshots, with significant image processing required to determine an appropriate collection of OSMs from the raw sensor data. A schematic characterization of a single image/dataset is shown in Figure 2.2.

Figure 2.2: Single data scan and extracted object information.

The shaded/colored squares in Figure 2.2 represent active pixels after (possibly significant) image processing. Each cluster of adjacent active pixels can potentially give rise to an OSM. For the task at hand, the working assumption for the data content of each OSM is shown in Table 2.1.

    Frame number     Time measurement for the underlying dataset
    Area             Geometry of the connected pixel collection within the OSM
    Shape            Geometry for extended OSMs
    Orientation      Measure for extended OSMs
    Identification   Underlying target ID
    Position         Coordinates (X, Y)
    Irradiance       Intensity/brightness measurement

Table 2.1: Key OSM information captured during the detection phase.

The detection task involves more than just determining a target's spatial and irradiance (or intensity) information. The "area" can be thought of as the number of active pixels associated with the OSM. The shape and orientation fields provide additional information for multi-pixel OSMs.
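The OSM fields of Table 2.1 map naturally onto a simple record type. The following sketch is purely illustrative: the struct name, field names, and types are our own assumptions and are not part of the cited framework.

```cpp
#include <cassert>

// Hypothetical record for one Object Sighting Message (OSM).
// The fields mirror Table 2.1; the types are illustrative assumptions.
struct ObjectSightingMessage {
    int    frame_number;  // time index of the underlying dataset
    int    area;          // number of active pixels in the cluster
    double shape;         // geometry measure for extended OSMs
    double orientation;   // orientation measure for extended OSMs
    int    target_id;     // underlying target identification
    double x, y;          // position coordinates on the focal plane
    double irradiance;    // intensity/brightness measurement
};

// A single-pixel OSM covers exactly one active pixel.
inline bool isSinglePixel(const ObjectSightingMessage& osm) {
    return osm.area == 1;
}
```

A record like this would be the unit of data handed over from the detection phase to the tracking phase described next.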
The key OSM information (Table 2.1) is passed on to the tracking phase, which is the next stage in the software pipeline [Got14].

2.1.3 Tracking and Estimation

The tracking stage is responsible for partitioning the collective datasets from the sensor into track objects associated with distinct, perceived target objects. Based on assumed models for the underlying targets, the tracks provide (smoothed) estimates of the targets' position and velocity versus time. The tracking of targets implicitly involves the related task of identifying and ignoring false detections in the sensor datasets. Track initialization (Figure 2.3) is effectively a batch process working with multiple active data scans at any one time. This requires significant computational resources for high-frame-rate sensors, for which a single underlying real target could remain well within a single pixel for a number of consecutive datasets [Got14].

Figure 2.3: Multi-stage track initialization procedure.

The brute-force approach to initialization, e.g., requiring 16 hits in 20 scans, appears to be infeasible in terms of computational requirements. Herein lies yet another candidate for combinatorial algorithm optimization and an opportunity for future research, which will be discussed in Chapter 5. The estimation stage is the kinematic prediction, i.e., position, velocity, and acceleration, of a target object's movement. Applications such as missile defense systems necessitate the accurate projection of the targets of interest. Work by Blackman, Hovanessian, Schlessinger, and Gottschalk further explores the tracking and estimation model, which is beyond the scope of this research [Bla04, Hov88, Sch19, Got14]. In summary, tracking and estimation are built on the premise that CSO detection and resolution have been correctly processed further upstream.
2.2 CSO Physical Model

Having established that the CSO framework is an essential prelude to tracking and how it fits into the overall space surveillance program, we now turn our attention to its physical model. Our discussion of the CSO physical model is organized as follows: first, we describe the focal plane, which aligns with the image acquisition step; next, we examine the point source model, whose characteristics are essential for proper target detection and discrimination processing; finally, we introduce an algorithm that efficiently generates a synthetic focal plane representative of a measured sensor image.

2.2.1 Focal Plane Model

There are many scientific and engineering applications where a low-resolution image masks the underlying details containing critical information. Consider, for example, that a photograph is taken by telescope of distant objects in space and that the measured image contains several bright spots. If these bright spots are closely spaced and their relative size, on the image plane, is smaller than the image's pixel size, then the image cannot convey some fundamental details about the bright spots: the exact number of light sources, their relative locations, and their levels of brightness. Another application where the resolution of the measurement masks the underlying details is a radar scan of airborne and space-borne vehicles. Again, if the resolution of the radar is insufficient, then the system cannot accurately convey the number of vehicles and their location, velocity, and acceleration. In fact, applications of CSO problems are fairly common in radar research [HS09]. In Figure 2.4, we compare two focal planes that are representative of an infrared image and a radar scan capture.

Figure 2.4: (a) A focal plane of an infrared image and (b) a focal plane of a radar scan.

In both examples, the accuracy of the targets of interest is obscured by the resolution of the sensor platforms.
Although the focal plane patterns are synthetically generated and ground truth is available a priori, our approach to the CSO-Resolution research assumes that the underlying truth is hidden. Formally, a focal plane is a two-dimensional image consisting of K objects, with each object k located at position (x_k, y_k) and having an intensity a_k. For simplicity of notation, we denote this point source with the tuple (x_k, y_k, a_k). We identify a pixel by its row and column as (i, j), where i is the row and j is the column.

Figure 2.5: A focal plane with two energy source objects, (x_1, y_1, a_1) and (x_2, y_2, a_2), inside the center pixel (1, 1).

Figure 2.5 illustrates a focal plane with all of its pixels having the same width dx and height dy. The two energy sources are located within the borders of the center pixel. Although the energy sources k ∈ {1, 2} are confined to the center pixel, they also contribute energy to the surrounding pixels. This energy spread is said to be "discretized" at the edges of the pixels, and presumably could be more accurately modeled using differential equation techniques. However, this is not necessary at the scale of our CSO physical model.

2.2.2 Point Source Model

We define a point source as a dimensionless target object on a focal plane that irradiates energy, which can be in the form of light. This point source, or energy source, has an intensity level and a dissipation rate. The energy intensity level spreads across the focal plane but is modeled to discretize uniformly across the pixel boundary. When considering the properties of a point source, we look at the energy spread function, the signal-to-noise ratio, the sensor's sensitivity, and the estimation of units of energy per pixel.

2.2.2.1 Spread Function

Although there are many ways to model a point source, we consider a simple energy distribution model.
Equation 1 shows the contribution to pixel (i, j) from a point source at location (x_k, y_k) with a rotationally symmetric Gaussian point spread function (PSF):

    I_{x_k y_k a_k}(i, j) = \frac{a_k}{2\pi\sigma^2} \int_{i-0.5}^{i+0.5} e^{-(\xi - x_k)^2 / 2\sigma^2} \, d\xi \int_{j-0.5}^{j+0.5} e^{-(\eta - y_k)^2 / 2\sigma^2} \, d\eta        (1)

For each pixel (i, j) of the focal plane, the energy from a point source is computed with Equation 2, where a_k is the intensity/energy of the k-th object, x_k is the x position, and y_k is the y position of the target. The key insight here, based on the conservation of energy, is that the measured irradiance at each pixel position (i, j) is assumed to be the cumulative contribution from each of the K objects. The total measured energy on pixel (i, j) is:

    I_{\text{meas}}(i, j) = \sum_{k=1}^{K} I_{x_k y_k a_k}(i, j)        (2)

The energy on detector (EOD) is the ratio of the energy on a pixel to the input energy, and its accuracy is specific to the instrumentation. In other words, the accuracy of the sensor is a function of the point source energy output in relation to the sensitivity of the sensor [Sch19].

2.2.2.2 Optimization of the Energy Spread Function

When computing the intensity of a pixel, we can see that Equation 1 imposes a computational tax, because the numerical approximation of an integral is calculation intensive. This is especially true for focal planes with a large number of pixels. A more effective and efficient technique for computing the EOD is used in this research. This technique involves the use of the erf function [CWL+16]. Here we define Φ(α) as the error estimation of a spread:

    \Phi(\alpha) = \frac{1 + \operatorname{erf}(\alpha / \sqrt{2})}{2}        (3)

Equation 1 can now be expressed in a more efficient form. Figure 2.6 illustrates how the efficient energy function Q(x, y) contributes to the pixel with corners at (x_1, y_1) and (x_2, y_2).

Figure 2.6: Q computes the energy contribution from a target at (x, y) to a pixel with corners (x_1, y_1) and (x_2, y_2).
    Q(x, y) = \left[\Phi\!\left(\frac{x_2 - x}{\sigma_E}\right) - \Phi\!\left(\frac{x_1 - x}{\sigma_E}\right)\right] \left[\Phi\!\left(\frac{y_2 - y}{\sigma_E}\right) - \Phi\!\left(\frac{y_1 - y}{\sigma_E}\right)\right]        (4)

where σ_E is the sigma value derived from a fixed EOD value:

    \sigma_E(\text{eod}) = \frac{1}{2\sqrt{2}\,\operatorname{erf}^{-1}(\sqrt{\text{eod}})}        (5)

Equation 4 estimates the energy spread function Q. The overall computation cost for a focal plane is polynomial and, in fact, is O(n).

2.2.2.3 Noise Model

Because a measured focal plane is not perfect, we need to account for noise in a focal plane model. For most applications, a simple unit Gaussian white noise representation centered around 0, N(μ, σ), is sufficient. The complete spread energy function is now:

    Q(x, y) = \left[\Phi\!\left(\frac{x_2 - x}{\sigma_E}\right) - \Phi\!\left(\frac{x_1 - x}{\sigma_E}\right)\right] \left[\Phi\!\left(\frac{y_2 - y}{\sigma_E}\right) - \Phi\!\left(\frac{y_1 - y}{\sigma_E}\right)\right] + N(\mu, \sigma)        (6)

This model has been shown experimentally to represent the noise in many of the existing sensors. In fact, without the noise model, most conjugate gradient search methods would descend onto the correct solution without requiring an initial guess [BV04].

2.2.3 Focal Plane Construction Algorithm

To construct a synthetic focal plane given a set of targets, we turn to Algorithm 1, which constructs a focal plane by summing the intensities from all targets on each of the pixels in the focal plane.

Algorithm 1 Construct a focal plane from a set of targets.
 1: function FocalPlane(C)                      ▷ C is the set of closely spaced targets
 2:     for all pixels q ∈ FP do
 3:         FP_q ← 0                            ▷ Initialize FP
 4:     end for
 5:     for all pixels q ∈ FP do
 6:         for all targets p ∈ C do
 7:             FP_q ← FP_q + CalcPixel(q, p)
 8:         end for
 9:     end for
10:     return FP
11: end function

The function CalcPixel(), which is derived from Equation 6, computes the contribution from a target p to a pixel q. Note that a pixel's coordinate is represented on a two-dimensional grid, whose index is (i, j). This function calculates the contribution from a point source to each pixel with boundaries at corners (x_1, y_1) and (x_2, y_2), and it is proportional to the irradiance (or intensity).¹
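Equations 3 through 6 and Algorithm 1 translate almost directly into code. The following sketch is our own illustration (the names Phi, Q, Target, and buildFocalPlane are ours): the noise term of Equation 6 is omitted for determinism, and σ_E is taken as a parameter because standard C++ provides erf but not the inverse erf required by Equation 5.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Phi (Equation 3): cumulative spread of a unit Gaussian, via erf.
double Phi(double alpha) {
    return (1.0 + std::erf(alpha / std::sqrt(2.0))) / 2.0;
}

// Q (Equation 4), scaled by the source amplitude a: energy contribution
// from a point source at (x, y) to the pixel with corners (x1, y1), (x2, y2).
double Q(double x, double y, double a,
         double x1, double y1, double x2, double y2, double sigmaE) {
    return a * (Phi((x2 - x) / sigmaE) - Phi((x1 - x) / sigmaE))
             * (Phi((y2 - y) / sigmaE) - Phi((y1 - y) / sigmaE));
}

struct Target { double x, y, a; };  // the (x_k, y_k, a_k) tuple

// Algorithm 1: build a rows-by-cols focal plane from a set of targets on a
// unit-pixel grid, where pixel (i, j) spans [i, i+1) x [j, j+1).
// The noise term of Equation 6 is deliberately left out here.
std::vector<std::vector<double>> buildFocalPlane(
        const std::vector<Target>& targets,
        int rows, int cols, double sigmaE) {
    std::vector<std::vector<double>> fp(rows, std::vector<double>(cols, 0.0));
    for (int i = 0; i < rows; ++i)
        for (int j = 0; j < cols; ++j)
            for (const Target& t : targets)
                fp[i][j] += Q(t.x, t.y, t.a, i, j, i + 1, j + 1, sigmaE);
    return fp;
}
```

By conservation of energy, summing the plane built from a single well-centered target should recover (nearly all of) that target's amplitude, which makes a convenient sanity check on the implementation.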
¹ This is also referred to as the "amplitude" of the pixel in the energy distribution parlance.

2.3 CSO-Resolution Problem

One of the major challenges of automated target recognition and tracking on a two-dimensional focal plane is the detection of objects so closely spaced together that their intensity profiles merge to the point that the objects no longer appear distinct. Figure 2.7 illustrates a simple example of a focal plane image. Perhaps not apparent to the naked eye, this focal plane contains closely spaced objects and would be difficult to characterize in the absence of details that are both requisite and absent. Ultimately, we hold that the CSO-Resolution problem is a challenging non-linear optimization problem with many local solutions, exacerbated further by the noise and inexact instrumentation of the sensors. As with many gradient search problems, CSO-Resolution requires a good initial guess, which allows the algorithm to search in the correct neighborhood [BV04].

Figure 2.7: A two-dimensional focal plane with two closely spaced objects located 1.25 pixels apart. The image on the left is shown without the point sources, while the one on the right reveals their locations. When masked, it is virtually impossible to guess the locations and intensities of the targets.

Given an image such as the one shown in Figure 2.7, we wish to determine how many objects are present on the focal plane, their locations, and their intensities. Formally, the CSO-Resolution problem is to identify the correct point source configuration, which embeds the following information about the targets:

- The number of point source targets
- The location of each point source target
- The intensity of each point source target

In summary, the goal of the CSO-Resolution problem is to estimate the best point source configuration.
The main challenge with this problem is the obscuration of information due to the low fidelity of sensor data and the presence of noise in the CSO model.

2.4 CSO-Resolution Solution

The CSO-Resolution problem has been challenging, and researchers have made numerous attempts at attacking it. This section is a systematic analysis of one of the most popular approaches to CSO-Resolution and is organized as follows. First, we review the commonly accepted "best approach" to CSO-Resolution. We then look at some of the optimization techniques that have been developed. Finally, we present a set of naïve parallel computing models with the goal of speeding up the computation-intensive component of the CSO-Resolution algorithm.

A simple way to understand the CSO-Resolution solution is to put it into the proper context. Consider Algorithm 1, which demonstrates the construction of a focal plane from a known set of point source configurations, where each configuration encapsulates the position and intensity of a target. The CSO-Resolution problem is the reverse of Algorithm 1.

2.4.1 Traditional Approach to CSO-Resolution

Over the years, a number of approaches have been developed to estimate the resolution of CSO targets on a focal plane. We consider here the most widely accepted approach, often known as the "traditional approach" [Tsa80]. Although the traditional approach to CSO-Resolution is broad and encompassing, our research effort focuses only on its third step. Algorithm 2 addresses the following problem statement: Given the image shown in Figure 2.8, we seek to determine how many targets are present on the focal plane, their locations, and their brightness. This situation presents a non-linear optimization problem with many local solutions. Thus, the CSO-Resolution problem requires a good initial guess.
Generating a "quality" initial guess involves hypothesizing the number of targets and their positions, which are gridpoints formed by subdividing pixels into evenly spaced grids.

Figure 2.8: Example of a focal plane with two point sources.

Perhaps a simplistic observation: the estimation of closely spaced objects is essentially a nonlinear optimization problem involving large datasets. As the number of potential targets increases, so does the number of guesses required. In fact, the increase can be exponential, and the problem will be formally proved in Chapter 3 to be NP-Complete. As such, the traditional approach as formulated is computationally challenging, because a polynomial-time algorithm does not, and may never, exist.

Algorithm 2 Traditional approach to CSO-Resolution.
1. Identify a contiguous cluster of pixels that contains possible unresolved CSOs
2. Subdivide the pixels into R subpixels
3. For each choice (x, y) of K centers of subpixels, calculate the a which minimizes the residual R̃(a, x, y, I_meas), using linear algebra
4. Refine the choice of (x, y) and a with the least residual using an optimization algorithm like Levenberg-Marquardt

2.4.1.1 Contiguous Clusters of Pixels

One simple action we can take to avoid processing unnecessary target choices is to eliminate from consideration those pixels with a low probability of containing targets. This approach is based on the observation that a focal plane image of distant space contains clusters of pixels whose energy spread function falls off rapidly when entering the neighboring pixels, because the peak energy level cannot realistically spread over the real distance each pixel represents. For example, if a pixel represents a real distance of 10,000 kilometers, it is highly improbable to have a light source that does not attenuate beyond five pixels.
The method of identifying the clusters of pixels is trivial: a simple high-pass filter removes those pixels below a certain cut-off threshold. The cut-off threshold is determined by normalizing the pixel values and eliminating those pixels falling below a certain percentage of the threshold. The precise value is a tuning action based on results from simulation to simulation.

2.4.1.2 Construction of a Subgrid Search Space

Once the candidate pixels have been identified, we subdivide each pixel evenly into smaller, equally spaced subgrids. The process of constructing subgrid lines for the selected pixels is straightforward: we partition each pixel with equally spaced horizontal and vertical grid lines. We consider each grid point to be a candidate position for a CSO target. If g is the number of grid lines per pixel in each dimension, then each pixel contributes g² candidate locations. This step admittedly increases the number of computations by a factor that is exponential in the number of targets k. When executing the CSO-Resolution algorithm, the number of grid lines must be restricted, because brute-force computation grows exponentially with the number of targets k, as we shall see later in Section 2.5.

2.4.1.3 Point Source Search

Because the CSO-Resolution problem is a non-linear optimization problem, identifying the correct point source configuration amounts to trying every possible combination of (x_k, y_k, a_k) such that the difference, or residual R̃, between the measured pixel intensities and the sum of the irradiance contributions from the K objects is minimized. With this approach, we will have determined a "quality" starting guess for the locations of the CSO targets.

    \tilde{R}(a, x, y, I_{\text{meas}}) = \min_{a, x, y} \sum_i \sum_j \left[ I_{\text{meas}}(i, j) - \sum_{k=1}^{K} I_{x_k y_k a_k}(i, j) \right]^2        (7)

Minimizing the residual R̃ in Equation 7 is at the heart of the CSO-Resolution problem. For this, we seek to find the precise combinations {(a, x, y)} such that the difference between the measured focal plane and the generated focal plane (Algorithm 1) is minimized.
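Given a measured plane and a plane synthesized from a candidate configuration, the residual of Equation 7 is a direct sum of squared pixel differences. A minimal sketch (the function name is ours):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using FocalPlane = std::vector<std::vector<double>>;

// Residual (Equation 7): sum over all pixels of the squared difference
// between the measured plane and the plane synthesized from a candidate
// target configuration (Algorithm 1). The configuration with the smallest
// residual becomes the initial guess handed to the refinement step.
double residual(const FocalPlane& measured, const FocalPlane& estimated) {
    double r = 0.0;
    for (std::size_t i = 0; i < measured.size(); ++i)
        for (std::size_t j = 0; j < measured[i].size(); ++j) {
            double d = measured[i][j] - estimated[i][j];
            r += d * d;
        }
    return r;
}
```

In the search loop, this function is evaluated once per candidate configuration, which is exactly why the enumeration cost dominates the algorithm.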
2.4.1.4 CSO-Resolution Algorithm

Putting everything together, Algorithm 3 returns the configuration with the minimum residual. The function Residual() is derived from Equation 7, and BuildFocalPlane() is described in Algorithm 1.

Algorithm 3 CSO-Resolution Algorithm.
 1: function Enumerate(n, FP_m)              ▷ n targets, FP_m measured focal plane
 2:     GP ← expand(|FP|, sx, sy)            ▷ sx, sy are subgrid sizes
 3:     combos ← choose(GP, n)
 4:     return combos
 5: end function
 6: function CSOResolution(n, FP_m)
 7:     FP_min ← {}, r_min ← ∞
 8:     C ← Enumerate(n, FP_m)
 9:     for all combos c ∈ C do              ▷ C is all possible combinations of potential target sets
10:         FP_e ← BuildFocalPlane(c)
11:         r ← Residual(FP_e, FP_m)
12:         {FP_min, r_min} ← min({FP_min, r_min}, {FP_e, r})
13:     end for
14:     return {FP_min, r_min}
15: end function

The number of target combinations, |C|, is very large, and since the calculations of the residuals (Algorithm 3, lines 9-13) are independent, we can attempt to exploit this fact to optimize the throughput with parallelization. The expansion of all possible combinations (line 3) calls the choose() function, which is expanded in Algorithm 4.

Algorithm 4 Recursive expansion of combinations of possible targets.

void choose(const std::vector<Point2D> &gridpoints,
            const std::size_t len,
            const std::size_t start,
            std::list<Point2D> &result,
            std::list< std::vector<Point2D> > &combos)
{
    /* base case: one full combination of length len has been built */
    if (result.size() == len) {
        combos.push_back(std::vector<Point2D>(result.begin(), result.end()));
        return;
    }
    /* reduction step: extend the partial combination with each remaining
       grid point, recurse, then backtrack */
    for (std::size_t i = start; i < gridpoints.size(); i++) {
        result.push_back(gridpoints[i]);
        choose(gridpoints, len, i + 1, result, combos);
        result.pop_back();
    }
}

Although Algorithm 4 is recursive and combinatoric in design, where the recursion in the induction step visits each branch n times, the overall cost of this routine is relatively low.
For a focal plane with a resolution of 10×10 pixels, subdividing each pixel into 4×4 subgrids yields 1600 candidate grid points. We observed that this portion of the computation does not contribute to the overall performance bottleneck.

2.4.1.5 Refinement

The nonlinear problem described above has many local solutions, so the estimation algorithms that solve it require good initial guesses; the configuration with the minimal residual serves as that guess. We can apply any of the conjugate gradient search methods to perform a refined search. Techniques such as the Levenberg-Marquardt Algorithm (LMA) [Mor78, Lev44, Mar63] serve well for the refined solution. Note that the refined solution, while important for the space surveillance "detect and identify" phase, is not a central focus of our research.

Figure 2.9: In this example of a focal plane with targets resolved, we see that the truth is, in fact, closer to the initial guess than to the LMA-refined positions. This can be attributed to Gaussian noise on the measured focal plane.

2.4.2 Advanced CSO-Resolution Techniques

Experimental work by Semmen and Gottschalk has shown that the basic CSO-Resolution implementation works for focal planes with lower resolution and with at most two targets [Got14]. With this approach, we can combine a cluster of multiple CSOs into a single source. The underlying assumption is that the significance of multiple CSOs is not as important during the detection phase. In the course of their CSO-Resolution research, Gottschalk, et al., proposed numerous techniques to optimize the stage prior to target detection. Theirs include several key underlying assumptions and are further described below.

Figure 2.10: Basic CSOs as spatial clusters. [Left: clustered CSOs resolved as 1 target; right: idealized CSO-Resolution yielding 3 targets.]
Since the focal plane is an image of objects at a distance, and the resolution (in pixels) of the image often lumps multiple distant CSO objects together into a cluster of brightness, the underlying challenge is the "recovery" of the actual CSOs from clustered pixels. Figure 2.10 distinguishes between idealized CSO-Resolution and a naïve clustered resolution.

Reliance on detection clustering is a reasonable approximation for identifying unresolved targets, but it does have a shortcoming in that lines of sight between targets assigned to different CSOs need not be larger than the assumed resolution cut-off. So-called Single-Link Cluster techniques would ensure that all pair-wise distances between objects in different clusters exceed resolution cut-offs, but this comes at the cost of potential "long, stringy" clusters with overall lengths well above the nominal resolution cut-off, hence the need for higher-fidelity models [Got14]. As expected, despite best efforts, their implementation is still limited to K = 2. We maintain that this presents a significant drawback and is a far cry from a realistic model that would contain many targets of interest.

2.5 Complexity Analysis

Although challenging, the analysis of the CSO-Resolution problem is not intractable, in light of the abundance of heuristic optimizations. The question remains: does the problem scale with the implementation of heuristic optimizations? Furthermore, previous efforts to analyze the CSO problem were performed using CPU time rather than algorithmic analysis. We consider here some observations as to how the scientific literature treats the CSO-Resolution problem's complexity. Adaptive optimization algorithms used to attack CSO problems are not, in and of themselves, useful for complexity analysis.
For example, Korn's application of genetic algorithms represents a special case that ignores the solution space containing many local minima and does not appeal to the broader and more general CSO-Resolution requirements [KHF04]. In fact, Lin conducted an analysis showing that a genetic algorithm does not work well for problems with many local minima [LXA11a]. Chang utilized a statistical approach to the CSO-Resolution problem, determining that an analytic expression does not exist and therefore only a probabilistic resolution is possible [CD82, MC06].

Our general observation on the complexity of CSO-Resolution is this: its solution is constructed from a combinatoric formulation and, without a polynomial-time solution, would require a "hopeless" brute-force approach. As the number of potential targets increases, so does the number of guesses required. As a matter of perspective, consider the following computational requirement estimation. Suppose we want to search a 10×10 pixel subset of a focal plane and we divide each pixel into a grid of 4×4 subpixels. There are (10)(10)(4)(4) = 1600 possible subpixel positions for a single object. For two objects, since each object can appear anywhere on the focal plane, there are 1600² possible initial guesses. For N objects, there are 1600^N possible initial guesses.

The data collected in Table 2.2 illustrate the complexity growth of a typical CSO problem, with an exponentially growing computation cost. This experiment was performed on a Linux host with 32 cores of Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz processors and 128 GB of memory.
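The growth just described is easy to reproduce with a short calculation. The sketch below is our own (the helper names are ours); its outputs match the "Combos" column of Table 2.2.

```cpp
#include <cassert>
#include <cmath>

// Number of candidate subpixel positions on a p x p pixel plane with
// s x s subgrids per pixel; p = 10, s = 4 gives the 1600 of the text.
long long positions(int p, int s) {
    return 1LL * p * p * s * s;
}

// Brute-force initial-guess count for n targets: positions^n, returned as
// a double because the counts quickly exceed 64-bit integer range.
double guessCount(int p, int s, int n) {
    return std::pow(static_cast<double>(positions(p, s)), n);
}
```

Evaluating guessCount for the Table 2.2 rows (for example, p = 10, s = 8, n = 5) makes the "Not possible" entries self-explanatory.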
    Subgrid Size   Targets   Combos       Filtered Combos   Wall Clock (s)
    4×4            2         2.5600E+06   2016              0.553
    8×8            2         4.0960E+07   32640             8.743
    4×4            3         4.0960E+09   41664             17.668
    8×8            3         2.6214E+11   2763520           1467.909
    4×4            4         6.5536E+12   635376            451.444
    8×8            4         1.6777E+15   174792640         120193.183
    8×8            5         1.0737E+19   Not possible      Not possible

Table 2.2: The growth in complexity for a CSO-Resolution problem with a focal plane of size 10×10 pixels. The number of subgrid divisions determines the fidelity and improves the accuracy of the initial guess. The "Filtered Combos" column shows the number of configurations to be included in the approximation after the program has filtered out the pixels below a cut-off threshold.

The daunting growth in the number of operations leads to the following conclusion: on a practical level, we observe that when combinatoric problems have a computation count exceeding billions of operations, they rapidly become intractable.

2.6 CSO-Resolution with Parallel Computing Models

As indicated earlier, at its core, CSO-Resolution is an optimization problem, and the optimization refinement steps are independent. Because these paths are independent, one can argue that the CSO-Resolution problem can potentially be attacked with a parallel algorithm. There are a number of parallel computing architectures, and Flynn's taxonomy, which is based on concurrency of instruction and data, organizes them into four paradigms [Fly72]:

- Single Instruction, Single Data (SISD)
- Multiple Instruction, Single Data (MISD)
- Single Instruction, Multiple Data (SIMD)
- Multiple Instruction, Multiple Data (MIMD)

Given the above taxonomy, we explore the use of parallel computing to speed up the computational bottleneck using two well-established parallel computing models at a higher abstraction level: the data "streaming" model and the MapReduce model. Both models, in fact, are aligned with the SIMD architecture.
For each model, we consider properties of the CSO-Resolution problem and look for ways to exploit them with a concurrent computational model.

2.6.1 Data Streaming Model

Consider the SIMD parallel model, where the same instruction is applied over multiple data items. This abstraction is suitable for a multithreaded execution model. In fact, as mentioned earlier, because the optimization searches are independent, we can exploit a massively threaded computing model. Example hardware that supports this mold is the Graphics Processing Unit (GPU). This is sometimes referred to as the data streaming computational model, where a massively parallel architecture, using graphics cards as a specialized hardware platform, leverages highly concurrent processing with a large number of threads. The pseudocode shown in Algorithm 5 depicts the CSO-Resolution algorithm on a GPU platform with CUDA support.²

² The Compute Unified Device Architecture (CUDA) programming language was developed by NVIDIA Corporation to support parallel and scientific computing on its GPU hardware [NBG08].

Algorithm 5 The CSO Data Streaming Algorithm.
 1: function CSO-GPU(targets, FP)            ▷ list of targets and measured FP
 2:     FP' ← CalcPixel(q, p)
 3:     return FP'
 4: end function
 5: function Scatter(targets, FP)            ▷ list of targets and measured FP
 6:     R̃ ← Residual(targets, FP)
 7:     emit(R̃, targets)
 8: end function
 9: function Gather( )
10: end function
11: function GPU-CSO(V)
12:     Z ← map(PSF → V)
13:     {x, y, z} ← reduce(Z)
14: end function

The algorithm works by calling the native library "scatter" function, which distributes measured focal plane (FP) values and a set of targets from a pre-calculated collection of all possible target combinations (Algorithm 4) to the worker threads on the GPU. These worker threads would then independently
Finally, the results from all the worker threads are collected with a library "gather" function, and the target set with the minimum residual is chosen as the solution.

2.6.2 MapReduce Model

MapReduce (MR) is an alternative parallel computational model for processing large datasets, popularized by Dean and Ghemawat [DG04]. The wide adoption of MR as a parallel computing model came about as a result of the open-source software Hadoop [Whi09] and later Spark [ZXW+16].

Although a MapReduce system is complex, the computing model can be understood as a computational construct structured around two phases. First, during the "map" phase, the system applies a function f_map over a large collection of data. More specifically, f_map emits (or produces) a key-value pair tuple of the format <key, value>. Second, during the "reduce" phase, the system takes the map phase's output stream, i.e., the <key, value> tuples, and applies a function f_reduce to combine the results into a single value. The functions f_map and f_reduce are domain-specific; however, the application of these functions to the data is the same for all systems, irrespective of the implementation.

Algorithm 6 The CSO-Resolution MR Algorithm.
1: function CSO-PRF(targets, FP)          ▷ list of targets and measured FP
2:     FP′ ← CalcPixel(q, p)
3:     return FP′
4: end function
5: function Mapper(targets, FP)           ▷ list of targets and measured FP
6:     R̃ ← Residual(targets, FP)
7:     emit(R̃, targets)
8: end function
9: function Reducer((k, v), (k′, v′))
10:     return min({k, v}, {k′, v′})      ▷ the residual R̃ is embedded in v
11: end function

The pseudocode shown in Algorithm 6 demonstrates the CSO-Resolution algorithm on a general-purpose MR system. Similar to the GPU implementation, MR has a distribution function, Mapper, and an aggregation function, Reducer. Whereas the GPU implementation requires explicit "gather" and "scatter" function calls, the MR system automatically calls the "mapper" and "reducer" functions.
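The map/reduce structure of Algorithm 6 can be sketched in plain Python (a stand-in for a real Hadoop or Spark job; the toy focal plane, fixed-amplitude stand-in PRF, and function names are illustrative assumptions): the mapper emits <residual, targets> tuples and the reducer folds them with min.

```python
from functools import reduce
from itertools import combinations

FP = [0.0, 3.0, 0.0, 3.0]   # toy measured focal plane (illustrative values)

def cso_prf(targets):
    """Stand-in PRF: each target deposits a fixed amplitude of 3.0 on its gridpoint."""
    fp = [0.0] * len(FP)
    for t in targets:
        fp[t] += 3.0
    return fp

def mapper(targets):
    """Map phase: emit a <residual, targets> key-value tuple."""
    resid = sum((m - s) ** 2 for m, s in zip(FP, cso_prf(targets)))
    return (resid, targets)

def reducer(kv, kv2):
    """Reduce phase: keep the tuple with the smaller residual key."""
    return min(kv, kv2)

def mr_cso(candidates):
    return reduce(reducer, map(mapper, candidates))

if __name__ == "__main__":
    print(mr_cso(combinations(range(4), 2)))
```

In a production MR system the map calls run on distributed workers and the framework shuffles the emitted tuples to the reducer automatically, exactly the "automatic" behavior noted above.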
2.6.3 Analysis of Parallel Algorithms

For both the MapReduce and GPU streaming models, the residual (or difference) calculations between the measured focal plane and the synthesized focal plane share common computation code (Algorithm 1). For phase 2 of the MapReduce, although a global minimum residual value must be manually retained, this does not pose a bottleneck. For the GPU streaming model, CUDA provides a library with a native minimization function as part of the gather-reduce step.

The data streaming model leverages a strategy similar to MR and takes advantage of independent computation threads. This strategy is based on the use of the native map and gather functions. Both the GPU and the MR techniques rely on pre-execution construction of all possible combinations of configurations, and for both models the computation unit counts are equivalent. Unfortunately, this requires computing all possible configurations, which grow exponentially as the number of targets increases.

2.7 Summary

In summary, we have described the CSO problem in terms of its physical and computational models. Recognizing that CSO is a prelude to tracking and estimation, we framed the CSO model in the larger context of space surveillance, which includes the acquisition, detection, tracking, and estimation phases. CSO-Resolution is an integral part of the detection phase and, because of its importance, carries significant downstream impact. Therefore, accurate detection is essential for the correct estimation of a target's current and future position.

The CSO physical model is based on characterizing the bright spots on a focal plane. The underlying uncertainties are the number of targets, their locations, and their brightness. This limitation in knowledge is due to the resolution limitations of the sensors.

We formally defined the CSO-Resolution problem, which is an estimation of the position and irradiance uncertainties of unknown targets on a focal plane image.
We reviewed the CSO-Resolution algorithm, based on the so-called "traditional method," as an approach to the CSO-Resolution problem. We also provided a complexity analysis of the algorithm. We closed out the chapter with two parallel algorithms for CSO-Resolution optimization. Ultimately, given the problem's combinatoric nature, in particular the expansion of the possible initial configurations, the algorithms fall well short of achieving the needed speed-up goals.

We considered two parallel computing strategies to speed up the CSO-Resolution algorithm and asked: what are the theoretical computational boundaries? In doing so, our parallel algorithms provide structural insight into why these models will not scale to larger numbers of CSO targets.

Thus far, the discussion of the CSO-Resolution problem has been based on experimental work and analysis of the algorithm's perceived "choke point." A formal analysis of the problem's intractability is required. This view is supported by a theoretical proof in Chapter 3.

Chapter 3

CSO-Resolution is NP-Complete

Over the years, the complexity of the CSO-Resolution problem has been generally accepted by the cognizant computational sciences communities as being "difficult," a view based on the common acceptance that there is no known "better" way than the CSO-Resolution algorithm (see Algorithm 3) to estimate unresolved targets. An extensive literature search yielded no formal proof, which also supports our intuition about the problem's difficulty. Another indication of its notoriety as a difficult problem is evident in the numerous attempts to attack the problem indirectly, an observation that can be drawn from efforts to "optimize" CSO-Resolution through heuristic means. This work expands the understanding of the problem's complexity and is significant because it speaks to the (in)feasibility of solving the problem with traditional methods.
One of this thesis's contributions to this research field is an understanding of the structure and nature of the CSO-Resolution problem. It does this by providing a formal proof attesting to the problem's "hardness." For this, we present a theoretical proof that the CSO-Resolution problem is NP-complete, or NPC.¹

¹ NP-complete is often referred to as NPC.

This chapter is organized as follows. We begin with the definition of computational complexity and provide background on computational intractability. We look at a few known NP-hard problems as candidates for polynomial mapping (or embedding) into the CSO-Resolution problem. We start our proof with the formal reduction of the MKP (Multidimensional Knapsack Problem) to the CSO-Resolution problem. We conclude with the reduction of the CSO-Resolution problem to a decision problem.

3.1 Background and Definitions

Computational complexity is understood to be the study or analysis of a problem that assigns it a "big O" notation, a mathematical concept describing the limiting behavior (in time or space) as the input tends to a particular value or to infinity. Big O notation is useful for determining whether a problem can be solved in a "reasonable" time; for reasonableness, we ask whether the problem scales at a polynomial or super-polynomial rate with respect to the problem size. The underlying goal of pursuing a novel computational model to solve the CSO-Resolution problem rests on a claim of this thesis, which holds that the CSO-Resolution problem is theoretically intractable with traditional computing platforms. To this end, we begin with the definition of tractability.

Definition 1 (Tractability). A tractable problem is a problem that can be solved by an algorithm with polynomial-time performance as the upper bound. Conversely, an intractable problem is one that does not have a polynomial-time algorithm; any algorithm that does exist has an exponential lower-bound performance.

Definition 2 (Decision Problems).
A decision problem is a problem that can be posed as a "yes" or "no" question on all inputs.

Figure 3.1: Complexity classification (P, NP, NP-hard, NP-complete).

Definition 3 (P). The class of problems which can be solved by a deterministic polynomial algorithm.

Definition 4 (NP). The class of decision problems which can be solved by a nondeterministic polynomial algorithm.

A problem is said to be in P if it can be solved in polynomial time and can also be verified with a certificate function (or algorithm) in polynomial time; on the other hand, a problem is said to be in NP if it is nondeterministic polynomial but also verifiable in polynomial time. Problems that can be verified with a certificate function can, by definition, be reduced to decision problems. Conversely, decision problems are not necessarily in P or NP.

Central to computational complexity research is the perennial (and yet to be resolved) question of whether or not P = NP. Despite it being an open problem, the general consensus among researchers is that P ≠ NP: most NP problems are intractable, whereas P problems are tractable. For the remainder of the thesis, we will only consider problems in the NP space. Finally, the essence of problems in the NP space is that there may exist multiple computations for each candidate solution, from which it follows that a large number of inputs would result in a large number of computations.

Definition 5 (NP-Hard). The class of problems to which every NP problem reduces. In other words, a decision problem H is NP-hard when for every problem L ∈ NP, there is a polynomial-time reduction from L to H [Lee90].

Definition 6 (NP-Complete). The class of problems which are NP-hard and belong to NP.

Most solutions to NP-hard problems cannot be verified in polynomial time. In contrast, solutions to NP-complete problems can be verified in polynomial time [CLRS09]. This reduction is important because it satisfies the requirement for the certificate function.
Furthermore, we say that a problem A is NP-complete if A is in NP and there exists a problem B that is NP-hard or NP-complete, such that B can be embedded into A in polynomial time. Problems that are both NP and NP-hard belong to the NP-complete class [CLRS09]. In summary, an NP-complete problem is:

- A problem whose solution can be positively verified in polynomial time. This is the certificate that the solution is correct.
- A problem that is, in a certain sense, at least as difficult to solve as any other NP-hard problem.

Based on the above, we argue that NP-hard problems are among the most difficult known problems, and, in the NP space, NP-complete problems are just as difficult. In other words, an NP-complete problem is in NP and is at least as difficult to solve as any other NP problem. In fact, Garey and Johnson argued that NP-complete problems are the hardest problems of the NP class, and that if we can solve any one of them, we can solve all NP problems [GJ79].

Given these definitions, we now have the building blocks for working out the theoretical proof. First, we assert that although the CSO-Resolution problem is an optimization problem, it can be reduced to a decision problem. Second, we assert that CSO-Resolution is at least as hard as one of the NP-hard problems. These assertions serve as the basis for arguing that the CSO-Resolution problem is NP-complete.

3.2 Similar NP Problems

As previously indicated, the goal of this chapter is to formally show that the CSO-Resolution problem is a computationally challenging problem and to properly classify its "hardness" level. To support this goal, we considered a few known NP problems:² the Closest Vector Problem (CVP), the Nearest Subspace Problem (NSP), and the Multidimensional Knapsack Problem (MKP). For each of these three problems, we examine whether and how they could map onto the CSO-Resolution problem.

² These problems are known to be either NP-hard or NP-complete, since either complexity space would support the NP-completeness proof. The literature often refers to NP-hard problems as "NP" problems. This is an unfortunate, confusing application of the term NP, as not all NP-hard problems are necessarily NP (Figure 3.1).

3.2.1 Closest Vector Problem (CVP)

We first studied CVP, an NP-hard optimization problem whose objective is to find the vector in a lattice structure (the set of integer linear combinations of a basis) that is closest to a target vector [Ajt96, Ajt98].³ CVP presents itself as an attractive candidate for mapping to the CSO-Resolution problem because its formulation is similar to that of CSO-Resolution.

³ A special case of CVP is the shortest vector problem (SVP). Both CVP and SVP are important problems in the field of lattice cryptography [Mic12].

Definition 7 (CVP). Given a finite-dimensional real vector space V, a finite subset U of V, and an element v of V, find a vector from the set of integer sums of U that is closest to v.

The mapping from CVP to CSO-Resolution is as follows. When discretizing pixels into gridpoints, we treat the location tuple (x, y) as an index into an n × n dimensional vector I_{x,y}. By setting the vector space V to contain all such vectors, there is a subset of the vector space that denotes the presence of a subset of gridpoints, which in turn represents the presence of targets at those locations. Although the CSO-Resolution problem involves floating-point numbers, one useful trick to circumvent CVP's integer constraint is to reshape the CSO-Resolution problem as an integer problem. This can be accomplished by scaling the intensity of the vectors small enough that integer multiples of that intensity are a fine enough resolution.

Using CVP for the embedding is not without critiques. First, there is no limit on the number of vectors in the sum for the CVP problem; the CSO-Resolution problem, however, is limited by the number of targets.
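Definition 7 can be made concrete with a tiny brute-force sketch before we turn to the critiques (illustrative only; the basis, query vector, and coefficient bound are assumptions, and real CVP instances are high-dimensional, which is precisely why the problem is NP-hard): enumerate small integer combinations of the basis vectors and keep the one closest to the query.

```python
from itertools import product

def closest_vector(basis, target, bound=3):
    """Brute-force CVP: search integer coefficients in [-bound, bound]^len(basis)."""
    best_vec, best_dist = None, float("inf")
    for coeffs in product(range(-bound, bound + 1), repeat=len(basis)):
        # Integer linear combination of the basis vectors (a lattice point).
        vec = [sum(c * b[d] for c, b in zip(coeffs, basis)) for d in range(len(target))]
        dist = sum((v - t) ** 2 for v, t in zip(vec, target))
        if dist < best_dist:
            best_vec, best_dist = vec, dist
    return best_vec, best_dist

if __name__ == "__main__":
    basis = [[2, 0], [1, 2]]    # assumed 2-D lattice basis
    target = [4.4, 1.7]         # query vector v
    print(closest_vector(basis, target))
```

The search space is (2·bound + 1)^dim, which already hints at the combinatoric growth shared with CSO-Resolution.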
When mapping CVP to CSO, we can argue that overfitting the CSO data would lead to a solution that implies many targets. It is possible to apply techniques, including clustering algorithms, to reduce the large number of solutions; unfortunately, the optimization needed to address this shortcoming is itself an NP-hard problem. Second, the literature suggests that the "error" (or residual) vector consists of positive and negative integers. This is problematic for the CSO-Resolution problem, which disallows negative coefficients. These reasons eliminate CVP from consideration as an embedding candidate.

3.2.2 Nearest Subspace Problem (NSP)

NSP was the next NP problem we considered as a candidate for mapping to the CSO-Resolution problem. Although Basri et al. introduced an algorithm to approximate the solution to NSP, the number of subspaces is exponential, so their algorithm falls victim to the curse of dimensionality; NSP's performance therefore remains exponential overall [BHZM11]. In fact, NSP has been proven to be NP-hard [KV09]. Formally, NSP is defined as follows.

Definition 8 (NSP). Let {S_1, S_2, ..., S_n} be a collection of linear spaces in R^d, each with an intrinsic dimension k_s. Given a query Q in R^d, which can be either a point or a subspace with intrinsic dimension k_Q, and a distance d(Q, S_i) between the query item and S_i, where 1 ≤ i ≤ n, we seek the subspace S that is nearest to the query.

For each subspace, consider a basis for that subspace. This basis is analogous to the vectors that define the target positions in the CSO problem. The distance to Q corresponds to the focal plane residual and is given by the linear combination of the basis vectors that is closest to Q. However, the similarity between the two problems breaks down in that the definition of a subspace permits the coefficients of this linear combination to be negative. In the analogous CSO problem, these coefficients represent intensities and cannot be negative.
For this reason, we concluded that there exists an instance of NSP for which there is no equivalent representation in the CSO-Resolution formulation, thus eliminating NSP as a viable candidate for the embedding.

3.2.3 Multidimensional Knapsack Problem (MKP)

Finally, we looked at MKP⁴ as an embedding candidate. It should be noted that the traditional one-dimensional knapsack problem, also known as the 0-1 knapsack problem, is a special case of MKP [PRP06, BEBE10, PRP10]. In fact, MKP is much harder than the one-dimensional knapsack problem [Pis95]. Magazine and Chern proved that MKP is NP-hard by embedding the NP-complete partition problem into MKP [MC84]. Formally, MKP is defined as follows.

⁴ Unfortunately, the literature sometimes refers to the multiple knapsack problem as MKP, which is, again, an unfortunate and confusing acronym. For the purpose of this thesis, MKP stands for the multidimensional knapsack problem.

Definition 9 (MKP). Given a knapsack with n items, each with profit p_j > 0, and m resources each with capacity c_i > 0, where the weight of item j consuming resource i is w_ij ≥ 0, the goal is to identify a subset of items that maximizes the profit. Expressed in integer linear programming (ILP) notation, the problem is:

    maximize    r = \sum_{j=1}^{n} p_j x_j
    subject to  \sum_{j=1}^{n} w_{ij} x_j \le c_i,   \forall i \in \{1, \ldots, m\}
                x_j \in \{0, 1\},   \forall j \in \{1, \ldots, n\}                    (8)

Although the number of candidate solutions is large, it is nonetheless finite. If we let r̂ be the maximum solution, then Equation 8 can be framed as a minimization problem:

    minimize    \hat{r} - \sum_{j=1}^{n} p_j x_j
    subject to  \sum_{j=1}^{n} w_{ij} x_j \le c_i,   \forall i \in \{1, \ldots, m\}
                x_j \in \{0, 1\},   \forall j \in \{1, \ldots, n\}                    (9)

Since MKP is NP-hard and does not allow negative coefficients, as shown in Equation 9, it is a suitable candidate for the embedding.
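A minimal brute-force sketch of Definition 9 (all numbers are illustrative; a real MKP instance would need a proper ILP solver) also makes the equivalence of the maximization (8) and minimization (9) forms tangible: the 0/1 assignment that maximizes the profit r is exactly the one that minimizes r̂ minus the profit.

```python
from itertools import product

def solve_mkp(profits, weights, capacities):
    """Brute-force MKP: try every 0/1 assignment, keep the feasible max-profit one."""
    n, m = len(profits), len(capacities)
    best_x, best_r = None, -1
    for x in product((0, 1), repeat=n):
        # Check every resource constraint: sum_j w_ij x_j <= c_i.
        feasible = all(
            sum(weights[i][j] * x[j] for j in range(n)) <= capacities[i]
            for i in range(m)
        )
        r = sum(profits[j] * x[j] for j in range(n))
        if feasible and r > best_r:
            best_x, best_r = x, r
    return best_x, best_r

if __name__ == "__main__":
    profits = [6, 5, 8, 9]                  # p_j (illustrative)
    weights = [[2, 3, 6, 7], [3, 1, 6, 5]]  # w_ij for m = 2 resources
    capacities = [9, 8]                     # c_i
    print(solve_mkp(profits, weights, capacities))
```

The 2^n enumeration is exactly the combinatoric cost the NP-hardness of MKP captures; no known algorithm does fundamentally better in the worst case.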
3.3 Formal Proof that CSO-Resolution is NP-Complete

To formally show that the CSO-Resolution problem is "difficult" and that no known polynomial-time solution exists, our strategy is to map a known NP-hard problem to the CSO-Resolution problem. The argument is that if a polynomial-time solution existed for the CSO-Resolution problem, then we could solve that NP-hard problem in polynomial time, too [GJ79, CLRS09].

Theorem 1. The CSO-Resolution problem is NP-complete.

Proof. The CSO-Resolution proof is structured as follows. §3.3.1 contains descriptions and definitions of the CSO-Resolution problem. §3.3.2 demonstrates a polynomial-time transformation from MKP to CSO. §3.3.3 provides a polynomial-time certificate function.

3.3.1 CSO-Resolution Problem Definitions

The CSO-Resolution problem presented here is a restatement of the CSO physical model presented in Chapter 2. We highlight some key concepts and definitions that are essential for the CSO-Resolution proof.

Definition 10 (PRF). A point response function, or PRF, is a polynomial-time function that calculates a target's energy contribution to a pixel based on its location and intensity. The PRF computation is defined in Equation 4.

In fact, the PRF computation for a single target on a pixel is O(c), and based on Algorithm 1, the computation cost of calculating the intensity for all targets on all pixels in a focal plane is K·m, where K is the total number of targets and m is the total number of pixels, which is O(n).

Definition 11 (Synthesized Focal Plane). P is a two-dimensional focal plane with m pixels. The value of each pixel of P is calculated by summing the PRF of all targets k ∈ K at locations (x_k, y_k):

    P = \sum_{k \in K} PRF_{x_k, y_k}(i)                    (10)

Definition 12 (Measured Focal Plane). P̂ is a two-dimensional focal plane with m pixels whose pixel values are data from a sensor:

    \hat{P} = I_{meas}(i)                                   (11)

Definition 13 (Residual).
Finding the locations (x_k, y_k) for k ∈ K is the minimization of the residual R̃, which is the difference vector between P and P̂. This minimization problem is:

    \tilde{R}(x, y, I_{meas}) = \min_{x,y} \sum_{i=1}^{m} \left( I_{meas}(i) - \sum_{k \in K} PRF_{x_k, y_k}(i) \right)^2        (12)

Note that Equation 7 shows R̃ as a function of discretized gridpoints, which included the amplitude as part of the grid space. For the CSO-Resolution theorem, we restrict the amplitude to a constant; therefore, Equation 12 is a rewrite of Equation 7 with the amplitude a_k removed, since a_k is a fixed value and does not affect the minimization search. This restriction does not change the outcome of the proof, because a CSO-Resolution problem with fixed amplitudes is a subset of the more generalized CSO-Resolution problem. Therefore, the "hardness" of the problem is preserved.

Definition 14 (Pixel Capacity). For a CSO-Resolution problem, it is possible to "overfit" the number of targets on a focal plane when postulating K. To address overfitting, we introduce the "capacity" c_i as a constraint on a pixel, which limits a normalized independent metric on pixel i.

Overfitting occurs when a candidate solution results from increasing the number of targets, offset by reducing the irradiance value of each target. To guard against this, c_i restricts the overfitting with a bias w_ij on a target at j.

Table 3.1 summarizes the variables as they relate to the descriptions and definitions of the CSO-Resolution problem.

P̂_i    Focal plane measured by a sensor
P_i    Focal plane synthesized using the PRF
m      Number of pixels of a focal plane
i      Pixel index
G_j    Discretized gridpoints
n      Number of gridpoints
j      Grid-point index
x_j    Target at grid point j is part of the solution if 1, otherwise 0
w_ij   Proportion of the contribution of a target j to pixel i
c_i    The "capacity" of a pixel i

Table 3.1: Summary of variables for the CSO-Resolution problem.
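Equation 12, with the fixed-amplitude restriction, can be sketched in a few lines (the Gaussian point response, grid size, and width are illustrative assumptions; the thesis's actual PRF is defined in Equation 4): synthesize a focal plane for each candidate set of target gridpoints, then score it against the measured plane by the sum of squared residuals.

```python
import math
from itertools import combinations

GRID = [(x, y) for y in range(4) for x in range(4)]   # toy 4 x 4 gridpoints
AMPLITUDE, SIGMA = 1.0, 0.7                           # fixed a_k and assumed PRF width

def prf(tx, ty, px, py):
    """Assumed Gaussian point response of a target at (tx, ty) on pixel (px, py)."""
    return AMPLITUDE * math.exp(-((px - tx) ** 2 + (py - ty) ** 2) / (2 * SIGMA ** 2))

def synthesize(targets):
    """Equation 10: sum the PRF of all targets at every pixel."""
    return [sum(prf(tx, ty, px, py) for tx, ty in targets) for px, py in GRID]

def residual(targets, measured):
    """Inner sum of Equation 12 for one candidate configuration."""
    return sum((m - s) ** 2 for m, s in zip(measured, synthesize(targets)))

if __name__ == "__main__":
    truth = [(1, 1), (2, 3)]        # two closely spaced targets
    measured = synthesize(truth)    # noiseless measurement, for the demo only
    best = min(combinations(GRID, 2), key=lambda c: residual(c, measured))
    print(sorted(best))             # recovers the true configuration
```

The outer min in Equation 12 is the expensive part: here it enumerates C(16, 2) = 120 pairs, but the count explodes combinatorially with the grid size and K, as Table 2.2 showed.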
3.3.2 Polynomial Transformation of MKP to CSO-Resolution

Using the definitions from §3.3.1, we can perform the following polynomial-time mapping from MKP to CSO-Resolution. Consider a two-dimensional focal plane P that consists of m pixels, where the intensity value of each pixel is P(i) ∈ R≥0, with i ∈ M = {1, ..., m}. Let us discretize the pixels of P into subgrids such that P has n gridpoints, each gridpoint represented by G_j, where j ∈ N = {1, ..., n}. If a gridpoint x_j is present, it has the value 1, otherwise 0. We hypothesize that there are K targets, with each target k ∈ K located at a gridpoint G_j. We also wish to prevent a solution in which the number of targets K is large.

Definition 15 (CSO-Resolution as ILP). Let us now restate the CSO-Resolution problem as an integer linear programming (ILP) problem:

    minimize    \tilde{R} = \| \hat{P} - P \|^2
    subject to  \sum_{k}^{K} \sum_{j=1}^{n} w_{ij} x_j \le c_i,   \forall i \in \{1, \ldots, m\}, \forall k \in \{1, \ldots, K\}
                x_j \in \{0, 1\},   \forall j \in \{1, \ldots, n\}                    (13)

The embedding of MKP in CSO-Resolution is complete, since Equation 13 is equivalent to Equation 9. Incidentally, the ILP representation of the CSO-Resolution problem presents itself as a clean one-to-one mapping from MKP. Because the embedding is one-to-one, we conclude that its computational cost is polynomial-time, and in fact O(n).

3.3.3 CSO-Resolution Reduction to a Decision Problem

As stated earlier, CSO-Resolution is an optimization problem. To transform the CSO-Resolution problem from an optimization problem into a decision problem, we pursue the following strategy, with the understanding that all optimization problems can be expressed as decision problems [CLRS09]. A decision problem requires a polynomial-time verification function, which is predicated on a solution that answers a "yes-no" question. To transform the optimization problem into a decision problem, then, we must have polynomial-time verification, which is predicated on reframing the problem as a decision problem.

Definition 16.
A configuration C is a set of coordinates {(x, y)}_K such that applying the PRF to the members c ∈ C generates a focal plane P. C is said to be a candidate solution to the CSO-Resolution problem, and if C is the solution, then P = P̂.

To verify a solution to the CSO-Resolution problem, we consider the following "decision" question: given a grid space G, can we identify a point-source configuration C ⊆ G such that applying the PRF to C synthesizes a two-dimensional focal plane P ≈ P̂? For the CSO-Resolution problem, "closeness" is determined by the residual \tilde{R} = \| P - \hat{P} \|^2 \le \epsilon, where ε is a fixed, acceptable difference.

The time complexity of generating P from C is O(n), as shown in Algorithm 1, since each pixel in P is generated by summing the irradiance contributions of the targets k at locations c ∈ C. The time complexity of calculating the residual R̃ is also O(n). Therefore, the verification is polynomial.

3.4 Algorithmic Progress on Related NP Problems

The NP problems discussed in this chapter share common characteristics. First, they are hard problems with no known polynomial-time algorithmic solutions. Second, researchers have sought alternative means to attack these problems, and for most of them, one or more quantum algorithms have been proposed as a possible means to address the combinatoric challenge. Table 3.2 below summarizes the current state of these problems and their algorithmic progress.

Problem          Complexity Space   Combinatoric Algorithm   Quantum Algorithm
CVP              NP-Hard            X                        X
NSP              NP-Hard            X                        X
MKP              NP-Hard            X                        *
CSO-Resolution   NP-Complete       X                        X

Table 3.2: NP problems and their algorithmic progress.

NSP research includes a quantum approach to nearest neighbors for machine learning. Though this is not the same as subspace search, it presents an interesting approach that can be studied further [WKS15]. CVP is important to the field of lattice cryptography, and one notable effort with a quantum algorithm has been studied [Reg04].
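The polynomial-time certificate of §3.3.3 amounts to two linear passes over the pixels: synthesize P from the proposed configuration C, then compare the residual to ε. A minimal sketch (the Gaussian stand-in for the PRF of Equation 4, the grid, and the threshold are illustrative assumptions):

```python
import math

SIGMA = 0.7   # assumed PRF width (illustrative)

def prf(tx, ty, px, py):
    """Assumed Gaussian point response; the thesis's actual PRF is Equation 4."""
    return math.exp(-((px - tx) ** 2 + (py - ty) ** 2) / (2 * SIGMA ** 2))

def verify(config, measured, grid, eps):
    """Certificate function: one O(n) pass to synthesize P, one to score it."""
    synthesized = [sum(prf(tx, ty, px, py) for tx, ty in config) for px, py in grid]
    residual = sum((m - s) ** 2 for m, s in zip(measured, synthesized))
    return residual <= eps   # "yes" iff P is within eps of P-hat

if __name__ == "__main__":
    grid = [(x, y) for y in range(3) for x in range(3)]
    truth = [(0, 0), (2, 2)]
    measured = [sum(prf(tx, ty, px, py) for tx, ty in truth) for px, py in grid]
    print(verify(truth, measured, grid, eps=1e-9))     # accepts the true configuration
    print(verify([(1, 1)], measured, grid, eps=1e-9))  # rejects a wrong configuration
```

Note the asymmetry that drives the NP-completeness argument: checking a proposed C is cheap and linear, while finding C requires searching the combinatoric configuration space.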
MKP, to date, has seen no notable effort with a quantum algorithm; however, work has been initiated by R. F. Lucas to tackle the 0-1 knapsack problem [Luc13]. This promising outlook further affirms that the research intent of this thesis is a worthwhile pursuit. To this end, Chapter 4 proposes a quantum algorithm as a means to attack the CSO-Resolution problem. Furthermore, given that we have demonstrated a polynomial-time transformation from MKP to CSO-Resolution, it is logical to anticipate an indirect quantum algorithm for MKP.

3.5 Summary

The traditional understanding of the CSO-Resolution problem is that it is a difficult problem. In Chapter 2 we modeled the CSO-Resolution problem and demonstrated that it is difficult from a computational performance standpoint. In this chapter we formally proved that it is as hard as the hardest problems in the NP space.

We defined the algorithm complexity classes, starting with P and NP and proceeding to NPC. To prove the hardness of a problem, it must first be reduced to an NP problem that can be verified in polynomial time. This, of course, is the basis for transforming an optimization problem into a decision problem, and the motivation behind this approach is that it is suitable for the NP-completeness proof. The strategy for proving NP-completeness is to demonstrate that the problem is in NP, i.e., that there exists a polynomial-time certificate of verification, and that a problem known to be either NPC or NP-hard can be reduced to it. With these two established, we can claim that the problem is at least as hard as the NPC or NP-hard problem and that its solution is verifiable.

We presented three NP-hard problems as candidates to embed into the CSO-Resolution problem, from which we determined that MKP is the most suitable candidate for the embedding.

We defined the CSO-Resolution problem as an ILP as the key step in formally demonstrating that the CSO-Resolution problem is NP-complete. In summary, we have demonstrated that:

1.
CSO-Resolution is NP-hard. The CSO-Resolution optimization problem is recast as a CSO-Resolution ILP problem (13), which is equivalent to MKP (9). The embedding is O(n), since the MKP representation is a direct mapping to the CSO-Resolution ILP representation.

2. CSO-Resolution is in NP. A CSO-Resolution decision solution can be verified with a certificate decision function in polynomial time.

Our proof further validates the need to pursue a quantum optimization approach to attack the CSO-Resolution problem. This is a major step toward the solution of these important issues in astronautic object identification and analysis.

Chapter 4

CSO-Resolution Quantum Annealing Algorithm

In the previous chapters, we have shown that the best CSO-Resolution algorithm on a classical computer is exponential. In fact, we have also formally proven that the CSO-Resolution problem is NP-complete. Faced with this challenge, we now turn our attention to a potentially promising computing paradigm, quantum information processing (QIP), to solve the CSO-Resolution problem. As previously discussed in Chapter 1, the realization of QIP as a viable pathway toward solving intractable problems in a realistic time window is promising yet speculative. The prospect of harnessing QIP as a means to attack intractable problems is evident in the race to quantum supremacy [AAB+19, Pre19, BIS+18]. While this thesis does not attempt to settle the controversy, nor to claim that such a computer can exist, the focus here is to show that if such a device does exist, we can potentially enhance the space surveillance research field by tackling a notoriously computationally taxing problem with the techniques presented in this chapter.

This chapter is organized as follows.
We start with a background on quantum computing, followed by a study of quantum annealing; we then propose a quantum algorithm that utilizes an annealer to estimate the CSO-Resolution problem, and close with a discussion of how the algorithm could be programmed on an idealized quantum annealer. Although our discussion centers on the main goal of attempting to solve the CSO-Resolution problem, we hope to demonstrate a broader appeal that lends itself to pursuing solutions for challenging problems, especially those that are combinatoric.

4.1 Background

Unlike the classical digital computer, which encodes its information state in registers made of capacitors, quantum computers encode their information states in the spin direction of a photon or electromagnetic wave. Additionally, whereas in a traditional computer information is encoded into the binary states {0, 1}, a quantum bit (qubit) can take on complex states, such that a qubit can exist in both the 0 and 1 states simultaneously. The qubit is then said to be in a superposition state. This view allows quantum computing to extend to fill the Hilbert space and produces an exponential expansion of the possible states of a qubit register.

4.1.1 Quantum Models

While many quantum computing models have been developed over the years, today the two most widely discussed quantum computing architectures are the gate model and the annealing model.

The quantum gate model's goal is to control and manipulate how the quantum states evolve over time. This is a difficult endeavor because quantum systems are sensitive; noise modeling and control have been topics of research since the dawn of quantum computing [Pon06].

The annealing model harnesses the natural evolution of quantum states to a final state, where the answer is encoded in the final configuration. Annealing models are suitable for optimization and probabilistic sampling problems. We discuss this in more detail in Section 4.2.
4.1.2 Quantum Problems

Feynman, in his now-famous keynote speech in 1981, was the first to propose a problem suitable for a quantum computer. He conjectured that a certain physics simulation requires a quantum computer and that a classical computer would not be able to accomplish it [Fey82]. Feynman argued that the "problem of simulating the full time evolution of arbitrary quantum systems on a classical computer is intractable." In fact, the state of a quantum system in vector space has dimensions that grow exponentially with the size of the system. This conjecture was later proven to be correct [Llo96].

4.1.3 Optimization and Probabilistic Problems

A search problem, at its core, is an optimization problem. Consider, for example, the traveling salesman's dilemma: given all the available paths, which is the best path? Similarly, in physics: given a system's configuration at an excited state, what is the most likely configuration of the system at its minimum energy state? Quantum physics can find such a state of the system, and can do so efficiently [NMG+18]. For certain problem sets, e.g., machine learning, a viable approach is to sample from the many low-energy states and characterize the landscape, which yields state models for improving input parameters.

Probabilistic models handle uncertainty by accounting for gaps in knowledge and errors in data sources. Probability distributions represent unobserved quantities in the model, including noise, and they can address the challenge of relating observations to the measured data.

4.2 Optimization via Quantum Annealing

While in some circles it is arguable whether optimization by quantum annealing can be considered quantum computing, for the purpose of this thesis we take the same position held by Deutsch [Deu85]:
Two classical deterministic computing machines are 'computationally equivalent' under given labelings of their input and output states if they compute the same function under those labelings. But quantum computing machines, and indeed classical stochastic machines, do not 'compute functions' in the above sense, the output state depending on the input state. The output state of a quantum machine, although fully determined by the input state, is not an observable, and so the user cannot in general discover its label. Nevertheless, the notion of computational equivalence can be generalized to apply to such machines also.

Using this argument, a quantum annealer is a "computer" that computes the output states of a problem defined by a function f. Although the output is stochastic, if sufficiently many runs are performed, we can assert, with statistical confidence, that the output is that of a "computer" designed to compute the function f.

4.2.1 Probabilistic Confidence

The approach of adopting statistical sampling results as the output of a model using large-scale simulations, which may include QIP experiments, remains an open research area. In particular, one can ask: how does the noise in these experiments impact the accuracy of the results? This outlook hinges on the Chernoff bound theorem [MR95], which states that if $x_1, x_2, \ldots, x_n$ are independent random variables taking values in $[0, 1]$, with mean $\mu$, then

$P\left( \left| \frac{1}{n} \sum_{i=1}^{n} x_i - \mu \right| \geq \varepsilon \right) \leq 2 e^{-2 \varepsilon^2 n}$    (14)

With a sufficient number of runs along independent paths, we posit an idealized architecture that, although yielding probabilistic results, allows us to accept the solution with statistical confidence.

4.2.2 Ising Model

Statistical mechanics served as the foundation for the Ising model of ferromagnetic systems. The Ising model is an n-dimensional lattice model that measures the interactions and biases of a magnetic system [Isi25]. The Ising model describes a macroscopic physical system in terms of its microscopic structure.
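The role Equation 14 plays can be illustrated with a small simulation. The sketch below is a hypothetical illustration, not part of the thesis's experiments: it treats each annealer run as an independent Bernoulli draw with success probability p and compares the empirical frequency of large deviations of the sample mean against the bound $2e^{-2\varepsilon^2 n}$.

```python
import math
import random

def hoeffding_bound(eps, n):
    """Upper bound on P(|sample mean - mu| >= eps) for n independent
    [0, 1]-valued samples (Equation 14)."""
    return 2.0 * math.exp(-2.0 * eps * eps * n)

def deviation_frequency(p, n, eps, trials, rng):
    """Empirical frequency with which the mean of n Bernoulli(p) draws
    (a stand-in for repeated annealer runs) deviates from p by >= eps."""
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(n)) / n
        if abs(mean - p) >= eps:
            hits += 1
    return hits / trials

rng = random.Random(42)
n, eps = 200, 0.1
bound = hoeffding_bound(eps, n)                    # 2e^{-4}, about 0.0366
freq = deviation_frequency(0.7, n, eps, 2000, rng)
print(freq, bound)
```

The observed deviation frequency sits well below the bound, which is what lets us place statistical confidence in a majority-vote readout over many stochastic runs.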
What makes the Ising problem particularly challenging is that it has many degrees of freedom. In a lattice configuration, the Ising model is made up of dipoles at the vertices, and each dipole can spin either up or down, depending on the direction of the electromagnetic spin. Formally, the Hamiltonian of the system is

$H(s) = -J \sum_{\langle kl \rangle} s_k s_l - B \sum_{k=1}^{N} s_k$    (15)

where $J$ is the interaction constant, $\langle kl \rangle$ denotes the sum over nearest neighbors, $B$ is the external magnetic field, and $N$ is the number of dipoles in the system. The spin $s_k$ can be either $+1$ or $-1$, representing an up spin or a down spin, respectively.

Despite many attempts to solve the Ising problem, it has been proven theoretically difficult: finding the solution to an Ising problem is NP-hard for two or more dimensions [Bar82], as the number of possible combinations one must test is combinatoric. Aggressive recent attempts have been made to solve the Ising problem with Monte Carlo simulation methods, including the use of GPGPUs [Jur11, PVPS09, Wei11]. In addition to having broad appeal in many fields, the Ising model itself serves as the foundation for the quantum annealing computation model. D-Wave, the first commercially available "quantum" system, utilized the adiabatic theorem and built their system with the hope of exploiting quantum annealing to solve the Ising problem [DW18].

4.2.3 Adiabatic Quantum Optimization

The D-Wave Quantum Processing Unit (QPU) architecture is designed to address the difficult problem of finding the minimum energy state for $H(\sigma)$:

$H_{ising} = -\frac{A(s)}{2} \left( \sum_i \hat{\sigma}_x^{(i)} \right) + \frac{B(s)}{2} \left( \sum_i h_i \hat{\sigma}_z^{(i)} + \sum_{i>j} J_{i,j} \hat{\sigma}_z^{(i)} \hat{\sigma}_z^{(j)} \right)$    (16)

where $\hat{\sigma}_{x,z}^{(i)}$ are Pauli matrices operating on qubit $q_i$, and $h_i$ and $J_{i,j}$ are the qubit biases and coupling strengths, respectively [DW18]. If one can map a challenging problem to the Ising model, one can, via the direct embedding method, easily "program" the D-Wave [Cho08].
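The classical energy of Equation 15 is straightforward to evaluate for small systems. The sketch below (an illustration with made-up lattice and constants) finds the ground state of a four-dipole lattice by enumerating all $2^4$ configurations, which is exactly the brute-force search that becomes intractable as the lattice grows.

```python
import itertools

def ising_energy(spins, edges, J, B):
    """H(s) = -J * sum over nearest-neighbor pairs <kl> of s_k * s_l
              -B * sum over k of s_k     (Equation 15)."""
    interaction = sum(spins[k] * spins[l] for k, l in edges)
    field = sum(spins)
    return -J * interaction - B * field

# A 2x2 lattice of dipoles with nearest-neighbor edges (no wraparound).
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
ground = min(itertools.product([-1, 1], repeat=4),
             key=lambda s: ising_energy(s, edges, J=1.0, B=0.5))
print(ground, ising_energy(ground, edges, 1.0, 0.5))
# Ferromagnetic J > 0 with B > 0 aligns all spins up: energy -6.0
```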
To embed an Ising problem on the D-Wave, one expresses the vector $h_j$ and the matrix $J_{ij}$ (of Equation 16) as input variables. The output from the D-Wave is a configuration of values $\sigma_{i,j}$ that yields the minimum energy state for $H(\sigma)$. We note that the D-Wave does not always provide a solution and its results are stochastic, thus a histogram over multiple runs is often desirable.

AQC is an approach that utilizes specialized optimization hardware in a brute-force way to find the minimum energy state of an Ising configuration. To this end, the essence of the Ising model formulation is the minimization of a quadratic polynomial (cost function) whose variables are constrained to {0, 1}. The standard form for a problem in the Ising model is:

$H(\sigma) = \sum_j h_j \sigma_j + \sum_{i \neq j} J_{ij} \sigma_i \sigma_j$    (17)

where $\sigma_i \in \{0, 1\}$, $h_j \in \mathbb{R}$, and $J_{ij} \in \mathbb{R}$, for all $i, j$. Finding the minimum of $H(\sigma)$ is proven to be NP-hard [Bar82], as the number of possible combinations one must test is combinatoric. The D-Wave system has an additional term in its Hamiltonian, which is the controller; this value is a fixed constant derived from experimentation. This constant, as we will see, does not change the overall approach to using a quantum annealer as an optimization device (see Equation 16).

A number of difficult problems, including the NP-hard graph partitioning problem, have been mapped to the Ising model in hopes that a quantum annealing device can solve an otherwise intractable problem in polynomial time [Luc13]. By mapping the CSO-Resolution problem onto a quantum annealing device, we hope to achieve better asymptotic performance.

4.2.4 Graph Partitioning Example

We present here an example of how an NP-hard problem that is simple to express can be mapped to the Ising spin glass model, which may potentially benefit from quantum advantages. This mapping is inspired by Lucas's mapping of Karp's famous 21 NP-hard problems to the Ising spin glass model.
We note that our contribution is converting his Hamiltonian to an embedded form that can be programmed into the D-Wave. Consider an undirected graph $G = (V, E)$ with an even number $N = |V|$ of vertices. We are interested in partitioning the graph: dividing the set $V$ into two subsets of equal size $N/2$ such that the number of edges connecting the two subsets is minimized. This is known to be an NP-hard problem [Kar72].

Figure 4.1: Partitioning a graph into two equal subsets with a balanced number of nodes is an NP-hard problem.

One approach is to map the graph partition problem into an Ising spin problem. For this we consider the following goals:

Balance the partition: place an Ising spin $s_v = \pm 1$ on each vertex $v \in V$ of the graph, where $+1$ and $-1$ denote the vertex being in the $+$ set or the $-$ set.

Minimize the number of inter-partition edges: penalize any two vertices ($v_i$ and $v_j$) having opposite spins that are connected by an edge.

It follows that the cost of ensuring that the partition size is balanced is:

$H_A(s) = A \left( \sum_{i=1}^{N} s_i \right)^2$    (18)

For the enforcement of the second constraint, we specify the cost of "connecting edges" between partitions:

$H_B(s) = B \sum_{ij \in E} \frac{1 - s_i s_j}{2}$    (19)

The total penalty cost for partitioning the graph into two subgraphs is:

$H(s) = H_A(s) + H_B(s)$    (20)

We now need to express $H_A$ and $H_B$ in Ising form:

$H_A(s) = A \left( \sum_{i=1}^{N} s_i \right)^2 = A \sum_{i=1}^{N} s_i^2 + A \sum_{i \neq j} s_i s_j = K_1 + A \sum_{i \neq j} s_i s_j$    (21)

and

$H_B(s) = B \sum_{ij \in E} \frac{1 - s_i s_j}{2} = K_2 - \frac{B}{2} \sum_{ij \in E} s_i s_j$    (22)

The constants $K_1$ and $K_2$ can be ignored as they do not depend on $s$. Therefore, the Hamiltonian for the graph partition problem as an Ising problem is:

$H(s) = A \sum_{i \neq j} s_i s_j - \frac{B}{2} \sum_{ij \in E} s_i s_j$    (23)

Our challenge is to find the $J_{ij}$.
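Before choosing weights, the construction in Equations 18 through 20 can be checked by brute force on a toy graph. The sketch below uses a hypothetical six-vertex example, two triangles joined by a single bridge edge, so the best balanced partition should cut exactly that bridge.

```python
import itertools

def partition_energy(s, edges, A, B):
    """H(s) = A * (sum_i s_i)^2 + B * sum_{ij in E} (1 - s_i*s_j)/2
    (Equations 18-20): balance penalty plus cut-edge penalty."""
    balance = A * sum(s) ** 2
    cut = B * sum((1 - s[i] * s[j]) / 2 for i, j in edges)
    return balance + cut

# Two triangles {0,1,2} and {3,4,5} bridged by edge (2,3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
best = min(itertools.product([-1, 1], repeat=6),
           key=lambda s: partition_energy(s, edges, A=10.0, B=1.0))
cut_edges = [(i, j) for i, j in edges if best[i] != best[j]]
print(best, cut_edges)
```

With A much larger than B, every minimizer is balanced, and the surviving penalty is exactly one cut edge, the bridge (2, 3).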
We can pick values for $A$ and $B$ such that:

$A \gg B$ if balance is important

$A \approx B$ if balance and minimizing the number of edges are equally important

$A \ll B$ if minimizing the number of edges is important

Given that the canonical Ising form is:

$H(\sigma) = \sum_j h_j \sigma_j + \sum_{i \neq j} J_{ij} \sigma_i \sigma_j$    (24)

Since the bias $h_i$ of each node does not contribute to the overall minimization problem, this term is omitted from the Ising formulation. Equation 23 is therefore of the same form as Equation 24, demonstrating that the values for $J_{ij}$ are now simple inputs and can be programmed into the D-Wave machine.

4.3 CSO-Resolution with Quantum Annealing

4.3.1 Mapping the Knapsack Problem to an Ising Spin Glass

In Chapter 3, we demonstrated that MKP can be embedded as a CSO problem in polynomial time. Another possible approach is to follow Lucas, who performs a transformation of a generalized knapsack problem to the Ising spin glass model:

$H_A = A \left( 1 - \sum_{n=1}^{W} y_n \right)^2 + A \left( \sum_{n=1}^{W} n y_n - \sum_{\alpha} w_\alpha x_\alpha \right)^2$    (25)

$H_B = -B \sum_{\alpha} c_\alpha x_\alpha$    (26)

Although this transformation is theoretically possible, the process of mapping such a model to a chimera quantum annealer is challenging. The first difficulty with this approach is that it is not "ready" for the AQC annealer in its current representation. The second challenge is that the Ising formulation presented is for the 0-1 knapsack problem, which is simpler than the MKP [Pis95]. We argue that, by way of the CSO-Resolution problem, a quantum algorithm for MKP is now possible.

4.3.2 CSO-Resolution Quantum Annealing Estimation Model

Unlike the grids in the traditional approach, we must also discretize the intensities, because the targeted hardware does not have a mechanism to dynamically compute an amplitude $a$ on the fly. The set of grid points is now $\{(a_r, x_r, y_r) \mid r = 1, \ldots, R\}$. As previously stated, the goal is to determine the intensity and position of a fixed number $K$ of targets.
For this, we would like to find the choice of $(x, y)$ and $a$ which minimizes:

$\tilde{R}(a, x, y, I_{meas}) = \sum_{r}^{R} \left( I_{meas}(r) - \sum_{k=1}^{K} I_{a_k x_k y_k}(r) \right)^2$    (27)

where $I_{meas}(r)$ is the pixel value at gridpoint $r$ and $I_{a_k x_k y_k}(r)$ is the discretized irradiance value of the $k$-th object at gridpoint $r$.

4.3.3 CSO-Resolution Quantum Annealing Algorithm

We now seek to express the CSO-Resolution problem (Equation 27) in the same form as the Ising model (Equation 17). This transformation will, in theory, permit us to solve the CSO-Resolution problem on a D-Wave machine. For ease of notation, we set $M_{ij} = I_{meas}(i, j)$ and $\phi_{ijr} = I_{a_r x_r y_r}(i, j)$. Let $\sigma \in \{0, 1\}^R$, where $R$ is the total number of gridpoints. The $r$-th entry $\sigma_r$ of $\sigma$ denotes whether the $r$-th gridpoint is part of the CSO solution. Let $K$ be the desired number of objects in the CSO solution and let $h$ be a weighting parameter. To determine the number of targets and their locations, we minimize $H_A(\sigma)$:

$H_A(\sigma) = \sum_{i,j} \left( M_{ij} - \sum_r \phi_{ijr} \sigma_r \right)^2 = \sum_{i,j} M_{ij}^2 + \sum_r \left[ -2 \sum_{i,j} M_{ij} \phi_{ijr} \right] \sigma_r + \sum_r \left[ \sum_{i,j} \phi_{ijr}^2 \right] \sigma_r^2 + \sum_{r \neq s} \left[ \sum_{i,j} \phi_{ijr} \phi_{ijs} \right] \sigma_r \sigma_s$    (28)

As with all minimization problems, constants do not affect the problem's outcome and can be ignored. Furthermore, since $\sigma_r \in \{0, 1\}$, we can substitute $\sigma_r^2 = \sigma_r$:

$H_A(\sigma) = \sum_r \left[ -2 \sum_{i,j} M_{ij} \phi_{ijr} + \sum_{i,j} \phi_{ijr}^2 \right] \sigma_r + \sum_{r \neq s} \left[ \sum_{i,j} \phi_{ijr} \phi_{ijs} \right] \sigma_r \sigma_s$    (29)

To clean things up, we set

$\tilde{h}_r = -2 \sum_{i,j} M_{ij} \phi_{ijr} + \sum_{i,j} \phi_{ijr}^2$    (30)

and

$\tilde{J}_{rs} = \sum_{i,j} \phi_{ijr} \phi_{ijs}$    (31)

The Hamiltonian $H_A$ can now be expressed in standard Ising form:

$H_A(\sigma) = \sum_r \tilde{h}_r \sigma_r + \sum_{r \neq s} \tilde{J}_{rs} \sigma_r \sigma_s$    (32)

In order to avoid overfitting (see Definition 14), i.e. cases where the entire focal plane is populated with CSOs whose irradiance values are small, we must also constrain the number of CSOs present in the focal plane. For this, we minimize $H_B(\sigma)$:

$H_B(\sigma) = h \left( \sum_r \sigma_r - K \right)^2$    (33)

Applying the same steps as above to $H_B(\sigma)$, the Hamiltonian that constrains the CSO count is:

$H_B(\sigma) = h \left[ \sum_r \sigma_r - K \right]^2 = h \sum_r \sigma_r^2 + h \sum_{r \neq s} \sigma_r \sigma_s - 2hK \sum_r \sigma_r + hK^2 = \sum_r (h - 2hK) \sigma_r + h \sum_{r \neq s} \sigma_r \sigma_s$    (34)

The complete Hamiltonian for the CSO-Resolution problem is $H(\sigma) = H_A(\sigma) + H_B(\sigma)$, which can now be expressed as:

$H(\sigma) = \sum_r (\tilde{h}_r + h - 2hK) \sigma_r + \sum_{r \neq s} (\tilde{J}_{rs} + h) \sigma_r \sigma_s$    (35)

With $\hat{h}_r = \tilde{h}_r + h - 2hK$ and $\hat{J}_{rs} = \tilde{J}_{rs} + h$, we now have the appropriate representation of the CSO-Resolution problem in Ising form:

$H(\sigma) = \sum_r \hat{h}_r \sigma_r + \sum_{r \neq s} \hat{J}_{rs} \sigma_r \sigma_s$    (36)

4.4 Programming the CSO-Resolution on an Annealer

4.4.1 Ising Parameters Extraction

One of the major thrusts of this research has been to assemble inputs for an Ising annealer from a CSO model. For this, we have developed an approach to automate the Ising parameter extraction. Algorithm 7 builds $\hat{h}$ and $\hat{J}$ (Equation 36), which are the canonical input variables for an annealer that simulates the Ising model. The function phi is a wrapper around the PRF (Equation 6); the functions calc-J and calc-H are developed using the Clojure [Hic08] map and reduce constructs, which generate H and J rapidly and efficiently from inputs of gridpoints and pixels. Once $\hat{h}$ and $\hat{J}$ have been constructed, programming an annealer (such as a D-Wave machine) should, in theory, be fairly straightforward.

4.4.2 Limitations of the Chimera Graph Architecture

Despite early optimism surrounding the D-Wave, its restrictive chimera graph structure (Figure 4.2) makes writing programs more difficult than expected. In fact, embedding the CSO-Resolution quantum algorithm so that it can be programmed on an annealer requires many variables and many connections (or interactions) between the variables. This deficiency can be attributed to the chimera graph structure, which is limited by the available hardware. As such, even with the simplest CSO model, the required experimentation capability is beyond the D-Wave 2X's current capacity, which will be discussed in further detail in §4.5.3.
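Before turning to Algorithm 7, the algebra behind Equations 28 through 36 can be sanity-checked numerically. The sketch below uses made-up values for $M$ and $\phi$ (the two-pixel, three-gridpoint sizes are purely illustrative) and confirms that the Ising form of Equation 32 reproduces the direct residual of Equation 28 up to the dropped constant $\sum_{i,j} M_{ij}^2$.

```python
import itertools

# Toy data (hypothetical): 2 pixels, 3 candidate gridpoints.
# M[p] is the measured pixel value; phi[r][p] is the irradiance a target
# at gridpoint r would deposit on pixel p.
M = [1.0, 0.5]
phi = [[0.8, 0.1], [0.2, 0.6], [0.5, 0.5]]
R, P = len(phi), len(M)

h_tilde = [-2 * sum(M[p] * phi[r][p] for p in range(P))
           + sum(phi[r][p] ** 2 for p in range(P)) for r in range(R)]  # Eq. 30
J_tilde = {(r, s): sum(phi[r][p] * phi[s][p] for p in range(P))
           for r in range(R) for s in range(R) if r != s}              # Eq. 31

const = sum(m * m for m in M)  # the dropped sum_ij M_ij^2 term
for sigma in itertools.product([0, 1], repeat=R):
    direct = sum((M[p] - sum(phi[r][p] * sigma[r] for r in range(R))) ** 2
                 for p in range(P))                                    # Eq. 28
    ising = (sum(h_tilde[r] * sigma[r] for r in range(R))
             + sum(J_tilde[r, s] * sigma[r] * sigma[s]
                   for (r, s) in J_tilde))                             # Eq. 32
    assert abs(direct - (ising + const)) < 1e-9
print("Eq. 28 and Eq. 32 agree on all", 2 ** R, "configurations")
```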
Algorithm 7 CSO-Resolution Quantum Annealing Algorithm.

(defn phi
  "Calculate contribution from point source at index r on pixel at (i,j)."
  [r p {{R :R} :pixel eod :eod :as Pa}]
  (let [[x y z] (n-to-xya Pa r)
        [i j] (n-to-ij R p)
        box (place-box [i j] Pa)
        target (place-target [x y z] Pa)]
    (calc-prf box target eod)))

(defn calc-H
  "Calculate the h at gridpoint r, with bias h and K desired targets."
  [M r h K {{R :R C :C} :pixel :as Pa}]
  (let [P (pixel-range Pa)
        f1 (fn [p] (* (M p) (phi r p Pa)))       ;; M_ij * phi_ijr  (Eq. 30)
        f2 (fn [p] (c/square (phi r p Pa)))]     ;; phi_ijr^2       (Eq. 30)
    (+ (* -2.0 (reduce + (map f1 P)))
       (reduce + (map f2 P))
       h
       (* -2.0 h K))))

(defn calc-J
  "Calculate the J interaction between gridpoints r & s, with bias h and K desired targets."
  [M r s h K {{R :R C :C} :pixel :as Pa}]
  (let [P (pixel-range Pa)
        J (fn [p] (if (= r s)
                    0
                    (* (phi r p Pa) (phi s p Pa))))]
    (+ (reduce + (map J P)) h)))

(defn CSO-Resolution [bias estimate Pa]
  (def G (grid-range Pa))
  (def h-f (fn [r] (calc-H M r bias K Pa)))
  (def J-f (fn [r] (map (fn [s] (calc-J M r s bias K Pa)) G)))
  (def h (map h-f G))
  (def J (flatten (map J-f G))))

Figure 4.2: The D-Wave connectivity is structured as a chimera graph (courtesy of D-Wave Systems). This figure shows a 3 x 3 chimera graph whose qubits are arranged in 9 unit cells.

The chimera graph architecture on the D-Wave QPU is a representation of the interconnected qubits arranged in a lattice structure. Each vertex in the lattice has degree six, because it can only be physically connected to four neighbors in the same plane and two neighbors in the orthogonal plane. A chimera graph of size $C_s$ is an $s \times s$ grid of chimera cells, where each cell contains a bipartite graph on eight vertices. In an ideal setting, assuming a perfect and noise-free annealer, the D-Wave 2X has an underlying chimera graph of size $C_{16}$, which can represent 2048 problem variables¹ [KYR+19]. However, due to fabrication limitations and trapped magnetic flux, the number of usable qubits of a deployed D-Wave system is lower.
In fact, according to D-Wave:

In a D-Wave QPU, the set of qubits and couplers that are available for computation is known as the working graph. The yield of a working graph is typically less than the total number of qubits and couplers that are fabricated and physically present in the QPU [DW20].

There have been numerous attempts to improve on the limitations of the chimera graph representation [BKR16, CMR14, YD16, XSW+18, OOTT18]. Additionally, novel graph architectures, such as Pegasus, have been proposed to run on the annealer [DSC19]. Despite these efforts, programming on the D-Wave QPU remains difficult from the perspective of connectivity scaling. To this end, it is important to understand if, and how, the CSO-Resolution problem on a QPU is bounded by the limited number of interconnections and the limited number of qubits (variables).

In summary, although we have proposed an algorithm to generate the inputs to the D-Wave QPU, the number of variables ($\sigma$) and the interconnections between qubits (Equation 33) remain large. Because of the limited interconnections dictated by its chimera architecture, the actual programming of the D-Wave remains an open-ended challenge that is beyond the scope of this thesis; however, an analysis quantifying the limiting factors will be presented in the following section.

¹ We compute the number of variables with the relationship $n = 8s^2$, since each of the $s^2$ chimera cells contains eight qubits; problem difficulty scales exponentially with the chimera size.

4.5 Analysis of the CSO-Resolution Quantum Algorithm

To analyze the performance and "programmability" of the CSO-Resolution quantum algorithm on a chimera-structured annealer architecture, we first define some related terminology and metrics. Recall from §4.3.2 that a target's possible location and irradiance are represented by $\{(a_r, x_r, y_r) \mid r = 1, \ldots, R\}$, where $R$ is the total number of gridpoints required for an AQC estimation; $R$ corresponds to the number of variables required, i.e. $\sigma \in \{0, 1\}^R$.
Consider a two-dimensional focal plane $P$. Let us assume a uniform grid of $S$ points in each direction and a uniform grid of $n$ points for the irradiance. The traditional method has grid size $R = S^2$, the CSO-Resolution quantum algorithm has grid size $R' = S^2 n$, and the improved binary representation, as shown in §4.6, would have grid size $R'' = S^2 \log(n)$.

We conducted two analyses on the effectiveness and feasibility of the adiabatic quantum CSO-Resolution algorithm. First, we studied the complexity of the quantum algorithm on an idealized annealer. Second, given that such a device is not yet available, we determined whether the limitations of the D-Wave and its chimera graph structure would preclude an implementation of a typical CSO-Resolution problem.

4.5.1 CSO-Resolution Quantum Algorithm Complexity

Given an idealized quantum annealer, we consider the following two key attributes when assessing the CSO-Resolution algorithm. First, the annealer would have true quantum properties, i.e. entanglement, quantum parallelism², and a closed system. Second, it should not impose any limitations on the number of connections between the qubits; i.e. an idealized annealer with $n$ qubits would have $C = \frac{1}{2} n(n-1)$ connections, such that each qubit would have an interaction with every other qubit.

Given the above attributes, when the CSO-Resolution quantum algorithm is executed on such an annealer, the number of targets $K$ would not impact the complexity, since a quantum system can simultaneously search independent solution paths. This is an important insight into the potential for quantum advantage. Let us compare the runtime complexity of the traditional method against the quantum CSO-Resolution algorithm.

² A system exhibits quantum parallelism when its memory register exists in a superposition of base states. In other words, quantum parallelism occurs when a function $f$ (though applied only once) simultaneously affects all components of the superposition.
For the traditional method, the complexity is $O(S^{2K})$; as $K$ increases, so does the runtime, and in fact it increases exponentially. For the quantum algorithm, the complexity, again thanks to quantum parallelism, is $O(S^2)$. If $n \approx S^2$, it is easy to see that we would need $K \geq 3$ to hopefully harness the desired speedup due to quantum advantage.

4.5.2 Minimum CSO-Resolution Configuration

Given the chimera graph's restrictive structure, we consider below a minimum CSO-Resolution configuration. A meaningful linear fit for a single pixel requires four subpixel gridpoints. As discussed in Chapter 2, the pixel values represent the data and the gridpoints are represented by the variables. Each gridpoint has three parameters ($x$, $y$, and the amplitude). For the seven-pixel CSO configuration (Figure 4.3), we can estimate the location and amplitude of two closely spaced targets with six unknowns using seven data points.

Figure 4.3: The smallest CSO configuration for two closely spaced targets requires four subpixel gridpoints per pixel and a minimum of seven pixels. Therefore, this configuration requires a minimum of 28 location gridpoints.

4.5.3 CSO-Resolution Quantum Algorithm Representation

We apply the quantum resolution algorithm to the minimum CSO configuration (shown above) with discretized amplitudes. There are two challenges to embedding a representation of the CSO-Resolution problem in an annealer: limited variables and limited connectivity. First, for a two-digit-precision model, the amplitude discretization requires at least 100 discrete steps (at each grid location on the focal plane), which means that for the minimum CSO-Resolution problem, the required total number of variables is 2800. The D-Wave machine at USC/ISI has only 1098 working qubits, hence a chimera graph of size $C_{12}$. The D-Wave 2X QPU is structured as a $C_{16}$ chimera graph and contains 2048 qubits. For both D-Wave machines, the required number of variables exceeds the number of available qubits.
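The variable and coupler counts quoted in this section follow from simple arithmetic. A small helper (hypothetical, matching the unary encoding described here) makes the bookkeeping explicit:

```python
def cso_resource_estimate(pixels, gridpoints_per_pixel, amplitude_levels):
    """Unary-encoded CSO instance size: one binary variable per
    (location gridpoint, amplitude level) pair, plus a dense coupler
    matrix over all variable pairs."""
    variables = pixels * gridpoints_per_pixel * amplitude_levels
    couplers = variables * (variables - 1) // 2
    return variables, couplers

# Minimum two-target configuration: 7 pixels, 4 subpixel gridpoints each,
# 100 discrete amplitude steps for two-digit precision.
v, c = cso_resource_estimate(7, 4, 100)
print(v, c)  # 2800 variables and 3,918,600 couplers (~3.9 million)
```

The 2800 required variables already exceed the 2048 qubits of an ideal $C_{16}$ chimera graph, before any connectivity considerations.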
Second, adding to the challenge of the limited number of variables is the need for high connectivity, i.e. a dense or near-dense $J$, where the total number of connections needed is $C \approx 3.9$ million edges. A D-Wave 2X QPU contains 2048 nodes and 6016 edges, which means that the underlying connectivity graph is sparsely connected. At the cost of reducing the number of available variables, minor embedding, a qubit-chaining method, can be employed to increase connectivity [DW20]. Another obstacle is the decrease in the reliability of a chain when it exceeds five nodes [BCM+17, HNC18]. With minor embedding, the D-Wave machine at USC/ISI can represent a fully connected graph with a $J$ of approximate size $50 \times 50$, and the D-Wave 2X can support a fully connected graph with a $J$ of approximate size $100 \times 100$. The minor-embedded graphs on both machines are much smaller than the $J$ of size $2800 \times 2800$ needed to embed the minimum CSO-Resolution problem.

Given these limitations, it is not yet possible to run a CSO-Resolution experiment on the D-Wave QPU, even for what is considered a "toy problem" with a minimum CSO configuration. For a realistic CSO model, the number of location gridpoints can be held to a fixed size. However, the irradiance range can vary significantly. In fact, it is reasonable to expect that a higher-fidelity estimation of a target's brightness level requires more than three digits of precision. To this end, the unary representation implies that the number of variables is large and inefficient. To address this concern, a more efficient representation has been developed.

4.6 Improvements of the Quantum Representation

In the presented quantum algorithm, the number of gridpoints ($R$) grows linearly as we increase the number of irradiance discretization points. This is problematic even though the location gridpoints can be fixed. We propose an approach that represents the irradiance values in the Hamiltonian in binary format.
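The binary amplitude format can be sketched as follows. The encoding helpers are hypothetical, but the arithmetic matches the reduction from n unary amplitude variables to log(n) binary ones:

```python
import math

def encode_amplitude(level, bits):
    """Binary variables sigma_{q,p} for one grid location q, so that
    level = sum over p of 2^p * sigma_{q,p}."""
    return [(level >> p) & 1 for p in range(bits)]

def decode_amplitude(sigma_qp):
    return sum(bit << p for p, bit in enumerate(sigma_qp))

levels = 100                          # two-digit amplitude precision
bits = math.ceil(math.log2(levels))   # 7 bits replace 100 unary flags
sigma = encode_amplitude(83, bits)
unary_vars, binary_vars = 28 * levels, 28 * bits
print(sigma, decode_amplitude(sigma), unary_vars, binary_vars)
```

For the 28-gridpoint minimum configuration, this drops the variable count from 2800 to 196, at the price of the coupling terms between amplitude bits developed below.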
In doing so, we would reduce the number of variables needed to represent the discrete irradiance values from $n$ to $\log(n)$.

Approach 1: The first efficiency improvement uses an encoding that combines an index value with a digit value. If we let $\sigma_q$ be a variable that indicates whether the two-dimensional grid location $q$ is occupied, and $\{\sigma_{q,p}\}$ be the binary representation of the amplitude, where $q$ is the grid location and $p$ is the digit of the amplitude, then the following Hamiltonian captures the mapping:

$H_a(\sigma) = \sum_{i,j} \left( \sum_{q,p} 2^p \sigma_{q,p} \phi_{ijq} - M_{ij} \right)^2$    (37)

$H_b(\sigma) = \left( \sum_q \sigma_q - K \right)^2$    (38)

$H_c(\sigma) = \sum_q (1 - \sigma_q) \sum_p \sigma_{q,p}$    (39)

Here $H_c$ penalizes any location whose amplitude bits are nonzero while the location itself is not selected.

Approach 2: The second improvement to the required number of variables represents the Hamiltonian by decoupling the amplitude from the gridpoints. Again we let $\{\sigma_{q,p}\}$ be the binary representation of the amplitude, and we define a further set of variables $\{\tau_{q,p}\}$ for $p \geq 1$, with the base case that $\tau_{q,0}$ is the same variable as $\sigma_{q,0}$. The Hamiltonian in this approach is also a sum of three terms, $H_a$, $H_b$, and $H_c$. The first term $H_a$ is the same, and $H_b$ is almost the same:

$H_b(\tau) = \left( \sum_q \tau_{q,B-1} - K \right)^2$    (40)

where $B$ is the number of bits in the binary representation of the amplitude. The final part of the Hamiltonian ensures that $\tau_{q,B-1}$ equals 1 exactly when at least one of the $\sigma_{q,p}$ equals 1:

$H_c(\sigma, \tau) = \sum_q \sum_{p \geq 1} \left[ (\tau_{q,p} - \tau_{q,p-1} - \sigma_{q,p})^2 - \tau_{q,p-1} \sigma_{q,p} \right]$    (41)

In both cases, the theoretical representations can be worked out with algebra to obtain the final Hamiltonian.

The optimization efforts shown above are theoretical improvements to the CSO-Resolution quantum embedding representation. They demonstrate the potential of future annealer technology with improved connectivity and an increased number of qubits available for computation. However, an open research question remains: will the reduction from $R' = S^2 n$ to $R'' = S^2 \log(n)$ accommodate the quantum algorithm mapping such that it can be programmed on the annealer?

4.7 Summary

We have presented a quantum annealing approach to address a challenging problem: the resolution of closely spaced objects.
The methodology discussed in this chapter presents itself as potentially effective in resolving a very difficult problem, one that has been plaguing the space surveillance community for decades.

We studied the adiabatic quantum annealing architecture and examined how the CSO-Resolution problem matches up against, and is amenable to, it. Positive contributions from this effort include the design of a quantum algorithm to tackle the CSO-Resolution problem and the development of an algorithm to transform a CSO-Resolution problem into inputs to an AQC annealer.

We performed analyses on the quantum CSO-Resolution algorithm and observed that the chimera graph structure of the current D-Wave is not suitable for our problem in practice: the number of qubits and their connectivity are not large enough.

We proposed an optimization strategy to enhance the representation of the CSO-Resolution algorithm by reducing the number of variables required. Even with this optimized representation, embedding a realistic CSO-Resolution problem onto the current D-Wave architecture is not yet practical. However, we anticipate that future improvements to the hardware will overcome these challenges.

Chapter 5

Conclusion

In the space technology research field, there are many problems that are difficult and, in fact, many that do not yet have a solution. Among these is the CSO-Resolution problem. CSO-Resolution is an active research field and is important on many fronts. First, within the space technology community, it is an integral first step toward the tracking of targets. Its applications' impact on astronomy, through the study of terrestrial motion, and on military science, through the study of tracking and estimation, are important and urgent issues. In particular for national security, the detection of targets such as missiles and manned or unmanned vehicles relies heavily on accurate and efficient CSO-Resolution algorithms.
CSO-Resolution is a challenging problem that touches multiple disciplines. In physics, it is a complicated problem requiring an accurate model of the spatial and kinematic representation of the physical world. In computer science, resolving targets in a closely spaced area requires intensive computational resources. In mathematics, CSO-Resolution, though widely studied, is plagued by the curse of dimensionality because of the problem's combinatoric structure.

Over the years, there have been many attempts to understand the problem's challenges. Up to now, these studies, well represented in the professional literature, have seemingly not included a comprehensive algorithmic analysis. An algorithmic and complexity analysis is important because, if a problem is believed to be "hopeless," researchers can avoid searching for a direct solution, especially one that involves brute force. Researchers attempting to improve the performance of CSO-Resolution solutions have made numerous attempts at progress by way of heuristics. These attempts have resulted in moderate improvements, but they often compromise the accuracy of the model by decreasing the resolution of the sensor or capping the number of targets at a relatively low two or three candidates. These barriers have deterred researchers from working with higher-fidelity modeling and simulation, especially of a realistic sensor-captured scenario. To these points, this thesis purports to answer the call in three theoretically sound ways:

1. We formally performed a theoretical analysis of the difficulty of the problem

2. We presented and justified a quantum annealing optimization algorithm to attack the CSO-Resolution problem

3.
We have improved on the naive variable expression with a more compact representation

5.1 Contributions

Though we believe that this thesis has made a number of contributions to the fields of computer science and modeling and simulation, we highlight below the three major contributions.

First, the work from this thesis has provided a formal proof demonstrating that the CSO-Resolution problem is NP-Complete. This is significant on two fronts: it validates the worthiness of the research, and such work had not previously been done, so it provides the needed stepping stone for future CSO and combinatoric algorithm research.

Second, the work done on the mapping of MKP to CSO-Resolution has an interesting side effect: we have indirectly explored a quantum algorithm for MKP, a notoriously difficult problem in the NP-hard space. While this was an unintended contribution, we note this accomplishment here because it highlights that this work is among the first, if not the first, to provide a quantum algorithm to address MKP, and it may be extensible to many other problems of a like kind.

Third, the research provides a mapping of the CSO-Resolution problem to the currently operational adiabatic quantum annealer. While it is arguable whether the D-Wave platform actually yields the optimistic performance increases anticipated from quantum computation, this thesis does not presume to settle this debate. Instead, the thesis demonstrates that an NP-Complete problem can be attacked if a quantum annealer platform is available and the announced performance enhancements are delivered. Given the limitations of the chimera graph configuration, the thesis presented two approaches to optimize the representation of the Ising model by reducing the number of variables required from $n$ to $\log(n)$.
Although the reduction in the number of required variables has no immediate impact, we argue that it will eventually help in the quantum computing world, where qubits are a premium commodity for fitting real-world problems onto a quantum computing platform. We note that similar issues and criticisms greeted the introduction of massively parallel computer technologies in the early nineties.

5.2 Shortcomings

A potential criticism of this thesis is its lack of experimental data. While it would have been desirable to conduct experiments on the D-Wave to empirically show the algorithm's correctness, the thesis's main objectives were theoretical and algorithmic development. This work is nevertheless a necessary step toward exploiting the improvements that are putatively forthcoming, and it should enable future courses of action for research on fertile ground, rich with opportunities.

The adiabatic hardware architecture and implementation currently at the Lockheed-Martin/USC Quantum Computing Center located at ISI has several deficiencies. First, the current technology does not support realistic problems, due to its restrictive chimera graph architecture; second, the D-Wave systems, like all adiabatic systems, still suffer from signal noise, making it difficult to get consistent and reliable results. Given the hardware configuration of the current generation of the D-Wave Two architecture housed at the Center, the number of operable qubits is 1098. This is further exacerbated by each qubit being limited to interactions with only six neighbors. Plans for addressing both the number of qubits and the neighbor-connectivity problem have been announced.

Finally, because the D-Wave model does not allow for dynamic computation of the irradiance value, we must discretize its values ahead of time. This limitation currently impacts our approach in two ways. First, it limits the precision of the irradiance computation.
Second, the number of gridpoints increases the variable requirements by a factor of n (or log(n) if we convert the discretization points from a unary to a binary representation). This too is a problem for future researchers to attack, using this thesis as a starting point. Together, these challenges limit the size of our CSO-Resolution problems and have contributed to the difficulty of pursuing experimental results.

5.3 Future Work

In light of both the progress and the limitations discussed in the previous sections, there are a number of opportunities to further this research; we highlight three areas below.

First, to validate the research, experiments verifying the correctness of the adiabatic quantum algorithm would greatly add value. Since the groundwork has been performed in Algorithm 7, where the inputs to the annealers have been worked out, we can readily program a future D-Wave architecture with a larger number of qubits, contingent on the delivery of a more capable D-Wave or comparable annealer.

Second, given the extensive and laborious research involved in developing the embedding, we see tremendous benefit in a repeatable embedding methodology. We also note that the current embedding method is largely based on trial and error; the ability to characterize the structure of the embedding process or algorithm with a formulaic implementation technique could bear considerable fruit.

Third, the research introduced an opportunity to show a quantum algorithm for MKP. While this was unintended, we can leverage this finding and examine how the results of our research on the CSO-Resolution problem can be used to attack other combinatoric NP problems.

Epilogue

In summary, we have shown a straightforward approach to mapping a problem of exponential complexity onto the Ising model, which can be attacked by a quantum annealing model.
Our approach has given hope to the possibility of solutions to a certain class of tracking and estimation problems that have previously been accepted as intractable.
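An Ising mapping like the one summarized here can be sanity-checked classically on toy instances before any annealer run. The sketch below is our own illustration, not the thesis's actual formulation: the helper names are ours and the bias (h) and coupling (J) values are hypothetical. It evaluates the Ising energy and brute-forces the ground state, which is feasible only for a handful of spins, exactly the exponential blow-up the annealer is meant to sidestep:

```python
from itertools import product

def ising_energy(h, J, spins):
    """E(s) = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j, with s_i in {-1, +1}."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

def brute_force_ground_state(h, J):
    """Exhaustively minimize the Ising energy over all 2^n spin assignments."""
    n = len(h)
    return min(product((-1, 1), repeat=n), key=lambda s: ising_energy(h, J, s))

# Toy 3-spin instance with hypothetical biases and couplings.
h = [0.5, -1.0, 0.2]
J = {(0, 1): 1.0, (1, 2): -0.8}
best = brute_force_ground_state(h, J)
print(best, ising_energy(h, J, best))  # (-1, 1, 1) with energy ≈ -3.1
```

Such a brute-force check is how one would verify, on small instances, that the h and J produced for an annealer actually encode the intended objective.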