Cooperative Localization of a Compact
Spacecraft Group using Computer Vision
by
William A. Bezouska
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree of
DOCTOR OF PHILOSOPHY
(ASTRONAUTICAL ENGINEERING)
December 2019
Copyright 2019 William A. Bezouska
Abstract
This work is concerned with how a group of satellites operating in close proximity might collectively determine the relative state of individual satellites using vision-based measurements. This capability has applications in aggregated and modular satellites, orbital construction and repair, and planetary and asteroid exploration. Whereas prior work has traditionally limited analysis to one-to-one satellite rendezvous scenarios, the present work treats the problem as a whole to focus on collaborative solutions facilitated by information sharing within the group of satellites. The solution presented in this thesis uses a Multiplicative Extended Kalman Filter to fuse relative vision-based measurements collected by individual satellites as well as other onboard sensors such as star trackers and gyroscopes. The resulting collaborative estimate that uses all available information is more accurate than if satellites estimate relative state in isolation. Furthermore, the filter framework is leveraged to select only a subset of measurements at each timestep with the goal of reducing power consumption, computation load, or communication bandwidth during operations. This selection strategy is posed as a minimization problem to select a specified number of measurements that reduces the weighted trace of the expected state covariance matrix for the full system state. A heuristic approach that incorporates the measurement error characteristics of the specified vision-based sensor is used to perform this minimization. The filtering and sensor selection methods are then tested using both simulated noisy measurement data as well as simulated vision-based data using rendered images. Finally, this work includes an extensive survey of the literature on orbital computer vision applications and on the subject of collaborative localization.
Acknowledgements
I would like to acknowledge and sincerely thank a number of people for making this work possible. First, thank you to my dissertation advisor, Dr. Joseph Kunc. Thanks to David Barnhart, my research advisor and a constant source of inspiration. Thank you to my colleagues at The Aerospace Corporation for new ideas and perspectives, especially Dr. Donald Lewis whom I have been fortunate to work for and learn from since the beginning of my time there. Thank you as well to The Aerospace Corporation for financial support as part of the Advanced Degree Fellowship program. Thank you to my parents, Anthony Bezouska and Dr. Theresa Walsh, for a childhood filled with learning and for showing me that the world abounds with interesting questions to explore. Thank you to my parents-in-law, Michael Willoughby and Janelle Willoughby, for, among many other things, helping to watch my wonderful children while I worked on this research. And finally, above all, thank you to my wife, Lindsey Willoughby, for the encouragement and understanding that has helped me to finally conclude this adventure.
Contents
1 Introduction 8
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3 Solution Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.4 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Nomenclature and Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.2 Quaternions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.3 Reference Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2 Survey: Orbital Computer Vision 18
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2 Orbital Computer Vision Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2.2 Challenges of Orbital Computer Vision . . . . . . . . . . . . . . . . . . . . . 19
2.3 Orbital Computer Vision Missions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.1 Past Missions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.2 Future Missions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.4 Vision Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4.1 Visual Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4.2 LIDAR Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.4.3 Simulation and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 Position Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5.1 Star Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5.2 Optical Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.5.3 Satellite Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6 Pose Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2.6.1 Cooperative Pose Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2.6.2 Non-cooperative Pose Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 57
2.7 Other Applications of Orbital Computer Vision . . . . . . . . . . . . . . . . . . . . . 73
2.7.1 Image-based Visual Servoing for Robotics . . . . . . . . . . . . . . . . . . . . 73
2.7.2 Data Quality Assurance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.7.3 Remote Sensing Image Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2.7.4 Satellite Target Characterization . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.7.5 Computational Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . 78
2.7.6 Validation and Verification Efforts . . . . . . . . . . . . . . . . . . . . 79
2.8 Future Research Directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
3 Survey: Cooperative Localization and Sensor Selection 80
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
3.2 Cooperative Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
3.2.1 Participant Identification . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.2.2 Kalman Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
3.2.3 Maximum a Posteriori Estimation . . . . . . . . . . . . . . . . . . . . . . . . 85
3.2.4 Distributed Estimation and Consensus . . . . . . . . . . . . . . . . . . . . . . 86
3.2.5 Mutual Localization Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
3.2.6 Cooperative Target Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
3.2.7 Structure from Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
3.2.8 Camera Calibration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
3.2.9 Space-based Cooperative Localization . . . . . . . . . . . . . . . . . . . . . . 96
3.2.10 Distributed Computer Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
3.3 Sensor Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.3.1 General Sensor Selection Problem . . . . . . . . . . . . . . . . . . . . . . . . 100
3.3.2 Sensor Selection for Target Tracking . . . . . . . . . . . . . . . . . . . . . . . 101
3.3.3 Sensor Selection for Cooperative Localization . . . . . . . . . . . . . . . . . . 101
3.3.4 Camera Placement and Selection . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.3.5 Coverage and Path Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
3.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
4 System Filter Design 105
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.3 System Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.3.1 Dynamical Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
4.3.2 Measurements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.4 Cooperative Localization Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4.1 Filter Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.4.2 Measurement Graph . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.4.3 Filter State . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
4.4.4 Measurement Updates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.4.5 Kalman Gain and Estimate Update . . . . . . . . . . . . . . . . . . . . . . . 117
4.4.6 Covariance Propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.4.7 State Propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4.4.8 Validation Gate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.4.9 Filter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
4.5 Simulations and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.5.1 Baseline Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
4.5.2 Performance Impact: Number of Satellites . . . . . . . . . . . . . . . . . . . . 128
4.5.3 Performance Impact: Measurement Graph Topology . . . . . . . . . . . . . . 128
4.5.4 Computational Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
4.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
5 Pose Estimation 133
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
5.2 Image Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.2.2 Camera Frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
5.2.3 Spacecraft Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
5.3 Monocular Pose Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
5.4 Stereo-vision Pose Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
5.4.1 Point Cloud Creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
5.4.2 Point Cloud Registration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
5.4.3 Monte Carlo Error Characterization . . . . . . . . . . . . . . . . . . . . . . . 142
5.4.4 Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
6 Sensor Selection 151
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
6.2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
6.3 Measurement Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
6.4 Sensor Selection Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
6.4.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
6.4.2 Uniform Spanning Trees (RANDOM TREE) . . . . . . . . . . . . . . . . . . 154
6.4.3 Maximum Covariance Trace (GREEDY) . . . . . . . . . . . . . . . . . . . . . 154
6.4.4 Randomly Selected Edges (RANDOM) . . . . . . . . . . . . . . . . . . . . . . 156
6.5 Sensor Selection Strategies Performance Comparison . . . . . . . . . . . . . . . . . . 156
6.5.1 Comparison of All Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
6.5.2 Greedy and Random Selection Comparison . . . . . . . . . . . . . . . . . . . 158
6.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
7 Conclusion and Future Work 161
Bibliography 163
Chapter 1
Introduction
1.1 Motivation
Estimating the relative position and attitude (collectively known as pose) between two satellites is essential for rendezvous and proximity operations (RPO) in orbit. Pose estimation is used, for example, by autonomous space cargo vehicles to rendezvous with the International Space Station (ISS), by robotic servicing spacecraft to repair damaged satellites, or by exploration spacecraft inspecting comets and asteroids. Onboard relative pose estimation techniques have matured to provide a diverse collection of robust, accurate, and autonomous solutions for RPO. The incredible developments in computer vision over the last two decades have been a large contributor to these advances. Computer vision allows a spacecraft to extract knowledge from two- or three-dimensional images. Two-dimensional images are provided by cameras while three-dimensional images are often supplied by depth sensors providing range, in addition to intensity, for each pixel.

With few exceptions, proximity satellite missions requiring relative pose estimation have been limited to a one-to-one interaction: one chaser spacecraft estimating the pose of a single target spacecraft. This mode of operation is a natural fit for both docking missions and satellite servicing missions. Recently, however, new space architectures based on many individual interacting spacecraft have been proposed. Such systems would require relative pose estimation of all the satellites in the group; the ability to conduct many-to-many pose estimation is a clear enabler for future multi-satellite missions.
In the early 2000s the National Aeronautics and Space Administration (NASA) pursued an interferometric space-based mission known as the Terrestrial Planet Finder, consisting of many individual satellites flying in formation and necessitating highly accurate relative position estimation within the swarm [1]. Although ultimately canceled, this project precipitated significant academic exploration into the many-to-many relative state estimation problem (see [2]). Because applications of formation flight, such as interferometry, often necessitate large baseline separations between spacecraft, relative state estimation is limited to positional measurements such as range, bearing, or distance. Relative attitude estimation in these works was left to local sensors such as star trackers whose estimates are simply differenced.
The U.S. Defense Advanced Research Projects Agency (DARPA) has undertaken several projects during the last decade related to distributed spacecraft. In the late 2000s DARPA began work on the Future, Fast, Flexible, Fractionated Free-Flying Concept (better known as F6) [3]. F6 sought to distribute typical spacecraft capabilities and processes among many free-flying satellites in close proximity. Such a system would require robust relative state estimation to maintain proximity and, importantly for F6, provide resilience against both natural failures and adversary attacks [4].

More recently, DARPA pursued a novel mission concept known as Phoenix that, among other things, attempted to construct a single spacecraft by aggregating smaller cellular satlets [5]. Satlets would be capable of independent operation as well as constituting the building blocks of larger satellite assemblies by physically connecting and pooling collective resources. During an on-orbit reconfiguration, individual satlets could detach, form new structures, or replace other degraded satlets. This cellular morphology would benefit from a robust and distributed relative pose estimation scheme for the collection of satlets. Finally, in contrast to typical formation flight schemes, this collection of satlets would be sufficiently close to allow for direct relative attitude estimation using onboard computer vision. The vision-based research was limited to a single spacecraft determining the pose of a single target [6].
Researchers at the California Institute of Technology and the University of Surrey are developing a large reconfigurable segmented mirror telescope concept consisting of mirror elements hosted on individual microsatellites that can un-dock, reconfigure, and then dock again to optimize telescope properties [7]. The Autonomous Assembly of a Reconfigurable Space Telescope (AAReST) mission is being planned to demonstrate the essential technologies and concepts. Relative navigation is considered an important part of the reconfiguration process and is currently based on imaging sensors, though it appears only in a one-to-one navigation mode. A cooperative localization scheme for the entire telescope swarm would potentially allow for more rapid telescope reconfiguration.

Some of the concepts of the AAReST mission have appeared in previous research by Surrey; Wokes et al. [8] proposed a constellation of small self-contained cellular satellites, Intelligent Self-powered Modules, that can dock to create large structures in space. Computer vision was used for satellite identification and relative pose estimation between satellites; however, it does not appear that the cooperative localization problem was addressed.
Another distributed spacecraft architecture is a swarm of many 100-gram satellites presented by Hadaegh et al. [9]. The proposers specifically highlight distributed state estimation and control as a challenge of swarm flight, noting that distributed or hierarchical groupings of these very small satellites could overcome the complexities of large-scale swarms.
Finally, the author came upon a conceptual study by researchers at Cornell University that envisioned a swarm of individual microsatellites injected directly into Saturn's rings as in-situ observers to study the composition and motion of the particles that make up the rings [10]. This swarm would be an ideal application area for a cooperative localization scheme and could be optimized not only to localize the satellites, but also to localize the moderately sized ring particles as part of the science mission. A robust decentralized or distributed scheme would allow the mission to gracefully degrade as individual satellites are disabled by collisions with ring particles.
Estimating the states of a formation of spacecraft has been an active area of research. Recent works such as [11] and [12] have specifically addressed distributed Kalman Filter-based estimation for a swarm of spacecraft. However, an exhaustive literature review of computer vision applications in space has not revealed any past research on the collective relative pose estimation of a compact group of spacecraft using computer vision. Additionally, there does not appear to be any research into sensor selection for such a sensing architecture. Therefore, the related problems of cooperative localization of a satellite swarm and sensor selection strategies for such a swarm were chosen as the focus of this dissertation.
1.2 Problem Statement
The research problem is illustrated in Figure 1.1. A satellite swarm is composed of n satellites which are free to move to any position and attitude. Each satellite maintains full spherical visibility with m sensors; six such sensors are depicted in the figure without loss of generality. Each sensor operates independently, and a single image or image pair can be used to measure the relative pose between two satellites. Fusing measurements provides state estimation for the swarm. The following two research problems are addressed in the present work:

1. Given a compact swarm of independent free-flying spacecraft equipped with cameras, cooperatively estimate the position, velocity, orientation, and angular velocity of all spacecraft using shared information in a decentralized or distributed manner.

2. Perform the above cooperative localization while dynamically selecting a reduced set of measurements to minimize computation, maximize estimation accuracy, or both.
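To fix ideas, one purely illustrative way to organize the per-satellite quantities to be estimated is sketched below. This is not the dissertation's actual filter state (that is developed in Chapter 4); the class and field names are hypothetical.

import numpy as np
from dataclasses import dataclass, field

@dataclass
class SatelliteState:
    """Illustrative container for the quantities estimated per satellite in research problem 1."""
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))          # r_i, relative position
    velocity: np.ndarray = field(default_factory=lambda: np.zeros(3))          # time derivative of r_i
    orientation: np.ndarray = field(default_factory=lambda: np.array([0.0, 0.0, 0.0, 1.0]))  # quaternion q_i (scalar last)
    angular_velocity: np.ndarray = field(default_factory=lambda: np.zeros(3))  # body-frame angular rate

# A swarm estimate is then simply a collection of n such states, one per satellite.
swarm = [SatelliteState() for _ in range(4)]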
1.3 Solution Approach
The solutions presented in this dissertation are the cumulative results of several publications by the author [13][14][15][16] that are summarized and unified here. The first problem (estimation) is solved using a Multiplicative Extended Kalman Filter (MEKF) which is designed to address the quaternion states. This approach extends results in [17] which only addressed a one-to-one satellite scenario. The state of the filter is expanded to include all satellites in the cooperating swarm. The second problem is solved through minimization of an objective based on a weighted trace of the expected state estimate covariance (i.e., [18]). The MEKF provides a framework to predict the covariance at the next timestep based on a set of available measurements; the minimization chooses a subset of measurements which minimizes the objective function. A greedy approach is used to perform the minimization.
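As a rough illustration of the selection step, the sketch below greedily picks measurements that minimize a weighted trace of a predicted covariance matrix. It is not the dissertation's implementation: predict_covariance is a hypothetical stand-in for the MEKF covariance prediction, and all names are illustrative.

import numpy as np

def greedy_select(candidate_edges, k, predict_covariance, weights):
    """Greedily choose k measurement edges that most reduce the weighted
    trace of the predicted state covariance.

    candidate_edges    : available relative measurements, e.g. (i, j) pairs
    k                  : number of measurements to keep
    predict_covariance : callable mapping a subset of edges to the predicted
                         covariance matrix (hypothetical placeholder for the
                         MEKF covariance prediction)
    weights            : per-state weights applied to the covariance
    """
    selected = []
    remaining = list(candidate_edges)
    for _ in range(min(k, len(remaining))):
        best_edge, best_cost = None, np.inf
        for edge in remaining:
            P = predict_covariance(selected + [edge])
            cost = float(np.trace(np.diag(weights) @ P))  # weighted trace objective
            if cost < best_cost:
                best_edge, best_cost = edge, cost
        selected.append(best_edge)
        remaining.remove(best_edge)
    return selected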
1.4 Assumptions
The following assumptions are presented which limit the scope or simplify the problem formulation.
Figure 1.1: Problem formulation for the localization of a spacecraft swarm using cameras. (Figure legend: Spacecraft, Activated Camera, Star Tracker Mode; four spacecraft labeled A-D with x, y, z body axes.)
A swarm consists of n identical satellites, where n is known a priori
It is assumed that the swarm is made up of a known number of identical satellites. It is reasonable
that the number would be known since a constellation designer would establish this prior to launch.
We assume the satellites are identical for simplicity, though this may be an interesting area for future
research. Finally, we note here that the cooperative localization approach is not concerned with
localization for proximal non-satellite items such as inspection targets, foreign objects, or debris.
However, this too is both an interesting area for future research as well as a reasonable application
for a swarm of satellites.
Satellites are sufficiently close to allow for relative pose estimation

A basic motivating characteristic is that the satellite swarm is sufficiently compact such that all satellites can resolve at least one other satellite using computer vision in order to extract both position and orientation measurements. In the scenarios discussed in this work the average separation distance between satellites is no more than 30 m for the small satellites (40 cm cubes). Localization for swarms with larger baselines is addressed, for example, in [19].
Each satellite has m cameras which provide full spherical visibility

Every identical satellite has m cameras which do not overlap and which collectively provide full spherical visibility for each satellite. For the proposed research direction, we choose m = 6 without loss of generality. Visible cameras are inexpensive sensors relative to alternatives such as LIDAR. In addition, the electronics which process the images can be shared by the m cameras.
Satellite movement is unconstrained in position and orientation
Satellites are free to move in any arbitrary position and orientation as defined by the equations of
motion. If satellite motion is controlled, for example via propulsive maneuvers, it is assumed that
the control input information is available to any state estimation scheme. It is important to note
that the proposed research topic does not address satellite control or planning.
Individual cameras or satellites may fail during operation
It is assumed that individual cameras, or entire satellites, may fail during operation. The localization scheme should be robust enough to operate through such events. If a satellite stops functioning, the remaining satellites are expected to continue to localize it.
Satellites use a separate sensor system for docking operations
Docking operations are not included within the scope of this proposed research topic. It is assumed
that the cooperative localization scheme provides state estimation for mid-range operations. When
two satellites are very close (i.e., less than 40 cm), it is assumed that a fine pose estimation scheme
will be used for docking. This is reasonable as most RPO frameworks use multiple independent
sets of sensors for various ranges [20].
All satellites have perfectly synchronized clocks
All satellites are assumed to have perfectly synchronized clocks. This is a reasonable assumption
because all satellites can synchronize clocks via radio frequency communication or onboard clock
references which have accuracies far better than required for cooperative localization (assuming
moderate relative satellite velocities).
Communications from one spacecraft are received by all spacecraft
An important assumption is that the communications network is fully connected and, in practice,
acts as a one-to-all broadcast network. This assumption means that communication considerations
are independent of network topology and that communications cost is simply proportional to the
amount of data broadcast by each satellite. This type of network topology is obviously an idealized
version that may not be realistic in all situations. However, for a compact group of spacecraft,
it seems reasonable that even low-power communications broadcast on an omni-directional antenna would be heard by all other satellites.
1.5 Nomenclature and Preliminaries
1.5.1 Notation
A vector is indicated with a bold lower case letter (e.g., a), a matrix is written with a bold capitalized letter (e.g., A), and a scalar appears as an unbolded letter (e.g., a). The identity matrix of size n is written I_n, the zero matrix of size n × m is written 0_{n×m}, and a matrix of all ones is written 1_n. If the size is defined implicitly, the subscript is omitted. For convenience, we write cross products between two vectors a and b using a matrix multiplication form:

a × b = [a]_× b    (1.1)

[a]_× = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}    (1.2)

Vectors which include the hat notation (â) indicate expectations of a true vector (a).
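As a small check of the cross-product matrix in Eq. (1.2), the following NumPy sketch (the function name skew is illustrative, not from the dissertation) builds [a]_× and verifies Eq. (1.1):

import numpy as np

def skew(a):
    """Return the 3x3 skew-symmetric matrix [a]_x so that skew(a) @ b == a x b."""
    a1, a2, a3 = a
    return np.array([[0.0, -a3,  a2],
                     [ a3, 0.0, -a1],
                     [-a2,  a1, 0.0]])

# Cross product via matrix multiplication, as in Eq. (1.1)
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
assert np.allclose(skew(a) @ b, np.cross(a, b))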
1.5.2 Quaternions
Quaternions are used in this paper to parameterize rotations in three-dimensional space and are written as four element arrays for convenience and always denoted by the letter q (though they are not, strictly speaking, vectors). We define a quaternion in the usual way with the Euler rotation axis (e) and the Euler rotation angle (θ) as:

q(e, θ) = \begin{bmatrix} e sin(θ/2) \\ cos(θ/2) \end{bmatrix}    (1.3)

Note that ||q|| = 1. We also use q(v) to denote the quaternion representation of a rotation vector v, which will be important for attitude error parameterization. The rotation matrix T corresponding to a quaternion parameterization q is denoted T(q). T can be found from:

Ξ(q) = \begin{bmatrix} q_4 & -q_3 & q_2 \\ q_3 & q_4 & -q_1 \\ -q_2 & q_1 & q_4 \\ -q_1 & -q_2 & -q_3 \end{bmatrix}    (1.4)

Ψ(q) = \begin{bmatrix} q_4 & q_3 & -q_2 \\ -q_3 & q_4 & q_1 \\ q_2 & -q_1 & q_4 \\ -q_1 & -q_2 & -q_3 \end{bmatrix}    (1.5)

T(q) = Ξ(q)^T Ψ(q)    (1.6)

Quaternion multiplication can be written using Ξ(q) and Ψ(q) as well:

q_a ⊗ q_b = [Ψ(q_a)  q_a] q_b = [Ξ(q_b)  q_b] q_a    (1.7)

Finally, note that successive rotations using this form of quaternion multiplication are written in the same order as successive rotation matrices:

T(q_a ⊗ q_b) = T(q_a) T(q_b)    (1.8)

1.5.3 Reference Frames

A coordinate frame is denoted by F. To specify a rotation from frame F_A to frame F_B, we use T_BA = T(q_BA). To specify the position from the origin of F_A to the origin of frame F_B expressed in F_A, we use r^A_BA. Similarly, ω^B_BA gives the rate of rotation (the angular velocity vector) from F_A to F_B as expressed by vector coordinates in frame B. When it is necessary to specify the frame that a vector's coordinates are expressed in, we use r^A for expression in F_A.

We use three primary reference frames in this paper. The first is the inertial reference frame (F_I). The second is the common Hills-Clohessy-Wiltshire rotating frame (F_H), which is initially aligned with the inertial frame at t = 0. Frame F_H will often be referred to as the "HCW" frame in this paper for clarity. We assume that the transform from F_H to F_I, indicated by T_IH, is known precisely at any time t. The final type of frame is a body-fixed frame for each spacecraft (i.e., frame F_Bi for satellite i), defined by a rotation from the inertial frame to the body frame, T(q_BiI), and a position vector r_Hi from the origin of the HCW frame (F_H) to the center of mass of body i, expressed in HCW frame coordinates. We use q_i and r_i to indicate these transformations for brevity. Finally, each satellite has one or more camera frames, F_Cm, used to monitor other satellites. These frames are shown in Figure 1.2.

Figure 1.2: Coordinate frames for problem formulation. (The figure depicts frames F_I, F_H, F_Bi, F_Bj, and F_Cm, along with position vectors r_i, r_j, r_ij, and r_Cm.)
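To make the preceding conventions concrete, the following is a minimal NumPy sketch, not drawn from the dissertation itself: it assumes scalar-last quaternion ordering and the sign pattern of Eqs. (1.4)-(1.8), and the helper names (xi, psi, rotation_matrix, quat_multiply, relative_pose) are illustrative.

import numpy as np

def xi(q):
    """Xi(q) as in Eq. (1.4); q = [q1, q2, q3, q4] with the scalar part last."""
    q1, q2, q3, q4 = q
    return np.array([[ q4, -q3,  q2],
                     [ q3,  q4, -q1],
                     [-q2,  q1,  q4],
                     [-q1, -q2, -q3]])

def psi(q):
    """Psi(q) as in Eq. (1.5)."""
    q1, q2, q3, q4 = q
    return np.array([[ q4,  q3, -q2],
                     [-q3,  q4,  q1],
                     [ q2, -q1,  q4],
                     [-q1, -q2, -q3]])

def rotation_matrix(q):
    """Attitude matrix T(q) = Xi(q)^T Psi(q), Eq. (1.6)."""
    return xi(q).T @ psi(q)

def quat_multiply(qa, qb):
    """Quaternion product of Eq. (1.7); composes in the same order as rotation matrices."""
    return np.column_stack([psi(qa), qa]) @ qb

def relative_pose(q_i, r_i, q_j, r_j):
    """Relative pose of satellite j with respect to satellite i, using the notation of Section 1.5.3.

    q_i, q_j: inertial-to-body quaternions, so rotation_matrix(q_i) plays the role of T_{B_i I}
    r_i, r_j: body-origin positions expressed in the HCW frame F_H
    Returns T_{B_j B_i} and r_ij = r_j - r_i (expressed in F_H), as drawn in Figure 1.2.
    """
    T_ji = rotation_matrix(q_j) @ rotation_matrix(q_i).T   # T_{B_j I} T_{I B_i}
    return T_ji, r_j - r_i

# Quick self-check of Eq. (1.8): T(q_a (x) q_b) == T(q_a) T(q_b)
qa = np.array([0.0, 0.0, np.sin(0.2), np.cos(0.2)])   # rotation about z
qb = np.array([np.sin(0.3), 0.0, 0.0, np.cos(0.3)])   # rotation about x
assert np.allclose(rotation_matrix(quat_multiply(qa, qb)),
                   rotation_matrix(qa) @ rotation_matrix(qb))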
Chapter 2
Survey: Orbital Computer Vision
2.1 Introduction
This chapter provides a comprehensive introduction to orbital computer vision. Computer vision is the field of research involved with extracting and producing information from visual data. Visual data typically consists of two-dimensional digital images in the visible spectrum, but may include infrared images, three-dimensional point clouds generated by scanning or flash light detection and ranging (LIDAR) systems, or even synthetic aperture radar images. In addition, computer vision in space rarely operates in isolation, often relying on data from other sensors such as inertial navigation sensors (gyroscopes and accelerometers), ranging sensors (LIDAR, RADAR, radiometric ranging), and navigation sensors (GPS).

Orbital computer vision is a term for the application of computer vision by spacecraft outside of planetary atmospheres. Although built on the same domain-agnostic fundamental computer vision techniques and capabilities used on planetary surfaces, orbital computer vision possesses distinct characteristics: constrained processing power, high reliability and fault tolerance requirements, radiation-hardened flight-qualified electronic components and algorithms, space environment considerations, illumination challenges, testing challenges, and many other space-specific considerations. [21] provides a good overview of the challenges involved. Note that space computer vision for planetary surfaces such as Mars is a mature and active area of research (e.g., [22]); however, the following survey is focused on applications while in orbit.
The author is unaware of a comprehensive survey on computer vision use in space. The closest survey is a recent article by Opromolla et al. [23] in 2017 which extensively covers pose estimation in space (the predominant orbital computer vision application). Some authors have approached this topic as a subset of the space robotics research field. For example, Flores-Abad et al., as part of a survey of on-orbit servicing robotics, surveyed methods for target motion prediction using computer vision including techniques and past satellite missions [24]. Rebordão provides an overview of satellite computer vision in the context of optical navigation techniques including planetary exploration and satellite formation flight applications [25]. Yawen et al. [26] presented a survey of non-cooperative pose estimation techniques for space targets where techniques are categorized by sensor type: single camera, stereo camera, multi-camera, depth (LIDAR) camera, and multi-sensor fusion of visible and depth cameras. However, no survey appears to cover all areas of computer vision in space nor suggest future areas for research.
2.2 Orbital Computer Vision Overview
2.2.1 Applications
Some of the primary applications for computer vision in space are:
Target line-of-sight direction estimation
Target position and orientation (pose) estimation [27]
Satellite identication
Satellite status characterization
Orbital servicing [24]
Small body exploration (e.g. asteroids, comets)
2.2.2 Challenges of Orbital Computer Vision
Applying computer vision techniques in space involves several constraints due to the orbital environment. One key challenge is limited computational resources. As an example, [28] has noted that computer vision processing can take as much as two orders of magnitude more time on current radiation-hardened flight computers than on standard desktop computers. In their example, a processing pipeline that takes 200 milliseconds on a standard Linux computer would take approximately 20 seconds on a RAD750 space-qualified computer running VxWorks. In another example, processing a single image on the Mars Spirit and Opportunity rovers running a 20 MHz RAD6000 takes 15 seconds to detect dust devils and 20 seconds to detect clouds [29]. In yet another example, the EO-1 spacecraft operating a Mongoose M5 (and using 40 percent of the CPU for vision processing) takes between 5 and 50 minutes to perform image classification on a single image, depending on the specific task and algorithm [30]. Finally, even if a spacecraft has sufficiently fast processors, it will also likely be power limited, as increased power requirements translate to increased cost.

The single source of solar illumination, especially for computer vision tasks far from Earth, results in shading which challenges many vision approaches [31]. Additionally, the rotation and irregular shape of objects can induce highly dynamic shading due to constantly changing occlusion. [32] summarizes historical difficulties encountered by relative navigation sensors, including harsh lighting conditions, background clutter, accurately simulating scenarios in a laboratory environment, and performance in the space environment (radiation, temperature, etc.).
2.3 Orbital Computer Vision Missions
2.3.1 Past Missions
Although this present survey is concerned with all orbital computer vision research, it is essential to highlight those missions that have demonstrated techniques on orbit. The expense (economic, political, societal) of conducting orbital demonstrations is a fundamental differentiating characteristic of orbital computer vision; these applications are not viewed as mature by the space community until they are qualified on orbit. Therefore, this section should provide insight into the maturity of orbital computer vision globally. For a compilation of known computer vision missions that have occurred, see Table 2.1.

Footnote 1: Various other vehicles have used vision algorithms and LIDAR sensors for docking with the ISS, including SpaceX's Dragon and Orbital ATK's Cygnus.
Mission Year(s) Vision Application(s)
Space Shuttle 1991 Cooperative Pose Estimation [33]
International Space Station (1) 1992-2017 Cooperative Pose Estimation [34]
ETS-VII 1997 Cooperative Pose Estimation [35]
Deep Space 1 1998 Optical Navigation [36]
Stardust 1999 Optical Navigation [37]
Earth Observing-1 2000 Remote Sensing Image Analysis [30]
XSS-10 2003 Satellite Tracking [38]
Hayabusa 2003 Cooperative Pose Estimation [39]
Mars Exploration Rover 2003 Terrain Relative Navigation [40]
DART 2004 Cooperative Pose Estimation [41]
Rosetta 2004 Optical Navigation [42]
Deep Impact / EPOXI 2005 Optical Navigation [43]
XSS-11 2005 Non-cooperative Pose Estimation [44]
Mars Reconnaissance Orbiter 2005 Optical Navigation [45]
Orbital Express 2007 Cooperative Pose Estimation [20]
Automated Transfer Vehicle 2008-2014 Cooperative Pose Estimation [46]
Non-cooperative Pose Estimation [47]
Hubble Servicing Mission 2009 Non-cooperative Pose Estimation [48]
Space Shuttle (STS-128, STS-131) 2009-2010 Cooperative Pose Estimation [49]
Non-cooperative Pose Estimation [49]
H-II Transfer Vehicle 2009-2016 Cooperative Pose Estimation [50]
PRISMA 2010 Cooperative Pose Estimation
Non-cooperative Pose Estimation [51]
Juno 2011 Optical Navigation [52]
Space Shuttle (STS-134) 2011 Cooperative Pose Estimation [53]
SPHERES 2012-2016 Non-cooperative Pose Estimation [54]
STARE 2012 Satellite Tracking [55]
Intelligent Payload Experiment 2014 Remote Sensing Image Analysis [30]
Hayabusa-2 2014 Cooperative Pose Estimation [56]
ANGELS 2014 Unknown
Banxing-2 2016 Unknown
OSIRIS-REx 2016 Non-cooperative Pose Estimation [57]
Raven 2017 Non-cooperative Pose Estimation [58]
Astrobee 2017 Cooperative Pose Estimation
Non-cooperative Pose Estimation [59]
RemoveDEBRIS 2017 Cooperative Pose Estimation
Non-Cooperative Pose Estimation [60]
Prox-1 2018 Optical Navigation [61]
Table 2.1: Selection of space missions using computer vision.
Engineering Test Satellite Number 7 (ETS-VII)
The National Space Development Agency of Japan (NASDA), which has since merged with other Japanese organizations to become the Japan Aerospace Exploration Agency (JAXA), developed the Engineering Test Satellite Number 7 (ETS-VII) to prototype and verify space robotics technologies for future space missions such as satellite servicing [35]. The mission, launched in 1997, consisted of twin satellites, with the chaser satellite being equipped with a 2 meter, 6 degrees of freedom (DOF), robotic manipulator with cameras mounted on the end-effector and first joint. Relative navigation was provided by a camera sensor at short distances (less than 10 m), a laser ranger at middle distances (2 to 500 m), and cooperative GPS measurements beyond that. The Charge Coupled Device (CCD) camera sensor provided position and attitude of the target satellite. In addition to a total of six CCD cameras mounted on the satellite, the robotic manipulator had a camera mounted on the end effector. This camera was used to perform close range visual servoing of the robotic arm at 2 Hz by using two sets of specifically designed painted markers on the target spacecraft: 1) a 2-point marker allowing position, pitch, and roll determination and 2) a 3-point marker to deduce the full orientation including yaw [62]. Specified mission accuracy was 1 mm (off-axis), 3 mm (on-axis), and 1 degree of rotation at a distance of 20 cm; these accuracies were validated during on-orbit experimentation. This experiment included the successful release and automated capture of a target [62].
Deep Space 1
The Deep Space 1 exploration mission launched in 1998 to perform a flyby of asteroid Braille and comet Borrelly as part of NASA's New Millennium Program to demonstrate new technologies for spaceflight [36]. Deep Space 1 included many advanced technologies including ion propulsion, a multiple spectrum integrated camera and spectrometer, and an autonomous navigation system (AutoNav) which relied on computer vision (specifically, autonomous optical navigation). AutoNav, developed by the Jet Propulsion Laboratory (JPL), used visible light images to determine the direction to solar system objects such as asteroids or planets by comparing the object's observed location relative to stars in collected images [63]. Multiple bearing measurements, combined with a priori data on target locations, allowed for orbit determination of the spacecraft at an accuracy necessary for close approaches. At far distances, during the cruise phase, solar system objects appeared as point sources and thus the primary image processing requirement was to identify the object against a star field. As Deep Space 1 approached its flyby targets, a centroiding algorithm was required to analyze the extended object in the image to locate a center. AutoNav was also successfully used on the Stardust mission launched in 1999 and the Deep Impact mission launched in 2005 [37].
Earth Observing-1 (EO-1)
NASA launched an Earth remote sensing spacecraft in November 2000 known as Earth Observing One (EO-1) which included onboard software known as the Autonomous Sciencecraft Experiment which can autonomously plan, collect data, process data, and make decisions based on the results [30]. Part of this software is the ability to perform computer vision tasks on collected hyperspectral imagery. Algorithms detected hot volcanoes, classified flooded regions, differentiated between snow, water, ice, and land, and detected change in scenes over time [64]. Onboard processing allowed flight software to autonomously modify collection plans based on the results of image processing.
XSS-10
An early foray by the U.S. Air Force into proximity operations began with the January 2003
launch of XSS-10, the rst in a series of microsatellite technology demonstration missions [38] [65].
The battery-powered satellite conducted a brief 23.5 minute mission to demonstrate autonomous
rendezvous and proximity operations [66]. The XSS-10 inspected its upper stage rocket body and
ultimately approached to within approximately 50 m. Relative navigation state was initialized and
updated using centroid tracking of visible images provided from two onboard CCDs [67]. One CCD
was dedicated for proximity operations while the other was configured to perform star tracking for
inertial attitude determination. Images at several designated inspection points were processed using
an onboard digital signal processor which provided line-of-sight errors to the guidance, navigation,
and control system. Maneuvering between inspection points was done autonomously based on
propagated state.
Hayabusa
JAXA launched the Hayabusa mission, previously known as MUSES-C, in May 2003 to rendezvous with the Itokawa asteroid, collect samples of the surface material, and return to Earth in mid-2010. Hayabusa successfully touched down on Itokawa on November 25, 2005. The Hayabusa spacecraft was equipped with a suite of rendezvous sensors allowing proximity measurements from a range of 20 km down to rendezvous with the surface [39]. LIDAR provided range measurements from 20 km to 500 m while a four-beam laser range finder provided range and attitude information (relative to the asteroid) from 100 m to 7 m. Optical navigation cameras served two functions during the mission. At ranges greater than 500 m, a telescopic optical camera consisting of a CCD and 550 nanometer filter conducted tracking of the asteroid center [68]. At close ranges, a wide field of view camera was used to determine the location of a specially designed reflective target marker released by the Hayabusa spacecraft onto the surface of Itokawa. This mode was used when the spacecraft was within 35 m of the surface and provided accurate estimation of lateral spacecraft motion relative to the surface [69]. An onboard pulsed light source illuminated the asteroid surface during image collection to ensure robust tracking of the highly reflective target markers. Hayabusa's optical navigation system also had a planned experimental mode where template matching was used to autonomously determine lateral velocity based on the asteroid terrain itself in the absence of artificial target markers [70][71]. Details on the use and results of this experimental mode were unavailable.
Automated Transfer Vehicle
Between 2008 and 2014, the European Space Agency launched five expendable cargo vehicles referred to as the Automated Transfer Vehicle (ATV) to the ISS. ATV has two relative navigation sensors: an optical camera-based system known as the Videometer and an active scanning LIDAR system known as the Telegoniometer [72]. The Videometer illuminates the target with laser light and provides line-of-sight measurements to the target vehicle and relative attitude within 30 m based on triangulation of target fiducials [73]. The Telegoniometer is a scanning LIDAR that provides range and line-of-sight measurements for retroreflector fiducials mounted on the target. The Telegoniometer is used for flight control monitoring on the ATV. During the final mission, ATV-5, a new scanning LIDAR known as LIRIS-2 was tested by passively collecting data from 3.5 km to docking (see Section 2.4.2).
Vision Navigation Sensor (STS-134)
In 2011, NASA tested the first prototype of the Vision Navigation Sensor (VNS) as well as an optical Docking Camera on the Space Shuttle STS-134 mission [74] as part of the Sensor Test for Orion RelNav Risk Mitigation (STORRM) [53]. The relative navigation sensors collected information during docking and undocking and compared it to "truth" data provided by the Shuttle's Trajectory Control Sensor. The VNS flash LIDAR is planned as the primary relative navigation sensor for the future Orion crew vehicle and the Docking Camera provides piloting cues [32]. It will be used in a cooperative mode tracking up to ten retro-reflectors on the target and has a planned operational range of 5 km. Within 20 m, pose is estimated using a RANSAC-based approach (2) [32].

Footnote 2: Random Sample Consensus (RANSAC) is a common iterative outlier detection method.
Rosetta
ESA launched the Rosetta spacecraft on March 2, 2004, to eventually rendezvous with the Churyumov-Gerasimenko comet in 2014. During the mission, Rosetta used an autonomous optical navigation capability to track objects during close approach flybys, including the Moon and asteroid 2867 Steins [42]. In the Autonomous Flyby Mode, the onboard attitude control system autonomously orients the spacecraft to track the optical target center in images captured by the navigation camera [75].
Demonstration of Autonomous Rendezvous Technology (DART)
NASA sponsored the Demonstration of Autonomous Rendezvous Technology (DART) to prove key technologies and techniques for autonomous rendezvous. Computer vision for this system was provided by NASA's next generation Advanced Video Guidance Sensor (AVGS), an iteration on the Video Guidance Sensor previously flown on two shuttle missions (STS-87 and STS-95) [41] (see Section 2.4.2 for a detailed discussion of AVGS). AVGS was intended to provide bearing measurements between 500 and 200 m of the target and bearing, range, and orientation data within 200 m of the target spacecraft, Multiple Paths, Beyond-Line-of-Sight Communications (MUBLCOM) [76]. The AVGS illuminated the cooperative target with infrared lasers at two wavelengths: 808 and 850 nanometers. MUBLCOM was equipped with wavelength-sensitive corner cube reflectors which limited reflected returns in the former wavelength, thus enabling image subtraction methods to reduce scene clutter. The relative spacing and pattern of the three retro-reflectors in the image plane was used to estimate MUBLCOM bearing, range, and orientation. Two sets of three reflectors were used to accommodate long range and short range operation; however, the short range set was not ultimately used during the DART mission [77]. The satellite launched to low Earth orbit on April 15, 2005, for what was intended to be a 24-hour autonomous rendezvous. However, 11 hours into this planned operation, the DART spacecraft collided with MUBLCOM due to navigation errors in the Guidance, Navigation, and Control (GNC) subsystem unrelated to the performance of AVGS. The mission successfully demonstrated the acquisition of bearing measurements as DART approached MUBLCOM, though close range tracking was never initiated due to the collision, thus preventing autonomous range and orientation estimation experiments [77].
Deep Impact
The AutoNav system demonstrated on Deep Space 1 was subsequently used for NASA's 2005 Deep Impact mission to study (and impact) the Tempel 1 comet. The mission launched in January 2005 and successfully intercepted Tempel 1 on July 4, 2005. AutoNav was used as the exclusive navigation system during the final phase of the mission, from two hours before impact to approximately 13 minutes following impact for flyby observations [43]. Deep Impact consisted of a flyby spacecraft and a releasable interceptor. AutoNav was used independently on both to determine the relative comet trajectory and select a preferred impact location on the comet surface. The flyby spacecraft used two cameras for AutoNav: a primary medium resolution imager with a 10 milliradian field of view and a backup high resolution imager with a 2 milliradian field of view. Both used a visible light CCD. The interceptor used only a single medium resolution camera similar to that on the flyby spacecraft. Both systems used similar processing pipelines to determine relative trajectories and acquired images continuously every 15 seconds starting two hours before predicted impact. Images were processed to determine a center of brightness of the target comet. This centroid subtended approximately 10 pixels 90 minutes before impact and 35 pixels 35 minutes before impact [78]. A blob detection algorithm was used during the first 60 minutes to scan the entire field of view for pixels above a pre-determined threshold. Next, a Centroid Box algorithm determined the center of brightness within a 400 x 400 pixel window centered on the estimated location of the target comet. Prior to the final interceptor trajectory maneuver, at 12.5 minutes before impact, a scene analysis algorithm was used to determine an offset from the centroid that would maximize the probability of impacting an illuminated comet surface that had favorable visibility conditions from the flyby spacecraft. This algorithm, running independently on the impactor and flyby spacecraft, estimated these values, which were sufficiently consistent [79]. EPOXI, an extended mission of the Deep Impact flyby spacecraft, used AutoNav to successfully fly by comet Hartley 2 in November 2010 [80].
Orbital Express
The Orbital Express mission, sponsored by DARPA, demonstrated an array of satellite rendezvous and servicing technologies including autonomous docking, capture with a robotic arm, electronic component replacement, and fuel transfer. The Orbital Express demonstration spacecraft, ASTRO, was launched on March 8, 2007, and soon began a series of experiments designed to demonstrate increasingly complex autonomous tasks with the target satellite, NEXTSat [20]. ASTRO was equipped with two relative navigation systems: the Autonomous Rendezvous and Capture Sensor System (ARCSS) and the aforementioned AVGS. ARCSS provided primary navigation (range and bearing) at long-to-mid ranges from greater than 200 km to 200 m. AVGS provided primary proximity navigation (target spacecraft pose) within 200 m, with ARCSS providing secondary, redundant pose estimation during this time [81]. ARCSS consisted of three vision sensors: a narrow field-of-view long-range visible camera, a wide field-of-view short-range visible camera, and a long-wave infrared camera. ARCSS also included a laser ranger which was cued by vision sensor data. ARCSS sensor data was autonomously processed onboard by Vis-STAR, a custom software suite which had three modes of operation. At long ranges, point source tracking was used to distinguish the target spacecraft from the stellar background and provide bearing measurements. At mid-ranges, a silhouette- and/or edge-matching algorithm was used to estimate pose, using a priori knowledge of NEXTSat. At close ranges, when the target completely fills the sensor field-of-view, a set of non-coplanar fiducials mounted on NEXTSat was used to cooperatively estimate pose. See Section 2.4.1 for a detailed discussion of this system.
Hubble Space Telescope Servicing Mission 4
During the May 2009 Hubble Space Telescope Servicing Mission 4, NASA demonstrated the feasibility of a comprehensive pose estimation system called the Relative Navigation System (RNS) which had been developed for a future, ultimately canceled, mission to conduct robotic servicing and de-orbit of Hubble [48]. RNS was placed in the Shuttle bay and operated during docking operations in a passive mode (i.e., the pose solution was not used for trajectory control or guidance during approach). RNS consists of three optical cameras with varying ranges, a GPS receiver, and an FPGA-based SpaceCube processor. Pose estimation consisted of two algorithms processing data from all three cameras: Goddard Natural Feature Image Recognition (GNFIR) and the ULTOR Passive Pose and Position Engine (P3E). GNFIR uses edge matching between collected images and an onboard 3D wire-frame model of the target. ULTOR P3E uses spatial frequency analysis to correlate collected images with an onboard database of filters trained using simulated imagery generated a priori on the ground. During rendezvous and departure, GNFIR was able to track Hubble successfully. ULTOR P3E was unable to track during these times due to operating mode configuration issues, but successfully operated using stored imagery processed on the ground following the mission.
PRISMA
The Swedish National Space Board sponsored a formation flight and rendezvous experiment mission executed by OHB Sweden in collaboration with the German Aerospace Center (DLR), the French Space Agency (CNES), and the Technical University of Denmark (DTU) [82][83]. The PRISMA mission consisted of two, initially attached, spacecraft in Sun-synchronous orbit: Mango (a 3-axis stabilized chaser satellite with impulsive maneuvering capability) and Tango (a 3-axis stabilized target satellite which could not maneuver). Both satellites had GPS receivers, allowing for highly accurate relative and absolute navigation for comparison with other included navigation solutions. CNES provided a cooperative radiometric ranging system for formation flying and DTU provided a modified star tracker named the Vision Based Sensor (3) (VBS) which could perform cooperative and non-cooperative bearing and pose estimation of the target spacecraft. The VBS system used two visible spectrum star tracker cameras (in addition to two other unmodified star tracker cameras) to observe the target spacecraft at far (1000 to 500 km), intermediate (2 km to 30 m), and short (200 m to 20 cm) ranges [51]. The cooperative mode used light emitting diode (LED) beacons on Tango to determine pose at ranges within 200 m. Non-cooperative mode uses an image matching algorithm based on feature tracking and on-board a priori information of the target three dimensional shape [84]. Details on the operation of VBS appear in Section 2.4.1. DLR used the VBS instrument for an experiment in the extended mission phase to demonstrate far range (30 km to 3 km) angles-only navigation; regions of interest produced by VBS were analyzed to determine spacecraft attitude from stars, to discriminate the target spacecraft, and then to provide line-of-sight measurements to estimate target position. Lastly, according to [85], the Mango spacecraft also included a high resolution camera (2048 x 2048 pixel CCD) for capturing still images and video segments during experiments; there is no indication that these images were processed on-board using computer vision. Many of these videos, as well as some from the VBS, have been publicly released (4).

Footnote 3: There appears to be conflicting information on the precise name of this system. The VBS designers have referred to it as "Visual Based System" whereas the spacecraft designers, as well as more recently the VBS designers themselves, have used "Vision Based Sensor." Therefore, we use Vision Based Sensor above. Thankfully, the acronym is consistent, so the informed reader will be only momentarily distracted by this inconsistency.

Footnote 4: E.g., https://www.youtube.com/watch?v=2E2SeNPC2lo (as of September 12, 2019)
Intelligent Payload Experiment (IPEX)
The JPL and California Polytechnic State University San Luis Obisbpo collaborated on the Intelli-
gent Payload Experiment (IPEX), a microsatellite mission launched in December 2013 to validate
onboard data processing and autonomous operations technologies [86]. The satellite included five
visible light cameras with 2048 x 1536 pixel resolution and a narrow field of view providing 100
meter resolution per pixel at nadir. IPEX demonstrated a collection of autonomous image processing
algorithms onboard. Simple algorithms included normalized difference ratios, band ratios, and spectral
analysis. More complex and computationally expensive algorithms tested on IPEX were support
vector machine classification, spectral unmixing techniques, a random forest classifier, and image
salience analysis [21]. For example, the random forest classifier was used to segment an image into
Earth surface, clouds, Earth limb, or space, providing an indication of the contents of an image in a
bandwidth efficient way. The salience analysis was designed to detect regions of interest which could
then be autonomously selected and delivered to ground users. IPEX generated over 30,000 image
products, including 450 images processed during autonomous operations as of January 2015 mission
end. Finally, a key feature of IPEX was the ability to autonomously update the onboard collection
schedule for new imaging opportunities based on the results of the onboard image processing such
as detection of interesting features or new events.
Synchronized Position Hold Engage Reorient Experimental Satellites (SPHERES)
The Synchronized Position Hold Engage Reorient Experimental Satellites (SPHERES) laboratory
is a set of three small free-flying satellites which have been used onboard the ISS since 2006 to
conduct research on formation flight, rendezvous, and docking [87]. In 2012, a hardware addition
referred to as the VERTIGO Goggles was delivered to the ISS, providing onboard vision capabilities for
the SPHERES satellites [88]. Each VERTIGO Goggles unit consists of a stereo-vision set of monochrome
CMOS cameras as well as a Linux 1.2 GHz embedded computer. A complete presentation on the
VERTIGO system is given in [54]. In February 2013, they were successfully tested for the first time
with the authors noting, "this was the first time a fully autonomous vision-based navigation strategy
was demonstrated in space for a non-cooperative spacecraft" [54]. Non-cooperative vision consisted
of using the stereo-vision cameras to establish a depth map of the target (another SPHERES
satellite) and then using the center-of-geometry of the resulting point cloud as a proxy for target
center-of-mass. An onboard control system performed a circumnavigation of the target using this
position information as feedback. In 2014, a SPHERES satellite equipped with VERTIGO Goggles
was used to perform simultaneous localization and mapping (SLAM) of another SPHERES satellite
including the estimation of full six degree of freedom state, center of mass location, principal axes of
inertia, and ratios of inertia [89]. The SLAM method used stereo images collected at 2 Hz and the
target object was spinning at approximately 10 rotations per minute. Also in 2014, a SPHERES
satellite was observed by two other VERTIGO-equipped SPHERES satellites simultaneously [54].
This dataset was intended to be used for future collaborative mapping research [90].
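To make the center-of-geometry step concrete, the sketch below back-projects a stereo depth map into a point cloud and returns its centroid, which the text above describes as the proxy used for the target center of mass. This is a minimal illustration rather than the VERTIGO flight code; the pinhole intrinsics (fx, fy, cx, cy) and the synthetic depth map are assumptions.

```python
import numpy as np

def depth_to_centroid(depth, fx, fy, cx, cy):
    """Back-project a depth image into a point cloud and return its centroid.

    depth: HxW array of range values in meters (NaN or 0 where stereo found no match).
    fx, fy, cx, cy: pinhole intrinsics of the (hypothetical) left camera.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = np.isfinite(depth) & (depth > 0)
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    cloud = np.column_stack((x, y, z))
    # Center of geometry, used as a stand-in for the target center of mass.
    return cloud.mean(axis=0)

# Example with synthetic data: a flat target 2 m away filling part of the image.
depth = np.full((480, 640), np.nan)
depth[200:280, 280:360] = 2.0
print(depth_to_centroid(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0))
```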
Hayabusa-2
Building on the success and flight heritage of Hayabusa, JAXA launched Hayabusa-2 on December 3,
2014. It rendezvoused with its asteroid target, 1999 JU3 (Ryugu), on June 27, 2018, and should
return to Earth in 2020. The optical navigation concept during
landing was generally the same as for Hayabusa, though five releasable target markers are carried
onboard instead of three [56]. Hayabusa-2 carries two wide-field of view 1 megapixel cameras with
54 by 54 degree fields of view and a telescopic 1 megapixel camera with a 5.4 by 5.4 degree field
of view. As with Hayabusa, the primary source for lateral velocity information during landing
is tracking the artificial target markers. In addition, as in Hayabusa, researchers have proposed
experimental autonomous tracking modes which do not require these markers. Researchers from
the University of Tokyo and JAXA presented a method of autonomous estimation for asteroid shape
and rotational motion based on optical images which is planned to be tested during the Hayabusa-
2 mission; however, the actual processing during the experiment will occur on the ground due to
computational restrictions on the spacecraft [91].
Raven
In February 2017, NASA launched the Raven experimental relative navigation payload to be tested
on the ISS. Raven will be attached to the ISS and observe arriving and departing vehicles using vi-
sual and infrared cameras and a flash LIDAR camera [58][92]. Pose is non-cooperatively estimated
independently by two separate algorithms: GNFIR using visual camera images and Goddard Flash-
Pose using flash LIDAR point clouds. Both systems use a priori knowledge of the target 3D shape
to minimize reprojection error in images via an iterative algorithm. A Kalman Filter is used to
fuse pose estimates with an inertial measurement unit and dedicated star tracker measurements.
At the time of writing, results from this flight experiment are unavailable. Raven builds on previ-
ous ground based testing using the Argon system, a sensor suite with two visual cameras, a flash
LIDAR, and onboard computing (SpaceCube) [93]. Argon also used FlashPose and GNFIR to
estimate pose of the target spacecraft.
OSIRIS-REx
OSIRIS-REx is a NASA mission to return an asteroid sample by performing a touchdown on the
target asteroid and then returning to Earth [57]. The mission launched in September 2016 and
is planned to perform the touch-and-go maneuver sometime in 2020. The OSIRIS-REx Mission
plans to use natural feature tracking as a backup for rendezvousing with the target asteroid Bennu
[94]. This tracking method uses a catalog of known features, patches of the asteroid surface, which
have been pre-selected by ground operators during the survey phase. Features will have been
selected for both correlation quality as well as coverage considerations of the target surface. Each
feature consists of a position in asteroid-centered coordinates, a 2-D array of altitudes above the
surface around the feature point, and a 2-D array of albedo values. During approach, features will
be rendered using this information and a current estimate of pose. Then, they will be spatially
correlated with images captured by a camera to perform matching. These feature matches then
update the spacecraft state (pose) and the process repeats. Camera distortions are not corrected
due to the limited computing resources on the spacecraft. Finally, OSIRIS-REx uses a rolling shutter,
so the effect of shifting pixels when observing a moving object (like the target asteroid) must be
accounted for [57].
Space-Based Telescopes for Actionable Refinement of Ephemeris (STARE)
The Space-Based Telescopes for Actionable Refinement of Ephemeris (STARE) mission was launched
in 2012, with a follow-on satellite launched in 2013, as a proof-of-concept mission to improve orbital
information for target satellites via space-based observation [55]. The satellite is a 3U Cubesat with
an 8.5 cm aperture telescope payload. Onboard algorithms autonomously detect satellite tracks
against the background star field which can then be used for orbit determination of the targets [95].
Ten raw images are used to create a sky background mask which is subtracted from an observation
image to isolate target streaks. An algorithm is used to determine the start and end points of each
satellite track. Star positions are also captured to provide accurate pointing information for each
observation.
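As an illustration of the masking step described above, the sketch below builds an assumed per-pixel median background from a stack of raw frames, subtracts it from an observation image, and extracts the endpoints of each remaining streak along its principal axis. The median background model, the threshold, and the endpoint heuristic are assumptions; the published STARE algorithm is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def detect_streaks(raw_stack, observation, k_sigma=5.0):
    """Isolate satellite streaks by subtracting a sky background mask.

    raw_stack: (N, H, W) array of raw frames used to build the background.
    observation: (H, W) frame in which streaks are sought.
    Returns a list of (start_xy, end_xy) endpoints, one per detected track.
    """
    background = np.median(raw_stack, axis=0)          # assumed background model
    residual = observation - background
    threshold = residual.mean() + k_sigma * residual.std()
    labels, n = ndimage.label(residual > threshold)     # connected streak pixels
    endpoints = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        pts = np.column_stack((xs, ys)).astype(float)
        centered = pts - pts.mean(axis=0)
        # Principal axis of the streak via SVD; extremes along it are its endpoints.
        direction = np.linalg.svd(centered, full_matrices=False)[2][0]
        proj = centered @ direction
        endpoints.append((tuple(pts[np.argmin(proj)]), tuple(pts[np.argmax(proj)])))
    return endpoints

# Synthetic example: ten noise frames and one observation with a diagonal streak.
rng = np.random.default_rng(0)
stack = rng.normal(100, 2, size=(10, 128, 128))
obs = rng.normal(100, 2, size=(128, 128))
rr = np.arange(30, 90)
obs[rr, rr + 10] += 50
print(detect_streaks(stack, obs))
```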
Astrobee
NASA has developed a follow-on to the SPHERES free-flying robotic platform for the ISS called
Astrobee [96]. The small satellite which operates within the station is available for future research
payloads and guest scientist experiments as well as serving to record astronaut activities and con-
ducting internal surveys of the ISS. The satellites achieve 6 DOF control by fan-based propulsion.
Each Astrobee has a suite of vision sensors used for specific functions: navigation and docking
(visual cameras), hazard avoidance and attaching to an ISS hand rail (LIDAR time of flight depth
sensor) [59]. Some vision tasks involve specifically designed fiducials, others use knowledge of
ISS interior layouts and appearances, and still others require no a priori information. The initial
Astrobee experiment launched to the ISS in 2019.
Prox-1
The Prox-1 mission is an ESPA-class mission to demonstrate autonomous relative navigation and
control in LEO [61]. The system has both infrared and visible light cameras as well as an Intel Atom
dual-core processor to host image processing algorithms. Infrared images are used to determine
relative position and velocity of the target satellite (the LightSail satellite, ejected from the Prox-
1 satellite). The size and shape of the target satellite's pixel area in the infrared images are used to
determine range [97]. Prox-1 launched in 2019.
RemoveDEBRIS
RemoveDEBRIS is a European mission to demonstrate on-orbit active debris removal techniques
which launched in 2018 [60]. The mission includes the testing of a harpoon and net to capture
a debris target, an inspection system including cameras and flash LIDAR, and the processing of
images for cooperative and non-cooperative navigation (it appears that this processing will be done
on the ground using images acquired during orbital operations). Cubesats (DebriSATs) are ejected from
the spacecraft (RemoveSAT) and used as targets for debris removal. Several navigation approaches
are being researched for inclusion in the demonstration including model-based pose estimation
using both edge- and color-based features [98]. To simulate the model for the reprojection error
minimization approach, both a CPU using wireframe models and a GPU approach using full 3D
rendering have been proposed.
Others
There are multiple spacecraft missions likely using computer vision for which very little information
could be found in the literature. We briefly summarize what is known about these missions before
concluding this section. In 2005, the U.S. Air Force launched the XSS-11 satellite which included
the RELAVIS scanning LIDAR. In 2014, the U.S. Air Force launched the Automated Navigation
and Guidance Experiment for Local Space (ANGELS) satellite which was developed by the U.S. Air
Force Research Laboratory. The satellite payload performs "detection, tracking, and characterizing
of space objects" approaching within "several kilometers" of the target [99]. In 2016, China deployed
the Banxing-2 microsatellite from the Tiangong-2 space station which then conducted formation
flight and autonomous navigation with the station [100]. Banxing-2 had two onboard cameras
operating in visible and infrared, though it is unclear if the autonomous navigation uses these
cameras.
2.3.2 Future Missions
Mission                                        Year      Vision Application(s)
Proba-3                                        2020      Cooperative Pose Estimation [101]
CubeSat Proximity Operations Demonstration     2020      Non-cooperative Pose Estimation [102]
Mars 2020                                      2020      Non-cooperative Pose Estimation [103]
AAReST                                         Unknown   Non-cooperative Pose Estimation [104]
SUMO/FREND                                     Unknown   Non-cooperative Pose Estimation [105]
Asteroid Redirect Mission                      Unknown   Unknown
HyspIRI                                        Unknown   Remote Sensing Image Analysis

Table 2.2: Select future space missions using computer vision.
In this section, we note any planned or proposed space missions that intend to use orbital
computer vision. A compilation of identified future missions can be found in Table 2.2. This
list is likely incomplete and only projects which appear to be active are included. There have
been a number of concepts which have undergone some development in the past (Phoenix, Tecsas,
ConeXpress, SMART-OLEV, etc.) which are not summarized here; however, any published insights
from these development activities for orbital computer vision appear later in this chapter.
The California Institute of Technology and the University of Surrey are collaborating on a
future formation flight and docking demonstration system known as Autonomous Assembly of a
Reconfigurable Space Telescope (AAReST) [104]. Computer vision is used to enable un-docking,
reconfiguration, and re-docking of the individual nanosatellites which make up the aggregated tele-
scope. Designers are planning to use a commercial modulated light sensor (Microsoft Kinect) to
conduct pose estimation and target identification on each spacecraft. The University of Surrey pre-
viously conducted research to design the Surrey Training Research and Nanosatellite Demonstrator-
2 (STRaND-2) satellite which included a prototype LIDAR pose estimation concept based on the
Microsoft Kinect which uses structured light in the short wave infrared range with a CMOS camera
to determine depth at 60 frames per second [106]. In addition, AAReST will use fiducial mark-
ings consisting of LEDs and patterned glyphs placed on all spacecraft to aid in navigation. It is
anticipated that all satellites will have these sensors to provide robustness to solar blinding and
equipment failure; this is a conceptual break from the chaser/target framework that many of the
past missions have adopted.
ESA plans to undertake a technology experiment known as the Project for Onboard Autonomy-3
(PROBA-3) which will be ESA's first precision formation flying mission, using two separated space
vehicles to form a 150 m solar coronagraph [107]. PROBA-3 will use a cooperative pose estimation
system to determine the pose of one passive satellite from the other maneuvering satellite. To
accomplish this, PROBA-3 uses an optical camera (VBS) to observe active light beacons on the
target spacecraft, providing line of sight measurements to each resolved beacon at a rate of 1 Hz
[101]. The system is intended to work from 2 km to 10 m.
DARPA has sponsored the SUMO/FREND project to demonstrate the feasibility of servicing
satellites in GEO. SUMO is the overall project, while FREND is the payload portion. Obermark
et al. present some information on the vision system which is envisioned to include an imaging
LIDAR for pose estimation during approach and a stereo-vision camera (including three CMOS
cameras for both redundancy and challenging illumination conditions) for robotic manipulation in
close ranges [105]. The authors focus on the challenges of lighting for vision applications on orbit.
Specifically, by mounting the cameras with various orientations, some specular reflections can be
overcome geometrically. Early research on grapple point identication includes Hough-transform-
based feature detection of "cup-cone" components on the target spacecraft.
The Integrated Navigation Sensor Platform for EVA Control and Testing (INSPECT) is a pro-
posed sensor suite for SPHERES which includes a stereo vision optical camera (the aforementioned
VERTIGO Goggles), a LIDAR imager (MESA SR4000), and a thermal infrared camera (FLIR A5)
[108]. The authors view the intelligent fusion of these complementary various data sources as a key
enabler for future applications. The system has been tested in ground laboratories and zero gravity
parabolic flights.
2.4 Vision Sensors
Computer vision in space relies on vision data produced predominantly by two types of sensors: op-
tical cameras which produce two-dimensional intensity images in one or more spectral bandwidths
(i.e. colors), and LIDAR sensors which produce a range measurement for each pixel in addition
to the intensity image. Additional sensors including range (LIDAR or RADAR), satellite naviga-
tion (GPS), and inertial sensors (accelerometers, gyroscopes) are often integral parts of relative
navigation solutions. This section surveys available sensing technologies used for orbital computer
vision.
2.4.1 Visual Cameras
Star Trackers for Orbital Computer Vision
Star trackers have been proposed by multiple researchers as onboard computer vision sensors for
target tracking and pose estimation. As mentioned in Section 2.3.1, the PRISMA satellite used a
star tracker with modied software for vision tasks. McBryde and Lightsey presented a dual-use
imaging sensor meant to perform both star tracking and visual navigation of a small satellite target
[109]. Zhang et al. discuss a single star tracker used in both star tracking and rendezvous mode by
adjusting the integration time or choosing focal planes with adjustable sensitivity [110]. Reflected
light from close objects such as spacecraft saturates the sensor when using typical star tracking
integration times, causing blooming that challenges vision algorithms. This same exposure issue was
encountered during an XSS-10 experiment sequence which attempted to image a target spacecraft
using the onboard star tracker [38]. The specic experiment sequence was unsuccessful because the
auto-exposure system did not operate successfully and the target intensity overwhelmed the CCD
focal plane array.
Alternative Camera Configurations
Darling et al. discuss the design of a stereo-vision camera system for use in proximity operations.
The Features from Accelerated Segment Test (FAST) algorithm is used to detect features, and Fast
Retina Keypoint (FREAK) feature descriptors are used to find corresponding features in each cam-
era using an approximate nearest neighbors algorithm (ANN) [111]. The line-of-sight measurements
of the object in each camera frame are fused in an Extended Kalman Filter (EKF) with inertial,
GPS, Sun, and magnetometer sensors to determine the relative position of the target satellite.
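A minimal OpenCV sketch of the detection and matching stage described above (FAST detection, FREAK description, approximate nearest-neighbor matching) is shown below. It assumes the opencv-contrib build (FREAK lives in the xfeatures2d module), synthetic test images in place of real stereo frames, and a standard ratio test; it is not the authors' implementation.

```python
import numpy as np
import cv2

# Synthetic stereo pair: random rectangles provide corners for FAST to find.
rng = np.random.default_rng(0)
left = np.zeros((480, 640), np.uint8)
for _ in range(40):
    x, y = int(rng.integers(60, 560)), int(rng.integers(60, 400))
    cv2.rectangle(left, (x, y), (x + 30, y + 20), int(rng.integers(60, 255)), -1)
right = np.roll(left, 5, axis=1)   # crude stand-in for the second stereo view

# Detect corners with FAST, then describe them with FREAK (opencv-contrib).
fast = cv2.FastFeatureDetector_create(threshold=25)
freak = cv2.xfeatures2d.FREAK_create()
kp_l, des_l = freak.compute(left, fast.detect(left, None))
kp_r, des_r = freak.compute(right, fast.detect(right, None))

# Approximate nearest-neighbor matching of the binary descriptors (FLANN / LSH).
flann = cv2.FlannBasedMatcher(dict(algorithm=6, table_number=6, key_size=12,
                                   multi_probe_level=1), dict(checks=50))
matches = flann.knnMatch(des_l, des_r, k=2)
good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
print(f"{len(good)} stereo correspondences")
```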
Airbus demonstrated a sensor suite consisting of two IR cameras and one visible camera that
were tested during the last ATV flight [47]. The system, known as LIRIS-1, was created by Sodern
and used commercial off-the-shelf uncooled microbolometer IR cameras and a visible camera. All
three cameras had a 60 by 45 degree field of view.
Two cameras can be combined to increase the field of view. For example, Du et al. [112]
propose using two cameras with partially overlapping fields of view to increase the total field of
view available to a vision system in a collaborative manner. In addition to limited field of view,
another challenge that vision systems must address is automatic exposure control [48].
Vision Based Sensor (VBS)
The VBS was designed by the Technical University of Denmark as an extension of their existing
micro Advanced Stellar Compass (microASC or ASC). VBS provides line-of-sight (also known as
bearing) information for non-stellar space objects at magnitude 7 or brighter [113] in addition to
identifying and locating stars. In the case of PRISMA, VBS used two additional star cameras, one
unmodified for far range operations and one that has been modified specifically for bright scenes by
adding a fixed aperture and filter [113]. VBS operation depends on which mode it is in: far range,
intermediate range, or short range.
At far range, VBS acts as a nominal star tracker with the exception that it examines a list of
luminous objects not found in the onboard catalog, which is a byproduct of normal star identification
algorithms. Each image will produce 2-20 of these non-stellar objects. VBS compares the
list between subsequent frames to determine which of these objects represents the target; the dis-
crimination algorithm uses a band of acceptable orbital velocities based on the assumption that the
target is in the same general orbit as the chaser [113]. Note that in far range, because of the star field
which provides attitude information for the chaser spacecraft, directional accuracy for the target
spacecraft is on the order of 3 arcseconds (30 m at 1000 km). Far range during rendezvous begins
with a systematic search of the sky, though experiments onboard PRISMA demonstrated that VBS
tended to begin tracking an erroneous target object, resulting in unacceptable filter initialization
[114].
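The frame-to-frame discrimination idea can be sketched as a simple gating step: pair each non-catalog detection with its nearest neighbor in the next frame and keep only pairings whose apparent rate falls inside an assumed band. The band values and the nearest-neighbor pairing below are placeholders, not the PRISMA algorithm.

```python
import numpy as np

def discriminate_target(candidates_t0, candidates_t1, dt, rate_band=(1e-4, 0.05)):
    """Frame-to-frame gating of non-catalog detections, in the spirit of the VBS
    far-range mode: keep pairings whose apparent angular rate (rad/s) lies in a
    band consistent with a co-orbiting target (excluding essentially fixed stars).
    The band values here are illustrative, not PRISMA parameters.

    candidates_*: (N, 2) arrays of bearings expressed as (azimuth, elevation) in radians.
    """
    keep = []
    for p0 in candidates_t0:
        d = np.linalg.norm(candidates_t1 - p0, axis=1)   # nearest candidate next frame
        j = int(np.argmin(d))
        rate = d[j] / dt
        if rate_band[0] <= rate <= rate_band[1]:
            keep.append((p0, candidates_t1[j]))
    return keep

# Example: the first candidate drifts slowly (kept), the second jumps (rejected).
t0 = np.array([[0.010, 0.002], [0.300, -0.100]])
t1 = np.array([[0.011, 0.002], [0.500, 0.200]])
print(discriminate_target(t0, t1, dt=1.0))
```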
At intermediate range, the target becomes so bright that the camera shutter begins to be used
and stars become too dim to be detected. The varying luminosity of the object allows for an
estimate of distance to the target in this mode.
At short range, a pattern of LED fiducials flashing at one pulse per second on each side enables
cooperative pose estimation. Accuracies of 0.5 millimeters in range, 0.1 millimeters in lateral cross-
range, and 1 degree orientation in all axes are achievable. Note that the active LED illumination
enables accurate pose estimation despite illumination variations. In addition, VBS can provide non-
cooperative pose estimation by matching detected features between images and a three dimensional
target model known a priori. Accuracies degrade by five to ten times, and this non-cooperative mode
is sensitive to illumination conditions. For technical details, see [115].
ULTOR
The ULTOR system, built by Advanced Optical Systems, is a video processor that performs auto-
matic target recognition based on data provided by imaging sensors [116]. It was developed with
the U.S. Navy. The software conducts spatial frequency correlation using training images. The
system operates on several different processor architectures, including a Field Programmable Gate
Array (FPGA), and can perform 180 template correlations per second using a 256 x 256 pixel
image size. A single camera is used as input. Pose is estimated by using a correlation technique to
identify known distinctive reference features from training data (handles, bundles of wires, struc-
tural elements, etc.), measuring the location of these features in the image, and then using an
n-point perspective algorithm to determine pose from these feature locations in the image as well
as their corresponding known locations on a 3D model of the target spacecraft [117]. Training data
is produced by creating high quality simulated imagery of the target spacecraft along planned tra-
jectories that will be encountered during the mission; unless simulated prior to launch, this system
cannot perform pose for arbitrary relative locations, orientations, and illumination angles. Finally,
templates are stored in bins separated by range to shorten search times for the correlation engine
when some knowledge of distance information is known.
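Once reference features have been located in the image and their corresponding positions on the target 3D model are known, the final n-point perspective step is a standard PnP solve. The sketch below illustrates that step with OpenCV; the feature coordinates, pixel measurements, and camera intrinsics are invented placeholders, and ULTOR's correlation engine itself is not reproduced.

```python
import numpy as np
import cv2

# Known 3D locations of distinctive features on the target model (meters, body frame).
# These values are illustrative placeholders, not a real spacecraft model.
model_points = np.array([[0.5, 0.2, 0.0], [-0.4, 0.3, 0.1],
                         [0.0, -0.5, 0.2], [0.3, 0.4, -0.1],
                         [-0.2, -0.3, -0.2], [0.1, 0.0, 0.4]], dtype=np.float64)

# Corresponding pixel locations measured in the image by the recognition stage.
image_points = np.array([[420.0, 310.0], [210.0, 280.0], [330.0, 470.0],
                         [390.0, 250.0], [260.0, 430.0], [320.0, 180.0]],
                        dtype=np.float64)

# Assumed pinhole camera intrinsics with no lens distortion.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)   # rotation from target body frame to camera frame
print("relative position (m):", tvec.ravel())
```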
Autonomous Rendezvous and Capture Sensor System
ARCSS was the relative navigation system used by the Orbital Express ASTRO spacecraft to sense
the NEXTSat target. The system included two visual cameras (long range and short range), a
long wave infrared sensor for tracking during operations in poor lighting, a laser rangefinder, and
an LED spotlight for close range operations during nighttime conditions [118]. The visible sensors
used CMOS focal planes and were chosen to reduce blooming and streaking under harsh lighting
conditions. The infrared sensor was an uncooled microbolometer. ARCSS was used alongside the
AVGS on Orbital Express at short ranges to provide redundant measurements of bearing, range, and
relative attitude. When the target became sufficiently large within the field of view, Vis-STAR was
used to determine the attitude by spatial correlation silhouette tracking [119]. An apparent motion
algorithm between frames is used to subtract Earth background. The tracking system is designed
to operate using visible or infrared imagery interchangeably, providing continuous coverage over
various lighting conditions. According to flight results, the system performed successfully during the
Orbital Express mission under various stressing on-orbit lighting conditions and Earth background
clutter to provide bearing, range, and attitude measurements [81].
2.4.2 LIDAR Cameras
LIDAR is an active sensing technology that uses light to estimate the distance to illuminated objects
by measuring round trip flight time for each returned reflection. LIDAR has found extensive use
in remote sensing to produce highly accurate altitude measurements from airborne or space-based
platforms. Imaging LIDAR is a subset of LIDAR technology which produces range measurements
for individual pixels in a given field of view, much as a camera collects intensity values for each pixel.
Current imaging LIDAR sensors can be categorized by the method used to capture the scene: scanning,
detector array, or modulated light. Scanning LIDARs use a mechanical scanner to pass the narrow
laser beam over the scene while determining the range for each pixel sequentially. Detector arrays
(often referred to as flash LIDARs) use a single flash of laser light and then record the round trip
time of the returning light simultaneously for each pixel in the array. Modulated light LIDARs
encode a pseudo-random number onto the laser light and determine the relative timing offset in
the returned light for each pixel. Unlike camera systems which rely on computer vision algorithms to
deduce range, imaging LIDARs directly produce a 3D point cloud of the target scene. A somewhat
recent survey of LIDAR use in space by Christian and Cryan [120] provides an introduction to
LIDAR in space as well as a summary of space-qualied LIDAR sensors available today. [121]
provides insight into LIDAR design requirements for space applications.
Heretofore, scanning LIDARs have been the most used type of imaging LIDAR in space. However,
flash LIDARs, which do not have moving parts (a major potential source of failure), have
been used in recent space-based demonstrations more commonly than scanning LIDARs. Flash
LIDARs are able to capture an entire scene at a single point in time, which is useful when observing
spinning or moving targets so as to create an accurate range image. Flash LIDARs have been
demonstrated on three space shuttle flights prior to retirement: STS-127, STS-133, and STS-134
[122]. Amzajerdian et al. have provided a recent survey on flash LIDAR systems for future space
missions including planetary landing and orbital proximity operations as well as perspectives on
near term flash LIDAR technology improvements [122]. For planetary landing, flash LIDAR can be
used for altimetry, terrain relative navigation, hazard detection and avoidance, and hazard relative
navigation [122]. These functions were demonstrated as part of NASA's Autonomous Landing
and Hazard Avoidance Technology project to improve technology levels for future missions.
Modulated light systems have yet to see use in space, though researchers are exploring this
for space-based experiments. For example, researchers have demonstrated the use of modulated
light depth sensors such as photonic mixer devices. One such system uses a field of infrared
(870 nm) LEDs to illuminate a target and the light is modulated at 20 MHz which allows a non-
ambiguous distance range of 7.6 m [123]. These types of devices have challenges in orbital situations
with reflective surfaces and secondary illumination (e.g., the Sun) which require future research to
compensate [124].
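For reference, the non-ambiguous range of such a continuous-wave modulated system follows directly from the modulation frequency. With the 20 MHz figure quoted above, R_max = c / (2 f_mod) = (3 x 10^8 m/s) / (2 x 20 MHz) ≈ 7.5 m, in line with the roughly 7.6 m reported in [123]; the small difference presumably comes from the exact modulation frequency or rounding.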
Perhaps the simplest imaging LIDAR system is a single laser range finder (single pixel) which
is scanned over the space object target either by passive orbital or attitude motion of the sensing
spacecraft. Laser range finders are much less expensive, lighter, and have longer operating
ranges than traditional imaging LIDARs. A single beam laser range finder is proposed by Nayak
et al. to form target point clouds by designing relative orbit and attitude trajectories including
planned maneuvering around a target to achieve a minimum point cloud density [125]. The ability
and willingness to switch between different relative orbits using propulsion is the key enabler to
create three dimensional point clouds with sufficient target coverage using a single laser. [126] used
a single beam laser range finder as well as a multi-beam laser range finder arranged in a grid
pattern to create point clouds while circumnavigating the target in a relative orbit. Results showed
that error-free laser range finder measurements could create a sufficient point cloud; however, the
time to collect an equivalent point cloud using imaging LIDAR was almost an order of magnitude
less. Finally, a laser range finder-based system would require high accuracy pointing knowledge to
create the point cloud. [127] propose the inclusion of an infrared camera in a single-beam laser
range finder-based point cloud generation system. The infrared camera provides image data of the
target as well as detects the strike point of the laser. These image-registered strike points are used
to create Voronoi cells which are then used to reduce coverage gaps by finding the largest empty
circle over all Voronoi points. Attitude trajectories are then created to target these uncovered areas
with subsequent laser range finder measurements.
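A minimal two-dimensional sketch of the largest-empty-circle idea is shown below using SciPy: the Voronoi vertex farthest from its nearest strike point marks the most under-covered spot. Projecting strike points to a plane and ignoring the target boundary are simplifications, and the random points are placeholders; this is not the implementation of [127].

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

def largest_empty_circle(strike_points):
    """Return the center and radius of the largest circle (among Voronoi
    vertices) containing no existing strike point: the biggest coverage gap.

    strike_points: (N, 2) array of image-registered laser strike points.
    """
    vor = Voronoi(strike_points)
    tree = cKDTree(strike_points)
    # Distance from each Voronoi vertex to its nearest strike point is the
    # radius of an empty circle centered there.
    radii, _ = tree.query(vor.vertices)
    best = int(np.argmax(radii))
    return vor.vertices[best], radii[best]

# Example: random strike points over a 1 m x 1 m face of the target.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(50, 2))
center, radius = largest_empty_circle(pts)
print("aim the next measurement near", center, "gap radius", radius)
```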
It is interesting to note that for both flash and scanning LIDAR imaging systems, the field of
view can be dynamically adjusted in real-time by adjusting the divergence of the transmitter beam;
for example, a narrow beam can be used to increase operational range and a wide beam can be
used to observe larger surfaces, perhaps for hazard detection. In scanning systems, the beam scan
pattern can be dynamically adjusted to optimize both power usage and pose estimation accuracy.
For example, the current pose estimate can be used to update the scan size to match the object or
region of interest [49].
Super-resolution techniques have been presented as a method to increase the image resolution
of a flash LIDAR system, which typically has limited resolution as compared to optical cameras
[122]. These methods rely on combining multiple images of the same, preferably static, scene to
yield a single higher resolution image. The method has three steps: three dimensional modification
via back projection method, a six dimensional use of the Lucas-Kanade registration algorithm,
and a modified inverse filtering algorithm [128]. The researchers find that a magnification of 4
times beyond the sensor resolution can be achieved by using 20 images of the same scene in their
planetary landing scenario.
The following sections summarize a selection of LIDAR systems which have been designed for
use in space.
Vision Navigation Sensor
The VNS is a flash LIDAR relative navigation sensor being developed by NASA and Lockheed
Martin [74]. The Nd:YAG laser operates at 1572 nanometers and has a 256 x 256 pixel resolution
focal plane array [53]. As a LIDAR, it produces intensity images with a depth value for each pixel.
VNS is currently planned to be used in a cooperative mode with specifically designed retro-reflectors
affixed to the target. The system operates at 30 Hz, and 3D positions of retro-reflector targets are
provided at 5 Hz [53]. Researchers have also proposed non-cooperative operations including pose
estimation of a simulated rocket body rotating at 30 degrees per second [53].
LIRIS-2 3D Imaging LIDAR
LIRIS-2 3D imaging LIDAR is a scanning LIDAR produced by Jena-Optronik and demonstrated
on the ATV-5 mission (August 2014) where it collected data during both rendezvous and departure
[129][47]. The sensor uses an eye-safe laser to scan a 40 x 40 degree field of view and has an
operating range of 3500 m against targets with retro-reflector fiducials and 260 m without fiducials.
The image frame rate is 3 Hz. Several scan modes are available depending on the target object
distance.
Videometer
The Videometer is the primary relative navigation system for the ATV from a distance of 300 m
to docking and is derived from the Sodern SED16 autonomous star tracker; the primary difference
is the active illumination laser and the processing of the retroreflector patterns [130]. Range, line-of-
sight, and target attitude (when within 30 m) are estimated once per second. The system is designed
to be robust to solar/lunar/albedo illumination as well as expected space environment noise sources
(e.g., the South Atlantic Anomaly). At 300 m, the range accuracy is 6 m. The field of view is 24 degrees
square.
Rendezvous Sensor / Telegoniometer
The Rendezvous Sensor produced by the German company Jena-Optronik is a scanning LIDAR
camera which has been used for cooperative pose estimation on all five ESA ATV missions, as the
sole relative navigation sensor on the Japanese HTV missions, and as the primary sensor for the
Orbital Sciences (now, Northrop Grumman) Cygnus vehicle [47] [131]. The Rendezvous Sensor is
designed to use three retro-reflectors on the target to perform cooperative pose estimation; however,
the designers have experimented with using the system to create a depth map of the field of view
for use in non-cooperative pose estimation [50]. The field of view of the laser scanner is adjustable
in real-time onboard and supports at least a 40 by 40 degree field of view requirement for the
European ATV. The design range is between 1 and 730 m. Finally, because its original design was
meant for supply missions to the manned ISS, the laser used is eye safe.
Advanced Video Guidance Sensor
The AVGS is a flash LIDAR system which uses two laser diodes at 800 nm and 850 nm [132].
Two colors are used such that specifically designed retro-reflectors on the target do not reflect light
in one of the bands. The targets can therefore be easily isolated by subtracting one illuminated
image from the other. The sensor is intended to provide accurate operations from 1 to 300 m. The
field of view is approximately 16 degrees. The output rate of target positions is 5 Hz. This sensor
was used on DART and Orbital Express. Additional information on AVGS can be found in [133]
and information on flight results appears in [134]. It is noteworthy that results from ground tests
and flight tests showed consistency, indicating that sufficient ground testing may produce results
comparable to space.
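The two-wavelength isolation step lends itself to a very small sketch: subtract the frame taken under the non-reflected band from the frame taken under the reflected band, threshold, and take blob centroids as candidate reflector locations. The Otsu threshold, the minimum blob area, and the synthetic frames below are assumptions rather than AVGS parameters.

```python
import numpy as np
import cv2

def isolate_retroreflectors(img_reflective, img_nonreflective, min_area=5):
    """Difference-image detection of wavelength-selective retro-reflectors.

    img_reflective: frame illuminated in the band the reflectors return.
    img_nonreflective: frame illuminated in the band they do not return.
    Returns a list of (x, y) centroids of candidate reflector blobs.
    """
    diff = cv2.subtract(img_reflective, img_nonreflective)   # background cancels
    _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

# Synthetic example: three bright spots present only in the reflective band.
a = np.full((240, 320), 40, np.uint8)
b = a.copy()
for x, y in [(60, 50), (160, 120), (250, 200)]:
    cv2.circle(a, (x, y), 3, 255, -1)
print(isolate_retroreflectors(a, b))
```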
Other LIDAR Sensors
The Rendezvous Lidar System is a scanning LIDAR system developed by MDA and Optech for use
on the XSS-11 mission [44][135]. Other sensors not summarized here include TriDAR, DragonEye,
Trajectory Control Sensors, Video Guidance sensors, and others; see [120] for more information.
2.4.3 Simulation and Testing
As with all spacecraft components, orbital computer vision systems must go through extensive
modeling, simulation, and testing prior to launch. In addition, the cost of on-orbit testing and the
limited availability of existing image sets from proximity operations on orbit has led researchers in
this domain to create simulations in which to test their computer vision sensors and algorithms. The
most common simulation method to date has been computer-based 3D rendering. This technique,
which has been made very affordable in the last twenty years, relies on geometric models of the
target satellite.
Researchers at the Beijing University of Aeronautics and Astronautics (BUAA) have created
the BUAA Satellite Image Dataset (BUAA-SID 1.0) which consists of 25,760 rendered gray scale
images of satellites modeled using 3ds Max [136]. Each of the 56 satellites in the dataset contains 230
different viewpoints around the simulated satellite. This dataset does not include samples with varying
illumination or scale. The 3D spacecraft CAD models from this database were used to create an
improved high fidelity satellite image data set (BUAA-SID 1.5) by Zhang et al. by incorporating
material properties, realistic lighting, point spread function simulation of optics, motion blur, and
imaging noise [137].
Lingenauber et al. published a publicly available data set of grayscale stereo images taken 2 m
from a spacecraft mockup target under various illumination conditions and approach trajectories
[138]. The complete data set is currently publicly available at
http://rmc.dlr.de/rm/en/staff/martin.lingenauber/crooscv-dataset and includes ground truth poses,
camera calibration, and target satellite 3D models.
West Virginia University created a virtual simulator using OpenGL (a widely supported application
programming interface for creating 2D and 3D images, often using hardware accelerated rendering)
to render target spacecraft
during pose estimation tests [139]. Woods and Christian created GLIDAR, an open source OpenGL-
based software package to create depth images (i.e. LIDAR point clouds) from 3D models [140].
Zhang et al. used OpenGL to render distant (20 to 70 km) space targets by incorporating star
catalogs, 3D satellite models and orbits, sensor parameters, and Sun position with Gaussian noise
added to simulate imaging noise [141]. [142] used 3ds Max to render images and OpenSceneGraph
to model the target object behavior.
Oumer et al. [143] note that the major difficulty in using computer rendering for training data or
testing is modeling targets with non-smooth surfaces, specifically those covered with Multi-
Layer Insulation, which produces highly reflective and specular surfaces. In this case, physical
modeling with solar simulators appears to provide the highest fidelity test images.
Robotics frameworks have been used to simulate orbital computer vision scenarios. Leveraging
open source frameworks especially allows for rapid and inexpensive virtual experimentation. [144]
used the open source Gazebo 3D simulator to produce rendered images and imaging LIDAR point
clouds of space station targets and then used the open source Robot Operating System framework
combined with the open source Point Cloud Library to determine pose using the Iterative Closest
Point algorithm.
Many researchers have used a high fidelity graphics simulator that accurately models sensors,
including the use of material-specific bidirectional reflectance distribution functions (BRDF) [145],
sensor noise models, and representative illumination models. Dahlin et al. [146] used NASA's
Engineering DOUG Graphics Engine (EDGE) to simulate images of the ISS. EDGE is publicly
available and has been used to train astronauts for rendezvous and docking. EDGE produces
images under varying lighting conditions, camera settings, resolutions, and imaging rates.
Physical testing facilities have also been used to experiment with and test computer vision
applications for space [147] [148]. The U.S. Naval Postgraduate School created a flat floor facility
to simulate rendezvous scenarios using floating target and chaser spacecraft, which helps to simulate
a three-dimensional (two translational and one rotational) version of free six-dimensional motion
on orbit [149]. The chaser spacecraft used cooperative pose estimation to track a target satellite
equipped with three infrared LEDs. The Georgia Institute of Technology has a 5 degree of freedom
spacecraft simulator known as the Autonomous Spacecraft Testing of Robotic Operations in Space
(ASTROS) which has been used for orbital computer vision tests [150]. In support of NASA's
Satellite Servicing Capabilities Office, the Goddard Space Flight Center has created a Servicing
Technology Center which includes facilities to test computer vision systems [151]. This Center
includes physical mockups of asteroids, satellite components, and full satellite targets.
To verify the optical navigation techniques for OSIRIS-REx, a 14x14 meter wall was constructed
to simulate a region of the asteroid near touchdown [57]. A robotic manipulator was used to simulate
asteroid approaches. The U.S. Naval Research Laboratory (NRL) has a test facility for proximity
operations, the NRL Proximity Operations Testbed, including the ability to perform calibration
and system verification under controlled lighting conditions and geometry [105]. [152] presents a
testbed for on-orbit servicing which uses a custom 3D simulation environment, VEROSIM, and
incorporates two lightweight robots with a physical satellite scale model and a stereo camera,
respectively.
The European Proximity Operations Simulator 2.0, located at the German Space Operations
Center, provides a simulation and test environment for proximity operations using two robotic
manipulator platforms to precisely simulate 6DOF relative motion from 25 to 0 m apart [153]. A
solar simulator (ARRI Max 12/18) produces orbit-like illumination conditions. Rendezvous sensors
such as visible or LIDAR cameras are regularly used as part of simulated experiments to test and
verify relative navigation solutions.
Scale models of target spacecraft are another common way to physically simulate computer
vision on orbit. These models are typically illuminated by single light sources which attempt
to mimic solar illumination. [154] used a physical scale model of Radarsat along with thermal
cameras to test pose estimation algorithms. [155] uses a scale model of Chang'e-2 mounted on
a 6 DOF platform which is illuminated by a solar simulator. Kelsey et al. [156] note that in
their experiments, the pose estimation results using synthetically rendered images of spacecraft
and physical scale models were similar.
Spacecraft materials are an important part of modeling. Harris et al. characterized the close
range (5 to 30 m) use of a laser range finder on typical spacecraft materials (aluminum, Mylar,
solar cells, fine steel mesh) under a variety of lighting conditions and incidence angles (0-45 degrees)
[157].
A key challenge for a space-based computer vision system is establishing a verification regime which
can qualify a vision system for general situations beyond the limited testing scenarios. As an
example, ideally a rendezvous vision system can be tested against only a subset of potential target
satellites. This is contrary to current practices where a complete vision system has been qualified
for a specific mission with a specific target, perhaps even with a specific relative trajectory. The
appeal of some form of general qualification is reduction of verification costs. English et al. propose
a method for general performance evaluation of space-based vision systems [49]. Their framework
includes two types of testing: relative performance over some operational range and absolute testing
of the simulation environment to ensure that it correctly models system performance. Since non-
cooperative pose estimation may be highly target dependent, the authors propose a generic shape
(a reduced pose ambiguity cuboctahedron) to be used as a reference target during pose estimation
system qualification tests.
Although ground calibration and testing of a system is essential, in many cases, either one
time or occasional on-orbit calibration will be necessary to account for system changes due to
launch effects, thermal effects, optics degradations, or electronic failures. In the case of the Orion
Multi Purpose Crew Vehicle, which requires autonomous optical navigation as part of flight safety,
Christian et al. present a detailed mathematical treatment of an on-orbit autonomous calibration
procedure using star field images [158]. It is noted that the high-accuracy angular position infor-
mation of stellar objects from star catalogs actually makes on-orbit calibration quality potentially
"superior" to pre-launch calibrations. The Levenberg-Marquardt Algorithm non-linear optimiza-
tion technique is noted to be the standard algorithm for performing camera calibration and is used
here as well to recover the ten camera calibration parameters (five intrinsic parameters and five
lens distortion parameters).
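To illustrate the flavor of such a calibration, the sketch below recovers a simplified five-parameter camera model (focal lengths, principal point, and one radial distortion term, a subset of the ten parameters discussed above) by minimizing star reprojection error with a Levenberg-Marquardt solver. The synthetic star directions, noise level, and initial guess are assumptions; this is not the Orion procedure of [158].

```python
import numpy as np
from scipy.optimize import least_squares

def project(params, dirs):
    """Pinhole model plus one radial distortion term; dirs are unit vectors in the camera frame."""
    fx, fy, cx, cy, k1 = params
    x, y = dirs[:, 0] / dirs[:, 2], dirs[:, 1] / dirs[:, 2]
    r2 = x**2 + y**2
    return np.column_stack((fx * x * (1 + k1 * r2) + cx,
                            fy * y * (1 + k1 * r2) + cy))

# Synthetic "star field": catalog directions in front of the camera and the pixel
# centroids they would produce under the true (unknown) parameters, plus noise.
rng = np.random.default_rng(1)
dirs = rng.normal(size=(200, 3))
dirs[:, 2] = np.abs(dirs[:, 2]) + 3.0
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true = np.array([800.0, 805.0, 512.0, 384.0, -0.02])
measured = project(true, dirs) + rng.normal(scale=0.1, size=(200, 2))

residual = lambda p: (project(p, dirs) - measured).ravel()
guess = np.array([750.0, 750.0, 500.0, 380.0, 0.0])
sol = least_squares(residual, guess, method="lm")   # Levenberg-Marquardt
print(sol.x)   # should approach the true parameter vector
```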
McBryde and Lightsey present a complete testing pipeline for small satellite imaging sensors
intended for star tracking and proximity navigation which involves image simulation, software-in-
the-loop testing, and hardware-in-the-loop validation [109].
2.5 Position Estimation
2.5.1 Star Tracking
Perhaps the first application of computer vision on orbit, and certainly the most mature and pro-
liferated, is the imaging star tracker. Modern star trackers use a CCD focal plane array to form
images of the star field and then perform onboard image processing techniques combined with star
identification algorithms to yield an inertial attitude solution for the star tracker and thus the space-
craft. This solution may be fused with other sensors: magnetometers sensing orientation relative
to a planetary magnetic field, Sun sensors providing the direction of the Sun, and inertial sensors
which measure changes in orientation (e.g. mechanical, ring laser, and resonating gyroscopes). A
star tracker is often viewed as a high accuracy (arcminute or arcseconds), low rate sensor (1-10
Hz), which provides an update to inertial navigation. Recent authors have discussed Star Tracker
Only attitude estimation and the design challenges this concept introduces [159]. Star trackers are
covered extensively in the literature [160] including from the image processing perspective [161]
[162]. We briefly highlight the computer vision functions of modern star trackers.
Star tracker processing consists of five steps following image capture: 1) extraction of point
source locations with corresponding brightness values, 2) extraction of features from this point
source set, 3) catalog search for matching stars based on these features, 4) a recursive catalog
search based on a priori attitude information once an attitude estimate is established, 5) attitude
estimation based on known inertial directions of identified stars [163] [164]. The first step of point
source extraction is typically addressed by thresholding the image to find pixels of interest and
then determining the centroid of the region as described in [160]; note that slightly defocused
images are used to spread the stellar point source over several pixels to achieve sub-pixel accuracy.
Steps two through four are typically known as the star identication or Star-ID problem and their
complexity largely determines the processing power and time required for star tracking. If the star
identification problem is attempted without a priori attitude information, this is often known as
the lost-in-space problem. The final step, attitude estimation using algorithms such as QUEST
[165], is a well-established area and will not be discussed here. Improved algorithms and decreasing
hardware cost have made star trackers an option for even microsatellites [163].
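The first step of the pipeline, point source extraction, can be sketched in a few lines: threshold the image, label connected regions, and compute an intensity-weighted centroid so that a slightly defocused star yields sub-pixel accuracy. The background model and threshold factor below are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_point_sources(image, k_sigma=5.0):
    """Step 1 of the star tracking pipeline: threshold the image and return
    intensity-weighted centroids (sub-pixel) with summed brightness values.
    """
    background = np.median(image)
    threshold = background + k_sigma * image.std()
    labels, n = ndimage.label(image > threshold)
    sources = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        w = image[ys, xs] - background              # defocused PSF spread over pixels
        cx, cy = np.average(xs, weights=w), np.average(ys, weights=w)
        sources.append((cx, cy, float(w.sum())))    # (x, y, brightness)
    return sources

# Synthetic example: noisy background plus one defocused star near (41, 61).
img = np.random.default_rng(0).normal(100, 3, size=(128, 128))
img[60:63, 40:43] += [[20, 60, 20], [60, 120, 60], [20, 60, 20]]
print(extract_point_sources(img))
```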
Star identification is an active area of research and there are several recent surveys comparing
the variety of techniques proposed during the last three decades [164] [161] [162]. The earliest
algorithm to make the lost-in-space problem tractable took the separation of the two closest stars
from a candidate star as well as the included angle among all three stars as a set of features; this set
was used to search an on-board database in linear-time [166]. Other techniques have included the
star brightness as a feature to improve catalog search speed by ordering candidate stars to avoid
redundant star pair permutations. Researchers have also focused on improving database search
including the use of binary search trees, neural networks, and the creation of custom catalogs which
make searching more efficient. Finally, pattern recognition algorithms have been proposed based
on the direct use of the binary images following the threshold and centroiding step. Future research
in star trackers includes attitude rate estimation, non-stellar object detection and identification,
improvements in performance, and reductions in cost [164].
2.5.2 Optical Navigation
Optical navigation is the use of images to determine position of a spacecraft relative to a target.
Optical navigation has long been applied manually on the ground for interplanetary missions,
especially those involving close approaches or flybys. Recent advances have created the capability to
perform this navigation autonomously onboard. Note that this section discusses optical navigation
to determine the relative position of a target planetary body. However, the term optical navigation
is also used by the research community to refer to pose estimation; this subject is covered in Section
2.6. Owen [167] provides a thorough tutorial of the mathematical formulation of optical navigation
as well as some of the image processing methods.
Christian [168] presented a method to determine planet centroid and diameter for optical nav-
igation by modeling the planet target as an ellipsoid and then fitting an ellipse to the planet's
horizon in an image. This method is most useful at far range when landmarks cannot be accurately
tracked. The problem can be posed as a maximum likelihood estimation optimization problem
and three algorithms were compared to find a solution, with an algorithm known as the funda-
mental numerical scheme producing the most desirable result. Christian presents an updated limb
localization technique in [169] which includes attention to light illumination considerations for the
target terrain. Finally, another important measurement of interest which can be extracted from
images of target planetary bodies is the angle between the lit horizon and a reference star; three
such measurements can be used to produce line-of-sight measurements [170].
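A simplified version of the limb-based measurement can be sketched with OpenCV by thresholding the image, taking the dominant contour as the lit limb, and fitting an ellipse to obtain an apparent center and diameter. This ignores the maximum-likelihood formulation and illumination handling of [168][169]; the synthetic disk is a placeholder.

```python
import numpy as np
import cv2

def fit_planet_limb(image):
    """Fit an ellipse to the brightest connected blob's contour (the lit limb),
    returning the apparent center (pixels) and the major/minor axis lengths.
    """
    _, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    limb = max(contours, key=cv2.contourArea)       # assume the planet dominates the scene
    (cx, cy), (major, minor), angle = cv2.fitEllipse(limb)
    return (cx, cy), (major, minor)

# Synthetic example: a bright disk of radius 80 px centered at (200, 150).
img = np.zeros((300, 400), np.uint8)
cv2.circle(img, (200, 150), 80, 255, -1)
print(fit_planet_limb(img))
```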
Researchers from JPL have proposed a Deep-space Positioning System which provides au-
tonomous position determination in the solar system in a self-contained onboard instrument [171].
This is an extension of the AutoNav concept used during the Deep Space 1, Stardust, and Deep
Impact missions. In addition to performing onboard orbit determination, the system would be
capable of terrain relative navigation for small body exploration or landing scenarios. One such ap-
plication would be the tracking of landmarks on an asteroid or comet to determine relative position
and velocity of the body as well as body dynamics such as the local gravity field. The hardware
proposed for this concept includes an integrated star tracker to establish high accuracy pointing in
addition to observing planetary bodies.
Schwartz et al. presented an interplanetary Cubesat concept which uses autonomous optical
navigation to search for and rendezvous with small bodies such as asteroids [172].
2.5.3 Satellite Tracking
This section extends optical navigation techniques to situations when the target is another space-
craft. If the spacecraft target range is unknown, this is known as angles-only navigation. This is
an extensive and active area of research, of which only some highlights are included here. Note
that the computer vision aspect of satellite tracking is relatively small (pixel or blob detection) and
most of the focus is on orbit determination methods or dynamical system estimation techniques.
Gong et al. present the theory of angles-only navigation using bearing measurements produced
by an onboard CCD camera [173]. An EKF is used to fuse the bearing measurements of the target
spacecraft with GPS information on the observing spacecraft.
Woffinden and Geller formulate an angles-only approach to relative satellite navigation which
provides measurements which are fused with inertial measurements within an EKF [174]. Images
provide a unit vector direction to the target which is provided to the EKF as azimuth and elevation
angles. The performance of this method is analyzed using both a Monte Carlo simulation and
linear covariance analysis.
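The measurement handed to such a filter is typically just the direction to the target expressed as two angles. The sketch below converts a camera-frame line-of-sight unit vector to an azimuth/elevation pair under one common convention (boresight along +z); the convention and the example vector are assumptions, not those of [174].

```python
import numpy as np

def los_to_az_el(u):
    """Convert a camera-frame line-of-sight vector to (azimuth, elevation) in
    radians, using az = atan2(x, z), el = asin(y) with the boresight along +z.
    Angle conventions differ between references.
    """
    x, y, z = u / np.linalg.norm(u)
    return np.arctan2(x, z), np.arcsin(y)

# Example: a target 100 m ahead, 5 m to the right, 2 m up in the camera frame.
az, el = los_to_az_el(np.array([5.0, 2.0, 100.0]))
print(np.degrees([az, el]))
```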
McBryde et al. [109] use the area of the satellite target to estimate the range based on deter-
mining the major and minor axis of the blob compared to knowledge of the spacecraft model. [175]
proposed a computer vision method to determine the separation distance and relative velocity of the
Chang'e-5T1 sample return module during separation prior to Earth re-entry to reduce reliance on
ground-based observers during this critical phase. [176] uses the centroid of a point cloud produced
by stereo-vision to determine the geometric center of a target object. [97] proposed a method to
determine the relative location of a target object using infrared and visible cameras. A standard
blob tracker is used to determine centroid location in the image frame which provides a line-of-sight
measurement to the target. The system then estimates range based on the thresholded area which
is adjusted based on the ratio between major and minor axes of the target in the image. Once a
satellite has been detected, a Kalman Filter can be used to improve tracking between frames by
reducing the likely region the target will occupy on subsequent frames [177].
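A stripped-down version of the area-based range idea looks as follows: the blob centroid gives the line of sight, and under a pinhole model the pixel area scales as A_true * f^2 / R^2, so R ≈ f * sqrt(A_true / A_pix). The known cross-section, focal length, and synthetic blob below are placeholders, and the aspect-ratio adjustment described above is omitted.

```python
import numpy as np
import cv2

def blob_range_and_los(binary_image, focal_px, true_area_m2):
    """Estimate line of sight (pixel centroid) and range from the thresholded
    blob area, assuming a pinhole model: A_pix ≈ A_true * f^2 / R^2.
    """
    m = cv2.moments(binary_image, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # blob centroid (pixels)
    area_px = m["m00"]                                  # blob area (pixels)
    range_m = focal_px * np.sqrt(true_area_m2 / area_px)
    return (cx, cy), range_m

# Example: a 40 x 25 px blob, 800 px focal length, 1.5 m^2 known cross-section.
img = np.zeros((240, 320), np.uint8)
img[100:125, 140:180] = 255
print(blob_range_and_los(img, focal_px=800.0, true_area_m2=1.5))
```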
2.6 Pose Estimation
Perhaps the most common application of computer vision on orbit behind star tracking is estimating
the pose of another nearby satellite. Pose estimation is an essential component of rendezvous and
docking, satellite servicing, debris mitigation, and the exploration of minor planetary objects like
asteroids and comets. An extensive and recent literature review of pose estimation techniques for
spacecraft is given by Opromolla et al. [23]. This section will highlight a variety of techniques used
for pose estimation.
The computer vision literature for orbital applications has generally delineated pose estimation
techniques into either cooperative or non-cooperative. Note that the cooperative term used here
is completely unrelated to the cooperative localization problem. It will be an enduring challenge
for this dissertation to differentiate these homonyms. Cooperative pose estimation uses
fiducial markings which designers place on a target spacecraft to make feature point detection and
matching easier. Conversely, non-cooperative pose estimation assumes that no such markings exist.
Model-based non-cooperative pose estimation leverages a priori information about the target shape
or appearance to estimate pose. Non-model-based non-cooperative pose estimation assumes no a
priori knowledge which often leads to a requirement to estimate shape in addition to pose. The
following sub-sections will examine these pose estimation categories.
Pose estimation is often used for docking or satellite servicing. However, other less traditional
uses have appeared in the literature. For example, [178] proposes pose estimation as a method
to aid in human teleoperation of satellite servicing platforms where communications latency re-
sults in a disparity between real-world state and the virtual state presented to a human operator;
pose estimation enables consistency matching between the real world and the rendered graphics
environment. Other authors have suggested computer vision-based pose estimation as a measuring
solution to tracking a test mass within a drag free satellite mission such as Gravity Probe B, Laser
Interferometer Space Antenna, or the Inner-Formation Gravity Measurement Satellite System. In
this concept, a mass floats freely within a central cavity of a satellite and its position is tracked
using an array of cameras [179].
2.6.1 Cooperative Pose Estimation
Fiducial Markings
Cooperative pose estimation exploits specifically designed fiducial markings installed on the target
spacecraft in known locations. Fiducial markings, also referred to as fiducials, simplify the pose
estimation process and improve robustness of feature identification and matching. Because the
fiducials can be located in the images with high confidence and accuracy and their location in a
target-centered frame is known a priori, fewer feature points are necessary as compared to non-
cooperative pose estimation approaches. The design of fiducials may be mission specific to enable
recognition in different situations: illumination conditions, range of potential target spacecraft
states, and vision sensor characteristics. In one application for efficient cooperative pose estimation,
[150] suggests that fiducials should be robust to scale changes, should be co-planar, in sufficient
quantity for pose estimation, and placed in an asymmetric and non-collinear topology.
Choice of fiducials affects computational requirements; for example, Zhang et al. found a six
times speed up of cooperative pose estimation using their custom circular fiducial markings as
compared to AprilTag, a popular fiducial-based localization technique for augmented reality [180].
In addition to detection, fiducial markings may contain information to enable unique identi-
fication of a specific marking. [181] notes that six fiducials are required for a unique solution, but
only four are required if they are coplanar and non-collinear. Finally, it should be noted that al-
though the focus of this section (and paper) is orbital computer vision, ducial markings are used
extensively in general computer vision and they are an active area of computer vision research.
Fiala suggests some performance metrics when designing or choosing a ducial system including
false positive rate, inter-marker confusion rate, minimal marker size, marker library size, immunity
to lighting, immunity to partial occlusion, identication speed, and vertex jitter characteristics
[182].
Fiducials can be classified as active or passive [183]. Active fiducials emit light which can be
detected by vision sensors. Light emitting diodes (LEDs) are an example of active fiducials [148].
The LED color and flashing period may also be used to uniquely identify a specific fiducial. Passive
fiducials rely on passive illumination from the Sun or Earth's albedo, or on active illumination from
the observing spacecraft. Passive fiducials may be two-dimensional patterns or simple three-dimensional
objects. Some passive fiducials have been created with specific reflective characteristics to improve
detection against noisy or cluttered backgrounds. One common example is the use of a retro-
reflector which only reflects certain bands of light [184]. These retro-reflectors are then illuminated
successively by two different colors of light, only one of which is reflected back. A binary image
from the non-reflective color-illuminated scene is subtracted from the reflective color-illuminated
scene, which helps to isolate the retro-reflectors. This method is especially useful for spacecraft
targets which are wrapped in highly reflective and specular multi-layer insulation. Other common
passive fiducials include solid circles, multi-scale embedded circles [150], shapes which can be easily
recovered via a Hough Transform [185], and barcodes. Several authors suggest fiducials at multiple
scales (near and far) and for multiple purposes (line-of-sight measurement, pose estimation, satellite
identification) [8] [150].
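As an illustrative sketch of the two-color retro-reflector isolation described above (assuming two
pre-registered grayscale frames, one captured under the reflected band and one under the non-reflected
band; the file names and threshold value below are placeholders), simple differencing and thresholding
with OpenCV might look like the following.

import cv2

# Hypothetical pre-registered grayscale frames of the same scene:
# one illuminated in the band the retro-reflectors return, one in a band they do not.
reflective = cv2.imread("frame_reflective_band.png", cv2.IMREAD_GRAYSCALE)
nonreflective = cv2.imread("frame_nonreflective_band.png", cv2.IMREAD_GRAYSCALE)

# Difference image: retro-reflectors appear bright while the background largely cancels.
diff = cv2.subtract(reflective, nonreflective)

# Threshold (value chosen heuristically here) to obtain a binary mask of candidate fiducials.
_, mask = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)

# Extract blob centroids as candidate fiducial image coordinates.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
fiducial_centroids = centroids[1:]  # skip label 0 (the background)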
The ETS-VII space robotic system used a fiducial consisting of two white circles on a black
background which allowed sufficient orientation determination in two axes while the robotic system
had a sufficiently large capture envelope in the third axis [156]. This fiducial allowed for identification
using the modest computational resources (1 MIPS) available at the time. [186] uses three fiducials
which form an isosceles triangle perpendicular to the target surface (one point is mounted
on a stick projecting from the surface). This allows a simple three-point registration algorithm
known as P3P.
One preferred characteristic of a fiducial is the invariance of its centroid location to pose
changes. For example, the centroid location of a circle is invariant to position changes and one
degree of rotation. Gatrell, Sklair, and Hoff proposed concentric contrasting circles as a simple
passive fiducial marking that can be extracted with low computational burden [183].
Testing fiducials under operational conditions is an essential step prior to launch. As an
example, the corner cube retro-reflectors used by the ATV Videometer were found during testing
to induce a bias in the line-of-sight measurements as a result of diffraction in the reflectors [187].
Fiducials can be designed to be self-identifying, which potentially simplifies the feature matching
step. For example, [188] uses the number of dots contained within a detected square as the
identifier. Color-coded fiducials have been proposed to leverage the additional information provided
by multiband (i.e. color) cameras, where a unique color is used for each satellite [8]; this can
be especially useful for multi-satellite constellations where multiple targets in the same image are
possible. Wokes et al. also suggested an "L-shaped" fiducial formed by two perpendicular lines of
eight large reflective pixels on which a binary pattern has been encoded and which is used to uniquely
identify the target [8]. Approximate rotations of the target object could be recovered from the ratio
of the two line lengths in the image.
For planetary bodies, or even orbital debris, the physical placement of fiducials allows cooperative
pose estimation. This was the approach used by the Hayabusa and Hayabusa-2 missions.
Ogawa et al. [189] build on this concept with a method of sequentially releasing multiple fiducials
on the surface of an asteroid to incrementally improve horizontal position estimation during
landing.
Hannah suggested the use of "pizza targets" for computer vision systems that use frequency-
domain correlation since they have distinctive spatial frequencies [116]; this simplifies the feature
detection step during pose estimation. These fiducials are circles which have been cut into alternating
"slices" of black and white. [190] presented a complex fiducial consisting of a circle, a line, and
several dots, one of which is offset from the plane. The circle and line allow target identification,
the line allows for a rough estimate of roll and correct labeling of the dots, and the dots allow for high
accuracy determination of pose using P3P. This fiducial was intended specifically for autonomous
docking scenarios in close proximity.
Fiducials that have been designed for both high accuracy localization as well as high confidence
identification have seen widespread use in the robotics and augmented reality communities. Two
such fiducial systems, ARToolkit and ARTag, are commonly used [182]. ARToolkit fiducials consist of a
clear thick black square border, a white space within the border, and a single black symbol on the
white space to provide identification. The black border allows for full pose estimation using only
one fiducial. ARTag attempts to improve this process by using digital encoding (much like barcodes
or QR codes) instead of a symbol for higher confidence and computationally easier identification
[191]. ARTag has enough digital space to uniquely identify 2002 different fiducials.
Point Set Alignment
Many problems in orbital computer vision are instances of point set alignment. Typically, a set
of corresponding points in two frames is provided as input and the output is the six degree of
freedom rigid transformation between these two frames: translation and orientation. Point sets
could be either 2D image coordinates or 3D scene coordinates. Some common examples include
2D-2D, 2D-3D, and 3D-3D alignments (see [192] for background and solution approaches). [193]
provides a general overview of the 3D-3D alignment problem and [194] provides a comparison of
four major solution algorithms for the problem. The book by Olivier Faugeras provides a thorough
treatment of three-dimensional computer vision [195]. Three-dimensional computer vision overlaps
with the field of photogrammetry, a field focused on extracting measurements from photographs,
as discussed in [196]. For cooperative pose estimation, fiducial markings are used to ease point
extraction and matching.
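As a concrete sketch of the 3D-3D case (a generic closed-form construction, not drawn from any
specific reference above), the least-squares rigid transformation between two sets of corresponding
3D points can be recovered with a singular value decomposition; the point arrays below are synthetic
placeholders.

import numpy as np

def align_3d_3d(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto corresponding points Q.
    P, Q: (N, 3) arrays of matched 3D points."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Example with synthetic data: rotate and translate a random cloud, then recover the motion.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = align_3d_3d(P, Q)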
Although the pinhole camera model is the typical model used for pose estimation in the literature,
some authors have proposed other models. For example, Qi et al. used a thick lens camera model to
estimate pose using a zoom lens on a spacecraft observing a cooperative target [197]. Zoom lenses
have the potential to conduct pose estimation over a larger range than a fixed focal length
lens and thus may have use in future space rendezvous systems; conversely, zoom lenses rely on
mechanical systems, which increases the potential for failures as well as the possibility of calibration
changes.
Solution Methods
Once features with known locations have been detected, as in cooperative pose estimation, the
pose is found using either closed-form or numerical optimization routines. As Tweddle et al. note,
the general problem, known as either the exterior orientation or the absolute orientation problem, can
be considered solved [196], as there are many known algorithmic approaches. The minimum number
of points necessary for a solution is three (sometimes referred to as the P3P problem); however,
four points are often used to resolve the ambiguity among the four equivalent solutions of the P3P
problem [188]. The problem can be generalized to Perspective from n Points (PnP), where there are
more feature points than unknowns and the problem is overdetermined [188].
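As a minimal sketch of the PnP step (using OpenCV's generic solver rather than any specific algorithm
from the references above), four coplanar fiducial points with known target-frame coordinates and
their detected image coordinates yield the relative pose; the coordinates and camera intrinsics below
are placeholders.

import numpy as np
import cv2

# Known fiducial locations in the target body frame (meters), a coplanar square (placeholder values).
object_points = np.array([[-0.1, -0.1, 0.0],
                          [ 0.1, -0.1, 0.0],
                          [ 0.1,  0.1, 0.0],
                          [-0.1,  0.1, 0.0]], dtype=np.float64)

# Corresponding detected fiducial centroids in the image (pixels, placeholder values).
image_points = np.array([[310.0, 250.0],
                         [410.0, 255.0],
                         [405.0, 355.0],
                         [305.0, 350.0]], dtype=np.float64)

# Placeholder pinhole intrinsics; lens distortion neglected here.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Solve for the target pose expressed in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation taking target-frame points into the camera frame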
An early solution for space-based cooperative pose estimation was presented by Sklair, Gatrell,
and Hoff in 1990, which used a set of five co-planar concentric contrasting circle fiducials to recover
pose [181]. Two camera models were used for pose estimation to compare performance: a simple
pinhole camera model and a calibrated camera model that incorporates measured focal length,
image center, lens distortion, and horizontal scaling factor.
Tweddle and Saenz-Otero present a cooperative pose estimation scheme based on a single camera
and a fiducial marking consisting of four co-planar concentric circle sets [196]. An iterative
approach is used to solve the exterior orientation problem which minimizes the mean-squared
reprojection error of the fiducial points. In addition, the authors make use of a Multiplicative EKF
(MEKF) to overcome the deterministic constraint introduced by using a quaternion as an attitude
parameterization (a quaternion has four parameters, but only three degrees of freedom). Woffinden
et al. [174] approach cooperative pose estimation with a standard EKF which operates on line-
of-sight measurements to a small set of target fiducials with known locations. [198] presents a
cooperative pose determination pipeline which uses Shi-Tomasi (for far range) and SURF (for close
range) feature detectors to match circular fiducials based on affine registration, then uses a 4-point
version of the EPnP algorithm to provide an initial pose. (SURF, Speeded Up Robust Features, is
a feature detector and descriptor designed to be both fast and robust.) Following initialization, a
Kanade-Lucas-Tomasi tracker is used to track fiducials frame-to-frame and a factor graph approach
is used to smooth pose estimates between frames.
Arantes et al. [199] present a mathematical formulation of the pose estimation problem as
a non-linear least squares optimization and present performance comparisons between
Newton, Gauss-Newton, and Levenberg-Marquardt solution methods. It is assumed that at least
three correspondences between image features and known real-world points have been established by
some method, though 36 points are used in their simulations. The authors find that the Gauss-
Newton method provides faster convergence, though the accuracy reached is on par with the
Levenberg-Marquardt method.
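To make the optimization concrete, a minimal sketch of pose refinement by non-linear least squares
on the reprojection error is shown below (a generic formulation, not the specific one in [199]);
scipy's Levenberg-Marquardt solver is used, the rotation is parameterized as a rotation vector, and
all numerical values are placeholders.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

fx = fy = 800.0
cx, cy = 320.0, 240.0   # placeholder pinhole intrinsics

def project(points_3d, rvec, t):
    """Project target-frame points into the image given a pose (rotation vector, translation)."""
    p_cam = Rotation.from_rotvec(rvec).apply(points_3d) + t
    return np.column_stack((fx * p_cam[:, 0] / p_cam[:, 2] + cx,
                            fy * p_cam[:, 1] / p_cam[:, 2] + cy))

def residuals(pose, points_3d, observed_2d):
    # Reprojection error stacked into a flat residual vector.
    return (project(points_3d, pose[:3], pose[3:]) - observed_2d).ravel()

# Synthetic example: perturb a true pose and recover it from the observations.
points_3d = np.random.default_rng(1).uniform(-0.5, 0.5, size=(36, 3))
true_pose = np.array([0.1, -0.2, 0.05, 0.0, 0.0, 2.0])
observed = project(points_3d, true_pose[:3], true_pose[3:])
initial_guess = true_pose + 0.05
result = least_squares(residuals, initial_guess, args=(points_3d, observed), method="lm")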
Zhang et al. [178] presented a pose estimation pipeline based on a fiducial composed of two
connected black rhombuses on a white background. After thresholding, a chain code representation
of edges is created which is both adaptable to rotation and computationally efficient. From these
edges, feature points are extracted which match the fiducial description. The initial pose estimate
is computed analytically and then refined: real-world points with known positions are projected into
the image plane using a weak-perspective camera model, and the error between this projection and
the corresponding image feature points is used to iteratively improve the pose estimate until
convergence.
Zhang et al. [150] present a cooperative pose estimation pipeline which uses an efficient
approximate Laplacian of Gaussian blob detector to detect fiducials, estimates pose using a homography-
seeking robust point set registration algorithm, and incorporates incremental smoothing to perform
temporal filtering. Subsequent improvements in computational complexity allow real-time performance
on the limited onboard spacecraft computing resources [180].
A sparse grid quadrature filter has been proposed as an improvement on the EKF and the
Unscented Kalman Filter (UKF) for the non-linear and high-dimensional problem involving relative
orbital motion, camera projection models, and rotational kinematics [200]. The authors found that
the accuracy of this filter can be adjusted without causing a significant increase in computational
requirements.
Uncertainty
Establishing the uncertainty in the pose estimation process is challenging because of the non-linear
system model, the variety of potential error sources, and the uncertain nature of feature detection
and matching. However, an understanding of the sources of uncertainty allows for targeted
improvements in parts of the pose estimation architecture, including sensors, algorithms, and fiducials. [188]
approached this problem by performing a Monte Carlo simulation of the entire cooperative pose
estimation pipeline to determine the pose uncertainty associated with uncertainty in three items:
camera intrinsic parameters, positions of 3D feature points in the satellite frame, and the positions of
associated 2D feature points in images. [201] used a Cramer-Rao approach to determine the best
achievable precision of a cooperative pose estimation system based on a specific triangular fiducial
marking.
2.6.2 Non-cooperative Pose Estimation
Non-cooperative pose estimation is a term used within the space robotics community for cases where pose
is estimated for a target spacecraft without the use of specifically designed fiducial markings on
the target. Non-cooperative pose estimation may be used, for example, to inspect an asteroid
or a failed satellite. Non-cooperative pose estimation can generally be divided into model-based
approaches, which use a priori information about the geometric shape of the target, and non-model-
based approaches, which may include shape estimation as part of the estimation process. See [156]
for the difference between model-based and non-model-based non-cooperative pose estimation.
Preliminary Image Processing
An important first step for non-cooperative pose estimation is image segmentation to separate the
target from the background. When viewing a space target against a space background, segmentation
can be done easily using some form of binary thresholding. However, the target may overlap
another spacecraft or a planetary body. For example, in Low Earth Orbit, Earth will often appear
in the background of target images. [202] presents a method to segment images containing Earth
background using frame-to-frame feature tracking of Shi-Tomasi corners within regions based on
color and texture. Regions with feature motion are labeled as the target and those without feature
motion are labeled as background. The presented method takes from 1 to 10 seconds to process ten
images on a 2.3 GHz computer, which may be challenging to implement in real time with onboard
spacecraft processors. Fausz et al. used a morphological filter approach to segment images including
Earth background, which helped to preserve fine details [145]. [203] provides a detailed mathematical
overview of state estimation for target objects using both landmark features, where image features
are matched with known locations from a priori knowledge, and correspondence features, where
features with unknown locations are matched between subsequent image frames.
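For the simple space-background case described above, a minimal thresholding sketch (Otsu's method
via OpenCV; the file name is a placeholder and no Earth-background handling is attempted) might
look like the following.

import cv2

# Grayscale image of a sunlit target against a dark space background (placeholder file name).
img = cv2.imread("target_frame.png", cv2.IMREAD_GRAYSCALE)

# Otsu's method picks a global threshold automatically; bright pixels become the target mask.
_, target_mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Optional clean-up: morphological opening removes isolated hot pixels and star noise.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
target_mask = cv2.morphologyEx(target_mask, cv2.MORPH_OPEN, kernel)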
Model-based
A set of corresponding feature points in a 2D image and the 3D scene can be used to estimate pose.
When using n points from a single image, this is known as the PnP problem. Unlike in cooperative
pose estimation, where a set of fiducials (i.e. features) with known locations in the 3D scene is used
to estimate pose, the non-cooperative approach to PnP pose estimation requires, as a first step,
establishing the correspondence between feature points in the 2D image and the 3D scene. A common
approach to feature matching is to use feature descriptors (e.g. SURF or SIFT, the Scale-Invariant
Feature Transform, a feature detector and descriptor designed to be robust to scale, rotation, and
illumination changes that has seen widespread application in computer vision) and then perform a
distance-based search in the feature descriptor space for nearest neighbors. Once a correspondence
is established, the PnP problem is solved using at least 6 point pairs to provide an unambiguous pose
(symmetric spacecraft may require more than 6 points to unambiguously resolve pose). Sharma and
D'Amico compared the performance of four PnP solvers (Posit, EPnP, Posit+,
and a Newton-Raphson approach) specifically for space applications [204]. They found that each
solver presented different advantages and disadvantages for non-cooperative pose estimation and
suggested they may be synergistic for different scenarios or regimes. They noted that PnP solutions
took approximately 10 milliseconds on a 30 MHz spaceborne computer.
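A minimal sketch of this descriptor-matching-then-PnP flow (generic OpenCV calls, not the specific
solvers compared in [204]; the model keypoint descriptors, their 3D locations, and the intrinsics K
are assumed to be available from an offline model) could look like the following.

import numpy as np
import cv2

def estimate_pose(image, model_descriptors, model_points_3d, K):
    """Match image features to a stored model and solve PnP with RANSAC outlier rejection.
    model_descriptors: descriptors precomputed on the target model (one per 3D point).
    model_points_3d:   (N, 3) target-frame coordinates of those model features."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image, None)

    # Nearest-neighbor descriptor matching with Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(descriptors, model_descriptors, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    if len(good) < 6:
        return None

    img_pts = np.float64([keypoints[m.queryIdx].pt for m in good])
    obj_pts = np.float64([model_points_3d[m.trainIdx] for m in good])

    # RANSAC-based PnP is robust to any remaining mismatches.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    return (rvec, tvec) if ok else None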
Point Cloud Registration
The Iterative Closest Point (ICP) algorithm has seen widespread use in satellite computer vision
research. ICP is used to align two point clouds in a way that minimizes the point-to-point distance
between the two clouds. Note that the points do not require correspondence between model and
image points; this is the key difference when compared to PnP approaches. Terui et al. used ICP to
determine pose from stereo-vision-produced depth images of a physical scale model satellite target
under simulated solar and albedo illumination conditions [205]. Two statistics of the two point
clouds to be aligned, variance and kurtosis, are used for pre-alignment to avoid ICP converging
to a local minimum. Ventura et al. used ICP on depth images of a target spacecraft collected
using a modulated light sensor (Microsoft Kinect) [206]. The ICP algorithm, which is sensitive to
initialization, is provided an initial pose estimate from a template matching algorithm which
relies on a database of images of possible attitude configurations. Template matching is also used
to initialize ICP in [207]. Initialization is a challenge for all iterative pose estimation schemes [208].
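To illustrate the basic ICP iteration (a bare-bones sketch, not any of the customized variants cited
above), each pass matches every measured point to its nearest model point, solves the closed-form
rigid alignment, and repeats; a k-d tree handles the nearest-neighbor search.

import numpy as np
from scipy.spatial import cKDTree

def icp(measured, model, iterations=30):
    """Minimal point-to-point ICP aligning a measured cloud to a model cloud.
    measured, model: (N, 3) and (M, 3) arrays. Returns rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(model)
    current = measured.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)            # nearest model point for each measured point
        matched = model[idx]
        # Closed-form rigid alignment (SVD) between the current cloud and its matches.
        cm, mm = current.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((current - cm).T @ (matched - mm))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = mm - dR @ cm
        current = current @ dR.T + dt
        R, t = dR @ R, dR @ t + dt              # accumulate the incremental transform
    return R, t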
Ruel et al. [209] propose an ICP approach using LIDAR depth images for pose estimation of
spacecraft. The authors note that in many orbital scenarios (such as docking with the ISS), an
initial target pose estimate is typically available for ICP as a priori knowledge. The authors also
noted that ICP, as a model-matching approach, has challenges when presented with object views
which do not have enough geometric information to reduce pose ambiguity. Shahid and Okouneva
presented a technique that uses a custom scalar stability index to determine the region of a point cloud
that is least sensitive to noise or pose ambiguities [210]. This method is intended to improve both
the accuracy and the convergence of ICP registration. Intuitively, this approach attempts to find
sub-regions of the point cloud which contain the most distinctive shapes and features.
Fausz et al. used ICP to first create a complete point cloud by aligning subsequent LIDAR
images and then used ICP again to register the dense point cloud to a cloud generated from a known
3D model of the target [145]. To improve ICP convergence during the final pose estimation step, a
minimum volume ellipsoid was fit to both the model and the measured point cloud to establish a
reasonable initial pose estimate for refinement in ICP. In addition to steps to improve initialization,
Aghili et al. found that integration of ICP with a recursive filter such as an EKF improves
convergence for tumbling objects, where the EKF estimator also estimates inertial parameters to
improve motion prediction [211]. The Kalman Filter reduces measurement noise, improves pose
estimation precision, allows tracking over a wider range of object velocities, and allows the pose
estimation system to continue operating even during temporary sensor failure or occlusion.
Opromolla et al. present an algorithm which uses 3D LIDAR images to determine an initial
coarse pose estimate and then uses that estimate to initialize a custom implementation of ICP
[207]. The initial pose estimate is produced by leveraging principal component analysis (PCA)
to find the main axis of the collected point cloud and then using a template matching scheme to
determine the rotation of the target about that axis. The template matching relies upon onboard
generation of simulated LIDAR point clouds to compare with the collected point cloud. Once the
initial pose estimate is found, a custom ICP algorithm, which includes a prediction step based on a
linear kinematic filter, refines the pose.
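As a small sketch of the PCA step used for coarse initialization (a generic construction, not the
specific implementation in [207]), the dominant geometric axis of a point cloud is the eigenvector of
its covariance matrix with the largest eigenvalue.

import numpy as np

def main_axis(points):
    """Return the centroid and dominant principal axis (unit vector) of an (N, 3) point cloud."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh: suitable for the symmetric covariance matrix
    return centroid, eigvecs[:, np.argmax(eigvals)]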
Rhodes et al. provide a thorough introduction to the use of feature histogram descriptors,
which can simultaneously recognize and estimate the pose of an object, as a useful method
to initialize ICP for spacecraft pose determination using LIDAR images [212].
Liu et al. initialize ICP using a two-step process [213]. First, the principal directions of the
point clouds (both model and sensor) are found and compared, which allows an estimate of the
possible translation region between the clouds. Next, a branch and bound approach is combined
with ICP to perform a global search of the pose space (in the angle-axis representation), which can
be represented by a 3D ball of radius pi.
An initial pose can be found with feature correspondence between two point clouds. Feature
descriptors for depth images have been examined by [123]. The point pair descriptor vector includes
numerical values based on geometrical relationships between a pair of points. The collection of point
pair feature descriptors is then compared to point pairs from a known model of the target. Once
correspondence is established, local rigid transformations are found for sets of points and then
clustered to determine the global pose estimate. This initial pose estimate is refined using ICP.
Tzschichholz et al. presented a feature descriptor for 3D images produced using a modulated light
depth sensor (a photonic mixer device) which was optimized for low resolution, noisy distance
measurements, and distance-dependent signal attenuation [124]. These descriptors were used to
match 3D image points against a known model of the spacecraft target and then determine target
pose based on corresponding 3D feature sets.
Some pose estimation algorithms are used to refine initial pose estimates. [156] presents a
method based on Iteratively Reweighted Least Squares which projects a 3D wireframe model onto
the captured images given some a priori pose estimate. A one-dimensional search along edge normals
is used to determine motion, and this iteratively updates the pose estimate. The authors find that
this refinement can compensate for up to 20 degrees of rotational error. A similar reprojection
method is used by [214], where the lines of a model are projected onto the image and the best
alignment between the two is searched for. [215] presents a two-stage approach consisting of a
silhouette-based template matching method spread over multiple frames in a Bayesian framework
to perform initial pose determination, followed by a 3D reprojection to refine the pose estimate.
Although computationally expensive, the method was shown to have high robustness to background
clutter (Earth) and illumination variations during tracking sequences.
The feature-based pose estimation methods discussed so far have performed pose estimation as
a sequential process: first, features are matched to produce point correspondences between image
points and real-world points, and then these correspondences are used in some pose estimation
algorithm. Shi et al. instead proposed the use of SoftPOSIT, an algorithm which simultaneously
solves for point correspondences and pose in an iterative process [216]. In their application,
simulated infrared images of space targets are matched with 3D wire-frame models which have been
discretized into real-world point clouds.
Yu et al. [217] established relative attitude by implementing TRIAD/QUEST (TRIAD is used for
exactly three points, while QUEST is used if more than three points are available) using the 3D
locations of feature points on a target satellite. An EKF and a UKF are compared for performing relative
position estimation over time, and the authors find that the UKF provides better performance.
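For reference, a minimal TRIAD sketch is shown below (the classical algorithm itself, not the specific
implementation in [217]); it builds a rotation matrix from two non-parallel vector pairs expressed in
the body and reference frames, with all vectors here synthetic.

import numpy as np

def triad(v1_b, v2_b, v1_r, v2_r):
    """Classical TRIAD: rotation matrix from the reference frame to the body frame,
    given two non-parallel unit vectors observed in the body frame (v*_b)
    and known in the reference frame (v*_r)."""
    def frame(a, b):
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b)
        t2 /= np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))
    return frame(v1_b, v2_b) @ frame(v1_r, v2_r).T

# Synthetic check: two reference-frame directions rotated by a known attitude.
R_true = np.array([[0.0, 1.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
v1_r, v2_r = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
R_est = triad(R_true @ v1_r, R_true @ v2_r, v1_r, v2_r)   # equals R_true to numerical precision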
Biondi et al. posed the estimation of attitude and angular rate as a signal processing problem
and used compressed sensing approaches to recover the time series of individual quaternion
components from 3D point sets based on noisy or corrupted measurement data [218]. A UKF was used
to estimate angular rate based on the quaternions.
Edge Matching Techniques
One method proposed to non-cooperatively determine spacecraft pose is to match lines detected in
images with the lines of a wire-frame model of the target known a priori. However, matching detected
lines, which are often spurious, broken, or noisy, in an image taken without knowledge of position
or orientation is a computationally difficult problem. D'Amico et al. approach this method by
establishing "perceptual groupings" which group line segments that have end points in close
proximity, are parallel, or are collinear [219]. Significance scores are used to rank these groupings
and prioritize the search. The authors note that the method could be extended to arbitrary curves
instead of only straight lines. The proposed method is demonstrated on real imagery from the
PRISMA mission. Sharma et al. use an edge-based technique that is designed to detect large
features (i.e. spacecraft structures) as well as smaller features (i.e. spacecraft antennas) and
compare them to an a priori target wireframe model [220]. These features at multiple scales are
useful for resolving ambiguities in symmetric satellites.
Zou et al. suggest using both edge and point features to improve the performance of pose
estimation, especially under varying illumination conditions where edge features may be matched
more robustly [221].
Petit et al. modified the wire-frame edge re-projection approach to include full onboard 3D
rendering of the target model based on the current pose estimate [222]. Instead of projecting the
wireframe model, a full 3D model is rendered using OpenGL. Rendering has the advantage of
automatically handling self-occlusions of the model. Then, an edge extraction scheme using depth
discontinuities (since the Z-buffer information is available) and texture discontinuities produces a
set of edges to match with the current camera image. The intention is to produce a set of model
edges which better match what a camera would observe. A non-linear optimization scheme which
minimizes the distance between lines is used to iteratively estimate pose. Note that the method
presented requires a GPU for rendering, a capability which is currently not generally available for
space platforms.
Pose estimation can also be used to provide situational awareness to human operators of robotic
systems. [223] presents a method to track known features during ISS operations to provide a
wire-frame overlay on video images for operating astronauts. The feature tracking is done by
reprojecting known shapes (in this case, a circle) onto the image and conducting an edge search to
align the shape with the image.
Shape Matching Techniques
One area of model-based non-cooperative pose estimation exploits knowledge of components
commonly found on target spacecraft, which would apply to many types and designs of satellites. The
launch adapter on satellites is one such object. Researchers have presented methods to recognize
and localize adapters using mathematical morphology operators followed by Canny edge detection
and finally a Hough transform to detect lines [224]. In [225], pose is estimated by identifying and
fitting ellipses in images to estimate the orientation of circular components such as engine nozzles
or launch adapters. Kumar et al. similarly use circular shapes to determine the pose of spacecraft
analytically [226]. Velasquez et al. presented a pose estimation scheme using a single camera to
track the interface ring of a target spacecraft [139]. A Canny edge detector creates a binary edge
map from which elliptical shapes are extracted, detected, and used to analytically compute the
pose based on ellipse parameters. An EKF is used to smooth measurements in the system. Du et
al. conduct pose estimation by first using a Hough transform to detect line edges, then grouping
these into pairs of parallel lines to form parallelograms, and finally using the angle between these
groups of lines to find the orientation of an assumed-rectangular surface plane [112].
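A minimal sketch of the ellipse-extraction step common to several of these methods (generic OpenCV
calls, not any single cited pipeline; the file name and Canny thresholds are placeholders) is shown
below. Converting a fitted ellipse into the 3D orientation of a circular component requires additional
geometry not shown here.

import cv2

img = cv2.imread("target_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
edges = cv2.Canny(img, 50, 150)                               # binary edge map

# Find connected edge contours and fit an ellipse to each sufficiently long one.
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
ellipses = [cv2.fitEllipse(c) for c in contours if len(c) >= 20]
# Each ellipse is ((cx, cy), (axis1, axis2), angle_deg); candidates for circular
# components such as an interface ring would be selected from these.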
Some methods to recover pose are based on partial knowledge of the spacecraft shape. In [227], the
cylindrical shape of some spacecraft is exploited to determine pose by matching extracted ellipses
from a single image with a known set of circular cross-sections from a spacecraft model. The
iterative projection of a known asymmetric object from the spacecraft model, such as a solar panel,
onto the image plane is used to recover the full pose. In [155], the authors use a model-based, line-
based matching algorithm that attempts to identify solar panel shapes in images. Line segments
are detected and merged to create a transformed rectangular shape whose corner points are then
extracted and used to determine pose via the Levenberg-Marquardt optimization method. In [112],
a common structural element of communications satellite antennas is used to determine pose using
four lines of a rectangular frame which are extracted using a Hough transform. Cropp et al. created
a tailored set of heuristics based on human observations and insight to determine correspondences
between a wireframe model and lines extracted from images using a Hough transform; though the
heuristics were target specific, the reduction in possible line matches to test (via RANSAC) allowed
for real-time processing [228].
Machine Learning Approaches
Some non-cooperative pose estimation algorithms rely on a large collection of real or simulated
images of a known target to train a model from which the pose can be inferred. One example by
Zhang et al. is the use of Homeomorphic Manifold Analysis to model the mapping between a satellite
pose and its appearance in captured images based on a large sample of training images for the
known target [229]. Note that the authors compared several input image representations: binary,
distance transform, grayscale, and histogram of oriented gradients. A pose estimation method based
on convolutional neural networks was shown to successfully recover pose from simulated rendered
images of a spacecraft [230]. The network was trained on approximately 3000 labeled synthetic
images of a spacecraft in various orientations. However, the training set of images did not include
varying lighting geometry, specular reflections, or image noise. Classification error rates as low
as 20 percent were reached. Shi et al. used PCA for pose estimation by matching a test image to a
collection of 12660 previously captured images of the same spacecraft covering the entire pose space
[154]. The authors found that this approach is sensitive to the number of images in the sample
set and the number of principal components used. Zhang et al. [231] used a kernel regression
supervised learning approach to simultaneously perform object recognition and 2D orientation
estimation (pitch and yaw). The authors tested the performance of several object representations
such as binary images, gray images, distance transforms, moment invariants, Fourier descriptors,
and histograms of oriented gradients (HOG).
Simulating images is tractable on the ground, but the number of images which must be created
can be extensive. For arbitrary trajectories, the database must cover the full 6 degree of freedom
configuration space (out to some set range from the target) as well as two or more degrees of freedom
due to solar, Earth, or other illumination sources.
Oumer et al. used a vocabulary tree with k-means clustering to perform training on collected
images of a target spacecraft under varying pose and illumination conditions [143]. SIFT feature
descriptors are used to match between a database of training images and the current image. Once
the best corresponding stored image is selected, an iterative PnP solver with RANSAC is used to
determine the pose of the current image relative to the stored image.
Fomin et al. compared the performance of a convolutional neural network to a traditional
cascade classifier (Viola-Jones) for identifying natural features in captured image sequences of ISS
approaches using human-labeled training data [232]. They found that the neural network approach
gave better performance.
The rendering of target images for pose estimation has also been proposed for planetary and
asteroid encounters. Wright et al. propose a method for the Asteroid Redirect Robotic Mission
which uses high resolution imagery acquired during a prior survey mission phase to accurately
model and then render surface features [233]. The single-bounce ray tracing rendering, which is
done onboard the spacecraft, includes accurate modeling of illumination, which is important because
the asteroid discussed rotates at 3.725 revolutions per hour. The onboard rendered landmarks are
correlated with landmarks extracted from imagery during descent and landing. The authors find
that this method of pose estimation can operate in real time at 3 Hz on a 75 MHz SpaceCube
processor using 1 megapixel images.
Gang et al. [136] use a combination of Moment Invariant, Fourier Descriptor, Region Covariance,
and Histogram of Gradients feature descriptors to perform pose estimation of a target satellite
based on a large dataset of rendered satellite images in various orientations. A kernel locality
preserving projection is used to reduce the dimensionality of the feature vector and a k-nearest neighbor
algorithm is used to classify the input image based on the rendered images.
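As an illustrative sketch of this general train-then-classify pattern (HOG features with a k-nearest-
neighbor classifier using scikit-image and scikit-learn; this is not the descriptor combination or the
kernel projection used in [136], and the rendered-image arrays and labels below are random
placeholders standing in for an offline rendering pipeline), training and prediction might look like:

import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier

def hog_features(images):
    """Compute one HOG descriptor per grayscale image (images: sequence of 2D arrays)."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(16, 16),
                         cells_per_block=(2, 2)) for im in images])

# train_images: rendered views of the target; train_labels: discretized orientation classes.
train_images = [np.random.rand(128, 128) for _ in range(50)]
train_labels = np.random.randint(0, 10, size=50)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(hog_features(train_images), train_labels)

query_image = np.random.rand(128, 128)                 # a captured frame (placeholder)
predicted_orientation_class = clf.predict(hog_features([query_image]))[0]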
Spacecraft Target Modeling Methods
Determining how to model a target, as well as how to store that model onboard a vision-enabled
spacecraft, is an area of active research. As D'Amico et al. state, "The definition of a proper
spacecraft model is a fundamental step of the pose estimation strategy" [208]. [216] uses a method
based on primitive shapes (cylinders, cones, rectangular solids) which can be discretized at run
time by a user-selectable number of points. This results in a point cloud as well as an adjacency
matrix indicating which points are connected. [219] notes that the model should be well organized
to improve efficiency and therefore uses an index to refer to model components and their attributes
such as length, orientation, and even visibility in a given orientation. Storing this pre-computed
information onboard reduces the computational expense of the pose estimation process.
Dietrick and McMahon [234] explored the errors associated with three different shape fidelities
and their impact on pose estimation of a target asteroid. They found that, when using a lower
fidelity model than the truth model, pointing errors to the target remain bounded when using
LIDAR depth images.
Tzschichholz et al. [235] suggest fusing depth and visible images by solving different parts of
the pose estimation problem using the most appropriate sensor, as opposed to the standard method
of co-registration between the images from different sensors. Specifically, depth images are used to
determine the pitch angle, yaw angle, and Z-position of the target object while the 2D visual images
determine the roll angle, X-position, and Y-position of the object. A least squares minimization
approach is then used to iteratively determine the target pose which produces the best reprojection.
This approach can operate at 60 frames per second on a desktop computer.
Opromolla et al. [207] noted that models may need to account for articulated components such
as rotating solar panels which may be operating during proximity operations.
For point-based methods (such as ICP), the method used to select points from a geometric model is
important; as [123] notes, the computing time of point cloud based methods increases significantly
with an increasing number of points. ICP, for example, has quadratic computational complexity in
the number of points [236]. [123] uses Poisson Disc Sampling to choose points.
Petit et al. [214] simplified existing 3D models to keep only the most significant geometrical
features. In addition, the authors stored multiple 3D models of the same target onboard for different
ranges and used the appropriate model based on some a priori range estimate. This was in response
to noting that the appearance of the illuminated central spacecraft module in images at far distances
was not useful or significant when compared to the solar panels, which had sharper edges. [207]
emphasized that only the macroscopic features of the space targets should be modeled; 2D and 3D
surfaces were chosen as well as dimensions and unit normal vector directions. These surfaces were
discretized when required for point-based methods like ICP. [207] also included surface reflectivity
data in the models so that simulated LIDAR could be generated onboard in real time.
Miller et al. [53] proposed segmenting LIDAR depth images of a space target (specifically, the
ISS) into primitive geometric objects such as planes, cones, and cylinders. Logical rules are used
to identify specific objects.
Most research on orbital pose estimation has focused on single rigid bodies, potentially with one-
axis articulations. However, the flexible nature of some spacecraft structures may require modeling
the flexible dynamics to accurately determine pose. [237] examines the problem of determining the
shape, motion, and parameters of large flexible structures in space. Specifically, range images (i.e.
imaging LIDAR) are collected looking downward onto a large flat flexible structure (i.e. a solar panel
array). These range measurements are combined with a priori knowledge of vibrational modes in
a Kalman Filter to accurately determine the shape and motion of the structure over time without
requiring tracked features.
Planetary Target Modeling Methods
For planetary targets, known features on the surface can be used in a model-based non-cooperative
pose estimation scheme. Historically, this process has been part of the orbit determination function
in deep space missions operating in close proximity to some planetary body. For example, during the
NEAR mission to the asteroid Eros, craters were hand-identified in captured images and used in
conjunction with radiometric and laser altimeter data to refine models of Eros and determine the orbit
of the spacecraft (as well as various associated parameters such as solar pressure and propulsive
maneuvers) [238]. The authors present a computer vision method to autonomously identify and match
craters to known maps using size, shape, and Sun-direction descriptions of craters. Interestingly,
autonomous onboard orbit determination with 100 m level accuracy can replace the meter-level accuracy
produced by ground processes because the ground-produced solutions must be propagated for
several days whereas the onboard-produced estimate is updated continuously. [239] present a method
for pose estimation based on crater matching using the Hough transform to identify craters and
then using successive crater triples to determine pose in a framework which rejects pose outliers
using a random sampling approach.
Hanak and Crain [240] use an approach similar to star identification in modern star trackers for
the crater identification problem; three craters are arbitrarily selected from a list of detected craters
and then non-dimensional relationships between them are computed and compared to a database
of such features. Examples include the crater triangle angles and crater diameter to triangle length
ratios. This approach avoids the drawbacks of image correlation techniques, including storage and
processing requirements. The authors find that crater identification using this technique takes
approximately 50 milliseconds on a modern desktop computer.
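To make the idea of non-dimensional crater-triple descriptors concrete, a small sketch is shown below
(a generic construction consistent with the description above, not the exact feature set of [240]); it
computes the interior angles of the crater-center triangle and diameter-to-side-length ratios, all of
which are invariant to image scale and rotation, and which would be matched against a pre-computed
database. The numerical values are placeholders.

import numpy as np

def crater_triple_descriptor(centers, diameters):
    """Non-dimensional descriptor for a triple of detected craters.
    centers: (3, 2) crater center coordinates; diameters: length-3 array of crater diameters."""
    a = np.linalg.norm(centers[1] - centers[2])   # side opposite crater 0
    b = np.linalg.norm(centers[0] - centers[2])   # side opposite crater 1
    c = np.linalg.norm(centers[0] - centers[1])   # side opposite crater 2
    # Interior angles from the law of cosines (scale- and rotation-invariant).
    A = np.arccos((b**2 + c**2 - a**2) / (2 * b * c))
    B = np.arccos((a**2 + c**2 - b**2) / (2 * a * c))
    C = np.pi - A - B
    # Diameter-to-side-length ratios (also non-dimensional).
    ratios = np.asarray(diameters) / np.array([a, b, c])
    return np.concatenate(([A, B, C], ratios))

# Example: descriptor for three detected craters (placeholder values, pixel units).
desc = crater_triple_descriptor(np.array([[10.0, 20.0], [60.0, 25.0], [35.0, 70.0]]),
                                np.array([8.0, 5.0, 12.0]))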
As with other pose estimation methods, initialization is an essential step, and in this case it relies
on ground-produced initial estimates. Peterson et al. used a terrain matching algorithm approach
which does not rely on circular features and instead uses a three-step process leveraging surface maps
available a priori: image rectification using a prior estimate of spacecraft state, coarse correlation in
the frequency domain of a downsampled image, and finally a fine adjustment using the 50 strongest
Harris corners [241]. Their algorithm was demonstrated using high-fidelity rendered images of
the Moon. Liounis et al. describe an EKF-based approach to optical navigation in the Earth-Moon
system using feature correspondences on the lunar surface to determine line-of-sight vectors which
are used to estimate spacecraft state [242].
Rowell et al. [243] approached the planetary target problem by using template matching instead of
feature descriptor matching. Harris corners were used to identify locations of interest, from which
a 7x7 pixel region was extracted. This region was used to identify the same location in future
frames using an intensity-invariant correlation statistic. Periodically, the templates for a corner
location are updated by extracting a new region from the new images. The authors note that
SURF and other common feature descriptors are more complex and show at least some invariance
to orientation, scale, and affine transformations. However, this additional complexity may not
always be necessary; in their scenario of planetary landing, changes of orientation and scale from
frame to frame are gradual, so complex descriptors are not needed.
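A minimal sketch of this track-by-template idea (normalized cross-correlation via OpenCV, which is
one intensity-robust choice rather than the specific statistic used in [243]; the frames and corner
location are placeholders):

import cv2
import numpy as np

prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder frames
next_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# A corner detected in the previous frame (placeholder coordinates).
x, y, half = 120, 85, 3
template = prev_frame[y - half:y + half + 1, x - half:x + half + 1]   # 7x7 patch

# Normalized cross-correlation is insensitive to uniform intensity scaling.
response = cv2.matchTemplate(next_frame, template, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(response)
new_x, new_y = max_loc[0] + half, max_loc[1] + half   # patch center in the new frame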
Mourikis et al. [244] present a method that combines the matching of mapped landmarks and frame-
to-frame tracking of unmapped feature points in an EKF framework to produce spacecraft pose
during planetary entry, descent, and landing. Landmark matching is done in a two-step process:
first, a Fast Fourier Transform is used to provide an initial correlation enabling a rough estimate
of two-dimensional position, and second, a spatial correlation is performed for 50-100
Harris corners in the image. Frame-to-frame tracking of unmapped features is done using spatial
correlation following a pre-processing step where a homography-based image warping is applied
using the current set of feature matches.
Mass Property Estimation and Filtering
Pose estimation can also be extended to recover physical parameters of the target object. In [245], the
inertia matrix (up to an unknown scale factor) and center of mass are recovered from depth images
of a target. [246] uses point clouds of rotating rocket bodies to determine the axis of symmetry, spin
axis, and rotation center without conducting point set registrations (such as ICP). The authors use
a RANSAC framework to establish the axis of symmetry of the cylinder by randomly selecting many
two-point pairs and their surface normals, which have been found using gradient operations on the
2D array of depth pixels, and finding the normal intersection point. The repeated determination
of this axis of symmetry over time allows for estimating motion properties like the rotational axis
and the center of rotation. [247] similarly uses RANSAC as the primary method to recover pose by
using random samples of three non-collinear point correspondences to analytically compute a pose
sample and reject outliers. A pre-verification step based on a disparity gradient constraint is
used to reduce the number of tested samples.
Non-Model-based
Non-model-based non-cooperative pose estimation schemes do not assume knowledge about target
structure. Instead, discovering structure is often a component of the pose estimation process. Many
of these techniques fall into the category of Structure from Motion (SfM), in which multiple views
of the same scene are used to estimate both the pose of the camera and the scene's three-
dimensional structure.
Dahlin [146] points out an interesting dichotomy in how vision systems are used on spacecraft: vision-
centric or filter-centric. In the first approach, the vision system produces a complete pose estimate
for the target which is then passed to a filter which integrates this pose estimate with onboard
sensors such as IMUs, actuation signals, etc. In the filter-centric approach, the vision system
produces measurements (such as line-of-sight vectors to tracked features) which are then integrated
within the filter with other onboard sensors to produce a pose estimate.
Shtark and Gurfil approached the non-cooperative pose estimation problem by using a stereo-
vision camera configuration to acquire images and then experimentally compared the performance
of estimating pose using an EKF, a UKF, or a Converted Measurement Kalman Filter (CMKF), a
filter which relies on the conversion of measurement equations into linear equations via algebraic
manipulation [248]. The UKF and CMKF are shown to have better performance than the EKF for
this task.
Pesce et al. [249] present a method to estimate target state and inertia properties based on
stereo-vision depth measurements. An EKF and an Iterated EKF are used and compared for this
estimation. A pseudo-measurement approach is used to estimate the inertia matrix, which is not
fully observable in torque-free motion.
In one example, [31] used SURF features, with outliers rejected by RANSAC, and an affine
camera model to perform SfM. The advantage of the affine model is that it can be approached by
many optimization techniques; the authors chose the Augmented Lagrange Multiplier solution.
Using an affine camera model as opposed to a full-perspective projection model is appropriate when
the object dimensions are small compared to the distance from the object, as is often the case in space,
and the authors found that the resulting pose estimates agreed with results from Bundle Adjustment,
the "gold standard" for SfM [31].
Jasiobedzki et al. [250] used SfM as the first stage of a three-step point cloud-based pose
estimation solution for non-cooperative targets. A stereo camera is used to create both dense
(10,000 to 100,000 points) and sparse (1000 points) point clouds. SfM is used to estimate the
target spacecraft's 3D position. Geometric probing is used to efficiently estimate the initial satellite
pose based on knowledge of the satellite model. Finally, ICP is used to refine and track the target pose.
Dahlin et al. [146] used SIFT feature tracking to produce line-of-sight vectors which are provided
to a 21-state EKF that can additionally track 93 target feature states. This procedure was tested
using simulated images of ISS approaches.
Researchers have proposed using Simultaneous Localization and Mapping (SLAM) techniques
to solve the pose estimation problem on orbit. SLAM is a real-time method to determine position
and orientation in an unknown environment which is simultaneously mapped during the process.
SLAM has seen widespread use in mobile terrestrial robots. An early example of SLAM
for small body navigation was presented by Johnson et al., which uses platform motion to produce
pose estimates of the target, 3D mapping, and dense surface reconstruction for hazard avoidance
[251]. Vassallo et al. propose using SLAM on orbit to determine
position (i.e. orbit determination) and attitude relative to a central body, which in this
case is the Moon [252]. At the heart of the SLAM technique is a recursive filter which estimates not
only the spacecraft state but also the estimated locations of feature points (or landmarks) identified
in imagery and matched between frames using vision techniques. Features are matched by first
detecting interest points, producing SURF descriptors for these features, and then using the SURF
descriptors to match features stored in a database. The method was tested against high-fidelity
rendered images of the Moon.
The sparseness of the depth information from LIDAR sensors can challenge pose estimation
algorithms which attempt to find the mapping between a captured point cloud and a known 3D
model of a target, resulting in estimation divergence. In addition, conducting SLAM using only a
single camera will result in a scale ambiguity that must be recovered separately. As a solution, [253]
combined a high resolution monocular camera with a low-resolution three-dimensional depth sensor
(notionally, LIDAR) to provide range measurements for tracked features in the optical images. The
optical images are used to perform frame-to-frame feature tracking. The depth sensor then provides
range values for these feature points. Note that since the depth sensor has a much lower resolution,
some form of interpolation must be used to find the range for a feature point in the optical images.
The authors propose a Markov random field model which exploits the fact that optical images and
depth images have similar properties such as second-order statistics. Finally, the authors present a
combined UKF-EKF-Particle Filter (PF) approach to perform SLAM.
Conway and Junkins proposed a method to determine the pose of planetary bodies using a
dense SLAM approach [254]. Specifically, the geometry and texture of small bodies are estimated.
This dense mapping information is used to create renderings, using computer graphics, from arbitrary
viewpoints, including solar illumination, which can be compared to camera images to determine
target pose using ICP. The updated estimate is used to improve the object map, including the albedo
value at each voxel, and the process is iterated. This method builds upon previous work by the
authors where depth and color images are fused together to perform dense mapping of surface
geometry and texture for spacecraft and planetary bodies [255]. Computational considerations are
a concern as both methods use GPU-based rendering, which presents a challenge for onboard
implementations.
Tweddle et al. [89] performed SLAM using stereo images and a factor graph approach following
the creation of a database of matched feature points at each time step using matched SURF feature
descriptors and RANSAC to eliminate outliers. [198] also used a factor graph SLAM methodology
to determine accurate estimates of pose, linear velocity, and angular velocity from sequential pose
measurements from individual images. [256] used a SLAM approach for planetary navigation that
leveraged known features, tracked features, and a laser range-finder to mitigate scale ambiguity.
Crater matching based on shape-space (ellipse parameter) matching was performed to match
features between catalog craters and those extracted from images.
Determining pose using tracked features is well established; however, it often involves computationally
expensive methods via either non-linear optimization or some form of recursive linearized
filtering requiring many states to track feature locations. Additionally, these solutions are often
highly sensitive to initial pose estimates. [257] suggests modeling the target as a single spheroid
(assuming the target's overall dimensions are known a priori), which yields a much faster, albeit lower
accuracy, pose estimate which is not as sensitive to the initial estimate. The authors find that
this spheroid matching method can be used at a greater distance than traditional feature tracking
methods. Finally, this method could be useful to seed higher fidelity, feature-based pose estimation
methods. This type of spheroid reconstruction technique was proposed for the InspectorSat mission
concept, where a satellite observes a target and determines pose based on a spheroid reconstruction
algorithm [258]. The authors note that the execution time of this approach is much less than other
higher accuracy pose estimation techniques. For example, the spheroid reconstruction was 100 to
1000 times faster than the edge detection step of the image processing pipeline.
Li et al. [259] used a PF to improve the matching of SIFT feature descriptors from infrared
images of a target spacecraft by providing a predicted region in which a feature point would
appear in subsequent frames. Song et al. [260] use a two-stage approach to determine pose from a
single camera. First, images at multiple vantage points are processed with a multiple view geometry
algorithm [261] to determine the camera position and orientation for each image capture. This pose
information is then filtered using an EKF to determine an improved pose estimate of the observing
spacecraft over time by incorporating models for orbital and rotational dynamics.
Oumer et al. [262] present a method for non-cooperative pose estimation based on the tracking
of 3D features using a bank of iterated EKFs. A stereo image or other depth image is used to
initialize the process, after which only single camera images are used to estimate pose. A dual-
quaternion 3D point registration method is used to estimate pose relative to the initial pose. The
method does not require a model of the target to estimate pose. Features are selected in the first
frame and tracked over subsequent frames. As features become occluded or, in the case of specular
reflections, disappear, a new reference frame is chosen and new features are selected. Previous
work by Oumer and Panin used a similar approach but relied on a stereo vision sensor to update
measurements for the EKF [263].
Poelman et al. [264] present a SfM approach to creating a 3D shape and map of a space target
using a sequence of single camera imagery. The approach uses non-linear bundle adjustment,
increases model density using a volumetric silhouette-carving technique, and converts the results
to a facet-based surface which is finally texture-mapped. This produces a 3D model which can
be viewed from arbitrary angles to generate movie sequences or to make measurements from the
model. The method was demonstrated with images from the XSS-10 mission. SfM approaches
may be useful when real-time pose estimation is not required, such as when autonomously characterizing
spacecraft status or identifying preferred grappling points.
Lichter and Dubowsky present a method to determine the pose and shape of a target object based
on a complete point cloud of a space target [265]. The cloud is used to create a geometric voxel
representation which is then used to find a centroid and principal geometric axes. These measurements
(which are different from the actual center of mass and principal inertia axes) are passed,
along with an accurate dynamic model of the spacecraft, to a Kalman Filter which produces pose
estimates along with rotational and translational velocities. This target motion estimate is combined
with image data to create a probabilistic map of the target's shape.
2.7 Other Applications of Orbital Computer Vision
2.7.1 Image-based Visual Servoing for Robotics
In addition to pose estimation, computer vision is used to provide direct feedback to robotic ma-
nipulators on orbit. When this control is based on complete pose estimation of the target object, it
is referred to as position-based visual servoing and occurs following pose estimation. An example
of this is given by Dong and Zhu where pose is estimated for a target spacecraft using an EKF and
photogrammetry and then the robotic end-eector trajectory is dened and the joint angles are
derived using inverse kinematics [266]. When the robotic control algorithm is provided feedback
directly from the image pixels it is referred to as image-based visual servoing. For example, Huang
et al. perform image-based visual servoing by driving a robotic manipulator to align a detected
edge line (i.e. a solar panel edge) in the image of a spacecraft with a predetermined alignment set-
point [267]. Similarly, [142] presented a method of visual servoing based on controlling the position
of ducials in the image frame. A model of the relationship between the image features and the
robotic end-eector pose are described by a Jacobian matrix. The joint angles are servoed to reduce
CHAPTER 2. SURVEY: ORBITAL COMPUTER VISION 74
the error of the features in the image plane relative to some specied setpoint. A Kalman Filter is
used to smooth measurement errors as well as compensate for the delay due to image processing
during the visual servoing process.
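The pixel-space control loop described above is compact enough to sketch. The following Python snippet is an illustrative sketch only (not the implementation from [142] or [267]): it assumes an interaction matrix L (the image Jacobian relating feature velocities to camera or end-effector velocities), a stacked feature vector s, a desired setpoint s_star, and a scalar gain, all of which are hypothetical names for this example.

import numpy as np

def ibvs_velocity(L, s, s_star, gain=0.5):
    # Classic image-based visual servoing law: v = -gain * pinv(L) * (s - s_star).
    # L: (2k x 6) interaction matrix for k tracked features; s, s_star: (2k,) feature vectors.
    error = s - s_star
    return -gain * np.linalg.pinv(L) @ error

In practice the interaction matrix depends on depth estimates and camera intrinsics, and the resulting velocity command would be mapped to joint rates through the manipulator Jacobian and inverse kinematics.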
Sabatini et al. [268] proposed using image-based visual servoing in formation flight experiments
where maneuvers are controlled directly from feature point locations in images. The authors found
that this technique works well in an ideal case, though they note challenges in actual implemen-
tation, namely, the difficulty in determining relative velocity and distance to the target based on
single images along the line perpendicular to the image plane.
Sabatini et al. present a modied image-based visual servoing method for control of a space
robotic manipulator which assigns different tasks to different manipulator actuators (e.g. shoulder,
elbow, and wrist motors) [269]. In this case, two motors are assigned the task of image-based visual
servoing of the target, while the final motor is responsible for keeping the target in the camera field
of view. This is accomplished by creating a virtual target that is the center of feature points and
creating control laws which drive that to the center of the camera frame. An EKF which estimates
feature point dynamics is also used to improve overall system performance. [270] addresses the
realistic situation where a camera is mounted on a non-rigid manipulator by using acquired images
to evaluate elastic properties of the arm. These flexible dynamics can then be incorporated into the servoing control law.
2.7.2 Data Quality Assurance
Orbital computer vision has been noted as useful for overcoming data quality issues encountered
when observing space objects using imaging instruments [28]. This is especially true during events
with short timelines such as rendezvous and proximity operations and planetary flybys where there
is limited opportunity for human intervention and correction. One example is predicting, selecting,
and revising appropriate camera settings such as automatic gain control (AGC), pointing, and
potentially zoom based on a priori or sensed knowledge on target and scenario characteristics
(albedo, texture, solar-angle, etc.). [117] notes that challenges with AGC were encountered during
testing for the Hubble Space Telescope Servicing Mission 4 with AGC being too reactive to saturated
pixels; intelligent AGC may use a priori knowledge about the target or illumination conditions to
better select camera settings such as sensitivity or integration time.
2.7.3 Remote Sensing Image Analysis
Orbital computer vision can be used to automate image data product (e.g. electro-optical, radar, multi-spectral, hyper-spectral) analysis. This autonomy can be used to make decisions based on the results, such as changing science collection plans or modifying spacecraft behavior.
One of the earliest examples of onboard remote sensing analysis was demonstrated on the EO-1
spacecraft. The satellite included the Autonomous Sciencecraft Experiment flight software, which is intelligent software capable of performing onboard science processing, mission planning, and task
execution. An example of the utility of this approach is the ability to re-task the spacecraft based
on autonomous processing of captured images. On May 7, 2004, the onboard image processing
algorithms detected high interest activity in thermal imagery of Mount Erebus and autonomously
imaged the area again on a subsequent pass a few hours later (https://earthobservatory.nasa.gov/IOTD/view.php?id=37043). Image processing algorithms onboard EO-1 use the 220-band Hyperion hyperspectral imager to perform thermal classification (i.e. volcanoes), flood recognition, and snow/water/ice/land classification [271].
Carozza and Bevilacqua suggested using remote sensing observations of Earth to determine the
attitude of the observing spacecraft in real-time [272]. The solution uses an iterative approach
involving optical flow (Lucas-Kanade Tracker), homography estimation (Direct Linear Transform),
and robustness to outliers (RANSAC). The error sources of this approach are fully explored in [273]
to enable future concept designs. Kouyama et al. also used remote sensing images to determine the attitude of a spacecraft onboard, combining SURF descriptor matching with position information, for example from GPS or traditional orbit determination [274]. When position is accurately determined
and well-registered base maps are available, this method can provide attitude accuracies of 0.02
degrees which is comparable to simple star trackers. The authors point out that this method may
be useful for small satellites hoping to reduce cost, mass, or power requirements.
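As a rough illustration of the optical-flow, homography, and RANSAC chain mentioned above, the sketch below uses standard OpenCV calls (Lucas-Kanade tracking and RANSAC homography fitting). It is a hedged sketch of the general idea, not the pipeline of [272] or [274]; the parameter values are arbitrary, and the step of extracting an attitude change from the homography and the camera intrinsics is omitted.

import cv2
import numpy as np

def frame_to_frame_homography(prev_gray, curr_gray):
    # Track corners between frames with the Lucas-Kanade tracker.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   corners, None)
    ok = status.ravel() == 1
    p0 = corners[ok].reshape(-1, 2)
    p1 = next_pts[ok].reshape(-1, 2)
    # RANSAC rejects tracks that do not fit the dominant scene motion.
    H, inliers = cv2.findHomography(p0, p1, cv2.RANSAC, 3.0)
    return H, inliers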
A key step in the processing of remote sensing imagery is image registration which estimates
the oset in position and orientation between an image and the true scene. This allows precise
comparison between data collected at different times and by different imagers. Currently, image
registration for satellite derived data is done almost exclusively on the ground. However, the
ability to perform this step onboard a spacecraft is an important enabler to the fusion of data
from multiple satellites and sensors which can then respond in real-time to provide additional
observations during dynamic events.
Wang et al. [275] present methods to accomplish registration which are suited to the limited
processing capabilities of satellites, are appropriate to orbital situations where data is distributed be-
tween dierent satellites, and are robust to common issues in remote sensing data such as cloud cover
or motion. The registration problem is approached by detecting interest points with Difference-
of-Gaussian operators, producing SIFT feature descriptors, matching SIFT feature points, using
RANSAC to reduce outliers, and using multiple-view geometry techniques to determine geometry
between the two observation sensors. The authors also present a Thin Plate Spline method to
accommodate the curved surface of Earth in observed images. A key advantage of the method
presented is the reduced data transmission requirements if the images are collected by dierent
spacecraft; although 10,000 feature points were extracted in a test case, this is significantly smaller than the size of the raw images (MODIS-Terra and MODIS-Aqua images). Finally, the authors used the technique to stabilize sequential images taken by the GOES West observation satellite to reduce
instrument jitter between frames. [276] suggests the use of PCA to reduce the dimensionality of
SIFT descriptors from 128 dimensions to 20 dimensions or less to reduce computational time for
feature matching; this technique is demonstrated on images of the Eros asteroid.
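A minimal sketch of the detect-describe-match-reject chain described above is given below using OpenCV's SIFT detector and RANSAC-based homography estimation. It is illustrative only and does not reproduce the Thin Plate Spline step or the multi-satellite data distribution of [275]; the 0.75 ratio and 3.0-pixel threshold are assumed values.

import cv2
import numpy as np

def register_pair(img_a, img_b):
    # Detect Difference-of-Gaussian keypoints and compute SIFT descriptors.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Ratio-test matching of SIFT descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])

    # RANSAC rejects mismatches while fitting the transform between views.
    H, inliers = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 3.0)
    return H, inliers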
The detection and classification of surface features by onboard computer vision algorithms can be used to identify targets for subsequent imaging, target pose determination, or aid in navigation tasks such as establishing the local gravity model. [28] uses statistical classification using a random forest classifier to determine features of interest on planetary bodies based on expert-labeled
training data. [277] explores the use of supervised machine learning techniques such as neural
networks, ensemble methods, support vector machines, and continuously-scalable template mod-
els to detect craters in planetary bodies. The authors find that the SVM provides detection and localization performance that is closest to human labelers. Proposed improvements to the SVM
technique including Fast Fourier Transforms and the overlap-and-add technique make the solution
computationally feasible for a spacecraft's limited computational resources. The technique was
demonstrated with archived images from the Viking mission to Mars.
Computer vision in space can enable the efficient use of communication bandwidth by au-
tonomously identifying and extracting regions of interest in large image les and collections for
delivery to the ground. Hayden et al. presented an onboard k-means classification method to clus-
ter images based on edge, color, frequency, and time features into categories such as clouds, rivers,
land, horizon, and desert [278]. Doubleday et al. [86] presented a software method to process,
interpret, and autonomously re-task a synthetic aperture radar system. The method demonstrated
an amplitude segmentation algorithm using airborne synthetic aperture radar as a surrogate for
future space-based applications.
Thompson et al. present an algorithm for autonomous onboard detection of planetary plumes
[279]. As these are transient events, it would be advantageous to autonomously detect them, capture
additional images, and select only the part of the frame containing the plume for delivery to the
ground. The method uses a Canny edge detector to find pixels on the planetary horizon, performs ellipse fitting using RANSAC, and finally searches for pixels in an annular region outside the fitted
ellipse to detect plumes. The method was tested successfully using images of Io and Enceladus.
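The plume-detection steps lend themselves to a short sketch: Canny edges, a simple RANSAC loop around an ellipse fit for the limb, and a search of an annular band outside the fitted ellipse. The following Python code is a hedged approximation of that idea, not the algorithm of [279]; the iteration count, inlier test, annulus scale, and brightness threshold are all assumptions.

import cv2
import numpy as np

def detect_plume(img, n_iter=200, annulus=1.15, thresh=40):
    # Edge pixels that may lie on the planetary limb.
    edges = cv2.Canny(img, 50, 150)
    ys, xs = np.nonzero(edges)
    pts = np.column_stack([xs, ys]).astype(np.float32)

    best_ellipse, best_inliers = None, 0
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        ellipse = cv2.fitEllipse(sample)           # ((cx, cy), (d1, d2), angle)
        (cx, cy), (d1, d2), ang = ellipse
        if min(d1, d2) < 1.0:                      # skip degenerate fits
            continue
        # Coarse algebraic test: count edge points lying near the ellipse.
        ca, sa = np.cos(np.radians(ang)), np.sin(np.radians(ang))
        x, y = pts[:, 0] - cx, pts[:, 1] - cy
        u, v = ca * x + sa * y, -sa * x + ca * y
        r = (u / (d1 / 2)) ** 2 + (v / (d2 / 2)) ** 2
        inliers = int(np.sum(np.abs(r - 1.0) < 0.05))
        if inliers > best_inliers:
            best_ellipse, best_inliers = ellipse, inliers

    # Flag bright pixels in an annular band just outside the fitted limb.
    (cx, cy), (d1, d2), ang = best_ellipse
    mask_out = np.zeros_like(img, np.uint8)
    cv2.ellipse(mask_out, ((cx, cy), (d1 * annulus, d2 * annulus), ang), 255, -1)
    cv2.ellipse(mask_out, best_ellipse, 0, -1)
    plume_pixels = (img > thresh) & (mask_out > 0)
    return best_ellipse, plume_pixels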
McGovern and Wagstaff [280] provide a brief overview of potential areas for machine learning
in space, including computer vision applications, as well as constraints such as limited processing
power, limited memory capacity, and the high radiation environment which can interrupt processing
algorithms or stored data.
2.7.4 Satellite Target Characterization
Orbital computer vision can be used to characterize space objects beyond pose estimation. For ex-
ample, one area for characterization of spacecraft is sensing of articulation of onboard components,
namely solar panels or scanning sensors. Researchers have approached this problem in several ways.
Curtis and Cobb used a manifold clustering strategy algorithm known as Enhanced Local Subspace
Affinities to segment feature point trajectories corresponding to individual connected rigid bodies
(here, the primary satellite and a rotating solar panel) [281]. Once segmented, SfM and ICP al-
gorithms were used to estimate the orientation of both objects as well as the axis and angle of
articulation.
Computer vision can be used to characterize a planetary body during approach and landing.
For example, [259] presented a method for hazard detection based on the shadow identification in
optical images at a far distance and terrain height and slope estimation in LIDAR images at close
distances.
Qureshi et al. [282] present a method for qualitative scene interpretation using motion segmen-
tation, object tracking, object identication, and pose estimation using shock graphs in coordination
with knowledge of current workspace tasks during space missions to perform safety monitoring and
task verification. The authors use a Space Shuttle mission (STS-87) as a case study and use ren-
dering software to create multiple views of 20 objects for a total of 633 unique views. The vision
system can label objects as, for example, entering, exiting, occluding, or coupled.
2.7.5 Computational Considerations
Bornstein et al. adapted an onboard processing technique used on Mars rovers to identify science targets of interest, known as Rockster, to run on a 49-core processor being developed for space applications [283]. Analyzing a single image, which takes 10-15 minutes on a 20 MIPS
RAD6000 takes 0.3 seconds on the parallel multi-core implementation. Kogge et al. analyzed the
energy usage of this algorithm on a multi-core implementation, finding that the energy required to communicate between cores as well as access memory internal to and external to each core is "extremely significant," whereas computation by itself will become relatively energy inexpensive
[284]. Future research on multi-core computing in space will involve not only efficiently paral-
lelizing programs, but considering the energy implications of data usage across cores. Separately,
[285] explored the benefits of using 48-core parallel multi-core processing for spacecraft proximity
operations using stereo vision.
Alexander et al. found a ten times improvement in the processing speed when using a 64-
core processor (Tile-64) over an FPGA implementation for terrain relative navigation during Mars
and primitive body landing scenarios; some computer vision steps were found to be naturally
parallelizable including spatial correlation and feature tracking [286]. Finally, the authors note
that the simplicity of programming a multi-core processor allowed rapid development as compared
to using FPGAs which would have "required many more work years." There is a large body of
research on the use of multi-core processors for general computer vision applications which may be
useful for space applications [287].
2.7.6 Validation and Verification Efforts
English et al. [49] noted the current challenges in generalized verification testing for space-based computer vision systems; systems have been designed for specific purposes or, even if created for general purposes, were thoroughly tested in mission-scenario-specific ways prior to launch.
Establishing methodologies to ensure a vision system performs in general situations would reduce
cost and development time. According to the authors, vision systems should one day be plug and
play components much in the way communications or power subsystems may be.
2.8 Future Research Directions
Orbital computer vision has a long history and is an active field of research, as this survey makes
clear. However, there are many areas which hold promise for future discoveries and contributions.
Satellite inspection and characterization beyond pose estimation appears undeveloped and may find fruitful techniques in the emerging areas of machine learning algorithms. For example, future
computer vision algorithms may be used to assess satellite health in addition to position and
attitude. Next, the use of illumination as a free variable in orbital computer vision applications
has gone generally unexploited. Typically, illumination conditions are chosen such that a target is
optimally lit [62]. Conversely, the specular reflections from a known light source like the Sun could be used to infer properties of a target. The areas of Reflectance Transformation Imaging [288] and Polynomial Texture Mapping [289] should be explored, as well as the field known as shape from shadow [290].
In addition to new analysis techniques, existing sensors such as star trackers could be used to
provide more data to computer vision in space. For example, the large scale aggregation of star
tracker data may yield important insights to track debris and other objects in orbit. Star trackers,
in concert with cameras imaging planetary surfaces, may be used to derive rough time when detailed planetary maps are available. The field of orbital computer vision will continue to be a very exciting area of research.
Chapter 3
Survey: Cooperative Localization and
Sensor Selection
3.1 Introduction
Computer vision in space, as shown in the previous chapter, is an active area of research and
provides many techniques and perspectives that may be potentially useful in approaching the
problem statement proposed in Section 1.2. However, the majority of research has focused on the
problem of a single spacecraft observing a single spacecraft target. Therefore, in order to expand to
the situation where many satellites, each equipped with many cameras, are attempting to determine
their collective pose, we turn to the field of robotics for potentially useful insights. Two areas of
research within the robotics domain appear particularly relevant: cooperative localization and
sensor selection. This chapter surveys past and current research in these two areas.
First, a note on terminology: in this chapter, we will use robot to generally refer to a physical
agent involved in localization, sensor selection, and path planning. In almost all uses, the term
robot could be replaced with any number of real world autonomous objects such as driverless cars,
unmanned aerial vehicles (UAVs), or, of course, satellites.
Cooperative localization is an active area of research in robotics which seeks to estimate the rela-
tive pose between individuals in a team of interacting physical agents (e.g. robots, UAVs, satellites).
Cooperative localization has been proposed for applications in ground robots [291], autonomous
vehicles [292], autonomous underwater vehicles [293], and autonomous aerial vehicles [294]. Some
researchers have addressed the advantages and methods of resource sharing in aggregated satellite
architectures [295]. Others have examined the reliability, availability, and throughput of satel-
lite swarms [296]. However, the cooperative localization problem for spacecraft remains primarily
unexplored.
Note that our problem statement has some very clear connections to the cooperative localization
concept as presented in a landmark paper by Roumeliotis and Bekey [297]: 1) a group of n robots, each moving with its own linear or nonlinear equations of motion; 2) each robot can measure both its own state (i.e. with inertial sensors) and the relative state of other robots; and 3) information
can be communicated with the group. Cooperative localization for very large robot networks may
make some schemes (e.g., EKF-based, MAP-based) intractable as the complexity scales with O(n^2) or O(n^3) for n robots. The authors in [298] propose a PF-based approach which includes robot
clustering to reduce the overall algorithm's complexity. The interacting robots reduce the total
set of robots to a set of clusters and only communicate the cluster centroid information during
distributed localization. This is intended to reduce the computational complexity to O(nk) where
k is the number of clusters.
Although cooperative localization can be used for control in multi-agent systems, such as coordi-
nation and flocking [299], we do not focus on the control problem and leave it to future researchers.
Additionally, this literature review focuses on camera-only localization, although there is a large
body of research on using multiple sensor modalities. For example, Meingast et al. [300] present a
method that uses both shared image feature points and radio frequency interferometry to establish
localization within a sensor network. Such hybrid techniques could have obvious applications for
distributed spacecraft which must communicate using radio frequency; however, this topic will not
be addressed here and is left for future research.
3.2 Cooperative Localization
Determining the pose of a single agent relative to some external frame is known as localization within
robotics. When this is done collaboratively for more than one agent using shared measurements
within the team, this is referred to as cooperative localization. Cooperative localization can improve
pose estimates of individual members as well as compensate for limited sensing ability in some
subset of the team members. Note that Cooperative Navigation is a synonymous term [301]. As
a reminder, cooperative localization is unrelated to cooperative pose estimation as defined by the
space robotics community.
There are several surveys on the concept of distributed state estimation and cooperative lo-
calization. Kia et al. [302] provide a comprehensive and approachable review of the cooperative
localization problem, especially for EKF-based approaches. Indelman presented the state of the art
in distributed perception and estimation in 2014 [303]. Raman et al. provide a relatively recent
summary of multi-camera localization approaches to include vision-based methods, consensus and
belief propagation methods, and some approaches for the general non-fixed 3D camera localization
method [304]. The objective of the localization depends on the application; some methods intend to
determine the camera pose for all cameras in the network while others include pose determination
for objects in the field of view or, additionally, mapping of an unknown environment.
3.2.1 Participant Identification
Uniquely identifying spacecraft in camera fields of view is an initial step to solving the cooperative localization problem; robots typically must know the identification attached to the relative pose measurement they broadcast. Shen et al. address the identification problem in automobile
cooperative localization by matching past spatial information to new pose measurements [305].
Cooperative localization can still be accomplished if the identification of a target is unknown.
Cognetti et al. [306] use transmitted anonymous bearing measurements to localize a group of inter-
acting UAVs. Note that anonymous here is defined as a known transmitter sharing measurements of unidentified vehicles. Because the measurements are anonymous, the UAVs must register their
views by treating the bearing measurements as generic features, much like unknown landmarks in
a scene. A registration scheme is used to find the correspondence between these feature locations in multiple views from the participating vehicles. A PF is used for probabilistic estimation.
Franchi et al. [307] performed cooperative localization using anonymous relative position mea-
surements. They note that the lack of identity leads to combinatorial ambiguity which must be
resolved during estimation. The authors propose a two-stage scheme which performs an initial registration step between measurements to provide a set of pose hypotheses to a data associator
and EKF. A set of EKF filters is used for the various pose measurements, and the EKF with the most hypothesis associations is chosen as the best current estimate. Cooperative localization sys-
tems which admit anonymous measurements may be useful for cellular satellite applications since
each satellite would not need unique identifying characteristics, thus reducing manufacturing and
supply complexity.
3.2.2 Kalman Filtering
Roumeliotis and Bekey presented an early and influential approach to cooperative localization
based on a decentralized Kalman Filter approach [297]. Each robot uses measurements of its own
state as well as relative measurements of other robots to perform state estimation of the full team
of robots. Information is shared with the team only when relative measurements between robots
are made. Note also that this formulation allows the distribution of reduced-dimension filters to various members of the team which can be used to manage computational resource availability. [308] generalizes the approach of [297] to non-pose measurements, specifically, bearing, distance,
and orientation. An EKF approach is used. [309] presents a formulation for a general decentralized
Kalman Filter. Karam et al. use an EKF to fuse state estimates from a vehicle's own internal sensors as
well as the shared estimates of neighboring vehicles [292].
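To make the joint-state filtering idea concrete, the sketch below shows how a single relative measurement taken by robot i of robot j updates a stacked team state in an EKF; the coupling between all robots enters through the Kalman gain. This is a generic, textbook-style update under assumed interfaces (h_fun, H_i, H_j, and n_states are hypothetical), not the specific decentralized formulation of [297].

import numpy as np

def relative_measurement_update(x, P, i, j, z, h_fun, H_i, H_j, R, n_states):
    # x, P   : stacked team state and covariance
    # z      : relative measurement of robot j taken by robot i
    # h_fun  : predicted measurement as a function of the full state (assumed interface)
    # H_i,H_j: measurement Jacobians with respect to the two robot sub-states
    N = len(x)
    H = np.zeros((len(z), N))
    H[:, i * n_states:(i + 1) * n_states] = H_i
    H[:, j * n_states:(j + 1) * n_states] = H_j

    y = z - h_fun(x)                      # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # gain couples every robot in the team
    x_new = x + K @ y
    P_new = (np.eye(N) - K @ H) @ P
    return x_new, P_new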
Hu et al. [310] offer an overview of decentralized and distributed Kalman Filtering. A distributed filter is presented that uses covariance intersection to fuse measurements and state estimates between neighboring nodes. [311] uses an interlaced EKF to break up a full EKF across multiple
nodes.
Kia et al. [302] present a decentralized EKF-based cooperative localization scheme which is
exactly equivalent to the centralized algorithm. The propagation stage relies only on local informa-
tion. Communication is only used when a relative pose measurement is made. A related approach
by the same authors can be found in [312].
Mourikis and Roumeliotis [313] analyze the uncertainty of cooperative localization due to sensor
quality, the number of team members, and the topology of the sensor network. Interestingly, the
authors find that the steady state error of the team following a temporary reconfiguration will return to the same level as during the original configuration. The authors specifically highlight the
relationship between localization performance and the Laplacian eigenvalues of the vision graph as
an area for future research. Roumeliotis and Rekleitis [314] analyze the localization accuracy for a
Kalman Filter as a function of the number of team members and sensor accuracy. They provide an analytical
expression for the bound on the rate of global uncertainty increase as a function of team size and
sensor uncertainty.
In [315], a Kalman Filter-based approach is used for multiple UAVs tracking an object. In
the distributed implementation, each vehicle runs a Kalman Filter which operates on shared UAV
location estimates and estimated directions to the target.
When measurements are shared between vehicles or broadcast, it is essential to keep track of any
data dependencies that develop to avoid the problem of double-counting [316]. Luft et al. approach
this challenge by using an EKF-based localization algorithm for robots where information is only
exchanged between robots currently obtaining a relative measurement and each robot only updates
its own pose and any correlations with estimates of other teammates. This approach is especially
useful for robots requiring point-to-point communication methods, as opposed to broadcast modes.
In addition, by storing and updating only one's own current pose estimate, computation and storage
requirements may be reduced. Specifically, when two robots meet, they share both the current estimate of their relative pose and the cross-correlation. Experiments involving real-world data are compared
to two other cooperative localization techniques: a centralized EKF estimating the complete pose of the team [297] and Covariance Intersection [317].
Wanasinghe et al. [318] use a Split Covariance Intersection technique to fuse measurements
which eliminates the need to track cross-correlations. Poses are exchanged with neighbors. A
Cubature Kalman Filter is used for estimation. This suboptimal distributed approach allows a
linear computational complexity in the number of neighbors as opposed to O(n
4
) for a centralized
implementation. See [319] by the same authors for a comparison with EKF-based approaches.
Montesano et al. [320] performed bearing-only cooperative localization. The problem is solved using three different approaches: an EKF, a PF, and a PF that transitions to an EKF when the
distribution is assessed to be Gaussian. [321] also used a PF which transitions to an EKF when
the distribution approaches Gaussian.
3.2.3 Maximum a Posteriori Estimation
Some authors have approached cooperative localization as an estimation problem that makes use of all past
measurements to perform Maximum a Posteriori estimation. In a key example, Howard et al.
presented a Bayesian approach to perform cooperative localization to determine the probability
distribution of robot poses for each member of the team over time based on observations within
the team [322]. The authors group observations into five types, from the perspective of a single robot (termed 'self'): motion of self, motion of other robots, robot pose relative to self, self pose
relative to another robot, and nally relative pose between two other robots. All measurements are
communicated to the group and fused in a Bayesian framework, that in the specic implementation,
relies on a PF. A key component of this approach is maintaining a dependency tree between
measurements to prevent circular dependencies (and thus, over-convergence).
Howard et al. use a conjugate gradient descent optimization algorithm to find a set of poses
which best matches the measurements from a team of robots over some time interval [291]. The
approach is distributed amongst the robot team by allowing each robot to estimate a subset of the
overall pose-measurement graph. Ahmad et al. [323] extended the work of [291] with the inclusion
of both static and moving targets as part of the state estimation.
Tron and Vidal presented a distributed approach to the localization of a team of cameras
containing overlapping views that include image points which can be matched between views [324].
Their method relies on minimization on SE(3)^N with a generalized classical consensus algorithm to find a globally consistent pose for all cameras in the network at a given moment. This may be a
potentially good method to initialize a cluster. Knuth and Barooah localize a camera network using
relative measurements between the cameras (including relative orientation, bearing, distance, and
position) to determine a maximum likelihood estimate of camera poses [325]. A gradient descent
method on SO(3)^N × R^(3N) is used to find the estimates. This paper was inspired by [324].
Nerurkar et al. [326] present a Maximum a Posteriori estimator for distributed cooperative
localization which finds optimal pose estimates based on past measurement information. Opti-
mization is performed using Levenberg-Marquardt. As time progresses, the inclusion of many past
pose measurements increases the complexity of this estimation. Therefore, the authors include a
method to marginalize past robot poses to reduce the size of the optimization problem.
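A toy version of this MAP formulation can be written as a nonlinear least-squares problem over stacked 2D poses with odometry and robot-to-robot residuals, solved here with SciPy. The measurement models, noise values, and data layout are assumptions for illustration and are far simpler than the estimator in [326].

import numpy as np
from scipy.optimize import least_squares

def solve_map_poses(x0, odometry, relative_meas, sigma_odo=0.05, sigma_rel=0.1):
    # x0            : initial guess, flat array of (x, y, theta) poses (assumed layout)
    # odometry      : list of (idx_from, idx_to, dx, dy, dtheta), expressed in the 'from' frame
    # relative_meas : list of (idx_obs, idx_tgt, dx, dy), expressed in the observer frame
    def unpack(x, k):
        return x[3 * k], x[3 * k + 1], x[3 * k + 2]

    def residuals(x):
        res = []
        for i, j, dx, dy, dth in odometry:
            xi, yi, thi = unpack(x, i)
            xj, yj, thj = unpack(x, j)
            c, s = np.cos(thi), np.sin(thi)
            px = c * (xj - xi) + s * (yj - yi)      # predicted motion in frame i
            py = -s * (xj - xi) + c * (yj - yi)
            res += [(px - dx) / sigma_odo, (py - dy) / sigma_odo,
                    (thj - thi - dth) / sigma_odo]   # angle wrap ignored for brevity
        for i, j, dx, dy in relative_meas:
            xi, yi, thi = unpack(x, i)
            xj, yj, _ = unpack(x, j)
            c, s = np.cos(thi), np.sin(thi)
            px = c * (xj - xi) + s * (yj - yi)      # predicted relative position
            py = -s * (xj - xi) + c * (yj - yi)
            res += [(px - dx) / sigma_rel, (py - dy) / sigma_rel]
        return np.array(res)

    # A Levenberg-Marquardt-style solve as in the cited approach; 'trf' is used
    # here as a safe default since 'lm' requires more residuals than parameters.
    return least_squares(residuals, x0, method="trf").x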
Nerurkar and Roumeliotis [327] looked at cooperative localization from an information transfer
perspective where past measurement information is stored onboard each robot and used to locally
compute the cross-covariance on demand. This is important for time-varying topologies with asyn-
chronous information. Two specic methods are analyzed: one where interacting robots share only
their own measurements and one where they share all measurements they have stored onboard,
including from other team members. The work includes a communication complexity analysis for
these scenarios.
Indelman et al. use a graph-based approach for the same problem where measurement infor-
mation is stored onboard and the correlation terms are computed on demand [301]. A graph
formulation is used to maintain measurement updates and each robot carries its own graph which
is built up over time with new measurements.
Dieudonne et al. [328] find that the static cooperative localization problem for relative obser-
vations (distance and bearing) is NP-hard using a proof based on complexity theory.
Leung et al. introduce the concept of "checkpoints" during cooperative localization in networks where full connectivity is not assumed: times when sufficient information has been collected to apply the Markov property and produce a decentralized state estimate which is equivalent to a centralized estimate, effectively reducing onboard information storage [329].
3.2.4 Distributed Estimation and Consensus
The cooperative localization process is related to consensus problems in networked systems. [330]
provides an important and readable survey of the field including stability and convergence analysis
for consensus algorithms. Focus is on dynamical systems as well as switched networks whose
topologies change in time. One highlight is that characteristics of the graph, namely the Graph
Laplacian, can reveal insight into convergence performance including speed.
3.2.5 Mutual Localization Methods
There are a number of publications regarding the mutual localization of two vehicles which are
able to simultaneously observe each other using some form of relative pose estimation. This has
not been done in space to our knowledge; one spacecraft is typically the observer while another
spacecraft is a passive target without sensing capabilities.
Giguere et al. establish pose using mutually observing robots moving in a planar environment
[331]. Robot pose consists of two degrees of linear position and one degree of rotation for a total
of three degrees of freedom. The authors present an analytical calculation for the relative robot
pose between two robots each with a camera and two fiducial markings. This analytical solution is
especially useful because it allows for an uncertainty evaluation using the Jacobian of the resulting
measurement functions for position. The authors compare the result of mutually observing robots
to a single robot observing another with three fiducial markings on it and find that the mutual
localization technique yields much better performance. They also present derivations which show
that two mutually observing robots are very similar to one robot with stereo-vision observing
the other. Experiments using this approach for quadcopter formation flying were reported in
[332]. Each quadcopter was equipped with two colored markers. Fiducial detection was done
using OpenCV (https://opencv.org/), and the resulting feature locations were used for pose estimation. Since the pose
estimates are based on analytical relations, the solutions can be found very rapidly (approximately
300 nanoseconds on a standard computer).
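The uncertainty evaluation enabled by such analytical solutions amounts to first-order covariance propagation through the pose function. The sketch below illustrates the idea with a numerical Jacobian; the callable f and the measurement covariance R are hypothetical placeholders rather than the closed-form expressions of [331].

import numpy as np

def pose_covariance_from_measurements(f, z, R, eps=1e-6):
    # f : callable mapping the measurement vector z (e.g. fiducial image
    #     coordinates from both robots) to a pose vector (assumed interface).
    # R : covariance of the measurement vector z.
    z = np.asarray(z, dtype=float)
    p0 = np.asarray(f(z), dtype=float)
    J = np.zeros((len(p0), len(z)))
    for k in range(len(z)):
        dz = np.zeros_like(z)
        dz[k] = eps
        J[:, k] = (np.asarray(f(z + dz)) - p0) / eps   # finite-difference column
    return J @ R @ J.T                                  # Sigma_pose ~= J R J^T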
Dhiman et al. present the mutual localization of two cameras by use of reciprocal fiducial observation which does not rely on any form of inertial measurements such as gyroscopes or accelerometers [333]. Each robot camera can observe two fiducial markings mounted on the other robot. This is sufficient to form four correspondences between image coordinates and known 3D coordinates (it is safely assumed that the positions of each set of two fiducials in the respective robot frames are known a priori). The authors present a generalization of the well known Perspective-3-Point algorithm to estimate relative pose between cameras that involves 3D points in two different 3D frames. This is different from conventional approaches where only the 2D image coordinates are given in two different 2D frames; the 3D coordinates are only known in one frame, typically relative to the object upon which they are mounted. The authors found that this method can localize more accurately than using ARToolKit fiducial markings and can be used as an input for
3D reconstruction of unknown environments.
Dugas et al. [334] provide another example of localization between two mutually observing
cameras using fiducials. Each camera extracts the image location of two fiducials which is sufficient
to solve for the relative pose between cameras. Note that the result is an analytical solution for
the mutually observing localization problem.
Zhou and Roumeliotis [335] explore the determination of relative pose between two interacting
robots from a combination of range and bearing measurements combined with egomotion and find
14 minimal problems which can be solved to recover pose. These measurements are made at
multiple timesteps where it is assumed that both robots can accurately estimate their change in
position and orientation between these steps using inertial sensors. The results may be useful for
initial relative pose estimation when direct vision-based pose estimates are unavailable or highly
inaccurate. Additional findings appear in [336].
Another potentially useful approach to cooperative localization is based only on range measure-
ments between large swarms of interacting robots [337]. This method could be especially useful for
initial relative pose estimation where distance is inferred from object size in camera images. The
disadvantage of this approach, however, is communication complexity.
Feng et al. [338] present a cooperative localization scheme involving three planar robots and
one climbing robot. The motion of the robot team is exploited to provide sufficient measurements
to solve for the pose of the climbing robot. Aragues et al. [339] used a multi-stage approach for
the localization of a set of planar robots where absolute orientation is estimated first for the entire team followed by position estimation. The final step is a combined pose refinement. Relative pose
measurements are used. The authors also include a distributed solution.
3.2.6 Cooperative Target Tracking
Cooperative target tracking is closely related to cooperative localization and involves fusing mea-
surements from multiple observers to estimate the position (and sometimes orientation) of one or
more targets. In some cases [340], both cooperative target tracking and cooperative localization
are performed simultaneously.
Schmitt et al. present a cooperative localization scheme for state estimation of a robotic soccer
team, its opponents, and the soccer ball [341]. Each robotic soccer player is equipped with a single
vision sensor. From this camera, each player localizes itself, the ball, and observed opponents. These
state estimates are then broadcast to the team and iteratively incorporated into each player's own state. A Multiple Hypothesis Tracking framework is used to associate new observations with existing object hypotheses. Additional research on cooperative localization and tracking approaches
for soccer can be found in [342].
Knuth and Barooah use relative pose measurements between vehicles to improve estimates for
localization using a distributed approach [343]. A measurement graph is used to conceptualize the
measurements relating the absolute pose of a vehicle between two time steps (inter-time) and the
measurement relating the relative pose between two vehicles at the same time step (inter-vehicle).
The inter-time measurements come from vehicle sensors such as odometry or inertial measurement
units. The inter-vehicle measurement comes from a relative pose sensor (e.g. computer vision). An
approach involving Lie groups is used to solve the least squares minimization problem to produce
pose estimates for the group of vehicles over time. The same authors, in [344], use gradient descent
in SE(3) to find optimal robot poses over all time. A graph formulation is used which incorporates
all measurements. The authors include a distributed version.
Tron et al. [345] use manifolds in distributed consensus algorithms. The authors note that the
traditional definition of averaging is hard to apply for pose which complicates the use of consensus
algorithms for pose. The authors propose new consensus algorithms based on the use of manifolds
to represent rotations. Note that translation can be treated in the traditional sense and is thus
decoupled from this consensus on SE(3). Related work by the authors in [324] extends the manifold-
based method to the more challenging problem of camera localization based on images.
In another example of the use of manifolds, Sarlette and Sepulchre [346] perform both attitude
estimation and control using consensus on manifolds, specifically SO(3), for attitude only.
Mirzaei et al. [347] derive analytical upper bounds for the uncertainty in target tracking with
multiple robots using an EKF. The upper bounds are functions of robot sensor characteristics and
the sensing graph connecting robots and targets. Although not discussed in the paper, this bound
may be useful for sensor selection to improve localization performance. Hausman et al. [348]
also approach the cooperative localization and target tracking problem using an EKF to estimate
the joint state. A control scheme is proposed to minimize target uncertainty and avoid sensor
occlusions.
Olfati-Saber addresses distributed Kalman Filtering for non-homogeneous sensor networks
in [349].
Meyer et al. [340] introduce a framework (CoSLAT) to perform cooperative localization and
distributed target tracking. Measurements between the agents and targets are used for estimation.
A particle-based distributed belief propagation scheme is used to share information. Each agent
estimates its own state and the target state from all measurements the team takes. A factor graph
formulation is used to relate agent poses over time with relative pose measurements taken at various
time-steps.
3.2.7 Structure from Motion
This section reviews the cooperative localization schemes which use overlapping images of the same
scene to determine the relative pose between two cameras as well as simultaneously extracting
information about the structure of the scene. This is known, among other names, as Structure
from Motion (SfM) and can optionally include information about the motion of systems between
two images. It is closely related to camera calibration which is covered in the next section.
Detecting Overlapping Fields of View
A preliminary step to estimating relative pose from multiple overlapping views is determining which views overlap in the first place. This will be the case for the initial steps of localization or camera calibration. It cannot be assumed that the topology is known a priori; it often must be deduced from
the images themselves. For example, Wang [350] provides a brief survey of techniques to determine
the topology of a camera network from images.
One method to solve this is to use uncertainty measurements for the relative pose computed for
each camera pair [351]. Such a method relies on matching features between each possible pair of
frames and uses geometric constraints to reject unlikely overlapping views. [352] proposes a method
based on the union of the probability distributions of all point correspondences between any two
views. The authors note that two overlapping views will have a distribution with some clear peaks
while two non-overlapping views will tend toward uniform distributions. SIFT feature descriptors
are used for nearest neighbor matching. [353] also uses SIFT feature matching to estimate the
vision graph.
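A simple way to sketch this topology-discovery step is to connect two views in the vision graph whenever enough SIFT matches survive a ratio test, as below. The thresholds are assumed values, and the snippet ignores the probabilistic peak test of [352] and the geometric verification of [351]; it is meant only to illustrate the general mechanism.

import cv2
import itertools
import numpy as np

def vision_graph(images, min_matches=30, ratio=0.75):
    # Edge (a, b) is added when enough ratio-test SIFT matches link the two views.
    sift = cv2.SIFT_create()
    feats = [sift.detectAndCompute(img, None) for img in images]
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    edges = {}
    for a, b in itertools.combinations(range(len(images)), 2):
        des_a, des_b = feats[a][1], feats[b][1]
        if des_a is None or des_b is None:
            continue
        matches = matcher.knnMatch(des_a, des_b, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < ratio * m[1].distance]
        if len(good) >= min_matches:
            edges[(a, b)] = len(good)   # edge weight = number of shared features
    return edges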
Wang et al. [354] presented a method to perform initial localization of a network of color depth
(RGB-D) cameras with overlapping views (Red Green Blue - Depth refers to a standard color camera image that also includes a depth measurement for each pixel). An edge-weighted graph is constructed of the sensing
topology where the weights were a function of overlap between the two camera views. This overlap,
which is unknown a priori, is found by aligning the 3D point clouds from the two sensors with, for
example, ICP. These initial pose estimates, as well as the selection of the minimal set of camera
pairs using the Floyd-Warshall algorithm, are used in a pose refinement algorithm to localize all
the cameras in the network.
Non-overlapping Field of View Approaches
In situations with non-overlapping fields of view, research has focused on using an estimate of the
target trajectory model to coordinate two separate cameras. Anjum [355], for example, calibrates
multiple non-overlapping cameras observing a target moving in a plane whose trajectory when
not in view is modeled using linear regression between the previous and subsequent camera views.
Funiak et al. [356] address localization of a sparse minimally overlapping camera network observing
an object moving through a scene by over-parameterizing the camera pose to enable Gaussian
distribution modeling.
Rhyme et al. present a method to localize a non-overlapping camera network that observes a
target [357]. Image coordinates of the target are used to calibrate the network and knowledge of
the target dynamics can potentially compensate for lack of overlap between cameras. Maximum a
Posteriori (MAP) estimation is used. Since the relative motion model for passive objects in orbit is
well known, this may be a useful method for tracking non-team members within a satellite cluster.
Structure from Motion Techniques
Typically, SfM operates on an ordered set of images as captured, for example, by a single camera moving
through a world scene. Egbert and Steele [194] present an approach to the SfM problem involving
an unordered image set which uses a minimum spanning tree of the camera adjacency graph. The
adjacency graph can be determined several ways including by coarse location information, feature
correspondences, or color histogram comparisons as is used here. The camera calibration step,
which determines pose for individual cameras, is then ordered based on this minimum spanning
tree.
Although Bundle Adjustment is the standard method for solving the SfM problem with large
image sets, it is typically performed on a centralized processor. Ramamurthy et al. [358] present a
fully distributed bundle adjustment scheme using the Alternating Direction Method of Multipliers
distributed across multiple processors. The authors specifically highlight UAVs as a po-
tential use case. Note that the proposed method has linear complexity with respect to the number
of observations.
In [359] the authors present a multi-step process for choosing views in a SfM scenario involving
UAV inspection of buildings. A subset of feasible camera viewpoints is created based on clustering to reduce the search space; specific views are then selected using a greedy algorithm. Constraints
considered during the approach include coverage, view overlap, and resulting uncertainty.
This general concept has been scaled to localize hundreds of thousands of cameras in the context of tourist
picture data sets where the photographer location or orientation may be unknown, but the same
subject appears in many different images [360].
Piasco et al. used multiple single-camera equipped UAVs with overlapping fields of view to
localize themselves [361]. Feature matching using SIFT descriptors provided the necessary point
correspondences and a five-point algorithm was used to recover pose between each pair of cameras.
Indelman et al. present a cooperative localization scheme based on multiple-view geometry in which
images of the same scene are collected by a pair of robots not necessarily at the same time [301].
A computation of cross-covariance terms is introduced to allow the integration of this relative pose measurement (from three-view geometry) into the localization solution within each robot. A fixed-lag smoother is used to deal with the non-concurrent relative measurements.
Taylor and Shirmohammadi also approach cooperative localization using fiducials, in this case
blinking LEDs [362]. This allows cameras to both identify and localize other cameras. Image
coordinates from multiple cameras of the same LEDs are used to estimate camera pose. Finally, a pose refinement step is proposed which uses Bundle Adjustment to find the set of camera poses
which best matches the available measurements.
Simultaneous Localization and Mapping
Simultaneous Localization and Mapping (SLAM), as the name implies, refers to simultaneously
solving the localization and mapping problem for a mobile platform. Typically the measurements
used for estimation are images or range measurements. SLAM is typically performed in real time in a recursive manner, often using EKF frameworks, which differentiates it from SfM which is typically
solved using all available measurements offline. Davison et al. [363] provide an overview of the SLAM problem as well as the first real-time single-camera SLAM application.
SLAM has typically been approached using a single observer platform. However, there is a body
of research on collaborative SLAM using multiple interacting robots which will be highlighted here.
Examples of cooperative SLAM can be found in [364] and [365].
Schmuck and Chli [366] present a collaborative SLAM approach using multiple UAVs with
centralized map merging and Bundle Adjustment. This work includes a good overview of the
collaborative SLAM problem. Visual odometry is used onboard for individual UAV navigation.
Tribou et al. [367] perform SLAM using a rigidly connected set of three cameras (i.e. a "rig")
with non-overlapping elds of view of a scene which includes a moving target. In addition to the
general method, the authors highlight degenerate motions and estimator performance.
Walter and Leonard [368] demonstrated underwater SLAM using a team of heterogeneous robots
based upon a delayed state EKF using both environment and inter-vehicle measurements. Sharma
et al. [369] address cooperative SLAM using an information consensus filter.
Tweddle and Miller provided an extensive overview of the SLAM problem as it relates to space
applications in the literature review of their dissertation [370]. They specifically note the unique
challenges in SLAM when the environment is dynamic and, in their case, has unknown properties
which must be estimated such as center of mass and moments of inertia. Addressing these issues is
the contribution of their thesis. Note that the authors focused on a single observing platform, not
collaborative SLAM.
Unmanned Aerial Vehicles Applications
UAVs provide a platform for SfM by imaging targets from many locations autonomously. Multiple
UAVs can be used to perform collaborative SfM (including cooperative localization) by fusing all
gathered images.
Soeder and Raquet use two UAVs to produce a stereo-imaging system where one camera is
located on each vehicle and the relative position and orientation between these cameras varies
over time [294]. The camera on one UAV (trailing camera) observes four known fiducials on the
other UAV which is used to compute the relative pose between vehicles. This relative pose is then
used to enable stereo-vision using both cameras to compute a depth map of the surrounding
environment. Bundle Adjustment is then used to produce vehicle position measurements from
sequential stereo images which are fed to a Kalman Filter used for vehicle navigation. Finally, the
authors used an experimental setup to determine the error characteristics associated with pose
estimation from the cooperative, fiducial-based algorithm. An external motion capture system
(Vicon) was used to produce these truth data.
Indelman et al. [371] [372] use three-view geometry to perform cooperative localization for a
group of aerial vehicles. Images are captured and compared to past images taken from the same
approximate location. Three such images are then used to estimate the poses of the vehicles using
multi-view geometry constraints which are then broadcast to the group.
Vemprala and Saripalli [373] present a method using UAVs with overlapping fields of view to create point clouds of target objects. Features are extracted and matched using an Accelerated KAZE descriptor. An initial relative pose estimate is established between two UAVs using a five-point algorithm with RANSAC. Scene reconstruction and pose refinement is done with Levenberg-
Marquardt optimization to solve the PnP problem.
Another example of vision-based multiple UAV localization using overlapping fields of view is given in [374] where the team of UAVs uses features in shared fields of view when observing
ground-based scenes. Note that in general, UAV team localization methods in the literature do not
directly sense the full pose of neighboring UAVs due to their speed and separation distance. [375]
presents a feature-based cooperative localization system where UAVs observe the same object and
fuse measurements with onboard inertial measurements. A Kalman Filter approach is used. The
authors include an observability analysis based on the number of features viewed.
3.2.8 Camera Calibration
Camera calibration is a closely related topic to cooperative localization and SfM. Perhaps the key
difference is that camera calibration typically is used for stationary cameras whereas cooperative
localization and SfM rely on the motion of cameras to extract pose. Camera calibration techniques
may be especially useful for swarms consisting of spacecraft with low relative velocities.
A key paper [376] which outlines localization of a camera network using feature points shared
between views appeared in 2004. The paper included an algorithm, termed the Distributed Alter-
nating Localization-Triangulation algorithm, which localizes the network using these shared features. A
graph-based model for the network is used to define a sparse camera network where not all cameras observe the same field of view. The authors also include a comparison of computational costs
versus communications costs for localization, finding that using features onboard can be orders of magnitude more efficient than sharing with neighbors via radio frequency communications.
Some calibration techniques rely on known objects viewed by multiple cameras. Kurillo et
al. [377] propose a technique based on two LED beacons moving through a scene. Initial pair-
wise camera pose (calibration) is estimated using multi-view geometry techniques. Next, a vision
graph is established to determine the preferred path between relative pose measurements to opti-
mize the global joint camera calibration. Edge weights in this graph are the number of common
points between cameras and Dijkstra's shortest path algorithm is used to minimize the number of
intermediate pose transformations (i.e. nodes in vision graph).
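The graph bookkeeping in this kind of calibration can be sketched as follows: pairwise relative transforms are stored on edges weighted by the inverse of the number of shared points, and each camera's global pose is obtained by composing transforms along the Dijkstra shortest path from an anchor camera. The data layout (a dict of 4x4 transforms keyed by camera pairs) is an assumption for illustration, not the structure used in [377].

import networkx as nx
import numpy as np

def chain_relative_poses(pairwise, anchor=0):
    # pairwise maps a camera pair (i, j) to (T_ij, n_shared), where T_ij is a 4x4
    # transform taking camera-j coordinates into camera-i coordinates and n_shared
    # is the number of common image points (assumed data layout).
    G = nx.Graph()
    for (i, j), (T_ij, n_shared) in pairwise.items():
        G.add_edge(i, j, weight=1.0 / max(n_shared, 1), T=T_ij, src=i)

    poses = {anchor: np.eye(4)}
    for cam in G.nodes:
        if cam == anchor:
            continue
        path = nx.shortest_path(G, anchor, cam, weight="weight")  # Dijkstra
        T = np.eye(4)
        for a, b in zip(path[:-1], path[1:]):
            edge = G[a][b]
            # Use the stored transform directly, or its inverse if traversed backwards.
            T_ab = edge["T"] if edge["src"] == a else np.linalg.inv(edge["T"])
            T = T @ T_ab
        poses[cam] = T
    return poses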
Devarajan et al. [353] present a decentralized method for camera calibration where each camera
produces an initial pose estimate with neighboring cameras sharing the same field of view. These
initial relative pose estimates are created using Bundle Adjustment. A belief propagation scheme
is used to fuse the estimates from these individual camera clusters. A prior paper by the authors
[378] uses belief propagation techniques where each node only communicates with neighbors using scene points in shared fields of view. The authors specifically address the unique challenges that
camera sensors create when using belief propagation such as the scale ambiguity for each camera
pair.
Medeiros et al. [379] suggest using clusters to reduce the communications requirements between
nodes. Clusters are formed around a shared field of view and, though less accurate than fully peer-to-peer methods, show improved energy efficiency. The authors use moving objects in shared fields
of view as feature points.
Mavrinac et al. [380] use a graph theoretic approach to model the calibration of a distributed
camera network. The authors define communication, vision, and pose graphs, and use these to perform distributed calibration, including the distribution of the feature matching step among the nodes. The cameras are assumed fixed and the scene is assumed static.
3.2.9 Space-based Cooperative Localization
There is an extensive body of knowledge on the estimation of relative position and attitude between
two spacecraft (see [381] for one such example and Chapter 2 for many more). However, estimating
the relative pose of a group of spacecraft collectively has received much less attention. Typically,
this has been part of "formation flight," focusing primarily on the challenges of formation design and control. Control requires state estimation for feedback. This section surveys research on
state estimation for a spacecraft cluster.
Smith and Hadaegh approach the problem with parallel full state estimation of the entire
cluster on each spacecraft with a goal of reducing inter-satellite communication at the price of
increased, but relatively cheap, computation [382]. A standard Kalman Filter is used, but additional
communication is included to reduce the impact of "disagreement dynamics" arising from not
sharing actuation (i.e. propulsive events) information between satellites. The authors have written
extensively on the topic of distributed state estimation for spacecraft.
Mandic et al. presented a decentralized stable state observer for a discrete-time linear time
invariant system of independent spacecraft which blends the internal state estimate with neighbor
state estimates [383]. In their specific application, a formation of satellites uses range measure-
ments to estimate positions of neighboring satellites. Note that the communication topology is
an important aspect as estimates are only communicated to neighboring spacecraft. Menon and
Edwards use a nonlinear observer to estimate individual spacecraft position based on relative position
measurements between spacecraft [384].
Covariance intersection has been proposed as a method for state estimation within distributed
spacecraft formations. The method presented by Arambel et al. uses covariance intersection in conjunction with a standard Kalman Filter to accommodate measurements which are suspected to be correlated [385]. This allows for two covariances to be "consistent": the
estimated covariance is an upper bound of the actual covariance. While the Kalman Filter is used
for the prediction step and the measurement update for known independent measurements (i.e.
gyroscope readings), the covariance intersection can be used to fuse measurements provided from
external sources (i.e. state estimates of neighboring spacecraft) which may actually be correlated
to one's own state. The key advantage of the covariance intersection method for fusing information
CHAPTER 3. SURVEY: COOPERATIVE LOCALIZATION AND SENSOR SELECTION 97
is that knowledge of the correlations need not be maintained by each vehicle, thus reducing storage
and computation requirements.
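As a concrete illustration of the mechanics involved, the sketch below fuses two estimates of the same state with unknown cross-correlation using the standard covariance intersection equations, selecting the weight by a simple trace-minimizing grid search. This is a generic illustration of the technique (the function name and the grid-search weight selection are choices made here), not the specific formulation of [385].

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, n_grid=100):
    """Fuse two estimates with unknown cross-correlation via covariance
    intersection, choosing the weight omega that minimizes the trace of
    the fused covariance by a simple grid search."""
    best = None
    for w in np.linspace(1e-3, 1.0 - 1e-3, n_grid):
        Pa_inv, Pb_inv = np.linalg.inv(Pa), np.linalg.inv(Pb)
        P = np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv)
        x = P @ (w * Pa_inv @ xa + (1.0 - w) * Pb_inv @ xb)
        if best is None or np.trace(P) < best[2]:
            best = (x, P, np.trace(P))
    return best[0], best[1]

# Example: two position estimates with different (possibly correlated) errors.
xa, Pa = np.array([1.0, 2.0]), np.diag([0.5, 2.0])
xb, Pb = np.array([1.2, 1.8]), np.diag([2.0, 0.4])
x_ci, P_ci = covariance_intersection(xa, Pa, xb, Pb)
```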
Dumitriu et al. used a combination of an EKF and covariance intersection for the distributed estimation of a formation of spacecraft [386]. The EKF was used to process local measurements while covariance intersection was applied to state estimates transferred from neighboring vehicles. In the proposed scenario, three satellites were placed approximately 250 m apart in geostationary transfer orbit, where they use a radio frequency system to measure relative distance; attitude estimation is not considered. As with Mandic et al. and Smith et al., understanding the communication and measurement topologies was an important element in the filter design.

Dumitriu et al. [387] used a covariance intersection technique to fuse position estimates derived from a decentralized EKF. Note that this work did not address attitude estimation.

Carrillo-Arce et al. [317] rely on covariance intersection to perform cooperative localization where each robot maintains an estimate of only its own state and covariance. This estimate is updated by internal sensors and relative measurements from neighbors. Covariance intersection ensures that the estimate remains consistent in the presence of possibly correlated relative measurements. [317] also provides a good overview of existing cooperative localization approaches.

Wang et al. [388] estimate the state of a spacecraft formation with a static communication topology in a distributed manner by using a UKF to process relative measurements from neighboring spacecraft. The measurements here are pseudorange and carrier phase derived from onboard radio frequency transceivers.

Crassidis et al. [389] used covariance intersection to fuse quaternion-based attitude estimates with unknown correlations while maintaining the normalization constraint of quaternions. They present a simulation for a simple scenario where local Kalman Filters are run on two individual star trackers which share gyro measurements. The decentralized attitude estimation of a formation of three or more spacecraft is also addressed.

Mandic et al. [383] use a linear observer to estimate the position and velocity of each vehicle in a spacecraft formation in a decentralized manner. A consensus filter is used to blend state estimates with those of neighboring spacecraft. The authors show that this observer converges asymptotically even during arbitrary changes in the communication and sensing topologies.
Kang et al. [390] approach satellite cooperative localization from an architecture perspective and define communication and observation graphs. The authors present a distributed architecture to estimate the position of spacecraft; each satellite maintains a Kalman Filter, generates relative position measurements, and shares these measurements with other members of the team. The authors note that the loss of a single satellite does not require filter state restructuring. Note that this work does not address distributed attitude estimation, as it is assumed star trackers are available on all vehicles and can be differenced to determine relative attitude.

Blackmore and Hadaegh [391] present important observability conditions for distributed attitude estimation in a system involving star trackers and fiducials (beacons). If spacecraft do not have star trackers, they must be able to determine relative pose to one with a star tracker using multiple non-collinear fiducials or a single fiducial not parallel to the rotation vector of the target.

Hadaegh et al. [19] present the concept of self-centered localization where each spacecraft uses inter-satellite measurements and shared satellite states to establish the position of other satellites relative to itself. Attitude estimation is not addressed, as it is assumed each spacecraft will have high accuracy star trackers which can simply be differenced between satellites.

Fergusen and How [392] compared several decentralized methods for estimation on a spacecraft, with a focus on position estimation based on range measurements. The comparison included a full-order decentralized filter which estimates the full fleet state on each vehicle, a reduced-order cascade filter, and a Schmidt-Kalman Filter. Finally, the authors approached the problem of all-to-all communications by examining hierarchic clustering topologies to reduce communication complexity.

McLoughlin and Campbell [393] presented a distributed localization system for a spacecraft formation using relative range and bearing measurements. A circular communication topology is used to share state estimates sequentially among neighbors, which are then combined with a locally-running suboptimal EKF on each spacecraft. Note that the authors only estimate position with this approach, leaving attitude measurements to individual star trackers.

[394] focuses on creating stable observers for the distributed estimation of position in a spacecraft formation. The authors specifically address time-varying topologies and present a switched steady-state Kalman gain approach selected according to the instantaneous topology. The estimator proposed by the authors produces a set of constant estimation gains for each possible sensing topology, which reduces the computational complexity.

Fan et al. [395] described a consensus-based MEKF for attitude estimation of a distributed system of attitude sensors. The state estimates were attitude, angular velocity, and gyroscope drift for each node in the network. Note that the authors focused on applications where all nodes were attached to a single spacecraft; however, the approach may be able to be generalized to separate free-flying nodes.
3.2.10 Distributed Computer Vision

Another field of research that offers insight for our research problem is distributed computer vision. This research examines how computer vision approaches can both be extended to multi-camera applications and be distributed across network nodes. Radke [396] surveys distributed computer vision, which comprises techniques where nodes make local decisions based only on information from their immediate neighbors. Advantages include no single point of failure, potentially reduced use of communication and computation resources, and increased scalability. Many techniques can be viewed as special types of the consensus problem. Distributed computer vision can be classified by cameras that have overlapping fields of view (FOV), non-overlapping FOVs, or a mixture. The author states that the mixture problem has received very little research attention. A vision graph defines how these FOVs overlap while a network graph defines the communication paths between sensors. SfM is a known and well researched multi-camera calibration process involving overlapping fields of view. With non-overlapping fields of view, the motion of objects sequentially transiting fields of view has been used to solve the distributed camera calibration problem. Wolf and Schlessman [397] also surveyed the existing literature, pointing out that one of the benefits of properly designed distributed systems is that they should scale more easily than centralized or decentralized systems as the number of camera nodes increases.

Tron and Vidal [398] present an important introduction to distributed computer vision algorithms where camera nodes collaborate by fusing local information with information from other cameras. Emphasis is placed on addressing classical computer vision problems: vision-graph discovery and distributed estimation (localization, camera calibration, pose estimation, and object tracking). To accomplish this, the authors note the use of basic distributed algorithms such as spanning-tree algorithms, average-consensus algorithms, and belief propagation. The authors detail how to solve the SfM problem in a distributed fashion.
3.3 Sensor Selection

Sensor selection is an extensive area of research. We highlight here past research which may be useful for our specific problem of camera selection for mobile (i.e., free-flying) robots.

3.3.1 General Sensor Selection Problem

Joshi and Boyd [399] formulate the general sensor selection problem, which selects a subset of sensors whose measurements are a linear function of the state. Due to computational complexity, full enumeration is infeasible for moderately sized sets of sensors. The authors present a convex optimization approach to find a sub-optimal solution to the non-convex problem. This relaxation makes the problem tractable.

Chepuri and Leus [400] present the adaptive sensor selection problem, where selection occurs at each timestep based on the current state estimate, with a specific focus on non-linear measurement models (like our camera localization problem). Here, the most informative sensors are selected based on the Fisher Information Matrix. The sensor selection is incorporated into the state space model as a vector of boolean variables preceding the measurement function. The resulting optimization problem is non-convex, so the authors propose relaxing this boolean constraint to the convex box constraint. The resulting problem is a standard semidefinite programming problem which can be solved efficiently. The authors also suggest a smoothing model to avoid frequent sensor switching.

Potthast et al. [401] examine the impact of feature selection and view planning on object recognition. Specifically, they propose an online method to adaptively select either the features or viewpoints which provide the most information for the recognition task. Choices are driven both by reducing uncertainty in object state and by minimizing the cost of selections. Three methods are proposed and compared: expected loss of entropy, mutual information, and Jeffrey's Divergence. The authors find up to a six-times reduction of runtime due to smart feature selection and increased recognition accuracy by including viewpoint selection.
3.3.2 Sensor Selection for Target Tracking
Information on the objects to be tracked can also be incorporated to perform camera placement or
selection. [402] optimized camera placement based on a distribution of the paths objects may take
in a 3D environment to maximize both the coverage and the resolution of objects in the path.
Isler and Bajcsy [403] take a geometric approach to the sensor selection problem in a target tracking scenario, noting the applicability to camera networks. The authors present the general sensor selection problem as a bicriteria optimization problem (cost and utility) and note the equivalency to the NP-complete Knapsack problem. The authors present an approximation algorithm which selects sensors that guarantee an estimation error that is within a factor of two of the least possible error. Note that the authors include a proof that, for a planar environment, only 6 sensors are sufficient to approximate the utility of all sensors within a factor of two of the least possible error.

Spletzer and Taylor [404] looked at dynamic planning of sensor placement for target tracking scenarios to optimize estimation accuracy. They note that in general this can be formulated as a non-convex optimization problem. However, they note that if the next configuration is dynamically selected at each time step, local minima may be acceptable since the mobile agents only move a small amount. The authors refer to this as "piecewise optimal" trajectories.
3.3.3 Sensor Selection for Cooperative Localization
Mourikis and Roumeliotis look at the problem of optimally scheduling measurements in a resource-
constrained team of robots [405] and present a method to determine an optimal set of sampling
frequencies for a quasi-static formation of robots based on the covariance matrix of the steady state
system. A convex optimization problem is presented and solved to determine the set of frequencies.
Xingbo et al. [406] also exploit the Fisher Information Matrix from the EKF to select sensors. The updated matrix is available at each iteration of the EKF to be used to select sensors at the next time step. Note that this is related to the posterior Cramer-Rao Lower Bound, which identifies the lower bound on the achievable covariance matrix. An "add one sensor at a time" method is used to reduce the computational complexity. Hadzic and Rodriguez [407] also suggest the use of the Cramer-Rao Lower Bound in utility-based node selection.
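To make the greedy, information-based selection idea concrete, the following sketch picks k sensors from a pool of linear measurement rows by repeatedly adding the sensor that most increases the log-determinant of the accumulated information matrix. It is an illustrative heuristic in the spirit of the "add one sensor at a time" approach, not the exact algorithm of [406]; the measurement rows and noise values in the example are arbitrary.

```python
import numpy as np

def greedy_sensor_selection(H_rows, R_diag, prior_info, k):
    """Greedily pick k sensor rows that maximize log det of the resulting
    information matrix J = J0 + sum_i h_i h_i^T / r_i."""
    selected, J = [], prior_info.copy()
    remaining = list(range(len(H_rows)))
    for _ in range(k):
        best_i, best_logdet = None, -np.inf
        for i in remaining:
            h = H_rows[i].reshape(-1, 1)
            logdet = np.linalg.slogdet(J + (h @ h.T) / R_diag[i])[1]
            if logdet > best_logdet:
                best_i, best_logdet = i, logdet
        selected.append(best_i)
        remaining.remove(best_i)
        h = H_rows[best_i].reshape(-1, 1)
        J = J + (h @ h.T) / R_diag[best_i]
    return selected, J

# Example: choose 3 of 10 random scalar sensors observing a 4-dimensional state.
rng = np.random.default_rng(0)
H_rows = rng.normal(size=(10, 4))
R_diag = np.full(10, 0.1)
chosen, J = greedy_sensor_selection(H_rows, R_diag, 1e-3 * np.eye(4), k=3)
```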
Hausman et al. [408] discuss cooperative localization and target tracking using UAVs with vari-
able camera-based sensing topologies to estimate the pose of all vehicles and the target. Selecting
the sensing topology is posed as an optimization problem on the joint EKF covariance of the entire
system.
The selection of relative pose measurements which minimize the uncertainty of the joint pose in a multi-camera calibration scenario has been explored by Bajramovic et al. [409]. The objective is to choose a reduced set of relative pose measurements between cameras to establish an initial pose estimate of the camera network, which is further refined using bundle adjustment. The uncertainty metric used is based on the Blake-Zisserman distribution [261] and assesses the quality of a relative pose measurement. An undirected graph is created using these uncertainty measurements as edge weights, and a shortest triangle paths algorithm is used to find a set of relative pose estimates which minimizes the joint pose uncertainty. Note that relative pose estimates are always computed a priori for all node combinations, and these estimates are required for establishing the uncertainty metric. Also see Bajramovic's thesis for a detailed treatment of both pose estimation algorithms and the uncertainty-based multi-camera calibration problem [410].
Cortes and Serratosa [411] present a formulation where a shortest path algorithm is used to
decide the best relative pose measurements to be used for global pose estimation of a team. Edge
weight, in this case, is based on the regression error for a relative pose measurement extracted using
ICP between two point clouds.
3.3.4 Camera Placement and Selection

Camera placement and selection is a field of research which is related to distributed vision. Specifically, the objective is to choose the view or subset of available views which best contributes to the distributed vision problem. This is also closely related to coverage and visibility problems. It is important to note that choosing the "best" set of views can be defined by different objectives which may be conflicting: uncertainty, robustness, power, etc.

Piciarelli et al. [412] provide an extensive survey of the existing literature on reconfiguration in camera networks. This includes cameras which can be dynamically reconfigured in several configuration spaces: position/orientation, pan/zoom/tilt, camera selection, state selection, and resource allocation. The authors group the goals into coverage, task quality, and resource consumption.
Rowaihy [413] provides a survey of the more general problem of sensor selection where each sensor has a utility and a cost which can be computed. Schemes are classified into coverage approaches, target tracking and localization methods, and single/multiple mission assignment tasks. For our current problem of localization, the authors note that sensors are typically chosen based on the entropy of measurements, the information gain from measurements, or the target position uncertainty. The authors note that this problem is typically as hard as the classic Knapsack Problem, which is NP-complete.

Ercan et al. [414] examine the optimal placement and selection of cameras in a planar environment (i.e., a room) for object tracking using energy-constrained camera sensor networks. The authors note that energy can be minimized by reducing the number of cameras involved in a task. The placement problem is shown to be equivalent to the inverse kinematics problem in robotics, and the camera selection is posed as a combinatorial optimization problem, with sub-optimal methods used to solve it in a computationally constrained environment.

De Rainville et al. [415] approach the more complicated problem of camera placement to cover an unknown arbitrary 3D environment. The concept uses a human-defined virtual camera which is created from the aggregation of a minimal number of individual physical cameras. The authors note that this is related to both the well-known Art Gallery problem and Next Best View algorithms. Here, the minimized cost function is pixel density over a 3D region of interest. A Monte Carlo approach is used to find this pixel density. Optimization is done in an iterative two-stage process where both individual cameras and the joint camera configuration are optimized.

Soro et al. [416] examined camera selection for a static set of deployed fixed cameras and used volume coverage, energy requirements, and view angle metrics to perform optimization.

Within the framework of camera calibration, Brückner et al. [417] estimate the pose of the entire camera network by selecting the shortest triangle path within the vision graph, where nodes are cameras, edges are relative pose measurements, and edge weights are pose uncertainties. Final calibration is done using Bundle Adjustment. Initial overlap of cameras is determined using image-based probability distributions of SIFT descriptors. During the initial stage, this overlap is actively increased by using the camera pan-tilt-zoom actuators.
3.3.5 Coverage and Path Planning

There is an additional area of research which is worth mentioning: path planning. The general path planning task is determining the trajectory of a robot (or team of robots) such that some value is optimized while other constraints are met. We look at two domains of path planning: visibility path planning and coverage path planning.

Designing spacecraft clusters to track objects [418] and perform distributed target state estimation [419] is a large area of research. However, these efforts typically involve spacecraft with large baselines, and target tracking is usually limited to position and velocity, not attitude.

Mavrinac and Chen [420] present a survey on coverage for camera networks, approaching the topic from both geometric area-/volume-based and topology-based perspectives. A specific insight provided by the authors involves the transition topology concept to track objects between views. This framework can be used to select cameras as targets move out of view.

Galceran and Carreras provide a survey on robotic coverage path planning [421], where the task is specified as determining a path which passes over all points in an area or volume while avoiding obstacles. Some algorithms are classified as complete if they can provably guarantee complete coverage of the free space, while others are heuristic if they cannot. They also note that most coverage path planning approaches assume a planar surface for the environment, though several examples for 3D environments are presented, including UAVs. Multi-robot examples are also included.

Schwager et al. [422] present a distributed algorithm to position and orient cameras to cover a planar environment with maximal resolution in a way that provably minimizes a cost function of aggregate information per pixel on each platform. It is noted that this includes non-convex and disconnected environments. Simulations and experiments were conducted using small UAVs. The controller uses gradient descent in each UAV to minimize the cost function. Each UAV must know only its own state and the state of UAVs that have overlapping fields of view.

3.4 Conclusion

Cooperative localization and sensor selection have significant insights to offer satellite swarm state estimation. This chapter has highlighted a variety of research which is useful for the proposed dissertation problems.
Chapter 4
System Filter Design
4.1 Introduction
Future space missions may consist of multiple free-floating interacting satellites in close proximity. These compact satellite swarms may require the ability to determine the position and orientation of individual swarm members in support of mission objectives like inspection, docking, assembly, construction, or the creation of sparse apertures. This chapter presents a framework based on optimal estimation where the spatial state (position, velocity, orientation, and angular velocity) of the n satellites is estimated, known as localization in the robotics community, in a decentralized manner using shared noisy relative pose measurements. This approach attempts to bridge research in spacecraft attitude estimation and robotic cooperative localization.

A variety of swarm satellite missions have been proposed in the past. In the early 2000's, NASA explored the Terrestrial Planet Finder mission consisting of multiple separated reflectors to form a virtual optical aperture [1]. Although ultimately cancelled, the project precipitated significant research exploration into the many-to-many relative spacecraft state estimation problem (see, for example, [2]). Later in that decade, DARPA explored the Future, Fast, Flexible, Fractionated Free-Flying project (better known as F6) to distribute typical spacecraft capabilities and processes among many free-flying satellites in close proximity [3]. More recently, DARPA pursued cellularization of satellites, which attempted to construct a single spacecraft by aggregating smaller cellular satlets [5]. The interest in swarm satellites has continued apace with recent research on the Autonomous Assembly of a Reconfigurable Space Telescope (AAReST) virtual aperture mission [7] as well as swarms of many very small (100 gram) satellites in [9]. With some exceptions, these missions focused on one-to-one pose estimation between spacecraft. Many-to-many pose estimation architectures, referred to as cooperative localization, could provide resiliency, redundancy, and improved accuracy.

Our problem can be stated as follows: given a compact swarm of independent free-flying spacecraft equipped with relative pose measurement systems, cooperatively estimate the position, velocity, orientation, and angular velocity of all spacecraft using shared information in a decentralized manner (see Figure 4.1 for an example configuration). We approach this problem using an EKF framework that processes shared relative pose measurements to localize all spacecraft in the swarm. We specifically use an MEKF to address the standard EKF limitations caused by quaternion states. Each satellite in the group takes relative pose measurements of other group members and broadcasts these relative measurements to the entire group. In addition, each spacecraft is equipped with star trackers, gyroscopes, and a GPS-like absolute positioning system to measure its own absolute pose. We use a relative pose measurement graph to model which relative measurements are available to the MEKF estimators running on each spacecraft. After presenting the filter design, we show the impact of swarm size and measurement graph topology on estimator performance.
4.2 Related Work

Relative state estimation between spacecraft is a mature and active research area (Opromolla et al. provide an excellent recent survey [23]). However, the majority of work in this area has focused on a single satellite observing a single target satellite. The present work is concerned with estimating the state of many satellites in close proximity by fusing relative pose measurements. Some early work in this area occurred under the aforementioned NASA Terrestrial Planet Finder mission; [382] provides a good example. However, the mission assumed that relative orientation would be measured by differencing highly accurate star trackers, and so the Kalman Filter-based cooperative state estimation filter was limited to only translational motion [19] [393]. Other researchers have addressed the swarm relative position estimation problem with decentralized consensus-based approaches [383] [11], non-linear observers [384], covariance intersection [385] [387], EKF-covariance intersection hybrids [386], UKFs [275], reduced-order Kalman Filters [392], and distributed Kalman Filters [390]. There is an extensive body of knowledge in this area including the design of communication topologies [393] and sensing graphs [394].

Figure 4.1: A five-satellite swarm. Arrows indicate relative pose measurements. Note that satellite 2 also collects star tracker measurements.

Less research exists in the literature on the full spacecraft pose (position and attitude) cooperative estimation problem. Crassidis et al. [389] used a covariance intersection technique to fuse quaternion-based attitude estimates with unknown correlation and explored the application to formations of three or more spacecraft. Perhaps the closest to the present work is a recent paper from Fan et al. [395]. The authors present an MEKF-based estimation scheme which runs on networked individual intelligence nodes on the same spacecraft (e.g., star trackers, gyroscopes) to collaboratively estimate the spacecraft state as well as parameters (such as bias) for the individual nodes. However, the scheme is only applied to nodes on a single spacecraft, not a network of multiple spacecraft. We have been unable to find another work that discusses an MEKF-based approach to estimating the full pose of a spacecraft swarm in a distributed manner.

Cooperative localization is an active topic in the robotics community. Cooperative localization concerns using shared information to determine a robot's own state as well as the state of other interacting robots [302]. The concept, as presented in a landmark paper by Roumeliotis and Bekey [297], bears distinct resemblance to our problem: 1) a group of robots moves with their own linear or nonlinear equations of motion, 2) each robot can measure both its own state (i.e., with inertial sensors) and the relative state of other robots, and 3) information can be communicated within the group. Cooperative localization approaches include Kalman Filters, EKFs [347], and distributed EKFs [349]. Cooperative localization has been extended to UAV applications [361], though space applications have not been specifically mentioned. This chapter attempts to bridge the two communities by posing a space-related problem as a cooperative localization problem and using an MEKF to solve it.
4.3 System Model

We begin by presenting the model used to represent the spatial configuration of the spacecraft swarm. The system consists of n satellites undergoing independent rigid body motion. We include several assumptions about the dynamical model. First, we assume each spacecraft can be modelled as a single rigid body with a known and time-invariant inertia tensor, J. Second, we assume that the spacecraft are uncontrolled and, therefore, the only torques and forces they experience are disturbances. Finally, we ignore the possibility of collision where two satellites occupy the same location.

4.3.1 Dynamical Model

The i-th satellite's state is parameterized with the following state variables, where q is a four-element quaternion representing the rotation from the inertial frame to the body frame, ω is the three-element angular velocity vector of the rate of rotation from the inertial frame to the body frame expressed in the body frame, and r and v are the three-element position and velocity vectors of the satellite in the HCW rotating frame:

$$x_i = \begin{bmatrix} q^{B_i I} & \omega^{B_i I}_{B_i} & r^{H} & v^{H} \end{bmatrix} = \begin{bmatrix} q & \omega & r & v \end{bmatrix} \quad (4.1)$$
All satellite states evolve in time according to the nonlinear equation of motion:

$$\dot{x} = \begin{bmatrix} \dot{q} & \dot{\omega} & \dot{r} & \dot{v} \end{bmatrix} = f(x,t) + g(x,t)\,w \quad (4.2)$$

where w is additive zero-mean white noise with a covariance of Q. The time derivative of the inertial attitude is given by the following kinematic relationship [423]:

$$\dot{q} = \frac{1}{2}\begin{bmatrix} \omega \\ 0 \end{bmatrix} \otimes q \quad (4.3)$$
The time derivative of the angular velocity for a body under torque τ and possessing a 3×3 inertia tensor J expressed in the body frame and relative to the center of mass is given by:¹

$$\dot{\omega} = J^{-1}\left(\tau - \omega \times (J\omega)\right) \quad (4.4)$$

If we assume that torques are only caused by random disturbances such as solar radiation, atmospheric drag, and Earth's magnetic field, we can state this relation as:

$$\dot{\omega} = -J^{-1}\left(\omega \times (J\omega)\right) + J^{-1}\eta_t \quad (4.5)$$

where η_t is zero-mean Gaussian white noise with covariance σ_t I_3 representing the disturbances.

¹Note that some authors reverse the terms of the cross product and add a negative sign.
For modeling the translational motion of the satellite, the Hill-Clohessy-Wiltshire equations are used, which evolve the state in the rotating HCW frame. Note that this relationship approximates actual relative motion in a central body gravitational field.

$$\begin{bmatrix} \dot{r} \\ \dot{v} \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 3n_e^2 & 0 & 0 & 0 & 2n_e & 0 \\ 0 & 0 & 0 & -2n_e & 0 & 0 \\ 0 & 0 & -n_e^2 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} r \\ v \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ u_x/m_t \\ u_y/m_t \\ u_z/m_t \end{bmatrix} \quad (4.6)$$
where n_e is the mean motion of the satellite, u = [u_x, u_y, u_z] is a vector of forces expressed in the HCW frame, and m_t is the mass of the satellite. If, as with the rotational motion, we assume the only external forces are random disturbances, then this relation can be stated as:

$$\begin{bmatrix} \dot{r} \\ \dot{v} \end{bmatrix} = A(n_e) \begin{bmatrix} r \\ v \end{bmatrix} + \frac{1}{m_t} \begin{bmatrix} 0 \\ \eta_f \end{bmatrix} \quad (4.7)$$

where A(n_e) denotes the HCW system matrix of Equation (4.6) and η_f is zero-mean Gaussian white noise with a covariance of σ_f I_3. In the remainder of the paper, we omit the n_e argument in the A notation for brevity.
We can now summarize the stochastic dynamical model for a single satellite as:

$$f(x,t) = \begin{bmatrix} \dfrac{1}{2}\begin{bmatrix} \omega \\ 0 \end{bmatrix} \otimes q \\[6pt] -J^{-1}\left(\omega \times (J\omega)\right) \\[6pt] A\begin{bmatrix} r \\ v \end{bmatrix} \end{bmatrix} \quad (4.8)$$
$$g(x,t) = \begin{bmatrix} 0_{4\times1} \\ J^{-1} \\ 0_{3\times1} \\ \frac{1}{m_t} I_3 \end{bmatrix} \quad (4.9)$$

$$w = \begin{bmatrix} 0_{1\times4} & \eta_t & 0_{1\times3} & \eta_f \end{bmatrix}^T \quad (4.10)$$
The state of the entire satellite swarm, x, is a concatenation of the individual states:

$$x = \begin{bmatrix} x_1 & x_2 & \ldots & x_n \end{bmatrix} \quad (4.11)$$
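For illustration, the deterministic part of this single-satellite model (Equations (4.3), (4.5), and (4.7)) can be implemented in a few lines. The sketch below assumes a scalar-last quaternion consistent with the kinematics of [423]; the state layout and function names are choices made here for clarity.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v x]."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def hcw_matrix(n_e):
    """Continuous-time HCW system matrix of Equation (4.6)."""
    A = np.zeros((6, 6))
    A[0:3, 3:6] = np.eye(3)
    A[3, 0] = 3.0 * n_e**2
    A[3, 4] = 2.0 * n_e
    A[4, 3] = -2.0 * n_e
    A[5, 2] = -n_e**2
    return A

def f_single(x, J, n_e):
    """Deterministic dynamics f(x) for one satellite.
    Assumed state layout: q (4, scalar-last), omega (3), r (3), v (3)."""
    q, w, rv = x[0:4], x[4:7], x[7:13]
    # Quaternion kinematics, Eq. (4.3): qdot = 0.5 * Omega(omega) * q
    Omega = np.zeros((4, 4))
    Omega[0:3, 0:3] = -skew(w)
    Omega[0:3, 3] = w
    Omega[3, 0:3] = -w
    q_dot = 0.5 * Omega @ q
    # Euler's equation with disturbance torques omitted, Eq. (4.5)
    w_dot = -np.linalg.solve(J, np.cross(w, J @ w))
    # HCW translational dynamics with disturbance forces omitted, Eq. (4.7)
    rv_dot = hcw_matrix(n_e) @ rv
    return np.concatenate([q_dot, w_dot, rv_dot])
```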
4.3.2 Measurements

The general measurement model for the system is given as:

$$y = h(x,t) + \nu \quad (4.12)$$

where h is a function which computes measurements based on the system state and ν is a vector of additive zero-mean Gaussian noise. Each satellite is capable of measuring its absolute orientation (q_i) in the inertial frame and its absolute position (r_i) in the HCW frame. Here, ν_r and ν_t are zero-mean Gaussian white measurement noise vectors for the orientation and position measurements with covariances σ_star I_3 and σ_gps I_3, respectively.
$$y_{abs} = \begin{bmatrix} q^{BI}_{abs} \\ r^{H}_{abs} \end{bmatrix} = \begin{bmatrix} q \otimes q(\nu_r) \\ r + \nu_t \end{bmatrix} \quad (4.13)$$
Satellites are also capable of measuring the relative pose between itself and another satellite, consisting of a relative orientation measurement (q_rel,ij) between the two body frames and a relative position measurement expressed in the body frame of the observing satellite (r_rel,ij). The relative measurement of the j-th satellite state relative to the i-th observing satellite is given by:

$$y^{rel}_{ij} = \begin{bmatrix} q^{B_j B_i}_{rel,ij} \\ r^{B_i}_{rel,ij} \end{bmatrix} = \begin{bmatrix} q_j \otimes q_i^{-1} \otimes q(\nu_q) \\ T(q_i)\, T_{IH}\, (r_j - r_i + \nu_p) \end{bmatrix} \quad (4.14)$$

where ν_q and ν_p are zero-mean Gaussian white measurement noise vectors for the relative orientation and position measurements with covariances σ_q I_3 and σ_p I_3, respectively.
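A minimal sketch of these noise-free measurement functions is shown below, assuming a scalar-last quaternion and the quaternion product convention of [423]; the frame transformation T_IH is passed in as an argument (identity by default) purely for illustration.

```python
import numpy as np

def skew_m(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

def quat_mult(a, b):
    """Quaternion product a (x) b (scalar-last, composition convention of [423])."""
    av, a4, bv, b4 = a[0:3], a[3], b[0:3], b[3]
    return np.concatenate([a4 * bv + b4 * av - np.cross(av, bv),
                           [a4 * b4 - av @ bv]])

def quat_conj(q):
    """Inverse of a unit quaternion (conjugate)."""
    return np.concatenate([-q[0:3], [q[3]]])

def rot_from_quat(q):
    """Attitude matrix T(q) mapping inertial-frame vectors into the body frame."""
    qv, q4 = q[0:3], q[3]
    return (q4**2 - qv @ qv) * np.eye(3) + 2.0 * np.outer(qv, qv) - 2.0 * q4 * skew_m(qv)

def h_rel(q_i, r_i, q_j, r_j, T_IH=np.eye(3)):
    """Noise-free relative pose of satellite j as seen by satellite i, Eq. (4.14)."""
    q_rel = quat_mult(q_j, quat_conj(q_i))
    r_rel = rot_from_quat(q_i) @ T_IH @ (r_j - r_i)
    return q_rel, r_rel
```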
In addition to absolute and relative pose measurements, each spacecraft measures its own angular velocity using onboard gyroscopes. These angular velocity measurements are corrupted by a time-varying bias with the following dynamics, where η_v and η_u are independent zero-mean Gaussian white noise with covariances of σ_v I_3 and σ_u I_3, respectively [423]:

$$\tilde{\omega}_i = \omega_i + \beta_i + \eta_v \quad (4.15)$$

$$\dot{\beta}_i = \eta_u \quad (4.16)$$
4.4 Cooperative Localization Filter

This section presents a filter running on each spacecraft in the swarm which uses both onboard and shared measurements to estimate the full state of the entire n-satellite swarm. Each instance of the filter is "full order" in that it estimates all 13n states (4 for quaternions, 3 for angular velocity, 3 for position, and 3 for linear velocity). The filter is decentralized in that each satellite produces its own estimate of the swarm state that is independent of the estimates produced by the filters on other spacecraft. The filter uses a Kalman Filter to estimate the translational motion and an MEKF to estimate the rotational motion. A measurement graph is used to determine which relative measurements are available to the filter. We begin with an overview of the MEKF.
4.4.1 Filter Overview

The normalization constraint of quaternions (||q|| = 1) poses challenges to the standard EKF (see [423]), and so the MEKF was designed as a modification that allows consistent estimation of quaternions [424]. The MEKF carries two orientation states. The first, a quaternion, persists between timesteps and represents the estimate of the true quaternion. The second orientation state, which is actually updated by the filter in a given timestep, is an attitude error that represents the error between the current quaternion estimate and the true quaternion. This relation is given by

$$q = q(a) \otimes \hat{q} \quad (4.17)$$

where a is an attitude error representation and q(a) is its corresponding quaternion representation. Any attitude error parameterization can be used; we will use twice the vector portion of the quaternion (2q_{1:3}). At every timestep, the updated attitude error is used to update the quaternion. The attitude error is then set to zero and the process is repeated. In an MEKF, the quaternion is propagated at each timestep using the system model. Conversely, the attitude error state is not propagated. MEKF filters have been used extensively in spacecraft attitude determination and have found use in relative pose estimation for spacecraft as well [381][196][58]. We extend two versions of the MEKF, one that includes gyroscope bias estimation and one that does not, into a framework to estimate the full state of the n-satellite spacecraft swarm.
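The error-quaternion bookkeeping of Equation (4.17), with the attitude error taken as twice the vector part of the error quaternion, can be expressed compactly as follows (scalar-last quaternions assumed; the normalization of the small error quaternion is a common implementation choice rather than something specified above):

```python
import numpy as np

def quat_mult(a, b):
    """Quaternion product a (x) b (scalar-last)."""
    av, a4, bv, b4 = a[:3], a[3], b[:3], b[3]
    return np.concatenate([a4 * bv + b4 * av - np.cross(av, bv),
                           [a4 * b4 - av @ bv]])

def error_quat(a):
    """Small-angle error quaternion from the attitude error a = 2*q_{1:3}."""
    dq = np.concatenate([a / 2.0, [1.0]])
    return dq / np.linalg.norm(dq)

def apply_attitude_error(a, q_hat):
    """Equation (4.17): compose the error quaternion with the reference quaternion."""
    return quat_mult(error_quat(a), q_hat)

# Reset step: after an update, fold a_hat into q_hat and zero the error state, e.g.
# q_hat = apply_attitude_error(a_hat, q_hat); a_hat = np.zeros(3)
```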
4.4.2 Measurement Graph

A relative pose measurement graph G is used to model the measurement sensor topology at any given timestep. Although the present work deals only with constant measurement graphs, G can be a time-varying map. We define G(i,j) as the incidence of a pose measurement taken by the i-th satellite observing the j-th satellite. This can be easily represented by a matrix whose entries are 1 if there is a measurement and 0 if there is not. When i = j, this represents an absolute pose measurement taken with star trackers and GPS-like absolute positioning. Figure 4.2 shows an example of two measurement graphs for a swarm of four satellites (note, self-loops corresponding to absolute pose measurements are not shown for clarity).

Figure 4.2: Examples of relative pose measurement graphs

The corresponding incidence matrices are given as:

$$G_a = \mathbf{1}_{4} \quad (4.18)$$

$$G_b = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{bmatrix} \quad (4.19)$$
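For illustration, the incidence matrices above and the enumeration of measurements they imply can be generated as in the sketch below; the helper names are arbitrary, and the ring constructor reproduces the cyclic graph of Equation (4.19).

```python
import numpy as np

def complete_graph(n):
    """Complete relative measurement graph; self-loops are absolute measurements."""
    return np.ones((n, n), dtype=int)

def ring_graph(n):
    """Cyclic topology: satellite i observes satellite i+1 (mod n), as in Eq. (4.19)."""
    G = np.zeros((n, n), dtype=int)
    for i in range(n):
        G[i, (i + 1) % n] = 1
    return G

def measurement_pairs(G):
    """List of (observer, target) pairs with G(i, j) = 1, including i == j."""
    return [(i, j) for i in range(G.shape[0]) for j in range(G.shape[1]) if G[i, j]]

# Example: the two four-satellite graphs of Figure 4.2.
G_a = complete_graph(4)
G_b = ring_graph(4)
```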
4.4.3 Filter State

For our filter, we follow [17] and [425] in estimating the absolute orientation of every satellite in the swarm. Although this is atypical for spacecraft pose estimation (see, for example, an MEKF-based method that estimates relative state [381]), this convention is more natural for a swarm scenario. Next, although the filter estimates the true state of the satellites in the swarm, the filter actually updates an error state. The quaternion error state, referred to as an attitude error a, has already been given. The remaining error states are given by:

$$\Delta x_{nogyros} = \begin{bmatrix} a & \Delta\omega & \Delta r & \Delta v \end{bmatrix} \quad (4.20)$$

where the error states are defined by:

$$\Delta\omega = \omega - \hat{\omega}, \qquad \Delta r = r - \hat{r}, \qquad \Delta v = v - \hat{v} \quad (4.21)$$

For satellites equipped with onboard gyroscopes, it is often more accurate and computationally efficient to use the gyroscope measurements directly as state estimates. This is often referred to as dynamic model replacement mode, since the rotational dynamics, as given in Equation (4.5), no longer need to be modelled (for a comparison of angular velocity estimation methods, including dynamic model replacement mode, see [426]). However, gyroscopes do have a bias term which then must be included in the filter state. Therefore, the satellite's own error state when using an onboard gyroscope is given by:

$$\Delta x_{gyros} = \begin{bmatrix} a & \Delta\beta & \Delta r & \Delta v \end{bmatrix} \quad (4.22)$$

where the bias error, Δβ, is defined by:

$$\Delta\beta = \beta - \hat{\beta} \quad (4.23)$$

As before, the full error state of the satellite swarm can be written by concatenating the individual states:

$$\Delta x = \left[\left\{\Delta x_i\right\}_{i=1}^{n}\right] \quad (4.24)$$

$$\Delta x_i = \begin{cases} \Delta x_{gyros}, & \text{if self} \\ \Delta x_{nogyros}, & \text{otherwise} \end{cases} \quad (4.25)$$
4.4.4 Measurement Updates

At every timestep, absolute and relative pose measurements are provided to the filter. Position measurements can be used directly by the filter, but quaternion measurements (q̃) must first be converted to attitude error representations (ã), using the propagated state estimate (q̂⁻):

$$\tilde{a}_{abs} = 2\left[\tilde{q}_{abs} \otimes \left(\hat{q}^{-}\right)^{-1}\right]_{1:3} \quad (4.26)$$

For the quaternion measurement of j relative to i, the attitude error can be found with:

$$\tilde{a}_{rel,ij} = 2\left[\tilde{q}_{rel}^{-1} \otimes \hat{q}_j^{-} \otimes \left(\hat{q}_i^{-}\right)^{-1}\right]_{1:3} \quad (4.27)$$

Using these representations, the absolute and relative measurements become:

$$\tilde{y}_i = \begin{bmatrix} \tilde{a}_i \\ \tilde{r}_i \end{bmatrix} \quad (4.28)$$

$$\tilde{y}_{rel,ij} = \begin{bmatrix} \tilde{a}_{rel,ij} \\ \tilde{r}_{rel,ij} \end{bmatrix} \quad (4.29)$$
In order to compute the optimal Kalman gain, the measurement sensitivity matrix is required. This is a linearization of the measurement model, h, evaluated with the current state estimate:

$$H = \left.\frac{\partial h}{\partial x}\right|_{x=\hat{x}} \quad (4.30)$$

The sensitivity matrix for absolute pose measurements of the i-th satellite in a swarm of n satellites is:

$$H_{abs} = \begin{bmatrix} 0_{3\times 12(i-1)} & I_3 & 0 & 0 & 0 & 0_{3\times 12(n-i)} \\ 0_{3\times 12(i-1)} & 0 & 0 & I_3 & 0 & 0_{3\times 12(n-i)} \end{bmatrix} \quad (4.31)$$
The sensitivity matrix for a relative pose measurement between two satellites is given by [17]:

$$H_{rel,ij} = \begin{bmatrix} 0_{6\times 12(i-1)} & H^{i}_{rel,ij} & 0_{6\times 12(n-i)} \end{bmatrix} + \begin{bmatrix} 0_{6\times 12(j-1)} & H^{j}_{rel,ij} & 0_{6\times 12(n-j)} \end{bmatrix} \quad (4.32)$$

where

$$H^{i}_{rel,ij} = \begin{bmatrix} I_3 & 0 & 0 & 0 \\ H_m & 0 & -T(\hat{q}_i)\,T_{IH} & 0 \end{bmatrix} \quad (4.33)$$

$$H^{j}_{rel,ij} = \begin{bmatrix} -T(\hat{q}_i)\,T(\hat{q}_j)^T & 0 & 0 & 0 \\ 0 & 0 & T(\hat{q}_i)\,T_{IH} & 0 \end{bmatrix} \quad (4.34)$$

where

$$H_m = \left[\,T(\hat{q}_i)\,T_{IH}\,(\hat{r}_j - \hat{r}_i)\times\right] \quad (4.35)$$
The full sensitivity matrix for satellite i estimating the state of all n satellites in the swarm is produced by stacking the corresponding H matrix for each relative or absolute measurement ỹ:

$$H_k = \left[\left\{\left\{H_{ij}^T \mid G(i,j) = 1\right\}_{i=1}^{n}\right\}_{j=1}^{n}\right]^T \quad (4.36)$$

where

$$H_{ij} = \begin{cases} H_{abs}, & i = j \\ H_{rel,ij}, & i \neq j \end{cases} \quad (4.37)$$

The measurement noise matrix, R_k, is formed in a block diagonal fashion from the noise matrices (R_abs and R_rel) corresponding to the m measurements:

$$R_k = \mathrm{blkdiag}\left(\left\{\left\{R_{ij} \mid G(i,j) = 1\right\}_{i=1}^{n}\right\}_{j=1}^{n}\right) \quad (4.38)$$

where

$$R_{ij} = \begin{cases} R_{abs}, & i = j \\ R_{rel}, & i \neq j \end{cases} \quad (4.39)$$

where

$$R_{abs} = \mathrm{blkdiag}\left(\sigma^2_{star} I_3,\; \sigma^2_{gps} I_3\right) \quad (4.40)$$

$$R_{rel} = \mathrm{blkdiag}\left(\sigma^2_{q} I_3,\; \sigma^2_{p} I_3\right) \quad (4.41)$$
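The stacking of H_k and R_k over the measurement graph can be sketched as follows. The error-state ordering (attitude error, angular velocity or bias error, position, velocity, 12 states per satellite) is assumed from the filter state definition above; the relative-measurement blocks of Equations (4.32)-(4.35) are supplied through a callback since they depend on the current attitude estimates.

```python
import numpy as np
from scipy.linalg import block_diag

N_ERR = 12  # assumed error states per satellite: a (3), omega or bias (3), r (3), v (3)

def H_abs_block(i, n):
    """Sensitivity of an absolute pose measurement of satellite i, Eq. (4.31)."""
    H = np.zeros((6, N_ERR * n))
    H[0:3, N_ERR * i + 0: N_ERR * i + 3] = np.eye(3)   # attitude-error slot
    H[3:6, N_ERR * i + 6: N_ERR * i + 9] = np.eye(3)   # position-error slot
    return H

def stack_measurement_model(G, H_rel_fn, R_abs, R_rel):
    """Stack H_k and R_k over all measurements with G(i, j) = 1, following
    Eqs. (4.36)-(4.39). H_rel_fn(i, j) must return the 6 x 12n relative-pose
    sensitivity matrix of Eq. (4.32) for the current attitude estimates."""
    n = G.shape[0]
    H_blocks, R_blocks = [], []
    for i in range(n):
        for j in range(n):
            if not G[i, j]:
                continue
            if i == j:
                H_blocks.append(H_abs_block(i, n))
                R_blocks.append(R_abs)
            else:
                H_blocks.append(H_rel_fn(i, j))
                R_blocks.append(R_rel)
    return np.vstack(H_blocks), block_diag(*R_blocks)
```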
4.4.5 Kalman Gain and Estimate Update

The Kalman gain computation follows the standard update equation as given by:

$$K_k = P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1} \quad (4.42)$$

Similarly, the covariance is updated using the standard method. Note that the Joseph form has been used here to address numerical issues:

$$P_k^+ = \left(I - K_k H_k\right) P_k^- \left(I - K_k H_k\right)^T + K_k R_k K_k^T \quad (4.43)$$

The translational and angular velocity states are updated using the standard Kalman update:

$$\hat{x}_k^+ = \hat{x}_k^- + K_k\left(\tilde{y}_k - h\left(\hat{x}_k^-\right)\right) \quad (4.44)$$

Unique to the MEKF, the attitude error state is used to update the quaternion as follows [423]:

$$\hat{q}_k^+ = \begin{bmatrix} \hat{a}_k^+/2 \\ 1 \end{bmatrix} \otimes \hat{q}_k^- \quad (4.45)$$

Following this, â_k is set to zero for the next iteration.
4.4.6 Covariance Propagation

Like an EKF, the MEKF uses a linearized state model to propagate the covariance between timesteps. The discrete form of covariance propagation is given by:

$$P_{k+1}^- = \Phi_k P_k^+ \Phi_k^T + Q_k \quad (4.46)$$

where Φ_k is the discrete state transition matrix and Q_k is the discrete process noise added during the given timestep. In order to find these values to use in propagation, we must first linearize the state equations given by f and g:

$$\Delta\dot{x} = F\,\Delta x + G\,w \quad (4.47)$$

$$F = \left.\frac{\partial f}{\partial x}\right|_{x=\hat{x}}, \qquad G = \left.\frac{\partial g}{\partial x}\right|_{x=\hat{x}} \quad (4.48)$$

The linearized F can be used to propagate the state transition matrix, Φ:

$$\dot{\Phi} = F\,\Phi \quad (4.49)$$
Because there are different states for self-estimates and estimates of other spacecraft, we require two forms of linearization:

$$F_{gyros} = \begin{bmatrix} -[\hat{\omega}\times] & -I_3 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad (4.50)$$

$$F_{nogyros} = \begin{bmatrix} -[\hat{\omega}\times] & I_3 & 0 \\ 0 & J^{-1}\left([J\hat{\omega}\times] - [\hat{\omega}\times]J\right) & 0 \\ 0 & 0 & 0 \end{bmatrix} \quad (4.51)$$

$$G_{gyros} = \begin{bmatrix} I_3 & 0 & 0 & 0 \\ 0 & I_3 & 0 & 0 \\ 0 & 0 & 0_{3\times3} & 0 \\ 0 & 0 & 0 & \frac{1}{m_t} I_3 \end{bmatrix} \quad (4.52)$$

$$G_{nogyros} = \begin{bmatrix} 0_{3\times3} & 0 & 0 & 0 \\ 0 & J^{-1} & 0 & 0 \\ 0 & 0 & 0_{3\times3} & 0 \\ 0 & 0 & 0 & \frac{1}{m_t} I_3 \end{bmatrix} \quad (4.53)$$
In order to determine the discrete process noise, we examine the continuous process noise of the additive Gaussian white noise, w:

$$w_{gyros} = \begin{bmatrix} \eta_v & \eta_u & 0_{1\times3} & \eta_f \end{bmatrix}^T \quad (4.54)$$

$$Q_{gyros} = \begin{bmatrix} \sigma_v I_{3\times3} & 0 & 0 & 0 \\ 0 & \sigma_u I_{3\times3} & 0 & 0 \\ 0 & 0 & 0_{3\times3} & 0 \\ 0 & 0 & 0 & \sigma_t I_{3\times3} \end{bmatrix} \quad (4.55)$$
$$w_{nogyros} = \begin{bmatrix} 0_{1\times3} & \eta_t & 0_{1\times3} & \eta_f \end{bmatrix}^T \quad (4.56)$$

$$Q_{nogyros} = \begin{bmatrix} 0_{3\times3} & 0 & 0 & 0 \\ 0 & \sigma_t I_{3\times3} & 0 & 0 \\ 0 & 0 & 0_{3\times3} & 0 \\ 0 & 0 & 0 & \sigma_f I_{3\times3} \end{bmatrix} \quad (4.57)$$
The linearized MEKF state equations for a filter on the i-th satellite, which is estimating its own state and the state of all other members of an n-satellite swarm, are given by concatenating the various F, G, and Q matrices:

$$F = \mathrm{blkdiag}\left(\left\{\begin{cases} F_{gyros}(\omega_j), & \text{if self} \\ F_{nogyros}(\omega_j), & \text{otherwise} \end{cases}\right\}_{j=1}^{n}\right) \quad (4.58)$$

$$G = \mathrm{blkdiag}\left(\left\{\begin{cases} G_{gyros}(\omega_j), & \text{if self} \\ G_{nogyros}(\omega_j), & \text{otherwise} \end{cases}\right\}_{j=1}^{n}\right) \quad (4.59)$$

$$Q = \mathrm{blkdiag}\left(\left\{\begin{cases} Q_{gyros}(\omega_j), & \text{if self} \\ Q_{nogyros}(\omega_j), & \text{otherwise} \end{cases}\right\}_{j=1}^{n}\right) \quad (4.60)$$
If the sampling interval is small enough, discrete-time versions Φ_k and Q_k can be found as follows:

$$\Phi_k = I + F\,\Delta t + \frac{1}{2}F^2\,\Delta t^2 + \ldots \quad (4.61)$$

$$Q_k = \Delta t\; G\,Q\,G^T \quad (4.62)$$

For a more accurate estimate, numerical solutions for Φ_k and Q_k can be found using a method by van Loan as outlined in [423][196]. This numerical approach has the additional advantage of finding the discrete process noise matrix as well. This solution must be found at each timestep. First, a matrix is created using F, G, and Q. We call this A:

$$A = \begin{bmatrix} -F & G\,Q\,G^T \\ 0 & F^T \end{bmatrix} \Delta t \quad (4.63)$$
The matrix exponential is found numerically in software, and Φ_k and Q_k can be taken directly from the result:

$$B = e^{A} = \begin{bmatrix} \cdots & \Phi_k^{-1} Q_k \\ 0 & \Phi_k^T \end{bmatrix} \quad (4.64)$$
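A sketch of this van Loan construction, using a numerical matrix exponential, is shown below; scipy is used here purely as one convenient way to evaluate e^A.

```python
import numpy as np
from scipy.linalg import expm

def van_loan_discretize(F, G, Q, dt):
    """Discrete state transition matrix Phi_k and process noise Q_k via the
    van Loan method of Equations (4.63)-(4.64)."""
    n = F.shape[0]
    A = np.zeros((2 * n, 2 * n))
    A[0:n, 0:n] = -F
    A[0:n, n:] = G @ Q @ G.T
    A[n:, n:] = F.T
    B = expm(A * dt)
    Phi = B[n:, n:].T          # Phi_k is the transpose of the lower-right block
    Qk = Phi @ B[0:n, n:]      # Q_k = Phi_k * (Phi_k^{-1} Q_k)
    return Phi, Qk
```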
4.4.7 State Propagation

As with the covariance, the MEKF propagates the state estimate between timesteps based on the system model. For the present application, we use multiple forms of propagation for the different types of state. Position and linear velocity are propagated using an analytic solution to the Clohessy-Wiltshire equations. Quaternions are propagated using a discrete model. Angular velocity is propagated using the matrix exponential solution to the linearized state model, which is computed numerically at each timestep. These methods are summarized below.

For position and velocity, an analytical solution for the state transition matrix is used:

$$\begin{bmatrix} \hat{r}_{k+1} \\ \hat{v}_{k+1} \end{bmatrix} = \Phi_{\Delta t,k} \begin{bmatrix} \hat{r}_k \\ \hat{v}_k \end{bmatrix} = \begin{bmatrix} \Phi_{rr} & \Phi_{rv} \\ \Phi_{vr} & \Phi_{vv} \end{bmatrix}_k \begin{bmatrix} \hat{r}_k \\ \hat{v}_k \end{bmatrix} \quad (4.65)$$
$$\Phi_{rr} = \begin{bmatrix} 4 - 3\cos n_e\Delta t & 0 & 0 \\ 6\left(\sin n_e\Delta t - n_e\Delta t\right) & 1 & 0 \\ 0 & 0 & \cos n_e\Delta t \end{bmatrix} \quad (4.66)$$

$$\Phi_{rv} = \begin{bmatrix} \frac{1}{n_e}\sin n_e\Delta t & \frac{2}{n_e}\left(1 - \cos n_e\Delta t\right) & 0 \\ \frac{2}{n_e}\left(\cos n_e\Delta t - 1\right) & \frac{1}{n_e}\left(4\sin n_e\Delta t - 3 n_e\Delta t\right) & 0 \\ 0 & 0 & \frac{1}{n_e}\sin n_e\Delta t \end{bmatrix} \quad (4.67)$$

$$\Phi_{vr} = \begin{bmatrix} 3 n_e\sin n_e\Delta t & 0 & 0 \\ 6 n_e\left(\cos n_e\Delta t - 1\right) & 0 & 0 \\ 0 & 0 & -n_e\sin n_e\Delta t \end{bmatrix} \quad (4.68)$$

$$\Phi_{vv} = \begin{bmatrix} \cos n_e\Delta t & 2\sin n_e\Delta t & 0 \\ -2\sin n_e\Delta t & 4\cos n_e\Delta t - 3 & 0 \\ 0 & 0 & \cos n_e\Delta t \end{bmatrix} \quad (4.69)$$
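These blocks can be assembled directly, as in the sketch below; the example values reuse the mean motion listed in Table 4.1 and are otherwise arbitrary.

```python
import numpy as np

def cw_state_transition(n_e, dt):
    """Clohessy-Wiltshire state transition matrix of Equations (4.65)-(4.69)."""
    s, c = np.sin(n_e * dt), np.cos(n_e * dt)
    Phi_rr = np.array([[4 - 3 * c, 0, 0],
                       [6 * (s - n_e * dt), 1, 0],
                       [0, 0, c]])
    Phi_rv = np.array([[s / n_e, 2 * (1 - c) / n_e, 0],
                       [2 * (c - 1) / n_e, (4 * s - 3 * n_e * dt) / n_e, 0],
                       [0, 0, s / n_e]])
    Phi_vr = np.array([[3 * n_e * s, 0, 0],
                       [6 * n_e * (c - 1), 0, 0],
                       [0, 0, -n_e * s]])
    Phi_vv = np.array([[c, 2 * s, 0],
                       [-2 * s, 4 * c - 3, 0],
                       [0, 0, c]])
    return np.block([[Phi_rr, Phi_rv], [Phi_vr, Phi_vv]])

# Example: propagate a relative state for 10 s at the mean motion of Table 4.1.
x_next = cw_state_transition(1.08e-3, 10.0) @ np.array([5.0, 0, 0, 0, 0.01, 0])
```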
Quaternions are propagated using a discrete model given by [423]:

$$\hat{q}_{k+1} = \Phi_{q,k}(\hat{\omega}_k)\, \hat{q}_k \quad (4.70)$$

$$\Phi_{q,k}(\hat{\omega}_k) = \begin{bmatrix} \cos\left(\frac{1}{2}|\hat{\omega}_k|\Delta t\right) I_3 - [\hat{\psi}\times] & \hat{\psi} \\ -\hat{\psi}^T & \cos\left(\frac{1}{2}|\hat{\omega}_k|\Delta t\right) \end{bmatrix} \quad (4.71)$$

$$\hat{\psi} = \frac{\sin\left(\frac{1}{2}|\hat{\omega}_k|\Delta t\right)\hat{\omega}_k}{|\hat{\omega}_k|} \quad (4.72)$$
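A direct implementation of this discrete propagation (scalar-last quaternion assumed, with a guard for near-zero angular rate added as an implementation convenience) is sketched below:

```python
import numpy as np

def quat_propagate(q_hat, w_hat, dt):
    """Discrete quaternion propagation of Equations (4.70)-(4.72),
    scalar-last quaternion."""
    w_norm = np.linalg.norm(w_hat)
    if w_norm < 1e-12:
        return q_hat
    half = 0.5 * w_norm * dt
    psi = np.sin(half) * w_hat / w_norm
    psi_x = np.array([[0.0, -psi[2], psi[1]],
                      [psi[2], 0.0, -psi[0]],
                      [-psi[1], psi[0], 0.0]])
    Omega = np.zeros((4, 4))
    Omega[0:3, 0:3] = np.cos(half) * np.eye(3) - psi_x
    Omega[0:3, 3] = psi
    Omega[3, 0:3] = -psi
    Omega[3, 3] = np.cos(half)
    q_next = Omega @ q_hat
    return q_next / np.linalg.norm(q_next)
```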
For propagating a satellite's own state, a continuous numerical propagation is used to increase the estimation accuracy. Note also that the angular velocity sensed directly from the gyroscopes and adjusted with the current bias estimate is used for this propagation.

Angular velocity is propagated using a matrix exponential of the linearized system model to find the state transition matrix. This is computed numerically at each timestep:

$$\hat{\omega}_{k+1} = \Phi_{\omega,k}(\hat{\omega}_k)\, \hat{\omega}_k \quad (4.73)$$

$$\Phi_{\omega,k}(\hat{\omega}_k) = e^{\,J_j^{-1}\left([J_j\hat{\omega}\times] - [\hat{\omega}\times]J_j\right)\Delta t} \quad (4.74)$$

For estimating the satellite's own angular velocity, when gyroscopes are used in dynamic model replacement mode, there is no angular velocity propagation. Instead the bias, which is assumed to be slowly varying, is simply propagated forward with no change:

$$\hat{\beta}_{k+1} = \hat{\beta}_k \quad (4.75)$$
4.4.8 Validation Gate

Pose measurements produced through the stereo vision and point cloud registration process are modelled as including Gaussian random noise. However, this is only an approximation. The real relative pose measurements contain occasional outliers that have the potential to induce divergence in the EKF. Without a robust outlier rejection scheme, the filter was seen to process spurious measurements, causing filter divergence. Furthermore, because the point cloud registration is initialized with the current state estimate at each timestep, the relative pose measurements become further corrupted, leading to an unrecoverable filter state. At this point, a global registration scheme would be required to re-initialize the filter (see Sharma and D'Amico for examples [204]).

In an attempt to avoid divergence, a standard validation gate was included within the filter which has the effect of rejecting relative pose measurements that are deemed unlikely [427]. The validation gate is established as follows. First, the square of the Mahalanobis distance is computed from the measurement residual and the residual covariance:

$$d^2 = \left(\tilde{y}_k - \hat{y}_k\right)^T S^{-1} \left(\tilde{y}_k - \hat{y}_k\right) \quad (4.76)$$

where

$$S = H_k P_k^- H_k^T + R_k \quad (4.77)$$

Next, this value (d²), which has a χ² distribution, is compared to a threshold selected at design time to detect outliers outside some probability bound. In the present method, the validation gate is only used for the relative pose measurements, not absolute measurements. An example of the validation gate-based outlier rejection is seen in Figure 4.3, where filter divergence is avoided by rejecting outliers caused by poor illumination of satellite number 3 observed by satellite number 1.
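The gate itself reduces to a few lines; the sketch below uses a chi-square quantile for the threshold, with the 99.7% probability bound shown here being an illustrative choice rather than the value used in the experiments.

```python
import numpy as np
from scipy.stats import chi2

def gate_measurement(innovation, H, P, R, prob=0.997):
    """Validation gate of Equations (4.76)-(4.77): accept the measurement only
    if its squared Mahalanobis distance falls below the chi-square threshold."""
    S = H @ P @ H.T + R
    d2 = innovation @ np.linalg.solve(S, innovation)
    threshold = chi2.ppf(prob, df=innovation.size)
    return d2 <= threshold, d2
```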
4.4.9 Filter Summary

This section has presented the details of the cooperative localization filter based on an MEKF implementation. We summarize its operation here. The filter is initialized with state and covariance estimates (x_0 and P_0). These estimates, along with the measurement sensitivity matrix H_k and noise covariance matrix R_k for the current timestep, are used to compute the optimal Kalman gain K. This gain is then used to blend the measurements (ỹ_k) with the current state estimate to produce the optimal state estimate (x̂_k). This state estimate is then propagated to the next timestep using the system model, and the cycle repeats. More details on the standard Kalman Filter implementation can be found in, for example, [423].
Figure 4.3: Outlier rejection example using a validation gate (quaternion and position innovations over time, with detected outliers highlighted). Filled diamonds are quaternion innovations, empty diamonds are position innovations. Red boxes indicate the measurement was rejected.
Table 4.1: Simulation parameters

  Parameter                           Symbol     Value      Units
  Principal Moment of Inertia (X)     J_X        1.67       kg m^2
  Principal Moment of Inertia (Y)     J_Y        8.33       kg m^2
  Principal Moment of Inertia (Z)     J_Z        3.67       kg m^2
  Mean Motion                         n_e        1.08e-3    s^-1
  Gyro Bias Noise                     sigma_u    3.16e-10   s^-3/2
  Gyro Bias Noise                     sigma_v    3.16e-7    s^-1/2
  Abs Attitude Measurement Noise      sigma_star 2.90e-5    rad
  Rel Attitude Measurement Noise      sigma_q    0.03       rad
  Abs Position Measurement Noise      sigma_GPS  0.01       m
  Rel Position Measurement Noise      sigma_p    0.01       m
  Quaternion Process Noise            sigma_pq   5e-4       s^-1
  Angular Velocity Process Noise      sigma_t    5e-6       s^-2
  Position Process Noise              sigma_pr   5e-5       m/s
  Velocity Process Noise              sigma_f    1e-8       m s^-1
  Maximum Initial Position            r_0        10         m
  Maximum Initial Velocity            v_0        0.005      m/s
  Maximum Initial Angular Velocity    omega_0    0.006      s^-1
4.5 Simulations and Results

To assess the performance of the cooperative localization filter, this section presents results for three simulations. The first is a baseline scenario using a 10-satellite swarm with a complete measurement graph. The second explores the performance impact of the number of satellites in the swarm on estimation accuracy. The third examines the performance impact of the measurement graph topology. All simulations model a low Earth orbit mission at approximately 622 km altitude. The parameters used for all three scenarios are found in Table 4.1.
4.5.1 Baseline Scenario

The first scenario shows results for the estimation of a swarm of 10 satellites using a complete measurement graph. Errors for both self-estimation (a satellite estimating its own state) and the estimates of other spacecraft states are shown in Figure 4.5. Satellites also produce an estimate of the bias for their own gyroscopes, the error of which is shown in Figure 4.4.
Figure 4.4: Bias estimation error for the baseline scenario (self gyroscope bias estimate error, in rad/s, over 500 seconds).
[Figure: Estimation performance over 500 seconds for a 10-satellite swarm with a complete measurement graph; panels show self quaternion estimate error, self position estimate error, and the quaternion and position estimate errors of the target satellites observed by satellite 3.]

Figure 4.5: (a) and (b) show each satellite's estimation error of its own orientation and position as well as the covariance estimates of those states. (c) and (d) show the estimation errors for all 9 targets observed by satellite number 3 in the 10-satellite swarm. Red lines indicate covariance estimates of the corresponding states.
4.5.2 Performance Impact: Number of Satellites

The second scenario looks at the impact of the number of satellites on state estimation using a complete graph sensing topology (G = 1). Four swarm populations were examined: 3, 6, 9, and 12 satellites. A 500-second scenario for each of these was run 30 times with random initial conditions for each run. The average magnitudes of orientation and position error for the same three satellites, as estimated by satellite number 3, in swarms of increasing population are shown in Figure 4.6. Although a swarm with 6 satellites greatly improves the orientation accuracy relative to a 3-satellite swarm, adding additional members to the swarm appears to have diminishing returns.
4.5.3 Performance Impact: Measurement Graph Topology

The next performance study shows the impact that the sensing topology has on relative estimation performance. We explore four topologies, shown in Figure 4.7. The first is a complete graph where all relative measurements between every node are used. The second, a star topology, uses only relative measurements to an arbitrarily chosen landmark node. The third, referred to as a chain, uses a minimum spanning tree where all nodes are connected by only a single relative measurement. The final topology, a duplex chain, uses the same minimum spanning tree but relative measurements in both directions are used. The same size swarm (n = 6) is used in all four cases. As before, the scenario was run 30 times with random initial conditions. Average error magnitudes are shown in Figure 4.8.
4.5.4 Computational Performance

The results above show the performance impact in terms of relative estimation accuracy. However, larger swarms and measurement topologies with many connections will use more computational resources for estimation. In Figures 4.9 and 4.10, execution times per filter are shown for the corresponding scenarios. The rapid growth in computation time as a function of swarm size is evident in Figure 4.10.
[Figure: Performance impact on estimation error of the number of satellites (30 runs); panels plot orientation error (radians) and position error (meters) versus time (seconds) for swarms of n = 3, 6, 9, and 12.]

Figure 4.6: This figure shows the magnitude of orientation and position estimation error for three satellites (1, 2, and 3) which are estimated by satellite number 3. (a) and (b) correspond to a 3-satellite swarm, (c) and (d) to a 6-satellite swarm, (e) and (f) to a 9-satellite swarm, and (g) and (h) to a 12-satellite swarm. Note that the orientation error of satellite 3 (its own orientation) is very low since high accuracy star trackers are used.
Figure 4.7: Relative pose measurement graphs: a) complete, b) star, c) chain, and d) duplex chain.
4.6 Conclusions

Cooperative localization methods for satellite swarms can enable future satellite mission concepts. This chapter presented an MEKF-based method to fuse relative pose measurements with onboard sensing to produce decentralized estimates of the entire spacecraft swarm state.
[Figure 4.8 plots: Performance Impact over 500 Seconds on Estimation Error of the Relative Pose Measurement Graph (30 Runs) — orientation error (radians) and position error (meters) versus time (seconds) for satellites 1 through 4, with curves for satellites 1 through 6 in each panel.]
Figure 4.8: This figure shows the magnitude of orientation and position estimation error for six satellites which are estimated by satellite number 3. (a) and (b) correspond to a complete measurement graph, (c) and (d) to a star topology, (e) and (f) to a chain graph, and (g) and (h) to a duplex chain graph. Note that the orientation error of satellite 3 (its own orientation) is very low since high-accuracy star trackers are used.
Figure 4.9: Filter execution time as a function of topology (averaged over 30 runs)
Figure 4.10: Filter execution time as a function of number of satellites in the swarm (averaged over
30 runs)
Chapter 5
Pose Estimation
5.1 Introduction
The previous chapter introduced a filter which fused relative pose measurements between members of a satellite swarm to estimate the overall system state. This chapter presents two commonly used vision-based relative pose measurement pipelines as well as the capability to generate synthetic images based on the scenario. The chapter concludes by using these two pose estimation schemes to demonstrate the performance of the filter.
The literature on relative spacecraft pose estimation includes a large number of approaches to remotely determine the pose of a target spacecraft. Opromolla et al. [207] provide an excellent and comprehensive recent survey of the area. Example approaches use time-of-flight LIDAR cameras [213][17], stereo cameras [155][54], and monocular cameras [204][154]. In the present work, we use a monocular technique (model-based tracking) and a stereo-vision technique (point cloud registration).
The first pose measurement concept we introduce is based on fiducial tags and model-based edge tracking. A fiducial tag which is easily recognized within the image is used to initialize the pose. Specifically, we use the fiducial-based AprilTag system, which provides relative position and orientation for a single fiducial (also known as a tag) [428]. This form of pose estimation is often known as cooperative pose estimation since the target is built in such a way (i.e., with a fiducial) to support it. The initial coarse pose from the tag initializes a fine pose tracker known as model-based tracking [222], which aligns a projected wireframe model with edges extracted from a single image [429]. This entire pose estimation concept uses single images, an approach often referred to as monocular vision.
The second pose measurement concept uses two cameras in a stereo configuration to determine the pose of the target object via point cloud registration. Stereo vision is a sensing method which uses two separated cameras to perceive depth in a scene. This passive method has been used extensively in robotics, including space applications [88]. Depth information is used to create point clouds which are then registered (i.e., aligned in orientation and position) with a reference point cloud of the target spacecraft generated a priori and stored onboard the spacecraft. For this registration, we implement the ICP algorithm [430], typically used for tracking once a good initial relative pose estimate is available (as the EKF itself requires good initialization, we assume such an initial pose is available via either global point cloud registration methods [212] or other image-based methods [117][206][207]). The primary method for tracking relative spacecraft pose after initialization is ICP [23]. Like our present work, authors have combined stereo vision point clouds and ICP to recover pose [205]. Other authors have used ICP with clouds generated from depth cameras [145][210][206][207]. As in the present work, common filtering and smoothing techniques such as the EKF have been used to fuse these individual pose estimates [58][395].
In order to implement these pose estimation algorithms, two rendering pipelines have been established to generate synthetic monocular and stereo images. One of the challenges of spacecraft pose estimation is addressing the strong reflections, stark shadows, and complex appearances caused by a single dominating light source, the Sun. As a result, researchers have created experimental facilities with realistic lighting or, as shown in this work, have turned to increasingly photo-realistic rendering to produce large libraries of images [49][136][137][142][139]. Several authors have used high-fidelity rendering software to simulate proximity spacecraft [146]. The present work follows a similar course and uses the open-source, high-fidelity Blender rendering engine (http://www.blender.org/).
5.2 Image Rendering
5.2.1 Overview
This section summarizes the method used to create realistic images of spacecraft to facilitate pose estimation and cooperative localization. As discussed previously, we are simulating two types of sensors: monocular and stereo. Although the same rendering process is used, each has different camera frames (F_C) and camera parameters. For each timestep, the true state of all satellites (x), the inertial Sun direction, the camera frame F_C, and the camera parameters (focal length, focal plane size, image size, and baseline if applicable, shown in Table 5.1) are supplied to the rendering pipeline to construct the virtual scene. These images are then rendered, indexed, and saved to be used during state estimation. Note that the camera parameters for the monocular sensors are specifically chosen to provide 90 degree fields of view such that six body-fixed, face-mounted cameras provide full spherical visibility to each spacecraft.
Table 5.1: Camera parameters for rendering monocular and stereo images.
Parameter Monocular Stereo
Focal Length 14 mm 45 mm
Focal Plane Size 32 mm 32 mm
Image Size 2000 x 2000 px 1024 x 1024 px
Baseline N/A 0.5 mm
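As a concrete illustration of how these parameters map onto the rendering engine, the following is a minimal sketch using Blender's Python API (bpy). Only the focal length, sensor size, and resolution come from Table 5.1; the object name "Camera", the pose values, and the output path are placeholder assumptions rather than the actual scene setup used in this work.

import bpy

# Configure the monocular camera from Table 5.1 (14 mm focal length, 32 mm focal plane, 2000 x 2000 px).
scene = bpy.context.scene
cam_obj = bpy.data.objects["Camera"]
cam_obj.data.lens = 14.0          # focal length (mm)
cam_obj.data.sensor_width = 32.0  # focal plane size (mm)
scene.render.resolution_x = 2000
scene.render.resolution_y = 2000

# Place the camera according to the true relative state supplied by the simulation (placeholder values)
cam_obj.rotation_mode = 'QUATERNION'
cam_obj.rotation_quaternion = (1.0, 0.0, 0.0, 0.0)  # (w, x, y, z)
cam_obj.location = (0.0, -10.0, 0.0)                # meters

# Render the indexed frame and save it for later use in state estimation
scene.render.filepath = "//renders/frame_0001.png"
bpy.ops.render.render(write_still=True)

The stereo pipeline follows the same pattern with two such cameras offset by the baseline and the stereo parameters from Table 5.1.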
5.2.2 Camera Frames
During simulation, the observed satellite must be in the eld of view of the observer sensor frame
to provide a pose measurement. This can be accomplished in several ways including orientating the
observing spacecraft to align a sensor with the target or equipping the observing spacecraft with a
number of sensors which collectively provide spherical coverage.
For monocular pose estimation, we assume that cameras are body-fixed. Each satellite is equipped with six sensors (i.e., cameras) which are centered on each of the six faces. The rotation from the body frame to the sensor frame is given by q^{C_i}_{B}, where i = 1, ..., 6 corresponds to the six faces (+z, +x, +y, -x, -y, -z). The relative position of the sensor in the body frame is given by r^{B}_{C_i B}, or succinctly as r_{C_i}. Without loss of generality, we assume all satellites are identical, including the same set of six visual sensors. We refer to the numbered camera frames as F_{C_i}, where i = 1, ..., 6.
For stereo-vision pose estimation, we assume that cameras can dynamically track a target;
for the present work, we simplify this modeling to create a virtual frame co-located with the
observing spacecraft center of mass and aligned such that the sensor boresight is co-incident with
the relative position vector between the observed and observer spacecraft (conceptually, a target
tracking camera).
The camera frame on the i-th spacecraft observing the j-th spacecraft (written F_{C_ij}) can be defined with T^{C_ij}_{B_i} representing a transformation from the observer body frame to the observer camera frame:

\[
T^{C_{ij}}_{B_i} = T\big(q^{C_{ij}}_{B_i}\big) = T\left( \begin{bmatrix} \dfrac{\hat{z} \times \hat{r}_{ij}}{2\sqrt{(1+\cos\theta)/2}} \\ \sqrt{(1+\cos\theta)/2} \end{bmatrix} \right) \tag{5.1}
\]

where

\[
\cos\theta = \hat{z} \cdot \hat{r}_{ij} \tag{5.2}
\]
This produces the shortest-arc rotation from the observer satellite frame to a frame in which \hat{z} points in the direction of the target [431]. Note that \hat{r}_{ij} has coordinates in the i-th body frame, F_{B_i}. The orientation and position of the target spacecraft in the camera frame, which are provided to an image rendering program, can then be found from the relative orientation and position (q_{ij} and r_{ij}) as:
\[
q^{B_j}_{C_{ij}} = q_{ij} \otimes \big(q^{C_{ij}}_{B_i}\big)^{-1} \tag{5.3}
\]

\[
r^{C_{ij}} = T\big(q^{C_{ij}}_{B_i}\big)\, r^{B_i}_{ij} \tag{5.4}
\]
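A compact numerical sketch of Equations 5.1-5.4 is given below using NumPy and SciPy rotations. The scalar-last quaternion layout and the composition order are tied to SciPy's active-rotation convention and are therefore written differently than Equation 5.3; matching them to the quaternion convention used in this work is left as an assumption.

import numpy as np
from scipy.spatial.transform import Rotation as R

def tracking_camera_rotation(r_ij_body):
    # Shortest-arc rotation (Eqs. 5.1-5.2) taking the observer body frame to a frame
    # whose +z axis points along the relative position vector (expressed in the body frame).
    z_hat = np.array([0.0, 0.0, 1.0])
    r_hat = r_ij_body / np.linalg.norm(r_ij_body)
    cos_theta = float(np.dot(z_hat, r_hat))              # Eq. 5.2
    half = np.sqrt((1.0 + cos_theta) / 2.0)              # cos(theta/2); degenerate if r_hat is anti-parallel to z
    vec = np.cross(z_hat, r_hat) / (2.0 * half)          # quaternion vector part
    return R.from_quat([vec[0], vec[1], vec[2], half])   # SciPy uses scalar-last (x, y, z, w)

def target_pose_in_camera(q_ij, r_ij_body):
    # Eqs. 5.3-5.4 under the stated convention: q_ij maps target-body coordinates to
    # observer-body coordinates; the returned rotation maps target-body to camera coordinates.
    q_cam = tracking_camera_rotation(r_ij_body)
    q_target_cam = q_cam * q_ij                          # body_j -> body_i -> camera
    r_target_cam = q_cam.apply(r_ij_body)                # Eq. 5.4
    return q_target_cam, r_target_cam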
5.2.3 Spacecraft Models
To test the monocular and stereo-vision pose estimation pipelines, two different models are used. For monocular vision, using the wide field-of-view cameras, the spacecraft models are cube-shaped satellites with 40 cm sides. Each side includes a unique AprilTag identifier. An example rendering including four such satellites is shown in Figure 5.1.
Rendering for the stereo images uses a freely available CAD model of the Maven spacecraft
Figure 5.1: Rendered image of several satellites viewed by a single sensor.
Figure 5.2: Two examples of path-traced rendering of the Maven spacecraft under (a) solar-only
illumination and (b) combined solar and global background illumination. Note: above images have
been brightened for readability.
produced by NASA (https://nasa3d.arc.nasa.gov/). The model was chosen because it exhibited both complexity (modeling of sub-components on the spacecraft) and a lack of symmetry (which simplifies the pose estimation process by reducing the potential for orientation ambiguity). Material properties representing spacecraft surfaces including brushed aluminum, insulation, and glossy surfaces were selected. An example rendering of the Maven spacecraft is shown in Figure 5.2.
5.3 Monocular Pose Estimation
Monocular pose estimation has two stages. First, a fiducial detector is used to identify the cooperative fiducials on the satellites within the field of view. This provides a coarse pose estimate as well as unique identification for the object. The coarse pose estimate is used to initialize a model-based tracker which uses a priori knowledge of the satellite shape to refine the coarse pose into a fine pose which matches the detected edges in the image. The monocular pose estimation concept is shown in Figure 5.3.
For the first stage, images are processed by a fiducial detector to provide an estimate for the pose of the tag within a scene. Each tag contains visual information to provide unique identification and correspondence with a 3D position and orientation. The fiducial detector outputs the position and orientation of the tag in the camera frame. We assume that the relative position and orientation of the tag relative to the attached body frame of the observed spacecraft (given by q^{T_i}_{B} and r^{B}_{T_i B}, respectively, where i = 1, ..., n_tags corresponds to the n tags) are known. For our scenario, we place a single tag at the center of each face (n_tags = 6). For a satellite i using sensor m to observe tag n on satellite j, the relative pose measurement is computed by:
\[
y_{rel} = \begin{bmatrix} r^{C_m}_{B_j C_m} \\ q^{B_j}_{C_m} \end{bmatrix}
= \begin{bmatrix} r^{C_m}_{T_n C_m} - T\big(q_{C_m T_n} \otimes q_{T_n B_j}\big)\, r^{B_j}_{T_n B_j} \\ \big(q_{T_n B_j}\big)^{-1} \otimes q_{T_n C_m} \end{bmatrix} \tag{5.5}
\]
This relation is used to transform the pose of an individual tag to the pose of the observed spacecraft in the sensor frame. Note that the relative pose is expressed in the sensor frame. This is important since the error statistics may not be isotropic between frame dimensions (i.e., σ_x ≠ σ_y ≠ σ_z).
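Below is a minimal sketch of the single-tag transformation in Equation 5.5 using SciPy rotations. The frame-mapping convention of the inputs is an assumption stated in the comments; the exact composition order in the thesis depends on its quaternion convention.

import numpy as np
from scipy.spatial.transform import Rotation as R

def tag_to_body_pose(r_tag_cam, q_tag_cam, r_tag_body, q_tag_body):
    # Assumed convention: q_tag_cam maps tag coordinates to camera coordinates and comes from the
    # fiducial detector together with r_tag_cam (tag position in the camera frame); q_tag_body and
    # r_tag_body are the known pose of the tag in the observed spacecraft's body frame.
    R_cam_body = q_tag_cam * q_tag_body.inv()              # body coordinates -> camera coordinates
    r_body_cam = r_tag_cam - R_cam_body.apply(r_tag_body)  # body-frame origin expressed in the camera frame
    return r_body_cam, R_cam_body                          # relative pose of the observed spacecraft

# Example (illustrative values): a tag mounted 0.2 m along the body +x axis, seen 5 m ahead of the camera
r, q = tag_to_body_pose(np.array([0.0, 0.0, 5.0]), R.identity(),
                        np.array([0.2, 0.0, 0.0]), R.identity())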
Once the coarse relative pose is extracted using Equation 5.5, it is passed to the pose tracking algorithm. This algorithm is a robust version of a Moving Edges tracker [222]. We use an implementation found in the Visual Servoing Platform (ViSP) library (https://visp.inria.fr/en/).
[Figure 5.3 flowchart: Monocular Image -> Fiducial Detector -> Coarse Pose Estimate -> Initialize Model-based Tracking -> Fine Pose Estimate -> Cooperative MEKF -> State Estimate; if tracking is lost, re-initialize from the fiducial detector, otherwise use the last fine pose.]
Figure 5.3: Monocular pose estimation concept.
Because tracking is an iterative refinement tool, it is possible to lose track due to noise, poor quality images, or other system considerations. If tracking is lost, as judged by reprojection error metrics, then coarse pose estimation using the cooperative fiducials is invoked until tracking can be re-established.
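The re-initialization logic can be summarized in sketch form as follows; the class and method names (tracker, detector) and the reprojection-error threshold are hypothetical placeholders rather than ViSP API calls.

REPROJ_ERROR_MAX = 2.0  # pixels; illustrative threshold for declaring tracking lost

def monocular_pose_step(image, detector, tracker, last_pose):
    # One frame of the monocular pipeline: prefer the fine pose from model-based tracking,
    # fall back to fiducial-based coarse pose estimation whenever tracking is lost.
    if tracker.is_initialized():
        fine_pose = tracker.track(image)
        if tracker.reprojection_error() < REPROJ_ERROR_MAX:
            return fine_pose                            # healthy track: pass the fine pose to the cooperative MEKF
    coarse_pose = detector.detect_and_estimate(image)   # AprilTag detection plus Eq. 5.5
    if coarse_pose is not None:
        tracker.initialize(image, coarse_pose)          # re-initialize model-based tracking
        return coarse_pose
    return last_pose                                    # nothing usable in this frame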
5.4 Stereo-vision Pose Estimation
In contrast to monocular pose estimation, stereo-based pose estimation uses algorithms to extract
depth information from the images. This depth allows an alternate method to determine pose:
point cloud registration. Note that for this section we assume a coarse pose estimate is available (gathered, for example, using the fiducial detection outlined in the previous section).
The pose estimation pipeline takes stereo image pairs at each time step, together with a rough estimate of relative pose, and produces an improved measurement of the relative pose between the observing spacecraft and the target spacecraft. This process is often referred to as pose tracking or pose refinement. The pipeline consists of several sequential steps. First, a stereo matching algorithm is used to produce a disparity map between the two images. Next, the disparity map is projected into a 3D point cloud using the camera intrinsic parameters. Finally, an Iterative Closest Point variant is used to recover the relative offset between the sensed point cloud and the reference model point cloud. See Figure 5.4 for an overview.
[Figure 5.4 flowchart: Stereo Image Pair -> Disparity Map Computation -> Point Cloud Reprojection -> Iterative Closest Point Registration (initialized by the Coarse Pose Estimate) -> Fine Pose Estimate -> Cooperative MEKF -> State Estimate.]
Figure 5.4: Stereo pose estimation concept.
Figure 5.5: Image (a) shows a grayscale version of the left camera image from a rendered stereo
image pair and (b) shows the computed disparity between the left and right images of the stereo
pair in the left camera frame.
5.4.1 Point Cloud Creation
The first step of pose estimation in stereo vision is to process the image pairs to produce a disparity map such that every pixel has a corresponding value representing a horizontal shift between images. For the present work, we use an OpenCV (https://opencv.org/) block matching implementation with a maximum disparity size of 512 and a block size of 15. Note that the images are first converted to grayscale and rectified. The resulting disparity map is then further transformed into a point cloud by reprojecting the disparity into a frame centered at the left camera (an example is shown in Figure 5.5). For a pixel with coordinates (u, v) and disparity d, the corresponding point in world homogeneous coordinates is [261]:
\[
\begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix} =
\begin{bmatrix}
1 & 0 & 0 & -c_x \\
0 & 1 & 0 & -c_y \\
0 & 0 & 0 & f \\
0 & 0 & -1/b & 0
\end{bmatrix}
\begin{bmatrix} u \\ v \\ d \\ 1 \end{bmatrix} \tag{5.6}
\]
where f is the focal length, c_x and c_y are the principal point coordinates, b is the stereo baseline, and it is assumed that the right and left principal points are the same. The 3D points can be found as:
\[
\begin{bmatrix} x \\ y \\ z \end{bmatrix} =
\begin{bmatrix} X/W \\ Y/W \\ Z/W \end{bmatrix} \tag{5.7}
\]
Using this relation, a point cloud can be assembled. We perform basic spatial thresholding to
remove points outside a selected bounding region as well as those with a location behind the camera
(negative z component). The point clouds are then made available to the registration portion of
the pose estimation pipeline.
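A minimal OpenCV sketch of this step is shown below. The focal length in pixels is derived from Table 5.1 (45 mm focal length on a 32 mm focal plane imaged at 1024 px gives roughly 1440 px), while the image paths, principal point, baseline value, and bounding-box limit are illustrative placeholders.

import cv2
import numpy as np

# Rectified grayscale stereo pair (placeholder file names for the rendered images)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching with the parameters quoted above (maximum disparity 512, block size 15)
stereo = cv2.StereoBM_create(numDisparities=512, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point disparities

# Reprojection matrix corresponding to Eq. 5.6; f = 45/32 * 1024 ~ 1440 px, cx, cy, b are placeholders
f, cx, cy, b = 1440.0, 512.0, 512.0, 0.5
Q = np.float32([[1, 0, 0, -cx],
                [0, 1, 0, -cy],
                [0, 0, 0,  f],
                [0, 0, -1.0 / b, 0]])
points = cv2.reprojectImageTo3D(disparity, Q)   # H x W x 3 array of 3D points in the left camera frame

# Basic thresholding: keep points with valid disparity inside a crude bounding region
mask = (disparity > 0) & np.isfinite(points).all(axis=2)
cloud = points[mask]
cloud = cloud[(np.abs(cloud) < 50.0).all(axis=1)]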
5.4.2 Point Cloud Registration
Pose estimation is completed by registering the measured point cloud to a reference point cloud of the target spacecraft. This is done using the ICP algorithm with a point-to-plane metric. The measured point cloud in the camera frame is initially aligned with the model using the current predicted relative state, \hat{q}_{ij} and \hat{r}_{ij}. ICP then iteratively improves the alignment to produce a measurement, \tilde{q}_{ij} and \tilde{r}_{ij}. We use a MATLAB implementation that includes point matching using a kd-tree and outlier removal using an inlier ratio set at 0.9 [432][430]. A critical pre-processing step for ICP is to downsample the point cloud. We use random downsampling to 10 percent of the original measured point cloud population, which results in a point cloud on the order of 1,000 points. This was found to have preferable performance to a grid- or voxel-based sampling method in our simulation. A model point cloud is created a priori through Poisson-disk sampling of the CAD model mesh [433]. This model point cloud is downsampled to approximately 7,500 points. Figure 5.6 shows an example measured point cloud and its registration to a model point cloud.
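As one way to reproduce this registration step, the sketch below uses the open-source Open3D library rather than the MATLAB implementation used in this work; the correspondence distance, normal-estimation radius, and downsampling ratio are illustrative, and Open3D's basic point-to-plane ICP does not expose the same inlier-ratio trimming.

import open3d as o3d

def register_measured_cloud(measured_xyz, model_xyz, T_init, max_corr_dist=0.3):
    # Register the measured stereo point cloud (N x 3 array) against the stored model cloud,
    # starting from the predicted relative pose T_init (4 x 4 homogeneous transform).
    source = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(measured_xyz))
    target = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_xyz))

    # Random downsampling of the measured cloud to roughly 10 percent, as described above
    source = source.random_down_sample(0.1)

    # Point-to-plane ICP requires normals on the model cloud
    target.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.3, max_nn=30))

    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation   # refined relative pose measurement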
5.4.3 Monte Carlo Error Characterization
In order to characterize the error statistics of the pose estimation pipeline, we performed Monte
Carlo analysis using a large number of rendered images produced by varying the relative pose of the
target and the Sun direction. Relative distance was sampled evenly between 3 and 33 m.
Figure 5.6: The left image (a) depicts a three-dimensional measured point cloud derived from a
stereo camera image pair and (b) shows the result of successful point cloud registration using ICP
with a point-to-plane metric.
At ranges closer than 3 m, the camera field of view covered too small a portion of the spacecraft for useful pose
estimation. For ranges beyond 33 m, pose estimation performance degraded rapidly as the target spacecraft occupied a smaller and smaller region of the camera field of view. Relative orientation was sampled over 20 points which were approximately uniformly spaced on the unit sphere. The unit vector representing the direction to the Sun was similarly sampled approximately uniformly on the unit sphere, resulting in 20 Sun direction samples. In total, 8,000 stereo pairs were rendered (16,000 single images), which took approximately 2.5 days on a scientific computing platform with eight Nvidia Tesla K80 GPUs. This process was conducted twice: once using a single solar illumination source and once using global spherical illumination (see Figure 5.2 for a comparison). All stereo images were then processed using the pose estimation pipeline and compared to the truth information used to render the images. The results appear in Figure 5.7.
Some overall trends are worth noting. First, position error appears relatively unaffected by distance at close range but can show significant errors starting around 15 m. This may not be surprising, as stereo vision depth accuracy is known to degrade as the square of distance. A similar trend can be seen in the orientation error as a function of distance to the target. Regarding Sun angle, at low Sun angles, when the target spacecraft is between the observing spacecraft and the Sun, pose estimation accuracy is significantly reduced. This is due to the lack of illumination on the side of the spacecraft which the observer sees. This effect disappears when global illumination is added (and the "shadowed" regions become visible).
[Figure 5.7 plots: Impact on Relative Pose Measurement Error Variance by Target Range and Sun Angle — panels for Distance vs. Position Error Variance, Distance vs. Orientation Error Variance, Sun-Angle vs. Position Error Variance, and Sun-Angle vs. Orientation Error Variance, each with per-axis curves under solar-only and global illumination.]
Figure 5.7: This figure shows the relationship between the variance of relative pose measurements (position and orientation) and the range to the target, (a) and (b), and the relative Sun angle between the observer and target, (c) and (d). Each panel contains two sets of lines, one for renderings using only a single solar illumination source and one for renderings using global illumination of the target.
Figure 5.8: This pair of stereo images presents an example of the potential for outliers in stereo
vision pose estimation. Note that the lower solar panel is visible in the right camera but does not
appear in the left. Stereo matching produces inaccurate depth information for that region resulting
in incorrect point cloud registration and, ultimately, relative pose measurement outliers.
Note that the rendering environment does not simulate direct solar light which
would normally blind, and potentially damage, a camera sensor. Therefore, error information for low Sun angles may be inconsequential, since such geometries may be unlikely to produce usable images for the pose estimation process (automatic gain control (AGC) systems [38][117][115] may be used to compensate for lighting variations).
In addition to general error trends, pose estimation is subject to occasional outliers. As an example, see the pair of stereo images in Figure 5.8, where reflection from specular surfaces is significantly different between the two images. This difference impacts stereo matching, resulting in an inaccurate disparity map and thus point cloud. The resulting point cloud registration produces an erroneous result that must be addressed or it may cause divergence in the MEKF.
5.4.4 Simulation
Monocular Vision Pose Estimation
The monocular pose estimation pipeline was used to test the performance of the MEKF filter introduced in Chapter 4. A five-satellite swarm was created with initial random placement within
[Figure 5.9 plots: Performance for a 5-Satellite Swarm — quaternion estimate error (radians) and position estimate error (meters) over 50 seconds for the target satellites observed by filter 1.]
Figure 5.9: (a) and (b) show each satellite's estimation error of its own orientation and position
as well as the covariance estimates of those states. (c) and (d) show the estimation errors for all
3 targets observed by satellite number 1 in the 4-satellite swarm. Red lines indicate covariance
estimates of the corresponding states.
a 10 m sphere. The measurement incidence graph was set to all ones (G = 1_{n x n}), which indicates a complete graph where all satellites collect and share relative pose measurements on all other satellites, resulting in n(n - 1) measurements at each timestep. The duration was 50 seconds, and 1000 images were rendered for use as relative pose measurements. We use a validation gate for outlier rejection. Results are shown in Figure 5.9 (note that in this simulation only fiducial-based pose estimation is used).
Stereo Vision Pose Estimation
To test the performance of the developed cooperative MEKF filter using stereo images, we performed two simulations: one without a validation gate and one with a validation gate for outlier rejection. Both scenarios used a four-satellite swarm with the same initial conditions and noise parameters (see Figure 5.10 for the initial and final swarm configurations). As with the monocular
Figure 5.10: The (a) starting configuration and the (b) final configuration of the four spacecraft. Note that colors represent the body axes of the spacecraft: red is \hat{x}, green is \hat{y}, and blue is \hat{z}.
simulation, a complete graph was used at each timestep. The duration was 1000 seconds. 12,000
path-traced stereo vision image pairs (24,000 total images) were rendered for use as relative pose
measurements.
Figure 5.11 shows the resulting performance of the simulation without outlier rejection. Note the deviation in both the quaternion and position estimates throughout, but especially around t = 750 s. The spurious relative pose measurements are attributable to poor lighting conditions of satellite 3 as viewed by satellite 1 (the Sun angle causes almost total shadowing). Performance with outlier rejection, as shown in Figure 5.12, is better. The filter remains unaffected by the spurious measurements and is able to use the measurements of satellite 3 taken from satellites 2 and 4 to compensate for the loss of measurements.
5.5 Conclusion
This chapter presented two commonly used pose estimation approaches and tested them together with the cooperative filter established in Chapter 4. We find that the system successfully converges to an estimate of the system state which is within the expected error covariance bounds. It is also worth noting that the use of an outlier rejection method to provide robustness is essential for image-based pose estimation, especially with local methods like ICP and model-based tracking which can converge to incorrect local minima. Finally, we have found that it is important to
[Figure 5.11 plots: Performance for a 4-Satellite Swarm without Outlier Rejection — self quaternion and position estimate errors, and quaternion and position estimate errors for the target satellites observed by satellite 1, over 1000 seconds.]
Figure 5.11: (a) and (b) show each satellite's estimation error of its own orientation and position
as well as the covariance estimates of those states. (c) and (d) show the estimation errors for all
3 targets observed by satellite number 1 in the 4-satellite swarm. Red lines indicate covariance
estimates of the corresponding states.
[Figure 5.12 plots: Performance for a 4-Satellite Swarm with Outlier Rejection — self quaternion and position estimate errors, and quaternion and position estimate errors for the target satellites observed by satellite 1, over 1000 seconds.]
Figure 5.12: (a) and (b) show each satellite's estimation error of its own orientation and position
as well as the covariance estimates of those states. (c) and (d) show the estimation errors for all
3 targets observed by satellite number 1 in the 4-satellite swarm. Red lines indicate covariance
estimates of the corresponding states.
correctly characterize the error of the pose estimation schemes prior to implementation with the MEKF. Because the MEKF in this work assumes correctly modelled Gaussian measurement error, any deviation can cause filter divergence. For example, if the entries in the measurement covariance matrix (R) are too optimistic compared to actual performance, the filter will assume greater measurement knowledge and overweight measurements accordingly. Therefore, it is recommended that a rigorous characterization of the measurement error be undertaken for any pose measurement scheme using realistic images. This characterization is also useful when it comes to selecting a reduced set from all possible measurements, as will be discussed in the following chapter.
Chapter 6
Sensor Selection
6.1 Introduction
For a swarm of n spacecraft observing all other n - 1 spacecraft, there are n(n - 1) possible relative pose measurements. However, only a subset of these measurements may be needed to achieve satisfactory localization performance. Reducing the number of required pose measurements reduces the power and computation required to collect, store, share, and process relative sensor data in the localization filter. Furthermore, for some relative pose estimation modalities, not all pose measurements are equal in quality; for example, vision-based pose measurements which rely on passive optical cameras are heavily impacted by illumination conditions, distance to the target, and relative target geometry. An image of a spacecraft under full illumination typically allows for easier pose estimation than the image of a heavily shadowed spacecraft. Therefore, it may be advantageous to select the relative pose measurements which are most likely to provide good measurements.
This chapter explores several sensor selection strategies which use a subset of the possible pose measurements at each timestep. The first strategy selects a random spanning tree from the set of all spanning trees for the complete relative pose measurement graph. The second strategy seeks to maximize filter performance and uses a weighted version of the predicted covariance at each timestep as a proxy for performance. The final strategy chooses a random set of relative pose measurements from all possible measurements which do not necessarily span the graph. This final naive strategy is used as a baseline to facilitate comparison.
6.2 Related Work
Previous work on this subject has generally focused on sensor scheduling for satellite formations which are assumed to maintain a static structure. A key work in this area by McLoughlin and Campbell addressed the infinite-horizon sensor scheduling problem for a fixed formation and presented a solution approach based on a two-step search algorithm to find a periodic sensor schedule which maximizes the information collected [393]. Separately, for general robotic systems, Mourikis and Roumeliotis examined the problem of selecting the relative sensor collection frequencies for a team of robots moving in formation in order to maximize the positioning accuracy of the group [405]. We are unaware of a work which addresses the sensor scheduling problem for a swarm of satellites using relative pose measurements.
The literature related to sensor selection is extensive and spans estimation theory, information theory, optimal control, and robotics [405]. A comprehensive overview is given by Hero and Cochran [434]. A number of authors have commented on the complexity of the general sensor selection problem as well as the difficulty of specific subsets. For example, Ye et al. [435] show that selecting an optimal set of sensors to minimize the trace of the error covariance for a Kalman Filter is NP-hard in the general case (see also Zhang et al. [436]). Analysis of the complexity of similar related problems can be found in the recent literature (e.g., Le Ny et al. [437], Joshi and Boyd [399], Huber [438]). Creating a set of feasible sensors to select from can be difficult as well; Chamon et al. [439] provide the example that finding the smallest subset of sensors which ensures observability is NP-hard.
Because of the combinatorial complexity of the problem, authors have traditionally identified approximate or sub-optimal solutions. Early work in this area, including landmark papers by Meier et al. [440], Mehra [441], and Athans [442], framed the problem in optimal control theory and proposed solutions based on dynamic programming. Recent work has included heuristic or greedy approaches [406][436], search-based approaches which attempt to prune branches [438], stochastic approaches which randomly select sensors according to some probability distribution [443], and convex relaxation of the original non-convex problem [399][438].
Sensor selection for Kalman Filtering for both linear and non-linear systems has received attention from researchers [444]. Le Ny et al. presented a relaxation approach for scheduling sensors in
a continuous-time Kalman Filter [437]. Chamon et al. provided a greedy approach which includes near-optimal guarantees on performance [439]. McLoughlin and Campbell [18] explored the problem of sensor selection within a satellite swarm and provided a periodic sensor scheduling solution. Our specific problem differs in that we are interested in estimating both position and orientation, and the satellite swarm formation cannot be assumed static.
6.3 Measurement Graphs
In order to organize the available measurements among the n satellites, we use a measurement graph to conceptually represent the measurements. A graph is a set of nodes and a set of edges which connect these nodes. For the present work, the nodes represent satellites and the edges between nodes represent relative measurements between satellites. Self-loops connecting a node to itself represent absolute measurements like star trackers and GPS. Because there are two possible measurements between each pair of spacecraft (i.e., i viewed by j and j viewed by i), a directed graph is used. The graph which contains all possible measurements is denoted G and, since we have assumed all satellites can view all other satellites in the swarm, the graph is complete (all possible edges exist). An illustration of G for the measurements available to spacecraft 1 is shown in Figure 6.1. Furthermore, we can represent G using an adjacency matrix G, where non-zero entries G(i, j) indicate the existence of an edge (or measurement) from node (or satellite) i to j.
The present work is concerned with selecting a subset of the available measurements (subgraph G_k) to be used at each timestep k. Several of the methods explored rely on the concepts of spanning trees and minimum spanning trees, which are briefly summarized here. A spanning tree connects all nodes within a graph using the minimum possible number of edges. A complete undirected graph of n nodes has n^{n-2} possible spanning trees according to Cayley's formula. If weights, representing factors such as measurement accuracy or utility, are added to all edges of measurement graph G, a minimum spanning tree (MST) can be found which is the spanning tree with the minimum total edge weight.
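For illustration, the sketch below builds the complete directed measurement graph and extracts a minimum spanning tree with the NetworkX library (an assumed tool, not one used in this work); the edge weights are placeholders standing in for measurement accuracy or utility.

import networkx as nx

n = 6
# Complete directed measurement graph: edge (i, j) means satellite i measures satellite j
G = nx.complete_graph(range(1, n + 1), create_using=nx.DiGraph)
for i, j in G.edges:
    G[i][j]["weight"] = 1.0   # placeholder; would reflect expected measurement error or utility

# Minimum spanning tree of the undirected version of the graph
# (a complete graph on n nodes has n^(n-2) spanning trees by Cayley's formula)
mst = nx.minimum_spanning_tree(G.to_undirected())
print(sorted(mst.edges))      # n - 1 edges spanning all satellites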
Figure 6.1: Graph of all possible measurements for a 6-satellite swarm.
6.4 Sensor Selection Strategies
6.4.1 Overview
A satellite swarm consisting of n satellites that each measure the relative pose of all other satellites will produce n(n - 1) measurements at each timestep. In this section, several approaches are presented which use only m ≤ n(n - 1) relative pose measurements at each timestep.
6.4.2 Uniform Spanning Trees (RANDOM TREE)
The first strategy randomly selects a spanning tree from the n^{n-2} possible spanning trees of G. The spanning tree is uniformly sampled using the Aldous-Broder algorithm at each timestep [445]. It is worth noting that this strategy does not require state estimate knowledge to select the measurement graph G_k.
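A minimal sketch of the Aldous-Broder sampling step on the complete measurement graph is shown below; how the measurement direction for each selected edge is assigned (i observes j, or vice versa) is an assumption left unspecified here.

import random

def aldous_broder_spanning_tree(nodes):
    # Uniformly sample a spanning tree of the complete graph on `nodes`:
    # perform a random walk and record the edge used to enter each node for the first time.
    current = random.choice(nodes)
    visited = {current}
    tree_edges = []
    while len(visited) < len(nodes):
        nxt = random.choice([v for v in nodes if v != current])  # random neighbor in a complete graph
        if nxt not in visited:
            tree_edges.append((current, nxt))
            visited.add(nxt)
        current = nxt
    return tree_edges   # n - 1 edges, each implying one relative pose measurement

# Example: one sampled measurement tree for a 6-satellite swarm
print(aldous_broder_spanning_tree([1, 2, 3, 4, 5, 6]))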
6.4.3 Maximum Covariance Trace (GREEDY)
Selection is based on the projected error covariance resulting from the m-measurement subset. The problem can be stated as the minimization of an objective J which is a function of the predicted state estimate (\hat{x}_k), the estimated covariance (P_{k-1}), and a binary selection vector w = [w_1, w_2, w_3, ..., w_{n(n-1)}] which encodes whether a measurement is used (w = 1) or not used (w = 0) [446]. The selection vector can be cast into the incidence matrix G(w) as an alternative representation, where G(i, j) determines if a measurement from satellite i of satellite j is taken.
\[
\begin{aligned}
\underset{w}{\text{minimize}} \quad & J\big(\hat{x}_k,\, P_{k-1},\, w\big) \\
\text{subject to} \quad & \lVert w \rVert = m \\
& w \in \{0, 1\}^{n(n-1)}
\end{aligned} \tag{6.1}
\]
There are a number of cost functions that can be considered for this minimization problem. For example, McLoughlin and Campbell [18] consider the trace of the Fisher information matrix and the trace of the error covariance matrix. Chepuri and Leus [446] connect the choice to traditional optimal design criteria: the trace of the error covariance (A-optimal), the largest eigenvalue of the error covariance (E-optimal), and the log-determinant of the error covariance (D-optimal). We consider the trace of the error covariance. This objective function is amenable to a scaling vector which prioritizes certain estimated states. We adopt two forms of scaling: c_a, with size n x 1, which is used to determine the weighting of covariance errors for individual satellites, and c_b, with the size of a single state x_i, which provides relative weighting between translation and orientation errors. The objective function is therefore
\[
J\big(\hat{x}_k, P_{k-1}, w\big) = \mathrm{tr}\!\left(\mathrm{diag}\big(\hat{P}_k\big)\,\mathrm{diag}(c)\right) \tag{6.2}
\]

where the total weighting is given by

\[
c = \begin{bmatrix} c_a(1)\, c_b & c_a(2)\, c_b & \cdots & c_a(n)\, c_b \end{bmatrix} \tag{6.3}
\]
and the estimated covariance is found via the MEKF with

\[
\begin{aligned}
\hat{P}_k &= (I - K_k H_k)\, P_k^-\, (I - K_k H_k)^T + K_k R_k K_k^T \\
K_k &= P_k^- H_k^T \big(H_k P_k^- H_k^T + R_k\big)^{-1}
\end{aligned} \tag{6.4}
\]

where H_k and R_k are functions of the sensor selection vector w.
Equation 6.1 is known as a binary integer programming problem and can be solved in a sub-optimal way using greedy algorithms, which is the approach taken here. At the start of each timestep, the sensor selection vector is set to w_k = 0. Each feasible candidate sensor is then selected individually and used to compute the objective function in Equation 6.2. The sensor resulting in the minimum value is then set to 1 in w_k. The remaining sensors are selected individually and used (along with the updated w_k) to compute the new objective function. This process is repeated until all specified m measurements have been selected. Given a sensor selection vector during each round, the measurement matrix (H) and measurement covariance matrix (R) can be computed.
To determine a feasible candidate measurement set, the visibility of the object is verified using the propagated state estimate \hat{x}_k^- at timestep k. Visibility is checked by projecting the maximum dimension of the object from 3D space into the 2D camera frame and verifying that the 2D points lie within the frame. Note that this predicted visibility is not equivalent to true visibility since the estimated and true states are generally not equal. If a relative measurement is deemed infeasible due to visibility, it is removed from the candidate set prior to sensor selection. A test for occlusions due to other satellites in the camera frame is left for future work.
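The greedy loop itself can be sketched compactly as below; predicted_weighted_trace is a hypothetical callback standing in for the covariance propagation of Equations 6.2 and 6.4 (building H and R from the selected set and returning the weighted trace), not an implementation provided here.

def greedy_sensor_selection(candidates, m, predicted_weighted_trace):
    # candidates: feasible (observer, observed) pairs after the visibility check above
    # m: number of measurements to select at this timestep
    selected = []
    remaining = list(candidates)
    for _ in range(m):
        if not remaining:
            break
        # Choose the candidate whose addition minimizes the weighted covariance trace (Eq. 6.2)
        best = min(remaining, key=lambda cand: predicted_weighted_trace(selected + [cand]))
        selected.append(best)
        remaining.remove(best)
    return selected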
6.4.4 Randomly Selected Edges (RANDOM)
The final strategy randomly selects (n - 1) edges in the measurement graph G at each timestep. The edges need not form a connected graph (i.e., some nodes may be unreachable from other nodes). Like the RANDOM TREE strategy, this sensor selection does not need state knowledge.
6.5 Sensor Selection Strategies Performance Comparison
6.5.1 Comparison of All Strategies
The sensor selection strategies described in Section 6.4 were used in a Monte Carlo simulation to compare state estimation performance. A static complete graph was used as a benchmark. The simulation consisted of n = 10 satellites. At each timestep, only five measurements were selected (out of the possible 90 relative pose measurements) to be incorporated into the filter. Satellite states were randomly initialized within a 10 m cube region with random initial linear and angular velocities within ±2.5 cm/s and ±1 deg/s, respectively. Relative pose measurement errors were σ_rx = σ_ry = 0.25σ_rz = 0.12 m and σ_qx = σ_qy = σ_qz = 0.03 radians. In addition, all errors degrade as the square of the distance between the observer and the target. Each simulation run lasted 500 seconds. 20 simulations were run and the orientation and position magnitude errors (root mean square) were computed (see Figure 6.2).
[Figure 6.2 plots: orientation (rad) and position (m) RMS error versus time for the GREEDY, RANDOM, RANDOM TREE, and COMPLETE strategies.]
Figure 6.2: State estimation mean (RMS) error over 20 Monte Carlo runs (200 seconds each) using
simulated measurement data for various sensor selection strategies.
6.5.2 Greedy and Random Selection Comparison
In order to test the proposed sensor selection approach, two different simulations are produced. The first uses Equation 5.5 to generate synthetic relative pose measurements with Gaussian noise. The second approach renders simulated sensor images which are then processed to extract the relative pose of the target spacecraft.
Synthetic Data
A Monte Carlo simulation was run using random initial conditions which initially placed the 10 satellites within a 12 m cube region with initial velocities within ±1 cm/s and ±1 degree/s. Relative pose measurement errors were σ_rx = σ_ry = 0.25σ_rz = 0.05 m and σ_qx = σ_qy = σ_qz = 0.01 radians. In addition, all errors degrade as the square of the distance between the observer and the target. Ten individual 200-second simulations were run and the MEKF estimated the state of all satellites. For each run, the performance of the greedy sensor selection strategy was compared to randomly selecting the same number of sensors at each timestep. The results are shown in Table 6.1. Figure 6.3 shows the resulting sensor schedules for both greedy and random selection. The values indicate the number of times each relative measurement is selected during the 200-second simulation.
      Orientation Error (rad)                    Position Error (m)
      Full 200 s          Last 100 s             Full 200 s          Last 100 s
Run   Greedy    Random    Greedy    Random       Greedy    Random    Greedy    Random
1     0.0026    0.0035    0.0024    0.0028       0.0115    0.0151    0.0062    0.0107
2     0.0028    0.0040    0.0029    0.0038       0.0108    0.0137    0.0082    0.0090
3     0.0034    0.0044    0.0036    0.0040       0.0106    0.0178    0.0081    0.0151
4     0.0028    0.0038    0.0022    0.0030       0.0103    0.0132    0.0067    0.0093
5     0.0026    0.0029    0.0023    0.0019       0.0095    0.0157    0.0070    0.0093
6     0.0040    0.0043    0.0044    0.0037       0.0132    0.0188    0.0110    0.0142
7     0.0027    0.0038    0.0029    0.0037       0.0101    0.0138    0.0072    0.0088
8     0.0028    0.0033    0.0027    0.0033       0.0088    0.0109    0.0068    0.0093
9     0.0029    0.0036    0.0024    0.0026       0.0138    0.0133    0.0084    0.0088
10    0.0037    0.0048    0.0034    0.0041       0.0126    0.0160    0.0096    0.0129
Table 6.1: Simulation results for 10-run Monte Carlo analysis. Magnitude of orientation and
position errors are shown. Bold indicates the lowest error between greedy and random sensor
selection.
Figure 6.3: Each entry shows the cumulative number of relative measurements selected over a simulation run for random (left) and greedy (right) sensor selection. Rows correspond to the observer and columns indicate the observed satellite.
Image-based Data
A second set of simulations was run using image-based rendering to produce relative pose measurements to test filter performance. Three such 100-second duration runs were simulated. Five satellites were placed within a 12 m cube region with initial velocities within ±1 cm/s and ±1 degree/s. Estimation performance for both greedy sensor selection and random selection is shown in Table 6.2. Note that greedy selection generally outperforms random selection given the same number of selected measurements (five).
Orientation Error (rad)
      Satellite 2          Satellite 3          Satellite 4          Satellite 5
Run   Greedy    Random     Greedy    Random     Greedy    Random     Greedy    Random
1     0.0202    0.0171     0.0104    0.0076     0.0105    0.0209     0.0066    0.0096
2     0.0088    0.0104     0.0285    0.0211     0.0130    0.0229     0.0120    0.0431
3     0.0073    0.0095     0.0074    0.0074     0.0130    0.0329     0.0107    0.0122

Position Error (m)
      Satellite 2          Satellite 3          Satellite 4          Satellite 5
Run   Greedy    Random     Greedy    Random     Greedy    Random     Greedy    Random
1     0.0276    0.0291     0.0233    0.0341     0.0262    0.0552     0.0150    0.0332
2     0.0206    0.0265     0.0251    0.0492     0.0164    0.0233     0.0191    0.0192
3     0.0110    0.0172     0.0212    0.0254     0.0330    0.0517     0.0085    0.0144
Table 6.2: Simulation results using rendered images for three individual runs using the same initial
conditions. Magnitude of orientation and position errors are shown. Bold indicates lowest error
between greedy and random sensor selection.
6.6 Conclusion
This chapter has presented several methods to reduce the number of measurements used at each timestep. The approach which seeks to minimize the expected state estimate covariance is shown to perform best in Monte Carlo simulations, including those using monocular rendered images. Because a greedy algorithm is used to perform the minimization, there are no guarantees on performance; thus, it is unsurprising that random selection of measurements occasionally outperforms greedy selection. Finally, we note that the performance of random selection indicates it may be a useful strategy, especially when it is advantageous to avoid the overhead of the selection algorithms or when knowledge of the full swarm state is unavailable.
Chapter 7
Conclusion and Future Work
This work has explored the problem of satellite swarm localization and presented a solution approach using filtering and sensor selection. This work opens many avenues of future research which will expand on the contributions contained within.
The first area for future research is identifying applications for this work. Examples include modular spacecraft which require the rendezvous and joining of many individual smaller component satellites on orbit. Another example is the use of a closely spaced swarm to explore planetary ring systems, where a group of satellites can jointly explore the environment. The results of this thesis can also be used to explore design trades and sensing architectures that would support a realization of the swarm localization capability.
The present work used an MEKF due to its extensive use and heritage in space systems. However, future research may explore the use of many other types of filters, such as particle filters (PFs). Additionally, although this thesis added outlier rejection via validation gates, filters which are designed to be robust may be a fruitful area for exploration. Alternative data fusion approaches may have advantages from a computational standpoint as well. Finally, the approach presented in this work assumed that each spacecraft would run a filter that estimated the full state of the entire swarm. Future research may explore consensus approaches where spacecraft may only have insight into a subset of the cooperative group states.
We have addressed two pose estimation techniques in this work. However, there are many other techniques in the literature [23] and planned for use in space. Future work could apply the framework presented in this thesis to alternative sensing modalities, including 3D depth cameras and machine learning approaches to pose estimation [447]. In addition to using simulated imagery, a next step is to also use cameras or sensors in a realistic laboratory environment. Real images meant to mimic a space environment may reveal challenges in using pose estimation data that the present work has not needed to address. Finally, although high-fidelity rendering software was used in this work, the images were not subject to aberration, shot noise, and other departures from ideal image formation which may provide challenges to pose estimation.
Regarding sensor selection, an area for future work is exploring the performance of other sensor selection strategies such as convex optimization [399] and stochastic approaches [443]. In addition, comparisons between the time to perform selection and the time to process measurements may be helpful for understanding the efficacy of sensor selection in general.
The problems, solutions, and results presented in this work hopefully contribute to an increased understanding of how groups of satellites may interact on orbit during future missions and how a collaborative group can provide better performance as a team than as individual entities.
Bibliography
[1] M. Aung, A. Ahmed, M. Wette, D. Scharf, J. Tien, G. Purcell, M. Regehr, and B. Landin, "An overview of formation flying technology development for the terrestrial planet finder mission," in 2004 IEEE Aerospace Conference Proceedings (IEEE Cat. No.04TH8720), vol. 4, (Big Sky, MT, USA), pp. 2667-2679, IEEE, 2004.
[2] D. P. Scharf, F. Y. Hadaegh, Z. H. Rahman, J. F. Shields, and G. Singh, "An overview of the formation and attitude control system for the Terrestrial Planet Finder formation flying interferometer," in International Symposium on Formation Flying Missions and Technologies, (Washington D.C.), Sept. 2004.
[3] O. Brown and P. Eremenko, \The Value Proposition for Fractionated Space Architectures,"
in Space 2006, (San Jose, California), American Institute of Aeronautics and Astronautics,
Sept. 2006.
[4] O. Brown, P. Eremenko, and P. Collopy, \Value-Centric Design Methodologies for Fraction-
ated Spacecraft: Progress Summary from Phase I of the DARPA System F6 Program," in
AIAA SPACE 2009 Conference & Exposition, (Pasadena, California), American Institute of
Aeronautics and Astronautics, Sept. 2009.
[5] D. Barnhart, L. Hill, M. Turnbull, and P. Will, \Changing Satellite Morphology through
Cellularization," in AIAA SPACE 2012 Conference & Exposition, (Pasadena, California),
American Institute of Aeronautics and Astronautics, Sept. 2012.
[6] L. Barrios, T. Collins, R. Kovac, and W.-M. Shen, \Autonomous 6d-docking and manipula-
tion with non-stationary-base using self-reconfigurable modular robots," in 2016 IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS), (Daejeon, South Korea),
pp. 2913{2919, IEEE, Oct. 2016.
[7] C. Underwood, S. Pellegrino, V. J. Lappas, C. P. Bridges, and J. Baker, \Using
CubeSat/micro-satellite technology to demonstrate the Autonomous Assembly of a Reconfigurable Space Telescope (AAReST)," Acta Astronautica, vol. 114, pp. 112-122, Sept. 2015.
[8] D. Wokes, S. Smail, P. Palmer, and C. Underwood, \Pose Estimation for In-Orbit Self-
Assembly of Intelligent Self-Powered Modules," in AIAA Guidance, Navigation, and Control
Conference, (Chicago, Illinois), American Institute of Aeronautics and Astronautics, Aug.
2009.
[9] F. Y. Hadaegh, S. J. Chung, and H. M. Manohara, \On Development of 100-Gram-Class
Spacecraft for Swarm Applications," IEEE Systems Journal, vol. 10, pp. 673{684, June 2016.
[10] M. Hedman, M. Tiscareno, J. Burns, P. Nicholson, and M. Johnson, \Scouting Saturn's Rings
with Small Spacecraft," in 1st Interplanetary CubeSat Workshop, (MIT), May 2012.
[11] T. T. Vu and A. R. Rahmani, \Distributed Consensus-Based Kalman Filter Estimation and
Control of Formation Flying Spacecraft: Simulation and Validation," in AIAA Guidance,
Navigation, and Control Conference, (Kissimmee, Florida), American Institute of Aeronautics
and Astronautics, Jan. 2015.
[12] A. Rahmani, O. Ching, and L. A. Rodriguez, \On separation principle for the distributed esti-
mation and control of formation flying spacecraft," in Proceedings of Conference on Spacecraft
Formation Flying Missions and Technologies, 2013.
[13] W. Bezouska and D. Barnhart, \Sensor Selection Strategies for Satellite Swarm Collaborative
Localization," in AAS/AIAA Astrodynamics Specialist Meeting, (Portland, ME), AAS/AIAA,
Aug. 2019.
[14] W. Bezouska and D. A. Barnhart, \Visual sensor selection for satellite swarm cooperative
localization," in Sensors and Systems for Space Applications XII (K. D. Pham and G. Chen,
eds.), (Baltimore, United States), p. 4, SPIE, May 2019.
[15] W. Bezouska and D. Barnhart, "Spacecraft Pose Estimation and Swarm Localization Performance under Varying Illumination and Viewing Conditions," in AAS/AIAA Spaceflight Mechanics Meeting, (Kauai, HI), AAS/AIAA, Jan. 2019.
[16] W. Bezouska and D. Barnhart, \Decentralized Cooperative Localization with Relative Pose
Estimation for a Spacecraft Swarm," in 2019 IEEE Aerospace Conference, (Big Sky, MT,
USA), pp. 1{13, IEEE, Mar. 2019.
[17] J. O. Woods and J. A. Christian, \Lidar-based relative navigation with respect to non-
cooperative objects," Acta Astronautica, vol. 126, pp. 298{311, Sept. 2016.
[18] T. H. McLoughlin and M. Campbell, \Scalable Sensing, Estimation, and Control Architecture
for Large Spacecraft Formations," Journal of Guidance, Control, and Dynamics, vol. 30,
pp. 289{300, Mar. 2007.
[19] F. Y. K. Hadaegh, "Rule-based estimation and control of formation flying spacecraft," in 2nd
International Conference on Intelligent Technologies, (Bangkok), Nov. 2001.
[20] R. B. Friend, \Orbital Express program summary and mission overview," in SPIE Defense
and Security Symposium (R. T. Howard and P. Motaghedi, eds.), (Orlando, FL), p. 695803,
Apr. 2008.
[21] S. Chien, J. Doubleday, K. Ortega, D. Tran, J. Bellardo, A. Williams, J. Piug-Suari, G. Crum,
and T. Flatley, \Onboard autonomy and ground operations automation for the Intelligent
Payload Experiment (IPEX) CubeSat Mission," in International Symposium on Artificial
Intelligence, Robotics and Automation in Space, (Turin, Italy), Sept. 2012.
[22] T. A. Estlin, B. J. Bornstein, D. M. Gaines, R. C. Anderson, D. R. Thompson, M. Burl,
R. Castaño, and M. Judd, "AEGIS Automated Science Targeting for the MER Opportunity
Rover," ACM Transactions on Intelligent Systems and Technology, vol. 3, pp. 1{19, May
2012.
[23] R. Opromolla, G. Fasano, G. Rufino, and M. Grassi, "A review of cooperative and uncoop-
erative spacecraft pose determination techniques for close-proximity operations," Progress in
Aerospace Sciences, vol. 93, pp. 53{72, Aug. 2017.
[24] A. Flores-Abad, O. Ma, K. Pham, and S. Ulrich, \A review of space robotics technologies for
on-orbit servicing," Progress in Aerospace Sciences, vol. 68, pp. 1{26, July 2014.
[25] J. M. Rebordão, "Space optical navigation techniques: an overview," in 8th Ibero American
Optics Meeting/11th Latin American Meeting on Optics, Lasers, and Applications (M. F.
P. C. Martins Costa, ed.), (Porto, Portugal), p. 87850J, Nov. 2013.
[26] L. Yawen, B. Yuming, and Z. Gaopeng, \Survey of measurement of position and pose for
space non-cooperative target," in 2015 34th Chinese Control Conference (CCC), (Hangzhou,
China), pp. 5101{5106, IEEE, July 2015.
[27] R. Ticker and P. Callen, \Robotics on the International Space Station: Systems and Technol-
ogy for Space Operations, Commerce and Exploration," in AIAA SPACE 2012 Conference
& Exposition, (Pasadena, California), American Institute of Aeronautics and Astronautics,
Sept. 2012.
[28] T. J. Fuchs, D. R. Thompson, B. D. Bue, J. Castillo-Rogez, S. A. Chien, D. Gharibian,
and K. L. Wagstaff, "Enhanced flyby science with onboard computer vision: Tracking and
surface feature detection at small bodies: ENHANCED FLYBY SCIENCE," Earth and Space
Science, vol. 2, pp. 417{434, Oct. 2015.
[29] A. Castano, A. Fukunaga, J. Biesiadecki, L. Neakrase, P. Whelley, R. Greeley, M. Lemmon,
R. Castano, and S. Chien, \Automatic detection of dust devils and clouds on Mars," Machine
Vision and Applications, vol. 19, pp. 467{482, Oct. 2008.
[30] S. Chien, D. Mclaren, D. Tran, A. G. Davies, J. Doubleday, and D. Mandl, "Onboard Product Generation on Earth Observing One: A Pathfinder for the Proposed HyspIRI Mission Intelligent Payload Module," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 6, pp. 257–264, Apr. 2013.
[31] L. Baglivo, A. Del Bue, M. Lunardelli, F. Setti, V. Murino, and M. De Cecco, \A Method for
Asteroids 3d Surface Reconstruction from Close Approach Distances," in Proceedings of the
8th International Conference on Computer Vision Systems, ICVS'11, (Berlin, Heidelberg),
pp. 21{30, Springer-Verlag, 2011.
[32] J. Christian, H. Hinkel, S. Maguire, C. D'Souza, and M. Patangan, \The Sensor Test for
Orion RelNav Risk Mitigation (STORRM) Development Test Objective," in AIAA Guidance,
Navigation, and Control Conference, (Portland, Oregon), American Institute of Aeronautics
and Astronautics, Aug. 2011.
[33] K. Dekome and J. M. Barr, \Trajectory control sensor engineering model detailed test objec-
tive," Tech. Rep. 19930012225, NASA, Jan. 1991.
[34] S. MacLean and H. Pinkney, \Machine vision in space," Canadian Aeronautics and Space
Journal, vol. 39, no. 2, pp. 63{77, 1993.
[35] M. Oda, ETS-VII: Achievements, Troubles and Future. Proceedings of the 6th International Symposium on Artificial Intelligence and Robotics & Automation in Space, June 2001.
[36] S. Bhaskaran, J. Riedel, S. Synnott, and T. Wang, "The Deep Space 1 autonomous navigation system - A post-flight analysis," in Astrodynamics Specialist Conference, (Denver, Colorado, USA), American Institute of Aeronautics and Astronautics, Aug. 2000.
[37] A. Cangahuala, S. Bhaskaran, and B. Owen, "Science benefits of onboard spacecraft navigation," Eos, Transactions American Geophysical Union, vol. 93, pp. 177–178, May 2012.
[38] T. M. Davis and D. Melanson, "XSS-10 microsatellite flight demonstration program results," in Proceedings of the SPIE, Volume 5419 (P. Tchoryk, Jr. and M. Wright, eds.), pp. 16–25, Aug. 2004.
[39] M. Uo, K. Shirakawa, T. Hashimoto, T. Kubota, and J. Kawaguchi, \Hayabusa Touching-
Down to Itokawa -Autonomous Guidance and Navigation-," Advances in the Astronautical
Sciences, vol. 22, no. 1, pp. 32{41, 2006.
[40] Yang Cheng, A. Johnson, and L. Matthies, \MER-DIMES: a planetary landing application
of computer vision," in 2005 IEEE Computer Society Conference on Computer Vision and
Pattern Recognition (CVPR'05), (San Diego, CA, USA), pp. 806{813 vol. 1, IEEE, 2005.
[41] T. E. Rumford, \Demonstration of autonomous rendezvous technology (DART) project sum-
mary," in AeroSense 2003 (P. Tchoryk, Jr. and J. Shoemaker, eds.), (Orlando, FL), pp. 10{19,
Aug. 2003.
[42] A. Accomazzo, P. Ferri, S. Lodiot, A. Hubault, R. Porta, and J.-L. Pellon-Bailon, "The first Rosetta asteroid flyby," Acta Astronautica, vol. 66, pp. 382–390, Feb. 2010.
[43] N. Mastrodemos, D. G. Kubitschek, and S. P. Synnott, \Autonomous Navigation for the
Deep Impact Mission Encounter with Comet Tempel 1," Space Science Reviews, vol. 117,
pp. 95{121, Mar. 2005.
[44] A. C. M. Allen, C. Langley, R. Mukherji, A. B. Taylor, M. Umasuthan, and T. D. Barfoot,
\Rendezvous lidar sensor system for terminal rendezvous, capture, and berthing to the In-
ternational Space Station," in SPIE Defense and Security Symposium (R. T. Howard and
P. Motaghedi, eds.), (Orlando, FL), p. 69580S, Apr. 2008.
[45] M. Adler, W. Owen, and J. Riedel, "Use of MRO Optical Navigation Camera to Prepare for Mars Sample Return," in Concepts and Approaches for Mars Exploration, vol. 1679, (Houston, TX), p. 4337, June 2012.
[46] Y. Roux and P. da Cunha, \The GNC Measurement System for the Automated Transfer
Vehicle," in Proceedings of the 18th International Symposium on Space Flight Dynamics,
vol. 548, (Munich, Germany), p. 111, 2004.
[47] B. Cavrois, A. Vergnol, A. Donnard, P. Casiez, and O. Mongrard, \LIRIS demonstrator on
ATV5: a step beyond for European non cooperative navigation system," in AIAA Guidance,
Navigation, and Control Conference, (Kissimmee, Florida), American Institute of Aeronautics
and Astronautics, Jan. 2015.
[48] B. Naasz, J. V. Eepoel, S. Queen, C. M. Southward, and J. Hannah, \Flight Results from the
HST SM4 Relative Navigation Sensor System," in 33rd Annual AAS Guidance and Control
Conference, (Breckenridge, CO, United States), Jan. 2010.
[49] C. English, G. Okouneva, P. Saint-Cyr, A. Choudhuri, and T. Luu, \Real-time dynamic pose
estimation systems in space: Lessons learned for system design and performance evaluation,"
International Journal of Intelligent Control and Systems, vol. 16, no. 2, pp. 79{96, 2011.
[50] K. Michel and A. Ullrich, "Scanning time-of-flight laser sensor for rendezvous manoeuvres," in Proc. of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation (ASTRA), 2004.
[51] S. Persson, S. Veldman, and P. Bodin, "PRISMA – A formation flying project in implementation phase," Acta Astronautica, vol. 65, pp. 1360–1374, Nov. 2009.
[52] D. A. K. Pedersen, A. H. Jørgensen, M. Benn, T. Denver, P. S. Jørgensen, J. B. Bjarnø, A. Massaro, and J. L. Jørgensen, "MicroASC instrument onboard Juno spacecraft utilizing inertially controlled imaging," Acta Astronautica, vol. 118, pp. 308–315, Jan. 2016.
[53] K. Miller, J. Masciarelli, and R. Rohrschneider, \Advances in multi-mission autonomous
rendezvous and docking and relative navigation capabilities," in 2012 IEEE Aerospace Con-
ference, (Big Sky, MT), pp. 1{9, IEEE, Mar. 2012.
[54] B. E. Tweddle, T. P. Setterfield, A. Saenz-Otero, and D. W. Miller, "An Open Research Facility for Vision-Based Navigation Onboard the International Space Station," Journal of Field Robotics, vol. 33, pp. 157–186, Mar. 2016.
[55] L. M. Simms, "Space-based telescopes for actionable refinement of ephemeris pathfinder mission," Optical Engineering, vol. 51, p. 011004, Jan. 2012.
[56] Y. Tsuda, M. Yoshikawa, M. Abe, H. Minamino, and S. Nakazawa, "System design of the Hayabusa 2 – Asteroid sample return mission to 1999 JU3," Acta Astronautica, vol. 91, pp. 356–362, Oct. 2013.
[57] R. Olds, A. May, C. Mario, R. Hamilton, C. Debrunner, and K. Anderson, \The Application
of Optical-Based Feature Tracking to OSIRIS-REx Asteroid Sample Collection," Advances
in the Astronautical Sciences Guidance, Navigation and Control, vol. 154, 2015.
[58] J. M. Galante, J. Van Eepoel, C. D'Souza, and B. Patrick, \Fast Kalman Filtering for Relative
Spacecraft Position and Attitude Estimation for the Raven ISS Hosted Payload," in AAS
Guidance Navigation and Control Conference, (Breckenridge, CO), Feb. 2016.
[59] T. Smith, J. Barlow, M. Bualat, T. Fong, C. Provencher, H. Sanchez, and E. Smith, "Astrobee: A New Platform for Free-Flying Robotics on the International Space Station," in 13th International Symposium on Artificial Intelligence, Robotics, and Automation in Space (i-SAIRAS), (Beijing, China), June 2016.
[60] J. L. Forshaw, G. S. Aglietti, N. Navarathinam, H. Kadhem, T. Salmon, A. Pisseloup, E. Joffre, T. Chabot, I. Retat, R. Axthelm, S. Barraclough, A. Ratcliffe, C. Bernal, F. Chaumette, A. Pollini, and W. H. Steyn, "RemoveDEBRIS: An in-orbit active debris removal demonstration mission," Acta Astronautica, vol. 127, pp. 448–463, Oct. 2016.
[61] K. J. Okseniuk, S. B. Chait, P. Z. Schulte, and D. A. Spencer, \Prox-1: Automated Proximity
Operations on an ESPA Class Platform," in 29th Annual AIAA/USU Conference on Small
Satellite, (Logan, Utah, USA), 2015.
[62] N. Inaba, M. Oda, and M. Hayashi, "Visual Servoing of Space Robot for Autonomous Satellite Capture," Transactions of the Japan Society for Aeronautical and Space Sciences, vol. 46, no. 153, pp. 173–179, 2003.
[63] J. Riedel, D. Eldred, B. Kennedy, D. Kubitscheck, A. Vaughan, R. Werner, S. Bhaskaran, and
S. Synnott, \AutoNav Mark3: Engineering the Next Generation of Autonomous Onboard
Navigation and Guidance," in AIAA Guidance, Navigation, and Control Conference and
Exhibit, (Keystone, Colorado), American Institute of Aeronautics and Astronautics, Aug.
2006.
[64] S. Chien, R. Sherwood, D. Tran, B. Cichy, G. Rabideau, R. Castano, A. Davies, R. Lee,
D. Mandl, S. Frye, B. Trout, J. Hengemihle, J. D'Agostino, S. Shulman, S. Ungar, T. Brakke,
D. Boyer, J. V. Gaasbeck, R. Greeley, T. Doggett, V. Baker, J. Dohm, and F. Ip, \The EO-1
autonomous science agent," in Proceedings of the Third International Joint Conference on
Autonomous Agents and Multiagent Systems, 2004. AAMAS 2004., pp. 420{427, July 2004.
[65] D. Barnhart, R. Hunter, A. Weston, V. Chioma, M. Steiner, and W. Larsen, "XSS-10 micro-satellite demonstration," in AIAA Defense and Civil Space Programs Conference and Exhibit, (Huntsville, AL, USA), American Institute of Aeronautics and Astronautics, Oct. 1998.
[66] T. M. Davis and D. Melanson, "XSS-10 micro-satellite flight demonstration," in Proc. of Georgia Institute of Technology Space Systems Engineering Conference, paper no. GT-SSEC.D, vol. 3, 2005.
[67] T. Davis, M. T. Baker, T. Belchak, and W. Larsen, \XSS-10 Micro-Satellite Flight Demon-
stration Program," in AIAA/USU Conference on Small Satellites, Aug. 2003.
[68] H. Yano, T. Kubota, H. Miyamoto, T. Okada, D. Scheeres, Y. Takagi, K. Yoshida, M. Abe,
S. Abe, O. Barnouin-Jha, A. Fujiwara, S. Hasegawa, T. Hashimoto, M. Ishiguro, M. Kato,
J. Kawaguchi, T. Mukai, J. Saito, S. Sasaki, and M. Yoshikawa, \Touchdown of the Hayabusa
Spacecraft at the Muses Sea on Itokawa," Science, vol. 312, pp. 1350{1353, June 2006.
[69] T. Kubota, T. Hashimoto, J. Kawaguchi, M. Uo, and K. Shirakawa, \Guidance and Naviga-
tion of Hayabusa Spacecraft for Asteroid Exploration and Sample Return Mission," in 2006
SICE-ICASE International Joint Conference, pp. 2793{2796, Oct. 2006.
[70] T. Kubota, S. Sawai, T. Misu, T. Hashimoto, J. Kawaguchi, and A. Fujiwara, "Autonomous Landing System for MUSES-C Sample Return Mission," in Artificial Intelligence, Robotics and Automation in Space, Proceedings of the Fifth International Symposium, vol. 440, (Noordwijk, The Netherlands), p. 615, Aug. 1999.
[71] T. Kubota, S. Sawai, T. Hashimoto, and A. Fujiwara, "Robotics technology for asteroid sample return mission MUSES-C," in 6th International Symposium on Artificial Intelligence, Robotics and Automation in Space (i-SAIRAS), (Montreal, Que., Canada), June 2001.
[72] D. Pinard, S. Reynaud, P. Delpy, and S. E. Strandmoe, \Accurate and autonomous navigation
for the ATV," Aerospace Science and Technology, vol. 11, pp. 490{498, Sept. 2007.
[73] H. Wartenberg and P. Amadieu, "ATV: Rendezvous with ISS," On Station, vol. 11, pp. 17–19, 2002.
[74] J. Christian, M. Patangan, H. Hinkel, K. Chevray, and J. Brazzel, \Comparison of Orion
Vision Navigation Sensor Performance from STS-134 and the Space Operations Simulation
Center," in AIAA Guidance, Navigation, and Control Conference, American Institute of
Aeronautics and Astronautics, Aug. 2012.
[75] A. Reuilh, C. Bonneau, O. Bonnamy, and P. Ferri, \Rosetta AOCS Behaviour in the First
Years of Operations," IFAC Proceedings Volumes, vol. 40, no. 7, pp. 383{388, 2007.
[76] S. Groomes, Overview of the DART Mishap Investigation Results. NASA, May 2006.
[77] R. T. Howard and T. C. Bryan, \DART AVGS Flight Results," in Sensors and Systems for
Space Applications, vol. 6555, (Orlando, Florida, USA), pp. 65550L{65550L{10, SPIE, 2007.
[78] D. G. Kubitschek, N. Mastrodemos, R. A. Werner, B. M. Kennedy, S. P. Synnott, G. W. Null,
S. Bhaskaran, J. E. Riedel, and A. T. Vaughan, \Deep Impact Autonomous Navigation : the
trials of targeting the unknown," in 29th Annual AAS Guidance and Control Conference,
(Breckenridge, CO, United States), Feb. 2006.
[79] R. B. Frauenholz, R. S. Bhat, S. R. Chesley, N. Mastrodemos, W. M. Owen, Jr, and M. S.
Ryne, \Deep Impact Navigation System Performance," Journal of Spacecraft and Rockets,
vol. 45, pp. 39{56, Jan. 2008.
[80] M. Abrahamson, \Autonomous Navigation Performance During the Hartley 2 Comet Flyby,"
in SpaceOps 2012 Conference, (Stockholm, Sweden), American Institute of Aeronautics and
Astronautics, June 2012.
[81] M. R. Leinz, C.-T. Chen, M. W. Beaven, T. P. Weismuller, D. L. Caballero, W. B. Gaumer,
P. W. Sabasteanski, P. A. Scott, and M. A. Lundgren, \Orbital Express Autonomous Ren-
dezvous and Capture Sensor System (ARCSS) Flight Test Results," in Proc. SPIE 6958,
vol. 6958, (Orlando, Florida, USA), pp. 69580A{69580A{13, SPIE, 2008.
[82] P. Bodin, R. Noteborn, R. Larsson, T. Karlsson, S. D'Amico, J. S. Ardaens, M. Delpech, and
J.-C. Berges, \The Prisma Formation Flying Demonstrator : Overview and Conclusions from
the Nominal Mission," Advances in the Astronautical Sciences, vol. 144, pp. 441{460, 2012.
[83] T. Karlsson, R. Larsson, B. Jakobsson, and P. Bodin, \The PRISMA Story : Achievements
and Final Escapades," in 5th International Conference on Spacecraft Formation Flying Mis-
sions and Technologies, (Munich, Germany), 2013.
[84] J. L. Jørgensen and M. Benn, "VBS - The Optical Rendezvous and Docking Sensor for PRISMA," NordicSpace, pp. 16–19, 2010.
[85] H. Benninghoff, T. Tzschichholz, T. Boge, and G. Gaias, "A Far Range Image Processing Method for Autonomous Tracking of an Uncooperative Target," in 12th Symposium on Advanced Space Technologies in Robotics and Automation, (Noordwijk, The Netherlands), 2013.
[86] J. Doubleday, S. Chien, C. Norton, K. Wagstaff, D. R. Thompson, J. Bellardo, C. Francis, and E. Baumgarten, "Autonomy for remote sensing - Experiences from the IPEX CubeSat," in 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pp. 5308–5311, July 2015.
[87] D. Sternberg, A. Hilton, D. Miller, B. McCarthy, C. Jewison, D. Roascio, J. James, and A. Saenz-Otero, "Reconfigurable ground and flight testing facility for robotic servicing, capture, and assembly," in 2016 IEEE Aerospace Conference, (Big Sky, MT, USA), pp. 1–13, IEEE, Mar. 2016.
[88] D. Fourie, B. E. Tweddle, S. Ulrich, and A. Saenz-Otero, \Flight Results of Vision-Based Nav-
igation for Autonomous Spacecraft Inspection of Unknown Objects," Journal of Spacecraft
and Rockets, vol. 51, pp. 2016{2026, Nov. 2014.
[89] B. E. Tweddle, T. P. Setterfield, A. Saenz-Otero, D. W. Miller, and J. J. Leonard, "Experimental evaluation of on-board, visual mapping of an object spinning in micro-gravity aboard the International Space Station," in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Chicago, IL, USA), pp. 2333–2340, IEEE, Sept. 2014.
[90] D. Miller and A. Saenz-Otero, \SPHERES 63rd ISS Test Session," tech. rep., MIT, Feb. 2015.
[91] N. Takeishi, T. Yairi, Y. Tsuda, F. Terui, N. Ogawa, and Y. Mimasu, \Simultaneous estima-
tion of shape and motion of an asteroid for automatic navigation," in 2015 IEEE International
Conference on Robotics and Automation (ICRA), pp. 2861{2866, May 2015.
[92] M. Strube, \Raven: An On-Orbit Relative Navigation Demonstration Using International
Space Station Visiting Vehicles," in AAS GN&C Conference, (Breckenridge, CO, United
States), Jan. 2015.
[93] J. Galante, J. Van Eepoel, M. Strube, N. Gill, M. Gonzalez, A. Hyslop, and B. Patrick,
\Pose Measurement Performance of the Argon Relative Navigation Sensor Suite in Simulated-
Flight Conditions," in AIAA Guidance, Navigation, and Control Conference, (Minneapolis,
Minnesota), American Institute of Aeronautics and Astronautics, Aug. 2012.
[94] C. Mario and C. Debrunner, "Robustness and Performance Impacts of Optical-Based Feature Tracking to OSIRIS-REx Asteroid Sample Collection Mission," in AAS, vol. AAS 16-087, 2016.
[95] L. M. Simms, V. Riot, W. De Vries, S. S. Olivier, A. Pertica, B. J. Bauman, D. Phillion, and S. Nikolaev, "Optical payload for the STARE pathfinder mission," in SPIE Defense, Security, and Sensing (K. D. Pham, H. Zmuda, J. L. Cox, and G. J. Meyer, eds.), (Orlando, Florida, United States), p. 804406, May 2011.
[96] M. Bualat, J. Barlow, T. Fong, C. Provencher, and T. Smith, "Astrobee: Developing a Free-flying Robot for the International Space Station," in AIAA SPACE 2015 Conference and Exposition, (Pasadena, California), American Institute of Aeronautics and Astronautics, Aug. 2015.
[97] L. Walker, \Automated proximity operations using image-based relative navigation," in 26th
Annual USU/AIAA Conference on Small Satellites, (Logan, Utah, USA), Aug. 2012.
[98] A. Yol, E. Marchand, F. Chaumette, K. Kanani, and T. Chabot, "Vision-based navigation in low earth orbit," in Int. Symp. on Artificial Intelligence, Robotics and Automation in Space, i-SAIRAS'16, 2016.
[99] Air Force Research Laboratory (AFRL), Fact Sheet: Automated Navigation and Guidance Experiment for Local Space (ANGELS). United States Air Force, July 2014.
[100] \Companion Satellite released from Tiangong-2 Space Lab for Orbital Photo Shoot," Oct.
2016.
[101] J. C. Bastante, J. Vasconcelos, M. Hagenfeldt, L. F. Peñín, J. Dinis, and J. Rebordão, "Design and development of PROBA-3 rendezvous experiment," Acta Astronautica, vol. 102, pp. 311–320, Sept. 2014.
[102] J. Bowen, A. Tsuda, J. Abel, and M. Villa, \CubeSat Proximity Operations Demonstration
(CPOD) mission update," in 2015 IEEE Aerospace Conference, (Big Sky, MT), pp. 1{8,
IEEE, Mar. 2015.
[103] A. E. Johnson, Y. Cheng, J. Montgomery, N. Trawny, B. E. Tweddle, and J. Zheng, \Design
and Analysis of Map Relative Localization for Access to Hazardous Landing Sites on Mars," in
AIAA Guidance, Navigation, and Control Conference, (San Diego, California, USA), Ameri-
can Institute of Aeronautics and Astronautics, Jan. 2016.
[104] C. Underwood, S. Pellegrino, V. Lappas, C. Bridges, B. Taylor, S. Chhaniyara, T. Theodorou, P. Shaw, M. Arya, J. Breckinridge, K. Hogstrom, K. Patterson, J. Steeves, L. Wilson, and N. Horri, "Autonomous Assembly of a Reconfigurable Space Telescope (AAReST) – A CubeSat/Microsatellite Based Technology Demonstrator," in 27th Annual AIAA/USU Conference on Small Satellites (SmallSat 2013), (Logan, Utah, USA), Aug. 2013.
[105] J. Obermark, G. Creamer, B. E. Kelm, W. Wagner, and C. G. Henshaw, \SUMO/FREND:
vision system for autonomous satellite grapple," in Defense and Security Symposium (R. T.
Howard and R. D. Richards, eds.), (Orlando, Florida, USA), p. 65550Y, Apr. 2007.
[106] C. P. Bridges, B. Taylor, N. Horri, C. I. Underwood, S. Kenyon, J. Barrera-Ars, L. Pryce, and
R. Bird, \STRaND-2: Visual inspection, proximity operations and nanosatellite docking," in
2013 IEEE Aerospace Conference, (Big Sky, MT), pp. 1{8, IEEE, Mar. 2013.
[107] T. V. Peters, J. Branco, D. Escorial, L. T. Castellani, and A. Cropp, \Mission analysis for
PROBA-3 nominal operations," Acta Astronautica, vol. 102, pp. 296{310, Sept. 2014.
[108] D. Sternberg, T. F. Sheerin, and G. Urbain, \INSPECT Sensor Suite for On-Orbit Inspection
and Characterization with Extravehicular Activity Spacecraft," in 45th International Con-
ference on Environmental Systems, (Bellevue, WA, USA), 45th International Conference on
Environmental Systems, 2015.
[109] C. R. McBryde and L. Glenn, \End-to-End Testing of a Dual Use Imaging Sensor for Small
Satellites," Journal of Small Satellites, vol. 5, pp. 435{448, Feb. 2016.
[110] H. Zhang, H. Sang, and X. Shen, \Expanding the usage of the star sensor in spacecraft," in
Second International Conference on Spatial Information Technology (C. Wang, S. Zhong, and
J. Wei, eds.), (Wuhan, China), p. 679521, Nov. 2007.
[111] J. E. Darling, K. A. Legrand, P. Galchenko, H. Pernicka, K. J. DeMars, A. T. Shirley, J. S.
McCabe, C. L. Schmid, S. J. Haberberger, and A. J. Mundahl, \Development and Flight of
a Stereoscopic Imager for Use in Spacecraft Close Proximity Operations," Advances in the
Astronautical Sciences Guidance, Navigation and Control, vol. 157, 2016.
[112] X. Du, B. Liang, W. Xu, and Y. Qiu, \Pose measurement of large non-cooperative satellite
based on collaborative cameras," Acta Astronautica, vol. 68, pp. 2047{2065, June 2011.
[113] M. Benn and J. Jørgensen, "Autonomous vision-based detection of non-stellar objects flying in formation with camera point of view," International Journal of Space Science and Engineering, vol. 2, no. 1, p. 49, 2014.
[114] R. Noteborn, P. Bodin, R. Larsson, and C. Chasset, "Flight Results from the PRISMA Optical Line of Sight Based Autonomous Rendezvous Experiment," in 4th International Conference on Spacecraft Formation Flying Missions & Technologies, (St-Hubert, Quebec, Canada), pp. 18–20, 2011.
[115] M. Benn, Vision Based Navigation Sensors for Spacecraft Rendezvous and Docking. PhD
thesis, Technical University of Denmark (DTU), 2011.
[116] S. J. Hannah, \A relative navigation application of ULTOR technology for automated ren-
dezvous and docking," in Defense and Security Symposium (R. T. Howard and R. D. Richards,
eds.), (Orlando (Kissimmee), FL), p. 62200E, May 2006.
[117] S. J. Hannah, \ULTOR passive pose and position engine for spacecraft relative navigation,"
in SPIE Defense and Security Symposium (R. T. Howard and P. Motaghedi, eds.), (Orlando,
FL), p. 69580I, Apr. 2008.
[118] J. LeCroy, D. Hallmark, P. Scott, and R. Howard, \Comparison of navigation solutions
for autonomous spacecraft from multiple sensor systems," in SPIE Defense and Security
Symposium (R. T. Howard and P. Motaghedi, eds.), (Orlando, FL), p. 69580D, Apr. 2008.
[119] T. P. Weismuller and M. R. Leinz, \GN&C Technology Demonstrated by the Orbital Express
Autonomous Rendezvous and Capture Sensor System," in AAS 06-016, (Breckenridge, CO),
Feb. 2006.
[120] J. A. Christian and S. Cryan, \A Survey of LIDAR Technology and its Use in Spacecraft Rela-
tive Navigation," in AIAA Guidance, Navigation, and Control (GNC) Conference, American
Institute of Aeronautics and Astronautics, Aug. 2013.
[121] J. Pereira do Carmo, B. Moebius, M. Pfennigbauer, R. Bond, I. Bakalski, M. Foster, S. Bellis,
M. Humphries, R. Fisackerly, and B. Houdou, \Imaging lidars for space applications," in
Optical Engineering + Applications (R. J. Koshel, G. G. Gregory, J. D. Moore, Jr., and D. H.
Krevor, eds.), (San Diego, California, USA), p. 70610J, Aug. 2008.
[122] F. Amzajerdian, V. E. Roback, A. Bulyshev, P. F. Brewster, and G. D. Hines, \Imaging
Flash Lidar for Autonomous Safe Landing and Spacecraft Proximity Operation," in AIAA
SPACE 2016, (Long Beach, California), American Institute of Aeronautics and Astronautics,
Sept. 2016.
[123] K. Klionovska and H. Benninghoff, "Initial Pose Estimation using PMD Sensor during the Rendezvous Phase in On-Orbit Servicing Missions," in Proceedings of the 27th AAS/AIAA Space Flight Mechanics Meeting, (San Antonio, TX, USA), 2017.
[124] T. Tzschichholz, L. Ma, and K. Schilling, \Model-based spacecraft pose estimation and motion
prediction using a photonic mixer device camera," Acta Astronautica, vol. 68, pp. 1156{1167,
Apr. 2011.
[125] M. Nayak, J. Beck, and B. Udrea, "Design of relative motion and attitude profiles for three-dimensional resident space object imaging with a laser rangefinder," in 2013 IEEE Aerospace Conference, (Big Sky, MT), pp. 1–16, IEEE, Mar. 2013.
[126] M. V. Nayak, B. Udrea, B. Marsella, and J. Beck, "Application of a laser rangefinder for space object imaging and shape reconstruction," in AAS/AIAA Spaceflight Mechanics Meeting, Kauai, HI, 2013.
[127] M. Nayak, J. Beck, and B. Udrea, "Real-time attitude commanding to detect coverage gaps and generate high resolution point clouds for RSO shape characterization with a laser rangefinder," in 2013 IEEE Aerospace Conference, (Big Sky, MT), pp. 1–14, IEEE, Mar. 2013.
[128] A. Bulyshev, F. Amzajerdian, E. Roback, and R. Reisse, "A super-resolution algorithm for enhancement of flash lidar data: flight test results," in IS&T/SPIE Electronic Imaging (C. A. Bouman and K. D. Sauer, eds.), (San Francisco, California, USA), p. 90200B, Mar. 2014.
[129] F. M. Kolb, M. Windmüller, M. Rössler, B. Möbius, P. Casiez, B. Cavrois, and O. Mongrard, "The LIRIS-2 3D Imaging LIDAR on ATV-5," in 13th Symposium on Advanced Space Technologies in Robotics and Automation, p. 1, 2015.
[130] L. Blarre, N. Perrimon, C. Moussu, P. Da Cunha, and S. Strandmoe, "ATV videometer qualification," in Proc. of the 55th Int. Astronautical Congress, Vancouver, Canada, 2004.
[131] B. G. U. Moebius and K.-H. Kolk, \RendezVous sensor for automatic guidance of transfer
vehicles to ISS concept of the operational modes depending on actual optical and geometrical-
dynamical conditions," in International Symposium on Optical Science and Technology (E. W.
Taylor, ed.), (San Diego, CA, USA), p. 298, Oct. 2000.
[132] R. T. Howard, A. F. Heaton, R. M. Pinson, C. L. Carrington, J. E. Lee, T. C. Bryan, B. A.
Robertson, S. H. Spencer, J. E. Johnson, and M. S. El-Genk, \The Advanced Video Guidance
Sensor: Orbital Express and the Next Generation," in AIP Conference Proceedings, vol. 969,
(Albuquerque (New Mexico)), pp. 717{724, AIP, 2008.
[133] R. T. Howard, A. F. Heaton, R. M. Pinson, and C. K. Carrington, \Orbital Express Advanced
Video Guidance Sensor," in 2008 IEEE Aerospace Conference, (Big Sky, MT, USA), pp. 1{10,
IEEE, Mar. 2008.
[134] R. Pinson, R. Howard, and A. Heaton, \Orbital Express Advanced Video Guidance Sen-
sor: Ground Testing, Flight Results and Comparisons," in AIAA Guidance, Navigation and
Control Conference and Exhibit, (Honolulu, Hawaii), American Institute of Aeronautics and
Astronautics, Aug. 2008.
[135] E. Martin, D. Maharaj, R. Richards, J. W. Tripp, J. Bolger, and D. King, \RELAVIS: the
development of a 4d laser vision system for spacecraft rendezvous and docking operations,"
in Defense and Security (R. D. Habbit, Jr. and P. Tchoryk, Jr., eds.), (Orlando, FL), p. 69,
Sept. 2004.
[136] M. Gang, J. Zhiguo, L. Zhengyi, Z. Haopeng, and Z. Danpei, \Full-viewpoint 3d Space
Object Recognition Based on Kernel Locality Preserving Projections," Chinese Journal of
Aeronautics, vol. 23, pp. 563{572, Oct. 2010.
[137] H. Zhang, W. Zhang, and Z. Jiang, \Space object, high-resolution, optical imaging simulation
of space-based systems," in SPIE Defense, Security, and Sensing, (Baltimore, Maryland),
pp. 838511{838511{7, May 2012.
[138] M. Lingenauber, S. Kriegel, M. Kabecker, and G. Panin, \A dataset to support and bench-
mark computer vision development for close range on-orbit servicing," in Proceedings of AS-
TRA 2015-13th Symposium on Advanced Space Technologies in Robotics and Automation,
ESA (European Space Agency), 2015.
[139] A. F. Velasquez, G. Marani, T. Evans, M. R. Napolitano, J. A. Christian, and G. Doretto,
\Virtual simulator for testing a vision based pose estimation system for autonomous capture of
satellites with interface rings," in 21st Mediterranean Conference on Control and Automation,
(Platanias, Chania - Crete, Greece), pp. 1597{1602, IEEE, June 2013.
[140] J. Woods and J. Christian, \Glidar: An OpenGL-based, Real-Time, and Open Source 3d
Sensor Simulator for Testing Computer Vision Algorithms," Journal of Imaging, vol. 2, p. 5,
Jan. 2016.
[141] W. Zhang, Z. Jiang, H. Zhang, and J. Luo, \Optical Image Simulation System for Space
Surveillance," in 2013 Seventh International Conference on Image and Graphics, (Qingdao,
China), pp. 721{726, IEEE, July 2013.
[142] Y. Shi, B. Liang, X. Wang, W. Xu, and H. Liu, \Modeling and simulation of space robot
visual servoing for autonomous target capturing," in 2012 IEEE International Conference on
Mechatronics and Automation, (Chengdu, China), pp. 2275{2280, IEEE, Aug. 2012.
[143] N. W. Oumer, S. Kriegel, H. Ali, and P. Reinartz, \Appearance learning for 3d pose detection
of a satellite at close-range," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 125,
pp. 1{15, Mar. 2017.
[144] B. Ouyang, Q. Yu, J. Xiao, and S. Yu, \Dynamic pose estimation based on 3d Point Clouds,"
in 2015 IEEE International Conference on Information and Automation, (Lijiang, China),
pp. 2116{2120, IEEE, Aug. 2015.
[145] J. Fausz, M. Turbe, K. Betts, J. Wetherbee, and S. Pollard, \HWIL Simulation Testbed for
Space Superiority Applications," in AIAA SPACE 2009 Conference & Exposition, (Pasadena,
California), American Institute of Aeronautics and Astronautics, Sept. 2009.
[146] E. Dahlin, Vision Navigation Performance for Autonomous Orbital Rendezvous and Docking.
PhD thesis, Rice University, May 2015.
[147] T. Barrett, S. Schultz, W. Bezouska, and M. Aherne, \Demonstration of Technologies for
Autonomous Micro-Satellite Assembly," in AIAA SPACE 2009 Conference & Exposition,
(Pasadena, California), American Institute of Aeronautics and Astronautics, Sept. 2009.
[148] Y. Chen, Y. Huang, and X. Chen, \Development of simulation testbed for autonomous On-
Orbit Servicing technology," in 2011 IEEE 5th International Conference on Robotics, Au-
tomation and Mechatronics (RAM), (Qingdao, China), pp. 148{153, IEEE, Sept. 2011.
[149] M. Romano, D. A. Friedman, and T. J. Shay, \Laboratory Experimentation of Autonomous
Spacecraft Approach and Docking to a Collaborative Target," Journal of Spacecraft and
Rockets, vol. 44, pp. 164{173, Jan. 2007.
[150] G. Zhang, P. Vela, P. Tsiotras, and D.-M. Cho, "Efficient Closed-Loop Detection and Pose Estimation for Vision-Only Relative Localization in Space with a Cooperative Target," in AIAA SPACE 2014 Conference and Exposition, (San Diego, CA), American Institute of Aeronautics and Astronautics, Aug. 2014.
[151] G. Gefke, A. Janas, R. Chiei, M. Sammons, and B. B. Reed, \Advances in Robotic Servicing
Technology Development," in AIAA SPACE 2015 Conference and Exposition, (Pasadena,
California), American Institute of Aeronautics and Astronautics, Aug. 2015.
[152] M. Priggemeyer, E. G. Kaigom, and J. Roßmann, "Virtual testbeds for on-orbit servicing design and implementation of space robotics applications," in 2015 IEEE International Symposium on Systems Engineering (ISSE), (Rome, Italy), pp. 124–129, IEEE, Sept. 2015.
[153] H. Benninghoff, F. Rems, E.-A. Risse, and C. Mietner, "European Proximity Operations Simulator 2.0 (EPOS) - A Robotic-Based Rendezvous and Docking Simulator," Journal of large-scale research facilities JLSRF, vol. 3, Apr. 2017.
[154] J.-F. Shi, S. Ulrich, and S. Ruel, \Spacecraft Pose Estimation using Principal Component
Analysis and a Monocular Camera," in AIAA Guidance, Navigation, and Control Conference,
(Grapevine, Texas), American Institute of Aeronautics and Astronautics, Jan. 2017.
[155] L. Zhang, F. Zhu, Y. Hao, and W. Pan, \Optimization-based non-cooperative spacecraft
pose estimation using stereo cameras during proximity operations," Applied Optics, vol. 56,
p. 4522, May 2017.
[156] J. Kelsey, J. Byrne, M. Cosgrove, S. Seereeram, and R. Mehra, \Vision-Based Relative Pose
Estimation for Autonomous Rendezvous And Docking," in 2006 IEEE Aerospace Conference,
(Big Sky, MT, USA), pp. 1{20, IEEE, 2006.
[157] K. Harris, A. Baba, J. DiGregorio, T. Grande, C. Castillo, T. Zuercher, B. Udrea, and M. V. Nayak, "Experimental characterization of a miniature laser rangefinder for resident space object imaging," in 23rd AAS/AIAA Spaceflight Mechanics Conference, Kauai, HI, 2013.
[158] J. A. Christian, L. Benhacine, J. Hikes, and C. D'Souza, \Geometric Calibration of the
Orion Optical Navigation Camera using Star Field Images," The Journal of the Astronautical
Sciences, vol. 63, pp. 335{353, Dec. 2016.
[159] J. Enright, D. Sinclair, C. Grant, G. McVittie, and T. Dzamba, \Towards Star Tracker Only
Attitude Estimation," in 24th Annual AIAA/USU Conference on Small Satellites, (Logan,
Utah, USA), Aug. 2010.
[160] C. Liebe, \Accuracy performance of star trackers - a tutorial," IEEE Transactions on
Aerospace and Electronic Systems, vol. 38, pp. 587{599, Apr. 2002.
[161] K. Ho, "A survey of algorithms for star identification with low-cost star trackers," Acta Astronautica, vol. 73, pp. 156–163, Apr. 2012.
[162] M. Na and P. Jia, "A survey of all-sky autonomous star identification algorithms," in 2006 1st International Symposium on Systems and Control in Aerospace and Astronautics, pp. 6 pp.–901, Jan. 2006.
[163] C. R. McBryde and E. G. Lightsey, \A star tracker design for CubeSats," in 2012 IEEE
Aerospace Conference, pp. 1{14, Mar. 2012.
[164] B. Spratling and D. Mortari, "A Survey on Star Identification Algorithms," Algorithms, vol. 2, pp. 93–107, Jan. 2009.
[165] M. D. Shuster, \The quest for better attitudes," The Journal of the Astronautical Sciences,
vol. 54, pp. 657{683, Dec. 2006.
[166] C. Liebe, \Pattern recognition of star constellations for spacecraft applications," IEEE
Aerospace and Electronic Systems Magazine, vol. 7, pp. 34{41, June 1992.
[167] W. M. Owen, "Methods of Optical Navigation," in AAS Spaceflight Mechanics Conference, (New Orleans, LA, USA), Feb. 2011.
[168] J. A. Christian, \Optical Navigation Using Planet's Centroid and Apparent Diameter in
Image," Journal of Guidance, Control, and Dynamics, vol. 38, pp. 192{204, Feb. 2015.
[169] J. A. Christian, \Accurate Planetary Limb Localization for Image-Based Spacecraft Naviga-
tion," Journal of Spacecraft and Rockets, vol. 54, pp. 708{730, May 2017.
[170] G. E. Lightsey and J. A. Christian, \Onboard Image-Processing Algorithm for a Spacecraft
Optical Navigation Sensor System," Journal of Spacecraft and Rockets, vol. 49, pp. 337{352,
Mar. 2012.
[171] J. R. Guinn, J. E. Riedel, S. Bhaskaran, R. S. Park, A. T. Vaughan, W. M. Owen, T. Ely,
M. Abrahamsson, and T. Martin-Mur, \The Deep-space Positioning System Concept: Au-
tomating Complex Navigation Operations Beyond the Earth," in AIAA SPACE 2016, (Long
Beach, California), American Institute of Aeronautics and Astronautics, Sept. 2016.
[172] S. R. Schwartz, S. Ichikawa, P. Gankidi, N. Kenia, and G. D. J. Thangavelautham, \Optical
Navigation for Interplanetary CubeSats," in 40th Annual AAS Guidance, Navigation and
Control Conference, (Breckenridge, CO, United States), 2017.
[173] J. Luo, B. Gong, J. Yuan, and Z. Zhang, \Angles-only relative navigation and closed-loop
guidance for spacecraft proximity operations," Acta Astronautica, vol. 128, pp. 91{106, Nov.
2016.
[174] D. C. Woffinden and D. K. Geller, "Relative Angles-Only Navigation and Pose Estimation for Autonomous Orbital Rendezvous," Journal of Guidance, Control, and Dynamics, vol. 30, pp. 1455–1469, Sept. 2007.
[175] P. Zhang, M. Jia, X. Chen, F. Wu, and J. Gong, \Application of Optical Navigation in
Chang'e-5t1 Mission," in 2015 International Astronautical Congress, 2015.
[176] S. Yazdkhasti, S. Ulrich, and J. Sasiadek, \Computer Vision for Real-Time Relative Naviga-
tion with a Non-Cooperative and Spinning Target Spacecraft," in ASTRA 2015, (Noordwijk,
The Netherlands), May 2015.
[177] T. Ye and F. Zhou, "Autonomous space target recognition and tracking approach using star sensors based on a Kalman filter," Applied Optics, vol. 54, p. 3455, Apr. 2015.
[178] G. Zhang, Z. Wang, J. Du, T. Wang, and Z. Jiang, \A Generalized Visual Aid System
for Teleoperation Applied to Satellite Servicing," International Journal of Advanced Robotic
Systems, vol. 11, p. 28, Feb. 2014.
[179] Z. Dang and Y. Zhang, \Relative position and attitude estimation for Inner-Formation Grav-
ity Measurement Satellite System," Acta Astronautica, vol. 69, pp. 514{525, Sept. 2011.
[180] G. Zhang, M. Kontitsis, N. Filipe, P. Tsiotras, and P. A. Vela, \Cooperative Relative Navi-
gation for Space Rendezvous and Proximity Operations using Controlled Active Vision: Co-
operative Relative Navigation for Space Rendezvous and Proximity Operations," Journal of
Field Robotics, vol. 33, pp. 205{228, Mar. 2016.
[181] C. W. Sklair, L. B. Gatrell, W. A. Hoff, and M. Magee, "Optical target location using machine vision in space robotics tasks," in Cooperative Intelligent Robotics in Space (R. J. P. de Figueiredo and W. E. Stoney, eds.), (Boston, MA), p. 380, Feb. 1991.
[182] M. Fiala, \Designing Highly Reliable Fiducial Markers," IEEE Transactions on Pattern Anal-
ysis and Machine Intelligence, vol. 32, pp. 1317{1324, July 2010.
[183] L. B. Gatrell, W. A. Hoff, and C. W. Sklair, "Robust image features: concentric contrasting circles and their image extraction," in Cooperative Intelligent Robotics in Space II (W. E. Stoney, ed.), (Boston, MA), pp. 235–244, SPIE, Mar. 1992.
[184] R. W. Dabney and R. T. Howard, Closed-loop autonomous docking system. Google Patents,
Apr. 1992.
[185] Zhang Yongkang, \Control system design for micro satellite in-cabin based on visual naviga-
tion," in International Astronautical Congress, vol. IAC-15,B4,6A,8,x30626, 2015.
[186] W. Xu, Y. Liu, B. Liang, Y. Xu, and W. Qiang, "Autonomous Path Planning and Experiment Study of Free-floating Space Robot for Target Capturing," Journal of Intelligent and Robotic Systems, vol. 51, pp. 303–331, Mar. 2008.
[187] P. Fuss, C. Moussu, P. Da Cunha, and S. Strandmoe, "Study of the diffraction by the corner cube," in Optical Systems Design (L. Mazuray, R. Wartmann, A. Wood, J.-L. Tissot, and J. M. Raynor, eds.), (Glasgow, Scotland, United Kingdom), p. 710015, International Society for Optics and Photonics, Sept. 2008.
[188] M. Pertile, M. Mazzucato, L. Bottaro, S. Chiodini, S. Debei, and E. Lorenzini, "Uncertainty evaluation of a vision system for pose measurement of a spacecraft with fiducial markers," in 2015 IEEE Metrology for Aerospace (MetroAeroSpace), (Benevento, Italy), pp. 283–288, IEEE, June 2015.
[189] N. Ogawa, F. Terui, and J. Kawaguchi, "Trade-Off Study in Possible Scenarios for Precise Landing of Asteroid Probe Using Multiple Markers," in International Symposium on Artificial Intelligence, Robotics and Automation in Space, (Turin, Italy), Sept. 2012.
[190] Z. Wen, Y. Wang, J. Luo, A. Kuijper, N. Di, and M. Jin, \Robust, fast and accurate vision-
based localization of a cooperative target used for space robotic arm," Acta Astronautica,
vol. 136, pp. 101{114, July 2017.
[191] M. Fiala, \ARTag, a Fiducial Marker System Using Digital Techniques," in 2005 IEEE Com-
puter Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2,
(San Diego, CA, USA), pp. 590{596, IEEE, 2005.
[192] R. Haralick, H. Joo, C. Lee, X. Zhuang, V. Vaidya, and M. Kim, \Pose estimation from
corresponding point data," IEEE Transactions on Systems, Man, and Cybernetics, vol. 19,
pp. 1426{1446, Dec. 1989.
[193] B. Sabata and J. Aggarwal, \Estimation of motion from a pair of range images: A review,"
CVGIP: Image Understanding, vol. 54, pp. 309{324, Nov. 1991.
[194] D. Eggert, A. Lorusso, and R. Fisher, \Estimating 3-D rigid body transformations: a com-
parison of four major algorithms," Machine Vision and Applications, vol. 9, pp. 272{290,
Mar. 1997.
[195] O. Faugeras, Three-dimensional computer vision: a geometric viewpoint. Artificial intelligence, Cambridge, Mass: MIT Press, 1993.
[196] B. E. Tweddle and A. Saenz-Otero, \Relative Computer Vision-Based Navigation for Small
Inspection Spacecraft," Journal of Guidance, Control, and Dynamics, vol. 38, pp. 969{978,
May 2015.
[197] N. Qi, Q. Xia, Y. Guo, J. Chen, and Z. Ma, \Pose measurement model of space cooperative
target capture based on zoom vision system," Advances in Mechanical Engineering, vol. 8,
p. 168781401665595, June 2016.
[198] D.-M. Cho, P. Tsiotras, G. Zhang, and M. Holzinger, \Robust Feature Detection, Acquisition
and Tracking for Relative Navigation in Space with a Known Target," in AIAA Guidance,
Navigation, and Control (GNC) Conference, (Boston, MA), American Institute of Aeronau-
tics and Astronautics, Aug. 2013.
[199] G. Arantes, E. M. Rocco, I. M. da Fonseca, and S. Theil, \Far and proximity maneuvers of a
constellation of service satellites and autonomous pose estimation of customer satellite using
machine vision," Acta Astronautica, vol. 66, pp. 1493{1505, May 2010.
[200] Bin Jia and Ming Xin, \Vision-Based Spacecraft Relative Navigation Using Sparse-Grid
Quadrature Filter," IEEE Transactions on Control Systems Technology, vol. 21, pp. 1595{
1606, Sept. 2013.
[201] V. A. Grishin, \Precision estimation of camera position measurement based on docking
marker observation," Pattern Recognition and Image Analysis, vol. 20, pp. 341{348, Sept.
2010.
[202] X. Zhu, X. Song, X. Chen, and H. Lu, \Flying spacecraft detection with the earth as the
background based on superpixels clustering," in 2015 IEEE International Conference on
Information and Automation, (Lijiang, China), pp. 518{523, IEEE, Aug. 2015.
[203] D. Bayard and P. Brugarolas, \On-board vision-based spacecraft estimation algorithm for
small body exploration," IEEE Transactions on Aerospace and Electronic Systems, vol. 44,
pp. 243{260, Jan. 2008.
[204] S. Sharma and S. D'Amico, \Comparative assessment of techniques for initial pose estimation
using monocular vision," Acta Astronautica, vol. 123, pp. 435{445, June 2016.
[205] F. Terui, H. Kamimura, and S. i. Nishida, \Motion Estimation to a Failed Satellite on Or-
bit using Stereo Vision and 3d Model Matching," in 2006 9th International Conference on
Control, Automation, Robotics and Vision, (Singapore), pp. 1{8, IEEE, Dec. 2006.
[206] J. Ventura, A. Fleischner, and U. Walter, \Pose Tracking of a Noncooperative Spacecraft
During Docking Maneuvers Using a Time-of-Flight Sensor," in AIAA Guidance, Navigation,
and Control Conference, (San Diego, California, USA), American Institute of Aeronautics
and Astronautics, Jan. 2016.
[207] R. Opromolla, G. Fasano, G. Rufino, and M. Grassi, "Pose Estimation for Spacecraft Relative Navigation Using Model-Based Algorithms," IEEE Transactions on Aerospace and Electronic Systems, vol. 53, pp. 431–447, Feb. 2017.
[208] S. D'Amico, M. Benn, and J. L. Jørgensen, "Pose Estimation of an Uncooperative Spacecraft from Actual Space Imagery," in Proceedings of 5th International Conference on Spacecraft Formation Flying Missions and Technologies, 2013.
[209] S. Ruel, C. English, M. Anctil, and P. Church, "3DLASSO: Real-time pose estimation from 3D data for autonomous satellite servicing," in Proc. ISAIRAS 2005 Conference, Munich, Germany, vol. 5, 2005.
[210] K. Shahid and G. Okouneva, \Intelligent LIDAR scanning region selection for satellite pose
estimation," Computer Vision and Image Understanding, vol. 107, pp. 203{209, Sept. 2007.
[211] F. Aghili, M. Kuryllo, G. Okouneva, and C. English, \Fault-Tolerant Position/Attitude Esti-
mation of Free-Floating Space Objects Using a Laser Range Sensor," IEEE Sensors Journal,
vol. 11, pp. 176{185, Jan. 2011.
[212] A. P. Rhodes, J. A. Christian, and T. Evans, \A Concise Guide to Feature Histograms with
Applications to LIDAR-Based Spacecraft Relative Navigation," The Journal of the Astro-
nautical Sciences, vol. 64, pp. 414{445, Dec. 2017.
[213] L. Liu, G. Zhao, and Y. Bo, \Point Cloud Based Relative Pose Estimation of a Satellite in
Close Range," Sensors, vol. 16, p. 824, June 2016.
[214] A. Petit, E. Marchand, and K. Kanani, \Vision-based space autonomous rendezvous: A case
study," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, (San
Francisco, CA), pp. 619{624, IEEE, Sept. 2011.
[215] K. Kanani, A. Petit, E. Marchand, T. Chabot, and B. Gerber, \Vision Based Navigation for
Debris Removal Missions," in 63rd International Astronautical Congress (IAC-12), (Naples,
Italy), 2012.
[216] J.-F. Shi, S. Ulrich, S. Ruel, and M. Anctil, \Uncooperative Spacecraft Pose Estimation
Using an Infrared Camera During Proximity Operations," in AIAA SPACE 2015 Conference
and Exposition, (Pasadena, California), American Institute of Aeronautics and Astronautics,
Aug. 2015.
[217] F. Yu, Z. He, B. Qiao, and X. Yu, \Stereo-Vision-Based Relative Pose Estimation for the Ren-
dezvous and Docking of Noncooperative Satellites," Mathematical Problems in Engineering,
vol. 2014, pp. 1{12, 2014.
[218] G. Biondi, S. Mauro, T. Mohtar, S. Pastorelli, and M. Sorli, \Attitude recovery from feature
tracking for estimating angular rate of non-cooperative spacecraft," Mechanical Systems and
Signal Processing, vol. 83, pp. 321{336, Jan. 2017.
[219] S. D'Amico, M. Benn, and J. L. Jørgensen, "Pose estimation of an uncooperative spacecraft from actual space imagery," in 5th International Conference on Spacecraft Formation Flying Missions and Technologies, vol. 2, (Munich, Germany), pp. 171–189, 2014.
[220] S. Sharma and S. D'Amico, \Reduced-Dynamics Pose Estimation for Non-Cooperative Space-
craft Rendezvous using Monocular Vision," in 38th AAS Guidance and Control Conference,
Breckenridge, Colorado, 2017.
[221] Y. Zou, X. Wang, T. Zhang, and J. Song, \Combining point and edge for satellite pose
tracking under illumination varying," in 2016 12th World Congress on Intelligent Control
and Automation (WCICA), (Guilin, China), pp. 2556{2560, IEEE, June 2016.
[222] A. Petit, E. Marchand, and K. Kanani, \Tracking complex targets for space rendezvous
and debris removal applications," in 2012 IEEE/RSJ International Conference on Intelligent
Robots and Systems, (Vilamoura-Algarve, Portugal), pp. 4483{4488, IEEE, Oct. 2012.
[223] H. Ueno, Y. Wakabayashi, H. Takahashi, and Y. Inoue, \Evaluation of On-line 3d Motion
Measurement by On-orbit Monitoring Image," in i-SAIRAS 2014, (Montreal, Canada), June
2014.
[224] Z. Xu, Y. Shang, and X. Ma, \Satellite-rocket docking ring recognition method based on
mathematical morphology," in Applied Optics and Photonics China (AOPC2015) (L. Li,
K. P. Thompson, and L. Zheng, eds.), (Beijing, China), p. 967602, Oct. 2015.
[225] Y. He, B. Liang, X. Du, X. Wang, and D. Zhang, \Measurement of relative pose between
two non-cooperative spacecrafts based on graph cut theory," in 2014 13th International Con-
ference on Control Automation Robotics & Vision (ICARCV), (Singapore), pp. 1900{1905,
IEEE, Dec. 2014.
[226] V. Kumar, H. B. Hablani, and R. Pandiyan, \Relative Navigation of Satellites in Formation
Using Monocular Model-Based Vision," IFAC Proceedings Volumes, vol. 47, no. 1, pp. 497{
504, 2014.
[227] C. Liu and W. Hu, \Relative pose estimation for cylinder-shaped spacecrafts using single
image," IEEE Transactions on Aerospace and Electronic Systems, vol. 50, pp. 3036{3056,
Oct. 2014.
[228] A. Cropp, P. Palmer, and C. Underwood, \Pose Estimation of Target Satellite for Proximity
Operations," in Small Satellite Conference, (Logan Utah), 2000.
[229] H. Zhang, Z. Jiang, and A. Elgammal, \Satellite recognition and pose estimation using homeo-
morphic manifold analysis," IEEE Transactions on Aerospace and Electronic Systems, vol. 51,
pp. 785{792, Jan. 2015.
[230] X. Wu, Y. Yi, Y. Xie, H. Cui, L. Tan, T. Song, and J. Sun, "A New Feature Identification Algorithm of Non-Cooperative Spacecraft Based on Convolution Neural Network," Journal of Computational and Theoretical Nanoscience, vol. 13, pp. 8946–8951, Nov. 2016.
[231] H. Zhang and Z. Jiang, \Multi-view space object recognition and pose estimation based on
kernel regression," Chinese Journal of Aeronautics, vol. 27, pp. 1233{1241, Oct. 2014.
[232] I. Fomin, A. Bakhshiev, and D. Gromoshinskii, \Study of Using Deep Learning Nets for Mark
Detection in Space Docking Control Images," Procedia Computer Science, vol. 103, pp. 59{66,
2017.
[233] C. A. Wright, J. Van Eepoel, A. Liounis, M. Shoemaker, K. DeWeese, and K. Getzandanner,
\Relative Terrain Imaging Navigation (RETINA) Tool for the Asteroid Redirect Robotic
Mission (ARRM)," in 39th Guidance and Control Conference, (Breckenridge, CO, United
States), Feb. 2016.
[234] A. B. Dietrich and J. W. McMahon, \Robust Orbit Determination with Flash Lidar Around
Small Bodies," Journal of Guidance, Control, and Dynamics, vol. 41, pp. 2163{2184, Oct.
2018.
[235] T. Tzschichholz, T. Boge, and K. Schilling, \Relative pose estimation of satellites using
PMD-/CCD-sensor data fusion," Acta Astronautica, vol. 109, pp. 25{33, Apr. 2015.
[236] H. Mora, J. M. Mora-Pascual, A. García-García, and P. Martínez-González, "Computational Analysis of Distance Operators for the Iterative Closest Point Algorithm," PLOS ONE, vol. 11, p. e0164694, Oct. 2016.
[237] M. Lichter and S. Dubowsky, \Shape, Motion, and Parameter Estimation of Large Flexi-
ble Space Structures using Range Images," in Proceedings of the 2005 IEEE International
Conference on Robotics and Automation, (Barcelona, Spain), pp. 4476{4481, IEEE, 2005.
[238] J. K. C. Miller, \Autonomous landmark tracking orbit determination strategy," in
AAS/AIAA Astrodynamics Specialist Conference, (Big Sky, MT, USA), Aug. 2003.
[239] Y. Cheng, A. E. Johnson, L. H. Matthies, and C. F. Olson, \Optical landmark detection for
spacecraft navigation," in Proceedings of the 13th Annual AAS/AIAA Space Flight Mechanics
Meeting (D. J. Scheeres, M. E. Pittelkau, R. J. Proulx, and L. A. Cangahuala, eds.), vol. 114
III, (Ponce, Puerto Rico), pp. 1785{1803, Feb. 2002.
[240] C. Hanak and T. Crain, "Crater Identification Algorithm for the Lost in Low Lunar Orbit Scenario," in 33rd Annual AAS Guidance and Control Conference, (Breckenridge, CO, United States), Feb. 2010.
[241] K. Peterson, H. Jones, C. Vassallo, A. Welkie, and W. Whittaker, "Terrain-Relative Planetary Orbit Determination," in International Symposium on Artificial Intelligence, Robotics, and Automation in Space, (Turin, Italy), Sept. 2012.
[242] J. A. Christian, \Autonomous Navigation System Performance in the Earth-Moon System,"
in AIAA SPACE 2013 Conference and Exposition, (San Diego, CA), American Institute of
Aeronautics and Astronautics, Sept. 2013.
[243] N. Rowell, S. Parkes, and M. Dunstan, \Image Processing for Near Earth Object Opti-
cal Guidance Systems," IEEE Transactions on Aerospace and Electronic Systems, vol. 49,
pp. 1057{1072, Apr. 2013.
[244] A. Mourikis, N. Trawny, S. Roumeliotis, A. Johnson, A. Ansar, and L. Matthies, \Vision-
Aided Inertial Navigation for Spacecraft Entry, Descent, and Landing," IEEE Transactions
on Robotics, vol. 25, pp. 264{280, Apr. 2009.
[245] U. Hillenbrand and R. Lampariello, "Motion and parameter estimation of a free-floating space object from range data for motion prediction," in Proceedings of i-SAIRAS, 2005.
[246] H. C. Gomez Martinez and B. Eissfeller, \Autonomous Determination of Spin Rate and
Rotation Axis of Rocket Bodies based on Point Clouds," in AIAA Guidance, Navigation, and
Control Conference, (San Diego, California, USA), American Institute of Aeronautics and
Astronautics, Jan. 2016.
[247] L. Chen, B. Guo, and W. Sun, \Relative Pose Measurement Algorithm of Non-cooperative
Target based on Stereo Vision and RANSAC," International Journal of Soft Computing and
Software Engineering, vol. 2, pp. 26{35, Apr. 2012.
[248] T. Shtark and P. Gurfil, "Tracking a Non-Cooperative Target Using Real-Time Stereovision-Based Control: An Experimental Study," Sensors, vol. 17, p. 735, Mar. 2017.
[249] V. Pesce, M. Lavagna, and R. Bevilacqua, \Stereovision-based pose and inertia estimation of
unknown and uncooperative space objects," Advances in Space Research, vol. 59, pp. 236{251,
Jan. 2017.
[250] P. Jasiobedzki, M. Greenspan, and G. Roth, "Pose Determination and Tracking for Autonomous Satellite Capture," in Proceedings of the 6th International Symposium on Artificial Intelligence, Robotics and Automation in Space, (St-Hubert, Quebec, Canada), 2001.
[251] A. Johnson, Yang Cheng, and L. Matthies, \Machine vision for autonomous small body
navigation," in 2000 IEEE Aerospace Conference. Proceedings (Cat. No.00TH8484), vol. 7,
(Big Sky, MT, USA), pp. 661{671, IEEE, 2000.
[252] C. Vassallo, W. Tabib, and K. Peterson, \Orbital SLAM," in 2015 12th Conference on Com-
puter and Robot Vision, pp. 305{312, June 2015.
[253] G. Hao, X. Du, J. Zhao, H. Chen, J. Song, and Y. Song, "Dense surface reconstruction based on the fusion of monocular vision and three-dimensional flash light detection and ranging," Optical Engineering, vol. 54, p. 073113, July 2015.
[254] D. Conway and J. L. Junkins, \Real-Time Mapping and Localization Under Dynamic Lighting
for Small-Body Landings," in 37th Annual Conference on Guidance and Control, (Brecken-
ridge, CO, United States), Feb. 2015.
[255] D. T. Conway and J. L. Junkins, \Fusion of Depth and Color Images for Dense Simultaneous
Localization and Mapping," Journal of Image and Graphics, pp. 64{69, 2014.
[256] F. Andert, N. Ammann, and B. Maass, "Lidar-Aided Camera Feature Tracking and Visual SLAM for Spacecraft Low-Orbit Navigation and Planetary Landing," in Advances in Aerospace Guidance, Navigation and Control (J. Bordeneuve-Guibé, A. Drouin, and C. Roos, eds.), pp. 605–623, Cham: Springer International Publishing, 2015.
[257] D. S. Wokes and P. L. Palmer, \Heuristic Pose Estimation of a Passive Target Using a Global
Model," Journal of Guidance, Control, and Dynamics, vol. 34, pp. 293{299, Jan. 2011.
[258] C. Brunskill and P. Palmer, \The InspectorSat mission, operations and testbed for relative
motion simulation," in 5th International Conference on Spacecraft Formation Flying Missions
and Technologies, vol. 48, (Munich, Germany), pp. 2638{2652, 2013.
[259] Z. Li, F. Ge, W. Chen, W. Shao, B. Liu, and B. Cheng, "Particle filter-based relative rolling estimation algorithm for non-cooperative infrared spacecraft," Infrared Physics & Technology, vol. 78, pp. 58–65, Sept. 2016.
[260] L. Song, L. Zhi, X. Ma, and Y. Xie, \A Monocular-based Relative Position and Attitude
Filtering for Unknown Targets," in International Astronautical Congress, Oct. 2015.
[261] R. Hartley and A. Zisserman, Multiple view geometry in computer vision. Cambridge, UK; New York: Cambridge University Press, 2nd ed., 2003.
[262] N. W. Oumer and G. Panin, "Tracking and pose estimation of non-cooperative satellite for on-orbit servicing," in International Symposium on Artificial Intelligence, Robotics and Automation in Space, (Turin, Italy), Sept. 2012.
[263] N. W. Oumer and G. Panin, \3d point tracking and pose estimation of a space object us-
ing stereo images," in Pattern Recognition (ICPR), 2012 21st International Conference on,
pp. 796{800, IEEE, 2012.
[264] C. Poelman, R. Radtke, and H. Voorhees, \Automatic Reconstruction of Spacecraft 3d Shape
from Imagery," in Advanced Maui Optical and Space Surveillance Technologies Conference,
p. E30, 2008.
[265] M. Lichter and S. Dubowsky, \State, shape, and parameter estimation of space objects from
range images," in IEEE International Conference on Robotics and Automation, 2004. Pro-
ceedings. ICRA '04. 2004, (New Orleans, LA, USA), pp. 2974{2979 Vol.3, IEEE, 2004.
[266] G. Dong and Z. H. Zhu, "Autonomous robotic capture of non-cooperative target by adaptive extended Kalman filter based visual servo," Acta Astronautica, vol. 122, pp. 209–218, May 2016.
[267] P. Huang, L. Chen, B. Zhang, Z. Meng, and Z. Liu, \Autonomous Rendezvous and Docking
with Nonfull Field of View for Tethered Space Robot," International Journal of Aerospace
Engineering, vol. 2017, pp. 1{11, 2017.
[268] M. Sabatini, G. B. Palmerini, R. Monti, and P. Gasbarri, "Image based control of the 'PINOCCHIO' experimental free flying platform," Acta Astronautica, vol. 94, pp. 480–492, Jan. 2014.
[269] M. Sabatini, R. Monti, P. Gasbarri, and G. B. Palmerini, \Adaptive and robust algorithms
and tests for visual-based navigation of a space robotic manipulator," Acta Astronautica,
vol. 83, pp. 65{84, Feb. 2013.
[270] M. Sabatini, R. Monti, P. Gasbarri, and G. Palmerini, \Deployable space manipulator com-
manded by means of visual-based guidance and navigation," Acta Astronautica, vol. 83,
pp. 27{43, Feb. 2013.
[271] S. Chien, D. Tran, S. Schaffer, G. Rabideau, A. G. Davies, T. Doggett, R. Greeley, F. Ip, V. Baker, J. Doubleday, and others, "Onboard Science Product Generation on the Earth Observing One Mission and Beyond," in ESA Special Publication, vol. 673, 2009.
[272] A. Bevilacqua, A. Gherardi, and L. Carozza, \A vision-based approach for high accuracy
assessment of satellite attitude," in 2009 IEEE 12th International Conference on Computer
Vision Workshops, ICCV Workshops, (Kyoto, Japan), pp. 743{750, IEEE, Sept. 2009.
[273] L. Carozza and A. Bevilacqua, \Error analysis of satellite attitude determination using a
vision-based approach," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 83,
pp. 19{29, Sept. 2013.
[274] T. Kouyama, A. Kanemura, S. Kato, N. Imamoglu, T. Fukuhara, and R. Nakamura, \Satellite
Attitude Determination and Map Projection Based on Robust Image Matching," Remote
Sensing, vol. 9, p. 90, Jan. 2017.
[275] Y. Wang, J. Ng, M. J. Garay, and M. C. Burl, "Onboard image registration from invariant features," in International Symposium on Artificial Intelligence, Robotics and Automation in Space, (Los Angeles, CA, USA), Feb. 2008.
[276] T. Tianyuan, K. Zhiwei, L. Jin, and H. Xin, \Small celestial body image feature matching
method based on PCA-SIFT," in 2015 34th Chinese Control Conference (CCC), (Hangzhou,
China), pp. 4629{4634, IEEE, July 2015.
[277] M. C. Burl and P. G. Wetzler, \Onboard object recognition for planetary exploration," Ma-
chine Learning, vol. 84, pp. 341{367, Sept. 2011.
[278] D. Hayden, S. Chien, D. R. Thompson, and R. Castano, \Onboard Clustering of Aerial Data
for Improved Science Return," in IJCAI Workshop on AI in Space, 2009.
[279] D. R. Thompson, M. Bunte, R. Casta~ no, S. Chien, and R. Greeley, \Image processing onboard
spacecraft for autonomous plume detection," Planetary and Space Science, vol. 62, pp. 153{
159, Mar. 2012.
[280] A. McGovern and K. L. Wagsta, \Machine learning in space: extending our reach," Machine
Learning, vol. 84, pp. 335{340, Sept. 2011.
[281] D. H. Curtis and R. Cobb, \Satellite Articulation Sensing using Computer Vision," in 55th
AIAA Aerospace Sciences Meeting, American Institute of Aeronautics and Astronautics, Jan.
2017.
[282] F. Qureshi, D. Macrini, D. Chung, J. Maclean, S. Dickinson, and P. Jasiobedzki, \A Com-
puter Vision System for Spaceborne Safety Monitoring," in Proceedings, 8th International
Symposium on Articial Intelligence, Robotics and Automation in Space (iSAIRAS), 2005.
[283] B. Bornstein, T. Estlin, B. Clement, and P. Springer, \Using a multicore processor for rover
autonomous science," in 2011 Aerospace Conference, (Big Sky, USA), pp. 1{9, IEEE, Mar.
2011.
[284] P. M. Kogge, B. J. Bornstein, and T. A. Estlin, Energy usage in an embedded space vi-
sion application on a tiled architecture. Pasadena, CA: Jet Propulsion Laboratory, National
Aeronautics and Space Administration, 2011.
[285] P. McCall, G. Torres, K. LeGrand, M. Adjouadi, C. Liu, J. Darling, and H. Pernicka, \Many-
core computing for space-based stereoscopic imaging," in 2013 IEEE Aerospace Conference,
(Big Sky, MT), pp. 1{7, IEEE, Mar. 2013.
[286] J. Alexander, Yang Cheng, W. Zheng, N. Trawny, and A. Johnson, \A Terrain Relative
Navigation sensor enabled by multi-core processing," in 2012 IEEE Aerospace Conference,
(Big Sky, MT), pp. 1{11, IEEE, Mar. 2012.
[287] T. P. Chen, D. Budnikov, C. J. Hughes, and Y.-K. Chen, \Computer Vision on Multi-Core
Processors: Articulated Body Tracking," in Multimedia and Expo, 2007 IEEE International
Conference on, (Beijing, China), pp. 1862{1865, IEEE, July 2007.
[288] M. Manfredi, G. Williamson, D. Kronkright, E. Doehne, M. Jacobs, E. Marengo, and G. Bearman, "Measuring changes in cultural heritage objects with Reflectance Transformation Imaging," in 2013 Digital Heritage International Congress (DigitalHeritage), (Marseille, France), pp. 189–192, IEEE, Oct. 2013.
[289] T. Malzbender, D. Gelb, and H. Wolters, "Polynomial texture maps," in Proceedings of the 28th annual conference on Computer graphics and interactive techniques - SIGGRAPH '01, pp. 519–528, ACM Press, 2001.
[290] A. Abrams, K. Miskell, and R. Pless, "The Episolar Constraint: Monocular Shape from Shadow Correspondence," in 2013 IEEE Conference on Computer Vision and Pattern Recognition, (Portland, OR, USA), pp. 1407–1414, IEEE, June 2013.
[291] A. Howard, M. Mataric, and G. Sukhatme, "Localization for mobile robot teams using maximum likelihood estimation," in IEEE/RSJ International Conference on Intelligent Robots and System, vol. 1, (Lausanne, Switzerland), pp. 434–439, IEEE, 2002.
[292] N. Karam, F. Chausse, R. Aufrere, and R. Chapuis, "Localization of a Group of Communicating Vehicles by State Exchange," in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Beijing, China), pp. 519–524, IEEE, Oct. 2006.
[293] D. Eickstedt and M. Benjamin, "Cooperative Target Tracking in a Distributed Autonomous Sensor Network," in OCEANS 2006, (Boston, MA, USA), pp. 1–6, IEEE, Sept. 2006.
[294] J. Soeder and J. Raquet, "Image-Aided Navigation Using Cooperative Binocular Stereopsis," Navigation, vol. 62, pp. 239–248, Sept. 2015.
[295] C. M. Jewison, B. McCarthy, D. C. Sternberg, D. Strawser, and C. Fang, "Resource Aggregated Reconfigurable Control and Risk-Allocative Path Planning for On-orbit Servicing and Assembly of Satellites," in AIAA Guidance, Navigation, and Control Conference, (National Harbor, Maryland), American Institute of Aeronautics and Astronautics, Jan. 2014.
[296] S. Engelen, E. Gill, and C. Verhoeven, "On the reliability, availability, and throughput of satellite swarms," IEEE Transactions on Aerospace and Electronic Systems, vol. 50, pp. 1027–1037, Apr. 2014.
[297] S. I. Roumeliotis and G. A. Bekey, "Distributed Multi-Robot Localization," in Distributed Autonomous Robotic Systems 4 (L. E. Parker, G. Bekey, and J. Barhen, eds.), pp. 179–188, Tokyo: Springer Japan, 2000.
[298] A. Prorok, A. Bahr, and A. Martinoli, "Low-cost collaborative localization for large-scale multi-robot systems," in 2012 IEEE International Conference on Robotics and Automation, (St Paul, MN, USA), pp. 4236–4241, IEEE, May 2012.
[299] N. Moshtagh, A. Jadbabaie, and K. Daniilidis, "Vision-based Distributed Coordination and Flocking of Multi-agent Systems," in Robotics: Science and Systems I, Robotics: Science and Systems Foundation, June 2005.
[300] M. Meingast, M. Kushwaha, Songhwai Oh, X. Koutsoukos, A. Ledeczi, and S. Sastry, "Fusion-based localization for a Heterogeneous camera network," in 2008 Second ACM/IEEE International Conference on Distributed Smart Cameras, (Palo Alto, CA, USA), pp. 1–8, IEEE, Sept. 2008.
[301] V. Indelman, P. Gurfil, E. Rivlin, and H. Rotstein, "Graph-based cooperative navigation using three-view constraints: Method validation," in Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, (Myrtle Beach, SC, USA), pp. 769–776, IEEE, Apr. 2012.
[302] S. S. Kia, S. Rounds, and S. Martinez, "Cooperative Localization for Mobile Agents: A Recursive Decentralized Algorithm Based on Kalman-Filter Decoupling," IEEE Control Systems Magazine, vol. 36, pp. 86–101, Apr. 2016.
[303] V. Indelman, "Distributed Perception and Estimation: a Short Survey," in Principles of Multi-Robot Systems, workshop in conjunction with Robotics Science and Systems (RSS) Conference, (Italy), July 2015.
[304] R. Raman, S. Bakshi, and P. K. Sa, "Multi-camera localisation: a review," International Journal of Machine Intelligence and Sensory Signal Processing, vol. 1, no. 1, p. 91, 2013.
[305] X. Shen, H. Andersen, W. K. Leong, H. X. Kong, M. H. A. Jr, and D. Rus, "A General Framework for Multi-vehicle Cooperative Localization Using Pose Graph," IEEE Transactions on Intelligent Transportation Systems (Submitted), 2016.
[306] M. Cognetti, P. Stegagno, A. Franchi, G. Oriolo, and H. H. Bülthoff, "3-D mutual localization with anonymous bearing measurements," in 2012 IEEE International Conference on Robotics and Automation, (St Paul, MN, USA), pp. 791–798, IEEE, May 2012.
[307] A. Franchi, G. Oriolo, and P. Stegagno, "Mutual localization in a multi-robot system with anonymous relative position measures," in 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, (St. Louis, MO, USA), pp. 3974–3980, IEEE, Oct. 2009.
[308] A. Martinelli, F. Pont, and R. Siegwart, "Multi-Robot Localization Using Relative Observations," in Proceedings of the 2005 IEEE International Conference on Robotics and Automation, (Barcelona, Spain), pp. 2797–2802, IEEE, 2005.
[309] T. Berg and H. Durrant-Whyte, "General decentralized Kalman filters," in Proceedings of 1994 American Control Conference - ACC '94, vol. 2, (Baltimore, MD, USA), pp. 2273–2274, IEEE, 1994.
[310] J. Hu, L. Xie, and C. Zhang, "Diffusion Kalman Filtering Based on Covariance Intersection," IEEE Transactions on Signal Processing, vol. 60, pp. 891–902, Feb. 2012.
[311] S. Panzieri, F. Pascucci, and R. Setola, "Multirobot Localisation Using Interlaced Extended Kalman Filter," in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Beijing, China), pp. 2816–2821, IEEE, Oct. 2006.
[312] S. S. Kia, J. Hechtbauer, D. Gogokhiya, and S. Martinez, "Server-Assisted Distributed Cooperative Localization Over Unreliable Communication Links," IEEE Transactions on Robotics, vol. 34, pp. 1392–1399, Oct. 2018.
[313] A. Mourikis and S. Roumeliotis, "Performance analysis of multirobot Cooperative localization," IEEE Transactions on Robotics, vol. 22, pp. 666–681, Aug. 2006.
[314] S. I. Roumeliotis and I. M. Rekleitis, "Propagation of Uncertainty in Cooperative Multirobot Localization: Analysis and Experimental Results," Autonomous Robots, vol. 17, pp. 41–54, July 2004.
[315] B. Bethke, M. Valenti, and J. How, "Cooperative Vision Based Estimation and Tracking Using Multiple UAVs," in Advances in Cooperative Control and Optimization (P. M. Pardalos, R. Murphey, D. Grundel, and M. J. Hirsch, eds.), vol. 369, pp. 179–189, Berlin, Heidelberg: Springer Berlin Heidelberg, 2007.
[316] L. Luft, T. Schubert, S. I. Roumeliotis, and W. Burgard, "Recursive Decentralized Collaborative Localization for Sparsely Communicating Robots," in Robotics: Science and Systems XII, Robotics: Science and Systems Foundation, 2016.
[317] L. C. Carrillo-Arce, E. D. Nerurkar, J. L. Gordillo, and S. I. Roumeliotis, "Decentralized multi-robot cooperative localization using covariance intersection," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Tokyo), pp. 1412–1417, IEEE, Nov. 2013.
[318] T. R. Wanasinghe, G. K. I. Mann, and R. G. Gosine, "Decentralized Cooperative Localization Approach for Autonomous Multirobot Systems," Journal of Robotics, vol. 2016, pp. 1–18, 2016.
[319] T. R. Wanasinghe, G. K. I. Mann, and R. G. Gosine, "A Jacobian free approach for multi-robot relative localization," in 2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), (Toronto, ON, Canada), pp. 1–6, IEEE, May 2014.
[320] L. Montesano, J. Gaspar, J. Santos-Victor, and L. Montano, "Cooperative localization by fusing vision-based bearing measurements and motion," in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Edmonton, Alta., Canada), pp. 2333–2338, IEEE, 2005.
[321] L. Wang, J. Wan, Y. Liu, and J. Shao, "Cooperative localization method for multi-robot based on PF-EKF," Science in China Series F: Information Sciences, vol. 51, pp. 1125–1137, Aug. 2008.
[322] A. Howard, M. Mataric, and G. Sukhatme, "Putting the 'I' in 'team': an ego-centric approach to cooperative localization," in 2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422), vol. 1, (Taipei, Taiwan), pp. 868–874, IEEE, 2003.
[323] A. Ahmad, G. D. Tipaldi, P. Lima, and W. Burgard, "Cooperative robot localization and target tracking based on least squares minimization," in 2013 IEEE International Conference on Robotics and Automation, (Karlsruhe, Germany), pp. 5696–5701, IEEE, May 2013.
[324] R. Tron and R. Vidal, "Distributed image-based 3-D localization of camera sensor networks," in Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with 2009 28th Chinese Control Conference, (Shanghai), pp. 901–908, IEEE, Dec. 2009.
[325] J. Knuth and P. Barooah, "Maximum-likelihood localization of a camera network from heterogeneous relative measurements," in 2013 American Control Conference, (Washington, DC), pp. 2374–2379, IEEE, June 2013.
[326] E. Nerurkar, S. Roumeliotis, and A. Martinelli, "Distributed maximum a posteriori estimation for multi-robot cooperative localization," in 2009 IEEE International Conference on Robotics and Automation, (Kobe), pp. 1402–1409, IEEE, May 2009.
[327] E. D. Nerurkar and S. I. Roumeliotis, "Asynchronous Multi-Centralized Cooperative Localization," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Taipei), pp. 4352–4359, IEEE, Oct. 2010.
[328] Y. Dieudonne, O. Labbani-Igbida, and F. Petit, "On the solvability of the localization problem in robot networks," in 2008 IEEE International Conference on Robotics and Automation, (Pasadena, CA, USA), pp. 480–485, IEEE, May 2008.
[329] K. Leung, T. Barfoot, and H. Liu, "Decentralized Localization of Sparsely-Communicating Robot Networks: A Centralized-Equivalent Approach," IEEE Transactions on Robotics, vol. 26, pp. 62–77, Feb. 2010.
[330] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and Cooperation in Networked Multi-Agent Systems," Proceedings of the IEEE, vol. 95, pp. 215–233, Jan. 2007.
[331] P. Giguere, I. Rekleitis, and M. Latulippe, "I see you, you see me: Cooperative localization through bearing-only mutually observing robots," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Vilamoura-Algarve, Portugal), pp. 863–869, IEEE, Oct. 2012.
[332] I. Rekleitis, P. Babin, A. DePriest, S. Das, O. Falardeau, O. Dugas, and P. Giguere, "Experiments in Quadrotor Formation Flying Using On-Board Relative Localization," in IEEE/RSJ International Conference on Intelligent Robots and Systems, (Hamburg, Germany), Oct. 2015.
[333] V. Dhiman, J. Ryde, and J. J. Corso, "Mutual localization: Two camera relative 6-DOF pose estimation from reciprocal fiducial observation," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Tokyo), pp. 1347–1354, IEEE, Nov. 2013.
[334] O. Dugas, P. Giguere, and I. Rekleitis, "6dof Cooperative Localization for Mutually Observing Robots," in International Symposium on Robotics Research, (Singapore), Dec. 2013.
[335] X. S. Zhou and S. I. Roumeliotis, "Determining the robot-to-robot 3d relative pose using combinations of range and bearing measurements: 14 minimal problems and closed-form solutions to three of them," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Taipei), pp. 2983–2990, IEEE, Oct. 2010.
[336] X. S. Zhou and S. I. Roumeliotis, "Determining the robot-to-robot 3d relative pose using combinations of range and bearing measurements (Part II)," in 2011 IEEE International Conference on Robotics and Automation, (Shanghai, China), pp. 4736–4743, IEEE, May 2011.
[337] A. Cornejo and R. Nagpal, "Distributed Range-Based Relative Localization of Robot Swarms," in Algorithmic Foundations of Robotics XI (H. L. Akin, N. M. Amato, V. Isler, and A. F. van der Stappen, eds.), vol. 107, pp. 91–107, Cham: Springer International Publishing, 2015.
[338] Y. Feng, Z. Zhu, and J. Xiao, "Heterogeneous Multi-Robot Localization in Unknown 3d Space," in 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Beijing, China), pp. 4533–4538, IEEE, Oct. 2006.
[339] R. Aragues, L. Carlone, G. Calafiore, and C. Sagues, "Multi-agent localization from noisy relative pose measurements," in 2011 IEEE International Conference on Robotics and Automation, (Shanghai, China), pp. 364–369, IEEE, May 2011.
[340] F. Meyer, F. Hlawatsch, and H. Wymeersch, "Cooperative simultaneous localization and tracking (coslat) with reduced complexity and communication," in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, (Vancouver, BC, Canada), pp. 4484–4488, IEEE, May 2013.
[341] T. Schmitt, R. Hanek, S. Buck, and M. Beetz, "Cooperative probabilistic state estimation for vision-based autonomous mobile robots," in Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No.01CH37180), vol. 3, (Maui, HI, USA), pp. 1630–1637, IEEE, 2001.
[342] C.-H. Chang, S.-C. Wang, and C.-C. Wang, "Vision-based cooperative simultaneous localization and tracking," in 2011 IEEE International Conference on Robotics and Automation, (Shanghai, China), pp. 5191–5197, IEEE, May 2011.
[343] J. Knuth and P. Barooah, "Distributed collaborative localization of multiple vehicles from relative pose measurements," in 2009 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton), (Monticello, IL, USA), pp. 314–321, IEEE, Sept. 2009.
[344] J. Knuth and P. Barooah, "Collaborative 3d localization of robots from relative pose measurements using gradient descent on manifolds," in 2012 IEEE International Conference on Robotics and Automation, (St Paul, MN, USA), pp. 1101–1106, IEEE, May 2012.
[345] R. Tron, R. Vidal, and A. Terzis, "Distributed pose averaging in camera networks via consensus on SE(3)," in 2008 Second ACM/IEEE International Conference on Distributed Smart Cameras, (Palo Alto, CA, USA), pp. 1–10, IEEE, Sept. 2008.
[346] A. Sarlette, R. Sepulchre, and N. E. Leonard, "Cooperative attitude synchronization in satellite swarms: a consensus approach," IFAC Proceedings Volumes, vol. 40, no. 7, pp. 223–228, 2007.
[347] F. M. Mirzaei, A. I. Mourikis, and S. I. Roumeliotis, "On the Performance of Multi-robot Target Tracking," in Proceedings 2007 IEEE International Conference on Robotics and Automation, (Rome, Italy), pp. 3482–3489, IEEE, Apr. 2007.
[348] K. Hausman, G. Kahn, S. Patil, J. Muller, K. Goldberg, P. Abbeel, and G. S. Sukhatme, "Occlusion-aware multi-robot 3d tracking," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (Daejeon, South Korea), pp. 1863–1870, IEEE, Oct. 2016.
[349] R. Olfati-Saber, "Distributed Kalman filtering for sensor networks," in 2007 46th IEEE Conference on Decision and Control, (New Orleans, LA, USA), pp. 5492–5498, IEEE, 2007.
[350] X. Wang, "Intelligent multi-camera video surveillance: A review," Pattern Recognition Letters, vol. 34, pp. 3–19, Jan. 2013.
[351] F. Bajramovic and J. Denzler, "Global Uncertainty-based Selection of Relative Poses for Multi Camera Calibration," in BMVC, vol. 2, pp. 745–754, 2008.
[352] M. Brückner, F. Bajramovic, and J. Denzler, "Geometric and probabilistic image dissimilarity measures for common field of view detection," in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (Miami, FL), pp. 2052–2057, IEEE, June 2009.
[353] D. Devarajan, Zhaolin Cheng, and R. Radke, "Calibrating Distributed Camera Networks," Proceedings of the IEEE, vol. 96, pp. 1625–1639, Oct. 2008.
[354] X. Wang, Y. Şekercioğlu, and T. Drummond, "Vision-Based Cooperative Pose Estimation for Localization in Multi-Robot Systems Equipped with RGB-D Cameras," Robotics, vol. 4, pp. 1–22, Dec. 2014.
[355] N. Anjum, "Camera Localization in Distributed Networks Using Trajectory Estimation," Journal of Electrical and Computer Engineering, vol. 2011, pp. 1–13, 2011.
[356] S. Funiak, C. Guestrin, M. Paskin, and R. Sukthankar, "Distributed localization of networked cameras," in Proceedings of the fifth international conference on Information processing in sensor networks - IPSN '06, (Nashville, Tennessee, USA), p. 34, ACM Press, 2006.
[357] A. Rahimi, B. Dunagan, and T. Darrell, "Simultaneous calibration and tracking with a network of non-overlapping sensors," in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., vol. 1, (Washington, DC, USA), pp. 187–194, IEEE, 2004.
[358] K. N. Ramamurthy, C.-C. Lin, A. Aravkin, S. Pankanti, and R. Viguier, "Distributed Bundle Adjustment," in 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), (Venice), pp. 2146–2154, IEEE, Oct. 2017.
[359] C. Hoppe, A. Wendel, S. Zollmann, K. Pirker, A. Irschara, H. Bischof, and S. Kluckner, "Photogrammetric camera network design for micro aerial vehicles," in Computer vision winter workshop (CVWW), vol. 8, pp. 1–3, 2012.
[360] S. Agarwal, Y. Furukawa, N. Snavely, I. Simon, B. Curless, S. M. Seitz, and R. Szeliski, "Building Rome in a day," Communications of the ACM, vol. 54, p. 105, Oct. 2011.
[361] N. Piasco, J. Marzat, and M. Sanfourche, "Collaborative localization and formation flying using distributed stereo-vision," in 2016 IEEE International Conference on Robotics and Automation (ICRA), (Stockholm, Sweden), pp. 1202–1207, IEEE, May 2016.
[362] C. J. Taylor and B. Shirmohammadi, "Self Localizing Smart Camera Networks and their Applications to 3d Modeling," in Proceedings Workshop on Distributed Smart Cameras (DSC), Citeseer, 2006.
[363] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: Real-Time Single Camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, pp. 1052–1067, June 2007.
[364] J. M. Perron, R. Huang, J. Thomas, L. Zhang, P. Tan, and R. T. Vaughan, "Orbiting a Moving Target with Multi-Robot Collaborative Visual SLAM," in 3rd Workshop on Multi VIew Geometry in RObotics (MVIGRO) at Robotics Science and Systems (RSS 2015 Workshop), July 2015.
[365] Gab-Hoe Kim, Jong-Sung Kim, and Ki-Sang Hong, "Vision-based simultaneous localization and mapping with two cameras," in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, (Edmonton, Alta., Canada), pp. 1671–1676, IEEE, 2005.
[366] P. Schmuck and M. Chli, "Multi-UAV collaborative monocular SLAM," in 2017 IEEE International Conference on Robotics and Automation (ICRA), (Singapore), pp. 3863–3870, IEEE, May 2017.
[367] M. J. Tribou, S. L. Waslander, and D. W. Wang, "Scale recovery in multicamera cluster SLAM with non-overlapping fields of view," Computer Vision and Image Understanding, vol. 126, pp. 53–66, Sept. 2014.
[368] M. Walter and J. Leonard, "An Experimental investigation of cooperative SLAM," IFAC Proceedings Volumes, vol. 37, pp. 880–885, July 2004.
[369] R. Sharma, C. Taylor, D. Casbeer, and R. Beard, "Distributed Cooperative SLAM using an Information Consensus Filter," in AIAA Guidance, Navigation, and Control Conference, (Toronto, Ontario, Canada), American Institute of Aeronautics and Astronautics, Aug. 2010.
[370] B. E. Tweddle and D. W. Miller, Computer vision-based localization and mapping of an unknown, uncooperative and spinning target for spacecraft proximity operations. PhD thesis, Massachusetts Institute of Technology, 2013.
[371] V. Indelman, P. Gurfil, E. Rivlin, and H. Rotstein, "Distributed vision-aided cooperative localization and navigation based on three-view geometry," in 2011 Aerospace Conference, (Big Sky, USA), pp. 1–20, IEEE, Mar. 2011.
[372] V. Indelman, P. Gurfil, E. Rivlin, and H. Rotstein, "Real-Time Vision-Aided Localization and Navigation Based on Three-View Geometry," IEEE Transactions on Aerospace and Electronic Systems, vol. 48, pp. 2239–2259, July 2012.
[373] S. Vemprala and S. Saripalli, "Vision based collaborative localization for multirotor vehicles," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (Daejeon, South Korea), pp. 1653–1658, IEEE, Oct. 2016.
[374] L. Merino, J. Wiklund, F. Caballero, A. Moe, J. De Dios, P.-E. Forssen, K. Nordberg, and A. Ollero, "Vision-based multi-UAV position estimation," IEEE Robotics & Automation Magazine, vol. 13, pp. 53–62, Sept. 2006.
[375] I. V. Melnyk, J. A. Hesch, and S. I. Roumeliotis, "Cooperative vision-aided inertial navigation using overlapping views," in 2012 IEEE International Conference on Robotics and Automation, (St Paul, MN, USA), pp. 936–943, IEEE, May 2012.
[376] W. Mantzel, Hyeokho Choi, and R. Baraniuk, "Distributed camera network localization," in Conference Record of the Thirty-Eighth Asilomar Conference on Signals, Systems and Computers, 2004., vol. 2, (Pacific Grove, CA, USA), pp. 1381–1386, IEEE, 2004.
[377] G. Kurillo, Zeyu Li, and R. Bajcsy, "Wide-area external multi-camera calibration using vision graphs and virtual calibration object," in 2008 Second ACM/IEEE International Conference on Distributed Smart Cameras, (Palo Alto, CA, USA), pp. 1–9, IEEE, Sept. 2008.
[378] D. Devarajan and R. J. Radke, "Calibrating Distributed Camera Networks Using Belief Propagation," EURASIP Journal on Advances in Signal Processing, vol. 2007, Dec. 2006.
[379] H. Medeiros, H. Iwaki, and J. Park, "Online distributed calibration of a large network of wireless cameras using dynamic clustering," in 2008 Second ACM/IEEE International Conference on Distributed Smart Cameras, (Palo Alto, CA, USA), pp. 1–10, IEEE, Sept. 2008.
[380] A. Mavrinac, Xiang Chen, and K. Tepe, "Feature-based calibration of distributed smart stereo camera networks," in 2008 Second ACM/IEEE International Conference on Distributed Smart Cameras, (Palo Alto, CA, USA), pp. 1–10, IEEE, Sept. 2008.
[381] S.-G. Kim, J. L. Crassidis, Y. Cheng, A. M. Fosbury, and J. L. Junkins, "Kalman Filtering for Relative Spacecraft Attitude and Position Estimation," Journal of Guidance, Control, and Dynamics, vol. 30, pp. 133–143, Jan. 2007.
[382] R. Smith and F. Hadaegh, "Parallel Estimation and Control Architectures for Deep-Space Formation Flying Spacecraft," in 2006 IEEE Aerospace Conference, (Big Sky, MT, USA), pp. 1–12, IEEE, 2006.
[383] M. Mandic, B. Acikmese, and J. Speyer, "Application of a Decentralized Observer with a Consensus Filter to Distributed Spacecraft Systems," in AIAA Guidance, Navigation, and Control Conference, (Toronto, Ontario, Canada), American Institute of Aeronautics and Astronautics, Aug. 2010.
[384] P. P. Menon and C. Edwards, "An observer based distributed controller for formation flying of satellites," in Proceedings of the 2011 American Control Conference, (San Francisco, CA), pp. 196–201, IEEE, June 2011.
[385] P. Arambel, C. Rago, and R. Mehra, "Covariance intersection algorithm for distributed spacecraft state estimation," in Proceedings of the 2001 American Control Conference (Cat. No.01CH37148), (Arlington, VA, USA), pp. 4398–4403 vol. 6, IEEE, 2001.
[386] S. Marques, D. Dumitriu, B. Udrea, L. Peñín, A. Caramagno, J. Araújo, J. Bastante, and P. Lima, "Optimal guidance and decentralised state estimation applied to a formation flying demonstration mission in GTO," IET Control Theory & Applications, vol. 1, pp. 532–544, Mar. 2007.
[387] D. Dumitriu, S. Marques, P. U. Lima, and B. Udrea, "Decentralized, low-communication state estimation and optimal guidance of formation flying spacecraft," in 16th AAS/AIAA Space Flight Mechanics Meeting, 2006.
[388] X. Wang, K. Zhao, and Z. You, "Coordinated Motion Control of Distributed Spacecraft with Relative State Estimation," Journal of Aerospace Engineering, vol. 29, p. 04015068, May 2016.
[389] J. L. Crassidis, Y. Cheng, C. K. Nebelecky, and A. M. Fosbury, "Decentralized Attitude Estimation Using a Quaternion Covariance Intersection Approach," The Journal of the Astronautical Sciences, vol. 57, pp. 113–128, Jan. 2009.
[390] B. Kang, F. Hadaegh, D. Scharf, and N. Ke, "Decentralized and self-centered estimation architecture for formation flying of spacecraft," in 16th International Symposium on Space Flight Dynamics, (Pasadena, CA), Dec. 2001.
[391] L. Blackmore and F. Hadaegh, "Necessary and Sufficient Conditions for Attitude Estimation in Fractionated Spacecraft Systems," in AIAA Guidance, Navigation, and Control Conference, (Chicago, Illinois), American Institute of Aeronautics and Astronautics, Aug. 2009.
[392] P. Ferguson and J. How, "Decentralized Estimation Algorithms for Formation Flying Spacecraft," in AIAA Guidance, Navigation, and Control Conference and Exhibit, (Austin, Texas), American Institute of Aeronautics and Astronautics, Aug. 2003.
[393] T. McLoughlin and M. Campbell, "Distributed Estimate Fusion Filter for Large Spacecraft Formations," in AIAA Guidance, Navigation and Control Conference and Exhibit, (Honolulu, Hawaii), American Institute of Aeronautics and Astronautics, Aug. 2008.
[394] B. Açıkmeşe, D. P. Scharf, J. M. Carson, and F. Y. Hadaegh, "Distributed Estimation for Spacecraft Formations Over Time-Varying Sensing Topologies," IFAC Proceedings Volumes, vol. 41, no. 2, pp. 2123–2130, 2008.
[395] C. Fan, Z. Meng, and X. Liu, "Multiplicative quaternion extended consensus Kalman filter for attitude and augmented state estimation," in 2016 35th Chinese Control Conference (CCC), (Chengdu, China), pp. 8043–8048, IEEE, July 2016.
[396] R. J. Radke, "A Survey of Distributed Computer Vision Algorithms," in Handbook of Ambient Intelligence and Smart Environments (H. Nakashima, H. Aghajan, and J. C. Augusto, eds.), pp. 35–55, Boston, MA: Springer US, 2010.
[397] M. Wolf and J. Schlessman, "Distributed Smart Cameras and Distributed Computer Vision," in Handbook of Signal Processing Systems (S. S. Bhattacharyya, E. F. Deprettere, R. Leupers, and J. Takala, eds.), pp. 465–479, New York, NY: Springer New York, 2013.
[398] R. Tron and R. Vidal, "Distributed Computer Vision Algorithms," IEEE Signal Processing Magazine, vol. 28, pp. 32–45, May 2011.
[399] S. Joshi and S. Boyd, "Sensor Selection via Convex Optimization," IEEE Transactions on Signal Processing, vol. 57, pp. 451–462, Feb. 2009.
[400] S. P. Chepuri and G. Leus, "Sparsity-promoting adaptive sensor selection for non-linear filtering," in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), (Florence, Italy), pp. 5080–5084, IEEE, May 2014.
[401] C. Potthast, A. Breitenmoser, F. Sha, and G. S. Sukhatme, "Active multi-view object recognition: A unifying view on online feature selection and view planning," Robotics and Autonomous Systems, vol. 84, pp. 31–47, Oct. 2016.
[402] R. Bodor, P. Schrater, and N. Papanikolopoulos, "Multi-camera positioning to optimize task observability," in Proceedings. IEEE Conference on Advanced Video and Signal Based Surveillance, 2005., (Como, Italy), pp. 552–557, IEEE, 2005.
[403] V. Isler and R. Bajcsy, "The sensor selection problem for bounded uncertainty sensing models," in IPSN 2005. Fourth International Symposium on Information Processing in Sensor Networks, 2005., (Los Angeles, CA, USA), pp. 151–158, IEEE, 2005.
[404] J. R. Spletzer and C. J. Taylor, "Dynamic Sensor Planning and Control for Optimally Tracking Targets," The International Journal of Robotics Research, vol. 22, pp. 7–20, Jan. 2003.
[405] A. Mourikis and S. Roumeliotis, "Optimal sensor scheduling for resource-constrained localization of mobile robot formations," IEEE Transactions on Robotics, vol. 22, pp. 917–931, Oct. 2006.
[406] X. Wang, H. Zhang, L. Han, and P. Tang, "Sensor selection based on the Fisher information of the Kalman filter for target tracking in WSNs," in Proceedings of the 33rd Chinese Control Conference, (Nanjing, China), pp. 383–388, IEEE, July 2014.
[407] S. Hadzic and J. Rodriguez, "Utility based node selection scheme for cooperative localization," in 2011 International Conference on Indoor Positioning and Indoor Navigation, (Guimaraes, Portugal), pp. 1–6, IEEE, Sept. 2011.
[408] K. Hausman, J. Müller, A. Hariharan, N. Ayanian, and G. S. Sukhatme, "Cooperative multi-robot control for target tracking with onboard sensing," The International Journal of Robotics Research, vol. 34, pp. 1660–1677, Nov. 2015.
[409] F. Bajramovic, M. Brückner, and J. Denzler, "An Efficient Shortest Triangle Paths Algorithm Applied to Multi-camera Self-calibration," Journal of Mathematical Imaging and Vision, vol. 43, pp. 89–102, June 2012.
[410] F. Bajramovic, Self-calibration of multi-camera systems. Berlin: Logos-Verlag, 2010. OCLC: 846412708.
[411] X. Cortés and F. Serratosa, "Cooperative pose estimation of a fleet of robots based on interactive points alignment," Expert Systems with Applications, vol. 45, pp. 150–160, Mar. 2016.
[412] C. Piciarelli, L. Esterle, A. Khan, B. Rinner, and G. L. Foresti, "Dynamic Reconfiguration in Camera Networks: A Short Survey," IEEE Transactions on Circuits and Systems for Video Technology, vol. 26, pp. 965–977, May 2016.
[413] H. Rowaihy, S. Eswaran, M. Johnson, D. Verma, A. Bar-Noy, T. Brown, and T. La Porta, "A survey of sensor selection schemes in wireless sensor networks," in Defense and Security Symposium (E. M. Carapezza, ed.), (Orlando, Florida, USA), p. 65621A, Apr. 2007.
[414] A. O. Ercan, D. B. Yang, A. El Gamal, and L. J. Guibas, "Optimal Placement and Selection of Camera Network Nodes for Target Localization," in Distributed Computing in Sensor Systems (P. B. Gibbons, T. Abdelzaher, J. Aspnes, and R. Rao, eds.), vol. 4026, pp. 389–404, Berlin, Heidelberg: Springer Berlin Heidelberg, 2006.
[415] F.-M. De Rainville, J.-P. Mercier, C. Gagne, P. Giguere, and D. Laurendeau, "Multisensor placement in 3d environments via visibility estimation and derivative-free optimization," in 2015 IEEE International Conference on Robotics and Automation (ICRA), (Seattle, WA, USA), pp. 3327–3334, IEEE, May 2015.
[416] S. Soro and W. Heinzelman, "Camera selection in visual sensor networks," in 2007 IEEE Conference on Advanced Video and Signal Based Surveillance, (London, UK), pp. 81–86, IEEE, Sept. 2007.
[417] M. Brückner, F. Bajramovic, and J. Denzler, "Intrinsic and extrinsic active self-calibration of multi-camera systems," Machine Vision and Applications, vol. 25, pp. 389–403, Feb. 2014.
[418] J. Griffith, L. Singh, and J. How, "Optimal Microsatellite Cluster Design for Space-Based Tracking Missions," in AIAA Guidance, Navigation and Control Conference and Exhibit, (Hilton Head, South Carolina), American Institute of Aeronautics and Astronautics, Aug. 2007.
[419] L. Singh, E. Griffith, and J. D. Griffith, "Optimal Satellite Formation Determination for Distributed Target State Estimation," in 2007 American Control Conference, (New York, NY, USA), pp. 978–984, IEEE, July 2007.
[420] A. Mavrinac and X. Chen, "Modeling Coverage in Camera Networks: A Survey," International Journal of Computer Vision, vol. 101, pp. 205–226, Jan. 2013.
[421] E. Galceran and M. Carreras, "A survey on coverage path planning for robotics," Robotics and Autonomous Systems, vol. 61, pp. 1258–1276, Dec. 2013.
[422] M. Schwager, B. J. Julian, M. Angermann, and D. Rus, "Eyes in the Sky: Decentralized Control for the Deployment of Robotic Camera Networks," Proceedings of the IEEE, vol. 99, pp. 1541–1561, Sept. 2011.
[423] F. L. Markley and J. L. Crassidis, Fundamentals of spacecraft attitude determination and control. No. 33 in Space Technology Library, New York: Springer, 2014. OCLC: ocn882605422.
[424] E. Lefferts, F. Markley, and M. Shuster, "Kalman Filtering for Spacecraft Attitude Estimation," Journal of Guidance, Control, and Dynamics, vol. 5, pp. 417–429, Sept. 1982.
[425] A. Rhodes, E. Kim, J. A. Christian, and T. Evans, "LIDAR-based Relative Navigation of Non-Cooperative Objects Using Point Cloud Descriptors," in AIAA/AAS Astrodynamics Specialist Conference, (Long Beach, California), American Institute of Aeronautics and Astronautics, Sept. 2016.
[426] J. Thienel and F. L. Markley, "Comparison of Angular Velocity Estimation Methods for Spinning Spacecraft," in AIAA Guidance, Navigation, and Control Conference, (Portland, Oregon), American Institute of Aeronautics and Astronautics, Aug. 2011.
[427] J. Leonard and H. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, vol. 7, pp. 376–382, June 1991.
[428] E. Olson, "AprilTag: A robust and flexible visual fiducial system," in 2011 IEEE International Conference on Robotics and Automation, (Shanghai, China), pp. 3400–3407, IEEE, May 2011.
[429] A. Comport, E. Marchand, M. Pressigout, and F. Chaumette, "Real-time markerless tracking for augmented reality: the virtual visual servoing framework," IEEE Transactions on Visualization and Computer Graphics, vol. 12, pp. 615–628, July 2006.
[430] P. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, pp. 239–256, Feb. 1992.
[431] S. Melax, "The Shortest Arc Quaternion," in Game Programming Gems (M. DeLoura, ed.), pp. 214–218, Charles River Media, 2000.
[432] Y. Chen and G. Medioni, "Object modelling by registration of multiple range images," Image and Vision Computing, vol. 10, pp. 145–155, Apr. 1992.
[433] M. Corsini, P. Cignoni, and R. Scopigno, "Efficient and Flexible Sampling with Blue Noise Properties of Triangular Meshes," IEEE Transactions on Visualization and Computer Graphics, vol. 18, pp. 914–924, June 2012.
[434] A. O. Hero and D. Cochran, "Sensor Management: Past, Present, and Future," IEEE Sensors Journal, vol. 11, pp. 3064–3075, Dec. 2011.
[435] L. Ye, S. Roy, and S. Sundaram, "On the Complexity and Approximability of Optimal Sensor Selection for Kalman Filtering," in 2018 Annual American Control Conference (ACC), (Milwaukee, WI), pp. 5049–5054, IEEE, June 2018.
[436] H. Zhang, R. Ayoub, and S. Sundaram, "Sensor selection for Kalman filtering of linear dynamical systems: Complexity, limitations and greedy algorithms," Automatica, vol. 78, pp. 202–210, Apr. 2017.
[437] J. Le Ny, E. Feron, and M. A. Dahleh, "Scheduling Continuous-Time Kalman Filters," IEEE Transactions on Automatic Control, vol. 56, pp. 1381–1394, June 2011.
[438] M. F. Huber, "On multi-step sensor scheduling via convex optimization," in 2010 2nd International Workshop on Cognitive Information Processing, (Elba Island, Italy), pp. 376–381, IEEE, June 2010.
[439] L. F. O. Chamon, G. J. Pappas, and A. Ribeiro, "The mean square error in Kalman filtering sensor selection is approximately supermodular," in 2017 IEEE 56th Annual Conference on Decision and Control (CDC), (Melbourne, Australia), pp. 343–350, IEEE, Dec. 2017.
[440] L. Meier, J. Peschon, and R. Dressler, "Optimal control of measurement subsystems," IEEE Transactions on Automatic Control, vol. 12, pp. 528–536, Oct. 1967.
[441] R. Mehra, "Optimization of measurement schedules and sensor designs for linear dynamic systems," IEEE Transactions on Automatic Control, vol. 21, pp. 55–64, Feb. 1976.
[442] M. Athans, "On the Determination of Optimal Costly Measurement Strategies for Linear Stochastic Systems," IFAC Proceedings Volumes, vol. 5, pp. 303–313, June 1972.
[443] V. Gupta, T. H. Chung, B. Hassibi, and R. M. Murray, "On a stochastic sensor selection algorithm with applications in sensor scheduling and sensor coverage," Automatica, vol. 42, pp. 251–260, Feb. 2006.
[444] D. R. Fuhrmann, "One-step optimal measurement selection for linear gaussian estimation problems," in 2007 International Waveform Diversity and Design Conference, (Pisa, Italy), pp. 224–227, IEEE, June 2007.
[445] A. Broder, "Generating random spanning trees," in 30th Annual Symposium on Foundations of Computer Science, (Research Triangle Park, NC, USA), pp. 442–447, IEEE, 1989.
[446] S. P. Chepuri and G. Leus, "Sensor selection for estimation, filtering, and detection," in 2014 International Conference on Signal Processing and Communications (SPCOM), (Bangalore, India), pp. 1–5, IEEE, July 2014.
[447] S. Sharma, C. Beierle, and S. D'Amico, "Pose estimation for non-cooperative spacecraft rendezvous using convolutional neural networks," in 2018 IEEE Aerospace Conference, (Big Sky, MT), pp. 1–12, IEEE, Mar. 2018.