A ROBOTIC SYSTEM FOR BENTHIC SAMPLING ALONG A TRANSECT
by
Jnaneshwar Das
A Thesis Presented to the
FACULTY OF THE VITERBI SCHOOL OF ENGINEERING
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF SCIENCE
(COMPUTER SCIENCE)
May 2008
Copyright 2008 Jnaneshwar Das
Dedication
This thesis is dedicated to my parents for their unconditional love and support.
Acknowledgements
First of all, I would like to thank my advisor Prof. Gaurav Sukhatme, for envisioning this fascinating project, encouraging me to try out new ideas, helping me with my experiments, and also allowing me to use the swimming pool at his residence for deployments. Secondly, I owe my gratitude to Prof. David Caron for agreeing to be a part of the thesis committee and for being enthusiastic and showing positive interest in my work. The time I spent with him in the field, collecting data and performing experiments (much before I even started working on this thesis), has helped me understand the subtle facts of marine life, and I look forward to the application of this work in real marine experiments!
I was delighted to have Prof. Stefan Schaal on my thesis committee, and I would like to thank him for his support and willingness to serve.
I have to thank the entire NAMOS team for being a constant source of inspiration and knowledge. I am especially grateful to Carl Oberg for his ideas and help in fabricating the robot and setting up the experiments.
I am immensely grateful to Irina Strelnik for helping me purchase materials and
components. Without her help, this work would not have been possible. I also owe my
gratitude to Kusum Shori for helping me throughout with administrative issues.
I also want to thank all my labmates for their support, and for making the writing of
this thesis a lot of fun. I thank Arvind Pereira for being a source of encouragement and
for his constant help, Jon Binney for helping me with vision related doubts, Jonathan
Kelly for his help with LaTeX and state estimation, Eric Wade and Marin Kobilarov for
their help with system dynamics, Srikanth Saripalli for lending me the Gumstix stack to
kick-start my experiments, Karthik Dantu for being there at the lab late at night and
helping me lift equipment, Bin Zhang for being such a good listener and for his advice,
Sameera Poduri for her insight, and DeWitt Latimer for sharing many interesting facts
related to robots and systems. Lastly, I would like to thank Mrinal Kalakrishnan for
being a good friend, a patient sounding board, and a good critic.
Table of Contents
Dedication ii
Acknowledgements iii
List Of Figures vii
Abstract ix
Chapter 1: INTRODUCTION 1
1.1 Problem statement and assumptions . . . . . . . . . . . . . . . . . . . . . 1
1.2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Chapter 2: SYSTEM DESCRIPTION 6
2.1 Packaging and support infrastructure . . . . . . . . . . . . . . . . . . . . . 7
2.2 Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4 Actuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.6 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.7 Sampling parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Chapter 3: SYSTEM MODEL 25
3.1 Euler-Lagrange equations of motion . . . . . . . . . . . . . . . . . . . . . 26
3.2 Simplified pitch-neglected model . . . . . . . . . . . . . . . . . . . . . . . 34
3.3 Linearized model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.3.1 Trigonometric approximation . . . . . . . . . . . . . . . . . . . . . 36
3.3.2 First order model using Taylor series expansion . . . . . . . . . . . 39
Chapter 4: LOCATION ESTIMATOR 42
4.1 The Kalman filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.2 The forward-backward filter . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3 Smoother . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.4 Data collection and pre-processing . . . . . . . . . . . . . . . . . . . . . . 47
4.5 Filter design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Chapter 5: EXPERIMENTAL RESULTS 50
Chapter 6: VISUAL RECONSTRUCTION 60
6.1 Problem statement and approach . . . . . . . . . . . . . . . . . . . . . . . 60
6.2 The need for location estimates . . . . . . . . . . . . . . . . . . . . . . . . 61
6.3 Assumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
6.4 Camera model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
6.5 Reprojection scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Chapter 7: CONCLUSION AND FUTURE WORK 67
References 69
List Of Figures
1.1 The robot positioned on a PVC guide-rail. . . . . . . . . . . . . . . . . . . 2
2.1 Schematic of the robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Dimensions of the robot (side view) . . . . . . . . . . . . . . . . . . . . . 9
2.3 Dimensions of the robot (front view) . . . . . . . . . . . . . . . . . . . . . 10
2.4 Dimensions of the robot (top view) . . . . . . . . . . . . . . . . . . . . . . 11
2.5 Cross-sectional view of the deployment scheme . . . . . . . . . . . . . . . 12
2.6 Images grabbed by onboard camera. . . . . . . . . . . . . . . . . . . . . . 13
2.7 Block diagram showing the subsystems of the robot . . . . . . . . . . . . 14
2.8 Magnetometer data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.9 Thruster force profile (force vs digital thruster command) . . . . . . . . . 16
2.10 Software workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.11 State transition diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.12 Plot of raw thruster input . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.13 Actuator control primitive . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.14 Sampling scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.15 The two periods associated with one sampling point . . . . . . . . . . . . 22
2.16 Figure showing sampling time for a sampling point . . . . . . . . . . . . . 23
2.17 Comparison of sampling time of two sampling points . . . . . . . . . . . . 24
3.1 Free body diagram for the robot . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Robot frame w.r.t the world frame . . . . . . . . . . . . . . . . . . . . . . 29
3.3 The Simulink block for the robot model . . . . . . . . . . . . . . . . . . . 39
3.4 Plot showing the linearized model’s response to step control input . . . . 40
3.5 Plot showing the linearized model’s response to triangular control input . 41
4.1 Raw sensor data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.2 Plot showing raw and processed sensor data . . . . . . . . . . . . . . . . . 48
5.1 The experimental tank setup . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2 Results from the tank experiment (trial 1) . . . . . . . . . . . . . . . . . . 53
5.3 The swimming pool setup . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
5.4 Illustration of the experimental setup at the pool . . . . . . . . . . . . . . 55
5.5 Forward-backward filter estimates for trial 2 in the tank . . . . . . . . . . 56
5.6 Smoother estimates for trial 2 in the tank . . . . . . . . . . . . . . . . . . 57
5.7 Forward-backward filter and smoother estimates for the pool trial . . . . . 58
5.8 Standard deviation of position estimates . . . . . . . . . . . . . . . . . . . 59
6.1 Image capturing scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2 The image reconstruction scheme . . . . . . . . . . . . . . . . . . . . . . . 65
6.3 Reconstructed image showing surface features . . . . . . . . . . . . . . . . 66
Abstract
This thesis presents the design of a novel robotic system capable of long-term benthic
sampling along a transect. The robot is built to traverse back and forth along a mechan-
ical guide-rail at the bottom of a water body. This work describes the system design,
the experimental setup, and shows results from localization tests with the robot in a
laboratory tank and a shallow swimming pool.
Chapter 1
INTRODUCTION
Underwater observing systems are used for making physical, chemical and biological
measurements of aquatic environments. Automating such systems can lead to significant
savings in scientist time, while increasing the possibility of new discoveries in limnology
and oceanography.
This work introduces a novel benthic robotic observing system designed to periodically patrol a transect while capturing images of the water above it. It describes the electromechanical design of the robot and its propulsion and support system. Using a combination
of inertial sensing, absolute position measurements at the transect endpoints, and a sim-
ple dynamic model, we are able to apply a Kalman smoother to obtain accurate position
estimates for the robot.
1.1 Problem statement and assumptions
Our focus in this work is on the development of a system for long-term deployment. We
envisage a robotic sentinel that is able to maintain a presence underwater for weeks or
months at a time allowing seasonal-scale repeatable measurements. Mobility is energy
Figure 1.1: The robot positioned on a PVC guide-rail. The phone handset next to the
robot is approximately 20 cm long and is shown for scale.
expensive, and a long-term presence may naturally suggest a Lagrangian approach where
the robot moves with the water mass (e.g., a set of drifters or a glider). For increased
precision and repeatability we considered an alternative approach wherein a guide-rail is
used to mechanically support the motion of the robot as it moves to and fro between the
transect endpoints. The advantages of this approach are discussed below.
Our design is energy-friendly since energy is needed only to move, not to hold sta-
tion. It is also unobtrusive relative to the water surface. Importantly, the approach is
potentially highly repeatable and precise. The main contributions of this thesis are 1. the
design of the prototype robot, and 2. the experimental evidence in support of its precise
and repeatable positioning ability. Since this is our first report on this novel robot, we
make the following assumptions (we plan to relax some of these in future work). For the
purposes of the work reported here the guide-rail is rigid and has negligible slack. The
yaw of the robot is mechanically restricted. The robot is not equipped with sensors to
directly sense its precise location on the guide-rail. Finally, we do not envisage real-time,
fine-grained location estimates being necessary for robot operation.
We utilize a state estimation technique that uses frequent measurements from iner-
tial sensors, and infrequent, periodic absolute position information every time the robot
reaches the end of the guide-rail. This is achieved by using a forward-backward Kalman
filter and a smoother.
1.2 Related work
There is significant interest and activity in the construction of large (continent-scale) earth observing systems. Prominent examples include the National Ecological Observatory Network (NEON) [1] (a network of observatories connected together), and the NEPTUNE ocean observatory [2] (a heavily instrumented, cabled ocean observing system to characterize information from the ocean and the ocean floor). NEON and NEPTUNE both envisage robots (tethered and free-swimming) as part of their design for semi-autonomous
data gathering and sample collection.
Autonomous underwater vehicles (AUVs) are an obvious choice for underwater sampling because of their mobility, a good example being the REMUS system [4]. However, AUVs cannot operate for extended periods of time because of energy constraints.
Autonomous sea gliders [8] are a more energy-friendly alternative, generating forward
propulsion by varying the buoyancy, allowing them to span thousands of kilometers with
a single charge cycle. More recent developments have resulted in gliders which harvest
propulsive energy from the heat flow between the vehicle engine and the thermal gradient
of the temperate and tropical ocean [20]. However, both platforms essentially use dead reckoning for state estimation (occasionally bounding the error with accurate absolute
position information on resurfacing). Significant recent work on AUV autonomy is being
done in other areas. In [7] two approaches are investigated: a geometric approach in
which a mobile robot moves within a field of static nodes and all nodes are capable of
estimating the range to their neighbours acoustically; and visual odometry from stereo
cameras. The Networked Aquatic Microbial Observing Systems (NAMOS) [18] is an example of an Autonomous Surface Vehicle (ASV) collaborating with a static set of sensor
nodes to form an observing system for large-scale aquatic sampling.
The localization technique exploited in this work is the forward-backward Kalman filter and the Kalman smoother, which have found numerous applications in robotics. Examples include 3D attitude estimation for localization of planetary rovers [14], and [6], which maintains acceptable precision of vehicle positioning even during GPS satellite blackouts (e.g., when the vehicle enters a tunnel). A Kalman smoother has been used underwater for tasks such as object tracking [12].
Finally, we note that the work reported here owes a significant piece of its intellectual heritage to the Networked Infomechanical Systems (NIMS) project [5, 10], which has produced a set of robotic systems supported by cableway structures suspended within the environment. Various NIMS systems have been used to sense aspects of the forest canopy, a river confluence, and a lake. These systems have all featured a gondola carrying the sensor payload suspended from a cableway, and a pulley mechanism for gondola mobility. In contrast, the system reported here hovers above a guide-rail due to buoyant force, and uses a thruster for propulsion.
Chapter 2
SYSTEM DESCRIPTION
This chapter discusses the design and the deployment scheme of the system. The desired
features for the system are as follows,
1. The robot should be capable of reliable underwater operation for extended periods
of time.
2. The robot should consume very low power and have enough onboard energy to last
the specified operational time (ideally days).
3. The robot should stay upright so that the onboard camera can capture images.
4. The robot should be positively buoyant so that in events such as discharged batteries, leaks, or mission completion, it can disengage from the rail and resurface
utilizing its positive buoyancy.
5. The guide-rail assembly should be portable and easily deployable.
6. The robot should have a simple sensing scheme that can reliably detect endpoints.
In the following sections we discuss our implementation to incorporate each feature. More detailed information about the component specifications and costs can be found on the project website, http://robotics.usc.edu/~jnaneshd/rbss/.
2.1 Packaging and support infrastructure
Figure 2.1: Schematic of the robot
The components of the robot are packaged into two separate levels, each being an
off-the-shelf drybox for underwater use (Figure 2.1). The upper level is a small (27 cm
x 10.5 cm x 10.5 cm), transparent unit which houses the computing hardware, inertial
measurement unit, power converter, safety fuses, motor control board and an upward
looking camera. The lower level is a larger drybox (31 cm x 20 cm x 12.5 cm) which
holds the batteries, magnetic power switches, and hall-effect sensor for marker detection.
A communication and power link between the upper and the lower units is established
via cables running through a tygon hose, clamped to the dry-boxes using tapped hose
fittings. The dryboxes are fixed to a custom-made aluminum frame, on which the thruster
is mounted. At the very bottom of this frame is a roller assembly that links the robot to
the guide rail. Stainless steel guide rods at each end restrict yaw. The robot is powered
by two 7Ah, 12V sealed lead-acid batteries housed within the lower deck. One battery is
dedicated for the computing and sensing hardware, whereas the other battery is used by
the thruster. A DC-DC voltage converter supplies the required voltages to the Gumstix
stack, motor control board and the sensor suite.
Hermetically sealed reed switch assemblies were mounted within the robot housing to
serve as power switches. They are triggered externally with small neodymium magnets,
allowing easy power cycling and emergency shutdown.
The guide-rail is a multi-segment PVC pipe, with embedded neodymium magnets at the endpoints to serve as markers. The pipe is mounted to supporting structures at either end using aluminum structural fittings. The whole assembly is tethered using nylon ropes and a pulley assembly to allow easy immersion and resurfacing. Figure 2.5 illustrates the deployment setup.
2.2 Computing
The primary computing hardware on the robot is the Gumstix 600 MHz single-board computer. It handles the general workflow, secondary tasks such as capturing and storing images, communication (when on the surface), sensing, and control. Appropriate expansion cards provide IO and storage functionalities (RS232, I2C, Ethernet, 802.11b and microSD). These boards are available in a stackable format with a small overall package size. The small form factor and fanless design result in relatively low power consumption (∼450 mA) compared to other computing platforms, making the Gumstix stack an ideal choice for this robot. The software has been written using C and C++ to run on the Linux operating system.

Figure 2.2: Dimensions of the robot (side view)
Figure 2.3: Dimensions of the robot (front view)
2.3 Sensing
The robot’s sensor suite consists of accelerometers, inclinometers, a proximity sensor, and a camera.
Figure 2.4: Dimensions of the robot (top view)
Inertial measurement unit
The MicroStrain 3DM-G Inertial Measurement Unit (IMU), with a tri-axial accelerometer, inclinometer and rate-gyro, provides acceleration, attitude, and angular velocity information, respectively. The IMU comes with an RS232 interface which can run both in polled and free-running mode. We obtain data from the IMU by polling it at a rate of 16 Hz.
Proximity sensor
A key element in the system is the ability to accurately and reliably detect the endpoints
of the transect. This can be achieved by using mechanical switches, optical sensors,
and close-range or long-range magnetic proximity sensors. Mechanical switches need careful mounting and waterproofing. Close-range proximity sensors suffer from placement
Figure 2.5: Cross-sectional view of the deployment scheme
constraints for reliable triggering. To circumvent these problems, we decided to use a hall-effect sensor to detect magnetic markers embedded at the endpoints of the guide-rail.
We used the Philips KMZ51 hall-effect sensor packaged within the CMPS03 electronics
compass module. The compass was mounted with its plane perpendicular to the Earth’s
magnetic field, such that one of the two available magnetic sensors was axially aligned with the markers. Raw sensor data was read over the I2C bus and thresholded appropriately within the sensing software. Using this setup, the robot reliably detected magnetic
markers at a distance of 20 cm with a field of view of 6 cm. Figure 2.8 shows the readings
from this sensor corresponding to the robot traversal on the guide-rail.
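As an illustrative sketch (not part of the original implementation), the thresholding described above can be written as a simple latched detector. The threshold value and raw-count scale below are assumptions; the thesis only states that raw I2C readings are thresholded in software:

```cpp
#include <cstdlib>
#include <vector>

// Latched threshold detector for the magnetic endpoint markers.
// kThreshold is a hypothetical raw-count level, not a measured value.
constexpr int kThreshold = 600;

class MarkerDetector {
 public:
  // Returns true once per marker pass: triggers when |raw| first exceeds
  // the threshold and re-arms after the reading drops back below it.
  bool update(int raw) {
    bool above = std::abs(raw) > kThreshold;
    bool triggered = above && !latched_;
    latched_ = above;
    return triggered;
  }

 private:
  bool latched_ = false;
};

// Counts marker passes in a logged trace of raw magnetometer samples.
int count_markers(const std::vector<int>& trace) {
  MarkerDetector det;
  int count = 0;
  for (int raw : trace)
    if (det.update(raw)) ++count;
  return count;
}
```

A latched detector yields exactly one event per endpoint visit even while the robot dwells over a marker, which suits the coarse endpoint detection described above.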
Camera
The robot uses an upward looking Logitech digital camera to grab images of the water
surface. The secondary goal of this thesis is to stitch these images together using position
Figure 2.6: Images grabbed by the upward looking camera during a traversal in a swim-
ming pool. A measuring tape suspended on the water served both as an image source
and ground truth for future experiments on image reconstruction
estimates from the robot’s offline state estimator. The camera is interfaced to the Gumstix
stack through a USB host interface. Currently, the robot captures images only between
thrusts.
2.4 Actuation
The robot is actuated using a single Seabotix BTD150 bidirectional underwater thruster.
Its force profile is shown in Figure 2.9. The thruster is controlled using a Roboteq AX500
motor control board which receives commands from the Gumstix stack over the RS232
interface.
Figure 2.7: Block diagram showing the subsystems of the robot
Figure 2.8: Magnetometer data from a pool test spanning 3.5 m. The robot required
thirteen thrusts to traverse the transect. Peaks in the magnetometer data were detected
each time the robot reached an endpoint.
Figure 2.9: Thruster force profile (force vs digital thruster command)
2.5 Workflow
We implemented the state transition sequence shown in Figure 2.11. Traversal in each
direction is implemented as a series of ’hops’, with a specified time interval between
each hop. The software runs as three independent processes. The first process reads
IMU data over the RS232 interface at the desired rate, and updates the information in
shared memory. The second process reads magnetometer data over the I2C interface, and updates the same in the shared memory. The last process runs the main control loop
and is responsible for maintaining the robot state, running thruster actuation commands,
grabbing images and storing them, and detecting endpoints based on magnetometer data
read from the shared memory. Figure 2.10 shows the software workflow of the robot.
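A minimal in-process sketch of the shared-memory idea is shown below: one slot per sensor stream, overwritten by its producer and read by the control loop. The field names are illustrative assumptions; the actual memory layout and IPC mechanism are not specified here:

```cpp
#include <atomic>

// In-process stand-in for the robot's shared-memory area.
// Field names are illustrative, not the actual layout.
struct SharedState {
  std::atomic<float> accel_x{0.0f};  // latest IMU x-acceleration sample
  std::atomic<int>   mag_raw{0};     // latest raw magnetometer count
};

// Producer side: the IMU poller overwrites the slot with each new sample,
// so the control loop always sees only the most recent value.
void imu_writer(SharedState& s, int n_samples) {
  for (int i = 1; i <= n_samples; ++i)
    s.accel_x.store(static_cast<float>(i), std::memory_order_release);
}

// Consumer side: the control loop reads whatever is latest.
float read_latest_accel(const SharedState& s) {
  return s.accel_x.load(std::memory_order_acquire);
}
```

The real system runs three separate processes against OS shared memory rather than atomics in one address space; the overwrite-latest/read-latest pattern is the same.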
Figure 2.10: Software workflow for the robot.
2.6 Control
Open-loop control was chosen over other techniques to keep the system simple (recall
that the focus here is to test the accuracy with which we can reconstruct location estimates
for the robot post-traverse). A triangular thruster actuation pattern was experimentally
identified to be an ideal control primitive, since it allowed smooth traversal between
locations. A PD controller will be considered in the future for pitch stabilization, when
the system is deployed in a waterbody with significant environmental disturbances.
Actuation primitive
The control primitive for the robot is shown in Figure 2.13. It was chosen to be a triangular wave with a duration of t_a seconds and a peak thrust of p units. The hop length and traversal velocity can be varied by choosing appropriate values of t_a and p. We can also use a trapezoidal input for longer traversal lengths. This has a ramp-up and ramp-down width of t_r seconds, a constant thrust for t_c seconds, and a peak thrust of p units.
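The two primitives can be sketched as command-profile generators. This is an illustrative sketch, not the robot's actual code; the sampling interval dt and the unit of p stand in for the digital thruster command scale:

```cpp
#include <algorithm>
#include <vector>

// Triangular command profile of duration t_a with peak p, sampled every
// dt seconds (dt and the scale of p are placeholder assumptions).
std::vector<double> triangular_profile(double t_a, double p, double dt) {
  std::vector<double> cmd;
  for (double t = 0.0; t <= t_a + 1e-9; t += dt) {
    double half = t_a / 2.0;
    double level = (t <= half) ? p * t / half : p * (t_a - t) / half;
    cmd.push_back(std::max(0.0, level));
  }
  return cmd;
}

// Trapezoidal profile: ramp up over t_r, hold p for t_c, ramp down over t_r.
std::vector<double> trapezoidal_profile(double t_r, double t_c, double p,
                                        double dt) {
  std::vector<double> cmd;
  double total = 2.0 * t_r + t_c;
  for (double t = 0.0; t <= total + 1e-9; t += dt) {
    double level;
    if (t < t_r)             level = p * t / t_r;           // ramp up
    else if (t < t_r + t_c)  level = p;                     // hold
    else                     level = p * (total - t) / t_r; // ramp down
    cmd.push_back(std::max(0.0, level));
  }
  return cmd;
}
```

For example, triangular_profile(2.0, 10.0, 0.5) yields the command sequence {0, 5, 10, 5, 0}.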
2.7 Sampling parameters
A typical mission, implementing the traversal in hops, can be parameterized by a spatial
resolution and a temporal resolution. The spatial resolution defines the spacing between
hops, whereas the temporal resolution defines the time between hops. In this section we
discuss the nature of sampling for our system.
Figure 2.11: The state transition diagram of the robot. The robot maintains internal state-action pairs which are updated based on magnetometer measurements. The location on the pipe is maintained as coarse belief states, viz. right end, free space, and left end. The fine-grained locations are computed offline by the robot’s state estimator.
Figure 2.12: Plot of raw thruster input during a hop (digital thruster command vs time)
(a) Triangular input (b) Trapezoidal Input
Figure 2.13: Illustration of actuator control primitives
Analysis
Figure 2.14 shows a mission with uniform spatial resolution. We make the assumption
that the traversal is symmetric, i.e., the dynamics of the robot is the same for both
directions of traversal.
Figure 2.14: Illustration showing a uniformly divided mission
If N is the number of points to sample, then the number of hops is given by r = N − 1. If s is the traversable span of the system and h is the expected hop length, then,

N − 1 = s/h   (2.1)
N = s/h + 1   (2.2)
If the time spent on a hop is t, then the time for sample point n to be revisited after a pass in the first direction is,

t_1 = 2(N − n)t   (2.4)

For the other direction of traversal, the time before revisit is,

t_2 = 2(n − 1)t   (2.5)

Therefore, the total time before the next cycle starts is given by,

T = t_1 + t_2   (2.6)
T = 2(N − n)t + 2(n − 1)t   (2.7)
T = 2(N − 1)t   (2.8)
Hence T also gives us the time period of a complete traversal.

Figure 2.15: The two periods associated with one sampling point

Figure 2.16: Figure showing sampling time for a sampling point

The best sampling density is achieved at the point where t_1 = t_2:

2(N − n)t = 2(n − 1)t   (2.9)
N − n = n − 1   (2.10)
n = (N + 1)/2   (2.11)
This point corresponds to the location at the middle of the guide-rail. All other locations are sampled non-uniformly within a traversal cycle, the worst scenario being when t_1 = 0 or t_2 = 0. This happens at n = N and n = 1, respectively. These points correspond to the endpoints of the guide-rail. The sampling density becomes progressively better as we move towards the center of the transect, being the best at the sampling point closest to the center.
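The sampling analysis above can be checked numerically with a small sketch; all symbols follow the derivation in this section:

```cpp
// Revisit times for sample point n (n = 1..N) with hop time t:
// t1 = 2(N - n)t and t2 = 2(n - 1)t, as derived above.
struct RevisitTimes {
  double t1;  // delay until revisit after a pass in one direction
  double t2;  // delay until revisit after a pass in the other direction
};

RevisitTimes revisit_times(int N, int n, double t) {
  return {2.0 * (N - n) * t, 2.0 * (n - 1) * t};
}

// Number of sample points for span s and hop length h (Equation 2.2).
int num_points(double s, double h) {
  return static_cast<int>(s / h) + 1;
}

// Period of a complete traversal, T = 2(N - 1)t (Equation 2.8).
double traversal_period(int N, double t) {
  return 2.0 * (N - 1) * t;
}
```

For N = 9 and n = 5, the middle point n = (N + 1)/2 of Equation 2.11, the two revisit delays are equal, so that point is the only one sampled at uniform intervals.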
Figure 2.17: Comparison of sampling time of two sampling points
Chapter 3
SYSTEM MODEL
Since the robot is constrained to a guide-rail, its movement is restricted to the x-axis,
giving it only one translational degree of freedom. Additionally, by restricting yaw we
eliminate one of the three rotational degrees of freedom. Rolling is practically negligible
when environmental disturbances like underwater currents are absent. Hence, for all
practical purposes the robot possesses only two degrees of freedom: translation along the x-axis, and rotation around the pitch or y-axis. The free-body diagram [3] shown in Figure 3.1 illustrates the various forces acting on the robot. We neglect added mass (inertia added due to accelerating or decelerating fluid that moves along with the robot body), and assume hydrodynamic damping to be linear because of the slow speeds involved. Since
the center of buoyancy is above the center of mass, the robot experiences a restoring
torque which tries to keep it upright.
Because of the effect of the gravity field, positive buoyancy, and unrestricted pitch,
the dynamics of the robot is non-linear. The non-linearity can be reduced by restricting
pitching, but this will have two undesired effects:
1. Currently, we ensure that the guide-rail is perfectly horizontal (i.e., its pitch is
negligible). However, in real deployments, this will be difficult to ensure. With
restricted pitch, the robot cannot utilize its restoring torque to stay naturally upright. Rather, it will be perpendicular to the guide-rail and this is undesirable for
our application where we wish to capture images of the surface.
2. By restricting the pitch, we lose the ability to use pitch and the pitch-rate data
for state estimation. However, with a linear approximation of the non-linear model
(since our pitch angle is always small) we can use data from two additional sen-
sors in our state estimation approach. With an onboard three axis accelerometer,
inclinometer, and rate-gyro, this will potentially give us better estimates than just
using accelerometer data.
Keeping these points in mind, we chose not to restrict pitching. In the following sections,
we determine the equations of motion of the system, and various candidate models that
can be used for location estimation, discussed in the next chapter.
3.1 Euler-Lagrange equations of motion
We derive the equations of motion using the Euler-Lagrange formulation [15]. This approach is advantageous because our robot is a constrained system and the Lagrangian
approach allows us to ignore the effect of the guide-rail in our analysis. Instead we are
able to work only with a set of generalized coordinates and forces. We choose the coordinates to be x, the location of the point of contact of the roller and the guide-rail; and
φ, the pitch angle of the robot. These coordinates completely characterize the motion of
Figure 3.1: Free body diagram for the robot
the robot. Figure 3.1 shows our choice of coordinates. In this discussion, we show the
steps in arriving at the equations of motion.
The Lagrangian of a system is given by,
L = T − U   (3.1)

where L is the Lagrangian, T is the kinetic energy of the system, and U is the potential energy of the system. In the following sections, we show the equations for the kinetic energy and the potential energy of the system, and derive the equations of motion using the Euler-Lagrange formulation.
Kinetic energy
The kinetic energy of the robot is given by,
T = T_translational + T_rotational   (3.2)
T = (1/2) M v_x^2 + (1/2) M v_y^2 + (1/2) I φ̇^2   (3.3)
M is the mass of the robot, I is the moment of inertia of the robot about its center of mass, v_x is the velocity of the center of mass along the global x axis, and v_y is the velocity of the center of mass along the global y axis.
Figure 3.2: Robot frame w.r.t the world frame
To obtain the velocities along the x and y axes, we start by representing the position
of the center of mass of the robot in terms of the generalized coordinates x and φ. Note that the
pitch angle is the same at all points on the robot.
x_c = x − l sinφ   (3.4)
y_c = l cosφ   (3.5)

where l is the distance between the center of mass and the roller axis, x_c is the position of the center of mass along the global x axis, and y_c is the position of the center of mass along the global y axis; both represented in terms of the generalized coordinates x and φ. Therefore,

ẋ_c = ẋ − l φ̇ cosφ   (3.6)
ẏ_c = −l φ̇ sinφ   (3.7)
v^2 = v_x^2 + v_y^2   (3.8)
v^2 = ẋ^2 + l^2 φ̇^2 − 2 l ẋ φ̇ cosφ   (3.9)
The translational kinetic energy is hence given by,
T_translational = (1/2) M ẋ^2 + (1/2) M l^2 φ̇^2 − M l ẋ φ̇ cosφ   (3.11)
and the rotational kinetic energy is given by,
T_rotational = (1/2) I φ̇^2   (3.12)

The total kinetic energy is therefore,

T = (1/2) M ẋ^2 + (1/2) M l^2 φ̇^2 − M l ẋ φ̇ cosφ + (1/2) I φ̇^2   (3.13)
Potential energy
The potential energy of the robot can be obtained by computing the total work done in
displacing the robot from its upright position.
τ_w = M g l sinφ   (3.15)
τ_b = F_b l_b sinφ   (3.16)
τ = (F_b l_b − M g l) sinφ   (3.17)

where τ_w is the torque due to the robot's weight, τ_b is the torque due to buoyancy, F_b is the buoyant force, l_b is the distance between the center of mass and the center of buoyancy, g is the acceleration due to gravity, and τ is the net restoring torque.
Work done in displacing the robot by a small angle dφ is given by,
dW = τ dφ   (3.18)
Therefore, the total work done in displacing the robot from a pitch angle of 0 to a pitch angle of φ is given by,

W = ∫_0^φ τ dφ   (3.19)
W = ∫_0^φ (F_b l_b − M g l) sinφ dφ   (3.20)
W = −(F_b l_b − M g l) [cosφ]_0^φ   (3.21)
W = (F_b l_b − M g l)(1 − cosφ)   (3.22)

Hence, the potential energy of the robot at a pitch angle of φ is given by,

U_φ = (F_b l_b − M g l)(1 − cosφ)   (3.23)
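As a sketch, the energy expressions in Equations 3.13 and 3.23 can be evaluated directly. Only the mass M = 10.33 kg is taken from the text (Section 3.2); the remaining parameter values below are illustrative assumptions, chosen so that F_b l_b > Mgl, i.e., so that the torque is restoring as described above:

```cpp
#include <cmath>

// Illustrative parameters; all but M are assumed placeholder values.
struct Params {
  double M  = 10.33;  // mass (kg), from Section 3.2
  double I  = 0.5;    // moment of inertia about the CM (assumed, kg m^2)
  double l  = 0.05;   // CM-to-roller-axis distance (assumed, m)
  double Fb = 110.0;  // buoyant force (assumed, N)
  double lb = 0.25;   // CM-to-center-of-buoyancy distance (assumed, m)
};

// Total kinetic energy T, Equation 3.13.
double kinetic(const Params& p, double xdot, double phi, double phidot) {
  return 0.5 * p.M * xdot * xdot
       + 0.5 * p.M * p.l * p.l * phidot * phidot
       - p.M * p.l * xdot * phidot * std::cos(phi)
       + 0.5 * p.I * phidot * phidot;
}

// Potential energy U_phi relative to the upright pose, Equation 3.23.
double potential(const Params& p, double phi) {
  const double g = 9.81;
  return (p.Fb * p.lb - p.M * g * p.l) * (1.0 - std::cos(phi));
}
```

With these values the potential is zero in the upright pose and grows with |φ|, consistent with a restoring torque.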
Equations of motion
L = T − U_φ   (3.24)
L = (1/2) M ẋ^2 + (1/2) M l^2 φ̇^2 − M l ẋ φ̇ cosφ + (1/2) I φ̇^2 − (F_b l_b − M g l)(1 − cosφ)   (3.25)

d/dt (∂L/∂q̇) − ∂L/∂q = F_conservative + F_non-conservative   (3.26)
For the x coordinate,
d/dt (∂L/∂ẋ) − ∂L/∂x = F cosφ − C_d ẋ − f   (3.27)
∂L/∂ẋ = M ẋ − M l φ̇ cosφ   (3.28)
d/dt (∂L/∂ẋ) = M ẍ − M l φ̈ cosφ + M l φ̇^2 sinφ   (3.29)
∂L/∂x = 0   (3.30)
d/dt (∂L/∂ẋ) − ∂L/∂x = M ẍ − M l φ̈ cosφ + M l φ̇^2 sinφ   (3.31)
M ẍ − M l φ̈ cosφ + M l φ̇^2 sinφ = F cosφ − C_d ẋ − f   (3.32)
For the φ coordinate,
∂L/∂φ̇ = M l^2 φ̇ − M l ẋ cosφ + I φ̇   (3.33)
d/dt (∂L/∂φ̇) = M l^2 φ̈ + I φ̈ − M l ẍ cosφ + M l ẋ φ̇ sinφ   (3.34)
∂L/∂φ = M l ẋ φ̇ sinφ − (F_b l_b − M g l) sinφ   (3.35)
d/dt (∂L/∂φ̇) − ∂L/∂φ = M l^2 φ̈ + I φ̈ − M l ẍ cosφ + (F_b l_b − M g l) sinφ   (3.36)
M l^2 φ̈ + I φ̈ − M l ẍ cosφ + (F_b l_b − M g l) sinφ = −F(l − l_th) + F_d (l + l_d) cosφ   (3.37)
On rearranging, we obtain,

\ddot{x} = \frac{F\cos\phi - C_d\dot{x} - f - Ml\dot{\phi}^2\sin\phi + Ml\ddot{\phi}\cos\phi}{M}   (3.38)

\ddot{\phi} = \frac{Mgl\sin\phi + F_b l_b\sin\phi + Ml\ddot{x}\cos\phi - F(l - l_{th}) + F_d(l + l_d)\cos\phi}{Ml^2 + I}   (3.39)

Here C_d is the hydrodynamic damping constant, F is the thruster force, F_d is the drag
force, f is a friction term, and l_d and l_{th} are the distances from the center of mass to
the center of drag and to the point of application of the thruster force respectively.
Equations 3.38 and 3.39 are the second-order equations of motion of the system. In
the next section we present simplified models derived from this system of equations.
3.2 Simplified pitch-neglected model
During experimental trials, the robot was observed to pitch by a maximum of 1.5°
from the mean orientation. Neglecting this small pitch, we obtain a simplified linear model
given by the following equations,

M\ddot{x} = F - C_d\dot{x}   (3.40)

\ddot{x} = \frac{F}{M} - \frac{C_d}{M}\dot{x}   (3.41)
We model the robot as a discrete linear dynamical system in state space, represented as

x_k = Ax_{k-1} + Bu_{k-1}   (3.42)

The state vector is x_k = [s_k, v_k, a_k]^T, where s_k, v_k and a_k are the position, velocity and
acceleration at timestep k. The control input u_k is the force from the thruster, which is
given by

u_k = \phi(n_k)   (3.43)

where \phi is a function that maps the input command n to a force (see Figure 2.9 for the
thruster force profile). A is the state transition matrix and B is the control matrix.
Using a first-order discretization of Equation 3.41 with timestep \Delta t, Equation 3.42 can be
expanded to,

\begin{bmatrix} s_k \\ v_k \\ a_k \end{bmatrix} =
\begin{bmatrix} 1 & \Delta t & 0 \\ 0 & 1 & \Delta t \\ 0 & -C_d/M & 0 \end{bmatrix}
\begin{bmatrix} s_{k-1} \\ v_{k-1} \\ a_{k-1} \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 1/M \end{bmatrix} u_{k-1}   (3.44)

The damping constant and the mass of the system were determined empirically to be
17.5179 Ns/m and 10.33 kg respectively. With the control-loop discretization \Delta t = 0.12 s,
Equation 3.44 reduces to
\begin{bmatrix} s_k \\ v_k \\ a_k \end{bmatrix} =
\begin{bmatrix} 1 & 0.12 & 0 \\ 0 & 1 & 0.12 \\ 0 & -1.6956 & 0 \end{bmatrix}
\begin{bmatrix} s_{k-1} \\ v_{k-1} \\ a_{k-1} \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 0.0968 \end{bmatrix} u_{k-1}   (3.45)

Equation 3.45 represents the simplified discrete dynamical model of the system used in
the following section on location estimation.
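As a sanity check, the discrete model of Equation 3.45 can be stepped forward in a few lines of code; a minimal sketch, where the 5 N thrust pulse is an illustrative input, not one of the thesis' test profiles:

```python
import numpy as np

# Discrete model from Equation 3.45 (Delta t = 0.12 s, M = 10.33 kg)
A = np.array([[1.0, 0.12, 0.0],
              [0.0, 1.0, 0.12],
              [0.0, -1.6956, 0.0]])
B = np.array([[0.0], [0.0], [0.0968]])

x = np.zeros((3, 1))  # state [position; velocity; acceleration]
for k in range(50):
    u = 5.0 if k < 17 else 0.0  # thrust for about 2 s, then coast
    x = A @ x + B * u
```

Under the pulse the velocity settles near F/C_d, and once the thrust is removed it decays back toward zero through the damping term.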
3.3 Linearized model
3.3.1 Trigonometric approximation
Since our pitch angles are small (under 1.5° from the upright position), we approximate
\cos\phi \approx 1 and \sin\phi \approx \phi. Using this in Equations 3.38 and 3.39, and neglecting higher-order
terms, we get,

\ddot{x} = \frac{F - C_d\dot{x} + C_d\dot{\phi}l_d + Ml\ddot{\phi}}{M}   (3.46)

\ddot{\phi} = \frac{Mgl\phi + F_b l_b\phi + Ml\ddot{x} - F(l - l_{th}) + C_d\dot{x}(l + l_d) - C_d\dot{\phi}l_d(l + l_d)}{Ml^2 + I}   (3.47)
On simplifying the equations, we get,

\ddot{x} = \frac{Ml^2 + I - Mll_{th}}{MI}F - \frac{Mll_d - I - Ml^2}{MI}C_d\dot{x} + \frac{MF_b ll_b - M^2l^2g}{MI}\phi + \frac{Il_d + Ml^2l_d - Mll_d^2}{MI}C_d\dot{\phi}   (3.48)

\ddot{\phi} = \frac{l - l_{th}}{I}F + \frac{l_d - l}{I}C_d\dot{x} + \frac{F_b l_b - Mgl}{I}\phi + \frac{ll_d - l_d^2}{I}C_d\dot{\phi}   (3.49)
On collecting and simplifying the constant terms, we get,

\ddot{x} = C_1F + C_2\dot{x} + C_3\phi + C_4\dot{\phi}   (3.50)

\ddot{\phi} = K_1F + K_2\dot{x} + K_3\phi + K_4\dot{\phi}   (3.51)
The state-space representation is now given by a first-order discretization of the model
presented in Equations 3.50 and 3.51, with a sampling interval \Delta t. The state vector is
given by x_k = [s_k, v_k, a_k, \phi_k, \omega_k, \alpha_k]^T, where s_k, v_k and a_k are as in Equation 3.44, and
\phi_k, \omega_k and \alpha_k are the pitch angle, pitch rate, and angular acceleration about the pitch
axis respectively.
\begin{bmatrix} s_k \\ v_k \\ a_k \\ \phi_k \\ \omega_k \\ \alpha_k \end{bmatrix} =
\begin{bmatrix}
1 & \Delta t & 0 & 0 & 0 & 0 \\
0 & 1 & \Delta t & 0 & 0 & 0 \\
0 & C_2 & 0 & C_3 & C_4 & 0 \\
0 & 0 & 0 & 1 & \Delta t & 0 \\
0 & 0 & 0 & 0 & 1 & \Delta t \\
0 & K_2 & 0 & K_3 & K_4 & 0
\end{bmatrix}
\begin{bmatrix} s_{k-1} \\ v_{k-1} \\ a_{k-1} \\ \phi_{k-1} \\ \omega_{k-1} \\ \alpha_{k-1} \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ C_1 \\ 0 \\ 0 \\ K_1 \end{bmatrix} u_{k-1}   (3.52)
The system parameters were determined empirically. The moment of inertia was calculated
about the center of mass using the parallel axis theorem on the individual moments
of the prominent masses in the system, viz. the batteries (cuboidal, with center of mass
at the center), the two dry boxes (cuboidal, with center of mass at the geometric center),
and the frame (center of mass found by hanging the frame from various points). The
buoyant force was calculated by measuring the difference between dry and wet weights,
and by individually measuring the buoyant forces of the additional packs. The values of
the parameters were determined to be as follows,

M = 10.33 kg, F_{B1} = 1.2 N, L_{B1} = 0.42 m, F_{B2} = 9.6050 N, L_{B2} = 0.31 m,   (3.53)

C_D = 17.5179 Ns/m, L_D = 0.355 m, L_{TH} = 0.18 m, L_W = 0.2574 m, I = 1.1349 kg m^2   (3.54)
where F_{B1} and F_{B2} are the buoyant forces due to the robot body and the buoyancy
packs respectively, L_{B1} and L_{B2} are the distances from the roller axis to the corresponding
centers of buoyancy, C_D is the hydrodynamic damping constant, L_D is the distance from
the roller axis to the center of drag, L_{TH} is the distance of the thruster from the roller
axis, L_W is the distance of the center of mass from the roller axis, and I is the moment
of inertia about the center of mass. The resulting system constants are as follows,
C_1 = 0.0968, C_2 = -1.3069, C_3 = -4.8164, C_4 = 0.4574   (3.55)

K_1 = 0.1586, K_2 = 1.5486, K_3 = -19.2657, K_4 = -1.5420   (3.56)
The state-space equation for the system is hence given by,

\begin{bmatrix} s_k \\ v_k \\ a_k \\ \phi_k \\ \omega_k \\ \alpha_k \end{bmatrix} =
\begin{bmatrix}
1 & \Delta t & 0 & 0 & 0 & 0 \\
0 & 1 & \Delta t & 0 & 0 & 0 \\
0 & -1.3069 & 0 & -4.8164 & 0.4574 & 0 \\
0 & 0 & 0 & 1 & \Delta t & 0 \\
0 & 0 & 0 & 0 & 1 & \Delta t \\
0 & 1.5486 & 0 & -19.2657 & -1.5420 & 0
\end{bmatrix}
\begin{bmatrix} s_{k-1} \\ v_{k-1} \\ a_{k-1} \\ \phi_{k-1} \\ \omega_{k-1} \\ \alpha_{k-1} \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ 0.0968 \\ 0 \\ 0 \\ 0.1586 \end{bmatrix} u_{k-1}   (3.57)
Figure 3.3: The Simulink block for the robot model
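A quick way to reproduce a step response like Figure 3.4 is to iterate Equation 3.57 directly; a minimal sketch, where the input magnitude of 40 follows the figure and is otherwise illustrative:

```python
import numpy as np

dt = 0.12
C1, C2, C3, C4 = 0.0968, -1.3069, -4.8164, 0.4574
K1, K2, K3, K4 = 0.1586, 1.5486, -19.2657, -1.5420

# State: [s, v, a, phi, omega, alpha], as in Equation 3.57
A = np.array([
    [1, dt, 0,  0,  0,  0],
    [0, 1,  dt, 0,  0,  0],
    [0, C2, 0,  C3, C4, 0],
    [0, 0,  0,  1,  dt, 0],
    [0, 0,  0,  0,  1,  dt],
    [0, K2, 0,  K3, K4, 0]])
B = np.array([0, 0, C1, 0, 0, K1])

x = np.zeros(6)
traj = []
for k in range(80):
    u = 40.0 if k < 40 else 0.0  # step input, as in Figure 3.4
    x = A @ x + B * u
    traj.append(x.copy())
traj = np.asarray(traj)
```

Plotting the columns of `traj` against time gives the same stacked panels (position, velocity, acceleration, pitch angle, pitch rate, angular acceleration) shown in the figures.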
3.3.2 First order model using Taylor series expansion
The system can be linearized using the Taylor series expansion around the current state,
and considering only the first order terms [16]. The general non-linear system model is
given by,
\dot{x} = f(x, u, t)   (3.58)

The Taylor series linearization of Equation 3.58 is given by,

f(x) = f(x_a, u_a, t_a) + \left.\frac{\partial f}{\partial x}\right|_{x_a}(x - x_a) + \left.\frac{\partial f}{\partial u}\right|_{u_a}(u - u_a)   (3.59)
The Jacobian matrix of f(x) gives us the gradient of the function. This was computed
using the Symbolic Math Toolbox in MATLAB [11]. This model can be used with an
extended Kalman filter (EKF) [16].
For this work, we have shown results only for the simplified pitch-neglected model
with just the accelerometer data. As a part of our future work, we plan to compare this
result with the estimates from the linearized models to evaluate the improvement (or the
lack thereof) with two additional sensors.
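As an alternative to the symbolic Jacobian, the gradient of f can be approximated by finite differences; a minimal sketch, where the example function is illustrative:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Forward-difference approximation of the Jacobian of f at x."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

# Example: f(x) = [x0^2, sin(x1)]; the Jacobian at [1, 0] is [[2, 0], [0, 1]]
f = lambda x: np.array([x[0] ** 2, np.sin(x[1])])
J = numerical_jacobian(f, np.array([1.0, 0.0]))
```

This avoids a symbolic toolbox at the cost of a small discretization error, which is usually acceptable inside an EKF.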
[Figure: stacked step-response plots over 0-10 s — thruster command; pitch angular acceleration (rad/s^2); pitch rate (rad/s); pitch angle (degrees); acceleration (m/s^2); velocity (m/s); position (m)]
Figure 3.4: Plot showing the linearized model’s response to step control input
[Figure: stacked triangular-input response plots over 0-10 s — thruster command; pitch angular acceleration (rad/s^2); pitch rate (rad/s); pitch angle (degrees); acceleration (m/s^2); velocity (m/s); position (m)]
Figure 3.5: Plot showing the linearized model’s response to triangular control input
Chapter 4
LOCATION ESTIMATOR
We are interested in determining the position of the robot on the guide-rail as a function
of time. Since our robot is a data-logging sentinel, we are not particularly interested in
online estimates of the state. Instead, we want to compute the estimates offline from
the logged sensor data, which could then be used to position-stamp the payload data.
This will allow us to stitch images together and to generate spatio-temporal distributions
of temperature and other aquatic parameters of interest. Our goal is to use an estimation
technique that does not require sensors such as wheel encoders, since these do not
perform reliably underwater (slip, lack of contact with a rigid surface, sensor failure).
Instead, we fuse frequent data from internal sensors such as accelerometers with infrequent
absolute position data, available every time our robot reaches an endpoint on the guide
rail. Since we want to obtain our estimates offline, we propose to use a forward-backward
Kalman filter along with a smoother. A forward Kalman filter propagates forward in
time, estimating the state of the robot at every timestep. Once the filter reaches the next
absolute measurement (at the end of the guide-rail), a backward filter is run. This filter
uses the control inputs and acceleration in a manner similar to the forward filter, but
propagates backward in time. The backward filter eventually reaches the timestep where
the forward filter started estimating. The smoother combines the estimates from these
filters resulting in position estimates with reduced uncertainty.
4.1 The Kalman filter
The Kalman filter [9, 16, 19] is a recursive filter used extensively as a state estimator for
dynamical systems in the presence of noisy measurements. It follows a predictor-corrector
cycle: it first predicts the state of the robot using the process model, and then updates it
with the measurements from the sensors. The discrete Kalman filter uses the state-space
model given by,
x_k = Ax_{k-1} + Bu_{k-1} + w_{k-1}   (4.1)

y_k = Hx_k + r_k   (4.2)

where x_k is the state of the system at timestep k,
y_k is the measurement at timestep k,
u_{k-1} is the control input at timestep k-1,
w_{k-1} ~ N(0, Q_{k-1}) is the process noise at timestep k-1,
r_k ~ N(0, R_k) is the measurement noise at timestep k,
A is the state transition matrix of the dynamical model,
B is the control matrix,
and H is the measurement model matrix.
Predict step

x_k^- = Ax_{k-1} + Bu_{k-1}   (4.3)

P_k^- = AP_{k-1}A^T + Q_{k-1}   (4.4)

Update step

v_k = y_k - Hx_k^-   (4.5)

K_k = P_k^- H^T \left(HP_k^- H^T + R_k\right)^{-1}   (4.6)

x_k = x_k^- + K_k v_k   (4.7)

P_k = P_k^- - K_k S_k K_k^T   (4.8)

where x_k^- and P_k^- are the estimated mean and covariance of the state before seeing the
measurement at timestep k, v_k is the measurement residual at timestep k, S_k = HP_k^- H^T + R_k
is the residual covariance, K_k is the filter gain, which determines how much the
prediction is corrected at timestep k, and x_k and P_k are the estimated mean and covariance
of the state respectively after seeing the measurement at timestep k.
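The predict and update steps of Equations 4.3-4.8 translate directly into code; a minimal sketch, exercised with the 3-state model of Equation 3.45 and the noise values of Section 4.5 (the thrust and measurement values in the example are illustrative):

```python
import numpy as np

def kf_predict(x, P, A, B, u, Q):
    """Predict step (Equations 4.3-4.4)."""
    x = A @ x + B @ u
    P = A @ P @ A.T + Q
    return x, P

def kf_update(x, P, y, H, R):
    """Update step (Equations 4.5-4.8)."""
    v = y - H @ x                      # measurement residual
    S = H @ P @ H.T + R                # residual covariance
    K = P @ H.T @ np.linalg.inv(S)     # filter gain
    x = x + K @ v
    P = P - K @ S @ K.T
    return x, P

# Model from Equation 3.45, acceleration-only measurement (Equation 4.21)
A = np.array([[1, 0.12, 0], [0, 1, 0.12], [0, -1.6956, 0]])
B = np.array([[0.0], [0.0], [0.0968]])
H = np.array([[0.0, 0.0, 1.0]])
Q = np.diag([0.00072, 0.006, 0.05]) ** 2
R = np.array([[0.03 ** 2]])

x, P = np.zeros((3, 1)), np.eye(3)
x, P = kf_predict(x, P, A, B, np.array([[5.0]]), Q)
x, P = kf_update(x, P, np.array([[0.4]]), H, R)
```

Since only acceleration is measured, the update pulls the acceleration component of the state strongly toward the measurement, while position and velocity are corrected through the cross-covariances.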
4.2 The forward-backward filter
Forward filter
The forward filter uses the same principle as the standard Kalman filter discussed earlier.
It propagates the filter forward in time starting from the first timestep. For the sake of
brevity, we skip the discussion of the formulation for this filter.
Backward filter
The backward filter propagates the system back in time. Given the current state and the
previous control input, the inverted process model determines the state at the previous
timestep; the measurement model remains unchanged. Equation 4.1 (neglecting the noise
term) can be rearranged to propagate the system backward in time as follows,

x_k = Ax_{k-1} + Bu_{k-1}   (4.9)

Ax_{k-1} = x_k - Bu_{k-1}   (4.10)

x_{k-1} = A^{-1}x_k - (A^{-1}B)u_{k-1}   (4.11)

Predict step

x_{k-1}^- = A^{-1}x_k - (A^{-1}B)u_{k-1}   (4.12)

P_{k-1}^- = A^{-1}P_k A^{-T} + Q_{k-1}   (4.13)
Update step
The update step remains the same, given by,

v_k = y_k - Hx_k^-   (4.14)

K_k = P_k^- H^T \left(HP_k^- H^T + R_k\right)^{-1}   (4.15)

x_k = x_k^- + K_k v_k   (4.16)

P_k = P_k^- - K_k S_k K_k^T   (4.17)

Hence, the backward filter provides optimal state estimates starting from the last timestep
and propagating back in time until it reaches the first estimate. As a consequence,
the backward filter estimates are best close to the last timestep, whereas the forward
filter estimates are better close to the first timestep.
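The backward predict step of Equations 4.12-4.13 inverts the state transition; a minimal sketch with an illustrative scalar model (not the robot's):

```python
import numpy as np

def kb_predict(x_next, P_next, A, B, u, Q):
    """Backward predict (Equations 4.12-4.13): recover the prior state from the next one."""
    Ainv = np.linalg.inv(A)
    x_prev = Ainv @ x_next - (Ainv @ B) @ u
    P_prev = Ainv @ P_next @ Ainv.T + Q
    return x_prev, P_prev

# Scalar example: x_k = 2*x_{k-1} + u, so x_{k-1} = (x_k - u) / 2
A = np.array([[2.0]]); B = np.array([[1.0]]); Q = np.array([[0.0]])
x_prev, P_prev = kb_predict(np.array([[10.0]]), np.array([[4.0]]),
                            A, B, np.array([[4.0]]), Q)
# x_prev = [[3.0]], P_prev = [[1.0]]
```

This requires A to be invertible, which holds for the discrete model of Equation 3.45.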
4.3 Smoother
The estimates from the forward and backward Kalman filters are combined using the
smoother. It computes estimates with lower uncertainty by weighting the estimates from
each filter with their covariances [13] as follows.

P_s = \left(P_f^{-1} + P_b^{-1}\right)^{-1}   (4.18)

x_s = P_s\left(P_f^{-1}x_f + P_b^{-1}x_b\right)   (4.19)

P_f, P_b and P_s are the covariances of the estimates from the forward filter, the backward
filter and the smoother respectively; x_f, x_b and x_s are the corresponding state estimates.
Combining the forward and backward filter estimates using the smoother equation (4.19)
results in estimates that are more accurate towards the beginning and the end, and less
accurate around the center, of a specific estimation episode.
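The fusion in Equations 4.18-4.19 is a covariance-weighted average; a minimal sketch, with illustrative estimate values:

```python
import numpy as np

def smooth(xf, Pf, xb, Pb):
    """Covariance-weighted fusion of forward and backward estimates (Eqs 4.18-4.19)."""
    Pf_inv, Pb_inv = np.linalg.inv(Pf), np.linalg.inv(Pb)
    Ps = np.linalg.inv(Pf_inv + Pb_inv)
    xs = Ps @ (Pf_inv @ xf + Pb_inv @ xb)
    return xs, Ps

# Two equally confident 1-D estimates fuse to their mean, with halved variance.
xf = np.array([[1.0]]); Pf = np.array([[0.04]])
xb = np.array([[1.2]]); Pb = np.array([[0.04]])
xs, Ps = smooth(xf, Pf, xb, Pb)   # xs = [[1.1]], Ps = [[0.02]]
```

The smoothed covariance is never larger than either input covariance, which is why the fused estimates carry lower uncertainty.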
4.4 Data collection and pre-processing
Figure 4.1 shows a segment of the control input and the raw data logged by the sensor
suite. The sensors log data at different sampling rates, so we interpolate and resample
the datasets at a common sampling rate to bring all the data streams into sync. This data
is then input to the forward-backward Kalman filter to generate the location estimates
for each timestep. Figure 4.2 shows the thruster input, raw accelerometer data for the x-axis,
the pitch angle, the gravity-compensated acceleration data, and the pitch rate from one
of the test datasets.
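The interpolation-and-resampling step can be sketched with linear interpolation onto a common time base; the sensor signal and sampling rates below are illustrative:

```python
import numpy as np

def resample(t_common, t_sensor, values):
    """Linearly interpolate a sensor stream onto a common time base."""
    return np.interp(t_common, t_sensor, values)

# An accelerometer stream at 50 Hz, resampled to the 0.12 s control timestep.
t_acc = np.arange(0.0, 1.0, 0.02)
acc = np.sin(2 * np.pi * t_acc)           # stand-in for logged data
t_common = np.arange(0.0, 1.0, 0.12)
acc_rs = resample(t_common, t_acc, acc)
```

Applying the same mapping to each logged stream leaves all of them sampled on identical timestamps, ready for the filter.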
Figure 4.1: Raw data from the accelerometer and the magnetometer, shown along with
the digital thruster input
Figure 4.2: The plot shows thruster input, raw acceleration, pitch angle, corrected
acceleration, and pitch rate logged during a test in the tank. This data was used for the
estimation results shown in Chapter 5
4.5 Filter design
The filter uses the discrete state-space model for the robot (Equation 3.45) as the process
model. Acceleration measurements from the IMU are corrupted with components of the
gravity field, which must first be removed to obtain the acceleration of the robot along
the global axes. The acceleration along the x-axis of the global frame is computed as
follows.

a_{true} = z_x\cos\phi + z_z\sin\phi   (4.20)

Here a_{true} is the true acceleration along the x-axis in the global frame, z_x is the measured
acceleration along the x-axis of the robot body frame, z_z is the acceleration measured along
the z-axis of the robot body frame, and \phi is the pitch angle of the robot in the global
frame. The measurement model (Equation 4.2) is given by,

z_k = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} s_k \\ v_k \\ a_k \end{bmatrix} + r_k   (4.21)

where z_k is the corrected measured acceleration along the global x-axis. The process and
measurement noise covariances were determined empirically.

Q = \begin{bmatrix} \sigma_s^2 & 0 & 0 \\ 0 & \sigma_v^2 & 0 \\ 0 & 0 & \sigma_a^2 \end{bmatrix} =
\begin{bmatrix} (0.00072)^2 & 0 & 0 \\ 0 & (0.006)^2 & 0 \\ 0 & 0 & (0.05)^2 \end{bmatrix}   (4.22)

R = \sigma_{imu}^2 = (0.03)^2   (4.23)
Chapter 5
EXPERIMENTAL RESULTS
The system was tested at two locations: an indoor experimental tank with dimensions 1.5
m x 0.9 m x 1.5 m (Figure 5.1), and a swimming pool (Figure 5.3). The first set of experiments
was carried out in the tank. The robot was mounted on a graduated guide-rail
supported by appropriate structural fittings. A pulley assembly allowed fast immersion
and resurfacing of the whole assembly. A directional wireless antenna outside the tank
in close proximity to the tank wall made it possible to maintain a continuous communi-
cation link between the robot and a laptop external to the tank. All experiments were
video recorded so that ground truth position information could be obtained by manually
processing the imagery. We performed the tank experiments in three phases. The first
phase involved the developmental iterations to test the robot structure, actuation, work-
flow and control. The second phase was to determine the process model, and the third
phase was to implement and evaluate the location estimator. The first set of experiments
were dedicated to end marker detection, transect traversal, thruster control primitive
determination, and reliability and performance tests. The second phase involved step
(a) An experimental setup at the tank (1.5 m x 0.9 m x 1.5 m). The robot was deployed on a guide rail
with a traversable span of 0.64 m, at a depth which allowed reliable communication (the robot's top
surface was 4 cm below the water surface). The rail was carefully graduated for retrieval of ground-truth
position. An HD camera recorded the experiments at 30 fps, while the robot clock was visible on
a laptop screen in the video frame
(b) Illustration of the tank setup with the guide-rail at the bottom
Figure 5.1: The experimental tank setup
response tests, experiments to determine the drag coefficient, and thruster pull tests to
obtain the force profile.
Finally, we carried out eight trial runs, spanning a total traversed distance of 16
m, to implement and evaluate the location estimator. Each trial involved four complete
traversals of the guide rail. Three such traversals were selected randomly (out of 24 total),
and the ground-truth position information was obtained from the videos. The forward
Kalman filter, the backward Kalman filter, and the smoother were run on these data
sets. Figure 5.2 shows the resulting location estimates for trial 1, and Figures 5.5 and 5.6
show the trial 2 results. The pool served as a larger testbed with a traversable span of 3.45
m. On average, the robot needed thirteen thrusts to traverse the length. We did not
log fine-grained ground-truth position information from the pool tests; hence, we have
not evaluated the pool position estimates against ground truth. However, the estimator
generates reasonable location estimates in this setting. Figure 5.7 shows the results from
the pool experiment. One of our goals for the near future is to perform experiments
on transects with longer spans and ground truth. The standard deviations of the
position estimates for the forward filter, backward filter, and smoother are shown in
Figure 5.8. They correspond to the data from the tank experiment, where fine-grained
ground truth was available. The root mean squared error was 0.042 m for the forward
filter, 0.044 m for the backward filter, and 0.037 m for the smoother.
(a) Forward-backward filter estimates for trial 1 in the tank
(b) Smoother estimates for trial 1 in the tank
Figure 5.2: Results from the tank experiment (trial 1)
Figure 5.3: The deployment setup at an outdoor swimming pool, with a depth of 1.5 m
and a traversable span of 3.45 m. A measuring tape with colored markers was suspended
on the water surface, directly above the guide rail. This served as a source for images
and coarse ground truth (from the images). A wireless communication link was available
as long as the robot was at the surface.
Figure 5.4: Illustration of the experimental setup at the pool. The guide-rail assembly
was raised and lowered using a pulley assembly operated from a single point. A wireless
router allowed communication with the robot when it was on the surface. The measuring
tape was suspended on the surface, and images were captured using the robot's onboard
camera.
Figure 5.5: Forward-backward filter estimates for trial 2 in the tank
Figure 5.6: Smoother estimates for trial 2 in the tank
Figure 5.7: Forward-backward filter and smoother estimates for the pool trial
Figure 5.8: Standard deviation in position estimates for the forward filter, the backward
filter, and the smoother
Chapter 6
VISUAL RECONSTRUCTION
The secondary goal of this thesis is to demonstrate an application of state estimation for
the robot in capturing and reconstructing surface activity in the waterbody. The robot
moves to and fro on the guide rail, capturing images of the water surface periodically.
The images are captured between hops, when the robot is stationary, and are time-stamped
and stored on the onboard storage media. Once the offline state estimator has
been run, we have the optimal pose estimates for the robot. Equipped with this data, we
can reconstruct a temporal panorama of the water surface, showing the image corresponding
to each time-stamp and depicting the activity at a particular location in the global
frame.
6.1 Problem statement and approach
Images of the water surface are captured between every hop using a relatively cheap digital
CCD camera mounted on the upper level of the robot. The camera has a small aperture,
and consequently a large depth of field. It is good for capturing landscapes and ideal for
our application, where we want everything between the camera and the water surface to
Figure 6.1: Images are captured by the robot’s onboard camera after every hop. These
images are time-stamped and stored. Offline state estimates can be used to reproject the
images to generate a temporal-spatial panorama.
be in focus. We wish to generate a spatio-temporal panorama from these images. This
will help marine biologists observe phenomena at the surface of the waterbody.
6.2 The need for location estimates
Image stitching, or panorama generation, is a widely studied problem in the image processing
community, with a large existing literature. Algorithms for stitching images
follow three key steps: 1) feature point extraction from the images, 2) feature point
matching across images, and 3) blending. The key requirements for stitching images are
that 1) the images should be rich in features, and 2) the features should be static.
In our application, both factors are absent. We capture images with the camera
pointing up; hence most images contain just the sky in the image plane, and therefore a
very sparse set of features (e.g., clouds). Also, even if there were features in the
watermass (e.g., a school of fish), they would be dynamic in nature, without any model
for their occurrence or distribution. Hence, our approach is to use the location estimates
generated by the state estimator to reproject the image back into the 3D world frame.
6.3 Assumption
The primary assumption for the generation of the surface projection is that the image
source is at the water surface. However, this will not be the case when the robot is
deployed in a lake or marina, since the object of interest can be anywhere between the
robot and the surface. This is especially true for applications such as fish-counting (one
of the motivations for this project). Also, we assume that when the images are captured,
the roll and pitch angle of the robot are negligible. This is a fair assumption, since we
capture images only between hops when the robot is stationary.
6.4 Camera model
A pin-hole camera model was identified using the Camera Calibration Toolbox for MATLAB
[17]. Calibration was performed with a planar calibration target, and the intrinsic
parameters of the calibration matrix were determined. The following equation gives the
projection of a point in 3D space onto the camera image plane.

\begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = KK \begin{bmatrix} x_s/f \\ y_s/f \\ 1 \end{bmatrix}   (6.1)
where KK is the camera calibration matrix, given by

KK = \begin{bmatrix} \alpha_x & s & x_0 \\ 0 & \alpha_y & y_0 \\ 0 & 0 & 1 \end{bmatrix}   (6.2)

where \alpha_x and \alpha_y are the focal lengths measured in pixel widths and heights respectively,
s is a factor accounting for the skew due to non-rectangular pixels, and x_0 and y_0 correspond
to the principal point. From the calibration experiments, the calibration matrix was
determined to be,

KK = \begin{bmatrix} 622.0329 & 0 & 322.8046 \\ 0 & 612.0962 & 218.4944 \\ 0 & 0 & 1 \end{bmatrix}   (6.3)
On taking the inverse of the camera calibration matrix, and ignoring radial distortion
effects, we obtain,

\begin{bmatrix} x_c \\ y_c \\ d \end{bmatrix} = f \cdot KK^{-1} \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix}   (6.4)

where d is the reprojection depth, assumed in our case to be the distance of the water
surface from the camera. Since the camera was at position x relative to the system's
frame of reference when the image was captured, we need to translate the reprojected
image. For generality, we do this using the homogeneous transformation matrix given by,
H = \begin{bmatrix}
\cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma & x_t \\
\sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma & y_t \\
-\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma & z_t \\
0 & 0 & 0 & 1
\end{bmatrix}   (6.5)

where \alpha, \beta and \gamma are the yaw, pitch and roll angles respectively. We assumed the roll
and pitch angles to be negligible, and by the design of the system, yaw is restricted;
therefore \alpha = \beta = \gamma = 0. Also, z_t = y_t = 0, since the robot is constrained to
the axis of the guide rail. The homogeneous transformation matrix hence reduces to the
pure translation,

H = \begin{bmatrix} 1 & 0 & 0 & x_t \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}   (6.6)
Note that nonzero roll and pitch angles can also be modeled through the homogeneous
transformation matrix. The final mapping from an image pixel to the global frame, with
the camera-frame point expressed in homogeneous coordinates, is given by,

\begin{bmatrix} x_g \\ y_g \\ d \\ 1 \end{bmatrix} = H \begin{bmatrix} x_c \\ y_c \\ d \\ 1 \end{bmatrix},
\qquad \begin{bmatrix} x_c \\ y_c \\ d \end{bmatrix} = f \cdot KK^{-1} \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix}   (6.7)
Figure 6.2: The image reconstruction scheme
6.5 Reprojection scheme
We start by generating a matrix corresponding to the global view-frame, which maps to
the span of the deployment. The dimensions of the reprojected image matrix are calculated
from the image resolution, the span, and the depth of the deployment. The time-stamp
of each logged image is retrieved, and the position of the robot is obtained from
the offline state estimates. Equation 6.7 is then used to reproject each image into the
global frame. Figure 6.3 shows the reprojection of images captured during a swimming
pool experiment. For this reprojection process, the coarse ground-truth position
information was used.
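The pixel-to-world mapping of Equations 6.4-6.7 can be sketched as follows. Note one assumption in this sketch: instead of scaling by f, the back-projected ray is normalized so that its z component equals the assumed depth, which is the equivalent pinhole construction; the depth and rail-position values are illustrative:

```python
import numpy as np

# Intrinsics from Equation 6.3
KK = np.array([[622.0329, 0.0, 322.8046],
               [0.0, 612.0962, 218.4944],
               [0.0, 0.0, 1.0]])

def pixel_to_global(xp, yp, depth, x_t):
    """Map an image pixel to the global frame (Equations 6.4-6.7).

    depth: assumed distance from the camera to the water surface;
    x_t: robot position on the rail, taken from the offline state estimates.
    """
    ray = np.linalg.inv(KK) @ np.array([xp, yp, 1.0])  # normalized camera ray
    cam = ray / ray[2] * depth                          # scale so z equals depth
    return np.array([cam[0] + x_t, cam[1], depth])      # translate along the rail

# The principal point maps to the surface point directly above the camera.
p = pixel_to_global(322.8046, 218.4944, 1.0, 0.5)       # -> [0.5, 0.0, 1.0]
```

Running this over every pixel of every image, with x_t taken from the smoothed position estimates, fills in the global view-frame described above.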
Figure 6.3: An image frame reconstructed using the coarse ground truth from the mea-
suringtape(capturedoneachsourceimage). Theparallellinesontheimagesontheright
aretransmissioncablesapproximately10metersabovethewatersurface. Reprojectionis
done with the target assumed to be at the water surface. Hence, being at a much greater
distance, the transmission lines appear in multiple frames
Chapter 7
CONCLUSION AND FUTURE WORK
This thesis presented a novel underwater robotic system for benthic sampling along a
transect. A state estimation approach that relies on frequent inertial measurements, and
infrequent position updates was proposed and implemented. We presented initial exper-
imental results that confirm that this approach gives us reliable position estimates with
bounded uncertainty and low estimation error. The experimental setup and deployment
process for the system was described for two different testbeds. We also showed prelimi-
nary results for generation of visual panorama of the water surface from images captured
during the traversal along the transect.
Future goals
In the future, we want to relax some of the assumptions made for this work. This will
allow us to perform experiments in the field (e.g., a lake or a marina). Some of the key
problems we plan to work on are as follows:
Guide-rail pitch
For this work, an important assumption was that the guide-rail is horizontal with negligible
pitch. As a result, the model for the robot was assumed to be symmetric in the forward
and backward directions. Care was taken during experiments to ensure that the rail was
horizontal. However, for real deployments in marinas or lakes, it is difficult to guarantee
this. We plan to relax this assumption in the robot model. This can be achieved in two
ways. The pitch of the pipe can be included in the robot’s equations of motion, allowing
the process model to be more accurate, resulting in lower estimation error. However, it is
a challenging task to accurately measure the pitch of the guide-rail in real deployments.
An alternate approach is to model the pitch as an external bias, and augment the state
vector with it. Hence, the state estimator can predict the pitch as a part of its estimates.
Online state estimation
Since the robot is treated primarily as a sentinel system, offline estimation is used to
estimate the pose of the system. However, to perform any useful control (e.g., pitch
stabilization, velocity control, or trajectory tracking), we need an online estimator.
The plan for the near future is to run a standard forward Kalman filter to estimate the
instantaneous state of the robot. This could include estimation of terms like the pitch of
the pipe, mentioned in the previous section.
Feedback control
The current prototype uses open-loop control because of the lack of an online state
estimator. However, to perform more interesting sampling tasks, the robot will need to
move at a desired velocity, or perform tasks such as pitch stabilization in the presence
of external disturbances. This naturally calls for closed-loop feedback control. Our goal
is to utilize online state estimates to implement a PID controller to maintain a desired
velocity or pitch angle, or to travel to a desired location on the guide rail.
Uniform sampling
In section 2.7, we saw that the sampling density for the robot varies with respect to
the position on the guide rail. Points at the center of the transect are sampled uniformly,
whereas points at other locations are not. We want to investigate the possibility of
choosing trajectories which allow the robot to sample with a preset sampling density.
Cable based deployment
A guide-rail was chosen as support infrastructure for the robot because of its ease of use
and the resulting simplicity in the system model. Initial experiments showed us that
with a cable-aided system, the robot would experience considerable rolling, yawing, and
complexity in actuation (as a result of the cable slack generated by the robot's positive
buoyancy, the dynamics of the robot would be governed by its location on the cable). These
issues were circumvented by using a rigid guide-rail instead of a cable. However, in field
deployments involving large spans, a cable-guided robot might have certain advantages
over a guide-rail based system (faster deployment, robustness, greater portability). As a part of
our future work, we plan to explore this possibility, and look for approaches to scale our
state estimation methodology to such a setup.
References
[1] “National Ecological Observatory Network Project (NEON).” [Online]. Available:
http://www.nsf.gov/bio/neon/start.htm
[2] “North East Pacific Time-integrated Undersea Networked Experiments
(NEPTUNE).” [Online]. Available: http://www.neptune.washington.edu
[3] F. Beer, J. J. E. Russell, E. Eisenberg, and R. Sarubbi, Vector Mechanics for Engi-
neers: Statics and Dynamics, 2001.
[4] J. Bellingham, C. Goudey, T. Consi, J. Bales, D. Atwood, J. Leonard, and C. Chrys-
sostomidis, "A second generation survey AUV," Proceedings of the Symposium on Au-
tonomous Underwater Vehicle Technology (AUV '94), pp. 148-155, 19-20 Jul 1994.
[5] P. H. Borgstrom, M. J. Stealey, M. A. Batalin, and W. J. Kaiser, "NIMS3D: A Novel
Rapidly Deployable Robot for 3-Dimensional Applications," in 2006 IEEE/RSJ In-
ternational Conference on Intelligent Robots and Systems (IROS'06), Oct. 2006, pp.
3628-3635.
[6] D. Bouvet and G. Garcia, "Civil-engineering articulated vehicle localization: so-
lutions to deal with GPS masking phases," Proceedings of the IEEE International
Conference on Robotics and Automation (ICRA'00), vol. 4, pp. 3499-3504, 2000.
[7] P. Corke, C. Detweiler, M. Dunbabin, M. Hamilton, D. Rus, and I. Vasilescu, “Ex-
periments with Underwater Robot Localization and Tracking,” in Proceedings of the
IEEE International Conference on Robotics and Automation (ICRA’07), 10-14April
2007, pp. 4556–4561.
[8] R. E. Davis, C. C. Eriksen, and C. P. Jones, “Autonomous buoyancy-driven under-
water gliders,” In: Griffiths, G. (ed), Technology and applications of autonomous
underwater vehicles. Taylor and Francis, London, pp. 37–58, 2003.
[9] M. S. Grewal and A. P. Andrews, Kalman Filtering: Theory and Practice. Prentice-
Hall, 1993.
[10] B. Jordan, M. Batalin, and W. Kaiser, “NIMS RD: A Rapidly Deployable Cable
Based Robot,” in Proceedings of the IEEE International Conference on Robotics and
Automation (ICRA’07), 10-14 April 2007, pp. 144–150.
70
[11] “Symbolic Math Toolbox 3.2.3,” The MathWorks. [Online]. Available:
http://www.mathworks.com/products/symbolic/
[12] R. B. Nicklas, “An Application of a Kalman Filter Fixed Interval Smoothing Algo-
rithm to Underwater Target Tracking,” Master’s thesis, 1989.
[13] S. Reynolds, “Fixed interval smoothing: Revisited,” Journal of Guidance, vol. 13,
no. 5, 1990.
[14] S. Roumeliotis, G. Sukhatme, and G. Bekey, "Smoother based 3D Attitude Estimation
for Mobile Robot Localization," in Proceedings of the IEEE International
Conference on Robotics and Automation (ICRA'99), vol. 3, 1999, pp. 1979-1986.
[15] L. Sciavicco and B. Siciliano, Modelling and Control of Robot Manipulators (Ad-
vanced Textbooks in Control and Signal Processing), 2nd ed. Springer, January
2005.
[16] D. Simon, Optimal State Estimation: Kalman, H Infinity, and Nonlinear Ap-
proaches. Wiley-Interscience, 2006.
[17] K. Strobl, W. Sepp, S. Fuchs, C. Paredes, and K. Arbter, Camera
Calibration Toolbox for MATLAB. [Online]. Available: http://www.dlr.de/rm-
neu/en/desktopdefault.aspx/tabid-3925/
[18] G. S. Sukhatme, A. Dhariwal, B. Zhang, C. Oberg, B. Stauffer, and D. A. Caron,
“The Design and Development of a Wireless Robotic Networked Aquatic Microbial
Observing System,” Environmental Engineering Science, vol. 24, no. 2, pp. 205–215,
2006.
[19] F. van der Heijden, R. Duin, D. de Ridder, and D. M. J. Tax, Classifica-
tion, Parameter Estimation and State Estimation: An Engineering Approach Using
MATLAB. John Wiley & Sons, November 2004.
[20] D. Webb, P. Simonetti, and C. Jones, "SLOCUM: an underwater glider propelled
by environmental energy," IEEE Journal of Oceanic Engineering, vol. 26, no. 4,
pp. 447–452, Oct 2001.
Abstract
This thesis presents the design of a novel robotic system capable of long-term benthic sampling along a transect. The robot is built to traverse back and forth along a mechanical guide-rail at the bottom of a water body. This work describes the system design, the experimental setup, and shows results from localization tests with the robot in a laboratory tank and a shallow swimming pool.
Asset Metadata
Creator: Das, Jnaneshwar (author)
Title: A robotic system for benthic sampling along a transect
School: Viterbi School of Engineering
Degree: Master of Science (Computer Science)
Publication Date: 04/29/2008
Defense Date: 03/27/2008
Publisher: University of Southern California
Tags: aquatic robotics, localization, marine robotics, robotics
Language: English
Advisor: Sukhatme, Gaurav S. (committee chair), Caron, David A. (committee member), Schaal, Stefan (committee member)
Creator Email: jnaneshd@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-m1199
Document Type: Thesis
Repository: Libraries, University of Southern California, Los Angeles, California