NAVIGATION AND GUIDANCE OF AN AUTONOMOUS SURFACE
VEHICLE
by
Arvind Antonio de Menezes Pereira
A Thesis Presented to the
FACULTY OF THE VITERBI SCHOOL OF ENGINEERING
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF SCIENCE
(ELECTRICAL ENGINEERING)
May 2007
Copyright 2007 Arvind Antonio de Menezes Pereira
Dedication
Dedicated to my family, who, despite being on the other side of the world
during all this time that I have been pursuing my higher studies, have
always been the closest to me…
Acknowledgements
I would like to thank my advisor Professor Gaurav Sukhatme, for being the best
guide I could have asked for – for encouraging me to experiment, to think on my
own, and to grow into a more confident person. I am indebted to Professor David
Caron, who is the Co-PI on the NAMOS project, for being so supportive, and patient
with helping me do my experiments, even when they did not always seem to be the
most important tasks from our project's point of view.
Jnaneshwar Das, the latest member of the NAMOS team, has been a team-mate I
owe a lot to in getting the Roboduck autonomous. He has been there at every
system test, building circuitry and putting the boat together – helping us make better
progress than we have ever done before. His enthusiasm and support played a big
role in making this happen! Bin Zhang and Amit Dhariwal have provided valuable
suggestions and helped me diagnose and fix problems. I have to thank everyone at
the Robotic Embedded Systems Laboratory (RESL), particularly Srikanth Saripalli,
Gabe Sibley, Jonathan Kelly, Jon Binney, Sameera, Dewitt Latimer and Marin
Kobilarov for providing me with valuable insights, and pointing me to very useful
literature, besides sharing some of their tried and tested code-snippets which are part
of the software nuts-and-bolts that make the boat functional. I also have to thank
Boyoon Jung, who writes code that delights the person reading it, for his IMU and
GPS data acquisition code that has been used on the Roboduck. Mansi Shah, Karthik
Dantu and Onur Sert also deserve special mention for giving very useful suggestions.
I have to thank the entire NAMOS team for helping out with deployments – Carl
Oberg, Beth Stauffer, Stefanie Moorthi, Lindsay Darjany, Xuemei Bai and Ericka
Seubert, have been of great help whenever they have been part of Roboduck-II tests.
My room-mates, Arun Radhakrishnan and Vivek Kini have been of great help during
and after deployments.
I am deeply grateful to Kusum Shori who has so patiently helped me with purchasing
equipment with an efficiency that always baffles me. Had she not purchased the
boat's new computer at the speed of light (well, ok, not quite!), the vision-aided
obstacle avoidance that now seems to be working would probably not have been
possible.
I would also like to thank my friends at NIO, India for being there with me during
my formative years, Pramod Maurya, in particular, the friend who accompanied me
through those long nights we spent working at the laboratory on the guts of ROSS
and Maya AUV. Our work from those good old days has helped speed up
Roboduck’s development in many ways. Many thanks go to the AUV team at NIO –
Dr.Elgar de Sa, Mr.R.G.Prabhudesai, Antonio Mascarenhas, R. Madhan, Shivanand
Prabhudesai, Dr.Ehrlich De Sa and all the others. Also to Dr.Antony Joseph who
made life at NIO so much more fun when I was there.
I am deeply grateful to Professor Srikanth Narayanan who took time out of his busy
schedule at very short notice to be part of my committee, making this thesis
possible.
And last but not least, I have to thank Luis Sebastiao and Professor Antonio
Pascoal from the Institute for Systems and Robotics, Instituto Superior Técnico,
Lisbon for having got me interested in marine and underwater robotics and Luis for
the wonderful and insightful suggestions he has given, quite a few of which helped
make this work a success.
Before ending, I must thank everyone at Redondo Beach for being so friendly,
helpful and providing us with so much support during every deployment of the
Roboduck-II at the harbor. Special mention goes to all the Harbor patrol and Fire
department personnel at Redondo Beach. They are the best I know!!!
Whatever I write about in this thesis is because of help from all these great people
and others I haven’t been able to mention here - I thank them all from the bottom of
my heart.
- Arvind
Table of Contents
Dedication ii
Acknowledgements iii
List of Tables vii
List of Figures viii
Abbreviations x
Abstract xi
Chapter 1 : Introduction 1
Chapter 2 : System Design 6
2.1 Software Design 7
2.2 Hardware Design Considerations 16
Chapter 3 : Vessel Models and Autopilot Design 19
3.1 Vessel Models 19
3.2 Design of the Heading Autopilot 25
Chapter 4 : LOS Guidance System Design, Considerations and Analysis 32
4.1 Introduction 32
4.2 Line of Sight Guidance 34
4.3 Test Results at Redondo Beach Harbor 37
Chapter 5 : Obstacle Avoidance using Stereo Vision 44
5.1 Introduction 44
5.2 Stereo Vision 46
5.3 The Small Vision System 47
5.4 Obstacle Avoidance Algorithm using Stereo Disparities 51
5.5 Conclusion 61
Chapter 6 : Conclusion and Future Work 64
6.1 Summary and Contributions 64
6.2 Future Work 67
Bibliography 69
Appendix : Sensor Characteristics 72
List of Tables
Table 1 : Open loop transient response characteristics............................................ 24
Table 2 : Inferences from Polar Histograms ........................................................... 58
Table 3 : Inferences from Inverse Distance Polar Histogram .................................. 60
Table 4 : Specifications for the Garmin GPS 16A................................................... 72
Table 5 : Specifications of the 3DMG rate-gyro ..................................................... 73
List of Figures
Figure 1 : Roboduck-I at Lake Fulmor ..................................................................... 2
Figure 2 : Roboduck-II off the Harbor Patrol dock at Redondo Beach. ..................... 4
Figure 3 : Screen-shot of the NAMOS-GUI Controller with a waypoint file............. 7
Figure 4 : Shared memory and processes that share the data................................... 13
Figure 5 : Hardware on the Roboduck-II .............................................................. 18
Figure 6 : Coordinate frame on the Roboduck-II used for modelling. ..................... 19
Figure 7 : Plot of the Step input command (u1) and Yaw Rate output (y1) signals.. 22
Figure 8 : Measured Open-loop Step Response v/s simulated Model Outputs......... 23
Figure 9 : Step, Pole-Zero map, Impulse response, Bode plot of Nomoto model..... 24
Figure 10 : Heading Autopilot structure from the simulink simulation.................... 27
Figure 11 : Effect of tuning the gains for different values of Kp and Kd on yaw... 28
Figure 12 : Yaw Rate response under Closed-Loop mode with various gains ......... 29
Figure 13 : Heading autopilot response to tracking a step-like input ....................... 30
Figure 14 : Line Of Sight waypoint guidance test at Redondo Beach...................... 37
Figure 15 : A quiver plot of the GPS data and heading of the vehicle ..................... 39
Figure 16 : Guidance and Autopilot outputs during Guidance maneuver ................ 40
Figure 17 : Guidance controller test results in meters, relative to the start location . 41
Figure 18 : How disparities are calculated............................................................. 47
Figure 19 : Accuracy tests using SVS to find range to a checkerboard target.......... 48
Figure 20 : Obstacle Avoidance System flowchart ................................................. 50
Figure 21 (a) & (b): Polar Histograms, image and disparity image for near obstacle .. 54
Figure 22 (a) & (b) : Polar Histograms, image and disparity image for very near
obstacle.................................................................................................................. 55
Figure 23 (a) & (b) : Polar Histograms, image and disparity image – no obstacles
close by.................................................................................................................. 56
Figure 24 : Inverse Distance Polar Histograms for Figure 21 (a) and (b)................. 59
Figure 25 : Inverse Distance Polar Histograms for Figure 22 (a) and (b)................. 59
Figure 26 : Inverse Distance Polar Histograms for Figure 23 (a) and (b)................. 59
Figure 27 : Data collected by Roboduck-II from a single winch profile using direct
winch profiling commands through the NAMOS-GUI. .......................................... 65
Abbreviations
ASC Autonomous Surface Craft
ASV Autonomous Surface Vehicle
GPS Global Positioning System
HAB Harmful Algal Blooms
IMU Inertial Measurement Unit
NAMOS Networked Aquatic Microbial Observatory System
SLAM Simultaneous Localization and Mapping
USV Unmanned Surface Vehicle
Abstract
This thesis describes the work done in transforming a small boat with two aft
thrusters and a single rudder into an Autonomous Surface Vehicle which is
capable of operation in diverse marine environments such as lakes, rivers,
marinas and harbors. The sensors utilized are a Global Positioning System (GPS)
receiver and a 3-degree-of-freedom Inertial Measurement Unit consisting of
gyroscopes, accelerometers and an integrated magnetometer compass. The guidance
system is
aided by a stereo vision system that uses stereo disparities to make inferences
about obstacles. The boat is currently being used to collect data used by a team of
researchers to study the effects of harmful algal blooms. After a brief description
of the system design, we discuss the design and implementation of the heading
autopilot, line of sight guidance system and the stereo vision-based obstacle
avoidance system.
Chapter 1 : Introduction
An Autonomous Surface Vehicle (ASV) is a vehicle capable of traversing relatively
unknown environments using onboard sensing, guidance and autopilot capabilities.
This thesis describes the design and implementation of these capabilities on a boat,
Roboduck-II. Roboduck-II is one of two robotic boats that are part of a sensing project
– NAMOS (Networked Aquatic Microbial Observatory Systems, http://www-
robotics.usc.edu/~namos), whose aim is to study marine environments which have
been impacted by Harmful Algal Blooms (HABs). The NAMOS team consists of
biologists and engineers who believe that the use of robotic craft to perform
autonomous sampling can improve the efficiency of data collection, at high quality
since robots can be programmed to perform sampling tasks with higher repeatability
without appreciably losing efficiency over time (a robotic vehicle may run out of
batteries or fuel, but its performance usually is not a decreasing function of time; a
battery replacement would typically result in similar performance for similar
conditions). In the future, we envisage collaborative data-collection among robotic
sampling platforms, which will provide richer data-sets without the need for
extensive man-power.

At present we use two platforms to perform sampling. The first, Roboduck-I (Figure
1), was primarily designed to perform biological sampling in lakes with minimal
impact on the water. To do this, Roboduck-I used air-propulsion via a large propeller
mounted at a height of about 20 cm in the rear section of the boat. Steering was
achieved using a rudder that was partially immersed in the water (for better turning
efficiency) although a large portion of the rudder was placed behind the propeller so
as to take advantage of the high-velocity air jet from the propeller to turn the boat.
The craft could be operated by joy-stick or through an autonomous program that ran
on an embedded computer in it. The boat could be tasked with a set of way-points so
that it could navigate to pre-designated locations.
Figure 1 : Roboduck-I at Lake Fulmor
Roboduck-I could collect up to 6 water samples using a small water-sampling system
and could also collect chlorophyll and temperature data through two probes mounted
centrally at the bottom of its hull. Roboduck-I does not have the capability of
carrying a large payload. It cannot be outfitted with larger sensors or more batteries.
Roboduck-II is designed to perform longer duration missions with larger sensor
payloads. Roboduck-II also has the capability of doing vertical profiles using a
winch system.
Roboduck-II is essentially the Q-boat built by the Oceanscience Group
(http://www.oceanscience.com), which we have outfitted with computing
capabilities and electronics that allow computer/manual control of the boat’s
actuators. The Q-boat is equipped with a servo-motor, an RC-receiver/transceiver
and a motor-controller board that is capable of speed control of the two DC motors
driving the boat’s propeller shafts. The motor controller board accepts commands
via serial port or radio control.
Figure 2 : Roboduck-II off the Harbor Patrol dock at Redondo Beach.
This thesis describes the conversion of the remote-controlled boat (Roboduck-II) into
an Autonomous Surface Vehicle that is capable of navigating relatively complex
water-bodies including lakes and marinas. At the time of writing this document, the
vehicle can perform autonomous maneuvers under GPS-aided line-of-sight guidance,
navigation and simple stereo vision-aided obstacle avoidance.
We begin by describing the design principles behind the hardware and software on
the boat, the sensors used and their characteristics and interfacing strategies. In
chapter 3, we describe a model of the boat, followed by the design of a generic
Proportional Derivative controller which uses rate-feedback for damping. We next
describe how this PD-based heading autopilot can be used by a Line-Of-Sight (LOS)
guidance controller to task the boat to go to prescribed way-points. I also describe
the design of a track-following controller, although this has
not been implemented at the time of writing. We present ideas on how “Station-
Keeping” can be attempted, which is something that we will require in the future.
Finally, we describe how stereo-vision can be used for obstacle avoidance and
possibly localization and present results with a simple obstacle avoidance system that
has been tested on Roboduck-II.
Chapter 2 : System Design
The applications envisaged for Roboduck-II require it to be operational in relatively
busy and complex water-bodies such as marinas and harbors, as well as in lakes,
which may contain weeds that need to be avoided. There might also be a need to
stay away from main-stream traffic in marinas, or to cross busy parts of harbors. At
the same time, we would prefer that the vehicle be able to carry out missions that
will allow biological sampling of the water-bodies over a period of at least 6-8
hours. Therefore, while on the one hand we are looking at saving power, on the
other hand, safely navigating these more complex water-bodies requires
sophisticated methods of sensing the local environment using sensors such as
cameras. Consequently, we need to provide a means to process the large amounts of
data produced by the cameras in a timely manner, so as to give the boat the
capability to avoid or steer around obstacles in its path.
The decision to use an x86 compatible computer was driven primarily by the fact
that most micro-controllers at the time of writing are not yet completely capable of
processing large amounts of vision data in real-time. Besides, we wanted to be able
to capitalize upon our experience of writing code on the Linux platform which is
well supported on the computer we use, to minimize time in re-learning a new
platform. We have begun using a micro-controller to do some monitoring and
peripheral tasks, but it does not handle any of the primary tasks of controlling the
vehicle.
2.1 Software Design
The software for Roboduck-II has two parts – the control, guidance, communication
and navigation programs that reside on the boat, and the software that communicates
with the boat and helps create mission files and monitor the status of the vehicle. In
the following sections we describe the capabilities and methodologies behind their
design:
Figure 3 : Screen-shot of the NAMOS-GUI Controller with a waypoint file.
2.1.1 Mission creation and vehicle health monitoring software
The NAMOS-GUI application was initially developed to control Roboduck-I. This
application was developed in Visual C# Dot Net, and currently runs on Windows, the
operating system which is preferred by the users of the boat.
2.1.1.1 Map Calibration using user GPS data
This application, shown in Figure 3, allows a user to import a rectified aerial picture
of the region of operation from any source that provides linearly scaled maps, such
as Google Earth. After the picture has been loaded, the software allows the user to
calibrate the map using a two-point calibration. This has two advantages. First,
calibrating the map to the GPS being used on the boat reduces the chances of the
boat running into obstacles at locations that its own (erroneous) GPS readings
indicate to be water. Second, the user may have access to a very precise positioning
device, such as a DGPS receiver or an RTK system; using such a device, the local
map can be calibrated much more accurately than the calibration currently available
from software like Google Earth.
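The two-point calibration can be sketched as follows: given two pixel locations
whose geographic coordinates are known, solve for a per-axis scale and offset. This
is a minimal sketch assuming a north-up, linearly scaled map; the function and
variable names are illustrative, not the GUI's actual code.

```python
def two_point_calibration(p1, g1, p2, g2):
    """Compute per-axis scale and offset mapping pixel (x, y) to
    (longitude, latitude), assuming a north-up, linearly scaled map.
    p1, p2 are pixel coordinates; g1, g2 are the matching (lon, lat) pairs."""
    sx = (g2[0] - g1[0]) / (p2[0] - p1[0])   # degrees longitude per pixel
    sy = (g2[1] - g1[1]) / (p2[1] - p1[1])   # degrees latitude per pixel
    ox = g1[0] - sx * p1[0]
    oy = g1[1] - sy * p1[1]

    def pixel_to_geo(px, py):
        return (sx * px + ox, sy * py + oy)

    return pixel_to_geo
```

With two calibration clicks at opposite corners of the map, every later waypoint
click can then be converted to a latitude/longitude pair for the mission file.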
2.1.1.2 Mission file creation
Once a calibrated map has been loaded, the user can connect to the boat and view the
boat on the display overlaid on the map in the form of a yellow cross-hair. The user
can also select from a set of tab-pages to decide what he/she wants to do with the
boat. If the user wants to create a path for the boat, they can click away at way-points
on the calibrated map and generate a mission file (with the corresponding latitude
longitude pairs) for the boat to begin execution. In a typical biological sampling
experiment this mission file will also contain instructions for winch operation. The
sensor package is usually lowered to different depths while the boat halts for samples
at a particular location. After retrieving the sensor package (the Hydrolab Sonde), the
boat can proceed to the next location that the mission file indicates. There is also the
possibility that we may want to operate the Sonde to collect data while the boat is in
motion. Under this mode of operation, we lower the instrument just below the
surface and make the boat go through a set of way-points at a designated speed.
2.1.1.3 Operator Error prevention
The GUI software also allows a user to specify “Keep-Out” areas, so that the user
can guard against mistakenly tasking the boat to go to a location that is on land. This
reduces operator-induced mission errors. It does not however interpolate safe
intermediate way-points at the time of writing. The software also auto-generates the
underlying mission script file that the computer on the boat uses. There are two types
of errors that are eliminated by following this methodology. The first is the
elimination of syntax errors, such as spelling mistakes and malformed parameter
values.
Also, logical errors are prevented, since the GUI suggests values that are achievable
under the current circumstances, such as limiting the way-point specifications to
points within the map. This prevents the boat from making a dash for a location it
might never reach, such as the equator, just because the user made a mistake and
forgot to type in a value for the latitude.
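A "Keep-Out" area check of this kind typically reduces to a point-in-polygon test.
The GUI's actual implementation is not shown in the text; the following is a
minimal ray-casting sketch with illustrative names.

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test: is (lat, lon) inside the polygon given as a list
    of (lat, lon) vertices? A waypoint falling inside a keep-out polygon
    (or outside the calibrated map) would be rejected by the GUI."""
    inside = False
    n = len(polygon)
    for i in range(n):
        la1, lo1 = polygon[i]
        la2, lo2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through the point?
        if (lo1 > lon) != (lo2 > lon):
            # Latitude at which the edge crosses that line
            cross = la1 + (lon - lo1) * (la2 - la1) / (lo2 - lo1)
            if lat < cross:
                inside = not inside
    return inside
```

A waypoint would then be accepted only if it is outside every keep-out polygon.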
2.1.1.4 Manual boat control using a joystick
The GUI software also allows the user to use a game-controller to control the boat.
The user can then operate the boat manually by choosing the “manual operation”
check-box on the GUI. The software then routes commands from the joystick to the
boat.
2.1.1.5 Online data monitoring from boat
Besides this, the GUI allows for overlaid plotting of boat trajectories, interpolated
data collected from the sensors, and so on. This makes understanding data coming
from the boat very intuitive, since we can essentially see where the data has come
from and is coming from. The latest location of the boat can also be viewed as a
cross-hair overlaid on the map.
2.1.1.6 Details regarding communications with the boat
Communication between the NAMOS-GUI and the computer on the boat is handled
using TCP/IP socket communications. The computer on the boat runs a program
called dbex. This program is essentially a server which accepts communications
from clients on a specified port. It will accept commands from only one client at a
given time. The GUI program is a client, and connects to the dbex server. It provides
the server with a heart-beat packet at 1 Hz, which also serves as an update request for
the boat’s state which the GUI displays for the user to see. This information includes
GPS latitude, longitude, heading, desired-heading and so on.
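The heartbeat/status exchange might look like the sketch below. The actual dbex
wire format and port are not given in the text, so the "PING" message and the reply
contents here are hypothetical stand-ins.

```python
import socket
import time

def heartbeat_loop(host, port, cycles=3, period=1.0):
    """Connect to the boat's server and send a heartbeat once per `period`
    seconds; each heartbeat doubles as a request for a status update
    (latitude, longitude, heading, desired heading, and so on). The real
    dbex message format is not shown in the text; b"PING\n" is a stand-in."""
    with socket.create_connection((host, port), timeout=5) as sock:
        replies = []
        for _ in range(cycles):
            sock.sendall(b"PING\n")
            replies.append(sock.recv(1024))  # latest boat state
            time.sleep(period)
        return replies
```

Because dbex accepts only one client at a time, a second GUI instance attempting
to connect would simply be refused until the first disconnects.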
While designing the software for Roboduck-II, we wanted it to be modular, robust,
fast and fault-tolerant with easy methods for fault-detection so that corrective actions
can be taken if a condition that results in erroneous behavior causes the software to
malfunction. On a vehicle like ours, we have sensors that were mentioned briefly
earlier, which produce data that is usually consumed by one or more logical blocks.
Hence using a publish/subscribe mechanism is an elegant way of sharing this
information.
To maintain fault-tolerance, sensor data collecting programs are different from the
programs that perform communications, obstacle avoidance and control. This allows
us to have a separate process that monitors the overall health of the system, and
allows it to restart a process that may have crashed due to an error condition that we
have not yet accounted for. All these programs have read/write access to shared
memory. We use time-stamps on the data to ensure that it is not stale data from a
dead sensor. This also allows us to collect data from each sensor at its own optimal
rate, while incorporating this information during the sensor
data consumption stage (when timing information is required).
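The time-stamp staleness check can be sketched as below. The shared-data layout
and the per-sensor age limits are illustrative assumptions, not the actual shared
memory structures used on the boat.

```python
import time

# Maximum acceptable data age per sensor, in seconds (illustrative values)
STALE_AFTER = {"gps": 2.0, "imu": 0.5, "compass": 1.0}

def fresh_reading(shared, sensor, now=None):
    """Return the sensor's latest reading from the shared-data dict, or
    None if its time-stamp is older than the allowed age -- e.g. data
    left behind by a dead sensor process."""
    now = time.time() if now is None else now
    entry = shared.get(sensor)
    if entry is None or now - entry["stamp"] > STALE_AFTER[sensor]:
        return None
    return entry["value"]
```

A consumer such as the autopilot would treat a None result as a fault condition
rather than steering on stale data.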
The computer in the boat runs a standard Ubuntu Linux distribution and uses the
Linux 2.6 kernel. The two primary programs that constitute the most important
decision-making parts of the boat are:
1) dbex
2) roboduck
These programs require sensor data which is provided to them by:
1) readGPS
2) readIMU
3) readTCM
When the programs listed above are executing, the boat is capable of performing
LOS guidance and going from one way-point to another with the assumption that it
will not run into any obstacles. As such, this setup can be easily ported to run on a
much smaller and lower-power processor, since it does not consume much CPU
time. Autonomous operation of the boat in a marina would, however,
require an obstacle avoidance system. This functionality is provided on the
Roboduck-II by the obs_avoid program.
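At this level, LOS guidance reduces to computing the bearing from the current
position to the active way-point, and switching way-points once inside an
acceptance radius. A minimal flat-earth sketch follows; coordinates are local metres
and all names and the radius value are illustrative.

```python
import math

def los_heading(pos, waypoint):
    """Desired heading (degrees, 0 = north, clockwise positive) from the
    current position to the waypoint; local flat-earth frame with
    x = east, y = north."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0

def reached(pos, waypoint, radius=5.0):
    """Waypoint accepted once the boat is within `radius` metres of it."""
    return math.hypot(waypoint[0] - pos[0], waypoint[1] - pos[1]) <= radius
```

The desired heading is handed to the heading autopilot; when `reached` returns
true, the guidance loop advances to the next way-point in the mission file.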
Figure 4 : Shared memory and processes that share the data
In the following text, we describe each of these in more detail.
2.1.2 dbex
This program is primarily designed to be the communications channel between the
NAMOS-GUI and the other core-processes running on the boat. It shuttles
commands and data back and forth between the GUI and the boat’s computer. The
program is also capable of operating the sensor when the GUI is sending commands
for winch-operation or to download data from the winch. This process is carried out
by communicating with an intermediate micro-controller board located in the winch-
box, which can raise/lower the sensor package, or by communicating with the sensor
through a short-range low-power wireless link. This program is also
the one that handles mission execution by which we mean the tasks that the user has
programmed the boat to perform in a sequential order through the GUI, which
include moving from one way-point to another, waiting at a location for a specified
amount of time (to perform sampling), raising/lowering the sensors to specified
depths and so on. Another functionality that has been added to this program is the
ability to issue commands that perform system-identification through studying the
frequency response of the boat. The system identification section, like most others,
requires that the other main program, "roboduck", be executing.
2.1.3 roboduck
This program is like the kernel of the navigation and guidance system on the boat. It
contains the control loops that make the vehicle capable of performing autonomous
guidance. This program implements a generic Proportional – Integral – Derivative
(PID) controller which is capable of two feedback forms, which I will describe in
more detail in the next chapter. It also has the ability to generate control
commands at regular intervals, by default at 10 Hz. The
Line Of Sight guidance controller also runs at the same rate at this time. We discuss
it in more detail in Chapter 4.
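The heading control law described in Chapter 3 is a PD law with rate feedback. A
minimal sketch follows; the gains, output convention and clipping range are
illustrative, not the tuned values from the thesis.

```python
def pd_heading_control(psi_des, psi, yaw_rate, kp=1.0, kd=0.5):
    """PD heading controller with rate feedback: the proportional term
    acts on the (wrapped) heading error, while the derivative term damps
    the yaw rate measured by the gyro. Output is a differential-thrust
    command clipped to [-1, 1]; gains here are illustrative."""
    err = (psi_des - psi + 180.0) % 360.0 - 180.0  # shortest-way error, degrees
    u = kp * err - kd * yaw_rate
    return max(-1.0, min(1.0, u))
```

Wrapping the error into (-180, 180] keeps the boat from taking the long way
around when, for example, the heading crosses north.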
2.1.4 readGPS
This is a generic GPS reading program designed to read and parse GPS strings. It
reads GPS data and places it into the sensor data structure in shared-memory along
with the time-stamp when the data was received.
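The parser itself is not reproduced in the text; the sketch below extracts latitude and
longitude from a standard NMEA GGA sentence, assuming the usual
ddmm.mmm / dddmm.mmm field layout. It is an illustration, not the readGPS code.

```python
def parse_gga(sentence):
    """Parse latitude/longitude from a NMEA GGA sentence into signed
    decimal degrees. Returns None for other sentence types."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        return None
    # Latitude is ddmm.mmm, longitude is dddmm.mmm
    lat = int(fields[2][:2]) + float(fields[2][2:]) / 60.0
    if fields[3] == "S":
        lat = -lat
    lon = int(fields[4][:3]) + float(fields[4][3:]) / 60.0
    if fields[5] == "W":
        lon = -lon
    return lat, lon
```

The parsed pair, together with the receive time-stamp, is what would be placed
into the shared-memory sensor structure.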
2.1.5 readIMU
This is a program that reads a 3DMG rate-gyro and compass sensor. Data from this
sensor is placed into shared memory variables along with time-stamps indicating the
time when they were read.
2.1.6 readTCM
This is a program that reads a PNI TCM2 compass and places its data into shared
memory variables. It is an optional program but can be used to keep a check on the
3DMG sensor data. If used by itself, due to its low band-width, autopilot gains need
to be modified. Also, it would help to use a complementary filter between this
sensor and the rate-gyros from the 3DMG to improve the overall dynamic response
of the system.
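One update step of such a complementary filter can be sketched as below: the fast
but drifting gyro is integrated, and the estimate is corrected slowly toward the slow
but drift-free compass heading. The blend factor alpha is an illustrative assumption,
not a tuned value.

```python
def complementary_filter(heading_est, compass_heading, yaw_rate, dt, alpha=0.98):
    """One filter update: integrate the gyro yaw rate (deg/s) over dt, then
    nudge the result toward the compass heading. alpha close to 1 trusts
    the gyro over short time-scales; (1 - alpha) sets the correction rate."""
    predicted = heading_est + yaw_rate * dt            # gyro integration
    err = (compass_heading - predicted + 180.0) % 360.0 - 180.0
    return (predicted + (1.0 - alpha) * err) % 360.0
```

Run at the gyro's rate, this keeps the heading responsive to fast turns while the
compass slowly removes the accumulated gyro drift.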
2.1.7 obs_avoid
This program acquires images from the Videre STH-MDCS stereo-head at 30 frames
per second. The resolution of each image is 320 x 240. It then uses an area
correlation algorithm, using the SVS library API developed at SRI International,
[Konolige, 1997], to compute a disparity map from these images. A filtered version
of the disparity map is then used to estimate obstacle locations and to generate
commands that effect evasive steering action. Details regarding this, along with test
results, are covered in the chapter titled Obstacle Avoidance.
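Chapter 5 gives the details; as a preview, the polar-histogram idea can be sketched
as binning disparity-derived ranges by bearing, weighting near obstacles more
heavily (inverse distance), and steering toward the clearest sector. The sector count
and field of view below are illustrative, not the values used on the boat.

```python
def polar_histogram(ranges_bearings, n_sectors=9, fov_deg=90.0):
    """Accumulate an inverse-distance weight per bearing sector, so nearer
    obstacles contribute more. ranges_bearings is a list of
    (range_m, bearing_deg) pairs with bearings in [-fov/2, +fov/2]."""
    hist = [0.0] * n_sectors
    width = fov_deg / n_sectors
    for rng, brg in ranges_bearings:
        idx = int((brg + fov_deg / 2.0) // width)
        idx = min(max(idx, 0), n_sectors - 1)
        hist[idx] += 1.0 / max(rng, 0.1)
    return hist

def clearest_bearing(hist, fov_deg=90.0):
    """Centre bearing of the sector with the least obstacle weight."""
    width = fov_deg / len(hist)
    i = min(range(len(hist)), key=lambda k: hist[k])
    return -fov_deg / 2.0 + (i + 0.5) * width
```

The chosen bearing would then be fed to the guidance layer as a temporary
heading offset until the obstacle clears.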
2.2 Hardware Design Considerations
The Roboduck-II is 84" (2.13 m) long, 28" (0.71 m) at its broadest section, and 3.4
feet (1.04 m) high at its highest point (the winch mast). Its weight in air is
approximately 43 kg. It is propelled by two DC motors that are capable of taking it
to a top speed of about 1.55 m/s. It comes with a Futaba RC transmitter/receiver
through which it can
be operated manually by sending joystick commands to operate the thrusters and
rudder.
Before developing and testing our autopilot, we needed to ensure that the boat could
be brought back into manual-mode through a fail-safe mechanism. This required
some hardware design, (joint work with Jnaneshwar Das). The basic idea behind the
fail-safe work is as follows –
The AX-1500 thruster control board which is located in a box in the aft section,
accepts serial port commands which can be used to control the motors. As such, the
boat can be controlled without a failsafe through the RC control, by using a software
watch-dog on the computer to switch control back to RC in case the program on the
main computer crashes. This strategy is not reliable, since the computer could also
get into an unrecoverable state where it is still sending commands to the thrusters
but is unable to communicate over the wireless link. The computer on the boat
would then need to be reset, which may not be possible unless the boat is retrieved
using another boat.
We built a circuit that has a threshold-based switch control setup, and can drive a
relay. We switch control inputs between the radio receiver’s lines for the thrusters to
the serial port line from the computer using this relay when a switch on the joystick
is toggled over. Switching back puts the computer back in control.
The hardware on the Roboduck-II is shown in Figure 5.
Figure 5 : Hardware on the Roboduck-II
Chapter 3 : Vessel Models and Autopilot Design
3.1 Vessel Models
This chapter describes the controller design for the heading autopilot. In this
preliminary section, we begin with a system model for Roboduck-II.
Figure 6 : Coordinate frame on the Roboduck-II used for modelling.
Let us assume that $\vec{v}_0 = \{u, v, w\}$ is the body-referenced velocity. These velocities are
known as the surge velocity (along the axis of the boat), the sway velocity (towards
the side of the boat), and the heave velocity (along the z-axis).

Also, $\vec{r}_G = \{x_G, y_G, z_G\}$ is the location of the center of mass in body
coordinates, and $\vec{\omega} = \{p, q, r\}$ is the rotation vector, consisting of the roll rate,
pitch rate and yaw rate in the local body coordinates. The vector $\vec{F} = \{X, Y, Z\}$
represents the external force in body coordinates.

Since the boat operates on the surface, we will assume for now that roll, pitch, their
rates, and heave are zero. We also assume that Roboduck-II is symmetric about the
x-z plane.

The following equations [Triantafyllou and Hover, 2003] provide a model for the
dynamics of the vessel for the linear case with one or more thrusters (symmetrically
located if more than one) in the bow, and a rudder:

$$(m - X_{\dot{u}})\,\dot{u} = X_u u + X \quad (1)$$

$$(m - Y_{\dot{v}})\,\dot{v} + (m x_G - Y_{\dot{r}})\,\dot{r} = Y_v v + (Y_r - mU)\,r + Y \quad (2)$$

$$(m x_G - N_{\dot{v}})\,\dot{v} + (I_{zz} - N_{\dot{r}})\,\dot{r} = N_v v + (N_r - m x_G U)\,r + N \quad (3)$$

$$\begin{bmatrix} m - Y_{\dot{v}} & m x_G - Y_{\dot{r}} \\ m x_G - N_{\dot{v}} & I_{zz} - N_{\dot{r}} \end{bmatrix} \frac{d\vec{s}}{dt} = \begin{bmatrix} Y_v & Y_r - mU \\ N_v & N_r - m x_G U \end{bmatrix} \vec{s} + \vec{F} \quad (4)$$

$$M\dot{\vec{s}} = P\vec{s} + \vec{F} \;\;\Rightarrow\;\; \dot{\vec{s}} = M^{-1}P\,\vec{s} + M^{-1}\vec{F} = A\vec{s} + B\vec{F} \quad (5)$$

In Equations (1), (2) and (3) we have terms from the forces and moment imposed by
the thrusters and rudder, $\{X, Y, N\}$. In Equation (4), which is known as the
Sway-Yaw equation, we notice that it is not directly related to the surge equation;
the effects of the surge force are therefore considered to be de-coupled from those
of the sway-yaw system. The reason for including the equations above is to indicate
which terms play the most dominant roles on a surface vessel in the linear case, with
(quite a few) simplifying assumptions. Finally, the state-space representation of the
equations is provided in Equation (5), where $\vec{s} = \{v, r\}$ is the state vector and
the external force/moment vector $\vec{F} = \{Y, N\}$ is the input vector.
Unfortunately, many of these terms are very difficult to measure empirically since
they require access to tow-tank facilities or resorting to Computational Fluid
Dynamics (CFD) analysis. At the time of writing, we have not attempted either of
these two methods of obtaining the coefficients; instead, we have performed system
identification based on the data returned by our sensors, although measuring or
computing the coefficients above for our vessel would help us design much better
controllers for the plant.
One way that is often used by control engineers to get a model of a system is to
analyze its step response. In our case we are interested in knowing the rate of
change of the yaw angle of the boat (rotation about the z-axis). We obtain the
response, i.e. the yaw rate, from the rate-gyro sensors, while the input is the
command provided to the thrusters. MATLAB has a System Identification toolbox which
can be used to estimate models from data sets. By using this tool on a time-domain
data set, we have obtained the following models in parametric form.
Figure 7 : Plot of the Step input command (u1) and Yaw Rate output (y1) signals.
Figure 7 shows the measured step response of the vessel. Using MATLAB, we obtain a
first-order Nomoto model [Fossen, 1994], [Triantafyllou et al., 2003], [Amerongen,
1982] that closely approximates this response. The simulated Nomoto model's
transfer-function response is shown in Figure 8, alongside the actual input data. At
this point it is worth noting that the data used to perform this system
identification is bound to have been noisy, since it was collected during a test run
of the boat in the sea under windy conditions; there is therefore a very good chance
that wave effects are present in the collected data.
Figure 8 : Measured Open-loop Step Response v/s simulated Model Outputs.
Figure 8 shows the output models from the system-identification toolbox. We assume
that we are interested only in the transient response, since the transfer function we
can obtain by this method is not accurate enough to allow us to attempt much more
than that.
Figure 9 : Step, Pole-Zero map, Impulse response, Bode plot of Nomoto model
The model shown above has the following characteristics in its open-loop transient
response:

Rise Time      Settling Time   Final Value
4.67 seconds   8.31 seconds    0.000878

Table 1 : Open loop transient response characteristics
The 1st-order Nomoto model we obtain through system identification is:

$$\frac{\dot{\psi}(s)}{\delta_R(s)} = \frac{0.022478}{2.1245\,s + 1}$$
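The identification above was performed with MATLAB's System Identification toolbox; as an illustrative alternative, the same first-order fit can be sketched in Python with SciPy. The synthetic data below, generated from the identified values, merely stands in for the logged gyro step response:

```python
import numpy as np
from scipy.optimize import curve_fit

# Step response of a first-order model K/(tau*s + 1) to a unit step:
# y(t) = K * (1 - exp(-t/tau)).
def first_order_step(t, K, tau):
    return K * (1.0 - np.exp(-t / tau))

# Synthetic "logged" yaw-rate data generated from the identified values
# (K = 0.022478, tau = 2.1245) plus gyro-like noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 15.0, 300)
y = first_order_step(t, 0.022478, 2.1245) + rng.normal(0.0, 5e-4, t.size)

# Least-squares fit of K and tau to the noisy response.
(K_hat, tau_hat), _ = curve_fit(first_order_step, t, y, p0=[0.01, 1.0])
print(K_hat, tau_hat)
```

The same fit generalizes to a real log by replacing `t` and `y` with the recorded command time base and gyro output.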
3.2 Design of the Heading Autopilot
The heading autopilot uses differential thrust to perform steering. This works
quite well at slow speeds, but is not efficient at higher speeds, partially because a
thruster's efficiency drops at higher rpm. The optimal strategy would be to start out
steering differentially with the thrusters, and then to perform more of the turning
using rudder action, except for maneuvers that need very high turn rates, in which
case a coupled strategy that utilizes both the rudder and the thrusters can be used.
In this work, however, a steering command refers to the operation of the thrusters in
differential mode, i.e. introducing a difference between the thrusts generated by the
two thrusters to bring about a turning moment.

Assuming that both propellers behave identically and produce the same thrust at the
same rotation speed, and that the motor-controller board ensures that both motors
rotate at the same speed for the same command, we can come up with two numbers that
are proportional to the surge force and the yawing moment.
Before going on to the actual design of a PID, let us go back to the transfer
function that we obtained from system identification:

$$\frac{\dot{\psi}(s)}{\delta_R(s)} = \frac{K}{\tau s + 1} \qquad (6)$$

$$\frac{\dot{\psi}(s)}{\delta_R(s)} = \frac{0.022478}{2.1245\,s + 1} \qquad (7)$$

Here $\tau = 2.1245$ and $K = 0.022478$.
Usually, one needs an integral term in a controller when there is a steady-state
error in the output, which is typically the case in over-damped systems. A heading
control system for a small vessel such as Roboduck-II does not exhibit this
characteristic, as we have been able to verify empirically. It tends instead to get
pushed past the reference angle by waves and other high-frequency disturbances.
Hence, the damping that can be provided by a faster signal such as the rate sensor
becomes more critical. Traditionally, if there is a reference signal that needs to be
tracked, a derivative term would be computed by multiplying the derivative gain with
the derivative of the error term. This is not a productive way of handling the
situation, as the sensor we are using may have limited bandwidth and may not be able
to capture the true fast-changing signal that is needed to provide damping. Hence, it
is not uncommon to use a higher-bandwidth sensor that is capable of measuring the
rate of change of the tracked signal directly, since it will track changes in the
error term much more reliably. Other ways to handle this would be to use
state estimators such as Kalman filters or complementary filters to fuse information
from the slower and faster sensors, yielding estimates that are closer to the true
signal. Another reason for omitting integral action is that it can sometimes produce
chattering at the actuator limits. At present, the controller code for the autopilot
can take on proportional-only, PI, PD or PID controller forms. Additionally, it is
possible to choose between computing the derivative term directly from the input
signal or from a rate sensor.
The control law that we use is:

$$\tau_{differential} = K_p(\psi_{desired} - \psi_{measured}) - K_d\,r \qquad (8)$$

The gains $K_p$ and $K_d$ were chosen empirically.

Here the differential-mode thrust $\tau_{differential}$ and the common-mode thrust
$\tau_{common}$ are used to produce thruster voltages using the relation given in
equation (9). For all of the tests conducted in this work, the surge controller (for
speed control) was always in open-loop mode.

$$T_{left} = \tau_{common} - \tau_{differential}, \qquad T_{right} = \tau_{common} + \tau_{differential} \qquad (9)$$
The thrusters are then provided with these commands.
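A minimal sketch of the control law (8) and the thrust mixing (9), with the heading error wrapped into [-180, 180) (an assumed detail, not stated above) and illustrative gain and common-mode values:

```python
import math

def wrap_deg(angle):
    """Wrap an angle in degrees into [-180, 180)."""
    return (angle + 180.0) % 360.0 - 180.0

def heading_autopilot(psi_desired_deg, psi_measured_deg, yaw_rate_rad_s,
                      Kp=0.9, Kd=1.8, tau_common=0.5):
    """PD control law of eq. (8) plus the thrust mixing of eq. (9).
    The proportional term acts on the heading error in degrees while the
    damping term uses the rate gyro in rad/s, as described in the text;
    tau_common is an illustrative open-loop surge command.  Which side
    receives the larger thrust depends on the vehicle's sign conventions."""
    error = wrap_deg(psi_desired_deg - psi_measured_deg)
    tau_diff = Kp * error - Kd * yaw_rate_rad_s
    return (tau_common - tau_diff,   # T_left
            tau_common + tau_diff)   # T_right

# A 50-degree commanded turn produces a strong differential component.
print(heading_autopilot(50.0, 0.0, 0.0))
```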
The figure below shows a block diagram of the auto-pilot which we use on the
Roboduck-II.
Figure 10 : Heading Autopilot structure from the simulink simulation.
Simulink is the simulation software provided by Mathworks, and was used to tune the
heading controller. We present results from the simulation for a few gain values that
were used in practice, as well as gains obtained by tuning the system for a
better-damped transient response. The reason for a seemingly large $K_d$ gain is that
the PID operates on degrees, while the angular rates are in radians/sec; only after
this conversion are the two quantities closely related.
Figure 11 : Effect of tuning the gains for different values of $K_p$ and $K_d$ on yaw
As we can see from the figure above, the system can be made critically damped so
that it achieves a steady state much faster, although that affects the rise time slightly.
The present gains used on the vehicle give a decent response, but with a small
overshoot.

Figures 11 and 12 show the yaw and yaw-rate responses of the vehicle for a commanded
change in angle of 50 degrees. As one can see, when the gain $K_d$ is increased, the
system is highly damped. If it is not, there is a small oscillatory behavior in the
yaw direction as the vehicle overshoots slightly and then corrects itself to a more
stable position.
Figure 12 : Yaw Rate response under Closed-Loop mode with various gains
We add that the Nomoto model used here has limitations. To begin with, it is a
first-order simplification of the second-order Nomoto model, which is itself based
upon the Davidson-Schiff model described in equations (1)-(5). The Nomoto models are
different for different speeds of the vehicle, and hence a different transfer
function needs to be computed for each. The transfer function above was obtained
while the vehicle was turning at zero forward speed. The actual response of the
vehicle can be expected to be a little slower, since the moments generated by the
thrusters will be smaller due to the loss of thruster efficiency at speed.
Figure 13 : Heading autopilot response to tracking a step-like input
Figure 13 demonstrates the response of the heading autopilot to a step-like desired
heading input. As we can see, the response is close to the response we expected from
the Simulink simulation (Figure 11) for a similar angular input using the Nomoto
model with gains $K_p = 0.9$ and $K_d = 1.8$. This plot was obtained using actual
data from the vehicle during a guidance test; the reference signal is actually a
guidance-control output. The plot demonstrates that the heading autopilot is capable
of achieving and maintaining the heading of the vehicle within ±3 degrees of the
desired input signal after it has come within ±5 degrees of it, even under
disturbances from wind and waves. It tracks this heading reasonably well, unless the
error (when another heading is set) is large enough to require larger changes in
heading than the vehicle's response permits.
Chapter 4 : LOS Guidance System Design, Considerations and
Analysis
4.1 Introduction
The biological application for which we have been developing Roboduck requires that
we be able to specify a set of locations that the robot must visit in order to
perform sampling. The simplest way of doing this is a Line-Of-Sight guidance system.
Here, we use an outer loop around the inner heading-autopilot loop to compute a
desired heading angle, which is provided to the heading controller to attain.
As mentioned in the section on the software description, we have a Graphical User
Interface which allows us to create a set of way-points that are sent to the boat for
execution. These way-points are latitude-longitude pairs; each pair represents the
intersection of a meridian of longitude with a parallel of latitude. We obtain this
location information from the widely available GARMIN GPS 16/A Global Positioning
System receiver, which uses the NAVSTAR constellation of satellites.
GPS is a system of satellites that orbit the earth in planes inclined at 55 degrees
to the equator. They encode ranging information into their carrier signals. When a
receiver gets the GPS signal from a satellite, it can extract the time the signal
took to reach it. The accuracy of this GPS fix is usually within 15 m with Selective
Availability disabled. The accuracy can improve to 3-5 m, 95% of the time, if the
receiver is capable of using WAAS (Wide Area Augmentation System). WAAS, like DGPS
(Differential GPS), corrects for ionospheric delays, skews in the satellite clocks
and orbit-tracking errors, thus improving the accuracy of the position fix. Our GPS
is capable of receiving WAAS corrections, and hence the accuracy of this data (which
had WAAS enabled) should be <3 meters with 95% confidence.
Our algorithm performs local computations on the position data to compute an
absolute command for the heading controller. Instead of using latitude-longitude to
compute the error in position, we prefer using the Universal Transverse Mercator
(UTM) projection [Snyder, 1987], [Army, 1973] co-ordinate system. UTM is a grid-based
co-ordinate projection which divides the world into 60 vertical zones. The UTM system
allows us to operate in a space where we can easily use the rules of Cartesian
co-ordinate geometry, and therefore reduces the chance of introducing errors into the
computation of heading. We make use of UTM co-ordinates not only in the guidance
system on the boat, but also to calibrate and use maps for setting way-points in the
NAMOS-GUI program, as image co-ordinates have a linear relationship with UTM, as
opposed to latitude-longitude pairs. When passing the position to the boat, however,
we prefer latitude-longitude pairs because UTM typically requires 3 fields: Easting,
Northing and the UTM Zone. It is good practice to keep track of the zone, as
neglecting it could result in malfunctioning navigation or mapping code at the
borders of UTM zones.
Our Line Of Sight controller relies heavily on the GPS at this time. We do not filter
or fuse the data presently, although we do plan to work on using an Extended
Kalman Filter to perform data fusion in the future. In the next section, we shall
describe the algorithm employed, followed by a short description of how missions
are executed on the vehicle, and then results from test runs at Redondo Beach.
Finally, we will discuss how a track-following controller can be designed.
4.2 Line of Sight Guidance
First we simplify the kinematics of the boat by neglecting the effects of roll and
pitch.
$$\dot{x} = u\cos\psi - v\sin\psi$$
$$\dot{y} = u\sin\psi + v\cos\psi$$
$$\dot{\psi} = r \qquad (10)$$
where $\psi$, $u$, $v$, $\dot{x}$, $\dot{y}$ and $\dot{\psi}$ are the yaw angle,
body-referenced surge velocity, body-referenced sway velocity, earth-referenced
velocity along the x-axis, earth-referenced velocity along the y-axis, and the
yaw rate respectively. If we were to translate and rotate the reference co-ordinate
system so that it is aligned with the starting pose of the boat, then the heading
angle $\psi$ during control will be very small, and we can assume that
$\cos\psi \approx 1$ and $\sin\psi \approx \psi$. Also, the surge speed
$u \approx U$. Hence, the kinematic equations of motion reduce to a set of linear
equations [Fossen, 1994], which are written below:
$$\dot{x} = U + d_x$$
$$\dot{y} = U\psi + v + d_y \qquad (11)$$
The additional terms $d_x$ and $d_y$ are used to represent errors due to the
linearization and drift caused by environmental disturbances.
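The planar kinematics of equation (10) can be checked with a simple Euler integration; the speeds, yaw rate and step size below are illustrative values, not vehicle-calibrated ones:

```python
import math

def step_kinematics(x, y, psi, u, v, r, dt):
    """One Euler step of eq. (10): x' = u*cos(psi) - v*sin(psi),
    y' = u*sin(psi) + v*cos(psi), psi' = r (roll and pitch neglected)."""
    x += (u * math.cos(psi) - v * math.sin(psi)) * dt
    y += (u * math.sin(psi) + v * math.cos(psi)) * dt
    psi += r * dt
    return x, y, psi

# Constant surge (1 m/s) and yaw rate (0.1 rad/s) trace an arc of
# radius u/r = 10 m.
x, y, psi = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, psi = step_kinematics(x, y, psi, u=1.0, v=0.0, r=0.1, dt=0.1)
print(x, y, psi)
```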
The Line Of Sight guidance is based on a very simple strategy. We compute the
desired angle as:

$$\psi_{desired} = \tan^{-1}\left(\frac{UTME_{desired} - UTME_{present}}{UTMN_{desired} - UTMN_{present}}\right) \qquad (12)$$

Notice that we are computing an angle with respect to the y-axis and not the x-axis.
This is because the compass reads zero when aligned with the y-axis. A trivial but
important implementation requirement is to ensure that the desired heading angle is
in the correct quadrant before we hand it over to the heading autopilot. Thus, the
heading autopilot, the guidance system and the compass should all use a consistent
angular convention, particularly for Euler angles. It is also a good idea to guard
against a divide-by-zero error in case the denominator is zero.
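Both the quadrant correction and the divide-by-zero guard can be handled in one step with a two-argument arctangent. A small sketch, assuming UTM inputs in meters and a convention of degrees clockwise from north (the y-axis):

```python
import math

def los_heading_deg(e_present, n_present, e_desired, n_desired):
    """Desired heading per eq. (12), measured from the northing (y) axis.
    atan2 places the angle in the correct quadrant and is well defined
    even when the northing difference (the denominator) is zero."""
    heading = math.degrees(math.atan2(e_desired - e_present,
                                      n_desired - n_present))
    return heading % 360.0  # normalize into [0, 360)

print(los_heading_deg(0.0, 0.0, 10.0, 10.0))  # goal to the north-east
print(los_heading_deg(0.0, 0.0, 10.0, 0.0))   # due east: zero denominator
```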
At this point, we need to know whether the vehicle is within what we call an
acceptance circle. This is defined as a region within which a way-point will be
considered achieved; it is centered on the desired point and can be taken to have a
radius of acceptance $\rho_0$, such that:

$$(UTME_{desired} - UTME_{present})^2 + (UTMN_{desired} - UTMN_{present})^2 \leq \rho_0^2 \qquad (13)$$

Typically the rule of thumb for ships is to choose $\rho_0$ to be at least twice the
length of the vessel [Fossen, 1994]. In Roboduck-II, due to its size and
maneuverability as well as its positioning capability, we can choose this number to
be between 2 m and 5 m. Part of the reason is that GPS accuracy is a limiting factor
in positioning, especially without an estimating filter. Hence, it is best to choose
a radius that is larger than the expected standard deviation of the GPS being used.
With WAAS enabled, we typically use 2 m.
If the vehicle has achieved a waypoint, it then uses the next waypoint in the mission
file to proceed to the next location. In Roboduck-II's program structure this is
handled through shared memory, making use of a flag called WayPtAchv. The guidance
controller computes and sets a new heading on each cycle. When the way-point has been
achieved, it flags it as such, and the boat halts at the way-point until it drifts
outside the circle of acceptance. If there is another way-point to be reached at this
point in time, the mission program, which polls the WayPtAchv flag, loads the next
desired location in UTM co-ordinates for the boat to go to.
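The acceptance test of equation (13) and the way-point advance can be sketched as below. This is a simplified, hypothetical consolidation; the real system coordinates these steps between processes through the shared-memory WayPtAchv flag:

```python
ACCEPT_RADIUS_M = 2.0  # with WAAS enabled, per the text

def waypoint_achieved(e, n, wp_e, wp_n, rho=ACCEPT_RADIUS_M):
    """Acceptance-circle test of eq. (13) in UTM co-ordinates (meters)."""
    return (wp_e - e) ** 2 + (wp_n - n) ** 2 <= rho ** 2

def next_waypoint_index(position, waypoints, index):
    """Advance to the next mission way-point once the current one is
    achieved (sketch of the WayPtAchv polling logic described above)."""
    e, n = position
    if index < len(waypoints) and waypoint_achieved(e, n, *waypoints[index]):
        index += 1  # flag achieved; the mission loads the next location
    return index

wps = [(10.0, 0.0), (10.0, 10.0)]
print(next_waypoint_index((9.5, 0.5), wps, 0))  # inside the 2 m circle
```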
4.3 Test Results at Redondo Beach Harbor
Figure 14 : Line Of Sight waypoint guidance test at Redondo Beach
Figure 14 shows a simple way-point guidance maneuver test conducted at Redondo
harbor in the harbor patrol area. This test involved setting new way-points via the
NAMOS-GUI rather than through the mission file. The reason for this was entirely that
we had to stay out of the way of bigger boats, which had the right of way; at the
time of the test we could set way-points only when there were no vessels going in or
out of the marina.
Figure 14 plots the latitude v/s longitude of the boat's trajectory. The continuous
line is the reported trajectory of the boat, and each red '*' indicates the location
of a way-point used for reference. The boat started out at the location indicated,
and set out east toward the first way-point. After reaching this waypoint it began
circling around it. We then set another waypoint to the south-east; the boat turned
around and reached this location. We then set another way-point near the start. This
way-point proved difficult for the boat to achieve accurately. This is partially due
to external disturbances, but is also related to a non-linearity introduced into the
magnetometer reading by the physical proximity of the IMU to the propulsion motors.
This had proved to be a major stumbling block initially, and the current location of
the IMU (as far back as it will go inside the hull) is not ideal either, but it seems
to produce reasonable results. The ideal location for a gyro-compass is as close to
the center of mass as possible. At present, we have not been able to risk placing the
IMU outside the hull, as we have not made an enclosure that can hold it. Once this is
done, we should have much better responses from the heading controller, and
consequently better guidance.
Figure 15 : A quiver plot of the GPS data and heading of the vehicle
The quiver plot shown in Figure 15 displays the boat's path during the guidance
maneuver we performed. The purpose of including this plot is to show how the
vehicle's heading varies while it performs a guidance maneuver; it also makes it more
intuitive to see the direction in which the boat traveled while performing these
maneuvers.
Figure 16 : Guidance and Autopilot outputs during Guidance maneuver
The plot shown above (Figure 16) shows how the output of the line-of-sight
controller and the vehicle's yaw response compare. The heading autopilot generates
differential thrusts to drive the error between the guidance input and the actual
heading of the boat to zero. In the plot, we find that for a large portion of the run
the heading autopilot matches the output of the guidance controller, which shows how
well the heading controller performs. The heading autopilot fails to keep up only at
times when the guidance controller's input is very different from the present heading
of the vehicle. This is usually the case when we set a new way-point, or when the
boat comes very close to the actual way-point.
Another interesting observation is the response of the vehicle on the right-hand side
of the plot, where the vehicle displays a damped oscillatory behavior. This is just
as we expected from our simulations of the first-order Nomoto model (obtained through
system identification in Chapter 3) using the experimental gains employed in these
experiments.
Figure 17 : Guidance controller test results in meters, relative to the start location
A drawback of the LOS guidance setup is that it is susceptible to currents. Very
often currents can take the vehicle substantially off the shortest path between
itself and the next waypoint. Hence, although we plan a trajectory for the boat with
the expectation that it will travel the path connecting the waypoints, there are
marked excursions in the real behavior of the boat if we employ this simple LOS
strategy alone. This is apparent both in the first part of the guidance path, when
the boat is heading for the first way-point, and in the last part, where the boat is
going to the third way-point. This behavior manifests itself in the form of a delayed
response, with the boat not turning toward the desired location until it is very
close to it.
One way to improve upon this behavior would be to use an integral control term to
correct for the steady-state error or bias that is introduced by external
disturbances such as winds or currents. This has to be done very carefully, and can
potentially result in unstable behavior if the external disturbances vary. Another
way to handle this is to interpolate waypoints in between, which forces the boat to
steer into the disturbance and thus stay on track. This works because at shorter
distances to the actual target, the turning command tends to be larger: a small
deflection off the track results in a significantly larger angular correction
command, whereas if the goal is further away, the desired heading angle does not
change all that much. This solution is also potentially unstable: if the interpolated
points are set too close to each other, erroneous GPS readings could result in the
boat turning around in circles and never reaching the intermediate way-points, thus
defeating the very purpose of the whole idea.
Most strategies for track-following guidance systems use a controller that tracks the
cross-track error between the desired trajectory and the one the vehicle is on, and
tries to keep that error as low as possible. This is an extension of the guidance
controller that we plan to implement in the future. The paper on path following
[Encarnacao et al., 2000] and the references therein cover the difficulties involved
in path following for an ASV, and propose a non-linear control algorithm that copes
with constant unknown ocean currents.
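The cross-track error that such a controller would regulate is the signed perpendicular distance from the vehicle to the line through two consecutive way-points. A small illustrative helper (not the thesis implementation):

```python
import math

def cross_track_error(p, a, b):
    """Signed lateral offset (m) of point p from the track a -> b, with all
    points given as (easting, northing) tuples.  The 2-D cross product of
    the track direction with the vector a -> p, divided by the track
    length, gives the perpendicular distance; its sign tells which side
    of the track the vehicle is on."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    tx, ty = bx - ax, by - ay
    return ((px - ax) * ty - (py - ay) * tx) / math.hypot(tx, ty)

# 2 m to one side of an eastward track.
print(cross_track_error((5.0, 2.0), (0.0, 0.0), (10.0, 0.0)))
```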
Another problem that is even more challenging is station keeping. This problem
requires studying the resultant forces on the vehicle and estimating its position
accurately over time, without reacting to noise from sensors like the GPS while doing
so. It usually requires a state estimator, such as a Kalman filter, to take care of
noisy measurements. It also involves estimating the resultant external force field
and designing controllers (which are usually non-linear) to generate the control
inputs to the actuators that achieve dynamic positioning. The reader is referred to
[Lindegaard, 2003] and the references therein for the current methods used in
dynamic-positioning design for ships and marine vehicles.
Chapter 5 : Obstacle Avoidance using Stereo Vision
5.1 Introduction
Obstacle avoidance is a well-studied problem in robotics; see [Siegwart, 2004] and
the references therein for an extensive survey of obstacle avoidance and path
planning. In order for a robot to operate reliably in real life, it is imperative
that it first perceive obstacles, and then take evasive action to avoid running into
them while navigating to the desired target location. Obstacle avoidance changes the
trajectory of the robot based on sensor information whenever an evasive maneuver is
necessary.
The most popular obstacle-avoidance techniques in use today depend primarily on some
form of ranging, which is usually provided by laser scanners or sonar range-finders.
Before beginning, we survey algorithms in use today for obstacle avoidance. One of
the most popular algorithms capable of both path planning and obstacle avoidance is
the Vector Field Histogram (VFH) [Borenstein, 1991]. VFH creates a polar histogram by
assigning, for each angle, a probability that there is an obstacle in that direction,
thresholded against an occupancy-grid map of obstacles. It then calculates a steering
direction by optimizing a cost function over every candidate opening that is large
enough for the vehicle to pass through. This cost function G has the following terms:

$$G = a \cdot target\_direction + b \cdot wheel\_orientation + c \cdot old\_direction \qquad (14)$$
The parameters a, b and c are used to tune the system to obtain better behavior. The
VFH+ algorithm [Ulrich et al., 1998] improves upon this by making use of a simple
model of the moving robot's trajectories based on its kinematic limitations. It uses
knowledge of the way in which the robot can move to mask kinematically blocked
trajectories out of the polar histogram, thus resulting in better choices. There are
many other methods, such as the bubble-band method, the elastic-band method,
curvature-velocity methods and so on, which are discussed very nicely in [Siegwart,
2004]. Another interesting algorithm is the local dynamic-window approach used by
[Fox et al., 1997], which first selects a dynamic window of all tuples $(v, \omega)$
that can be reached within the next sample period, while accounting for the robot's
capabilities. It then reduces the dynamic window by keeping only those tuples that
will allow the vehicle to stop before bumping into an obstacle. An objective function
that depends upon the heading, velocity and distance to goal is then optimized. Just
as in the previous case with VFH, the tuning parameters a, b, c can be tweaked to get
better responses to either of these criteria. The objective function is written as:

$$O = a \cdot heading(v, \omega) + b \cdot velocity(v, \omega) + c \cdot dist(v, \omega) \qquad (15)$$
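As a sketch of the objective in equation (15), with made-up scoring terms standing in for the trajectory-based heading, velocity and clearance measures of [Fox et al., 1997]:

```python
def dwa_score(v, w, heading_term, dist_term, a=0.8, b=0.1, c=0.1):
    """Weighted objective O = a*heading + b*velocity + c*dist of eq. (15).
    heading_term and dist_term are assumed pre-computed in [0, 1] for the
    candidate tuple (v, w); higher is better for all three terms."""
    return a * heading_term + b * v + c * dist_term

# Choose the best admissible (v, w) tuple from a toy dynamic window.
window = [  # (v, w, heading_term, dist_term)
    (0.2, -0.5, 0.4, 0.9),
    (0.5,  0.0, 0.9, 0.7),
    (0.3,  0.5, 0.6, 0.2),
]
best = max(window, key=lambda t: dwa_score(*t))
print(best[:2])
```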
5.2 Stereo Vision
Stereo vision is the technique that we use to determine depth. The reader is referred
to [Hartley et al., 2006], [Klette et al., 1998], [Trucco et al., 1998] and [Forsyth
et al., 2003] for more in-depth coverage of the principles involved in the
reconstruction of 3-dimensional information from multiple views, including stereo
vision. It is extremely difficult to determine distances to objects in computer
vision without using more than one image. Methods that work in such cases usually
rely upon a-priori knowledge of the objects they are looking at, and break down when
scale-distance ambiguities take place. For example, we might be able to tell how far
away a car is based on its size in the image. If, however, there is a model car
coming towards the camera that is significantly smaller than the real one, it will
appear to be distant even at close range.
Stereo vision relies on triangulation to determine range. An object produces an image
of itself on two imagers which are separated by a baseline, and the same object
therefore appears at a different location in each view. This difference in the
locations of the same object's imaged pixels is called disparity, and it grows with
the object's proximity to the cameras: the closer an object is to the camera, the
larger the disparity it produces.
Figure 18 : How disparities are calculated.
The range is obtained from the disparity by the relation:

$$R = \frac{b \cdot f}{d}, \quad \text{where } d = d_l - d_r \qquad (16)$$
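Equation (16) maps directly into code. The baseline, focal length and pixel coordinates below are assumed illustrative values, not the STH-MDCS calibration:

```python
def stereo_range(baseline_m, focal_px, x_left_px, x_right_px):
    """Range per eq. (16): R = b*f/d, with disparity d = d_l - d_r.
    Baseline is in meters; focal length and pixel positions in pixels."""
    d = x_left_px - x_right_px
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return baseline_m * focal_px / d

# 9 cm baseline, 400 px focal length, 12 px disparity -> 3 m range.
print(stereo_range(0.09, 400.0, 212.0, 200.0))
```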
5.3 The Small Vision System
We use the Small Vision System (SVS) stereo algorithm [Konolige, 1997] to perform
ranging. This system uses area correlation to find correspondences between the views.
It is one of the fastest algorithms currently available that produces decent results,
and can be used to produce almost real-time disparity maps. On the 2 GHz computer on
the boat it can handle 30 fps at 14% CPU utilization, while generating disparity maps
and storing images at a resolution of 320x240, grabbed from the STH-MDCS stereo head
made by Videre Systems. SVS's area-correlation algorithm uses a user-adjustable
correlation window to detect matches between the left and right views.
Figure 19 : Accuracy tests using SVS to find range to a checkerboard target.
We conducted tests in the laboratory to check how well the stereo system works. These
were under optimal conditions, but were primarily intended to test the linearity of
the sensing. The basic setup included taking pictures of a checkerboard target, which
the program attempts to locate and then determine the range to using SVS. The graph
shown above shows the results for two trials. (A checkerboard pattern tends to
produce very good disparities at the distances we used, since it offers a
high-contrast image and has corners that are much easier to localize to sub-pixel
accuracy.) Since SVS interpolates disparities to 1/16th of a pixel, this boosts
accuracy considerably.
Figure 20 : Obstacle Avoidance System flowchart
5.4 Obstacle Avoidance Algorithm using Stereo Disparities
When we began trying to detect obstacles using stereo, our first step was to train
the camera down a corridor aisle and see what the program would want us to do, using
the simplest setup we could think of: turn left, turn right or back up.

The results were promising for a first attempt; almost every frame produced the
correct turn signal (left/right) when the camera was pointing in the direction of a
desk, wall or chair. After adjusting a few of the parameters, such as the nearest
distance to objects and the size of the smallest obstacle, we experimented on the
boat.
When we tried it out on the boat, the results were quite different from the 95%
success rate in the lab. The boat turned around continuously in either direction, and
continued doing so. We believe this was due to the following reasons:
• The open environment at the harbor did not produce high-accuracy disparities
such as those achieved in the laboratory, where conditions resembled navigating a
corridor.
• Lighting conditions were different, and the focus settings were tuned for accuracy
over larger distances. This resulted in poor estimates for near objects, which could
bring up noisy disparity results in a few frames.
• The algorithm used at the time computed the sum of all disparities on either side
of the center of the camera that were nearer than the threshold distance, and decided
to turn in the direction with the higher disparity mass. The combined mass over a
large sector angle, such as the one used in this method, makes it difficult to
threshold correctly, since even frames containing no obstacles may have enough
erroneous data to trigger a turn. A threshold with a large dynamic range cannot be
achieved using such a simplified algorithm.
The observed disparity estimates from the camera data indicate that the data is very
noisy. It is therefore probably a good idea to do some amount of spatial filtering on
the image to reject outliers, for example using a median filter [Pratt, 2004]. This
removes noisy data from the image while still preserving the information (an
averaging filter would change the disparity values). Another way to remove noisy data
is to require that samples reach some critical mass before they are considered a
serious obstacle.
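As a sketch of the median-filter suggestion (assuming SciPy and a toy disparity patch, not real SVS output), a 3x3 median filter removes isolated speckles while leaving the surrounding values untouched, where an averaging filter would smear them:

```python
import numpy as np
from scipy.ndimage import median_filter

# A small disparity patch with two isolated speckle outliers (90 and 0).
disp = np.array([[10, 10, 10, 10],
                 [10, 90, 10, 10],
                 [10, 10,  0, 10],
                 [10, 10, 10, 10]], dtype=np.uint8)

clean = median_filter(disp, size=3)  # 3x3 median of each neighborhood
print(clean)
```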
One way to get an idea of this is to generate a polar histogram of ranges to objects
at various angles, a method used in sonar- and laser-based obstacle avoidance
[Borenstein et al., 1991], [Ulrich et al., 1998], [Ulrich et al., 2000]. To reduce
the processing time of computing the true angles, we assume that there are 10
sectors, each 32 pixels wide, and project downwards all pixels that are closer to the
cameras than a certain threshold value. (Please note that this is an approximation to
the actual angles.) This results in the polar histograms shown in Figures 21 (a) &
(b), 22 (a) & (b) and 23 (a) & (b).
These are obtained using the relation:

$$S(i) = \sum_{(x,y)\,:\,\lfloor x / W_{pixels} \rfloor = i} \delta(x,y), \qquad \delta(x,y) = \begin{cases} 1, & R_{disparity}(x,y) < R_{Threshold} \\ 0, & R_{disparity}(x,y) \geq R_{Threshold} \end{cases} \qquad (17)$$
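Equation (17) can be sketched as follows, assuming a range image derived from the disparity map (array sizes matching the 320-pixel-wide images used here; the 3 m distance threshold and 150-count sector threshold are the values chosen below in the text):

```python
import numpy as np

def polar_histogram(range_img, range_threshold_m, n_sectors=10, sector_px=32):
    """Sector counts per eq. (17): column x belongs to sector
    floor(x / 32), and each pixel closer than the threshold votes for
    its sector (the 'project downwards' step in the text)."""
    close = range_img < range_threshold_m        # threat pixels
    col_counts = close.sum(axis=0)               # project down each column
    return [int(col_counts[i * sector_px:(i + 1) * sector_px].sum())
            for i in range(n_sectors)]

# Toy 240x320 range image with a 1.5 m object spanning columns 100-139.
rng_img = np.full((240, 320), 10.0)
rng_img[:, 100:140] = 1.5
hist = polar_histogram(rng_img, 3.0)
unsafe = [i + 1 for i, c in enumerate(hist) if c > 150]  # sectors 1..10
print(unsafe)
```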
Figure 21 (a) & (b): Polar Histograms, image and disparity image for near obstacle
Figure 22 (a) & (b) : Polar Histograms, image and disparity image for very near obstacle
Figure 23 (a) & (b) : Polar Histograms, image and disparity image – no obstacles close by
From the figures shown above, it is clear that the disparity data is far from
reliable. There are situations, such as in Figure 21, where the nearest obstacles are
not detected. This is due to the fact that cameras focus at a particular distance,
and intricate textures outside that distance appear out of focus and become very
difficult to capture. In the absence of a method of getting unique candidate
correspondences, the stereo algorithm has no way of identifying the same point in
both views (at least not by area correlation using small windows), and hence we get
no estimates here.

This poses a problem, which we solve for now by assuming that the boat starts out
without any obstacle in front of it that is closer than 1 m. From the histograms
above it is quite clear that anything closer than this is not detected.
The next step is determining a threshold that will work well. There are in fact two thresholds involved:
1) The distance threshold, the closest distance below which we start treating a disparity pixel as a threat or obstacle.
2) The histogram threshold, the per-sector count above which a sector is flagged as unsafe for traversal.
From the graphs plotted above, we chose a distance threshold of 3 m and a histogram threshold of 150. In our tests this produced satisfactory, though not completely reliable, results.
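Applying the second (histogram) threshold to the per-sector counts is then straightforward; the counts below are made-up numbers for illustration, not measurements from the figures:

```python
def unsafe_sectors(hist, hist_threshold=150):
    """Flag sectors whose near-pixel count exceeds the histogram threshold.

    hist holds, per sector, the count of pixels closer than the distance
    threshold; sectors are reported 1-indexed as in the tables below.
    """
    return [i + 1 for i, count in enumerate(hist) if count > hist_threshold]

# Illustrative per-sector counts for a 10-sector histogram.
flags = unsafe_sectors([0, 40, 900, 1200, 160, 10, 0, 0, 151, 0])
```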
On the pictures above this results in:

Picture | Unsafe Sectors | Safe Sectors | Comments
Figure 21(a) | 3,4,5,6 | 1,2,7,8,9,10 | A rule such as "choose the direction with more safe sectors" should work well here; the left-hand side is more dangerous (and does have fewer safe sectors in the figure).
Figure 21(b) | 5,6,7,8 | 1,2,3,4,9,10 | The condition given above fails here; the generated command would make the boat turn left.
Figure 22(a) | 2,3,4,6,9 | 1,5,7,8,10 | Another dangerous situation; the vehicle should back up if it does not find 3 adjacent safe sectors.
Figure 22(b) | 2,3,6,9 | 1,4,5,7,8,10 | The condition mentioned above should help prevent a dangerous collision.
Figure 23 (a) & (b) | (none) | 1,2,3,4,5,6,7,8,9,10 | All clear! Take shortest path to goal.
Table 2 : Inferences from Polar Histograms
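The back-up rule suggested in the table comments (steer toward three adjacent safe sectors nearest the goal direction, otherwise reverse) could be sketched like this; the sector indexing and tie-breaking rule are our illustrative choices, not the implemented controller:

```python
def choose_action(safe, goal_sector=5, window=3):
    """Pick the centre of the nearest all-safe window of `window` adjacent
    sectors to the goal direction; back up if no such window exists.

    safe is a list of booleans, one per sector (left to right);
    goal_sector is the 0-indexed sector pointing toward the goal.
    """
    candidates = [i + window // 2
                  for i in range(len(safe) - window + 1)
                  if all(safe[i:i + window])]
    if not candidates:
        return "back-up"
    return min(candidates, key=lambda c: abs(c - goal_sector))

# Figure 22(a)-like case: unsafe sectors 2,3,4,6,9 (1-indexed) leave no
# run of three adjacent safe sectors, so the vehicle should back up.
safe = [s not in (2, 3, 4, 6, 9) for s in range(1, 11)]
action = choose_action(safe)
```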
There are a few improvements that can still be made. One is to compute the histogram by weighting each pixel by the range to it before thresholding. The advantage of this method is that it inherently captures the range thresholding, leaving the user the easier job of tweaking a single parameter.
Figure 24 : Inverse Distance Polar Histograms for Figure 21 (a) and (b).
Figure 25 : Inverse Distance Polar Histograms for Figure 22 (a) and (b)
Figure 26 : Inverse Distance Polar Histograms for Figure 23 (a) and (b)
The Inverse Distance Polar Histogram is calculated as shown below:

S_i = \sum_{(x,y) \in W_i} \frac{1}{Range_{disparity}(x, y)} \cdot C(x, y),
where C(x, y) = 1 if Confidence(x, y) > 0, and C(x, y) = 0 otherwise.    (18)
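Eq. (18) can be transcribed much like Eq. (17), with the indicator replaced by a confidence mask and each pixel weighted by its inverse range; the names and array sizes are illustrative:

```python
import numpy as np

def inverse_distance_histogram(range_img, confidence, sector_width=32):
    """Per-sector sum of 1/range over confident pixels (Eq. 18)."""
    valid = (confidence > 0)                       # the mask C(x, y)
    weights = np.where(valid, 1.0 / np.maximum(range_img, 1e-6), 0.0)
    n_sectors = range_img.shape[1] // sector_width
    return np.array([weights[:, i * sector_width:(i + 1) * sector_width].sum()
                     for i in range(n_sectors)])

# All ranges at 4 m; sector 1 (leftmost) has no valid stereo matches,
# so it contributes nothing to the histogram.
rng = np.full((240, 320), 4.0)
conf = np.ones_like(rng)
conf[:, :32] = 0.0
hist = inverse_distance_histogram(rng, conf)
```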
Picture | Unsafe Sectors | Safe Sectors | Comments
Figure 21(a) | 3,4,5,6 | 1,2,7,8,9,10 | Identical result to the two-threshold method used earlier; prefers a turn to the right.
Figure 21(b) | 4,5,6,7 | 1,2,3,8,9,10 | The boat should reverse when the top three values are greater than 300.
Figure 22(a) | 2,3,4,6,9 | 1,5,7,8,10 | Another dangerous situation; the boat should back up if it does not find 3 adjacent safe sectors.
Figure 22(b) | 2,3,4,6,9 | 1,5,7,8,10 | The condition mentioned above should help prevent a dangerous collision.
Figure 23(a) | 5 | 1,2,3,4,6,7,8,9,10 | One false classification, confined to a single sector; take the shortest path to the goal, avoiding sector 5.
Figure 23(b) | (none) | 1,2,3,4,5,6,7,8,9,10 | All clear, a good decision; the robot is free to choose any sector in the camera's view that advances toward the goal.
Table 3 : Inferences from Inverse Distance Polar Histogram
5.5 Conclusion
Although there is some improvement in the estimates we obtain from individual frames, the system still leaves much to be desired; we would not trust it to prevent the boat from running into obstacles. In the future we will explore two ways of making this obstacle avoidance system more robust. The first is to use a local occupancy grid map [Thrun et al., 2005], [Elfes, 1987].
The second method involves image processing techniques that learn textures which may represent water. The resulting decision rule will be used to weaken disparity data returned from regions detected as water, since water is a navigable region. Please refer to [Pratt, 2004], [Russ, 1999], [Sonka, 2003] and [Gonzalez, 2001] and references therein for more information on texture-based classification. Pattern recognition texts such as [Duda, 2001] and [Bishop, 2006] cover classification schemes useful for building better classifiers. Another useful addition would be to augment the feature vectors with temporal tracking, which should improve the robustness of this method, although it might require relearning the wave behavior for different sea conditions.
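To make the disparity-weakening idea concrete: given a (hypothetical) water classifier producing a boolean mask, the suppression step might look like the sketch below, where the scaling factor is an arbitrary illustrative choice:

```python
import numpy as np

def weaken_water_disparity(disparity, water_mask, factor=0.1):
    """Scale down disparity (pushing apparent ranges out) where a texture
    classifier has labelled pixels as water, a navigable region.

    water_mask is a boolean HxW array from a hypothetical classifier;
    factor controls how strongly water evidence suppresses obstacles.
    """
    return np.where(water_mask, disparity * factor, disparity)

# Bottom half of a small disparity patch classified as water.
disp = np.full((4, 4), 50.0)
mask = np.zeros((4, 4), dtype=bool)
mask[2:, :] = True
out = weaken_water_disparity(disp, mask)
```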
In this strategy, the boat will maintain a local occupancy grid map of about 20 m x 20 m, with each cell having a 0.1 m x 0.1 m resolution, giving 40,000 grid cells that define the local map around the boat. When the boat moves, we may need to update the cells around it to ensure that it never reaches the edge of this grid, or the algorithm may break down. This could prove computationally expensive, however. Hence, the strategy will be to run a separate thread that periodically cleans up and re-centers the boat's local occupancy grid once every 10 seconds, since the maximum distance the boat can travel in 10 seconds is only about 12 m.
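The periodic clean-up could amount to scrolling the grid so the boat stays near its centre, with newly exposed cells reset to unknown; the sketch below assumes a simple (dy, dx) cell shift and is not the implemented code:

```python
import numpy as np

def recenter_grid(grid, shift_cells):
    """Scroll a local occupancy grid by (dy, dx) cells so the boat stays
    near the centre; cells scrolled in from the edge are reset to 0
    (unknown). With 0.1 m cells, a 1.2 m displacement is a 12-cell shift.
    """
    dy, dx = shift_cells
    h, w = grid.shape
    out = np.zeros_like(grid)
    # Copy the part of the old grid that survives the shift.
    ys, yd = (dy, 0) if dy >= 0 else (0, -dy)
    xs, xd = (dx, 0) if dx >= 0 else (0, -dx)
    out[yd:h - ys, xd:w - xs] = grid[ys:h - yd, xs:w - xd]
    return out

# A lone occupied cell scrolls opposite to the boat's motion.
g = np.zeros((5, 5))
g[2, 2] = 1.0
shifted = recenter_grid(g, (1, 0))
```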
In the occupancy grid method, we assign each cell that appears to contain an obstacle a number (like a probability), based on some reasonable model such as an inverse-square-of-distance law. Also, since stereo does not work well at close quarters, we want to be sure that we do not run into obstacles closer than our smallest detection radius (Rmin).
To ensure this, we can grow all occupied cells by a few more cells using a keep-out label. We might not want to increase the occupancy number for these cells until we have evidence (readings from stereo) indicating occupancy. Every time a location is reported unoccupied, we subtract the inverse-square-distance amount from that cell. (We may want to keep this update asymmetric, giving more weight to detecting obstacles than to forgetting them, since the latter is riskier.) In either case, we want to bound the values each cell can take with some maximum and minimum.
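A minimal sketch of this asymmetric, clamped inverse-square update follows; the gains and bounds are illustrative placeholders rather than tuned values:

```python
def update_cell(value, distance, occupied,
                hit_gain=2.0, miss_gain=1.0, v_min=-10.0, v_max=10.0):
    """Update one occupancy value with an inverse-square evidence model.

    Detections add hit_gain / d^2 and free-space readings subtract
    miss_gain / d^2; choosing hit_gain > miss_gain keeps the update
    asymmetric, so obstacles are acquired faster than they are
    forgotten. Values are clamped to [v_min, v_max].
    """
    step = (hit_gain if occupied else -miss_gain) / (distance ** 2)
    return max(v_min, min(v_max, value + step))

# Three obstacle hits at 2 m followed by one free-space reading at 2 m:
# evidence accumulates faster than it decays.
v = 0.0
for _ in range(3):
    v = update_cell(v, 2.0, occupied=True)
v = update_cell(v, 2.0, occupied=False)
```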
Bounding is especially important for cells that are actually occupied but whose values have become so negative, due to errors in scanning, that many readings would be needed before they are again flagged as obstacles. This method is based upon the classic Elfes paper on using occupancy grids to build maps and perform obstacle avoidance with sonar [Elfes, 1987].
The next step is to integrate this into the path-planning methodology. One way would be to assign the closest position toward the goal as the target location in the local map, and to optimize a criterion that emphasizes the shortest path to the goal while keeping away from cells marked occupied in the occupancy grid. Another way would be to use a potential field method that draws the vehicle toward the goal with an attractive potential, while occupied cells are given a repulsive potential, an approach commonly used in mobile robot obstacle avoidance [Khatib, 1995].
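A toy version of the potential field update, with a linear attractive pull toward the goal and a short-range repulsive push from occupied cells, might look like this (gains, influence radius and step size are illustrative, not tuned for the boat):

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=1.0,
                   influence=3.0, step=0.1):
    """One gradient-descent step on attractive + repulsive potentials.

    The goal exerts a linear attractive force; each obstacle within
    `influence` metres pushes the vehicle away with a force that grows
    sharply as the distance shrinks.
    """
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                        # attraction
    for obs in obstacles:
        d_vec = pos - np.asarray(obs, float)
        d = np.linalg.norm(d_vec)
        if 1e-6 < d < influence:                        # repulsion
            force += k_rep * (1.0 / d - 1.0 / influence) * d_vec / d ** 3
    return pos + step * force

# Goal straight ahead on the x-axis; one obstacle up and to the right
# nudges the path below the straight line toward the goal.
new_pos = potential_step([0.0, 0.0], [10.0, 0.0], obstacles=[[1.0, 1.0]])
```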
Chapter 6 : Conclusion and Future Work
6.1 Summary and Contributions
This thesis describes the conversion of the remote-controlled boat (Roboduck-II) into an Autonomous Surface Vehicle capable of navigating relatively complex water-bodies, including lakes and marinas. At the time of writing, the vehicle can perform autonomous maneuvers under GPS-aided line-of-sight guidance and navigation, with simple stereo-vision-aided obstacle avoidance. This document describes the ideas behind the implementation, the design challenges and how some of them were overcome, the issues we faced during deployments of the boat and the strategies used to address them, and the future directions we would like to pursue with this platform.
In addition to the autopilot and LOS guidance, we developed software that enables users to create mission scripts using local maps of the deployment area, a process that reduces errors in missions as well as the time to create and execute them. We also developed a mission scripting language that allows the vehicle to visit waypoints and autonomously sample data. We have not yet gathered data using this mode of operation at the time of writing.
At present, Roboduck-II is being used to conduct biological sampling to monitor Harmful Algal Blooms. Figure 27 shows a data-set collected at a single station by Roboduck-II with a Hydrolabs MS5 Hydrosonde probe, obtained by having the NAMOS-GUI issue winch commands at regular intervals (NAMOS-GUI augmentation work done by Jnaneshwar Das).
Figure 27 : Data collected by Roboduck-II from a single winch profile using direct
winch profiling commands through the NAMOS-GUI.
When marine biologists need to sample water-bodies, they usually carry the sensors by hand, go out in a boat, and lower the sensors either by counting the amount of cable let out or by watching the depth-sensor readout. This is a tedious process that takes considerable time: preparing the sampling experiments, collecting the data, allowing the sensor to dwell long enough at each sampling location, and holding station while the data is collected.
All these aspects are already handled by Roboduck-II. We can lower the sensor package to a programmed depth of up to 5 m, program the sensor to collect data at desired intervals, automate sensor settling and dwell times, keep track of GPS data while collecting, stay in the vicinity of the sampling station, and so on.
The vehicle is very easy to deploy and requires only two people to launch it, while a
single person can easily operate it using the NAMOS-GUI interface. This interface
was written on the Windows platform in order to ensure that non-engineers can take
advantage of it. Using this interface the operator can easily create a mission file that
specifies a set of waypoints, with instructions included on how long to wait at a
particular waypoint, as well as instructions on operation of the instrument package
and lowering the winch to prescribed depths.
With obstacle avoidance using stereo vision quickly approaching reliability, we foresee a fully autonomous vehicle in the near future. For now, it is capable of waypoint navigation within the channel under the supervision of an operator, who is ready to take over using the RC failsafe when other boats need to cross Roboduck-II's path, and to prevent it from running into static obstacles.
6.2 Future Work
Future work on the boat will involve system identification to arrive at a better model that incorporates surge and sway in addition to yaw. It would also be advantageous to have an accurate model of the turning capabilities of the vehicle at various velocities. Using these models, it becomes easier to predict how the boat will behave under various control inputs. An important and useful addition will be a coupled rudder-thruster controller, which we hope to have working in the near future.
With good models of the system response of the vehicle, we can proceed to design a model-reference adaptive controller, such as the one described in [Amerongen, 1982], capable of station keeping. This is a difficult problem, but it is one of the most useful capabilities for a marine vessel.
Another important addition to the boat will be a state estimator that provides better estimates of its position, based on the input commands to the thrusters and rudder controllers, together with GPS, compass, inertial measurement unit readings and visual odometry. We expect to use an Extended Kalman Filter to fuse the sensor data for this task. This is challenging, but will greatly improve estimates of the boat's position and attitude.
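While the real filter would be a multi-state EKF over position and attitude, the core fusion step can be illustrated with a scalar Kalman measurement update; the state, variances and measurement below are invented for illustration:

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: fuse a state estimate (x, p)
    with a measurement z of variance r. A toy 1-D stand-in for the EKF
    that would fuse GPS, compass, IMU and odometry on the boat.
    """
    k = p / (p + r)            # Kalman gain: trust z more when p >> r
    x_new = x + k * (z - x)
    p_new = (1.0 - k) * p
    return x_new, p_new

# Dead-reckoned position (variance 4) fused with a GPS fix (variance 1):
# the estimate moves most of the way toward the more certain measurement.
x, p = kalman_update(10.0, 4.0, 12.0, 1.0)
```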
Bibliography
[Abkowitz, 1969] Abkowitz, Martin A. (1969). Stability and Motion Control of
Ocean Vehicles. MIT Press. SBN : 262-51006-5.
[Amerongen, 1982] Amerongen, Job Van. (1982). Adaptive Steering of Ships – a
model-reference approach to improved maneuvering and economical course keeping.
PhD thesis, Delft University of Technology, Netherlands.
[Army, 1973] Army, Department of, (1973). Universal Transverse Mercator Grid,
U. S. Army Technical Manual TM 5-241-8, 64 p.
[Bishop, 2006] Bishop, Christopher M. (2006). Pattern Recognition and Machine
Learning. Springer. ISBN 978-0-387-31073-2.
[Borenstein et al., 1991] Borenstein, J., Koren, Y. (1991). The Vector Field
Histogram – Fast Obstacle Avoidance for Mobile Robots. IEEE Journal of Robotics
and Automation, 7, 278-288 p.
[Dorf et al., 1998] Dorf, Richard C., Bishop, Robert H. (1998). Modern Control
Systems. Eighth Edition.
[Duda et al., 2001] Duda, Richard O., Hart, Peter E., Stork, David G. (2001). Pattern Classification, 2nd Edition. Wiley-Interscience, John Wiley & Sons, Inc. ISBN 0-471-05669-3.
[Elfes, 1987] Elfes, A. (1987). Sonar-Based Real World Mapping and Navigation. In
IEEE Journal on Robotics and Automation, 3(3):249-265.
[Encarnacao et al., 2000] Encarnacao, P., Pascoal, A., Arcak, M. (2000). Path
Following for Autonomous Marine Craft. International Federation of Automatic
Control, 2000.
[Forsyth et al, 2003] Forsyth, David A.; Ponce, Jean (2003). Computer Vision A
Modern Approach. Prentice Hall. ISBN 0-13-085198-1.
[Fossen, 1994] Fossen, T. I. (1994). Guidance and Control of Ocean Vehicles, John
Wiley & Sons Ltd. ISBN 0-471-94113-1
[Fox et al., 1997] Fox, D., Burgard, W., Thrun, S. (1997). The Dynamic Window Approach to Collision Avoidance. IEEE Robotics and Automation Magazine, 4: 23-33, 1997.
[Franklin et al., 2002] Franklin, Gene F., Powell, J. David; Emami-Naeini, Abbas. (2002). Feedback Control of Dynamic Systems. Fourth Edition.
[Gonzalez et al., 2002] Gonzalez, Rafael C.; Woods, Richard E. (2002). Digital Image Processing, 2nd Edition. Prentice Hall. ISBN 0-20-118075.
[Hartley, et al, 2006] Hartley, Richard; Zisserman, Andrew. (2006). Multiple
View Geometry in Computer Vision, Second Edition. Cambridge University Press.
ISBN 0-521-54051-8.
[Kamon et al., 1996] Kamon, I., Rivlin, E., Rimon, E. (1996). A New Range-Sensor Based Globally Convergent Navigation Algorithm for Mobile Robots. IEEE International Conference on Robotics and Automation, Minneapolis, April 1996.
[Khatib et al, 1995] Khatib, M., Chatila, R., (1995). An Extended Potential Field
Approach for Mobile Robot Sensor-Based Motions. Proceedings of the Intelligent
Autonomous Systems IAS-4, IOS Press, Karlsruhe, Germany, March 1995, pp. 490-
496.
[Khatib et al., 1997] Khatib, M., Jaouni, H., Chatila, R., Laumod, J.P. (1997).
Dynamic Path Modification for Car-Like Nonholonomic Mobile Robots. IEEE
International Conference on Robotics and Automation, Albuquerque, NM, April
1997.
[Klette et al., 1998] Klette, R.; Schluns, Karsten; Koschan, Andreas. (1998). Computer Vision: Three-Dimensional Data from Images. Springer-Verlag Singapore Pte. Ltd. ISBN 981-3083-71-9.
[Konolige, 1997] Konolige, Kurt. (1997). Small Vision Systems: Hardware and
Implementation. Proceedings of Eighth International Symposium on Robotics
Research, Hayama, Japan, October 1997.
[Lindegaard, 2003] Lindegaard, Karl-Petter W. (2003). Acceleration Feedback in
Dynamic Positioning. PhD Thesis ,Department of Engineering Cybernetics,
Norwegian University of Science and Technology, Trondheim, Norway.
[Lumelsky et al., 1990] Lumelsky, V., Skewis, T., (1990). Incorporating Range
Sensing in the Robot Navigation Function. IEEE Transactions on Systems, Man and
Cybernetics, 20:1990, pp. 1058-1068.
[Oliveira et al., 2003] Oliveira, P., Pascoal, A. (2003). On the design of Multirate
Complementary Filters for autonomous marine vehicle navigation.
[Siegwart et al., 2004] Siegwart, Roland, Nourbakhsh, Illah R. (2004).
Introduction to Autonomous Mobile Robots, MIT Press, ISBN 0-262-19502-X.
[Sonka et al., 2004] Sonka, Milan; Hlavac, Vaclav; Boyle, Roger. (2004). Image
Processing, Analysis, and Machine Vision, Second Edition. Thomson Brooks/Cole.
ISBN 981-240-061-3.
[Snyder, 1987] Snyder, J. P., (1987). Map Projections - A Working Manual. U.S.
Geological Survey Professional Paper 1395, 383 p.
[Thrun et al., 2005] Thrun, Sebastian; Burgard, Wolfram; Fox, Dieter. (2005).
Probabilistic Robotics. The MIT Press. ISBN 9-780262-201629.
[Triantafyllou et al., 2003]. Triantafyllou, Michael S., Hover, Franz S. (2003).
Maneuvering and Control of Marine Vehicles.
[Trucco et al, 1998]. Trucco, Emanuele; Verri, Alessandro. (1998). Introductory
Techniques for 3-D Computer Vision. Prentice Hall. ISBN 0-13-261108-2.
[Ulrich et al., 1998]. Ulrich, I., Borenstein, J. (1998). VFH+: Reliable Obstacle
Avoidance for Fast Mobile Robots, Proceedings of the IEEE International
Conference on Robotics and Automation (ICRA’98), Leuven, Belgium, May 1998.
[Ulrich et al., 2000]. Ulrich, I., Borenstein, J. (2000). VFH*: Local Obstacle
Avoidance with Look-Ahead Verification. Proceedings of the IEEE International
Conference on Robotics and Automation, San Francisco, May 24-28, 2000.
Appendix : Sensor Characteristics
Acquisition Time (Reacquisition) : Less than 2 seconds
Acquisition Time (Warm) : Approximately 15 seconds
Acquisition Time (Cold) : Approximately 45 seconds
Acquisition Time (AutoLocate®) : 2 minutes
Acquisition Time (SkySearch) : 5 minutes
GPS accuracy, Position : < 15 m, 95% typical
GPS accuracy, Velocity : 0.1 knot RMS steady state
DGPS (USCG) accuracy, Position : 3-5 m, 95% typical
DGPS (USCG) accuracy, Velocity : 0.1 knot RMS steady state
DGPS (WAAS) accuracy, Position : < 3 m, 95% typical
DGPS (WAAS) accuracy, Velocity : 0.1 knot RMS steady state
Update rate : 5 Hz
Table 4 : Specifications for the Garmin GPS 16A.
(Source : http://www.garmin.com/products/gps16a/spec.html)
Range : Yaw ± 180˚; Pitch ± 180˚; Roll ± 70˚
A/D Resolution : 12 bits
Digital Filter : Infinite Impulse Response (IIR), user-programmable weighted moving average
Angle Resolution (no digital filtering) : Pitch 0.30˚ (typical); Roll 0.25˚ (typical); Yaw 0.50˚ (typical)
Angle Resolution (most aggressive digital filtering) : Pitch < 0.1˚; Roll < 0.1˚; Yaw < 0.1˚ (resolution specs taken during static motion)
Accuracy : Pitch ± 0.93˚ typical (yaw from 0-360˚ and roll = 0˚); Roll ± 0.33˚ typical (yaw from 0-360˚ and pitch = 0˚); Yaw ± 1.0˚ typical (pitch and roll = 0˚)
Angle measurement nonlinearity (pitch and roll) : ± 0.23% F.S.
Angle measurement repeatability : Pitch 0.07˚ (typical); Roll 0.07˚ (typical); Yaw 0.26˚ (typical)
Update rate (angle mode) : 45 Hz/3 channels (maximum); 30 Hz/3 channels (typical)
Update rate (raw mode) : 70 Hz/6 channels
Output modes : Raw: ax, ay, az accelerometer; bx, by, bz magnetic field; Units: pitch, roll, and yaw in degrees
Output format : RS-232 serial
Transmission rate : 9600 bits/sec
Supply voltage : +5.2 VDC min., +12 VDC max.
Supply current : 50 milliamps/node @ standard speed
Connectors : Sensor: RJ11 type; power: min. coaxial jack
Operating temperature : -25˚C to 70˚C
Temperature drift (%/˚C, mean ± std. dev.) : Pitch 0.009 ± 0.008; Roll 0.033 ± 0.025; Yaw 0.019 ± 0.019
Module size : 1.7" wide, 2.5" long, 0.7" thick
Weight : 75.0 gr. with enclosure, 26.9 gr. without enclosure
Table 5 : Specifications of the 3DMG rate-gyro
(Source : http://www.microstrain.com/3dm_specs.aspx )
Abstract
This thesis describes the work done in transforming a small boat with two aft thrusters and a single rudder into an Autonomous Surface Vehicle capable of operating in diverse marine environments such as lakes, rivers, marinas and harbors. The sensors utilized are a Global Positioning System and a 3-degree-of-freedom Inertial Measurement Unit consisting of gyroscopes, accelerometers and an integrated magnetometer compass. The guidance system is aided by a stereo vision system that uses stereo disparities to make inferences about obstacles. The boat is currently being used to collect data for a team of researchers studying the effects of harmful algal blooms. After a brief description of the system design, we discuss the design and implementation of the heading autopilot, the line-of-sight guidance system and the stereo-vision-based obstacle avoidance system.