Crowd-Sourced Collaborative Sensing in Highly Mobile Environments

by

Yurong Jiang

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMPUTER SCIENCE)

August 2016

Copyright 2016 Yurong Jiang

Acknowledgements

First of all, I would like to express my deepest appreciation to my advisor, Professor Ramesh Govindan, for his continuous support and patient guidance throughout my Ph.D. life. Looking back, I was lucky and grateful to be admitted by Ramesh with no background in Computer Science. Because of my background, I could barely make progress in my first two years. However, it was Ramesh who showed the greatest patience and gave the best guidance in training me to become a qualified computer science Ph.D. student. None of the work in this thesis would have been possible without his insightful guidance. He is kind in person but strict and rigorous in research, which will benefit me for my whole life. I could not imagine having a better advisor for my Ph.D.

Besides my advisor, I would also like to thank my dissertation committee, Professor Gaurav Sukhatme and Professor Bhaskar Krishnamachari, for their insightful comments and suggestions on improving the quality of this dissertation.

Throughout my dissertation, it has been a great honor to work with many professors and prestigious researchers in both universities and industrial labs: CarLog (Chapter 2) is joint work with Fan Bai and Donald Grimm at GM R&D and Professor William G. J. Halfond and Professor David Kempe at USC; CarLoc (Chapter 3) is joint work with Professor Gaurav Sukhatme at USC, Professor Marco Gruteser at Rutgers University, and Fan Bai and Donald Grimm at GM R&D; MediaScope (Chapter 4) is joint work with Professor Tarek Abdelzaher at UIUC and Professor Amotz Bar-Noy at the City University of New York. Although not included in the thesis, WebPerf is joint work with Suman Nath and Lenin Ravindranath Sivalingam at Microsoft Research. I would also like to thank my fellow labmates Xing Xu, Matthew McCartney, and Hang Qiu for their contributions to the projects we have done together over the past six years.

Finally, I would like to thank my family for their sincere and selfless support: especially my beloved wife Shan Dong, my parents Qin Mao and Meihuag Jiang, and lastly my lovely son, Alex Jiang.

Table of Contents

Acknowledgements
List of Tables
List of Figures
Abstract

Chapter 1: Introduction
  1.1 Challenges and Approaches
  1.2 Dissertation Overview and Contributions
  1.3 Dissertation Outline

Chapter 2: CARLOG: A Platform for Flexible and Efficient Automotive Sensing
  2.1 Introduction
  2.2 Background and Motivation
  2.3 CARLOG Design
  2.4 CARLOG Latency Optimization
    2.4.1 Predicate Acquisition Latency
    2.4.2 Terminology and Notation
    2.4.3 Latency Optimization: Algorithms
    2.4.4 Parallel Acquisition
    2.4.5 Putting it All Together
  2.5 Evaluation
    2.5.1 Methodology and Metrics
    2.5.2 CARLOG in Action
    2.5.3 Single Query Performance
    2.5.4 Multiple Query Performance

Chapter 3: CARLOC: Precisely Tracking Automobile Position
  3.1 Introduction
  3.2 Background and Motivation
  3.3 The Design of CARLOC
    3.3.1 Overview of CARLOC
    3.3.2 Probabilistic Representation of Position
    3.3.3 Map Matching
    3.3.4 Motion Model
    3.3.5 Location Update
    3.3.6 Crowd-sourced Landmark Positions
  3.4 Evaluation
    3.4.1 Methodology
    3.4.2 CARLOC on an Obstructed Route
    3.4.3 CARLOC on an Unobstructed Route
    3.4.4 CARLOC on Partially-Obstructed Routes
    3.4.5 Benefits of Optimizations

Chapter 4: MediaScope: Selective On-Demand Media Retrieval from Mobile Devices
  4.1 Introduction
  4.2 Motivation and Challenges
  4.3 MediaScope
    4.3.1 Architecture and Overview
    4.3.2 Design: Concurrent Queries
      4.3.2.1 Queries and Credit Assignment
      4.3.2.2 Credit-based Scheduling
      4.3.2.3 Feature Extraction on the Phone
      4.3.2.4 Leveraging a Crowd-Sensing Framework
  4.4 Evaluation
    4.4.1 Query Completeness
    4.4.2 System Overhead

Chapter 5: Literature Review
  5.1 Flexible and Efficient Automotive Sensing
  5.2 Precisely Tracking Automobile Position
  5.3 Selective On-Demand Media Retrieval from Mobile Devices

Chapter 6: Conclusions and Future Work

References

List of Tables

3.1 Measured GPS errors in three different areas
3.2 CARLOC, smartphone GPS to High-precision GPS Distance Statistics
3.3 Landmark Detection Accuracy
4.1 System Communication and App Running Overhead
4.2 System Function Components Overhead
List of Figures

1.1 My Contributions
2.1 CARLOG Design
2.2 Predicate acquisition latency
2.3 Expansion Proof Tree for Rule 2.2
2.4 Example of a Negation Proof Tree and its Decision Tree
2.5 Events detected by CARLOG and by Naive
2.6 Rules used in our evaluations
2.7 Performance of single queries with 3 cloud sensors
2.8 CARLOG Latency and Event Counts
2.9 Single query performance grouped by number of cloud sensors
2.10 OPT Latency and Event Counts for multiple queries
2.11 Multi-query performance
3.1 Portion of GPS Trace in City Downtown
3.2 CARLOC Design
3.3 Kinematics of lateral vehicle motion
3.4 Multi-lane Stop Sign
3.5 Street Corner Illustration
3.6 Stop Sign Landmark Detection
3.7 Street Corner Landmark Detection
3.8 Speed Bump Landmark Detection
3.9 Static Measurement Setup
3.10 CARLOC and GPS Comparison in Downtown
3.11 Map Pin Points Comparison
3.12 HDOP comparison for Open Sky Area and Downtown
3.13 CARLOC, high-precision GPS and smartphone GPS in Open Sky Area
3.14 Different strategies' Start-End Distance
3.15 Start-End Distance with Number of Landmarks
3.16 Start-End Distance with Number of Learning Traces
3.17 Map-matching and Motion Model Optimization Performance
3.18 Landmark Error Statistics
3.19 Map-matching and Motion Model Issues on Map View
4.1 CDF of Flickr Photo Availability Gap
4.2 System Architecture Work Flow
4.3 Illustration of Concurrent Queries
4.4 Image Resizing Overhead and Tradeoffs
4.5 Average Video Frame Extraction Time for Different Duration and Frequency
4.6 Average Inter-frame Feature-Space Distance
4.7 K Nearest Neighbor Result
4.8 Cluster Representative
4.9 Spanner
4.10 Different Query Mixes by Size
4.11 Different Query Mixes by Timeliness Bound
4.12 Sample Schedule Timeline

Abstract

Networked sensing has revolutionized various aspects of our lives. In particular, it has allowed us to minutely quantify many aspects of our existence: what we eat, how we sleep, how we use our time, and so forth. We have seen such quantification from the smart devices we use daily, such as smartphones and wearable devices; these devices usually have more than ten high-precision sensors to sense both internal and external information. Another domain that is likely to see such quantification in the near future is automobiles. Modern vehicles are equipped with several hundred sensors that govern the operation of internal vehicular subsystems. These sensors, from both smart devices and automobiles, coupled with online information (cloud computing, maps, traffic, etc.), other databases, and crowd-sourced information from other users, can enable various forms of context sensing and can be used to design new features for both mobile devices and vehicles. We abstract context sensing into three aspects: mobile and vehicular sensing, cloud assistance, and crowdsourcing. Though each aspect comes with its own challenges, accurate context sensing usually requires a careful combination of one or more of the three, which brings new challenges in designing and developing context sensing systems. In this dissertation, we focus on three challenges, Programmability, Accuracy, and Timeliness, in designing efficient and accurate context sensing systems for mobile devices and vehicles.
We leverage mobile and vehicle sensors, cloud information, and crowdsourcing collectively to ease context sensing programming and to improve context sensing accuracy and timeliness.

First, for Programmability, we focus on programming context descriptions using information from the cloud and vehicle sensors. As more sensor-based apps are developed for vehicular platforms, we expect many of these apps to be programmed using an event-based paradigm, where apps try to detect events and perform actions on detection. However, modern vehicles have several hundred sensors, and these sensors can be combined in complex ways together with cloud information in order to detect complicated contexts, e.g., dangerous driving. Moreover, these sensor processing algorithms may incur significant costs in acquiring sensor and cloud information. Thus, we propose a programming framework called CARLOG to simplify the task of programming these event detection algorithms. CARLOG uses Datalog to express sensor processing algorithms, but incorporates novel query optimization methods that can be used to minimize bandwidth usage, energy, or latency without sacrificing the correctness of query execution. Experimental results on a prototype show that CARLOG can reduce latency by nearly two orders of magnitude relative to an unoptimized Datalog engine.

Second, for Accuracy, we focus on automotive positioning accuracy. Positioning accuracy is an important factor for all kinds of context sensing applications for automobiles. Lane-level precise positioning of an automobile can improve the navigation experience and on-board application context awareness. However, GPS by itself cannot provide such precision in obstructed urban environments. We propose a system called CARLOC for lane-level positioning of automobiles, which carefully incorporates the three aspects of context sensing.
CARLOC uses three key ideas in concert to improve positioning accuracy: it uses digital maps to match the vehicle to known road segments; it uses vehicular sensors to obtain odometry and bearing information; and it uses crowd-sourced location estimates of roadway landmarks that can be detected by sensors available in modern vehicles. CARLOC unifies these ideas in a probabilistic position estimation framework, widely used in robotics, called the sequential Monte Carlo method. Through extensive experiments, we show that our system achieves sub-meter positioning accuracy even in obstructed environments, an order-of-magnitude improvement over a high-end GPS device.

Finally, Timeliness is another important problem that context sensing applications must address. We consider how to ensure the timeliness and availability of media content from mobile devices. Motivated by an availability gap for visual media, where images and videos are uploaded from mobile devices well after they are generated, we explore the selective, timely retrieval of media content from a collection of mobile devices. We envision this capability being driven by similarity-based queries posed to a cloud search front-end, which in turn dynamically retrieves media objects from mobile devices that best match the respective queries within a given time limit. We design and implement a general crowdsourcing framework called MediaScope that supports various geometric queries and contains a novel retrieval algorithm that maximizes the retrieval of relevant information. In experiments on a prototype, our system achieves near-optimal performance under different scenarios.

Chapter 1: Introduction

In recent years, networked sensing has revolutionized various aspects of our lives. Everyday commodities such as smartphones have more than ten high-precision sensors to improve the user experience.
Similarly, modern automobiles are equipped with hundreds of sensors to monitor low-level vehicle dynamics and improve vehicle performance. The sensors from both smart devices and automobiles, coupled with online information (cloud computing, traffic, maps, etc.) and other databases, as well as crowd-sourced information from other users, can enable various forms of context sensing, e.g., activity tracking, event suggestions, driving behavior alerting, and so forth.

The biggest concerns for context sensing are accuracy and efficiency, as both directly affect context sensing performance and the user experience in daily life. To provide accurate and efficient context sensing, we need to understand the properties of the sensors and conquer the various challenges they present. We often see four properties associated with these sensors: high quality, high frequency, large volume, and environmental sensitivity; most sensors exhibit one or more of them.

The first property is high quality. Recent technology advances have made high-quality sensors common in both smart devices and automobiles. In particular, camera sensors on smart devices can reach 41 MP resolution, and cameras able to capture 4K video have become a standard configuration for smart devices. Likewise, high-definition cameras are widely deployed in modern automobiles, which rely on them to detect obstacles like pedestrians and other cars on the road, and even to estimate the distance to them.

The second property is high frequency. Most sensors are sampled at rates between 1 and 100 Hz. Typically, for smart devices, the motion sensor sampling frequency is about 100 Hz. Moreover, automobiles are usually equipped with high-frequency sensors in order to achieve tight control loops. For example, the sampling frequency for engine speed and throttle position sensors is 100 Hz.
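To make these rates concrete, here is a back-of-envelope sketch of how quickly raw samples accumulate. The 100 Hz rate comes from the engine-speed and throttle example above; the ten-signal count and per-record size are illustrative assumptions, not measured values.

```python
# Back-of-envelope: sample accumulation for high-frequency vehicle sensors.
# The 100 Hz rate matches the engine-speed/throttle example in the text;
# the number of logged signals and record size are illustrative assumptions.

SAMPLE_RATE_HZ = 100   # per-sensor sampling frequency
NUM_SIGNALS = 10       # assumed number of logged sensor channels
RECORD_BYTES = 20      # assumed size of one timestamped record

def samples_per_hour(rate_hz: float, signals: int) -> int:
    """Total samples produced by `signals` channels in one hour."""
    return int(rate_hz * signals * 3600)

def raw_megabytes_per_hour(rate_hz: float, signals: int, rec_bytes: int) -> float:
    """Raw (uncompressed, un-annotated) log volume in MB per hour."""
    return samples_per_hour(rate_hz, signals) * rec_bytes / 1e6

if __name__ == "__main__":
    n = samples_per_hour(SAMPLE_RATE_HZ, NUM_SIGNALS)
    mb = raw_megabytes_per_hour(SAMPLE_RATE_HZ, NUM_SIGNALS, RECORD_BYTES)
    print(f"{n:,} samples/hour, ~{mb:.0f} MB/hour raw")
```

This counts only raw payload; real logs carry CAN framing, metadata, and higher-rate channels, so actual trace sizes are substantially larger than this lower bound.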
The third property is large volume. Because of the high quality and high frequency properties, continuously streaming sensor readings accumulates a large volume of data over time. For instance, a few minutes of 4K video taken by a smart device can easily reach several gigabytes, and the sensing data from ten engine-unit sensors for one hour of driving is usually over 1 GB.

The last property needing attention is environmental sensitivity. Many sensors are inherently erroneous and vulnerable to environmental factors: images not taken properly may be blurry, dark, or out of focus, and GPS errors can reach several hundred meters in obstructed areas. Some sensors' errors even drift over time, requiring periodic calibration. Thus, we need to carefully take these sensor properties into consideration when we build context sensing systems on top of them.

These sensor properties offer many opportunities to explore. While the power to generate sensing data has increased dramatically, the ability to process that data and extract useful, accurate context information remains a major challenge. Moreover, determining what additional assistance can help improve context sensing accuracy and efficiency, and how to bring that assistance into real systems, is another challenge. We discuss the challenges and our approaches in the following section.

1.1 Challenges and Approaches

Extracting useful context from sensing data requires comprehensive data and complicated data processing. Many onboard sensors (camera, audio, etc.) can directly be used to describe contexts. These sensors have improved continuously as technology advances, resulting in better quality but larger sensing data. For example, in a social network, people are often unwilling to share a significant amount of media data in real time due to concerns such as a limited cellular bandwidth budget.
Thus, extracting specific information from such media data is hard for developers because of incomplete data. One question is how to make the sensor data as comprehensive as possible for developers. On the other hand, it is rarely possible to extract context from a single sensor. For example, we can use the gyroscope in a smartphone to estimate the device's instantaneous movement, but we cannot obtain the device's pose until we also use a geomagnetic sensor. This process is usually known as information fusion: it brings many individually simple pieces of sensing information together to infer more complicated and meaningful context.

Sensing by itself is not sufficient to achieve good context accuracy in some scenarios, but the cloud can be an important supplement. Such scenarios are quite common. For example, we cannot detect driver behaviors accurately based purely on car sensors; instead, we can look up real-time traffic information from the cloud to improve detection accuracy. As cloud technologies rapidly evolve, cloud providers are now able to provide high-dimensional information in a timely fashion, such as traffic, points of interest, weather, news, etc. Cloud information can compensate for many perspectives missing from local sensing alone. Thus, it is beneficial to add the cloud to context sensing for better accuracy and efficiency.

Sensing from a single user often fails to assess complete and accurate contexts, but crowdsourcing can sometimes compensate for this inaccuracy and these limitations. Consider a simple example: when driving on a road segment, we can use the speed and fuel consumption rate sensors to compute instantaneous mpg, but we cannot determine whether we are driving optimally on this particular road segment. However, if we can obtain crowd-sourced instantaneous fuel consumption, we can possibly overcome this challenge.
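The fuel-economy example above can be sketched as follows. This is a minimal illustration, not an algorithm from this dissertation: the crowd database, the segment IDs, and the efficiency threshold are all hypothetical.

```python
from statistics import median

# Hypothetical crowd-sourced store: road-segment id -> instantaneous mpg
# readings reported by other drivers on that segment (illustrative data).
crowd_mpg = {
    "segment-42": [28.5, 30.1, 27.9, 31.2, 29.4],
}

def instantaneous_mpg(speed_mph: float, fuel_rate_gph: float) -> float:
    """mpg from the speed sensor and fuel consumption rate (gallons/hour)."""
    return speed_mph / fuel_rate_gph

def driving_efficiently(segment: str, speed_mph: float,
                        fuel_rate_gph: float, slack: float = 0.9) -> bool:
    """Compare our mpg to the crowd median for this road segment.

    A single vehicle has no baseline for "optimal" on this segment;
    with crowd data we can flag driving as inefficient when our mpg
    falls below `slack` times the median observed by other drivers.
    """
    ours = instantaneous_mpg(speed_mph, fuel_rate_gph)
    baseline = median(crowd_mpg[segment])
    return ours >= slack * baseline

print(driving_efficiently("segment-42", speed_mph=30.0, fuel_rate_gph=1.0))
```

The point of the sketch is the comparison step: without the crowd-sourced readings, `driving_efficiently` has nothing to compare against.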
In other words, context accumulation at scale prevents false or malicious data pollution and provides better visibility into the problems. This strategy is known as crowdsourcing: a common practice that enables large-scale, in-depth, cost-effective information collection and more accurate ways to extract complete context from sensing.

[Figure 1.1: My Contributions. Sensing, cloud, and crowdsourcing combine to address three context-sensing challenges: Programmability (CarLog), Accuracy (CarLoc), and Timeliness (MediaScope).]

From the above discussion, we abstract these perspectives on context sensing as sensor sources, cloud, and crowdsourcing. We focus on the interplay between mobile devices and automotive platforms. Along this trend, this dissertation focuses on mobile systems that combine these perspectives to solve a variety of challenges, as shown in Figure 1.1: Programmability, Accuracy, and Timeliness.

Programmability. Most modern cars are now equipped with advanced infotainment systems. With the development of operating systems such as Android Auto [11] and Car Play [29], more and more apps are developed for vehicles. When app developers start developing sensor-based apps for cars, we think many of these apps will fall into the event -> action category: apps try to detect events and perform one or more actions when an event is detected. Many such apps may also need information from the cloud to detect events. However, programming such event detection apps can be difficult and tedious for two reasons. First, there are hundreds of sensors built into an individual car, and each sensor gives only a local view of the car; combining their readings into contexts can be tedious. Second, cloud services usually have widely varying interfaces, making them hard to incorporate into local context sensing. What kind of programming support do we need for efficient sensor fusion in this scenario?

Accuracy. One important feature of modern cars is localization and navigation.
Most navigation systems rely on GPS and simple dead reckoning for this task. However, GPS errors are prevalent and sometimes significant in obstructed areas. Based on our actual measurements, the errors can reach tens of meters in such areas. GPS errors can lead drivers to incorrect locations; e.g., Uber drivers occasionally fail to pick up passengers due to localization error [122]. How can we make use of vehicle sensors, the cloud, and crowdsourcing to improve the accuracy of vehicle positioning?

Timeliness. Cameras on mobile devices have given rise to significant sharing of media sensor data (photos and videos). Users usually upload visual media to online social networks like Facebook [2] and Instagram [4]. However, these uploads are not immediate. Camera sensors on mobile devices have been increasing in both image and video resolution far faster than cellular network capacity. More importantly, in response to growing demand and consequent contention for wireless spectrum, cellular data providers have imposed data usage limits, which disincentivize immediate photo uploading and create an availability gap (the time between when a photo or video is taken and when it is uploaded). This availability gap can be on the order of several days. If media data were available immediately, it could enable scenarios where there is a need for recent (or fresh) information. How can we enable timely media information retrieval in crowdsourcing scenarios?

Given these challenges, my research goal is to understand the benefits of integrating sensor sources, cloud, and crowdsourcing for context recognition in highly mobile environments. Throughout the dissertation, we propose strategies for these challenges, described as follows.

1.2 Dissertation Overview and Contributions

This dissertation makes three major contributions to the field of crowd-sourced collaborative sensing.
Each contribution carefully combines two or three of context sensing's three aspects to solve its challenge.

Flexible and Efficient Automotive Sensing (Chapter 2). In this work, we focus on Programmability for flexible and efficient automotive sensing. Automotive apps can improve the efficiency, safety, comfort, and longevity of vehicular use. These apps achieve their goals by continuously monitoring sensors in a vehicle and combining them with information from cloud databases in order to detect events that are used to trigger actions (e.g., alerting a driver, turning on fog lights, screening calls). However, modern vehicles have several hundred sensors that describe the low-level dynamics of vehicular subsystems, and these sensors can be combined in complex ways together with cloud information. Moreover, these sensor processing algorithms may incur significant costs in acquiring sensor and cloud information. We propose a programming framework called CARLOG to simplify the task of programming these event detection algorithms. CARLOG uses Datalog to express sensor processing algorithms, but incorporates novel query optimization methods that can be used to minimize bandwidth usage, energy, or latency without sacrificing the correctness of query execution. Experimental results on a prototype show that CARLOG can reduce latency by nearly two orders of magnitude relative to an unoptimized Datalog engine.

Precisely Tracking Automobile Position (Chapter 3). In this work, we solve a GPS Accuracy problem for automobiles. Precise positioning of an automobile to within lane-level precision can enable better navigation and context awareness. However, GPS by itself cannot provide such precision in obstructed urban environments. We present a system called CARLOC for lane-level positioning of automobiles.
CARLOC uses three key ideas in concert to improve positioning accuracy: it uses digital maps to match the vehicle to known road segments; it uses vehicular sensors to obtain odometry and bearing information; and it uses crowd-sourced location estimates of roadway landmarks that can be detected by sensors available in modern vehicles. CARLOC unifies these ideas in a probabilistic position estimation framework, widely used in robotics, called the sequential Monte Carlo method. Through extensive experiments on a real vehicle, we show that CARLOC achieves sub-meter positioning accuracy in an obstructed urban setting, an order-of-magnitude improvement over a high-end GPS device.

Selective On-Demand Media Retrieval from Mobile Devices (Chapter 4). In this work, we want to ensure Timeliness for crowd-sourced media context retrieval from mobile devices. Motivated by an availability gap for visual media, where images and videos are uploaded from mobile devices well after they are generated, we explore the selective, timely retrieval of media content from a collection of mobile devices. We envision this capability being driven by similarity-based queries posed to a cloud search front-end, which in turn dynamically retrieves media objects from mobile devices that best match the respective queries within a given time limit. Building upon a crowd-sensing framework, we have designed and implemented a system called MediaScope that provides this capability. MediaScope is an extensible framework that supports nearest-neighbor and other geometric queries on the feature space (e.g., clusters, spanners), and contains novel retrieval algorithms that attempt to maximize the retrieval of relevant information. From experiments on a prototype, MediaScope is shown to achieve near-optimal query completeness and low to moderate overhead on mobile devices.

1.3 Dissertation Outline

This dissertation is organized as follows. In Chapter 2, we present CarLog.
We first discuss the background of vehicle sensing and argue that automotive apps for vehicle sensing are usually event-driven. We then introduce the design and implementation of our proposed programming framework, CARLOG. Finally, we present extensive evaluation results on our prototype system. In Chapter 3, we present CarLoc. We first quantify GPS errors under different environments and show that GPS errors are significant in obstructed areas. We then describe our approach, which employs a particle filter to combine various techniques and improve localization accuracy, especially in highly obstructed areas. We discuss our system design and implementation and present extensive evaluation results to validate CARLOC's accuracy. In Chapter 4, we discuss MediaScope. We motivate our work with the Flickr photo availability gap. To bridge this gap, we present the design and implementation of MediaScope, which supports four geometric queries on the feature space and contains novel retrieval algorithms for information retrieval. We compare MediaScope against several alternative algorithms and show the optimality of our algorithm. In Chapter 5, we present a comprehensive overview of related work in the literature. Finally, in Chapter 6, we summarize our work and conclude the dissertation.

Chapter 2: CARLOG: A Platform for Flexible and Efficient Automotive Sensing

2.1 Introduction

Many mobile app marketplaces feature automotive apps that provide in-car infotainment or record trip information for later analysis. With the development of systems like Mercedes-Benz mbrace [95], Ford Sync [55], and GM OnStar [59], it is clear that auto manufacturers see significant value in integrating mobile devices into the car's electronic ecosystem as a way of enhancing the automotive experience. Because of this development, in the near future we are likely to see many more automotive apps in mobile marketplaces.
An important feature of automobiles that is likely to play a significant part in the development of future automotive apps is the availability of a large number of vehicular sensors. These sensors describe the instantaneous state and performance of many subsystems inside a vehicle, and represent a rich source of information, both for assessing vehicle behavior and driver behavior. At the same time, there has been an increase in the availability of cloud-based information that governs the behavior of vehicles: topology and terrain, weather, traffic conditions, speed restrictions, etc. As such, we expect that future automotive apps will likely combine vehicular sensors with cloud-based information, as well as sensors on the mobile device itself, to enhance the performance, safety, comfort, or efficiency of vehicles (2.2). For example, apps can monitor vehicular sensors, GPS location, and traffic and weather information to determine whether the car is being driven dangerously, and then take appropriate action (e.g., screen calls, alert the driver). Similarly, an app may be able to warn drivers of impending rough road conditions, based both on the availability of cloud-based road surface condition maps and an analysis of vehicle comfort settings (e.g., suspension stiffness). In this work, we consider automotive apps that combine sensor and cloud information. Many of these apps can be modeled as continuously processing vehicular sensors together with cloud information in order to detect events. In the examples above, a car being driven dangerously, or over a patch of rough road, constitutes an event, and sensor processing algorithms continuously evaluate sensor readings to determine when an event occurs or to anticipate event occurrence. In this setting, programming the algorithms that combine sensor and cloud information can be challenging.
Because cars can have several hundred sensors, each of which describes low-level subsystem dynamics, and the cloud-based information can be limitless, determining the right combinations of sensors and cloud information to detect events can be challenging. For instance, whether someone is driving dangerously can depend not just on vehicle speed, but on road curvature, the speed limit, the road surface conditions, traffic, visibility, etc. As such, programmers will likely need to build their event detectors in a layered fashion, first by building lower-level sensing abstractions, and then combining these abstractions to develop more sophisticated event detectors. In the example above, a programmer can layer the dangerous driving detector by first building an abstraction for whether the driver is speeding (using car speed sensors and cloud speed-limit information), then an abstraction for whether this speed is likely to cause the driver to lose control (by analyzing the car's turn radius vis-a-vis the curvature of the road), and combining these two abstractions to design the final detector. Beyond comprehensibility and ease of programming, this layered approach has the benefit of re-use: sensor abstractions can be re-used in multiple situations. For example, the abstraction for analyzing whether driving speed is likely to cause a driver to lose control can be used in an app that tells drivers at what speed to take an impending curve on the road. Finally, many of these event detectors may need to be tailored to individual users, since different users have different tolerances for safety, comfort, and performance. To address this challenge, we observe that a declarative logic-based language like Datalog [125] has many of the desirable properties discussed above. Datalog is based on the predicate calculus of first-order logic, and supports negation of rules.
In our use of Datalog (2.3), sensors and cloud information are modeled as (time-varying) facts, and applications define event detectors as rules which are conjunctions of facts. An event is said to occur at some time instant if the predicate corresponding to a specific rule is true at that instant. Because facts can be materialized at different times, we need to carefully specify the temporal semantics of event detection. Our use of Datalog addresses the first pain point in the following way: in Datalog, rules can be expressed in terms of other rules, allowing a layered definition of rules, together with re-usability. A second challenge is having to reason about the costs of accessing sensors and cloud-based information. Accessing cloud information can incur significant latency (several seconds in our experiments, 2.4), and designing efficient sensor algorithms that minimize these costs for every automotive app can be difficult, if not impossible. It is possible in Datalog for programmers to write rules carefully to improve the efficiency of rule execution. Datalog engines perform bottom-up evaluation, so a programmer can re-arrange predicates so that sensor predicates are evaluated first. However, Datalog engines also perform optimizations to minimize redundancy, and because these engines are unaware of the costs of acquiring predicates, an engine may foil these programmer-directed optimizations. More generally, expecting mobile app developers to reason about this cost can increase programming burden significantly. To address this challenge, we have developed automatic optimization methods for rule evaluation that attempt to minimize latency (2.4). These methods are transparent to the programmer. In particular, our optimization algorithm re-orders fact assessment (determining facts from sensors or the cloud) to minimize the expected latency of rule evaluation. To do this, it leverages short-circuit evaluation of Boolean predicates.
The expected cost is derived from a priori probabilities of predicates being true, where these probabilities are obtained from training data. During the process of predicate evaluation and short-circuiting, the optimizer also reduces worst-case latency by evaluating cloud predicates in parallel when the parallel evaluation latency is cheaper than the expected residual cost of evaluating the un-processed predicates. More important, its optimization of expected cost is critical: because queries are continuously evaluated, incurring worst-case latency on every evaluation can cause Datalog to miss events. We have embodied these ideas in a programming framework called CARLOG. In CARLOG, multiple mobile apps can instantiate Datalog rules, reuse rule definitions, and concurrently query the rule base for events. CARLOG includes several kinds of optimizations, including provably-optimal fact assessment for a single query, and jointly optimized fact assessment for concurrent queries. Experiments on a prototype of CARLOG, and trace-driven evaluations on vehicle data collected over 2,000 miles of driving, show that it is two orders of magnitude more efficient than Datalog's naïve fact assessment strategy, detects 3-4x more events than the naïve strategy, and consistently outperforms other alternatives, sometimes by 3x (2.5). These evaluations also demonstrate the efficacy of multi-query optimization: without this, latency is 50% higher on average and half the number of events are detected. CARLOG is inspired by research in declarative programming, query optimization, and energy-efficient sensor and context recognition. It differs from prior work in its focus on latency as the metric to optimize (most prior work on mobile devices has focused on energy) and in its use of multi-query optimization (5). 2.2 Background and Motivation Automotive Sensing.
Modern cars contain one or more internal controller area network (CAN) buses interconnecting the electronic control units (ECUs) that regulate internal subsystems [74]. All cars built in the US after 2008 are required to implement the CAN standard. Cars can have up to 70 ECUs, and these communicate using the CAN protocols. ECUs transmit and receive messages that contain one or more sensor readings carrying information about a sensed condition or a system status indication, or that specify a control operation on another ECU. ECUs generate CAN messages either periodically, or periodically when a condition is sensed, or in response to sensor value changes or threshold crossings. The frequency of periodic sensing depends upon the specific data requirements for a vehicle system. Certain types of information may be reported by a module at up to 100 Hz, whereas other types of information may be communicated only at 1-2 Hz. Examples of sensor readings available over the CAN bus include: vehicle speed, throttle position, transmission lever position, automatic gear, cruise control status, radiator fan speed, fuel capacity, and transmission oil temperature. While the CAN is used for internal communication, it is possible to export CAN sensor values to an external computer. All vehicles are required to have an On-Board Diagnostic (OBD-II) [5] port, and CAN messages can be accessed using an OBD-II port adapter. In this work, we use a Bluetooth-capable OBD-II adapter that we have developed in order to access CAN sensor information from late-model GM vehicles. (Commercial OBD-II adapters can only access a subset of the CAN sensors available to us.) This capability permits Bluetooth-enabled mobile devices (smartphones, tablets) to have instantaneous access to internal car sensor information. Some modern cars can have several thousand sensors on-board. Automotive Apps.
The availability of a large number of sensors provides rich information about the behavior of internal subsystems. This can be used to develop mobile apps for improving the performance, safety, efficiency, reliability, and comfort of vehicles [53]. Many of these goals can be affected by other factors: the lifetime of vehicle components can be affected by severe climate, fuel efficiency by traffic conditions and by terrain, safety by road surface and weather, and so forth. Increasingly, information about these factors is available in cloud databases, and because mobile devices are Internet-enabled, it is possible to conceive of cloud-enabled mobile apps that combine cloud information with car sensors in order to achieve the goals discussed above. In this work, we focus on such mobile apps, specifically on event-driven apps that combine sensor and cloud information in near real-time (safety-critical hard real-time tasks such as collision avoidance or traction control are beyond the scope of this work; specialized hardware is needed for these tasks). This class of apps is distinct from automotive apps that record car sensor information for analytics (e.g., for assessing driver behavior, or long-term automotive health). In other words, detected events are not just meant to be collected and reviewed later by drivers, but used by near real-time apps that either act to alert the driver or perform an action on their behalf (e.g., an app might wish to block calls or texts based on whether a driver is executing a maneuver that requires their attention) or used by crowd-sourcing apps to notify other drivers (e.g., an app might upload a detected event indicating an icy road to a cloud service so that other cars can receive early warning of this hazard). Therefore, in our setting, detection latency and detection accuracy are important design requirements.
These two criteria are related: as we show in 2.5, poorly designed detectors which incur high latency can also incur missed detections. Examples. Consider an app that would like to detect when a driver is executing a dangerous sharp turn. This information can be made available to parents or driving instructors, or used for self-reflection. Detecting a sharp turn can be tricky because one has to rule out legitimate sharp turns at intersections, or those that follow the curvature of the road. Accordingly, an algorithm that detects a sharp turn has to access an online map database to determine whether the vehicle is at an intersection, or to determine the curvature of the road. In addition, this algorithm needs access to the sensor that provides the turn angle of the steering wheel, and a sensor that determines the yaw rate (or angular velocity about the vertical axis). Continuously fusing this information can help determine when a driver is making a sharp turn. Finally, we note that any such algorithm will include thresholds that determine safe or unsafe sharp turns; these thresholds are often determined by driver preferences and risk-tolerance. Consider a second example, an application that would like to block incoming phone calls or text messages when a driver is driving dangerously. Call blocking can be triggered by a collection of different sets of conditions: a combination of bad weather and a car speed above the posted speed limit, or bad weather and a sharp turn. This illustrates an event-driven app, where events can be defined by multiple distinct algorithms. More important, it also illustrates layered definitions of events, where the call block event is defined in terms of the sharp turn event discussed above. In 2.5, we describe several other event-driven apps. Datalog. Datalog [125] is a natural choice for describing sensor fusion for event-driven apps.
It is a highly mature logic programming language whose semantics are derived from the predicate calculus of first-order logic. Datalog permits the specification of conjunctive rules, supports negation and recursion, and is often used in information extraction, integration, and cloud computing [69]. Facts and Rules. Operationally, a Datalog system consists of two databases: an extensional database (EDB) which contains ground facts, and an intensional database (IDB) which consists of rules. Facts describe knowledge about the external world; in our setting, sensor readings and cloud information provide facts instantiated in the EDB. Rules are declarative descriptions of the steps by which one can infer higher-order information from the facts. Each rule has two parts, a head and a body. The head of a rule is an atom, and the body of a rule is a conjunction of several atoms. Each atom consists of a predicate, which has one or more variables or constants as arguments. Any predicate which is the head of a rule is called an IDB-predicate, and one that occurs only in the body of rules is called an EDB-predicate. For example, the code snippet shown below describes a rule that defines a dangerous driving event. The head of the rule contains the predicate DangerousDriving, with four variables, and the body is a conjunction of several predicates, some of which are automotive sensors (like Yaw_Rate and Steer_Angle) and others access cloud information such as SpeedLimit. Dangerous driving is said to occur whenever the yaw rate exceeds 15 rad/s, the steering angle exceeds 45 degrees, and the vehicle speed exceeds the speed limit by a factor of more than 1.2.
Thus, for example, when the Yaw_Rate sensor has a value 30 rad/s (when this happens, a fact Yaw_Rate(30) is instantiated in the EDB), and the steering angle is 60 degrees, and the car is being driven at 45 mph in a 30 mph zone, a new fact DangerousDriving(30,60,45,30) is instantiated into the EDB and signals the occurrence of a dangerous driving event.

DangerousDriving(x,y,z,w) :- Yaw_Rate(x), x > 15, Steer_Angle(y), y > 45,
    Vehicle_Speed(z), SpeedLimit(w), MULTIPLY(w, 1.2, a), a < z.

More generally, the head of a rule is true if there exists an instantiation of values for variables that satisfies the atoms in the body. As discussed above, one or more atoms in the body can be a negation, and a rule may be recursively defined (the head atom may also appear in the body). An atom in the body of one rule may appear in the head of another rule. Rule Evaluation and Optimization. Datalog is an elegant declarative language for describing computations over data, and a Datalog engine evaluates rules. In general, given a specific IDB, a Datalog engine will apply these rules to infer new facts whenever an externally-determined fact is instantiated into the EDB. Datalog also permits queries: queries describe specific rules of interest to a user. For example, while the IDB may contain several tens or hundreds of rules, a user may, at a given instant, be interested in evaluating the DangerousDriving rule. This is expressed as a query ?-DangerousDriving(yaw,angle,speed,limit). 2.3 CARLOG Design In this section, we describe the design of a programming system called CARLOG that simplifies the development of event-driven automotive apps. CARLOG models car sensors and cloud-based information as Datalog predicates, and apps can query CARLOG to identify events. Figure 2.1 shows the internal structure of CARLOG.
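For intuition, the rule above can be mimicked as an ordinary short-circuiting Boolean check. The following sketch is ours, not part of CARLOG, with the rule's thresholds hard-coded:

```python
def dangerous_driving(yaw_rate, steer_angle, vehicle_speed, speed_limit):
    """Mirror of the DangerousDriving rule: yaw rate above 15, steering
    angle above 45 degrees, and vehicle speed above 1.2x the posted limit.
    Python's `and` short-circuits left to right, foreshadowing the
    predicate-ordering optimizations of Section 2.4."""
    return (yaw_rate > 15 and
            steer_angle > 45 and
            vehicle_speed > 1.2 * speed_limit)

# The worked example from the text: yaw 30, steering angle 60 degrees,
# 45 mph in a 30 mph zone -> a DangerousDriving event fires.
```

Unlike the Datalog rule, this sketch evaluates its conditions in a fixed textual order; CARLOG's contribution is choosing that order automatically.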
The Sensor Acquisition and Cloud Acquisition modules access information from the car's sensors and the cloud, respectively, and provide these to the Interface module in the form of Datalog facts. The Interface module takes (1) app-defined queries and (2) facts from the sensors, and passes these to a modified Datalog query processing engine that performs query evaluation. CARLOG introduces two additional and novel components, the Query Optimizer and the Query Plan Evaluator. The Query Optimizer statically analyzes a query's associated rules and determines an evaluation plan for rule execution. Unlike traditional Datalog optimization, the Query Optimizer attempts to minimize query evaluation latency based on the latency of acquiring cloud information, instead of the number of rules to be evaluated. The output of the Query Optimizer is a query plan executed by the Query Plan Evaluator.

Figure 2.1—CARLOG Design

In the remainder of this section, we describe CARLOG in more detail, and in 2.4 we discuss the Query Optimizer and Query Plan Evaluator. How Apps use CARLOG. Event-driven apps instantiate Datalog rules in CARLOG. Typically, these rules define events for which an app is interested in receiving notifications. In Datalog terminology, these rules constitute the IDB. Rules instantiated by one app may use IDB-predicates (heads of IDB rules) instantiated by other apps. Apps can then pose Datalog queries to CARLOG. When a query is posed, CARLOG first identifies the facts needed to evaluate the query. Then it continuously evaluates the query by monitoring when predicates from the relevant sensors become facts. As discussed in the previous section, instantiation of the query predicate as a fact corresponds to the occurrence of an event, and therefore the interested app is notified when this occurs.
Using this approach to query evaluation allows CARLOG to also support multiple concurrent queries. CARLOG Sensor and Cloud Predicates. CARLOG provides substantially the same capabilities as Datalog, and inherits all of its benefits (these are discussed below). Like Datalog, CARLOG supports conjunction and negation (2.5 shows examples of rules using negation). Unlike Datalog, CARLOG does not support optimization for recursion: we have left this to future work, as discussed in 2.4. CARLOG extends Datalog to support acquisitional query processing [93]: the capability to process queries that depend on dynamically instantiated sensor and cloud data. To do this, sensor and cloud information are modeled as EDB-predicates; we use the terms sensor predicate and cloud predicate, respectively, to denote the source of the predicate. For example, Yaw_Rate(x) is a sensor predicate that models the yaw rate sensor in a vehicle, and SpeedLimit(w) is a cloud predicate that models the speed limit at the current location (2.2). These predicates are predefined EDB-predicates that applications can use when defining new rules. Benefits of CARLOG. Prior work [53] has proposed a procedural abstraction for programming automotive apps. Compared to such an abstraction, CARLOG is declarative due to its use of Datalog, so apps can define events without having to specify or program sensor or cloud data acquisition. Furthermore, apps can easily customize rules for individual users: the dangerous driving rule in 2.2 has several thresholds (e.g., 45 degrees for Steer_Angle), and customizing these is simply a matter of instantiating a new rule. Since cars have several hundred sensors and Datalog is a mature rule processing technology that can support large rule bases, CARLOG inherits scalability from Datalog. This scalability comes from several techniques to optimize rule evaluation.
In general, rule evaluation in Datalog has a long history of research, and many papers have explored a variety of techniques for optimizing evaluation [125, 31]. These techniques include bottom-up evaluation, top-down evaluation, and a class of program transformations called magic sets (5). All of these approaches seek to minimize or eliminate redundancy in rule evaluation, and we do not discuss these optimizations further in this work. In the next section, we discuss an orthogonal class of optimizations that have not been explored in the Datalog literature. CARLOG also inherits other benefits from Datalog. In CARLOG, rule definitions can include IDB-predicates defined by other apps. As such, rule definitions can be layered, permitting significant rule re-use and the definition of increasingly complex events. As discussed in 2.2, CallBlock can be defined in terms of a DangerousDriving IDB-predicate instantiated by another app. CARLOG also inherits some of Datalog's limitations: some sensing computations may require capabilities beyond Datalog. Consider a predicate defined in terms of the odometer. On some cars, the odometer sensor may not be exposed to the consumer; apps can approximate odometry by mathematically integrating speed sensor values, but this computation cannot be expressed in Datalog. In this case, we anticipate CARLOG will include a "virtual" odometer sensor as a Datalog predicate which is implemented in a different language (say, Java) and integrated into the CARLOG runtime. 2.4 CARLOG Latency Optimization In CARLOG, programmers do not need to distinguish sensor and cloud predicates from other EDB-predicates. However, unlike other Datalog EDB-predicates, sensor and cloud predicates incur a predicate acquisition latency, which is the latency associated with acquiring the data necessary to evaluate the predicate.
In this section, we show how CARLOG can optimize predicate acquisition latency in a manner transparent to the programmer. 2.4.1 Predicate Acquisition Latency Cloud predicates incur high latency. Like several prior sensor-based query processing languages (e.g., [93]), CARLOG supports acquisitional query processing, where sensor data and cloud information are modeled as predicates, but may be materialized on-demand. However, an important difference is that in the automotive environment materializing cloud predicates can incur significant latency. To illustrate this, Figure 2.2 shows the latency incurred when accessing three different cloud predicates using two different carriers. The three predicates check, respectively, for whether the current speed exceeds the average traffic speed reported by Google, whether there are any traffic incidents reported by Bing's traffic reporting service at a given location, and whether the current gas price reported by MyGasFeed exceeds a certain value. (In general, CARLOG permits cloud predicates implemented by multiple cloud services.) In calculating these latencies, we conducted experiments where we drove a car at an average speed of about 30 mph (maximum 70 mph) and configured two mobile devices with different carriers to acquire individual predicates. Figure 2.2 shows the latency incurred on the cloud side (our phones queried a server we control, which in turn issued requests to the cloud services listed above), and the network latency (total request latency minus the cloud latency).

Figure 2.2—Predicate acquisition latency
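Per-predicate latencies like those measured above compose differently depending on how a rule acquires them: fetching cloud predicates one after another costs the sum of their latencies, while issuing the requests concurrently costs roughly the latency of the slowest one. A minimal sketch with illustrative numbers (ours, not measurements from Figure 2.2):

```python
def sequential_latency_ms(latencies):
    # Naive acquisition: each cloud predicate is fetched only after the
    # previous one returns, so the per-predicate latencies add up.
    return sum(latencies)

def parallel_latency_ms(latencies):
    # Concurrent acquisition: all requests are issued at once, so the
    # total is bounded by the slowest predicate.
    return max(latencies)

# Hypothetical latencies (ms) for three cloud predicates.
latencies = [800, 1200, 3000]
```

Even the parallel bound can be several seconds, which motivates the reordering and short-circuiting optimizations described next.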
Two features are evident from this figure: (a) cloud latency can vary significantly across services (MyGasFeed is less mature than the other two services, so is slower), and (b) network latency is highly variable on both carriers, and several seconds in the worst case (resulting from handoffs due to high mobility). Naive Datalog acquisition can be expensive. Although Datalog provides several benefits for event-driven automotive apps, its rule evaluation can incur high latency, because the default rule evaluation engine is agnostic to acquisition cost and acquires predicates sequentially. Thus, if a rule involves multiple cloud predicates, the total predicate acquisition latency is the sum of the latencies required to evaluate each cloud predicate. As we discuss below, it is possible to optimize this by acquiring all the cloud predicates in parallel, and the total latency in this case is the maximum latency required to evaluate a cloud predicate. Even in this case, acquisition latency can still be on the order of several seconds. Overview of latency optimization in CARLOG. CARLOG performs latency optimization by statically analyzing each query and computing an optimal order of execution for the query's predicate acquisition. This computation is performed once, when an application instantiates a query. Subsequently, whenever a query needs to be re-evaluated (as discussed above, this happens whenever the value of a sensor changes), this order of predicate acquisition is followed. CARLOG's latency optimization builds upon short-circuit evaluation of Boolean operators. (As an aside, CARLOG's optimizations can be applied to other settings where predicate acquisition costs differ; we have deferred this to future work.) In a conjunctive rule, if one predicate happens to be false, the other predicates do not need to be evaluated. CARLOG takes this intuition one step further, and is based on a key observation about the automotive setting: some
predicates are more likely to be false than others. Consider our dangerous driving example in 2.2. During experiments in which we recorded sensor values, we found that the predicate Yaw_Rate(x), x > 15 was far more likely to be false than Steer_Angle(y), y > 45. Intuitively, this is because drivers do not normally turn at high rates of angular velocity (yaw), but do turn (steer) often at intersections, parking lots, etc. In this case, evaluating Yaw_Rate first will avoid the cost of predicate acquisition for Steer_Angle, thereby incurring a lower overall expected cost for repeated query execution as compared to when Steer_Angle is evaluated first. In general, determining the optimal order of sensor acquisition can be challenging, as it depends both on the cloud predicate acquisition latency and on the probability of the predicate being true (in 2.5, we consider and evaluate several alternatives). If it were less expensive to acquire Steer_Angle than Yaw_Rate, then the optimal order would depend both upon the acquisition latency and the probability of a predicate being true. CARLOG leverages this observation, but for cloud predicates. Cloud predicates can differ in acquisition cost (Figure 2.2), and some cloud predicates are more likely to be false than others. Thus, by re-ordering the acquisition of cloud predicates, CARLOG can short-circuit the acquisition of some cloud predicates, or avoid acquisition entirely if any of the sensor predicates are false. Estimating predicate probabilities. A key challenge for latency optimization is to estimate the probability of a predicate being true. We estimate these probabilities using training data, obtained by collecting, for a short while, sensor and cloud information continuously while a car is being driven.
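A minimal sketch of this training-data estimate, the fraction of recorded samples for which a predicate holds (names and the trace below are ours, for illustration):

```python
def predicate_probability(samples, predicate):
    """Estimate the a priori probability that a predicate is true as the
    fraction n/N of training samples satisfying it."""
    n = sum(1 for value in samples if predicate(value))
    return n / len(samples)

# Hypothetical yaw-rate trace: only readings above 15 satisfy the predicate,
# so the estimated probability is 3/8.
yaw_trace = [2, 5, 18, 7, 30, 4, 22, 9]
p_yaw = predicate_probability(yaw_trace, lambda y: y > 15)
```

The same helper would apply to cloud predicates, with the training trace recording the fetched cloud values rather than sensor readings.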
When an application instantiates a query, CARLOG's Query Optimizer statically analyzes the query, extracts the sensor and cloud predicates, and computes the a priori probability of each predicate being true from the training data. (Our predicate estimation technique is similar to branch predictors in computer architecture: based on a history of driving traces, our approach estimates the probability of a predicate being true, the analog of a branch (not) taken.) For example, if the training data has N samples of Yaw_Rate, but only n of these are above the threshold of 15, then the corresponding probability is n/N. These probabilities, together with the predicate latencies, are inputs to the optimization algorithms discussed below. We note that the accuracy of the probability estimates affects only performance, not correctness. One corollary of this is that training data from one driver can be used to estimate probabilities for similar drivers, without impacting correctness, only performance. Furthermore, rather than use a priori estimates, we can update cost and predicate probability estimates dynamically, and predicate evaluation could adapt accordingly (e.g., if in a particular area the latency of predicate acquisition is low, or if the vehicle changes hands and the new driver's behavior is significantly different, the evaluation order could change). We leave a detailed implementation of this for future work, but we note that these generalizations would not change the algorithms presented in this work, only how the inputs to these algorithms are computed. Minimizing expected latency. The output of our algorithms is a predicate acquisition order that minimizes the expected latency. Without latency optimizations, CARLOG can miss events. To understand why, first recall that, in CARLOG, rules are continuously evaluated. Now, suppose an app defines a rule based on the Yaw_Rate sensor (with a threshold of 15, as in our example in 2.2), and a cloud predicate.
First, suppose that Yaw_Rate and the cloud predicate have the same acquisition cost (say 20 ms). Then, one can define an ideal event detection rate as the rate of detected events if the rule containing these predicates were evaluated every 20 ms. In practice, however, cloud predicate acquisition cost can be higher. Suppose, in our example, that it is 1 second. To evaluate a rule, an unoptimized evaluation strategy would wait until the cloud predicate was acquired (i.e., wait for one second), then evaluate the predicate using the latest value of the Yaw_Rate sensor. This strategy does not evaluate all other Yaw_Rate readings (in 1 second, this sensor reports 50 values), and some of these readings may have been above the threshold. As such, this unoptimized strategy would have a lower detection rate than the ideal discussed above; in other words, this strategy can miss events. By optimizing latency, CARLOG can reduce instances of missed events. Instead of dropping the Yaw_Rate sensor readings, a rule engine could queue each sensor change to be evaluated sequentially, or evaluate each sensor change in parallel. This is fundamentally infeasible because the arrival rate of events (50 Hz) is higher than the service rate (1 Hz). Missing events is unacceptable, since for some applications the precise count of events may be important. For example, missing a DangerousTurn event can, in an app that monitors teen driving, translate into incorrect estimates of the quality of the teen driver. Similarly, a missed icy road condition can, in a crowd-sourcing app, fail to alert other drivers of a dangerous condition. As we quantify later in our experiments, CARLOG's latency optimization improves event detections by a factor of 3-4 over Datalog. Finally, although our algorithms can be used to optimize energy, a discussion of this is beyond the scope of the thesis.

Figure 2.3—Expansion Proof Tree for Rule 2.2
2.4.2 Terminology and Notation In Datalog, a query can be represented as a proof tree. The internal nodes of this proof tree are IDB-predicates, and the leaves of the proof tree are EDB-predicates. In CARLOG, leaves represent sensor and cloud EDB-predicates. (Leaves can also represent EDB-predicates which are not sensors or cloud predicates; we omit further discussion of this generalization as it is straightforward.) Figure 2.3 shows the proof tree for the dangerous driving example rule. In general, a proof tree will have a set G of n leaf predicates G1, ..., Gn. Each Gi is also associated with a cost ci (in our setting, the cost is the latency) and a probability pi of being true. (Both pi and ci may be better modeled using a distribution rather than a single average value, as in this work; we have left an exploration of this extension to future work. However, as we have discussed before, our choices for pi and ci generally do not affect the correctness of predicate evaluation, only latency.) The order of predicate evaluation generated by CARLOG is a permutation of G such that there exists no other permutation of G with a lower expected acquisition cost. For Figure 2.3, the expected cost E of evaluating the predicates in the order G1, G2, G3 can be defined recursively as:

E[G1, G2, G3] = p1 * E[G2, G3 | G1 = 1] + (1 - p1) * E[G2, G3 | G1 = 0] + c1    (2.1)

Because evaluation can be short-circuited when G1 is false, this results in the following expression:

E[G1, G2, G3] = p1 * E[G2, G3] + c1    (2.2)

This expected cost calculation can be applied to a set of predicates of any size. Using a brute-force approach, one can find the expected cost for each permutation of a set G and identify the permutation with the lowest cost.
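The recursion of Eq. 2.2 and the brute-force search over permutations can be sketched directly; this is our illustration, not CARLOG's implementation:

```python
from itertools import permutations

def expected_cost(order, cost, prob):
    """Expected short-circuit cost of evaluating a conjunction in `order`
    (Eq. 2.2 unrolled): each predicate's cost is paid only if all earlier
    predicates in the order were true."""
    total, p_all_earlier_true = 0.0, 1.0
    for g in order:
        total += p_all_earlier_true * cost[g]
        p_all_earlier_true *= prob[g]
    return total

def brute_force_order(cost, prob):
    """Try every permutation and keep the cheapest; exponential in the
    number of predicates, so for illustration only."""
    return min(permutations(cost), key=lambda o: expected_cost(o, cost, prob))

# Hypothetical costs (ms) and probabilities for two predicates:
cost = {"G1": 1.0, "G2": 10.0}
prob = {"G1": 0.5, "G2": 0.9}
# E[G1, G2] = c1 + p1 * c2 = 1 + 0.5 * 10 = 6.0
# E[G2, G1] = c2 + p2 * c1 = 10 + 0.9 * 1 = 10.9
```

Evaluating the cheap, often-false G1 first is the better order here, which is the intuition the greedy rule of Theorem 2.4.1 captures without enumerating permutations.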
In the following sections, we explore algorithms for determining the optimal evaluation order for: (a) conjunctive rules without negation, (b) conjunctive rules with negation, and (c) concurrent conjunctive rules with no negation and shared predicates. Exploring optimizations for concurrent conjunctive rules with negation and shared predicates is left to future work.

2.4.3 Latency Optimization: Algorithms

Single Conjunctive Query without Negation. Consider a single conjunctive query with n leaf sensor and cloud predicates, where none of the predicates are negated. Intuitively, the lowest expected cost evaluation order prioritizes predicates with a low cost (latency) and a low probability of being true. For conjunctive queries without negation, this intuition enables CARLOG to use an optimal greedy algorithm with O(n log n) complexity [61] to compute an ordering with the minimal expected cost.

Theorem 2.4.1 Specifically, if

c_1/(1 − p_1) ≤ c_2/(1 − p_2) ≤ ... ≤ c_n/(1 − p_n)   (2.3)

then G_1, G_2, ..., G_n is the predicate evaluation order with lowest expected cost.

Single Query with Negation. The basic form of Datalog provides only conjunctive (AND) queries. Fundamentally, negation cannot be expressed using conjunction alone. For this reason, many Datalog systems incorporate support for negated rules and negated IDB-predicates. In the automotive domain, we have found many event descriptions to be more naturally expressed using negation. Consider a predicate RightTurnSignal in CARLOG that determines whether the right turn indicator is on. The predicate (NOT RightTurnSignal) is useful to express some rules (2.5) but cannot be expressed in a purely conjunctive version of Datalog, since the negation is the OR of two cases (LeftTurnSignal OR NoSignal). A simple example of a proof tree for a query with negation is shown in Figure 2.4. In this example, the IDB-predicate R_1 is negated. Short-circuiting evaluation for negated predicates is different than in the purely conjunctive case.
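Before turning to negation, the c_i/(1 − p_i) ordering rule of Theorem 2.4.1 can be checked against brute-force enumeration; the sketch below (illustrative, with randomly generated probabilities and costs) confirms that the greedy order always attains the brute-force minimum:

```python
import random
from itertools import permutations

def expected_cost(order, p, c):
    # Closed form of the recursion in 2.4.2: the cost of predicate i is
    # paid only when all predicates evaluated before it were true.
    cost, prob = 0.0, 1.0
    for g in order:
        cost += prob * c[g]
        prob *= p[g]
    return cost

def greedy_order(p, c):
    # Theorem 2.4.1: sort by c_i / (1 - p_i), ascending.
    return tuple(sorted(p, key=lambda g: c[g] / (1.0 - p[g])))

random.seed(1)
for _ in range(200):
    names = ["G%d" % i for i in range(5)]
    p = {g: random.uniform(0.05, 0.95) for g in names}
    c = {g: random.uniform(1, 1000) for g in names}
    brute = min(expected_cost(o, p, c) for o in permutations(names))
    assert abs(expected_cost(greedy_order(p, c), p, c) - brute) < 1e-6
```

The greedy rule replaces an O(n!) search with a single O(n log n) sort, which matters because CARLOG recomputes orderings whenever predicate statistics change.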
For example, in Figure 2.4, we can only short-circuit the evaluation of the query when both G_2 and G_3 are true; if one is false, we must continue the evaluation.

In this work, we develop an algorithm for queries with negation that relies on an exchange argument, which we illustrate using Figure 2.4(a). Suppose that the optimal order of evaluation of R_1 is (G_2, G_3). Then in the optimal order of evaluation for the overall query RH, G_1 cannot be interleaved between G_2 and G_3. Assume the contrary and consider the following order of evaluation: (G_2, G_1, G_3). For this ordering, it can be shown that the expected cost is c_2 + c_1 + p_1 p_2 c_3: G_2 must be evaluated, and regardless of whether G_2 is true or false, G_1 must be evaluated; G_3 is only evaluated if G_1 and G_2 are both true. By similar reasoning, it can be shown that the cost of (G_1, G_2, G_3) is c_1 + p_1 c_2 + p_1 p_2 c_3. Comparing term-wise, the cost of this order is less than or equal to that of (G_2, G_1, G_3).

Now consider the other possible ordering (G_2, G_3, G_1). In this case, the expected cost is c_2 + p_2 c_3 + (1 − p_2 p_3) c_1. Consider predicate R_1 of Figure 2.4(a) in isolation. This predicate has an effective cost of c_2 + p_2 c_3 (for similar reasons as above) and an effective probability of (1 − p_2 p_3) (since R_1 is negated, it is true only when G_2 and G_3 are not simultaneously true). By Theorem 2.4.1, CARLOG produces an optimal order of (R_1, G_1) only if (c_2 + p_2 c_3)/(1 − (1 − p_2 p_3)) ≤ c_1/(1 − p_1). After simplifying the expression on the LHS, this inequality implies that c_3/p_3 ≤ c_1/(1 − p_1). Therefore, the cost of (G_2, G_3, G_1) is less than or equal to the cost of (G_2, G_1, G_3) only if c_3/p_3 ≤ c_1/(1 − p_1). Therefore, an evaluation order in which G_1 is interleaved between G_2 and G_3 is equal or greater in cost than other orders where it is not.
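The three cost expressions from the exchange argument can be compared numerically; the probabilities and costs below are hypothetical, chosen so the cloud-like predicate G_1 is expensive:

```python
# Analytic expected costs from the exchange argument, for the query
# RH = G1 AND (NOT R1), with R1 = G2 AND G3 as in Figure 2.4(a).
def cost_g1_g2_g3(p, c): return c[1] + p[1]*c[2] + p[1]*p[2]*c[3]
def cost_g2_g1_g3(p, c): return c[2] + c[1] + p[1]*p[2]*c[3]
def cost_g2_g3_g1(p, c): return c[2] + p[2]*c[3] + (1 - p[2]*p[3])*c[1]

# Hypothetical values: G1 is an expensive (cloud-like) predicate.
p = {1: 0.6, 2: 0.5, 3: 0.3}
c = {1: 400.0, 2: 50.0, 3: 80.0}

# Interleaving G1 between G2 and G3 never helps (term-wise comparison).
assert cost_g1_g2_g3(p, c) <= cost_g2_g1_g3(p, c)
print(sorted([("G1,G2,G3", cost_g1_g2_g3(p, c)),
              ("G2,G1,G3", cost_g2_g1_g3(p, c)),
              ("G2,G3,G1", cost_g2_g3_g1(p, c))], key=lambda x: x[1]))
```

For these values, c_3/p_3 = 266.7 ≤ c_1/(1 − p_1) = 1000, so (G_2, G_3, G_1), which evaluates the negated subtree first, is the cheapest order, consistent with the condition derived above.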
Algorithm 1: OPTIMAL EVALUATION ORDER FOR QUERIES WITH NEGATION
INPUT: Proof tree T
1: FUNCTION: OPTORDER(T)
2:   NS = set of minimal negated subtrees in T
3:   for all t ∈ NS do
4:     Compute optimal evaluation order for t using Theorem 2.4.1
5:     c_eff(t) = expected cost of optimal evaluation order for t
6:     p_eff(t) = 1 − ∏_{i=1}^{k} p_i, where the p_i are the probabilities associated with the leaf predicates of t
7:     Replace t with a single node (predicate) whose cost is c_eff(t) and whose probability is p_eff(t)
8:   NS = set of minimal negated subtrees in T
9:   Compute optimal evaluation order for T using Theorem 2.4.1

This discussion motivates the use of an algorithm (Algorithm 1) that independently processes subtrees of the proof tree, using the algorithm of Theorem 2.4.1 as a building block. This algorithm operates on minimal negated subtrees: subtrees of the proof tree whose root is a negated predicate, but which do not themselves contain a negated predicate. Intuitively, Algorithm 1 computes the effective cost and effective probability for each minimal negated subtree and replaces the subtree with a single node (or predicate) to which the effective cost and probability are attached. At the end of this process, no negated subtrees remain, and Theorem 2.4.1 can be applied directly.

For conjunctive queries, there is a single evaluation order. Because of more complex short-circuit evaluation rules, this is not always the case for queries with negated predicates. The output of our algorithm for negation is actually a binary decision tree that defines the order in which predicates should be evaluated. For example, in Figure 2.4(a), if the evaluation order is (G_2, G_3, G_1), the decision tree is as shown in Figure 2.4(b). In this tree, if G_2 is false, then G_1 must be evaluated. G_1 is also evaluated if G_2 is true but G_3 is false.
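The subtree-collapsing step of Algorithm 1 can be sketched as follows; this is an illustrative re-implementation, not CARLOG's code, and the tree encoding and numeric values are hypothetical (the same p and c as in the negation example above):

```python
from math import prod

# Illustrative sketch of Algorithm 1. A proof tree is a list of nodes:
# either a leaf (name, p, c), or ("NOT", [leaves]) for a minimal negated
# subtree whose children are all leaves.

def greedy(leaves):
    # Theorem 2.4.1: ascending c / (1 - p).
    return sorted(leaves, key=lambda l: l[2] / (1.0 - l[1]))

def expected_cost(order):
    cost, prob = 0.0, 1.0
    for _, p, c in order:
        cost += prob * c
        prob *= p
    return cost

def opt_order(tree):
    flat = []
    for node in tree:
        if node[0] == "NOT":
            sub = greedy(node[1])                          # line 4
            c_eff = expected_cost(sub)                     # line 5
            p_eff = 1.0 - prod(p for _, p, _ in node[1])   # line 6
            flat.append(("NOT(...)", p_eff, c_eff))        # line 7: collapse
        else:
            flat.append(node)
    return greedy(flat)                                    # line 9

# Query of Figure 2.4(a): G1 AND NOT(G2 AND G3), hypothetical p and c.
tree = [("G1", 0.6, 400.0), ("NOT", [("G2", 0.5, 50.0), ("G3", 0.3, 80.0)])]
print([name for name, _, _ in opt_order(tree)])
```

With these numbers the collapsed negated node has effective cost 90 and effective probability 0.85, so it is ordered before the expensive G_1, reproducing the (G_2, G_3, G_1) order discussed above at a linear-strategy cost of 430.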
Figure 2.4—(a) Proof Tree of an Example Query with Negation; (b) Corresponding Decision Tree

We have proved (see [72]) Algorithm 1 to be optimal among all linear strategies: in these strategies, the order of predicate evaluation is fixed, but the evaluation of some predicates might be skipped if unnecessary. There is a class of strategies, called adaptive strategies, in which the order of evaluation depends on the values of already-evaluated predicates; these can have lower expected cost. In general, adaptive strategies perform better, but finding an optimal adaptive strategy for the negation case is known to be NP-hard [61].

Multiple Queries without Negation. In CARLOG, multiple automotive apps can concurrently instantiate queries. These queries can also share predicates. Consider two queries, one which uses predicates X and Y, and another which uses Y and Z; i.e., they share a predicate Y. Now, suppose the probabilities of X, Y and Z are 0.39, 0.14 and 0.71 respectively, and their costs are 201, 404, and 278. Jointly optimizing these queries (by realizing that evaluating Y first can short-circuit the evaluation of both queries) results in an order (Y, X, Z), which has an expected cost of 471.1. Alternative approaches, such as individually optimizing these queries using Theorem 2.4.1 and evaluating the shared predicate only once, or using Theorem 2.4.1 but assigning half the cost of Y to each query, incur higher costs (643.9 and 521.6, respectively).

This multi-query optimization, unfortunately, is NP-complete: we have proved this by reduction from Set Cover (see [72]). (We do not know of prior work that has posed this multi-query optimization, or examined its complexity.) We have designed a greedy adaptive heuristic for this strategy that is loosely modeled after an O(log n) approximation algorithm for set cover [51].
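The two-query example above can be checked numerically. The enumeration below is purely illustrative (it is not CARLOG's algorithm): it computes the expected joint cost of an acquisition order by tracking which queries are still unresolved, acquiring a predicate only if some unresolved query contains it:

```python
from itertools import permutations

# Two conjunctive queries sharing Y: Q1 = X AND Y, Q2 = Y AND Z,
# with the probabilities and costs from the example in the text.
p = {"X": 0.39, "Y": 0.14, "Z": 0.71}
c = {"X": 201.0, "Y": 404.0, "Z": 278.0}
queries = [{"X", "Y"}, {"Y", "Z"}]

def joint_cost(order):
    """Expected cost when a predicate is acquired only if at least one
    query containing it is still unresolved (conjunctive short-circuit)."""
    total, states = 0.0, [(1.0, [True, True])]   # (probability, queries alive)
    for g in order:
        new_states = []
        for prob, alive in states:
            needed = any(a and g in q for a, q in zip(alive, queries))
            if not needed:
                new_states.append((prob, alive))
                continue
            total += prob * c[g]
            new_states.append((prob * p[g], alive))            # g is true
            dead = [a and g not in q for a, q in zip(alive, queries)]
            new_states.append((prob * (1 - p[g]), dead))       # g is false
        states = new_states
    return total

best = min(permutations(p), key=joint_cost)
print(best, round(joint_cost(best), 1))
```

With these numbers, the joint plan that evaluates the shared predicate Y first costs about 471.1, while the plan from separately optimized queries (X before Y, with Y fetched only once) costs about 643.9, matching the figures quoted in the text.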
We have yet to prove approximation bounds for our heuristic. Intuitively, this heuristic works as follows. Let P_i be a predicate that has not yet been evaluated, whose probability is p_i and cost c_i. Let P_i occur in N_i rules (or proof trees) that have not yet been resolved. Then, N_i(1 − p_i)/c_i represents the benefit-to-cost ratio of evaluating P_i. Our greedy heuristic, at each step, picks that P_i, amongst all un-evaluated predicates, which has the highest benefit-to-cost ratio. This greedy heuristic has a cost of O(n²), where n is the number of predicates. As we show later, multi-query optimization can provide significant latency gains in practice.

2.4.4 Parallel Acquisition

Naive Datalog fact assessment evaluates predicates sequentially. The latency of cloud predicate acquisition can be reduced by issuing requests in parallel. In this case, when acquiring predicates G_1 and G_2, the resulting latency is the larger of the two individual latencies. However, parallel acquisition is not always better than short-circuit acquisition (the converse is also true). Acquiring X and Y in parallel is beneficial only if the minimal expected cost of acquiring both of them is larger than the cost of acquiring them in parallel.⁵

CARLOG uses this observation to further optimize predicate acquisition latency. Consider n predicates and, without loss of generality, assume an evaluation order G_1, G_2, ..., G_n. Suppose that G_1, G_2, ..., G_i have already been evaluated and all of those predicates are true. Then, consider the minimal residual expected cost of evaluating the remaining predicates {G_{i+1}, ..., G_n} (this can be computed using the algorithms described above). If this residual cost is greater than the latency cost of evaluating those predicates in parallel, CARLOG reduces latency by acquiring the remaining predicates in parallel.

⁵ We do not assume that X and Y are independent. They may be correlated.
But, in general, both cloud predicates would need to be retrieved, since a rule can use different thresholds for each predicate.

2.4.5 Putting it All Together

When an app instantiates a CARLOG query, the Query Optimizer statically analyzes the query and assigns probabilities to each sensor or cloud predicate, as discussed above. The Query Optimizer maintains average latencies for acquiring cloud predicates, from offline measurement or gathered as part of the training process discussed earlier. Using these costs and probabilities, the Query Optimizer applies the appropriate form of latency optimization discussed above. This is a one-time computation performed when the query is instantiated. The output of this optimization is a decision tree (e.g., Figure 2.4(b)) that is passed to the Query Plan Evaluator, which repeatedly evaluates queries when new sensor facts are materialized.

We have left other potential CARLOG enhancements to future work. For example, one approach to further reducing latency is to use recently-derived facts to short-circuit fact establishment: we know that if a driver is on the highway and no obvious deceleration or large turn occurs, then the driver is still on the highway. This can be expressed easily in Datalog, but requires support for recursion, which Datalog supports but for which we have not designed optimization algorithms. As another enhancement, CARLOG can also update its predicate probabilities continuously to track changes in driving habits.

2.5 Evaluation

In this section, we present evaluation results for several event-driven automotive apps in CARLOG.

2.5.1 Methodology and Metrics

CARLOG Implementation. Our implementation of CARLOG has two components: one on the mobile device and the other on the cloud. The mobile device implementation pre-defines sensor and cloud predicates, and some common aggregation functions (count, min, max and avg).
Rules can be expressed in terms of these predicates with aggregation functions, or in terms of other rules. The CARLOG API provides functions for installing and removing rules, and installing and removing queries. Query responses are returned through inter-process messaging mechanisms. The mobile device implementation includes the query optimization algorithms described in 2.4 and code for acquiring local sensors from the CAN bus over Bluetooth. Our query evaluation engine is a modified version of a publicly available Java-based Datalog evaluation engine called IRIS [20]. Our modifications implement the Query Plan Evaluator, which executes the decision tree returned by the Query Optimizer. The local sensor acquisition code is 14,084 lines of code, and the query processing code, including optimization and plan evaluation, is 6,639 lines.

The cloud sensor acquisition component of CARLOG accesses a cloud service front-end we implemented. This front-end supports access to a variety of cloud IDB-predicates: the curvature of the road, whether it is a highway or not, the current weather information, a list of traffic incidents near the current location, the speed limit on the current road, whether the vehicle is close to an intersection or not, the current real-time average traffic speed, and a list of nearby landmarks including gas stations (and associated gas prices). Our cloud front-end aggregates information from several other cloud services; map information is provided by Open Street Map (OSM [63]), weather information by Yahoo Weather Feed [58], gas prices by MyGasFeed [58], traffic information by Bing Traffic [19], and place-of-interest and current traffic speed information by Google [60]. The cloud front-end is about 700 lines of PHP code.

Methodology and Datasets. To demonstrate some of the features of CARLOG, we illustrate results from an actual in-vehicle experiment.
However, in order to be able to accurately compare CARLOG's optimization algorithms against other alternatives, we use trace analysis. For this analysis, we collected 40 CAN sensors (sampled at the nominal frequency, which can be up to 100Hz for some sensors), together with all the cloud information discussed above retrieved continuously, from 10 drivers over 3 months. When collecting these readings, we also recorded the latency of accessing the sensors and cloud information. Our dataset has nearly 2GB of sensor readings, obtained by driving nearly 2,000 miles in different areas. We use this dataset to evaluate CARLOG as described below.

Figure 2.5—Events detected by CARLOG and by Naive

Event Definitions. To evaluate CARLOG, we created different Datalog rules that cover different driving-related events. Some rules are inspired by existing market apps such as RateMyDriving [108], others by academic research [70, 75], while the rest were derived from our collective driving experience. These include (Figure 2.6): a sudden sharp turn (Sharpturn); speeding in bad weather (SpeedingWeather); a sharp turn in bad weather (SharpTurnWeather); a left turn executed with the right turn indicator on (BadRTurnSignal) and vice versa (BadLTurnSignal), and sharp turn variants of these (BadRSharpTurnSignal and BadLSharpTurnSignal); finding the cheapest gas station within driving range (GasStationOp); a slow left turn (SlowLTurn); tail-gating while driving (Tailgater); several events defined for highway driving at speed (HwySpeeding), or having the wrong turn indicator on the highway (HwyBadRTurnSignal and HwyBadLTurnSignal), or executing a sharp turn on the highway (HwySwerving); a legal turn at an intersection at high speed (FastTurn); driving slowly on a rough road surface (SlowRoughRoad), turning on such a surface (RoughRoadTurn), or driving on a rough road during bad weather (RoughRoadWeather); speeding or a sudden hard brake while passing a traffic light (TrafficSignSpeeding
and TrafficSignHardBrake); finally, executing a turn without activating the turn signal (CarelessTurn).

Many of these event descriptions are, by design, layered. For example, the SharpTurnWeather event uses the SharpTurn rule (Figure 2.6). As discussed before, we expect that programmers will naturally layer event descriptions, because this is a useful form of code reuse. Layering permits sharing of predicates, and allows us to also evaluate multi-query execution and to quantify the benefits of joint optimization of multiple queries. On average, each rule uses 3.6 sensor predicates and 2.3 cloud predicates (cloud predicates are shown in bold in Figure 2.6). The largest and smallest numbers of sensor predicates in a rule are 7 and 2, respectively, and of cloud predicates 4 and 0. Finally, six of these rules use negation.

Figure 2.6—Rules used in our evaluations (Rule Name: Rule Definition):

Sharpturn: SteerWheelAngle(?angle), ABS(?angle) > 30, YawRate(?yaw), GREATER(ABS(?yaw), 15), Intersection(?intersect), NOT(?intersect), Curvature(?curv), LESS(ABS(?curv), 30), LatAcc(?latacc), GREATER(ABS(?latacc), 2)
SpeedingWeather: Weather(?weather), NOT(GoodWeather(?weather)), SpeedLimit(?limit), VehicleSpeed(?speed), LESS(MULTIPLIER(?limit, 1.2), ?speed), GREATER(?speed, 35)
SharpTurnWeather: Weather(?weather), NOT(GoodWeather(?weather)), SharpTurn(?angle, ?yaw, ?latacc, ?intersect, ?curv)
LeftSignalOn: LeftSignal(?signal), COUNT(?signal) > 1
RightSignalOn: RightSignal(?signal), COUNT(?signal) > 1
GoodLTurn: LeftSignalOn(?signal), SteerWheelAngle(?angle), ?angle < -15
GoodRTurn: RightSignalOn(?signal), SteerWheelAngle(?angle), ?angle > 15
BadRTurnSignal: NOT GoodRTurn(?signal, ?angle), RightSignalOn(?signal)
BadLTurnSignal: NOT GoodLTurn(?signal, ?angle), LeftSignalOn(?signal)
GasStationOp: GasStation(?distance), GasPrice(?price, ?avgprice), FuelRate(?fuelrate), FuelLEFT(?fuelleft), ?price < ?avgprice, DIVIDE(?fuelleft, ?fuelrate) > ?distance
BadRSharpTurnSignal: Sharpturn(?angle, ?yaw, ?latacc, ?intersect, ?curv), BadRTurnSignal(?angle, ?signal)
BadLSharpTurnSignal: Sharpturn(?angle, ?yaw, ?latacc, ?intersect, ?curv), BadLTurnSignal(?angle, ?signal)
SlowLTurn: Curvature(?curvature), LESS(ABS(?curvature), 30), VehicleSpeed(?speed), CurrentSpeed(?curSpeed), ?speed < ?curSpeed, Intersection(?intersect), ?intersect = True, LeftSignalOn(?signal)
Tailgater: HwySpeeding(?throttle, ?engine, ?hwy, ?limit, ?speed, ?trac), TrafficIncident(?traffic), TrafficOnWay(?traffic)
HwySpeeding: Throttle(?throttle), ?throttle > 20, EngineSpeed(?engine), ?engine > 180, Highway(?hwy), ?hwy = True, SpeedLimit(?limit), VehicleSpeed(?speed), LESS(MULTIPLIER(?limit, 1.2), ?speed), Traction(?trac), ?trac = True
HwyBadRTurnSignal: HwySwerving(?angle, ?engine, ?hwy, ?limit, ?speed), BadRTurnSignal(?angle, ?signal), TrafficIncident(?traffic), TrafficOnWay(?traffic)
HwyBadLTurnSignal: HwySwerving(?angle, ?engine, ?hwy, ?limit, ?speed), BadLTurnSignal(?angle, ?signal), TrafficIncident(?traffic), TrafficOnWay(?traffic)
HwySwerving: SteerAngle(?angle), ABS(?angle) > 30, EngineSpeed(?engine), ?engine > 180, Highway(?hwy), ?hwy = True, SpeedLimit(?limit), VehicleSpeed(?speed), LESS(MULTIPLIER(?limit, 1.2), ?speed)
FastTurn: SteerAngle(?steer), ABS(?steer) > 90, EngineSpeed(?engine), ?engine > 180, LatAcc(?latacc), GREATER(ABS(?latacc), 2), Intersection(?intersect), ?intersect = True, VehicleSpeed(?speed), ?speed > 15, SpeedLimit(?limit), CurrentSpeed(?curSpeed), GREATER(MULTIPLIER(?curSpeed, 0.4), ?limit)
SlowRoughRoad: RoughRoadMagnitude(?rrm), ?rrm > 180, Traction(?trac), ?trac = True, Brake(?brake), ?brake = True, SteerAngle(?steer), ABS(?steer) > 30, VehicleSpeed(?speed), ?speed < 20, SpeedLimit(?limit), CurrentSpeed(?curSpeed), GREATER(MULTIPLIER(?curSpeed, 0.4), ?limit)
RoughRoadTurn: RoughRoadMagnitude(?rrm), ?rrm > 180, Traction(?trac), ?trac = True, Brake(?brake), ?brake = True, Intersection(?intersect), NOT(?intersect)
RoughRoadWeather: RoughRoadMagnitude(?rrm), ?rrm > 180, Traction(?trac), ?trac = True, Brake(?brake), ?brake = True, Weather(?x), NOT(GoodWeather(?x)), Intersection(?intersect), NOT(?intersect)
CarelessTurn: SteerAngle(?steer), ABS(?steer) > 90, Intersection(?intersect), ?intersect = True, LatAcc(?latacc), GREATER(ABS(?latacc), 2), NOT(RightSignalOn(?right)), NOT(LeftSignalOn(?left))
TrafficSignSpeeding: Intersection(?intersect), ?intersect = True, TrafficSignal(?signal), Close(?signal), LonAcc(?lonacc), ?lonacc > 2, Throttle(?throttle), ?throttle > 20, EngineSpeed(?engine), ?engine > 180, SpeedLimit(?limit), CurrentSpeed(?curSpeed), GREATER(MULTIPLIER(?curSpeed, 0.4), ?limit)
TrafficSignHardBrake: Intersection(?intersect), ?intersect = True, TrafficSignal(?signal), Close(?signal), LonAcc(?lonacc), ?lonacc < -2, HardBrake(?brake), ?brake = True, SpeedLimit(?limit), CurrentSpeed(?curSpeed), GREATER(MULTIPLIER(?curSpeed, 0.4), ?limit)
HeavyDuty: Slope(?slope), ?slope > 0.8, Intersection(?intersect), ?intersect = True, Throttle(?throttle), ?throttle > 20, EngineSpeed(?engine), ?engine > 180, VehicleSpeed(?speed), ?speed < 20

A good example of the use of negation is the definition of the BadRTurnSignal predicate; we have earlier (2.4) motivated the need for negation using this rule.

Comparison for Trace-Driven Evaluation. Our evaluations use 10% of the dataset to compute the predicate probabilities for the 21 rules, and use the remaining 90% of the dataset to evaluate the optimization algorithms. Our evaluation compares CARLOG's latency optimization against several alternatives. A Naive approach always acquires all cloud predicates in parallel during query execution; this represents a simple optimization beyond what a standard Datalog engine would do. A slightly cleverer strategy, Cloud-Parallel, acquires cloud predicates in parallel only when all sensor predicates evaluate to true.
This strategy could be achieved by a programmer re-ordering predicates in rules so that local sensors appear first in rule descriptions (2.1).⁶ Two other approaches consider two different predicate acquisition orders, and employ short-circuited evaluation: Lowest Prob First and Lowest Cost First. In Lowest Prob First, predicates are evaluated in order of increasing predicate probability (as learnt from traces), while in Lowest Cost First, predicates are evaluated in order of increasing predicate cost.

⁶ A variant of Cloud-Parallel can short-circuit computation as predicates are fetched. This is latency-optimal but would send many more cloud requests than necessary. Especially for cloud services that charge per request or by data volume, this might be an undesirable alternative.

Our final two alternatives require some explanation. Some of the information made available by our cloud service is relatively static (e.g., the road map, locations of intersections, etc.), but some information varies with time (e.g., gas prices, current traffic levels, traffic incidents, etc.). We conservatively assume that static information such as maps cannot be completely downloaded onto the phone, not for storage reasons, but because maps are expensive, and it is not clear that developers can afford the up-front costs of getting multi-user licenses for these maps. We believe it is more likely that mapping companies will offer pay-as-you-go services where users can access maps online and pay for the information they access. However, mobile devices may be able to cache relatively static information, and our Naive-Cached strategy first checks the local cache for cloud predicates and acquires in parallel the uncached ones. Finally, Cloud-Parallel Cached applies caching to Cloud-Parallel.

Metrics.
We use two metrics for comparison: the latency ratio is the ratio of the average query response latency of one of the alternative schemes to that of CARLOG, and the event ratio is the ratio of the number of events detected by CARLOG to that detected by one of the alternatives.

2.5.2 CARLOG in Action

Before discussing our trace-based evaluation, we demonstrate the benefits of CARLOG's latency optimizations using results from an actual run of CARLOG during a 40-minute drive (Figure 2.5). During this drive, an Android smartphone was configured with CARLOG and evaluated 6 queries concurrently (TrafficSignSpeeding, CarelessTurn, HwySpeeding, TrafficSignHardBrake, Sharpturn, SlowRoughRoad); these rules collectively invoked 16 sensor predicates and 7 cloud predicates. We applied our scheme with multi-query optimization, since all 6 rules shared at least one predicate with another rule. Each query was evaluated whenever one of its sensor predicates changed. After one evaluation completed, the next commenced when a sensor predicate changed; thus, queries were continuously evaluated. In this experiment, we compare CARLOG with the Naive strategy. During this run, we found that Naive had an average query response time of 899.24ms, but CARLOG's average query response time was only
This experiment is adversarial along many dimensions: it demonstrates a number of concurrent rules, uses many local and cloud sensors, and has a large number of events (nearly 1 per minute). Even under this setting, CARLOG’s benefits are evident. We now explore CARLOG’s performance for a wide range of queries and compare it with other candidate approaches. 2.5.3 Single Query Performance We compare the performance of CARLOG against the other candidate strategies discussed above for each query individually; that is, in these experiments, we assume that only a single query is active at any given point in time. We cannot conduct such comparisons using live experiments on the vehicle, since during each run of the vehicle we can only evaluate a single strategy and different runs may produce different conditions. Instead, we used trace analysis to evaluate our queries for the 7 different strategies described above. Figure 2.7 plots the relationship between latency ratio and event ratio, for 6 of our queries (in what follows, we use queries and rules interchangeably, since in Datalog, a query seeks to establish whether a given rule is true). In this subset, all the rules acquire 3 distinct (but different sets of) cloud predicates. To calibrate these figures, the absolute latency and the number of events detected by CARLOG are shown in Figure 2.8; using these numbers together with the ratios in Figure 2.7, one can obtain absolute values for the latency and events for each strategy. We first note that none of the alternative strategies dominate CARLOG for any of the queries (i.e., none of the points in the figure is in the box defined byx = 1 andy = 1). 
Put differently, CARLOG is strictly 33 Figure 2.7—Performance of single queries with 3 cloud sensors Rule Name SharpTurnWeather SlowLTurn Tailgater HwyBadRTurnSignal HwyBadRTurnSignal FastTurn Latency(ms) 23.3 23.6 17.44 16.31 18.36 8.18 Events 2462 962 1572 1480 1432 1860 Figure 2.8—CARLOG Latency and Event counts better than any other candidate scheme both in terms of latency and in detected events. For some queries, like FastTurn, Lowest Prob First detects more events than CARLOG, but incurs more than twice the latency on average. The reason for this is interesting: very often, Lowest Prob First is faster than CARLOG because it can short-circuit evaluation quicker, so it detects more events. However, when it cannot short- circuit, it may end up acquiring a more expensive predicate which takes longer to acquire. During these times, it can miss events, but on balance detects more events. For other queries, like SlowLTurn and SharpTurnWeather, the Cloud-Parallel alternatives are faster on average because these queries acquire cloud predicates less frequently (this acquisition is short-circuited by sensor predicates) than CARLOG, but when they do the incurred latency which causes them to miss events, resulting in event ratios of between 1.2 and 1.5. The performance of each strategy varies by the query. This is most evident for Naive, where different rules experience a wide range of latency ratios (between 40 and 160) and event ratios (2 to over 4). The same observation holds for other strategies as well, albeit to a less degree. Although all queries in the set acquire 3 distinct cloud predicates, the frequency with which these predicates are evaluated varies widely across rules, resulting in the observed variability. 
Figure 2.9—Single query performance grouped by number of cloud sensors

Simply adding parallelism to cloud predicate acquisition does not provide any benefits; witness the pessimal performance of Naive (there is a discontinuity in the y-axis of Figure 2.7 because of Naive's poor performance). Its 2 orders of magnitude worse performance is consistent with our experimental results described in the previous subsection. Combining short-circuiting with parallel cloud acquisition (Cloud-Parallel) helps significantly; as discussed above, this scheme is sometimes faster than CARLOG. However, its benefits are uneven: for FastTurn, this approach incurs 3× worse latency on average because in this case cloud sensors are acquired more often than with CARLOG, even though their probability of being true may be small.

Caching relatively static cloud predicates improves the performance of Naive and Cloud-Parallel, but not by much. There are two reasons for this. Many rules involve cloud predicates accessing dynamic information (current speed, gas prices, weather, etc.) that cannot be cached. Moreover, since every cloud predicate is calculated with respect to the car's current position, a cached value is associated with a given GPS reading. Because GPS is sampled discretely and can have errors, a cached value is useful only if the cloud predicate is evaluated at exactly the same GPS location, the probability of which is not high. In our experiments, we used "fuzzy" matching of GPS locations: if there is a cached reading from within a radius r of the current location, the cached reading is used instead of acquiring the cloud predicate. The choice of r is a function of the type of cloud predicate: for instance, road curvature can vary beyond 10m.

Figure 2.10—OPT Latency and Event Counts for multiple queries:
Combination  4 Rules  8 Rules  12 Rules  16 Rules  20 Rules
Latency(ms)  32.0     34.3     39.4      45.4      49.2
Events       5332     16768    22300     33836     55898
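The "fuzzy" cache matching just described can be sketched as follows; the helper names, the cache layout, and the cached entry are hypothetical illustrations, not CARLOG's implementation:

```python
from math import radians, sin, cos, asin, sqrt

# Sketch: a cached cloud-predicate value is reused if it was fetched
# within radius r meters of the current GPS fix (hypothetical helper).

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def lookup(cache, lat, lon, r_m):
    """Return the cached value closest to (lat, lon) within r_m, else None."""
    best = None
    for (clat, clon), value in cache.items():
        d = haversine_m(lat, lon, clat, clon)
        if d <= r_m and (best is None or d < best[0]):
            best = (d, value)
    return None if best is None else best[1]

cache = {(34.0205, -118.2856): "speed_limit=35"}   # illustrative entry
print(lookup(cache, 34.0206, -118.2857, r_m=50))   # ~14 m away: cache hit
print(lookup(cache, 34.0300, -118.2856, r_m=50))   # ~1 km away: cache miss
```

A linear scan suffices for illustration; a real cache over many entries would use a spatial index, and r would be chosen per predicate type as described above.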
In our experiments, we used r values from 10m to 1 mile: even so, caching is ineffective.

Paradoxically, Lowest Cost First has consistently higher latency cost than Lowest Prob First, but their event ratios are comparable. Both of these approaches evaluate cloud predicates sequentially with short-circuiting. In general, the costs of cloud sensors are within a small factor of each other, and the lowest-cost cloud predicate is unlikely to have the lowest probability. So, Lowest Prob First does better by accessing the least likely predicate, whose cost, even if higher, reduces the need to access additional cloud predicates most of the time.

Finally, Figure 2.9 depicts the performance of queries grouped by the number of cloud predicates they contain. That is, for a given strategy (say Naive), we average all queries with n cloud sensors for each n = 1, ..., 4, and repeat this procedure across all strategies. This figure re-emphasizes the observation that no strategy dominates CARLOG (except Cloud1 for Cloud-Parallel Cached: all Cloud1 rules are defined with a cacheable cloud sensor, and the cache reduces latency compared to any cloud fetching strategy). However, while Naive and its cached version are pathologically bad, most of the other schemes incur less than 50% additional latency, but CARLOG detects up to 30% more events than these, as shown in the inset in Figure 2.9. While it may seem that some of these alternatives may be competitive, we shall see in the next section that their performance can be worse in realistic settings with multiple queries. Furthermore, 30% fewer events corresponds to missing 500-600 events in some cases, a substantial penalty.

There does not seem to be any monotonicity in performance with respect to the number of cloud predicates: for example, Naive has a higher latency ratio with 1 cloud sensor than with 4.
This is because the probability with which cloud predicates are accessed dictates performance more strongly than the number of cloud predicates. Interestingly, Lowest Cost First, Lowest Prob First, and the Cloud-Parallel variants perform the same as CARLOG for rules with a single cloud predicate. In all of these cases, short-circuiting is employed and the single cloud predicate is invoked at the same time by all three schemes.

2.5.4 Multiple Query Performance

In realistic settings, multiple apps may issue concurrent CARLOG queries. In Section 2.4, we argued that jointly optimizing across multiple queries can provide a lower overall cost. In this subsection, we explore various aspects of CARLOG performance with concurrent queries: the importance of multi-query optimization, the performance hit due to our heuristic, and how performance scales with increasing numbers of rules.

Figure 2.11 depicts this performance, where all results are normalized with respect to a strategy called OPT, for different numbers of concurrent queries. This strategy uses dynamic programming to compute the optimal query execution order for multiple queries, while CARLOG uses the greedy heuristic proposed in Section 2.4. Also, Single OPT uses single-query optimization separately, instead of jointly optimizing across queries. As before, so that absolute latencies and event detections can be recovered from these ratios, Figure 2.10 reports the absolute latencies and events for OPT.

We first note that CARLOG is the closest to OPT amongst all schemes. Because it is a heuristic, CARLOG's multi-query optimization generally has a latency ratio that is off the optimal by about 20-50%, depending on the number of rules. It is unclear if query concurrency in mobile apps will exceed 20, so a latency penalty of at most 50% may be what our heuristic sees in practice. Interestingly, this comes at no change in the event ratio: because OPT latencies are small to begin with, the small increases do not perceptibly affect event detections.
Next, CARLOG's multi-query optimization is essential for performance. Single OPT, which optimizes each query independently, detects half as many events or fewer and incurs up to 3× more latency. In our rule base, each rule shares at least one predicate with at least one other rule, and our multi-query optimization clearly short-circuits evaluation much more effectively than Single OPT.

Other candidate strategies perform worse than CARLOG. Cloud-Parallel has good latency performance compared to OPT and CARLOG, but can miss a third or more of the events. Both Lowest Cost First and Lowest Prob First have latency ratios above 1.5 and event ratios nearing 2. These event ratios suggest that these approaches are unacceptable. Interestingly, unlike in the single-query case, Lowest Cost First performs better than Lowest Prob First in terms of the event ratio, though the two have comparable average latency ratios. We conjecture that the latter scheme more often acquires an expensive cloud sensor first before short-circuiting evaluation, and so is more likely to miss events.

Figure 2.11—Multi-query performance

Finally, the latency and event ratios do not change appreciably with increasing numbers of concurrent queries. For example, Naive's latency ratio lies in the 25-30 range, while Lowest Cost First and Lowest Prob First have latency ratios in the 1.5-2 range. This suggests that each scheme degrades in performance proportionally to the optimal and to CARLOG. Put another way, CARLOG does not scale appreciably worse than the other schemes.

Chapter 3

CARLOC: Precisely Tracking Automobile Position

3.1 Introduction

As mobile devices have proliferated, they have become the de-facto method for estimating the position of automobiles. The built-in GPS receiver in mobile devices provides positioning for navigation, but also for context-awareness; many apps now routinely use vehicle position to suggest nearby services or points of interest.
As elements of autonomous driving start to appear in commercial offerings, the accuracy of vehicle positioning will become much more important. However, it has long been known that smartphone GPS receivers have errors on the order of tens of meters, especially in obstructed urban environments. It is precisely in these environments, unfortunately, where accurate positioning is most necessary because of the density of services or points of interest. An order of magnitude lower positioning error would make it possible to position a vehicle with lane-level accuracy, which will likely enable much more accurate navigation, but also more precise context-awareness in urban environments [89].

Much research (Section 5.2) has explored how to enhance GPS position by fusing information from other sensors, such as laser range-finders and inertial sensors, and from other sources, such as digital maps. Intuitively, maps can be used to constrain vehicle trajectories, inertial sensors can be used for dead reckoning when GPS is unavailable, and laser range-finders can estimate distances to landmarks in the environment, which can then be used to get a position fix.

In this work, we explore two dimensions in this design space that can help significantly improve positioning accuracy. First, we observe that modern automobiles have hundreds of sensors that govern the operation of their internal subsystems, and some of these sensors provide odometry and heading information. These can be used to improve the efficacy of matching a car's location to a digital map, and to model its motion. Second, car sensors can also provide enough information to detect roadway landmarks — roadway features such as potholes or speed bumps. If these can be reliably detected, then the position estimates of other cars at these landmarks can be used to improve a car's position estimate.

Contributions.
In this work, we design and evaluate a system called CARLOC (Section 3.3) that can continuously track the precise position of a vehicle, even in highly obstructed environments. CARLOC uses a collection of techniques: some are inspired by prior work on robot localization, others adapt existing techniques to use car sensors, and some are novel. Specifically, CARLOC uses a non-parametric probabilistic position representation, called a particle filter, as a uniform framework that is able to express various forms of information fusion. CARLOC matches a car's current position estimate to a road segment on a map. This matching, whose accuracy we improve by leveraging the availability of vehicle sensors, can be used to truncate the position uncertainty to within the nominal road width of the matched segment. CARLOC then updates the particle filter using a well-known kinematic model, but uses car sensors to accurately estimate inputs to the kinematic model.

A particularly novel contribution of CARLOC is the ability to enhance position estimates of a vehicle using crowd-sourced position estimates of roadway landmarks. To understand this, suppose a car hits a speed bump. If CARLOC is able to detect the speed bump, then the car's particle filter, at the instant the speed bump is encountered, is a probabilistic representation of the speed bump's position. Suppose N cars pass over the same speed bump; the collection of all their particle filters at the speed bump represents a crowd-sourced collection of position estimates of the speed bump. Intuitively, one expects the distribution described by these crowd-sourced particles to converge to the true location of the speed bump as more and more vehicles contribute to the collection. CARLOC uses this observation and contains novel algorithms to detect three types of roadway landmarks (stop signs, speed bumps, and street corners) and to update particle filters.
Using extensive evaluations (Section 3.4) on roads with varying degrees of satellite obstruction (and therefore varying degrees of GPS availability and accuracy), we show that CARLOC has a mean error of 2.7m on a highly obstructed downtown road, an order of magnitude improvement over commodity GPS, high-precision GPS receivers, differential GPS, and the closest prior work on GPS augmentation using mobile devices. In unobstructed environments, CARLOC's mean position error drops to 1.38m, while in partially obstructed environments, the mean error can vary between 1.1m and 2.2m. CARLOC's position error does not appear to depend on the length of the route, and a relatively small number of landmarks suffices to achieve significant accuracy. More important, each component of our design, and each optimization, contributes significantly to the design.

3.2 Background and Motivation

Positioning accuracy for automobiles. Over the last few years, the use of in-car navigation has increased significantly. This has been driven, in part, by the ubiquitous availability of free navigation apps on mobile devices such as smartphones and tablets. The commodity GPS receivers on these devices (that the navigation apps rely on) can be highly inaccurate in some settings. For example, Figure 3.1 shows GPS readings from a city downtown area, where GPS signal reception is affected significantly by the obstructions caused by tall buildings, a well-known effect sometimes called the urban canyon effect.

To quantify the degree of error in GPS, we obtained smartphone GPS readings from nearly 200 miles of driving, on three different types of roads: Urban roads, e.g., a downtown road surrounded by tall buildings; Shaded roads, e.g., roads covered by trees; and Opensky roads, e.g., highways or major roads having an unobstructed view of the sky. Table 3.1 shows statistics for GPS errors from our traces.
Figure 3.1—Portion of GPS Trace in City Downtown

                    Urban Area   Shaded Area   Opensky Area
Average Error (m)   24.3         15.3          4.7
Error STD (m)       5.5          3.2           1.6

Table 3.1—Measured GPS errors in three different areas

Although we obtain reasonably good GPS location accuracy on open sky roads, the accuracy degrades sharply on shaded and urban roads, with over 15 meters of error on average, and over 90 meters in some cases. This is consistent with other work that has observed similar errors in obstructed environments [22].

Why do we need highly accurate positioning? With these levels of inaccuracy, navigation apps may be led astray, and may give wrong turn-by-turn directions, which can lead to driver confusion. In this work, we ask the question: Is it possible to achieve lane-level positioning accuracy for automobiles even in highly obstructed environments? In North America's interstate system, the nominal lane width is about 12 feet (3.6m), so our question translates to: Is it possible to achieve 3-4m accuracy, in the worst case, in obstructed environments?

Aside from more accurate (and therefore less confusing) navigation, precise positioning of vehicles can have many potential applications. Accurately positioned crowd-sourced detection of road features (e.g., potholes, rough roads, etc.) can help municipalities target roadway improvements. Lane-level traffic flow analysis can help traffic agencies provision roadways; for example, an often clogged right lane might prompt the addition of a dedicated right-turn lane. Moreover, insurance companies can track driver propensity to stay in fast lanes, or track violations of lane occupancy rules (e.g., on some roads, trucks are required to stay in the right lanes).

Possible Responses. One possible response is to hope that future GPS receivers will have enhanced accuracy in highly obstructed settings. As we demonstrate later, expensive GPS devices available on the market today are still susceptible to the urban canyon effect.
This is not surprising, since GPS receivers will, in general, find it difficult to compensate for lack of visibility to satellites or for multipath effects. For this reason, most prior work on precise positioning for automobiles (Section 5.2) has relied on information fusion: combining sensors of other modalities (like LIDAR), or other sources of information (such as digital maps), in order to augment or correct GPS readings.

Our approach. Our work also uses this approach, but with a new twist: we exploit the availability of sensors built into vehicles to improve positioning accuracy.

Modern vehicles are equipped with several hundred physical and virtual (derived from physical) sensors on-board. These sensors provide the instantaneous internal state of all vehicular subsystems. From the industry-standard CAN [6] bus and using the standard On-Board Diagnostics (OBD-II) port on cars, users can, in theory, access most of these sensors, such as: vehicle speed, steering wheel angle, throttle position, transmission lever position, and some inertial sensors [116, 73, 54]. These sensor readings are internally used to control subsystems of the vehicle, such as stability control and engine health monitoring. While many of these sensors are proprietary, several tools [120] have been able to access them through reverse engineering. More recently, Ford and General Motors have made about 20 sensors available through their OpenXC platform and GM Developer Network, respectively, so it is likely that, in the future, such information will be ubiquitously available. In collaboration with a major automotive manufacturer, we have obtained access to many internal car sensors. In this work, we explore whether, and how, these sensors can help precisely position an automobile.
Specifically, we use in-vehicle sensors to improve positioning accuracy in two ways. First, in-vehicle sensors can provide accurate odometry for precise dead reckoning. By contrast, prior work has used GPS-derived speed measurements for dead reckoning. Second, in-vehicle sensors can be used to precisely identify roadway landmarks (a pothole, or a speed bump). In turn, crowd-sourced position estimates of these landmarks can be used to fix a car's position.

3.3 The Design of CARLOC

In this section, we describe the design of CARLOC. We begin with an overview of the overall design, which motivates specific design challenges that are then addressed in subsequent subsections.

3.3.1 Overview of CARLOC

Figure 3.2 depicts the various components of CARLOC. At a high level, CARLOC models the current position of the vehicle probabilistically: intuitively, the position of the vehicle at any point in space is associated with a specific probability. The key idea then is to update or refine this probabilistic representation using information from various sources. At any given point in time, the precise position of the vehicle is obtained by actualizing the probabilistic representation, as described later.

One way that CARLOC updates the probabilistic representation is by using vehicle sensors to obtain the distance traveled and the heading of the car. This approach, often called dead reckoning, is more accurate in CARLOC because of its use of built-in vehicular sensors. These sensors are available at frequencies ranging from 10-100Hz, so they can provide accurate estimates of distance and heading over short timescales and distances.
However, vehicle sensors by themselves are insufficient: sensor errors can, over time, cause position estimates to drift significantly from the true position. A second way to update the probabilistic representation is to periodically obtain GPS readings. These readings are associated with estimates of error, which can be used to tighten the car's position estimates. However, as shown in Section 3.2, GPS errors in obstructed environments can increase the car's position uncertainty.

To overcome GPS errors, one can spatially refine the probabilistic position estimates using digital maps. Intuitively, a car is likely to be off a roadway with a very small or near-zero probability, and map-matching algorithms [83] use this observation to refine position estimates. Car sensors provide accurate estimates of speed and turns, and these can be used to enhance existing map-matching algorithms to increase positioning accuracy.

The last component of CARLOC is based on the observation that roadway landmarks mark consistent positions in the environment that can be exploited to refine the car's probabilistic position estimate. Consider a speed bump on a road: if a car passes over a speed bump, it can refine its own position estimates using the position estimates of other cars when they passed over the same speed bump. This suggests that crowd-sourced position estimates of roadway landmarks can be used to improve a car's position estimates. CARLOC incorporates several novel algorithms that use vehicle sensors to identify roadway landmarks.

In designing these components of CARLOC, we faced the following challenges: Choosing the probabilistic model for position representation, since we needed a representation that would be amenable to update and refinement from a variety of sources of information, including maps, vehicle sensors, and crowd-sourced information.
Selecting a model that correctly represents GPS position uncertainty, so that the vehicle's positioning uncertainty could be appropriately updated. Designing a motion model for the vehicle that uses vehicle sensors to determine how a vehicle's position evolves over time; prior motion models have used GPS readings, but vehicle sensors are available at much higher frequencies than GPS readings. Designing appropriate map-matching algorithms; although map-matching has been studied extensively, existing algorithms rely on frequent GPS updates, which can lead to matching errors in the face of significant GPS errors. Designing algorithms to detect landmarks from vehicle sensors, and to refine position estimates using crowd-sourced location information.

Figure 3.2—CARLOC Design

We discuss each of these challenges in the subsequent sections.

3.3.2 Probabilistic Representation of Position

A vehicle's position estimate has inherent uncertainty due to sensor noise. A common approach to dealing with this uncertainty is to use linearized models and assume Gaussian noise so that a Kalman filter can be applied. As prior work [46] has shown, however, vehicle positioning violates some of these assumptions; specifically, during a turn the a posteriori position distribution is non-Gaussian, and non-linear filtering methods are required to solve the problem.

In this work, we use a well-known non-parametric probabilistic model of position, based on Sequential Monte Carlo (SMC) methods. Our representation is commonly called a particle filter [91, 119]. A particle filter estimates the posterior density of a vehicle's position through predefined Bayesian recursion equations.

Concretely, the current position of the vehicle is represented by a set of particles. Each particle represents a probabilistic state vector, indicating the likelihood that the vehicle is at this position.
Thus, if we have $N$ particles, each particle $i$ is associated with a state vector $v_i$ (which contains its position and its orientation), and a probability or weight $\omega_i$ that determines the likelihood of the vehicle being in that state. At any given instant, the particle filter can be used to estimate the position of the vehicle as a weighted sum of the particles:

$$\hat{v} = \frac{\sum_{i=1}^{N} \omega_i v_i}{\sum_{i=1}^{N} \omega_i}$$

This representation provides a uniform foundation for many of the kinds of fusion we are interested in. For example, vehicle odometry and bearing information can be used to update the $v_i$s, and the associated sensor errors can be used to update the $\omega_i$s. Getting a GPS fix results in re-weighting the particles, and adding map constraints may require removing off-road particles from the filter. Finally, the positions of roadway landmarks can be represented as particle filters, so crowd-sourced landmark updates require merging particle filters (in a manner described later).

3.3.3 Map Matching

Many digital maps represent roads using road segments, which are polyline representations of a road. Map-matching is the process of identifying the road segment corresponding to a given position. Map-matching has been used in prior work (Section 5.2) to improve vehicle positioning, and is in general known to be a hard problem because position errors can lead to errors in map-matching.

CARLOC builds upon a specific piece of prior work [83, 118] that models the map-matching problem as maximum likelihood path estimation on a Hidden Markov Model (HMM). In this work, the states in the HMM are the map-matched road segments, and transitions occur when a vehicle turns from one road onto another. Given a GPS reading, one can estimate, given a model of GPS errors and using Bayes' rule, the posterior observation probability of the car being on a specific road segment.
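The weighted-sum particle estimate of Section 3.3.2 can be made concrete in a few lines. This is a minimal illustrative sketch (the tuple layout and function name are ours, not CARLOC's implementation):

```python
def estimate_position(particles):
    """Weighted mean of particle states: sum(w_i * v_i) / sum(w_i).
    Each particle is ((x, y), weight), with positions in a local ENU frame."""
    total_w = sum(w for _, w in particles)
    x = sum(v[0] * w for v, w in particles) / total_w
    y = sum(v[1] * w for v, w in particles) / total_w
    return (x, y)
```

For example, two equally weighted particles at (0, 0) and (2, 0) yield the estimate (1.0, 0.0).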
One can also estimate a transition probability; namely, at a given instant, and given a GPS reading, the probability that a transition has occurred from one road segment to another. With these observation and transition probabilities, [83] uses the Viterbi algorithm to find the maximum likelihood sequence of states (i.e., the road segments the car has traversed).

The Viterbi algorithm is fast enough that we can run map matching on a mobile device. To optimize the implementation, rather than run map-matching on every GPS update (one per second in our implementation), we do so only when the GPS reading has deviated significantly from the last matched road segment.

CARLOC map-matching enhancements. We have modified the observation probability computations to increase their efficiency and robustness. For efficiency, we use the fact that modern GPS receivers report estimates of error: we then only search road segments that fall within these error bounds. For robustness, we avoid using GPS readings that are inconsistent with the heading (direction of motion) of the car.

Our changes to the transition probability calculations from [83] are more substantial. That work calculates transition probabilities by estimating the travel time from the last known update. One change we make is to use travel distance (as measured from car sensor readings, by integrating the instantaneous speed sensor) instead of travel time. However, travel distance alone is not able to distinguish, in Figure 3.1, whether the car turned left from Road A to Road B or continued straight on Road C, since both outcomes would be equally likely. CARLOC has additional information that can make the estimation more accurate — car sensors that measure turns. Specifically, using the steering wheel angle sensor, we can estimate the change in heading of the car $\theta_{j,k}$ from road segment $j$ to road segment $k$.
We also estimate the difference $l_{j,k}$ between the actual distance traveled and the projected distance traveled on the map (obtained by projecting GPS readings onto the road segments $j$ and $k$). Using these two quantities, we can estimate the transition probability, at a given instant, of the event $\tau_{j,k}$ of the vehicle transitioning from road segment $j$ to road segment $k$ from Bayes' rule ($M$ is the number of road segments considered):

$$P(\tau_{j,k} \mid l_{j,k}, \theta_{j,k}) = \frac{P(\tau_{j,k} \mid l_{j,k})\, P(\tau_{j,k} \mid \theta_{j,k})}{\sum_{m=1}^{M} P(\tau_{j,m} \mid l_{j,m})\, P(\tau_{j,m} \mid \theta_{j,m})} \quad (3.1)$$

If we assume that the difference in angle, and the difference between the projected and traveled distance, are both Gaussian with zero mean and standard deviations $\sigma_a$ and $\sigma_d$ respectively, then this becomes:

$$P(\tau_{j,k} \mid l_{j,k}, \theta_{j,k}) = \frac{\exp\!\left(-0.5\left(\frac{\theta_{j,k}}{\sigma_a}\right)^2 - 0.5\left(\frac{l_{j,k}}{\sigma_d}\right)^2\right)}{\sum_{m=1}^{M} \exp\!\left(-0.5\left(\frac{\theta_{j,m}}{\sigma_a}\right)^2 - 0.5\left(\frac{l_{j,m}}{\sigma_d}\right)^2\right)} \quad (3.2)$$

Using the matched road segments. CARLOC uses the result of map-matching in three ways. First, when it starts up, CARLOC does not have a usable estimate of the vehicle's position, so it projects the GPS reading onto the map-matched road segment as a position estimate.

Second, we use the map-matched road segments to filter erroneous GPS readings. Even though GPS readings come with an associated error, we have found that, especially in obstructed environments, the associated error bounds can underestimate the actual error. To filter out these erroneous readings, we first project the GPS reading onto the map-matched road segment, then filter out readings whose projected distance to the road segment is greater than the nominal road width. In the map we use, OpenStreetMap (OSM) [63], road segments have associated types such as residential or highway. We make conservative assumptions about the number of lanes in each type (e.g., 2 for residential and 6 for highway, in each direction), then use the nominal lane width to compute road widths. If the GPS location does not fall within the road, we declare it invalid.
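Equation 3.2 amounts to a normalization of per-segment Gaussian scores over the M candidate segments. The computation can be sketched as follows (variable names are ours; sigma_a and sigma_d are the standard deviations from the text):

```python
import math

def transition_probs(dtheta, dlen, sigma_a, sigma_d):
    """Equation 3.2: for each candidate segment, score the heading
    difference (dtheta) and distance difference (dlen) under zero-mean
    Gaussians, then normalize the scores so they sum to 1."""
    scores = [math.exp(-0.5 * (t / sigma_a) ** 2 - 0.5 * (l / sigma_d) ** 2)
              for t, l in zip(dtheta, dlen)]
    total = sum(scores)
    return [s / total for s in scores]
```

A segment whose heading and distance differences are both near zero receives the highest probability, which matches the intuition behind the turn-sensor enhancement.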
We also use another optimization. For some roads, OSM depicts them using one road segment; for others, two. If the projected GPS reading is closer to the road segment that runs against the car's heading, we drop that reading.

Finally, we use map-matching to update the weights on the particle filter. Intuitively, for each particle, let $x$ be the projected distance of the particle from the outer edge of the map-matched road segment. For this calculation, we use the conservative road width estimate described in the previous paragraph. If $x > 0$ (the particle falls outside the road segment), we re-weight its probability inversely with $x$. Specifically, the new weight is calculated to be $\frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{x^2}{2\sigma^2}\right)$, where $\sigma^2$ is the variance of all the particles' projected distances to the road segment.

Figure 3.3—Kinematics of lateral vehicle motion
Figure 3.4—Multi-lane Stop Sign
Figure 3.5—Street Corner Illustration

3.3.4 Motion Model

A motion model captures how the pose (position and orientation) of a car (or any moving object) evolves with time. Formally, the pose consists of 5 components, 3 of which estimate position (latitude, longitude and altitude) and 2 of which measure orientation (heading or yaw, and pitch). In this work, we focus on modeling motion in a 2-dimensional plane; extending to 3 dimensions is left to future work. Changes in position and orientation can be measured using GPS, but GPS measurements have error, and are sampled relatively infrequently (once a second). Instead, CARLOC exploits the availability of car sensors to dead-reckon the car's pose. Dead reckoning requires the ability to estimate displacements and to estimate changes in heading.

Estimating displacement. In theory, one can estimate displacement using the vehicle speed and acceleration and simple kinematic equations.
Thus, if $x_t$ represents the vehicle's position vector at time $t$, then we can write $x_t = x_0 + v_0 t + \frac{1}{2} a_0 t^2$, where $v_0$ represents the velocity and $a_0$ the acceleration. In vehicles, the speed sensor can be sampled at a relatively high rate (10Hz), and if we assume constant acceleration between two samples of the speed sensor, the kinematic equation to update position becomes $x_t = x_{t-1} + \frac{1}{2}(v_t + v_{t-1})\,\Delta t$, where $\Delta t$ represents the sampling interval.¹

Estimating change in heading. To estimate changes in heading, CARLOC also uses vehicle sensors. Modern vehicles expose two sensors that can help estimate changes in heading: the steering wheel angle and the yaw rate sensor. However, correctly estimating a change in heading needs to take the vehicle steering design into account. Vehicle steering follows the Ackermann geometry [7], which describes how turns are effected by steering. A simplified version of the kinematics of lateral motion resulting from the steering system design is shown in Figure 3.3 [107]. In this figure, a turn angle of $\delta$ on the front wheel results in an effective change in heading $\theta$ for the center of mass of the vehicle; $\theta$ consists of two components, $\psi$ and $\beta$. As [107] shows, $\beta$ (the slip angle) can be estimated using vehicle geometry and estimates of $\delta$ and $\psi$. To estimate $\delta$, we use the steering wheel angle sensor reading from the vehicle and the empirical observation that there is a linear relationship between the steering wheel angle and the actual wheel angle $\delta$. To estimate $\psi$, we continuously integrate the vehicle yaw rate sensor.

¹ Rather than using a global geodetic coordinate system (e.g., latitude and longitude), we convert all poses to a local geodetic system, East-North-Up (ENU [48]). This ignores the earth's curvature, but is easier to model, and has been used in the vehicular positioning literature [94, 42]. We omit the details of this conversion.
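Combining the trapezoidal displacement update with an integrated heading change gives a dead-reckoning step of roughly the following form. This is a simplified, noise-free sketch in a local ENU frame (function name and interface are ours, not CARLOC's code):

```python
import math

def dead_reckon(x, y, heading, v_prev, v_curr, dpsi, dt):
    """One dead-reckoning step: displacement d = 0.5*(v_t + v_{t-1})*dt
    (constant acceleration between speed samples), heading advanced by
    the integrated yaw-rate change dpsi (radians). Heading is measured
    clockwise from north, so east = d*sin(h), north = d*cos(h)."""
    d = 0.5 * (v_prev + v_curr) * dt
    heading += dpsi
    return x + d * math.sin(heading), y + d * math.cos(heading), heading
```

In CARLOC this update is applied per particle, with Gaussian noise added to each particle's speed and heading inputs as described below.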
However, we have found that errors in the yaw rate sensor can accumulate over time, so we correct for these errors by using filtered readings of the GPS bearing. Our filtering uses three steps. First, we only take GPS readings that are consistent with map-matching (as described in Section 3.3.3). Next, we check whether the distance between successive valid GPS readings is comparable to (within 10% of) the distance traveled as reported by the vehicle sensors. Finally, we check whether the resulting bearing computed from the successive readings is consistent with (within 5% of) the heading change computed from the yaw rate sensor. If so, we use the GPS bearing to estimate $\psi$. Because the yaw rate sensor is sampled at a higher frequency than GPS, and because GPS can often be inaccurate, GPS bearing corrections occur infrequently relative to the calculations of $\psi$ using the yaw rate sensor.

Updating the particle filter. In practice, the kinematics calculations can be affected by noise. Our particle filter representation is able to account for sensor noise as follows. Recall that each particle in the particle filter is associated with a pose vector $x$ and a weight $\omega$. When the vehicle moves, we update each particle's pose vector using the displacement and heading change calculations discussed above. To account for sensor noise, we assume that each particle's pose is independently affected by Gaussian noise in the speed and car sensors. We use nominal noise estimates for these sensors from the manufacturer datasheets.

3.3.5 Location Update

Map-matching provides coarse location corrections by removing off-road particles. The motion model can provide fine-grained and accurate updates to particle locations, but over small spatio-temporal scales; at larger time scales, sensor errors can accumulate. As we show experimentally, these two methods alone do not achieve high positioning accuracy. So, CARLOC also uses GPS readings to update the particle filter.
Specifically, CARLOC uses GPS readings determined to be valid by map-matching (Section 3.3.3). Each GPS reading is associated with an accuracy range [24, 23], and the error distribution of GPS readings can be well approximated by a Rayleigh distribution [100]. When we obtain a valid GPS reading, we update each particle's weight according to the Rayleigh distribution, based on the particle's distance to the GPS-reported location. Intuitively, particles that are far from the reported GPS location are assigned a lower weight or likelihood. When a vehicle is stopped, we might obtain multiple readings at the same location: in this case, we aggregate the error reported by those readings before re-weighting the particles.

We also apply several standard transformations to the particle filter. Recall that particles represent samples of the positional probability distribution. As particle weights are updated, the particles need to be resampled occasionally to improve the probabilistic estimates. The resampling process adheres to the Sampling Importance Re-sampling (SIR) algorithm, resampling only when the effective number of particles $N_{eff}$ falls below a threshold $N_{th}$. Assuming each particle has a (normalized) weight $\omega_i$, $N_{eff} = \frac{1}{\sum_i \omega_i^2}$ [18, 47]. We set $N_{th}$ to $\frac{2}{3}N$, where $N$ is the number of particles. Moreover, an incorrect resample can cause particle diversity loss, so we also occasionally draw samples from the GPS position distribution, an approach called sensor resetting [87].

3.3.6 Crowd-sourced Landmark Positions

Given that GPS availability in obstructed urban environments is known to be poor, CARLOC uses an additional, novel positioning enhancement: crowd-sourced landmarks.
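The $N_{eff}$ test and SIR resampling step described in Section 3.3.5 can be sketched as follows (a simplified sketch; weights are assumed normalized, and systematic-resampling variants are omitted):

```python
import random

def n_eff(weights):
    """Effective particle count for normalized weights: 1 / sum(w_i^2)."""
    return 1.0 / sum(w * w for w in weights)

def maybe_resample(particles, weights):
    """Resample with replacement, probability proportional to weight,
    only when N_eff drops below the threshold N_th = (2/3) * N."""
    n = len(particles)
    if n_eff(weights) >= (2.0 / 3.0) * n:
        return particles, weights
    new = random.choices(particles, weights=weights, k=n)
    return new, [1.0 / n] * n
```

With uniform weights, N_eff equals N and no resampling occurs; as the weights degenerate onto a few particles, N_eff shrinks toward 1 and resampling is triggered.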
Figure 3.6—Stop Sign Landmark Detection (brake active, vehicle speed, throttle position, shifter position, and engine speed vs. time)
Figure 3.7—Street Corner Landmark Detection (lateral acceleration, yaw rate, and steering wheel angle vs. time)
Figure 3.8—Speed Bump Landmark Detection (vehicle speed, vertical acceleration, and rough road magnitude vs. time)

Suppose a car hits a speed bump. If CARLOC is able to detect the speed bump, then the car's particle filter at the instant the speed bump is encountered is a probabilistic representation of the speed bump's position. Suppose N cars pass over the same speed bump; the collection of all their particle filters at the speed bump represents a crowd-sourced collection of position estimates of the speed bump. Intuitively, one expects the distribution described by these crowd-sourced particles to converge to the true location of the speed bump as more and more vehicles contribute to the collection. Finally, a speed bump is just one instance of a roadway landmark: this discussion applies to other roadway landmarks such as stop signs and street corners (at intersections).

CARLOC uses this observation to improve positioning accuracy. When a car detects a roadway landmark, it can check to see if crowd-sourced particles are available for the landmark. (These particles can be maintained in a cloud database and made available through a cloud service. To minimize network latency, relevant particle clouds can be pre-fetched before reaching a landmark. The detailed design of this service is beyond the scope of this thesis.) If they are available, the car can resample its particle filter from a set of particles that includes both the crowd-sourced particles for the landmark and its own current particle filter.
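The convergence intuition can be illustrated with a toy one-dimensional simulation; all names and noise values here are illustrative assumptions, not CARLOC parameters. Each car contributes a particle cloud centered on its own noisy position estimate, and the mean of the pooled cloud tightens as more cars contribute.

```python
import random
import statistics

def crowd_estimate(true_pos, n_cars, car_sigma, particles_per_car=50):
    """Pool the particle clouds of n_cars vehicles passing one landmark
    (1-D toy model) and return the crowd's position estimate.
    car_sigma models each car's positioning error; the 0.5 m per-particle
    spread is an arbitrary illustrative value."""
    cloud = []
    for _ in range(n_cars):
        bias = random.gauss(0.0, car_sigma)          # this car's position error
        cloud.extend(bias + random.gauss(0.0, 0.5)   # its particle cloud
                     for _ in range(particles_per_car))
    return true_pos + statistics.mean(cloud)
```

With car_sigma of 3 m, a single car's estimate is typically a few meters off, while the pooled estimate over hundreds of cars falls well under a meter, matching the intuition that the crowd-sourced distribution converges to the true landmark location.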
This approach poses two challenges: (a) How can vehicles detect roadway landmarks? (b) How should a car's particle filter be updated using the crowd-sourced particles? We discuss answers to these questions for three types of roadway landmarks below. Our detection algorithms use vehicle sensors to achieve accurate landmark detection.

Stop Signs. At a stop sign, there is usually a line drawn on the roadway surface. Drivers are supposed to stop at the line before proceeding into the intersection. Of course, not all drivers stop exactly at the line. CARLOC leverages the wisdom of the crowd: if most drivers stop at or near the line, their combined position distributions will be an accurate estimate of the average behavior of drivers when encountering a stop sign (e.g., stopping just a little before the stop sign). We make this intuition more precise below.

Detection: Our detection algorithm is based on the following observation: to leave the stop sign and enter the intersection, the driver usually releases the brake and steps on the gas pedal, as a result of which the engine speed increases, one or more gear shifts may occur, and the vehicle speed increases. To detect this, CARLOC continuously samples the following vehicle sensors: Brake Active, Vehicle Speed, Throttle Position, Shifter Position, and Engine Speed. Figure 3.6 shows the timeseries of these sensors at a stop sign and pictorially depicts the algorithm: on the timeseries of each sensor, the algorithm applies a sliding window and attempts to discover a window that contains a brake pedal release, followed by a sharp increase in throttle position, followed by an increase in engine speed. The particle filter P of the car at the time when there is a discontinuity in the vehicle speed timeseries (i.e., the time when the speed increases suddenly) marks an estimate of the location of the stop sign line.
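A much simplified sketch of this sliding-window detection follows. The window length and the throttle, RPM, and speed thresholds are illustrative placeholders, not CARLOC's tuned values, and the practical refinements discussed next (database lookups, roll-throughs, queued stops) are omitted.

```python
def detect_stop_sign_departure(samples, window=20):
    """Scan a sliding window over synchronized samples, each a dict with
    keys 'brake' (bool), 'throttle' (%), 'rpm', and 'speed' (mph).
    Return the index of the speed discontinuity that marks leaving the
    stop line, or None if no departure pattern is found."""
    for start in range(len(samples) - window):
        win = samples[start:start + window]
        brake_release = any(a["brake"] and not b["brake"]
                            for a, b in zip(win, win[1:]))
        throttle_up = win[-1]["throttle"] - win[0]["throttle"] > 10  # illustrative
        rpm_up = win[-1]["rpm"] - win[0]["rpm"] > 300                # illustrative
        if brake_release and throttle_up and rpm_up:
            # The stop-line estimate is where the speed first jumps.
            for i, (a, b) in enumerate(zip(win, win[1:])):
                if b["speed"] - a["speed"] > 1.0:
                    return start + i + 1
    return None
```

In CARLOC, the particle filter snapshot taken at the returned sample index would be the candidate estimate of the stop-line position.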
Once a car computes P, it can store P in a cloud service (discussed below) for later retrieval by other cars. The precise algorithm is more complicated than this, since it has to take into account many practical constraints. First, for any window that satisfies these features, we look up the current position in an online database of stop signs [63] and our accumulated stop sign database, and only use the particle filter if it is found in a database. This eliminates false detections caused by a car stopping and then starting, say, after a delivery. This database is currently incomplete, but with time we expect its coverage to improve; for additional coverage, CARLOC uses the other landmarks discussed below. Second, drivers may release the brake and roll through the stop sign, to account for which CARLOC uses a slightly larger window. Third, drivers may stop multiple times before reaching the stop sign line, since they may be queued up behind other cars. This behavior manifests itself as a sequence of detected windows, and we use the last window in the sequence. Finally, we use crowd-sourcing to disambiguate traffic lights from stop signs: if some car traces don't stop at an intersection but others do, that intersection has a traffic light, not a stop sign. We use this same technique to improve our stop sign database coverage: if every car stops near an intersection, that indicates the existence of a stop sign there.

Particle Storage and Update. When the particle filter P is uploaded to the cloud service, that service performs a processing step. Figure 3.4 shows a multi-lane road scenario, in which the cloud service can aggregate data from cars stopping in each lane. The cloud service employs a clustering algorithm to cluster the particles based on a lane-width threshold; the resulting number of clusters determines the number of lanes on the road. Figure 3.4 shows two such clusters.
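The text does not specify the cloud service's clustering algorithm; a minimal one-dimensional greedy version, assuming a typical 3.5 m lane width and operating on each particle's lateral offset across the road, might look like this:

```python
LANE_WIDTH = 3.5  # meters; a typical lane width, assumed rather than CARLOC's threshold

def cluster_by_lane(offsets):
    """Group particle offsets (meters measured across the road) into
    clusters whose spread stays under the lane-width threshold; the
    number of clusters then estimates the number of lanes. A single
    greedy pass over the sorted offsets, for illustration only."""
    clusters = []
    for off in sorted(offsets):
        # Start a new cluster when this offset is at least a lane width
        # away from the current cluster's smallest member.
        if clusters and off - clusters[-1][0] < LANE_WIDTH:
            clusters[-1].append(off)
        else:
            clusters.append([off])
    return clusters
```

For the two-lane scenario of Figure 3.4, particles near each stop line would fall into two such clusters, one per lane.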
Now, suppose a car A wishes to update its particle filter when it reaches the stop sign. It downloads the cluster of particles closest to its current estimated position. These particles, together with its own, are then re-sampled with probability proportional to the weight of each particle. Thus, more important particles are likely to be selected during re-sampling. These re-sampled particles then constitute the updated particle filter for A. In this way, if the crowd-sourced cluster of particles has converged to the actual location, the new particle filter will be closer to the car's precise position.

Street Corners. Street corners can be detected when a car performs a right turn.

Detection: At a right turn, the timeseries of the steering wheel angle sensor peaks (Figure 3.7). So, any maximum in the steering wheel angle (SWA) that is larger than some high threshold (90 degrees in our implementation) can indicate a street corner. We have found that this peak is fairly robust to a variety of turning behaviors. For example, even when drivers turn from the rightmost lane into a non-rightmost lane, this peak is observed. To disambiguate other turns (for example, lane shifts at low speed, which might also trigger the threshold), we correlate with two other car sensors: the lateral acceleration and the yaw rate (Figure 3.7); these sensors are noisy, so we use box smoothing [103] to smooth their time series. During a turn, these three sensors all exhibit peaks, so CARLOC applies a sliding window to find a window in the trace that contains peaks of all three signals, and then computes the average of the times of occurrence of these peaks. The particle filter P of the car at this time is used as an estimate of the position of the street corner and is crowd-sourced (uploaded to the cloud service). However, to disambiguate right turns at places other
than intersections, we use an online map of intersections to determine whether the car's current estimated position is near an intersection; if not, P is not uploaded.

Particle Storage and Update: When P is uploaded to the cloud service, the service filters outlier particles to improve accuracy. From road segment data in an online map, the location of the mid-point of the intersection can be determined, and CARLOC assumes that particles in P outside the shaded cone (between OA and OB) in Figure 3.5 are unlikely to represent the car's true position, and assigns those particles very low weight. When a car A reaches the street corner, it downloads the crowd-sourced particles from the cloud service and applies an update procedure identical to that for the stop sign. If a car stops before turning, then a particle update is performed only once (at the stop sign), rather than at both locations, since the latter alternative could lose particle diversity (i.e., we might not retain a good sample of the underlying distribution of positions).

Speed Bumps. The last landmark we consider in this work is the speed bump. CARLOC is careful to disambiguate potholes from speed bumps and uses several car sensors for this purpose.

Detection: The detection algorithm is best illustrated using Figure 3.8, which shows the measurements of three sensors recorded from a car traversing a speed bump: Rough Road Magnitude (RRM), Vehicle Speed, and (inertial) Vertical Acceleration. As it approaches the bump, the car slows down and the vertical acceleration sensor exhibits a peak. These two features are used to flag a potential speed bump. CARLOC then monitors the RRM sensor and performs peak detection within a window whose scale (shown as a red box in the figure) approximates the vehicle's wheelbase. If two peaks of vertical acceleration (shown as black dots) and an increase in RRM are sensed, we determine that a speed bump has been observed.
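The two-peak test can be sketched as follows. This reduces the algorithm to vertical-acceleration peaks separated by at most a wheelbase-scaled window, omitting the slow-down and RRM-increase checks, and the threshold values are illustrative assumptions.

```python
def detect_speed_bump(vert_acc, wheelbase_samples, peak_thresh):
    """Look for two vertical-acceleration peaks (front and rear axle
    hitting the bump) separated by at most wheelbase_samples readings.
    Return the index of the first peak, or None. Simplified sketch:
    CARLOC additionally checks the slow-down and the RRM increase."""
    peaks = [i for i in range(1, len(vert_acc) - 1)
             if vert_acc[i] > peak_thresh
             and vert_acc[i] >= vert_acc[i - 1]
             and vert_acc[i] >= vert_acc[i + 1]]
    for i in peaks:
        for j in peaks:
            if 0 < j - i <= wheelbase_samples:
                return i
    return None
```

As with the other landmarks, the particle filter snapshot at the returned (first-peak) index would serve as the position estimate that gets crowd-sourced.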
CARLOC deems the car's particle filter P at the first peak to be an estimate of the speed bump's location. As with other landmarks, P is uploaded and stored in the cloud service.

Particle storage and update. CARLOC clusters speed bump particles as it does for stop sign particles. Updating another car's particle filter follows the same re-sampling procedure as discussed above.

3.4 Evaluation

In this section, we assess the performance of CARLOC in obstructed and unobstructed environments over different trip lengths, and compare its performance against several alternatives: GPS positioning from commodity GPS receivers, higher-precision GPS receivers, and differential GPS. Our final goal in this section is to assess the efficacy of each of the components in CARLOC: dead reckoning, map-matching, and landmark-based position augmentation.

3.4.1 Methodology

Experimental Setup. Our evaluations are based on multiple traces collected by two different drivers over routes with different characteristics from the perspective of a GPS receiver: obstructed routes in a downtown urban canyon, unobstructed routes with a view of the open sky at all points, and partially-obstructed routes with obstructed sky visibility in some locations. Each trace consists of several vehicle sensor readings obtained through our car sensing platform [54, 73]. The sensor readings we collect are already available through the CAN bus, as described earlier, and our sensor collection software can sustain continuous collection of nearly 40 car sensors, of which we use only a subset. On each route, we collect multiple traces at different times of day, which helps avoid bias caused by a specific traffic pattern. In addition, we use a subset of these traces to obtain crowd-sourced landmark positions, and evaluate the remaining traces using these crowd-sourced positions.
Our results are averaged over the evaluations on these remaining traces, and we quantify the variability of our results in terms of quartiles.

Figure 3.9—Static Measurement Setup (Google satellite view and ground view)
Figure 3.10—CARLOC and GPS Comparison in Downtown (CARLOC, SPGPS, HPGPS, RTK, DGPS)
Figure 3.11—Map Pin Points Comparison (distance to pin point, in meters, for CARLOC, HPGPS, SPGPS, RTK, DGPS)

Comparisons. Across all routes, we compare CARLOC's performance against that of a GPS receiver on a Google Nexus 5 (labeled SPGPS on our graphs). In addition, on some routes, we also compare against an expensive (>$200) high-precision GPS receiver, the ublox NEO-7P (HPGPS). Finally, we also use a companion rover receiver, the ublox LEA-6T, to obtain Differential GPS (DGPS [114]) positions as well as Real-Time Kinematic (RTK [110]) based positioning. It is likely that future cars will be able to incorporate such high-precision receivers. The rover receiver estimates position in one of two modes [94]: the first mode returns precise location values, while in its DGPS mode the rover receiver utilizes corrections from a known base station. Both modes update location at 1Hz. Our LEA-6T rover devices use ublox firmware 7.03 and the NEO-7P devices use version 1.00. For RTK and DGPS, we use a publicly available NTRIP caster base station within 10 miles of all our experiments. We obtained access to the base station's NTRIP stream and position information through the UNAVCO consortium [126]. This station is equipped with an advanced Ashtech antenna mounted on a hilltop. On our rover devices (LEA-6T), ubx-formatted GPS measurements are captured using u-center, the ublox driver for the device, with settings obtained directly from the developers of RTKLIB [134, 110].

Metrics and Ground Truth. Our measure of CARLOC performance is positioning error, measured as the distance between CARLOC's reported position and ground truth.
Obtaining ground truth for positioning is extremely hard in some places, and we resort to three approaches.

Closed-loop routes. On our partially obstructed routes, we start the route at a well-marked location (an empty metered parking spot) and return to the same location. Our measure of accuracy is then the difference between the start position and the end position as reported by CARLOC (or any of the candidate comparison algorithms). The start position is calculated using our high-precision GPS receiver. This approach has also been used by prior work [65, 121].

High-precision GPS receiver. On our unobstructed routes, we also continuously collect readings from the high-precision GPS receiver and use these as ground truth. The reported accuracy of these devices is within one meter. This method enables us to determine accuracy along the entire route instead of just at the endpoints.

Fiducials. On our obstructed routes, as we show below, the accuracy of the high-precision receiver is not sufficient. So, we resort to using fiducials in the environment. As we drive on the obstructed route, we stop at several easily-recognizable points (or fiducials) along the right side of the car, e.g., a sidewalk ramp exit or a mailbox. When we stop, we record the current timestamp and take an image from the passenger side that captures the distance from the right side of the car to the fiducial. We then use these images to look up Google's satellite views, pin down the points recorded in the image, and then use the locations of those points (as obtained from Google Maps) as ground truth (Figure 3.9). To validate our fiducial-based ground truth collection, we applied the same methodology in several locations with an unobstructed view of the sky, and at these locations we also used the high-precision reference GPS receiver to record position. We find that our pinned-down points are within 1.2m of the reference receiver.
3.4.2 CARLOC on an Obstructed Route

Our obstructed route is a 2-mile loop in a downtown area surrounded by tall buildings (Figure 3.10). We collected a total of 12 traces along this loop using two different drivers at different times of day. On this route, we use only 4 landmarks: the 4 street corners shown in the figure. Of our traces, we use 8 to obtain crowd-sourced particles and 4 to evaluate CARLOC and compare it against the high-precision GPS receiver, DGPS, and RTK. As described above, we use the fiducial-based ground-truthing technique, and we collected 15 different points as ground truth.

Figure 3.11 shows the accuracy of each of these techniques with respect to the ground truth. The error bars represent the 25th and 75th percentiles in our measurements. CARLOC has an average error of about 2.7m, with the smallest error being 0.6m and the largest 4.9m. Surprisingly, all of the alternatives have one to two orders of magnitude higher error. The high-precision receiver has an average error of 19.4m (min 7.7m, max 44m). The smartphone has an average/min/max error of 16m/1.2m/40.2m. This is slightly better than our high-precision GPS; the difference could either be within the margin of experimental error, or smartphones may have better dead-reckoning or GPS signal processing algorithms in software. Moreover, both DGPS and RTK achieve very poor performance, with an average error of 75m and a worst-case error of 200m.

Figure 3.10 depicts these results visually. In the figure on the left, it is evident that DGPS and RTK measurements span the entire area covered by the two-mile loop. In the figure on the right, the superior performance of CARLOC vis-a-vis the high-precision receiver and the smartphone is visually evident. Why this pathological performance for the other alternatives?
Clearly, both the high-precision receiver and the smartphone GPS suffer from the urban canyon effect: the inability to see enough satellites affects their ability to get good position fixes. They are able to achieve reasonable performance primarily because of their use of dead-reckoning filters. To understand why even the high-performance GPS receiver does not perform well, we examined the dilution of precision (DOP [85]) reported by the receiver. This measure of the variability of GPS signals is much higher downtown (DTSHDOP) than along an unobstructed road with a clear view of the open sky (OSSHDOP) (Figure 3.12). This suggests that satellite availability and multipath effects degrade the performance of the high-performance receiver.

To further understand this performance, we discussed our findings with the developers of RTKLib [110], a well-known open-source software package for processing GPS data that has been reported to achieve centimeter accuracy in many situations; its developer forum includes many experts in GPS positioning performance. Our discussions corroborated the findings above: both DGPS and RTKLib suffer from lower satellite availability and multipath effects. It is well known that differential GPS cannot fix errors caused by differing multipath environments at the rover and base station [44]. We have also verified that satellite availability is lower in our downtown traces (the rover receiver sees about 5 satellites) than in a trace from an unobstructed area (6-8 satellites). Finally, we notice far fewer location updates (once every 2.6s) from DGPS and RTK compared to using these receivers on readings from an unobstructed trace (once every 1.1s). The developers of RTKlib believe these pathological errors can be reduced with careful, route-specific parameter tuning, and we have left this to future work.

Figure 3.12—HDOP comparison for the Open Sky Area (OSSHDOP) and Downtown (DTSHDOP)
3.4.3 CARLOC on an Unobstructed Route

We now quantify CARLOC's performance along a 4.4km unobstructed route (Figure 3.13). Along this route, we treat the readings from the high-precision GPS receiver as ground truth, since, in this setting, its claimed error is less than 1m [123]. We collected 8 traces along this route and used 5 of them to obtain crowd-sourced landmarks and 3 to evaluate accuracy. We obtained a total of 13 landmarks: 6 stop signs, 4 speed bumps, and 3 street corners.

Table 3.2—CARLOC and smartphone GPS distance to high-precision GPS: statistics
                               Mean (m)   Min (m)   Max (m)
  Complete Comparison            2.27       0.14      4.51
  Static Comparison              1.38       0.16      2.42
  Smartphone GPS Comparison      4.19       0.14     15.83

Table 3.2 shows CARLOC's performance. When the error is computed across the entire trace (the complete comparison), CARLOC averages a 2.27m error, with a minimum error of 0.14m and a maximum error of 4.51m. However, we believe this number is a little misleading, because the high-precision GPS receiver updates its position at a frequency of 1Hz, whereas CARLOC can track position changes every tenth of a second. So, these two readings can be off, on average, by half of a tenth of a second in the worst case. In that time, a car traveling at 45mph covers 1m, which can add to the error estimate.

To avoid this bias, Table 3.2 also reports error computed only at points where the car has stopped along the route (car sensors can tell us when the car has stopped). In this case (the static comparison), CARLOC has an average error of 1.38m and a maximum error of 2.4m. Finally, the smartphone GPS has a 3× higher worst-case error compared to CARLOC, and an almost 2× higher average error. Figure 3.13 pictorially depicts these results.
Although the three alternatives are not visually distinguishable at full scale, the inset in Figure 3.13 shows a part of the trace where the errors in the smartphone GPS are much more evident: CARLOC's map-matching and motion model are able to compensate for inaccurate GPS, as a result of which CARLOC much more closely follows the high-precision GPS receiver.

Figure 3.13—CARLOC, high-precision GPS (HPGPS), and smartphone GPS (SPGPS) in the Open Sky Area

3.4.4 CARLOC on Partially-Obstructed Routes

In this section, we explore the performance of CARLOC on routes of different lengths. We also quantify the benefits of each of the components of CARLOC, and examine more closely how crowd-sourced landmarks help improve positioning. It is hard to find unobstructed routes in metropolitan areas, so all of these routes are partially obstructed. Because some of the routes are longer than our unobstructed route, it was logistically difficult to collect ground truth using fiducials (which require significant manual effort), so we used the closed-loop accuracy estimation technique described above.

Figure 3.14—Start-End Distance for different strategies (DR, DR MAP, DR GPS, DR MAP GPS, CARLOC) across route lengths of 3.4km to 9.2km

Our routes range in length from 3.4km to 9.2km. For each route, we collect 15 traces, 10 of which we use for extracting crowd-sourced landmark locations and 5 for evaluation. Along the longest of these routes, we have 19 landmarks: 5 stop signs, 10 street corners, and 4 speed bumps.

Error as a function of distance. Over different distances, CARLOC is able to achieve mean errors between 1.2m and 2.2m (Figure 3.14). The maximum errors for these 5 routes are 1.73m, 1.67m, 2.57m, 3.0m, and 2.7m respectively. This is highly encouraging and suggests that lane-level precision might be achievable in most settings.
Although there is a slight increase in error as a function of length, we believe this is largely due to differences in characteristics along the longer routes, rather than an increasing trend in CARLOC error as a function of distance. Indeed, there is no fundamental reason to believe that CARLOC error should increase with distance: any error accumulation with distance from, say, the motion model would be corrected by GPS position fixes and crowd-sourced landmarks. The mean error along the longer 9.2km route is slightly lower than along the 7.6km route primarily because the longer route has two additional stop sign landmarks, which improve CARLOC performance.

Contribution of different components of CARLOC. Using these traces, we are able to quantify the contribution of different components of CARLOC. Our motion model permits pure dead-reckoning (DR). To this, we consider adding location updates from GPS (DR GPS). We also consider an alternative strategy that uses map-matching with dead-reckoning (DR MAP). Our final alternative strategy adds both map-matching and location updates to dead-reckoning (DR MAP GPS).

Figure 3.15—Start-End Distance as a function of the Number of Landmarks
Figure 3.16—Start-End Distance as a function of the Number of Learning Traces
Figure 3.17—Map-matching and Motion Model Optimization Performance (ALT Motion Model, ALT Map Matching, CARLOC)

Figure 3.14 shows the mean error using closed-loop error estimation for these different strategies and route lengths. For DR, we can clearly see an increase in error with trip length, as expected: dead-reckoning error accumulates with distance. From 15.9m for the 3.4km trip to 40.9m for the 9.2km trip, the errors grow almost linearly.
When we add map-matching, the Start-End distance for DR MAP drops to 10-26m. When GPS fixes are added to dead-reckoning, the errors also drop and appear to be independent of trace length. One would expect DR GPS to have error characteristics similar to those of the GPS receiver, since every position fix biases the location estimate towards the GPS position; DR GPS has errors of 6-10m, consistent with this expectation. With the further addition of map-matching, the error of DR MAP GPS reduces to 3-4m across all route lengths. Finally, the addition of landmarks brings the error down to 1.2-2.2m, as discussed above. These results suggest that each component plays a significant part in reducing the overall error, validating the design of CARLOC.

The Role of Landmarks. Why do landmarks perform well? How does the accuracy vary as a function of the number of landmarks along a route, or the number of crowd-sourced particle filters used?

How accurate are our landmarks? To measure the accuracy of our landmarks, we collected multiple traces on an unobstructed route on which we also collected measurements from our high-precision GPS device (which we use as ground truth). This route covers a total of 9 right turns, 5 speed bumps, and 4 stop signs.

We then ran our landmark detection algorithms over all traces. For each landmark in a trace, our algorithms determine the time t at which the landmark was detected (Section 3.3.6). In our traces, we find the nearest position reading from the high-precision GPS receiver within a small time interval Δt around t. (The high-precision GPS samples at 1Hz, which is too coarse since in 1 second the car can move several meters, so we restrict our search to a smaller window.) In our evaluations, we used a Δt of 100ms for speed bumps and street corners and 300ms for stop signs, since the car stays longer at stop signs.
Then, we define the landmark error as the difference between the estimated position of the landmark and that reported by the high-performance GPS receiver. Figure 3.18 shows the statistics of landmark error for each type of landmark. The average error for all three landmark types is around 2 meters. Stop signs have a minimum error of 0.6m and a maximum of 2.9m, which is encouraging. For street corners, the maximum error reaches 4m, mainly because 2 street corners are somewhat obstructed, so the results are biased by incorrect GPS readings; the minimum error can be as low as 0.3m. Speed bump errors can also reach around 4.1m, caused by a single outlier where the speed bump has a pothole just before it, which increases the error. However, the speed bump's minimum error can be as low as 0.15m, which is very close to the center of the particle cloud. These results explain why using crowd-sourced particles can improve positioning accuracy, but also indicate room for improvement in our detection algorithms.

How accurate are the landmark detection algorithms? We described our detection algorithms in Section 3.3. We run them over all the traces we have and compare against the ground truth we collected. We summarize the precision and recall for each algorithm in Table 3.3. For stop signs, we apply crowd-sourcing to eliminate outlier cases, like traffic lights. For speed bumps, a severe pothole in our traces is always treated as a speed bump by our algorithm, which brings down the overall accuracy. For right turns, we have near-optimal performance; the detection algorithm is triggered by peaks of multiple sensors and we also verify against map information, so good accuracy is expected.
Table 3.3—Landmark Detection Accuracy
               Precision   Recall
  Stop Sign      0.89       0.95
  Speed Bump     0.83       0.88
  Right Turn     0.97       0.98

Figure 3.18—Landmark Error Statistics (distance from HPGPS to particle-cloud center, in meters, for street corners, stop signs, and speed bumps)

How many landmarks are enough? Figure 3.15 shows the CARLOC closed-loop error on our 5.3km loop. On this loop there are 15 landmarks, and the figure plots mean CARLOC error (and the 25th and 75th percentiles) as a function of the number of landmarks used to calculate position. As expected, the error decreases as more landmarks are used in estimating position. However, beyond about 12 landmarks there is no further improvement, suggesting that a relatively small number of landmarks along a route might be sufficient to achieve high accuracy.

What degree of crowd-sourcing is necessary? For the same route, Figure 3.16 shows the accuracy of using all 15 landmarks, but computed from an increasing number of traces. This shows what degree of crowd-sourcing is necessary. Interestingly, using 2 traces has higher error than using one trace. We found that this is because, in the second trace, the driver did not fully stop at the stop sign, inducing an error in landmark estimation. Beyond that, the error drops linearly with the number of crowd-sourced traces. This behavior is expected, but we do not yet see a flattening, suggesting that CARLOC's accuracy can be improved further with a higher degree of crowd-sourcing. We have left this to future work.

Figure 3.19—Map-matching and Motion Model Issues on a Map View (CARLOC, ALT Matching, HPGPS, ALT Motion)

3.4.5 Benefits of Optimizations

CARLOC optimizes map-matching (Section 3.3.3) and the motion model (Section 3.3.4). In this section, we quantify the benefits of these optimizations.

CARLOC enhances a previously proposed map-matching algorithm [83] to include car-sensor readings that provide distance and changes in heading.
The availability of these readings motivates a more sophisticated transitional probability calculation for the Hidden Markov Model, and CARLOC incorporates this. What if CARLOC had used the original transitional probability calculation? Figure 3.17 shows how this ALT Map Matching strategy performs against CARLOC in our closed-loop tests. This strategy exhibits significantly higher error, between 3m and 3.5m across all our route lengths, suggesting that the optimization is definitely beneficial. Figure 3.19 illustrates one situation where the optimization helps: CARLOC is able to track the turn correctly, but ALT Map Matching, because its transition probability calculation does not incorporate turns, is unable to do so.

Second, CARLOC incorporates an advanced motion model that computes the slip angle. Instead, it could simply have used the heading change computed from the yaw rate (Figure 3.3), which would have been a coarse approximation of the slip angle. This alternative motion model, which we call ALT Motion Model, also has higher error than CARLOC and error comparable to ALT Map Matching (Figure 3.17). The reason for this inaccuracy is also depicted in Figure 3.19, which shows how ALT Motion Model is not able to track turns.

Chapter 4

MediaScope: Selective On-Demand Media Retrieval from Mobile Devices

4.1 Introduction

Cameras on mobile devices have given rise to significant sharing of media sensor data (photos and videos). Users upload visual media to online social networks like Facebook [2], as well as to dedicated sharing sites like Flickr [3] and Instagram [4]. However, these uploads are often not immediate. Camera sensors on mobile devices have been increasing in both image and video resolution far faster than cellular network capacity.
More importantly, in response to growing demand and consequent contention for wireless spectrum, cellular data providers have imposed data usage limits, which disincentivize immediate photo uploading and create an availability gap (the time between when a photo or video is taken and when it is uploaded). This availability gap can be on the order of several days. If media data were available immediately, it could enable scenarios where there is a need for recent (or fresh) information. Consider the following scenario: users at a mall or some other location take pictures and video of some event (e.g., an accident or altercation). An investigative team that wants visual evidence of the event could then search or browse images on a photo-sharing service such as Flickr to retrieve evidence in a timely fashion.

To bridge this availability gap, and to enable this and other missed opportunities, we consider a novel capability for on-demand retrieval of images from mobile devices. Specifically, we develop a system called MediaScope that permits concurrent geometric queries in the feature space of media objects that may be distributed across several mobile devices. Wireless bandwidth is limited and can vary, concurrent queries might compete for limited bandwidth, and query results can be large (since images are large and many images can match a query). These factors can result in unacceptably long query response times, which can impede usability. In some cases, applications might need lower query response times for correctness; in the scenario above, time may be of the essence in taking action (e.g., apprehending suspects). MediaScope addresses this challenge using an approach that trades off query completeness¹ while meeting timeliness requirements (measured by the time between the issue of the query and when a query result is returned). It incorporates a novel credit-assignment scheme that is used to weight queries as well as differentiate query results by their "importance".
A novel credit- and timeliness-aware scheduling algorithm that also adapts to wireless bandwidth variability ensures that query completeness is optimized. A second important challenge is to enable accurate yet computationally feasible feature extraction. MediaScope addresses this challenge by finding sweet spots in the trade-off between accuracy and computational cost for extracting features from images and from video frames.

An evaluation of MediaScope on a complete prototype (Section 4.4) shows that MediaScope achieves upwards of 75% query completeness even in adversarial settings. For the query mixes we have experimented with, this completeness rate is near-optimal; an omniscient scheduler that is aware of future query arrivals does not outperform MediaScope. Furthermore, MediaScope's performance differs significantly from that of other scheduling algorithms that lack one of its features, namely timeliness-awareness, credit-awareness, and adaptivity to varying bandwidth. Finally, we find that most overheads associated with MediaScope components are moderate, suggesting that timeliness bounds within 10 s are achievable.

¹ Completeness is intuitively defined as the proportion of desired images uploaded before the timeliness bound; see Section 4.4.1.

4.2 Motivation and Challenges

In this section, we first motivate the need for on-demand image retrieval, then describe our approach and illustrate the challenges facing on-demand image retrieval.

Motivation. With the increasing penetration of mobile devices with high-resolution imaging sensors, point-and-shoot cameras and camcorders are increasingly being replaced by mobile devices for taking photos and videos. This trend is being accelerated by an increase in the resolution of image sensors to the point where mobile devices have image resolutions comparable to cameras. The availability of high-resolution image sensors has prompted users to share images and videos more pervasively.
In addition to giving birth to services like Instagram, it has prompted many image and video sharing sites to develop business strategies centered on mobile devices. Beyond sharing media (photos and videos) with one's social network, this development has also been societally beneficial, e.g., in crime-fighting [1]. On the flip side, wireless bandwidth is scarce and has not been able to keep up with increases in mobile device usage. As a result, cellular operators limit data usage on mobile devices; standard data plans come with fairly restrictive data usage budgets per month (on the order of 1-2 GB). Users are increasingly becoming aware of the implications of these limits and of how media transmission can cause them to exceed their monthly data usage limits.

These conflicting trends will, we posit, lead to an availability gap for media. The availability gap for a media item (an image or a video) is defined as the time between when the item is taken and when it is shared (uploaded to a sharing site). We believe that users will be increasingly reluctant to use cellular networks to share media, preferring instead to wait for available WiFi. Indeed, this availability gap already exists. On Flickr [3], we randomly selected 40 popular Flickr users and extracted about 50 recent photos from each user's gallery. We then plotted the CDF of the difference between the day when each photo was taken and the day when it was uploaded (the photo's availability gap). As Figure 4.1 shows, more than 50% of the photos have an availability gap of greater than 10 days!
We conjecture that this availability gap will persist with mobile devices: existing data plan usage limits ensure that users treat these devices as similar to traditional cameras or camcorders from the perspective of video and photo upload (i.e., as a device with no network connectivity)². Furthermore, mobile device storage has been increasing to the point where many photos and videos can be stored; a 64 GB iPad can hold 10,000 photos, which can take several months to upload with a 2 GB/month data plan. This availability gap represents a missed opportunity for societal or commercial uses. For example:

1. Consider a robbery in a mall in an area not covered by security cameras. The mall's security staff would like to be able to access and retrieve images from the mobile devices of users who happened to be in the mall on that day, in order to establish the identity of the thief.

2. A sportswriter is writing a report on a sporting event and would like to be able to include a perfect picture of a play (e.g., a catch or a dunk). The newspaper's staff photographer happened to be obscured when the play happened, so the sportswriter would like to be able to retrieve images from the mobile devices of users who happened to be attending the event.

The focus of this work is the exploration of a capability for bridging the availability gap by enabling media retrieval in a manner suggested by the above examples.

Approach. To bridge the availability gap, so that, in the scenarios above, the security staff or the sportswriter can obtain recent information, we explore on-demand retrieval of images from a collection of mobile devices. These devices belong to users who have chosen to participate and provide images on demand.
In return, participating users may be incentivized by explicit micropayments; we do not discuss incentives and privacy issues in this work, but note that our approach is an instance of crowd-sensing built on Medusa [106], which has explored these issues in the context of crowd-sensing. In what follows, we use the term participating device to mean a mobile device whose user has chosen to participate in image retrieval.

² This may not be the only reason an availability gap exists today or is likely to persist — users may wait to process photos on a desktop or laptop computer before uploading, for example.

Our approach is inspired by image search techniques that support similarity searches in image feature space. There is a large body of literature that seeks to support content-based image retrieval by defining appropriate features that characterize images: ImgSeek [71], CEDD [34] (Color and Edge Directivity Descriptor), FCTH [36] (Fuzzy Color and Texture Histogram), Auto Color Correlogram [68], and JCD [35] (Joint Composite Descriptor). Generally, these algorithms are based on two kinds of features: image color and texture description. Taking CEDD as an example: for the texture space, CEDD sub-divides an image into blocks; for each image block, it sub-divides the block into 4 sub-blocks, calculates the average gray level of each sub-block, and then computes the directional area (vertical, horizontal, 45-degree, 135-degree, and non-directional) from the sub-block parameters for this image block; thus, an image is divided into 6 regions by texture unit. For the color space, it projects the color space into HSV (Hue, Saturation, Value) channels, then divides each channel into several preset areas using coordinate logic filters (CLF), so that the color space is divided into 24 sub-regions. A histogram is drawn over these parameters, so that 24 × 6 = 144 coefficients (ranging in value from 0 to 7) are output as the CEDD feature vector.
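As a concrete illustration of similarity search over such feature vectors, the sketch below ranks feature vectors by a Tanimoto-based distance and returns the nearest ones. This is only an illustration: the distance formula shown is one common definition of Tanimoto distance, the function names are ours, and the toy vectors are length 3 rather than the 144 coefficients of a real CEDD vector.

```python
# Illustrative sketch (not the thesis's implementation): brute-force
# nearest-neighbor search over CEDD-style feature vectors using one common
# definition of the Tanimoto distance.

def tanimoto_distance(a, b):
    """1 - (a.b / (a.a + b.b - a.b)); 0 for identical vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    denom = sum(x * x for x in a) + sum(y * y for y in b) - dot
    return 1.0 - (dot / denom if denom else 1.0)

def top_k(target, features, k):
    """Return the k feature vectors nearest to `target`."""
    return sorted(features, key=lambda f: tanimoto_distance(target, f))[:k]

# Toy example: the vector identical to the target ranks first.
print(top_k([1, 0, 0], [[0, 1, 0], [2, 0, 0], [1, 0, 0]], 2))
```

In a real deployment the vectors would be the 144-coefficient CEDD histograms described above, and the server would keep object identifiers alongside each vector.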
Finally, the image processing community has experimented with a wide variety of measures of similarity. Of these, we pick a popular measure [34, 36, 92], the Tanimoto distance [113], which satisfies the properties for a metric space [90]. Since CEDD is popularly used and widely accepted, we have developed our system (Section 4.3) using this algorithm. From our perspective, this algorithm has one important property: for a single image, CEDD's feature vector consists of 144 coefficients which require 54 bytes, a negligible fraction of the size of a compressed image, often 1-2 MB. Moreover, CEDD is computationally lightweight relative to other feature extraction mechanisms, but has comparable accuracy. CEDD is defined for images; as we describe later, we are also able to derive features for video. More generally, our approach is agnostic to the specific choice of features and similarity definition; other feature extraction algorithms can be used, so long as the features are compact relative to image sizes. On top of this image similarity search primitive, we explore a query interface that supports several queries:

Top-K. Given an image, this query outputs the K most similar images among all images from all available participating devices. The special case of K = 1 is the typical content-based image retrieval query that has been explored in the image processing literature [71, 139, 13]. Our sportswriter could use this query by presenting an image of a specific play (e.g., a dunk) taken, say, at a different game.

Spanners. This query returns a collection of images whose features span the feature space of all images from all participating devices. The mall security staff in the example above can use this query to understand the range of images available on participating devices before deciding to drill down and issue more specific queries (top-k) with retrieved images.
Clusters. This query returns representatives from natural clusters in the feature space and can effectively identify the most common "topics" among images from participating mobile devices. This query can also help in both scenarios to give the querier an overview of the different classes of images on participating devices, prior to drill-down (as above).

Our approach can be extended to support other kinds of queries (e.g., enclosing hulls), as described later. While Top-K queries have been used with images, we are not aware of other work that has proposed using Spanner and Cluster queries with images. Finally, our use of these queries in conjunction with a database of images spread over mobile devices is, to our knowledge, novel.

Our queries can be qualified by several attributes. Attributes like location and time constrain the set of objects that are considered in computing the query result; the location attribute constrains media objects to those taken in the vicinity of a certain location, and the time attribute specifies when the corresponding photo or video was taken. Users may also specify a freshness attribute, which constrains the age of media objects selected to compute the query result. The last, but perhaps the most interesting, attribute is timeliness. Timeliness is a property of the query result, and specifies a time bound within which to return the result(s) of a query: if a query is issued at time T and the timeliness constraint is t, the system attempts to return query results before T + t. The timeliness attribute is motivated by the surveillance example discussed above; the security team might want results within a bounded time to take follow-up action. It may also be bounded by interactivity concerns: since wireless bandwidth is limited and can vary, images may be large, and multiple concurrent queries may compete for bandwidth, query response times can be large and may vary significantly.

Challenges. Our approach faces several challenges.
The first of these is feature extraction: it turns out that feature extraction algorithms for large images encounter memory limits even on high-end modern smartphones. Equally challenging is feature extraction for video, since the frame rate of video can overwhelm many feature extraction algorithms.

The more central challenge in our work is the design of a system that satisfies the timeliness constraints of multiple concurrent queries. In general, this is a hard problem, primarily because of the bandwidth limitations of wireless mobile devices; the aggregate query result may need a throughput that can overwhelm the available bandwidth. There are two approaches to solving this problem. The first is admission control, whereby we restrict the number of concurrent queries such that the timeliness constraints can always be met. We did not consider this solution because of the variability and unpredictability of wireless bandwidth availability. The second approach is to deliver maximal information within the given timeliness bound, while adapting to variability in available bandwidth. Our work chooses the second approach, in the context of which there is an interesting challenge: what does it mean to deliver maximal information?

In the next section, we describe the design of a system called MediaScope that addresses these challenges.

4.3 MediaScope

MediaScope is a system that supports timely similarity-based queries on media objects stored on mobile devices. We begin by describing the MediaScope architecture and then discuss the design and implementation of each component.
[Figure 4.1—CDF of Flickr Photo Availability Gap; marked point: (10.5 days, 50%)] [Figure 4.2—System Architecture Work Flow] [Figure 4.3—Illustration of Concurrent Queries]

4.3.1 Architecture and Overview

MediaScope is conceptually partitioned across a cloud component called MSCloud and another component called MSMobile that runs on mobile devices. This partitioned design leverages the computation and storage in clouds to support geometric queries on the feature space; mobile devices provide sensing and storage for media objects. These components interact as follows (Figure 4.2). Whenever participants take photos or videos, the Feature Extractor component of MSMobile continuously extracts, in the background, image and video features and uploads them to the MSCloudDB. Users (e.g., a security officer or a sportswriter) pose queries to MSCloud using a standard web interface, possibly on a mobile device. These queries are processed by the MSCloudQ query processing engine, which uses the features stored in the MSCloudDB to compute the query results. The results of the queries identify the media objects that need to be retrieved from individual mobile devices. In some cases, a media object may already have been retrieved as a result of an earlier query; query results are also cached in MSCloudDB in order to optimize retrieval. MSCloudQ coordinates with an Object Uploader component on MSMobile in order to retrieve query results. Once a query's timeliness bound expires, MSCloudQ terminates the corresponding Object Uploader and returns retrieved results.

MediaScope uses a publicly available crowd-sensing platform called Medusa [106]. Medusa was originally designed to permit human users to pose crowd-sensing tasks. MediaScope's retrieval of features and media objects from mobile devices leverages Medusa's support for "sensing" stored information on these devices.
To enable programmed interaction between MSCloud and Medusa, and to support MediaScope's timeliness requirements, we made several modifications to the Medusa platform (discussed later). MediaScope thus provides a high-level abstraction (queries on media objects) that hides many of the details of object retrieval from users. In the following subsections, we describe the two most challenging aspects of MediaScope's design: support for concurrent queries, a functionality distributed between MSCloudQ and the Object Uploader; and feature extraction. We conclude with a brief description of other design and implementation issues.

4.3.2 Design: Concurrent Queries

The most challenging component of MediaScope is support for concurrent queries — MSCloudQ may receive one or more queries while other queries are being processed. In MediaScope, the result of a query is a list of media objects to be retrieved from a subset of the participating phones. Recall that each query has a timeliness constraint. In the presence of concurrent queries, MediaScope may need to upload all media objects before their timeliness bounds expire. In general, this may be difficult to achieve because wireless bandwidth can vary significantly over time, resulting in variable upload times for images. To illustrate this, consider the example of two concurrent queries Q1 and Q2 that arrive at the same time for media objects distributed across two phones P1 and P2 in Figure 4.3. Also, assume that both queries have a timeliness bound of 5 seconds, each phone can upload 1 object per second, and all objects are of the same size. Suppose Q1 needs to retrieve 3 objects from P1 and 2 objects from P2, while Q2 needs to retrieve 4 objects from P1 and 3 from P2. Under these circumstances, it is not possible to satisfy the timeliness requirements of one of the two queries.
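The arithmetic behind this example can be made explicit. The small sketch below (illustrative names, not part of MediaScope) checks whether the combined per-phone demand of all concurrent queries fits within the timeliness bound at a given upload rate.

```python
# Feasibility check for the example: each phone uploads 1 object/second and
# both queries have a 5 s timeliness bound. The demand on a phone is the sum
# of objects all concurrent queries need from it.
def feasible(demands_per_phone, rate_per_sec, bound_sec):
    capacity = rate_per_sec * bound_sec
    return all(d <= capacity for d in demands_per_phone)

# P1 must serve 3 (Q1) + 4 (Q2) = 7 objects; P2 must serve 2 + 3 = 5.
print(feasible([3 + 4, 2 + 3], rate_per_sec=1, bound_sec=5))  # False: P1 needs 7 s
```

P1's 7 objects cannot be uploaded in 5 seconds, so at least one query must miss its bound.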
In practice, the problem is much harder because there may be more than two concurrent queries and many more participating devices, queries can arrive at different times, media objects may have different sizes, and available wireless bandwidth can vary dynamically. Especially because of this last reason, admission control cannot guarantee that all timeliness constraints are met, or may severely underutilize the available bandwidth.

MediaScope uses a different approach, trading off query completeness for timeliness. In MediaScope, not all query results may be uploaded within the timeliness bound; the challenge is to upload the most relevant results so as to maximize the amount of information retrieved. In doing this, there are two challenges: how to differentiate between queries, and how to prioritize media items for retrieval in order to maximize the information retrieved.

MediaScope addresses these two challenges using a credit assignment mechanism. Each query is assigned, by MediaScope, a number of credits. The credits assigned to a query reflect the importance of that query and result in proportionally more information being uploaded for that query (and therefore proportionally higher completeness of the query result). The specific credit assignment mechanism for queries is beyond the scope of this thesis, but MediaScope may use monetary incentives (e.g., users who pay more get more credits for their queries) or other approaches in order to assign credits to queries.

If a query is assigned n credits, it divides up these credits among its results (media objects) in a way that reflects the importance of each object to the query. The key intuition here is that, for a given query, the importance of a result object to the query can be determined by the feature-space geometry. For example, consider a query Q which attempts to retrieve the two nearest photos in feature space to a given photo c.
If the resulting photos a and b are 20 units and 80 units distant from c in feature space, respectively, and Q has been assigned 100 credits, then a and b receive 80 and 20 credits respectively (in inverse proportion to their distances to c). MediaScope uses this intuition to define credit assignment to result objects. Once objects have been assigned credits, object uploading is prioritized by credit in order to maximize the total credit retrieved across all concurrent queries.

In what follows, we first describe the queries that MediaScope supports and how credits are assigned for each query. We then describe MediaScope's credit-based object scheduling technique and discuss its optimality.

4.3.2.1 Queries and Credit Assignment

Our current instantiation of MediaScope supports three qualitatively different queries: nearest neighbor, clusters, and spanners. Below, we discuss the design of the query engine MSCloudQ and how credits are assigned to query results. Recall that for each query, users can specify time, location, and freshness attributes: before performing each of the queries described below, MSCloudQ filters all the feature vectors stored in MSCloudDB to select feature vectors that match these attributes. In our description of the queries below, we assume that this filtering step has been applied.

k-Nearest Neighbors. For this query, the user supplies a target image and the server attempts to return the k nearest images (from photos or videos) in feature space to the target. The implementation of this query is straightforward: it is possible to build indexes to optimize the search for the k nearest neighbors, but our current implementation uses a brute-force approach. Credit assignment for this query attempts to capture the relative importance of the query results. Thus, the assignment of credits to each result is proportional to its similarity to the target image.
For the i-th result, let s_i be the similarity measure to the target; we then assign credits to the i-th result proportional to p_i = 1 − s_i / Σ_j s_j.

K-Clustering. The second class of queries supported by MSCloudQ is based on clustering in feature space. This query takes as input the number k as well as a type parameter which describes the expected result and can have one of two values:

Cluster Representative. With this parameter, the result contains k images, one from each cluster. For each cluster, our algorithm selects as the representative the image whose distance to the centroid of the cluster is least. Intuitively, this query type identifies different "topics" among images taken by participating users.

Common Interest. With this parameter, the result includes images from the cluster which contains objects belonging to the most users. Thus, if the i-th cluster contains images from u_i users, the query returns images from the cluster for which u_i is largest. Intuitively, this query identifies the cluster that represents the maximal common interest between participating users. Within the selected cluster, the query returns one image for each participating user, selecting the image of the user that is closest to the centroid of the cluster.

These queries can be implemented by any standard algorithm for k-means clustering. For the cluster representative type of query, we assign credits proportional to the size of the cluster. Thus, if the j-th cluster's size is c_j, the credit assigned to the image selected from cluster j is proportional to c_j / Σ_j c_j. For the common interest type of query, we assign a credit to each selected image that is inversely proportional to the image's distance from the centroid of the cluster. The credit assignment is similar to that for k nearest neighbors above.

Spanner. The third, and qualitatively different, query that MediaScope supports is based on spanning the feature space.
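The k-nearest-neighbor proportion above can be sketched directly; scaled to a query's credit budget, it reproduces the two-photo example discussed earlier (distances 20 and 80, budget 100, split 80/20). Function and parameter names here are illustrative, not MediaScope's API.

```python
# Illustrative sketch: split a query's credit budget across its results
# using the proportion p_i = 1 - s_i / sum_j s_j, where s_i is the i-th
# result's distance to the target.
def assign_credits(distances, budget):
    total = sum(distances)
    raw = [1 - s / total for s in distances]
    scale = budget / sum(raw)
    return [r * scale for r in raw]

# Results 20 and 80 units from the target, 100-credit budget:
print(assign_credits([20, 80], 100))  # approximately [80.0, 20.0]
```

Note that for two results this rule coincides with splitting credit in inverse proportion to distance, as in the example in the text.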
The intuition behind the query is to return a collection of images which span the feature space. In computing the spanner, we assume that each user t contributes exactly s_t images, where s_t is derived from the query's timeliness bound and a nominal estimate of the average upload rate from the corresponding mobile device³. Our spanner maximizes the minimum dissimilarity between all pairs.

We now express this problem mathematically. Assume that K_n, the complete graph on n vertices (vertices represent images), has a vertex set V partitioned into C classes V_1, …, V_C (classes represent users). Let v_{i_t} denote vertex i in class V_t. Let e_{i_t j_k} represent the edge connecting v_{i_t} with v_{j_k}. Assume edge e_{i_t j_k} has weight w_{i_t j_k} (where the weight represents the dissimilarity between objects i_t and j_k).

³ As we describe later, the average upload rate is estimated dynamically by MSCloudQ.

Assuming that exactly s_t vertices must be selected from V_t, we need to select a set of vertices so that the minimum edge weight of the selected clique is maximized. This problem can be formulated as a mixed-integer program:

max z
s.t.  z ≤ w_{i_t j_k} y_{i_t j_k}              ∀ i_t, j_k s.t. i_t < j_k    (4.1)
      y_{i_t j_k} ≤ x_{i_t}                    ∀ i_t, j_k s.t. i_t < j_k    (4.2)
      y_{i_t j_k} ≤ x_{j_k}                    ∀ i_t, j_k s.t. i_t < j_k    (4.3)
      x_{i_t} + x_{j_k} − y_{i_t j_k} ≤ 1      ∀ i_t, j_k s.t. i_t < j_k    (4.4)
      Σ_{i_t ∈ V_t} x_{i_t} = s_t              ∀ t = 1, …, C                (4.5)
      x_{i_t} ∈ {0, 1}                         ∀ i_t
      y_{i_t j_k} ∈ {0, 1}                     ∀ i_t, j_k s.t. i_t < j_k

In this mixed-integer program, variable x_{i_t} is the indicator variable for selecting vertex v_{i_t} for the clique. Similarly, variable y_{i_t j_k} is the indicator variable for selecting edge e_{i_t j_k} for the clique. Variable z is used to achieve min_{i_t < j_k} w_{i_t j_k} y_{i_t j_k}. Inequalities 4.2 and 4.3 ensure that edge e_{i_t j_k} is not selected if either vertex i_t or j_k is not selected. Inequality 4.4 guarantees that y_{i_t j_k} is selected if both vertices i_t and j_k are selected. Equation 4.5 ensures that the number of vertices selected from class t is s_t.
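To make the objective of this program concrete, the brute-force sketch below enumerates all per-class selections on a toy instance and returns the selection whose minimum pairwise dissimilarity is largest. This is only an illustration of the objective (it takes exponential time); the function name and the `dist` encoding are our assumptions, not the thesis's code.

```python
# Exhaustive illustration of the max-min spanner objective: pick s_t images
# per user so that the minimum pairwise dissimilarity among the chosen set
# is as large as possible.
from itertools import combinations, product

def max_min_spanner(classes, quotas, dist):
    """classes: list of vertex-id lists, one per user; quotas: s_t per class;
    dist: dict mapping frozenset({u, v}) -> dissimilarity w_{uv}."""
    best, best_set = -1, None
    per_class = [combinations(c, q) for c, q in zip(classes, quotas)]
    for pick in product(*per_class):
        chosen = [v for group in pick for v in group]
        m = min(dist[frozenset(e)] for e in combinations(chosen, 2))
        if m > best:
            best, best_set = m, chosen
    return best, best_set
```

For example, with two users owning images {0, 1} and {2, 3}, quotas of one image each, and cross-user dissimilarities w(0,2)=5, w(0,3)=1, w(1,2)=2, w(1,3)=4, the best selection is {0, 2} with minimum dissimilarity 5.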
The above problem is NP-hard, so we use an O(|V|²) heuristic (Algorithm 2) to solve it. The idea behind this heuristic is to select the set of vertices greedily, i.e., add "qualified" vertices whose minimum-weighted edge to the set selected thus far is maximum. "Qualified" vertices are vertices in classes which have not yet met their constraint, and which hence can still be selected. We deal with the issue of which vertex should be selected first by trying all possible vertices as the first vertex in the set and taking the maximal such set.

Algorithm 2: MAXMIN HEURISTIC
1: Define a list l for storing the best vertex set and a variable max_min for the minimum weighted edge
2: l ← [], max_min ← 0
3: for all i ∈ {1, …, |V|} do
4:    min ← ∞
5:    Define a temporary list lt and lt ← [i]
6:    while a new item is added to lt do
7:        for j ∈ {1, …, |V|} and j ∉ lt do
8:            d(j) ← min_{o ∈ lt} similarity_dist(o, j)
9:        if ∃ a qualified vertex v then
10:           lt.add({v | max d(v)})
11:           temp_min ← d({v | max d(v)})
12:           if temp_min < min then
13:               min ← temp_min
14:   if min > max_min then
15:       max_min ← min
16:       l ← lt
OUTPUT: l and max_min

For this query, intuitively, credit assignment should give more importance to dissimilar images. For the i-th query result, we compute d_i, the average distance from the i-th image to all other images. The credit assigned to this image is proportional to d_i / Σ_j d_j.

Extensibility of MSCloudQ. These are, of course, not the only kinds of geometric queries that can be supported. Developers wishing to extend MSCloudQ by adding new queries can do so quite easily by: (a) defining the query syntax and semantics, (b) implementing the query algorithm, and (c) specifying a proportional credit assignment based on the semantics of the query.

4.3.2.2 Credit-based Scheduling

In general, users can pose concurrent queries to MSCloudQ. Queries may arrive at different times and may overlap to different extents (we say one query overlaps with another when one arrives while the other's results are being retrieved).
Furthermore, different queries may have different timeliness constraints, may retrieve different numbers of objects (e.g., for different values of k, or different sizes of spanners), and the retrieved media objects may be of different sizes (images with different resolutions). In these cases, MSCloudQ needs an algorithm that schedules the retrieval of different objects subject to some desired goal. In MediaScope, this goal is to maximize the total completeness of queries, defined as the sum of the credits of all the uploaded images. To achieve this, recall that MSCloudQ assigns a credit budget to each query based on the importance of that query; then, using the proportions defined above, it assigns credit values to each query result.

To mathematically define the completeness goal, we first introduce some notation. Let Q_i denote the set of media objects that form the result of the i-th query, and let that query's timeliness constraint be d(Q_i). Let g(o) be an indicator variable that denotes whether a media object o is retrieved before d(Q_i). Then, for the i-th query, the total credit for all uploaded media objects is given by:

g(Q_i) = Σ_{o ∈ Q_i} g(o) c(o)

Thus, given a set of concurrent queries Q, the total number of credits retrieved is given by:

c(Q) = Σ_{Q ∈ Q} Σ_{o ∈ Q} g(o) c(o)

Maximizing this quantity is the objective of MediaScope's retrieval scheduling algorithm.
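The total credit c(Q) can be accumulated either query-by-query or phone-by-phone; the toy sketch below (illustrative names and numbers, not MediaScope data) shows that the two groupings agree when an object shared by two queries carries the sum of its per-query credits.

```python
# Toy check: total credit summed over queries equals total credit summed
# over phones, when a shared object's credits add up. Objects are keyed by
# (phone, object-id); all names and values are made up for illustration.
queries = {
    "Q1": {("P1", "o1"): 30, ("P2", "o2"): 20},
    "Q2": {("P1", "o1"): 10, ("P2", "o3"): 40},  # ("P1","o1") is shared
}
by_query = sum(c for res in queries.values() for c in res.values())

per_phone_object = {}
for res in queries.values():
    for obj, c in res.items():
        per_phone_object[obj] = per_phone_object.get(obj, 0) + c
by_phone = sum(per_phone_object.values())

print(by_query, by_phone)  # both 100
```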
[Figure 4.4—Image Resizing Overhead and Tradeoffs: (a) average CEDD execution time per image, (b) average image-resizing time, and (c) average k-means clustering error rate, for resolutions 1632×1224, 1280×960, 1024×768, 960×720, and 816×612]

It turns out that it is possible to decompose this objective into a per-device credit-maximization scheduling algorithm. To see why this is so, let P denote the set of participating devices, and let the k-th device be denoted by p_k. Then, the above credit sum can be written, for concurrent queries Q:

c(Q) = Σ_{Q ∈ Q} Σ_{o ∈ Q} g(o) c(o)
     = Σ_{Q ∈ Q} Σ_{P ∈ P} Σ_{o ∈ P ∩ Q} g(o) c(o)
     = Σ_{P ∈ P} Σ_{o ∈ P} g(o) c(o)

This equality shows that, in order to maximize the total credit retrieved across a set of concurrent queries, c(Q), it suffices to maximize the total credit uploaded by each participating device: Σ_{P ∈ P} c(P). This is true under the following two assumptions: (a) if two different queries retrieve the same object from p_k, then the object needs to be uploaded at most once, and (b) the credit assigned to that object is the sum of the credits allocated by each query to that object.

This finding has a nice property from the systems perspective: it suffices to run a local credit-maximizing scheduler on each participating device in order to achieve the overall objective. In general, local schedulers have the attractive property that they can locally adapt to bandwidth variations without coordinating with MSCloudQ, and need only minimal coordination with MSCloudQ in order to deal with new query arrivals. In MediaScope, the Object Uploader component of MSMobile implements the scheduling algorithm.

An Optimal Scheduler.
We first describe a scheduling algorithm that is optimal under the assumption of fixed file sizes and fixed wireless bandwidth per participating device. Under these assumptions, it is possible to compute the exact upload time t(o) of each object o, which is the same for all objects. If each object's timeliness bound is d(o) (different objects can have different bounds), our goal is to find an uploading sequence such that Σ_o g(o) c(o) is maximized.

First, we may assume that an optimal schedule orders the objects by earliest timeliness bound first. Suppose an optimal schedule does not order objects by earliest timeliness bound first. Then there exist two objects i and j for which d(o_i) > d(o_j) but i is scheduled before j. By switching the order of objects i and j we obtain another optimal schedule.

However, merely scheduling by earliest timeliness bound is not likely to maximize credit. To do this, the algorithm preprocesses the schedule to obtain a set of scheduled objects in the following way. It orders the objects by earliest timeliness bound first. Then, it adds objects to the schedule one right after another as long as each object's finish time does not exceed its timeliness bound. If an object's end time exceeds its timeliness bound, the algorithm removes the object receiving the smallest credit among those objects scheduled thus far (including the current object) and shifts the objects to the right of the removed object left by t(o) to cover the gap. Intuitively, this step maximizes the total credit uploaded: lower-credit objects, regardless of the query they belong to, are replaced. The algorithm then selects the next object in order of timeliness.
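Under the fixed-size, fixed-bandwidth assumption, this preprocessing step can be sketched compactly. This is an illustrative sketch, not the Object Uploader implementation; applied to the worked example in the text (bounds 2, 3, 5; credits 7, 8, 6; t = 2), it keeps the second and third objects for 14 total credits.

```python
# Sketch of the fixed-size schedule preprocessing: take objects in earliest-
# deadline order; when one would miss its bound, drop the smallest-credit
# object scheduled so far (including the new one), shifting the rest left.
def schedule(objects, t):
    """objects: list of (deadline, credit); t: fixed per-object upload time.
    Returns (kept objects, total credit)."""
    S, finish = [], 0
    for dl, cr in sorted(objects):               # earliest bound first
        S.append((dl, cr))
        finish += t
        if finish > dl:                          # current object misses its bound
            S.remove(min(S, key=lambda o: o[1])) # drop smallest credit
            finish -= t                          # later objects shift left by t
    return S, sum(cr for _, cr in S)

print(schedule([(2, 7), (3, 8), (5, 6)], 2))  # keeps (3,8) and (5,6): 14 credits
```

Earlier objects stay feasible after a removal because shifting left only decreases their finish times.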
Algorithm 3: OPTIMAL UPLOADING SCHEDULE
1: Arrange the pending-objects list O by earliest timeliness bound first; schedule S ← []
2: l ← 0
3: for o ← O.first do
4:   S ← S ∪ {o}
5:   O.remove(o)
6:   if l + t(o) ≤ d(o) then
7:     l ← l + t(o)
8:   else
9:     Remove the smallest-credited object in S
10:    Shift all objects to the right of the removed object left by t(o)
OUTPUT: schedule S; upload object S[0]

The following example illustrates this algorithm. Suppose there are 3 queries, each with one result object. Let their respective timeliness bounds be 2, 3, and 5, and the credits they receive be 7, 8, and 6, respectively. Finally, suppose t(o) is 2 time units. The algorithm proceeds in the following way. It schedules the first object initially. Since the second object would not be delivered in a timely manner if scheduled after the first object, and since the second object receives more credit than the first, the first is removed and the second is scheduled from time 0-2. The third object is then scheduled from time 2-4, giving a maximal 14 total credits to the system. This algorithm is a special case of an optimal pseudo-polynomial algorithm discussed below, so we omit a proof of its optimality.

Optimality under different object sizes. If object uploading times are different, the scheduling problem is NP-hard; the simple case of different object sizes with all objects having the same timeliness bound is equivalent to the NP-hard Knapsack problem [57]. We can, however, give the following pseudo-polynomial-time dynamic programming algorithm for this problem. Let S[i, q] be the maximum-credit schedule using only the first i objects, i.e., objects o_1, ..., o_i, taking up q time units. Let s[i, q] be the corresponding credit for such a schedule. Then s[i, q] is defined in the following way:

s[i, q] =
  \begin{cases}
    \max\{\, s[i-1,\, q - t(o_i)] + c(o_i),\; s[i-1,\, q] \,\} & \text{if } q \le d(o_i) \\
    s[i-1,\, q] & \text{if } q > d(o_i),
  \end{cases}
  (4.6)

where the following initial conditions hold: s[0, q] = s[i, q < t(o_1)] = 0.
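Recurrence (4.6) can be realized as a knapsack-style dynamic program over a rolling one-dimensional credit array (a sketch under the stated assumptions; the (t, d, c) tuple layout is assumed for illustration):

```python
def dp_credit(objects):
    """Pseudo-polynomial DP for objects with differing upload times.

    objects: list of (upload_time, timeliness_bound, credit) tuples.
    Returns the maximum total credit of a feasible upload schedule.
    """
    objects = sorted(objects, key=lambda o: o[1])  # earliest bound first
    horizon = objects[-1][1]                       # latest bound, d(o_n)
    s = [0] * (horizon + 1)  # s[q]: best credit achievable in q time units
    for t, d, c in objects:
        # An object may only occupy a slot that finishes by its bound d;
        # iterating q downward ensures each object is used at most once.
        for q in range(min(d, horizon), t - 1, -1):
            s[q] = max(s[q], s[q - t] + c)
    return max(s)
```

Taking the maximum over all time budgets q is a conservative equivalent of reading off the entry for the full horizon d(o_n).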
If s[i-1, q - t(o_i)] + c(o_i) > s[i-1, q] and q ≤ d(o_i), then S[i, q] ← S[i-1, q - t(o_i)] ∪ {o_i}; otherwise, S[i, q] ← S[i-1, q]. The desired output is S[n, d(o_n)] for an input of n objects. The running time of this algorithm is O(n · d(o_n)). The optimality of Algorithm 3 follows from the optimality of this dynamic programming algorithm for the general case [21].

Practical Considerations. In a practical system, the Object Uploader estimates t(o) continuously, and re-computes the schedule after each upload completes, in order to determine the next object to upload. There are two reasons for this. First, t(o) can change because available wireless bandwidth can vary. Second, new queries may arrive at MSCloud; when a query arrives, MSCloud evaluates the query, assigns credits to the query results, and notifies the relevant devices (those which contain one or more result objects). Thus, at a given device, the set of objects to be uploaded can vary dynamically, so the Object Uploader needs to re-evaluate the schedule after every upload. Finally, for large objects, bandwidth variability might cause their timeliness bounds to be violated (e.g., because the available bandwidth became lower than the value that was used to compute the schedule); in this case, the Uploader can abort the in-progress transmission to reduce the bandwidth consumed and thereby trade off query completeness for timeliness. We have left this optimization to future work.

4.3.2.3 Feature extraction on the phone

In MediaScope, feature extraction is performed on the mobile device by the Feature Extractor component of MSMobile. This component extracts features for photos, as well as for images extracted from videos. Even for high-end smartphone platforms, these are nontrivial computation tasks, and some computation-vs.-accuracy trade-offs are required in order to achieve good performance. We now discuss these trade-offs.

Image Feature Extraction.
The Samsung Galaxy S III (a high-end smartphone at the time of writing) can generate images with a native resolution of 3264×2448. At this resolution, our CEDD feature extraction algorithm fails because of lack of memory on the device. One way to overcome this limitation is to resize the image to a smaller size and compute features on the smaller image. As Figure 4.4(a) shows, the time to compute features (averaged over 300 images taken on the Galaxy SIII) reduces significantly at smaller sizes, ranging from 4 s for a resolution about 1/2 the native resolution to about 1 s for 1/4 the native resolution. The cost of the resizing operation itself is about 250 ms, as shown in Figure 4.4(b), roughly independent of the resized image size.

However, computing features on a smaller image trades off accuracy for reduced computation time. To explore this trade-off, we evaluated two queries to see how accuracy varies with resizing. Figure 4.4(c) shows the results for K-means clustering, whose error rate is obtained by dividing the total number of misclassified images by the total number of images. This error rate is less than 5% for a 1280×768 resolution, but jumps to 20% for the 816×612 resolution. The error rate for K-nearest-neighbor queries is defined as the ratio of incorrect images (relative to the full size) selected by feature vectors computed on a resized image to k, averaged over different values of k. In this case, the knee of the error curve occurs somewhere between the resolutions of 1280×960 and 1024×768 (figure omitted for space). Given these results, we use a resizing resolution of 1024×768 in our implementation as the best trade-off between computation time and accuracy. (Footnote: MSCloudQ also needs to implement the same feature extraction algorithm for a Top-K query. Since mobile devices are more constrained, we focus on feature extraction on these devices.)

[Figure 4.5—Average Video Frame Extraction Time For Different Duration and Frequency: total extraction time (s) vs. extraction frequency (0.125 to 8 Hz) for 30 s, 60 s, and 120 s videos.]
[Figure 4.6—Average Inter-frame Feature-Space Distance vs. extraction frequency (0.125 to 8 Hz) for 30 s, 60 s, and 120 s videos.]

Video frame extraction. The second major component of MSMobile's Feature Extractor is video frame extraction. Ideally, for videos, we would like to extract every frame of the video and compute features for it. This turns out to be computationally infeasible even on a high-end device, and one must make a computation-accuracy trade-off here as well, by subsampling the video to extract frames at a lower rate than full-motion video. Figure 4.5 shows the total cost of frame extraction for videos of different durations. Clearly, for long videos, even a relatively modest sampling rate of 4 fps can incur a total processing time of 150 seconds! On the other hand, extracting a single frame takes on average 240 ms, regardless of frame rate or duration. On the flip side, subsampling a video can introduce errors; successive frames, if they are far apart from each other, may miss important intervening content. Figure 4.6 shows the average distance in feature space between successive frames for videos of different durations and sampling frequencies. For context, our clustering algorithms have generally found that cluster diameters are at least about 20 units. At 0.5 fps, the inter-frame distance is more than this number, but at 1 fps, it is less. More generally, 1 fps seems to be a good choice in the trade-off between computation time and accuracy, so our current prototype uses this value.
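The frame-subsampling trade-off can be made concrete with a small sketch (illustrative only; feature vectors are plain lists here, and Euclidean distance stands in for the feature-space distance of Figure 4.6):

```python
import math

def sample_times(duration_s, fps):
    """Timestamps (seconds) at which frames are extracted when subsampling
    a video of the given duration at the given rate."""
    return [i / fps for i in range(int(duration_s * fps))]

def avg_interframe_distance(frame_features):
    """Mean feature-space distance between successive sampled frames; if it
    exceeds the typical cluster diameter (about 20 units), the sampling
    rate risks missing intervening content."""
    gaps = [math.dist(a, b)
            for a, b in zip(frame_features, frame_features[1:])]
    return sum(gaps) / len(gaps)
```

At 1 fps, a 120 s video yields 120 sampled frames; at the measured per-frame cost of about 240 ms, that is roughly 0.24 × 120 ≈ 29 s of extraction time.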
An alternative approach to feature extraction for videos would have been to segment a video on the mobile device and then select frames from within each segment. A segment roughly corresponds to a scene, so one might expect that frames within a segment have similar feature vectors. We have left an exploration of this approach to future work.

4.3.2.4 Leveraging a Crowd-Sensing Framework

MediaScope leverages an existing, publicly available crowd-sensing programming framework called Medusa [106]. Medusa provides high-level abstractions for specifying the steps required to complete a crowd-sensing task: in our case, uploading the feature vectors can be modeled as a crowd-sensing task, and so can the upload of selected media objects. Medusa employs a distributed runtime system that coordinates the execution of these tasks between mobile devices and a cloud service. In MediaScope, MSCloud uses Medusa to distribute tasks and collect the results; MSMobile consists of extensions to Medusa's runtime that implement the Feature Extractor and the Object Uploader.

However, in order to support MediaScope, we needed to extend the Medusa model, which was focused on tasks generated by human users. We also needed to make several performance modifications to Medusa. In the former category, we modified Medusa's programming language to selectively disable Medusa's recruitment feature and data-privacy opt-in: these features require human interaction, and MediaScope assumes that participants have been recruited and have signed a privacy policy out-of-band. We also added a data-delivery notification system that allows Medusa's cloud runtime to deliver notifications of data uploads to external servers, such as MSCloudDB. In the second category, we modified Medusa's mobile-device notification system, which originally used SMSs, to use Google's C2DM notification service, which greatly reduced the latency of task initiation on mobile devices.
We also optimized several polling loops in Medusa to be interrupt-driven, so that we could hand off data quickly to components within Medusa's runtime as well as to external servers.

4.4 Evaluation

In this section, we evaluate the performance of MediaScope. Although MediaScope's credit-assignment algorithm is optimal in a pseudo-polynomial sense, we are interested in its practical performance under bandwidth variability. Furthermore, in practice, since query arrivals cannot be predicted ahead of time, the practical performance of MediaScope may deviate from the optimal. Finally, it is instructive to examine alternative scheduling mechanisms to quantify the performance benefits of MediaScope's algorithms. We are also interested in the overhead imposed by MediaScope; since timeliness is an essential attribute of many queries, system inefficiencies can impact query completeness.

All our experiments are conducted on a prototype of MediaScope. MSCloud is written mainly in Python; PHP and Python are used for the MSCloudQ web interface. The implementation of MSCloud is about 4300 lines of PHP and Python code, and MSMobile requires about 1150 lines of C and Java code (measured using SLOCCount [133]). Our experiments use commodity hardware, both for MSCloud and the mobile devices. We use up to 8 Android phones, which are either the Galaxy Nexus or the Galaxy SIII. MSCloud runs on a Dell XPS 7100 (six-core AMD Phenom II X6 1055T 2.8 GHz processor and 6 MB built-in cache).

Before describing our results, we give the reader some visual intuition for the usefulness of MediaScope. Figures 4.7, 4.8, and 4.9 show the results of three different queries: a K-nearest-neighbor query, a Cluster Representatives query, and a Spanner, on a set of six groups of photos: a university campus, a garden, a view of the sky framed by trees, an athletics track, a supermarket, and a laboratory.
Notice that the cluster-representatives query identifies representatives from each of the groups, the Spanner extracts qualitatively different pictures, and the K-nearest-neighbor query extracts matching images, as we might expect.

[Figure 4.7—K Nearest Neighbor Result] [Figure 4.8—Cluster Representative] [Figure 4.9—Spanner]

4.4.1 Query Completeness

In this section, we evaluate query completeness in the presence of concurrent queries.

Metrics and Methodology. Our metric for query completeness is the total credit associated with all the query results successfully uploaded before their timeliness bounds. We evaluate several query mixes (described below), with different concurrent queries of various query types that arrive at different times and have different timeliness bounds. These queries are all posed on 320 images captured on 8 mobile devices.

Our experiments are conducted as follows. For each query mix, we first compute the results of each query and the credit assigned to each result object. This computation yields a trace, on each mobile device, of objects, their associated credits, and their arrival times. We use this trace to replay the credit-based scheduling algorithm during repeated runs, and report the average of 10 runs.

This trace-based methodology is also useful in comparing MediaScope's credit-based scheduling algorithm (henceforth, MSC) with several alternatives. For each alternative, we replay the trace for that particular scheduling algorithm.

[Figure 4.10—Different Query Mixes by Size: completeness (×100%) of MCF, EDF, RR, and MSC for mixes of 4, 5, and 6 queries.] [Figure 4.11—Different Query Mixes by Timeliness Bound: completeness (×100%) of MCF, EDF, RR, MSC, and OMNI for the Interrupted, Staggered, and Complex settings.] [Figure 4.12—Sample Schedule Timeline]
We consider the following alternatives: an Omniscient algorithm that knows about future query arrivals; a Max Credit First (MCF) algorithm that always selects the object with the maximum credit to upload; a Round Robin (RR) algorithm that allocates bandwidth fairly to each concurrent query so that, in each round, the object with the highest credit from each query is uploaded; and an Earliest Deadline First (EDF) scheduler that always schedules the object with the earliest timeliness bound first, breaking ties by credit. The Omniscient algorithm demonstrates the benefits of lookahead, while each of the other algorithms has at most one of MSC's features (timeliness-, credit-, and bandwidth-awareness).

In our experiments, each mobile device contains a number of images taken with its camera. These images are naturally of different sizes because they have different levels of compressibility. Furthermore, we make no attempt to control network variability; upload bandwidths in our experiments vary, and MSC estimates upload bandwidth by measuring the average speed of the last upload (MSC's algorithm uses this estimate for t(o)).

Results. Our first experiment compares the performance of these alternatives for three different query mixes with different types of queries. The first mix contains 4 queries, namely 1 Top-K, 1 Spanner, 1 Cluster Representative, and 1 Common Interest. All the queries arrive at the same time but with different timeliness bounds; thus, in this experiment there are no future arrivals, and we do not evaluate the Omniscient algorithm. The second mix adds one more Cluster Representative query to the first, and the third is generated by adding one more Common Interest query. In each query mix, each query is assigned the same total credit.

Figure 4.10 shows the performance of the various schemes. MSC achieves at least 75% completeness across all three query mixes, and its performance improves by 5% as the number of queries increases from 4 to 6.
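Each of the baselines described above reduces to a simple object-selection rule; the following sketch (with a hypothetical Obj record; the real schedulers also track per-device upload state) makes the comparison concrete:

```python
from collections import namedtuple

# Hypothetical pending-object record: owning query, credit, timeliness bound
Obj = namedtuple("Obj", "query credit bound")

def mcf_next(pending):
    """Max Credit First: highest credit, ignoring timeliness bounds."""
    return max(pending, key=lambda o: o.credit)

def edf_next(pending):
    """Earliest Deadline First: earliest bound, ties broken by credit."""
    return min(pending, key=lambda o: (o.bound, -o.credit))

def rr_next(pending, query_order):
    """Round Robin: highest-credit object of the next query in rotation."""
    for q in query_order:
        mine = [o for o in pending if o.query == q]
        if mine:
            return mcf_next(mine)
```

MSC, by contrast, combines credit- and timeliness-awareness with a bandwidth estimate each time it recomputes its schedule.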
Although a 75% completeness rate seems pessimistic, we remind the reader that MSC is optimal, so no other scheduling scheme could have done better; in other words, for this query mix, this is the best result that could have been achieved. Furthermore, MSC outperforms the other schemes significantly. The superior performance of MSC comes from its timeliness-awareness, credit-awareness, and adaptivity to available bandwidth. By contrast, approaches that lack one or more of these features have much lower completeness rates. Thus, EDF does not take into account an object's credit, and so might waste bandwidth on objects with an early deadline but small credit; on average, EDF achieves 55% completeness. RR is unaware of timeliness constraints, but uploads the result objects for each query in a round-robin fashion. It is comparable in performance to EDF, achieving 52% completeness on average. RR's poor performance arises from two factors: first, because it ignores timeliness constraints, it squanders transmission opportunities by sometimes transmitting objects that could have been deferred without violating data timeliness bounds; second, RR gives equal transmission opportunities to queries, even though, on a given mobile device, one query may contain objects with far more credit than another. MCF improves upon RR in the second respect, in that it always transmits the object with the highest credit first; in so doing, it achieves an average completion rate of 59% and is significantly better than EDF and RR. However, MCF is still noticeably worse than MSC, primarily because MCF ignores timeliness constraints and sometimes transmits objects that could have been deferred without violating timeliness bounds.

In order to get more insight into the relative performance of these schemes, we consider variants of the 6-query mix with different combinations of arrival rates and deadlines. Figure 4.11 plots the results of these experiments.
In the first query mix, three of the six queries arrive first, with a timeliness bound of 20 seconds. The remaining three queries arrive within three seconds, but have a relatively tight timeliness bound of 6 seconds; in this sense, they interrupt the first set of queries. This query mix is designed to demonstrate the benefits of timeliness-awareness. In this somewhat adversarial setting, MSC still outperforms the other schemes but has a much lower completeness rate of about 60%. RR performs poorly, but EDF performs comparably to MCF; this is not surprising because EDF is timeliness-aware. Even so, EDF does not perform as well as MSC because it ignores credit values and uploads objects with lower credits unnecessarily.

In the second query mix, 6 queries with the same timeliness requirement arrive in a staggered fashion, with each query arriving three seconds after the previous one. This illustrates a setting where queries arrive frequently but the arrivals are not synchronized. In this setting, MSC achieves a completeness rate of nearly 80%, and, not surprisingly, MCF comes quite close with a completeness rate of 71%. Since all queries have identical timeliness bounds, it is not surprising that a credit-aware scheme like MCF performs well.

The third query mix represents a complex pattern where queries arrive at different times and have different deadlines. For this mix, the performance advantages of MSC are clear, since this mix requires a scheduling scheme to be both credit- and timeliness-aware.

Finally, for all these query mixes (Figure 4.11), MSC is comparable to the Omniscient scheme, which knows the arrival times of different queries. Intuitively, because MSC continuously adapts its transmission schedules when new queries arrive, it can make a different decision from Omniscient only at the times when queries arrive.
To be more precise, say a new query arrives at time t: Omniscient might have scheduled an upload of an object for the new query starting at time t, but MSC has to wait until the object being uploaded at t finishes before it updates its schedule. This difference can be fixed by adding preemption to the scheduler, aborting the current transmission if it does not have the highest priority; we have left this to future work.

To get more insight into the differences between the scheduling algorithms, Figure 4.12 plots the timeline of decisions made by these algorithms for the 6-query mix when all queries arrive at the same time. The figure clearly shows that MSC is better able to use the available time to carefully schedule uploads so that completeness is maximized; MCF, having uploaded objects with high credits, is unable to utilize the available time because the timeliness bounds for the remaining objects have passed. EDF performs comparably to MCF but, because it is credit-unaware, misses out on some transmission opportunities relative to MSC (e.g., MSC uploads Q3:91 first, but EDF does not).

  Component                           Average Latency (ms)
  MSCloud to Medusa                   131
  C2DM (send-to-receive)              150
  Task Execution                      67
  Upload Scheduling                   46
  Medusa to MSCloud Image Transfer    67
Table 4.1—System Communication and App Running Overhead

In summary, our approach bridges the availability gap by extracting relevant photos and images dynamically from participating devices. The approach hinges on the observation that feature-space similarity can be used to determine relevant media objects, and that image features are an extremely compact representation of the contents of an image. However, it is well known that content-based information retrieval exhibits a semantic gap [130]: feature-based similarity matching is oblivious to the semantic structures within an image, so the matching may not be perfect.
In these cases, we rely on additional filtering by human intelligence (e.g., in our examples, the security officer or the reporter). To put it another way, our approach may not always give the right answer, because of the semantic gap. To properly evaluate our approach, we would need to conduct a user study; this is because, for example, determining whether the results of a Spanner query really span a given corpus can be highly subjective. We have left this user study to future work.

4.4.2 System Overhead

Latency. Because MediaScope attempts to satisfy timeliness constraints, the efficiency of its implementation can impact query completeness; the less overhead incurred within the system, the greater the query completeness can be. To understand the efficiency of our system, we profiled the delays within the various components of MediaScope (Table 4.1). In an earlier section, we discussed the cost of feature and frame extraction: these operations are not performed in the object-retrieval path, so they do not affect query timeliness. As Table 4.1 shows, the latency incurred by most components is modest; C2DM notifications take less than 1/6 of a second, and the communication between MSCloud and Medusa takes about 1/8 of a second. Other components are under 70 ms.

  Component                      Average Latency (ms)
  Query Parsing                  24
  Feature Vector Download        138
  Medusa Server Interpretation   68
  Spanner                        89
  K Clusters                     52
  K Nearest Neighbor             11
  Query Result Response          54
Table 4.2—System Function Components Overhead

Finally, latency within the MSCloudQ query engine is also moderate (Table 4.2). Even in our relatively unoptimized implementation, most components of query processing take less than 100 ms, the only exception being the download of feature vectors from MSCloudDB; we plan to optimize this component by caching feature vectors in memory. These overhead numbers suggest that our current prototype may be able to sustain timeliness bounds of 10 s or lower.
Indeed, some of our experiments in the previous section used 6 s timeliness bounds.

Energy. The other component of overhead is energy expenditure. Frame extraction and feature extraction can take up to a second, or more, of CPU time. The energy cost, on a Motorola Droid (measured using a power meter), of frame extraction is 57 µAh, and of feature extraction (including resizing) is 331 µAh. We believe these energy costs are reasonable: for feature extraction to consume even 10% of the Droid's battery capacity, a user would have to take more than 400 photos!

Chapter 5

Literature Review

This dissertation covers several topics in the area of mobile and automotive sensing. In this chapter, we review three sets of related work, organized by the problems that we addressed. We first review related work on efficient automotive sensing, then survey the literature related to precise automotive position tracking. Finally, we discuss prior work on media crowdsourcing from mobile devices.

5.1 Flexible and Efficient Automotive Sensing

Industry Trends. Developments in industry are progressing to the point where automotive apps will become much more widespread than they currently are, at which point a CARLOG-like platform will be indispensable. Applications like OBDLink [98] and Torque [120] are popular on both Android and iOS, and allow users to view very limited real-time OBD-II scan data (a subset of the information available on the CAN bus). Torque also supports extensibility through plug-ins that can provide analysis and customized views. Automotive manufacturers are moving towards producing closed automotive analytics systems like OnStar [59] by General Motors and Ford Sync [55] by Ford. Currently, these systems
do not provide an open API, but if and when car manufacturers decide to open up their systems for app development, CARLOG can be a candidate programming framework.

Automotive Sensing. Recent research has also explored complementary problems in the automotive space, such as sensing driving behavior using vehicle sensors, phone sensors, and specialized cameras [26, 9, 140, 143, 131, 138]. These algorithms can be modeled as individual predicates in CARLOG, so that higher-level predicates can be defined using these detection algorithms. Prior work has also explored procedural abstractions for programming vehicles [53]; that work focuses on tuning vehicles but does not consider latency optimization, unlike CARLOG. Recent work has examined user-interface issues in the design of automotive apps [86], which is complementary to our work. Finally, while automotive systems have long been known to contain a large number of networked sensors, our work is unique in harnessing these networked sensors and designing a programming framework for automotive apps that access cloud-based information together with car sensors.

Datalog query optimization. Datalog optimization [31] has been studied for decades, and many different optimization strategies have been proposed and well studied. There are four main classes of optimization methods: top-down evaluation, bottom-up evaluation, logic rewriting (magic sets), and algebraic rewriting. Bottom-up evaluation [30, 33, 14, 16] was originally designed to eliminate redundant computation in reaching a fixpoint in Datalog evaluation. Top-down evaluation [128, 129, 15] is a complementary approach with a similar goal of eliminating redundant computation in goal- or query-directed Datalog evaluation. The Magic Sets method [32, 15, 17], and a related Counting method [15, 17], are rewriting methods that insert extra IDB predicates into the program; these serve as constraints for bottom-up evaluation, thus eliminating redundant computation of intermediate predicates. In contrast to all of these, our algorithms optimize the order of predicate acquisition for sensor and cloud predicates, a problem motivated by our specific setting.

Boolean predicate evaluation.
The theory community has explored optimizing the evaluation order of Boolean predicates. Greiner et al. [61] consider the tractability of various sub-problems in this space, and our work is heavily informed by theirs. However, they do not consider multi-query optimization. Laber [28] suggests re-ordering conjunctive predicates with no negation based on the properties of the relational table on which the predicates are evaluated. Another work by the same author [41] deals with more complicated queries that include negation, in a similar setting. These kinds of optimizations are special cases of the evaluation of game trees [111]. In general, these problems have not addressed a setting such as ours, where predicates have both a cost and an associated probability. Closest is the work of Kempe et al. [81], who prove a result similar to Theorem 2.4.1, but in the context of optimizing ad placement on websites.

Declarative Programming. Declarative programming using Datalog has been proposed in other contexts. Meld [12] uses Datalog to express the behavior of an ensemble of robots, and partitions the program into code that runs on individual devices. Snlog [40] uses Datalog to provide a similar capability in the context of wireless sensor networks. Beyond differences in setting (CARLOG is for cloud-enabled mobile applications), these pieces of work do not consider latency optimization.

Partitioning cloud-enabled mobile app computations. A body of work has explored automatic partitioning of computations across a mobile device and the cloud, either to conserve energy [115, 39] or to improve throughput and makespan for video applications [105]. A complementary body of work has explored crowd-sourcing sensing tasks from the cloud to mobile devices [109, 106]. Unlike this body of work, CARLOG focuses on applications that use the cloud as a source of dynamically-changing information.

Context Sensing.
CARLOG is intellectually closest to a line of work that has considered continuous context monitoring on mobile devices. In this work, the general idea is to define, for a given context-monitoring task (e.g., Walking or Running), an efficient execution order that, for example, uses the output of cheaper sensors to estimate, or to determine when to trigger, a more expensive sensor. Work in this area has focused on permitting users to declaratively specify multiple contexts of interest [78, 132] and then, given optimal execution orders for each individual context-sensing task, trying to jointly optimize energy usage across multiple contexts. A complementary line of work has explored CPU resource management and scheduling of these continuous sensing tasks [77, 79]. Unlike this body of work, our work explores optimizing the latency of access to cloud information, leveraging the fact that Datalog's declarative form makes it possible to perform these optimizations at run-time, transparent to the developer. Closest to our work is ACE [97], which explores energy-efficient continuous context sensing and focuses, in part, on devising an optimal execution order for sensors on a mobile phone. ACE tackles the problem of a single query with negation, and presents an algorithm substantially similar to ours, but does not consider multi-query optimization. Furthermore, CARLOG focuses on the latency of access to cloud sensors, a problem that is slightly different since latency costs are non-additive (parallel access to sensors does not additively increase latency).

5.2 Precisely Tracking Automobile Position

We are inspired by prior work on mobile-sensing-based position augmentation, improved GPS-based methods, and robot localization. CARLOC sits at a unique point in the design space, with its use of vehicle sensors and crowd-sourced landmarks.

Mobile Sensing.
The mobile sensing community has long explored approaches that use GPS position and other sensors to detect features on roadways (such as stop signs [27, 66] and potholes [80, 82, 142, 52, 49]). In contrast, CARLOC uses vehicle sensors to identify common roadway landmarks with the aim of improving positioning.

Closest to our work is SmartLoc [22], which estimates location and travel distance using inertial sensors on mobile devices. In obstructed environments, SmartLoc uses smartphone sensors to detect landmarks in the environment (like bridges and traffic lights), but these measurements are not crowd-sourced. CARLOC's use of vehicle sensors and crowd-sourced landmarks, together with advanced map matching, gives it an order of magnitude higher accuracy than this prior work. LaneQuest [10] uses probabilistic methods to estimate which lane a car is driving in, a qualitatively different problem from ours. LaneQuest, however, uses crowd-sourced anchors, but, unlike CARLOC, cannot leverage vehicle sensors to detect them. Similar to LaneQuest, [89] keeps track of the relative location between cars, while CARLOC focuses on the problem of precisely positioning automobiles.

Several other pieces of work explore improvements to map matching: these can potentially be used to improve the accuracy of map matching in CARLOC. VTrack [118] and CTrack [117] propose map-matching improvements using WiFi localization and cellular positioning, respectively. AutoWitness [62] employs an inertial-sensor-based HMM and Viterbi decoding to improve path estimation. Map matching is just one of the components in CARLOC, and we use vehicle sensors to augment map matching. Finally, [94] proposes fusing GPS and inertial measurements from custom hardware, and leverages DGPS for accurate vehicle positioning. In contrast, CARLOC does not require custom hardware, and our results show that in obstructed environments DGPS can perform poorly.

GPS Enhancements.
Much work has explored techniques to improve GPS positioning without fusion from other sensors. DGPS [114] and RTK [110, 99] use a base station and a rover receiver, and are able to achieve high accuracy. More recent work has used DGPS [65, 64] while improving the positioning calculations; this work is able to achieve centimeter-level accuracy in unobstructed environments. Finally, a body of work has explored other improvements to DGPS and RTK [50, 112]. Unlike this class of work, CARLOC can achieve high accuracy in urban environments using a single commodity GPS receiver. High-precision GPS receivers [123] might well become available in future makes and models, but even these will require CARLOC-like fusion in obstructed or partially obstructed urban environments.

Robot Localization. Many of the techniques we use, such as the motion model and particle filters, are inspired by prior work on robot localization. Robot and vehicle localization have extensively explored fusing information from various kinds of sensors: inertial sensors [135], stereo vision cameras [101], and laser range finders [88, 45, 37, 67]. Unlike these, CARLOC explores the use of in-built vehicle sensors and, in addition, crowd-sources landmark positions in order to achieve high accuracy. Finally, other work has explored map integration for position enhancement [102, 141, 96]; as we show, while maps and GPS can provide high accuracy, the use of crowd-sourced landmarks in CARLOC is necessary to get good results.

5.3 Selective On-Demand Media Retrieval from Mobile Devices

Perhaps the closest related piece of work to MediaScope is CrowdSearch [136], which attempts to find the closest match to an image generated on a mobile device from among a set of images stored on a photo-sharing service.
Its focus, however, is complementary to MediaScope's: it bridges the semantic gap inherent in feature-based image searches. Most feature-extraction methods do not understand the semantics of images, and CrowdSearch focuses on using human intelligence in near real-time to complete search tasks. MediaScope can use this capability to filter search results and bridge the semantic gap, but its focus is on supporting a richer query interface and enabling tighter timeliness constraints than might be possible with humans in the loop.

Also closely related is PhotoNet [124], which proposes an opportunistic image sharing and transmission capability in a delay-tolerant network. PhotoNet uses similar image features to perform photo comparisons, but is otherwise very different from MediaScope in that the latter explicitly supports a query interface with timeliness constraints on queries.

MediaScope is informed and inspired by several pieces of work on techniques for content-based image retrieval, and on image search on mobile devices. In the former category are systems like Faceted Image Search [139], the Virage Image Search Engine [13], and ImgSeek [71], which support searches on a centralized database of images. MediaScope builds upon these search techniques but, unlike them, supports timely geometric queries over a distributed database of images and videos on mobile devices. Other work in content-based image retrieval has proposed clustering [25, 38], but has not explored the mobile-device setting.

A second category of work has explored support for image search on a mobile device. For example, [84] discusses energy-efficient feature extraction on a mobile device but supports only local searches on the device, as does [137]. Other pieces of work have explored a client/server architecture for image search, but where the content is stored on the server [76, 56, 8].
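As background for the feature-based comparisons these retrieval systems perform, a minimal content-based matching step can be sketched as follows. This is an illustrative simplification: it uses coarse RGB histograms compared with the Tanimoto coefficient [90, 113], not the actual descriptors (such as CEDD or FCTH) used by the systems discussed above.

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    """Coarse RGB histogram: quantize each channel into `bins` buckets,
    then normalize counts so the histogram sums to 1."""
    h = Counter()
    for r, g, b in pixels:
        h[(r * bins // 256, g * bins // 256, b * bins // 256)] += 1
    n = float(len(pixels))
    return {k: v / n for k, v in h.items()}

def tanimoto(h1, h2):
    """Tanimoto similarity between two sparse histograms (1.0 = identical)."""
    dot = sum(v * h2.get(k, 0.0) for k, v in h1.items())
    n1 = sum(v * v for v in h1.values())
    n2 = sum(v * v for v in h2.values())
    return dot / (n1 + n2 - dot)

def rank(query_pixels, database):
    """Return database image ids, most similar to the query first."""
    qh = color_histogram(query_pixels)
    return sorted(database,
                  key=lambda img: -tanimoto(qh, color_histogram(database[img])))
```

A query then reduces to extracting the same features from the query image and ranking stored images by similarity; a system like MediaScope additionally has to decide which of the ranked images to fetch from which device under a deadline.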
By contrast, MediaScope supports searches on a cloud server, but where the content is stored on the mobile devices and is retrieved on demand. Finally, tangentially related to MediaScope is work on automated or semi-automated annotation of images with context obtained from sensors [43, 127, 104]. MediaScope can use such annotations to support a broader range of queries, but we have left this to future work.

Chapter 6

Conclusions and Future Work

In this dissertation, we have explored how to enable crowd-sourced collaborative sensing in highly mobile environments.

In Chapter 2, we discuss CARLOG, a programming system for automotive apps. CARLOG allows programmers to succinctly express the fusion of vehicle sensor and cloud information, a capability that can be used to detect events in automotive settings. It contains novel optimization algorithms designed to minimize the cost of predicate acquisition. Using experiments on a prototype of CARLOG, we show that it can provide significantly lower latency than parallel access to cloud sensors and can also detect 3-4 more results. However, our work is an initial step into this space, with many directions to be explored in the future. To support more general rule definitions, CARLOG needs to consider optimizations for recursion. Moreover, learning per-driver driving statistics will help make the evaluation process more accurate.

In Chapter 3, we present CARLOC, a system for precisely tracking the position of an automobile. CARLOC builds upon prior work in probabilistic position estimation using map matching, but adds novel components: it uses sensors built into vehicles to augment map matching and motion models, and crowd-sourced landmark estimates to improve positioning accuracy. CARLOC's mean error is on the order of 2 m, suggesting the feasibility of lane-level positioning in the future. Future work can explore several directions. CARLOC's motion model can be generalized to three dimensions to account for hilly roads.
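The probabilistic position estimation that CARLOC builds on can be illustrated with a minimal particle-filter sketch. This is a simplified 2-D illustration under stated assumptions (Gaussian process noise, a Gaussian GPS likelihood, and a weighted-mean estimate), not CARLOC's actual motion or measurement models:

```python
import math
import random

def step_particles(particles, speed, heading, dt, sigma_pos=0.5):
    """Propagate each (x, y, weight) particle with a simple 2-D motion model
    driven by wheel speed and heading, plus Gaussian process noise."""
    out = []
    for x, y, w in particles:
        nx = x + speed * math.cos(heading) * dt + random.gauss(0, sigma_pos)
        ny = y + speed * math.sin(heading) * dt + random.gauss(0, sigma_pos)
        out.append((nx, ny, w))
    return out

def reweight(particles, gps_xy, sigma_gps=5.0):
    """Weight particles by a Gaussian GPS likelihood, then normalize."""
    gx, gy = gps_xy
    weighted = []
    for x, y, w in particles:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        weighted.append((x, y, w * math.exp(-d2 / (2 * sigma_gps ** 2))))
    total = sum(w for _, _, w in weighted) or 1.0
    return [(x, y, w / total) for x, y, w in weighted]

def estimate(particles):
    """Weighted-mean position estimate over the particle cloud."""
    return (sum(x * w for x, _, w in particles),
            sum(y * w for _, y, w in particles))
```

In CARLOC, the analogues of `reweight` additionally incorporate map-matching constraints and crowd-sourced landmark observations, which is what sharpens the particle cloud beyond what GPS alone allows.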
It may be possible that other alternatives like RTKLIB can be tuned to achieve better performance, and it would be interesting to see how close such tuning comes to CARLOC's performance. Although CARLOC has high accuracy and outperforms its competitors, its position tracking during turns can be improved (Figure 19). Furthermore, our current experiments are conducted with traces from two drivers. While these experiments hint at the benefits of crowd-sourcing, the impact of multiple drivers and cars needs further study. The landmark detection algorithms can be made more robust to different drivers' driving behaviors. CARLOC is designed to generalize to various landmarks: it can attempt to leverage additional roadway landmarks such as changes in road surface texture, potholes, or discontinuities in lighting caused by entering a tunnel. Finally, we propose to explore the practical deployability of CARLOC. We envision this to be conceptually straightforward, since CARLOC uses in-built vehicle sensors and needs only a relatively simple cloud service for storing its particle cloud. In practice, CARLOC can be retrofitted into a car's existing navigation system as a firmware update.

In Chapter 4, we have discussed MediaScope, a system that bridges the availability gap for visual media by supporting timely on-demand retrieval of images and video. MediaScope uses a credit-based, timeliness-aware scheduling algorithm that optimizes query completeness, and its overheads are moderate. Much work remains, including optimizing the internals of the system to improve completeness and supporting more geometric queries on visual media. Larger-scale experiments using more mobile devices can help us understand how well the system scales and how network variability can impact query completeness. Finally, a user study focused on understanding how well MediaScope's query results bridge the semantic gap can help establish MediaScope's usefulness.
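The flavor of timeliness-aware retrieval scheduling can be conveyed with a small sketch. This greedy earliest-deadline pass with a per-request credit budget is a hedged simplification of the idea (the request fields, single shared link, and credit accounting are illustrative assumptions, not MediaScope's actual credit-based algorithm):

```python
def schedule(requests, bandwidth):
    """Greedy timeliness-aware schedule over a single shared link:
    serve requests in earliest-deadline order, admitting each one only
    if it can finish before its deadline and its credit budget covers
    the transfer time. Returns the ids of the admitted requests."""
    done = []
    elapsed = 0.0
    for req in sorted(requests, key=lambda r: r["deadline"]):
        cost = req["bytes"] / bandwidth  # transfer time in seconds
        if elapsed + cost <= req["deadline"] and req["credits"] >= cost:
            elapsed += cost
            req["credits"] -= cost
            done.append(req["id"])
    return done
```

The real problem is harder: MediaScope must maximize completeness across concurrent queries whose objects live on different devices with varying uplink bandwidth, which is what motivates its credit-based formulation.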
In the future, we would like to extend MediaScope to support a wider variety of data, such as video and large volumes of sensing data. Moreover, we also aim to bring privacy and security into the system to make it more practical.

References

[1] Cops using YouTube to catch criminals. http://www.afterdawn.com/news/article.cfm/2007/03/04/cops_using_youtube_to_catch_criminals.
[2] Facebook. http://www.facebook.com.
[3] Flickr. http://www.flickr.com.
[4] Instagram. http://www.instagram.com.
[5] Society of Automotive Engineers. E/E Diagnostic Test Modes (J1979), 2010.
[6] International Organization for Standardization. Road vehicles - Diagnostics on Controller Area Networks (CAN) - Part 4: Requirements for emissions-related systems, 2011.
[7] Ackerman Steering Principle. http://www.rctek.com/technical/handling/ackerman_steering_principle.html.
[8] I. Ahmad, S. Abdullah, S. Kiranyaz, and M. Gabbouj. Content-based image retrieval on mobile devices. In Proc. of SPIE, volume 5684, pages 255–264, 2005.
[9] S. Al-Sultan, A.H. Al-Bayatti, and H. Zedan. Context-aware driver behavior detection system in intelligent transportation systems. IEEE Transactions on Vehicular Technology, 62(9), 2013.
[10] Heba Aly, Anas Basalamah, and Moustafa Youssef. LaneQuest: An accurate and energy-efficient lane detection system. In Proceedings of IEEE PerCom, 2015.
[11] Android Auto. http://www.android.com/auto/.
[12] Michael P Ashley-Rollman, Seth Copen Goldstein, Peter Lee, Todd C Mowry, and Padmanabhan Pillai. Meld: A declarative approach to programming ensembles. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), pages 2794–2800. IEEE/RSJ, 2007.
[13] J.R. Bach, C. Fuller, A. Gupta, A. Hampapur, B. Horowitz, R. Humphrey, R. Jain, and C.F. Shu. The Virage image search engine: An open framework for image management. In SPIE Storage and Retrieval for Image and Video Databases IV, pages 76–87, 1996.
[14] Francois Bancilhon. Naive evaluation of recursively defined relations. Springer, 1986.
[15] Francois Bancilhon, David Maier, Yehoshua Sagiv, and Jeffrey D Ullman. Magic sets and other strange ways to implement logic programs. In Proceedings of the Fifth ACM SIGACT-SIGMOD Symposium on Principles of Database Systems, pages 1–15. ACM, 1985.
[16] Rudolf Bayer. Query evaluation and recursion in deductive database systems. Bibliothek d. Fak. für Mathematik u. Informatik, TUM, 1985.
[17] Catriel Beeri and Raghu Ramakrishnan. On the power of magic. The Journal of Logic Programming, 10(3):255–299, 1991.
[18] Niclas Bergman. Recursive Bayesian estimation: Navigation and tracking applications. Dissertations No. 579, Linköping Studies in Science and Technology, 1999.
[19] Bing Traffic API. http://msdn.microsoft.com/en-us/library.
[20] Barry Bishop and Florian Fischer. IRIS - integrated rule inference system. Advancing Reasoning on the Web: Scalability and Commonsense, page 18, 2010.
[21] Jacek Blazewicz, Klaus H. Ecker, Erwin Pesch, Gunter Schmidt, and Jan Weglarz. Handbook on Scheduling: From Theory to Applications. Springer, 2007.
[22] Cheng Bo, Xiang-Yang Li, Taeho Jung, Xufei Mao, Yue Tao, and Lan Yao. SmartLoc: Push the limit of the inertial sensor based metropolitan localization using smartphone. In Proceedings of the 19th Annual International Conference on Mobile Computing & Networking. ACM, 2013.
[23] James Bornholt. Abstractions and techniques for programming with uncertain data. Honors thesis, Australian National University, 2013.
[24] James Bornholt, Todd Mytkowicz, and Kathryn S McKinley. Uncertain<T>: A first-order type for uncertain data. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems. ACM, 2014.
[25] D. Cai, X. He, Z. Li, W.Y. Ma, and J.R. Wen. Hierarchical clustering of WWW image search results using visual, textual and link information. In Proc. of the 12th Annual ACM International Conference on Multimedia, pages 952–959. ACM, 2004.
[26] Massimo Canale and Stefano Malan. Analysis and classification of human driving behaviour in an urban environment. Cognition, Technology & Work, 4(3), 2002.
[27] Roberto Carisi, Eugenio Giordano, Giovanni Pau, and Mario Gerla. Enhancing in vehicle digital maps via GPS crowdsourcing. In Wireless On-Demand Network Systems and Services (WONS), 2011 Eighth International Conference on. IEEE, 2011.
[28] Renato Carmo, Tomás Feder, Yoshiharu Kohayakawa, Eduardo Laber, Rajeev Motwani, Liadan O'Callaghan, Rina Panigrahy, and Dilys Thomas. Querying priced information in databases: The conjunctive case. ACM Trans. Algorithms, 3(1), 2007.
[29] CarPlay. https://www.apple.com/ios/carplay/.
[30] Stefano Ceri, Georg Gottlob, and Luigi Lavazza. Translation and optimization of logic queries: the algebraic approach. In Proceedings of the 12th International Conference on Very Large Data Bases. Morgan Kaufmann Publishers Inc., 1986.
[31] Stefano Ceri, Georg Gottlob, and Letizia Tanca. What you always wanted to know about Datalog (and never dared to ask). IEEE Transactions on Knowledge and Data Engineering, 1(1), 1989.
[32] Stefano Ceri, Georg Gottlob, and Letizia Tanca. Logic Programming and Databases. Springer Verlag, 1990.
[33] Stefano Ceri and Letizia Tanca. Optimization of systems of algebraic equations for evaluating Datalog queries. In Proceedings of the 13th International Conference on Very Large Data Bases. Morgan Kaufmann Publishers Inc., 1987.
[34] S. Chatzichristofis and Y. Boutalis. CEDD: color and edge directivity descriptor: a compact descriptor for image indexing and retrieval. Computer Vision Systems, pages 312–322, 2008.
[35] S. Chatzichristofis, Y. Boutalis, and M. Lux. Selection of the proper compact composite descriptor for improving content based image retrieval. In Proc. of the 6th IASTED International Conference, volume 134643, page 064, 2009.
[36] S.A. Chatzichristofis and Y.S. Boutalis. FCTH: Fuzzy color and texture histogram - a low level feature for accurate image retrieval. In Ninth International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS'08), pages 191–196. IEEE, 2008.
[37] Frederic Chausse, Jean Laneurit, and Roland Chapuis. Vehicle localization on a digital map using particles filtering. In Proceedings of IEEE Intelligent Vehicles Symposium. IEEE, 2005.
[38] Y. Chen, J.Z. Wang, and R. Krovetz. Content-based image retrieval by clustering. In Proc. of the 5th ACM SIGMM International Workshop on Multimedia Information Retrieval, pages 193–200. ACM, 2003.
[39] David Chu, Nicholas D Lane, Ted Tsung-Te Lai, Cong Pang, Xiangying Meng, Qing Guo, Fan Li, and Feng Zhao. Balancing energy, latency and accuracy for mobile sensor data classification. In Proceedings of the 9th ACM Conference on Embedded Networked Sensor Systems (SenSys'11). ACM, 2011.
[40] David Chu, Lucian Popa, Arsalan Tavakoli, Joseph M Hellerstein, Philip Levis, Scott Shenker, and Ion Stoica. The design and implementation of a declarative sensor network system. In Proceedings of the 5th International Conference on Embedded Networked Sensor Systems (SenSys'07). ACM, 2007.
[41] Ferdinando Cicalese and Eduardo Sany Laber. A new strategy for querying priced information. In Proceedings of the Thirty-seventh Annual ACM Symposium on Theory of Computing (STOC'05). ACM, 2005.
[42] Pavel Davidson, Jussi Collin, John Raquet, and Jarmo Takala. Application of particle filters for vehicle positioning using road maps. In 23rd International Technical Meeting of the Satellite Division of The Institute of Navigation, Portland, OR, 2010.
[43] M. Davis, N. Van House, J. Towle, S. King, S. Ahern, C. Burgener, D. Perkel, M. Finn, V. Viswanathan, and M. Rothenberg. MMM2: mobile media metadata for media sharing. In Proc. of CHI'05 Extended Abstracts on Human Factors in Computing Systems, pages 1335–1338. ACM, 2005.
[44] How Differential GPS works. http://www.trimble.com/gps_tutorial/dgps-how.aspx, 2011.
[45] MWM Gamini Dissanayake, Paul Newman, Steve Clark, Hugh F Durrant-Whyte, and Michael Csorba. A solution to the simultaneous localization and map building (SLAM) problem. IEEE Transactions on Robotics and Automation, 2001.
[46] S Dmitriev, A Stepanov, B Rivkin, and D Koshaev. Optimal map-matching for car navigation systems. In Proceedings of 6th International Conference on Integrated Navigation Systems, St. Petersburg. DTIC Document, 1999.
[47] Arnaud Doucet. On sequential simulation-based methods for Bayesian filtering. 1998.
[48] East-North-Up Coordinates System. http://www.navipedia.net/index.php/Transformations_between_ECEF_and_ENU_coordinates.
[49] Jakob Eriksson, Lewis Girod, Bret Hull, Ryan Newton, Samuel Madden, and Hari Balakrishnan. The pothole patrol: Using a mobile sensor network for road surface monitoring. In Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services (MobiSys'08). ACM, 2008.
[50] Jay Farrell and Tony Givargis. Differential GPS reference station algorithm - design and analysis. IEEE Transactions on Control Systems Technology, 2000.
[51] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 45(4), 1998.
[52] D.C. Festa, D.W.E. Mongelli, V. Astarita, and P. Giorgi. First results of a new methodology for the identification of road surface anomalies. In Proceedings of IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI), 2013.
[53] Tobias Flach, Nilesh Mishra, Luis Pedrosa, Christopher Riesz, and Ramesh Govindan. CARMA: towards personalized automotive tuning. In Proceedings of the 9th ACM Conference on Embedded Networked Sensor Systems, pages 135–148. ACM, 2011.
[54] Tobias Flach, Nilesh Mishra, Luis Pedrosa, Christopher Riesz, and Ramesh Govindan. CARMA: towards personalized automotive tuning. In Proceedings of the 9th ACM Conference on Embedded Networked Sensor Systems. ACM, 2011.
[55] Ford Sync. http://www.ford.com/technology/sync/.
[56] M. Gabbouj, I. Ahmad, M.Y. Amin, and S. Kiranyaz. Content-based image retrieval for connected mobile devices. In Proc. of Second International Symposium on Communications, Control and Signal Processing (ISCCSP). Citeseer, 2006.
[57] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Company, 1979.
[58] MyGasFeed. http://www.mygasfeed.com/keys/api.
[59] GM OnStar. https://www.onstar.com/.
[60] Google Direction API. https://developers.google.com/maps/documentation/directions/.
[61] Russell Greiner, Ryan Hayward, Magdalena Jankowska, and Michael Molloy. Finding optimal satisficing strategies for and-or trees. Artif. Intell., 170(1), 2006.
[62] Santanu Guha, Kurt Plarre, Daniel Lissner, Somnath Mitra, Bhagavathy Krishna, Prabal Dutta, and Santosh Kumar. AutoWitness: Locating and tracking stolen property while tolerating GPS and radio outages. In Proceedings of the 8th ACM Conference on Embedded Networked Sensor Systems (SenSys'10), pages 29–42. ACM, 2010.
[63] Mordechai Haklay and Patrick Weber. OpenStreetMap: User-generated street maps. Pervasive Computing, 2008.
[64] Will Hedgecock, Miklos Maroti, Akos Ledeczi, Peter Volgyesi, and Rueben Banalagay. Accurate real-time relative localization using single-frequency GPS. In Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems. ACM, 2014.
[65] Will Hedgecock, Miklos Maroti, Janos Sallai, Peter Volgyesi, and Akos Ledeczi. High-accuracy differential tracking of low-cost GPS receivers. In Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services. ACM, 2013.
[66] Shaohan Hu, Lu Su, Hengchang Liu, Hongyan Wang, and Tarek F Abdelzaher. SmartRoad: a crowd-sourced traffic regulator detection and identification system. In Information Processing in Sensor Networks (IPSN), 2013 ACM/IEEE International Conference on, pages 331–332. IEEE, 2013.
[67] Albert S Huang and Seth Teller. Probabilistic lane estimation using basis curves. Robotics: Science and Systems (RSS), 2010.
[68] J. Huang, S.R. Kumar, M. Mitra, W.J. Zhu, and R. Zabih. Image indexing using color correlograms. In Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'97), pages 762–768. IEEE, 1997.
[69] Shan Shan Huang, Todd Jeffrey Green, and Boon Thau Loo. Datalog and emerging applications: an interactive tutorial. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, pages 1213–1216. ACM, 2011.
[70] T Imkamon, P Saensom, P Tangamchit, and P Pongpaibool. Detection of hazardous driving behavior using fuzzy logic. In 5th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology. IEEE, 2008.
[71] C.E. Jacobs, A. Finkelstein, and D.H. Salesin. Fast multiresolution image querying. In Proc. of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pages 277–286. ACM, 1995.
[72] Y. Jiang, H. Qiu, M. McCartney, W. G. J. Halfond, F. Bai, D. Grimm, and R. Govindan. CARLOG: A Platform for Flexible and Efficient Automotive Sensing. Technical Report 14-949, University of Southern California, 2014.
[73] Yurong Jiang, Hang Qiu, Matthew McCartney, William GJ Halfond, Fan Bai, Donald Grimm, and Ramesh Govindan. CARLOG: a platform for flexible and efficient automotive sensing. In Proceedings of the 12th ACM Conference on Embedded Network Sensor Systems. ACM, 2014.
[74] Karl Henrik Johansson, Martin Törngren, and Lars Nielsen. Vehicle applications of controller area network. In Handbook of Networked and Embedded Control Systems. Springer, 2005.
[75] Derick A Johnson and Mohan M Trivedi. Driving style recognition using a smartphone as a sensor platform. In Proceedings of the 14th International Conference on Intelligent Transportation Systems (ITSC). IEEE, 2011.
[76] J.S. Hare and P.H. Lewis. Content-based image retrieval using a mobile device as a novel interface. In Electronic Imaging 2005, pages 64–75. International Society for Optics and Photonics, 2005.
[77] Younghyun Ju, Youngki Lee, Jihyun Yu, Chulhong Min, Insik Shin, and Junehwa Song. SymPhoney: a coordinated sensing flow execution engine for concurrent mobile sensing applications. In Proceedings of the 10th ACM Conference on Embedded Network Sensor Systems (SenSys'12). ACM, 2012.
[78] Seungwoo Kang, Jinwon Lee, Hyukjae Jang, Hyonik Lee, Youngki Lee, Souneil Park, Taiwoo Park, and Junehwa Song. SeeMon: scalable and energy-efficient context monitoring framework for sensor-rich mobile environments. In Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services (MobiSys'08). ACM, 2008.
[79] Seungwoo Kang, Youngki Lee, Chulhong Min, Younghyun Ju, Taiwoo Park, Jinwon Lee, Yunseok Rhee, and Junehwa Song. Orchestrator: An active resource orchestration framework for mobile context monitoring in sensor-rich mobile environments. In Proceedings of the International Conference on Pervasive Computing and Communications (PerCom'10). IEEE, 2010.
[80] J. Karuppuswamy, V. Selvaraj, M. M. Ganesh, and E. L. Hall. Detection and avoidance of simulated potholes in autonomous vehicle navigation in an unstructured environment. In Proceedings of Intelligent Robots and Computer Vision XIX: Algorithms, Techniques, and Active Vision, volume 4197, 2000.
[81] David Kempe and Mohammad Mahdian. A cascade model for externalities in sponsored search. In Proceedings of the 4th International Workshop on Internet and Network Economics (WINE'08). Springer-Verlag, 2008.
[82] Christian Koch and Ioannis Brilakis. Pothole detection in asphalt pavement images. Adv. Eng. Inform., 25(3), 2011.
[83] John Krumm, Eric Horvitz, and Julie Letchner. Map matching with travel time constraints. Technical report, SAE Technical Paper, 2007.
[84] K. Kumar, Y. Nimmagadda, Y.J. Hong, and Y.H. Lu. Energy conservation by adaptive feature loading for mobile content-based image retrieval. In ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED'08), pages 153–158. IEEE, 2008.
[85] Richard B Langley. Dilution of precision. GPS World, 10(5):52–59, 1999.
[86] Kyungmin Lee, Jason Flinn, T.J. Giuli, Brian Noble, and Christopher Peplin. AMC: Verifying user interface properties for vehicular applications. In Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys'13). ACM, 2013.
[87] Scott Lenser and Manuela Veloso. Sensor resetting localization for poorly modelled mobile robots. In Proceedings of IEEE International Conference on Robotics and Automation (ICRA'00). IEEE, 2000.
[88] Jesse Levinson, Michael Montemerlo, and Sebastian Thrun. Map-based precision vehicle localization in urban environments. In Robotics: Science and Systems, volume 4, page 1. Citeseer, 2007.
[89] Dong Li, Tarun Bansal, Zhixue Lu, and Prasun Sinha. Marvel: multiple antenna based relative vehicle localizer. In Proceedings of the 18th Annual International Conference on Mobile Computing and Networking, pages 245–256. ACM, 2012.
[90] A.H. Lipkus. A proof of the triangle inequality for the Tanimoto distance. Journal of Mathematical Chemistry, 26(1):263–265, 1999.
[91] Jun S Liu and Rong Chen. Sequential Monte Carlo methods for dynamic systems. Journal of the American Statistical Association, 1998.
[92] M. Lux and S.A. Chatzichristofis. LIRE: Lucene Image Retrieval: an extensible Java CBIR library. In Proceeding of the 16th ACM International Conference on Multimedia, pages 1085–1088. ACM, 2008.
[93] S.R. Madden, M.J. Franklin, J.M. Hellerstein, and W. Hong. TinyDB: An acquisitional query processing system for sensor networks. ACM Transactions on Database Systems (TODS), 30(1):122–173, 2005.
[94] Enrique David Martí, David Martín, Jesús García, Arturo De La Escalera, José Manuel Molina, and José María Armingol. Context-aided sensor fusion for enhanced urban navigation. Sensors, 2012.
[95] Mercedes-Benz mbrace. http://www.mbusa.com/mercedes/mbrace.
[96] P Merriaux, Y Dupuis, P Vasseur, and X Savatier. Wheel odometry-based car localization and tracking on vectorial map. In Intelligent Transportation Systems (ITSC), 2014 IEEE 17th International Conference on, pages 1890–1891. IEEE, 2014.
[97] Suman Nath. ACE: exploiting correlation for energy-efficient and continuous context sensing. In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services (MobiSys'12). ACM, 2012.
[98] OBDLink. http://www.scantool.net/.
[99] Texas Department of Transportation. TxDOT survey manual - GPS RTK surveying. http://onlinemanuals.txdot.gov/txdotmanuals/ess/gps_rtk_surveying.htm, April 2011.
[100] Athanasios Papoulis and S Unnikrishna Pillai. Probability, Random Variables, and Stochastic Processes. Tata McGraw-Hill Education, 2002.
[101] Ignacio Parra, M Sotelo, David F Llorca, and Carlos Fernández. Visual odometry for accurate vehicle localization - an assistant for GPS based navigation. In 17th International Intelligent Transportation Systems World Congress, pages 1–6, 2010.
[102] A Ufuk Peker, Oguz Tosun, and Tankut Acarman. Particle filter vehicle localization and map-matching using map topology. In IEEE Intelligent Vehicles Symposium (IV). IEEE, 2011.
[103] Liqun Qi, Defeng Sun, and Guanglu Zhou. A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities. Mathematical Programming, 2000.
[104] C. Qin, X. Bao, R. Roy Choudhury, and S. Nelakuditi. TagSense: a smartphone-based approach to automatic image tagging. In Proc. of the 9th International Conference on Mobile Systems, Applications, and Services (MobiSys'11), pages 1–14. ACM, 2011.
[105] Moo-Ryong Ra, Anmol Sheth, Lily Mummert, Padmanabhan Pillai, David Wetherall, and Ramesh Govindan. Odessa: Enabling interactive perception applications on mobile devices. In Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services (MobiSys'11), 2011.
[106] M.R. Ra, B. Liu, T.F. La Porta, and R. Govindan. Medusa: A programming framework for crowd-sensing applications. In Proc. of the 10th International Conference on Mobile Systems, Applications, and Services (MobiSys'12), pages 337–350. ACM, 2012.
[107] Rajesh Rajamani. Vehicle Dynamics and Control. Springer Science & Business Media, 2011.
[108] Rate My Driving. https://play.google.com/store/apps/details?id=com.howsmydriving.
[109] Lenin Ravindranath, Arvind Thiagarajan, Hari Balakrishnan, and Samuel Madden. Code in the air: simplifying sensing and coordination tasks on smartphones. In Proceedings of the 12th Workshop on Mobile Computing Systems & Applications (HotMobile'12). ACM, 2012.
[110] RTKLIB: An open source program package for GNSS positioning. http://www.rtklib.com/, 2011.
[111] M Snir. Lower bounds on probabilistic decision trees. Theoretical Computer Science, pages 69–82, 1985.
[112] Eniuce Menezes de Souza, Joao Francisco Galera Monico, and Aylton Pagamisse. GPS satellite kinematic relative positioning: analyzing and improving the functional mathematical model using wavelets. Mathematical Problems in Engineering, 2009.
[113] T.T. Tanimoto. An elementary mathematical theory of classification and prediction. International Business Machines Corporation, 1958.
[114] Rose India Technologies. What is differential GPS. http://www.roseindia.net/technology/gps/what-is-Differential-GPS.shtml, February 2008.
[115] K Tuncay Tekle, Michael Gorbovitski, and Yanhong A Liu. Graph queries through Datalog optimizations. In Proceedings of the 12th International ACM SIGPLAN Symposium on Principles and Practice of Declarative Programming. ACM, 2010.
[116] The OpenXC Platform. http://openxcplatform.com.
[117] Arvind Thiagarajan, Lenin Ravindranath, Hari Balakrishnan, Samuel Madden, Lewis Girod, et al. Accurate, low-energy trajectory mapping for mobile devices. In NSDI, 2011.
[118] Arvind Thiagarajan, Lenin Ravindranath, Katrina LaCurts, Samuel Madden, Hari Balakrishnan, Sivan Toledo, and Jakob Eriksson. VTrack: accurate, energy-aware road traffic delay estimation using mobile phones. In Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems, pages 85–98. ACM, 2009.
[119] Sebastian Thrun. Particle filters in robotics. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pages 511–518. Morgan Kaufmann Publishers Inc., 2002.
[120] Torque Pro. https://play.google.com/store/apps/details?id=org.prowl.torque&hl=en.
[121] Rudolph Triebel, Patrick Pfaff, and Wolfram Burgard. Multi-level surface maps for outdoor terrain mapping and loop closing. In IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2006.
[122] Uber Pick up Location Problem. http://ubercustomersupport.com/uber-pick-up-location-not-working/.
[123] Ublox Chips. http://www.ublox.com/en/.
[124] M.Y.S. Uddin, H. Wang, F. Saremi, G.J. Qi, T. Abdelzaher, and T. Huang. PhotoNet: a similarity-aware picture delivery service for situation awareness. In IEEE 32nd Real-Time Systems Symposium (RTSS'11), pages 317–326. IEEE, 2011.
[125] Jeffrey D Ullman. Principles of Database Systems. Galgotia Publications, 1985.
[126] UNAVCO consortium. http://www.unavco.org/instrumentation/networks/status/pbo/overview/.
[127] W. Viana, J. Bringel Filho, J. Gensel, M. Villanova-Oliver, and H. Martin. PhotoMap: from location and time to context-aware photo annotations. Journal of Location Based Services, 2(3):211–235, 2008.
[128] Laurent Vieille. Recursive axioms in deductive databases: The query/subquery approach. In Expert Database Conf., 1986.
[129] Laurent Vieille. A database-complete proof procedure based on SLD-resolution. In ICLP, pages 74–103, 1987.
[130] C. Wang, L. Zhang, and H.J. Zhang. Learning to reduce the semantic gap in web image retrieval and annotation. In Proc. of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 355–362, 2008.
[131] Yan Wang, Jie Yang, Hongbo Liu, Yingying Chen, Marco Gruteser, and Richard P. Martin. Sensing vehicle dynamics for determining driver phone use. In Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys'13). ACM, 2013.
[132] Yi Wang, Jialiu Lin, Murali Annavaram, Quinn A. Jacobson, Jason Hong, Bhaskar Krishnamachari, and Norman Sadeh. A framework of energy efficient mobile sensing for automatic user state recognition. In Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services (MobiSys'09). ACM, 2009.
[133] D.A. Wheeler. SLOCCount, 2001.
[134] B Wiśniewski, K Bruniecki, and M Moszyński. Evaluation of RTKLIB's positioning accuracy using low-cost GNSS receiver and ASG-EUPOS. TransNav: International Journal on Marine Navigation and Safety of Sea Transportation, 7(1), 2013.
[135] Oliver J Woodman. An introduction to inertial navigation. University of Cambridge, Computer Laboratory, Tech. Rep. UCAM-CL-TR-696, 2007.
[136] T. Yan, V. Kumar, and D. Ganesan. CrowdSearch: exploiting crowds for accurate real-time image search on mobile phones. In Proc. of the 8th International Conference on Mobile Systems, Applications, and Services (MobiSys'10), pages 77–90. ACM, 2010.
[137] J. Yang, S. Park, H. Seong, H. Byun, and Y.K. Lim. A fast image retrieval system using index lookup table on mobile device. In 19th International Conference on Pattern Recognition (ICPR'08), pages 1–4. IEEE, 2008.
[138] Jie Yang, Simon Sidhom, Gayathri Chandrasekaran, Tam Vu, Hongbo Liu, Nicolae Cecan, Yingying Chen, Marco Gruteser, and Richard P Martin. Detecting driver phone use leveraging car speakers. In Proceedings of the 17th annual international conference on Mobile computing and networking (Mobicom’11), pages 97–108. ACM, 2011. [139] K.P. Yee, K. Swearingen, K. Li, and M. Hearst. Faceted metadata for image search and browsing. In Proc. of the SIGCHI conference on Human factors in computing systems, pages 401–408. ACM, 2003. [140] Chuang-Wen You, Martha Montes-de Oca, Thomas J Bao, Nicholas D Lane, Giuseppe Cardone, Lorenzo Torresani, and Andrew T Campbell. Carsafe app: Alerting drowsy and distracted drivers using dual cameras on smartphones. In Proceedings of the 11th international conference on Mobile systems, applications, and services (Mobisys’13). ACM, 2013. [141] Meng Yu. Improved positioning of land vehicle in ITS using digital map and other accessory information. PhD thesis, The Hong Kong Polytechnic University, 2006. [142] X Yu and E Salari. Pavement pothole detection and severity measurement using laser imaging. In Proceedings of IEEE International Conference on Electro/Information Technology (EIT), 2011. [143] Zhiwei Zhu and Qiang Ji. Real time and non-intrusive driver fatigue monitoring. In Proceedings of The 7th International IEEE Conference on Intelligent Transportation Systems., pages 657–662. IEEE, 2004. 114
Asset Metadata
Creator: Jiang, Yurong (author)
Core Title: Crowd-sourced collaborative sensing in highly mobile environments
School: Viterbi School of Engineering
Degree: Doctor of Philosophy
Degree Program: Computer Science
Publication Date: 06/20/2016
Defense Date: 05/23/2016
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: accuracy, automotive, crowd-sensing, Datalog, feature-extraction, GPS, image-retrieval, latency, Map, mobile-device, OAI-PMH Harvest, predicate acquisition
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Govindan, Ramesh (committee chair), Krishnamachari, Bhaskar (committee member), Sukhatme, Gaurav (committee member)
Creator Email: jiangyurong609@gmail.com, yurongji@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c40-254663
Unique Identifier: UC11281045
Identifier: etd-JiangYuron-4451.pdf (filename), usctheses-c40-254663 (legacy record id)
Legacy Identifier: etd-JiangYuron-4451-1.pdf
Dmrecord: 254663
Document Type: Dissertation
Rights: Jiang, Yurong
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA