DISTRIBUTED ALGORITHMS FOR SOURCE LOCALIZATION USING QUANTIZED SENSOR READINGS

by

Yoon Hak Kim

A Dissertation Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL ENGINEERING)

December 2007

Copyright 2007 Yoon Hak Kim

Dedication

To my mother, for her support throughout my studies. To my lovely wife, Hyun Bin, for her constant love and encouragement.

Acknowledgments

First, I would like to thank my advisor, Prof. Antonio Ortega, for his continual support, guidance and patience. It was a privilege to work with him during my doctoral research. Our discussions were crucial in accomplishing this work.

I would also like to thank Prof. Mitra and Prof. Govindan for being on my dissertation committee, and Prof. Krishnamachari and Prof. Neely for serving on my qualifying examination committee. I am very grateful to them for their valuable comments and suggestions.

I would like to thank all my friends and colleagues for their help and friendship. My time as a student was much more enjoyable because of them.

I would like to thank my mother, who has devoted herself to my education since my childhood, and my family, in particular my mother-in-law, who has always provided me with strong support throughout the years. Finally, I would like to express my deepest gratitude to my wife, Hyun Bin, for her immeasurable love.

Table of Contents

Dedication
Acknowledgments
List of Tables
List of Figures
Abstract
Chapter 1: Introduction
  1.1 Motivation
  1.2 Related Work
  1.3 Distributed Algorithms for Source Localization System
    1.3.1 Distributed Quantizer Design Algorithm
      1.3.1.1 Rate Allocation
    1.3.2 Distributed Localization Algorithm based on Quantized Data
    1.3.3 Distributed Encoding Algorithm
  1.4 Outline and Contributions
Chapter 2: Quantizer Design
  2.1 Introduction
  2.2 Problem Formulation
    2.2.1 Location Estimation based on Quantized Data
    2.2.2 Criteria for Quantizer Optimization
  2.3 Quantizer Design Algorithm
    2.3.1 Iterative Optimization Algorithm
    2.3.2 Constrained Design Algorithm
    2.3.3 Convergence and stopping criteria
    2.3.4 Summary of algorithm
  2.4 Rate Allocation using GBFOS
  2.5 Application to Acoustic Amplitude Sensor Case
    2.5.1 Source localization using quantized sensor readings
    2.5.2 Quantizer design
    2.5.3 Geometry-Driven Quantizers: Equally Distance-divided Quantizers
  2.6 Simulation Results
    2.6.1 Quantizer design
      2.6.1.1 Comparison with traditional quantizers
      2.6.1.2 Comparison with optimized quantizers
      2.6.1.3 Sensitivity to parameter perturbation
      2.6.1.4 Performance analysis in a larger sensor network: comparison with traditional quantizers
      2.6.1.5 Discussion
    2.6.2 Rate allocation
      2.6.2.1 EDQ design
      2.6.2.2 Effect of quantization schemes
      2.6.2.3 Rate allocation under power constraints
      2.6.2.4 Performance analysis: comparison with uniform rate allocation
      2.6.2.5 Discussion
  2.7 Conclusion
Chapter 3: Localization Algorithm based on Quantized Data
  3.1 Introduction
  3.2 Problem Formulation
  3.3 Localization Algorithm based on Maximum A Posteriori (MAP) Criterion: Known Signal Energy Case
  3.4 Implementation of Proposed Algorithm
  3.5 Unknown Signal Energy Case
  3.6 Simulation Results
    3.6.1 Case of known signal energy
    3.6.2 Case of unknown signal energy
    3.6.3 Sensitivity to parameter mismatches
    3.6.4 Performance analysis in a larger sensor network
  3.7 Conclusion
Chapter 4: Distributed Encoding Algorithm
  4.1 Introduction
  4.2 Definitions
  4.3 Motivation: Identifiability
  4.4 Quantization Schemes
  4.5 Proposed Encoding Algorithm
    4.5.1 Incremental Merging
  4.6 Extension of Identifiability: p-identifiability
  4.7 Decoding of Merged Bins and Handling Decoding Errors
    4.7.1 Decoding Rule 1: Simple Maximum Rule
    4.7.2 Decoding Rule 2: Weighted Decoding Rule
  4.8 Application to Acoustic Amplitude Sensor Case
    4.8.1 Construction of S_Q(p)
  4.9 Experimental results
    4.9.1 Distributed Encoding Algorithm
    4.9.2 Encoding with p-Identifiability and Decoding rules
    4.9.3 Performance Comparison: Lower Bound
  4.10 Conclusion
Chapter 5: Conclusion and Future work
Bibliography

List of Tables

Table 2.1: Comparison of LSQs with optimized quantizers. The average localization errors are computed using a test set of 2000 source locations.

Table 2.2: Localization error (LE) of LSQ due to variations of the modelling parameters. LE = (1/100) Σ_{l=1}^{100} E_l(‖x − x̂‖²), where E_l is the average localization error for the l-th sensor configuration, expressed in m². LE (normal) is for a test set drawn from a normal distribution with mean (5,5) and unit variance, and LE (uniform) from a uniform distribution. LSQs are designed with R_i = 3, a = 50, α = 2, g_i = 1 and w_i = 0 for a uniform distribution.

Table 2.3: Average localization error (m²) vs. number of sensors (M = 12, 16, 20) in a larger sensor field, 20×20 m².
The localization error is averaged over 20 different sensor configurations where each quantizer uses R_i = 3 bits.

Table 2.4: Localization error (m²) for various sets of rate allocations, where R*_EDQ, R*_U and R* are obtained by GBFOS using EDQ, uniform quantizers and LSQ, respectively, given Σ R_i = 10. Localization error is computed by E(‖x − x̂‖²) using EDQ and LSQ.

Table 2.5: Localization error (m²) for various sets of rate allocations, where R*_PW was obtained by GBFOS using EDQ given P = Σ_i C_i R_i = Σ_i 2C_i. Localization error is given by E(‖x − x̂‖²).

Table 3.1: Localization error (LE) (m²) of the MAP algorithm compared to the energy-ratios-based algorithm (ERA) under various mismatches. In each experiment, a test set is generated with M = 5 and σ = 0.05, and one of the parameters is varied. Localization error (LE) (m²) is computed by E(‖x − x̂‖²) using α = 2, g_i = 1, R_i = 3 and a uniform distribution of p(x).

Table 4.1: Total rate R_M in bits (rate savings) achieved by various merging techniques.

Table 4.2: Total rate R_M in bits (rate savings) achieved by the distributed encoding algorithm (global merging technique). The rate savings is averaged over 20 different node configurations where each node uses LSQ with R_i = 3.

List of Figures

Figure 1.1: Block diagram of the source localization system. We assume the channel is noiseless and each sensor sends its quantized (Quantizer, Q_i) and encoded (ENC block) measurement to the fusion node, where decoding and localization are conducted in a distributed manner.

Figure 2.1: Localization of the source based on quantized energy readings.

Figure 2.2: Comparison of LSQs with uniform quantizers and Lloyd quantizers. The average localization error is plotted vs. the number of bits, R_i, assigned to each sensor.
The average localization error is given by (1/500) Σ_l^{500} E_l(‖x − x̂‖²), where E_l is the average localization error for the l-th sensor configuration. 2000 source locations are generated as a test set with a uniform distribution of the source location.

Figure 2.3: Partitioning of the sensor field (10×10 m², grid 0.2×0.2) by a uniform quantizer (left) and a Lloyd quantizer (right). 5 sensors are deployed with 2-bit quantizers. Each partition corresponds to the intersection region of 5 ring-shaped areas. More partitions yield better localization accuracy.

Figure 2.4: Partitioning of the sensor field (10×10 m², grid 0.2×0.2) by EDQ (left) and LSQ (right). 5 sensors are deployed with 2-bit quantizers. Each partition corresponds to the intersection region of 5 ring-shaped areas.

Figure 2.5: Justification of the EDQ design. The average localization error is plotted vs. the number of bits, R_i, assigned to each sensor with M = 5 (left), and vs. the number of sensors, M, with R_i = 3 bits (right). The average localization error is given by (1/500) Σ_l^{500} E_l(‖x − x̂‖²), where E_l is the average localization error for the l-th sensor configuration. 2000 source locations are generated as a test set with a uniform distribution of source locations.

Figure 2.6: Comparison of the optimal rate allocation R* with the uniform rate allocation R_U. LSQs are designed for each R* and R_U. 4 curves are plotted for comparison. For example, "EDQ (or LSQ) with R_U (or R*)" indicates the curve of localization error computed when each sensor uses EDQ (or LSQ) designed for R = R_U (or R*).

Figure 2.7: Gain in rate savings achieved by our optimal rate allocation R* using LSQs, as compared with the trivial solution where each sensor uses uniform quantizers of the same rate.

Figure 2.8: Evaluation of the optimal rate allocation for many different sensor configurations.
Localization error is averaged over 100 sensor configurations for two different rate allocations: R_U and R*.

Figure 3.1: Source locations that generate the given Q_r for each variance (σ = 0, 0.05, 0.16, 0.5) are plotted. 5 sensors (marked as ◦) are employed in a 10×10 m² sensor field and each sensor uses a 2-bit quantizer.

Figure 3.2: Localization accuracy of the proposed algorithm under source signal energy mismatch (top). In this experiment, a test set of 2000 source locations is generated for each source signal energy (a = 40, 45, ..., 55, 60). Localization is performed by the proposed algorithm in Section 3.4 using a = 50 and δ = 1 m. Distribution of weights vs. number of weights chosen, L (bottom) (Σ_l^L W_l / Σ_k^N W_k vs. L). A test set of 2000 source locations is generated and N = 10 weights are computed for each source location.

Figure 3.3: Localization algorithms based on the MMSE and MAP criteria are tested when σ varies from 0.5 to 0 with R_i = 3 (left), and when R_i = 3, 4 and 5 with σ = 0.05 (right) and δ = 1 m, respectively. w_i ∼ N(0, σ²).

Figure 3.4: Localization algorithms based on MMSE estimation, the MAP criterion and energy ratios are tested by varying the source signal energy a from 20 to 100. We set N = 10, L = 3 and δ_w = 1 m in our algorithm. In this experiment, a test set with M = 5, R_i = 3 is generated with a uniform distribution of source locations for each signal energy, and the measurement noise is modeled by a normal distribution with zero mean and σ = 0.05.

Figure 3.5: Localization algorithms based on the MAP criterion and energy ratios are tested in a larger sensor network by varying the number of sensors. The parameters are N = 10, L = 3 and δ_w = 1 m in our algorithm. In this experiment, a test set of 4000 samples was generated for M = 12, 16, 20. Each sensor uses a 3-bit quantizer and the measurement noise is modeled by a normal distribution with zero mean and σ = 0.05.
Figure 4.1: Encoder-decoder diagram.

Figure 4.2: Average localization error vs. total rate R_M for three different quantization schemes with the distributed encoding algorithm. Average rate savings is achieved by the distributed encoding algorithm (global merging algorithm).

Figure 4.3: Average rate savings achieved by the distributed encoding algorithm (global merging algorithm) vs. number of bits, R_i, with M = 5 (left), and vs. number of nodes with R_i = 3 (right).

Figure 4.4: Rate savings achieved by the distributed encoding algorithm (global merging algorithm) vs. SNR (dB) with R_i = 3 and M = 5. σ² = 0, ..., 0.5².

Figure 4.5: Average localization error vs. total rate R_M achieved by the distributed encoding algorithm (global merging algorithm) with simple maximum decoding and weighted decoding, respectively. The total rate varies by changing p from 0.8 to 0.95, and weighted decoding is conducted with L = 2. Solid line + *: weighted decoding. Solid line + ∇: simple maximum decoding.

Figure 4.6: Average localization error vs. total rate R_M achieved by the distributed encoding algorithm (global merging algorithm) with R_i = 3 and M = 5. σ = 0, 0.05. S_Q(p) is varied over p = 0.85, 0.9, 0.95. Weighted decoding with L = 2 is applied in this experiment.

Figure 4.7: Performance comparison: the distributed encoding algorithm is lower bounded by joint entropy coding.

Abstract

We consider sensor-based distributed source localization applications, where sensors transmit quantized data to a fusion node, which then produces an estimate of the source location.
For this application, the goal is to minimize the amount of information that the sensor nodes have to exchange in order to attain a certain source localization accuracy. We propose an iterative quantizer design algorithm that allows us to take into account the localization accuracy for quantizer design. We show that the quantizer design should be "application-specific" and that a new metric should be defined to design such quantizers. In addition, we address, using the generalized BFOS algorithm, the problem of allocating rates to each sensor so as to minimize the error in estimating the position of a source. We also propose a distributed encoding algorithm that is applied after quantization and achieves significant rate savings by merging quantization bins. The bin-merging technique exploits the fact that certain combinations of quantization bins at each node cannot occur because the corresponding spatial regions have an empty intersection.

We apply these algorithms to a system where an acoustic amplitude sensor model is employed at each sensor for source localization. For this case, we propose a distributed source localization algorithm based on the maximum a posteriori (MAP) criterion. If the source signal energy is known, each quantized sensor reading corresponds to a region in which the source can be located. Aggregating the information obtained from multiple sensors corresponds to generating intersections between the regions. We develop algorithms that estimate the likelihood of each of the intersection regions. This likelihood can incorporate uncertainty about the source signal energy as well as measurement noise. We show that the computational complexity of the algorithm can be significantly reduced by taking into account the correlation of the received quantized data.
Our simulations show the improved performance of our quantizer over traditional quantizer designs, and that our localization algorithm achieves good performance with reasonable complexity as compared to minimum mean square error (MMSE) estimation. They also show that an optimized rate allocation leads to significant rate savings (e.g., over 60%) with respect to a rate allocation that uses the same rate for each sensor, with no penalty in localization efficiency. In addition, they demonstrate rate savings (e.g., over 30% with 5 nodes and 4 bits per node) when our novel bin-merging algorithms are used.

Chapter 1

Introduction

1.1 Motivation

In sensor networks, multiple correlated observations are available from many sensors that can sense, compute and communicate. Often these sensors are battery-powered and operate under strict limitations on wireless communication bandwidth. This motivates the use of data compression in the context of various tasks such as detection, classification, localization and tracking, which require data exchange between sensors. The basic strategy for reducing the overall energy usage in the sensor network would then be to decrease the communication cost at the expense of additional computation in the sensors [42].

One important sensor collaboration task with broad applications is source localization. The goal is to estimate the location of a source within a sensor field, where a set of distributed sensors measure the acoustic, seismic or thermal signals emitted by a source and manipulate the measurements to produce meaningful information such as signal energy, direction-of-arrival (DOA) and time difference-of-arrival (TDOA) [3,20]. In such cases, the sensor observations are correlated and usually corrupted by noise.

Figure 1.1: Block diagram of the source localization system.
We assume the channel is noiseless and each sensor sends its quantized (Quantizer, Q_i) and encoded (ENC block) measurement to the fusion node, where decoding and localization are conducted in a distributed manner.

In addition, since there is normally a physical separation between the sensors and the fusion node, the use of efficient data compression schemes becomes attractive for sensor networks, which normally have to operate under severely limited channel bandwidth.

It should be noted that since practical systems will require quantization of the observations before transmission, the estimation ought to be accomplished based on quantized observations. Thus, the goal of this thesis is to study the impact of quantization on the source localization performance of systems such as those in Figure 1.1.

1.2 Related Work

Localization algorithms based on acoustic signal energy measured at individual acoustic amplitude sensors have been proposed in [1,11,19,30], where each sensor transmits unquantized acoustic energy readings to a fusion node, which then computes an estimate of the location of the source of these acoustic signals. Acoustic amplitude sensors are suitable for low-cost systems such as sensor networks, even though their measurements are highly susceptible to environmental interference. The localization problem has been solved mostly through nonlinear least squares estimation, which is sensitive to local optima and saddle points. To overcome this drawback, alternative approaches that cast the problem as a convex feasibility problem have been proposed [1,11].

Localization can also be performed using DOA sensors (sensor arrays) [2-4]. Sensor arrays generally provide better localization accuracy than amplitude sensors, especially in the far field, but they are computationally more expensive. TDOA can be estimated using various correlation operations, and a least squares (LS) formulation can then be used to estimate the source location [5,24,31].
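As a toy illustration of the correlation-based TDOA estimation mentioned above (a sketch of the general idea only, not an algorithm from this thesis; the signals, noise level and delay below are entirely synthetic):

```python
import numpy as np

# Synthetic setup: a white-noise source heard at two sensors, the second
# with a 25-sample delay; both observations are lightly noise-corrupted.
rng = np.random.default_rng(0)
s = rng.standard_normal(512)
true_delay = 25
x1 = s + 0.05 * rng.standard_normal(s.size)
x2 = np.concatenate([np.zeros(true_delay), s])[:s.size] \
     + 0.05 * rng.standard_normal(s.size)

# TDOA estimate: the lag that maximizes the cross-correlation of the two
# sensor readings (np.correlate in 'full' mode spans all relative lags).
corr = np.correlate(x2, x1, mode="full")
lags = np.arange(-(s.size - 1), s.size)
est_delay = lags[np.argmax(corr)]
print(est_delay)  # 25
```

Given the sample rate and sensor geometry, such a lag estimate translates into a range difference, which the LS formulation then combines across sensor pairs.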
Good localization accuracy for the TDOA method can be achieved if there is accurate synchronization among sensors, which may require communication overhead that could be significant in a wireless sensor network. It may also be efficient to deploy different types of sensors (e.g., amplitude sensors and DOA sensors) in a sensor field of interest so that good localization accuracy can be achieved at reasonable cost [22].

None of these approaches explicitly takes into account the effect of sensor reading quantization. Since the measurements should be quantized before transmission, estimation algorithms need to be developed based on quantized measurements. For example, in a simple distributed framework where a parameter of interest is directly estimated at each sensor, distributed estimators based on quantized data were derived in [23,40]; these results rely on the availability of measurement noise statistics. In [25], the authors considered a source localization system where each sensor measures the signal energy, quantizes it and sends the quantized sensor reading to a fusion node, where the localization is performed. In this framework, the authors addressed the maximum likelihood (ML) estimation problem using quantized data and derived the Cramér-Rao bound (CRB) for comparison. Note that in deriving the ML estimator, it was assumed that all sensors used identical (uniform) quantizers. In [26], heuristic quantization schemes were also proposed in order to select the quantization to be used at all sensors. Note that this approach does not take the sensor locations into account when assigning quantizers to sensors.

1.3 Distributed Algorithms for Source Localization System

We consider a situation where a set of sensors and a fusion node wish to cooperate to estimate a source location. We assume that each sensor can estimate noise-corrupted source characteristics, such as signal energy or DOA, from actual measurements (e.g., time-series measurements or spatial measurements).
We also assume that there is only one-way communication from the sensors to the fusion node (i.e., there is no feedback channel), that the sensors do not communicate with each other (no relaying between sensors), and that these communication links are reliable.

The block diagram of the source localization system we consider in this thesis is given in Figure 1.1. We propose distributed algorithms for quantizer design (the quantizers we design are Q_1, ..., Q_M), encoding of quantized data (ENC) and localization (Localization Algorithm). We also address the problem of allocating rate to each sensor so as to minimize the localization error. We show that if the sensor locations are known during the quantizer design process, significant performance gains can be achieved with respect to uniform quantization at all the sensors. In particular, it will be seen that optimal strategies for allocating bits to sensors tend to target a uniform "bit density" throughout the sensor field. Thus, the number of bits per sensor tends to be low in areas where many sensors are located, and conversely high where sensors are relatively far apart from each other.

1.3.1 Distributed Quantizer Design Algorithm

We address the quantizer design problem and propose an iterative algorithm for quantizer design (Q_1, ..., Q_M in Figure 1.1). Since standard design of scalar quantizers aims at minimizing the average distortion between the actual sensor reading and its quantized value, there is no guarantee that such quantizers will reduce the localization error. Thus, we propose that quantizer design should be "application-specific". That is, to design such quantizers, a new metric should be defined that takes into account the accuracy of the application objective. Application-specific quantizer designs have been proposed for several applications, including time-delay estimation [36,37], speech recognition [33,34], and speaker verification [35]. An overview of recent application-specific quantization techniques can be found in [9].
In this thesis we consider as an application-specific metric the localization error, i.e., the difference between the actual source location and that estimated based on quantized data. A challenging aspect of this problem is that, while quantization has to be performed independently at each sensor, the localization error, which we wish to minimize, depends on the readings from all sensors. Thus we have a problem where independent (scalar) quantizers for each sensor have to be designed based on a global (vector) cost function. To solve this problem, we propose an iterative quantizer design algorithm for the localization problem (see [16,18]), as an extension of our earlier work [32]. We apply our algorithm to a system based on the acoustic sensor model proposed in [19]. Our experiments demonstrate the benefits of using application-specific designs to replace traditional quantizers, such as uniform quantizers and Lloyd quantizers.

1.3.1.1 Rate Allocation

Obviously, improved localization accuracy can always be achieved with finer quantization of the sensor measurements, but this requires higher overall power consumption for transmission, and thus potentially reduced lifetime for the sensors. Thus, we will explore the trade-off between rate (i.e., the number of bits used to represent the measurements) and overall localization accuracy. In [39], the authors considered an optimal power scheduling scheme which allowed them to determine the optimal rate for each sensor and thus the corresponding transmission power. In deriving the optimal scheme, they assumed that each sensor could measure directly the parameter to be estimated, with error due to measurement noise, and quantize its measurement using a uniform quantization scheme. However, in the case of source localization, each sensor can measure only the source signal (acoustic or seismic), from which estimates of signal energy or DOA can be obtained.
Note that these measurements are nonlinear functions of the source location (the parameter to be estimated) and will be quantized before transmission to a fusion node. In addition, it will generally be more efficient to use different quantization schemes at each sensor in order to achieve a certain degree of localization accuracy; this accuracy can vary significantly depending on the quantization scheme.

We address the rate allocation problem while taking into account the effect of quantization on localization. Clearly, better rate allocation can be achieved if better quantization schemes are employed at each sensor. We apply the generalized BFOS algorithm (GBFOS [28]) to solve the problem. We perform the rate allocation for a system based on the acoustic amplitude sensor model proposed in [19]. Our rate allocation results indicate that better performance is achieved when the allocation leads to a partition of the sensor field that is as uniform as possible. Thus, when several sensors are clustered together, the rate per sensor tends to be lower than when the same sensors are more spread out.

1.3.2 Distributed Localization Algorithm based on Quantized Data

We address the source localization problem based on quantized sensor readings when an acoustic amplitude sensor is employed at each sensor (see block Localization Algorithm in Figure 1.1). We show that when there is no measurement noise and the source signal energy is known, localization is equivalent to computing the intersection of the regions, each of which corresponds to one quantized sensor reading from each sensor. In this thesis, we propose a distributed source localization algorithm that uses a maximum a posteriori (MAP) criterion (see [17]). To tackle this problem we use a probabilistic formulation, where we consider the likelihood that a given candidate source location would produce a given vector reading.
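The probabilistic formulation above can be caricatured with a minimal grid-search sketch (a hypothetical simplification, not the reduced-complexity algorithm developed in Chapter 3; the energy-decay parameters a = 50, α = 2, the noise level and the quantizer thresholds are illustrative choices):

```python
import numpy as np
from math import erf, sqrt

# Gaussian CDF, vectorized with plain math.erf (no SciPy dependency).
Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

# Illustrative model: energy a*g/d^alpha plus N(0, sigma^2) noise,
# 3-bit quantizers with the same toy thresholds at every sensor.
a, alpha, sigma = 50.0, 2.0, 0.05
sensors = np.array([[2., 2.], [8., 2.], [5., 8.], [2., 8.], [8., 5.]])
edges = np.linspace(1.5, 10.5, 7)

def energies(points, sensor):
    d2 = np.sum((points - sensor) ** 2, axis=-1)
    return a / np.maximum(d2, 1e-3) ** (alpha / 2.0)

rng = np.random.default_rng(1)
true_x = np.array([4.0, 6.0])
readings = np.array([int(np.digitize(energies(true_x, s)
                     + sigma * rng.standard_normal(), edges)) for s in sensors])

# MAP with a uniform prior over a 10 x 10 m field reduces to maximizing,
# over candidate grid points, the probability of the observed index vector.
xs, ys = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
cand = np.column_stack([xs.ravel(), ys.ravel()])
bounds = np.concatenate([[-np.inf], edges, [np.inf]])
loglik = np.zeros(len(cand))
for s, r in zip(sensors, readings):
    e = energies(cand, s)
    p = Phi((bounds[r + 1] - e) / sigma) - Phi((bounds[r] - e) / sigma)
    loglik += np.log(np.maximum(p, 1e-300))

x_hat = cand[np.argmax(loglik)]
print(x_hat)  # lands near the true source at (4, 6)
```

The per-sensor factor is simply the probability that the Gaussian-perturbed energy falls in the observed quantization bin; multiplying these factors over sensors concentrates the likelihood on the intersection region discussed above.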
We show that the complexity of the solution can be significantly reduced by taking into account the quantization effect and the distributed nature of the quantized data, without significant impact on localization accuracy. We also show that for the unknown source signal energy case, a good estimator of the source location can be found by computing a weighted average of the estimates obtained by our MAP-based algorithm under different source energy assumptions.

1.3.3 Distributed Encoding Algorithm

We propose a novel distributed encoding algorithm (blocks ENC and Decoder in Figure 1.1) that exploits redundancies in the quantized data from the sensors and is shown to achieve significant rate savings while preserving source localization performance [15]. With our method, we merge (non-adjacent) quantization bins in a given sensor whenever we determine that the ambiguity created by this merging can be resolved at the fusion node once information from other sensors is taken into account. Note that this is an example of binning as found in Slepian-Wolf and Wyner-Ziv techniques [7,8,12]. In our approach, however, we do not use any channel coding. Instead, we propose design techniques that allow us to achieve rate savings purely through binning, and provide several methods to select candidate bins for merging.

1.4 Outline and Contributions

The main contributions of this thesis are:

• Distributed Quantizer Design Algorithm. We propose an iterative quantizer design algorithm which leads to quantizers that show improved performance over traditional quantizer designs. In addition, our design algorithm can be combined with the rate allocation process to produce better results.

• Distributed Localization Algorithm. We view the localization estimation problem as a maximum a posteriori (MAP) detection problem, so as to reduce the significant complexity that may be required by traditional estimators such as maximum likelihood (ML) and minimum mean square error (MMSE) estimators.
We show that our distributed localization algorithm achieves good performance compared with MMSE estimation.

• Distributed Encoding Algorithm. We propose a novel distributed encoding algorithm that merges quantization bins at each sensor and achieves rate savings without any loss of localization accuracy when there is no measurement noise. We show that significant rate savings can also be obtained via our merging technique even when there is measurement noise.

The block diagram in Figure 1.1 illustrates the organization of the thesis. In Chapter 2 we formulate the problem of designing quantizers (see blocks Q_i, i = 1, ..., M in Figure 1.1) and propose a distributed design algorithm that operates in an iterative fashion. We also show that significant gains can be obtained by using optimal rate allocation. In Chapter 3 we address the source localization problem based on quantized data and propose a distributed algorithm based on the MAP criterion, which shows good localization accuracy with reasonable complexity compared with MMSE estimation (see block Localization Algorithm in Figure 1.1). In Chapter 4, assuming no measurement noise, we first present a novel encoding algorithm (block ENC in Figure 1.1) and then develop decoding rules to resolve the decoding errors that may be caused by measurement noise or parameter mismatches. It should be noted that the decoding and the localization are conducted at the fusion node. As an example throughout this thesis, we consider a system where an acoustic amplitude sensor model is employed at each sensor. In each chapter, simulations are conducted to characterize the performance of our algorithms. Concluding remarks are given in Chapter 5.

Chapter 2

Quantizer Design

2.1 Introduction

In this chapter, we address a quantizer optimization problem where the goal is to design independent quantizers, each operating on its own sensor readings, that minimize the localization error, which depends on all sensor readings.
Instead of solving the problem directly by exhaustive search, we propose a distributed quantizer design algorithm which allows us to obtain independent quantizers by reducing the localization error in an iterative manner [6]. Similar procedures have been proposed for vector quantizer design [21,32]. We show that our iterative technique achieves performance close to that of an exhaustive search among independent quantizers.

We also address the rate allocation problem. We are given a total rate, and each sensor is assigned additional rate iteratively until the total rate is fully allotted. To solve the problem we apply the generalized BFOS algorithm (GBFOS) [28], which requires calculation of rate-distortion (R-D) points for each candidate rate allocation. In addition, since the power consumed in transmitting to the fusion node is proportional to the distance between each sensor and the fusion node, the same rate at different sensors may lead to different power consumption. With this consideration, we also view the problem as rate allocation under power constraints, where the goal is to achieve optimal localization accuracy for a given power consumption. Note that the GBFOS algorithm allows us to choose the best rate allocation (R) that minimizes the localization error (D) computed using the quantized sensor readings, which are generated from any given set of quantizers designed for each candidate rate allocation. Thus, a better rate allocation can be achieved if better quantizers are employed at each sensor. While the proposed quantizer design algorithm allows us to obtain good quantizers for each rate allocation, having to redesign the quantizers for each iteration of the rate allocation process would be computationally complex (see Footnote 1). To avoid having to redesign quantizers at each iteration, we introduce "geometry-driven" quantizers, which are simple to implement and show good performance [16,18]. In the experiments, we consider the acoustic amplitude sensor system (see [16-18]).
Extensive simulations have been conducted to characterize the performance of our algorithm. Our experiments show the improved performance of our quantizers over traditional quantizers such as uniform quantizers and Lloyd quantizers. We also perform the rate allocation with several quantization schemes, such as uniform quantizers, geometry-driven quantizers and the proposed quantizers. In the experiments, our rate allocation optimized for source localization allowed us to achieve over 60% rate savings in some cases as compared to a uniform rate allocation, with no loss in localization accuracy.

Footnote 1: Note that in some cases rate allocation and quantizer design can be done off-line, e.g., when the number and position of the sensors do not change, but in many cases of interest the sensor network could be reconfigured regularly, e.g., some subsets of sensors would be activated, which would require on-line rate selection. In these latter cases, a low-complexity rate allocation technique would be very important.

This chapter is organized as follows. The problem formulation of the quantizer design is given in Section 2.2. An iterative quantizer design algorithm is proposed in Section 2.3, and the rate allocation using the GBFOS algorithm is described in Section 2.4. In Section 2.5, we present an application to the case where an acoustic amplitude sensor model is employed. Simulation results are given in Section 2.6 and conclusions are found in Section 2.7.

2.2 Problem Formulation

Within the sensor field S of interest, assume there are M sensors located at known spatial locations, denoted x_i, i = 1,...,M, where x_i ∈ S ⊂ R². The sensors measure signals generated by a source located at an unknown location x ∈ S, which we assume to be static during the localization process (see Footnote 2).
Denote by z_i the measurement at the i-th sensor over a time interval k:

    z_i(x, k) = f(x, x_i, P_i) + w_i(k), \quad \forall i = 1, \dots, M,    (2.1)

where f(x, x_i, P_i) denotes the sensor model employed at sensor i (see Footnote 3) and w_i is a combined noise term that includes both measurement noise and modeling error. P_i is the parameter vector for the sensor model (an example of P_i for an acoustic amplitude sensor case is given in Section 2.5.1). It is assumed that each sensor measures its observation z_i(x, k) at time interval k, quantizes it and sends it to a fusion node, where all sensor readings are used to obtain an estimate x̂ of the source location (see Footnote 4).

Footnote 2: Obviously, our proposed techniques can be readily extended to the case where the source is moving and estimates of its location are computed independently at each time. Tracking algorithms that would exploit the spatial correlation of the source location go beyond the scope of this work.

Footnote 3: The sensor models for acoustic amplitude sensors and DOA sensors can be expressed in this form [22].

At sensor i we use an R_i-bit quantizer with dynamic range [z_{i,min}, z_{i,max}]. We assume that the quantization range can be selected for each sensor based on desirable properties of its respective sensing range. This will be illustrated in Section 2.5.2 with an example in the case of an acoustic amplitude sensor. Denote by α_i(·) the encoder at sensor i, which generates a quantization index Q_i ∈ I_i = {1, ..., 2^{R_i}}. In what follows, Q_i will also be used to denote the quantization bin to which measurement z_i belongs. Denote by β_i(·) the decoder corresponding to sensor i, which maps the quantization index Q_i to a reconstructed quantized measurement ẑ_i.

Both this formulation and the subsequent design methodology are general and capture many scenarios of practical interest. For example, z_i(x, k) could be the energy captured by an acoustic amplitude sensor (this will be the case study presented in Section 2.5), but it could also be a DOA measurement.
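To make the setup concrete, the following sketch implements a noiseless acoustic amplitude sensor model of this form together with a simple uniform R_i-bit encoder/decoder pair (α_i, β_i). This is an illustrative sketch only; the gain, source energy, and decay values are assumed for the example and are not taken from the thesis.

```python
import numpy as np

def sensor_model(x, xi, gain=1.0, a=50.0, alpha=2.0):
    """Noiseless reading f(x, x_i, P_i) = g_i * a / ||x - x_i||^alpha."""
    return gain * a / np.linalg.norm(np.asarray(x) - np.asarray(xi)) ** alpha

def encode(z, z_min, z_max, R):
    """Uniform R-bit scalar encoder alpha_i: reading -> bin index in {0, ..., 2^R - 1}."""
    L = 2 ** R
    step = (z_max - z_min) / L
    return int(np.clip((z - z_min) // step, 0, L - 1))

def decode(q, z_min, z_max, R):
    """Decoder beta_i: bin index -> midpoint reconstruction z_hat."""
    step = (z_max - z_min) / 2 ** R
    return z_min + (q + 0.5) * step
```

Here `encode` maps a reading to one of 2^R uniform bins over the dynamic range, and `decode` returns the bin midpoint as the reconstruction ẑ_i.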
Each scenario will obviously lead to a different sensor model f(x, x_i, P_i) (see Footnote 5). We assume that the fusion node needs observations z_i(x, k) from all sensors in order to estimate the source location. In some cases one reading per sensor is used, while in other cases values of z_i(x, k) for several k's are needed for localization. While multiple measurements can be made at each sensor, the individual measurements need not all be sent. Instead, each sensor can compute a sufficient statistic for localization from the multiple measurements, which can then be quantized and transmitted. For example, considering the case where the source is not moving and multiple source signal energy measurements are made, it can easily be shown that the average of the measurements at each sensor is a sufficient statistic for localization. Thus each sensor would simply quantize and transmit the average of its signal energy measurements. In what follows we discuss the design of a complete localization system, including i) source localization techniques that operate on quantized data, ii) quantizer design for localization, and iii) an algorithm to select the quantizer to use at each sensor.

Footnote 4: In this thesis, we assume that M sensors are activated prior to the localization process. However, selecting the best set of sensors for localization accuracy would be important to improve the system performance with a limited energy budget [13,38].

Footnote 5: In the DOA case each measurement at a given sensor location will be provided by an array of collocated sensors.

2.2.1 Location Estimation based on Quantized Data

Clearly, for z_i(x, k) to be useful for localization it must be a function of the relative positions of the source and the sensor. Thus, there exists some function g_u(·) that can provide an estimate of the source location x̂ based on the original, unquantized, observations; these estimators have been the focus of most of the literature to date, for both sensor networks and other source localization scenarios.
Instead, here our goal is to design both the quantizers α_i and a corresponding estimator g(·) that operates on the quantized data to estimate the source location x̂:

    \hat{x} = g(\alpha_1(z_1), \dots, \alpha_M(z_M)).    (2.2)

While specific choices of g(·) depend on the sensor model f(·), we can sketch some of their general properties (more details for a specific sensor model can be found in Section 2.5.1). First, z_i(x, k) must provide information (distance, angle, etc.) about the relative position of sensor and source. Thus, after quantization, each transmitted symbol will represent a range of positions (e.g., a range of distances from the sensor or an angular range). Second, with information obtained from all sensors, the source localization algorithm exploits the range information corresponding to each quantized symbol Q_i. This is in general better than reconstructing ẑ_i and then using the reconstructed sensor information within a standard estimator g_u(·). That is, an optimal estimator g(·) should be a function of the range information rather than of reconstructed values. In Section 2.5.1 we provide concrete examples for acoustic amplitude sensors in the noiseless case, and our more recent work [17] explores improved estimators that take the noise into account. In both cases, we derive optimal estimators in the minimum mean square error (MMSE) sense that make use of the range information, rather than the reconstructed values.

2.2.2 Criteria for Quantizer Optimization

We now consider, for a given rate allocation R = [R_1, ..., R_M], the problem of designing the scalar quantizers that achieve maximum localization accuracy. Assume the sensor model f(x, x_i, P_i) and the source localization function g(·) are given.
We define a cost function J(x) for the quantizer design as follows:

    J(x) = \sum_{i=1}^{M} |z_i - \hat{z}_i|^2 + \lambda \|x - \hat{x}\|^2, \quad \forall x \in S,    (2.3)

where ẑ_i is the reconstructed value assigned to z_i and x̂ is the source location estimated using a localization function g(·) that will also have to be designed. Note that the cost function is a weighted sum of i) the standard mean squared error (MSE) in representing the sensor readings and ii) the localization error ‖x − x̂‖². The Lagrange multiplier λ ≥ 0 controls the relative weight of these two cost metrics, so that when setting λ = 0, the problem of minimizing J(x) becomes a standard quantizer design problem. Clearly, for the localization problem we address in this work, we could choose λ = ∞, since the goal is to design quantizers that minimize the localization error ‖x − x̂‖² regardless of the MSE |z_i − ẑ_i|². However, in this chapter we address the quantizer optimization problem using the weighted metric with a given λ ≠ 0. This approach is chosen in order to limit the complexity of the quantizer design, as will be described in what follows.

Recall that in our formulation we are designing scalar quantizers. Assume we are given a set of scalar quantizers, one for each sensor, and we seek to encode an observation in a way that minimizes the localization error. The key point to note is that the estimated location x̂ is based on all the quantized readings. Localization-optimized encoding will therefore depend on the observations made at all the sensors. Thus it is likely that, in order to optimize localization, an observation z_i at sensor i would be assigned to different quantization bins depending on the observations z_j at the other sensors, j ≠ i. Such an unconstrained encoder would achieve optimality in terms of localization, but could only be used if there were information exchange between sensors, which has been precluded in our formulation because of the communication overhead it entails.
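As a minimal illustration, the weighted metric of (2.3) can be evaluated directly from the readings, their reconstructions, and the true and estimated locations (pure-Python sketch; all argument names are our own):

```python
def cost_J(z, z_hat, x, x_hat, lam):
    """Weighted design metric of (2.3): per-sensor MSE plus lambda times
    the squared localization error."""
    mse = sum((zi - zhi) ** 2 for zi, zhi in zip(z, z_hat))
    loc_err = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    return mse + lam * loc_err
```

Setting `lam = 0` recovers the standard quantization cost, while large `lam` weights the localization error, mirroring the discussion above.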
Instead, we need to design a set of scalar quantizers that are constrained, in the sense that a given observation z_i is always assigned to the same quantization index, no matter what the other sensor readings are. These are just standard scalar quantizers that apply decision rules based on distance to encode z_i. Our goal is then to find the best scalar quantizer assignment by searching the set of all possible constrained quantizers.

Solving the problem directly (i.e., searching only among constrained quantizers) would require an exhaustive search and is not practical in general. Instead, we will use iterative design techniques for a given λ ≠ 0 in (2.3), where we allow unconstrained quantizers to be used. Within the design algorithm, mechanisms are then used to constrain the resulting quantizers. Essentially, this means that quantizers are designed so that encoders minimize the metric of (2.3) with λ ≠ 0, but are then approximated by encoders that operate based on λ = 0, as required for the real system (localization information is not known at the time of encoding). While there will be a loss in localization performance relative to using unconstrained quantization, we will show examples to illustrate that our iterative techniques can achieve performance very close to that of an exhaustive search among constrained quantizers.

2.3 Quantizer Design Algorithm

The goal of our quantizer design algorithm is to minimize the expected value of the cost function (see Footnote 6) in (2.3), where we average over all possible source locations, characterized by a probability density function p(x):

    J_{avg} = E(J(x)) = \int_S J(x)\, p(x)\, dx.    (2.4)

Footnote 6: For source localization, the cost function J(x) is replaced by the localization error ‖x − x̂‖².

If no prior information is available about the relative likelihood of possible source locations, p(x) can be made uniform over the sensor field.
For the purpose of training our quantizers, we generate a training set of observations {z_1(x,k), ..., z_M(x,k)} based on the sensor model f(x, x_i, P_i), with a given choice of p(x). Quantizer design is optimized for the known sensor locations and the given bit allocation. The optimal bit allocation will be discussed in Section 2.4. In what follows we first explain the iterative optimization algorithm for the weighted metric with a given λ, then propose an iterative algorithm that allows us to consider unconstrained quantizers during quantizer design, and finally discuss convergence and stopping criteria for our algorithm.

2.3.1 Iterative Optimization Algorithm

The cost function J(x) can be rewritten in terms of the M quantizers and the localization function g(·):

    J(x) = \sum_{i=1}^{M} |z_i - \beta_i(\alpha_i(z_i))|^2 + \lambda \|x - g(\alpha_1(z_1(x,k)), \dots, \alpha_M(z_M(x,k)))\|^2.    (2.5)

We propose an iterative solution to search for the α_i(·), β_i(·) and g(·), i = 1,...,M, that minimize J_avg given by (2.4). For each sensor i we optimize the quantizer selection while the quantizers for the other sensors remain unchanged. This is done successively for each sensor and repeated over all sensors until a stopping criterion is satisfied. Similar iterative procedures have been proposed for constrained product VQ design [32] and for entropy-constrained mean-gain-shape VQ [21]. Furthermore, in designing the quantizer at each sensor, with the other quantizers fixed, we take the approach in [6]. That is, at sensor i with β_i(·) and g(·) fixed, α_i(·) is designed to minimize J_avg(α_i(·), β_i(·), g(·), i = 1,...,M) = J_avg(α_i(·)). Similarly, β_i(·) (or g(·)) is designed with α_i(·) and g(·) (or α_i(·) and β_i(·)) fixed. If optimal solutions for each of these steps can be found, then this method guarantees that J_avg(α_i(·), β_i(·), g(·)) is nonincreasing at each step, thus leading to at least a locally optimal solution. We now describe solutions for each of these problems.

First, fix β_i(·) and g(·).
The optimal encoder α_i*(·) that minimizes (2.5) (or, equivalently, (2.4)) is such that:

    \alpha_i^*(\cdot) = \arg\min_{\alpha_i(\cdot)} \int_{x \in S} \left[ |z_i - \beta_i(\alpha_i(x))|^2 + \lambda \|x - g(\alpha_i(x))\|^2 \right] p(x)\, dx.    (2.6)

Note that only the i-th MSE term |z_i − ẑ_i|² and the localization error ‖x − x̂‖² are affected by the selection of α_i*(·), i.e., all the other MSE terms are unchanged. Clearly, exhaustive search over all α_i(·)'s (with β_i(·) and g(·) fixed) guarantees that the overall cost is non-increasing. This would be impractical, especially at high rates, but we will use such an exhaustive search in Section 2.6.1.2 to serve as a benchmark for evaluating the simpler techniques we propose in Section 2.3.2.

With α_i(·) and g(·) fixed, the decoder β_i*(·) that minimizes (2.5) is simply the centroid of all z_i assigned to a specific quantization bin Q_i^j of the i-th sensor (see Footnote 7), i.e.,

    \beta_i^*(Q_i^j) = E[z_i(x) \mid x \in \{x \mid \alpha_i(x) = Q_i^j\}], \quad j = 1, \dots, L_i, \ \forall i.    (2.7)

Footnote 7: Note that β_i*(·) only affects the MSE cost, since the localization estimate x̂ is based on the quantization intervals.

Finally, given α_i(·) and β_i(·), we can determine the g*(·) that minimizes (2.5) as follows:

    g^*(\cdot) = \arg\min_{g(\cdot)} \int_{x \in S} \|x - g(\alpha_i(x))\|^2 p(x)\, dx = \arg\min_{g(\cdot)} E[\|x - \hat{x}\|^2].    (2.8)

Notice that the average localization error is minimized by g*(·) = E[x | α_1(x), ..., α_M(x)], which is the minimum mean square error (MMSE) estimator given the M encoders.

In summary, in our proposed iterative procedure two of the design steps can be solved optimally, while the remaining one (designing α_i(·)) can also be solved optimally but would require an exhaustive search. It can easily be shown that for a given sensor each step in the optimization reduces the overall cost, and so the algorithm will converge to a minimum of the metric of (2.4). Moreover, when quantization for one sensor is optimized, the MSE of the other sensors is not affected, so that again the overall cost is reduced. Thus a locally optimal solution can be found using this procedure.
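The two optimally solvable steps can be sketched over a training set: the centroid decoder of (2.7), and a tabulated form of the MMSE estimator g*(·) = E[x | α_1(x), ..., α_M(x)] that averages the training locations observed for each joint index tuple. Function names are our own; this is a minimal sketch, not the thesis implementation.

```python
import numpy as np
from collections import defaultdict

def centroid_decoder(readings, indices, L):
    """beta*_i of (2.7): the reconstruction for bin j is the conditional mean
    of the training readings that were encoded to index j."""
    recon = np.zeros(L)
    for j in range(L):
        sel = indices == j
        if sel.any():
            recon[j] = readings[sel].mean()
    return recon

def mmse_estimator_table(locations, index_tuples):
    """g* of (2.8): E[x | alpha_1(x), ..., alpha_M(x)], tabulated as the mean
    training location for each joint quantization index tuple."""
    groups = defaultdict(list)
    for loc, key in zip(locations, index_tuples):
        groups[key].append(loc)
    return {key: np.mean(v, axis=0) for key, v in groups.items()}
```

At run time, the fusion node would look up the received index tuple in the table to obtain x̂.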
We next explain how an efficient constrained design for α_i can be obtained without requiring exhaustive search.

2.3.2 Constrained Design Algorithm

Suppose that at sensor i we are given an encoder α_i = {Q_i^j, j = 1,...,L_i} with L_i = 2^{R_i} quantization levels. A partition V_i = {V_i^j, j = 1,...,L_i}, analogous to the Voronoi partition in the generalized Lloyd algorithm, is constructed as follows (see Footnote 8):

    V_i^j = \{x : J(x; \alpha_i = Q_i^j) \le J(x; \alpha_i = Q_i^m), \ \forall m \ne j\}, \quad j = 1, \dots, L_i,    (2.9)

Footnote 8: In standard quantization (λ = 0), the Voronoi partition V_i is equivalent to the encoder α_i. That is, V_i^j is the same region as the j-th quantization bin Q_i^j and is given by V_i^j = {z_i : |z_i − ẑ_i^j|² ≤ |z_i − ẑ_i^m|², ∀ m ≠ j}.

where the cost function is computed using the β_i* and g*(·) obtained from (2.7) and (2.8), respectively. Notice that V_i^j is the set of source locations for which mapping to Q_i^j minimizes the cost function. The average cost J_avg given in (2.4) can then be computed using V_i as follows:

    J_{avg}(\alpha_i, V_i) = \sum_{j=1}^{L_i} E(J(x; \alpha_i) \mid x \in V_i^j)\, p(x \in V_i^j).    (2.10)

J_avg can be reduced by minimizing it for each V_i^j. As in standard quantization (λ = 0), we perform the minimization over ẑ_i^j for each V_i^j, which is achieved by taking the centroid E(z_i(x) | x ∈ V_i^j). Formally,

    J_{avg}(\alpha_i, V_i) \ge \sum_{j=1}^{L_i} \min_{\hat{z}_i^j} E\left(|z_i(x) - \hat{z}_i^j|^2 + \lambda \|x - g^*(\alpha_i)\|^2 \mid x \in V_i^j\right) p(x \in V_i^j) = J_{avg}(\alpha_i, \beta_i^*(V_i), V_i),    (2.11)

where β_i*(V_i) produces the reconstructed values ẑ_i^j, j = 1,...,L_i, by taking the centroid over {z_i(x) | x ∈ V_i^j}. Note that the encoding of sensor readings corresponding to V_i in (2.9) is unconstrained, since it requires knowledge of the other sensor readings. Thus, the unconstrained encoder must be converted into a corresponding constrained one, which in turn will be used for the construction of V_i at the next iteration. In our algorithm, we adopt a simple distance measure to obtain constrained encoders.
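This distance-based constraining step can be sketched as follows: given the centroids ẑ_i^j produced by β_i*(V_i), the constrained encoder simply maps a reading to its nearest centroid, i.e., a standard nearest-neighbor scalar quantizer (the function name is illustrative):

```python
def constrain_encoder(centroids):
    """Turn an unconstrained partition into a standard scalar quantizer:
    each reading is assigned to the bin whose reconstruction value
    (centroid) is nearest in squared distance."""
    def encoder(z):
        return min(range(len(centroids)), key=lambda j: (z - centroids[j]) ** 2)
    return encoder
```

The resulting encoder depends only on the local reading z_i, as required when no inter-sensor communication is allowed.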
We first find the centroid of each V_i^j obtained from (2.9) and then use these centroids to create a quantization partition, i.e., a quantization bin Q_i^j includes all inputs assigned to the centroid of V_i^j by the nearest neighbor rule:

    Q_i^j = \{z_i : |z_i - \hat{z}_i^j|^2 \le |z_i - \hat{z}_i^m|^2, \ \forall m \ne j\},    (2.12)

where {ẑ_i^j, j = 1,...,L_i} is generated from β_i*(V_i).

It should be noted that the encoder α̂_i = {Q_i^j, j = 1,...,L_i} updated by (2.12) does not guarantee that the metric J_avg is nonincreasing, since the encoder α̂_i is updated in such a way that only the first term |z_i − ẑ_i^j|² is minimized. That is, α̂_i may increase the second term ‖x − x̂‖² = ‖x − g*(α̂_i)‖² in J_avg. The procedure from (2.9) to (2.12) is repeated at each sensor until a stopping criterion is satisfied.

2.3.3 Convergence and Stopping Criteria

The challenge in achieving convergence is that we need the unconstrained quantizers to minimize the metric with nonzero values of λ, while they must be replaced by constrained ones, which are not guaranteed to minimize the metric (see Footnote 9). As in standard quantization (λ = 0), we can seek the largest value λ_max of λ that leads directly to constrained encoding of the sensor readings, in order to guarantee convergence. However, we observe that λ_max tends to be very small and leads to localization errors that are greater than those achieved by first designing unconstrained quantizers and then forcing them to be constrained. This is not surprising, since we would then design quantizers with small λ while the localization error is minimized when λ = ∞ (see Footnote 10).

Footnote 9: We could perform an exhaustive search at each iteration to obtain the constrained quantizers that minimize the metric, in order to guarantee convergence, but this would be too computationally expensive in practice.

Footnote 10: For some applications where both the local metric (e.g., |z_i − ẑ_i^j|²) and the global metric (e.g., ‖x − x̂‖²) are important, we should be able to find a λ that maximizes the application objective.
For such cases, techniques need to be developed to search for a reasonable λ.

In order to avoid an increase in the metric at each iteration due to (2.12), we suggest a simple stopping criterion that forces the algorithm to stop whenever the metric gets worse. This simple stopping rule is effective for the design of distributed quantizers for source localization, since the other quantizers are also designed to reduce the same metric, ‖x − x̂‖², and when the design process returns to, say, the i-th quantizer, the metric recomputed at sensor i tends to be better than at the previous iteration, allowing the algorithm to continue. At the least, with this stopping criterion we can guarantee that the metric is nonincreasing.

Despite the fact that J_avg is not always nonincreasing due to (2.12), we can expect the updated encoder α̂_i to reduce the metric in most iterations, since it is updated based on the partition V_i, which is constructed so that the metric is minimized. As an example, we experiment with our quantizers for the acoustic amplitude sensor case in Section 2.6.1.1, where the algorithm is shown to produce a good solution on average as compared with typical quantizers such as uniform and Lloyd quantizers. Our quantizers are also shown to achieve performance close to that of the optimized quantizers designed using the exhaustive search explained in Section 2.6.1.2.

2.3.4 Summary of Algorithm

Given the number of quantization levels L_i = 2^{R_i} at sensor i, the algorithm is summarized as follows (see Footnote 11).

Footnote 11: Our algorithm can be applied for an arbitrary integer L_i, not only for values corresponding to integer R_i.

Algorithm 1. For simplicity, in what follows z_i(x,k) is written as z_i(x).
Step 1: Initialize the encoders α_i(·), i = 1,...,M. Set thresholds ε_1 and ε_2, set i = 1, and set the iteration indices κ_1 = 1 and κ_2 = 1.
Step 2: Compute the cost function of (2.5).
Step 3: Construct the partition V_i using (2.9).
Step 4: Compute the average cost J_avg^{κ_1} = E[J(x)].
Step 5: If (J_avg^{κ_1 − 1} − J_avg^{κ_1}) / J_avg^{κ_1} < ε_1, go to Step 7; otherwise continue.
Step 6: Set κ_1 = κ_1 + 1, update the encoder α_i using (2.12), and go to Step 2.
Step 7: If i < M, set i = i + 1 and go to Step 2; else if (D^{κ_2 − 1}(x, x̂) − D^{κ_2}(x, x̂)) / D^{κ_2}(x, x̂) < ε_2, stop; else set i = 1, κ_2 = κ_2 + 1, and go to Step 2. Here D^{κ_2}(x, x̂) is given by E(‖x − x̂‖²) at the κ_2-th iteration.

Note that the quantizer design is performed off-line using a training set that is generated based on known values of P_i and p(x); thus the quantizer training phase makes use of information about all sensors, but when the resulting quantizers are actually used, each sensor quantizes the information available to it independently.

It is possible to introduce "geometry-driven" quantizers: for the amplitude sensor case, these quantizers are designed to partition the distance between sensors and source uniformly (see Section 2.5.3). Similar ideas can be applied to DOA sensors, where quantizers provide uniform quantization of the angle of arrival. In Section 2.6, these quantizers are shown to be simple and to achieve good performance as compared with the proposed quantizers. A discussion of the robustness of our quantizers to model mismatches is also left for Section 2.6.

2.4 Rate Allocation using GBFOS

With the proposed cost function we can design quantizers for a given rate allocation (bits assigned to each sensor). The next step is then to search for the rate allocation R* that minimizes the average localization error, i.e., D = ∫_{x∈S} ‖x − x̂‖² p(x) dx, for a given total rate R_T = Σ_{i=1}^M R_i. A more general problem formulation can take transmission costs into consideration, e.g., the power consumption in the network required to transmit bits to the fusion node. This power consumption will depend on the bits allocated to specific sensors and also on the distance between these sensors and the fusion node.
Thus, we can address the rate allocation problem under power constraints as follows: we are given a total power P = Σ_{i=1}^M P_i, with P_i = C_i R_i, where C_i is the power required for sensor i to transmit one bit to the fusion node x_f; thus P_i provides an approximation to the power consumption at sensor i. Our goal is then to find the rate allocation R* that minimizes the average localization error for a given total power. Clearly, C_i is proportional to the physical distance between x_i and x_f, and thus once the sensors are deployed in a sensor field it can be determined prior to the rate allocation (see Footnote 12). Notice that only the relative values of the C_i's play a role in this rate allocation process.

Footnote 12: C_i can be written as C_i = γ_i ‖x_f − x_i‖^{α_s}, where α_s is the path-loss exponent and γ_i reflects the transmission method and other factors [39]; in this chapter γ_i is assumed to be equal for all sensors and is thus ignored for simplicity.

To solve the rate allocation problem for source localization, we can apply the well-known generalized BFOS algorithm (GBFOS) [28] to obtain R*. The algorithm typically starts by assigning the given maximum rate R_T to each sensor and then reduces the number of bits optimally until the rate budget is met. Initially R_i = R_T, i = 1,...,M, and at each iteration we reduce the rate allocated to one of the sensors by computing alternative rate-distortion (R-D) operating points for each candidate rate allocation and choosing the one that minimizes the slope of the R-D curve. Note that as the bit rate is reduced, the distortion (localization error) at each sensor will increase at a different rate (equivalently, slope) along the R-D curve. This procedure guarantees the optimal reduction in bit rate at each iteration. This is repeated until Σ_{i=1}^M R_i = R_T is satisfied.
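A toy sketch of this greedy reduction, with ΔR fixed to one bit per step (in which case choosing the candidate with the smallest resulting distortion is equivalent to choosing the minimum slope). Here `distortion` stands for a hypothetical callback that evaluates the average localization error of a candidate allocation; in practice this is where the R-D points are computed.

```python
def gbfos_allocate(distortion, M, R_T, R_max):
    """GBFOS-style greedy bit reduction (1-bit steps, sketch only):
    start every sensor at R_max and repeatedly remove the one bit whose
    removal increases the localization distortion least, until the
    total rate meets the budget R_T."""
    rates = [R_max] * M
    while sum(rates) > R_T:
        best_i, best_d = None, None
        for i in range(M):
            if rates[i] == 0:
                continue  # cannot reduce below zero bits
            trial = rates.copy()
            trial[i] -= 1
            d = distortion(trial)
            if best_d is None or d < best_d:
                best_i, best_d = i, d
        rates[best_i] -= 1
    return rates
```

The power-constrained variant of the text would divide each candidate's distortion increase by C_i before comparing, stopping when the power budget is met.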
Formally, at the η-th iteration,

    i_\eta = \arg\min_{1 \le i \le M} \frac{D_i(\eta) - D(\eta - 1)}{\Delta R_i^\eta},    (2.13)

where D(η − 1) is the average localization error at the previous step, D_i(η) = ∫_{x∈S} ‖x − x̂_i(η)‖² p(x) dx, x̂_i(η) is computed using g(·), and the M quantizers are designed for R_i = (R_1^η = R_1^{η−1}, ..., R_i^η = R_i^{η−1} − ΔR_i^η, ..., R_M^η = R_M^{η−1}).

Note that for each candidate rate allocation we may design quantizers using the algorithm in Section 2.3.4 to achieve good rate allocation performance (see Footnote 13), or we can use simple quantization schemes that do not require redesigning quantizers at each iteration. For example, we can use M uniform quantizers (or the "geometry-driven" quantizers to be introduced later) for the purpose of obtaining the rate allocation. Then, once an optimal rate allocation is obtained, a better set of quantizers can be designed for that rate using the algorithm in Section 2.3.4. Our experiments in Section 2.6 illustrate the impact of using different quantization schemes on the rate allocation performance. That is, a significant gain can be achieved by taking the quantization scheme into account during the rate allocation process.

Footnote 13: The rate allocation problem is one of the applications where we can obtain benefits from using our quantizer design algorithm.

The GBFOS algorithm can also be applied to the rate allocation problem under power constraints. In this case, at each step we decrease the power consumed by individual sensors by reducing the bit rate assigned to them until Σ_i C_i R_i = P is satisfied. That is, the same process as in the previous rate allocation is performed, except that the computation of the slope is conducted in terms of power consumption, so that at the η-th iteration

    i_\eta = \arg\min_{1 \le i \le M} \frac{D_i(\eta) - D(\eta - 1)}{\Delta P_i^\eta},    (2.14)

where ΔP_i^η = C_i ΔR_i^η.

2.5 Application to the Acoustic Amplitude Sensor Case

2.5.1 Source localization using quantized sensor readings

Assuming no measurement noise (w_i = 0 in (2.1)), we consider source localization based on quantized data.
Note that the localization algorithm explained in this section is designed for the high-SNR regime (w_i ≈ 0), but it will also provide the foundation for localization based on noisy quantized data (see [17] and Chapter 3). Since each quantized sensor reading corresponds to a region where the source can be located, the quantized sensor readings together induce a partition of the sensor field, obtained by intersecting the regions corresponding to the individual readings. Formally,

    A = \bigcap_{i=1}^{M} A_i, \quad A_i = \{x \mid f(x, x_i, P_i) \in Q_i, \ x \in S\},    (2.15)

where A_i is the region corresponding to the quantized reading from sensor i. Once the intersection region A is obtained, we can compute the estimate as x̂ = E(x | x ∈ A). Notice that this estimator is optimal in the MMSE sense under the assumption of no measurement noise.

As an example, we consider source localization based on acoustic sensor readings as proposed in [19], where an energy decay model of the sensor signal readings is used for localization based on unquantized sensor readings (see Footnote 14). This model is based on the fact that the acoustic energy emitted omnidirectionally from a sound source attenuates at a rate that is inversely proportional to the square of the distance [27]. When an acoustic sensor is employed, the signal energy measured at sensor i over a given time interval k, denoted by z_i, can be expressed as follows:

    z_i(x, k) = g_i \frac{a}{\|x - x_i\|^{\alpha}} + w_i(k),    (2.16)

where the parameter vector P_i in (2.1) consists of the gain factor g_i of the i-th sensor, an energy decay factor α, which is approximately equal to 2, and the source signal energy a. The measurement noise term w_i(k) can be approximated using a normal distribution, N(0, σ_i²). In (2.16) it is assumed that the signal energy a is uniformly distributed over the range [a_min, a_max].

Assuming that the signal energy a is known (see Footnote 15), localization based on quantized sensor readings can be illustrated by Figure 2.1, where each ring-shaped area corresponds to one quantized observation at a sensor.
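The estimator x̂ = E(x | x ∈ A) of (2.15) can be approximated numerically by intersecting the ring-shaped regions on a grid over the sensor field. The sketch below assumes the noiseless energy decay model with unit gains and known source energy; the field size, grid resolution, and parameter values are all illustrative choices, not values from the thesis.

```python
import numpy as np

def intersection_estimate(sensors, bin_edges, bin_indices,
                          a=100.0, alpha=2.0, field=10.0, n=200):
    """Grid approximation of x_hat = E(x | x in A): A is the intersection of
    the ring-shaped regions A_i implied by each quantized energy reading,
    under the noiseless model z_i = a / ||x - x_i||^alpha (unit gains)."""
    xs = np.linspace(0.0, field, n)
    X, Y = np.meshgrid(xs, xs)
    mask = np.ones_like(X, dtype=bool)
    for (sx, sy), edges, q in zip(sensors, bin_edges, bin_indices):
        d2 = np.maximum((X - sx) ** 2 + (Y - sy) ** 2, 1e-12)  # avoid /0 at a sensor
        z = a / d2 ** (alpha / 2.0)                            # noiseless sensor model
        mask &= (z >= edges[q]) & (z < edges[q + 1])           # ring A_i for bin q
    if not mask.any():
        return None  # inconsistent readings: empty intersection
    return float(X[mask].mean()), float(Y[mask].mean())
```

For a uniform source prior, the mean of the grid points inside the intersection approximates the sample-mean estimate described below; `None` signals an empty intersection.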
Footnote 14: The energy decay model was verified by the field experiment in [19] and was also used in [11,17,22].

Footnote 15: In practice, the signal energy is unknown and should be jointly estimated along with the source location, as described in [17] and in Chapter 3.

[Figure 2.1: Localization of the source based on quantized energy readings. The figure shows the sensor locations, the ring-shaped areas A_i of width Δ_{r_i}(r_i), and the intersection A of three ring-shaped areas.]

By computing the intersection of all the ring areas (one per sensor), it is possible to define the area where the source is expected to be located. Note that at least three observations are required to achieve a connected intersection, and the region A_i in (2.15) can be rewritten as follows:

    A_i = \left\{x : g_i \frac{a}{\|x - x_i\|^{\alpha}} \in Q_i\right\}, \quad \hat{x} = E\left(x \mid x \in \bigcap_{i=1}^{M} A_i\right),    (2.17)

where A_i is the ring-shaped region obtained from the quantization bin Q_i into which z_i falls (Figure 2.1). If the source is uniformly distributed in the sensor field, the estimate x̂ is the sample mean over the intersection A. Clearly, x̂ is the MMSE estimator under the assumption of known energy and no measurement noise. A similar approach can be applied to the DOA sensor case, where each quantized sensor reading leads to a cone-shaped region and localization is performed by computing the intersection of the corresponding regions.

2.5.2 Quantizer design

For quantizer design, we generate the training set assuming that the signal energy a is known and w_i = 0. Notice that the M quantizers (α_1, ..., α_M) are designed by reducing the metric J(α_i, β_i*, g*(·)) with λ = ∞ at each iteration, where β_i* and g*(·) are given by (2.7) and (2.17), respectively. However, since the signal energy is generally unknown, the sensitivity to mismatches in signal energy will be studied in Chapter 3, where localization algorithms will be developed to handle measurement noise and unknown signal energy.
Since the signal energy can take any value in the range [a_min, a_max] in real situations, the quantizers should be designed to avoid overload, by setting the dynamic ranges of the M quantizers to

    [z_{i,min}, z_{i,max}] = [ a_min / r²_{i,max} , a_max / r²_{i,min} ],

where [r_{i,min}, r_{i,max}] is the range of distances over which the i-th sensor is expected to measure the acoustic source energy. The value of r_{i,max} can be set such that the probability that an arbitrary point inside the sensor field is sensed simultaneously by at least 3 sensors is close to 1 [41]. Assuming that the distribution of the number of sensors in any given circle of area C_d = π r_d² is Poisson with rate λ_d C_d, where λ_d is the sensor density (sensors/m²), this probability p is given by

    p = Σ_{i=3}^{∞} e^{−λ_d π r_d²} (λ_d π r_d²)^i / i!   (2.18)

Given λ_d, we can compute r_{i,max} (denoted r_d) for a desired value of p (say, 0.999). In this way, the likelihood of missing a source is minimized. To guarantee finite dynamic ranges, the value of r_{i,min} is chosen as a small nonzero value (0.2 ≤ r_{i,min} ≤ 1.0 m). Note that if more sensors are used, better quantization at each sensor is possible (the dynamic ranges will tend to be smaller). With this initialization step, the quantizer design outlined in Section 2.3.4 can be used.

2.5.3 Geometry-Driven Quantizers: Equally Distance-divided Quantizers

Since each set of quantizers induces a partitioning of the sensor field, designing good quantizers for localization is equivalent to making a good partition of the sensor field by adjusting the width, Δ_{r_i}(r_i), of the ring-shaped areas in Figure 2.1. If no prior information is available about the source location, p(x) can be assumed to be uniform, and thus choosing Δ_{r_i}(r_i) to achieve a uniform partitioning of the sensor field would seem to be a good choice.
Intuitively, a uniform partitioning of the sensor field is more likely to be achieved when the ring-shaped areas have the same width, Δ_{r_i}(r_i) = const (this is certainly the case when the sensors are uniformly distributed). This consideration leads us to introduce equally distance-divided quantizers (EDQ), which can be viewed as quantizers that are uniform in distance, with Δ_{r_i}(r_i) = (r_{i,max} − r_{i,min}) / L_i for all i. That is, EDQ lets each sensor quantize the signal intensity such that the rings have equal width. To justify the EDQ design, we performed a simulation (see Figure 2.5) showing that EDQ provides good localization performance, close to that achievable by the quantizers proposed in Section 2.3.4.

EDQ has the added advantage of facilitating the solution of the rate allocation problem. While the GBFOS algorithm [28] provides the optimal rate allocation, it also requires a very large computational load, since it relies on the calculation of rate-distortion points at each iteration step, and the quantizers would have to be redesigned using the algorithm of Section 2.3.4 for each candidate rate allocation. Instead, in our experiments we use the GBFOS algorithm along with EDQ, which does not require quantizer redesign for each candidate rate allocation. With this approach, one can use EDQ to compute the optimal rate allocation for the particular sensor configuration easily, and then use the technique proposed in Section 2.3.4 to design the quantizers for the resulting optimal rate allocation.

2.6 Simulation Results

In our experiments, we denote by localization-specific quantizer (LSQ) the quantizer designed using the algorithm proposed in Section 2.3.4, and we assume that each sensor uses the same dynamic range, with r_{i,min} = 1 m and r_{i,max} = r_d (see Section 2.5.2 for details of the dynamic range), for all quantizer types (uniform quantizer, Lloyd quantizer, EDQ and LSQ).
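For reference, the EDQ rule of Section 2.5.3 amounts to only a few lines: place L + 1 ring edges uniformly in distance and map them through the noiseless model z = g a / r^α into energy-domain decision levels. The helper name is ours, and r_max = 8.5 m is merely an illustrative stand-in for r_d.

```python
def edq_levels(L, r_min=1.0, r_max=8.5, g=1.0, a=50.0, alpha=2.0):
    """Energy-domain decision levels of an L-level EDQ: ring edges equally
    spaced in distance (width (r_max - r_min)/L), mapped through the
    noiseless model z = g*a/r**alpha (so levels decrease with distance)."""
    dr = (r_max - r_min) / L
    radii = [r_min + k * dr for k in range(L + 1)]
    return [g * a / r ** alpha for r in radii]

levels = edq_levels(4)  # 4 rings -> 5 decision levels, from z_max down to z_min
```

Note that equal ring widths in distance translate into highly non-uniform bins in the energy domain: the bins close to the sensor (near z_max) are much wider, mirroring the z ∝ 1/r² relation discussed above.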
2.6.1 Quantizer design

We design LSQs using a training set of 1500 source locations generated with a uniform distribution in a sensor field of size 10×10 m², where M = 3, 4, 5 sensors are randomly located. In these experiments, the cost function is J = ||x − x̂||² (equivalently, λ = 1 in (2.3)), and the model parameters are a = 50, α = 2, g_i = 1 and σ_i² = 0. Note that the Lloyd quantizers are designed with λ = 0 in (2.3). We evaluate the results in terms of the average localization error, E(||x − x̂||²), computed by (2.17).

2.6.1.1 Comparison with traditional quantizers

In Figure 2.2, LSQ is compared with traditional quantizers, such as uniform quantizers and Lloyd quantizers, in terms of the average localization error. In this experiment, 500 different sensor configurations are generated for M = 3, 4, 5, and for each configuration LSQs are designed for R_i = 2, 3, 4. The 95% confidence interval for the localization error is also plotted to show the robustness of LSQ to the sensor configuration. It can be noted that LSQ is more robust than the traditional quantizers, since the sensor configuration is already taken into account in the LSQ design. In contrast, the other quantizers are designed without considering sensor location information. Because of this, they may perform poorly in certain sensor configurations, leading to greater variance in localization error than with LSQ.

[Figure 2.2: Comparison of LSQs with uniform quantizers and Lloyd quantizers. Left: average localization error (m²) vs. the number of bits R_i at each sensor, with M = 5. Right: average localization error vs. the number of sensors involved, with R_i = 3. The average localization error is given by (1/500) Σ_{l=1}^{500} E_l(||x − x̂||²), where E_l is the average localization error for the l-th sensor configuration.]
2000 source locations are generated as a test set with a uniform distribution of the source location.

Since LSQ makes full use of the distributed property of the observations, it can be seen to provide improved performance over traditional quantizers. This can also be explained in terms of the partitioning of the sensor field, as shown in Figures 2.3 and 2.4. As expected, the ring-shaped areas for uniform quantizers hardly overlap, yielding the worst partitioning of the sensor field, since most quantization bins are mapped into regions close to the sensors through the relation z_i ∝ 1/r_i² = 1/||x − x_i||². It is also easily seen that LSQ leads to a more uniform partitioning, which in turn reduces the localization error.

[Figure 2.3: Partitioning of the sensor field (10×10 m², grid 0.2×0.2) by a uniform quantizer (left) and a Lloyd quantizer (right). 5 sensors are deployed with 2-bit quantizers. Each partition corresponds to the intersection region of 5 ring-shaped areas. More partitions yield better localization accuracy.]

[Figure 2.4: Partitioning of the sensor field (10×10 m², grid 0.2×0.2) by EDQ (left) and LSQ (right). 5 sensors are deployed with 2-bit quantizers. Each partition corresponds to the intersection region of 5 ring-shaped areas.]

2.6.1.2 Comparison with optimized quantizers

LSQs are also compared to optimized quantizers designed by exhaustive search. In this experiment, since the computational complexity required to search for α*_i in (2.6) is very high, we make some approximations to curtail the search size.
That is, each encoder α*_i is designed using two different methods that approximate the exhaustive search.

- Method 1: first, we search over encoders α_i constructed with a coarse grid Δ_c = (z_{i,max} − z_{i,min}) / N_s, where N_s determines the size of the candidate set {α_i}, given by N_s! / ((L_i − 1)! (N_s − L_i + 1)!). Once we obtain the encoder with Δ_c, we observe that the boundary values that determine the quantization bins are located in the subinterval [z_{i,min}, z_i^h], with z_i^h ≪ z_{i,max}. Thus, we then search with a fine grid, Δ_f = |z_i^h − z_{i,min}| / N_s (Δ_f ≪ Δ_c), at reduced complexity, to find the encoder that minimizes the metric given by (2.6).

- Method 2: instead of searching with a fixed grid size, we consider a variable grid size that allows us to search with a finer grid over the encoders that show good performance. That is, using the grid Δ_r = (r_{i,max} − r_{i,min}) / N_r, we construct candidate encoders whose quantization bins uniformly divide the sensor field (for further details, see EDQ in Section 2.5.3). Note that the search grid mapped from Δ_r becomes smaller toward z_{i,min} and larger toward z_{i,max}.

For the same sensor configuration as in Figure 2.4, the optimized quantizers are designed with R_i = 2 and tested for comparison. The number of partitions and the localization error are tabulated for several quantizers in Table 2.1. OPT Q1 and OPT Q2 correspond to the quantizers designed using Method 1 and Method 2, respectively. For a fair comparison, the search grids for the optimized quantizers are chosen such that the search complexity is almost the same for both. Note that more partitions tend to yield better localization accuracy. It can be seen that LSQ achieves performance close to the optimized quantizers, both in terms of average localization error and in number of partitions.

Table 2.1: Comparison of LSQs with optimized quantizers. The average localization errors are computed using a test set of 2000 source locations.
    Quantizer type            Uniform Q   Lloyd Q   LSQ      OPT Q1   OPT Q2
    Number of partitions      36          55        132      140      142
    Avg. localization error   11.1172     6.0463    0.3758   0.3088   0.3001

2.6.1.3 Sensitivity to parameter perturbation

LSQ was evaluated under various types of mismatch conditions for 100 different 5-sensor configurations. In each test we modified one of the parameters with respect to what was assumed during quantizer training. The simulation results are tabulated in Table 2.2. In this experiment, 1000 source locations in a 10×10 m² sensor field were generated for each configuration, under both a uniform distribution and a normal distribution. We assume that the true parameters can be estimated at the fusion node and used for localization. Note that our localization algorithms are not very sensitive to parameter mismatches (see [17] and Chapter 3); thus, small errors in estimating the parameters at the fusion node from the received quantized data do not lead to significant localization errors. As seen in Table 2.2, LSQ is robust to mismatch situations in which the parameters used in quantizer design differ from those characterizing the simulation conditions. Thus, there is no need to redesign the quantizers when the parameter mismatches are small.

Table 2.2: Localization error (LE) of LSQ under variations of the modelling parameters. LE = (1/100) Σ_{l=1}^{100} E_l(||x − x̂||²), where E_l is the average localization error for the l-th sensor configuration, expressed in m². LE (normal) is for a test set drawn from a normal distribution with mean (5,5) and unit variance; LE (uniform) is for a uniform distribution. LSQs are designed with R_i = 3, a = 50, α = 2, g_i = 1 and w_i = 0 for the uniform distribution.
    Decay factor α    1.8      1.9      2        2.1      2.2
    LE (normal)       0.4654   0.2211   0.1468   0.3512   1.1254
    LE (uniform)      1.7025   0.5586   0.1321   0.4612   1.4235

    Gain factor g_i   0.8      0.9      1        1.1      1.2
    LE (normal)       0.5567   0.2332   0.1468   0.1521   0.2783
    LE (uniform)      0.6734   0.2556   0.1321   0.2298   0.5176

2.6.1.4 Performance analysis in a larger sensor network: comparison with traditional quantizers

Since there are a large number of sensors in a typical sensor network, we now evaluate LSQ for large sensor networks. In this experiment, 20 different sensor configurations in a larger sensor field, 20×20 m², are generated for M = 12, 16, 20. For each sensor configuration, LSQs are designed with a given rate of R_i = 3 and show good performance with respect to traditional quantizers in Table 2.3. Note that the sensor density for M = 20 in 20×20 m² is 20/(20×20) = 0.05, the same as for M = 5 in 10×10 m². It is worth noting that the system with a larger number of sensors outperforms the system with a smaller number of sensors (M = 3, 4, 5), although the sensor density is kept the same. This is because localization performance degrades around the edges of the sensor field, and in a larger sensor field there is a relatively smaller fraction of source locations near the edge, as compared to a smaller field with the same sensor density.

Table 2.3: Average localization error (m²) vs. number of sensors (M = 12, 16, 20) in a larger sensor field, 20×20 m². The localization error is averaged over 20 different sensor configurations, where each quantizer uses R_i = 3 bits.

    Number of sensors, M   Uniform Q   Lloyd Q   LSQ
    12                     2.6007      0.7152    0.1105
    16                     1.7386      0.3976    0.0708
    20                     0.9541      0.2284    0.0525

2.6.1.5 Discussion

Based on the above experiments, we make the following observations. First, each sensor should use a different quantizer, designed based on the locations of all sensors, to achieve good localization accuracy.
Second, the proposed algorithm provides a practical way of designing independent quantizers that reduce a global metric (the localization error) without requiring exhaustive search.

2.6.2 Rate allocation

2.6.2.1 EDQ design

Before employing the EDQs for the rate allocation problem, we compare their performance to that of LSQs. Refer to Figure 2.5, where 500 different sensor configurations are generated for each M (M = 3, 4, 5), and LSQ and EDQ are compared on a test set of 2000 source locations for each configuration. Figure 2.5 shows that EDQ provides good localization performance, close to that achievable by LSQ. In Figure 2.4, the partitioning of a 10×10 m² sensor field with 5 sensors deployed is plotted for EDQ and LSQ, respectively.

Based on these experiments, we observe that EDQ allows us to achieve a good partitioning, even though the sensor location information is not taken into account in the EDQ design. Note that the benefit of the LSQ design becomes insignificant as the rate and/or the number of sensors increases (see Figures 2.2 and 2.5). This is because each partition (equivalently, the intersection of M ring-shaped areas) becomes smaller when the number of sensors and/or the rate increases, and thus there is less gain to be achieved by making each partition as equal in area as possible during LSQ design.

[Figure 2.5: Justification of the EDQ design. Left: average localization error (m²) vs. the number of bits R_i assigned to each sensor, with M = 5. Right: average localization error vs. the number of sensors M, with R_i = 3 bits. The average localization error is given by (1/500) Σ_{l=1}^{500} E_l(||x − x̂||²), where E_l is the average localization error for the l-th sensor configuration.]
2000 source locations are generated as a test set with a uniform distribution of source locations.

2.6.2.2 Effect of quantization schemes

For the same sensor configuration as in Figure 2.3, the rate allocation was conducted using three different quantizers (uniform quantizers, EDQs and LSQs) to search for the optimal rate allocation giving the minimum localization error. In the example of Figure 2.3, it can be seen that sensors 3 and 5 are so close to each other that they provide redundant information for localization, and thus the optimal solution allocates few bits to both of these sensors. In fact, in our example, at relatively low rates (an average of 2 bits per sensor) it is more efficient to send information from only three sensors (sensors 1, 2 and 4), i.e., to allocate zero bits to the other two sensors (sensors 3 and 5). The results obtained by the rate allocation are provided in Table 2.4, where the localization errors were computed using EDQs and LSQs designed for several different rate allocations, demonstrating that rate allocation is important for achieving good localization performance. To test the effect of different quantization schemes at each sensor, the rate allocation was conducted using uniform quantizers, EDQs and LSQs at each sensor, to obtain R*_U, R*_EDQ and R*, respectively. Note that the same rate allocation (R* = R*_EDQ) was obtained for EDQs and LSQs.

2.6.2.3 Rate allocation under power constraints

The rate allocation under power constraints given by (2.14) was also performed using EDQs for the sensor configuration in Figure 2.3. In this experiment, the fusion node x_f is assumed to be located at (10, 10), and the power consumption at sensor i is assumed to be P_i = C_i R_i = ||x_i − x_f||^{α_s} R_i, with α_s = 1 for simplicity. The optimal rate allocation

Table 2.4: Localization error (m²) for various sets of rate allocations, where R*_EDQ, R*_U and R* are obtained by GBFOS using EDQ, uniform quantizers and LSQ, respectively, given Σ R_i = 10.
Localization error is computed by E(||x − x̂||²) using EDQ and LSQ.

    Sets of rate allocations    EDQ      LSQ
    R* = [4 3 0 3 0]            0.1533   0.1226
    R*_EDQ = [4 3 0 3 0]        0.1533   0.1226
    R = [3 3 0 4 0]             0.1615   0.1258
    R = [3 4 0 3 0]             0.1543   0.1329
    R = [3 2 2 3 0]             0.3005   0.2297
    R = [2 2 2 2 2]             0.6199   0.3908
    R*_U = [6 0 0 3 1]          1.2360   1.2142

R*_PW was obtained with the total power set equal to the power consumed by the 5 sensors when each sensor uses R_i = 2 bits. The test results for R*_PW and several uniform rate allocations R_U are provided for comparison in Table 2.5. It can be seen that the sensors closer to x_f (sensors 2, 4 and 5) are assigned higher rates, while sensor 3 is allocated a lower rate, since it would provide redundant information along with sensor 5. Note that significant power savings (over 60%) can be achieved by the rate allocation R*_PW as compared with R_U = [5 5 5 5 5]. Obviously, the optimal rate allocation R* = [4 3 0 3 0] obtained under the equal-power assumption (C_i = constant) is no longer optimal for this rate allocation problem.

Table 2.5: Localization error (m²) for various sets of rate allocations, where R*_PW was obtained by GBFOS using EDQ, given P = Σ_i C_i R_i = Σ_i 2C_i. Localization error is given by E(||x − x̂||²).

    Sets of rate allocations    EDQ      Power consumption
    R*_PW = [0 6 0 4 3]         0.0155   68.5841
    R* = [4 3 0 3 0]            0.1533   78.6355
    R_U = [2 2 2 2 2]           0.6199   71.9401
    R_U = [3 3 3 3 3]           0.1547   107.9101
    R_U = [5 5 5 5 5]           0.0204   179.8502

2.6.2.4 Performance analysis: comparison with uniform rate allocation

Figure 2.6 demonstrates the benefits of our optimal rate allocation, R*, as compared to the uniform rate allocation R_U = (R_1 = R_T/M, ..., R_M = R_T/M). For the sensor configuration in Figure 2.3, the total rate R_T is varied from 10 to 20 bits. For each R_T, R* is obtained by the rate allocation process using EDQs, and then LSQs are designed for each R* for comparison.
The two rate allocations (R*, R_U) were tested using a test set of 2000 source locations under two measurement noise scenarios, σ_i = 0 and σ_i = 0.05. In Figure 2.6, we compare performance under both EDQ and LSQ for both uniform and optimized rate allocation (R_U and R*, respectively). This experiment shows that significant gains can be achieved by using optimal rate allocation even when there is measurement noise. Note that the gain achieved by using optimal bit allocation (rather than uniform bit allocation) is greater for EDQ. This can be explained by the fact that EDQ does not use sensor location information when the bit allocation is uniform, whereas LSQ incorporates location information even in the uniform case. Thus, more gain can be expected for EDQ, because the optimal bit allocation allows sensor locations to be used to improve performance. We also note that EDQs with optimized allocation in fact provide better localization accuracy than LSQs with uniform allocation. This suggests that using the bit allocation process to improve performance can be a good design strategy, as it allows competitive performance to be achieved even with simple quantizers such as EDQ.

[Figure 2.6: Comparison of the optimal rate allocation R* with the uniform rate allocation R_U, for Var(w_i) = 0 (left) and Var(w_i) = 0.05² (right). LSQs are designed for each R* and R_U. Four curves are plotted for comparison; for example, "EDQ (or LSQ) with R_U (or R*)" indicates the localization error computed when each sensor uses the EDQ (or LSQ) designed for R = R_U (or R*).]
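The GBFOS algorithm itself is not specified in this chapter; as a rough illustration of the flavor of such bit-allocation searches, the greedy sketch below starts from a maximal allocation and repeatedly removes the bit whose removal increases a (user-supplied) distortion function the least. The function and its interface are our own simplification, not the exact GBFOS procedure.

```python
def greedy_allocate(distortion, M, R_total, R_max=6):
    """Greedy bit-removal sketch in the spirit of GBFOS-based allocation:
    start with R_max bits per sensor and drop one bit at a time from the
    sensor whose removal increases `distortion` (a function mapping a rate
    list to localization error) the least, until the budget R_total is met."""
    R = [R_max] * M
    while sum(R) > R_total:
        best_i, best_d = None, float("inf")
        for i in range(M):
            if R[i] == 0:
                continue
            trial = R[:]
            trial[i] -= 1
            d = distortion(trial)
            if d < best_d:
                best_d, best_i = d, i
        R[best_i] -= 1
    return R

# Toy distortion: sensor 1 is "redundant" (low weight), so it loses its bits first,
# mirroring the zero-bit allocations observed for clustered sensors above.
toy = lambda R: sum(w * (6 - r) for w, r in zip([3.0, 1.0, 3.0], R))
print(greedy_allocate(toy, M=3, R_total=8, R_max=4))  # -> [4, 0, 4]
```

With EDQ standing in for the quantizers, each distortion evaluation avoids a full quantizer redesign, which is exactly the computational advantage argued for EDQ above.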
A comparison between standard uniform quantization with uniform bit allocation (which would be the straightforward design for this system) and LSQ with optimal bit allocation is also useful. As can be seen in Figure 2.7, a significant gain (over 60% rate savings) can be achieved by the rate allocation optimized with LSQs.

Finally, our rate allocation was evaluated for different sensor configurations in which 5 sensors are randomly deployed in a 10×10 m² sensor field. In the experiments, 100 different sensor configurations are generated, and for each configuration rate allocation using EDQ is performed to obtain R*. The localization error is averaged over the 100 configurations and compared with that for R_U. Figure 2.8 shows that rate allocation is more important than quantizer design alone for obtaining good localization accuracy.

[Figure 2.7: Gain in rate savings achieved by our optimal rate allocation R* using LSQs, as compared with the trivial solution in which each sensor uses a uniform quantizer of the same rate.]

2.6.2.5 Discussion

From our rate allocation experiments we observe, first, that significant gains can be achieved by assigning different rates to sensors at different locations. Second, we also note that in regions where sensors are clustered each sensor uses a smaller number of bits, while sensors that are further apart are allocated more bits. Third, EDQ is a useful practical design due to its simplicity, implying that other geometry-driven quantizers can be similarly introduced in real situations. Finally, noting the close relationship between sensor locations and the relative rates (equivalently, weights) assigned to them, we can develop simple and powerful algorithms that use solely geometry information.
For example, the relative distances between sensors, or directional information, could be effectively used for applications such as sensor networks that require simple computation and/or low power consumption.

[Figure 2.8: Evaluation of the optimal rate allocation over many different sensor configurations. The localization error is averaged over 100 sensor configurations for two different rate allocations, R_U and R*.]

2.7 Conclusion

In this chapter, we addressed the quantizer design and rate allocation problems for source localization. We proposed an iterative design algorithm that allows us to reduce the localization error in quantizer design. We showed that we can obtain the optimal rate allocation by applying the well-known GBFOS algorithm along with LSQ. To overcome the complexity of the rate allocation process, we introduced a simple quantizer, EDQ.

As future work, we are considering new rate allocation algorithms with low complexity, so that they can be applied to large-scale sensor networks. We will also study the case of multiple sources, as well as other applications (e.g., vehicle tracking) that require rate allocation. In addition, since the channel between each sensor and the fusion node is not perfect, this should be taken into account in future research.

Chapter 3
Localization Algorithm based on Quantized Data

3.1 Introduction

Source localization based on quantized acoustic sensor readings was discussed in Section 2.5.1 for the case of no measurement noise and known source signal energy. In this chapter, we extend that work and propose a distributed source localization algorithm under more challenging conditions, including measurement noise as well as unknown source signal energy.

If there is no measurement noise and the source signal energy is known, the only source location uncertainty is due to quantization.
Because the source signal energy is known, each quantized reading can be mapped to a region in the sensor field, whose shape depends on the characteristics of the sensor. For the case of an acoustic sensor that provides no directional information, the region corresponding to one quantized reading takes the form of a "ring" centered at the sensor location (see Figure 2.1 in Section 2.5.1). Since the measurements are assumed to be noiseless, the source can be located by intersecting the regions corresponding to each sensor. A vector of measurements (one per sensor) corresponds to a unique location and, after quantization, there is a unique (non-empty) intersection region that contains the position of the source. In this scenario, efficient quantizer designs aim at minimizing the average area of all admissible intersections (see Chapter 2 and [16]).

Clearly, localization becomes more difficult when there is non-negligible measurement noise and/or the source signal energy is not known. In particular, some vector readings may lead to empty intersection regions, and a given source location can produce different quantized measurements, depending on the noise conditions. To address these problems, we use a probabilistic formulation, in which we estimate the probability that each candidate location may have produced a given vector reading. In this context, we first formulate the source localization problem as a minimum mean square error (MMSE) estimation problem; this approach generally has significant computational complexity, especially in the non-Gaussian case we are addressing in this chapter [14]. We show that the complexity can be significantly reduced, while maintaining good localization accuracy, by taking into account the quantization effect and the distributed nature of the quantized data. Based on this, first, under the assumption of known source signal energy, we propose a distributed algorithm based on the maximum a posteriori (MAP) criterion.
We also show that, for the case of unknown source signal energy, good localization performance can be achieved by using a weighted average of the estimates obtained by our proposed localization algorithm under different source energy assumptions. (In the acoustic sensor case, the received quantized data and the source location are not jointly Gaussian.)

3.2 Problem Formulation

Recall the sensing scenario described in Section 2.2, where we discussed the localization problem for the case in which the source signal energy a is known and there is no measurement noise (w_i = 0 in (2.1)). Consider now the case where there is measurement noise and/or the source signal energy is unknown. There is no guarantee that the intersection constructed from the quantized readings (see Figure 2.1) is always nonempty. Furthermore, the source location x might lie outside the intersection even when it is nonempty. Thus, we need to consider all possible quantized values that a given source location can produce under measurement noise and unknown energy.

Assuming the statistics of the measurement noise w_i are known, we can formulate the source localization problem as an MMSE estimation problem as follows:

    x̂ = E(x | Q_r) = ∫_{x∈S} x p(x | Q_r) dx                                     (3.1)
                   = ∫_{x∈S} x p(Q_r | x) p(x) / p(Q_r) dx                        (3.2)
                   = [ ∫_{x∈S} x ( Π_{i=1}^{M} ∫_{z_i∈Q_i} p(z_i | x) dz_i ) p(x) dx ] / p(Q_r)   (3.3)

where p(Q_r) = ∫_{x∈S} [ Π_{i=1}^{M} ∫_{z_i∈Q_i} p(z_i | x) dz_i ] p(x) dx. The conditional probabilities p(z_i | x), i = 1, ..., M, in (3.3) can be obtained from the sensor model in (2.1) and knowledge of a (e.g., a pdf of a) as follows:

    p(z_i | x) = ∫_a p(z_i | x, a) p(a) da   (3.4)

where p(z_i | x, a) is a normal distribution with mean g_i a / ||x − x_i||^α and variance σ_i². Note that in computing p(z_i | x, a), the measurement noise w_i is assumed to be normally distributed with zero mean and variance σ_i². Clearly, if the conditional probability p(z_i | x) and p(x) are given, the MMSE estimator x̂ can be obtained using (3.3).
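Under a uniform prior and known a, (3.3) can be approximated by brute force on a grid, with each inner integral ∫_{z_i∈Q_i} p(z_i | x) dz_i reducing to a difference of normal cdfs. The sketch below is our own illustration (names, grid resolution and example values are assumptions), and it also makes the complexity issue tangible: the cost grows with the grid size and the number of sensors.

```python
import numpy as np
from math import erf, sqrt

def Phi(t):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def mmse_estimate(bins, sensors, a=50.0, alpha=2.0, sigma=0.05,
                  field=10.0, step=0.2):
    """Grid approximation of the MMSE estimate (3.3): uniform prior p(x),
    known energy a, Gaussian noise N(0, sigma**2) at every sensor.
    bins[i] = (lo, hi) is the received quantization interval Q_i."""
    xs = np.arange(step / 2, field, step)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    w = np.ones(len(pts))
    for (lo, hi), s in zip(bins, sensors):
        mu = a / np.sum((pts - np.asarray(s)) ** 2, axis=1) ** (alpha / 2)
        w *= np.array([Phi((hi - m) / sigma) - Phi((lo - m) / sigma) for m in mu])
    return (pts * w[:, None]).sum(axis=0) / w.sum()  # posterior mean of x
```

Because every grid point contributes a cdf evaluation per sensor, this direct approach is exactly what the region-based approximations developed next are meant to avoid.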
It can be noted that, although MMSE estimation provides an optimal estimate of the source location, it requires very large computational complexity to perform the integration over the sensor field S, and it also requires knowledge of the prior distributions, such as p(x) and p(a). In this chapter, we use an uninformative (uniform) prior distribution whenever there is ignorance about the parameters to be estimated, since this yields an a posteriori distribution that is approximately proportional to the likelihood. The uninformative prior has the added advantage of keeping subsequent computations relatively simple. In the following sections, we develop our localization algorithm, which is shown to achieve good performance at reasonable complexity as compared with MMSE estimation.

3.3 Localization Algorithm based on Maximum A Posteriori (MAP) Criterion: Known Signal Energy Case

First, we assume that the source signal energy a is known at the fusion node when localization is performed. The case of unknown signal energy is treated in Section 3.5. Even when there is no measurement noise (w_i = 0 in (2.1)), there still exists some degree of uncertainty about the source location, due to quantization. That is, the best we can do for localization based on quantized measurements is to identify a region where the source is located. Thus, when quantization is used, we need candidate regions where the source might be, rather than all candidate source locations; this helps reduce the complexity of estimating source locations. Since each observed M-tuple corresponds to a different spatial region in the noiseless case, we can partition the source field S into |S_Q^f| regions, where S_Q^f is the set of the M-tuples that can be generated in a noise-free environment.
This can be written as follows:

    S_Q^f = { (Q_1, ..., Q_M) | g_i a / ||x − x_i||^α ∈ Q_i, i = 1, ..., M, x ∈ S }   (3.5)

For the j-th element Q^j of S_Q^f, we can construct a corresponding region A_j in S as follows:

    A_j = ∩_{i=1}^{M} A_i,   A_i = { x | g_i a / ||x − x_i||^α ∈ Q_i^j, x ∈ S }   (3.6)

where Q^j = (Q_1^j, ..., Q_M^j). Clearly, there is a one-to-one correspondence between each region A_j and each Q^j in S_Q^f.

In what follows, we first consider how to select the region that maximizes the probability Pr[A_j | Q_r] over all j, where the received M-tuple Q_r is typically noise-corrupted and thus may not belong to S_Q^f. We then estimate the source location as the centroid of the selected region. Based on this, the localization algorithm can be formulated as follows. Let H_j be the j-th hypothesis, corresponding to the j-th region A_j, which can be obtained from the j-th M-tuple Q^j in the set S_Q^f. We seek the hypothesis H* that maximizes the probability Pr[H_j | Q_r] over all j:

    H* = argmax_j p(H_j | Q_r),   j = 1, ..., |S_Q^f|
       = argmax_j p(A_j | Q_r)
       = argmax_j p(Q_r | x ∈ A_j) p_j,   p_j = p(x ∈ A_j)
       = argmax_j Π_{i=1}^{M} p(Q_i | x ∈ A_j) p_j,   (3.7)

where the conditional probability p(Q_i | x ∈ A_j) is computed in the following way:

    p(Q_i | x ∈ A_j) = p(μ_i(x) + w_i ∈ Q_i | x ∈ A_j)
                     = p(Q_{i,l} − μ_i(x) ≤ w_i ≤ Q_{i,h} − μ_i(x) | x ∈ A_j)
                     = ∫_{x∈A_j} [ Φ((Q_{i,h} − μ_i(x)) / σ_i) − Φ((Q_{i,l} − μ_i(x)) / σ_i) ] p_j(x) dx,   (3.8)

where μ_i(x) = g_i a / ||x − x_i||^α and p_j(x) = p(x | x ∈ A_j). Here, Φ(·) is the cdf of the normal distribution N(0, 1). Once H* is obtained, the source estimate x̂ is computed as E(x | H*) = E(x | x ∈ A*).

It should be noticed that the proposed algorithm can be applied regardless of the sensor type, since each set of quantized sensor readings generates a unique intersection under the assumption of no measurement noise. That is, we can obtain H* in (3.7) by replacing g_i a / ||x − x_i||^α in (3.5) and (3.6) with the sensor model employed at each sensor.
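Since S_Q^f, the region centroids x̂_j = E(x | x ∈ A_j) and the priors p_j = p(x ∈ A_j) do not depend on the received Q_r, they can be precomputed. A minimal grid-based sketch follows; the function name, the grid resolution and the example quantizers in the test are our own assumptions (searchsorted simply assigns each grid point its quantization-bin index per sensor).

```python
import numpy as np

def enumerate_regions(thresholds, sensors, a=50.0, alpha=2.0,
                      field=10.0, step=0.2):
    """Partition S by the noiseless quantized M-tuple, as in (3.5)-(3.6).
    thresholds[i]: increasing decision levels of sensor i's quantizer.
    Returns {Q^j: (centroid x_hat_j, prior p_j)} under a uniform p(x)."""
    xs = np.arange(step / 2, field, step)
    gx, gy = np.meshgrid(xs, xs)
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    cols = []
    for s, t in zip(sensors, thresholds):
        z = a / np.sum((pts - np.asarray(s)) ** 2, axis=1) ** (alpha / 2)
        cols.append(np.searchsorted(np.asarray(t), z))  # bin index per point
    keys = np.stack(cols, axis=1)
    regions = {}
    for q in np.unique(keys, axis=0):
        mask = np.all(keys == q, axis=1)
        regions[tuple(int(v) for v in q)] = (pts[mask].mean(axis=0), mask.mean())
    return regions
```

Each dictionary key is one realizable M-tuple in S_Q^f, and the priors p_j sum to one over the partition, as required by (3.7).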
3.4 Implementation of the Proposed Algorithm

While the computational complexity of our MAP-based localization as described by (3.7) is lower than that of MMSE estimation, the integration in (3.8) has to be performed for each hypothesis, which is computationally complex. To make the algorithm practical, it needs to be modified so that the cost of $|S_Q^f|$ integrations per $Q^r$ is significantly reduced, leaving only a few integrations to be performed.

Figure 3.1: Source locations that generate a given $Q^r$ are plotted for each noise level ($\sigma = 0, 0.05, 0.16, 0.5$). Five sensors (marked $\circ$) are employed in a $10\times 10\,\mathrm{m}^2$ sensor field and each sensor uses a 2-bit quantizer.

It should be observed that, even in the noisy case, if the source is in $A^j$ corresponding to $Q^j\in S_Q^f$, the corresponding quantized vector reading is likely to be $Q^j$. Clearly, the number of source locations belonging to $A^j$ that produce $Q^j$ is larger at higher SNR. Figure 3.1 demonstrates this observation: in this experiment, a test set of 5000 source locations was generated for each $\sigma = 0, 0.05, 0.16, 0.5$, and only the source locations generating the given $Q^r$ are plotted for each test set. Based on this observation, it is possible to construct a set $A_s\ (\subset S)$ such that $p(x\in A_s\,|\,Q^r)\approx 1$, allowing us to consider only the hypotheses (regions) that belong to $A_s$, which reduces the number of hypotheses that have to be evaluated for the MAP-based localization.

This set can be constructed by noting that, given a source location $x$ and the corresponding noisy $M$-tuple $Q^r$, it is very likely that $p(x\in A^i\,|\,Q^r) > p(x\in A^j\,|\,Q^r)$ as long as $p(\hat{x}^i\,|\,Q^r)\gg p(\hat{x}^j\,|\,Q^r)$, where $\hat{x}^j$ is the centroid of the set $A^j$. In other words, we first choose the most likely centroid $\hat{x}_c$ given $Q^r$ and construct $A_s(\delta) = \{x\;|\;\|x-\hat{x}_c\| < \delta,\ x\in S\}$ such that $P(x\in A_s(\delta))\approx 1$, with a reasonable choice of $\delta$.
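The coarse-then-refined search idea above can be sketched as follows. The callables and all names here are ours, not the thesis's: `coarse_score` stands in for the cheap centroid-only evaluation, and `refined_score` for the full integral-based product of (3.7)/(3.8).

```python
import math

def two_stage_map(centroids, coarse_score, refined_score, delta=1.0):
    """Coarse search over centroids, then full MAP test on A_s(delta).

    centroids:     list of region centroids x_hat_j (2-D tuples).
    coarse_score:  score of a centroid point (one evaluation per region).
    refined_score: score of a region index (the expensive evaluation).
    Returns the index of the winning hypothesis.
    """
    # 1) Coarse search: most likely centroid x_hat_c.
    c = max(range(len(centroids)), key=lambda j: coarse_score(centroids[j]))
    # 2) Shortlist A_s(delta): regions whose centroid is within delta
    #    of x_hat_c; only these K hypotheses get the full evaluation.
    shortlist = [k for k in range(len(centroids))
                 if math.dist(centroids[k], centroids[c]) < delta]
    return max(shortlist, key=refined_score)
```

Shrinking `delta` shortens the shortlist and therefore the number of expensive evaluations, mirroring the complexity/accuracy trade-off discussed in the text.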
Clearly, the smaller the $\delta$ we choose, the greater the reduction in computational complexity, at the cost of increased localization error. This consideration leads to a practical implementation of the algorithm as follows:

1) Initial search (coarse search):

$$\hat{x}_c = \arg\max_j p(\hat{x}^j\,|\,Q^r),\ \text{where } \hat{x}^j = E(x\,|\,x\in A^j)$$
$$= \arg\max_j p(Q^r\,|\,\hat{x}^j)\,p_j$$
$$= \arg\max_j \prod_{i=1}^{M}\Big[\Phi\Big(\frac{Q_{i,h}-\mu_i(\hat{x}^j)}{\sigma_i}\Big)-\Phi\Big(\frac{Q_{i,l}-\mu_i(\hat{x}^j)}{\sigma_i}\Big)\Big]p_j$$

2) Refined search: Once $\hat{x}_c$ is obtained, we construct the set $A_s(\delta) = \bigcup_{k=1}^{K} A^k$ (clearly $A_s\subseteq S$) such that $\|\hat{x}_c-\hat{x}^k\| < \delta$ for all $k = 1,\ldots,K$, where $\hat{x}^k = E(x\,|\,x\in A^k)$. We can then compute $H^*$ using the $K$ hypotheses:

$$H^* = \arg\max_k p(A^k\,|\,Q^r),\quad k=1,\ldots,K \quad (3.9)$$
$$= \arg\max_k p(Q^r\,|\,x\in A^k)\,p_k,\quad p_k = p(x\in A^k) \quad (3.10)$$
$$= \arg\max_k \prod_{i=1}^{M} p(Q_i\,|\,x\in A^k)\,p_k \quad (3.11)$$

Notice that $K$ takes a small value ($\ll |S_Q^f|$) with a good choice of $\delta$; a good choice of $\delta$ depends upon the experimental settings, such as $M$, $R_i$, the sensor models and other factors.

3.5 Unknown Signal Energy Case

So far, we have assumed that the source signal energy $a$ is known to the fusion node, where the localization is performed based on quantized data. However, the source energy is generally unknown and should also be estimated along with the source location. A possible solution would be to adopt the energy-ratio-based source localization method proposed in [19], where the authors took ratios of the unquantized energy readings of pairs of sensors in the noise-free case to cancel out the energy $a$, and formulated a nonlinear least-squares optimization problem. However, while that method shows good performance with a low-complexity implementation for unquantized sensor readings and provides full robustness to unknown source energies, it has drawbacks when the sensor energy readings are quantized. This is because the fusion node has to obtain the energy ratios from quantization intervals.
This leads to a larger range of possible values for the energy ratios and thus to increased uncertainty about the source location.

In this section, we consider the localization problem with unknown energy as an extension of the estimation problem of the previous sections, and we show that a good estimate of the source location can be represented by a weighted average of source estimates, each of which is obtained by the algorithm proposed in Section 3.3. With the unknown source signal energy $a$ treated as a nuisance parameter [10], we can reformulate the MMSE estimation problem from Section 3.2 as follows:

$$\hat{x} = E[x\,|\,Q^r] = \int_{x\in S} x\,p(x\,|\,Q^r)\,dx \quad (3.12)$$
$$= \int_x x\Big[\int_a p(x,a\,|\,Q^r)\,da\Big]dx \quad (3.13)$$
$$= \int_x x\Big[\int_a p(x\,|\,Q^r,a)\,p(a\,|\,Q^r)\,da\Big]dx \quad (3.14)$$
$$= \int_a\Big[\int_x x\,p(x\,|\,Q^r,a)\,dx\Big]p(a\,|\,Q^r)\,da \quad (3.15)$$
$$= \int_a \hat{x}_{MMSE}(a)\,p(a\,|\,Q^r)\,da \quad (3.16)$$
$$\approx \int_a \hat{x}_{prop}(a)\,p(a\,|\,Q^r)\,da. \quad (3.17)$$

In (3.17), the MMSE estimate $\hat{x}_{MMSE}$ given by (3.3) is replaced by the estimate $\hat{x}_{prop}$ obtained by the localization algorithm proposed in Section 3.3. Note that computing $p(a\,|\,Q^r)$ can be complex:

$$p(a\,|\,Q^r) = \int_x p(x,a\,|\,Q^r)\,dx \quad (3.18)$$
$$\propto p(a)\int_x p(Q^r\,|\,x,a)\,p(x)\,dx. \quad (3.19)$$

In order to reduce the complexity of computing (3.19), we make the following approximations:

1. While the source signal energy can take continuous values in a predetermined interval $[a_{min},a_{max}]$, we consider only discrete energy values, since small variations in signal energy have a small impact on localization accuracy (see Figure 3.2). Based on this, (3.17) can be written as

$$\hat{x} \approx \int_a \hat{x}_{prop}(a)\,p(a\,|\,Q^r)\,da \quad (3.20)$$
$$\approx \sum_{k=1}^{N} \hat{x}_{prop}(a_k)\,p(a_k\,|\,Q^r) \quad (3.21)$$
$$= \sum_{k=1}^{N} \hat{x}_{prop}(a_k)\,\frac{W_k}{\sum_i W_i} \quad (3.22)$$

where $N$ is the number of discrete energy values used and $W_k$ is the $k$-th weight factor, which can be written as

$$W_k = p(a_k)\int_{x\in S} p(Q^r\,|\,x,a_k)\,p(x)\,dx. \quad (3.23)$$
2. Some signal energy values are bound to be less likely than others (for example, a particular energy value can lead to a nonempty intersection of quantization regions, while under other energy values the intersections may be empty). Thus, there will be some dominant weights in (3.23), and if we compute the weights first, we only need to perform the localization algorithm for those weights that are sufficiently large. Refer to Figure 3.2, which illustrates that a few weights (3 or 4) are sufficiently large while the others can be ignored. Thus, we can approximate (3.22) by $\sum_{l=1}^{L}\hat{x}_{prop}(a_l)\,W_l\big/\sum_{i=1}^{L} W_i$, where it is assumed that the set of weights $\{W_k\}_{k=1}^{N}$ is arranged such that $W_1\ge W_2\ge\ldots\ge W_L\ge\ldots\ge W_N$.

3. Finally, since we can construct a set $A_s(\delta_w,a_k)$ using the coarse search described in Section 3.4, this set can also be used to compute the weights given by (3.23):

$$W_l \approx p(a_l)\int_{x\in A_s(\delta_w,a_l)} p(Q^r\,|\,x,a_l)\,p(x)\,dx \quad (3.24)$$

Clearly, there is a trade-off between computational complexity and localization performance, which can be controlled by adjusting parameters such as $N$, $L$, and $\delta_w$.

3.6 Simulation Results

Figure 3.2: Localization accuracy of the proposed algorithm under source signal energy mismatch (top): a test set of 2000 source locations is generated for each source signal energy ($a = 40, 45, \ldots, 55, 60$), and localization is performed by the algorithm of Section 3.4 using $a = 50$ and $\delta = 1$ m. Distribution of weights vs. number of weights chosen, $L$ (bottom): $\sum_{l}^{L} W_l/\sum_{k}^{N} W_k$ vs. $L$; a test set of 2000 source locations is generated and $N = 10$ weights are computed for each source location.

We consider a sensor network of $M$ sensors deployed randomly in a $10\times 10\,\mathrm{m}^2$ sensor field ($M = 3, 4$ and 5 in our experiments). Each sensor measures the acoustic source energy according to the energy decay model in (2.1), quantizes it using a quantizer designed by the algorithm in Chapter 2, and sends it to a fusion node where localization is performed
using our proposed localization algorithms. Note that the measurement noise is assumed to be normally distributed with zero mean and variance $\sigma^2$.

3.6.1 Case of Known Signal Energy

First, assuming $a$ is known, the distributed localization algorithm proposed in Section 3.4 is tested using a test set of 2000 source locations generated with $p(x)$ modeled by a uniform distribution for each $(\sigma, M)$ pair, where $\sigma$ varies from 0 to 0.5 and $M = 3, 4$ and 5. In Figure 3.3, the proposed algorithm is compared to the MMSE estimation given by (3.3), since the latter gives a good lower bound for testing our localization algorithm. Note that we choose $\delta = 1$ m in our algorithm. From Figure 3.3, it can be seen that our algorithm provides localization accuracy close to that of MMSE estimation, especially when $\sigma < 0.16$.

Figure 3.3: Localization algorithms based on the MMSE and MAP criteria are tested when $\sigma$ varies from 0.5 to 0 with $R_i = 3$ (left) and when $R_i = 3, 4$ and 5 with $\sigma = 0.05$ (right), with $\delta = 1$ m in both cases and $w_i\sim N(0,\sigma^2)$.

3.6.2 Case of Unknown Signal Energy

Our distributed localization algorithm for the unknown source signal energy case of Section 3.5 is tested and compared with the MMSE estimator and the energy-ratio-based method, and is also evaluated under various types of mismatches. In applying the localization algorithm, the prior distributions $p(x)$ and $p(a)$ are assumed to be uniform over $x\in S$ and $a\in[a_{min},a_{max}] = [0,100]$, respectively, and the parameters are set as $N = 10$, $a_k\in\{10,20,\ldots,100\}$, $L = 3$, $\delta_w = 1$ m, $\alpha = 2$, $g_i = 1$.

In Figure 3.4, the interval $[a_{min},a_{max}]$ is divided into 8 subintervals, namely $[20,30],\ldots,[90,100]$, and for each subinterval a test set of 2000 source locations, with the source signal energy randomly drawn from the subinterval, is generated with $\sigma = 0.05$. Clearly, as we mentioned in Section 3.5, the energy-ratio-based method provides worse localization accuracy than our proposed algorithm.
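The weighted-average estimator of Section 3.5, with the truncation to the $L$ dominant weights, can be sketched as follows. The function names and call signatures are our assumptions; `weight_of` stands in for (3.23)/(3.24) and `localize` for the known-energy estimate $\hat{x}_{prop}(a_k)$ of Section 3.3.

```python
def weighted_energy_estimate(energies, weight_of, localize, top_l=3):
    """Unknown-energy source estimate via a weighted average (sketch).

    energies:  discrete candidate energies a_1..a_N.
    weight_of: callable a_k -> W_k.
    localize:  callable a_k -> (x, y), the known-energy MAP estimate.
    Keeps only the top_l dominant weights, as in the approximation
    of (3.22), and renormalizes over that subset.
    """
    weighted = sorted(((weight_of(a), a) for a in energies), reverse=True)
    top = weighted[:top_l]
    total = sum(w for w, _ in top)
    est_x = sum(w * localize(a)[0] for w, a in top) / total
    est_y = sum(w * localize(a)[1] for w, a in top) / total
    return est_x, est_y
```

Because the weights are sorted once and then truncated, the expensive `localize` call is only made for the $L$ retained energy values, matching the complexity argument in the text.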
Figure 3.4: Localization algorithms based on MMSE estimation, the MAP criterion and energy ratios are tested by varying the source signal energy $a$ from 20 to 100, with $R_i = 3$ and $\mathrm{Var}(w_i) = 0.05^2$; the vertical axis is the average localization error ($\mathrm{m}^2$). We set $N = 10$, $L = 3$ and $\delta_w = 1$ m in our algorithm. In this experiment, a test set with $M = 5$, $R_i = 3$ is generated with a uniform distribution of source locations for each signal energy, and the measurement noise is modeled by a normal distribution with zero mean and $\sigma = 0.05$.

3.6.3 Sensitivity to Parameter Mismatches

Table 3.1 shows results when one of the parameters is randomly perturbed: the actual value of $\alpha$ used in generating a test set is randomly drawn from the interval $[2-\Delta\alpha,\ 2+\Delta\alpha]$ with $\Delta\alpha = 0, 0.1,\ldots,0.4$, and the actual gain is also drawn randomly from a uniform distribution over $[1-\Delta g,\ 1+\Delta g]$. Similarly, each sensor location $(x,y)$ is randomly generated from $[x-\Delta x,\ x+\Delta x]$ and $[y-\Delta y,\ y+\Delta y]$, respectively, with $\Delta x = \Delta y = 0, 0.1,\ldots,0.4$. In addition, a test set of 2000 source locations with a normal distribution with mean $(5,5)$ and variance $(\sigma_x^2,\sigma_y^2)$ is generated for $\sigma_x = \sigma_y = 1, 1.5, 2, 2.5$ and 3. From the results in Table 3.1, it can be said that small perturbations do not result in significant degradation of localization accuracy.
$\Delta\alpha$      0       0.1     0.2     0.3     0.4
LE (MAP)            0.5319  0.7360  1.4643  2.2653  3.6998
LE (ERA)            0.8886  1.1402  1.8658  2.7042  3.6696

$\Delta g_i$        0       0.1     0.2     0.3     0.4
LE (MAP)            0.5414  0.6293  0.8201  1.1606  1.6215
LE (ERA)            0.8980  0.9695  1.2012  1.6407  2.0873

$(\Delta x,\Delta y)$  0       0.1     0.2     0.3     0.4
LE (MAP)               0.5414  0.5380  0.5836  0.6242  0.7176
LE (ERA)               0.8980  0.8900  0.9167  1.0074  1.0760

$(\sigma_x,\sigma_y)$  1       1.5     2       2.5     3
LE (MAP)               0.2710  0.3554  0.4806  0.8992  1.7617
LE (ERA)               0.8879  0.9233  0.9732  1.4556  2.2024

Table 3.1: Localization error (LE) ($\mathrm{m}^2$) of the MAP algorithm compared to the energy-ratio-based algorithm (ERA) under various mismatches. In each experiment, a test set is generated with $M = 5$ and $\sigma = 0.05$ and one of the parameters is varied. LE is computed as $E(\|x-\hat{x}\|^2)$ using $\alpha = 2$, $g_i = 1$, $R_i = 3$ and a uniform distribution for $p(x)$.

3.6.4 Performance Analysis in a Larger Sensor Network

Our MAP-based localization algorithm was also tested and compared with ERA in a larger $20\times 20\,\mathrm{m}^2$ sensor network, where the number of sensors is $M = 12, 16, 20$. For each $M$, a test set of 4000 samples was generated using uniform priors for $p(x)$ and $p(a)$ and normal measurement noise with $\sigma = 0.05$. Our proposed algorithm still shows better localization accuracy than the energy-ratio-based algorithm (ERA) in larger sensor networks.

Figure 3.5: Localization algorithms based on the MAP criterion and energy ratios are tested in a larger sensor network by varying the number of sensors; the vertical axis is the average localization error ($\mathrm{m}^2$). The parameters are $N = 10$, $L = 3$ and $\delta_w = 1$ m in our algorithm. In this experiment, a test set of 4000 samples was generated for $M = 12, 16, 20$. Each sensor uses a 3-bit quantizer and the measurement noise is modeled by a normal distribution with zero mean and $\sigma = 0.05$.
3.7 Conclusion

In this chapter, we considered source localization based on acoustic sensor readings and proposed a distributed localization algorithm based on the MAP criterion. We showed that the complexity can be significantly reduced, without much degradation of localization accuracy, by taking into account the correlation of the quantized data.

Chapter 4
Distributed Encoding Algorithm

4.1 Introduction

In the quantizer design proposed in Chapter 2, our algorithm seeks to reduce the localization error by adjusting the size of a "fixed" number of quantization bins at each node (each node may employ one sensor or an array of sensors, depending on the application). In this chapter, we consider a novel way of reducing the total number of bits transmitted to the fusion node while preserving localization performance. It is shown that this can be accomplished by exploiting properties of the combinations of quantization bins that can be transmitted by the nodes. The basic concept can be motivated as follows. Suppose that one of the nodes reduces the number of bins that are being used. This will cause a corresponding increase in uncertainty. However, the fusion node that receives information from all the nodes can often compensate for this uncertainty by using data from the other nodes as side information.

More specifically, in the context of source localization, since each source location leads to a set of quantized sensor measurements that correspond to a non-empty intersection, the fusion node will only receive those combinations of quantized measurements that
Therefore, there will still be some redundancy after quantization which we will seek to reduce in order to decrease overall transmission rate. Inthischapter, weconsideradistributedencodingalgorithmthatachievessigni¯cant rate savings by merging selected quantization bins without a®ecting localization perfor- mance. This algorithm is designed for the case when there is no measurement noise but we also show that it can be applied with slight modi¯cation (at the expense of potential decoding errors) even when there is measurement noise. The assumptions made in Section 2.2 still hold throughout this chapter. This chapter is organized as follows. The de¯nitions which will be used to derive our algorithm are provided in Section 4.2. The motivation is explained in Section 4.3. In Section 4.4, we consider quantization schemes that can be used with the encoding at each node. An iterativeencodingalgorithmisproposedinSection4.5. Foranoisysituation,weconsider the modi¯ed encoding algorithm in Section 4.6 and describe the decoding process and how to handle decoding errors in Section 4.7. In Section 4.8, we apply our encoding algorithm to the source localization system where an acoustic amplitude sensor model is employed. Simulation results are given in Section 4.9 and the conclusions are found in Section 4.10. 65 4.2 De¯nitions Let S M =I 1 £I 2 £:::£I M be the cartesian product of the sets of quantization indices. S M containsjS M j=( Q M i L i ) M-tuples representing all possible combinations of quanti- zation indices. 
We denote S Q the subset of S M that contains all the quantization index combinations that can occur in a real system, i.e., all those generated as a source moves around the sensor ¯eld and produces readings at each node: S Q =f(Q 1 ;:::;Q M )j9x2S;Q i =® i (z i (x)); i=1;:::;Mg (4.1) We denote S j i the subset of S Q that contains all M-tuples in which the i-th node is assigned quantization bin Q j i : S j i =f(Q 1 ;:::;Q M )2S Q jQ i =jg: (4.2) For a given Q j i we can always construct the corresponding S j i from S Q . Note also that S j i ½S Q . Alongwiththis,wedenoteS j i ,thesetof(M¡1)-tuplesobtainedfromM-tuples inS j i ,whereonlythequantizationbinsatpositionsotherthanpositioniarestored. That is, if (Q 1 ;:::;Q M )=(a 1 ;:::;a M )2S j i then we have (a 1 ;:::;a i¡1 ;a i+1 ;:::;a M )2S j i . There is a one to one correspondence between the elements in S j i and S j i so thatjS j i j=jS j i j. 4.3 Motivation: Identi¯ability In this section, we assume that Pr[(Q 1 ;:::;Q M ) 2 S Q ] = 1, i.e., only combinations of quantizationindicesbelongingtoS Q canoccurandthosecombinationsbelongingtoS M ¡ 66 S Q , which lead to an empty intersection, never occur. These sets can be easily obtained when there is no measurement noise (i.e., w i = 0) and no parameter mismatches. As discussedintheintroduction, therewillbeelementsin S M thatarenotinS Q . Therefore, simple scalar quantization at each node would be ine±cient because a standard scalar quantizerwouldallowustorepresentanyofthe M-tuplesinS M , butjS M j¸jS Q j. What we would like to determine now is a method such that independent quantization can still be performed at each node, while at the same time we reduce the redundancy inherent in allowing all the combinations in S M to be chosen. 
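The sets above lend themselves to a direct set-based implementation. The sketch below uses our own names, and a finite grid of candidate positions stands in for the continuous field $S$; it also implements the bin-merging test that Definition 1 in Section 4.3 will formalize.

```python
def build_sq(sensors, quantize, grid):
    """Enumerate S_Q of (4.1) by sweeping candidate source positions.

    sensors:  list of sensor descriptors (only its length is used here).
    quantize: callable (sensor_index, position) -> quantization index,
              standing in for alpha_i(z_i(x)).
    grid:     finite iterable of positions covering the sensor field.
    """
    return {tuple(quantize(i, x) for i in range(len(sensors)))
            for x in grid}

def s_bar(sq, i, j):
    """(M-1)-tuples of S_i^j in (4.2) with position i removed -- the
    side-information patterns the other nodes can present."""
    return {q[:i] + q[i + 1:] for q in sq if q[i] == j}

def identifiable(sq, i, j, k):
    """Bins j and k of node i can be merged iff their side-information
    patterns never coincide (the disjointness test of Definition 1)."""
    return not (s_bar(sq, i, j) & s_bar(sq, i, k))
```

With two sensors that each threshold one coordinate, two bins of node 0 are identifiable exactly when no source position makes the remaining node's index ambiguous between them.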
Note that, in general, determining that a specific quantizer assignment in $S^M$ does not belong to $S_Q$ requires having access to the whole vector, which obviously is not possible if quantization has to be performed independently at each node. In our design we will look for quantization bins in a given node that can be "merged" without affecting localization. As will be discussed next, this is because the ambiguity created by the merger can be resolved once information obtained from the other nodes is taken into account. Note that this is the basic principle behind distributed source coding techniques: binning at the encoder, which can be disambiguated once side information is made available at the decoder [7,8,12] (in this case, quantized values from the other nodes).

Merging of bins results in bit-rate savings because fewer quantization indices have to be transmitted. To quantify the bit-rate savings we need to take into consideration that the quantization indices will be entropy-coded (in this chapter, Huffman coding is used). Thus, when evaluating the possible merger of two bins, we compute the probability of the merged bin as the sum of the probabilities of the bins being merged. For example, suppose that $Q_i^j$ and $Q_i^k$ are merged into $Q_i^{\min(j,k)}$. Then we can construct the set $S_i^{\min(j,k)}$ and compute the probability of the merged bin as follows:

$$S_i^{\min(j,k)} = S_i^j\cup S_i^k \quad (4.3)$$
$$P_i^{\min(j,k)} = P_i^j + P_i^k \quad (4.4)$$

where $P_i^j = \int_{A_i^j} p(x)\,dx$, $p(x)$ is the pdf of the source position, and $A_i^j$ is given by

$$A_i^j = \{x\;|\;(Q_1 = \alpha_1(z_1(x)),\ldots,Q_M = \alpha_M(z_M(x)))\in S_i^j\} \quad (4.5)$$

Suppose the encoder at node $i$ merges $Q_i^j$ and $Q_i^k$ into $Q_i^l$ with $l = \min(j,k)$, and sends the corresponding index to the fusion node. The decoder will construct the set $S_i^l$ for the merged bin using (4.3) and then try to determine which of the two merged bins ($Q_i^j$ or $Q_i^k$ in this case) actually occurred at node $i$.
To do so, the decoder will use the information provided by the other nodes, i.e., the quantization indices $Q_m$ ($m\neq i$). Consider one particular source position $x\in S$ for which node $i$ produces $Q_i^j$ and the remaining nodes produce a combination of $M-1$ quantization indices $\mathbf{Q}\in\bar{S}_i^j$. Then, for this $x$ there would be no ambiguity at the decoder, even if bins $Q_i^j$ and $Q_i^k$ were merged, as long as $\mathbf{Q}\notin\bar{S}_i^k$. This follows because, if $\mathbf{Q}\notin\bar{S}_i^k$, the decoder would be able to determine that only $Q_i^j$ is consistent with receiving $\mathbf{Q}$. With the notation adopted earlier, this leads to the following definition:

Definition 1. $Q_i^j$ and $Q_i^k$ are identifiable, and therefore can be merged, iff $\bar{S}_i^j\cap\bar{S}_i^k = \emptyset$.

The question that remains is how to merge identifiable bins so as to minimize the total rate used by the $M$ nodes to transmit their quantized observations.

4.4 Quantization Schemes

As mentioned in the previous section, there will be redundancy in the $M$-tuples after quantization, which can be eliminated by our merging technique. However, we can also attempt to reduce the redundancy during quantizer design, before the encoding of the bins is performed. Thus, it is worth considering the effect of the choice of quantization scheme on system performance when the merging technique is employed. In this section, we consider three schemes:

• Uniform quantizers: Since they do not utilize any statistics about the sensor readings for quantizer design, there is no reduction in redundancy by the quantization scheme itself; only the merging technique plays a role in improving the system performance.

• Lloyd quantizers: Using the statistics of the sensor reading $z_i$ available at node $i$, the $i$-th quantizer is designed using the generalized Lloyd algorithm [29] with the cost function $|z_i-\hat{z}_i|^2$, which is minimized in an iterative fashion.
Since each node considers only the information available to it during quantizer design, much redundancy remains after quantization, which the merging technique can attempt to reduce.

• LSQ (proposed in Chapter 2): While designing the quantizer at node $i$, we take into account the effect of the sensor readings at the other nodes by defining a new cost function, $J = \sum_{i=1}^{M}|z_i-\hat{z}_i|^2 + \lambda\|x-\hat{x}\|^2$, which is minimized in an iterative manner. Since the correlation between sensor readings is exploited during quantizer design, LSQ combined with our merging technique shows the best performance of all.

We discuss the effect of quantization and encoding on system performance based on experiments for an acoustic amplitude sensor system in Section 4.9.1.

4.5 Proposed Encoding Algorithm

In general, there will be multiple pairs of identifiable quantization bins that can be merged. Often, not all candidate identifiable pairs can be merged simultaneously; after a pair has been merged, other candidate pairs may become non-identifiable. In what follows, we propose algorithms that determine sequentially which pairs should be merged. In order to minimize the total rate, an optimal merging technique should attempt to reduce the overall entropy as much as possible, which can be achieved by (1) merging high-probability bins together and (2) merging as many bins as possible. These two strategies cannot be pursued simultaneously. This is because high-probability bins (under our assumption of a uniform distribution of the source position) are large, and merging large bins tends to leave fewer remaining merging choices (i.e., a larger number of identifiable bin pairs may become non-identifiable after two large identifiable bins have been merged). Conversely, a strategy that tries to maximize the number of merged bins tends to merge many small bins, leading to less significant reductions in overall entropy.
In order to strike a balance between these two strategies, we define a metric $W_i^j$ attached to each quantization bin:

$$W_i^j = P_i^j - \gamma|S_i^j|, \quad (4.6)$$

where $\gamma\ge 0$. This is a weighted sum of the bin probability and the number of quantizer combinations that include $Q_i^j$. If $P_i^j$ is large, the bin is a good candidate for merging under criterion (1), whereas a small value of $|S_i^j|$ indicates a good choice under criterion (2). In our proposed procedure, for a suitable value of $\gamma$, we seek to prioritize the merging of those identifiable bins having the largest weighted metric. This is repeated iteratively until there are no identifiable bins left. The proposed global merging algorithm is summarized as follows:

Step 1: Set $F(i,j) = 0$ for $i = 1,\ldots,M$, $j = 1,\ldots,L_i$, indicating that none of the bins $Q_i^j$ have been merged yet.

Step 2: Find $(a,b) = \arg\max_{(i,j)\,|\,F(i,j)=0} W_i^j$, i.e., search over all non-merged bins for the one with the largest metric $W_a^b$.

Step 3: Find $Q_a^c$, $c\neq b$, such that $W_a^c = \max_{j\neq b} W_a^j$, where the search for the maximum is done only over the bins identifiable with $Q_a^b$ at node $a$. If there are no bins identifiable with $Q_a^b$, set $F(a,b) = 1$, indicating that the bin $Q_a^b$ is no longer involved in the merging process. If $F(i,j) = 1$ for all $i,j$, stop; otherwise go to Step 2.

Step 4: Merge $Q_a^b$ and $Q_a^c$ into $Q_a^{\min(b,c)}$ with $S_a^{\min(b,c)} = S_a^b\cup S_a^c$. Set $F(a,\max(b,c)) = 1$.

Given $M$ quantizers, we can construct the sets $S_i^j$ and the metric $W_i^j$ for all $i,j$, perform the merging using the proposed algorithm, and find the parameter $\gamma$ in (4.6) that minimizes the total rate (a heuristic method can be used to search for $\gamma$; clearly, $\gamma$ depends on the application). In the proposed algorithm, the search for the maximum of the metric is done over the bins of all nodes involved. However, different approaches can be considered for the search. These are explained as follows:

Method 1: Complete sequential merging. In this method, we process one node at a time in a specified order.
For each node, we merge the maximum number of bins possible before proceeding to the next node. Merging decisions are not modified once made. Since we exhaust all possible mergers at each node, after scanning all the nodes no additional mergers are possible.

Method 2: Partial sequential merging. In this method, we again process one node at a time in a specified order. For each node, among all possible bin mergers, the best one according to a criterion is chosen (the criterion could be entropy-based; for example, (4.6) is used in this thesis), and after the chosen bin is merged we proceed to the next node. This process continues until no additional mergers are possible at any node, which may require multiple passes through the set of nodes.

These two methods can be easily implemented with minor modifications to our proposed algorithm. Notice that the final result of the proposed encoding algorithm is $M$ merging tables, each of which records which bins are merged at each node in real operation. That is, each node merges its quantization bins using the merging table stored at the node and sends the merged bin to the fusion node, which then tries to determine which bin actually occurred via the decoding process, using the $M$ merging tables and $S_Q$.

4.5.1 Incremental Merging

The complexity of the above procedures is a function of the total number of quantization bins, and thus of the number of nodes involved. These approaches could therefore be complex for large sensor fields. We now show that incremental merging is possible; that is, we can start by performing the merging based on a subset of $N$ sensor nodes, $N < M$, and be guaranteed that merging decisions that were valid when $N$ nodes were considered remain valid when all $M$ nodes are taken into account.

To see this, suppose that $Q_i^j$ and $Q_i^k$ are identifiable when only $N$ nodes are considered. That is, from Definition 1, $\bar{S}_i^j(N)\cap\bar{S}_i^k(N) = \emptyset$, where $N$ indicates the number of nodes involved in the merging process. Note that since every element $\mathbf{Q}^j(M) = (Q_1,\ldots,Q_N,Q_{N+1},\ldots,Q_M)\in\bar{S}_i^j(M)$ is constructed by concatenating the $M-N$ indices $Q_{N+1},\ldots,Q_M$ with the corresponding element $\mathbf{Q}^j(N) = (Q_1,\ldots,Q_N)\in\bar{S}_i^j(N)$, we have that $\mathbf{Q}^j(M)\neq\mathbf{Q}^k(M)$ if $\mathbf{Q}^j(N)\neq\mathbf{Q}^k(N)$. Thus, by the property of the intersection operator $\cap$, we can claim that $\bar{S}_i^j(M)\cap\bar{S}_i^k(M) = \emptyset$ for all $M\ge N$, implying that $Q_i^j$ and $Q_i^k$ are still identifiable even when we consider $M$ nodes.

Thus, we can start the merging process with just two nodes and continue further merging by adding one node (or a few) at a time, without any change to previously merged bins. When many nodes are involved, this leads to significant savings in computational complexity. In addition, if some of the nodes are located far away from the nodes being added (that is, the dynamic ranges of their quantizers do not overlap with those of the nodes being added), they can be skipped for further merging without loss of merging performance.

4.6 Extension of Identifiability: p-identifiability

Under real operating conditions there is measurement noise ($w_i\neq 0$) and/or parameter mismatch, so the assumption that $\Pr[(Q_1,\ldots,Q_M)\in S_Q] = 1$ is no longer valid. Thus, we need to modify the merging technique so that it can operate under noisy conditions. We start by constructing the set $S_Q(p)$ from the elements of $S^M$ that have high probability. In other words, we discard those $M$-tuples in $S^M$ that rarely happen, so that we can maintain $\Pr[Q^r\in S_Q(p)] = p\ (\approx 1)$ and still achieve good rate savings with the merging technique. Formally,

$$S_Q(p) = \{Q^{(1)},\ldots,Q^{(|S_Q(p)|)}\}, \quad \Pr[Q^{(i)}]\ge\Pr[Q^{(j)}]\ \text{for } i<j,\ i,j=1,\ldots,|S^M| \quad (4.7)$$

Notice that a decoding error will occur at the fusion node whenever an element of $S^M - S_Q(p)$ is produced. Clearly, there is a trade-off between rate savings and decoding errors.
If we choose $S_Q(p)$ to be as small as possible, we can achieve better rate savings at the expense of a larger decoding error, which could lead to significant degradation of localization performance. The handling of decoding errors is discussed in Section 4.7. With this consideration, Definition 1 can be restated as follows:

Definition 2. $Q_i^j$ and $Q_i^k$ are p-identifiable, and therefore can be merged, iff $\bar{S}_i^j(p)\cap\bar{S}_i^k(p) = \emptyset$,

where $\bar{S}_i^j(p)$ and $\bar{S}_i^k(p)$ are constructed from $S_Q(p)$. Notice that p-identifiability allows us to apply the merging technique explained in Section 4.5 and thus achieve good rate savings at the expense of decoding errors. Clearly, the probability of decoding error is less than $1-p$.

4.7 Decoding of Merged Bins and Handling Decoding Errors

Figure 4.1: Encoder-Decoder Diagram.

When there is a decoding error, the first step we should take is to obtain the possible $M$-tuples $Q_1^D,\ldots,Q_K^D$ from the received $M$-tuple $Q^r$ (the encoded version) by using the $M$ merging tables. Note that the merging process is done off-line in a centralized manner. In real operation, each node stores its merging table to perform the encoding, and the fusion node uses $S_Q(p)$ and the $M$ merging tables to do the decoding. For example, with $M = 3$ and $R_i = 2$, suppose that according to node 1's merging table, $Q_1^1$ and $Q_1^4$ can be merged into $Q_1^1$, implying that node 1 will transmit $Q_1^1$ to the fusion node whenever $z_1$ belongs to $Q_1^1$ or $Q_1^4$. If the fusion node receives $Q^r = (1,2,4)$, it decomposes $(1,2,4)$ into $(1,2,4)$ and $(4,2,4)$ by using node 1's merging table. This decomposition is then performed for the other $M-1$ merging tables as well. Suppose we have a set of $K$ $M$-tuples, $S_D = \{Q_1^D,\ldots,Q_K^D\}$, decomposed from $Q^r$. Then clearly $Q^r\in S_D$ and $Q^t\in S_D$, where $Q^t$ is the true $M$-tuple before encoding (see Figure 4.1).
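The decomposition just described expands the received tuple through each node's merging table. A minimal sketch (our names; `merge_tables[i]` maps each transmitted index of node $i$ to the set of original bins merged into it):

```python
from itertools import product

def decompose(q_received, merge_tables):
    """Expand a received (encoded) M-tuple into all candidate originals.

    For the M = 3 example in the text, node 1 merging bins 1 and 4 into
    index 1 is expressed as merge_tables[0] == {1: {1, 4}}; a missing
    entry means the index was never merged and stands for itself.
    """
    choices = [sorted(merge_tables[i].get(q, {q}))
               for i, q in enumerate(q_received)]
    return [tuple(t) for t in product(*choices)]
```

The candidate set always contains both the received tuple and the true tuple, which is what the decoding rules of Section 4.7 exploit.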
It is observed that, since the decomposed $M$-tuples are produced from the true one $Q^t$ via the $M$ merging tables, it is very likely that $\Pr(Q_k^D)\ll 1$ for $Q_k^D\neq Q^t$, $k = 1,\ldots,K$. In other words, since the encoding process merges quantization bins only when the $M$-tuples that contain them are very unlikely to occur at the same time, the $M$-tuples $Q_k^D\ (\neq Q^t)$ tend to have very low probability.

4.7.1 Decoding Rule 1: Simple Maximum Rule

Since the received $M$-tuple $Q^r$ carries the ambiguity introduced by the encoders at each node, the decoder at the fusion node should be able to find the true $M$-tuple by using an appropriate decoding rule. As a simple rule, we can take the $M$-tuple (out of $Q_1^D,\ldots,Q_K^D$) that is most likely to happen. Formally,

$$Q^D = \arg\max_k \Pr[Q_k^D],\quad k=1,\ldots,K \quad (4.8)$$

where $Q^D$ is the decoded $M$-tuple, which is forwarded to the localization routine. We now consider two possible cases:

• $Q^t\in S_Q(p)$: There is no decoding error, and there exists only one $M$-tuple $Q^D$ among the $K$ $M$-tuples that belongs to $S_Q(p)$. Note that $\Pr[Q^D]\ge\Pr[Q_k^D]$, $k = 1,\ldots,K$, due to the property of $S_Q(p)$ in (4.7), and thus the decoding rule in (4.8) allows us to obtain $Q^t$ from the $K$ $M$-tuples without decoding error.

• $Q^t\in S^M - S_Q(p)$: Since a decoding error occurs only when $\Pr[Q^t] < \Pr[Q^D]$, the decoding error probability is less than $1-p$. Note that $\Pr[Q\in S_Q(p)] = p$.

4.7.2 Decoding Rule 2: Weighted Decoding Rule

Instead of choosing only one decoded $M$-tuple, we can treat each decomposed $M$-tuple as a candidate for decoding, with a corresponding weight based on its likelihood. For example, we can view $Q_k^D$ as a decoded $M$-tuple with weight $W_k = \Pr[Q_k^D]\big/\sum_{l=1}^{K}\Pr[Q_l^D]$, $k = 1,\ldots,K$. It should be noted that the weighted decoding rule is used together with the localization routine as follows:

$$\hat{x} = \sum_{k=1}^{K}\hat{x}_k W_k \quad (4.9)$$

where $\hat{x}_k$ is the estimated source location using the decoded $M$-tuple $Q_k^D$.
4.7.2 Decoding Rule 2: Weighted Decoding Rule

Instead of choosing only one decoded M-tuple, we can treat each decomposed M-tuple as a candidate for decoding, with a corresponding weight based on its likelihood. For example, we can view Q_k^D as a decoded M-tuple with weight W_k = Pr[Q_k^D] / Σ_{l=1}^{K} Pr[Q_l^D], k = 1, ..., K. It should be noted that the weighted decoding rule is used along with the localization routine as follows:

x̂ = Σ_{k=1}^{K} x̂_k W_k,  (4.9)

where x̂_k is the source location estimated using the decoded M-tuple Q_k^D. For simplicity, we can take only a few dominant M-tuples for the weighted decoding and localization. That is,

x̂ = Σ_{k=1}^{L} x̂_(k) W_(k),  (4.10)

where W_(k) is the weight of Q_(k)^D and Pr[Q_(i)^D] ≥ Pr[Q_(j)^D] if i < j. Typically L is chosen as a small number (e.g., L = 2). Note that the weighted decoding rule with L = 1 is equivalent to the simple maximum rule in (4.8).

4.8 Application to the Acoustic Amplitude Sensor Case

In order to perform distributed encoding at each node, we first need to obtain the set S_Q, which can be constructed from (4.1) as follows:

S_Q = {(Q_1, ..., Q_M) | ∃x ∈ S, Q_i = α_i(g_i a / ||x − x_i||^α + w_i)},  (4.11)

where the i-th sensor reading z_i(x) is expressed by the sensor model g_i a / ||x − x_i||^α and the measurement noise w_i (see Section 2.5.1 for further details about this expression). When the signal energy a is known and there is no measurement noise (w_i = 0), it is straightforward to construct the set S_Q. That is, each element in S_Q corresponds to one region in the sensor field, obtained by computing the intersection A of M ring-shaped areas A_1, ..., A_M. For example, using the j-th element Q^j = (Q_1^j, ..., Q_M^j) in S_Q, we can compute the corresponding intersection A^j as follows:

A_i = {x | g_i a / ||x − x_i||^α ∈ Q_i^j, x ∈ S}, i = 1, ..., M,  (4.12)

A^j = ∩_{i=1}^{M} A_i.  (4.13)

Clearly, since the nodes involved in localization of any given source location generate the same M-tuple, the set S_Q is computed deterministically. In other words, Pr[Q ∈ S_Q] = 1. Thus, using S_Q, we can apply our merging technique to this case and achieve significant rate savings without any degradation of localization accuracy (no decoding error). However, measurement noise and/or unknown signal energy complicate this problem by allowing random realizations of the M-tuples generated by the M nodes for any given source location.
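The truncated weighted rule in (4.10) can be sketched as follows. This is a minimal illustration, not the thesis's code: `localize` stands for the Chapter 3 localization routine applied to a single M-tuple, and the probability table is a hypothetical prior.

```python
def weighted_decode(candidates, prob, localize, L=2):
    """Decoding Rule 2 (eqs. 4.9-4.10): weight the location estimates of
    the L most probable candidate M-tuples by their normalized
    probabilities. `localize` maps an M-tuple to an (x, y) estimate."""
    top = sorted(candidates, key=lambda q: prob.get(q, 0.0), reverse=True)[:L]
    total = sum(prob.get(q, 0.0) for q in top)
    x = y = 0.0
    for q in top:
        w = prob.get(q, 0.0) / total  # normalized weight W_(k)
        ex, ey = localize(q)
        x += w * ex
        y += w * ey
    return (x, y)
```

With L = 1 only the most probable candidate survives with weight 1, recovering the simple maximum rule; e.g., with candidate probabilities 0.3 and 0.1 and per-candidate estimates (2, 2) and (6, 6), the L = 2 estimate is the 0.75/0.25 blend (3, 3).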
Since the condition for no decoding error, i.e., Pr[Q ∈ S_Q] = 1, is satisfied only when S_Q contains all possible M-tuples, the size of S_Q would have to grow to reduce the decoding errors, and this would greatly reduce the rate savings achieved by our encoding algorithm. Thus, noting that decoding errors occur only when Q^r ∈ S_M − S_Q, we can construct S_Q by including only those elements with high probability, so that decoding errors are unlikely. Formally, we construct S_Q(p) such that Pr[Q ∈ S_Q(p)] = p (≈ 1) and then apply our modified merging algorithm with p-identifiability, as explained in Section 4.6.

4.8.1 Construction of S_Q(p)

In order to construct S_Q(p), which should satisfy property (4.7), we would need to enumerate all possible M-tuples and compute their probabilities to obtain the sorted sequence Q_(1), ..., Q_(|S_Q|). In practice, since this would incur a very large computational cost, we instead construct S_Q as follows:

- Assuming no measurement noise, construct multiple sets S_Q(a_k) using (4.11) with a = a_k, w_i = 0, k = 1, ..., L_a. Note that L_a = (a_max − a_min)/Δa, where Δa = a_{k+1} − a_k is chosen as a small value (≪ 1).
- Check whether Pr[Q ∈ S_Q(p) = ∪_{k=1}^{L_a} S_Q(a_k)] = p.
- Otherwise, generate random realizations of M-tuples using the measurement noise until Pr[Q ∈ S_Q(p)] = p.

Note that this construction allows S_Q(p) to capture most of the high-probability M-tuples, due to the assumption of normally distributed measurement noise.

4.9 Experimental Results

The distributed encoding algorithm is applied to a system where each node employs an acoustic amplitude sensor model for source localization.
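The first construction step (the union of noise-free sets S_Q(a_k) over candidate energies) can be sketched by sweeping a grid of source positions through the sensor model of (4.11). Everything here is an illustrative assumption: the 10 × 10 field, the grid resolution, the decay exponent, and the per-node quantizer functions are stand-ins, not values fixed by the thesis.

```python
import numpy as np

def build_SQ(sensor_pos, gains, quantize, a_values,
             alpha=2.0, field=10.0, grid_n=50):
    """Union of noise-free S_Q(a_k) over candidate energies a_k
    (Section 4.8.1, first step). `quantize[i]` maps node i's reading
    z_i = g_i * a / ||x - x_i||^alpha (w_i = 0) to its bin index."""
    sq = set()
    xs = np.linspace(0.0, field, grid_n)
    for a in a_values:
        for x in xs:
            for y in xs:
                q = tuple(
                    quantize[i](g * a / max(np.hypot(x - sx, y - sy), 1e-6) ** alpha)
                    for i, (g, (sx, sy)) in enumerate(zip(gains, sensor_pos))
                )
                sq.add(q)  # one deterministic M-tuple per grid point
    return sq
```

Noise realizations would then be added on top of this set until Pr[Q ∈ S_Q(p)] reaches the target p, as the second and third steps describe.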
The experimental results are reported in terms of the average localization error, E||x − x̂||², and the rate savings (%), computed as (R_T − R_M)/R_T × 100, where R_T is the rate consumed by the M nodes when only independent entropy coding (Huffman coding) is applied after quantization, and R_M is the total rate when the merging technique is applied before entropy coding. In performing the localization based on the quantized noisy sensor readings, the localization algorithm proposed in Chapter 3 was applied to compute E||x − x̂||². We also assume that each node uses the quantizer (LSQ) proposed in Chapter 2, except in the experiments where otherwise noted.

4.9.1 Distributed Encoding Algorithm

First, we assume that each node can measure the known signal energy without measurement noise. The distributed encoding algorithm proposed in Section 4.5 has been applied to the acoustic sensor system. Figure 4.2 shows the overall performance of the system for each quantization scheme. In this experiment, 100 different 5-node configurations were generated in a 10 × 10 m² sensor field. For each configuration, a test set of 2000 random source locations was used to obtain sensor readings, which were then quantized by three different quantizers, namely uniform quantizers, Lloyd quantizers, and LSQs. The average localization error and total rate R_M are averaged over the 100 node configurations. As expected, the overall performance of LSQ is the best of all, since the total reduction in redundancy is maximized when the application-specific quantization (LSQ) and the distributed encoding are used together.

Figure 4.2: Average localization error vs. total rate R_M for three different quantization schemes with the distributed encoding algorithm (global merging algorithm).

Our encoding algorithm with the different merging techniques outlined in Section 4.5 was applied for comparison, and the results are provided in Table 4.1.
Methods 1 and 2 are as described in Section 4.5, and Method 3 is the global merging algorithm discussed in that section. We observe that even with relatively low rates (4 bits per node) and a small number of nodes (only 5), significant rate gains (up to 30%) can be achieved with our merging technique.

Table 4.1: Total rate R_M in bits (rate savings) achieved by various merging techniques.

R_i | Method 1     | Method 2     | Method 3
 2  | 9.4 (8.7%)   | 9.4 (8.7%)   | 9.10 (11.6%)
 3  | 11.9 (20.6%) | 12.1 (19.3%) | 11.3 (24.6%)
 4  | 13.7 (31.1%) | 14.1 (29.1%) | 13.6 (31.6%)

The encoding algorithm was also applied to many different node configurations to characterize its performance. In this experiment, 500 different node configurations were generated for each M (M = 3, 4, 5) in a 10 × 10 m² sensor field. After quantization, the global merging technique was applied to obtain the rate savings. In computing the metric in (4.6), the source distribution is assumed to be uniform. The average rate savings achieved by the encoding algorithm is computed as a sample mean over the 500 node configurations and plotted in Figure 4.3. Note that the performance of our encoding algorithm depends on the set S_Q given by (4.1).

Since there are a large number of nodes in a typical sensor network, our distributed algorithms have also been applied to a system with an acoustic sensor model in a larger sensor field (20 × 20 m²). In this experiment, 20 different node configurations were generated for each M (= 12, 16, 20), and for each node configuration our encoding algorithm was applied after quantization, under the assumption of no measurement noise. Note that the node density for M = 20 in 20 × 20 m² is equal to 20/(20 × 20) = 0.05, which is also the node density for the case of M = 5 in 10 × 10 m². In Table 4.2 it is worth noting that the system with a larger number of nodes outperforms the system with a smaller number of nodes (M = 3, 4, 5), although the node density is kept the same.
This is because the incremental property of the merging technique allows us to find more identifiable bins at each node.

Figure 4.3: Average rate savings achieved by the distributed encoding algorithm (global merging algorithm) vs. number of bits R_i with M = 5 (left), and vs. number of nodes M with R_i = 3 (right).

Table 4.2: Total rate R_M in bits (rate savings) achieved by the distributed encoding algorithm (global merging technique). The rate savings is averaged over 20 different node configurations, where each node uses LSQ with R_i = 3.

M  | Total rate R_M in bits (Rate savings)
12 | 17.3237 (51.56%)
16 | 20.7632 (56.45%)
20 | 23.4296 (60.69%)

4.9.2 Encoding with p-Identifiability and Decoding Rules

The distributed encoding algorithm with p-identifiability described in Section 4.6 was applied to the case where each node collects noise-corrupted measurements of an unknown source signal energy. First, assuming known signal energy, we examined the effect of measurement noise on the rate savings, and thus on the decoding error, by varying the size of S_Q(p) (see Figure 4.4). In this experiment, the variance of the measurement noise, σ², varies from 0 to 0.5², and for each σ² a test set of 2000 source locations was generated with a = 50. Figure 4.4 illustrates that good rate savings can still be achieved in a noisy situation at the expense of a small decoding error. Clearly, better rate savings can be achieved at higher SNR and/or when a larger decoding error (< 0.05) is allowed.

Figure 4.4: Rate savings achieved by the distributed encoding algorithm (global merging algorithm) vs. SNR (dB) with R_i = 3 and M = 5.
σ² = 0, ..., 0.5².

For the case of unknown signal energy, where we assume a ∈ [a_min, a_max], we constructed S_Q(p) = ∪_{k=1}^{L_a} S_Q(a_k) with Δa = a_{k+1} − a_k = (a_max − a_min)/L_a = 0.5, varying p = 0.8, ..., 0.95, where S_Q(a_k) is the set S_Q constructed for a = a_k in the noise-free condition (w_i = 0). Using S_Q(p), we applied the merging technique with p-identifiability to evaluate the performance (rate savings vs. localization error). In the experiment, a test set of 2000 samples was generated from uniform priors for p(x) and p(a), for each noise variance (σ = 0, 0.05). (Note that for practical vehicle targets, the SNR is often much higher than 40 dB [19].) In order to deal with decoding errors, the two decoding rules in Section 4.7 were applied along with the localization algorithm of Chapter 3. In Figure 4.5, the performance curves for the two decoding rules are plotted for comparison. As can be seen, the weighted decoding rule performs better than the simple maximum rule, since the former takes into account the effect of the other decomposed M-tuples on localization accuracy by adjusting their weights. It is also noted that when the decoding error is very low (equivalently, p ≈ 1), both rules show almost the same performance.

Figure 4.5: Average localization error vs. total rate R_M achieved by the distributed encoding algorithm (global merging algorithm) with simple maximum decoding and weighted decoding, respectively. The total rate varies as p is changed from 0.8 to 0.95, and weighted decoding is conducted with L = 2. Solid line + *: weighted decoding. Solid line + ∇: simple maximum decoding.

To see how much gain we obtain from the encoding, we compared this to a system that uses only entropy coding, without applying the merging technique. In Figure 4.6, the performance curves are plotted for varying sizes of S_Q(p), with σ = 0, 0.05. It should be noted that from this experiment we can determine the size of S_Q(p) (equivalently, p) that provides the best performance.
Figure 4.6: Average localization error vs. total rate R_M achieved by the distributed encoding algorithm (global merging algorithm) with R_i = 3 and M = 5; σ = 0, 0.05. S_Q(p) is varied over p = 0.85, 0.9, 0.95. Weighted decoding with L = 2 is applied in this experiment.

4.9.3 Performance Comparison: Lower Bound

We address the question of how our technique compares with the best achievable performance for this source localization scenario. As a bound on achievable performance, we consider a system where (i) each node quantizes its observation independently, and (ii) the quantization indices generated by all nodes for a given source location are jointly coded (in our case we use the joint entropy of the vector of observations as the rate estimate). This approach can be applied both to the original quantizer design and to the quantizer obtained after merging. Note that this is not a realistic bound, because joint coding cannot be achieved unless the nodes are able to communicate before encoding. Note that in order to approximate the behavior of the joint entropy coder via distributed source coding techniques, one would have to transmit multiple observations of the source energy from each node as the source moves around the sensor field. Some of the nodes could send observations that are directly encoded, while others could transmit a syndrome produced by an error correcting code based on the quantized observations. Then, as the fusion node receives all the information from the various nodes, it would be able to exploit the correlation among the observations and approximate the joint entropy. This method would not be desirable, however, because the information at each node depends on the location of the source, and thus to obtain a reliable estimate of the measurements at all nodes one would need observations at a sufficient number of source positions. Thus, instantaneous localization of the source would not be possible.
The key point here, then, is that the randomness between observations across nodes depends on the location of the source, which is precisely what we wish to estimate. For one 5-node configuration, the average rate per node was plotted against the localization error in Figure 4.7, under the assumption of no measurement noise (w_i = 0) and known signal energy. As can be seen from Figure 4.7, our distributed encoding algorithm of Section 4.5 outperforms techniques based on uniform quantization. For this particular configuration, we observe a gap of less than 1 bit/node at high rates between the performance achieved by the proposed quantizer with distributed encoding and that achievable with the same quantizer if joint entropy coding were possible. In summary, our merging technique with the proposed quantization scheme provides substantial gains over the straightforward application of known techniques and comes close to the optimal achievable performance.

Figure 4.7: Performance comparison: the distributed encoding algorithm is lower bounded by joint entropy coding.

4.10 Conclusion

Using the distributed property of the quantized observations, we proposed a novel encoding algorithm that achieves significant rate savings by merging quantization bins. We also developed decoding rules to deal with the decoding errors that can be caused by measurement noise and/or parameter mismatches. In the experiments, we showed that a system equipped with the quantizers proposed in Chapter 2 and the distributed encoders achieves significant data compression as compared with standard systems.

Chapter 5: Conclusion and Future Work

In this thesis, we have studied the impact of quantization on the performance of source localization systems in which distributed sensors measure source signals, quantize the noise-corrupted measurements, encode them, and send them to a fusion node that performs decoding and localization based on the quantized data to estimate the source location.
We proposed an iterative quantizer design algorithm that reduces the localization error. We showed that quantizer design should be "application-specific". We addressed the rate allocation problem and showed that the rate allocation result can be improved by taking into account the quantization scheme at each sensor. We also proposed a novel distributed encoding algorithm that merges quantization bins at each sensor whenever the ambiguity created by this merging can be resolved at the fusion node by using information from other sensors.

As an example of the localization application, we considered an acoustic amplitude sensor system where each sensor measures the source signal energy based on an energy decay sensor model. For this application, we proposed a distributed localization algorithm based on the maximum a posteriori (MAP) criterion. We showed that the localization algorithm achieves good performance with reasonable complexity as compared to minimum mean squared error (MMSE) estimation.

Extensive simulations have been conducted to characterize the performance of our distributed algorithms. They demonstrated the benefits of using application-specific designs to replace traditional quantizers such as uniform quantizers and Lloyd quantizers. They also showed that the rate allocation optimized for source localization achieves significant gains as compared to a uniform rate allocation. In addition, the experiments showed that the complexity of the localization algorithm can be significantly reduced by exploiting the distributed property of the quantized data, without much degradation of localization accuracy. They also showed that significant rate savings can be achieved via our encoding algorithm.

In future work, many relevant topics and ideas can be addressed. First, in this thesis we considered a specific source localization system and used simulation-based data to test our algorithms.
Thus, it would be worthwhile to apply the algorithms to real data obtained in a test field, for various systems in which different types of sensors can be employed at each node for different tasks. Second, we assumed that the communication link between the sensors and the fusion node is fully reliable. Further work should be conducted to incorporate noisy links into our design framework. Third, the joint design of quantizers and encoders can be addressed, since there exists a dependency between quantization and the encoding of quantized data that could be exploited for further performance gains. Finally, complexity should be addressed for practical applications. Note that our design algorithms operate off-line using information about the sensor locations. However, in many cases of interest the sensor network could be reconfigured regularly and would require on-line operation. In such cases, low-complexity techniques would be very important.

Bibliography

[1] D. Blatt and A. O. Hero. Energy based sensor network source localization via projection onto convex sets (POCS). IEEE Transactions on Signal Processing, 54(9):3614–3619, September 2006.

[2] J. C. Chen, R. E. Hudson, and K. Yao. Maximum-likelihood source localization and unknown sensor location estimation for wideband signals in the near-field. IEEE Transactions on Signal Processing, 50(8):1843–1854, August 2002.

[3] J. C. Chen, K. Yao, and R. E. Hudson. Source localization and beamforming. IEEE Signal Processing Magazine, 19(2), March 2002.

[4] J. C. Chen, K. Yao, and R. E. Hudson. Acoustic source localization and beamforming: Theory and practice. EURASIP Journal on Applied Signal Processing, pages 359–370, 2003.

[5] J. C. Chen, L. Yip, J. Elson, H. Wang, D. Maniezzo, K. Yao, R. E. Hudson, and D. Estrin. Coherent acoustic array processing and localization on wireless sensor networks. In IEEE Proceedings, August 2003.

[6] P. A. Chou, T. Lookabaugh, and R. M. Gray. Entropy-constrained vector quantization.
IEEE Transactions on Acoustics, Speech, and Signal Processing, pages 31–41, January 1989.

[7] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 1991.

[8] T. J. Flynn and R. M. Gray. Encoding of correlated observations. IEEE Transactions on Information Theory, November 1987.

[9] P. Frossard, O. Verscheure, and C. Venkatramani. Signal processing challenges in distributed stream processing systems. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Toulouse, France, May 2006.

[10] P. H. Garthwaite, I. T. Jolliffe, and B. Jones. Statistical Inference, Second Edition. Oxford University Press, 2002.

[11] A. O. Hero and D. Blatt. Sensor network source localization via projection onto convex sets (POCS). In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2005.

[12] P. Ishwar, R. Puri, K. Ramchandran, and S. S. Pradhan. On rate-constrained distributed estimation in unreliable sensor networks. IEEE Journal on Selected Areas in Communications, 23:765–775, April 2005.

[13] V. Isler and R. Bajcsy. The sensor selection problem for bounded uncertainty sensing models. In IEEE International Symposium on Information Processing in Sensor Networks (IPSN), April 2005.

[14] S. M. Kay. Fundamentals of Statistical Signal Processing. Prentice-Hall, New Jersey, 1993.

[15] Y. H. Kim and A. Ortega. Quantizer design and distributed encoding algorithm for source localization in sensor networks. In IEEE International Symposium on Information Processing in Sensor Networks (IPSN), April 2005.

[16] Y. H. Kim and A. Ortega. Quantizer design for source localization in sensor networks. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), March 2005.

[17] Y. H. Kim and A. Ortega. Maximum a posteriori (MAP)-based algorithm for distributed source localization using quantized acoustic sensor readings.
In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), May 2006.

[18] Y. H. Kim and A. Ortega. Quantizer design and rate allocation for source localization in sensor networks. Submitted to IEEE Transactions on Signal Processing, 2007.

[19] D. Li and Y. H. Hu. Energy-based collaborative source localization using acoustic microsensor array. EURASIP Journal on Applied Signal Processing, pages 321–337, 2003.

[20] D. Li, K. D. Wong, Y. H. Hu, and A. M. Sayeed. Detection, classification and tracking of targets. IEEE Signal Processing Magazine, 19(2):17–29, March 2002.

[21] M. Lightstone and S. K. Mitra. Optimal variable-rate mean-gain-shape vector quantization for image coding. IEEE Transactions on Circuits and Systems for Video Technology, pages 660–668, December 1996.

[22] J. Liu, J. Reich, and F. Zhao. Collaborative in-network processing for target tracking. EURASIP Journal on Applied Signal Processing, pages 378–391, 2003.

[23] Z.-Q. Luo. Universal decentralized estimation in a bandwidth constrained sensor network. IEEE Transactions on Information Theory, pages 2210–2219, June 2005.

[24] C. Meesookho, U. Mitra, and S. Narayanan. Distributed range difference based target localization in sensor networks. In Proceedings of the Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, October 2005.

[25] R. Niu and P. K. Varshney. Target location estimation in wireless sensor networks using binary data. In Proceedings of the 38th Annual Conference on Information Sciences and Systems, Princeton, NJ, March 2004.

[26] R. Niu and P. K. Varshney. Target location estimation in sensor networks with quantized data. IEEE Transactions on Signal Processing, 54(12):4519–4528, December 2006.

[27] T. S. Rappaport. Wireless Communications: Principles and Practice. Prentice-Hall, New Jersey, 1996.

[28] E. A. Riskin. Optimal bit allocation via the generalized BFOS algorithm. IEEE Transactions on Information Theory, 37:400–402, March 1991.

[29] K. Sayood. Introduction to Data Compression, Second Edition.
Morgan Kaufmann Publishers, 2000.

[30] X. Sheng and Y. H. Hu. Maximum-likelihood multiple-source localization using acoustic energy measurements with wireless sensor networks. IEEE Transactions on Signal Processing, 53(1):44–53, January 2005.

[31] J. O. Smith and J. S. Abel. Closed-form least-squares source location estimation from range-difference measurements. IEEE Transactions on Acoustics, Speech and Signal Processing, pages 1661–1669, December 1987.

[32] N. Srinivasamurthy and A. Ortega. Joint compression-classification with quantizer/classifier dimension mismatch. In Visual Communications and Image Processing 2001, San Jose, CA, January 2001.

[33] N. Srinivasamurthy, A. Ortega, and S. Narayanan. Towards optimal encoding for classification with applications to distributed speech recognition. In Proceedings of Eurospeech, http://sipi.usc.edu/~ortega/Papers/NaveenEuro03.pdf, Geneva, September 2003.

[34] N. Srinivasamurthy, A. Ortega, and S. Narayanan. Efficient scalable encoding for distributed speech recognition. Speech Communication, 48:888–902, 2006.

[35] I. H. Tseng, O. Verscheure, D. S. Turaga, and U. V. Chaudhari. Quantization for adapted GMM-based speaker verification. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Toulouse, France, May 2006.

[36] L. Vasudevan, A. Ortega, and U. Mitra. Application-specific compression for time delay estimation in sensor networks. In First ACM Conference on Embedded Networked Sensors, Los Angeles, CA, November 2003.

[37] L. Vasudevan, A. Ortega, and U. Mitra. Jointly optimized quantization and time delay estimation for sensor networks. In First International Symposium on Control, Communications, and Signal Processing, Tunisia, March 2004.

[38] H. Wang, K. Yao, G. Pottie, and D. Estrin. Entropy-based sensor selection heuristic for target localization. In IEEE International Symposium on Information Processing in Sensor Networks (IPSN), 2004.

[39] J.-J. Xiao, S. Cui, Z.-Q. Luo, and A. Goldsmith.
Joint estimation in sensor networks under energy constraints. In IEEE First Conference on Sensor and Ad Hoc Communications and Networks, October 2004.

[40] J.-J. Xiao, A. Ribeiro, Z.-Q. Luo, and G. B. Giannakis. Distributed compression-estimation using wireless sensor networks. IEEE Signal Processing Magazine, 23(4):27–41, July 2006.

[41] H. Yang and B. Sikdar. A protocol for tracking mobile targets using sensor networks. In Proceedings of the IEEE Workshop on Sensor Network Protocols and Applications, Anchorage, AK, May 2003.

[42] F. Zhao, J. Shin, and J. Reich. Information-driven dynamic sensor collaboration for target tracking. IEEE Signal Processing Magazine, 19(2):61–72, March 2002.