Neighbor Discovery in Device-to-Device Communication

by

Arash Saber Tehrani

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy

Communication Science Institute
Ming Hsieh Department of Electrical Engineering
Viterbi School of Engineering
University of Southern California

Committee in charge: Professor Giuseppe Caire (chair), Professor Andreas Molisch, Professor Larry Goldstein

Los Angeles, CA, August 2015

"I don't think ignorance is necessarily a bad thing. The more you know, the more problems you have." — Hachiman Hikigaya

To my family ...

Acknowledgements

It was a long journey, through which I had the privilege of learning from many brilliant people: the individuals who patiently and elegantly paved the way to this work with their guidance, support, and help.

First and foremost, I want to sincerely thank my primary adviser, Giuseppe Caire, for his continued support and guidance. For me, he is a true mentor and, more than that, a role model. I also want to thank my secondary adviser, Alex Dimakis, who was a great friend as well as a mentor. This research would not have been possible without their continuous encouragement and inspiration, and I am truly honored to be their advisee.

I am also deeply indebted to Andy Molisch, who guided me during the absence of my advisers. His insightful comments and invaluable advice made this work possible. I was fortunate to work with him and have learned a lot from his integrity and erudition. A big thank you also goes to Larry Goldstein, my committee member, for reviewing this dissertation.

Additionally, I am grateful to my colleagues Ansuman, Ryan, Hassan, Song Nam, Seun, and Mingyue for their friendly help during my USC years. I want to thank our great CSI staff, Gerrie, Anita, Susie, and Corine, for their support and help. It was thanks to their great "coffee" that I made it this far.

My deepest gratitude goes to Shahram for everything he has done for me.
From the first day he picked me up from the airport until now, he has been my supportive angel. Having him was the greatest relief, as I knew that whatever the problem, Shahram was only a phone call away.

I also want to pay my respects and express my gratitude to my late mentor Ralf Koetter for showing me the wonders of the research world and patiently guiding me when I was a complete novice.

Finally, I want to say thank you to my Mom, Dad, and brother for their endless support, sacrifice, and love.

Los Angeles, 15 June 2015
A. S. T.

Abstract

Neighbor discovery in wireless networks is the problem of devices identifying the other devices with which they can communicate effectively, i.e., those whose signal is received with sufficiently high power. Neighbor discovery is crucial: it is the first task that must be performed in an ad-hoc or device-to-device network, and it is the prerequisite for channel estimation, scheduling, and communication. This dissertation presents two solutions for the neighbor discovery problem: compressed sensing neighbor discovery and the ZigZag scheme. The former relies on the connection between neighbor discovery and compressed sensing: since the number of neighbors a device has is small compared to the total number of nodes, sparse vector approximation methods can be applied. Exploiting our recent work on optimal deterministic sensing matrices, we show that the discovery time can be reduced to K log(N/K). The latter is based on successive interference cancellation, which provides a further efficiency improvement, i.e., a discovery period on the scale of K. We show a theoretical connection between the performance of these schemes and channel coding, through which we derive performance bounds for the methods in the ideal setting. To analyze the performance of the methods, we compare them against each other and against a random access scheme.
We further extend the ZigZag discovery to support devices equipped with directional steering-beam antennas. Such an extension is not straightforward, as devices need to scan their surroundings, and two devices must agree on becoming neighbors, since both must aim their beams at one another in future communications. Furthermore, as the neighbor discovery methods perform poorly in the presence of interference from non-neighbors, we suggest a cellular-based scheduling to overcome this problem. We further analyze the performance of these methods in a realistic environment in the presence of shadowing and fading, and introduce theoretical and practical measures to protect the performance of these methods against interference caused by non-neighbors.

Contents

Acknowledgements
Abstract
List of Figures
List of Tables

1 Introduction
  1.1 Organization
    1.1.1 Compressed Sensing
    1.1.2 Neighbor Discovery
    1.1.3 Directional Neighbor Discovery
    1.1.4 Interference in Device-to-Device Systems
  1.2 Notation

2 Optimal Measurement Matrices for Compressed Sensing
  2.1 Introduction
  2.2 Connection between Channel Coding and CS
  2.3 Measurement Matrix Construction
  2.4 Achievability
    2.4.1 Linear Sparsity
  2.5 Optimal number of measurements
  2.6 Appendix
3 Neighbor Discovery in Wireless Networks
  3.1 Introduction
  3.2 Model and Assumptions
    3.2.1 Notation
  3.3 Compressed Sensing Neighbor Discovery
  3.4 ZigZag Neighbor Discovery
    3.4.1 ZigZag meets MP over BEC
    3.4.2 ID format
  3.5 Aloha-like Scheme
  3.6 Measure of Success and Error
  3.7 The Collision Free Model
    3.7.1 CS neighbor discovery
    3.7.2 ZigZag
    3.7.3 Aloha-like scheme
  3.8 The Interference Model
  3.9 Experimental Results for the Interference Model
    3.9.1 CS neighbor discovery
    3.9.2 ZigZag
    3.9.3 Aloha-like scheme
  3.10 Geometric scheduling
  3.11 Appendix
    3.11.1 Proof of Theorem 5
    3.11.2 Proof of Theorem 6
4 Directional ZigZag: Neighbor Discovery with Directional Antennas
  4.1 Introduction
  4.2 System Model
  4.3 Directional ZigZag Algorithm
    4.3.1 The transmission pattern
    4.3.2 Performance Analysis
  4.4 Practical Considerations
    4.4.1 ID format
    4.4.2 Interference Control
    4.4.3 Encoding
  4.5 Experimental Results
  4.6 Appendix
    4.6.1 Proof of the Theorem

5 Interference in Device-to-Device Communications
  5.1 Introduction
  5.2 Channel and System Models
    5.2.1 Cellular System and Frequency Reuse
    5.2.2 Distribution of the nodes
    5.2.3 Wireless Channel
    5.2.4 Approximating the area of a cell
    5.2.5 Intra-cell approximation
    5.2.6 Inter-cell approximation
  5.3 Aggregate Interference
    5.3.1 Mean and Variance of the Aggregate Interference
  5.4 Characteristic function of the aggregate interference
  5.5 Signal-to-Interference-Plus-Noise Ratio
  5.6 Applications
  5.7 Experimental Results

6 Contributions and Outlook

Bibliography

List of Figures

2.1 Compressed sensing system Ax = y, where x is a k-sparse signal.
2.2 Curves for (C_R, c_ℓ, c_r) = (2, 10, 0.1), (2.5, 10, 0.05), and (3.5, 15, 0.05), for t ∈ [0, 1].
3.1 Detecting the signatures of nodes v_1, v_2, and v_3 by v_0 through the ZigZag algorithm.
3.2 Node identifiers and the network overhead.
3.3 The collision model experiment scenario: we uniformly distribute K - 1 points inside a unit disk with v_0 at its center and v'_0 on its boundary.
3.4 The probability of success and the fractional miss error rate in recovering the set S, conditioned on the number of neighbors |S|, for the CS neighbor discovery with different transmission patterns (M, d_ℓ, d_r). The dashed lines correspond to the K-largest detector (3.16), while the solid lines depict the performance of the set estimator (3.15).
3.5 The probability of success and the fractional miss error rate F_M(S, Ŝ) in recovering the set S, conditioned on the number of neighbors |S|, for ZigZag.
3.6 The probability of success and the fractional miss error rate F_M(S, Ŝ) in recovering the set S, conditioned on the number of neighbors |S|, for the Aloha-like scheme.
3.7 The probability of successful recovery P(S = Ŝ) versus the ratio M/K for the ZigZag and CS neighbor discovery methods.
3.8 Simulation scenario for analyzing the interference model with geometric scheduling (hexagonal blocks in this example), where N nodes are distributed uniformly over a disk of radius R and each node has a neighborhood radius of r. Since nodes on the boundary suffer from lower interference, we only consider the nodes close to the center, i.e., the nodes shown in green.
3.9 Probability of perfect detection of the neighborhood P(S = Ŝ), fractional miss rate F_M(S, Ŝ), and fractional false alarm rate F_FA(S, Ŝ) for CS discovery when the average |S| = K, for different transmission patterns.
3.10 Probability of perfect detection of the neighborhood P(S = Ŝ), fractional miss rate F_M(S, Ŝ), and fractional false alarm rate F_FA(S, Ŝ) for ZigZag when the average |S| = K, for different transmission patterns.
3.11 Probability of perfect detection of the neighborhood P(S = Ŝ), fractional miss rate F_M(S, Ŝ), and fractional false alarm rate F_FA(S, Ŝ) for the Aloha-like scheme when the average |S| = K, for different transmission patterns.
3.12 Geometric scheduling with hexagonal cells.
3.13 The signature matrix of the geometric scheduling with hexagonal blocks.
3.14 A (d_ℓ, d_r)-tree with depth T.
4.1 Node identifiers and the network overhead.
4.2 The ratio M/k for directional ZigZag in the collision model.
4.3 The ratio M/k for directional ZigZag in the realistic model with w = π/2.
5.1 Geometric scheduling with (a) |C| = 4 and (b) |C| = 7 classes.
5.2 Approximating the area of a hexagonal cell with radius d by a disk with radius 0.9094d for a node located at a corner of the cell.
5.3 Approximating the area of a hexagonal cell with radius d by an annulus sector with angle ϑ = d√3 / ((3/2)d + 2D) and length a = (3/2)d.
5.4 The second approximation for the area of a hexagonal cell with radius d, based solely on the knowledge of the centers and radii of the cells.
5.5 The mean of the aggregate interference as a function of the reuse distance for a fixed cell radius d.
5.6 The scaled variance of the aggregate interference as a function of the reuse distance for a fixed cell radius d.
5.7 Inter-cell interference received from a single cell at distance D = 6 for propagation case 1, i.e., the pathloss-only scenario.
5.8 Inter-cell interference received from a single cell at distance D = 6 for propagation case 2, i.e., the pathloss and Nakagami-m fading scenario.
5.9 Inter-cell interference received from a single cell at distance D = 6 for propagation case 3, i.e., the pathloss and log-normal shadowing scenario.
5.10 Inter-cell interference received from the first-tier neighboring cells for reuse distance D = 3 and propagation case 1, i.e., the pathloss-only scenario.
5.11 Inter-cell interference received from the first-tier neighboring cells for reuse distance D = 3 and propagation case 2, i.e., the pathloss and Nakagami-m fading scenario.
5.12 Inter-cell interference received from the first-tier neighboring cells for reuse distance D = 3 and propagation case 3, i.e., the pathloss and log-normal shadowing scenario.
5.13 Inter-cell interference received from the first-tier neighboring cells for reuse distance D = 3 and propagation case 1, i.e., pathloss only.
5.14 The bound (5.25) on the aggregate interference against the cumulative sum of the histogram for reuse distance D = 3 and propagation case 2, i.e., the pathloss and Nakagami-m fading scenario.
5.15 The bound (5.25) on the aggregate interference against the cumulative sum of the histogram for reuse distance D = 3 and propagation case 3, i.e., the pathloss and log-normal shadowing scenario.

List of Tables

2.1 Summary of the explicit matrix constructions (only the best known results are shown).
3.1 The ratio for regular codes with (d_ℓ, d_r) degrees over the BEC and BSC.

1 Introduction

Wireless technology is being adopted more and more into our daily lives. While people have grown accustomed to smartphones, tablets, and laptops, smart wearable technologies such as watches and glasses are being introduced into the market. The services these devices provide depend strongly on the supporting wireless system (cellular, WiFi, LTE); e.g., phone and tablet applications, cloud computing and storage, GPS, and many more require data communication. The current wireless technologies, however, are by no means adequate to accommodate this exponential growth rate [VIN14], as bandwidth and power are limited. Thus, new approaches are required. One such approach is to increase spatial reuse and have devices communicate directly with one another, i.e., device-to-device (D2D) communication.
In D2D systems, devices communicate directly with each other without the intervention of a base station or controller. This allows for greater spectral efficiency and flexibility [DRW+09], as many links may be scheduled simultaneously. Operations should be as distributed as possible, especially when the D2D network must serve as a back-up to the conventional cellular network in the case of a major system failure or emergency.

Due to path loss and power constraints, each device can only communicate with a small number of neighboring nodes. Neighbor discovery is the process by which devices identify those with which they can communicate effectively. Clearly, this is the first task that must be performed in network formation, since it enables all other operations, such as scheduling [BE81] and routing [HPS02]. Fast discovery of neighbors is essential, especially in mobile networks, where discovery must be repeated often so that sufficient knowledge of the network topology is maintained. Thus, it is important to reduce the discovery period as much as possible.

Most neighbor discovery algorithms rely on random transmission and reception [MB01, VKT05, BEM07], similar to Aloha. The performance of these randomized methods, however, deteriorates greatly as the number of devices increases and the network becomes denser. Thus, new approaches are needed whose discovery period and recovery probability scale better with the number of devices. We offer two, namely compressed sensing and ZigZag neighbor discovery.

The compressed sensing neighbor discovery establishes a connection between the neighbor discovery problem and the compressed sensing problem. A key observation is that the number of neighbors a node has is much smaller than the total number of nodes in the network. Therefore, neighbor discovery for each node is similar to sparse signal recovery and can be treated as a compressed sensing problem.
In this connection, the zero/one measurement matrices of compressed sensing are used as predetermined transmission patterns for the devices. We are thus required to build transmission patterns that ensure each device identifies its neighboring devices.

The ZigZag neighbor discovery relies on devices transmitting their unique identifiers according to a pre-determined transmission pattern. This allows them to cancel the recovered identifiers from previously received signals and discover other neighbors.

Further, we extend these algorithms to the directional case, where devices are equipped with adaptive antennas. Adaptive antennas are capable of electronically changing their gain pattern and, in particular, of forming beams and steering them in arbitrary directions.

Much of the literature [MB01, BEM07, STC14] analyzes the performance of discovery algorithms in an ideal setting where the unwanted signals from non-neighbors are not considered. As it turns out, however, the aggregate interference from non-neighbors is the Achilles' heel of these algorithms and greatly deteriorates their performance. We suggest geometric scheduling and frequency reuse as a solution to this problem. That is, we control the interference by dividing the area containing the nodes into cells and then scheduling the nodes in certain cells simultaneously. We show that, by doing so, we can reduce the interference to a level that guarantees successful discovery.

1.1 Organization

This dissertation is divided into three parts. The first part discusses the compressed sensing (CS) problem and describes the construction of order-optimal zero/one measurement matrices with provable guarantees. This part serves as a backbone for the next part, as we model the neighbor discovery problem as a compressed sensing one. As mentioned, the second part reviews neighbor discovery and introduces the methods we offer. Further, it extends these methods to the directional case.
Finally, in the third part, we analyze the interference in neighbor discovery. We suggest a cell-based approach to control the interference. Here, we discuss these parts in further detail.

1.1.1 Compressed Sensing

We discuss the compressed sensing problem in Chapter 2. Compressed sensing is the problem of recovering a vector x from the linear system of measurements Ax = y when the number of measurements is smaller than the signal dimension. Specifically, compressed sensing analyzes the possibility of, and the constraints required for, recovering x ∈ R^n from the observation y ∈ R^m given the linear measurement matrix A ∈ R^{m×n} when m ≪ n. Obviously, this is not possible under normal conditions, as the necessary and sufficient condition for a linear system to have a unique solution is that the matrix A be invertible. Thus, other constraints must be imposed for the system to have a unique solution. The constraint customarily considered in compressed sensing is that the vector x be k-sparse, i.e., that it have only k non-zero elements.

To recover the vector x, we need to solve the ℓ0-minimization

  minimize ‖x′‖_0 subject to Ax′ = y.  (1.1)

The ℓ0-minimization, however, is computationally hard to solve, i.e., it is NP-hard. Hence, a relaxed version of the optimization is used, in which the ℓ0 norm is replaced by the ℓ1 norm:

  minimize ‖x′‖_1 subject to Ax′ = y.  (1.2)

This optimization is also known as basis pursuit. Recently, it has been shown [CT05, Don06, DT05] that there exist measurement matrices A with m = Θ(k log(n/k)) for which it is possible to recover k-sparse signals using ℓ1-minimization. This measurement scaling is also optimal over all possible recovery algorithms, as shown in [GI10, DBIPW10a]. Different certificates, such as the restricted isometry property (RIP) [CT05, Don06, DT05], the nullspace property [DH01, SXH08], and high expansion, have been introduced under which the measurement matrix A can recover a k-sparse vector x via ℓ1-minimization.
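The basis pursuit program (1.2) is an ordinary linear program once x′ is split into its positive and negative parts. The following sketch, which assumes a generic Gaussian measurement matrix and SciPy's LP solver purely for illustration (it is not the deterministic construction of this dissertation), recovers a sparse vector via basis pursuit:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y as an LP by writing x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # equality constraint A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    u, v = res.x[:n], res.x[n:]
    return u - v

rng = np.random.default_rng(0)
n, m, k = 40, 20, 3                    # k-sparse signal, m < n measurements
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_hat = basis_pursuit(A, A @ x_true)
print(np.max(np.abs(x_hat - x_true)))  # recovery error: tiny in this regime
```

In this well-conditioned regime (k much smaller than m), the LP returns the sparse vector exactly up to solver precision.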
These results, however, rely on randomized constructions of A. Unfortunately, there are no known ways to deterministically construct, or to efficiently check, the required properties (RIP, nullspace, high expansion). Thus, explicitly constructing an order-optimal measurement matrix A with m = Θ(k log(n/k)) is a standing open problem in compressed sensing.

We construct order-optimal zero/one measurement matrices with the nullspace property. Our construction is based on parity-check matrices of LDPC codes whose corresponding bipartite graphs have large girth. The construction of such matrices is discussed in more detail in Section 2.3.

Our matrices satisfy the nullspace condition, which is known to be necessary and sufficient for recovering k-sparse signals. Our proof technique is coding-theoretic and relies on recent results connecting LP decoding for channel decoding by Feldman et al. [FWK05] to CS with basis pursuit reconstruction [DSV10b]. The performance analysis is based on a density evolution technique developed by Koetter and Vontobel [KV06] and improved by Arora et al. [ADS09], which establishes the best-known finite-length threshold results for LDPC codes under LP decoding. The translation of this method to the real-valued CS recovery problem via [DSV10b] allows us to obtain our result. Note that even though our analysis involves a density evolution argument, our reconstruction function is the ℓ1-minimization in (1.2).

1.1.2 Neighbor Discovery

The neighbor discovery problem is presented in Chapter 3. The ND problem shares strong similarities with the compressed sensing problem: the number of neighbors a node has is much smaller than the total number of devices, i.e., the vector of neighbor indices of each device is a k-sparse vector. Through this connection, we introduce compressed sensing neighbor discovery (CSND) in Section 3.3.
We use the columns of the zero/one measurement matrices constructed in Chapter 2 as the transmission patterns of the devices; that is, a device transmits in the slots corresponding to the one entries of its column. The optimal order m = Θ(K log(N/K)) derived in the CS part directly applies to CSND as the scale of the discovery period, where, for a network of N devices, K represents the average number of neighbors a device has. We further improve this scale to m = Θ(K) by introducing the ZigZag neighbor discovery in Section 3.4. ZigZag uses successive cancellation of identifiers. The ZigZag algorithm is closely related to message passing over the erasure channel; through this connection, we import many results and tools from coding theory into neighbor discovery. Finally, we make a thorough comparison among these methods and against the Aloha-like discovery method, in both ideal and realistic channel environments.

1.1.3 Directional Neighbor Discovery

In many applications, devices are equipped with beam-steering antennas. Extending omni-directional ND methods to the directional case is not straightforward, since for a successful discovery both the transmitter and the receiver must point their antennas at each other. In Chapter 4, we extend the ZigZag ND to support directional neighbor discovery, thereby introducing the first directional ND method whose discovery period scales with the average number of neighbors. We further evaluate the performance of the suggested discovery in both ideal and realistic channel environments.

1.1.4 Interference in Device-to-Device Systems

As we discuss in Chapter 3, the suggested discovery methods perform poorly in the presence of interference. Hence, an approach to control the interference is necessary. To overcome this problem, we suggest using cellular-based systems.
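The successive cancellation behind ZigZag, and its kinship with peeling decoders over the erasure channel, can be illustrated with a toy model. Here identifiers are abstracted as integers, each slot records the sum of the transmitted IDs together with a transmitter count, and each ID's slot pattern is assumed publicly known; these are simplifying assumptions for illustration only, since the actual scheme operates on received signals:

```python
def zigzag_peel(slot_sum, slot_cnt, pattern):
    """Peel singleton slots: a slot with exactly one transmitter reveals
    that ID, and its known pattern lets us cancel it from every other slot
    it appeared in, possibly creating new singletons."""
    recovered = set()
    progress = True
    while progress:
        progress = False
        for s in range(len(slot_cnt)):
            if slot_cnt[s] == 1:         # singleton: read the ID directly
                nid = slot_sum[s]
                recovered.add(nid)
                for t in pattern[nid]:   # cancel its contribution everywhere
                    slot_sum[t] -= nid
                    slot_cnt[t] -= 1
                progress = True
    return recovered

# Hypothetical example: neighbors with IDs 101, 202, 303 and M = 4 slots.
pattern = {101: [0, 1], 202: [1, 2], 303: [2, 3]}
slot_sum = [101, 101 + 202, 202 + 303, 303]
slot_cnt = [1, 2, 2, 1]
print(sorted(zigzag_peel(slot_sum, slot_cnt, pattern)))  # -> [101, 202, 303]
```

Slot 0 is a singleton, so ID 101 is read off and cancelled from slot 1, which then becomes a singleton for 202, and so on; this cascade is exactly the peeling process of message passing over the binary erasure channel.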
Specifically, we analyze the stochastic behavior of the interference and of the signal-to-interference-plus-noise ratio (SINR) and find the reuse distance for different coding rates of the node identifiers. We introduce a novel geometric approximation for the area of a hexagonal cell: specifically, we use an annulus sector with the same area as the hexagonal cell. This approximation allows us to compute the statistics of the interference effortlessly, at little cost in precision.

1.2 Notation

Let us briefly describe the notation used in the sequel. Boldface capital letters such as A denote matrices; we denote the (i, j)-th element of A by A_ij. Boldface lowercase letters stand for vectors. Further, a_i and a^i are the i-th column and the i-th row of the matrix A, respectively. Finally, a_j is the j-th element of the vector a. Given two functions f and g: 1) f(n) = O(g(n)) if there exist a constant c and an integer N such that f(n) ≤ c g(n) for n > N; 2) f(n) = Θ(g(n)) if f(n) = O(g(n)) and g(n) = O(f(n)); 3) f(n) = o(g(n)) if, for every constant c > 0, f(n) ≤ c g(n) for all sufficiently large n; 4) f(n) = ω(g(n)) if, for every constant c > 0, f(n) ≥ c g(n) for all sufficiently large n.

2 Optimal Measurement Matrices for Compressed Sensing

2.1 Introduction

Assume we observe m linear projections (or "measurements") y = Ax of an unknown vector x ∈ R^n. The projections are defined by the rows of a measurement matrix A ∈ R^{m×n}, as shown in Fig. 2.1. Compressed sensing (CS) is concerned with the recovery of x from y when m < n, under the condition that x is a sparse vector, i.e., that it has at most k non-zero entries.

Figure 2.1: Compressed sensing system Ax = y, where x is a k-sparse signal.

Recent breakthroughs [CT05, Don06, DT05] showed that there exist measurement matrices A with m = Θ(k log(n/k)) measurements for which it is possible to recover k-sparse signals using polynomial-time algorithms. This number of measurements is also order-optimal over all possible deterministic recovery algorithms satisfying the ℓ_p/ℓ_q guarantee [GI10, DBIPW10a] (see the definition below).
In this work, we restrict the recovery algorithm to be basis pursuit [CT05, Don06], i.e., the recovery function x̂(y, A), mapping y and A into an estimate of x, is the solution of the convex optimization problem

  minimize ‖x′‖_1 subject to Ax′ = y.  (2.1)

It is well known, and immediate to see, that for real-valued x and A, (2.1) is a linear program (LP). Several different statements on the recovery performance guarantee have been proved in the literature. In this work, we shall be concerned with the ℓ1/ℓ1 approximation for random signals, which we formally define in the following.

We denote the set of n-dimensional real k-sparse signals by Σ_k(R^n) = {x ∈ R^n : ‖x‖_0 = |Supp(x)| ≤ k}. For given positive p, q, C and fixed n, m, k, we say that the matrix A ∈ R^{m×n} has the ℓ_p/ℓ_q k-sparse approximation property with constant C with respect to x ∈ R^n if

  ‖x − x̂(y, A)‖_p ≤ C ‖x − x*‖_q,  (2.2)

where x* = arg min_{x′ ∈ Σ_k(R^n)} ‖x − x′‖_q, i.e., x* is the best k-sparse approximation to x in the ℓ_q-norm sense (in other words, x* coincides with x on the positions of its k largest-magnitude entries, and is equal to zero elsewhere).

For the sake of completeness, and in order to relate our work to the existing CS literature, it is worthwhile to distinguish between the following classes of results. The CS literature [Tao07, GI10] has often considered the case where x is arbitrary and A is a sequence of random matrices (for increasing n) such that

  P(‖x − x̂(y, A)‖_p ≤ C ‖x − x*‖_q) → 1, for all x ∈ Σ_k(R^n).  (2.3)

This type of performance guarantee is sometimes referred to as the "for each" ℓ_p/ℓ_q k-sparse approximation guarantee [GI10]. A stronger version of the arbitrary signal setting requires that

  P( ∩_{x ∈ Σ_k(R^n)} { ‖x − x̂(y, A)‖_p ≤ C ‖x − x*‖_q } ) → 1.

This type of performance guarantee is known as the "for all" guarantee [DMM09, GI10].
Perhaps the most natural setting for signal processing and communications is the case where x is a random vector and the performance guarantee concerns the existence of a deterministic sequence of matrices A* (for increasing n) such that

  P(‖x − x̂(y, A*)‖_p ≤ C ‖x − x*‖_q) → 1,  (2.4)

where now P(·) indicates the probability measure of the random vector x.¹ This is referred to in the following as the "random signal" setting.

As with the random coding approach in information theory, the existence of a sequence of matrices A* can be proved by considering ensembles of random matrices A and showing that the ensemble average probability P(‖x − x̂(y, A)‖_p ≤ C ‖x − x*‖_q) converges to 1 as n → ∞. We then conclude immediately that there must exist a sequence of deterministic matrices performing at least as well as the ensemble average. The random coding proof is typically non-constructive, i.e., existence is proven without explicitly constructing such a sequence of matrices.

As anticipated, we are concerned with the deterministic construction of such a sequence of matrices A* for the random signal setting under the ℓ1/ℓ1 approximation guarantee (i.e., p = q = 1 in (2.4)). Notice that if (2.3) is satisfied for some random matrix ensemble, then the existence of deterministic sequences of matrices satisfying (2.4) follows immediately. Furthermore, the CS literature has identified properties of the sensing matrix such that, if one of them holds, then (2.2) holds for all x ∈ Σ_k(R^n) [BGI+08, DBIPW10b, XH07, DSV10a]. Such properties are, in particular, the Restricted Isometry Property (RIP) [BGI+08], the Expansion Property [XH07], and the Nullspace Property [DSV10a].

¹ In general, we use P(·) to indicate the probability measure of an appropriate probability space that contains all the random variables appearing in its argument.
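Even the most elementary uniqueness certificate, that every 2k columns of A be linearly independent (equivalently, spark(A) > 2k, which is necessary and sufficient for every k-sparse x to be uniquely determined by y = Ax), already hints at the difficulty of verification: the only direct check enumerates all C(n, 2k) column subsets. The sketch below, with small hypothetical dimensions, makes this combinatorial cost concrete:

```python
import itertools
import numpy as np

def unique_recovery_certified(A, k):
    """Certify that every 2k columns of A are linearly independent, the
    classical condition for every k-sparse x to be uniquely determined
    by y = Ax.  The loop visits all C(n, 2k) column subsets, so this
    direct check grows combinatorially in n and k."""
    m, n = A.shape
    if 2 * k > m:
        return False                    # an m x 2k block can never have rank 2k
    return all(np.linalg.matrix_rank(A[:, list(S)]) == 2 * k
               for S in itertools.combinations(range(n), 2 * k))

rng = np.random.default_rng(1)
A_good = rng.standard_normal((6, 10))   # generic Gaussian matrix
A_bad = A_good.copy()
A_bad[:, 0] = A_bad[:, 1]               # a repeated column breaks uniqueness
print(unique_recovery_certified(A_good, 2), unique_recovery_certified(A_bad, 2))
```

Here C(10, 4) = 210 subsets are checked; for realistic n and k the count is astronomical, which is consistent with the hardness of certifying RIP or the nullspace property discussed next.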
Although it is known that there exist random matrix ensembles with order-optimal dimensions (i.e., with $m = \Theta(k\log(n/k))$) for which such properties hold with high probability as $n \to \infty$ [CT05, Don06, DT05, XH07, BGI+08], checking that such properties hold for a specific matrix is NP-hard [BDMS12]. Therefore, even if we can generate a matrix with the desired property with high probability by a simple "coin flip," it is still difficult to certify that the property is actually satisfied by a specific instance. Here, we focus on the explicit construction of binary² measurement matrices for the sparse approximation problem with $\alpha = \beta = 1$, for sparsity regimes $k = \omega(n^{3/4})$, constructing the first known family of deterministic matrices with $m = \Theta(k\log(n/k))$ measurements (order optimal). As observed before, this also implies that these matrices achieve exact recovery for $k$-sparse signals. Our matrices satisfy the nullspace condition, which is known to be necessary and sufficient for recovering $k$-sparse signals. Our proof technique is coding-theoretic and relies on recent results connecting LP decoding, introduced by Feldman et al. [FWK05], to CS with basis pursuit recovery [DSV10b]. The performance analysis presented in this paper is based on a density evolution technique developed by Koetter and Vontobel [KV06] and improved by Arora et al. [ADS09], establishing the best-known finite-length threshold results for LDPC codes under LP decoding. The translation of this method to the real-valued CS recovery problem via [DSV10b] allows us to obtain our result. Note that even though our analysis involves a density evolution argument, our recovery function is basis pursuit, i.e.,

² In this paper, "binary" means with elements in $\{0, 1\}$.

Table 2.1: Summary of the explicit matrix constructions (only the best known results are shown).
The columns describe: citation; sparsity $k$ for which the construction is valid; type of performance guarantee; number of measurements $m$; approximation guarantee, i.e., either the matrix satisfies (2.2) for a certain decoder or it has RIP$(2, k, \delta)$. Here, $\varepsilon > 0$ and $c > 2$ are specific constants, $p$ is a prime number, and $0 < r < p$; for the construction in [DeV07], $n = p^{r+1}$.

Paper | sparsity $k$ | A/E | $m$ | Approx. guarantee
[BDF+11] | $n^{1/2-\varepsilon}, n^{1/2+\varepsilon}$ | A | $k^{2-\varepsilon}$ | RIP$(2, k, n^{-\varepsilon})$
[DeV07] | $p/r + 1$ | A | $(rk)^2$ | RIP$(2, k, (k-1)r/p)$
[BGI+08] | $O(n)$ | A | $k\log^c(n)$ | $\ell_1/\ell_1$
[GLR10] | large $k$ | A | $k(\log n)^{\lceil \log\log\log n \rceil}$ | $\ell_2/\ell_1$
This paper | $\omega(n^{3/4})$ | random signal | $k\log(n/k)$ | $\ell_1/\ell_1$

the $\ell_1$ minimization in (2.1), which is substantially different from the related work on message-passing algorithms for CS [DMM09, ZP09]. Table 2.1 compares our construction with other explicit constructions.

2.2 Connection between Channel Coding and CS

In this section we provide the theoretical foundation of our proposed deterministic sensing matrix construction. In short, we shall use the binary parity-check matrices of certain LDPC codes as sensing matrices over the reals (simply interpreting the symbols 0 and 1 over $\mathbb{F}_2$ as 0 and 1 over $\mathbb{R}$). This connection between channel coding and CS follows from known results already present in the literature. However, we believe that it is of some interest to line up these results in a coherent manner and work out the details, such that even though our result follows as a corollary, there is still some merit in making this connection explicit. Consider a binary linear $(n, n-m)$-code $\mathcal{C} = \{x \in \mathbb{F}_2^n : Hx = 0\}$, where $H \in \mathbb{F}_2^{m \times n}$ is a parity-check matrix for $\mathcal{C}$. In particular, a regular LDPC code has constant Hamming weight on all rows and all columns of $H$. As usual, we denote by $d_r$ and $d_\ell$ the row and column Hamming weights, respectively.
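To make the $(d_\ell, d_r)$-regular structure concrete, the following sketch samples a parity-check matrix from a Gallager-style band ensemble and checks regularity and the handshake identity $m\,d_r = n\,d_\ell$. This is an illustrative assumption of ours, not the construction used later in the chapter: Gallager's actual constructions additionally control the girth, which this random-permutation sketch does not.

```python
import numpy as np

def gallager_regular_ldpc(n, d_l, d_r, seed=0):
    """Sample a (d_l, d_r)-regular parity-check matrix H in {0,1}^{m x n}.

    Band structure: the first band has n/d_r rows, each with d_r
    consecutive ones; the remaining d_l - 1 bands are random column
    permutations of the first band, so every column weight is d_l.
    """
    assert n % d_r == 0
    rng = np.random.default_rng(seed)
    rows = n // d_r                            # rows per band
    base = np.zeros((rows, n), dtype=int)
    for i in range(rows):
        base[i, i * d_r:(i + 1) * d_r] = 1
    bands = [base] + [base[:, rng.permutation(n)] for _ in range(d_l - 1)]
    return np.vstack(bands)

H = gallager_regular_ldpc(n=20, d_l=3, d_r=4)
m, n = H.shape
print(H.sum(axis=1).tolist())   # every row sums to d_r = 4
print(H.sum(axis=0).tolist())   # every column sums to d_l = 3
print(m * 4 == n * 3)           # handshake: m * d_r = n * d_l
```

Short cycles (in particular 4-cycles) may occur in this sketch; the constructions discussed in Section 2.3 are exactly about avoiding them.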
In the associated bipartite code (Tanner) graph, $d_r$ and $d_\ell$ are referred to as the "right" and "left" degrees, or also as the "check node" and "bit node" degrees [RU08]. When a codeword $x \in \mathcal{C}$ is transmitted through a binary-input memoryless channel and the channel output sequence $y$ is received, the maximum likelihood decoder can be expressed as [FWK05]:

CC-MLD: minimize $\lambda^T x'$ subject to $x' \in \mathrm{conv}(\mathcal{C})$,

where $\lambda$ is the vector of log-likelihood ratios, with components $\lambda_i = \log\frac{P(y_i \mid x_i = 0)}{P(y_i \mid x_i = 1)}$ for $i = 1, \dots, n$, and $\mathrm{conv}(\mathcal{C})$ is the convex hull of all codewords of $\mathcal{C}$ in $\mathbb{R}^n$. For the binary symmetric channel (BSC) with bit-error probability $p$, we have
\[ \log\frac{P(0 \mid x_i = 0)}{P(0 \mid x_i = 1)} = -\log\frac{P(1 \mid x_i = 0)}{P(1 \mid x_i = 1)} = \log\frac{1-p}{p}. \]
Since the solution of CC-MLD for a given $\lambda$ is the same as for $\gamma\lambda$ with any $\gamma > 0$, without loss of generality we may normalize the log-likelihood ratios to $\{+1, -1\}$. As the polytope $\mathrm{conv}(\mathcal{C})$ in CC-MLD is hard to describe (in general, it is defined by an exponentially large number of supporting hyperplanes), the well-known channel decoding LP relaxation is [FWK05]:

CC-LPD: minimize $\lambda^T x'$ subject to $x' \in \mathcal{P}(H)$,

where $\mathcal{P}(H)$ is known as the fundamental polytope [FWK05, KV06], defined as follows: let $h_j^T$ denote the $j$-th row of $H$; then
\[ \mathcal{P}(H) = \bigcap_{1 \le j \le m} \mathrm{conv}(\mathcal{C}_j), \qquad (2.5) \]
where $\mathcal{C}_j = \{x \in \mathbb{F}_2^n : h_j^T x = 0\}$ is the $j$-th single-parity-check "super-code" of $\mathcal{C}$. Next, assume that the all-zero codeword is transmitted over the BSC, and consider a perturbed receiver where an "evil genie" changes the log-likelihood ratio at every position $i$ corresponding to a bit flip of the channel from $+1$ to $+C_R$, or from $-1$ to $-C_R$, for some $C_R > 1$. Obviously, a smart receiver that is aware of the evil genie can easily decode the transmitted codeword by simply flipping back the bits corresponding to $-C_R$, which are identified without errors since $C_R$ is larger than 1.
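The LLR bookkeeping for this perturbed channel is simple enough to sketch directly; the snippet below (an illustration under the all-zero-codeword assumption above, with a hypothetical helper name) computes the normalized BSC LLRs and applies the genie's scaling at the flipped positions.

```python
import numpy as np

def pbsc_llrs(y, p, C_R, flips):
    """Normalized log-likelihood ratios for the Perturbed BSC.

    For the plain BSC, lambda_i = log((1-p)/p) when y_i = 0 and
    -log((1-p)/p) when y_i = 1; after normalization these become +1 / -1.
    The genie then scales the LLR magnitude to C_R at every flipped
    position (`flips` is the flip set S, known to the genie but not to
    the decoder).
    """
    lam = np.where(y == 0, 1.0, -1.0)     # normalized BSC LLRs
    lam[list(flips)] *= C_R               # genie perturbation at S
    return lam

# All-zero codeword over a PBSC with flips at S = {1, 4}:
y = np.array([0, 1, 0, 0, 1, 0])
print(pbsc_llrs(y, p=0.1, C_R=2.0, flips={1, 4}).tolist())
# -> [1.0, -2.0, 1.0, 1.0, -2.0, 1.0]
```

With the all-zero codeword, every flipped position carries LLR $-1$, so the genie's output there is $-C_R$; the unflipped positions stay at $+1$.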
However, a decoder using CC-LPD and unaware of the evil genie will have a degraded error probability, since the cost of bit flips is increased in the objective function of CC-LPD. We refer to this artificially perturbed channel as the Perturbed BSC (PBSC). We are now ready to state the main theorem relating channel coding to CS:

Theorem 1. Consider a code $\mathcal{C}$ with parity-check matrix $H \in \mathbb{F}_2^{m \times n}$, and an index set $S \subseteq \{1, \dots, n\}$. Let $\lambda$ denote the output weight vector resulting from transmitting the all-zero codeword of $\mathcal{C}$ over the PBSC with parameter $C_R > 1$, when exactly $|S|$ bit flips occur in positions $i \in S$. If the result of CC-LPD with inputs $\lambda$ and $H$ is the all-zero codeword (in short we say: if $H$ recovers the error pattern $S$ for the PBSC under LP decoding), then
\[ \|x - \hat{x}(y, A_H)\|_1 \le 2\,\frac{C_R + 1}{C_R - 1}\,\|x_{\bar S}\|_1, \qquad (2.6) \]
where $x_{\bar S}$ is the vector of elements of $x$ over $\bar S = \{1, \dots, n\} \setminus S$, and $A_H$ is the measurement matrix obtained by interpreting the elements of $H$ as real numbers, with $y = A_H x$.

The proof can be found in [DV09]. Let us illustrate the chain of implications (corollaries of known results) that lead to Theorem 1. We start with the definition of the Nullspace Property:³

Definition 1. Let $S \subseteq \{1, \dots, n\}$ and $C_R > 1$. We say that $A$ has the Nullspace Property with respect to the support set $S$ and constant $C_R$, and write $A \in \mathrm{NSP}(S, C_R)$, if
\[ C_R\,\|\nu_S\|_1 \le \|\nu_{\bar S}\|_1, \quad \forall\, \nu \in \mathcal{N}(A), \]
where, for a vector $\nu$ and support set $S$, we denote by $\nu_S$ the vector obtained by setting to zero all the components of $\nu$ with indices $i \notin S$.

The relation between the Nullspace Property and $\ell_1/\ell_1$ $k$-sparse approximation is given by the following lemma [DV09]:

Lemma 1. Let $S \subseteq \{1, \dots, n\}$ and $C_R > 1$. If $A \in \mathrm{NSP}(S, C_R)$, then
\[ \|x - \hat{x}(y, A)\|_1 \le 2\,\frac{C_R + 1}{C_R - 1}\,\|x_{\bar S}\|_1, \qquad (2.7) \]
where $x \in \mathbb{R}^n$ is any vector.

³ We state the following results in a form which is convenient for the exposition of this paper, although this may not be the most general form.
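Certifying the Nullspace Property of Definition 1 is hard in general, but when the nullspace of $A$ happens to be one-dimensional the check is exact, since the condition is invariant under scaling of $\nu$. The following sketch (an illustration of ours, not a tool from the text) exploits that special case.

```python
import numpy as np

def nsp_holds_1d(A, S, C_R):
    """Exact NSP(S, C_R) check when the nullspace of A is one-dimensional.

    NSP(S, C_R):  C_R * ||v_S||_1 <= ||v_Sbar||_1  for all v in N(A).
    The condition is scale-invariant and symmetric in v -> -v, so for a
    1-D nullspace it suffices to test a single basis vector.  (For
    general A, checking the property is NP-hard.)
    """
    _, s, Vt = np.linalg.svd(A)
    null_dim = A.shape[1] - np.sum(s > 1e-10)
    assert null_dim == 1, "exact check implemented only for nullity 1"
    v = Vt[-1]                                  # basis vector of N(A)
    Sbar = [i for i in range(A.shape[1]) if i not in S]
    return C_R * np.abs(v[list(S)]).sum() <= np.abs(v[Sbar]).sum()

# Example: N(A) is spanned by v = (1, 1, 1, -3), since A @ v = 0.
A = np.array([[1.0, 0, 0, 1/3],
              [0, 1.0, 0, 1/3],
              [0, 0, 1.0, 1/3]])
print(nsp_holds_1d(A, S={0}, C_R=2.0))   # 2*1 <= 1 + 1 + 3  -> True
print(nsp_holds_1d(A, S={3}, C_R=2.0))   # 2*3 <= 1 + 1 + 1  -> False
```

The second call shows the support-dependence of the property: the same matrix satisfies NSP for one support set and fails it for another.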
On the channel coding side, we recall the definition of the fundamental cone:

Definition 2. The fundamental cone $\mathcal{K}(H)$ is the set of all vectors $w \in \mathbb{R}^n$ that satisfy
\[ w_i \ge 0, \quad \forall\, i \in \{1, \dots, n\}; \qquad w_i - \sum_{i' \in \mathcal{I}_j \setminus \{i\}} w_{i'} \le 0, \quad \forall\, i \in \mathcal{I}_j,\; j \in \{1, \dots, m\}, \]
where $\mathcal{I}_j = \{i \in \{1, \dots, n\} : [H]_{j,i} = 1\}$ (i.e., the set of bit nodes adjacent to the $j$-th check node in the code Tanner graph).

Given a code $\mathcal{C}$ with parity-check matrix $H$, we define the Fundamental Cone Property as follows:

Definition 3. Let $S \subseteq \{1, \dots, n\}$ and $C_R > 1$. We say that the parity-check matrix $H$ has the Fundamental Cone Property with respect to the support set $S$ and constant $C_R$, denoted by $H \in \mathrm{FCP}(S, C_R)$, if
\[ C_R\,\|w_S\|_1 \le \|w_{\bar S}\|_1, \quad \forall\, w \in \mathcal{K}(H). \]

The relation between the Fundamental Cone Property and successful decoding for the PBSC under CC-LPD is given by the following lemma [DSV10a]:

Lemma 2. Consider a code $\mathcal{C}$ with parity-check matrix $H \in \mathbb{F}_2^{m \times n}$, and an index set $S \subseteq \{1, \dots, n\}$. Let $\lambda$ denote the output weight vector resulting from transmitting the all-zero codeword of $\mathcal{C}$ over the PBSC with parameter $C_R > 1$, when exactly $|S|$ bit flips occur in positions $i \in S$. Then $H$ recovers the error pattern $S$ for the PBSC under LP decoding if and only if $H$ has the Fundamental Cone Property with respect to the support set $S$ and constant $C_R$.

Finally, we need a connection between the fundamental cone $\mathcal{K}(H)$ and the nullspace $\mathcal{N}(A_H)$. This is provided by the following lemma [DSV10a]:

Lemma 3. Let $H \in \mathbb{F}_2^{m \times n}$ be a parity-check matrix of a code $\mathcal{C}$ and let $A_H$ be the same binary matrix with entries over $\mathbb{R}$. Then
\[ \nu \in \mathcal{N}(A_H) \;\Rightarrow\; |\nu| \in \mathcal{K}(H), \]
where $|\nu|$ is the vector obtained by applying $|\cdot|$ componentwise to $\nu$. In other words,
\[ H \in \mathrm{FCP}(S, C_R) \;\Rightarrow\; A_H \in \mathrm{NSP}(S, C_R). \qquad (2.8) \]
The derivation of (2.8) is easily shown by contraposition: if $A_H$ does not have the Nullspace Property with respect to the support set $S$ and constant $C_R$, then $H$ does not have the Fundamental Cone Property with respect to the support set $S$ and constant $C_R$.
⁴ For two statements $P$ and $Q$, showing that $P \Rightarrow Q$ is equivalent to showing that $\neg Q \Rightarrow \neg P$.

Suppose that $A_H$ does not have the Nullspace Property with respect to the support set $S$ and constant $C_R$.⁴ Then there exists a vector $\nu \in \mathcal{N}(A_H)$ such that
\[ C_R\,\|\nu_S\|_1 > \|\nu_{\bar S}\|_1. \qquad (2.9) \]
By Lemma 3, $w = |\nu|$ must be a vector in $\mathcal{K}(H)$. However, from (2.9) we have
\[ C_R\,\|w_S\|_1 > \|w_{\bar S}\|_1, \]
implying that $H$ does not have the Fundamental Cone Property with respect to the support set $S$ and constant $C_R$. At this point, Theorem 1 is obtained from the following chain of implications:
1. $H$ recovers the error pattern $S$ for the PBSC under LP decoding $\Rightarrow$ $H$ has the Fundamental Cone Property with respect to $S$ and $C_R$ (by Lemma 2).
2. $H$ has the Fundamental Cone Property with respect to $S$ and $C_R$ $\Rightarrow$ $A_H$ has the Nullspace Property with respect to $S$ and $C_R$ (by Lemma 3).
3. $A_H$ has the Nullspace Property with respect to $S$ and $C_R$ $\Rightarrow$ $A_H$ yields the bound (2.6) (by Lemma 1).
Note that we need another step to show the $\ell_1/\ell_1$ $k$-sparse approximation guarantee for the vector $x$: the set of flips $S$ is independent of the vector $x$, and thus (2.6) does not provide any information on recovering all $k$-sparse vectors as in (2.2). Specifically, to show the $\ell_1/\ell_1$ guarantee, we need to show that (2.6) holds when $S$ equals the support of the $k$ largest elements of $x$, denoted by $S_x$. The support of the flips $S$ comes from a Bernoulli trial with probability $p$, denoted by $\mathcal{B}(n, p)$, where a particular pattern occurs with probability $p^{|S|}(1-p)^{n-|S|}$. On the other hand, $S_x$ is independent of $S$ and may come from a distribution which is not necessarily identical to that of $S$, i.e., $\mathcal{B}(n, p)$. We provide the extension from (2.6) to (2.2) for three different classes of distributions over $x$ and $S_x$.
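Before turning to these distribution classes, note that membership in the fundamental cone of Definition 2 amounts to a set of linear inequalities, which is easy to verify numerically on toy examples. The sketch below is an illustration of ours, not part of the chapter's machinery.

```python
import numpy as np

def in_fundamental_cone(H, w, tol=1e-9):
    """Check membership of w in the fundamental cone K(H) of Definition 2:
    w_i >= 0 for all i, and for every check j and every i in I_j,
    w_i <= sum of w over I_j \\ {i}.
    """
    if np.any(w < -tol):
        return False
    for j in range(H.shape[0]):
        I_j = np.flatnonzero(H[j])
        total = w[I_j].sum()
        # w_i <= total - w_i for every i in I_j
        if np.any(w[I_j] > total - w[I_j] + tol):
            return False
    return True

# Single parity check on bits {0, 1, 2}: K(H) requires each of w_0, w_1,
# w_2 to be at most the sum of the other two (w_3 is unconstrained).
H = np.array([[1, 1, 1, 0]])
print(in_fundamental_cone(H, np.array([1.0, 1.0, 2.0, 5.0])))  # True: 2 <= 1 + 1
print(in_fundamental_cone(H, np.array([1.0, 1.0, 3.0, 0.0])))  # False: 3 > 1 + 1
```

Lemma 3 can be checked the same way: for any $\nu$ in the nullspace of $A_H$, the componentwise absolute value $|\nu|$ passes this membership test.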
For now, assume that the code $\mathcal{C}$ with parity-check matrix $H$ fails to recover the random flip pattern $S$ of the PBSC with flip probability $p$ and constant $C_R$ with probability at most $\gamma(n)$, where $\gamma(n)$ is some vanishing function of $n$, i.e.,
\[ P_{S \sim \mathcal{B}(n,p)}\big(H \notin \mathrm{FCP}(S, C_R)\big) \le \gamma(n), \]
where the subscript $S \sim \mathcal{B}(n, p)$ emphasizes that the flip pattern of the perturbed symmetric channel comes from the Bernoulli trial $\mathcal{B}(n, p)$. Note that by (2.8), this means that
\[ P_{S \sim \mathcal{B}(n,p)}\big(A_H \notin \mathrm{NSP}(S, C_R)\big) \le \gamma(n). \qquad (2.10) \]
As mentioned, we need to identify the classes of distributions on the support of $x$ (or of its $k$ largest elements) for which we can translate the bound (2.10), which holds for the Bernoulli flip pattern, to the support of $x$. Let us present such distributions:

1. $x$ is an exactly sparse signal whose elements $x_i$ are i.i.d. according to $p f_X(x) + (1-p)\delta(x)$, where $f_X(x)$ is some probability distribution. For this class, $S_x$ is clearly distributed according to $\mathcal{B}(n, p)$. That is,
\[ P_{S \sim \mathcal{B}(n,p)}\big(A_H \notin \mathrm{NSP}(S, C_R)\big) = P_{S_x \sim \mathcal{B}(n,p)}\big(A_H \notin \mathrm{NSP}(S_x, C_R)\big), \]
and thus by Lemma 1 we have
\[ P_{S_x \sim \mathcal{B}(n,p)}\big(A_H \notin \mathrm{NSP}(S_x, C_R)\big) \le \gamma(n) \;\Rightarrow\; P_{S_x \sim \mathcal{B}(n,p)}\big(\|x - \hat{x}\|_1 > 0\big) \le \gamma(n). \]
The bound is also valid when the distributions of the elements are independent but not identical, i.e., $P_{X_i}(x_i) = p f_{X_i}(x_i) + (1-p)\delta(x_i)$. We assume that the distributions $f_{X_i}$ are not ill-conditioned and have no mass at zero; on closer inspection, even this is not required, i.e., the argument works if they have a mass at zero. Note that here we did not restrict attention to $k$-sparse signals, as any vector $x$ drawn from this distribution is recovered with probability $1 - \gamma(n)$.

2. Next, we consider the class of $k$-sparse signals where the support of the $k$ largest elements (in magnitude) of $x$ is chosen uniformly at random from all $\binom{n}{k}$ possible supports, i.e., $S_x \sim \mathcal{U}(n, k)$. Our goal is to bound the probability
\[ P_{S_x \sim \mathcal{U}(n,k)}\big(A_H \notin \mathrm{NSP}(S_x, C_R)\big). \qquad (2.11) \]
The key observation for translating our result for the Bernoulli-distributed support into one for the uniform support, i.e., going from (2.10) to a bound on (2.11), is that a pattern of Bernoulli flips, conditioned on the number of its successes, is distributed uniformly. That is, a specific pattern $S$ occurs with probability $p^{|S|}(1-p)^{n-|S|}$; however, if we know that one of the $\binom{n}{k}$ patterns of $k$ flips occurs, then the probability of seeing a specific pattern is $1/\binom{n}{k}$, as all $k$-flip patterns are equiprobable. In mathematical terms, the probability of choosing $S$ is
\[ P_{S \sim \mathcal{B}(n,p)}(S) = p^{|S|}(1-p)^{n-|S|}, \]
while
\[ P_{S \sim \mathcal{B}(n,p)}\big(S \,\big|\, |S| = k\big) = \frac{1}{\binom{n}{k}} = P_{S \sim \mathcal{U}(n,k)}(S). \]
Hence, we can rewrite (2.11) as $P_{S_x \sim \mathcal{B}(n,p)}\big(A_H \notin \mathrm{NSP}(S_x, C_R) \,\big|\, |S_x| = k\big)$. We proceed as follows:
\[ P_{S_x \sim \mathcal{U}(n,k)}\big(A_H \notin \mathrm{NSP}(S_x, C_R)\big) = P_{S_x \sim \mathcal{B}(n,p)}\big(A_H \notin \mathrm{NSP}(S_x, C_R) \,\big|\, |S_x| = k\big) \]
\[ = \frac{P_{S_x \sim \mathcal{B}(n,p)}\big(A_H \notin \mathrm{NSP}(S_x, C_R),\; |S_x| = k\big)}{P_{S_x \sim \mathcal{B}(n,p)}(|S_x| = k)} \le \frac{P_{S_x \sim \mathcal{B}(n,p)}\big(A_H \notin \mathrm{NSP}(S_x, C_R)\big)}{P_{S_x \sim \mathcal{B}(n,p)}(|S_x| = k)} \]
\[ \overset{(a)}{=} \frac{P_{S \sim \mathcal{B}(n,p)}\big(A_H \notin \mathrm{NSP}(S, C_R)\big)}{P_{S \sim \mathcal{B}(n,p)}(|S| = k)} \overset{(b)}{\le} \frac{\gamma(n)}{P_{S \sim \mathcal{B}(n,p)}(|S| = np)} \overset{(c)}{\le} n\,\gamma(n), \]
where (a) is a simple change of variable from $S_x$ to $S$; (b) follows by substituting (2.10) and choosing the flip probability $p = k/n$; and (c) is obtained using $P_{S \sim \mathcal{B}(n,p)}(|S| = np) \ge 1/n$. In other words, the cost of extending the result from the Bernoulli support to $k$-sparse signals is that the bound increases by a factor of $n$; thus, for the bound to vanish we require $\gamma(n) = o(1/n)$. Finally, by Lemma 1 we have
\[ P_{S \sim \mathcal{U}(n,k)}\big(A_H \notin \mathrm{NSP}(S, C_R)\big) \le n\gamma(n) \;\Rightarrow\; P_{S_x \sim \mathcal{U}(n,k)}\big(\|x - \hat{x}\|_1 > 0\big) \le n\gamma(n). \qquad (2.12) \]

3. The ensemble of random signals $x \in \mathcal{P}_k^{\mathbb{R}^n}$, defined as follows:

Definition 4. A random vector $x$ belongs to the class $\mathcal{P}_k^{\mathbb{R}^n}$ if the support of its $k$ largest (in magnitude) elements is uniformly distributed over all $\binom{n}{k}$ possible support sets of size $k$ on $n$ components.

Examples of random vectors $x \in \mathcal{P}_k^{\mathbb{R}^n}$ include: 1) i.i.d.
random vectors (i.e., with joint distribution in product form $p_{X^n}(x) = \prod_{i=1}^n p_X(x_i)$ for some given marginal distribution $p_X(\cdot)$); 2) random vectors obtained as $x = \Pi x_0$, where $\Pi$ is a random permutation matrix chosen uniformly over the $n$-dimensional symmetric group, and $x_0$ is a fixed deterministic vector. In passing, we also make the trivial observation that the previous case is a special case of this class, and that the $\ell_1/\ell_1$ approximation guarantee implies exact recovery when $x \in \Sigma_k^{\mathbb{R}^n}$. Hence, for random vectors $x \in \mathcal{P}_k^{\mathbb{R}^n}$ such that $P(x \in \Sigma_k^{\mathbb{R}^n}) = 1$, the bound (2.12) becomes
\[ P_{S \sim \mathcal{U}(n,k)}\big(A_H \notin \mathrm{NSP}(S, C_R)\big) \le n\gamma(n) \;\Rightarrow\; P_{S_x \sim \mathcal{U}(n,k)}\big(\|x - \hat{x}\|_1 > C\,\|x_{\bar S_x}\|_1\big) \le n\gamma(n). \]
In particular, any fast enough vanishing bound $\gamma(n)$ on $P(H \notin \mathrm{FCP}(S, C_R))$ for a specific parity-check matrix $H$ immediately yields a lower bound on $P\big(\|x - \hat{x}(y, A_H)\|_1 \le 2\frac{C_R+1}{C_R-1}\|x - x^*\|_1\big)$ for $A_H$ as a measurement matrix. We will make use of the following lower bound, which is obtained by a straightforward modification of the result of Arora et al. [ADS09].

Theorem 2. Let $\mathcal{C}$ be a regular $(d_\ell, d_r)$-LDPC code whose corresponding bipartite graph has girth equal to $g$. Further, let $p$ be the bit-flip probability of the PBSC with perturbation constant $C_R > 1$. Let
\[ \gamma = \min_{t \ge 0} \Gamma(t, C_R, p, d_r, d_\ell) < 1, \]
where
\[ \Gamma(t, C_R, p, d_r, d_\ell) = \Big[(1-p)^{d_r-1} e^{-t} + \big(1 - (1-p)^{d_r-1}\big) e^{C_R t}\Big]\,\Big[(d_r - 1)\big((1-p)e^{-t} + p\,e^{C_R t}\big)\Big]^{1/(d_\ell-2)}. \qquad (2.13) \]
Then
\[ P\big(H \in \mathrm{FCP}(p, C_R)\big) \ge 1 - n\,\gamma^{\,d_\ell (d_\ell-1)^{T/2-1}}, \qquad (2.14) \]
where $T \le \lfloor g/2 \rfloor$.

Using Theorems 1 and 2, we conclude that if $H$ is the parity-check matrix of a regular LDPC code for which we can ensure $\gamma < 1$, then the corresponding measurement matrix $A_H$ achieves the $\ell_1/\ell_1$ $k$-sparse approximation guarantee for the random signal setting, given by
\[ P\Big(\|x - \hat{x}(y, A_H)\|_1 \le 2\,\frac{C_R+1}{C_R-1}\,\|x - x^*\|_1\Big) \ge 1 - n^2\,\gamma^{\,d_\ell (d_\ell-1)^{T/2-1}}. \qquad (2.15) \]

2.3 Measurement Matrix Construction

In this section we present our construction.
That is, as mentioned in Section 2.2, we build a parity-check matrix of a $(d_\ell, d_r)$-regular LDPC code of length $n$. From (2.14), we deduce that the lower bound on the probability of success can approach 1 if either $d_\ell$ or the girth (correspondingly $T$) grows. Let us elaborate on what order of girth, with respect to $n$, $d_\ell$, and $d_r$, we consider to be large, and which constructions can achieve such codes. It is well known in the coding literature [Gal62] that a code is considered to have a large girth $g$ if it satisfies the logarithmic relationship
\[ g = \Theta\Big(\frac{\log(n)}{\log\big((d_r-1)(d_\ell-1)\big)}\Big). \qquad (2.16) \]
The root of (2.16) is the fact that, for $T < \lfloor g/2 \rfloor$, a parity-check matrix satisfies
\[ 1 + \sum_{i=1}^{T-1} d_\ell^i\, d_r^i < n. \qquad (2.17) \]
Assuming that either $d_\ell d_r \gg 1$ or $T \to \infty$, we can neglect the smaller terms, and taking the logarithm of both sides of (2.17) yields (2.16). Many constructions, such as Gallager's [Gal62], the progressive edge growth (PEG) method [HEA05], or successive level growth [AA07], satisfy (2.16). On the other hand, it is shown in [Fos04] that cyclic and quasi-cyclic codes do not satisfy (2.16) and have $g \le 12$. Therefore, any $(d_\ell, d_r)$-regular LDPC code satisfying (2.16) with the following degrees and number of rows provides the mentioned guarantees (to be proven later) as a measurement matrix:
\[ m = c\,k\log(n/k), \qquad d_r = c_r\,n/k, \qquad d_\ell = c_\ell\log(n/k), \qquad (2.18) \]
where $c$, $c_r < 1$, and $c_\ell$ are constants which must satisfy the handshake lemma, i.e., $c_\ell = c\,c_r$. Note that, as mentioned, our analysis requires girth $g \ge 10$. This condition, however, cannot be satisfied for $k = o(\sqrt{n})$: in that regime the check degree $d_r = c_r\,n/k$ becomes so large that many cycles of length four, including cycles through the support of the sparse signal, are unavoidable. On the other hand, for $k = \Theta(n)$, as shown in [KSTDH11], the variable and check degrees are constant and we can successfully construct the desired matrix.
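To make the scaling (2.18) concrete, the following sketch evaluates $m$, $d_r$, $d_\ell$, the handshake identity $m\,d_r = n\,d_\ell$, and the logarithmic girth scaling (2.16). The constants $c_\ell = 10$, $c_r = 0.1$ are borrowed from the Fig. 2.2 parameter sets; the instance size is an illustration of ours, not a value from the text.

```python
import math

def ldpc_cs_parameters(n, k, c_l=10.0, c_r=0.1):
    """Parameter choices of (2.18):
        m   = c   * k * log(n/k)
        d_r = c_r * n / k
        d_l = c_l * log(n/k),  with c_l = c * c_r  (handshake lemma).
    """
    c = c_l / c_r
    m = c * k * math.log(n / k)
    d_r = c_r * n / k
    d_l = c_l * math.log(n / k)
    return m, d_r, d_l

def girth_scaling(n, d_l, d_r):
    """Logarithmic girth scaling (2.16) implied by the tree bound (2.17)."""
    return 2 * math.log(n) / math.log((d_r - 1) * (d_l - 1))

n, k = 10**12, 10**9                     # k = n^{3/4}, the boundary regime
m, d_r, d_l = ldpc_cs_parameters(n, k)
print(m < n)                             # measurements fewer than unknowns
print(abs(m * d_r - n * d_l) < 1e-6 * n * d_l)   # handshake m*d_r = n*d_l
print(girth_scaling(n, d_l, d_r))        # attainable girth scale at this size
```

The small value of the girth scale at this boundary sparsity is consistent with the discussion above: girth 10 is the binding constraint of the construction.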
Here, we try to find the exact range of $k$ for which we can construct a matrix with the desired scaling of girth. We consider two constructions, namely Gallager's [Gal62] and PEG [HEA05], and see for which sparsities we can build codes with $g \ge 10$. Starting with Gallager's construction, successfully building a code with girth ten requires (equation C.7 in [Gal62]) a condition of the form $(d_r-1)^4 (d_\ell-1)^4 = O(n\,d_\ell)$, and thus the construction is only valid for $k = n^{3/4+\delta}$ for some small constant $\delta$. PEG performs similarly to Gallager's construction, and for $g \ge 10$ it requires (equation (3) in [HEA05])
\[ d_\ell + \sum_{i=1}^{3} d_\ell\,(d_\ell-1)^i (d_r-1)^i < m, \]
which is easily satisfied for $k = n^{3/4+\delta}$ for any small positive constant $\delta$. For these constructions, let us also analyze the scale of the lower bound on the girth. Gallager's construction has
\[ g \ge 2\Big\lfloor \frac{\log(n)}{\log\big((d_r-1)(d_\ell-1)\big)} \Big\rfloor, \qquad (2.19) \]
while PEG obtains
\[ g \ge 2\Big\lfloor \frac{\log(m\,d_r - n - m + 1)}{\log\big((d_r-1)(d_\ell-1)\big)} \Big\rfloor \qquad (2.20) \]
\[ = 2\Big\lfloor \frac{\log\big((d_\ell-1)n - m + 1\big)}{\log\big((d_r-1)(d_\ell-1)\big)} \Big\rfloor. \qquad (2.21) \]
As shown, we could not construct a measurement matrix with $g \ge 10$ for $k = O(n^{3/4})$. We also considered quasi-cyclic (QC) codes [TW67]. Again, we could not find a code with $g \ge 10$ for our parameters when $k = O(n^{3/4})$; the best we could find were constructions satisfying $g \ge 6$ for $k \approx \sqrt{n}$, which cannot help our analysis. Thus far, for the parameters (2.18), we have provided constructions whose corresponding girth attains (2.16) for sparsity $k = \omega(n^{3/4})$. Next, we show that the probability of error in recovering sparse signals goes to zero for these constructions. Let us present an example to clarify our construction method.

Example 1 (Construction for sparsity $k = n^{3/4+\varepsilon}$). We consider the sparsity $k = n^{3/4+\varepsilon}$ and present our construction for it.
Using the parameters (2.18), we get
\[ m = c\,k\log(n/k) = c'\,n^{3/4+\varepsilon}\log(n), \qquad d_r = c_r\,n/k = c_r\,n^{1/4-\varepsilon}, \qquad d_\ell = c_\ell\log(n/k) = c'_\ell\log(n). \]

2.4 Achievability

In this section we present the main result, Theorem 3, as follows:

Theorem 3. Let $x$ belong to one of the mentioned classes, namely the Bernoulli mixture, uniform $k$-sparse, and $\mathcal{P}_k^{\mathbb{R}^n}$ classes, where $k = \omega(n^{3/4})$. Then we can deterministically construct binary matrices $A^\star$ of size $m \times n$ with $m = \Theta(k\log(n/k))$, such that
\[ P\big(\|x - \hat{x}(y, A^\star)\|_1 \le C\,\|x - x^*\|_1\big) \to 1, \qquad (2.22) \]
with parameter $C > 2$.

Proof of Theorem 3. We construct a large-girth $(d_\ell, d_r)$-regular LDPC code with parameters (2.18). As mentioned, for the lower bound (2.14) to tend to one, we require $\gamma < 1$ and $T$ or $d_\ell$ to be a growing function of $n$. That is, we need to show that there exist constants $c$, $c_r$, $c_\ell$, and $t$ for which the function $\Gamma(t, C_R, p, d_r, d_\ell)$ defined in (2.13) becomes less than one for our construction. This is done in the following lemma.

Lemma 4. There exist $c$, $c_r$, $c_\ell$, and $t$ for which the function $\Gamma(t, C_R, p, d_r, d_\ell)$ of (2.13), with the values (2.18), is less than one.

The proof can be found in the appendix. Now we need to show that the probability of error in (2.14) goes to zero for our construction. Recall that by Lemma 4, $\gamma$ is a constant strictly less than one. The degree of the variable nodes in our construction is $d_\ell = c_\ell\log(n/k)$, and the upper bound on the probability of error is $n\,\gamma^{d_\ell(d_\ell-1)^{T/2-1}}$, as shown in (2.14). As mentioned in Section 2.3, when the sparsity $k$ decreases from $\Theta(n)$ to $n^{3/4+\varepsilon}$, the girth also goes from $\Theta(\log(n))$ down to a constant. The variable node degree $d_\ell$, however, grows as $k$ decreases; this growth lets us force $n\,\gamma^{d_\ell(d_\ell-1)^{T/2-1}}$ to zero. Let us analyze the case $k = n^{3/4+\delta}$ and $g = 10$; for larger $k$ we can likewise force the probability of error to zero, as the exponent is an increasing function of $k$ for our parameters (2.18). Note that for $g = 10$ we have $T = 4$, and for $k = n^{3/4+\delta}$, $d_\ell \approx c_\ell(1/4)\log(n)$.
Hence
\[ P[\mathrm{error}] \le n\,\gamma^{d_\ell(d_\ell-1)} = n\exp\big(d_\ell(d_\ell-1)\log\gamma\big) \overset{(a)}{\le} \exp\big(c'_\ell(\log n)^2\log\gamma + \log n\big), \]
where (a) is obtained by substituting the value of $d_\ell$ and absorbing the constants into a suitable constant $c'_\ell > 0$. Since $\log\gamma < 0$, this bound goes to zero as $n \to \infty$. Note that for the uniform $k$-sparse and $\mathcal{P}_k^{\mathbb{R}^n}$ classes, where (2.12) introduces an extra factor of $n$ into the bound, we get
\[ P[\mathrm{error}] \le \exp\big(c'_\ell(\log n)^2\log\gamma + 2\log n\big), \qquad (2.23) \]
which still goes to zero as $n \to \infty$. The proof is complete by taking the steps of Theorem 1. ∎

2.4.1 Linear Sparsity

Let us add some notes on the linear sparsity case $k = pn$, where $p < 1$ is some small constant. Indeed, the benefit of compressed sensing is in having the number of measurements $m$ much smaller than the number of unknowns $n$. Given the construction of Theorem 3 and the numbers of Fig. 2.2, we see that even for $c = 25$ we require $p < 0.04$ to get $m < n$. In other words, our construction requires $m = n$ measurements to recover linear sparsity $p = 0.04$. This, however, is not the best that LDPC codes as measurement matrices can achieve in the linear sparsity regime: in [KSTDH11] we show that sparsity $p = 0.045$ can be recovered with $m = \frac{1}{2}n$ measurements, by introducing weights in the proof of Arora et al. [ADS09]. We point interested readers to [KSTDH11] for better results on recovering linear sparsities with parity-check matrices as measurement matrices.

2.5 Optimal number of measurements

We present a sanity check in this section: we confirm the optimality of order $k\log(n/k)$ measurements by showing that, for the class of measurement matrices obtained from the LDPC parity-check matrices considered in this paper, the probability of recovery error is bounded away from zero when $m = o(k\log(n/k))$.

Theorem 4.
For any sparsity $k = \omega(n^{3/4})$ and $x$ belonging to the class $\mathcal{P}_k^{\mathbb{R}^n}$, if $m = o(k\log(n/k))$, then there exists some positive constant $\delta$ such that for all design parameters (namely $d_\ell$, $d_r$, and $t$ in (2.13), (2.18)):
\[ P\big(\|x - \hat{x}(y, A)\|_1 > C\,\|x - x^*\|_1\big) > \delta. \qquad (2.24) \]

Before presenting the proof of Theorem 4, we require some preliminary lemmas. We prove the theorem for $C_R = 1$, i.e., for recovery of exactly $k$-sparse signals. In coding-theoretic language, this corresponds to the binary symmetric channel with flip probability $p = k/n$. Note that decoding the PBSC with $C_R > 1$, i.e., giving the $\ell_1/\ell_1$ guarantee, is harder than the case $C_R = 1$; thus, showing that it is impossible to recover exactly $k$-sparse signals with $m = o(k\log(n/k))$ also proves the infeasibility of the $\ell_1/\ell_1$ guarantee for approximately $k$-sparse signals. We start by showing that a constant $d_r^{1/d_\ell}$ guarantees $m = \Omega(np\log(1/p))$.

Lemma 5. If $d_r^{1/d_\ell} = O(1)$, then the number of measurements satisfies $m = \Omega(np\log(1/p))$.

After showing that a constant $d_r^{1/d_\ell}$ provides the optimal measurement guarantee, we show that for growing $d_r^{1/d_\ell}$ (i.e., $\omega(1)$) we cannot enforce $\Gamma(t, 1, p, d_r, d_\ell) < 1$ for any $t \ge 0$. To do so, we derive some necessary conditions for $\Gamma(t, 1, p, d_r, d_\ell) < 1$, beginning with the case $\varepsilon = o(1)$, where $\varepsilon = p\,d_r$.

Lemma 6. If $d_r^{1/d_\ell} = \omega(1)$, then for $\Gamma(t, 1, p, d_r, d_\ell) < 1$, $\varepsilon$ can be neither a constant nor a growing function of $n$, i.e., $\varepsilon = o(1)$.

Next, we present an upper bound on $e^t$ when $\Gamma < 1$.

Lemma 7. Assume $d_r'^{1/d'_\ell} > 1$ and let $\varepsilon = o(1)$. If $p\,e^t \ge 1$, then the function $\Gamma(t, 1, p, d_r, d_\ell)$ is greater than one for all values $t \ge 0$. That is, for $\Gamma(t, 1, p, d_r, d_\ell) < 1$ it is necessary to have $p\,e^t < 1$.

We are now ready to present the main lemma for growing $d_r^{1/d_\ell}$; its proof uses the previous lemmas, namely Lemmas 6 and 7.

Lemma 8. If $m = o(np\log(1/p))$ and $d_r^{1/d_\ell} = \omega(1)$, then $\Gamma > 1$ for all values $t \ge 0$.

Finally, we are ready to present the proof of Theorem 4.

Proof of Theorem 4.
To prove this theorem, we consider two cases, namely $d_r^{1/d_\ell} = O(1)$ and $d_r^{1/d_\ell} = \omega(1)$. For the first case, by Lemma 5 we have $m = \Omega(np\log(1/p))$, and by the achievability theorem (Theorem 3) we can build $(d_\ell, d_r)$-regular LDPC codes and find some $t \ge 0$ for which $\Gamma < 1$. For the second case, $d_r^{1/d_\ell} = \omega(1)$, we prove the statement by contradiction: we assume $m = o(np\log(1/p))$ and use Lemma 8 to show that there exists no $t \ge 0$ for which $\Gamma < 1$. Thus the proof is complete. ∎

2.6 Appendix

Here we present the proofs of the lemmas.

Proof of Lemma 4. Let us introduce the following factors of (2.13) for clarity:
\[ \Gamma = \pi_1\,\big(d'_r\,\pi_2\big)^{1/d'_\ell}, \]
\[ \pi_1 = (1-p)^{d'_r} e^{-t} + \big(1 - (1-p)^{d'_r}\big)e^{C_R t}, \qquad \pi_2 = (1-p)e^{-t} + p\,e^{C_R t}, \]
where $p = k/n$, $d'_r = d_r - 1$, and $d'_\ell = d_\ell - 1$. We begin by analyzing the behavior of $\pi_1$ and $\pi_2$ to see whether we can force $\Gamma$ below one for some values of $t$. Set $a = (1-p)^{d'_r}$, where $a < 1$; thus $\pi_1 = a\,e^{-t} + e^{C_R t}(1-a)$. Let us first bound $t$ to enforce
\[ e^{C_R t}(1-a) < 1 \;\Rightarrow\; t < \frac{1}{C_R}\log\Big(\frac{1}{1-a}\Big). \qquad (2.25) \]
Further,
\[ d_r^{1/d_\ell} = \exp\Big(\frac{1}{d_\ell}\log(d_r)\Big) = \exp\Big(\frac{\log(c_r)}{c_\ell\log(n/k)} + \frac{1}{c_\ell}\Big) \le \exp(1/c_\ell), \]
where the inequality is due to $c_r < 1$. To force $\pi_2 < 1$ we only require $t < (1/C_R)\log(1/p)$, which is looser than (2.25). As a result,
\[ \Gamma(t, p, d_r, d_\ell) \le \big(a\,e^{-t} + e^{C_R t}(1-a)\big)\,e^{1/c_\ell}. \qquad (2.26) \]
For (2.26) to be less than one we require
\[ a\,e^{-t} + e^{C_R t}(1-a) < e^{-1/c_\ell}. \qquad (2.27) \]
Note that the minimum of the left-hand side occurs when $e^{C_R t}(1-a) = (a/C_R)e^{-t}$, i.e., at
\[ t = \frac{1}{C_R+1}\log\Big(\frac{a}{C_R(1-a)}\Big) > 0, \]
which is smaller than the bound (2.25); for it to be positive, we require
\[ \log\Big(\frac{a}{C_R(1-a)}\Big) > 0. \qquad (2.28) \]
That is, we need $a \to 1$; more specifically, we require $a > C_R/(C_R+1)$, where $C_R$ is a constant of our choice. Note that for $k = o(n)$, $a$ can be approximated by $e^{-c_r}$.
Thus, to satisfy (2.28), we choose
\[ c_r < \log\Big(1 + \frac{1}{C_R}\Big). \qquad (2.29) \]
Going back to (2.27), all we need for $\Gamma < 1$ is
\[ e^{-t}\Big(a + \frac{a}{C_R}\Big) < e^{-1/c_\ell}. \]
Simplifying gives
\[ c_\ell > \Big(t - \log\big(a + \tfrac{a}{C_R}\big)\Big)^{-1}. \qquad (2.30) \]
Note that since $c_\ell$ is a constant of our choice and $a\frac{C_R+1}{C_R} > 1$, we can find a $t$ that satisfies both (2.25) and (2.30); thus the proof is complete. ∎

Figure 2.2: $\Gamma(t, p, d_r, d_\ell)$ versus $t \in [0, 1]$, together with the level line $\Gamma = 1$, for $(C_R, c_\ell, c_r) = (2, 10, 0.1)$, $(2.5, 10, 0.05)$, and $(3.5, 15, 0.05)$.

As an example, we draw the upper bound (2.26) as a function of $t$ for $(C_R, c_\ell, c_r) = (2, 10, 0.1)$, $(2.5, 10, 0.05)$, and $(3.5, 15, 0.05)$. As shown in Fig. 2.2, the curve $\Gamma(t)$ goes below the line $\Gamma = 1$.

Proof of Lemma 5. To begin, note that the assumption $d_r^{1/d_\ell} = O(1)$ is equivalent to
\[ \frac{1}{d_\ell}\log(d_r) \le c_d \quad \text{for some constant } c_d > 0. \qquad (2.31) \]
Applying the handshake lemma to substitute for $d_\ell$, we get
\[ \frac{1}{d_\ell}\log(d_r) = \frac{n}{m}\,\frac{\log(d_r)}{d_r}. \qquad (2.32) \]
Thus, a lower bound on $m$ can be derived by substituting (2.32) into (2.31):
\[ m \ge \frac{1}{c_d}\,n\,\frac{\log(d_r)}{d_r}. \]
For $\varepsilon = p\,d_r$, we consider two cases, namely $\varepsilon = O(1)$ and $\varepsilon = \omega(1)$. Starting with $\varepsilon = O(1)$, i.e., $p\,d_r \le c_\varepsilon$ for some constant $c_\varepsilon > 0$, we substitute $d_r \le c_\varepsilon/p$ into the previous inequality to get
\[ m \ge \frac{1}{c_d c_\varepsilon}\,np\log\Big(\frac{c_\varepsilon}{p}\Big) = \frac{1}{c_d c_\varepsilon}\,np\Big(\log(c_\varepsilon) + \log\frac{1}{p}\Big) = c_m\,np\log\frac{1}{p} + o\Big(np\log\frac{1}{p}\Big), \]
where $c_m = 1/(c_d c_\varepsilon) > 0$ is a constant. Thus $m = \Omega(np\log(1/p))$. Now consider the case $\varepsilon = \omega(1)$, i.e., $d_r > c_\varepsilon/p$ for any constant $c_\varepsilon > 0$. Note that $d_r^{1/d_\ell} = O(1)$ gives $d_\ell \ge c_\ell\log(d_r)$ for some constant $c_\ell > 0$. Combining the two inequalities gives $d_\ell > c_\ell\log(c_\varepsilon/p)$.
We use this to substitute for $d_\ell$ in the handshake lemma $m\,d_r = n\,d_\ell$, obtaining
\[ m = \frac{n\,d_\ell}{d_r} > \frac{n\,c_\ell\log(c_\varepsilon/p)}{d_r}. \]
Further, using the trivial assumption $d_r \ge 3$, we get
\[ m > \frac{c_\ell}{3}\,n\log\Big(\frac{c_\varepsilon}{p}\Big) \]
for any positive constant $c_\varepsilon$. That is, $m = \omega(np\log(1/p))$, and the proof is complete. ∎

Proof of Lemma 6. Assume not, i.e., $\varepsilon = \Omega(1)$. We consider two cases, namely $\varepsilon$ constant and $\varepsilon$ a growing function of $n$. To start, let $\varepsilon = c_\varepsilon$, where $c_\varepsilon > 0$ is a constant. Since $d_r^{1/d_\ell} = \omega(1)$, we have $d'_r = d_r - 1 = \omega(1)$; further, constant $\varepsilon$ gives $p = c_\varepsilon/d_r$, so $p = o(1)$. Thus we can use the tight approximation $\Gamma = \pi_1\,\pi_2^{1/d'_\ell}$, where
\[ \pi_1 = e^{-t-c_\varepsilon} + e^{t}\big(1 - e^{-c_\varepsilon}\big), \qquad \pi_2 = d'_r e^{-t} - c_\varepsilon e^{-t} + c_\varepsilon e^{t}. \]
The minimum of $\pi_1$ for $t > 0$ is easily obtained by setting $\partial\pi_1/\partial t = 0$, which gives $\min\pi_1 = 2\sqrt{e^{-c_\varepsilon}(1 - e^{-c_\varepsilon})}$; notice that $\min\pi_1$ is a positive constant. For the function $\pi_2$, it is easy to see that its minimum occurs at $e^{t} = \sqrt{(1-p)/p}$, which gives
\[ \pi_2 \ge 2\,d'_r\sqrt{p(1-p)} = 2\,(d'_r)^{1/2}\sqrt{\varepsilon(1-p)} \overset{(a)}{\ge} \sqrt{2}\,(d'_r)^{1/2}\sqrt{c_\varepsilon}, \]
where inequality (a) follows by substituting $p \le 1/2$. As a result, given the assumption $d_r^{1/d_\ell} = \omega(1)$, we have $\pi_2^{1/d'_\ell} = \omega(1)$, and so is $\Gamma = \pi_1\,\pi_2^{1/d'_\ell}$, which completes the proof for this case. Using the preliminaries from the previous part, the case $\varepsilon = \omega(1)$ is now easy to prove. Since either $p$ is a constant or $p = o(1)$, we have $\pi_1 = o(1)\,e^{-t} + (1 - o(1))\,e^{t}$. Further, by the proof of the previous case, $\pi_2 \ge \sqrt{2}\,(d'_r)^{1/2}\,\varepsilon^{1/2}$. Therefore, we can form the following bound on $\Gamma$:
\[ \Gamma > \big(o(1)e^{-t} + (1 - o(1))e^{t}\big)\,\big(\sqrt{2}\big)^{1/d'_\ell}\,(d'_r)^{1/(2d'_\ell)}\,\varepsilon^{1/(2d'_\ell)} > (1 - o(1))\,e^{t}\,(d'_r)^{1/(2d'_\ell)}\,\varepsilon^{1/(2d'_\ell)} > 1 \]
for $t \ge 0$, where the last inequality is a result of the assumption $d_r^{1/d_\ell} = \omega(1)$ and $\varepsilon^{1/(2d'_\ell)} e^{t} \ge 1$. ∎
From here on, whenever ε = o(1), by the approximation ε ≈ 1 - e^{-ε}, we use the following formulation for Γ:

Γ(t, p, d_r, d_ℓ) = (e^{-t-ε} + ε e^{t}) (d_r')^{1/d_ℓ'} ((1-p) e^{-t} + p e^{t})^{1/d_ℓ'}, (2.33)
Γ_1 = e^{-t-ε} + ε e^{t}, (2.34)
Γ_2 = d_r' Γ_3 = d_r' e^{-t} - ε e^{-t} + ε e^{t}, (2.35)
Γ_3 = (1-p) e^{-t} + p e^{t}. (2.36)

Proof of Lemma 7. Using ε = o(1), we use the formulation (2.33) for Γ. Notice that p e^{t} > 1 can be manipulated to give

t > log(1/p). (2.37)

We know that (d_r')^{1/d_ℓ'} > 1. Further, p e^{t} > 1 asserts Γ_3 > 1. So the only hope we have to make Γ less than one is to have Γ_1 < 1. But to have Γ_1 < 1 we require all its terms to be less than one, in particular ε e^{t} < 1. This, however, means that t < log(1/p) - log(d_r), which contradicts (2.37), and the proof is complete.

Proof of Lemma 8. Before we start the proof, let us point out that by Lemma 7 we know p e^{t} must be less than one, i.e., e^{t} < 1/p. Further, note that by the assumption d_r^{1/d_ℓ} = ω(1), we have d_r = ω(1). Furthermore, by Lemma 6, ε = o(1); that is, p < c_ε/d_r for any constant c_ε > 0. Thus, we can use the formulation (2.33) for Γ. We use the assumption m = o(n p log(1/p)), i.e., m < c_m n p log(1/p) for any constant c_m > 0, to find a lower bound on log d_r as follows:

m < c_m n p log(1/p)
n d_ℓ/d_r < c_m n p log(1/p)   (by the hand-shaking lemma)
d_r > (d_ℓ/c_m) (1/(p log(1/p))).

Note that the minimum d_r is achieved when d_ℓ is a constant. Let the constant c_r > 0 denote d_ℓ/c_m for the constant d_ℓ. Thus

d_r > c_r/(p log(1/p)),
log(d_r) > log(c_r) + log(1/p) - log log(1/p). (2.38)

To prove the Theorem, we consider three cases, namely,

1. e^{t} < √(1/ε),
2. √(1/ε) < e^{t} < √(1/p),
3. √(1/p) < e^{t} < 1/p,

and then show for each case that we cannot have Γ < 1 for any value t ≥ 0.

Starting with the first case, by the assumption e^{t} < √(1/ε), we have Γ_1 > c_1 e^{-t} and Γ_3 > c_3 e^{-t} for some constants c_1, c_3 > 0. As a result,

Γ = Γ_1 (d_r')^{1/d_ℓ'} Γ_3^{1/d_ℓ'} > c_1 e^{-t} (d_r')^{1/d_ℓ'} (c_3 e^{-t})^{1/d_ℓ'} = c e^{-t(d_ℓ'+1)/d_ℓ'} (d_r')^{1/d_ℓ'},

where c = c_1 c_3^{1/d_ℓ'} is a positive constant.
For Γ to be less than one, we require

c e^{-t(d_ℓ'+1)/d_ℓ'} (d_r')^{1/d_ℓ'} < 1
c' d_r' < e^{t(d_ℓ'+1)}
c' d_r' < (1/(p d_r'))^{(d_ℓ'+1)/2},   (inserting e^{t} < √(1/ε))
d_r' < (1/c') (1/p)^{(d_ℓ'+1)/(d_ℓ'+3)}
log(d_r') < ((d_ℓ'+1)/(d_ℓ'+3)) log(1/p) + log(1/c'). (2.39)

Note that since (d_ℓ'+1)/(d_ℓ'+3) < 1, equation (2.39) contradicts (2.38) as p → 0. Thus Γ cannot be less than one, and the proof of the first case is complete.

Moving on to the second case, for √(1/ε) < e^{t} < √(1/p) we have

Γ_1 > c_1 ε e^{t},
Γ_3 > c_3 e^{-t}.

As a result, we have

Γ > c_1 ε e^{t} (d_r')^{1/d_ℓ'} (c_3 e^{-t})^{1/d_ℓ'} = c ε e^{t(d_ℓ'-1)/d_ℓ'} (d_r')^{1/d_ℓ'}.

And again, for Γ to be less than one, we get

c ε e^{t(d_ℓ'-1)/d_ℓ'} (d_r')^{1/d_ℓ'} < 1
c' d_r' e^{t(d_ℓ'-1)} ε^{d_ℓ'} < 1
c' d_r' (1/(p d_r'))^{(d_ℓ'-1)/2} (p d_r')^{d_ℓ'} < 1,   (by e^{t} > √(1/ε))
d_r' < (1/c') (1/p)^{(d_ℓ'+1)/(d_ℓ'+3)}
log(d_r') < ((d_ℓ'+1)/(d_ℓ'+3)) log(1/p) + log(1/c'). (2.40)

As in the first case, (d_ℓ'+1)/(d_ℓ'+3) < 1, and as a result equation (2.40) contradicts (2.38) as p → 0, which completes the proof of the second case.

Finally, for the third case √(1/p) < e^{t} < 1/p, we have

Γ_1 > c_1 ε e^{t},
Γ_3 > c_3 p e^{t}.

Similar to the proof of the previous cases, we get

Γ > c_1 ε e^{t} (d_r')^{1/d_ℓ'} (c_3 p e^{t})^{1/d_ℓ'} = c ε e^{t(d_ℓ'+1)/d_ℓ'} (d_r')^{1/d_ℓ'} p^{1/d_ℓ'}.

And since we require Γ to be less than one, we have

c (d_r')^{(d_ℓ'+1)/d_ℓ'} e^{t(d_ℓ'+1)/d_ℓ'} p^{(d_ℓ'+1)/d_ℓ'} < 1,   (using ε = p d_r)
c' d_r' e^{t} p < 1
c' d_r' (1/p)^{1/2} p < 1,   (applying e^{t} > √(1/p))
d_r' < (1/c') (1/p)^{1/2}
log(d_r') < (1/2) log(1/p) + log(1/c'), (2.41)

where the same logic as before shows that equation (2.41) contradicts (2.38) as p → 0, and the proof of the Lemma is complete.
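As a numeric sanity check of the reconstructed formulation (2.33) and of Lemma 7's conclusion (Γ cannot drop below one once p e^t > 1), one can evaluate Γ directly. The parameter values below are illustrative, not taken from the text.

```python
import math

def gamma(t, p, d_r, d_l):
    """Evaluate eq. (2.33): Gamma = Gamma_1 * (d_r')^(1/d_l') * Gamma_3^(1/d_l'),
    with d_r' = d_r - 1, d_l' = d_l - 1, and eps = p * d_r."""
    eps = p * d_r
    dr, dl = d_r - 1, d_l - 1
    g1 = math.exp(-t - eps) + eps * math.exp(t)          # Gamma_1, eq. (2.34)
    g3 = (1 - p) * math.exp(-t) + p * math.exp(t)        # Gamma_3, eq. (2.36)
    return g1 * dr ** (1.0 / dl) * g3 ** (1.0 / dl)

# Lemma 7: whenever p * e^t > 1, i.e. t > log(1/p), Gamma stays above one.
p, d_r, d_l = 1e-3, 100, 4        # eps = 0.1, i.e. the small-eps regime
for t in (math.log(1 / p) + s for s in (0.01, 0.5, 1.0, 2.0)):
    assert p * math.exp(t) > 1
    assert gamma(t, p, d_r, d_l) > 1
```

The check mirrors the proof: with t > log(1/p), the term ε e^t = p d_r e^t already exceeds d_r > 1, so Γ_1 > 1 while the remaining factors are also greater than one.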
3 Neighbor Discovery in Wireless Networks

3.1 Introduction

Beyond traditional mobile ad-hoc networks (MANETs), device-to-device (D2D) networks are predicted to become an important component of modern wireless communications, with applications to cyber-physical systems, the internet of things, car-to-car communications, peer-to-peer wireless video streaming, and a variety of location-based services. In particular, industry development [WTS+10] and on-going standardization efforts [LAGR13] have made significant progress towards the definition and adoption of D2D networks for large-scale consumer applications.

In D2D networks, devices transmit data from one to another, possibly across multiple hops. Network operations should be as distributed as possible, especially where the D2D network serves as a back-up to the conventional cellular network in the case of a major system failure or emergency. Due to path loss and power constraints, each device can only communicate with a small number of neighboring nodes. Neighbor discovery is the process in which each device detects its neighbors. Neighbor discovery is crucial in the network setup, and it is the first task that must be performed in a network, since it enables all other operations such as scheduling [BE81] and routing [HPS02]. Fast discovery of neighbors is essential, especially in mobile networks, since discovery must be repeated often so that sufficient knowledge of the network topology is maintained. Thus, it is important to reduce the discovery period as much as possible. An interesting approach introduced in [WTS+10] consists of letting all nodes transmit their unique identifiers simultaneously. Then, each node detects the identifiers that it can decode and marks their owners as its neighbors. This can be done using multiuser detection, which is by now well-understood in the context of code-division multiple access (CDMA).
Neighbor discovery algorithms can be categorized in different ways: randomized versus deterministic, centralized versus distributed, and synchronous versus asynchronous [KRS15]. In randomized algorithms, the nodes (which are half-duplex) randomly choose to either transmit or listen on each time slot, so that each node gets a chance to hear and be heard by its neighbors. Examples of such randomized algorithms include the "birthday protocol" [MB01], directional antenna neighbor discovery [VKT05], and slotted random transmission and reception [BEM07]. We refer to such methods as "Aloha-like" schemes. In deterministic algorithms, on the other hand, nodes transmit their identifiers according to a pre-determined transmission pattern. Methods such as compressed sensing (CS) [LG08, ZLG12, STDC13] and ZigZag [STC14] are deterministic algorithms. Further, although most algorithms are analyzed for the synchronous case (i.e., all nodes have roughly the same timing, based either on distributed synchronization or on observation of a beacon signal), they may be extended to support the asynchronous case [BEM07].

In this paper (the content of which is partly published in [STDC13] and [STC14]), we develop two new discovery methods, namely compressed sensing (CS) and ZigZag, and compare them to Aloha-like schemes. We analyze the performance of these algorithms, first theoretically by finding lower bounds on their performance in an ideal environment, and then experimentally by running simulations in a more realistic setting. Note that we assume the nodes know ahead of time when the algorithm is supposed to begin.

For the Aloha-like scheme, where nodes randomly transmit or receive on each slot, we prove in Theorem 6 that it requires a long discovery period for nodes to detect their neighbors reliably.
Specifically, we show that for a node with K neighbors to detect all of them with high probability, the algorithm must run for on the order of K^2 log(K) slots. Note that this is a re-derivation of a result from [VTGK09], where due to a mistake the bound was found to be of order K log(K).

Next, we introduce CS neighbor discovery [STDC13], where we exploit the similarity between neighbor discovery and sparse signal recovery. A key observation is that the number of neighbors a node has is much smaller than the total number of nodes in the network, which allows us to employ compressed sensing. To do so, we again assume nodes transmit according to an on/off pattern. However, instead of transmitting an identifier, nodes need only transmit a PN sequence, since sparse signal recovery only requires receivers to estimate the received power of other nodes.

Finally, we introduce the "ZigZag" algorithm [STC14], which can outperform random access discovery methods both in performance and speed. The main idea is that instead of randomly transmitting or receiving, nodes transmit and receive according to pre-determined on-off patterns. Thus, the nodes may cancel known identifiers from received signals to retrieve the identifiers of other neighbors. The challenge consists of designing the transmission patterns so that the discovery time remains short while all nodes manage to detect all their neighbors with high probability. The ZigZag algorithm can be deterministic or random depending on the construction of the transmission pattern, i.e., whether it is built deterministically or randomly.

3.2 Model and Assumptions

We make the following assumptions for the network throughout this paper:

1. Each node is assigned a unique ID.
2. Each node is capable of half-duplex operation only, i.e., a node can either transmit or listen, but not both at once.
3. Nodes are synchronous at the frame and slot level (coarse synchronization), i.e., they know when the discovery phase starts and transmit in synchronous slots, accounting
for some suitably defined guard time that is significantly smaller than the slot duration.
4. The channel between nodes v_i and v_j is frequency flat, and characterized by a (complex) amplitude gain h_{v_i,v_j}.
5. The transmission power P_t is equal for all nodes. Hence, when a transmitter v_i transmits, the received power at v_j is P_t |h_{v_i,v_j}|^2. The channel power gains of node v_i are collected in the vector h^{(i)}, whose elements are h^{(i)}_j = |h_{v_i v_j}|^2.

We present different definitions for a neighbor in a "wireless network" versus a "system of links".

Definition 5. In a wireless network, the neighborhood of a node is the set of all nodes that can receive power from that node at a level greater than a certain threshold γ. The proper choice of the threshold γ depends on the required error probability, modulation and coding, etc. Specifically,

N(v_i) = {v_j : P_t |h_{v_i v_j}|^2 ≥ γ}. (3.1)

Definition 6. In a set of links with a predetermined set of transmitters T and receivers R, where each transmitter is assigned a unique receiver, we say the neighbors of the link v_t → v_r are all transmitters and receivers whose channel gain to v_r or from v_t, respectively, is greater than |h_{v_t,v_r}|, i.e., they can be heard at v_t or v_r more strongly than v_r or v_t, respectively. Specifically,

N(v_t → v_r) = {v_i : v_i ∈ R \ {v_r}, |h_{v_t v_r}|^2 ≤ |h_{v_t v_i}|^2} ∪ {v_j : v_j ∈ T \ {v_t}, |h_{v_t v_r}|^2 ≤ |h_{v_j v_r}|^2}. (3.2)

Accordingly, we may define the set of neighbors of a transmitter v_t to be the set of receivers N(v_t) = {v_i : v_i ∈ R, |h_{v_t v_i}|^2 ≥ |h_{v_t v_r}|^2}, and similarly, the neighborhood of a receiver v_r is the set of transmitters N(v_r) = {v_j : v_j ∈ T, |h_{v_j v_r}|^2 ≥ |h_{v_t v_r}|^2}. It is easy to check that the two definitions coincide when, for a node in a system of links (transmitter or receiver), we set γ equal to the link gain, i.e., γ = |h_{v_t v_r}|^2.

3.2.1 Notation

Let us briefly discuss the notation used in the sequel.
Bold capital letters such as A denote matrices, and we denote the (i,j)-th element of A by A_{ij}. Bold lowercase letters denote vectors. Further, a_i and a^i are the i-th column and i-th row of the matrix A, respectively. Finally, a_j is the j-th element of the vector a.

3.3 Compressed Sensing Neighbor Discovery

As mentioned, we consider a network of N nodes, and the goal is for each node to identify its neighbors. For that, each node is assigned a unique transmission pattern, which we refer to as its signature, and each node knows all the signatures. Assign each column of an M×N zero-one measurement matrix A as a transmission pattern (signature). That is, node v_i is assigned column a_i as its signature, and transmits during the slots corresponding to "one" entries of a_i, while listening to the channel on the "zero" entries to receive the other nodes' transmissions. Considering that the length of each PN sequence is one time slot, A_{ci} = 1 means that node v_i transmits on the c-th time slot.

The receiver is simply an energy detector, which measures the level of received energy. When 0 is transmitted, the received energy is determined by the noise floor, while when a 1 is transmitted, the receiver measures the sum of signal and noise energy. In practice, each slot consists of some random-noise sequence. Since statistical fluctuations are essentially negligible, given that the energy levels are integrated over many chips per slot, a noiseless model where non-negative energy levels add up (non-coherent combining) becomes relevant in this context. Thus, any energy read by the receiver indicates the presence of a transmission.

Let A^{(i)} denote the matrix A whose rows a^c with A_{ci} = 1 are omitted. This corresponds to the "effective" measurement matrix, due to the half-duplex property. Then, the signal received by node v_i, denoted by y^{(i)}, is

y^{(i)} = P_t A^{(i)} h^{(i)}. (3.3)
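The half-duplex projection behind (3.3) can be sketched numerically. The sketch below uses a toy random zero-one pattern and made-up sizes rather than an actual signature design; it only illustrates how the effective matrix A^{(i)} and the energy vector y^{(i)} are formed.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 12, 8                                    # slots x nodes (toy sizes)
A = (rng.random((M, N)) < 0.3).astype(float)    # zero-one signatures, one per column

i = 0                                           # observing node v_i
h = rng.random(N)                               # channel power gains h^(i)
h[i] = 0.0                                      # no self-channel
P_t = 1.0

listen = A[:, i] == 0                           # half-duplex: v_i hears only on its "zero" slots
A_eff = A[listen, :]                            # effective measurement matrix A^(i)
y = P_t * A_eff @ h                             # received energy per listening slot, eq. (3.3)

assert y.shape[0] == int(listen.sum())          # one measurement per listening slot
assert float(y.min()) >= 0.0                    # non-coherent energies add up
```

Note that the rows where node v_i transmits are simply discarded, which is exactly the projection each node applies before running its recovery algorithm.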
Let g^{(i)} = P_t h^{(i)}; then each node v_i needs to solve the following ℓ1-minimization to identify its neighbors:

CS-NeiDis:
minimize ||g^{(i)}||_1
subject to A^{(i)} g^{(i)} = y^{(i)}, g^{(i)} ≥ 0.

Theorem ?? provides a guarantee on the performance of CS-NeiDis, through which we may have reliable discovery given a large enough M. Further, we know how the discovery period scales in K and N for CS neighbor discovery, as M = ρ K log(N/K).

3.4 ZigZag Neighbor Discovery

As described in the previous section, CS neighbor discovery estimates the received power from each neighbor and the index assigned to it. Some applications, however, require more specific identifiers (IDs), such as the MAC address or location information of each neighbor. Here, we introduce ZigZag to obtain such information. Similar to CS neighbor discovery, the columns of the measurement matrix serve as the transmission patterns (signatures) of the nodes. In ZigZag, however, instead of transmitting a constant one, nodes transmit their IDs on the one entries of their signatures. Thus, a node transmits its ID during the "on" slots, and listens for possible neighbor ID transmissions during the "off" slots.

As mentioned, on each slot c = 1, ..., M, all nodes v_i with A_{ci} = 1 transmit their identifiers. Further, node v_i detects its neighbor v_j when it is the only neighbor transmitting its ID. That is, there is a slot c such that A_{cj} = 1, A_{ci} = 0, and A_{ck} = 0 for any other node v_k ∈ N(v_i), where, as in (3.1), N(v_i) denotes the neighborhood of node v_i. Let us call such slots "free chunks". ZigZag involves successive cancellation of already detected neighbors. If node v_i finds a free chunk of a node v_j ∈ N(v_i) such that it can decode its identifier, then it knows the transmission pattern of node v_j, and is thus able to remove all other transmissions of this node from other slots, such that it will be able to create more free
chunks with which to detect other neighbors, even in the presence of collisions. Let us clarify this with an example. Consider the scenario shown in Fig. 3.1. Nodes v_1, v_2, and v_3 are neighbors of v_0. On slot t_1, nodes v_1 and v_2 transmit their IDs s_1 and s_2, respectively. On t_2, nodes v_2 and v_3 transmit their IDs, and finally on slot t_3, node v_3 transmits its ID. Thus, node v_0 receives the following signals on these slots:

y^{(0)}[t_1] = h_{v_0 v_1} s_1 + h_{v_0 v_2} s_2,
y^{(0)}[t_2] = h_{v_0 v_2} s_2 + h_{v_0 v_3} s_3,
y^{(0)}[t_3] = h_{v_0 v_3} s_3,

where the additive Gaussian noise is neglected for simplicity, since we assume that the identifiers are encoded such that error-free detection is achieved in the absence of collisions.

Figure 3.1: Detecting the signatures of nodes v_1, v_2, and v_3 by v_0 through the ZigZag algorithm.

As shown in Fig. 3.1, v_0 can recover s_3 from slot t_3 (a free chunk) and cancel it from slot t_2 in order to recover s_2. Finally, v_0 recovers s_1 by canceling s_2 from slot t_1. Thus, node v_0 can recover the MAC addresses and the channel coefficients of all three of its neighbors, despite two out of the three slots having collisions.

Clearly, ZigZag relies on the existence of free chunks. Thus, we should design the matrix A so that when nodes transmit according to A, free chunks exist with high probability for all nodes. We introduce two versions of the ZigZag algorithm, namely deterministic and random; the difference corresponds to the construction of the matrix A.

3.4.1 ZigZag meets MP over the BEC

There is a complete formal equivalence between ZigZag decoding and message passing (MP) decoding of LDPC codes over binary erasure channels (BEC) (see also [PLC11, NP12]). In the erasure channel BEC(p), each bit is either received error-free with probability (1-p) or received as an erasure with probability p.
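The free-chunk peeling of the three-slot example above can be sketched as follows. This is a deliberately simplified, noise-free illustration that tracks only which identifiers collide in each slot; in the text the recovered symbols are also scaled by the estimated channel coefficients before cancellation.

```python
def zigzag_decode(slots):
    """slots: list of sets of neighbor IDs colliding in each slot.
    Returns the set of IDs recoverable by free-chunk peeling."""
    slots = [set(s) for s in slots]
    known = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            s -= known                  # cancel already-decoded identifiers
            if len(s) == 1:             # a free chunk: exactly one unknown ID left
                known |= s
                progress = True
    return known

# Slots t1, t2, t3 of Fig. 3.1: {v1, v2}, {v2, v3}, {v3}
assert zigzag_decode([{1, 2}, {2, 3}, {3}]) == {1, 2, 3}
# Without the free chunk {3}, peeling is stuck from the start:
assert zigzag_decode([{1, 2}, {2, 3}]) == set()
```

The second assertion shows why the design of A matters: without at least one initial free chunk, successive cancellation never gets started.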
Let A be the parity-check matrix of a binary code. We map each user in the wireless network to a bit-node and each time slot to a check-node of the code's Tanner graph [RU08]. The fraction of neighbors in the network, p, corresponds to the BEC erasure probability. We say that a check-node is: (i) satisfied if it is not connected to any erased bit; (ii) a free chunk if it is connected to exactly one erasure; (iii) unsatisfied if it is connected to more than one erasure. In MP decoding, an erased bit can be recovered if it is connected to a check where all other bits are not erased, i.e., a free chunk. After an erasure is decoded, the corresponding symbol is perfectly known (no longer an erasure), and its value is propagated through the Tanner graph, so that the process of bit recovery can proceed. During this process, an unsatisfied check becomes a free chunk, and eventually satisfied, as its neighboring erasures get decoded one by one through other free chunks.

The only minor difference between ZigZag and MP over the BEC is that in ZigZag neighbor discovery, every node v_i runs its own independent ZigZag algorithm defined on the projection of the overall transmission pattern matrix A in which all rows (i.e., slots) c for which A_{ci} = 1 are eliminated. This is because of the half-duplex constraint, under which a node cannot receive while it is transmitting. That is, instead of running MP over the original code, each node builds its own reduced version by deleting some check-nodes (the slots during which it transmits) and runs MP over the reduced code graph.

When A is generated from a classical LDPC random ensemble [RU08], i.e., the random ZigZag case, erasing a finite number of checks has no effect on the ensemble itself in the limit of large N. Hence, the evolution of the (average) fraction of undiscovered neighbors is described by the same density evolution equation for every node v_i in the network. For
example, in the case of (d_ℓ, d_r)-regular LDPC codes, we have [RU08]

Z_i = p (1 - (1 - Z_{i-1})^{d_r - 1})^{d_ℓ - 1}, (3.4)

where Z_0 = p and Z_i denotes the residual average fraction of erasures at iteration i of the ZigZag process. In the limit N → ∞, a randomly selected node can identify its neighbors with high probability if lim_{i→∞} Z_i = 0. In turn, this happens if p < p_th, where p_th is the BP decoding threshold of the LDPC ensemble (which depends on the left and right degree distributions). For finite N, the well-known finite-length analysis of random LDPC ensembles over the BEC applies, such as the finite-length scaling law of [AMRU09].

As for the deterministic ZigZag algorithm, we use the parity-check matrices of regular LDPC codes with large girth satisfying (4.3). We build a (d_ℓ, d_r)-regular LDPC code satisfying (4.3) with

M = ρK, d_ℓ = ρ_ℓ, d_r = ρ_r N/K, (3.5)

where ρ, ρ_ℓ, and ρ_r are constants (with ρ_r < 1 and ρ_ℓ ≥ 4) which must satisfy ρ_ℓ = ρ ρ_r by the hand-shaking lemma. We prove the following performance guarantee for deterministic ZigZag neighbor discovery.

Theorem 5. Consider a network represented by a random geometric graph with N nodes, where each node has on average K = p(N-1) neighbors. Further, assume the nodes broadcast their unique identifiers according to the on-off transmission patterns specified by the columns of the parity-check matrix of a large-girth regular LDPC code with parameters (3.5). Then, under the collision model, all nodes are able to discover their neighbors using the ZigZag algorithm with high probability as N → ∞.

The proof can be found in Appendix 3.11.1.

Figure 3.2: Node identifiers and the network overhead.

Note that deterministic ZigZag requires M = ρK slots, which seems intuitive in view of the capacity of the BEC(p) for p ∈ [0, 0.5). Further, this discovery period is much smaller than the time Aloha-like schemes take, which scales as K^2 log(K).
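The density-evolution recursion (3.4) is easy to iterate numerically to locate the BP threshold. A minimal sketch for the (3,6)-regular ensemble, whose BEC threshold is known to be p_th ≈ 0.4294:

```python
def density_evolution(p, d_l, d_r, iters=2000):
    """Iterate eq. (3.4): Z_i = p * (1 - (1 - Z_{i-1})**(d_r - 1))**(d_l - 1),
    starting from Z_0 = p; returns the residual erasure fraction."""
    z = p
    for _ in range(iters):
        z = p * (1 - (1 - z) ** (d_r - 1)) ** (d_l - 1)
    return z

# Below the (3,6) threshold p_th ≈ 0.4294 the residual erasure fraction
# vanishes; above it, decoding stalls at a nonzero fixed point.
assert density_evolution(0.40, 3, 6) < 1e-6
assert density_evolution(0.45, 3, 6) > 0.1
```

Scanning p over a grid with this function reproduces the sharp decoding threshold that also shows up in the ZigZag simulations later in the chapter.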
Further, the length of the discovery period for CS neighbor discovery scales as K log(N/K), which, for K linear in N, has the same scale as ZigZag.

3.4.2 ID format

Consider the following packet format for the node IDs; see Fig. 4.1. The 6-byte MAC address serves as the node identifier. We add a cyclic redundancy check (CRC) of 4 bytes to it, so that the nodes can detect errors in recovery. With a coding rate of 1/2 and QPSK modulation, the coded ID becomes 20 bytes, i.e., 2 OFDM symbols. Further, ZigZag relies on successive cancellation of identifiers and thus requires channel information. Hence, we add 1 OFDM symbol of pilots for channel estimation to the identifiers.

3.5 Aloha-like Scheme

As mentioned, the Aloha-like scheme is a randomized scheme where, at each iteration, a node transmits with some probability p_a and listens otherwise. Each node is assigned a unique ID, and a node recognizes another node as its neighbor if it manages to decode its ID from the received signal.

As before, the length of each iteration of the algorithm, i.e., one time slot, is equal to the time required for transmitting an ID. For the Aloha-like scheme, the transmission pattern A can be considered a random M×N zero-one matrix where each element is one with probability p_a. The ID format is the same as in Fig. 4.1. Under the collision model, where the ID of a neighbor can be detected on slot c if and only if it is the only neighbor transmitting on that slot, the Aloha-like scheme requires M = K^2 e (log(K) + O(1)) slots for each node to detect all its K neighbors with high probability.

Theorem 6. In a clique of size K, for the Aloha-like scheme to succeed with high probability, the optimal strategy is for each node to transmit its identifier with probability 1/K on each slot, and the Aloha-like scheme requires M = Θ(K^2 log(K)) transmission slots as K → ∞.

The proof can be found in Appendix 3.11.2.
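A minimal simulation of the Aloha-like scheme under the collision model, from the perspective of a single listening node with K neighbors, can be sketched as below. The function name and the slot cap are illustrative choices, not from the text.

```python
import random

def aloha_discovery_time(K, p_a=None, seed=0, max_slots=10**6):
    """Slots until a node hears each of its K neighbors alone at least once
    (collision model: a neighbor is discovered only when it is the sole
    transmitter in a slot and the observing node itself is listening)."""
    rng = random.Random(seed)
    p_a = p_a if p_a is not None else 1.0 / K   # the text's optimal choice
    undiscovered = set(range(K))
    for slot in range(1, max_slots + 1):
        if rng.random() < p_a:                  # the node transmits: cannot listen
            continue
        tx = [j for j in range(K) if rng.random() < p_a]
        if len(tx) == 1:                        # collision-free: one neighbor heard
            undiscovered.discard(tx[0])
        if not undiscovered:
            return slot
    return None

t = aloha_discovery_time(K=10)
assert t is not None and t >= 10                # at best one neighbor per slot
```

Averaging this over many seeds and values of K gives an empirical handle on how the single-node discovery time grows, which can then be contrasted with the clique-wide requirement of Theorem 6.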
The proof is quite similar to the one presented in [VTGK09]. The authors, however, made a slight error in deriving the probability in [VTGK09] and obtained the scale M = Θ(Ke log(K)), which is why we derive the bound again. The theorem is proved through the connection between neighbor discovery and the coupon collector's problem.

3.6 Measure of Success and Error

Here we introduce our measures for analyzing and comparing the performance of different discovery methods. For a node that wishes to detect its neighbors, two kinds of errors are possible: misses and false alarms. Both the Aloha-like and ZigZag schemes are only prone to misses, since a neighbor is detected only if its identifier is recovered. For a long-enough identifier (e.g., the MAC address), the probability of decoding a non-existent identifier and declaring it a neighbor is negligible, also because decoding errors can be detected with very high probability and small overhead by using standard CRC error-detecting codes, as is ubiquitously done in today's practical MAC wireless protocols. CS neighbor discovery, on the other hand, suffers from both misses and false alarms.

Let S denote the set of neighbors of a certain node, according to Definition 5 or 6. Further, let Ŝ denote the set of neighbors estimated by the discovery algorithm. Then |S \ Ŝ| is the number of misses, while |Ŝ \ S| is the number of false alarms. We are interested in the probability of successful recovery of the set conditioned on the number of neighbors |S|. A natural measure of the performance of an algorithm is the probability of success, i.e.,

P(S = Ŝ | |S| = K). (3.6)

Further, we define the conditional probabilities

P(|S \ Ŝ| = 0 | |S| = K), (3.7)
P(|Ŝ \ S| = 0 | |S| = K), (3.8)

for successful recovery without misses and without false alarms, respectively. Further, we define the fractional errors

F_M(S, Ŝ) = |S \ Ŝ| / |S|, (3.9)
F_FA(S, Ŝ) = |Ŝ \ S| / |Ŝ|, (3.10)

for misses and false alarms, respectively.
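The fractional error measures (3.9) and (3.10) translate directly into code; a small sketch (the function name is hypothetical):

```python
def fractional_errors(S, S_hat):
    """Fractional miss rate (3.9) and false-alarm rate (3.10) for the true
    neighbor set S and the estimated set S_hat."""
    S, S_hat = set(S), set(S_hat)
    f_miss = len(S - S_hat) / len(S) if S else 0.0       # |S \ S_hat| / |S|
    f_fa = len(S_hat - S) / len(S_hat) if S_hat else 0.0  # |S_hat \ S| / |S_hat|
    return f_miss, f_fa

# One of four true neighbors missed, one of four reported neighbors spurious:
assert fractional_errors({1, 2, 3, 4}, {1, 2, 3, 5}) == (0.25, 0.25)
```

Note the guards for empty sets, which keep the measures well-defined when a node reports no neighbors at all.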
These probabilities help us estimate the performance of the discovery methods in different random network settings through the law of total probability. That is, for a certain node in a specific random network setting with N nodes, let P(|S| = k) be the probability that the node has k neighbors; then

P(f(S, Ŝ)) = Σ_{k=0}^{N-1} P(f(S, Ŝ) | |S| = k) P(|S| = k), (3.11)

where f(S, Ŝ) can be substituted by any of the events |S \ Ŝ| = 0, |Ŝ \ S| = 0, or S = Ŝ. Note that for the K-largest detector, i.e., the detector that chooses the K largest received signals (in power) as neighbors, we have

P(S = Ŝ) = P(|S \ Ŝ| = 0) = P(|Ŝ \ S| = 0).

To quantify the reliability of a specific transmission pattern A, we present the following definition.

Definition 7. The critical detectable number of neighbors for a specific transmission pattern A and a discovery method (CS, ZigZag, ...) is denoted by K* and is defined to be the largest K* for which

P(S = Ŝ | |S| = K) ≥ 1 - ε (3.12)

for all K ≤ K* and a given 0 < ε < 1. We then say that A, under the given discovery method and K, is (1-ε)-reliable.

In case we do not have access to the exact number of neighbors, we can state the definitions for the average number of neighbors K = (N-1)p for network density p. Note that the same definition can be used with the other probability measures introduced here, to define reliability in terms of the number of misses, false alarms, or fractional errors. The measure is also generalizable. For example, we can define reliability based on the fractional errors as follows.

Definition 8. The critical detectable average number of neighbors for a specific transmission pattern A and a discovery method is denoted by K* and is defined to be the largest K* for which

F_M(S, Ŝ) + F_FA(S, Ŝ) ≤ ε (3.13)

for all K ≤ K* and a given 0 < ε < 1. We then say that A, under the given discovery method and K, is (1-ε)-reliable.

Note that the summation in (3.13) lets us compare methods which suffer from both
misses and false alarms (such as CS discovery) with methods like ZigZag, which only allow misses. In case an application weighs misses and false alarms differently, we may use a weighted sum in (3.13).

3.7 The Collision Free Model

Here, we present simulations for the "collision-free" model, where the signals transmitted by non-neighbors of a node have no effect on the received signal. For the ZigZag and Aloha-like schemes, we assume a node can detect the identifier of its neighbor only if it is the only neighbor transmitting, i.e., the transmission is received collision-free. Nodes whose identifiers are recovered are declared neighbors. We assume error-free channel estimation whenever an ID is recovered, through the pilots included in the transmitted identifiers.

Further, for CS neighbor discovery, we assume the received power is determined by the geometric path loss κ d^{-α} for neighbors, where κ is a normalization factor. Specifically, the received power at node v_i becomes

y^{(i)} = A^{(i)} g^{(i)}, (3.14)

where the vector g^{(i)} = P_t H^{(i)} n^{(i)} is exactly K-sparse, H^{(i)} = diag(h^{(i)}) with |h^{(i)}_j|^2 = κ d^{-α}_{v_i v_j}, and n^{(i)} ∈ {0,1}^N is the K-sparse vector of neighbor indices of node v_i. By running CS-NeiDis, each node v_0 of the link v_0 → v_0' estimates the magnitudes of the channel coefficients |ĥ_{v_0,v_i}|^2 for all v_i. Then, to find the strong interferers as in Definition 6, we identify the set

Ŝ_{v_0} = {v_i : |ĥ_{v_0,v_i}|^2 ≥ |ĥ_{v_0,v_0'}|^2} (3.15)

as neighbors. We refer to this detector as the set detector. Note that, to be precise, we should only consider nodes that are receivers (transmitters) if v_0 is a transmitter (receiver), by Definition 6. As mentioned, the same setting can be extended to a network of nodes by setting γ = |ĥ_{v_0,v_r}|^2.

Another detector to consider is one that detects the K strongest neighbors without comparing them with the received power from v_r.
That is, each node sorts the estimated powers by magnitude and chooses the K strongest ones, i.e., if |ĥ_{v_{i_1},v_0}| ≥ |ĥ_{v_{i_2},v_0}| ≥ ... ≥ |ĥ_{v_{i_n},v_0}|, then

Ŝ^K_{v_0} = {v_i : |ĥ_{v_i,v_0}|^2 ≥ |ĥ_{v_{i_K},v_0}|^2}. (3.16)

We call (3.16) the K-largest detector. Notice that for the K-largest detector, the miss and false-alarm rates are equal.

Figure 3.3: The collision-model experiment scenario: we uniformly distribute K-1 points inside a unit disk with v_0 at its center and v_0' on its boundary.

Consider a network with N = 1000 nodes, i.e., A has 1000 columns. As shown in Fig. 3.3, we consider a subset of the network that is contained within a disk of radius one, with the node v_0 at its center. Further, we put v_r (the receiver) on the boundary of the disk. Finally, we uniformly distribute K-1 nodes over the disk, so that |S| = K. Recall that for CS discovery in the collision model we consider the scale-free geometric path loss, i.e., |h_{v_i v_j}|^2 = κ d^{-α}_{v_i v_j}. By setting κ = 1, Definitions 5 and 6 coincide with γ = |h_{v_0,v_0'}|^2 = 1, and the results are valid for both settings.

We first analyze the performance of each algorithm separately, to see how the number of reliably detected neighbors K scales with respect to the discovery period M. For CS and ZigZag, we describe a measurement matrix by the parameter tuple (M, d_ℓ, d_r). Note that the number of nodes in the network is N = M d_r/d_ℓ by the hand-shaking lemma. Further, for the Aloha-like scheme we consider the discovery period M and the transmission probability p_a, shown by the tuple (M, p_a).

3.7.1 CS neighbor discovery

We start by analyzing the performance of CS neighbor discovery and, as mentioned, use Gallager's construction [Gal62] to build the transmission pattern.
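The two detectors, (3.15) and (3.16), can be sketched on a vector of estimated channel powers as follows; the function names and toy numbers are illustrative.

```python
import numpy as np

def set_detector(g_hat, link_gain):
    """Set detector (3.15): every node whose estimated power reaches the
    intended link's gain is declared a (strong-interferer) neighbor."""
    return set(np.flatnonzero(g_hat >= link_gain))

def k_largest_detector(g_hat, K):
    """K-largest detector (3.16): declare the K strongest estimates."""
    return set(np.argsort(g_hat)[-K:])

g_hat = np.array([0.9, 0.05, 0.4, 1.3, 0.2])   # estimated |h|^2 per candidate node
assert set_detector(g_hat, link_gain=0.4) == {0, 2, 3}
assert k_largest_detector(g_hat, K=3) == {0, 2, 3}
```

With a perfect power estimate and the true K, the two detectors agree, as here; their behavior diverges once the estimates are noisy or K is misjudged, which is exactly what the fractional-miss curves in the following subsections measure.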
We are interested in the probability of successful recovery conditioned on the number of neighbors (3.6), and in F_M(S, Ŝ) defined in (3.9) when |S| = K, which are shown in Fig. 3.4-top and Fig. 3.4-bottom, respectively. The dashed lines show the performance of the K-largest detector, while the solid lines represent the set detector's. Note that the dashed and solid lines in Fig. 3.4 overlap almost completely, indicating that both detectors have similar neighborhood detection performance. Their performances, however, diverge for the fractional miss rate F_M(S, Ŝ), where the set detector surpasses the K-largest detector; see Fig. 3.4-bottom. Note that for the K-largest detector, F_M(S, Ŝ) = F_FA(S, Ŝ). The reason we did not include a figure for the fractional false-alarm rate is that, under the collision model, the set detector does not produce any false alarms.

Recall that for the construction of the transmission pattern we set the discovery period M = ρK log(N/K) for a network of size N and average neighborhood size K. It is interesting to see how ρ scales using the simulation data. Using Definition 7 and setting ε = 0.05, we get the values K* = 8, 63, 89, 171, 239, 255 for the transmission patterns listed in Fig. 3.4, from which we deduce ρ to be roughly equal to 1.4. Note that CS neighbor discovery has good error resilience: even for K > K* the method misses only a small fraction of neighbors, as shown in Fig. 3.4-bottom.

3.7.2 ZigZag

For the ZigZag algorithm, we again use the Gallager construction to build the transmission patterns, obtaining the performance shown in Fig. 3.5. Here, we observe K* = 62, 192, 408 for the transmission patterns shown in Fig. 3.5-top. Note that, similar to decoding over the BEC, the performance of ZigZag drops sharply for K > K*. That is, the probability of successful discovery drops rapidly and goes to zero.
The same happens to the fractional number of misses, which approaches one for $K$ slightly greater than $K^*$, as shown in Fig. 3.5-bottom.

3.7.3 Aloha-like scheme

Finally, we consider the performance of the Aloha-like scheme. As shown in the proof, for an average number of neighbors $K$ the discovery period is shortest when the probability of transmitting the ID on each slot is set to $p_a = 1/K$. Fig. 3.6 shows the probability of successful recovery and the fractional miss rate for the Aloha-like scheme.

Figure 3.4: The probability of success and fractional miss error rate in recovering the set $S$ conditioned on the number of neighbors $|S|$ for CS neighbor discovery, for different transmission patterns $(M, d_\ell, d_r)$. The dashed lines correspond to the $K$-largest detector (3.16), while the solid lines depict the performance of the set estimator (3.15).

As shown, the performance of the Aloha-like scheme is inferior to the other schemes and requires a much longer discovery period.

Fig. 3.7 compares the performance of the two methods against each other. As shown, the ratio $\rho$ that can be reliably discovered is much smaller for ZigZag and is close to one for a reasonable number of measurements, while the ratio for CS varies depending on the number of measurements. Recall that for CS neighbor discovery $\rho = \beta \log(N/K)$, which is why the ratio curves for CS in Fig. 3.7 are spread out instead of being concentrated like the ZigZag curves.

Figure 3.5: The probability of success and the fractional miss error rate $F_M(S, \hat S)$ in recovering the set $S$ conditioned on the number of neighbors $|S|$ for ZigZag.

At first glance, the reader might assume from Fig. 3.7 that ZigZag requires a shorter discovery period compared to CS neighbor discovery. This is not true, however, since $M$ in the definition of $\rho$ is measured in number of slots. For CS neighbor discovery, as discussed before, each slot consists of transmitting a PN sequence, which is one OFDM symbol for a completely synchronous system [WTS+10]. For ZigZag, on the other hand, each slot should be long enough that the ID shown in Fig. 4.1 can be transmitted. As discussed in Section 4.4.1, our suggested ID format, with 6 bytes of MAC address, 4 bytes of CRC, and 1 OFDM symbol of pilots for channel estimation, is in total three OFDM symbols long. That is, for CS neighbor discovery $\rho$ (in slots per neighbor) $= \rho$ (in OFDM symbols per neighbor), while for ZigZag $\rho$ (in slots per neighbor) $= 3\rho$ (in OFDM symbols per neighbor). Thus, for a fair comparison, the curves for ZigZag in Fig. 3.7 should be shifted to the right by a factor of three.

Figure 3.6: The probability of success and the fractional miss error rate $F_M(S, \hat S)$ in recovering the set $S$ conditioned on the number of neighbors $|S|$ for the Aloha-like scheme.
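The claim in Section 3.7.3 that the Aloha-like discovery period is minimized at $p_a = 1/K$ is easy to check by simulation. The following Monte-Carlo sketch (our own code; the parameters are arbitrary and the half-duplex constraint is ignored for simplicity) counts the slots until every node of a $K$-clique has been heard in a collision-free slot:

```python
import random

def slots_to_discover(K, p_a, rng):
    """Count slots until every node in a K-clique has transmitted
    alone (collision-free) at least once, i.e., has been discovered."""
    heard = set()
    slots = 0
    while len(heard) < K:
        slots += 1
        talkers = [i for i in range(K) if rng.random() < p_a]
        if len(talkers) == 1:  # collision-free slot: that node's ID is heard
            heard.add(talkers[0])
    return slots

rng = random.Random(42)
K = 20
avgs = {}
for p_a in (0.5 / K, 1.0 / K, 2.0 / K):
    avgs[p_a] = sum(slots_to_discover(K, p_a, rng) for _ in range(200)) / 200
    print(f"p_a = {p_a:.3f}: avg slots = {avgs[p_a]:.1f}")
# The average is smallest near p_a = 1/K, matching the analysis.
```

Transmitting too rarely wastes slots, while transmitting too often causes collisions; $p_a = 1/K$ balances the two.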
3.8 The Interference Model

The analysis over the collision model serves to verify the results of the theorems, but fails to capture some major attributes of wireless networks, such as path loss and interference.

Figure 3.7: The probability of successful recovery $P(S = \hat S \mid |S| = M/\rho)$ versus the ratio $\rho = M/K$ for the ZigZag and CS neighbor discovery methods.

Table 3.1: The ratio $\rho$ for the regular codes with $(d_\ell, d_r)$ degrees over the BEC and BSC.

  $(d_\ell, d_r)$   (3,6)    (4,8)    (3,5)    (4,6)    (3,4)
  $\rho_{DE}$       1.1644   1.3041   1.1592   1.3173   1.1585
  $\rho_{Sha}$      4.5455   4.5455   4.1096   3.8314   3.4884

Recovering a neighbor's ID only when there is no collision is far from reality, since the identifiers of the nodes are received with different strengths at a receiver, depending on distance and shadowing. Thus, as long as the SINR of a received identifier of a node is large enough, the identifier is decodable. Furthermore, isolating a clique as the worst-case scenario neglects all the interference received from non-neighbors. The accumulated interference, in many cases, turns out to be large and makes the recovery of the identifiers impossible. To take the signal from non-neighbors, i.e., interference, into account, we consider the interference model.

In the interference model, the path gains between the nodes are governed by the WINNER model [IW07]. According to the WINNER model, the path loss in dB is given by

$PL(d) = A \log_{10}(d) + B + C \log_{10}(f_c/5) + X,$   (3.17)

where $d$ (in meters) is the distance between two nodes, $f_c$ (in GHz) is the carrier frequency, and $X$ is the log-normal shadowing with standard deviation $\sigma$ dB. Further, the parameters $A$, $B$, and $C$ are constants which, together with $\sigma$, depend on the scenario.
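A minimal sketch of (3.17) follows (our own code; defaults use the geometric-pathloss parameters of this chapter, $A = 35$, $B = 0$, $X = 0$, together with the 3 m cutoff stated below, and $C = 0$ is our simplifying assumption):

```python
import math

def winner_path_loss_db(d, A=35.0, B=0.0, C=0.0, fc_ghz=2.4, shadow_db=0.0):
    """Path loss of (3.17): PL(d) = A log10(d) + B + C log10(fc/5) + X,
    with PL = 0 for d < 3 m (no power loss below 3 meters)."""
    if d < 3.0:
        return 0.0
    return A * math.log10(d) + B + C * math.log10(fc_ghz / 5.0) + shadow_db

def path_gain(d, **kwargs):
    """Large-scale linear gain |h|^2 = 10^(-PL(d)/10)."""
    return 10.0 ** (-winner_path_loss_db(d, **kwargs) / 10.0)

print(winner_path_loss_db(10.0), path_gain(10.0))  # 35.0 dB, i.e. 10^-3.5
```

With these defaults a node at 10 m is attenuated by 35 dB, while a node at 2 m is received with no loss.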
The large-scale path gain between nodes $v_i$ and $v_j$ at distance $d_{v_i,v_j}$ is given by $|h_{v_i,v_j}|^2 = 10^{-PL(d_{v_i,v_j})/10}$. In the interference model, we consider the geometric pathloss. That is, we set $A = 35$, $B = 0$, $f_c = 2.4$, and $X = 0$. Further, we assume $PL = 0$ for $d < 3$ m; that is, signals from nodes closer than 3 meters are received with no power loss. For CS neighbor discovery, this means that the vector $g$ in CS-NeiDis is not exactly $K$-sparse and has many non-zero elements, most of which are small and correspond to power received from non-neighbors. Note that the guarantee offered by Theorem ?? shows that the recovery error is bounded.

As for the ZigZag and Aloha-like schemes, we consider an information-theoretic recovery model. That is, we assume that the nodes use QPSK modulation with coding rate $1/2$ for transmitting their identifiers. For a node to decode the received signals, i.e., the identifiers of other nodes, we consider successive interference cancellation (SIC).² In SIC, at each time slot, node $v_i$ tries to decode the strongest ID and, if successful, cancels it from the received signal, and continues this until it fails to recover the strongest. More specifically, if the received identifiers are sorted from strongest to weakest, i.e., $t_1 |h_{v_1,v_i}| > t_2 |h_{v_2,v_i}| > \dots > t_{n-1} |h_{v_{n-1},v_i}|$, then node $v_i$ on the $r$-th round can decode the identifier of the $r$-th node if

$\log\left(1 + \dfrac{t_r |h_{v_r,v_i}|^2 P}{\sigma + \sum_{k \in [r+1:n] \setminus \{i,j\}} t_k |h_{v_k,v_i}|^2 P}\right) \ge R,$   (3.18)

where $h_{v_i,v_j}$ is the channel coefficient of the link between nodes $v_i$ and $v_j$, $t_j \in \{0,1\}$ is the transmission variable that is one if node $v_j$ is transmitting on that slot and zero otherwise, $\sigma$ is the noise power, and $[1:n] = \{1, \dots, n\}$. $R$ denotes the rate, which is $1$ for QPSK modulation with coding rate $1/2$. Of course, this is too optimistic, since the bound is proven for the asymptotic case, where the length of the identifiers goes to infinity, and not for the short finite packets we transmit for neighbor discovery.

² We could also consider treating interference as noise, but since ZigZag by nature uses interference cancellation, we found SIC more appropriate.
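The SIC rule (3.18) can be sketched as a greedy loop (our own code and naming; the received powers already absorb $t_k |h|^2 P$, and the rate check uses $\log_2$ as one reading of (3.18)):

```python
import math

def sic_decode(received_powers, noise_power=1.0, rate=1.0):
    """Greedy SIC per (3.18): repeatedly try to decode the strongest
    remaining identifier, treating the weaker ones as interference,
    and cancel it from the received signal on success."""
    remaining = sorted(received_powers, reverse=True)
    decoded = []
    while remaining:
        strongest = remaining[0]
        interference = sum(remaining[1:])
        sinr = strongest / (noise_power + interference)
        if math.log2(1.0 + sinr) >= rate:
            decoded.append(strongest)
            remaining.pop(0)  # cancel the decoded identifier
        else:
            break  # cannot decode the strongest; SIC stops
    return decoded

# One dominant ID is decodable; after cancelling it, the next one
# becomes decodable too, but the weakest stays below the rate threshold.
print(sic_decode([8.0, 2.0, 0.1], noise_power=1.0, rate=1.0))
```

The example illustrates the key property of SIC: cancelling a strong identifier raises the SINR of the remaining ones.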
In the interference model, depending on the strength of the interference a node receives, it might fail to detect the IDs of its neighbors. Thus, it would be interesting to find a way to cancel this unwanted interference and obtain a performance similar to that of the collision model. A slick, yet simple, solution is to use geometric scheduling.

3.9 Experimental Results for the Interference Model

Here we study the performance of the discovery methods in the interference model. We use the scenario shown in Fig. 3.8. We consider a field of radius $R$, and uniformly distribute $N = 1000$ nodes over it. Here we assume that each node transmits with the power $P_t = 20$ dBm, while the receivers can decode received signals with power as low as $-50$ dBm. Further, to have reliable communication, we assume a neighbor is a node which can be heard with power greater than $-40$ dBm, i.e., $\gamma$ in Def. 3.1 corresponds to $-40$ dBm. Given the parameters of the interference model, this results in the fixed neighborhood radius $r = 50$. To avoid the boundary effect, we perform the discovery only for the nodes located close to the center, i.e., the nodes inside the green circle in Fig. 3.8.

Here, we analyze the performance of the methods with respect to the average number of neighbors $\bar K$, which we change by varying $R$. For a particular $N$ and $r$, the average number of nodes in a disk of radius $r$ is $\bar K = N (r/R)^2$, and thus, to get a certain $\bar K$, we set

$R = r \sqrt{N / \bar K}.$

Note that, as $R$ becomes smaller, the network gets denser, which increases the amount of interference the nodes suffer from. Thus, even if the value $\bar K$ changes only slightly, the discovery becomes harder due to the increase in interference.
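The relation $R = r\sqrt{N/\bar K}$ can be checked numerically; the following Monte-Carlo sketch (our own code, arbitrary trial count) estimates the average number of nodes falling within radius $r$ of the field center:

```python
import math
import random

def field_radius(n_nodes, r, k_bar):
    """Field radius R such that a disk of radius r contains
    k_bar = N (r/R)^2 nodes on average: R = r * sqrt(N / k_bar)."""
    return r * math.sqrt(n_nodes / k_bar)

def avg_neighbors(n_nodes, r, R, trials=200, seed=0):
    """Monte-Carlo estimate of the mean number of nodes that fall
    within distance r of the center of a uniformly populated disk."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        for _ in range(n_nodes):
            d = R * math.sqrt(rng.random())  # radius of a uniform point in the disk
            if d <= r:
                total += 1
    return total / trials

N, r, k_bar = 1000, 50.0, 40.0
R = field_radius(N, r, k_bar)   # 50 * sqrt(1000/40) = 250
k_est = avg_neighbors(N, r, R)
print(R, k_est)                 # k_est close to 40
```

The square root in the sampling step is what makes the points uniform over the disk area rather than over the radius.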
Figure 3.8: Simulation scenario for analyzing the interference model with geometric scheduling (with hexagonal blocks in this example), where $N$ nodes are distributed uniformly over a disk of radius $R$ and each node has a neighborhood radius of $r$. Since nodes on the boundary suffer from lower interference, we only consider the nodes close to the center, i.e., the nodes shown in green.

3.9.1 CS neighbor discovery

We start by analyzing the performance of CS neighbor discovery over the interference model. Like before, we use Gallager's construction [Gal62] for the transmission patterns. Similar to the collision model, we analyze the probability of perfect recovery of the neighborhood, i.e., $P(S = \hat S)$, $F_M(S, \hat S)$, and $F_{FA}(S, \hat S)$ conditioned on the average number of neighbors $\bar K$, defined in (3.9) and (3.10), which are shown in Fig. 3.9.

As shown in Fig. 3.9, the probability of perfect recovery of the neighborhood for the interference model is much worse than for the collision model. Recovering all neighbors exactly, without a miss and/or a false alarm, is a hard task for CS discovery. The fractional error rates, however, perform well. Although CS does not miss many neighbors, it tends to falsely flag non-neighbors as neighbors.

Figure 3.9: Probability of perfect detection of the neighborhood $P(S = \hat S)$, fractional miss rate $F_M(S, \hat S)$, and fractional false alarm rate $F_{FA}(S, \hat S)$ for CS discovery when the average $|S| = \bar K$, for different transmission patterns.

3.9.2 ZigZag

Fig. 3.10 depicts the performance metrics $P(S = \hat S)$ and $F_M(S, \hat S)$ for ZigZag in the interference model. As shown, ZigZag performs quite well. We believe this is partly due to the assumption in (4.7), which provides a capacity-achieving approximation for ZigZag. Further, this is also due to our design of the transmission patterns. Note that our transmission patterns are quite sparse, and thus at each time slot only a few nodes transmit, which reduces the interference. For example, for the $(500, 3, 6)$ transmission pattern, only six nodes transmit on each time slot, which keeps the interference at a very low level.

3.9.3 Aloha-like scheme

Finally, Fig. 3.11 shows the probability of successful recovery and the fractional miss rate for the Aloha-like scheme. As shown, the performance of the Aloha-like scheme is much inferior to the other algorithms and requires a much longer discovery period.

We introduced two novel neighbor discovery algorithms for ad-hoc and D2D networks, and analyzed their performance both theoretically and experimentally. We showed a strong connection between our methods and the channel coding problem. For future directions, we will analyze the performance of the mentioned methods in a more realistic setting. Further, as the interference from non-neighbors might worsen the performance of the discovery methods, we may use geometric scheduling, that is, clustering the nodes into sets based on their location and scheduling each cluster separately. This way, we can shut down the interference received from other clusters and achieve a performance close to that of the collision model.

Figure 3.10: Probability of perfect detection of the neighborhood $P(S = \hat S)$, fractional miss rate $F_M(S, \hat S)$, and fractional false alarm rate $F_{FA}(S, \hat S)$ for ZigZag when the average $|S| = \bar K$, for different transmission patterns.

Figure 3.11: Probability of perfect detection of the neighborhood $P(S = \hat S)$, fractional miss rate $F_M(S, \hat S)$, and fractional false alarm rate $F_{FA}(S, \hat S)$ for the Aloha-like scheme when the average $|S| = \bar K$, for different transmission patterns.

Figure 3.12: Geometric scheduling with hexagonal cells.

3.10 Geometric scheduling

As observed, the interference received from non-neighbors greatly impacts the performance of the discovery methods. To overcome this problem, we suggest geometric scheduling. Geometric scheduling consists of dividing the area containing the nodes into cells, and then scheduling the nodes in certain cells, called a cluster, simultaneously or on the same frequency band. Also, let the $i$-th period denote the discovery period for the nodes in the $i$-th cluster. Here we consider the customary hexagonal cells. Fig. 3.12 shows such a system with reuse factor three.
Note that, depending on the radius of the cells $d$, we can control the strength of the interference a node receives. Let us point out that this is much different from randomly grouping the nodes and then running the discovery methods. Let $|C|$ denote the number of classes in a geometric scheduling. Randomly dividing the nodes into $|C|$ classes only reduces the total amount of interference a node suffers by a factor of $|C|$, while geometric scheduling kills the strong interfering signals from neighboring cells while leaving the nodes in the same cell (mostly consisting of neighboring nodes) transmitting. Thus, geometric scheduling makes the performance of the neighbor discovery algorithms in the interference model similar to that in the collision model.

Figure 3.13: The signature matrix of the geometric scheduling with hexagonal blocks.

Geometric scheduling, if done in the time domain, is equivalent to padding zeros into the transmission patterns of the nodes. Let us elaborate with an example. Consider reuse factor 3, and assume that we use the matrix $A$ as the transmission pattern. Without loss of generality, reorder the columns of the matrix such that the first $n_1$ columns of $A$, denoted by $A_1$, correspond to the nodes belonging to the first cluster, the next $n_2$ columns, denoted by $A_2$, to the second cluster, and the last $n_3$ columns, denoted by $A_3$, to the third cluster, such that $n_1 + n_2 + n_3 = n$. The transmission pattern of the geometric scheduling is shown in Fig. 3.13. In other words, the discovery time in which the nodes of a certain cluster are transmitting appears as a block of zeros in the transmission patterns of the nodes belonging to the other clusters. In case each cluster operates in a specific frequency band different from the others, all clusters can be scheduled simultaneously according to a particular transmission pattern.
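The zero-padding construction above can be sketched as a block-diagonal stacking (our own illustrative code; the toy all-ones patterns stand in for actual Gallager constructions):

```python
import numpy as np

def scheduled_pattern(blocks):
    """Stack per-cluster patterns A_1, ..., A_c (each m_i x n_i) into the
    overall transmission pattern of Fig. 3.13: each cluster is active only
    during its own discovery period and is zero-padded elsewhere."""
    m_total = sum(A.shape[0] for A in blocks)
    n_total = sum(A.shape[1] for A in blocks)
    out = np.zeros((m_total, n_total), dtype=int)
    row = col = 0
    for A in blocks:
        m, n = A.shape
        out[row:row + m, col:col + n] = A  # cluster's own discovery period
        row += m
        col += n
    return out

# Reuse factor 3 with toy all-ones cluster patterns of different widths.
A1, A2, A3 = np.ones((2, 3), int), np.ones((2, 2), int), np.ones((2, 4), int)
S = scheduled_pattern([A1, A2, A3])
print(S.shape)  # (6, 9): zero blocks where other clusters are silent
```

The zero blocks in each column are exactly the slots during which the other clusters hold their discovery periods.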
In Chapter 5, we thoroughly analyze the received interference for different channel propagation effects.

3.11 Appendix

3.11.1 Proof of Theorem 5

Let us start by presenting some background information. An $m \times n$ zero-one transmission pattern matrix $A$ can be represented by a bipartite graph $G_A = (V, C, E)$, where $V = \{v_1, \dots, v_n\}$ corresponds to the user set, $C = \{c_1, \dots, c_m\}$ corresponds to the time slot set, and an edge $e_{i,j} \in E$ connecting $v_j$ to $c_i$ exists if $A_{i,j} = 1$, i.e., if user $v_j$ is "on" in time slot $c_i$. Because of the analogy with MP decoding over the BEC, we shall refer to the elements $v_j \in V$ as the "bit-nodes" and to the elements $c_i \in C$ as the "check-nodes". As said, $A$ corresponds to the parity-check matrix of the underlying LDPC code. A bipartite graph is called regular if all bit-nodes have the same degree $d_\ell$ and all check-nodes have the same degree $d_r$.

Figure 3.14: A $(d_\ell, d_r)$-tree with depth $T$.

Let $g$ denote the girth (length of the shortest cycle) of the bipartite graph $G_A$ and let $T < g/2$ be an even integer. Let $\mathcal{T}_T(v_r)$ denote the directed tree rooted at a bit-node $v_r$ and spanning all nodes up to depth $T$. The root $v_r$ has out-degree $d_\ell$ and the leaves on level $T$ have out-degree zero. The vertices on even levels correspond to bit-nodes and have out-degree $d_\ell - 1$. The vertices on odd levels correspond to check-nodes and have out-degree $d_r - 1$ (see Fig. 3.14).

Further, define a skinny subtree $\hat{\mathcal{T}}_T(v_r)$ to be a connected subtree of $\mathcal{T}_T(v_r)$, also rooted at $v_r$ with depth $T$, where all bit-nodes have full out-degree and all check-nodes have out-degree one. From Section 3.4, recall that a node $v_0$ can recover its neighbors through free chunks.

Proof of Theorem 5.
According to the collision protocol model, we consider only the interference (collisions) from neighbors. First, we bound the probability that $v_0$ fails to recover a sample neighbor $v_r$. Then, we obtain the final result by taking a union bound over all the neighbors. Notice that limiting the ZigZag process to the tree rooted at $v_r$ of depth $T$ can only increase the probability of error. It is apparent that ZigZag (limited to depth $T$) fails to recover $v_r$ if and only if there exists at least one skinny subtree $\hat{\mathcal{T}}_T(v_r)$ of $\mathcal{T}_T(v_r)$ with all-neighbor bit-nodes. Such a subtree is called "complete" in the following. For example, in Fig. ??, if node $v_{62}$ is a neighbor, then the $\hat{\mathcal{T}}_T(v_r)$ formed by $\{c_1, c_2, c_3, c_4, v_{11}, v_{23}, v_{33}, c_6, c_7, v_{62}, v_{72}\}$ and the corresponding edges is complete, and $v_r$ cannot be recovered.

We define a score accumulation process on $\mathcal{T}_T(v_r)$, such that the successful detection of $v_r$ will be shown to be equivalent to a level-crossing condition on the score accumulated at node $v_r$. Let $V_i, C_j$ denote the score values at bit-node $v_i$ and check-node $c_j$, respectively, and define the score propagation equations:

$V_i = \eta_i$, for leaf nodes $v_i$   (3.19)
$C_j = \min\{V_{i_1}, \dots, V_{i_{d_r-1}}\}$, where $v_{i_k} \in N_\downarrow(c_j)$   (3.20)
$V_i = \eta_i + \sum_{c_j \in N_\downarrow(v_i)} C_j$,   (3.21)

where $\eta_i = -1$ if $v_i$ is a neighbor of $v_0$, and $\eta_i = \theta$ for some $\theta > 1$ otherwise, and $N_\downarrow(\cdot)$ represents the set of down-stream neighbors of a node in $\mathcal{T}_T(v_r)$. The score of the root node $v_r$ is computed by applying recursively the rules (3.19)-(3.21) from the leaves (level $T$) to the root (level 0).

Consider the score accumulated at $v_r$ restricted to a complete skinny subtree $\hat{\mathcal{T}}_T(v_r)$ of $\mathcal{T}_T(v_r)$. Since all such bit-nodes contribute with $\eta_i = -1$, it is not difficult to see that the score accumulated at the bit-nodes at level $T - 2t$, for $t = 0, \dots, T/2 - 1$, depends only on the level index and is given by

$V^{(T-2t)} = -\dfrac{(d_\ell - 1)^{t+1} - 1}{d_\ell - 2}$

(we obviously assume $d_\ell > 2$).
For the last step, we have $V_r = V^{(0)} = -1 + d_\ell V^{(2)} = -s$, where we define

$s = 1 + d_\ell \dfrac{(d_\ell - 1)^{T/2} - 1}{d_\ell - 2}.$   (3.22)

Next, we notice that adding any bit-node $v_i \in \mathcal{T}_T(v_r) \setminus \hat{\mathcal{T}}_T(v_r)$ to the computation tree results in the same score. This is seen by noticing that the score accumulated at any bit-node at level $T - 2t$, for $t = 0, \dots, T/2 - 1$, cannot be smaller than $V^{(T-2t)}$, achieved if and only if the subtree rooted at such a bit-node is complete. Instead, if some of such nodes is a non-neighbor, the accumulated score is strictly larger than $V^{(T-2t)}$. In any case, such a value propagated through the "above" check-node has no effect on the total score, because of the min function in (3.20). On the other hand, through the same argument, we observe that, if there is no complete skinny subtree, then the accumulated score at $v_r$ is strictly larger than $-s$.

To bound the probability of error, we consider the case where the scores are random variables, generated by the recursive computation (3.19)-(3.21), where the $\eta_i$ are i.i.d. random variables taking on the values $-1$ and $\theta$ with probability $p$ and $1-p$, respectively. Specifically, we let $X^{(0)}, X^{(2)}, \dots, X^{(T)}$ denote the random scores of bit-nodes at even levels of the tree, and $Y^{(1)}, Y^{(3)}, \dots, Y^{(T-1)}$ denote the random scores of check-nodes at odd levels of the tree. By symmetry, the scores of the nodes at a given level $\ell$ are identically distributed. Hence, we define the following random score evolution: for levels $\ell = T, T-1, \dots, 1$,

$X^{(T)} = \eta_T$
$Y^{(\ell)} = \min\{X^{(\ell+1)}_1, \dots, X^{(\ell+1)}_{d_r-1}\}$, $\ell$ odd
$X^{(\ell)} = \eta_\ell + Y^{(\ell+1)}_1 + \dots + Y^{(\ell+1)}_{d_\ell-1}$, $\ell$ even   (3.23)

where the $\eta_\ell$ are i.i.d. distributed as said above, and where $\{X^{(\ell)}_i\}$ and $\{Y^{(\ell)}_i\}$ are independent replicas of $X^{(\ell)}$ and of $Y^{(\ell)}$, respectively. The last step of the recursion is given by

$X^{(0)} = -1 + Y^{(1)}_1 + \dots + Y^{(1)}_{d_\ell},$   (3.24)

since by assumption $v_r$ is a neighbor of $v_0$.
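The recursion (3.23)-(3.24) can be checked numerically; the following Monte-Carlo sketch (our own code, with arbitrary small parameters) samples the root score $X^{(0)}$ and confirms that it never falls below $-s$:

```python
import random

def sample_root_score(T, dl, dr, p, theta, rng):
    """Sample the root score X^(0) of (3.23)-(3.24): leaves draw
    eta in {-1, theta} (neighbor with prob. p), each check-node
    takes the min over d_r - 1 child bit scores, and each bit-node
    adds its eta to the sum of d_l - 1 child check scores."""
    def eta():
        return -1.0 if rng.random() < p else theta

    def bit(level):
        if level == T:          # leaf bit-node
            return eta()
        return eta() + sum(
            min(bit(level + 2) for _ in range(dr - 1))  # one check-node
            for _ in range(dl - 1))

    # Root: eta = -1 (v_r is a neighbor by assumption), degree d_l.
    return -1.0 + sum(
        min(bit(2) for _ in range(dr - 1)) for _ in range(dl))

rng = random.Random(1)
samples = [sample_root_score(T=4, dl=3, dr=6, p=0.1, theta=2.0, rng=rng)
           for _ in range(500)]
# For T = 4, d_l = 3 we have s = 10 by (3.22), so X^(0) >= -10 always;
# scores near -10 correspond to (almost) complete skinny subtrees.
print(min(samples))
```

The deterministic floor at $-s$ is exactly the level-crossing event $\{X^{(0)} \le -s\}$ bounded in the proof.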
For what was said above, the probability of error in recovering the sample neighbor $v_r$ is given by

$P^{(1)}_{error} = P\{\exists \text{ a complete skinny subtree}\} = P\{X^{(0)} \le -s\}.$

Note that, due to the half-duplex constraint, the rows of $A$ on which $v_0$ transmits are omitted, and thus the actual computation tree may be non-regular because of these missing check-nodes. We will resolve this issue at the end of the proof. Using the Markov inequality, we have

$P^{(1)}_{error} = P\{e^{-\lambda X^{(0)}} \ge e^{\lambda s}\} \le e^{-\lambda s} M_0(\lambda),$   (3.25)

where we define the Laplace transform of the pdf of $X^{(\ell)}$ as $M_\ell(\lambda) = E[e^{-\lambda X^{(\ell)}}]$, and (3.25) holds for all $\lambda \in \mathbb{R}_+$. From (3.24) we have

$M_0(\lambda) = e^{\lambda} \left( E[e^{-\lambda Y^{(1)}}] \right)^{d_\ell}.$   (3.26)

We use the following lemma to further upper bound the right-hand side of (3.25).

Lemma 9. For $\ell$ even, with $0 < \ell \le T$, we have

$M_\ell(\lambda) \le M_T(\lambda) \left[ (d_r - 1) M_{\ell+2}(\lambda) \right]^{d_\ell - 1}.$   (3.27)

Proof. Since $X^{(\ell)}$ is a sum of independent random variables, we have $E[e^{-\lambda X^{(\ell)}}] = E[e^{-\lambda \eta_\ell}] \left( E[e^{-\lambda Y^{(\ell+1)}}] \right)^{d_\ell - 1}$. Now, the proof is completed by noticing that $e^{-\lambda Y^{(\ell+1)}} \le \sum_{i=1}^{d_r-1} e^{-\lambda X^{(\ell+2)}_i}$ and that, by the definition of $X^{(T)}$ and by the fact that the variables $\eta_\ell$ are identically distributed for all $\ell$, we have $M_T(\lambda) = E[e^{-\lambda \eta_T}]$.

By repeatedly applying Lemma 9, it is not difficult to show that

$M_{T-2t}(\lambda) \le \dfrac{\left[ (d_r - 1) M_T(\lambda) \right]^{\sum_{i=0}^{t} (d_\ell - 1)^i}}{d_r - 1},$

for $t = 0, \dots, T/2 - 1$. Also, from (3.26), using an argument similar to the proof of Lemma 9 and the definition of $s$ in (3.22), we have

$M_0(\lambda) \le e^{\lambda} \left[ (d_r - 1) M_T(\lambda) \right]^{s-1}.$   (3.28)

Replacing (3.28) in the bound (3.25), we find

$P^{(1)}_{error} \le e^{-\lambda(s-1)} \left[ (d_r - 1) M_T(\lambda) \right]^{s-1} = \left( (d_r - 1) e^{-\lambda} M_T(\lambda) \right)^{s-1}.$   (3.29)

Let $\varrho = (d_r - 1) e^{-\lambda} M_T(\lambda)$. Apparently, we need $\varrho < 1$. Explicitly, we have

$M_T(\lambda) = E[e^{-\lambda \eta_T}] = p e^{\lambda} + (1-p) e^{-\lambda \theta}.$

By the constraints on $\theta$, we ensure $\varrho < 1$; moreover, $s$ grows unbounded as $T \to \infty$, and $T$ can be made large as $n \to \infty$, since we consider a sequence of LDPC codes with large girth $g$, given by (4.3). Hence, $P^{(1)}_{error}$ vanishes for large $N$. To bound the probability of detecting all neighbors, we use the union bound.
That is, $P^{(1)}_{error}$ is the probability of missing a given neighbor. Hence, the probability of not perfectly detecting the neighborhood, i.e., the probability of missing at least one neighbor, is bounded by

$P_{error} \le K P^{(1)}_{error} \le N P^{(1)}_{error}.$   (3.30)

As mentioned, in our construction the girth is lower bounded by $c \log(n) / \log((d_r - 1)(d_\ell - 1))$ for some $c > 0$ as $n \to \infty$. Using this in (3.29) and the result in (3.30), we have

$P_{error} \le n \exp\left( \exp\left( \left( \dfrac{c \log(n)}{2 \log((d_r - 1)(d_\ell - 1))} - 1 \right) \log(d_\ell - 1) \right) \log \varrho \right).$   (3.31)

Since $\log \varrho < 0$, $P_{error} \downarrow 0$ as $n \to \infty$.

As mentioned, due to the half-duplex constraint, the slots on which any node $v_0$ is "on" cannot be observed by the receiver of $v_0$, and the corresponding check-nodes must be removed from the code graph. To show that our bound is still valid, let $C(v_0)$ denote the removed check-nodes, i.e., the check-nodes connected to bit-node $v_0$. For $g > 4$ (which always holds for sufficiently large $n$), no other bit-node can be connected to more than a single check-node in $C(v_0)$; otherwise, we would have a "butterfly", i.e., a cycle of length 4, in the graph. It follows that by removing the check-nodes in $C(v_0)$, all the remaining check-nodes in $\mathcal{T}_T(v_r)$ preserve their degree $d_r$, and all the bit-nodes in $\mathcal{T}_T(v_r)$ may have their degree $d_\ell$ decreased by at most 1. As a result, our bound can be made valid by replacing $d_\ell$ with $d_\ell - 1$. It is then sufficient to choose $d_\ell > 3$ (which is compatible with (3.5)), and the proof is complete.

3.11.2 Proof of Theorem 6

Before starting the proof, let us briefly state the coupon collector's problem: each box of a product contains one of $K$ different coupons. We want to collect all $K$ coupons, and we want to know how many boxes we need to buy so that we collect all coupons with high probability. Let $M_i$ denote the number of boxes bought to get the $(i+1)$-th distinct coupon while the buyer already has $i$ coupons.
Let $M$ be the total number of boxes we need to buy to get all $K$ coupons, i.e., $M = \sum_{i=0}^{K-1} M_i$. Apparently, $M_i$ is a geometric random variable with distribution

$P(M_i = j) = \left(1 - \dfrac{i}{K}\right) \left(\dfrac{i}{K}\right)^{j-1}.$

It is easy to show that

$E[M] = \sum_{i=0}^{K-1} E[M_i] \approx K \log(K).$

Now consider an altered version of the problem where only a fraction $p_s$ of the boxes, with $0 < p_s \le 1$, contain coupons and the rest turn out empty. This time, using $E[M] \, p_s = K \log(K)$, we need to buy

$E[M] = \dfrac{K}{p_s} \log(K)$   (3.32)

boxes to recover all coupons. Now we are ready to present the proof.

Proof of Theorem 6. We start by considering a clique of size $K$; that is, all nodes are neighbors of each other. Further, as mentioned, we consider the collision model, i.e., a node can be discovered by the other neighbors on some slot if it is the only node transmitting on that slot. For a node in the clique, the problem is similar to the altered version of the coupon collector's problem where, if each node transmits on each slot with some probability $p_a$, then

$p_s = p_a (1 - p_a)^{K-1}.$

Obviously, the minimum duration is achieved when $E[M]$ in (3.32) is minimized, that is, when $p_s$ is maximized, which gives the optimal probability of transmission $p_a = 1/K$. Thus, in the limit $K \to \infty$, we get

$p_s = \dfrac{1}{K} \left(1 - \dfrac{1}{K}\right)^{K-1} \approx \dfrac{1}{Ke},$

and the average number of transmissions required is

$E[M] = K^2 e \log(K).$

Now we want to show that the distribution of the variable $M$ is concentrated around its mean, and that a node can recover all IDs with high probability by deviating only a constant $c$ from the mean. This can easily be achieved through the result in Section 5.4.1 of [MU05], which maps the coupon collector's problem to the balls-and-bins problem and uses the Poisson approximation. Thus the proof is complete.

4 Directional ZigZag: Neighbor Discovery with Directional Antennas

4.1 Introduction

In recent years, there has been a surge of interest in device-to-device communications, due to the improved spectral efficiency and flexibility [DRW+09].
In such networks, devices transmit information directly to each other (possibly under the control of a BS, but without sending the payload via it), either in a single hop or via multi-hopping. Directional antennas can greatly enhance the performance of such systems, as they improve the SNR, which is especially important due to the higher pathloss in D2D communications (compared to cellular systems), both at microwave and millimeter-wave frequencies [Mol10]. Furthermore, adaptive antennas strongly reduce interference by receiving mostly in the beam direction and, as a result, increase the spatial reuse by allowing the scheduling of multiple nodes at the same time.

The early work on neighbor discovery can be viewed as part of MAC protocols and focuses on omni-directional antenna based systems. How to harmonize directional transmission and reception in the system is an essential research issue, as the deployment of directional antennas complicates neighbor discovery. That is, each node should scan its surroundings by advertising its unique identifier in different directions. Further, unlike omni-directional neighbor discovery, where the nodes can detect their neighbors without them knowing, here the nodes must agree on becoming neighbors. This is due to the fact that a node may only transmit to a neighbor when both aim their antenna beams toward one another. Furthermore, fast discovery of neighbors is essential, especially in mobile networks, since it must be repeated often enough that sufficient knowledge of the network topology is maintained.

Several approaches have been proposed for directional wireless links by modifying the IEEE 802.11 MAC protocol [RRS+05], [TMBR02]. They use the Directional Virtual Carrier Sensing (DVCS) concept, which extends the IEEE 802.11 Distributed Coordination Function (DCF) to directional wireless networks. However, these protocols require the receiver to be in omni-mode to receive Request to Send (RTS) control packets.
A major challenge is the fact that if one link end uses a beamwidth that is smaller than during actual communications (as is the case in the 802.11 MAC), then not all viable neighbors may be detected. Furthermore, the efficiency of those algorithms is very limited as they rely on random transmission and reception and thus their performance deteriorates as the network becomes denser and denser. Here, we take a different approach, namely extend our highly efficient omni-directional discovery method from the previous chapter and introduce “directional ZigZag” for devices with beamsteering antennas. As explained, The main idea of ZigZag is that the nodes may cancel the known identifiers from received signals to retrieve identifiers of other neighbors, i.e., perform a sort of iterative interference cancellation. The challenge consists of designing the transmission patterns so that the discovery time remains short while all nodes manage to detect all their neighbors with high probability. 4.2 System Model We consider a network with n devices with the following assumptions: Each node is assigned a unique identifier (ID). Each node can either transmit or listen (half-duplex assumption). All nodes know when the discovery starts and transmit synchronously (synchronous model). This can be achieved in a D2D system, e.g., through a beacon signal sent 78 4.2. System Model by the BS. All nodes are equipped with electronically steerable antenna arrays that can be used to either provide directivity or enable an omni-directional mode. sectored “flat-top” directional beampattern, i.e., the antenna gain pattern in di- rectional mode is a circular sector with angle w and radius equal to the transmis- sion/reception range. All nodes transmit with the same power P t , and use beam width w t and w r while transmitting and receiving, respectively. Antenna gains g wt and g wr are associated with these beam widths. 
Finally, we assume a noiseless model, i.e., the energy at the receiver is offset by the thermal floor such that 0 corresponds to noise only. Thus, given a large enough transmission power, any energy read by the receiver indicates the presence of a transmission. The goal is then for each node to detect all its neighbors. Let $w_t$ and $w_r$ be the minimum formable beamwidths of the transmitter and receiver antennas, and $g_t, g_r$ the corresponding gains. The area covered by the antennas depends on the beamwidths of both transmitter and receiver. Since the maximum gain is achieved at the minimum beamwidth, we let the neighborhood area be the one corresponding to these maximum gains.

Definition 9. The neighborhood of a node is the set of nodes that can be heard with power greater than a certain threshold $\eta$, which depends on the communication channel and the purpose of the communication, i.e.,

$$\mathcal{N}(v_i) = \left\{ v_j : \max_{\theta_i, \theta_j} P_t\, g_t\, g_r\, \left| h_{v_i(\theta_i, w_t), v_j(\theta_j, w_r)} \right|^2 \geq \eta \right\}, \qquad (4.1)$$

where $h_{v_i(\theta_i, w_t), v_j(\theta_j, w_r)}$ is the channel gain between nodes $v_i$ and $v_j$ when they point their beams in directions $\theta_i, \theta_j$ with widths $w_t, w_r$, respectively, and the maximum is taken over all possible directions.

Clearly, when users employ a beamwidth greater than $w$ they fail to cover the whole neighborhood. We refer to the covered area as the "in-reach" neighborhood, which is a monotonically increasing function of the antenna gain. As shown by (4.1), the neighborhood relation is reciprocal if nodes use similar antenna patterns. Further, as (4.1) shows, nodes need to record the angle at which the transmission/reception to/from a neighbor is most powerful, and thus the profile of a neighbor includes both its ID and angle. Furthermore, each pair of neighbors must agree on being neighbors and set a time for future contact.
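The iterative cancellation at the heart of ZigZag can be sketched in the collision model as a peeling process: a slot becomes decodable once all but one of the IDs colliding in it are already known. This is a minimal illustrative sketch; identifiers are abstract labels and all names are ours, not the signature format of the thesis.

```python
def zigzag_peel(slots):
    """slots: list of sets, each holding the IDs transmitted in that slot.
    Returns the set of IDs recovered by iterative interference cancellation."""
    known = set()
    progress = True
    while progress:
        progress = False
        for ids in slots:
            residual = ids - known        # cancel already-decoded identifiers
            if len(residual) == 1:        # exactly one unknown ID: decode it
                known |= residual
                progress = True
    return known

# Slot 2 is collision-free; decoding 'c' unlocks slot 1, which unlocks slot 0.
print(sorted(zigzag_peel([{'a', 'b'}, {'b', 'c'}, {'c'}])))  # ['a', 'b', 'c']
```

If no collision-free slot exists (e.g., two identical collisions), peeling cannot start and nothing is recovered, which is why the design of the transmission patterns below matters.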
The protocol by which nodes decide on the future communication, which could be carried out either directly between nodes or supported by the BS, is out of the scope of this work. A node that uses a beam antenna needs to scan all of its surroundings. For an antenna with beamwidth $w$, the number of beams required to scan in two dimensions is $\gamma_2 = \lceil \frac{2\pi}{w} \rceil$, while in three dimensions $\gamma_3 = \lceil \frac{2}{1 - \cos(w/2)} \rceil$ beam directions [Ste03] are needed. Many works on directional neighbor discovery focus on using directional transmitters and omni-directional receivers (DTOR). While this case keeps the interference low, it suffers from the complications of forming the transmitting antenna beam. On the other hand, it is easy to form receiving antennas with small beam patterns. Hence, for directional ZigZag, we also consider the case with omni-directional transmitting and directional receiving (OTDR). Note that this method suffers from more interference than the previous case, since nodes broadcast their IDs in all directions and disturb the surrounding receivers. Finally, we consider directional antennas at both transmitter and receiver (DTDR).

4.3 Directional ZigZag Algorithm

Here we extend the ZigZag algorithm to the directional case. Clearly, we must introduce a way for each node to scan all of its surroundings both as receiver and as transmitter. This, however, is not a trivial task. Specifically, it must be done in such a manner that for any two nodes $v_i$ and $v_j$, with high probability, there exists a time slot in which $v_i$ transmits its ID toward $v_j$ while $v_j$ is receiving in the direction of $v_i$, and furthermore $v_j$ gets to decode the ID of $v_i$. Fix $\gamma_t$ and $\gamma_r$ according to $w_t$ and $w_r$, respectively. Let the algorithm run for $\Gamma$ phases, where $\Gamma = \gamma_t$ for DTOR and $\Gamma = \gamma_r$ for OTDR and DTDR. Further, define $\phi_i$ to be a random ordering (permutation) of direction indices such that $\phi_i(\tau)$ is the $\tau$-th element of $\phi_i$.
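The beam counts above can be computed directly; a minimal sketch (function names are ours):

```python
import math

def beams_2d(w):
    """Number of beam directions to sweep the plane with beamwidth w (radians)."""
    return math.ceil(2 * math.pi / w)

def beams_3d(w):
    """Number of beam directions to sweep the sphere with beamwidth w (radians).
    A cone with apex angle w subtends solid angle 2*pi*(1 - cos(w/2)); the
    full sphere is 4*pi, giving the ratio 2 / (1 - cos(w/2))."""
    return math.ceil(2 / (1 - math.cos(w / 2)))

print(beams_2d(math.pi / 3))  # 6: six 60-degree sectors cover the plane
print(beams_3d(math.pi / 3))  # 15
```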
Furthermore, the transmission pattern of the nodes in the $\tau$-th phase is represented by an $m \times n$ zero-one matrix $A^\tau$. That is, we assign each node a column of $A^\tau$ as its transmission pattern in the $\tau$-th phase. Each node knows all the transmission patterns, i.e., all $A^\tau$. Taking the time slot to be the time required to transmit one ID, $A^\tau(c,i) = 1$ means that node $v_i$ transmits in the $c$-th time slot of phase $\tau$. Further, nodes listen to the channel on the zero entries of their transmission pattern to receive the identifiers of other nodes. For OTDR, in phase $\tau$, each node chooses a direction $\phi_i(\tau) \in \{1, \dots, \gamma_r\}$ for its receiver to use during the zero entries of its pattern $a_i$. On the one entries of the transmission pattern, nodes transmit omni-directionally to their surroundings, and the discovery has $\gamma_r$ phases. For DTOR, on the other hand, we do the reverse: on one entries, node $v_i$ transmits in direction $\phi_i(\tau) \in \{1, \dots, \gamma_t\}$ and uses an omni-directional pattern for receiving on zero entries. For DTDR, we use a system similar to OTDR; on the one entries of the pattern, however, nodes randomly scan their surroundings. That is, they transmit in all directions $\{1, \dots, \gamma_t\}$ in a random order, which enables the neighbors to hear them. Thus, if we denote the total discovery period by $M = \Gamma m$, then $\Gamma = \gamma_t, \gamma_r$ for DTOR and OTDR, respectively, and $\Gamma = \gamma_r \gamma_t$ for DTDR. Let us abuse notation and use $h_i^\tau$ to denote the vector of channel power gains with elements $h_i^\tau(j) = |h_{v_i(\phi_i(\tau), w_r), v_j(\phi_j(\tau), w_t)}|^2$. Due to the half-duplex condition, under which node $v_i$ can either transmit or receive, node $v_i$ listens to the channel on the slots $c$ for which $A^\tau(c,i) = 0$. Let $A^\tau_i$ denote the matrix $A^\tau$ with the rows $a_c$ for which $A^\tau(c,i) = 1$ omitted. Then the signal $y_i^\tau$ received by the $i$-th node during the $\tau$-th phase is

$$y_i^\tau = P_t\, g_{w_t}\, g_{w_r}\, A^\tau_i\, h_i^\tau. \qquad (4.2)$$

We may use a different transmission pattern for each phase or use the same transmission pattern $\Gamma$ times.
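The listening rule of (4.2) can be sketched as follows; the pattern, channel vector, and unit gains are toy values of our choosing, not from the thesis:

```python
def received_energy(A, i, h, P=1.0, g_t=1.0, g_r=1.0):
    """A: m x n 0/1 transmission pattern for one phase.
    h: per-node channel power gains toward node i for this phase.
    Returns (slot, energy) for every slot on which node i listens."""
    m, n = len(A), len(A[0])
    out = []
    for c in range(m):
        if A[c][i] == 1:          # half duplex: node i transmits, cannot listen
            continue
        y = P * g_t * g_r * sum(A[c][j] * h[j] for j in range(n) if j != i)
        out.append((c, y))
    return out

A = [[1, 0, 0],
     [0, 1, 1],
     [0, 0, 1]]
h = [0.0, 0.5, 2.0]              # node 0's channel power gains (h[0] unused)
print(received_energy(A, 0, h))  # [(1, 2.5), (2, 2.0)]
```

Node 0 transmits in slot 0 and therefore observes only slots 1 and 2, whose energies superimpose the contributions of all nodes active in those slots.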
We can also use the same transmission pattern but assign the columns, i.e., the patterns, randomly to different nodes in each phase. As pointed out before, in neighbor discovery two neighboring nodes must agree on the neighbor relationship and choose a time for their future rendezvous. To address this issue, we assume each node runs ZigZag at the end of each phase and detects all the neighbors in a particular direction (or, as in DTOR, announces itself to the neighbors in a particular direction). Then, from the second phase on, each node includes the indices of the detected neighbors, together with the rendezvous, in its ID. Thus, the detected neighbors can hear them and agree on the future communication. The only concern is the case where two neighbors point their antennas at each other in the same phase, which happens with probability $\frac{1}{\gamma_r \gamma_t}$. Note that, in practical systems, for small enough beamwidths and due to multi-path fading, a neighbor can be heard in several directions, which greatly decreases the probability of a one-sided agreement. Alternatively, the neighbor lists can be communicated to the BS, which then in a broadcast lets all devices know their neighbors and the associated pattern directions. How the future rendezvous is scheduled depends on the MAC protocol and is beyond the scope of this work.

4.3.1 The transmission pattern

Our goal is to design the matrices $A^\tau$ so that when nodes transmit according to $A^\tau$, free chunks exist with high probability for all nodes. In Chapter 3, we showed a formal equivalence between ZigZag decoding and message-passing (MP) decoding of LDPC codes over binary erasure channels (BEC) (see also [PLC11, NP12]). This connection allows us to use the parity-check matrices of "good" regular LDPC codes with large girths (regular means that all bit-nodes have the same degree $d_\ell$ and all check-nodes have the same degree $d_r$). It is well known in the coding literature that a code of length $n$ is considered to have a large
girth $g$ if it satisfies the logarithmic relationship

$$g = \Theta\!\left( \frac{\log(n)}{\log((d_r - 1)(d_\ell - 1))} \right), \qquad (4.3)$$

where $d_\ell, d_r$ denote the degrees of the nodes and the measurements. Many constructions, such as Gallager's [Gal62] or the progressive edge growth method [HEA05], satisfy (4.3). On the other hand, it is known [Fos04] that cyclic and quasi-cyclic codes do not satisfy (4.3) and thus cannot be used for our purpose. Let $k$ denote the average number of neighbors a node has. Further, let $\bar{k}$ denote the average number of neighbors whose IDs can be heard during a phase, which we call the effective number of neighbors. For example, if nodes are distributed uniformly over the field and each node has $k$ neighbors on average, then $\bar{k}$ is $k/\gamma_t$, $k/\gamma_r$, and $k/(\gamma_t \gamma_r)$ for DTOR, OTDR, and DTDR, respectively. For the transmission pattern of each phase, we build a $(d_\ell, d_r)$-regular LDPC code satisfying (4.3) with the following degrees and number of rows:

$$m = \alpha \bar{k}, \qquad d_r = \beta_r\, n / \bar{k}, \qquad d_\ell = \beta_\ell, \qquad (4.4)$$

where $\alpha, \beta_r, \beta_\ell$ are constants which must satisfy $\beta_\ell = \alpha \beta_r$ by the hand-shaking lemma. Clearly, $\alpha \geq 1$. Let us point out again that the number of rows $m$ of the transmission pattern is the period of one phase (in slots) for DTOR and OTDR, while for DTDR the phase period is $\gamma_t m$.

4.3.2 Performance Analysis

Our main result is Theorem 7, which bounds the probability of error of the directional ZigZag algorithm for a finite setup and shows that the probability of error vanishes asymptotically. We are now ready to present the bounds on the performance of directional ZigZag with DTOR and OTDR.

Theorem 7. Consider a random network with $n$ nodes where each node has on average $k = pn$ neighbors. Further, assume nodes broadcast their unique identifiers according to an on-off transmission pattern specified by the columns of the parity-check matrix of a large-girth regular LDPC code with parameters (4.4) and girth $g > 4$.
Then, assuming the collision model, the probability of a node not detecting an in-reach neighbor for all directional ZigZag discoveries, namely DTOR, OTDR, and DTDR, is bounded by

$$P^{(1)}_{\text{error}} \leq \zeta^{\,\frac{(d_\ell - 2)^{T/2} - 1}{d_\ell - 3} - 1}, \qquad (4.5)$$

with $\zeta < 1$ as defined in the proof, for all $p < \beta_r / d_r$, when a) $\beta_r \leq \gamma_t$ for DTOR; b) $\beta_r \leq \gamma_r$ for OTDR; c) $\beta_r \leq \gamma_t \gamma_r$ for DTDR, where $T < g/2$, and $\gamma_r, \gamma_t$ are the numbers of receiving and transmitting directions. Further, the probability of detecting the neighbors goes to one as $n$, and correspondingly $T$, grows.

The proof is similar to the one presented for ZigZag in 3.11.1 and is given in the appendix. As shown by Theorem 7, all three schemes require time on the order of the average number of neighbors $k$. The length of one phase, $m$, of DTOR and OTDR is on the order of $k/\Gamma$, while for DTDR the period of a phase is $\gamma_t m$ and $m$ scales as $k/(\gamma_r \gamma_t)$. Thus

$$M_{\text{DTDR}} \approx M_{\text{OTDR}} \approx M_{\text{DTOR}} \approx \alpha k, \qquad (4.6)$$

where $M$ denotes the total discovery period. Our simulations show that $\alpha$ is around 2. Thus, to detect the same number of neighbors $k$, DTDR requires a time (number of slots) similar to DTOR and OTDR. This surprising result holds for large $n$, while for finite-sized setups we observe an approximate version of this rule. Since the in-reach neighborhood of DTDR is the same as the real neighborhood, while OTDR and DTOR fail to cover the whole neighborhood, this result can make the use of DTOR and OTDR obsolete.

4.4 Practical Considerations

[Figure 4.1: the node signature, consisting of pilots for channel estimation (1 OFDM symbol), the MAC address (8 bytes), and a CRC (4 bytes); caption: Node identifiers and the network overhead.]

Here we address some of the practical concerns of directional ZigZag. Note that in a practical setup, nodes receive interference from non-neighbors. On the other hand, the practical setup also offers some attributes and advantages compared to the collision model. The identifiers are protected by encoding against interference and noise.
Further, successive interference cancellation allows recovering more than one ID from the received signal in one slot, which improves the performance.

4.4.1 ID format

For ZigZag discovery, we consider the packet format shown in Fig. 4.1 as the node ID. That is, we use the 6-byte MAC address as the node identifier. Further, to set the future rendezvous and avoid one-sided agreements, we assign space for the transmission of the indices of the neighbors. The length of this part depends on the protocol in use. Finally, we also add a cyclic redundancy check (CRC) of 4 bytes to the ID, similar to the OFDM standard, so that the nodes can detect errors when recovering. Further, ZigZag relies on the consecutive cancellation of identifiers and thus requires channel information. As a result, we need to add symbols for channel estimation to the identifiers. These pilot symbols allow us to assume estimation of the channel coefficients whenever a receiver manages to decode the signature, and to use interference cancellation for decoding.

4.4.2 Interference Control

The amount of interference depends on the number of nodes transmitting in each slot, i.e., $d_r$. Note that, out of these $d_r$ nodes, on average $d_r / \Gamma$ of them point their antennas toward the same direction. Thus, by choosing the smallest possible $d_r$ for fixed $n$ and target $m$, we can reduce the interference.

4.4.3 Encoding

As we move beyond the collision model, we should consider the case where IDs are decoded based on the received signal-to-interference ratio (SIR). We consider an information-theoretic recovery model. That is, we assume that the nodes use QPSK modulation with coding rate 1/2 for transmitting their identifiers. For a node to decode the received signals, i.e., the identifiers of other nodes, we consider successive interference cancellation (SIC).
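A minimal sketch of such an SIC loop, using the rate test of (4.7) with the weaker IDs treated as noise; the powers, noise level, and function names are illustrative assumptions of ours:

```python
import math

def sic_decode(powers, N0=1e-3, R=1.0):
    """powers: received powers P_t * |h|^2 of the IDs active in one slot.
    Decode the strongest ID if its rate clears R, cancel it, and repeat
    until decoding fails.  Returns the number of IDs recovered."""
    remaining = sorted(powers, reverse=True)
    decoded = 0
    while remaining:
        strongest = remaining[0]
        interference = sum(remaining[1:])
        if math.log2(1 + strongest / (N0 + interference)) >= R:
            remaining.pop(0)     # cancel the decoded identifier
            decoded += 1
        else:
            break
    return decoded

print(sic_decode([4.0, 1.0, 0.1]))  # 3: powers are well separated
print(sic_decode([1.0, 1.0]))       # 0: equal powers leave the SIR near 0 dB
```

As the second example shows, well-separated receive powers are what lets SIC pull several identifiers out of a single slot.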
In SIC, in each time slot, node $v_i$ tries to decode the strongest ID; if successful, it cancels it from the received signal and continues until it fails to recover the strongest remaining one. (We could also consider treating interference as noise, but since ZigZag by its nature uses interference cancellation, we found SIC more appropriate.) More specifically, if the received identifiers are sorted from strongest to weakest, i.e., $t_1 |h_{v_1(\theta_1,w),v_i(\theta_i,w)}| > t_2 |h_{v_2(\theta_2,w),v_i(\theta_i,w)}| > \cdots > t_{n-1} |h_{v_{n-1}(\theta_{n-1},w),v_i(\theta_i,w)}|$, then node $i$ in the $r$-th round can decode the identifier of the $r$-th node if

$$\log\!\left( 1 + \frac{ t_r\, |h_{v_r(\theta_r,w),v_i(\theta_i,w)}|^2\, P_t }{ N_0 + \sum_{k \in [r+1:n] \setminus \{i\}} t_k\, |h_{v_k(\theta_k,w),v_i(\theta_i,w)}|^2\, P_t } \right) \geq R, \qquad (4.7)$$

where $t_j \in \{0, 1\}$ is the transmission variable that is one if node $v_j$ is transmitting in that slot, $[1:n] = \{1, \dots, n\}$, and $N_0$ is the noise power. $R$ denotes the rate in bits/s/Hz, which is 1 for rate-1/2 QPSK modulation. Of course, this is somewhat optimistic, since the bound is proven for the asymptotic case where the length of the identifiers goes to infinity, not for the short finite packets we transmit for neighbor discovery. This, however, may be taken into account by considering a value larger than $R = 1$ as the threshold. Further, note that if the identifiers transmitted in the same slot have well-separated receive powers, then SIC can recover more than one identifier from the same slot.

4.5 Experimental Results

To verify our theoretical results, we run directional ZigZag in the collision model and in a realistic setup. We consider a network with $n = 2000$ nodes. First, we consider the collision model, where a node does not receive any interference from non-neighbors. We consider a disk of radius one with the node $v_0$ at its center. Then, we uniformly distribute $k$ nodes over the disk, i.e., $|\mathcal{N}_{v_0}| = k$. We denote the transmission pattern parameters by the tuple $(m, d_\ell, d_r)$.
We consider the nodes to use the same beamwidth for transmission and reception, i.e., $w_t = w_r = w$. We run the simulations for $w = \pi/6$ and $w = \pi/2$, and we are interested in the probability of successful detection of the whole neighborhood by $v_0$ conditioned on the number of neighbors $k$, i.e.,

$$P\!\left( S = \hat{S} \,\middle|\, |S| = k \right), \qquad (4.8)$$

where $S, \hat{S}$ denote the true set of neighbors of node $v_0$ and its estimate by ZigZag, respectively. To analyze the performance of ZigZag, let us introduce the discovery-period-to-neighborhood-size ratio $\rho = M/k$. The value $\rho$ measures how many slots, on average, we need to discover $k$ neighbors. Note that we already know a lower bound on this ratio through the connection between ZigZag and MP decoding over the BEC. That is, we find the maximum $p$ for which density evolution converges to zero and compute the corresponding $\rho_{DE} = \frac{d_\ell}{d_r\, p}$.

[Figure 4.2: probability of full-neighborhood detection versus $\rho = M/k$ for the patterns (100,3,30), (250,3,12), (300,3,10), (400,4,10), (500,3,6), with $\rho_{DE}(3,4)$ marked; caption: The ratio $\rho = M/k$ for directional ZigZag in the collision model.]

Fig. 4.2 shows $\rho$ for our simulations, where dashed lines correspond to $w = \pi/2$ and solid lines to $w = \pi/6$. As depicted, directional ZigZag requires about $2k$ slots to detect $k$ neighbors. For a more realistic setup, where we consider the interference from non-neighbors, we use a model similar to the one in [VKT05]. That is, we consider a geometric pathloss model $d^{-\alpha}$ with $\alpha = 4$. All nodes use $P_t = 20$ dBm and beamwidth $w = \pi/2$. For our simulation, we choose neighborhood radius $R_n = 150$ m for DTDR and $R_n = 107$ m for the other schemes. We consider a circular field with radius $R_f$ and locate node $v_0$ at its center. Further, for different values of $k$, we uniformly distribute $k$ nodes in the neighborhood of $v_0$, i.e., inside a circle with radius $R_n$ centered at $v_0$. Then, we uniformly distribute $n - k - 1$ nodes outside the neighborhood of $v_0$.
To preserve the density $p$ of the network, however, we shrink the radius of the field $R_f$ as we increase $k$; specifically, we set $R_f = R_n \sqrt{(n-1)/k}$. As shown in Fig. 4.3, the performance of directional ZigZag does not deteriorate much due to interference. For DTOR and OTDR, the recovery ratio changes from $2k$ to $3k$, while for DTDR it remains unchanged. This is because, thanks to directivity at both ends, the amount of received interference is reduced, which protects the method against the interference.

[Figure 4.3: probability of full-neighborhood detection versus $\rho = M/k$ for the patterns (100,3,60) and (150,3,40) under DTOR, OTDR, and DTDR; caption: The ratio $\rho = M/k$ for directional ZigZag in the realistic model and $w = \pi/2$.]

4.6 Appendix

4.6.1 Proof of the Theorem

Here we present the proof of Theorem 7. The proof is quite similar to the proof of Theorem 5 presented in 3.11.1.

Proof of Theorem 7. Similar to the proof of Theorem 5, we consider $\mathcal{T}_T(v_r)$, the directed tree rooted at a bit-node $v_r$, and define the score propagation equations:

$$V_i = \xi_i, \quad \text{for leaf nodes } v_i, \qquad (4.9)$$
$$C_j = \min\{ V_{i_1}, \dots, V_{i_{d_r - 1}} \}, \quad \text{where } v_{i_k} \in \mathcal{N}_{\downarrow}(c_j), \qquad (4.10)$$
$$V_i = \xi_i + \sum_{c_j \in \mathcal{N}_{\downarrow}(v_i)} C_j, \qquad (4.11)$$

where $\xi_i = -1$ if $v_i$ is a neighbor of $v_0$, and $\xi_i = \Delta$ for some $\Delta > 1$ otherwise, and $\mathcal{N}_{\downarrow}(\cdot)$ denotes the set of down-stream neighbors of a node in $\mathcal{T}_T(v_r)$. The score of the root node $v_r$ is computed by applying the rules (4.9)-(4.11) recursively from the leaves (level $T$) to the root (level 0). As discussed, to bound the probability of error, we define the following random score evolution: for levels $\ell = T, T-1, \dots, 1$,

$$X^{(T)} = \xi_T,$$
$$Y^{(\ell)} = \min\{ X^{(\ell+1)}_1, \dots, X^{(\ell+1)}_{d_r - 1} \}, \quad \ell \text{ odd},$$
$$X^{(\ell)} = \xi_\ell + Y^{(\ell+1)}_1 + \cdots + Y^{(\ell+1)}_{d_\ell - 1}, \quad \ell \text{ even}, \qquad (4.12)$$

where the $\xi_\ell$ are i.i.d. and distributed as said above, and where $\{X^{(\ell)}_i\}$ and $\{Y^{(\ell)}_i\}$ are independent replicas of $X^{(\ell)}$ and of $Y^{(\ell)}$, respectively.
The last step of the recursion is given by

$$X^{(0)} = -1 + Y^{(1)}_1 + \cdots + Y^{(1)}_{d_\ell}, \qquad (4.13)$$

since by assumption $v_r$ is a neighbor of $v_0$. By what was said above, the probability of error in recovering the sampled neighbor $v_r$ is given by

$$P^{(1)}_{\text{error}} = P\{ \exists \text{ a complete skinny subtree} \} = P\{ X^{(0)} \leq s \}.$$

Note that, due to the half-duplex constraint, the rows of $A$ on which $v_0$ transmits are omitted, and thus the actual computation tree may be non-regular because of these missing check-nodes. We resolve this issue at the end of the proof. Using the Markov inequality we have

$$P^{(1)}_{\text{error}} = P\{ e^{-\lambda X^{(0)}} \geq e^{-\lambda s} \} \leq e^{\lambda s} M_0(\lambda), \qquad (4.14)$$

where we define the Laplace transform of the pdf of $X^{(\ell)}$ as $M_\ell(\lambda) = \mathbb{E}\, e^{-\lambda X^{(\ell)}}$, and (4.14) holds for all $\lambda \in \mathbb{R}_+$. From (4.13) we have

$$M_0(\lambda) = e^{\lambda} \left( \mathbb{E}\, e^{-\lambda Y^{(1)}} \right)^{d_\ell}. \qquad (4.15)$$

As before, we use Lemma 9 to further upper-bound the right-hand side of (4.14). By repeatedly applying Lemma 9, it is not difficult to show that

$$M_{T-2t}(\lambda) \leq \frac{ \left[ (d_r - 1)\, M_T(\lambda) \right]^{\sum_{i=0}^{t} (d_\ell - 1)^i} }{ d_r - 1 },$$

for $t = 0, \dots, T/2 - 1$. Also, from (4.15), using an argument similar to the proof of Lemma 9 and the definition of $s$ in (3.22), we have

$$M_0(\lambda) \leq e^{\lambda} \left[ (d_r - 1)\, M_T(\lambda) \right]^{s-1}. \qquad (4.16)$$

Substituting (4.16) into the bound (4.14) we find

$$P^{(1)}_{\text{error}} \leq e^{\lambda(s-1)} \left[ (d_r - 1)\, M_T(\lambda) \right]^{s-1} = \left[ (d_r - 1)\, e^{\lambda} M_T(\lambda) \right]^{s-1}. \qquad (4.17)$$

Let $\zeta = (d_r - 1)\, e^{\lambda} M_T(\lambda)$. Clearly, we need $\zeta < 1$. Computing $M_T(\lambda)$ is the only part of the proof that differs from the proof of Theorem 5, and it varies among the methods DTOR, OTDR, and DTDR. Specifically:

a) In DTOR, during each phase, a node may hear a given neighbor only if that neighbor aims its beam toward it, which occurs with probability $1/\gamma_t$, and thus

$$M_T(\lambda) = \mathbb{E}\, e^{-\lambda \xi_T} = \frac{p}{\gamma_t} e^{\lambda} + \left( 1 - \frac{p}{\gamma_t} \right) e^{-\lambda \Delta}. \qquad (4.18)$$

Hence, to force $\zeta < 1$ we require $d_r < 1 + \gamma_t / p$, i.e., $\beta_r \leq \gamma_t$.

b) Recall that in OTDR nodes use omni-directional transmitters, and thus a node can hear all the neighbors in the direction of its receiving antenna beam. Thus,

$$M_T(\lambda) = \mathbb{E}\, e^{-\lambda \xi_T} = \frac{p}{\gamma_r} e^{\lambda} + \left( 1 - \frac{p}{\gamma_r} \right) e^{-\lambda \Delta}. \qquad (4.19)$$

Hence, to force $\zeta < 1$ we need $d_r < 1 + \gamma_r / p$, i.e., $\beta_r \leq \gamma_r$.
c) Finally, for DTDR, when transmitting, nodes scan the whole surroundings in a random order, but when receiving they point their beam in a random direction; thus the probability of hearing a neighbor is $1/(\gamma_t \gamma_r)$, which gives

$$M_T(\lambda) = \mathbb{E}\, e^{-\lambda \xi_T} = \frac{p}{\gamma_t \gamma_r} e^{\lambda} + \left( 1 - \frac{p}{\gamma_t \gamma_r} \right) e^{-\lambda \Delta}. \qquad (4.20)$$

Hence, to force $\zeta < 1$ we need $d_r < 1 + \gamma_t \gamma_r / p$, i.e., $\beta_r \leq \gamma_t \gamma_r$.

This completes the derivation of (4.5), with the difference that $d_\ell$ is changed to $d_\ell - 1$ due to the omission of the slots (rows) on which node $v_0$ transmits; we discuss this below in more detail. The constraints on $\beta_r$ ensure $\zeta < 1$, and $s$ grows unbounded as $T \to \infty$; $T$ can be made large as $n \to \infty$, since we consider a sequence of LDPC codes with large girth $g$, given by (4.3). Hence, $P^{(1)}_{\text{error}}$ vanishes for large $n$. To bound the probability of detecting all neighbors, we use the union bound. That is, $P^{(1)}_{\text{error}}$ is the probability of missing a given neighbor during a phase, and this probability is the same for all phases. Thus, the probability of not perfectly detecting the in-reach neighborhood, i.e., the probability of missing at least one neighbor, is bounded as

$$P_{\text{error}} \leq k\, P^{(1)}_{\text{error}} \leq n\, P^{(1)}_{\text{error}}. \qquad (4.21)$$

As mentioned, in our construction the girth is lower-bounded by $\kappa \log(n) / \log((d_r - 1)(d_\ell - 1))$ for some $\kappa > 0$ as $n \to \infty$. Using this in (4.17) together with (4.21), we have

$$P_{\text{error}} \leq n\, \exp\!\left( \exp\!\left( \left( \frac{\kappa \log(n)}{2 \log((d_r - 1)(d_\ell - 1))} - 1 \right) \log(d_\ell - 1) \right) \log \zeta \right). \qquad (4.22)$$

Since $\log \zeta < 0$, $P_{\text{error}} \downarrow 0$ as $n \to \infty$. As mentioned, due to the half-duplex constraint, the slots on which node $v_0$ is "on" cannot be observed by the receiver of $v_0$, and the corresponding check-nodes must be removed from the code graph. In this case, we have to show that our bound is still valid. Let $\mathcal{C}(v_0)$ denote the removed check-nodes, i.e., the check-nodes connected to bit-node $v_0$.
For $g > 4$ (which is always verified for sufficiently large $n$), no other bit-node can be connected to more than a single check-node in $\mathcal{C}(v_0)$; otherwise, we would have a "butterfly", i.e., a cycle of length 4, in the graph. It follows that by removing the check-nodes in $\mathcal{C}(v_0)$, all the remaining check-nodes in $\mathcal{T}_T(v_r)$ preserve their degree $d_r$, while the bit-nodes in $\mathcal{T}_T(v_r)$ may have their degree $d_\ell$ decreased by at most 1. As a result, our bound can be made valid by replacing $d_\ell$ by $d_\ell - 1$, as done in (4.5). It is then sufficient to choose $d_\ell > 3$ (which is compatible with (4.4)), and the proof is complete.

5 Interference in Device-to-Device Communications

5.1 Introduction

Most modern wireless communication systems are interference-limited [Mol10], since the high demand for wireless data necessitates dense deployments and thus a significant amount of interference. The computation of the interference distribution is therefore of great practical importance and has been investigated at length in the literature [MG82, NA87, PK91, ADB+91, PK93]. It turns out that the geometry of the considered system has a significant impact. For cellular systems with a regular (hexagonal) cell structure, Alouini and Goldsmith [AG99] derived best-case and worst-case SINR (signal-to-interference-and-noise ratio) and area spectral efficiency in the cellular uplink. Later papers investigated the impact of power control (e.g., [DM15] and references therein). For peer-to-peer systems, Win et al. [WPS09] investigated the distribution of interference from randomly deployed interferers at a victim device. The theory of stochastic geometry [Hae12] has been used as the basis for deriving the SINR distribution in randomly deployed networks, both single-layer (as, e.g., in a peer-to-peer network) and multi-layer, e.g., randomly deployed femto base stations (BSs) overlaid on randomly deployed macro-BSs.
Here we turn our attention to the interference situation in device-to-device (D2D) communications, in which devices (also known as cellular user equipments, UEs, or mobile stations, MSs) communicate directly with each other, possibly under the control (but without active participation) of the BS ([DRW+09], [Mol10] and references therein). This can give rise to a mixed deterministic-stochastic geometry: the frequency reuse (i.e., which device can use which frequency) is determined by the cellular structure, while the devices (which act as both TX and RX) are randomly distributed in the cells. Specifically, we consider a network of devices where any device can communicate with the other devices within its cell, provided that they are within communication range. Further, we assume devices within the same cell are assigned the same frequency band. Hence, the amount of interference any receiver experiences depends on the devices transmitting within the cell and on the frequency reuse distance, i.e., on the transmissions in other cells operating in the same frequency band. Note that the nature of interference in D2D systems is different from that in uplink or downlink systems. In the latter, since the BS assigns frequency sub-bands to devices, there is no intra-cell interference, and the only cause of interference is co-channel interference, generated by devices in other cells operating in the same frequency band. Obviously, increasing the frequency reuse distance decreases the received interference. In a D2D system, however, intra-cell interference is also present, as other devices within the same cell may operate on the same frequency. Clearly, increasing the reuse distance has no effect on the intra-cell interference. The remainder of this chapter is organized as follows: Section 5.2 describes the system and channel model, including the approximations of the cell geometry.
Sections 5.3 and 5.4 formulate the interference and derive the moments and characteristic function of the aggregate interference. Section 5.5 obtains the SINR and outage probability, while Section 5.6 briefly discusses some applications of the derived results. Finally, in Section 5.7 we confirm the derivations by simulations.

[Figure 5.1: Geometric scheduling with (a) $|C| = 4$ and (b) $|C| = 7$ classes.]

5.2 Channel and System Models

5.2.1 Cellular System and Frequency Reuse

We consider a cellular communication system with a regular hexagonal cell structure and finite frequency reuse. (Recall that a valid reuse factor is $|C| = i^2 + ij + j^2$ for $i, j \in \mathbb{Z}_+$; see [MD79, Mol10], Chapter 17.) Note that our framework can be used to analyze the interference for any reuse factor $|C|$. Two examples, $|C| = 4, 7$, are shown in Fig. 5.1, where the frequency reuse is over the cells with the same color, i.e., the whole available bandwidth is divided into $|C|$ sub-bands, and the cells with the same color use the same sub-band. We use the term "cluster" both for the cells operating in the same frequency band and for the devices inside those cells (note that in the literature the term "cluster" is sometimes instead used for a group of cells that do not use the same frequency). Further, for a cell, its cluster refers to the cluster it belongs to. We denote the clusters by $C_1, \dots, C_{|C|}$. The size of the hexagon is determined by the distance $d$ from the base station (located at the center) to the farthest point in the cell.

5.2.2 Distribution of the nodes

Devices are distributed according to a Poisson distribution with density $\lambda$. Further, in each time slot a fraction $p$ of the nodes is scheduled on average. Hence, the spatial distribution of the transmitting nodes is a homogeneous Poisson point process (PPP) with parameter $\mu = p\lambda$, i.e., the probability of having $k$ nodes inside an area $\mathcal{A}$ transmitting during a
time slot is

$$P(k \text{ in } \mathcal{A}) = \frac{ (\mu |\mathcal{A}|)^k }{ k! }\, e^{-\mu |\mathcal{A}|}. \qquad (5.1)$$

Note that $\mu$ can be considered the spatial density of the "scheduled" nodes, in nodes per unit area. We assume in-band transmission, such that the devices employ a frequency allocation scheme within each cell that follows the same reuse pattern. Further, we assume the employed scheduling protocol does not allow other devices to transmit within some distance $d_0$ of a receiving device. This is possible through collision sensing [WTS+10] or via a centralized controller [NA14]. Note that this assumption is crucial: without considering the effect of the scheduling algorithm, the intra-cell interference would greatly dominate the total interference, and the inter-cell interference would become irrelevant.

5.2.3 Wireless Channel

We consider a channel model similar to the one used in [WPS09]. That is, the following relationship governs the transmitted and received powers, namely $P_t$ and $P_r$:

$$P_r = P_t \prod_k Z_k\; r^{-\alpha}, \qquad (5.2)$$

where $\alpha$ is the pathloss exponent, and the $Z_k$'s are random variables describing propagation effects such as small-scale and large-scale fading. The pathloss exponent can in principle take on arbitrary (positive) values, though in practice it lies in $\alpha \in [1.6, 4]$ [Mol10]. Further, we assume the scheduling algorithm (through a central controller like the BS or through distributed collision sensing) does not allow any other activity within the radius $d_0$ of the receiver. Similar to [WPS09], we consider four different cases of propagation effects. The model, as shown in [WPS09], is general and accounts for various channel attributes.

1. Only path loss: $Z_1 = 1$.

2. Path loss and small-scale Nakagami-$m$ fading: $Z_1 = \beta^2$, where $\beta^2 \sim \mathcal{G}(m, 1/m)$.

3. Path loss and log-normal shadowing: $Z_1 = e^{2\sigma G}$, where $G \sim \mathcal{N}(0, 1)$ and $\sigma$ is the shadowing parameter. It is common practice to use the logarithmic form (3.17) for log-normal shadowing.
In this case, the log-normal shadowing becomes a normal random variable with distribution $\mathcal{N}(0, \sigma_{dB}^2)$, where $\sigma_{dB} = (20/\ln 10)\, \sigma$ usually ranges between 6 and 12 [Mol10].

4. All together, i.e., path loss, Nakagami-$m$ fading, and log-normal shadowing: $Z_1 = \beta^2$ with $\beta^2 \sim \mathcal{G}(m, 1/m)$, and $Z_2 = e^{2\sigma G}$ with $G \sim \mathcal{N}(0, 1)$.

Here $\mathcal{N}(\eta, \sigma^2)$ denotes a normal distribution with mean $\eta$ and variance $\sigma^2$. Further, $\mathcal{G}(x, \theta)$ denotes the gamma distribution with mean $x\theta$ and variance $x\theta^2$, and $\beta$ denotes a realization of a random variable distributed accordingly. For further use, we let $Z = \prod_k Z_k$ (with realization $z$) denote the product of the propagation effects.

5.2.4 Approximating the area of a cell

In order to provide descriptive statistics for the interference, we approximate the area of a cell by another geometrical shape to ease the derivations. This, however, is not an easy task, and thus many approximations have been proposed [AG99, HR86, MD79, NS96]. Some of these geometrical approximations [AG99, HR86], though easy to integrate over, are not area-preserving and provide a loose estimate of the interference. Other approximations [NS96, ZLCP11], while being area-preserving and providing a tight approximation, do not offer much computational benefit. Here, we try to offer the best of both worlds. The principle of our approximation is area preservation, i.e., for a hexagonal cell with radius $d$ and area $\frac{3\sqrt{3}}{2} d^2$, our approximation has a slightly modified shape (which eases computations) but the same area (which preserves the statistics of the number of devices, and thus interferers, in the considered area). For easier analysis, we divide the received interference at a given node $v$ into two components: intra-cell and inter-cell. The former corresponds to the interference caused by the devices located in the same cell, while the latter is produced by the nodes from other cells.
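The four propagation cases can be sampled as follows; this is a sketch assuming the normalized parameterizations above (gamma with shape $m$ and scale $1/m$, unit-variance Gaussian for shadowing), and all function and parameter names are ours:

```python
import math
import random

def received_power(r, alpha=4.0, P_t=1.0, m=None, sigma=None, rng=random):
    """Sample P_r = P_t * Z * r^{-alpha} for one link of length r.
    m=None and sigma=None give pure path loss (case 1); setting them adds
    Nakagami-m power fading and/or log-normal shadowing (cases 2-4)."""
    Z = 1.0
    if m is not None:                          # Nakagami-m power fading
        Z *= rng.gammavariate(m, 1.0 / m)      # shape m, scale 1/m: unit mean
    if sigma is not None:                      # log-normal shadowing
        Z *= math.exp(2 * sigma * rng.gauss(0.0, 1.0))
    return P_t * Z * r ** (-alpha)

random.seed(0)
print(received_power(2.0))                     # 0.0625: pure path loss, 2^-4
print(received_power(2.0, m=3))                # with fading
print(received_power(2.0, m=3, sigma=0.5))     # fading plus shadowing
```

Because the fading term has unit mean, averaging many fading samples recovers the pure path-loss value.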
Figure 5.2: Approximating the area of a hexagonal cell with radius d by a disk with radius 0.9094d for a node located at a corner of the cell.

5.2.5 Intra-cell approximation

Similar to [AG99], we approximate the area of the cell a node belongs to by a disk with radius √(3√3/(2π)) d ≈ 0.9094d. Set d̃ = 0.9094d. Fig. 5.2 shows this approximation.

5.2.6 Inter-cell approximation

For a node at distance D from a cell, we approximate the area of the cell by an annulus sector with angle 2ϑ and length a, as shown in Fig. 5.3. Specifically, let L denote the line connecting the center of the cell to the node v for which we want to approximate the area of the cell. Find the point on the cell that is closest to v. Then construct an annulus sector of length a starting from this point, with angle 2ϑ and line L as the angle bisector. For the annulus sector to have area equal to that of a hexagonal cell with radius d, we require

ϑ((D + a)² − D²) = (3√3/2)d².

We set a = (3/2)d and hence

ϑ = √3·d / ((3/2)d + 2D).   (5.3)

Since the approximation brings a bigger area close to the receiver, as shown in Fig. 5.3, the interference received from the approximated area is slightly larger than the interference received from the hexagonal cell. As shown in Fig. 5.3, to use the approximation, we need to find the closest point on the cell to the receiver. This means that we should know how the cell faces the node, as shown in the upper and lower parts of Fig. 5.3. This becomes harder when we consider the location of the receiving node to be random. Thus we propose an easier approximation which only requires knowledge of the center and radius of the cell.

Figure 5.3: Approximating the area of a hexagonal cell with radius d by an annulus sector with angle ϑ = √3·d/((3/2)d + 2D) and length a = (3/2)d.
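The area-preserving property behind (5.3) can be verified directly. The sketch below (with hypothetical values d = 4, D = 24) checks that the annulus sector reproduces the hexagon area exactly:

```python
import math

def annulus_sector_params(d, D):
    """Half-angle theta and radial length a of the area-preserving
    annulus-sector approximation (eq. 5.3) of a hexagonal cell of
    radius d whose near edge is at distance D from the receiver."""
    a = 1.5 * d
    theta = d * math.sqrt(3) / (1.5 * d + 2.0 * D)
    return theta, a

d, D = 4.0, 24.0                               # hypothetical values
theta, a = annulus_sector_params(d, D)
hex_area = 1.5 * math.sqrt(3) * d ** 2         # (3*sqrt(3)/2) d^2
sector_area = theta * ((D + a) ** 2 - D ** 2)  # theta*((D+a)^2 - D^2)
```

The identity holds for any D, since ϑ((D + a)² − D²) = ϑ·(3/2)d·(2D + (3/2)d) = (3√3/2)d² by construction.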
Specifically, instead of choosing the closest point of the cell to v, choose a point at distance D such that the center of the cell is at distance D + d from v. That is, as shown in Fig. 5.4, connect the node v to the center of the cell with a line such that the line is the bisector of the angle 2ϑ of the annulus sector, with ϑ given by (5.3). Set D such that the distance of v from the center of the cell becomes D + d, and let the annulus sector have length a = (3/2)d, extending from distance D to D + (3/2)d. This way, we may use the approximation knowing only the center and radius of the cells.

Figure 5.4: The second approximation for the area of a hexagonal cell with radius d, based solely on the knowledge of the center and radius of the cells.

5.3 Aggregate Interference

In this section, we define and formulate the aggregate interference for both D2D and uplink systems. Then, we characterize the first and second moments of the aggregate interference. As mentioned, we assume that on average a fraction p of the devices transmit in each slot. Using the Poisson approximation for the binomial distribution, the number of “transmitting” nodes in cell A can be approximated by a Poisson point process with spatial density λ = pρ, where ρ is the density of scheduled devices, which plays the role of λ in (5.1).

Let {x_i}_{i=1}^∞ denote the sequence of locations of the random points of a two-dimensional Poisson process with spatial density λ. Further, define {R_i}_{i=1}^∞ to be the sequence of distances of these points from the location of the receiver v, as we map the location of v to the origin. Also, let {Z_i}_{i=1}^∞ be a sequence of i.i.d. real non-negative random variables corresponding to the propagation effects, independent of the distances. The aggregate interference introduced by a particular cell A at node v is thus

Y_v(A) = Σ_{i=1}^∞ Z_i R_i^{−α} 1_A(x_i),   (5.4)

where 1_A(x_i) is an indicator function which is one if x_i ∈ A and zero otherwise.
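The shot-noise sum (5.4) is straightforward to simulate. A minimal Monte Carlo sketch for the pathloss-only case (Z_i = 1), with the cell replaced by an annulus d_0 ≤ r ≤ R around the receiver and hypothetical parameter values:

```python
import math
import random

def poisson(mean, rng):
    # Knuth's method; adequate for the small means used here
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_Y(lam, d0, R, alpha, rng):
    """One draw of Y_v(A) in (5.4) for pathloss only (Z_i = 1), with the
    interferers forming a Poisson process of density lam on d0 <= r <= R."""
    count = poisson(lam * math.pi * (R ** 2 - d0 ** 2), rng)
    total = 0.0
    for _ in range(count):
        # radius with density proportional to r on [d0, R]
        r = math.sqrt(d0 ** 2 + rng.random() * (R ** 2 - d0 ** 2))
        total += r ** (-alpha)
    return total

rng = random.Random(1)
lam, d0, R, alpha = 0.2, 1.0, 4.0, 4.0        # hypothetical parameters
draws = [sample_Y(lam, d0, R, alpha, rng) for _ in range(20_000)]
# closed-form mean for this geometry: 2*pi*lam*(R^{2-a} - d0^{2-a})/(2-a)
mean_cf = 2 * math.pi * lam * (R ** (2 - alpha) - d0 ** (2 - alpha)) / (2 - alpha)
```

The sample mean of the draws agrees with the closed-form mean, which is the same radial integral that appears in the moment formulas of the next subsection.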
The aggregate interference Y_v(C_1) in (5.6) is the sum of the interferences from the different cells belonging to the cluster C_1, i.e.,

Y_v(C_1) = Σ_{A∈C_1} Y_v(A).   (5.5)

Putting it all together, the aggregate interference at node v from the scheduled nodes located in the different cells of cluster C_1 is

Y_v(C_1) = Σ_{A∈C_1} Σ_{i=1}^∞ Z_i R_i^{−α} 1_A(x_i),   (5.6)

where we slightly abuse notation, as C_1 denotes both a random variable representing the first cluster and the area covered by the cells belonging to the first cluster. For easier analysis, we divide the received interference at node v into two components: intra-cell and inter-cell. The former corresponds to the interference caused by non-neighbors belonging to the same cell as node v, while the latter is produced by non-neighbors from other cells.

5.3.1 Mean and Variance of the Aggregate Interference

We start by deriving the mean and variance of the aggregate interference for a node v, taking as specific examples a node v_0 at the center of a cell and a node v_c at a corner. We slightly extend our notation: Y_v(A(D)) denotes the interference received at node v from a particular cell A located at distance D from v, where D = 0 corresponds to the cell node v belongs to, i.e., Y_v(A(0)) denotes the intra-cell interference. As is customary in such derivations, we map the location of the receiver v to the origin. As shown in [BB09, Hae12], the mean and variance of the interference received at node v are given by

μ_v(A) = λ ∫_A ∫ z r^{−α} f_Z(z) r dz dr dθ = λ E[Z] ∫_A r^{1−α} dr dθ,   (5.7)

σ_v²(A) = λ ∫_A ∫ z² r^{−2α} f_Z(z) r dz dr dθ,   (5.8)

where f_Z(z) is the probability density function of the propagation effects, and we normalize the transmission power of the mobile devices to 1, i.e., P_t = 1.
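A small sketch checking (5.7) and (5.8) numerically for the simplest geometry (a full annulus d_0 ≤ r ≤ R around the receiver, pathloss only): the closed forms below follow by carrying out the radial integral, and are compared against midpoint quadrature with hypothetical parameter values.

```python
import math

lam, EZ, EZ2 = 0.2, 1.0, 1.0   # hypothetical density; Z = 1 (pathloss only)
d0, R, alpha = 1.0, 4.0, 3.0   # protection radius, outer radius, exponent

# Closed forms of (5.7)/(5.8) for a full annulus d0 <= r <= R:
mu_cf  = lam * EZ  * 2 * math.pi * (R ** (2 - alpha)     - d0 ** (2 - alpha))     / (2 - alpha)
var_cf = lam * EZ2 * 2 * math.pi * (R ** (2 - 2 * alpha) - d0 ** (2 - 2 * alpha)) / (2 - 2 * alpha)

def quad(f, lo, hi, n=100_000):
    """Midpoint-rule quadrature of f on [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

mu_num  = lam * EZ  * 2 * math.pi * quad(lambda r: r ** (1 - alpha),     d0, R)
var_num = lam * EZ2 * 2 * math.pi * quad(lambda r: r ** (1 - 2 * alpha), d0, R)
```

The quadrature and the closed forms agree to high precision, so for irregular regions (where no closed form exists) the same quadrature can simply be applied over both coordinates.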
For the D2D system, starting with the intra-cell interference, for a node v at distance R_v from the center of the cell, the mean aggregate intra-cell interference is

μ_v(A(0)) = 2λE[Z] ∫_0^π ∫_{d_0}^{max(d_0, √(d² + R_v² + 2R_v d cos θ))} r^{1−α} dr dθ.   (5.9)

For example, for the nodes located at the center and at a corner of a cell, denoted v_0 and v_c respectively, the above reduces to

μ_{v_0}(A(0)) = 2λE[Z] ∫_0^π ∫_{d_0}^{d} r·r^{−α} dr dθ = λπE[Z] (d^{2−α} − d_0^{2−α}) / (1 − α/2),

μ_{v_c}(A(0)) = λE[Z] ∫_{−π/2}^{π/2} ∫_{d_0}^{max(d_0, 2d cos θ)} r·r^{−α} dr dθ.

In a similar manner we can easily compute the variance of the intra-cell interference from (5.8). For the inter-cell interference, we consider a cell at distance D from node v. The mean inter-cell interference at v is given by

μ_v(A(D)) = λE[Z] ∫_{−ϑ}^{ϑ} ∫_D^{D+(3/2)d} r·r^{−α} dr dθ = λϑ E[Z] ((D + (3/2)d)^{2−α} − D^{2−α}) / (1 − α/2),   (5.10)

where ϑ is given in (5.3). Similarly, we can compute the inter-cell interference variance. Summing the intra-cell and all the inter-cell interferences, we can compute the average aggregate interference a node receives. For example, consider a cellular system with reuse factor |C| = 4 where v_0 is located at the center of a cell. Then, the average aggregate interference the node v_0 receives in a D2D system is

μ_{v_0}(C_1) = μ_{v_0}(A(0)) + Σ_{A∈C_1} μ_{v_0}(A(D)) = μ_{v_0}(A(0)) + 6μ_{v_0}(A(2√3 d)) + 6μ_{v_0}(A(4√3 d)) + ⋯ .

5.4 Characteristic function of the aggregate interference

In this section, we find the characteristic function of Y_v(A), denoted by φ_{Y,v}(w; A) = E[e^{jwY_v(A)}].
To do so, we employ the well-known Campbell's theorem [Kin92]:

φ_Y(w; A) = exp( −λ ∫∫_A ∫_z (1 − e^{jwz r^{−α}}) f_Z(z) dz r dr dθ ) = exp( −λ ∫∫_A (1 − φ_z(w r^{−α})) r dr dθ ),   (5.11)

where φ_z is the characteristic function of Z. Hence, the characteristic function of the intra-cell aggregate interference becomes

φ_{Y,v}(w; A(0)) = exp( −2λ ∫_0^π ∫_{d_0}^{max(d_0, √(d² + R_v² + 2R_v d cos θ))} (1 − φ_z(w r^{−α})) r dr dθ ).

For example, for a corner node v_c and ϑ = cos^{−1}(d_0/(2d)) we have

φ_{Y,v_c}(w; A(0)) = exp( −λ ∫_{−ϑ}^{ϑ} ∫_{d_0}^{2d cos θ} (1 − φ_z(w r^{−α})) r dr dθ ).

We may use the variable change t = |w| r^{−α} to get

φ_{Y,v}(w; A(0)) = exp( −(2λ|w|^{2/α}/α) ∫_0^π ∫_{min(|w| d_0^{−α}, |w|(d² + R_v² + 2R_v d cos θ)^{−α/2})}^{|w| d_0^{−α}} (1 − E[e^{j sign(w) z t}]) t^{−1−2/α} dt dθ ).

The integral does not have a closed-form solution, but it can be computed numerically. We can, however, simplify it using the following relationship [AS06]:

∫ (1 − e^{jzt}) t^{−1−δ} dt = −t^{−δ}/δ + (−jz)^δ G(−δ, −jzt),   (5.12)

where δ = 2/α and G(a, z) is the upper incomplete gamma function defined as [AS06]

G(a, z) = ∫_z^∞ t^{a−1} e^{−t} dt.   (5.13)

For the aggregate inter-cell interference received from a cell at distance D from node v, denoted by A(D), we have

φ_{Y,v}(w; A(D)) = exp( −λ ∫_{−ϑ}^{ϑ} ∫_D^{D+(3/2)d} (1 − φ_z(w r^{−α})) r dr dθ ),

where ϑ is given in (5.3). The characteristic function of the aggregate interference at node v is

φ_{Y,v}(w; C_1) = ∏_{A∈C_1} φ_{Y,v}(w; A),   (5.14)

based on which we can find the cumulative distribution function F_Y(y), i.e., the probability that the aggregate interference at v is less than y. Unfortunately, to the best of our knowledge, no closed-form solution exists for F_Y(y), except for some special simple cases. However, it can be computed numerically.

5.5 Signal-to-Interference-Plus-Noise Ratio

The SINR associated with node v is

SINR(v) = S_v / (Y_v(C_1) + η),   (5.15)

where S_v is the received power at v from the corresponding transmitter and η is the constant normalized noise power.
By (5.2), the desired signal power is

S_v = P_s Z_s r_s^{−α},   (5.16)

where the subscript s refers to the desired signal, received with power S_v from a transmitter located at distance r_s. As before, since all nodes transmit with the same power, we normalize the transmission power of all devices to 1, and hence P_s = 1. To simplify the derivation, we fix the transmitter's distance r_s. As given by (5.2), the power received from a device located at distance r_s has distribution

P(S ≤ s) = P(Z_s r_s^{−α} ≤ s).   (5.17)

Here, we are interested in the probability P(SINR_v ≥ T), i.e., the probability that the SINR exceeds a certain threshold T. Note that this is closely related to the outage probability, defined as P(log(1 + SINR) < C). By the law of total probability with respect to the random variables Z_s and Y, we have

P(SINR ≥ T) = E_Y[ P_{Z_s}(Z_s ≥ r_s^α T(Y + η) | Y) ].   (5.18)

We may reformulate (5.18) to get

P(SINR ≥ T) = 1 − P(SINR < T) = 1 − E_Y[ P_{Z_s}(Z_s < r_s^α T(Y + η) | Y) ] = 1 − E_Y[ F_Z(r_s^α T(Y + η) | Y) ],   (5.19)

where F_Z(·) is the CDF of the propagation effect Z. Changing the order of the operations, we can alternatively get

P(SINR ≥ T) = E_{Z_s}[ F_Y( Z_s r_s^{−α}/T − η ) ],   (5.20)

where F_Y is the CDF of the interference Y. Let us analyze the SINR for the different cases of the propagation effects:

1. Path loss only: in this case Z = 1, and from (5.20) we have

P(SINR ≥ T) = F_Y( r_s^{−α}/T − η ).   (5.21)

2. Path loss and Nakagami-m fading: in this case, (5.18) becomes

P(SINR ≥ T) = (1/Γ(m)) E_Y[ G(m, r_s^α T(Y + η) m) ],   (5.22)

where G(·,·) is the upper incomplete gamma function defined in (5.13).

3. Path loss and log-normal shadowing: in this case, (5.20) becomes

P(SINR ≥ T) = E_Y[ Q( (1/(2σ)) ln(r_s^α T(Y + η)) ) ],   (5.23)

where Q(·) is the Gaussian Q-function.

4. Path loss, Nakagami-m fading, and log-normal shadowing:

P(SINR ≥ T) = (1/Γ(m)) E_{Y,G_s}[ G(m, r_s^α T(Y + η) m e^{−2σG_s}) ].   (5.24)

5.6 Applications

In this section we discuss some extensions and applications of the theory derived in the previous sections.
Note that deriving the exact distribution of the interference, and accordingly of the SINR, remains computationally demanding as the number of cells causing inter-cell interference increases. We may, however, bound the probability of the aggregate interference through the well-known Chernoff bound. Specifically,

P(Y > y) ≤ E_Y[e^{ϖY}] e^{−ϖy} = e^{−ϖy} φ_Y(−jϖ; C_1).   (5.25)

Note that this is a bound on the approximation and does not serve as a bound on the true interference probability. However, if the bound is tight, it may itself be used as a new approximation. As shown, we have already derived E_Y[e^{ϖY}] = φ_Y(−jϖ; C_1). Similarly, we can bound the probability of the SINR:

P(SINR > T) = P( S/(Y + η) > T ) = E_S[ P(Y > S/T − η | S) ] ≤ E_S[ e^{−ϖ(S/T − η)} E_Y[e^{ϖY}] ] = e^{ϖη} φ_z( jϖ/(T r_s^α) ) φ_Y(−jϖ; C_1),   (5.26)

where we used E_Y[e^{ϖY}] = φ_Y(−jϖ; C_1). The above bounds can be optimized over ϖ > 0.

Another byproduct of our result is the analysis of the reuse distance. Such a problem is analyzed for the cellular uplink in [AG99], where the authors use Monte-Carlo simulations. Note that the interference in the D2D system differs from the cellular one: solely increasing the reuse distance will not decrease the interference, as the intra-cell interference grows. Here, instead of using the Monte-Carlo method and comparing with the best and worst cases as in [AG99], we can find the optimal d with respect to the expectation of the SINR over the interference and the location of the receiver (our derivation only suffers from the geometric approximation we used). The optimal cell radius d for the D2D network is given by

d* = arg max_d E_{x_v}[ E[SINR(v) | X_v] ] = arg max_d E_{x_v}[ E_{Y_v}[ (1/(Y_v + η)) E[S | X_v, Y_v] | X_v ] ],   (5.27)

where x_v is the location of the receiver v and E_{x_v} is taken over the area of the central cell. Note that we fix the location of the transmitter with respect to the receiver at distance r_s, i.e., we assume the link length is fixed. Hence, E[S | X_v, Y_v] = r_s^{−α}.
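A sketch of the bound (5.25) for the pathloss-only intra-cell case at the cell centre, with the MGF E[e^{ϖY}] obtained from Campbell's theorem and the bound minimized over a small grid of ϖ (hypothetical parameters; the grid is capped to keep the exponent in a numerically safe range):

```python
import math

def mgf_Y(s, lam, d0, R, alpha, n=4000):
    """E[e^{sY}] via Campbell's theorem for the pathloss-only intra-cell
    interference at the cell centre (Z = 1, interferers on d0 <= r <= R).
    The MGF exists here because each term is bounded by d0^{-alpha}."""
    h = (R - d0) / n
    integ = 0.0
    for i in range(n):
        r = d0 + (i + 0.5) * h
        integ += (math.exp(s * r ** (-alpha)) - 1.0) * r * h
    return math.exp(lam * 2.0 * math.pi * integ)

def chernoff_tail(y, lam, d0, R, alpha):
    """Grid search over varpi > 0 of the bound (5.25):
    P(Y > y) <= min_varpi e^{-varpi*y} E[e^{varpi*Y}]."""
    grid = [0.25 * k for k in range(1, 21)]        # varpi in (0, 5]
    return min(math.exp(-s * y) * mgf_Y(s, lam, d0, R, alpha) for s in grid)

lam, d0, R, alpha = 0.2, 1.0, 4.0, 4.0             # hypothetical parameters
bound_2 = chernoff_tail(2.0, lam, d0, R, alpha)    # tail bound at y = 2
bound_4 = chernoff_tail(4.0, lam, d0, R, alpha)    # tail bound at y = 4
```

As expected, the bound is nontrivial (below one) for y above the mean interference, and it decreases as y grows.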
5.7 Experimental Results

Here, we analyze and verify some of our findings through simulations. We consider λ|A| = 10, i.e., on average 10 nodes are scheduled in each cell. It is common practice to keep the area of integration at least one unit of distance away from the receiver, as this prevents the numerical analysis from overshooting due to small values of the denominator. Hence, we normalize the protection distance to d_0 = 1. Further, we assume the cell radius is d = 4d_0, and we locate our receiver v at [2.4, 0]^T in a cell centered at the origin. For the propagation parameters, we set the pathloss exponent to α = 2, the Nakagami fading parameter to m = 3, and the log-normal shadowing parameter to σ_dB = 9.

Let us start by observing the mean (5.7) and the variance (5.8) of the aggregate interference with respect to the reuse distance D. Figs. 5.5 and 5.6 show the mean and the variance as functions of the ratio D/d for different pathloss exponents α. As shown, when the reuse distance increases, the dominant factor becomes the intra-cell interference, and the curves converge to the mean and variance of the intra-cell interference.

Figure 5.5: The mean μ_v(C_1) of the aggregate interference as a function of the reuse distance for the fixed cell radius d (curves for α = 2, 3, 4).

Let us now investigate the efficiency and precision of our approximations. To do so, we compare the cumulative distribution function given by our approximation with the cumulative sum of the histogram obtained from simulation. We start by analyzing the inter-cell interference received from a single cell centered at [12, 0]^T, i.e., D = 6. Figs. 5.7, 5.8, and 5.9 show the comparison for the different propagation effects, i.e., cases 1, 2, and 3, respectively. As shown, the suggested approximation closely follows the simulation results and demonstrates high precision.
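The numerical evaluation of characteristic functions such as (5.11), which underlies these comparisons, can be sketched as follows for the pathloss-only intra-cell case at the cell centre (hypothetical parameters):

```python
import cmath
import math

def cf_Y(w, lam, d0, R, alpha, n=20_000):
    """Numerical evaluation of (5.11) for pathloss-only intra-cell
    interference at the cell centre:
    phi_Y(w) = exp(-lam * 2*pi * int_{d0}^{R} (1 - e^{j*w*r^{-alpha}}) r dr)."""
    h = (R - d0) / n
    integ = 0j
    for i in range(n):
        r = d0 + (i + 0.5) * h
        integ += (1.0 - cmath.exp(1j * w * r ** (-alpha))) * r * h
    return cmath.exp(-lam * 2.0 * math.pi * integ)
```

Two cheap consistency checks are available: φ_Y(0) = 1, and the numerical derivative at the origin recovers the mean interference, since φ_Y'(0) = jE[Y].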
Note that to compute the characteristic function for the second case, we used the sharp approximation suggested in [AJRN14]. The fourth case is computationally more challenging and is thus skipped.

Secondly, we analyze the cumulative distribution function of the aggregate inter-cell interference for the reuse distance D = 3. Figs. 5.10, 5.11, and 5.12 show the comparison for the different propagation effects, i.e., cases 1, 2, and 3, respectively. Again, the suggested approximation closely follows the simulation results and demonstrates high precision.

Finally, we analyze the aggregate interference, i.e., the sum of the inter-cell and intra-cell interference, for the reuse distance D = 3. Fig. 5.13 shows the comparison of the aggregate interference for case 1. Unfortunately, the computational cost of computing the aggregate interference for the other cases is substantial. We can, however, use the bound in (5.25): if we find a ϖ for which the bound in (5.25) is tight, then we can use the bound as a less computationally involved approximation. Figs. 5.14 and 5.15 show the performance of these bounds for cases 2 and 3, respectively. As shown, the bound performs worse than the approximation, but it still gives an acceptable result given the computational relief it provides.

Figure 5.6: The scaled variance σ_v²(C_1)/E[Z²] of the aggregate interference as a function of the reuse distance for the fixed cell radius d (curves for α = 2, 3, 4).

Now, given the approximate cumulative distribution function for the different channels, we can easily use the formulas of Section 5.5 and find the SINR statistics.

Figure 5.7: Inter-cell interference received from a single cell at distance D = 6 for case 1, i.e., the pathloss-only scenario.
Figure 5.8: Inter-cell interference received from a single cell at distance D = 6 for case 2, i.e., the pathloss and Nakagami-m fading scenario.

Figure 5.9: Inter-cell interference received from a single cell at distance D = 6 for case 3, i.e., the pathloss and log-normal shadowing scenario.

Figure 5.10: Inter-cell interference received from the first-tier neighboring cells for the reuse distance D = 3 and propagation case 1, i.e., the pathloss-only scenario.

Figure 5.11: Inter-cell interference received from the first-tier neighboring cells for the reuse distance D = 3 and propagation case 2, i.e., the pathloss and Nakagami-m fading scenario.

Figure 5.12: Inter-cell interference received from the first-tier neighboring cells for the reuse distance D = 3 and propagation case 3, i.e., the pathloss and log-normal shadowing scenario.

Figure 5.13: The aggregate interference for the reuse distance D = 3 and propagation case 1, i.e., pathloss only.
Figure 5.14: The bound (5.25) on the aggregate interference against the cumulative sum of the histogram for the reuse distance D = 3 and propagation case 2, i.e., the pathloss and Nakagami-m fading scenario.

Figure 5.15: The bound (5.25) on the aggregate interference against the cumulative sum of the histogram for the reuse distance D = 3 and propagation case 3, i.e., the pathloss and log-normal shadowing scenario.

6 Contributions and Outlook

In this chapter we briefly discuss the contributions of this dissertation and a few possible future directions. I start by listing the contributions of this dissertation:

- Introduced the first explicit construction of order-optimal measurement matrices with the null-space property.
- Constructed two novel neighbor discovery algorithms with provable guarantees.
- Introduced the first directional neighbor discovery method with a discovery period linear in the average number of neighbors and a provable recovery guarantee.
- Introduced a novel geometric approximation for the area of the hexagonal cell, through which we analyzed the interference for different propagation effects.

As for future directions, it would be interesting to extend the neighbor discovery methods to incorporate proximity services; for example, each device could search nearby devices for common applications. For the interference analysis, it would be interesting to find the optimal cell radius given a fixed reuse factor.

Bibliography

[AA07] Tsvetan Asamov and Nuh Aydin. LDPC codes of arbitrary girth. In Information Theory, 2007. CWIT'07. 10th Canadian Workshop on, pages 69–72. IEEE, 2007.

[ADB+91] Adnan Abu-Dayya, Norman C Beaulieu, et al.
Outage probabilities of cellular mobile radio systems with multiple Nakagami interferers. Vehicular Technology, IEEE Transactions on, 40(4):757–768, 1991.

[ADS09] S. Arora, C. Daskalakis, and D. Steurer. Message passing algorithms and improved LP decoding. In Proceedings of the 41st annual ACM symposium on Theory of computing, pages 3–12. ACM, 2009.

[AG99] M-S Alouini and Andrea J Goldsmith. Area spectral efficiency of cellular mobile radio systems. Vehicular Technology, IEEE Transactions on, 48(4):1047–1066, 1999.

[AJRN14] Søren Asmussen, Jens Ledet Jensen, and Leonardo Rojas-Nandayapa. On the Laplace transform of the lognormal distribution. Methodology and Computing in Applied Probability, pages 1–18, 2014.

[AMRU09] Abdelaziz Amraoui, Andrea Montanari, Tom Richardson, and Rüdiger Urbanke. Finite-length scaling for iteratively decoded LDPC ensembles. Information Theory, IEEE Transactions on, 55(2):473–498, 2009.

[AS06] Milton Abramowitz and Irene A Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, 2006.

[BB09] François Baccelli and Bartlomiej Blaszczyszyn. Stochastic Geometry and Wireless Networks: Volume 1: Theory, volume 1. Now Publishers Inc, 2009.

[BDF+11] J. Bourgain, S. Dilworth, K. Ford, S. Konyagin, and D. Kutzarova. Explicit constructions of RIP matrices and related problems. Duke Mathematical Journal, 159(1):145–185, 2011.

[BDMS12] Afonso S Bandeira, Edgar Dobriban, Dustin G Mixon, and William F Sawin. Certifying the restricted isometry property is hard. arXiv preprint arXiv:1204.1580, 2012.

[BE81] Dennis J Baker and Anthony Ephremides. The architectural organization of a mobile radio network via a distributed algorithm. Communications, IEEE Transactions on, 29(11):1694–1701, 1981.

[BEM07] S.A. Borbash, A. Ephremides, and M.J. McGlynn. An asynchronous neighbor discovery algorithm for wireless sensor networks. Ad Hoc Networks, 5(7):998–1016, 2007.
[BGI+08] Radu Berinde, Anna C Gilbert, Piotr Indyk, Howard Karloff, and Martin J Strauss. Combining geometry and combinatorics: A unified approach to sparse signal recovery. In Communication, Control, and Computing, 2008 46th Annual Allerton Conference on, pages 798–805. IEEE, 2008.

[CT05] E.J. Candes and T. Tao. Decoding by linear programming. Information Theory, IEEE Transactions on, 51(12):4203–4215, 2005.

[DBIPW10a] K. Do Ba, P. Indyk, E. Price, and D.P. Woodruff. Lower bounds for sparse recovery. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1190–1197. Society for Industrial and Applied Mathematics, 2010.

[DBIPW10b] Khanh Do Ba, Piotr Indyk, Eric Price, and David P Woodruff. Lower bounds for sparse recovery. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1190–1197. Society for Industrial and Applied Mathematics, 2010.

[DeV07] R.A. DeVore. Deterministic constructions of compressed sensing matrices. Journal of Complexity, 23(4-6):918–925, 2007.

[DH01] D.L. Donoho and X. Huo. Uncertainty principles and ideal atomic decomposition. Information Theory, IEEE Transactions on, 47(7):2845–2862, 2001.

[DM15] Vikas Kumar Dewangan and Neelesh B Mehta. Timer-based distributed node selection scheme exploiting power control and capture. Wireless Communications, IEEE Transactions on, 14(3):1457–1467, 2015.

[DMM09] D.L. Donoho, A. Maleki, and A. Montanari. Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences, 106(45):18914–18919, 2009.

[Don06] D.L. Donoho. Compressed sensing. Information Theory, IEEE Transactions on, 52(4):1289–1306, 2006.

[DRW+09] Klaus Doppler, Mika Rinne, Carl Wijting, Cássio B Ribeiro, and Klaus Hugl. Device-to-device communication as an underlay to LTE-Advanced networks. Communications Magazine, IEEE, 47(12):42–49, 2009.

[DSV10a] A.G. Dimakis, R. Smarandache, and P.O. Vontobel.
Channel coding LP decoding and compressed sensing LP decoding: further connections. In Proc. 2010 Intern. Zurich Seminar on Communications, pages 3–5, 2010.

[DSV10b] A.G. Dimakis, R. Smarandache, and P.O. Vontobel. LDPC codes for compressed sensing. Information Theory, IEEE Transactions on, (99):1–1, 2010.

[DT05] D.L. Donoho and J. Tanner. Neighborliness of randomly projected simplices in high dimensions. Proceedings of the National Academy of Sciences of the United States of America, 102(27):9452, 2005.

[DV09] A.G. Dimakis and P.O. Vontobel. LP decoding meets LP decoding: a connection between channel coding and compressed sensing. In Communication, Control, and Computing, 2009. Allerton 2009. 47th Annual Allerton Conference on, pages 8–15. IEEE, 2009.

[Fos04] Marc PC Fossorier. Quasi-cyclic low-density parity-check codes from circulant permutation matrices. Information Theory, IEEE Transactions on, 50(8):1788–1793, 2004.

[FWK05] J. Feldman, M.J. Wainwright, and D.R. Karger. Using linear programming to decode binary linear codes. Information Theory, IEEE Transactions on, 51(3):954–972, 2005.

[Gal62] Robert G Gallager. Low-density parity-check codes. Information Theory, IRE Transactions on, 8(1):21–28, 1962.

[GI10] A. Gilbert and P. Indyk. Sparse recovery using sparse matrices. Proceedings of the IEEE, 98(6):937–947, 2010.

[GLR10] Venkatesan Guruswami, James R Lee, and Alexander Razborov. Almost Euclidean subspaces of \ell_1^N via expander codes. Combinatorica, 30(1):47–68, 2010.

[Hae12] Martin Haenggi. Stochastic Geometry for Wireless Networks. Cambridge University Press, 2012.

[HEA05] X.Y. Hu, E. Eleftheriou, and D.M. Arnold. Regular and irregular progressive edge-growth Tanner graphs. Information Theory, IEEE Transactions on, 51(1):386–398, 2005.

[HPS02] Zygmunt J Haas, Marc R Pearlman, and Prince Samar. The zone routing protocol (ZRP) for ad hoc networks. draft-ietf-manet-zone-zrp-04.txt, 2002.
[HR86] Daehyoung Hong and Stephen S Rappaport. Traffic model and performance analysis for cellular mobile radio telephone systems with prioritized and non-prioritized handoff procedures. Vehicular Technology, IEEE Transactions on, 35(3):77–92, 1986.

[IW07] IST-WINNER II. Deliverable 1.1.2 v. 1.2, “WINNER II channel models”. Technical report, 2007 (http://projects.celtic-initiative.org/winner+/deliverables.html).

[Kin92] John Frank Charles Kingman. Poisson Processes, volume 3. Oxford University Press, 1992.

[KRS15] Athar Ali Khan, Mubashir Husain Rehmani, and Yasir Saleem. Neighbor discovery in traditional wireless networks and cognitive radio networks: Basics, taxonomy, challenges and future research directions. Journal of Network and Computer Applications, 52:173–190, 2015.

[KSTDH11] A. Khajehnejad, A. Saber Tehrani, A.G. Dimakis, and B. Hassibi. Explicit matrices for sparse approximation. In Information Theory Proceedings (ISIT), 2011 IEEE International Symposium on, pages 469–473. IEEE, 2011.

[KV06] R. Koetter and P.O. Vontobel. On the block error probability of LP decoding of LDPC codes. Arxiv preprint cs/0602086, 2006.

[LAGR13] Xingqin Lin, Jeffrey G Andrews, Amitava Ghosh, and Rapeepat Ratasuk. An overview on 3GPP device-to-device proximity services. arXiv preprint arXiv:1310.0116, 2013.

[LG08] J. Luo and D. Guo. Neighbor discovery in wireless ad hoc networks based on group testing. In Communication, Control, and Computing, 2008 46th Annual Allerton Conference on, pages 791–797. IEEE, 2008.

[MB01] M.J. McGlynn and S.A. Borbash. Birthday protocols for low energy deployment and flexible neighbor discovery in ad hoc wireless networks. In Proceedings of the 2nd ACM international symposium on Mobile ad hoc networking & computing, pages 137–145. ACM, 2001.

[MD79] Verne H Mac Donald. Advanced mobile phone service: The cellular concept. Bell System Technical Journal, The, 58(1):15–41, 1979.
[MG82] R Muammar and SC Gupta. Cochannel interference in high-capacity mobile radio systems. Communications, IEEE Transactions on, 30(8):1973–1978, 1982.

[Mol10] Andreas F Molisch. Wireless Communications, volume 15. John Wiley & Sons, 2010.

[MU05] Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.

[NA87] Yoshinori Nagata and Yoshiniko Akaiwa. Analysis for spectrum efficiency in single cell trunked and cellular mobile radio. Vehicular Technology, IEEE Transactions on, 36(3):100–113, 1987.

[NA14] Navid Naderializadeh and Amir Salman Avestimehr. ITLinQ: A new approach for spectrum sharing in device-to-device communication systems. Selected Areas in Communications, IEEE Journal on, 32(6):1139–1151, 2014.

[NP12] Krishna R Narayanan and Henry D Pfister. Iterative collision resolution for slotted ALOHA: An optimal uncoordinated transmission policy. In Turbo Codes and Iterative Information Processing (ISTC), 2012 7th International Symposium on, pages 136–139. IEEE, 2012.

[NS96] Mahmoud Naghshineh and Mischa Schwartz. Distributed call admission control in mobile/wireless networks. Selected Areas in Communications, IEEE Journal on, 14(4):711–717, 1996.

[PK91] Ramjee Prasad and Adriaan Kegel. Improved assessment of interference limits in cellular radio performance. Vehicular Technology, IEEE Transactions on, 40(2):412–419, 1991.

[PK93] Ramjee Prasad and Adriaan Kegel. Effects of Rician faded and log-normal shadowed signals on spectrum efficiency in microcellular radio. Vehicular Technology, IEEE Transactions on, 42(3):274–281, 1993.

[PLC11] Enrico Paolini, Gianluigi Liva, and Marco Chiani. High throughput random access via codes on graphs: Coded slotted ALOHA. In Communications (ICC), 2011 IEEE International Conference on, pages 1–6. IEEE, 2011.

[RRS+05] Ram Ramanathan, Jason Redi, Cesar Santivanez, David Wiggins, and Stephen Polit.
Ad hoc networking with directional antennas: a complete system solution. Selected Areas in Communications, IEEE Journal on, 23(3):496–506, 2005.

[RU08] Tom Richardson and Rüdiger Leo Urbanke. Modern Coding Theory. Cambridge University Press, 2008.

[STC14] A. Saber Tehrani and G. Caire. Zigzag neighbor discovery in wireless networks. In Information Theory Proceedings (ISIT), 2014 IEEE International Symposium on. IEEE, 2014.

[STDC13] Arash Saber Tehrani, Alexandros G Dimakis, and Giuseppe Caire. Optimal measurement matrices for neighbor discovery. In Information Theory Proceedings (ISIT), 2013 IEEE International Symposium on, pages 2134–2138. IEEE, 2013.

[Ste03] Martha E Steenstrup. Neighbor discovery among mobile nodes equipped with smart antennas. In Proc. Scandinavian Workshop on Wireless Adhoc Networks, 2003.

[SXH08] M. Stojnic, W. Xu, and B. Hassibi. Compressed sensing - probabilistic analysis of a null-space characterization. In Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, pages 3377–3380. IEEE, 2008.

[Tao07] T Tao. Open question: deterministic UUP matrices. Weblog at http://terrytao.wordpress.com, 2007.

[TMBR02] Mineo Takai, Jay Martin, Rajive Bagrodia, and Aifeng Ren. Directional virtual carrier sensing for directional antennas in mobile ad hoc networks. In Proceedings of the 3rd ACM international symposium on Mobile ad hoc networking & computing, pages 183–193. ACM, 2002.

[TW67] Richard Townsend and E Weldon. Self-orthogonal quasi-cyclic codes. Information Theory, IEEE Transactions on, 13(2):183–195, 1967.

[VIN14] Cisco visual networking index: Forecast and methodology, 2014–2019. Feb 2014.

[VKT05] S. Vasudevan, J. Kurose, and D. Towsley. On neighbor discovery in wireless networks with directional antennas. In INFOCOM 2005. 24th Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings IEEE, volume 4, pages 2502–2512. IEEE, 2005.
128 Bibliography [VTGK09] Sudarshan Vasudevan, Donald Towsley, Dennis Goeckel, and Ramin Khalili. Neighbor discovery in wireless networks and the coupon collector’s prob- lem. In Proceedings of the 15th annual international conference on Mobile computing and networking, pages 181–192. ACM, 2009. [WPS09] Moe Z Win, Pedro C Pinto, and Lawrence A Shepp. A mathematical theory of network interference and its applications. Proceedings of the IEEE, 97(2):205–230, 2009. [WTS + 10] X. Wu, S. Tavildar, S. Shakkottai, T. Richardson, J. Li, R. Laroia, and A. Jovicic. Flashlinq: A synchronous distributed scheduler for peer-to-peer ad hoc networks. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 514–521. IEEE, 2010. [XH07] Weiyu Xu and Babak Hassibi. Efficient compressive sensing with determin- istic guarantees using expander graphs. In Information Theory Workshop, 2007. ITW’07. IEEE, pages 414–419. IEEE, 2007. [ZLCP11] Yanyan Zhuang, Yuanqian Luo, Lin Cai, and Jianping Pan. A geomet- ric probability model for capacity analysis and interference estimation in wireless mobile cellular systems. In Global Telecommunications Conference (GLOBECOM 2011), 2011 IEEE, pages 1–6. IEEE, 2011. [ZLG12] L. Zhang, J. Luo, and D. Guo. Neighbor discovery for wireless networks via compressed sensing. Performance Evaluation, 2012. [ZP09] F. Zhang and H.D. Pfister. On the iterative decoding of high-rate ldpc codes with applications in compressed sensing. arXiv preprint arXiv:0903.2232, 2009. 129
Abstract
Neighbor discovery in wireless networks is the problem of devices identifying the other devices with which they can communicate effectively, i.e., those whose signal is received with sufficient power. Neighbor discovery is crucial: it is the first task that must be performed in an ad-hoc or device-to-device network, and it is the prerequisite for channel estimation, scheduling, and communication.

This dissertation presents two solutions to the neighbor discovery problem: compressed sensing neighbor discovery and the ZigZag scheme. The former relies on the connection between neighbor discovery and compressed sensing: since the number K of neighbors a device has is small compared to the total number N of nodes, sparse vector approximation methods can be applied. Exploiting our recent work on optimal deterministic sensing matrices, we show that the discovery time can be reduced to K log(N/K). The latter is based on serial interference cancellation, which provides a further efficiency improvement, i.e., a discovery period on the scale of K.

We show a theoretical connection between the performance of these schemes and channel coding, through which we derive performance bounds for the methods in the ideal setting. To analyze the performance of the methods, we compare them against each other and against a random access scheme.

We further extend ZigZag discovery to support devices equipped with directional steerable-beam antennas. Such an extension is not straightforward, as devices need to scan their surroundings, and two devices must agree on becoming neighbors, since both must aim their beams at one another in future communications.

Furthermore, as neighbor discovery methods perform poorly in the presence of interference from non-neighbors, we propose a cellular-style scheduling to shield the discovery methods from such interference.
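The compressed-sensing view of discovery can be illustrated with a toy sketch (this is illustrative only, not the dissertation's actual algorithm, and all sizes and names below are assumptions): each of N nodes is assigned a binary on-off signature of length m, the receiver observes for each of the m slots only whether at least one neighbor was active, and a simple COMP-style decoder keeps exactly the nodes consistent with every observation.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K, m = 100, 4, 120          # nodes, neighbors, discovery slots (illustrative sizes)
p = 1.0 / K                    # per-slot activity probability of each signature

# Binary signature matrix: A[s, i] == 1 if node i transmits in slot s.
A = (rng.random((m, N)) < p).astype(int)

# Pick K true neighbors and form the sparse indicator vector x.
neighbors = set(rng.choice(N, size=K, replace=False).tolist())
x = np.zeros(N, dtype=int)
x[list(neighbors)] = 1

# Noiseless OR-channel observation: slot s is "busy" iff some neighbor is active in it.
y = (A @ x) > 0

# COMP-style decoding: node i survives iff every slot where it would transmit is busy.
# True neighbors always survive; a non-neighbor survives only by coincidence.
candidates = {i for i in range(N) if np.all(y[A[:, i] == 1])}
```

Because the channel is noiseless, the decoder can never miss a true neighbor; with enough slots, spurious survivors become vanishingly rare, which is the sense in which on the order of K log(N/K) measurements suffice.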
We further analyze the performance of these methods in a realistic environment, in the presence of shadowing and fading, and introduce theoretical and practical measures to protect their performance against interference caused by non-neighbors.
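The interference-cancellation idea behind the ZigZag scheme can be sketched with a minimal "peeling" decoder of the kind used in coded slotted ALOHA (a hypothetical toy, not the dissertation's protocol): each device transmits its ID in a few slots; any slot containing a single transmission is decoded directly, and that device's other transmissions are then cancelled, possibly turning further slots into singletons.

```python
# Toy peeling / serial-interference-cancellation decoder.
# slots[s] is the set of device IDs transmitting in slot s (a hypothetical schedule).
def peel(slots):
    slots = [set(s) for s in slots]
    decoded = set()
    progress = True
    while progress:
        progress = False
        for s in slots:
            if len(s) == 1:          # singleton slot: decode its lone device
                (dev,) = s
                decoded.add(dev)
                for t in slots:      # cancel this device's copies everywhere
                    t.discard(dev)
                progress = True
    return decoded

# Three devices, three slots: slot 0 is a singleton, and cancelling
# device 1 there unlocks slot 1, which in turn unlocks slot 2.
schedule = [{1}, {1, 2}, {2, 3}]
recovered = peel(schedule)           # {1, 2, 3}
```

The decoder stalls if no singleton exists (e.g. two slots each containing the same two devices), which is why the analysis of such schemes parallels iterative decoding of codes on graphs.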
Asset Metadata
Creator: Tehrani, Arash Saber (author)
Core Title: Neighbor discovery in device-to-device communication
School: Viterbi School of Engineering
Degree: Doctor of Philosophy
Degree Program: Electrical Engineering
Publication Date: 08/04/2015
Defense Date: 04/04/2015
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: compressed sensing, D2D, neighbor discovery, OAI-PMH Harvest, peer discovery
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Caire, Giuseppe (committee chair), Goldstein, Larry (committee member), Molisch, Andreas F. (committee member)
Creator Email: saberteh@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-625930
Unique identifier: UC11305995
Identifier: etd-TehraniAra-3800.pdf (filename), usctheses-c3-625930 (legacy record id)
Legacy Identifier: etd-TehraniAra-3800.pdf
Dmrecord: 625930
Document Type: Dissertation
Rights: Tehrani, Arash Saber
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA