OPTIMIZING STATISTICAL DECISIONS BY ADDING NOISE

by

Ashok Patel

A Thesis Presented to the
FACULTY OF THE GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF ARTS (APPLIED MATHEMATICS)

May 2008

Copyright 2008 Ashok Patel

Table of Contents

List of Figures
Abstract
Chapter 1: Noise Benefits in Nonlinear Systems
Chapter 2: Problem Formulation and Derivation of Optimal SR Noise Densities
  2.1 Problem Formulation
  2.2 Derivation of Optimal SR Noise Densities
Chapter 3: SR Noise Finding Algorithm
Chapter 4: Applications of SR Noise Algorithm
  4.1 Near-optimal SR noise for a suboptimal one-sample Neyman-Pearson hypothesis test of variance
  4.2 Near-optimal signal power randomization for an average-power-constrained signal transmitter
References

List of Figures

1.1 SR noise benefits in Neyman-Pearson signal detection
4.1 Finding near-optimal Neyman-Pearson SR noise
4.2 SR noise (signal-strength randomization) benefits in optimal anti-podal signal detection

Abstract

This thesis presents an algorithm to find near-optimal "stochastic resonance" (SR) noise to maximize the expected payoff in statistical decision problems subject to a single inequality constraint on the expected cost. The SR effect or noise benefit occurs when the expected cost satisfies the inequality constraint while the expected payoff in the presence of noise or randomization is larger than in the case without noise. The payoff and cost functions are real-valued bounded nonnegative Borel-measurable functions on a finite-dimensional noise space $\mathcal{N}$. We show that the optimal SR noise is just the randomization of two noise realizations if the statistical decision problem is subject only to a single inequality constraint and if the optimal noise exists. We give necessary and sufficient conditions for the existence of such optimal noise. If the optimal noise does not exist then there exists a sequence of noise random variables such that the limit of the respective expected-payoff sequence is optimal. We develop an algorithm that finds an SR noise $\tilde{N}'$ from a finite set of noise realizations $\tilde{\mathcal{N}} \subseteq \mathcal{N}$. This noise $\tilde{N}'$ is nearly optimal if the payoff function on the actual noise space $\mathcal{N}$ is sufficiently close to its restriction to $\tilde{\mathcal{N}}$. An upper bound limits the number of iterations that the algorithm requires to find such near-optimal SR noise. Two applications demonstrate the SR noise algorithm. The first application finds a near-optimal SR noise for a suboptimal one-sample Neyman-Pearson hypothesis test of variance. The second application gives a near-optimal signal power randomization for an average-power-constrained anti-podal signal transmitter in the presence of additive Gaussian-mixture channel noise where the receiver uses a maximum a posteriori (MAP) method for optimal signal detection. These applications show that the algorithm finds near-optimal noise or randomization in just a few iterations. The algorithm has potential applications in many signal processing and communication systems that have randomized optimal solutions.
Chapter 1
Noise Benefits in Nonlinear Systems

Stochastic resonance (SR) is a phenomenon in which noise or randomness benefits a system: the noise-modified or randomized system performs better on average than the system without noise. Such SR noise benefits have a wide range of applications in physics, biology, and medicine [7, 16, 17, 18, 21, 23, 27, 31, 33, 48, 52, 54, 58]. A large and growing literature has documented SR in numerous physical and biological nonlinear systems [3, 4, 11, 13, 14, 15, 22, 26, 32, 34, 35, 40, 50, 53, 56, 57]. Many of these nonlinear systems act as signal detectors such that a deliberate noise injection improves the detection performance of such systems [5, 9, 24, 41, 43, 44, 45, 51, 59, 60]. The noise benefits in signal detection can take many forms such as signal-to-noise ratio (SNR) [28, 38, 47], mutual information [12, 20, 29, 30, 37, 39], cross-correlation [10, 28, 31, 48], and detection or error probabilities [6, 8, 24, 25, 42, 46].

We focus on noise benefits that maximize the expected payoff in statistical decision problems with only one inequality constraint on the expected cost. We define the noise $N \in \mathcal{N} \subseteq \mathbb{R}^m$ as SR noise if its deliberate addition or injection improves the expected payoff $E_{f_N}(h(N))$ while the expected cost $E_{f_N}(c(N))$ stays at or below a preset maximum level $\gamma$. Here the functions $h$ and $c$ are the respective bounded nonnegative Borel-measurable payoff and cost functions on the noise space $\mathcal{N}$. We assume that $h$ and $c$ do not depend on the noise probability density $f_N$. Suppose that the payoff in the absence of noise is $h(0)$. Then $E_{f_N}(h(N)) > h(0)$ and $E_{f_N}(c(N)) \le \gamma$ if $N$ is an SR noise. We define noise $N_{opt}$ as the optimal SR noise if $E_{f_N}(h(N)) \le E_{f_{N_{opt}}}(h(N_{opt}))$ for any other SR noise $N$ and if $E_{f_{N_{opt}}}(c(N_{opt})) \le \gamma$. Examples of such noise or randomization benefits include Neyman-Pearson (N-P) hypothesis testing in decentralized and energy-efficient detection in sensor networks [1, 55], pattern-classification systems [49], time-sharing strategies for average-power-constrained signal transmitters and jammers [2], and pricing and scheduling for access points to maximize the average profit in wireless and other networks [19].

This thesis addresses three key research questions for such noise-enhanced constrained optimization problems:
(1) If the SR effect occurs then what are the necessary and sufficient conditions for the existence of optimal SR noise?
(2) What is the form of the best or optimal SR noise if such noise exists?
(3) If the optimal SR noise form is known then how can we compute the optimal SR noise?

We present three main SR results for noise benefits in constrained optimization problems. The first result is that the existence of SR noise $N$ does not itself imply the existence of optimal SR noise $N_{opt}$. We derive necessary and sufficient conditions for the existence of optimal SR noise. The second result is the derivation of the form of the optimal SR noise probability density $f_{N_{opt}}$ directly in the noise domain $\mathcal{N} \subseteq \mathbb{R}^m$. We show that if $N_{opt}$ exists then it is just a randomization of at most two noise realizations if the optimization problem is subject only to a single inequality constraint. If such optimal noise does not exist then there exists a sequence of noise random variables such that the limit of their respective average payoffs is optimal. The third SR result is an algorithm that finds near-optimal SR noise from a finite set of noise realizations $\tilde{\mathcal{N}} \subseteq \mathcal{N}$.
This noise is nearly optimal if the payoff function $h$ on the actual noise space $\mathcal{N}$ is sufficiently close to its restriction to $\tilde{\mathcal{N}}$.

These SR results extend and correct prior work on "detector randomization" or adding noise to improve the performance of N-P signal detection. Tsitsiklis [55] explored the mechanism of detection-strategy randomization for a finite set of detection strategies (operating points) in decentralized detection. He first showed that there exists a randomized detection strategy that uses a proper convex or random combination of at most two existing detection strategies and that gives the optimal N-P detection performance. Such optimal detection strategies lie on the upper boundary of the convex hull of the receiver-operating-characteristic (ROC) curve operating points. Scott et al. [49] and Appadwedula et al. [1] later used the same optimization principle in classification systems and in energy-efficient detection in sensor networks respectively. Then Chen et al. [8] used a fixed detector structure for N-P signal detection. They injected noise into the data samples to obtain a proper random combination of operating points on the ROC curve. They showed that the optimal N-P SR noise for fixed detectors is a proper randomization of no more than two noise realizations.

But Chen et al. [8] assumed that the convex hull $V$ of the set of ROC-curve operating points $U \subset \mathbb{R}^2$ always contains its boundary $\partial V$ and thus that the convex hull $V$ is closed. This is not true in general. The topological problem is that the convex hull $V$ need not be closed if $U$ is not compact: the convex hull of $U$ is open if $U$ itself is open. Chen et al. argued correctly along the lines of the proof of Theorem 3 in [8] when they concluded that the "optimum pair can only exist on the boundary." But their later claim that "each z on the boundary can be expressed as the convex combination of only two elements of U" is not true in general because $V$ may not include all of its boundary points. The optimal N-P SR noise need not exist at all for a fixed detector [42]. Figure 1.1 shows a case where the N-P SR noise exists but the optimal N-P SR noise does not exist in the noise space $\mathcal{N} = \mathbb{R}$. We show in Chapter 4.1 that if we restrict the noise space to the compact interval $[-5, 5]$ then the optimal SR noise does exist. Our algorithm finds a nearly optimal N-P SR noise from a discretized set of noise realizations $\tilde{\mathcal{N}} = [-5{:}0.0001{:}5]$ in just 9 iterations.

Examples of noise or randomization benefits are not limited to Neyman-Pearson-type inequality-constrained statistical decisions. Researchers have found randomization benefits in average-power-constrained signal transmission and jamming strategies [2] and in pricing and scheduling for a network access point that maximizes its own time-average profit under an average-transmission-rate constraint [19]. Azizoglu [2] proved that the error probability of a maximum a posteriori (MAP) receiver is convex with respect to the signal power for anti-podal signaling and a broad class of unimodal channel-noise probability densities. Then an average-power-limited anti-podal transmitter cannot improve the error performance by time-sharing or randomizing between different power levels. But on-off time-sharing aids an average-power-constrained jammer if its noise probability density is from a subclass of symmetric unimodal pdfs and if its average noise power
He also showed that the optimum channel switching strategy for an average-power-constrained transmitter is to time-share between at most two channels and power levels to minimize the error probability at the receiver end in the presence of multiple additive noise channels. Longbo and Neely [19] studied a pricing and transmission scheduling problem for a network access point (AP) to maximize its own time-average profit. They considered a binary business decision at the AP to decide whether or not to allow new data at the beginning of each time slot and a compact set of price options if the AP allows new data in the time slot. They showed that a proper randomization of at most two business-price tuples is enough for the AP to achieve its optimal time-average profit under the average-transmission-rate constraint. We show in Chapter 4.2 that the detection performance of the MAP receiver can sometimesbenefitfromsignal-powerrandomizationinanaverage-power-constrainedanti- podal signal transmitter if the channel noise pdf is not unimodal. We suppose that the transmitter transmits equiprobable anti-podal signals {−S,S} with S ∈ S = [0.5, 3.75] and that the additive channel noise has a symmetric bimodal Gaussian-mixture prob- ability density. Then the respective error probability of the optimal MAP receiver is nonconvex and the transmitter can improve the detection performance by time-sharing orrandomizingbetweentwopowerlevelsforsomevaluesofmaximumaveragepowercon- straint. We apply our algorithm to find a near-optimal signal power randomization from a discretized subset of signal-strength realizations ˜ S = [0.5:0.0001:3.75]. The algorithm finds such signal-power distribution in just 13 iterations. The next three chapters present our SR results and illustrate some of their noise benefits. 5 −6 −4 −2 0 2 4 6 0 0.1 0.2 0.3 0.4 0.5 Signal X Binary threshold hypothesis testing of H 0 vs. H 1 at α = 0.4 f 0 (x) f 1 (x) Threshold θ = 0.2534 H 0 : f X (x,H 0 ) = f 0 (x) H 1 : f X (x,H 1 ) = f 1 (x) Decide H 1 if X+N > θ H 0 otherwise θ α = 0.4 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Probability of False Alarm P FA Probability of Detection P D Reciever Operating Characteristic Curve (ROC) for H 0 vs. H 1 a b c d H 0 : X ~ N(0,1) H 1 : X ~ N(0,4) Reject H 0 if X+N > θ = 0.2534 α = 0.4 f e ∂V ∂V U (a) (b) Figure 1.1: SR noise benefits in Neyman-Pearson signal detection (a) The thin solid line shows the probability density function (pdf) f0 of signal X under the normal hypothesis H0: X ∼ N(0,1) while the dashed line shows the pdf f1 of X under the alternative normal hypothesis H1: X ∼ N(0,4). The detector rejects H0 if the noisy observation X +N > θ. The thick vertical solid line shows the threshold θ. (b) The solid line shows the monotonic but nonconcave ROC curve U = {(pFA(n),pD(n)): n∈R} of the detector where n is the realization of the additive noise N in X, pFA(n) = 1-Φ(θ-n), and pD(n) = 1-Φ( θ−n 2 ) for standard normal cumulative distribution function Φ. The detector operates at point a = (pFA(0),pD(0)) = (0.4, 0.4496) on the ROC curve in the absence of noise. Nonconcavity of the ROC curve U between the points b = (pFA(n1),pD(n1)) and c = (1,1) allows the N-P SR effect to occur. A proper convex or random combination of two operating points b and e = (pFA(n2),pD(n2))givesabetterdetectionperformance(pointf)thanpointaatthesamefalse-alarmlevel pFA(0) = α = 0.4. 
Such a random combination of operating points results from adding a discrete noise $N$ with pdf $f_N(n) = \lambda\delta(n - n_1) + (1-\lambda)\delta(n - n_2)$ to the data sample $X$ where $\lambda = (p_{FA}(n_2) - \alpha)/(p_{FA}(n_2) - p_{FA}(n_1))$. Point $d$ is on the upper boundary $\partial V$ of the ROC curve's convex hull (dashed tangent line between $b$ and $c$). So $d$ is the supremum of the detection performances that random or convex combinations of operating points on the ROC curve can achieve while $\alpha$ remains 0.4. Note that $d$ is a convex combination of $b$ and $c$ but it is not realizable by adding only noise to the data sample $X$ because point $c = (1,1)$ is not on the ROC curve: there is no noise realization $n \in \mathbb{R}$ such that $1 - \Phi(\theta - n) = 1 = 1 - \Phi(\frac{\theta - n}{2})$. Thus the N-P SR noise exists but the optimal N-P SR noise does not exist in the noise space $\mathcal{N} = \mathbb{R}$.

Chapter 2
Problem Formulation and Derivation of Optimal SR Noise Densities

2.1 Problem Formulation

Consider a constrained optimization problem where we want to maximize the average payoff $E_{f_N}(h(N)) = \int_{\mathcal{N}} h(n) f_N(n)\,dn$ subject to the constrained average cost $E_{f_N}(c(N)) = \int_{\mathcal{N}} c(n) f_N(n)\,dn \le \gamma$. Here $h(n)$ and $c(n)$ are the respective payoff and cost functions when the noise $N$ has realization $n$. $\mathcal{N} \subseteq \mathbb{R}^m$ is the noise space and $N \in \mathcal{N}$ is a noise vector with pdf $f_N$. The noise vector $N$ can be random or even a deterministic constant such as $f_N(n) = \delta(n - n_o)$. The payoff and cost functions $h$ and $c$ are real-valued nonnegative Borel-measurable bounded functions of the noise $N$. Functions $h$ and $c$ do not depend on the noise probability density $f_N$. We want to find the optimal SR noise $N_{opt}$ such that $E_{f_{N_{opt}}}(c(N_{opt})) \le \gamma$ and $E_{f_N}(h(N)) \le E_{f_{N_{opt}}}(h(N_{opt}))$ for any other SR noise $N$ with $E_{f_N}(c(N)) \le \gamma$.

For notational simplicity we write $h(f_N)$ for $E_{f_N}(h(N))$ and $c(f_N)$ for $E_{f_N}(c(N))$ as the respective average payoff and average cost when the noise pdf is $f_N$. Let $W$ be the set of all pdfs. Then $W$ is convex. Suppose that $f_{N_{opt}}$ is the pdf of the optimal SR noise $N_{opt}$. We assume that $c(n) \le \gamma$ for some noise realization $n$ to avoid a trivial nonexistence of such an optimal noise pdf $f_{N_{opt}}$. Then we need to find

$f_{N_{opt}} = \arg\max_{f_N \in W} \int_{\mathcal{N}} h(n) f_N(n)\,dn$   (2.1)

such that

$f_{N_{opt}}(n) \ge 0$ for all $n$,   (2.2)
$\int_{\mathcal{N}} f_{N_{opt}}(n)\,dn = 1$, and   (2.3)
$c(f_{N_{opt}}) = \int_{\mathcal{N}} c(n) f_{N_{opt}}(n)\,dn \le \gamma$.   (2.4)

Conditions (2.2) and (2.3) are defining properties of a pdf while (2.1) and (2.4) state the given optimization criterion. We solve this optimization problem directly in the noise domain $\mathbb{R}^m$ by the primal-dual method [36].

2.2 Derivation of Optimal SR Noise Densities

The Lagrangian of the above inequality-constrained optimization problem is

$L(f_N, k) = \int_{\mathcal{N}} h(n) f_N(n)\,dn - k\left(\int_{\mathcal{N}} c(n) f_N(n)\,dn - \gamma\right)$   (2.5)
$= \int_{\mathcal{N}} \big(h(n) - k(c(n) - \gamma)\big) f_N(n)\,dn$.   (2.6)

Lagrange duality implies that

$\sup_{f_N \in W} \int_{\mathcal{N}} h(n) f_N(n)\,dn = \min_{k \ge 0} \sup_{f_N \in W} L(f_N, k)$   (2.7)

and the solution of the optimization problem is equivalent to finding $k^* \ge 0$ and the pdf $f_{N_{opt}}$ such that

$\min_{k \ge 0} \sup_{f_N \in W} L(f_N, k) = L(f_{N_{opt}}, k^*)$.   (2.8)

The next two theorems give necessary and sufficient conditions for the existence of the optimal SR noise and the form of its pdf if such noise exists. We need the following definitions for Theorems 1 and 2. Define first the sets

$D^+ = \{n \in \mathcal{N} : (c(n) - \gamma) \ge 0\}$,   (2.9)
$D^- = \{n \in \mathcal{N} : (c(n) - \gamma) \le 0\}$, and   (2.10)
$D^0 = \{n \in \mathcal{N} : c(n) - \gamma = 0\} = D^- \cap D^+$.   (2.11)

Now define the suprema of $h(n)$ over the sets $D^+$, $D^-$, and $\mathcal{N}$:

$P^{\sup}_{D^+} = \sup_n \{h(n) : n \in D^+\}$,   (2.12)
$P^{\sup}_{D^-} = \sup_n \{h(n) : n \in D^-\}$, and   (2.13)
$P^{\sup}_D = \sup_n \{h(n) : n \in \mathcal{N}\}$.   (2.14)

Then $P^{\sup}_D = \max\{P^{\sup}_{D^+}, P^{\sup}_{D^-}\}$.
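These sets and suprema are easy to compute once the noise space is discretized. The following minimal Python sketch is our illustration rather than the thesis's code; the Gaussian-bump payoff and quadratic cost are hypothetical choices made only to show how $D^+$, $D^-$, and the suprema of (2.12)-(2.14) arise on a one-dimensional grid:

```python
import numpy as np

gamma = 1.0                         # preset maximum average cost
n = np.linspace(-3.0, 3.0, 6001)    # discretized noise space N in R
h = np.exp(-(n - 1.5) ** 2)         # hypothetical bounded payoff h(n)
c = n ** 2                          # hypothetical cost c(n)

D_plus = c >= gamma                 # D+ of (2.9): cost at or above gamma
D_minus = c <= gamma                # D- of (2.10): cost at or below gamma
P_plus = h[D_plus].max()            # P_{D+}^{sup} of (2.12)
P_minus = h[D_minus].max()          # P_{D-}^{sup} of (2.13)
P_sup = max(P_plus, P_minus)        # P_D^{sup} of (2.14)
print(P_plus, P_minus, P_sup)       # 1.0, ~0.7788, 1.0
```

In this toy case $P^{\sup}_{D^-} < P^{\sup}_{D^+}$, so Theorem 2 below rather than Theorem 1(a) governs the form of the optimal noise.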
Next define a function

$g(n, k) = h(n) - k(c(n) - \gamma)$   (2.15)

and its suprema over the sets $D^+$, $D^-$, and $\mathcal{N}$:

$d^+(k) = \sup_n \{g(n,k) : n \in D^+\}$,   (2.16)
$d^-(k) = \sup_n \{g(n,k) : n \in D^-\}$, and   (2.17)
$d(k) = \sup_n \{g(n,k) : n \in \mathcal{N}\}$.   (2.18)

Finally let

$G^+ = \{n \in D^+ : h(n) = P^{\sup}_{D^+}\}$ and   (2.19)
$G^- = \{n \in D^- : h(n) = P^{\sup}_{D^-}\}$.   (2.20)

Note that now we can rewrite the Lagrangian of equation (2.6) as

$L(f_N, k) = \int_{\mathcal{N}} g(n,k) f_N(n)\,dn$.   (2.21)

Then equation (2.8) becomes

$\min_{k \ge 0} \sup_{f_N \in W} L(f_N, k) = \min_{k \ge 0} \sup_{f_N \in W} \int_{\mathcal{N}} g(n,k) f_N(n)\,dn$   (2.22)
$= \min_{k \ge 0} d(k)$.   (2.23)

The last equality follows because

$\sup_{f_N \in W} \int_{\mathcal{N}} g(n,k) f_N(n)\,dn \le \sup_{f_N \in W} \int_{\mathcal{N}} d(k) f_N(n)\,dn$   (2.24)
$= d(k) = \sup_n \{g(n,k) : n \in \mathcal{N}\}$   (2.25)

and because if the L.H.S. of (2.24) were less than its R.H.S. then (2.25) would give an $n_1 \in \mathcal{N}$ such that

$\sup_{f_N \in W} \int_{\mathcal{N}} g(n,k) f_N(n)\,dn < g(n_1,k)$   (2.26)
$= \int_{\mathcal{N}} g(n,k)\,\delta(n - n_1)\,dn$   (2.27)

which is a contradiction. Note that $d(k) = \max\{d^-(k), d^+(k)\}$ so we can rewrite (2.23) as

$\min_{k \ge 0} \sup_{f_N \in W} L(f_N, k) = \min_{k \ge 0} \max\{d^-(k), d^+(k)\}$.   (2.28)

Theorem 1:
(a) Suppose that $P^{\sup}_{D^-} \ge P^{\sup}_{D^+}$. If the set $G^-$ is nonempty then

$f_{N_{opt}}(n) = \delta(n - n_o)$   (2.29)

for some $n_o \in G^-$ is an optimal SR noise pdf and $c(f_{N_{opt}}) \le \gamma$. If $G^-$ is empty then optimal SR noise does not exist for the given maximum average cost $\gamma$. But there exists a noise pdf sequence $\{f_{N_r}\}_{r=1}^{\infty}$ of the form (2.29) such that $c(f_{N_r}) \le \gamma$ for all $r$ and

$\lim_{r \to \infty} h(f_{N_r}) = P^{\sup}_D$.   (2.30)

(b) Suppose that $P^{\sup}_{D^-} < P^{\sup}_{D^+}$. If the optimal SR noise pdf $f_{N_{opt}}(n)$ exists then $c(f_{N_{opt}}) = \gamma$.

Proof:
Part (a): Suppose that $P^{\sup}_{D^-} \ge P^{\sup}_{D^+}$ and that $G^-$ is nonempty. Note also that if $k \ge 0$ then

$d^-(k) = \sup_n \{h(n) - k(c(n) - \gamma) : n \in D^-\}$   (2.31)
$\ge \sup_n \{h(n) : n \in D^-\} = P^{\sup}_{D^-}$.   (2.32)

Similarly $d^+(k) \le P^{\sup}_{D^+}$ for all $k \ge 0$. Then $d^-(k) \ge d^+(k)$ for all $k \ge 0$ if $P^{\sup}_{D^-} \ge P^{\sup}_{D^+}$. The L.H.S. of equation (2.28) then becomes

$\min_{k \ge 0} \sup_{f_N \in W} L(f_N, k) = \min_{k \ge 0} d^-(k)$   (2.33)
$= d^-(0)$ because $d^-(k)$ is a nondecreasing function of $k$
$= P^{\sup}_{D^-} = P^{\sup}_D$.   (2.34)

Thus (2.6), (2.8), and (2.34) imply that $k^* = 0$ and we need to find the pdf $f_{N_{opt}}$ such that

$L(f_{N_{opt}}, 0) = \int_{\mathcal{N}} h(n) f_{N_{opt}}(n)\,dn = P^{\sup}_{D^-}$.   (2.35)

Now if $G^-$ is nonempty then the definition of $G^-$ implies that $h(n_o) = P^{\sup}_{D^-} = P^{\sup}_D$ for any $n_o \in G^-$. So if we choose $f_N(n) = \delta(n - n_o)$ (a unit impulse at $n_o$) then

$\int_{\mathcal{N}} h(n) f_N(n)\,dn = h(n_o) = P^{\sup}_{D^-}$ and   (2.36)
$\int_{\mathcal{N}} c(n) f_N(n)\,dn = c(n_o) \le \gamma$.   (2.37)

Therefore $f_N$ is an optimal SR noise pdf and hence

$f_{N_{opt}} = \delta(n - n_o)$ for any $n_o \in G^-$.   (2.38)

Suppose now that the set $G^-$ is empty. Then $h(n) < P^{\sup}_D = P^{\sup}_{D^-}$ for all $n \in \mathcal{N}$. Suppose that $f_{N'}$ is the optimal SR noise pdf. Then

$h(f_{N'}) = \int_{\mathcal{N}} h(n) f_{N'}(n)\,dn$   (2.39)
$< \int_{\mathcal{N}} P^{\sup}_D f_{N'}(n)\,dn$   (2.40)
because $h(n) < P^{\sup}_D$ and because $\int_{\mathcal{N}} (P^{\sup}_D - h(n)) f_{N'}(n)\,dn = 0$ iff $(P^{\sup}_D - h(n)) f_{N'}(n) = 0$ a.e. on $\mathcal{N}$
$= P^{\sup}_D$.   (2.41)

The emptiness of $G^-$ and the definition of $P^{\sup}_{D^-}$ further imply that there exists a sequence of noise realizations $\{n_r\}_{r=1}^{\infty}$ in $D^-$ such that

$\lim_{r \to \infty} h(n_r) = P^{\sup}_D$.   (2.42)

Hence there exists $n_s \in \{n_r\}_{r=1}^{\infty} \subset D^-$ such that $h(n_s) > h(f_{N'})$ and $c(n_s) \le \gamma$. Define a sequence of noise pdfs $f_{N_r}(n) = \delta(n - n_r)$. Then $f_{N_s}$ contradicts the optimality of $f_{N'}$ while $c(f_{N_r}) \le \gamma$ for all $r$ and

$\lim_{r \to \infty} h(f_{N_r}) = \lim_{r \to \infty} h(n_r) = P^{\sup}_D$.   (2.43)

Part (b): Suppose that $P^{\sup}_{D^-} < P^{\sup}_{D^+}$ and that $G^+$ is nonempty. Suppose also that $f_{N'}$ is the optimal SR noise pdf such that $c(f_{N'}) = v < \gamma$ and $h(f_{N'}) \ge h(f_N)$ for any other noise pdf $f_N$.
The definition of $G^+$ implies that if $n_1 \in G^+$ then $c(n_1) \ge \gamma > v$. Then $h(n_1) = P^{\sup}_D > h(f_{N'})$ because $c(f_{N'}) = v < \gamma$ and $P^{\sup}_{D^-} < P^{\sup}_{D^+} = P^{\sup}_D$. Now define a function

$f_N(n) = \dfrac{\gamma - v}{c(n_1) - v}\,\delta(n - n_1) + \dfrac{c(n_1) - \gamma}{c(n_1) - v}\, f_{N'}(n)$.   (2.44)

Then $f_N$ is a valid pdf because $f_N(n) \ge 0$ for all $n$ and $\int_{\mathcal{N}} f_N(n)\,dn = 1$. Notice that

$c(f_N) = \int_{\mathcal{N}} c(n) f_N(n)\,dn$   (2.45)
$= \dfrac{\gamma - v}{c(n_1) - v}\,c(n_1) + \dfrac{c(n_1) - \gamma}{c(n_1) - v}\int_{\mathcal{N}} c(n) f_{N'}(n)\,dn$   (2.46)
$= \dfrac{\gamma - v}{c(n_1) - v}\,c(n_1) + \dfrac{c(n_1) - \gamma}{c(n_1) - v}\,v$   (2.47)
$= \gamma$   (2.48)

and

$h(f_N) = \int_{\mathcal{N}} h(n) f_N(n)\,dn$   (2.49)
$= \dfrac{\gamma - v}{c(n_1) - v}\,h(n_1) + \dfrac{c(n_1) - \gamma}{c(n_1) - v}\int_{\mathcal{N}} h(n) f_{N'}(n)\,dn$   (2.50)
$= \dfrac{\gamma - v}{c(n_1) - v}\,P^{\sup}_D + \dfrac{c(n_1) - \gamma}{c(n_1) - v}\,h(f_{N'})$   (2.51)
$> h(f_{N'})$ because $P^{\sup}_D > h(f_{N'})$.   (2.52)

But this contradicts the optimality of $f_{N'}$. Therefore $c(f_{N'}) = \gamma$ if $f_{N'}$ is the optimal SR noise pdf and if $P^{\sup}_{D^-} < P^{\sup}_{D^+}$.

Suppose now that $P^{\sup}_{D^-} < P^{\sup}_{D^+}$ but the set $G^+$ is empty. The definitions of $P^{\sup}_{D^+}$ and $G^+$ imply that there exists $n_1 \in D^+$ such that $h(n_1) > h(f_{N'})$ because $c(f_{N'}) = v < \gamma$. If we define exactly the same pdf $f_N$ as in (2.44) then again $f_N$ contradicts the optimality of $f_{N'}$.

Theorem 1(a) gives the optimal SR noise pdf $f_{N_{opt}}$ if it exists and if $P^{\sup}_{D^-} \ge P^{\sup}_{D^+}$. Theorem 2 gives necessary and sufficient conditions for the existence of $f_{N_{opt}}$ when $P^{\sup}_{D^-} < P^{\sup}_{D^+}$.

Theorem 2: Suppose that $P^{\sup}_{D^-} < P^{\sup}_{D^+}$.
(a) There exists $k^* \ge 0$ such that $d^+(k^*) = d^-(k^*) = d(k^*)$ and $\min\{d^+(k), d^-(k)\} \le d(k^*) \le \max\{d^+(k), d^-(k)\}$ for any $k \ge 0$.
(b) Suppose the noise pdf $f_N$ satisfies $h(f_N) = d(k^*) > h(0)$ and $c(f_N) = \gamma$. Then $f_N$ is the optimal noise pdf. So the optimal average payoff is $d(k^*)$.
(c) If there exist $n_1 \in D^+$ and $n_2 \in D^-$ such that $g(n_1, k^*) = d^+(k^*) = d(k^*) = g(n_2, k^*) = d^-(k^*)$ then

$f_{N_{opt}}(n) = \lambda\delta(n - n_1) + (1-\lambda)\delta(n - n_2)$   (2.53)

with

$\lambda = \dfrac{c(n_2) - \gamma}{c(n_2) - c(n_1)}$   (2.54)

is the optimal SR noise pdf if $d(k^*) > h(0)$.
(d) If the condition of (c) does not hold then the optimal SR noise does not exist. But there exists a noise pdf sequence $\{f_{N_r}\}_{r=1}^{\infty}$ of the form (2.53)-(2.54) such that

$\lim_{r \to \infty} h(f_{N_r}) = d(k^*)$.   (2.55)

Proof:
Part (a): Note that $d^+(k)$ and $d^-(k)$ are continuous functions of $k$. Further $d^+(k)$ is an unbounded decreasing function of $k$ while $d^-(k)$ is a nondecreasing function of $k$ because $P^{\sup}_{D^-} < P^{\sup}_{D^+}$. So if $d^+(1) > d^-(1)$ then there exists $k^* > 1$ such that $d^+(k^*) = d^-(k^*)$. Similarly if $d^+(1) < d^-(1)$ then there exists $0 \le k^* < 1$ such that $d^+(k^*) = d^-(k^*)$ because $d^+(0) = P^{\sup}_{D^+} > P^{\sup}_{D^-} = d^-(0)$. Thus there exists $k^* \ge 0$ such that $d^+(k^*) = d^-(k^*) = d(k^*)$ and $\min\{d^+(k), d^-(k)\} \le d(k^*) \le \max\{d^+(k), d^-(k)\}$ for any $k \ge 0$.

Part (b): The result of Part (a) above and equation (2.28) imply that

$\min_{k \ge 0} \sup_{f_N \in W} L(f_N, k) = \min_{k \ge 0} \max\{d^-(k), d^+(k)\} = d(k^*)$.   (2.56)

Let $f_N$ be a noise pdf such that $h(f_N) = d(k^*) > h(0)$ and $c(f_N) = \gamma$. Then we get

$L(f_N, k^*) = \int_{\mathcal{N}} (h(n) - k^*(c(n) - \gamma)) f_N(n)\,dn$   (2.57)
$= \int_{\mathcal{N}} h(n) f_N(n)\,dn$   (2.58)
$= h(f_N) = d(k^*)$   (2.59)
$= \min_{k \ge 0} \sup_{f_N \in W} L(f_N, k)$ by (2.56).   (2.60)

Hence $f_N$ is the optimal SR noise pdf.

Part (c): Suppose that there exist $n_1 \in D^+$ and $n_2 \in D^-$ such that $g(n_1, k^*) = d(k^*) = g(n_2, k^*)$. Define

$f_N(n) = \lambda\delta(n - n_1) + (1-\lambda)\delta(n - n_2)$   (2.61)

where

$\lambda = \dfrac{c(n_2) - \gamma}{c(n_2) - c(n_1)}$.   (2.62)
Then

$h(f_N) = \int_{\mathcal{N}} h(n) f_N(n)\,dn$   (2.63)
$= \int_{\mathcal{N}} h(n)\,[\lambda\delta(n - n_1) + (1-\lambda)\delta(n - n_2)]\,dn$   (2.64)
$= \lambda h(n_1) + (1-\lambda) h(n_2)$   (2.65)
$= \lambda d(k^*) + (1-\lambda) d(k^*) = d(k^*)$   (2.66)

and

$c(f_N) = \int_{\mathcal{N}} c(n) f_N(n)\,dn$   (2.67)
$= \int_{\mathcal{N}} c(n)\,[\lambda\delta(n - n_1) + (1-\lambda)\delta(n - n_2)]\,dn$   (2.68)
$= \lambda c(n_1) + (1-\lambda) c(n_2)$   (2.69)
$= \dfrac{c(n_2) - \gamma}{c(n_2) - c(n_1)}\,c(n_1) + \left(1 - \dfrac{c(n_2) - \gamma}{c(n_2) - c(n_1)}\right) c(n_2)$   (2.70)
$= \gamma$.   (2.71)

Equality (2.66) holds because $h(n_i) = g(n_i, k^*) + k^*(c(n_i) - \gamma)$ for $i = 1, 2$ and the $k^*$ terms cancel by (2.69)-(2.71). Then (2.66), (2.71), and the result of part (b) imply that $f_N(n)$ is an optimal SR noise pdf. This form of optimal noise pdf is not unique if more than one pair of noise realizations satisfies the condition of Theorem 2(c).

Part (d): Define

$H^+ = \{n \in D^+ : g(n, k^*) = d(k^*)\}$ and   (2.72)
$H^- = \{n \in D^- : g(n, k^*) = d(k^*)\}$.   (2.73)

Suppose that $H^+ \neq \emptyset$ but $H^- = \emptyset$. So there exists $n_1 \in D^+$ such that $g(n_1, k^*) = d(k^*)$ but there does not exist any $n \in D^-$ such that $g(n, k^*) = d(k^*)$. Then $g(n, k^*) < d^-(k^*) = d(k^*)$ for all $n \in D^-$ by the definition of $d^-(k^*)$. If $f_{N'}$ is the optimal SR noise pdf with $c(f_{N'}) = \gamma$ then

$h(f_{N'}) = \int_{\mathcal{N}} g(n, k^*) f_{N'}(n)\,dn$   (2.74)
$\le \int_{D^+} g(n, k^*) f_{N'}(n)\,dn + \int_{D^- \setminus D^0} g(n, k^*) f_{N'}(n)\,dn$   (2.75)
$\le \int_{D^+} d(k^*) f_{N'}(n)\,dn + \int_{D^- \setminus D^0} g(n, k^*) f_{N'}(n)\,dn$   (2.76)
$< \int_{D^+} d(k^*) f_{N'}(n)\,dn + \int_{D^- \setminus D^0} d(k^*) f_{N'}(n)\,dn$   (2.77)
because $g(n, k^*) < d(k^*)$ for all $n \in D^-$ and $\int_{D^- \setminus D^0} [d(k^*) - g(n, k^*)] f_{N'}(n)\,dn = 0$ iff $[d(k^*) - g(n, k^*)] f_{N'}(n) = 0$ a.e. on $D^- \setminus D^0$
$= d(k^*)$.   (2.78)

Note that the definition of $d^-(k^*)$ and $d(k^*) = d^-(k^*)$ imply that there exists a sequence of noise realizations $\{n_r\}_{r=1}^{\infty}$ in $D^-$ such that

$\lim_{r \to \infty} g(n_r, k^*) = d(k^*)$.   (2.79)

Then define a sequence of pdfs

$f_{N_r}(n) = \lambda_r \delta(n - n_1) + (1-\lambda_r)\delta(n - n_r)$   (2.80)

where

$\lambda_r = \dfrac{c(n_r) - \gamma}{c(n_r) - c(n_1)}$.   (2.81)

Clearly $c(f_{N_r}) = \gamma$ for all $r$. So we can write

$\lim_{r \to \infty} h(f_{N_r}) = \lim_{r \to \infty} \int_{\mathcal{N}} g(n, k^*) f_{N_r}(n)\,dn$   (2.82)
$= \lim_{r \to \infty} [\lambda_r g(n_1, k^*) + (1-\lambda_r) g(n_r, k^*)]$   (2.83)
$= \lim_{r \to \infty} [\lambda_r d(k^*) + (1-\lambda_r) g(n_r, k^*)]$   (2.84)
$= d(k^*) \lim_{r \to \infty} \lambda_r + \lim_{r \to \infty} g(n_r, k^*) \lim_{r \to \infty} (1-\lambda_r)$   (2.85)
$= d(k^*) \lim_{r \to \infty} \lambda_r + d(k^*) \lim_{r \to \infty} (1-\lambda_r)$ by (2.79)   (2.86)
$= d(k^*)$.   (2.87)

Hence $c(f_{N_r}) = \gamma$ and there exists a positive integer $l$ such that $h(f_{N_r}) > h(f_{N'})$ for all $r \ge l$. This contradicts the optimality of $f_{N'}$. Therefore the optimal SR noise does not exist if $H^- = \emptyset$ and $H^+ \neq \emptyset$. Similar arguments also prove the nonexistence of optimal SR noise in the remaining cases where $H^+ = \emptyset$ or $H^- = \emptyset$.

The following corollary uses Theorem 2 to derive necessary conditions for the optimal SR noise.

Corollary 1: Suppose that $P^{\sup}_{D^-} < P^{\sup}_{D^+}$ and that the payoff function $h$ and the cost function $c$ are differentiable in the interior of the noise space $\mathcal{N}$. Suppose also that $f_{N_{opt}}$ is an optimal SR noise pdf of the form (2.53)-(2.54) in Theorem 2(c). If $n_1$ and $n_2$ of (2.53)-(2.54) are interior points of $\mathcal{N}$ then $n_1$ and $n_2$ must satisfy

$h(n_1) - k c(n_1) = h(n_2) - k c(n_2)$,   (2.88)
$\nabla h(n_1) - k \nabla c(n_1) = 0$, and   (2.89)
$\nabla h(n_2) - k \nabla c(n_2) = 0$   (2.90)

for some $k \ge 0$.

Proof: If $f_{N_{opt}}$ is an optimal SR noise pdf of the form (2.53)-(2.54) in Theorem 2(c) then $g(n_1, k^*) = d(k^*) = g(n_2, k^*)$ for some $k^* \ge 0$ and hence condition (2.88) follows. Note that the definition of $d(k^*)$ implies that $n_1$ and $n_2$ are maximum points of $g(n, k)$ for $k = k^*$. Therefore if $h$ and $c$ are differentiable in the interior of $\mathcal{N}$ and if $n_1$ and $n_2$ of (2.53)-(2.54) are interior points of $\mathcal{N}$ then $n_1$ and $n_2$ must satisfy equations (2.89)-(2.90) for $k = k^*$.
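A quick numerical illustration of the stationarity condition (2.89): take the detection example of Figure 1.1 with $h = p_D$, $c = p_{FA}$, and $\theta = 0.2534$, together with the values $k \approx 0.8098$ and $n_1 \approx -0.8805$ that Chapter 4.1 later reports for this problem. The following check is our own sketch and not part of the thesis:

```python
from scipy.stats import norm

theta, k = 0.2534, 0.8098                          # threshold and multiplier from Chapter 4.1
dh = lambda n: 0.5 * norm.pdf((theta - n) / 2.0)   # h'(n) for h(n) = 1 - Phi((theta - n)/2)
dc = lambda n: norm.pdf(theta - n)                 # c'(n) for c(n) = 1 - Phi(theta - n)
print(dh(-0.8805) - k * dc(-0.8805))               # ~3e-5, so (2.89) holds at n_1
```

The other realization $n_2 = 5$ sits on the boundary of the restricted noise space $[-5, 5]$, so Corollary 1 does not require the gradient condition (2.90) to hold there.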
Note that the conditions (2.88)-(2.90) of Corollary 1 are necessary but not sufficient because $n_1$ and $n_2$ satisfying (2.89)-(2.90) need not be global maxima and they need not lie in $D^+$ and $D^-$ even if they are global maxima. Hence conditions (2.88)-(2.90) are not always useful for finding the optimal SR noise. The next corollary shows that the above necessary conditions can sometimes be useful to determine when optimal SR noise does not exist.

Corollary 2: Suppose that $P^{\sup}_{D^-} \le P^{\sup}_{D^+}$ and that $h$ and $c$ are differentiable in the noise space $\mathcal{N} = \mathbb{R}^m$. Suppose also that for each $k \ge 0$ at most one solution of $\nabla h(n) - k \nabla c(n) = 0$ in $\mathbb{R}^m$ is a global maximum of $h(n) - k(c(n) - \gamma)$. If such a solution $n_\gamma$ exists in $D^0$ then $f_{N_{opt}} = \delta(n - n_\gamma)$ is the optimal noise pdf. The optimal SR noise does not exist otherwise.

Proof: There exists $k^* \ge 0$ such that $d^-(k^*) = d(k^*) = d^+(k^*)$ by Theorem 2(a). Now either there exist $n_1 \in D^+$ and $n_2 \in D^-$ such that $g(n_1, k^*) = d^+(k^*) = d(k^*) = g(n_2, k^*) = d^-(k^*)$ or the optimal SR noise does not exist by Theorem 2(d). If the former case is true then $n_1$ and $n_2$ must be solutions of $\nabla h(n) - k^* \nabla c(n) = 0$. But at most one solution of $\nabla h(n) - k \nabla c(n) = 0$ in $\mathbb{R}^m$ is a global maximum of $h(n) - k(c(n) - \gamma)$ for each $k \ge 0$ by hypothesis. Therefore $n_1 = n_2 = n_\gamma$ (say) $\in D^0 = D^- \cap D^+$. Then $c(n_\gamma) - \gamma = 0$ and $f_{N_{opt}} = \delta(n - n_\gamma)$ is the optimal noise pdf by Theorem 2(c).

Chapter 3
SR Noise Finding Algorithm

This chapter presents a new algorithm for finding near-optimal SR noise. Theorems 1 and 2 give the exact form of the optimal SR noise pdf but such noise may not be easy to find in a given noise space $\mathcal{N}$. So we present an algorithm that uses Theorems 1 and 2 together with successive approximations to find near-optimal SR noise from a finite set of noise realizations $\tilde{\mathcal{N}} \subseteq \mathcal{N}$. The algorithm takes as input $\epsilon$, $\gamma$, $\tilde{\mathcal{N}}$ (with $\tilde{\mathcal{N}}$ in place of $\mathcal{N}$ in (2.9)-(2.20)), and the respective payoff and cost functions $h$ and $c$ on $\tilde{\mathcal{N}}$. Note that the finiteness of $\tilde{\mathcal{N}}$ implies that the optimal SR noise $\tilde{N}_{opt}$ exists in $\tilde{\mathcal{N}}$. We suppose that $h$ and $c$ are nonnegative and bounded above by $\xi/2$.

The algorithm first searches for a constant noise from the set $G^-$ if $P^{\sup}_{D^-} \ge P^{\sup}_{D^+}$. If the inequality does not hold then the algorithm finds a number $k(i)$ at every iteration $i$ such that $|d^-(k(i)) - d(k^*)| \le \xi 2^{-i}$, which gives $|d^+(k(i)) - d^-(k(i))| < \epsilon$ in at most $i_{max} = \lceil \log_2(\xi/\epsilon) \rceil + 1$ iterations. The algorithm then defines a noise $\tilde{N}'$ as a proper random combination of $\tilde{n}_1 \in D^-$ and $\tilde{n}_2 \in D^+$ so that $g(\tilde{n}_1, k(i_{max})) = d^-(k(i_{max}))$, $g(\tilde{n}_2, k(i_{max})) = d^+(k(i_{max}))$, and $c(f_{\tilde{N}'}) = \gamma$.
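The following Python sketch renders this search in simplified form. It bisects on the multiplier $k$ directly until $d^+(k) \approx d^-(k)$, whereas the formal listing below bisects on the payoff value and thereby earns the explicit iteration bound $i_{max}$. The function and variable names are ours, and the sketch assumes vectorized payoff and cost functions on a finite one-dimensional noise set:

```python
import numpy as np

def find_sr_noise(noise, h, c, gamma, tol=1e-12, max_iter=200):
    """Near-optimal SR noise over a finite noise set: a simplified sketch."""
    hv, cv = h(noise), c(noise)
    D_minus, D_plus = cv <= gamma, cv >= gamma     # the sets D- and D+
    if hv[D_minus].max() >= hv[D_plus].max():      # Theorem 1(a): one realization suffices
        return np.array([noise[D_minus][np.argmax(hv[D_minus])]]), np.array([1.0])
    g = lambda k: hv - k * (cv - gamma)            # Lagrangian integrand g(n, k)
    d_plus = lambda k: g(k)[D_plus].max()          # d+(k): nonincreasing in k
    d_minus = lambda k: g(k)[D_minus].max()        # d-(k): nondecreasing in k
    k_lo, k_hi = 0.0, 1.0
    while d_plus(k_hi) > d_minus(k_hi):            # bracket k*; assumes d+ eventually dips below d-
        k_hi *= 2.0
    for _ in range(max_iter):                      # bisect on k until d+(k) ~ d-(k) = d(k*)
        k = 0.5 * (k_lo + k_hi)
        if d_plus(k) > d_minus(k):
            k_lo = k
        else:
            k_hi = k
        if k_hi - k_lo < tol:
            break
    k = 0.5 * (k_lo + k_hi)
    n1 = noise[D_minus][np.argmax(g(k)[D_minus])]  # Theorem 2(c): randomize two realizations
    n2 = noise[D_plus][np.argmax(g(k)[D_plus])]
    if c(n2) == c(n1):                             # both on D0: a single point already has cost gamma
        return np.array([n2]), np.array([1.0])
    lam = (c(n2) - gamma) / (c(n2) - c(n1))        # mixing weight (2.54) makes the cost equal gamma
    return np.array([n1, n2]), np.array([lam, 1.0 - lam])
```

Running it on the detection problem of Figure 1.1 should roughly reproduce the numbers that Chapter 4.1 reports:

```python
from scipy.stats import norm

theta = 0.2534
pD = lambda n: 1.0 - norm.cdf((theta - n) / 2.0)   # payoff h(n)
pFA = lambda n: 1.0 - norm.cdf(theta - n)          # cost c(n)
pts, probs = find_sr_noise(np.arange(-5.0, 5.0001, 1e-4), pD, pFA, gamma=0.4)
print(pts, probs, probs @ pD(pts))   # roughly [-0.88, 5], [0.69, 0.31], payoff ~0.505
```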
SR Noise Finding Algorithm

Let $D^+ = \{\tilde{n} \in \tilde{\mathcal{N}} : (c(\tilde{n}) - \gamma) \ge 0\}$ and let $D^- = \{\tilde{n} \in \tilde{\mathcal{N}} : (c(\tilde{n}) - \gamma) \le 0\}$.
Let $P^{\sup}_{D^+} = \max\{h(\tilde{n}) : \tilde{n} \in D^+\}$ and let $P^{\sup}_{D^-} = \max\{h(\tilde{n}) : \tilde{n} \in D^-\}$.
Let $G^- = \{\tilde{n} \in D^- : h(\tilde{n}) = P^{\sup}_{D^-}\}$.
If $P^{\sup}_{D^-} \ge P^{\sup}_{D^+}$
    $f_{\tilde{N}_{opt}}(n) = \delta(n - \tilde{n}_0)$ for any $\tilde{n}_0 \in G^-$
Else
    Let $D^0 = \{\tilde{n} \in \tilde{\mathcal{N}} : (c(\tilde{n}) - \gamma) = 0\}$ and $k(0) = 1$.
    Let $d^-(k(0)) = \max\{h(\tilde{n}) - (c(\tilde{n}) - \gamma) : \tilde{n} \in D^-\}$.
    Let $d^+(k(0)) = \max\{h(\tilde{n}) - (c(\tilde{n}) - \gamma) : \tilde{n} \in D^+\}$.
    Let $ds(1) = d^-(k(0))$ and $df(1) = d^+(k(0))$.
    Let $i = 1$ and $i_{stop} = \lceil \log_2(\xi/\epsilon) \rceil$.
    While $|d^-(k(i)) - d^+(k(i))| > \epsilon$ and $i \le i_{stop}$
        Let $dr(i) = (ds(i) + df(i))/2$.
        Let $k(i) = \min\{(h(\tilde{n}) - dr(i))/(c(\tilde{n}) - \gamma) : \tilde{n} \in D^- \setminus D^0\}$.
        Let $d^+(k(i)) = \max\{h(\tilde{n}) - k(i)(c(\tilde{n}) - \gamma) : \tilde{n} \in D^+\}$.
        Let $d^-(k(i)) = dr(i)$ and let $ds(i+1) = dr(i)$.
        If $d^+(k(i)) > d^-(k(i))$
            Let $df(i+1) = \min\{d^+(k(i)), \max\{ds(i), df(i)\}\}$
        Else
            Let $df(i+1) = \max\{d^+(k(i)), \min\{ds(i), df(i)\}\}$
        End If
        Let $i = i + 1$.
    End While
    If $|d^+(k(i-1)) - d^-(k(i-1))| > \epsilon$
        Let $t = \mathrm{sgn}[d^+(k(i-1)) - d^-(k(i-1))]$.
        Let $k(i) = \max\{(h(\tilde{n}) - (d^-(k(i-1)) + t\epsilon))/(c(\tilde{n}) - \gamma) : \tilde{n} \in D^+ \setminus D^0\}$.
        Let $d^+(k(i)) = d^-(k(i-1)) + t\epsilon$.
        Let $d^-(k(i)) = \max\{h(\tilde{n}) - k(i)(c(\tilde{n}) - \gamma) : \tilde{n} \in D^-\}$.
    Else
        Let $k(i) = k(i-1)$.
    End If
    $f_{\tilde{N}'}(n) = \lambda\delta(n - \tilde{n}_1) + (1-\lambda)\delta(n - \tilde{n}_2)$   (∗)
    where $\tilde{n}_1 \in D^-$ satisfies $h(\tilde{n}_1) - k(i)(c(\tilde{n}_1) - \gamma) = d^-(k(i))$,
    $\tilde{n}_2 \in D^+$ satisfies $h(\tilde{n}_2) - k(i)(c(\tilde{n}_2) - \gamma) = d^+(k(i))$, and
    $\lambda = (c(\tilde{n}_2) - \gamma)/(c(\tilde{n}_2) - c(\tilde{n}_1))$
End If

Theorem 3(a) below shows that the algorithm finds an SR noise $\tilde{N}'$ from $\tilde{\mathcal{N}}$ in at most $i_{max}$ iterations such that $0 \le h(f_{\tilde{N}_{opt}}) - h(f_{\tilde{N}'}) \le \epsilon$. Theorem 3(b) shows that $0 \le h(f_{N_{opt}}) - h(f_{\tilde{N}'}) \le \tau + \epsilon$ if $N_{opt}$ is the optimal SR noise in $\mathcal{N}$ with pdf $f_{N_{opt}}$ and if for each $n \in \mathcal{N}$ there exists $\tilde{n} \in \tilde{\mathcal{N}}$ such that

$|h(n) - h(\tilde{n})| \le \tau$ and   (3.1)
$c(\tilde{n}) \le c(n)$.   (3.2)

Thus the algorithm will find a near-optimal noise $\tilde{N}'$ for any small $\epsilon$ if we choose $\tilde{\mathcal{N}}$ such that $\tau$ is sufficiently small.

Theorem 3: Suppose that the payoff function $h$ and the cost function $c$ are nonnegative and bounded above by $\xi/2$.
(a) For every $\epsilon > 0$ the above algorithm finds an SR noise $\tilde{N}'$ from $\tilde{\mathcal{N}}$ in at most $i_{max} = \lceil \log_2(\xi/\epsilon) \rceil + 1$ iterations such that

$h(f_{\tilde{N}_{opt}}) \ge h(f_{\tilde{N}'}) \ge h(f_{\tilde{N}_{opt}}) - \epsilon$ and   (3.3)
$c(f_{\tilde{N}'}) \le \gamma$.   (3.4)

(b) If $\tilde{\mathcal{N}}$ satisfies (3.1)-(3.2) then the detection performance with noise $\tilde{N}'$ is at most $\tau + \epsilon$ less than the optimal SR detection performance with noise $N_{opt}$:

$h(f_{N_{opt}}) \ge h(f_{\tilde{N}'}) \ge h(f_{N_{opt}}) - (\tau + \epsilon)$.   (3.5)

Proof:
Part (a): The set $G^-$ is nonempty and finite because the noise-realization set $\tilde{\mathcal{N}}$ is finite. Suppose that $P^{\sup}_{D^-} \ge P^{\sup}_{D^+}$. Then Theorem 1(a) implies that the optimal noise $\tilde{N}_{opt}$ in $\tilde{\mathcal{N}}$ has the form $f_{\tilde{N}_{opt}}(n) = \delta(n - \tilde{n}_0)$ for any $\tilde{n}_0 \in G^-$. Note that the algorithm finds such noise if $P^{\sup}_{D^-} \ge P^{\sup}_{D^+}$.

Suppose now that $P^{\sup}_{D^-} < P^{\sup}_{D^+}$. Then there exists $k^* \ge 0$ such that $d^+(k^*) = d^-(k^*) = d(k^*)$ by Theorem 2(a). Also there exist $\tilde{n}^*_1 \in D^-$ and $\tilde{n}^*_2 \in D^+$ such that $g(\tilde{n}^*_1, k^*) = d^-(k^*) = d(k^*) = d^+(k^*) = g(\tilde{n}^*_2, k^*)$ because $\tilde{\mathcal{N}}$ is finite. Then Theorem 2(c) implies that the optimal SR noise $\tilde{N}_{opt}$ in $\tilde{\mathcal{N}}$ has a pdf $f_{\tilde{N}_{opt}}$ of the form (2.53). Note that $c(f_{\tilde{N}_{opt}}) = \gamma$ by Theorem 1(b) while Theorem 2(b) implies that $h(f_{\tilde{N}_{opt}}) = d(k^*)$. The algorithm finds an SR noise $\tilde{N}'$ from $\tilde{\mathcal{N}}$ and its pdf $f_{\tilde{N}'}$ is of the form (∗). Such a pdf satisfies condition (3.4) with equality. So we need to show only that $f_{\tilde{N}'}$ satisfies condition (3.3).
We first show that if $|d^-(k(i)) - d(k^*)| \le \epsilon$ and $|d^+(k(i)) - d(k^*)| \le \epsilon$ for some $i$ then the SR noise pdf $f_{\tilde{N}'}$ satisfies condition (3.3). Theorem 2(a) implies that if $d^-(k(i)) \le d(k^*)$ then $d^+(k(i)) \ge d(k^*)$ and if $d^-(k(i)) \ge d(k^*)$ then $d^+(k(i)) \le d(k^*)$. So suppose first that $d(k^*) - d^-(k(i)) \le \epsilon$ and $d^+(k(i)) \ge d(k^*)$. Let $\tilde{n}_1 \in D^-$ and $\tilde{n}_2 \in D^+$ be such that $g(\tilde{n}_1, k(i)) = d^-(k(i))$ and $g(\tilde{n}_2, k(i)) = d^+(k(i))$. Note that $c(f_{\tilde{N}'}) = \int_{\tilde{\mathcal{N}}} c(n) f_{\tilde{N}'}(n)\,dn = \gamma$. Then we can write

$h(f_{\tilde{N}'}) = \int_{\tilde{\mathcal{N}}} (h(n) - k(c(n) - \gamma)) f_{\tilde{N}'}(n)\,dn$   (3.6)
$= \lambda g(\tilde{n}_1, k(i)) + (1-\lambda) g(\tilde{n}_2, k(i))$   (3.7)
$= \lambda d^-(k(i)) + (1-\lambda) d^+(k(i))$   (3.8)
$\ge \lambda(d(k^*) - \epsilon) + (1-\lambda) d(k^*)$   (3.9)
because $d(k^*) - d^-(k(i)) \le \epsilon$ and $d^+(k(i)) \ge d(k^*)$
$= d(k^*) - \lambda\epsilon \ge d(k^*) - \epsilon$ because $0 \le \lambda \le 1$   (3.10)
$= h(f_{\tilde{N}_{opt}}) - \epsilon$.   (3.11)

Similar arguments show that $h(f_{\tilde{N}'}) \ge d(k^*) - \epsilon$ if $d(k^*) - d^+(k(i)) \le \epsilon$ and $d^-(k(i)) \ge d(k^*)$. So it now remains to show that $|d^-(k(i)) - d(k^*)| \le \epsilon$ and $|d^+(k(i)) - d(k^*)| \le \epsilon$ for some $i$ to prove the claim.

We show that $|d^-(k(i)) - d(k^*)| \le \xi 2^{-i}$ for all $i$. This implies that $|d^-(k(i)) - d(k^*)| \le \epsilon$ for $i = i_{stop}$ where $i_{stop} = \lceil \log_2(\xi/\epsilon) \rceil$. We then prove that $|d^+(k(i)) - d(k^*)| \le \epsilon$ for $i = i_{stop} + 1$. We use the induction principle to prove that $|d^-(k(i)) - d(k^*)| \le \xi 2^{-i}$ for all $i$.

Basis step ($i = 1$): The definition of $dr(i)$ gives $dr(1) = \frac{ds(1) + df(1)}{2}$ where $ds(1) = d^-(k(0))$, $df(1) = d^+(k(0))$, and $k(0) = 1$. Note that both $ds(1)$ and $df(1)$ lie between 0 and $\xi$ by the hypothesis. So $|ds(1) - df(1)| \le \xi$. Then $|dr(1) - ds(1)| = |dr(1) - df(1)| = \frac{|ds(1) - df(1)|}{2} \le \frac{\xi}{2}$. Theorem 2(a) implies that $d(k^*)$ lies between $d^-(k(0))$ ($= ds(1)$) and $d^+(k(0))$ ($= df(1)$). Then $|dr(1) - d(k^*)| \le |dr(1) - ds(1)| \le \frac{\xi}{2}$. Further $d^-(k(1)) = dr(1)$ in the algorithm because $k(1) = \min\{(h(\tilde{n}) - dr(1))/(c(\tilde{n}) - \gamma) : \tilde{n} \in D^- \setminus D^0\}$. Hence $|d^-(k(1)) - d(k^*)| \le \frac{\xi}{2}$.

Induction hypothesis ($i = m$): Suppose that $|ds(m) - df(m)| \le \xi 2^{-(m-1)}$. Then the definition of $dr(i)$ implies that $|dr(m) - ds(m)| = |dr(m) - df(m)| = \frac{|ds(m) - df(m)|}{2} \le \xi 2^{-m}$. Suppose also that $d(k^*)$ lies between $ds(m)$ and $df(m)$. Then $|dr(m) - d(k^*)| \le \xi 2^{-m}$. Note that $d^-(k(m)) = dr(m)$ where $k(m) = \min\{(h(\tilde{n}) - dr(m))/(c(\tilde{n}) - \gamma) : \tilde{n} \in D^- \setminus D^0\}$ in the algorithm. Therefore $|d^-(k(m)) - d(k^*)| \le \xi 2^{-m}$.

Induction step ($i = m+1$): Note that $d(k^*)$ lies between $d^-(k(m))$ and $d^+(k(m))$ by Theorem 2(a). Suppose that $d^+(k(m)) > d^-(k(m))$. Then $d(k^*)$ also lies between $d^-(k(m))$ and $\max\{ds(m), df(m)\}$ because $d^-(k(m)) = dr(m) = \frac{ds(m) + df(m)}{2}$ by definition. Therefore $d(k^*)$ lies between $d^-(k(m))$ and $\min\{d^+(k(m)), \max\{ds(m), df(m)\}\}$ where $d^-(k(m)) \le \min\{d^+(k(m)), \max\{ds(m), df(m)\}\}$. The algorithm defines $ds(m+1) = d^-(k(m)) = dr(m)$ and $df(m+1) = \min\{d^+(k(m)), \max\{ds(m), df(m)\}\}$. Then $dr(m) < df(m+1)$ and $d(k^*)$ lies between $ds(m+1)$ and $df(m+1)$. Write $|ds(m+1) - df(m+1)| = |dr(m) - df(m+1)| \le |dr(m) - \max\{ds(m), df(m)\}| \le \xi 2^{-m}$. Here the last inequality follows from the induction hypothesis while the first inequality follows from the definition of $df(m+1)$ and the fact that $dr(m) < df(m+1)$. Then we get $|d^-(k(m+1)) - df(m+1)| = |dr(m+1) - df(m+1)| = |dr(m+1) - ds(m+1)| = \frac{|ds(m+1) - df(m+1)|}{2} \le \xi 2^{-(m+1)}$ because $d^-(k(m+1)) = dr(m+1) = \frac{ds(m+1) + df(m+1)}{2}$ and $|ds(m+1) - df(m+1)| \le \xi 2^{-m}$. This proves the induction claim $|d^-(k(m+1)) - d(k^*)| \le \xi 2^{-(m+1)}$ because $d(k^*)$ lies between $ds(m+1)$ and $df(m+1)$. Similar arguments prove the induction claim when $d^+(k(m+1)) < d^-(k(m+1))$. Thus we have proved that $|d^-(k(i)) - d(k^*)| \le \epsilon$ for $i = i_{stop}$.
But this need not imply that $|d^+(k(i_{stop})) - d^-(k(i_{stop}))| \le \epsilon$. If $|d^+(k(i_{stop})) - d^-(k(i_{stop}))| > \epsilon$ then the algorithm finds $k(i_{stop}+1)$ such that $d^+(k(i_{stop}+1)) = d^-(k(i_{stop})) + t\epsilon$ where $t = \mathrm{sgn}[d^+(k(i_{stop})) - d^-(k(i_{stop}))]$. This implies that $|d^-(k(i_{stop}+1)) - d^+(k(i_{stop}+1))| \le \epsilon$ because $d(k^*)$ lies between $d^-(k(i_{stop}+1))$ and $d^+(k(i_{stop}+1))$ by Theorem 2(a) and because $|d^-(k(i_{stop}+1)) - d(k^*)| \le \epsilon$.

Part (b): The optimal SR noise pdf $f_{N_{opt}}$ is of the form (2.53)-(2.54) so $c(f_{N_{opt}}) = \lambda c(n_1) + (1-\lambda) c(n_2) = \gamma$. Then there exist $\tilde{n}_1$ and $\tilde{n}_2$ in $\tilde{\mathcal{N}}$ that satisfy conditions (3.1)-(3.2) for $n_1$ and $n_2$ respectively. Define $\tilde{N}''$ as a noise restricted to $\tilde{\mathcal{N}}$ with pdf

$f_{\tilde{N}''}(n) = \lambda\delta(n - \tilde{n}_1) + (1-\lambda)\delta(n - \tilde{n}_2)$.   (3.12)

Then

$\gamma = c(f_{N_{opt}}) \ge \lambda c(\tilde{n}_1) + (1-\lambda) c(\tilde{n}_2)$ because of (3.2)   (3.13)
$= c(f_{\tilde{N}''})$ because of (3.12)   (3.14)

and

$h(f_{N_{opt}}) = \lambda h(n_1) + (1-\lambda) h(n_2)$   (3.15)
$\le \lambda(h(\tilde{n}_1) + \tau) + (1-\lambda)(h(\tilde{n}_2) + \tau)$ because of (3.1)   (3.16)
$= h(f_{\tilde{N}''}) + \tau$ because of (3.12).   (3.17)

Inequalities (3.14) and (3.17) and the fact that $\tilde{N}''$ is a noise restricted to $\tilde{\mathcal{N}}$ imply that if $\tilde{N}_{opt}$ is the optimal SR noise restricted to $\tilde{\mathcal{N}}$ such that $c(f_{\tilde{N}_{opt}}) \le \gamma$ then

$h(f_{\tilde{N}_{opt}}) \ge h(f_{\tilde{N}''}) \ge h(f_{N_{opt}}) - \tau$.   (3.18)

Then the claim follows from (3.18) and the result of part (a).

Chapter 4
Applications of SR Noise Algorithm

This chapter presents two applications of the SR noise algorithm of Chapter 3. The first application finds near-optimal SR noise for a suboptimal one-sample Neyman-Pearson hypothesis test of variance. The second application gives a near-optimal signal power randomization for an average-power-constrained anti-podal signal transmitter in the presence of additive Gaussian-mixture channel noise where the receiver uses a maximum a posteriori (MAP) method for optimal signal detection.

4.1 Near-optimal SR noise for a suboptimal one-sample Neyman-Pearson hypothesis test of variance

Consider a hypothesis test for the variance between two zero-mean Gaussian distributions

$H_0$: $f_0(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}$ vs. $H_1$: $f_1(x) = \frac{1}{2\sqrt{2\pi}} e^{-x^2/(2 \cdot 2^2)}$

where we want to decide between $H_0$ and $H_1$ using only a single observation of $X$ at the significance level $\alpha = 0.4$. Figure 1.1(a) shows both $f_0$ and $f_1$. Note that the optimal N-P test at level $\alpha$ is a chi-square test that rejects $H_0$ if $X^2 > \chi^2_\alpha(1)$ because $X^2$ is a chi-square random variable with 1 degree of freedom. If the detection system uses only $X$ (neither $|X|$ nor $X^2$) then it requires the two thresholds $-\sqrt{\chi^2_\alpha(1)}$ and $\sqrt{\chi^2_\alpha(1)}$ for optimal decision making. Suppose that the detection system uses only one threshold $\theta$ due to resource limits or some design constraint. Suppose also that we decide between $H_0$ and $H_1$ using a single noisy observation $Y = X + N$. Define $P_D(n)$ and $P_{FA}(n)$ as the respective probabilities of detection and false alarm when the noise $N$ has realization $n$. Here $P_D(n)$ and $P_{FA}(n)$ are the respective payoff function $h(n)$ and cost function $c(n)$ and they are bounded above by 1 ($= \xi/2$). If we reject $H_0$ when $X + N > \theta$ then $P_D(n) = 1 - \Phi(\frac{\theta - n}{2})$ and $P_{FA}(n) = 1 - \Phi(\theta - n)$ for the standard normal cumulative distribution function $\Phi(z) = \int_{-\infty}^{z} \frac{1}{\sqrt{2\pi}} e^{-w^2/2}\,dw$. Suppose that we want $\alpha = P_{FA}(0) = 0.4$. Then $\theta = 0.2534$ and the noiseless detection probability is $P_D(0) = 0.4496$.

Define $P_D(f_N) = \int_{\mathcal{N}} P_D(n) f_N(n)\,dn$ and $P_{FA}(f_N) = \int_{\mathcal{N}} P_{FA}(n) f_N(n)\,dn$ as the respective probabilities of detection and false alarm when the noise pdf is $f_N$.
Then $P_D(f_N)$ is the average payoff $h(f_N)$ (or $E_{f_N}(h(N))$) and $P_{FA}(f_N)$ is the average cost $c(f_N)$ (or $E_{f_N}(c(N))$). The significance level $\alpha$ is the maximum average cost $\gamma$. We want to find the optimal N-P SR noise pdf $f_{N_{opt}}$ such that $P_{FA}(f_{N_{opt}}) \le \alpha = 0.4$ and $P_D(f_N) \le P_D(f_{N_{opt}})$ for any other noise pdf $f_N$ with $P_{FA}(f_N) \le 0.4$. Note that $P_D(n)$ and $P_{FA}(n)$ are monotonically increasing on $\mathbb{R}$ so that $P^{\sup}_{D^-} = P_D(0) < P^{\sup}_{D^+} = 1$. Theorem 2(d) implies that optimal N-P SR noise does not exist if the noise space is $\mathbb{R}$. But the condition of Theorem 2(c) does hold if we restrict the noise space to a compact interval (say $[-5, 5]$) because $P_D(n)$ and $P_{FA}(n)$ are continuous functions of $n$.

We apply the algorithm to find near-optimal noise in $\mathcal{N} = [-5, 5]$ for $\epsilon = 2^{-20}$. Consider the discretized set $\tilde{\mathcal{N}}$ of noise realizations from $-5$ to $5$ with an increment of 0.0001 ($\tilde{\mathcal{N}} = [-5{:}0.0001{:}5]$). Such an $\tilde{\mathcal{N}}$ satisfies (3.1)-(3.2) for $\tau = 0.00004$ because 0.4 bounds $f_0$ and $f_1$.

[Figure 4.1 appears here: plots of $g(\tilde{n}, k(i))$ versus the noise realization $\tilde{n}$; the plot data are omitted.]

Figure 4.1: Finding near-optimal Neyman-Pearson SR noise. Plots of $g(\tilde{n}, k(i)) = P_D(\tilde{n}) - k(i)(P_{FA}(\tilde{n}) - \alpha)$ before the first iteration ($i = 0$) and after the 9th iteration ($i = 9$) where $k(0) = 1$. The detection probability in the absence of additive noise is $P_D(0) = 0.4496$. The noise finding algorithm finds a value of $k(9) = 0.8098$ in just 9 iterations such that $|d^+(k(9)) - d^-(k(9))| < \epsilon = 2^{-20}$. Note that $g(\tilde{n}_1, k(9)) = d^-(k(9))$ at $\tilde{n}_1 = -0.8805 \in D^-$ and $g(\tilde{n}_2, k(9)) = d^+(k(9))$ at $\tilde{n}_2 = 5 \in D^+$. Then equations (4.1)-(4.2) give the pdf of a near-optimal N-P SR noise $\tilde{N}'$ and $P_D(f_{\tilde{N}'}) = 0.5052$. Thus the N-P SR noise $\tilde{N}'$ increases the detection probability $P_D$ from 0.4496 to 0.5052.

Figure 4.1 shows the plots of $g(\tilde{n}, k(i)) = P_D(\tilde{n}) - k(i)(P_{FA}(\tilde{n}) - \alpha)$ before the first iteration ($i = 0$) and after the 9th iteration ($i = 9$) where $k(0) = 1$. The noise finding algorithm finds the value $k(9) = 0.8098$ in just 9 ($< i_{max} = \lceil \log_2(\xi/\epsilon) \rceil + 1 = 22$) iterations such that $|d^+(k(9)) - d^-(k(9))| < \epsilon = 2^{-20}$. Note that $g(\tilde{n}_1, k(9)) = d^-(k(9))$ at $\tilde{n}_1 = -0.8805 \in D^-$ and $g(\tilde{n}_2, k(9)) = d^+(k(9))$ at $\tilde{n}_2 = 5 \in D^+$. Then

$f_{\tilde{N}'}(n) = \lambda\delta(n + 0.8805) + (1-\lambda)\delta(n - 5)$ with   (4.1)
$\lambda = (P_{FA}(5) - 0.4)/(P_{FA}(5) - P_{FA}(-0.8805)) = 0.6884$   (4.2)

is the pdf of a near-optimal N-P additive SR noise $\tilde{N}'$ because $P_D(f_{\tilde{N}'}) = 0.5052$ while the detection probability $P_D(f_{N_{opt}})$ of the optimal N-P SR noise $N_{opt}$ in $\mathcal{N} = [-5, 5]$ can be at most $0.00004 + 2^{-20}$ more than $P_D(f_{\tilde{N}'})$ by Theorem 3(b). So the algorithm finds a near-optimal N-P SR noise that gives a 12% increase in the probability of detection from 0.4496 to 0.5052.

4.2 Near-optimal signal power randomization for an average-power-constrained signal transmitter

The detection performance of the MAP receiver can sometimes benefit from signal power randomization or time-sharing in an average-power-constrained anti-podal signal transmitter if the channel noise pdf is not unimodal. We apply our algorithm to find a near-optimal signal power distribution or randomization in an average-power-constrained transmitter to improve the detection performance of its MAP receiver.
Consider a signal detection hypothesis test where the transmitter transmits anti-podal signals $X \in \{-S, S\}$ with $S \in \mathcal{S} = [0.5, 3.75]$ and both signal values are equally likely: $H_0$: $X = -S$ vs. $H_1$: $X = S$ and $P(H_0) = P(-S) = P(S) = P(H_1)$. Suppose that the transmitter can use at most 4.75 units of average power $E(S^2)$ and that the receiver decides between $H_0$ and $H_1$ using a single noisy observation $Y = X + N$. Here $N$ is an additive symmetric Gaussian-mixture channel noise such that the received signal probability density is

$f_0(y) = \frac{1}{2\sqrt{2\pi}} e^{-(y-2-S)^2/2} + \frac{1}{2\sqrt{2\pi}} e^{-(y+2-S)^2/2}$

under the hypothesis $H_0$ and

$f_1(y) = \frac{1}{2\sqrt{2\pi}} e^{-(y-2+S)^2/2} + \frac{1}{2\sqrt{2\pi}} e^{-(y+2+S)^2/2}$

under the hypothesis $H_1$. The receiver is optimal and hence it uses maximum a posteriori (MAP) signal detection to maximize the probability of correct decision. We assume that the transmitter can time-share or randomize between signal power levels such that there is no ambiguity at the receiver about the power level of the transmitted signal.

Let $f_S$ be the probability density of the transmitter's signal strength $S$. Define $P_{CD}(f_S)$ as the probability of correct decision and let $E_{f_S}(S^2)$ be the average signal power when the signal-strength pdf is $f_S$. Then we want to find $f_{S_{opt}}$ with $E_{f_{S_{opt}}}(S^2) \le 4.75$ such that $P_{CD}(f_S) \le P_{CD}(f_{S_{opt}})$ for any $f_S$ with $E_{f_S}(S^2) \le 4.75$. We can view $P_{CD}(f_S)$ as the average payoff $E_{f_S}(h(S))$ and $E_{f_S}(S^2)$ as the average cost $E_{f_S}(c(S))$ with the maximum average cost $\gamma = 4.75$. So we can apply the SR noise finding algorithm to find a near-optimal signal-power pdf $f_{\tilde{S}'}$ from a discretized set $\tilde{\mathcal{S}}$ of signal-strength realizations. We choose $\tilde{\mathcal{S}} = [0.5{:}0.0001{:}3.75]$ and $\epsilon = 2^{-20}$. Then $\tilde{\mathcal{S}}$ satisfies conditions (3.1)-(3.2) for $\tau = 0.0002$ because $f_0$ and $f_1$ are bounded by 0.2. Both $h$ and $c$ have upper bound $\xi/2 = 10/2$. So the iteration upper bound is $i_{max} = \lceil \log_2(\xi/\epsilon) \rceil + 1 = 24$.

Figure 4.2 shows the plot of the correct-decision probability $P_{CD}$ versus the signal power $S^2$. The randomized signal power is optimal because of the nonconcavity of the plot. Azizoglu [2] proved that the plot of $P_{CD}$ versus $S^2$ is concave if the channel noise has finite variance and if it has a unimodal pdf that is continuously differentiable at every point except at the mode. The nonconcavity of the plot arises from the bimodal Gaussian-mixture pdf even though bimodality of the channel noise pdf does not by itself give a nonconcave $P_{CD}$-versus-$S^2$ plot. The probability of correct decision is $P_{CD} = 0.7855$ (point $a$) if the transmitter uses the constant power $S^2 = 4.75$ (a constant signal strength $S = 2.1794$). The dashed tangent line shows that we can achieve a better detection performance using the same average signal power $E(S^2) = 4.75$. The probability of correct decision is $P_{CD} = 0.8421$ (point $d$) if the transmitter time-shares or randomizes appropriately between the signal power levels $S_1^2 = 1.4908$ (point $b$) and $S_2^2 = 9.3697$ (point $c$).

The algorithm finds the SR noise or signal-strength randomization $\tilde{S}'$ with pdf

$f_{\tilde{S}'}(s) = \lambda\delta(s - 1.221) + (1-\lambda)\delta(s - 3.061)$   (4.3)

where

$\lambda = (3.061^2 - 4.74)/(3.061^2 - 1.221^2) = 0.5876$   (4.4)

in just 13 ($< i_{max} = 24$) iterations. This means that the transmitter should time-share or randomize between the anti-podal signals $\{-1.221, 1.221\}$ and $\{-3.061, 3.061\}$ with respective probabilities $\lambda = 0.5876$ and $1 - \lambda = 0.4124$.
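These payoff values can be checked numerically. For equiprobable hypotheses the MAP receiver attains $P_{CD}(S) = \frac{1}{2}\int \max(f_0(y), f_1(y))\,dy$, so a short script (our sketch, not the thesis's code) evaluates the constant-power and randomized-power payoffs:

```python
import numpy as np

def p_cd(S):
    """MAP probability of correct decision at signal strength S (grid integral)."""
    y = np.linspace(-25.0, 25.0, 200001)
    phi = lambda m: np.exp(-(y - m) ** 2 / 2.0) / np.sqrt(2.0 * np.pi)
    f0 = 0.5 * (phi(2.0 + S) + phi(S - 2.0))     # received density under H0
    f1 = 0.5 * (phi(2.0 - S) + phi(-2.0 - S))    # received density under H1
    return 0.5 * np.maximum(f0, f1).sum() * (y[1] - y[0])

lam = 0.5876
print(p_cd(2.1794))                                   # constant power S^2 = 4.75: ~0.7855
print(lam * p_cd(1.221) + (1.0 - lam) * p_cd(3.061))  # randomized power: ~0.8421
print(lam * 1.221**2 + (1.0 - lam) * 3.061**2)        # average power ~4.74 <= gamma = 4.75
```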
This signal-strength randomization pdf $f_{\tilde{S}'}$ is nearly optimal because Theorem 3(b) implies that $P_{CD}(f_{S_{opt}})$ for the optimal signal-strength randomization or optimal SR noise $S_{opt}$ in $\mathcal{S} = [0.5, 3.75]$ can be at most $0.0002 + 2^{-20}$ more than $P_{CD}(f_{\tilde{S}'}) = 0.8421$. Thus the SR noise algorithm can find a near-optimal signal power randomization that gives a 7.2% increase in the average probability of correct decision (from 0.7855 to 0.8421) over constant-power signaling.

These applications show that the algorithm finds near-optimal noise or randomization in just a few iterations and hence is much faster than a pure exhaustive search. They also show that some signal processing or communication systems may have randomized optimal solutions. An open research problem is to find more such systems and perhaps to devise an adaptive SR noise finding algorithm that is faster than the present algorithm.

[Figure 4.2 appears here: the plot of the correct-decision probability $P_{CD}$ versus the signal power $S^2$; the plot data are omitted.]

Figure 4.2: SR noise (signal-strength randomization) benefits in optimal anti-podal signal detection. Signal power randomization in an average-power-constrained anti-podal transmitter improves the detection performance of the optimal receiver. The transmitter transmits anti-podal signals $X \in \{-S, S\}$ such that $E(S^2) \le \gamma = 4.75$ and both signal values are equally likely: $H_0$: $X = -S$ vs. $H_1$: $X = S$ and $P(H_0) = P(-S) = P(S) = P(H_1)$. The receiver receives a noisy observation $Y = X + N$. Here $N$ is an additive symmetric Gaussian-mixture channel noise such that the received signal density is $f_0(y) = \frac{1}{2\sqrt{2\pi}} e^{-(y-2-S)^2/2} + \frac{1}{2\sqrt{2\pi}} e^{-(y+2-S)^2/2}$ under the hypothesis $H_0$ and $f_1(y) = \frac{1}{2\sqrt{2\pi}} e^{-(y-2+S)^2/2} + \frac{1}{2\sqrt{2\pi}} e^{-(y+2+S)^2/2}$ under the hypothesis $H_1$. The receiver uses a single noisy observation $Y$ and the optimal maximum a posteriori (MAP) decision rule to decide between $H_0$ and $H_1$. The solid line shows the nonmonotonic and nonconcave plot of the probability of correct decision $P_{CD}$ versus the signal power $S^2$. Nonconcavity of the plot between points $b$ and $c$ allows the SR effect to occur. If the transmitter uses the constant power $S^2 = 4.75$ (a constant signal strength $S = 2.1794$) then the respective probability of correct decision is $P_{CD} = 0.7855$ (point $a$). The dashed tangent line shows that we can achieve a better probability of correct decision (0.8421 at point $d$) at the same average signal power $E(S^2) = 4.75$ if the transmitter time-shares or randomizes appropriately between the signal power levels $S_1^2 = 1.4908$ (point $b$) and $S_2^2 = 9.3697$ (point $c$).

References

[1] S. Appadwedula, V. V. Veeravalli, and D. L. Jones. Energy-efficient detection in sensor networks. IEEE Journal on Selected Areas in Communications, 23(4):693–702, April 2005.

[2] M. Azizoglu. Convexity property in binary detection. IEEE Transactions on Information Theory, 42(4):1316–1321, July 1996.

[3] S. M. Bezrukov and I. Vodyanov. Noise-induced enhancement of signal transduction across voltage-dependent ion channels. Nature, 378:362–364, November 1995.

[4] A. R. Bulsara, A. J. Maren, and G. Schmera. Single effective neuron: Dendritic coupling effect and stochastic resonance. Biological Cybernetics, 70(2):145–156, December 1993.

[5] A. R. Bulsara and A. Zador.
Threshold detection of wideband signals: A noise-induced maximum in the mutual information. Physical Review E, 54(3):R2185–R2188, September 1996.

[6] F. Chapeau-Blondeau and D. Rousseau. Constructive action of additive noise in optimal detection. International Journal of Bifurcation and Chaos, 15(9):2985–2994, September 2005.

[7] M. Chatterjee and M. E. Robert. Noise enhances modulation sensitivity in cochlear implant listeners: Stochastic resonance in a prosthetic sensory system? Journal of the Association for Research in Otolaryngology, 2(2):159–171, August 2001.

[8] H. Chen, P. K. Varshney, S. M. Kay, and J. H. Michels. Theory of stochastic resonance effects in signal detection: Part I, fixed detectors. IEEE Transactions on Signal Processing, 55(7):3172–3184, July 2007.

[9] J. J. Collins, C. C. Chow, A. C. Capela, and T. T. Imhoff. Aperiodic stochastic resonance in excitable systems. Physical Review E, 52(4):R3321–R3324, October 1995.

[10] J. J. Collins, C. C. Chow, A. C. Capela, and T. T. Imhoff. Aperiodic stochastic resonance. Physical Review E, 54(5):5575–5584, November 1996.

[11] J. J. Collins, T. T. Imhoff, and P. Grigg. Noise enhanced information transmission in rat SA1 cutaneous mechanoreceptors via aperiodic stochastic resonance. Journal of Neurophysiology, 76(1):642–645, July 1996.

[12] G. Deco and B. Schurmann. Stochastic resonance in the mutual information between input and output spike trains of noisy central neurons. Physica D, 117(1-4):276–282, June 1998.

[13] T. Ditzinger, M. Stadler, D. Struber, and J. A. Kelso. Noise improves three-dimensional perception: Stochastic resonance and other impacts of noise to the perception of autostereograms. Physical Review E, 62(2):2566–2575, August 2000.

[14] J. K. Douglass, L. Wilkens, E. Pantazelou, and F. Moss. Noise enhancement of information transfer in crayfish mechanoreceptors by stochastic resonance. Nature, 365:337–340, September 1993.

[15] L. Gammaitoni. Stochastic resonance and the dithering effect in threshold physical systems. Physical Review E, 52(5):4691–4698, November 1995.

[16] I. Goychuk and P. Hanggi. Quantum stochastic resonance in parallel. New Journal of Physics, 1:14.1–14.14, August 1999.

[17] P. Hanggi. Stochastic resonance in biology: How noise can enhance detection of weak signals and help improve biological information processing. ChemPhysChem, 3:285–290, March 2002.

[18] G. P. Harmer, B. R. Davis, and D. Abbott. A review of stochastic resonance: Circuits and measurement. IEEE Transactions on Instrumentation and Measurement, 51(2):299–309, April 2002.

[19] L. Huang and M. Neely. The optimality of two prices: Maximizing revenue in a stochastic network. Proceedings of the 45th Annual Allerton Conference on Communication, Control, and Computing, September 2007.

[20] M. E. Inchiosa, J. W. C. Robinson, and A. R. Bulsara. Information-theoretic stochastic resonance in noise-floor limited systems: The case for adding noise. Physical Review Letters, 85:3369–3372, October 2000.

[21] F. Jaramillo and K. Wiesenfeld. Mechanoelectrical transduction assisted by Brownian motion: A role for noise in the auditory system. Nature Neuroscience, 1:384–388, September 1998.

[22] P. Jung, A. Cornell-Bell, F. Moss, S. Kadar, J. Wang, and K. Showalter. Noise sustained waves in subexcitable media: From chemical waves to brain waves. Chaos, 8:567–575, September 1998.

[23] G. G. Karapetyan. Application of stochastic resonance in gravitational-wave interferometer. Physical Review D, 73:122003, June 2006.

[24] S. Kay. Can detectability be improved by adding noise?
IEEE Signal Processing Letters, 7:8–10, January 2000.

[25] S. Kay. Reducing probability of decision error using stochastic resonance. IEEE Signal Processing Letters, 13:695–698, November 2006.

[26] K. Kitajo, D. Nozaki, L. M. Ward, and Y. Yamamoto. Behavioral stochastic resonance within the human brain. Physical Review Letters, 90(21):218103, May 2003.

[27] B. Kosko. Noise. Viking/Penguin, 2006.

[28] B. Kosko and S. Mitaim. Robust stochastic resonance: Signal detection and adaptation in impulsive noise. Physical Review E, 64(5):051110, November 2001.

[29] B. Kosko and S. Mitaim. Stochastic resonance in noisy threshold neurons. Neural Networks, 16(5-6):755–761, June 2003.

[30] B. Kosko and S. Mitaim. Robust stochastic resonance for simple threshold neurons. Physical Review E, 70(3):031911, September 2004.

[31] I. Y. Lee, X. Liu, C. Zhou, and B. Kosko. Noise-enhanced detection of subthreshold signals with carbon nanotubes. IEEE Transactions on Nanotechnology, 5(6):613–627, November 2006.

[32] J. E. Levin and J. P. Miller. Broadband neural encoding in the cricket cercal sensory system enhanced by stochastic resonance. Nature, 380:165–168, March 1996.

[33] W. B. Levy and R. A. Baxter. Energy-efficient neural computation via quantal synaptic failures. Journal of Neuroscience, 22(11):4746–4755, June 2002.

[34] B. Lindner, J. Garcia-Ojalvo, A. Neiman, and L. Schimansky-Geier. Effects of noise in excitable systems. Physics Reports, 392:321–424, March 2004.

[35] A. Longtin. Autonomous stochastic resonance in bursting neurons. Physical Review E, 55(1):868–876, January 1997.

[36] D. Luenberger. Optimization by Vector Space Methods. Wiley-Interscience, 1969.

[37] M. D. McDonnell, N. G. Stocks, C. E. M. Pearce, and D. Abbott. Optimal information transmission in nonlinear arrays through suprathreshold stochastic resonance. Physics Letters A, 352:183–189, March 2006.

[38] S. Mitaim and B. Kosko. Adaptive stochastic resonance. Proceedings of the IEEE: Special Issue on Intelligent Signal Processing, 86(11):2152–2183, November 1998.

[39] S. Mitaim and B. Kosko. Adaptive stochastic resonance in noisy neurons based on mutual information. IEEE Transactions on Neural Networks, 15(6):1526–1540, November 2004.

[40] F. Moss, L. M. Ward, and W. G. Sannita. Stochastic resonance and sensory information processing: A tutorial and review of applications. Clinical Neurophysiology, 115(2):267–281, February 2004.

[41] A. Patel and B. Kosko. Stochastic resonance in noisy spiking retinal and sensory neuron models. Neural Networks, 18:467–478, August 2005.

[42] A. Patel and B. Kosko. Optimal noise benefits in Neyman-Pearson signal detection. 33rd International Conference on Acoustics, Speech, and Signal Processing (ICASSP '08), 2008.

[43] A. Patel and B. Kosko. Stochastic resonance in continuous and spiking neuron models with Levy noise. IEEE Transactions on Neural Networks, 2008. To appear.

[44] D. Rousseau, G. V. Anand, and F. Chapeau-Blondeau. Noise-enhanced nonlinear detector to improve signal detection in non-Gaussian noise. Signal Processing, 86(11):3456–3465, November 2006.

[45] D. Rousseau and F. Chapeau-Blondeau. Constructive role of noise in signal detection from parallel array of quantizers. Signal Processing, 85(3):571–580, March 2005.

[46] D. Rousseau and F. Chapeau-Blondeau. Stochastic resonance and improvement by noise in optimal detection strategies. Digital Signal Processing, 15:19–32, 2005.

[47] A. A. Saha and G. V. Anand. Design of detectors based on stochastic resonance.
Signal Processing, 86(3):1193–1212, June 2003. [48] Y. Sakumura and K. Aihara. Stochastic resonance and coincidence detection in a single neuron. Neural Processing Letters, 16(3):235–242, December 2002. [49] M.J.J. Scott, M. Niranjan, and R.W. Prager. Realisable classifiers: Improving op- erating performance on variable cost problems. Proceedings of 9th British Machine Vision Conference, 1:305–315, September 1998. [50] W.C. Stacey and D.M. Durand. Stochastic resonance improves signal detection in hippocampal ca1 neurons. Journal of Neurophysiology, 83(3):1394–1402, March 2000. [51] N.G. Stocks. Information transmission in parallel threshold arrays: Suprathreshold stochastic resonance. Physical Review E, 63(4):041114–+, April 2001. [52] N.G. Stocks, D. Appligham, and R.P. Morse. The application of suprathreshold stochastic resonance to cochlear implant coding. Fluctuation and Noise Letters, 2(3):L169L181, September 2002. [53] D.W. Sullivan and W.B. Levy. Quantal synaptic failures enhance performance in a minimal hippocampal model. Network: Computation in Neural Systems, 15(1):45– 67, 2004. [54] Y.Tao. Adaptivelyoptimizingstochasticresonanceinvisualsystem. Physics Letters A, 245(1):79–86, August 1998. [55] J.N. Tsitsiklis. Decentralized Detection, Advances in Statistical Signal Processing, H.V. Poor and J.B. Thomas, Eds. JAI Press, Greenwich, CT, 1998. [56] R.A.Wannamaker,S.P.Lipshitz,andJ.Vanderkooy. Stochasticresonanceasdither- ing. Physical Review E, 61(1):233–236, January 2000. 41 [57] K. Wiesenfeld and F. Moss. Stochastic resonance and the benefits of noise: From ice ages to crayfish and squids. Nature, 373:33–36, January 1995. [58] F.G. Zeng, Q.J. Fu, and R. Morse. Human hearing enhanced by noise. Brain Research, 869(1). [59] S. Zozor and P.-O. Amblard. On the use of stochastic resonance in sine detection. Signal Processing, 82(3):353367, March 2002. [60] S.ZozorandP.-O.Amblard. Stochasticresonanceinlocallyoptimaldetectors. IEEE Transactions on Signal Processing, 51(12):3177–3181, December 2003. 42
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
Noise benefits in nonlinear signal processing
Equilibrium model of limit order book and optimal execution problem
On spectral approximations of stochastic partial differential equations driven by Poisson noise
Noise benefits in expectation-maximization algorithms
Statistical inference of stochastic differential equations driven by Gaussian noise
Differentially private and fair optimization for machine learning: tight error bounds and efficient algorithms
Dimension reduction techniques for noise stability theorems
High-capacity feedback neural networks
On the interplay between stochastic programming, non-parametric statistics, and nonconvex optimization
New methods for asymmetric error classification and robust Bayesian inference
Reinforcement learning for the optimal dividend problem
Essays on revenue management with choice modeling
Nonparametric estimation of an unknown probability distribution using maximum likelihood and Bayesian approaches
Scheduling and resource allocation with incomplete information in wireless networks
Investigation of mechanisms of complex catalytic reactions from obtaining and analyzing experimental data to mechanistic modeling
Asset Metadata
Creator
Patel, Ashok
(author)
Core Title
Optimizing statistical decisions by adding noise
School
College of Letters, Arts and Sciences
Degree
Master of Arts
Degree Program
Applied Mathematics
Publication Date
04/28/2008
Defense Date
03/14/2008
Publisher
University of Southern California (original), University of Southern California. Libraries (digital)
Tag
OAI-PMH Harvest, optimization, SR noise algorithm, statistical decisions, stochastic resonance
Language
English
Advisor
Mikulevicius, Remigijus (committee chair), Kosko, Bart (committee member), Lototsky, Sergey Vladimir (committee member)
Creator Email
ashokpat@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-m1191
Unique identifier
UC1291810
Identifier
etd-Ashok-20080428 (filename), usctheses-m40 (legacy collection record id), usctheses-c127-60580 (legacy record id), usctheses-m1191 (legacy record id)
Legacy Identifier
etd-Ashok-20080428.pdf
Dmrecord
60580
Document Type
Thesis
Rights
Patel, Ashok
Type
texts
Source
University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Repository Name
Libraries, University of Southern California
Repository Location
Los Angeles, California
Repository Email
cisadmin@lib.usc.edu