THE COMPUTATIONAL COMPLEXITY OF MODULAR ARITHMETIC

by

Kireeti Kompella

A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (Computer Science)

December 1991

Copyright 1991 Kireeti Kompella

UMI Number: DP22821. All rights reserved. INFORMATION TO ALL USERS: The quality of this reproduction is dependent upon the quality of the copy submitted. In the unlikely event that the author did not send a complete manuscript and there are missing pages, these will be noted. Also, if material had to be removed, a note will indicate the deletion. UMI Dissertation Publishing, UMI DP22821. Published by ProQuest LLC (2014). Copyright in the Dissertation held by the Author. Microform Edition (c) ProQuest LLC. All rights reserved. This work is protected against unauthorized copying under Title 17, United States Code. ProQuest LLC, 789 East Eisenhower Parkway, P.O. Box 1346, Ann Arbor, MI 48106-1346.

UNIVERSITY OF SOUTHERN CALIFORNIA, THE GRADUATE SCHOOL, UNIVERSITY PARK, LOS ANGELES, CALIFORNIA 90089-4015. This dissertation, written by ..............., under the direction of h.... Dissertation Committee, and approved by all its members, has been presented to and accepted by The Graduate School, in partial fulfillment of requirements for the degree of DOCTOR OF PHILOSOPHY. Dean of Graduate Studies. Date: September 23, 1991. DISSERTATION COMMITTEE: Chairperson.

Dedication

To my father, in loving memory.

Acknowledgments

I would like to thank my advisor, Leonard Adleman, for his encouragement, his support and his counsel; for sharing his knowledge; but mostly, for imparting to me a sharpened sense of professionalism, of doing things right, and of taking pride in so doing, which I shall take far beyond this dissertation. I would also like to thank Dennis Estes, who taught me much of what I know in Number Theory, and Ming-Deh Huang, with whom I spent many pleasurable hours doing research. It would be most remiss of me to leave out Kevin McCurley, who was an infallible source of papers, stories and advice. There are many students and other professors whose conversations and company made my stay at USC a most enjoyable one, and I would like to thank them all. Lastly, I would like to thank Tonia Spyridi for always being there.

Contents

Dedication
Acknowledgments
Abstract
1 Introduction
1.1 Modular Arithmetic in Computation
1.1.1 Scope of this Thesis
1.2 Notation
2 Parallel Algorithms for Modular Inversion and Modular Exponentiation
2.1 Introduction
2.2 Modular Exponentiation
2.2.1 Algorithm 2^2^f POWERS
2.3 GCD
2.4 Other Results
2.5 Open Problems
3 Efficient Checkers for Modular Inversion and Modular Exponentiation
3.1 The Complexity of Checking Computational Problems
3.1.1 Motivation
3.1.2 Formal Definition of Program Checkers
3.1.3 Complexity of Checkers
3.2 Summary of Results
3.3 Checking Modular Inversion
3.3.1 Problem Statement
3.3.2 A Speedy Constant-Query Checker for Modular Inversion
3.3.3 Proof of Theorem 8
3.4 Checking Modular Exponentiation
3.4.1 Problem Statement
3.4.2 Remarks
3.4.3 Informal Description
3.4.4 The Checker Cme
3.4.5 Formal Proof of Theorem 10
3.4.6 Speedy Constant-Query Checkers for Modular Exponentiation
3.4.7 Open Questions
4 The Checker Complexity of Modular Operations in Cryptography
4.1 Checkers for Cryptography
4.2 Quadratic Residuosity
4.3 Checking the Diffie-Hellman Cryptanalysis Problem
4.4 Checking an RSA Cryptanalyzer
4.5 Open Problems

Abstract

Modular arithmetic has a wide variety of applications in computer science. It plays a key role in computational number theory, as many properties of integers are expressed in terms of modular relations. It is the basis of finite field arithmetic, with consequent importance in coding theory. It can often advantageously replace integer arithmetic, yielding faster algorithms. Finally, and perhaps most importantly, modular arithmetic was the midwife of modern-day public-key cryptography, and still constitutes a vital component in its development.

If modular arithmetic is an important computational tool, establishing its computational complexity takes on particular significance. The classical complexity measure to be considered is time on a sequential machine. For this measure, the complexity of various modular operations has been determined fairly satisfactorily.
However, with respect to some other measures of computational complexity, such as time on a parallel machine, or time to check computations, the situation remains unclear. This thesis attempts to shed light on the parallel complexity and checking complexity of modular arithmetic, in particular, the operations of modular inversion (i.e., the operation of finding multiplicative inverses) and modular exponentiation. Unlike the situation for modular addition, subtraction and multiplication, where good parallel complexity bounds are known, the parallel complexity of modular inversion and modular exponentiation is as yet unknown. It is shown in this thesis that there exist polylog depth, subexponential size probabilistic circuits for these problems. The checking complexity of these problems is also unknown. It is shown here that there exist fast checkers for these problems -- i.e., that it is quicker to check these problems than to solve them using the fastest algorithms known today. Finally, this thesis also addresses the checking complexity of cryptographic problems based on modular arithmetic, namely, quadratic residuosity, the Diffie-Hellman problem and that of breaking the RSA cryptosystem. It is shown that there exist polynomial time checkers for these problems.

Chapter 1
Introduction

1.1 Modular Arithmetic in Computation

Modular arithmetic has a wide variety of applications in computer science. It plays a key role in computational number theory, as many properties of integers are expressed in terms of modular relations (congruences): for example, an integer n is prime only if a^n = a mod n for all integers a. It is the basis of finite field arithmetic, with consequent importance in coding theory. It can often advantageously replace integer arithmetic leading to faster algorithms, as in finding integer solutions to linear equations by solving them modulo several primes [W]. Finally, and perhaps most importantly, modular arithmetic was the midwife of modern-day public-key cryptography [DH, RSA] and still constitutes a vital component in its development.

If modular arithmetic is an important computational tool, establishing its computational complexity takes on particular significance. The classical complexity measure to be considered is time on a sequential machine. For this measure, the complexity of various modular operations has been determined fairly satisfactorily. However, with respect to some other measures of computational complexity, such as time on a parallel machine, or time to check computations, the situation remains unclear. This thesis attempts to shed light on the parallel complexity and checking complexity of modular arithmetic. The former measure is well established (for an excellent survey, see Cook's article [Co]). The latter measure is fairly new; consequently, a brief overview is provided in Section 3.1.

This thesis focuses on modular inversion (i.e., the operation of finding multiplicative inverses) and modular exponentiation. Unlike the situation for modular addition, subtraction and multiplication, where good parallel complexity bounds are known, the parallel complexity of modular inversion and modular exponentiation is as yet unknown. The checking complexity of these problems is also unknown. Finally, this thesis also addresses the checking complexity of cryptographic problems based on modular arithmetic.
1.1.1 Scope of this Thesis

Chapter 2 of this thesis examines the parallel complexity of extended GCD (which encompasses modular inversion) and modular exponentiation. It is shown that there exist polylog depth, subexponential size circuits for both these problems. Other results regarding the parallel complexity of related problems are also presented.

Chapter 3 addresses the complexity of checking programs for modular inversion and modular exponentiation. First, the notion of program checkers is introduced. Then, it is shown that there exist so-called "speedy" constant-query checkers for modular inversion, and that the problem of modular exponentiation admits very fast checkers.

Chapter 4 deals with the complexity of checking some cryptographic problems based on modular arithmetic. It is shown that these problems, although believed computationally intractable, have constant-query checkers that run in probabilistic polynomial time.

1.2 Notation

Mathematical. Z and R denote the sets of integers and reals, respectively; subscripts may be used to limit the set (e.g., Z>0 denotes the positive integers). log x denotes the logarithm base 2 of x; ln x denotes the natural logarithm of x. For non-negative integers a, |a| denotes the length of a in binary, that is, |a| = 1 if a = 0, and |a| = ceil(log(a + 1)) otherwise. #S denotes the cardinality of the set S. For integers a and b, a|b means that a divides b; a does not divide b is written with the divisor bar struck through.

Algorithmic. The algorithms use Pascal-like code. Keywords are in bold; if, then, else is the usual conditional, terminated by end if when necessary for clarity. for ... end for is an indexed loop; repeat ... until condition loops 1 or more times until condition becomes true; do n times ... end do loops n times. Functions use call-by-value parameters. For all x, y in Z, the function "random(x, y)" is used to obtain a uniformly distributed random integer from the interval [x, y]; this is the only source of randomness used in the algorithms.

Chapter 2
Parallel Algorithms for Modular Inversion and Modular Exponentiation

2.1 Introduction

The past few years have seen a great deal of excitement about parallel algorithms. Considerable progress has been made in many areas: graph theory, combinatorics, matrix theory, numerical analysis, etc. For many problems NC algorithms have been found (e.g., maximal independent set [KW], [GS], sorting [AKS], permutation group membership [Lu], CFL recognition [Ru], matrix arithmetic [Os], [PR1]). For many other problems proofs of P-completeness have been obtained (e.g., the circuit value problem [La], unification [DKM], linear programming [DLR], maximum network flow [CSS], lexicographically first maximal clique [Co]). Regrettably, the situation in computational number theory is less satisfactory. While NC algorithms have been discovered for the basic arithmetic operations [Sa], [Re], [BCH], and progress has been made on problems involving polynomials [Eb], [PR2], [BGH], the parallel complexity of such fundamental problems as integer gcds and modular exponentiation has remained open since first being raised in a paper by Cook [Co].
Indeed, it seems possible that the prospects for parallelism in computational number theory rest on these problems: if integer gcd and modular exponentiation were shown to be in NC, the prospects for finding efficient parallel algorithms for other number theoretic problems would be substantial; if, on the other hand, integer gcd and modular exponentiation were shown to be P-complete, the prospects would be considerably diminished.

In attacking these problems, two approaches seem natural. The more practical approach is to insist on a polynomial bound on the number of processors, and then try to obtain the best time; perhaps the more theoretical approach is to insist on a polylog bound on the time, and then try to obtain the best processor count. The former approach has been taken by Brent and Kung [BK], Kannan, Miller and Rudolph [KMR], and Chor and Goldreich [CG], who achieve running times of n, n log log n / log n, and n / log n respectively, while preserving a polynomial number of processors. Here, the latter approach is taken.

The main idea in attacking the modular exponentiation problem is to extend the well-known sequential method of "doubling up" when raising a number to a high power. Doubling up allows one to double the current exponent at each iteration, but is too slow to yield a polylog algorithm for modular exponentiation. Instead, the algorithm presented here uses "squaring up". That is, on each iteration the current exponent is squared. In this thesis, only the case when the modulus is prime will be rigorously dealt with. Similarly, the Euclidean Algorithm is too slow to yield a polylog algorithm for the gcd of two numbers. The algorithm presented here uses a strategy which produces the gcd of two numbers in essentially one parallel step. These ideas lead to polylog depth, subexponential size circuits for integer gcd and modular exponentiation. More formally:

Theorem 1. There exists a logspace uniform family of probabilistic circuits <alpha_n> and a c in Z>0 such that:
1. For all n, a, b, m in Z>0 with m a prime, and |a| + |b| + |m| = n, alpha_n on input <a, b, m> produces an output with probability > 1/2.
2. For all n, a, b, m in Z>0 with m a prime, and |a| + |b| + |m| = n, if alpha_n on input <a, b, m> produces an output, then the output is a^b mod m.
3. D_mexp = O(log^4 n), where for all n in Z>0, D_mexp(n) = depth of alpha_n.
4. S_mexp = O(e^(c sqrt(n log n))), where for all n in Z>0, S_mexp(n) = size of alpha_n.

Theorem 2. There exists a logspace uniform family of probabilistic circuits <alpha_n> and a c in Z>0 such that:
1. For all n, a, b in Z>0 with |a| + |b| = n, alpha_n on input <a, b> produces an output with probability > 1/2.
2. For all n, a, b in Z>0 with |a| + |b| = n, if alpha_n on input <a, b> produces an output, then the output is gcd(a, b), together with x, y such that ax - by = gcd(a, b).
3. D_gcd = O(log^2 n), where for all n in Z>0, D_gcd(n) = depth of alpha_n.
4. S_gcd = O(e^(c sqrt(n log n))), where for all n in Z>0, S_gcd(n) = size of alpha_n.

The function e^(c sqrt(n log n)) arises as a result of the use of "smooth" numbers, which will be defined below. Smooth numbers have been used to reduce the sequential time complexity of certain important problems (e.g., integer factoring and discrete logarithms) from naive exponential bounds to strictly subexponential bounds. It appears that for many problems, smooth numbers can also be used to reduce the circuit size complexity from naive exponential bounds to strictly subexponential bounds.
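To make the role of smoothness concrete before its formal treatment in Section 2.2 (Definition 1 below), the following short Python sketch computes the bound L(x) and splits an integer into its B-smooth and non-smooth parts by trial division. The function names and the trial-division approach are illustrative only and are not part of the dissertation's circuit constructions.

import math

def L(x):
    # L(x) = exp(sqrt(ln x * ln ln x)), the smoothness bound used throughout
    # (assumes x > e, so that ln ln x is defined and positive)
    return math.exp(math.sqrt(math.log(x) * math.log(math.log(x))))

def smooth_part(x, B):
    # Return (s, u) with x = s * u, where s is the largest B-smooth divisor of x.
    # Plain trial division suffices: once every prime below a trial divisor p has
    # been divided out, a composite p can no longer divide the cofactor.
    s, u = 1, x
    for p in range(2, B + 1):
        while u % p == 0:
            u //= p
            s *= p
    return s, u

def is_smooth(x, B):
    # x is B-smooth iff its non-smooth cofactor is 1
    return smooth_part(x, B)[1] == 1

# example: 720 = 2^4 * 3^2 * 5 is 5-smooth, while 77 = 7 * 11 is not
assert is_smooth(720, 5) and not is_smooth(77, 5)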
Proofs for the following will also be outlined:

Theorem 3. There exists a logspace uniform family of probabilistic circuits <alpha_n> and c, d in R>0 such that:
1. For all n, m in Z>0 with m composite and |m| = n, alpha_n on input m produces an output with probability > 1/2.
2. For all n, m in Z>0 with m prime and |m| = n, alpha_n on input m does not produce an output.
3. For all n, m in Z>0 with |m| = n, if alpha_n on input m produces an output, then the output is 1.
4. D_composite = O(log^d n), where for all n in Z>0, D_composite(n) = depth of alpha_n.
5. S_composite = O(e^(c sqrt(n log n))), where for all n in Z>0, S_composite(n) = size of alpha_n.

Theorem 4. There exists a logspace uniform family of probabilistic circuits <alpha_n> and c, d in R>0 such that:
1. For all n, m in Z>0 with |m| = n, alpha_n on input m with m composite produces an output with probability > 1/2.
2. For all n, m in Z>0 with |m| = n, if alpha_n on input m with m composite produces an output f, then 1 < f < m and f|m.
3. D_fact = O(n^d), where for all n in Z>0, D_fact(n) = depth of alpha_n.
4. S_fact = O(e^(c sqrt(n log n))), where for all n in Z>0, S_fact(n) = size of alpha_n.

Theorem 5. There exists a logspace uniform family of probabilistic circuits <alpha_n> and c, d in R>0 such that:
1. For all n, a, b, m in Z>0 with |a| + |b| + |m| = n, alpha_n on input <a, b, m> such that there exists an x in Z>0 with a^x = b mod m produces an output with probability > 1/2.
2. For all n, a, b, m in Z>0 with |a| + |b| + |m| = n, if alpha_n on input <a, b, m> such that there exists an x in Z>0 with a^x = b mod m produces an output f, then a^f = b mod m.
3. D_dlog = O(n^d), where for all n in Z>0, D_dlog(n) = depth of alpha_n.
4. S_dlog = O(e^(c sqrt(n log n))), where for all n in Z>0, S_dlog(n) = size of alpha_n.

Although smoothness is used in achieving these results, its use varies greatly from problem to problem. For each problem the depth of the corresponding circuit is a polynomial in the log of the running time of the best known sequential solution. In the case of the last two theorems, the processor-time product (an oft-used measure of optimality of a parallel algorithm) of the claimed circuits for inputs of size n is O(e^(c sqrt(n log n))) for some c in R, that is, within a polynomial of the best known sequential running time. Sorenson [So] has shown that there exist polylog depth, subexponential size circuits for factoring integers and computing discrete logarithms. However, for the algorithms he describes, the processor-time product is much greater than the sequential running time.

Finally, some relations between problems will be established. Let
• Primality Testing be the problem of computing the function f such that for all x in Z>0, if x is prime then f(x) = 1, and if x is composite then f(x) = 0.
• Compositeness Testing be the problem of computing the function f such that for all x in Z>0, if x is composite then f(x) = 1, and if x is prime then f(x) is undefined.
• Relative Primality be the problem of computing the function f such that for all a, b in Z>0, if (a, b) = 1 then f(a, b) = 1, otherwise f(a, b) = 0.

Theorem 6.
1. Relative Primality is NC-reducible to Primality Testing.
2. Compositeness Testing is NC-reducible to Modular Exponentiation.

(Part 1 of this theorem has been independently produced by Karloff [Bo]; and Part 2 by Fich and Tompa [FT].)

2.2 Modular Exponentiation

Consider the problem MODEXP(a, b, m) of computing a^b mod m for integers a, b and m with m prime.
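For reference, the standard sequential method referred to in the next paragraphs ("doubling up", i.e., square-and-multiply) can be sketched as follows. This is a minimal illustration of the baseline the parallel algorithm must improve on, not part of the dissertation's construction.

def modexp_doubling(a, b, m):
    # Square-and-multiply ("doubling up"): the exponent accumulated in `base`
    # doubles at each iteration, so about log2(b) modular multiplications
    # suffice, but the iterations are inherently sequential.
    result, base = 1, a % m
    while b > 0:
        if b & 1:
            result = (result * base) % m
        base = (base * base) % m
        b >>= 1
    return result

assert modexp_doubling(7, 560, 561) == pow(7, 560, 561)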
Previous work has shown that: if the exponent b is bounded by 0 ( log rn) then this problem is in EfC1 (P-uniform) [BC11]; if the modulus in is a product of primes < log ???, then the problem is in A C2(P-miiform) [Ga.]. In the general case, however, there is no known A C algorithm for modular exponentiation; nor is there a proof of the problem being P-complcte. 1o begin, consider the problem 22/ POWERS(a, ???): when m is prime compute a 2" mod m for 0 < / < [log log ??/1. A parallel algorithm for 22/POW ERS will be presented. The main idea used in this algorithm, “squaring up’', can be illustrated 2f . 2/+1 as follows: Assume that a mod in is known and a mod in is to be computed 92 f in the next iteration. For the purpose of exposition, assume that a mod rn is the square free product of small primes, i.e. a2* mod rn = ][[Pf with the pi small. Assume furt her that p2* mod in is known for each I. Then the following holds: a * ” ' = [ a * ' )* ' = ( Y [ Plf ' = l [ Vf inod m, and so a mod in can be obtained by just a few parallel multiplications. This idea is the basis for parallelizing 22/ POWERS. 9 Of course, mod m will not in general have only small prime factors. Before dealing with this case, consider the following: D efinition 1 Let B E R>0 and x E ~l> \ - Then x is B-smooth iff for all p E I >j with p prime and p|.r. p < B. Let L : R>1 — > R>0 be defined by L(x) — fVlifxiiihix por aj] In ^ z >i let ty(m) — < ~ < rn and ~ is Zy(?«)-smooth}. The following elementary theorem, whose proof is omitted, will be used (see [CEP]): T h e o re m 7 Then exists a cs E Z such that for all m E Z> t T(m) > rn/L{m)C s To apply the method of “squaring up” in the general case, assume that a large number of “helper” numbers are available, whose inverses mod rn. denoted sk, 2f have been computed. Assume also that pf mod rn are known for p\ < B (the , 2f smoothness bound), and sk mod in for all k. The modified idea then goes as follows: For each r*, compute t = eff r* mod m. If t is B-smooth, say II Qh qi prime, then • 22/+l / 22f \ — t 22’ cff — , 4\27 -f 22' a = (« ) - (a rhsk:) = (/) sk m ■ y rt') / .xo/ ^ ■ m / = (n T/) = ClT^/ )sk (mod m ) • ) / and so, once again, a mod rn can be computed with just a few parallel multipli cations. 2 - ? ^ In the same manner, pz mod m for all primes p < B can be computed. Finally, the “helpers” are used to “help” each other in a similar manner. 10 As described, the algorithm tor 2^ POWERS requires the computation of gcds and modular inverses. That there are polylog depth, subexponential size circuits for these problems is the subject of the next section. In this section, the following proposition will be proved; the proof of Theorem 1 t hen follows from Theorem 2 and Proposition 1. P ro p o sitio n 1 Suppose there is a constant o' £ 1>0 and a circuit for EGC'D of size 6>(rrV ? ' lo«5") and depth O(log2 n) for inputs of size n. Then there exists a c e Z>0 and a logspace un iform family of probabilistic circuits < o n > such that 1. For all n,a,b,in £ Z>0 with m a prime, and, |</| + |6| + |m| = n , a n on input < a, b, m > produces an output with probability > 1/ 2. 2. For all n,a,h,m £ Z>o with m a prime, and |«| + |/> | + |m| = n, if a n on input, < a,b,rn > produces an output, then the output is eih mod m. 2. 7, = O(log4 n) where for all n £ Z>0, D mcxp{n) —depth of exn. 4- inhere for all n £ Z>0, Smf:rp(n) =size of o n. 2.2.1 A lg o rith m 22 P O W E R S An algorithm to compute 22/ POW ERS(«, m) is described below. 
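Before the parallel algorithm, a sequential Python sketch of a single "squaring up" iteration with helpers may help fix the idea. It assumes the tables p^(2^(2^f)) mod m for the small primes p and the helper values are already available, and it uses plain trial division to test smoothness; the function and table names are hypothetical, and the dissertation's PRAM algorithm below performs the same update with all helpers and all primes handled in parallel.

def squaring_up_step(m, f, apower, prime_pows, helpers, B):
    # apower     = a^(2^(2^f)) mod m
    # prime_pows = {p: p^(2^(2^f)) mod m} for each prime p <= B
    # helpers    = list of pairs (r, spower) with spower = (r^(-1))^(2^(2^f)) mod m
    # Returns a^(2^(2^(f+1))) mod m, or None if no helper yields a B-smooth product.
    for r, spower in helpers:
        t = (apower * r) % m
        rest, result = t, 1
        for p, ppow in prime_pows.items():
            while rest % p == 0:              # divide out p, multiplying in p^(2^(2^f))
                rest //= p
                result = (result * ppow) % m
        if rest == 1:                         # t was B-smooth
            # a^(2^(2^(f+1))) = (t * r^(-1))^(2^(2^f)) = t^(2^(2^f)) * spower mod m
            return (result * spower) % m
    return None

# toy usage, with the tables built by ordinary modular exponentiation:
#   m, f, B = 1009, 1, 7
#   e = 2 ** (2 ** f)                                            # = 4
#   prime_pows = {p: pow(p, e, m) for p in (2, 3, 5, 7)}
#   helpers = [(r, pow(pow(r, -1, m), e, m)) for r in (5, 17, 123)]
#   squaring_up_step(m, f, pow(3, e, m), prime_pows, helpers, B) # = pow(3, e * e, m)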
The algorithm runs on a CR.CW PRAM with arbitrary write-conQict. resolution: when several processors attem pt to write into the same location, one of them, arbitrarily chosen, will succeed (see [G]). Let B — [L(m)j, and N = [T(n?)r-" 3 log in] where cs is as in theorem 7. The processors and variables that will be used fall into four groups. Group 1 variables: 11 • a, b, m - input, with m prime; • r[/\/?], 1 < j < N, 1 < h < [log/?/]: initialized to a random number in the range 1, 2,...,?/?. — 1; intended to hold r^/, relatively prime to m. • fi[j, //], 1 < j < /V, 1 < /? . < [log???]: intended to hold r jJ . • prime]!] - boolean: initialized to falsr. • prime]/], 2 < i < B — boolean: initialized to true. • prpow]/, j], l < i < B , l < j < [log/??]: intended to hold ?J. Croup 1 processors: • / NVj^ 1 < j < N, 1 < /? < [log???] — to find inverses of rj^. • PRisj 1 < /, j < B — to identify primes, and compute small powers thereof. The next three groups, “A” , “P ” and “S”, are very similar: each is concerned 2 / with 1) multiplying (mod ? ? ? .) x mod ?/? by every rjj% , where x is one of: the input a (group A), a prime p < B (group P), or the inverse of a random number, .s]h (group 8 ); 2) identifying the primes (and their exponents) occurring in the product so obtained, and determining whether the product, is smooth; 3) if it is smooth, then 2f+1 computing x mod ?/?. Only the group for the primes will be enumerated. Croup "P" variables: • ppower]/, /], l < ? < t 5 0 < / < [log log //?] : intended to hold i2* mod m. • ptnul]?, j, /?] I < i < BA < j < N , 1 < h < [log /??]; intended to hold i2* ■ r hh in iteration / of step 5. • ppri[/. j. //, /] 1 < / < BA < j < Ar, 1 < h < [log /?/], 1 < I < [log???]2; intended to hold primes that divide pmul]/, j, /?]. 12 • pexp[?', j, /?, /] 1 < i < BA < j < T V , 1 < h < [log???], 1 < I , < ("log 2; intended to hold the highest, power of the prime in ppri[i, j, h, I] that divides pmul[/, j, /?]. • pprod[?,_y, /?] 1 < ? < B, 1 < j < N, 1 < h < ("log m ]; intended to hold the product of the primes found to divide pm u ][?,/. //]. • pnext]?, j, /?] 1 < i < B , l < j < N , l < h < [log???.]; intended to hold 2 / pmul[?’, j, h]2 mod ? ? ? in iteration / of step 5. Group “P ” processors: • P li.j.hj l < i < B , l < j < N , l < h < [log ? 7i], 1 < / < [Jogm] 2 — for initialization. • PTi,jM 1 < * < B . 1 < j < N, 1 < h < [log???] — for computing product with ?‘;j,. • Pi,j.h,k, 1 < i < B , \ < j < A, 1 < /? < [log???], 1 < k < B — for identifying prime divisors (and exponents) in product. • P - l 1 < i < B, 1 < j < A", 1 < /? < [log ???] — checking if product was smooth, computing pnext and thus the next value of ppower. The variables and processors in groups “A” and U S'’ can be obtained by substi tuting “a” and “s” for “p”, with appropriate indices. >° 1. (* Initialize a.: a[0] — a2' mod ? ? ? *) a[0] a 2 mod ? ? ? 2. (* Compute inverses *) all p ro cesso rs /AT],?, 1 < j < Af, 1 < h < [log ???]. (* Algorithm GCD is required for this step *) 13 if gcd(r[/‘, /?],m) = = 1 th e n s[y,/?] := r[jji]~l mod m else r[y, /?] := 1 and s[y, /?] := 1 (* Initialize s: s[y, h, 0] = s f h mod ? ? ? *) spower[y, /? ., 0] := s[y, h}2 mod rn ■ I. (* Identify primes and compute squares *) all p ro cesso rs P R ij 1 < i,j < B. if j < i and j | i th e n prime)?] := false (* Initialize p: p[? , 0] = i2 2 mod rn *) ppower[?,0] := i2 mod m. 4. (* Find small prime powers *) all pro cesso rs P R ij 1 < i < B, 1 < j < [log???.] compute prpow[?,j] := P. 5. 
(* (Compute the 22/-th powers 1 < / < [log log ? ? ? ,] *) for f := 0 to [loglog???] — I do 5.a all pro cesso rs PIitJthj, l < i < B , l < j < N , l < h < [log???], 1 < / < [log???]2 ppri[?,j,/?,/] := 1 pexp[?', J, /?, /] := 1 5.b all pro cesso rs PTijj^ 1 < i < B, J < j < N , 1 < h < [log???] pmul[?’, j, /?] := ppowerf?’,/]*r[y,/?] mod m. 5.c all pro cesso rs Piyhh,k■ , 1 < * < B, 1 < j < N, 1 < /? < [log???], 1 < k < B. if prime[A] and A'|pnml[i, j, /?] th e n calculate largest, e such that prpow[fc, t]|pmul[?, j, /?]. v := random) 1, [log???]2). PPi'i\i-,j,h,v] := k pexp[i,j,h-,v\ := e 5.d all pro cesso rs P A ij^i 1 < i < B, 1 < j < N, 1 < h < [log???]. set pprodf?, y, /?] := n t V "1 prpow[ppri[?,.y, /?, ?e],pexp[?,.y, /? ,, iv}\ set pnext[ ijji] := I l S i "1 2 ppower[ppri[?\ j, h, w]J}P^,iA,w] raoci m if pprod[?, j, /?] = pinul[?,j, /?] th e n ppower[?,/ + 1] := pnext[?,p,/?] spower[jf, /?, /] mod m. 5.e all p ro cesso rs 57,-^j^p, I < i,j < N, 1 < g j i < [log???], 1 < / < [log?/?.]2 spri[i,g,jJi J] := L sexpf?, /?,/] := 1 5.f all p ro cesso rs A7[gjp,, 1 < i,j < N , 1 < g j i < [log???] smul[?,</,./, /?] := spowerf?, g, h] mod m. 5.g all p ro cesso rs S ugjjuk~ 1 < ?,j < Ar, 1 < gjr < [log??;], 1 < k < B. if piime[A] and A:|smul|V, < /, j, /?] th e n calculate largest e such that prpow[A:, c]|smul[*, g, j, / ? • ] . v := random(l, [log???]2), spri [i-,g,j, h,v] := k scxp[*\flr,j\A,t>] := e 5.h all p ro cesso rs A'/h^pp,., 1 < i,j < Ar, 1 < g, h < [log /??] set sprodf?, g j , /?] := n l S ”’ 1 prpow[spri[?\ g,j, h, ii?],sexp[a?, g,j, h, ? ? > ]] set snext[i,g,j,h] := n[,!=S r 1 2 ppower[spri[/, ?/,p,/ ? . , mod if sprod[i,g,j,h] = smul[i,g,j,h] th e n spower[?’, < /, / + 1] := snext[i,g,j,h] spower\j,h,f] mod m. 5.1 all p ro cesso rs AIj^i, I < j < A', 1 < h < [log???], 1 < I < [log???]2v. apri[y, /?, /] := 1 aexp[p\ /?,/] := 1 5.j all p ro cesso rs A T hh, 1 < j < N, 1 < h < [log ???] arnul [;’,/?] := apower[/]*r[/’, /?] mod m. 5.k all p ro cesso rs 1 < j < A 1 ", 1 < / ? < [log???], 1 < k < B. if primefA] and A:|a,mul[j,/?] th e n calculate largest e such that prpow[A\ ? ]|a.mul[p. /?]. v := random(l, [log???]2). apri[j, /?,?’] := k aexp[;,/?, ? ’] := e 5.1 all p ro cesso rs ,1.1,./,. 1 < j < N , !< /?■ < [log???]. set aprodfj, /?] := fllS i '"1 prpow[a.pri[/\ /?, t?’],aexp[p, /?, to]] set- anoxt[y, h] := nft !=Y^ ppo\ver[a.pri[y,/?.,«.'] mod m if aprod[y, h] — amul[/\ h] th en apower[/ + 1] := anext[y, h] spower\jji,f] mod m. OtJier ill an the computation of geds and modular inverses in step 2, the opera tions required to be performed by each processor at each step consist of the basic operations of addition, subtraction, multiplication, division (finding quotient and re mainder), and comparison of integers. These can be implemented by small size and depth circuits. The next section describes a small depth circuit for ged and mod ular inverses. Gluing these circuits together, one obtains a circuit that computes 2'iJPO WERS( a, m ). Let an be the circuit so obtained for inputs of size n. To analyze the probability of success, the following notions are introduced. For all m, A € Z>0, let- C(m ,N) be the collection of subsets of (Z/mZ)* of size N. For all T € C(m, N) and .r 6 Z/ni-Z define xT = {xy mod in\y £ T}. Say tha-t “T smoothes ,r" if x T contains an L(rn)-smooth number; call T universal if T smoothes x for all .r e (Z/mZ)*. L e m m a 1 For all in £ lyi with in prime, let N = L(m)rG log in. Then # { F | T £ C(m. 
A ) & T not universal) ^ 1 #C(m , N) “ ^ P ro o f: Let in be prime. Let L = L{rn) and N = L (m )r,3 log in. By theorem 7 the probability that a random positive integer < rn is L-smooth is at least 1 / Lc". Since in is prime t he probability that a random element in (Z/mZ)* is T-smooth is at least I / LCf. Sup])ose T £ C(m, N). Then the probability that. T has no L-smooth number is < (1 — 1 / L C ”)N. For all x £ (I / m l )*, x acts as an automorphism on C(m, N), hence the probability that xT has no L-smooth number is also < (1 — 1 / L C -)N. It follows that the probability that T fails to be universal is < m (1 — 1 /If* )1 * < 1 /in 2 as required. □ 16 L em m a 2 The following hold: 1. For all n ,a ,m € Z>0 with, m sufficiently large and prime, and |a| + |n?.| = n, a n on inpul < a ,m > produces an output with probability > 1/ 2. 2. For all n, a, m £ Z>0 with m a prime, and |a| + |m| = n, if a n on input< a,rn > produces an output, then the output is a2 2 mod m for 0 < / < [log log m.]. P ro o f: Consider a n on inputs a,in. Let N ,B be as in the algorithm. Consider the probability that for all h with 0 < h < [log in] Th = {/'[j, /;] | 1 < / < N} is universal. By the previous lemma, this probability exceeds l/\/2 . Assume the Th are universal. Fix / such that 0 < / < [log log m] — 1, and assume that: • ppower[i,f], 1 < i < B, • spower[i,h,f], 1 < * < N, 1 < h < [log???.], and • apower [f] have already been computed. Consider the set {//?(•/[/, h] — apower[f] * r[j,h] \ 1 < j < N, 1 < h < [log???]} computed in step 5.j. Since the Th are universal, this set must contain at least [log ???] 5-sm ooth numbers. If ma[j,h] is 5-smooth then a simple analysis shows that with probability > 1/2 in step 5.1 AAjy h will write apower[f+l]. It follows that the probability that apower[f+l] is written by some A A j j exceeds 1 — 1/m A similar analysis shows that: 17 • ppowcr[i,f+l], 1 < t < B, • spower[i,h,f+ 1], 1 < / < N , 1 < /? < [log/??], and • a.power[f-f-1 ] all have probability exceeding 1 — 1/m of being written. Let rn be sufficiently large. Then B N h < /??, so with high probability they will all be written. Since B N h [log log ???] < ???, with probability > 1 / \f2 they will all be written on every step / , given that the T/t are all universal. Thus, the probability that the above variables all get, written is > 1/ 2. If apower[f] is writ ten for 0 < / < [log log /?/] then it is easily argued that for all / apower[f]=rt2" mod rn. □ L e m m a 3 Suppose there is a constant c' £ 1 and a circuit for EGCD of size 0{ef' v/”1 '’ ® 7 1 ) and depth (9(log2 n) for inputs of size n. Then there exists a c . £ Z>0 such that /- = Of log* //) when for all ? ? € Z>0, D mexp[n) = depth of a n. 3. = O ( e ' V ^ ) inhere for all /? € Z>0, Smcxj,{n) —size of a n. P ro o f: By inspection, the number of processors required is 0 { B 2N log??/) = 0 (f (i +c*+o( i)) m log k.g m). eac]j processori other than those used in step 2, can be implemented using polynomial^' many, i.e., O(log°(1)(???)), gates; by assumption, the circuit for step 2 has size ()(N log me' Finally, n = 3 log???.. Thus the size of a n is bounded by 0 ( e V " lo8” ) for some c E Z. The depth of the circuit is dominated by the depth of the circuit for step 5, since, by assumption, the circuit for step 2 has depth 0(log2/?). This depth is (9(log log ??/) = 0 {log r?) times the 18 depth of circuits for the steps 5.a through 5.1 (to implement a. 
loop of k steps, one glues k copies of the appropriate circuits back-to-back, entailing a A'-fold increase in circuit depth). This latter depth is determined by the depth of circuits to compute t he product of it //-bit numbers (alt hough the products (e.g., in step 5.d) contain n 2 factors, at most n are unequal to 1). By using a result in [Ga] that such products can be computed by a. circuit of depth Of log2 />). one obtains that i)mfxp = Oflog3 ??).□ Next consider the problem 2’POWERS: on inputs a, in < = Z>0, compute a2' mod rn, 0 < / < [log??;]. This problem is AX’1 reducible to the problem 22/ POWERS. The reduction is given next. In the algorithm, a[i] is initialized to -1, indicating not yet computed; otherwise, a[i] holds a2' mod in. For all i € Z>0, wt(?) = # of L ’s in the binary representation of i. 0. (* Initialize *) a.[0] — a for i 1 to log?/? in parallel do a[?] : = -1 1. (* ( 'omputc a[?] for i such that wt(?) = k *) for k := 1 to log log ?;? do for i := 0 to log ? /? in parallel do if a.[?] > 0 (* i.e., already computed *) th en 1.1 compute ap[?,/] 22/ POWERS(a[?], ???) for j 0 to log log m in parallel do 1.2 if 2- ? >i th en a[i+2- ? ] := ap[?, /] mod ? ? ? 2. (* Output *) Output a|?], 0 < i. < log in . P ro p o sitio n 2 TP O W E R S is AfC1 reducible to V sPOWERS. P ro o f: It must be shown that, for inputs of length /?, there is a. 0(log /?) depth parallel algorithm to compute 2*P0WERS(?z, /??) given an oracle for 22/ POWER.S. The above is such an algorithm since all steps other than the oracle calls have constant depth circuits. It remains to show that it works, i.e., a[/] contains a2’ mod rn for 0 < ? : < log?/?, at the end of the algorithm. This will be done by induction on wtf/h For wt,(?) = 0, i.e., i — 0, this is accomplished in step 0. Now, assume that at the end of stage k\ all a]?] have been correct ly comput ed for i < log ? ? ? such that wt(?’)< k. One must show that, at the end of stage A - + 1, the same holds for i < log/?/ such that wt(?)< /,' +1. Let / be such that t < log in and w t(/)= k + 1. Let j be such that 2 /+1 > I > 2J; let t' — f - 2L Then w t(//)=£. By the induction hypothesis, a[F] has been computed, and thus is not -1. Then, in step 1.1, ap[/',j] will be assigned (a2 )2' = a2 2 = a2 = af mod ???; and subsequently (since 2J > t1 , a.[t] will also be assigned this value. □ P r o o f o f P ro p o sitio n 1. It is straightforward to solve MODEXP(a, b, m.) when m is prime, given an algorithm for 2*POWERS(«,m): First, reduce b mod in — 1; then compute a2’ for 0 < / < [log ?/?.] using 2*POWERS(«, ?/?). Finally, letting b = where the /3t — 0 or 1, compute ah mod ? ? ? = nf=o,/”)Li ° 2' niod ?/?. This can be done in depth O(log2z?) and polynomial size for inputs of size ??. By Proposition 2 and Lemma. 3, there is a c € Z such that 2lPOWERS(«, m) can be solved in depth (9(log4 ??) and size 0(c‘ ^ n 1 ,1 8 n) for inputs of size ? ? .. Lemma 2 and Proposition 2 demonstrate the correctness of a n. □ 2.3 G C D Although a polynomial time sequential algorithm for computing the ged of two numbers a and b (hereafter denoted (a, b)) has been known for centuries, namely 20 (lie Euclidean Algorithm, parallelizing the problem lias proven very difficult.. 
Algo rithms that run in linear and sublinear time have been produced: Brent and Kung show t hat, the gcd of n-bit numbers can be computed in O (n ) time with n pro cessors [BK]; Kantian, Miller and Rudolph demonstrate a sub-linear time bound of 0{v log log nf log n) using it2 log2 n processors [KMR]; and ( 'lior and Goldreich achieve a. time bound of <9e(n /lo g n ) using /d1+f) processors, for any < > 0 [CXI]. However, an ArC algorithm, if indeed such exists, has not emerged. An algorithm for gcd is presented below which runs in polylog time on a subexponential number of processors. The algorithm relies on the following principle: let a, b be positive integers, a > l> : let, d = («,/>). Then for r G Z with 0 < r < 6, we have ra mod b — sd for some 0 < s < bjd\ in fact,, for each a, we have exactly d pre-images in the range 0 1. So, by trying sufficiently many random numbers r a smooth 5 will result. 'Fhe strategy is thus: pick a random positive integer r < b\ compute ra mod b and divide out the smooth component; if the quotient q divides both a and b, then q is the non-smooth part of the gcd. The smooth part of the gcd is obtained by exhaustion. With a little more work, one can express d as an integer combination of a and 6, as follows: Suppose ra mod b — sd, with 9 smooth. Then, for some q € Z, ra — qb — sd. Assume that (b,s) — 1; let c = b~A mod s. Such a c can be found by computing the inverse of b mod p for each p dividing 9 (by exhaustion), and combining these using the Chinese Remainder Theorem to get b~l mod s. Then ra — qb — (r — bcr)a — (q — acr)b = r( 1 — be)a — ( < 7 — acr)b — sd. But now, s| 1 — be, so that ,s|(f/ — acr) (since (6,,s) = 1). Dividing through by s gives ar' — bq' = d as desired. (The case when (b.s) 7^ 1 can be handled similarly.) Note that computing the ext,ended gcd of a and b yields the multiplicative inverse of a modulo b whenever (aJ> ) = 1 (namely, r’ above). This fact, is used in the algorithm for modular exponentiation. Let, < a,b > be the input, of size n, and assume a > b. Let, B — L(b) and N = L(b)C s where' e, is as in Fheorem 7. The following processors are used: 21 • P R ij : 1 < i,j < B. • SGi : 1 < i < B. • LC-j : 1 < j < N . • SFi,j : 1 < * < B, 1 < j < N. • AFijtk : 1 < i < B, 1 < j < N, 1 < k < n. • NGj'k : 1 < j < N , 1 < k < n. • S A V : 1 < v < n.2. • CAitV ■ I < / < B. ] < v < n2. • C B ifV : 1 < i < B, 1 < v < n2. In addition, there are the following variables: • a, b — input. • priniefi] 1 < i < B — prime[i] is intended to be true iff i is prime, initialized to true for i > 1; prime[l] is initialized to false. • prpowfi, j] I < i < B , 1 < j < n, intended to hold i3. • r[y] 1 < j < N variables initialized to random numbers from 0,2,...,b-l. • ql/1 i < j < N intended to hold coefficient of b in integer combination. • t[j] i < j < N t[/'] is intended to hold the j th random integer combination of a . and b. • »[;'], «L/] f < j < N; s[j] and u[y] are intended to hold the smooth and non smooth parts of t[j] respectively. • adj[k], ad 2[j, k] 1 < k < n 2. 1 < j < N : '‘accumulators'1 intended to hold prime power divisors (initialized to 1). 22 • cl], d2 smooth and noil-smooth parts of («, 6). • a', a", iy, h" — - variables related to a and b. • w variable to hold the index of “winning” integer combination, i.e., t[w] = a'*r[w] mod b' = s[w]* gcd(a', &'), with s[w] smooth. • r, q, s ini,ended to hold coefficients of “winning” integer combination. • ax[j], ay[ /] 1 < j < A *; intended to hold primes. 
• cx[y], cy[;] 1 < j < A'; intended to hold coefficients for Chinese Remaindering. • ga, gb - intended to hold numbers relatively prime to b and a respectively. • bi, binv[&], ai, ainv[A:] 1 < k < n 2 intended for use in the computation of b"~l mod ga and a"-1 mod gb, respectively. • r\ r", qb q" -- intended for computation of coefficients of extended gcd. A lg o rith m G C D (a,fc) J. (* Identify all primes < B by exhaustion *) all p rocessors P R tj : 2 < i,j < B if J I i and j < i th en primefi] := false 2. (* Compute all small powers *) all p rocessors P R i j : 2 < i < B, 1 < j < n compute prpow[i,j] := iJ 3. (* Compute smooth factors of *) all p rocessors SG i : 2 < i < B if primefi] compute largest fq such that prpow[i,fj]|« compute largest, c2 such that prpow[i,C2]|6 let c — mi 1 1 (65, 62) if c > 0 th en v := random( l , n 2); adj[v] := prpow[i,e] if adi[v]/=prpow[i,e] th en FAIL Compute smooth part of (a ,b ), replace a, b *) di := n ;;l, ad,M a' := a/cL; 1 / b/di Find a random linear combination of a and b *) all p rocessors L C j : 1 < j < N t[j] := a/*r[j] mod b' q[j] := La# *rD ]/b'J Find smooth factors of t[j] *) all p rocessors S F {j : 1 < i < B, 1 < < N if prime[i] and i | t[j] th en compute largest e such that prpow[i,e]| t[j] v := random (l,n2); ad2[j,v] := prpow[i,e] if ad2[j,v]/=prpow[i,e] th en FAIL Compute d2 = non-smooth part of gcd *) d2 := 0 all p rocessors / VGj : 1 < J < N B [j] := n i x ad2[j,c] «D] := tD]Mj] if u 0] I a and u[j] | b th en (* Found gcd — set w = index of winning linear combination d2 := u[j] if d 2 = 0 th en FAIL 8. (* Replace a/ and b'; set up linear combination *) a" := a '/d 2; b" := b '/ d 2 r := r[w]; q := q[w]; s := s[w] 9. (* Compute prime factors of ga.=(a",s[w]) *) all p rocessors ,SAV : 1 < v < n 2 if ad2[w,v]|a// th en ax[v] := ad2[w,v] else ay[v] := ad2[w,v] 10. (* Compute ga=(a",s[w]) *) ga := n iia x f v ] 11. if ga > 1 th en (* Set up b"-1 mod ga using CRT *) all p rocessors CAitV : 1 < i < B, 1 < v < n if (ga-/ax[v])*i=l mod ax[v] th en cx[v] := (ga/ax[v])*(i mod ax[v]) if b"*i = 1 mod ax[v] th en binv[v] := i mod ax[v] 2 12. (* Compute b'/_l mod ga; divide through by ga *) bi := H ”=i^inv[v]*cx[v] mod ga r' := (r-bi*r*b")/ga; q' := (q-bi*r*a")/ga gb := s/ga 13. if gb > 1 th en (* Set up a"-1 mod gb using CRT *) all p rocessors C B i v : 1 < i < B , I < v < n 2 if (gb/ay[v])*i=l mod ay[v] th en 25 ey[v] := (gb/ay[v])*(i mod ay[v]) if a"*i=l mod a.y[v] th en a.inv[v] := i mod ay[v] 14. (* Compute a"-1 mod gb; divide through by gb *) 2 ai := H "=ia* nv[v]*cy[v] mod gb r" := (r/-a.i*r/*b//)/sb; q" := (q'-ai*r'*a") 15. (* Output gcd(a,b), x, y such that a.x-by=gcd(a,b) *) output < d i* d 2, rw , q" > It is easily seen that for all n € Z>0, the action of each processor on inputs of size v can be implemented by a polynomial size circuit of C>(log2 n.) depth. Thus, a. circuit for integer gcd can be obtained by gluing the circuits for each processor together. For each n, let the circuit, so obtained be called a n. L em m a 4 For all a ,b ,u 6 Z>0 with |a| + |f> | = n, if o n terminates without failure on input < a , h > , the output is < (ei,h),x,y > such that a.r — bp — {a. b). P roof: Let B be as in the algorithm; let (a, b) = ef d-i such that, d\ = T ip f with pi prime < B and (d-2-p) — 1 for all primes p < B. If step 3 does not fail, di is set to d\. In steps 1 7 , t[j] is set, to a random linear combination of a' and and factored into s[j]u[.j], where s[j] is /4-smooth. Thus r/2|t[j], and (r/2,s[j]) = I, so d2|u[j]. 
If u[j]|a' and u[j]|//, then u[j]|r/2, whence u[j]=(/2. Thus, if d 2 is set, it contains d2. Clearly, w is such that u[w]=(/2, hence r and t| are such that row -q//'=s, with s 74-smooth. Also, («", //') = 1. Let («",s) =ga. Then (6",ga) = 1. Step 9 partitions the divisors of s into those that divide ga (in ax[.]), and those prime to ga (in a,y[.]). Steps 11 and 12 compute bi= l/'~ l mod ga by the Chinese Remainder Theorem. Then 5 = ra" - qb" = (r - In * //' * r)a" - {q - In * a" * r)b". 26 Since ga|(r-bi/>"r) and ga|s. ga|(q-bi«"r)/>". Since (ga, 6") = I, ga|(q-bia"r). Thus, r V '— q'//'=s/ga=gb. Now (sa,a)=J, so a similar computation yields v"a"—q"/>" — 1. Hence r"a— q'7> = (a,b). □ L em m a 5 There e.rists a e 6 R>o such that for ail a,,b,n £ Z>0, with |«| -f |ft| = n and n > 3, n„ terminates without failure on input < a, b > with probability at least c. P ro o f: Consider step 3. The only scenario for failure is that two processors pick the same value of v and both write into a.dj[v], whereby one of the processors will read back a different value and fail. Now, among all processors S G i , only those that have index i such that i is prime and divides gcd (a, b) will attem pt to write into a d j. There are at most n such processors since at most n distinct primes can divide gcd(a,b). The probabilit y of success is thus the same as t he probabilit y that n numbers chosen randomly from I ,...,? ? 2 are all distinct, which is — fl) > (1 — £)M > 1 /4. Similarly, the probability that, step G succeeds is > | . Finally, the probability that in step 7, at least one of the processors will write d- 2 is given by the probability that s[j] is smooth for some j. Now, s[j] = (a'*r[j] mod b')/d, where d = gcd (a'. I/). For r[j] uniformly distributed in 0,. .. ,b'-l, s[j] will be uniformly dist ributed in 0 ,... ,b '/d -1. By Theorem 7, the probability that s[j] is Lfb'/dbsm ooth is f /L (by /cl)c-, where cs is as in Theorem 7, which is no less than the probability that s[j] is L(b')-smooth. The probability that all s[j], l< j< N , are not Llb'Fsmooth is thus at most ( 1 ----------------f < (1----------------------- < I 1 L {b 'ld y T - K L(h'Y<1 - c ’ where c is tin' base of nat ural logarithms. Hence, the probability that. d2 gets written is at, least 1-1 /c. Thus, the probability that a n terminates without failure is at least (l-l/e)/1 6 .D 27 To obtain a family of circuits for gcd that succeeds with probability at least 1/2 for all input sizes n, it suffices to 1) build circuits for gcd for n = 1 and 2 using tables; and 2) replicate the circuit above k times (k — flog(,_ ,/e)/1 6 1 / 2] ) to raise the probability of success to 1/ 2. Analysis of the depth and size of a n proceeds in a. manner similar to that used for modular exponentiation and is left to the reader. 2.4 O ther R esu lts This section contains a sketch of the proof of Theorems 3, 4 and 5. Theorems 1 and 5 can be obtained by the straightforward parallelization of any of the well-known algorithms for integer factoring (see [Po]) and discrete logarithms (see [Ad] and [Po]). Let r be a. ‘witness’ that a,b are relatively prime iff c. = ra mod b is prime and c does not divide a or does not divide b. Then an easy argument shows that for a.b relatively prime there are abundant witnesses, but for «,6 not relatively prime there are none. Theorem 6.1 follows. Theorem 6.2 follows straightforwardly from the Miller primality test; under ERIf, the reduction is deterministic [Mi]. Theorem 3 follows from Theorem 6.2 and Theorem 1. 
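Returning to the gcd algorithm of Section 2.3, its random-combination strategy (divide out the smooth part of gcd(a, b) by exhaustion, then recover the non-smooth part from a smooth random multiple of a' modulo b') can be simulated sequentially. The sketch below is illustrative only: the trial bound B, the number of trials, and the trial-division factoring stand in for the L(b)^(c_s) processors of Algorithm GCD, and the extended-gcd coefficients of steps 8-15 are omitted.

import math
import random

def gcd_by_smoothing(a, b, trials=200):
    # Sequential simulation of steps 3-7 of Algorithm GCD; assumes a >= b >= 3.
    # Returns gcd(a, b), or None if no random combination turned out smooth.
    if a < b:
        a, b = b, a
    B = max(2, int(math.exp(math.sqrt(math.log(b) * math.log(math.log(b))))))  # roughly L(b)
    d1 = 1                                     # smooth part of the gcd, by exhaustion
    for p in range(2, B + 1):
        while a % p == 0 and b % p == 0:
            a //= p
            b //= p
            d1 *= p
    if b == 1:
        return d1
    for _ in range(trials):                    # random combinations t = a' * r mod b'
        r = random.randrange(1, b)
        t = (a * r) % b
        if t == 0:
            continue
        u = t
        for p in range(2, B + 1):              # divide out the B-smooth component of t
            while u % p == 0:
                u //= p
        if a % u == 0 and b % u == 0:          # u divides both, so u is the non-smooth part
            return d1 * u
    return None

# e.g. gcd_by_smoothing(21210, 6666) returns 606 = gcd(21210, 6666)
# with overwhelming probability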
2.5 O p en P rob lem s A number of open problems remain: the obvious ones are to determine the parallel complexity of gcd computation and modular exponent iation. Other questions are: 28 Does there exist evidence that gcd computation (resp., modular exponentia tion) is not, P-complete? One avenue perhaps is to demonstrate a class co-P (analogous to co-NP), and to show gcd computation lies in Pflco-P. Can integer factoring and discrete logarithms be performed in polylog time using a sub-exponential number of processors? This question was originally posed in 1988, and answered affirmatively by Jon Sorenson in 1990 [So], who, using the results presented here, showed that there exist probabilistic poly log depth circuits of size exp(0 (n/polylog(??.))) to solve instances of size n of both these problems. However, to more closely approach the known sequential bounds for these problems, one would like circuits of size exp(0 (\/n log??)). This is still an open problem. Arc the problems of integer gcds, modular exponentiation, and primality test ing com])utable in spac-e(td1 /2+0*D)) on inputs of size n? If the circuits de scribed here were in fact deterministic, then one could solve these problems in the given space bound. However, because randomness is used, it is not clear that the given space bound suffices. 29 C h ap ter 3 E fficient C heckers for M od u lar In version and M od u lar E x p o n en tia tio n 3.1 T h e C o m p lex ity o f C hecking C o m p u ta tio n a l P ro b lem s Before defining the complexity of checking computational problems, the notion of proffmm checkers is first defined. 3.1.1 M o tiv a tio n Program correctness is a serious concern, and has consequently received considerable attention. Three approaches have emerged: • m ath em atical: prove programs correct; • em pirical: test programs for bugs; 30 • engineering: design programs well. Now, a new approach, proposed by Manuel Blum, promises to be of both practical and theoretical significance, and is intrinsically closer to a computer scientist's heart, since the approach is • alg o rith m ic: check every computation using a program checker. Program checkers are designed for specific computational problems; a checker is an algorithm which, when given a. program that is purported to solve that problem, and an input value, makes a decision: either “CO RRECT” or “BUGGY”. If the decision is “CO RRECT” , one can Ire confident that the program computed the correct output at the given input value; if it is “BUGGY” , one can be confident, that, the program has a bug, i.e., the program computes an incorrect output at some input value. Program checkers have at least two desirable propert ies: 1) they check entire computations (software + hardware + operating system); and 2) the effort, of creat ing a checker need not be duplicated if a different program for the same problem is to be checked. On the other hand, the price for these properties is increased com putation. Not only must time be spent by the given program, but now additional time must be spent by the checker. This makes fast checkers ('specially attractive. 3 .1 .2 Form al D efin itio n o f P rogram C heckers The following is Blum's original definition of a program checker: D efin itio n 2 (B lu m [B]) Let t v be a computational problem. Lor x an input to w, let 7 r(.r) denote the output of t v . 
Call CK a program checker for problem n, if for all programs P that halt on all inputs, for all instances I of t v , and for all positive 31 inhgtr.s k (presented in unary), C!f is a probabilistic oracle Turing machine (with oracle P) such that: 1. If P{.v) = Ti-(.r) for all instances x of 1 r. then with probability > 1 — 1/2*', Cff(I;k) = C O R R E C T (i.e., P{I) is correct); 2. If P{I) 7^ 7 r( /) then with probability > 1 — 1/2*', Cf{ I : k) — BU G G Y (i.e., P has a “bug"). There are two points to be noted here. The first, is that a checker confronted by a program that, answers correctly on the given input, but wrongly at some other instance can say either CORRECT or BUGGY, because it is true both that the given computation was performed correctly, and that the program has a bug. This is because the definition does not specify what the checker must do in this case; in a sense, this omission is what makes the notion of program checkers so powerful. Another point to note is that, as a consequence of two-sided error, there is a, small probability that a checker declares a correct program as BUGGY. While this is generally undesirable (one would like to be certain), there are computational problems (e.g., matrix rank determination [Ka]) for which the fastest known checker needs this leeway. In such a case, it may be prudent to run the checker with a, larger confidence parameter when a result of BUGGY is obtained before acting on this result (e.g., discarding the program). Most of the checkers given below- do not suffer from this drawback — they may certify an incorrect computation as CORRECT, but will never declare a correct, program as BUGGY. However, some of the checkers described below do indeed have two-sided error. 3 .1 .3 C o m p lex ity o f C heckers In arriving at a decision, a checker undertakes t.wo kinds of activity: oracle calls (queries) to the program being checked, and its own computation. Thus the time complexity of a checker is specified by giving, as a function of both the input size and the confidence parameter: 1. the number of oracles calls needed, and 2. the time complexity of the checker’s own computations. Clearly, every computable problem 7 r has many ‘‘trivial checkers” : run a correct program 0 for 7 r, and check whether the given program output the same answer, thereby obtaining a checker that requires 1 oracle call and runs in time about 7n, where 7’ n is the running time of II. In fact, if II were the fastest possible program to compute tt, then it might appear that the trivial checker vising H would be the fastest possible way to check x. Somewhat paradoxically, this is not always the case. It appears that it is sometimes possible to find “fast” checkers for problems x such that the time cov., 1 ‘ xity for the checker's own computations is less than that of any program to compute x. As a practical matter, it is sometimes difficult to know if a given checker for x is “fast” in the sense above. The reason is that it is only rarely the case that one knows a. sufficiently tight lower bound on the complexity of x. As a result, we will settle for calling a checker for x “speedy” iff it runs in o(U„), where UK is the best known upper bound for the time complexity of x, in effect saying, “Here is a checker for x that runs faster than any known program for x.” So much for the complexity of the checker's own computations. 
Assuming now that, the checker does run fast, most of the cost, of checking lies in oracle calls, or queries, made by the checker of the program P being checked: to check one compulation, one may need to run P several times. If Tp(x) is the time taken by P on input, ;r, and T( p(x) the time for checking P's computation on input x (i.e., Tcp(x) = Tp(.r) + Tc (x) + Yli Tpixi), where Tc (x) is the time used by the checker on input .r, and .r,’s are the queries made by the checker), one could aim for checkers such that for some e 6 R, for all programs P and all inputs x. Tcp(x) < cTp(x). i.e., the time to check P ’s computation is within a constant, factor of the time to compute 33 75 (on input- .r). However, there are two obstacles. The first, is that checkers are usually noii-cletenninist.ic; thus, Tcp(x) is not well-defined. This may be circumvented by using the expected value of the running time. The other problem is that P 's running time could be highly non-uniform: Tp(xi) could be much greater than Tp(.r), even if .r, was in some sense “near" x (e.g., both had the same size); one can in fact show that for every checker that makes at least one query (in addit ion to that used for .c), and ior all c € R, there is a total program P and input value .r such that Tcr(x) > cTp(x). Whi le one cannot always bound Tcp(x) within a constant factor of Pp(.r), one can nevertheless aim for constant-query checkers: those that, for every desired level of confidence (captured by the para,meter k in the definition), require only a constant number of calls to the program being checked (thus any trivial checker is a constant- query checker). We will consider ourselves very successful if we are able to find checkers which have both of the desirable properties above. We will refer to such checkers as “speedy constant-query checkers.” Below, we give a speedy constant.-query checker for mod ular inversion. In the event that we cannot find a. speedy constant-query checker for a problem, we will attem pt to find a speedy checker where the number of queries required is a slowly growing function of the input size. Below we find such a checker for modular exponentiation. To recapitulate, finding a checker for a computable problem w is not hard. The intellectual challenge lies in finding a checker that needs few queries and runs fast. 34 3.2 S u m m ary o f R esu lts In liis original paper on program checking [B], Blum described checkers for several problems. Perhaps the cleanest of Iris examples was for the Extended GCD problem EG CD (described in the next, section.) However, he left open the question of the existence of speedy eonstant-query checkers for the problem of unextended GCD, to be subsequently answered affirmatively by Adleman, Iluang and Kornpella. [AHK]. In this thesis, a. speedy constant.-query checker for the compan ion problem of modular inversion is presented. A fast checker for modular exponentiation is also presented. Modular exponen tiation is an important operation in several practical systems, e.g., the RSA cryp tosystem [RSA], where security issues may make checking especially significant. The checker for modular exponent iation requires, on inputs of size /?., (9(log v) modular additions and multiplications and queries to the program to arrive at its decision. Furthermore, it has the property that correct programs are never deemed ‘BUGGY.’ Independently, R.onitt Rubinfeld of Berkeley lias obtained similar results for checking modular exponentiation [R]. 
3.3 Checking Modular Inversion

3.3.1 Problem Statement

The problem of modular inversion (MI) can be described as follows:

Input: a, b: positive integers.
Output: c such that c·a ≡ 1 mod b and 0 < c < b if gcd(a, b) = 1; 0 otherwise.

Both GCD (given integers a, b, compute gcd(a, b)) and MI are no harder than EGCD, the extended GCD problem:

Input: a, b: positive integers.
Output: d = gcd(a, b) and integers u, v such that au − bv = d,

in the sense that they are both reducible in linear time to EGCD. As mentioned above, Blum demonstrated a speedy constant-query checker for EGCD. However, it is not known whether the existence of speedy constant-query checkers for a problem π implies the existence of speedy constant-query checkers for every problem reducible to π in linear time. That is to say, the relation between "checking complexity" and "solving complexity" is unclear. Thus, the question of the existence of speedy constant-query checkers for GCD (now resolved [AHK]) and MI is of theoretical interest. Moreover, modular inversion is a basic subroutine in many number-theoretic algorithms, and so confidence in its correct execution is a must. While currently modular inversion is accomplished by solving the EGCD problem, it is possible that in the future an algorithm will be discovered for modular inversion which runs faster than known programs for EGCD. Thus, one would like to have a checker for MI itself.

The main result of this section is that programs for MI can be checked quickly with only a constant number of queries:

Theorem 8 (Speedy Constant-Query Checkers for Modular Inversion Exist.) There exists a program checker C_MI for MI such that for all total programs P : Z_{>0} × Z_{>0} → Z, all inputs a, b ∈ Z_{>0} and all confidence parameters k ∈ Z_{>0},

1. C_MI^P makes O(k) queries of the program P;
2. C_MI^P requires O(k·M(b)) steps, where M(b) is the number of steps required for multiplication modulo b.

3.3.2 A Speedy Constant-Query Checker for Modular Inversion

The checker C_MI is now presented. The values of α and M are as in Proposition 3; the value of β is as in Lemma 6 (which are stated below). Denote by MI(a, b) the correct answer to the modular inversion problem on inputs a, b.

Input: P: a program (supposedly for MI) that halts on all inputs; a, b: positive integers; k: security parameter (given in unary).
Output: 'CORRECT' (with probability > 1 − 1/2^k) if P(A, B) = MI(A, B) for all A, B ∈ Z_{>0}; 'BUGGY' (with probability > 1 − 1/2^k) if P(a, b) ≠ MI(a, b).

begin
    c := P(a, b);
    if b < M then
        (* for small values of b, explicitly compute the value of MI(a, b), and thereby check the program P. *)
        compute d = gcd(a, b);
        if d ≠ 1 then c' := 0 else c' := a^{-1} mod b;
        if c ≠ c' then output 'BUGGY' and halt
        else output 'CORRECT' and halt
    end if (b < M)
    (* now for large values of b. *)
    if c ≠ 0 then
        (* P claims that c is a^{-1} mod b. *)
        if a·c mod b ≠ 1 then output 'BUGGY'
        else output 'CORRECT'
    end if (c ≠ 0)
    if c = 0 then
        (* P claims that gcd(a, b) > 1. *)
        (* Loop 1: check that P is consistent in its answers. *)
        for i := 1 to ⌈−k·log_{(1−α/3)} 2⌉ do
            r := random(1, b); s := random(1, b);
            u := a·r mod b; v := a·s mod b;
            (* gcd(a, b) | u and gcd(a, b) | v, so P(u, v) should be 0, since gcd(u, v) ≥ gcd(a, b) > 1. *)
            if P(u, v) ≠ 0 then output 'BUGGY' and halt
        end for
        (* Loop 2: count the pairs (r, s), 1 ≤ r, s ≤ b, such that P(r, s) ≠ 0. *)
        n := ⌈−k·log_β 2⌉;
        count := 0;
        for i := 1 to n do
            r := random(1, b); s := random(1, b);
            if P(r, s) ≠ 0 then count := count + 1
        end for
        (* the fraction count/n is expected to be at least α. *)
        if count/n < 2α/3 then output 'BUGGY'
        else output 'CORRECT'
    end if (c = 0)
end.
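For concreteness, a small Python sketch mirroring the structure of C_MI is given below. The constants alpha and M are placeholders for the values supplied by Proposition 3, and the Loop 2 length is written with a stand-in repetition count; none of these particular numbers is taken from the text.

    import math
    import random

    def check_mi(P, a, b, k, alpha=0.6, M=100):
        """Sketch of C_MI.  P is the program being checked; alpha and M stand in
        for the constants of Proposition 3 and are illustrative only."""
        c = P(a, b)
        if b < M:                                   # small b: check by direct computation
            cprime = pow(a, -1, b) if math.gcd(a, b) == 1 else 0
            return "CORRECT" if c == cprime else "BUGGY"
        if c != 0:                                  # P claims c is the inverse of a mod b
            return "CORRECT" if (a * c) % b == 1 else "BUGGY"
        # c == 0: P claims gcd(a, b) > 1.
        loop1 = math.ceil(-k * math.log(2) / math.log(1 - alpha / 3))
        for _ in range(loop1):                      # Loop 1: consistency of the claim
            u = (a * random.randint(1, b)) % b
            v = (a * random.randint(1, b)) % b
            if P(u, v) != 0:
                return "BUGGY"
        n = 4 * k                                   # stand-in for ceil(-k * log_beta(2))
        count = sum(1 for _ in range(n)
                    if P(random.randint(1, b), random.randint(1, b)) != 0)
        return "BUGGY" if count / n < 2 * alpha / 3 else "CORRECT"

Exactly as in the case analysis that follows, the only way a correct program can be declared BUGGY here is through the sampling in Loop 2.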
The key to this checker, as well as some of the checkers to be seen in the next chapter, is a result that states that the probability that a statistical test deviates significantly from the expectation falls off exponentially. For example, consider the following statistical test: throw an unbiased die n times, and count how many times a one is thrown. One expects that n/6 ones will be thrown; the probability that at most half the expected number (i.e., n/12) of ones are thrown falls off exponentially with n. Similarly, the probability that more than, say, n/3 ones are thrown also falls off exponentially with n.

For the case at hand, one of the statistical tests applied is the following: fix a positive integer b. Pick n pairs of integers (u, v) with 1 ≤ u, v ≤ b, and count how many pairs consist of relatively prime numbers. Let the probability that such a random pair is made up of relatively prime numbers be α (see Proposition 3). One expects that αn of the n pairs picked consist of relatively prime numbers. The probability that fewer than 2αn/3 pairs consist of relatively prime numbers declines exponentially as a function of n.

Now, consider the following cases on input (a, b). Let c = P(a, b) and let d = gcd(a, b). Assume b ≥ M, since the case when b < M is trivial.

Case 1: c ≠ 0 and ac ≢ 1 mod b (P(a, b) is wrong). C_MI says 'BUGGY'.

Case 2: c ≠ 0 and ac ≡ 1 mod b (P(a, b) is correct). C_MI says 'CORRECT'.

Case 3: c = 0 and d > 1 (P(a, b) is correct).

Subcase 1: P is correct on all inputs. Note that d divides all the u's and v's generated in every pass of the first loop. Thus, since 1 < d | gcd(u, v), P(u, v) = 0 for all such pairs (u, v) and C_MI will not declare P to be 'BUGGY' during loop 1. Moreover, the probability that fewer than 2αn/3 relatively prime pairs are picked in loop 2 is small, so it is unlikely that C_MI will declare P to be 'BUGGY'. Thus, with high probability, C_MI will say 'CORRECT'.

Subcase 2: P is incorrect on some input. Then, as explained following the definition of a checker, it does not matter whether C_MI says 'CORRECT' or 'BUGGY'.

Case 4: c = 0 and d = 1 (P(a, b) is wrong).

Subcase 1: P declares fewer than α/3 of the pairs (a', b') with 1 ≤ a', b' ≤ b as relatively prime (i.e., P(a', b') ≠ 0). Then it is expected that the count in loop 2 is less than αn/3; by Proposition 4, the probability that the count is as high as 2αn/3 is very small. Thus, with high probability, C_MI will declare P to be 'BUGGY'.

Subcase 2: P declares more than α/3 of the pairs (a', b') with 1 ≤ a', b' ≤ b as relatively prime. Since gcd(a, b) = 1, the numbers u, v generated in loop 1 are just random numbers ≤ b. Thus, the probability that P(u, v) ≠ 0 for some pair (u, v) is very high, whereupon C_MI will say 'BUGGY'.

Thus, in either case, C_MI will declare P 'BUGGY' with high probability.

3.3.3 Proof of Theorem 8

Lemma 6 gives a more formal version of the previous argument. First, some necessary results are stated. A well-known result from analytic number theory states that

lim_{x→∞} #{(s, t) ∈ Z_{>0} × Z_{>0} | gcd(s, t) = 1 and s, t ≤ x} / #{(s, t) ∈ Z_{>0} × Z_{>0} | s, t ≤ x} = 6/π²

(see [Ap, Ch. 3.8, Thm. 3.9]).
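As a quick empirical illustration of this limit (6/π² ≈ 0.6079), the following few lines of Python estimate the density of coprime pairs by sampling; this is an aside only and plays no role in the proof.

    import math
    import random

    def coprime_density(x, trials=100_000):
        """Estimate the fraction of pairs (s, t), 1 <= s, t <= x, with gcd(s, t) = 1."""
        hits = sum(1 for _ in range(trials)
                   if math.gcd(random.randint(1, x), random.randint(1, x)) == 1)
        return hits / trials

    print(coprime_density(10**6), 6 / math.pi**2)   # both are close to 0.6079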
From this, one can derive the following

Proposition 3 There exists an integer M and a constant α ∈ R with α > 0 such that for all integers b with b ≥ M,

#{(s, t) ∈ Z_{>0} × Z_{>0} | gcd(s, t) = 1 and s, t ≤ b} / #{(s, t) ∈ Z_{>0} × Z_{>0} | s, t ≤ b} ≥ α.

Theorem 9 (Chebyshev's Inequality) Let X be a random variable whose r-th moment is defined for some r ∈ Z_{>0}. Then for all c ∈ R and all a ∈ R_{>0},

Prob[|X − c| ≥ a] ≤ E[|X − c|^r] / a^r.

(For a proof, see [Th, Ch. 3.5, Thm. 1].)

Proposition 4 Let X be a 0-1 random variable, with probability p of being 1. Let S_n be the sum of n observations of X. Then S_n is a random variable with probability C(n, k)·p^k·(1 − p)^{n−k} of being k, for k ∈ {0, ..., n}.

1. For all f ≥ 1, there exists a constant β = β(p, f) ∈ R such that for all n ∈ Z_{>0}, Prob[S_n ≥ fpn] ≤ β^n, where β(0, f) = 1 for f ≥ 1, β(p, 1/p) = p for p > 0, β(p, f) = 0 for p > 0 and f > 1/p, and β(p, f) = ((1 − p)/(1 − fp))^{1−fp} · f^{−fp} for p > 0 and 1 ≤ f < 1/p. Furthermore, for f > 1, β(p, f) < 1.

2. For all 0 ≤ f ≤ 1, there exists a constant β = β(p, f) ∈ R such that for all n ∈ Z_{>0}, Prob[S_n ≤ fpn] ≤ β^n, where β(1, 1) = 1, β(1, f) = 0 for f < 1, and β(p, f) = β(1 − p, (1 − fp)/(1 − p)) for p < 1. Furthermore, for f < 1, β(p, f) < 1.

Proof: 1) Clearly, β(0, f) = 1 for f ≥ 1, and β(p, f) = 0 for p > 0 and f > 1/p. It is also easy to see that, for f = 1/p, Prob[S_n ≥ fpn] = Prob[S_n = n] = p^n. Now consider the case 1 ≤ f < 1/p. For all λ ∈ R_{>0}, S_n ≥ fpn iff λS_n ≥ λfpn iff e^{λS_n} ≥ e^{λfpn}. Thus,

Prob[S_n ≥ fpn] = Prob[e^{λS_n} ≥ e^{λfpn}] ≤ E[e^{2λS_n}] / e^{2λfpn}

by the previous theorem (choosing r = 2, c = 0, and dropping the absolute values since all values are non-negative). Also,

E[e^{2λS_n}] = Σ_{i=0}^{n} C(n, i)·p^i·(1 − p)^{n−i}·e^{2λi} = (1 − p + p·e^{2λ})^n,

so that Prob[S_n ≥ fpn] ≤ t(λ)^n, where t(λ) = (1 − p + p·e^{2λ}) / e^{2λfp}. Minimizing t(λ), one obtains λ_min(p, f) = (1/2)·ln(f(1 − p)/(1 − fp)), whereby Prob[S_n ≥ fpn] ≤ β(p, f)^n with β(p, f) = t(λ_min). Furthermore, t(λ) = 1 at λ = 0, so that for f > 1 (so that λ_min(p, f) ≠ 0), β(p, f) = t(λ_min) < 1.

2) Substituting Y = 1 − X and T_n = n − S_n, and evaluating Prob[S_n ≤ fpn] = Prob[T_n ≥ n − fpn] as above yields the desired result. □

Let β_1 = β(α, 2/3) and β_2 = β(α/3, 2), where α is as in Proposition 3. Let β = max{β_1, β_2}.

Lemma 6 (C_MI works.) For all total programs P, for all positive integers a, b, k:

1. If P is correct (i.e., P(A, B) = MI(A, B) for all positive integers A, B), then with probability > 1 − 2^{-k}, C_MI^P(a, b; k) = 'CORRECT'.

2. If P(a, b) ≠ MI(a, b), then with probability > 1 − 2^{-k}, C_MI^P(a, b; k) = 'BUGGY'.

Proof: Since P is checked explicitly for small values of b, assume that b ≥ M.

1. Suppose P is correct on all inputs. If MI(a, b) ≠ 0, then C_MI^P(a, b; k) will clearly say 'CORRECT.' If MI(a, b) = 0, let d = gcd(a, b). Then d > 1 and d | u, d | v for all u, v generated in the first loop. But then gcd(u, v) > 1, so that P(u, v) = MI(u, v) = 0, and C_MI will not declare P buggy in loop 1. Furthermore, by Proposition 3, the probability that a random pair r, s with 1 ≤ r, s ≤ b is relatively prime is at least α, so that the expected value of count at the end of the second loop is at least αn. By Proposition 4, the probability that count < 2αn/3 is at most β_1^n ≤ β^n ≤ 2^{-k}. Thus, with probability > 1 − 2^{-k}, C_MI^P(a, b; k) will say 'CORRECT.'

2. Now suppose P is incorrect at the given input, i.e., c = P(a, b) ≠ MI(a, b) = c'. If c ≠ 0, then clearly C_MI^P(a, b; k) = 'BUGGY.' So suppose c' ≠ 0, and c = 0.
Define γ by

γ = #{(a', b') | 1 ≤ a', b' ≤ b and P(a', b') ≠ 0} / #{(a', b') | 1 ≤ a', b' ≤ b}.

The pairs (u, v) generated in each pass of the first loop are just random numbers from 1, ..., b, since gcd(a, b) = 1. Thus, if γ > α/3, then the probability that P(u, v) = 0 throughout loop 1 is at most

(1 − α/3)^{⌈−k·log_{(1−α/3)} 2⌉} ≤ 2^{-k}.

Hence, the probability that C_MI will say 'BUGGY' is > 1 − 2^{-k}. On the other hand, if γ ≤ α/3, then the probability that, at the end of loop 2, count ≥ 2αn/3 is at most β_2^n ≤ β^n ≤ 2^{-k}, by Proposition 4. Hence, with probability > 1 − 2^{-k}, C_MI will declare P to be buggy. □

To complete the proof of Theorem 8, one need only check that C_MI has the stated complexity. This is obvious by inspection.

3.4 Checking Modular Exponentiation

3.4.1 Problem Statement

The problem of modular exponentiation (ME) can be described as follows:

Input: a, b, m: positive integers, with a < m.
Output: c such that 0 ≤ c < m and c ≡ a^b mod m.

The main result:

Theorem 10 (Fast Checkers for Modular Exponentiation Exist.) There exists a program checker C_ME for ME such that for all total programs P : Z_{>0} × Z_{>0} × Z_{>0} → Z and all inputs a, b, m ∈ Z_{>0} with a < m,

1. C_ME^P makes O(log log b) queries of the program P;
2. C_ME^P runs in time O((log log b + log log m)·M(log m)), where M(n) is the time required to perform modular multiplication of n-bit numbers.

3.4.2 Remarks

The checker C_ME is based on the tester-checker paradigm, introduced in [AK]. That is, it follows a two-stage protocol, where, in the first stage, the checker tests that the program satisfies some statistical property (e.g., correctness on a certain fraction of the inputs); and in the second, it checks the program on the given input, making use of the statistical property just verified. This paradigm has proved useful in the design of other checkers as well (see [KA]; see also Chapter 4).

A similar checker for modular exponentiation was devised by Rubinfeld [R], who independently discovered the tester-checker paradigm. However, the Rubinfeld checker, on input <a, b, m>, needs time O(log log b·(log log m)^3·M(log m)) and O(log log b·(log log m)^2) calls to the program, and further requires that (a, m) = 1.

The checker C_ME can easily be modified to check exponentiation in any group, and in many semigroups, again with O(log log s) queries and group multiplications for an exponent of s. Thus, for example, one can obtain fast checkers for exponentiation in polynomial rings and on elliptic curve groups. (To elaborate, "exponentiating" on an elliptic curve E involves computing s·P for an integer s and a point P on E. This requires O(log s) additions on E, whereas checking a program for exponentiation requires only O(log log s) additions on E and calls to the program.)

3.4.3 Informal Description

(To simplify this informal description, we will begin by assuming that the modulus m is prime. This assumption is not required for the actual checker.)

At the heart of the checker for modular exponentiation lies the familiar algebraic identity:

a^e · a^f ≡ a^{e+f} mod m.     (3.1)

Let P be a program that purports to perform modular exponentiation, and let (a, b, m) be the given input. Suppose that, for base a and modulus m, P exponentiates correctly at sufficiently many exponents f, i.e.,

P(a, f, m) = a^f mod m for at least 5/6 of f ∈ {0, 1, ..., 2^{2^n}}     (3.2)

(where n is such that 2^{2^{n−1}} < b ≤ 2^{2^n}).
Call such a program "a-tested for n" at modulus m (henceforth, the modulus is fixed at m, and will not be specified). Suppose further that at the given input,

P(a, b, m) ≠ a^b mod m.     (3.3)

Then identity (3.1) suggests the following check: pick f randomly from {0, ..., 2^{2^n}} and check whether P(a, b, m)·P(a, f, m) ≡ P(a, b + f, m) mod m. If f is such that P(a, f, m) = a^f mod m and P(a, b + f, m) = a^{b+f} mod m, then (3.3) implies that the check will fail. Moreover, (3.2) implies that picking such an f is reasonably likely (provided that b + f ≤ 2^{2^n}; the method actually used is modified to avoid this difficulty). Thus, the tester-checker paradigm suggests itself: first, obtain confidence that the program P satisfies (3.2) (i.e., that P is "a-tested for n"), then use identity (3.1) as outlined to obtain confidence that P(a, b, m) = a^b mod m.

Now, the "tester" phase of verifying that (3.2) holds can be performed by picking several f's at random from {0, ..., 2^{2^n}} (i.e., of size 2^n bits), and checking that P(a, f, m) is correct: if P fails to satisfy (3.2), then with high probability, one of the f's picked will demonstrate this. However, to do these checks, one cannot assume that P satisfies (3.2)! Instead, for these checks, one applies the tester-checker paradigm recursively thus: assume that P is "a-tested for n − 1", i.e., P(a, f', m) = a^{f'} mod m for most f' ∈ {0, ..., 2^{2^{n−1}}}. Use the reduction described below to reduce checking on exponents of size 2^n (i.e., the f's picked at random) to checking on exponents of size 2^{n−1}; now, these checks can indeed be performed, as P is assumed to be "a-tested for n − 1".

Thus, one obtains the following structure for the checker: "a-test for 1" (this turns out to be trivial); "a-test for 2"; ...; "a-test for j"; "a-test for j + 1": pick several f's at random from {0, ..., 2^{2^{j+1}}}, and check at these f's by reducing to checking at exponents where testing has already been done; ...; "a-test for n". Now, check P(a, b, m).

The reduction from checking on large exponents to checking on exponents half the size goes as follows. Let f be 2^{j+1} bits long; write f as h·2^{2^j} + l, with h and l < 2^{2^j}, i.e., h and l are at most half the length of f. The following identity is used:

a^f = (a^h)^{2^{2^j}} · a^l mod m.

Then, to check P(a, f, m), it suffices to check that P(a, h, m), P(x, 2^{2^j}, m) (with x = P(a, h, m)) and P(a, l, m) are all correct. Setting aside the matter of checking P(x, 2^{2^j}, m), the above identity shows how to check that P raises a to exponents of size 2^{j+1} mod m correctly, given that one knows how to check that P raises a to exponents of size 2^j mod m.

Finally, it must be shown how one can check that x^{2^{2^j}} mod m is computed correctly for arbitrary x. Again, the tester-checker paradigm is brought into play: first verify that P is "tested for 2^{2^j}", i.e., that P(y, 2^{2^j}, m) = y^{2^{2^j}} mod m for most y ∈ {1, ..., m − 1}, then use the identity

(xy)^e = x^e · y^e mod m

to check that x^{2^{2^j}} mod m is computed correctly. Once again, the testing is done inductively: if P is tested for 2^{2^j}, then it can be tested for 2^{2^{j+1}} using

y^{2^{2^{j+1}}} = (y^{2^{2^j}})^{2^{2^j}} mod m

to reduce checking P(y, 2^{2^{j+1}}, m) to checking P(y, 2^{2^j}, m) and P(z, 2^{2^j}, m) (with z = P(y, 2^{2^j}, m)), which can be checked since P has been tested for 2^{2^j}. Thus, the structure of the checker is now: "test for 2^{2^j}" for 0 ≤ j ≤ n (n as above); then "a-test for j" for 0 ≤ j ≤ n; finally, check P(a, b, m).
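The splitting identity at the core of the reduction is easy to sanity-check numerically. The following Python lines (purely illustrative, not part of the checker; split_exponent is a helper introduced only here) verify a^f = (a^h)^{2^{2^j}} · a^l mod m on random instances.

    import random

    def split_exponent(f, j):
        """Write f = h * 2**(2**j) + l with 0 <= h, l < 2**(2**j)."""
        half = 2 ** (2 ** j)
        return f // half, f % half                      # (h, l)

    for _ in range(5):
        j = random.randint(1, 4)
        m = random.randint(3, 10**6)
        a = random.randint(1, m - 1)
        f = random.randint(0, 2 ** (2 ** (j + 1)) - 1)  # f has at most 2**(j+1) bits
        h, l = split_exponent(f, j)
        lhs = pow(a, f, m)
        rhs = (pow(pow(a, h, m), 2 ** (2 ** j), m) * pow(a, l, m)) % m
        assert lhs == rhs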
The first test is called test1 in the checker; the second is called test2; the corresponding checks are called check1 and check2.

Now consider the case when m is not prime. Then cancellation is no longer valid in general, and this introduces two problems: first, even if a "nice" f is picked, the check suggested by (3.1) may not work; and second, the check for P(x, 2^{2^j}, m) also may not work. The former problem is circumvented by working modulo the largest factor of m relatively prime to a (without explicitly computing it), so that cancellation is reinstated; and the latter is solved by using a "generalized" cancellation law: for all x, y, z ∈ Z, for all m ∈ Z_{>0} and for all e ∈ Z_{>0}, if z·y^e ≡ (xy)^e mod m and z·(y + 1)^e ≡ (x(y + 1))^e mod m, then z ≡ x^e mod m.

3.4.4 The Checker C_ME

The checker is presented below in detail. Note that the modulus m is no longer restricted to be prime.

The following notation will be used: for any positive integers m and a, write m = ∏ p_i^{e_i} with p_i distinct primes, denote by m[a] the product ∏_{p_i ∤ a} p_i^{e_i}, and let m[ā] = m / m[a]. Note that (m[a], m[ā]) = (a, m[a]) = 1.

For j ∈ Z_{≥0}, let T_1(P, j, m) = 1 if

#{y | 0 < y < m and P(y, 2^{2^j}, m) = y^{2^{2^j}} mod m} / #{y | 0 < y < m} ≥ 7/8,

and T_1(P, j, m) = 0 otherwise; similarly, let T_2(P, a, j, m) = 1 if

#{f | 0 ≤ f < 2^{2^j} and P(a, f, m) = a^f mod m[a]} / #{f | 0 ≤ f < 2^{2^j}} ≥ 5/6,

and T_2(P, a, j, m) = 0 otherwise.

The checker first attempts to verify that both T_1(P, j, m) = 1 and T_2(P, a, j, m) = 1 for 0 ≤ j ≤ log log b, by calling test1 and test2 respectively. It then checks the program on the given input.

Input: P: a program (supposedly for modular exponentiation) that halts on all inputs; a, b, m: positive integers such that 0 < a < m; k: security parameter (given in unary).
Output: 'CORRECT' if P(A, B, M) = A^B mod M for all A, B, M ∈ Z_{>0} with A < M; 'BUGGY' (with probability > 1 − 1/2^k) if P(a, b, m) ≠ a^b mod m.

begin
    set c := ⌈log m⌉
    (* For small exponents b ≤ c, check by direct computation. *)
    if b ≤ c then
        compute a^b mod m directly;
        if P(a, b, m) ≠ a^b mod m then output 'BUGGY' else output 'CORRECT';
        halt
    end if
    set n := ⌈log log b⌉
    (* Tester stage *)
    test1(n, m, k)
    test2(a, n, m, k)
    (* Checker stage: establish correctness at the given input *)
    do k times
        check2(a, b, n, m)    (* check that P(a, b, m) = a^b mod m[a] *)
    end do
    (* Then, check that P(a, b, m) = a^b (= 0) mod m[ā]. *)
    (* A more detailed explanation is given in Theorem 10. *)
    compute a^c mod m directly    (* since c is small *)
    if P(a, b − c, m)·a^c ≠ P(a, b, m) mod m then output 'BUGGY'
    else output 'CORRECT'
end

test1(n, m, k):
(* Test whether T_1(P, j, m) holds for 0 ≤ j ≤ n. *)
begin
    (* The base case j = 0 is just squaring: test directly. *)
    do 6k times
        x := random(1, m − 1);
        if P(x, 2, m) ≠ x^2 mod m then output 'BUGGY' and halt
    end do
    for j ∈ {1, ..., n} do
        do 11k times
            x := random(1, m − 1);
            check1(x, j − 1, m); let y = P(x, 2^{2^{j−1}}, m)
            check1(y, j − 1, m); let z = P(y, 2^{2^{j−1}}, m)
            if P(x, 2^{2^j}, m) ≠ z mod m then output 'BUGGY' and halt
        end do
    end for
end

check1(x, j, m):
(* Check that P(x, 2^{2^j}, m) = x^{2^{2^j}} mod m, given that T_1(P, j, m) holds. *)
begin
    y := random(1, m − 1);
    if P(x, 2^{2^j}, m)·P(y, 2^{2^j}, m) ≠ P(x·y mod m, 2^{2^j}, m) mod m then output 'BUGGY' and halt
    if P(x, 2^{2^j}, m)·P(y + 1, 2^{2^j}, m) ≠ P(x·(y + 1) mod m, 2^{2^j}, m) mod m then output 'BUGGY' and halt
end

test2(a, n, m, k):
(* Test whether T_2(P, a, j, m) holds for 0 ≤ j ≤ n. *)
begin
    (* The base case j = 0 is checked directly. *)
    if P(a, 0, m) ≠ 1 mod m or P(a, 1, m) ≠ a mod m then output 'BUGGY' and halt
    for j ∈ {1, ..., n} do
        do 8k times
            e := random(0, 2^{2^j} − 1);
            write e = h·2^{2^{j−1}} + l, with 0 ≤ h, l < 2^{2^{j−1}};
            check2(a, h, j − 1, m); let x = P(a, h, m)
            check1(x, j − 1, m); let y = P(x, 2^{2^{j−1}}, m)
            check2(a, l, j − 1, m); let z = P(a, l, m)
            if y·z ≠ P(a, e, m) mod m then output 'BUGGY' and halt
        end do
    end for
end

check2(a, e, j, m):
(* Check that P(a, e, m) = a^e mod m[a], when e < 2^{2^j} and T_2(P, a, j, m) holds. *)
begin
    f := random(0, 2^{2^j} − 1);
    if f ≥ e then
        if P(a, e, m)·P(a, f − e, m) ≠ P(a, f, m) mod m then output 'BUGGY' and halt
    end if (f ≥ e)
    if f < e then
        if P(a, f, m)·P(a, e − f, m) ≠ P(a, e, m) mod m then output 'BUGGY' and halt
    end if (f < e)
end
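To make the two consistency subroutines concrete, a small Python sketch is given here, written against an oracle P(base_or_x, exponent, modulus). Only the spot-checks themselves are shown; the surrounding repetition, the tests test1 and test2, and the constants are omitted, so this is an illustration of the identities in use rather than a drop-in implementation.

    import random

    def check1(P, x, j, m):
        """Spot-check that P(x, 2**(2**j), m) equals x**(2**(2**j)) mod m, using the
        multiplicativity of e-th powers at y and at y + 1 (cf. Lemma 7)."""
        e = 2 ** (2 ** j)
        y = random.randint(1, m - 1)
        ok = (P(x, e, m) * P(y, e, m)) % m == P((x * y) % m, e, m) % m
        ok = ok and (P(x, e, m) * P(y + 1, e, m)) % m == P((x * (y + 1)) % m, e, m) % m
        return ok                       # False corresponds to outputting 'BUGGY'

    def check2(P, a, e, j, m):
        """Spot-check that P(a, e, m) equals a**e modulo the part of m prime to a,
        via the identity a^e * a^(f-e) = a^f for a random f < 2**(2**j) (cf. Lemma 9)."""
        f = random.randint(0, 2 ** (2 ** j) - 1)
        if f >= e:
            return (P(a, e, m) * P(a, f - e, m)) % m == P(a, f, m) % m
        return (P(a, f, m) * P(a, e - f, m)) % m == P(a, e, m) % m

A correct program passes both routines with certainty; Lemmas 7 and 9 below bound the probability that an incorrect value survives a single call.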
3.4.5 Formal Proof of Theorem 10

In this section, Theorem 10 is proved. The proof proceeds via several lemmas that assert that the tests and checks given perform as desired.

Lemma 7 (check1 works) For all j ∈ Z_{≥0}, for all x, m ∈ Z_{>0} with x < m:

1. If P(A, E, M) = A^E mod M for all 0 < A < M and all E ≥ 0 (i.e., P is a correct program for ME), then check1 does nothing.

2. If T_1(P, j, m) holds, but P(x, 2^{2^j}, m) ≠ x^{2^{2^j}} mod m, then with probability ≥ 1/2, check1(x, j, m) outputs 'BUGGY' and halts.

Proof: 1) Straightforward.

2) Assume the antecedent, and write e = 2^{2^j}. For 0 < y < m, let S_y = {y, xy mod m, y + 1, x(y + 1) mod m}; call y bad if S_y has a z such that P(z, e, m) ≠ z^e mod m. Now, each z with 0 < z < m can appear in at most 4 of the S_y's; and since #{z | 0 < z < m and P(z, e, m) ≠ z^e mod m} ≤ m/8, there are fewer than m/2 bad y's. But if y is not bad, then check1 will output 'BUGGY' and halt, for otherwise one must have P(x, e, m)·y^e ≡ (xy)^e ≡ x^e·y^e mod m[y] (since m[y] | m), so that P(x, e, m) ≡ x^e mod m[y] (since (y, m[y]) = 1, one can cancel); and similarly, P(x, e, m) ≡ x^e mod m[ȳ] (since m[ȳ] | m, and (y + 1, m[ȳ]) = 1), whence P(x, e, m) ≡ x^e mod m by the Chinese Remainder Theorem, a contradiction. Thus, with probability at least 1/2, check1(x, j, m) will output 'BUGGY' and halt. □

Lemma 8 (test1 works) For all n ∈ Z_{≥0}, for all m, k ∈ Z_{>0}:

1. If P(A, E, M) = A^E mod M for all 0 < A < M and all E ≥ 0 (i.e., P is a correct program for ME), then test1(n, m, k) does nothing.

2. If T_1(P, j, m) fails to hold for some 0 ≤ j ≤ n, then with probability at least 1 − 2^{-k}, test1(n, m, k) will output 'BUGGY' and halt.

Proof: 1) Straightforward.

2) Let j_0 be the smallest j such that T_1(P, j, m) fails to hold. If j_0 = 0, then P squares incorrectly at least 1/8 of the time, and in 6 passes through the first loop, an instance where P(x, 2, m) ≠ x^2 mod m will be found with probability at least 1/2. Therefore, at the end of the loop, with probability > 1 − 2^{-k}, test1 will output 'BUGGY' and halt. Now suppose j_0 > 0. For random x (0 < x < m), Prob[P(x, 2^{2^{j_0}}, m) ≠ x^{2^{2^{j_0}}} mod m] ≥ 1/8. For such an x, if P(x, 2^{2^{j_0−1}}, m) = x^{2^{2^{j_0−1}}} mod m and P(x^{2^{2^{j_0−1}}}, 2^{2^{j_0−1}}, m) = (x^{2^{2^{j_0−1}}})^{2^{2^{j_0−1}}} = x^{2^{2^{j_0}}} mod m, then test1 will say 'BUGGY' and halt. If either of these values is computed incorrectly, then (by Lemma 7) with probability at least 1/2, check1 will output 'BUGGY' and halt.
So, every pass through the second loop will detect a bug with probability at least 1/16. Thus, at the end of the loop, with probability > 1 − 2^{-k}, test1 will output 'BUGGY' and halt (remark: (15/16)^{11} < 1/2). □

Lemma 9 (check2 works) For all e, j ∈ Z_{≥0} with e < 2^{2^j}, for all a, m ∈ Z_{>0} with a < m:

1. If P(A, E, M) = A^E mod M for all 0 < A < M and all E ≥ 0 (i.e., P is a correct program for ME), then check2(a, e, j, m) does nothing.

2. If T_2(P, a, j, m) holds, but P(a, e, m) ≠ a^e mod m[a], then with probability ≥ 1/2, check2(a, e, j, m) outputs 'BUGGY' and halts.

Proof: 1) Straightforward.

2) Assume the antecedent. Arguing as in Lemma 7, one can show that the probability of picking f such that, modulo m[a], P(a, f, m) = a^f, P(a, e − f, m) = a^{e−f} (if e > f), and P(a, f − e, m) = a^{f−e} (if f ≥ e), is at least 1/2. But for such an f (say f ≥ e), P(a, e, m)·P(a, f − e, m) ≠ P(a, f, m) mod m[a], since P(a, e, m) ≠ a^e mod m[a] and (m[a], a) = 1; thus, a fortiori, the congruence cannot hold mod m. Therefore, with probability at least 1/2, check2 will output 'BUGGY' and halt. □

Lemma 10 (test2 works) For all n ∈ Z_{≥0}, for all a, m, k ∈ Z_{>0}:

1. If P(A, E, M) = A^E mod M for all 0 < A < M and all E ≥ 0 (i.e., P is a correct program for ME), then test2(a, n, m, k) does nothing.

2. If T_1(P, j, m) holds for all 0 ≤ j ≤ n, and T_2(P, a, j, m) fails to hold for some 0 ≤ j ≤ n, then with probability at least 1 − 2^{-k}, test2(a, n, m, k) will output 'BUGGY' and halt.

Proof: 1) Straightforward.

2) Let j_0 be the smallest j such that T_2(P, a, j, m) fails to hold. If j_0 = 0, then test2 will clearly output 'BUGGY' and halt. So, assume j_0 > 0. Then, for random e < 2^{2^{j_0}}, Prob[P(a, e, m) ≠ a^e mod m[a]] ≥ 1/6. Let e be less than 2^{2^{j_0}} and assume P(a, e, m) ≠ a^e mod m[a]; write e = h·2^{2^{j_0−1}} + l, 0 ≤ h, l < 2^{2^{j_0−1}}. If x := P(a, h, m) ≡ a^h mod m[a], P(a, l, m) ≡ a^l mod m[a], and P(x, 2^{2^{j_0−1}}, m) ≡ x^{2^{2^{j_0−1}}} mod m (and thus mod m[a]), then test2 will output 'BUGGY' and halt. If either of the first two congruences fails, then since T_2(P, a, j_0 − 1, m) holds by assumption and h, l < 2^{2^{j_0−1}}, check2 will catch the bug with probability at least 1/2. If the third congruence fails to hold, then since T_1(P, j_0 − 1, m) holds, check1 will output 'BUGGY' and halt with probability at least 1/2. Therefore, in one pass through the do loop, Prob[test2 finds a bug] ≥ 1/12. Thus, after 8k passes through the do loop, Prob[test2 says 'BUGGY'] ≥ 1 − 2^{-k} (note: (11/12)^8 < 1/2). □

Proof of Theorem 10: To show that C_ME is in fact a checker for modular exponentiation, observe first that, for correct programs P, C_ME answers 'CORRECT' on all valid inputs, since the first parts of Lemmas 7 through 10 show that the checks and tests do nothing on correct programs, whereby C_ME will output 'CORRECT'. So, assume that P and <a, b, m> are such that P(a, b, m) ≠ a^b mod m, and let n = ⌈log log b⌉. If T_1(P, j, m) fails to hold for some 0 ≤ j ≤ n, then it follows from Lemma 8 that with probability at least 1 − 2^{-k}, C_ME^P(a, b, m; k) = 'BUGGY'. If T_1(P, j, m) holds for all 0 ≤ j ≤ n, but T_2(P, a, j, m) fails to hold for some 0 ≤ j ≤ n, then it follows from Lemma 10 that with probability at least 1 − 2^{-k}, C_ME^P(a, b, m; k) = 'BUGGY'. So, assume further that T_1(P, j, m) and T_2(P, a, j, m) hold for 0 ≤ j ≤ n. Finally, assume that b > c = ⌈log m⌉, since when b is small, checking is done by direct computation.
Note that P(a, b, m) ≠ a^b mod m implies that either P(a, b, m) ≠ a^b mod m[a], or P(a, b, m) ≠ a^b mod m[ā]. There are thus two cases:

Case 1: P(a, b, m) ≠ a^b mod m[a]. Since T_2(P, a, n, m) holds, it follows from Lemma 9 that check2(a, b, n, m) will output 'BUGGY' and halt with probability at least 1/2. Thus, with probability at least 1 − 2^{-k}, C_ME^P(a, b, m; k) = 'BUGGY'.

Case 2: P(a, b, m) ≠ a^b mod m[ā]. For b > c, a^b ≡ 0 mod m[ā]. Thus, P(a, b − c, m)·a^c ≡ 0 mod m[ā]. Hence, if P(a, b, m) ≠ a^b ≡ 0 mod m[ā], then P(a, b, m) ≠ P(a, b − c, m)·a^c ≡ 0 mod m[ā], so the congruence fails mod m, and C_ME will output 'BUGGY'.

Thus, in any case, C_ME^P(a, b, m; k) will output 'BUGGY' with probability at least 1 − 2^{-k}. Verifying the running time is a straightforward task. □

3.4.6 Speedy Constant-Query Checkers for Modular Exponentiation

While C_ME is a fast checker, one can ask whether there exist speedy constant-query checkers for modular exponentiation. In this section, it is shown that, under a hypothesis (first presented in [AK]), checking can indeed be done quickly with a constant number of queries for what one might call the RSA problem: on input positive integers x, y, z with x, y < z and (x, z) = 1, compute x^y mod z.

The hypothesis is now presented. Some definitions are required:

Definition 3 For all n ∈ Z_{>0}, for all f : Z_{≥0} → Z/nZ, for all γ ∈ R with 0 < γ < 1:

1. f is γ-homomorphic for n iff

#{x ∈ Z | 0 ≤ x < n and f(x) ≢ 0 mod n} / #{x ∈ Z | 0 ≤ x < n} ≥ γ,

#{x, y ∈ Z | 0 ≤ x, y < n and f(x + y) ≡ f(x)·f(y) mod n} / #{x, y ∈ Z | 0 ≤ x, y < n} ≥ γ, and

#{x ∈ Z | 0 ≤ x < n and f(x + 1) ≡ f(x)·f(1) mod n} / #{x ∈ Z | 0 ≤ x < n} ≥ γ;

2. f is γ-exponential for n iff

#{x ∈ Z | 0 ≤ x < n and f(x) ≡ f(1)^x mod n} / #{x ∈ Z | 0 ≤ x < n} ≥ γ.

Hypothesis 1 There exists γ ∈ R with 0 < γ < 1 such that for all n ∈ Z_{>0} and for all f : Z_{≥0} → Z/nZ, if f is γ-homomorphic for n, then f is (4/5)-exponential for n.

In support of the hypothesis, note the following lemma:

Lemma 11 For all n ∈ Z_{>0}, for all f : Z_{≥0} → Z/nZ, if f is 1-homomorphic for n, then f is 1-exponential for n.

The proof follows easily by induction. Further, Don Coppersmith has shown that a similar hypothesis, one that additionally involves the order φ(n) of the multiplicative group, holds. Using this, Blum, Luby and Rubinfeld have demonstrated speedy constant-query checkers for modular exponentiation when n is prime, or of known factorization [BLR]. However, when using RSA, one often does not know φ(n); thus, to check RSA programs, one would like to obtain speedy constant-query checkers that do not require that φ(n) be known.

Theorem 11 Hypothesis 1 implies that there exists an RSA checker C_RSA such that for all programs P that halt on all inputs, all instances x, y, z ∈ Z_{>0} with x, y < z and (x, z) = 1, and all k ∈ Z_{>0},

1. C_RSA^P(x, y, z; k) requires O(k) queries to P;
2. C_RSA^P(x, y, z; k) runs in time O(k·M(log z)), where M(n) is the time required to perform modular multiplications of n-bit numbers.

First, the checker is presented. Given a program P that halts on all inputs (and supposedly computes RSA), and given x, y, z ∈ Z_{>0} with x, y < z and (x, z) = 1, C_RSA^P(x, y, z; k) runs as follows:

C_RSA^P(x, y, z; k):

Let t_1 = ⌈−k·log_γ 2⌉ and t_2 = ⌈−k·log_{4/5} 2⌉, where γ is as in Hypothesis 1.

(Ensure that f(1) = x mod z, where f(s) = P(x, s, z).)
if P(x, 1, z) ≠ x mod z, output 'BUGGY' and halt.

(Establish γ-homomorphism for f(s) = P(x, s, z).)
repeat t_1 times:
    i := random(0, z);
    if P(x, i, z) ≡ 0 mod z, output 'BUGGY' and halt.
    i := random(0, z); j := random(0, z);
    if P(x, i, z)·P(x, j, z) ≠ P(x, i + j, z) mod z, output 'BUGGY' and halt.
    i := random(0, z);
    if P(x, i, z)·P(x, 1, z) ≠ P(x, i + 1, z) mod z, output 'BUGGY' and halt.

(Establish correctness on the given input.)
repeat t_2 times:
    r := random(0, z);
    if P(x, y, z)·P(x, r, z) ≠ P(x, y + r, z) mod z, output 'BUGGY' and halt.

Output 'CORRECT'.
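A compact Python rendering of this checker follows. The repetition counts t1 and t2 are taken as parameters, since their exact values depend on the constant γ of Hypothesis 1; the sketch shows the structure only.

    import random

    def check_rsa(P, x, y, z, t1, t2):
        """Sketch of C_RSA for f(s) = P(x, s, z): first spot-check that f looks like
        s -> x**s mod z (homomorphism tests), then test the given exponent y against
        random shifts.  t1 and t2 are repetition counts chosen by the caller."""
        if P(x, 1, z) % z != x % z:                          # f(1) must be x
            return "BUGGY"
        for _ in range(t1):                                  # homomorphism tests
            i = random.randint(0, z)
            if P(x, i, z) % z == 0:
                return "BUGGY"
            i, j = random.randint(0, z), random.randint(0, z)
            if (P(x, i, z) * P(x, j, z)) % z != P(x, i + j, z) % z:
                return "BUGGY"
            i = random.randint(0, z)
            if (P(x, i, z) * P(x, 1, z)) % z != P(x, i + 1, z) % z:
                return "BUGGY"
        for _ in range(t2):                                  # correctness at the given input
            r = random.randint(0, z)
            if (P(x, y, z) * P(x, r, z)) % z != P(x, y + r, z) % z:
                return "BUGGY"
        return "CORRECT"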
Proof of Theorem 11: Clearly, if P(x, y, z) = x^y mod z for all x, y, z ∈ Z_{>0}, then C_RSA outputs 'CORRECT'. It remains to verify that if P(x, y, z) ≠ x^y mod z, then C_RSA^P(x, y, z; k) outputs 'BUGGY' with probability > 1 − 1/2^k. If f (as defined above) is not γ-homomorphic for z, then with probability > 1 − 1/2^k (by the choice of t_1), C_RSA will find a bug. So, assume f is in fact γ-homomorphic. By Hypothesis 1, f is (4/5)-exponential. But then, for all w ∈ Z with 0 ≤ w < z,

#{r ∈ Z | 0 ≤ r < z and f(r) ≡ f(1)^r mod z and f(w + r) ≡ f(1)^{w+r} mod z} / #{r ∈ Z | 0 ≤ r < z} ≥ 3/5.

Thus, by the choice of t_2, the probability that C_RSA picks an r such that f(r) ≡ f(1)^r mod z and f(y + r) ≡ f(1)^{y+r} mod z is > 1 − 1/2^k. If such an r is picked, then f(r)·f(y) ≢ f(y + r) mod z, since f(1) ≡ x mod z (this was ensured at the outset) and (x, z) = 1. □

3.4.7 Open Questions

An important open question is whether there are speedy constant-query checkers for modular exponentiation when the factorization of the modulus is not part of the input. Such checkers might prove useful in the context of cryptography. One direction in which to pursue this question would be to prove that Hypothesis 1 holds.

A question of much greater general interest is to characterize the problems with speedy constant-query checkers. This might be of interest to software practitioners who wish to write "checkable" programs.

Chapter 4

The Checker Complexity of Modular Operations in Cryptography

4.1 Checkers for Cryptography

It can be argued that modern cryptography was born in the late 1970's, when Diffie and Hellman suggested using computational complexity as a criterion for the quality of cryptographic systems, and introduced the notion of public-key cryptography [DH]; and Rivest, Shamir and Adleman proposed the first public-key cryptosystem [RSA]. Since then, there have been many advances in this new style of cryptography, from secret-sharing through zero-knowledge proofs. Interestingly, right from the start, modular operations have provided the "backbone" of this cryptography: the operations that perform information hiding. For instance, in the RSA system, modular exponentiation transforms the plaintext to ciphertext; in the so-called "knapsack" systems (mostly broken now), modular multiplication is used to convert an easy knapsack problem into a hopefully difficult one wherein the secret is hidden; and in many implementations of cryptosystems based on zero-knowledge proof systems, bits are encoded as modular squares or non-squares.

As was seen earlier, program correctness is a vital issue. But if the average user is concerned about correctness, how much more so the cryptographer?
The nature of his profession, dealing as it does with information of critical importance, makes the correct manipulation of this information crucial. How often is he willing to pay a small price in running time for the sake of increased confidence? This is exactly what program checkers offer. And since modular arithmetic seems well entrenched in modern cryptography, it seems logical to begin the task of designing checkers for cryptography with checkers for the various modular operations in cryptography.

There is another reason for investigating checkers for cryptography. The security of many cryptosystems is predicated on certain computational problems (such as integer factoring or discrete logarithms) being "hard" to solve. However, some of these problems are very easy to check. Does this tell us anything about the hardness of solving these problems? Not as yet: the relation between the complexity of checking and the complexity of solving is not fully understood, but of late it has received increasing attention. For example, recent research has demonstrated upper bounds on the complexity of the "easiest" problems that are hard to check (see [Y] and [BF]). Now, if one obtained non-trivial upper bounds for the complexity of problems that are easy to check, or those that additionally have speedy constant-query checkers, these bounds may prove of great interest to cryptographers.

In this chapter, efficient checkers are presented for the cryptanalytic problems associated with the Diffie-Hellman key exchange protocol [DH] and the RSA cryptosystem [RSA], as well as for quadratic residuosity, which is used in many cryptosystems. Blum [B] demonstrated a checker for a variant of the last problem; however, this variant is not the one used in most cryptosystems. The checkers for these cryptanalytic problems appear here for the first time.

4.2 Quadratic Residuosity

Quadratic residuosity (QR) is the following problem: given two positive integers n and a, determine whether a is a square modulo n. When n is a prime power, this problem has a polynomial-time solution. However, it is believed that for arbitrary n, there is no (random) polynomial-time algorithm to determine QR [GM]. The security of several cryptosystems rests on this assumption [GM, BC, BCC].

Blum [B] gave an efficient checker for a variant of this problem: given n and a, if there exists x such that x^2 ≡ a mod n, output x; otherwise, say "No." In essence, this variant demands a witness when a is a square mod n, and is clearly no easier than QR. However, this does not imply that QR itself has an efficient checker. The difficulty lies in assessing the correctness of a "Yes" answer without recourse to a witness (namely, a square root).

In this section, a speedy constant-query checker is described for a special case of QR where the number of primes dividing the modulus is fixed. Let QRs be the problem: given n and a such that n = ∏_{i=1}^{s} p_i^{e_i} with p_i distinct primes and e_i positive integers, determine whether a is a square mod n. Note that the most common occurrence of this problem in cryptographic applications is the case when s = 2.

An important routine that checks whether an answer of "No" is correct is described first. This part is identical to Blum's checker.
check_nonsquare(n, a; k):
    do k times    (* A-query *)
        r := random(1, n − 1);
        if P(n, r^2 mod n) = "No," then output "BUGGY" and halt
    end do
    do k times    (* B-query *)
        r := random(1, n − 1);
        if P(n, a·r^2 mod n) = "Yes," then output "BUGGY" and halt
    end do

Essentially, if a is a square and P declares it a non-square, P cannot distinguish between the sets {r^2 mod n | 1 ≤ r ≤ n − 1} and {a·r^2 mod n | 1 ≤ r ≤ n − 1}, and with high probability will reveal its lie. On the other hand, a correct program would have no trouble passing this test.

Theorem 12 (QRs is easy to check) There is a checker C_QRs for QRs such that for all total programs P : Z_{>0} × Z_{>0} → Z_{>0} and all positive integers a, n and k where n has exactly s distinct prime divisors and a < n,

1. C_QRs^P(n, a; k) requires at most O(k) queries; and
2. C_QRs^P(n, a; k) runs in polynomial time.

Before embarking on the checker C_QRs, two remarks are in order. The first is that, in order to check P(n, a), the checker requires several random numbers relatively prime to n. However, it was assumed at the outset that the only source of randomness available is the function 'random(1, m)' (for positive m) that yields a uniformly distributed random number between 1 and m. One can obtain N random numbers relatively prime to n as follows: obtain a random number r between 1 and n − 1. If d = gcd(r, n) = 1, then keep r and repeat the process. If d is not 1, then use d to split n into the product of relatively prime numbers n_1 and n_2. Now, generate random numbers r_i between 1 and n_i (i = 1, 2). If each of these r_i is relatively prime to n_i, one can obtain an r modulo n by the Chinese Remainder Theorem, and r's so obtained will be uniformly distributed over (Z/nZ)*. If any of the r_i is not relatively prime to n_i, then n_i can be further split. Continuing thus, one obtains N uniformly distributed random numbers from (Z/nZ)*. The cost is (N + log n)·log n random bits, N gcd's and Chinese Remainderings, and log n splittings, which can be done in time linear in N and polynomial in log n. Further details are omitted.

The second is that, given a and n, a can be split (in polynomial time) into a_1 and a_2 such that

1. a = a_1·a_2;
2. gcd(a_1, a_2) = gcd(a_1, n) = 1;
3. every prime that divides a_2 divides n.

Then a is a square mod n iff a_2 is a square over the integers (this can be determined in polynomial time) and a_1 is a square mod n. That is to say, the question of whether a is a square mod n can be reduced in polynomial time to the question of whether some a' relatively prime to n is a square mod n. Thus, in the description of the checker, it will be assumed without loss of generality that a is prime to n.

The checker follows:

Input: P: a program (that supposedly solves QRs) that halts on all inputs; n, a: positive integers such that n has exactly s distinct prime divisors, a < n and gcd(a, n) = 1; k: confidence parameter (given in unary).
Output: 'CORRECT' if P(N, A) = QRs(N, A) for all inputs (N, A) such that N has exactly s distinct prime divisors, A < N and gcd(A, N) = 1; 'BUGGY' (with probability > 1 − 1/2^k) if P(n, a) ≠ QRs(n, a).

begin
    if P(n, a) = "Yes" then
        (* Loop 1: check that P is consistent. *)
        do ⌈k·log_{3/4}(1/2)⌉ times
            r := random number relatively prime to n;
            if P(n, a·r^2 mod n) = "No," output "BUGGY" and halt
        end do;
        (* Loop 2: check that P declares squares as squares. *)
        do ⌈k·log_{3/4}(1/2)⌉ times
            r := random number relatively prime to n;
            if P(n, r^2 mod n) = "No," output "BUGGY" and halt
        end do;
        (* Loop 3: count what fraction P declares as squares. *)
        count := 0;
        do γ·k times    (* γ chosen as in the proof of Lemma 13 *)
            r := random number relatively prime to n;
            if P(n, r) = "Yes" then increment count
        end do;
        if count/(γk) > (5/4)·(1/2^s) then output "BUGGY"
        else output "CORRECT"
    end if (P(n, a) = "Yes")
    if P(n, a) = "No" then
        check_nonsquare(n, a; k);
        output "CORRECT"
    end if (P(n, a) = "No")
end
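As a concrete rendering of the subroutine, here is a short Python sketch of check_nonsquare; it is illustrative only, and assumes P is passed in as a callable that returns the strings "Yes" or "No".

    import random

    def check_nonsquare(P, n, a, k):
        """P has answered "No" on (n, a).  If a actually is a square mod n, then
        r*r mod n and a*r*r mod n are identically distributed, so P must either
        reject a random square (A-query) or accept a disguised copy of a
        (B-query) often enough to be caught (cf. Lemma 12)."""
        for _ in range(k):                         # A-queries: random squares
            r = random.randint(1, n - 1)
            if P(n, (r * r) % n) == "No":
                return "BUGGY"
        for _ in range(k):                         # B-queries: a times random squares
            r = random.randint(1, n - 1)
            if P(n, (a * r * r) % n) == "Yes":
                return "BUGGY"
        return None                                # no bug found; the caller decides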
The idea behind the checker can be seen by considering the case when a is a non-square mod n, but P says "Yes". Blum's checker relied on the given "witness" (the purported square root of a mod n) to discover the error. Here, one makes use of the statistical principle that large deviations from the expectation are unlikely, seen earlier in the checker for modular inversion.

Let n = ∏_{i=1}^{s} p_i^{e_i} (p_i distinct primes). An element a ∈ (Z/nZ)* is a square mod n iff for each i, a is a square mod p_i^{e_i} (by the Chinese Remainder Theorem), iff a is a square mod p_i (by an easy lemma from number theory). Thus, by associating a with the s-vector <(a/p_1), ..., (a/p_s)> (where (a/p) is the Legendre symbol), one partitions (Z/nZ)* into 2^s classes; the squares mod n are then associated with the vector <+1, ..., +1>. The class to which an element c ∈ (Z/nZ)* belongs is {c·x^2 mod n | x ∈ (Z/nZ)*}. Each class has the same size; thus, the probability of a random number in (Z/nZ)* being in any given class (e.g., the squares) is 1/2^s.

Suppose P is such that it declares many of the members of a's class as non-squares. Note that this contradicts its statement that a is a square. This contradiction can be elicited with high probability by sampling enough members of a's class and seeing P's responses. If any "No" appears, one can declare P to be BUGGY. This is the intent of loop 1. Similarly, if P declares many members of the class of squares as non-squares, one can again find P to be BUGGY with high probability by sampling. This is done in loop 2.

So, assume P says "Yes" to most (i.e., at least 3/4) of the members of a's class, as well as of the class of squares (these are, of course, distinct classes). Now, P declares too many elements of (Z/nZ)* as squares: at least 50% more than it should. Loop 3 samples (Z/nZ)* and sees what fraction P declares as squares. If this fraction is at least 25% bigger than it should be, P is declared to be BUGGY.

Why does this work? Well, if P declares 50% more elements to be squares mod n than it should, then a random sample should reflect this. It is possible that the sample does not; however, if a large enough sample is taken, the probability that the fraction P declares to be squares is less than 25% greater than it should be is very small. Thus, P will be caught with high probability.

Note that the statistical principle is also used to prove that correct programs will not be declared BUGGY very often. It is possible that, in checking a correct program, one picks a large number of squares (i.e., 25% more than the expected number); the program would then be declared BUGGY. However, the likelihood of picking such a large fraction of squares is very small when the sample is big enough. The following lemmas capture this argument more formally.

Lemma 12 (check_nonsquare works) For all P, a, n and k as in Theorem 12, if P(n, a) = "No", then

1. if a is a square mod n, then check_nonsquare(n, a; k) outputs BUGGY with probability at least 1 − 2^{-k};
2. if P is a correct program, then check_nonsquare(n, a; k) does nothing.

Proof:

1. Assume a is a square mod n. Let the fraction of squares mod n that P declares as squares be γ. If γ ≤ 1/2, then the probability that P answers "Yes" to all A-queries is at most γ^k ≤ 2^{-k}. If γ > 1/2, the probability that P answers "No" to all B-queries is at most (1 − γ)^k < 2^{-k}. Thus, the probability that check_nonsquare says "BUGGY" is at least 1 − 2^{-k}.

2. Obvious. □

Lemma 13 (C_QRs works) For all P, a, n and k as in Theorem 12,

1. if P(n, a) ≠ QRs(n, a), then C_QRs^P(n, a; k) = BUGGY with probability at least 1 − 2^{-k};
2. if P(N, A) = QRs(N, A) for all A, N ∈ Z_{>0} with N having exactly s distinct prime divisors, then C_QRs^P(n, a; k) = CORRECT with probability at least 1 − 2^{-k}.

Proof: Let β_1 = β(3/2^{s+1}, 5/6) and β_2 = β(1/2^s, 5/4), where β(·, ·) is as in Proposition 4 of Chapter 3. Let β = max(β_1, β_2). Let γ = ⌈−log_β 2⌉. Consider the following four cases:

1. a is a non-square mod n, but P(n, a) = "Yes." There are three subcases:

(a) P declares fewer than 3/4 of the elements in a's class as squares. Then with probability at least 1 − 2^{-k}, loop 1 will discover this.

(b) P declares at least 3/4 of the members of a's class as squares, but declares fewer than 3/4 of the squares as squares. Then with probability at least 1 − 2^{-k}, loop 2 will discover this.

(c) P declares at least 3/4 of the members of a's class as squares, and declares at least 3/4 of the squares as squares. Consider loop 3. One expects a fraction of at least (3/4)·(2/2^s) of the random numbers chosen in this loop (3/4 of the squares plus 3/4 of the members of a's class) to be declared as squares. The probability that count/(γk) ≤ (5/4)·(1/2^s) is at most β_1^{γk} ≤ β^{γk} ≤ 2^{-k}, by the choice of β_1 and Proposition 4. So, loop 3 will declare P BUGGY with probability at least 1 − 2^{-k}.

Thus, in any case, C_QRs will output BUGGY with probability at least 1 − 2^{-k}.

2. a is a non-square mod n, and P(n, a) = "No". If P is a correct program for QRs, then by Lemma 12, check_nonsquare will do nothing, and C_QRs will output CORRECT. If P is buggy, it does not matter what C_QRs outputs.

3. a is a square mod n, and P(n, a) = "Yes." Again, if P is buggy, it does not matter what C_QRs outputs. If P is a correct program for QRs, then it will pass loops 1 and 2. The only way C_QRs can output BUGGY is if count/(γk) > (5/4)·(1/2^s). This will occur with probability at most β_2^{γk} ≤ β^{γk} ≤ 2^{-k}, by the choice of β_2 and Proposition 4. Thus, with probability at least 1 − 2^{-k}, C_QRs will output CORRECT.
Now suppose you were given p. < y , a. ft, and were told that d was the solution. It is not obvious how to verify this statement without actually computing the right answer. On the other hand, if you had a program P that purported to solve D //, and P(p,g,a,b) — d, you could quickly say either that, d is correct, or P is buggy, i.e.. T h e o re m 13 (D ll is easy to check) There is a checker Cdh .for D ll such that for all total programs P : Z>0 — » Z>o and all positive integers p. g. a. ft and k with p prime, g a generator mod p, and a, ft < p, 1. Cpfj(p,g,a,b', k) requires at most O(k) queries; and. 69 2. C£fl(p,g,a,b;k') runs in polynomial time. Here is an efficient checker for DH. It is based on the “tester-checker” paradigm mentioned above. C^H {p,g,a,b:k): In p u t: P: a program (that supposedly solves DH) that, halts on all inputs; p. y. a. b: positive integers; k: coiifidence parameter (given in unary). O u tp u t: ‘CO RRECT’ if P(q, h, c, d) = DH(q, h, c, d) for all inputs (q, h, c, d) such that q is prime, h is a generator mod q, and 1 < c, d < q\ ‘BUCGY’ (with probability > 1 — \/2 k) if P(p,g,a,b) ^ DIP(p, g,a,b). beg in Test: do k times r := ra n d o m (l,p — 1); s random( 1,p — 1); let a' = gr mod p: let // = gs mod /> ; if P(p,g,ar, b') ^ grs mod p th e n output “BUGGY” and halt en d do Check: let d = P(p,g,«, b) do k times r := random (l,p — 1); s := random (l,p — 1); let a' = a ■ gr mod p; let // = b ■ gs mod p; if P(p,g,a', b') ^ d' • as • br ■ grs mod p th e n output “BUGGY” and halt en d do output “CO RRECT” en d 70 The proof of Theorem 13 follows from the proof below that ('i)H is in fact a. checker lor DIP since verifying the running time is straightforward. Clearly, if P is a correct program for D IP then the checker would output “COR RECT.” It remains to verify that if < 1 ^ c — D H (/>, g ,a , /> ), then with probability at least 1 - 2~k the checker says “BUGGY.” Fix p and g. If # { a , 6 g (l/pl)*\P(rKg,aJ>) = D H (p, g,aj>)} 1 # {« ,& € (Z/pZ)*} “ 2 then each iteration of the Test phase has probability 1/2 of finding a bug. Thus, with probability at, least, 1 — 2_i', the Test, phase will output “BUGGY”. If, on the other hand, (4.1) does not hold, then with probability better than 1/ 2, the random pair // obtained in any iterat ion of the ( 'heck phase will be such that c’ = P(p,g,aPU) = DU (/>, < y , o’. IP ). But, then, < : ’ = DIJ (p. g. gr+ r, gv f 4) = < y (x+7 ')(3 /+s) = gj-y+.r.s+yr+r.s ^ . a$ . y . ( y s _£ g . (fs . yr _ ^ r.s mocj ^ Thus, with probability at least 1 — 2_A :, the Check phase will say “BUGGY.” □ 4 .4 C heck ing an R S A C ryp tan alyzer It, is assumed that the reader is familiar with the RSA cryptosystem [RSA]. There are two associat ed cryptanalysis problems. The first is: given a public RSA key (n, e) and a ciphertext, c € Z/nZ, find the message rn such that m f = c mod n. Checking this problem is easy, requiring just, one modular exponentiation. The other is, given a public key (u, c), find a private key pud) such t hat (mf)d = m mod n for all m £ I f n i . This problem is random polynomial-time equivalent, to factoring, and since factoring is easy to check, a. variant of BeigeFs Theorem [B] would produce a polynomial-time checker. However, the emphasis here is on speedy constant,-query checkers, and one is demonstrated below. 71 For n = Hip!', let A(») = l c . m { )}, where < p is the Euler phi function. Then the private key d must satisfy d • e = 1 mod A(n). Let R S A C be the problem of computing such an d . 
4.4 Checking an RSA Cryptanalyzer

It is assumed that the reader is familiar with the RSA cryptosystem [RSA]. There are two associated cryptanalysis problems. The first is: given a public RSA key (n, e) and a ciphertext c ∈ Z/nZ, find the message m such that m^e ≡ c mod n. Checking this problem is easy, requiring just one modular exponentiation. The other is: given a public key (n, e), find a private key (n, d) such that (m^e)^d ≡ m mod n for all m ∈ Z/nZ. This problem is random polynomial-time equivalent to factoring, and since factoring is easy to check, a variant of Beigel's Theorem [B] would produce a polynomial-time checker. However, the emphasis here is on speedy constant-query checkers, and one is demonstrated below.

For n = ∏ p_i^{e_i}, let λ(n) = lcm{φ(p_i^{e_i})}, where φ is the Euler phi function. Then the private key d must satisfy d·e ≡ 1 mod λ(n). Let RSAC be the problem of computing such a d < n, given n and e (the answer may not be unique).

C_RSAC^P(n, e; k):

Input: P: a program (that supposedly solves RSAC) that halts on all inputs; n, e: positive integers; k: confidence parameter (given in unary).
Output: 'CORRECT' if E·P(N, E) ≡ 1 mod λ(N) and P(N, E) < N for all inputs (N, E) with gcd(E, λ(N)) = 1; 'BUGGY' (with probability > 1 − 1/2^k) if e·P(n, e) ≢ 1 mod λ(n) or P(n, e) > n.

begin
    let d = P(n, e);
    if d > n then output "BUGGY" and halt;
    do ⌈k·log_{3/4}(1/2)⌉ times
        a := random(1, n − 1);
        if (a^d)^e ≢ a mod n then output "BUGGY" and halt
    end do
    output "CORRECT"
end

Lemma 14 (C_RSAC works) For all total programs P : Z_{>0} × Z_{>0} → Z_{>0} and all positive integers n, e and k,

1. If P(N, E) ≡ RSAC(N, E) mod λ(N) and P(N, E) < N for all N, E ∈ Z_{>0}, then C_RSAC^P(n, e; k) = "CORRECT".
2. If P(n, e) ≢ RSAC(n, e) mod λ(n) or P(n, e) > n, then C_RSAC^P(n, e; k) = "BUGGY" with probability at least 1 − 2^{-k}.

Proof: 1) is clear.

2) The case P(n, e) > n is obvious. So suppose d = P(n, e) ≢ RSAC(n, e) mod λ(n). Let n = ∏ p_i^{e_i}. Now, d·e ≢ 1 mod λ(n), so there is some prime power q^t ∥ λ(n) (t > 0) such that d·e ≢ 1 mod q^t. Since λ(n) = lcm{φ(p_i^{e_i})}, there is an i such that q^t ∥ φ(p_i^{e_i}). For every a ∈ (Z/p_i^{e_i}Z)* such that a is not a q-th power and every x ∈ Z_{>0}, a^x ≡ 1 mod p_i^{e_i} implies q^t | x. Conversely, for such an a, q^t ∤ x implies a^x ≢ 1 mod p_i^{e_i}, or a^{x+1} ≢ a mod p_i^{e_i}. Now consider x = d·e − 1. q^t ∤ x, since d·e ≢ 1 mod q^t. Thus, a^{x+1} ≢ a mod p_i^{e_i}, whence a^{de} ≢ a mod n. The probability of picking a ≢ 0 mod p_i is 1 − 1/p_i ≥ 1/2; the probability that such an a is not a q-th power mod p_i^{e_i} is 1 − 1/q ≥ 1/2; thus the probability that a random a does not yield "BUGGY" is at most 3/4. Thus the probability of not detecting a bug is at most 2^{-k}. □

Theorem 14 (RSAC is easy to check) There exists a checker C_RSAC for RSAC such that for all total programs P : Z_{>0} × Z_{>0} → Z_{>0} and all positive integers n, e and k,

1. C_RSAC^P(n, e; k) requires exactly 1 query; and
2. C_RSAC^P(n, e; k) runs in polynomial time.

Proof: Consider C_RSAC given above. That it is a checker for RSAC is shown by the previous lemma. The query count and running time are obvious by inspection. □

4.5 Open Problems

There are several interesting questions regarding the checking complexity of cryptographic problems. One is whether there is an efficient checker for QR, i.e., when no witness is given and the number of primes dividing the modulus is unknown. Another question is whether there is an efficient checker for the problem of recognizing generators mod primes (GR): given a prime p and 0 < g < p, determine whether g generates (Z/pZ)*.

A very interesting question would be to find non-trivial upper bounds for the complexity of problems with polynomial-time checkers, or to otherwise relate checking complexity and solving complexity. Such a result may offer new insights into the security of current cryptographic systems.

Reference List

[Ad] Leonard Adleman, A subexponential algorithm for the discrete log problem, Proc. 20th Annual Symposium on the Foundations of Computer Science, IEEE (1979), pp. 55-60.

[AHK] Leonard Adleman, Ming-Deh Huang and Kireeti Kompella, Efficient Checkers for Number-Theoretic Computations, submitted to Information and Computation, June 1990.

[AK] Leonard Adleman and Kireeti Kompella, Self-Checking RSA Programs, abstract submitted to Advances in Cryptology (CRYPTO 89).

[AKS] M. Ajtai, J. Komlos and E.
Szemeredi, Sorting in c log n parallel steps, ('ombinatorica 3 (1983), pp. 1-19. [Ap] Apostol, I 1 ., “Introduction to Analytic Number Theory’ " ’, Springer Verlag, New York. [BCH] P. Beanie, 11. Hoover and S. Cook, Log depth circuits for division and related problems, SIAM J. Computing 13 (1986), pp. 994-1003. [BE] Richard Beigel and Joan Feigenbaum, Improved Bounds on Coherency and Checking, manuscript, April 1990. [B] Manuel Blum, Designing Programs to Check Their Work, Technical Re port, ISRI. UC Berkeley, Nov. 1988. [BLR] Blum, M., Luby, M., and Rubinfeld, IT, Self-Correcting Programs, Proc. of th( 22nd Annual ACM Symposium on Theory of Computing. 75 A. Borodin, Private communication with M. Tompa, August 1985. A. Borodin. .J. von zur Cathen and J. Hopcroft, Fast, parallel matrix and CCD computation, Information and Control 52, 3 (1982), pp. 241-256. (lilies Brassard, David Chaum and Claude Crepoa.il, Minimum. Disclosure Proofs of Knowledge, 1987. (lilies Brassard and Claude Crepeau, Zero-Knowledge Simulation of’ Boolean Circuits, Advances in Cryptology: Proc. of CRYPTO 86, Santa Barbara, California, 1986. R. Brent and II. Kung, A systolic algorithm for integer CCD computation, (•MU Tech. Rep. CMU-CS-8 1-135. E. Canfield, P. Erdos and C. Pomerarice, On a problem of Oppenheim concerning “Factorisatio Numerorum,” J. Number Theory 17 (1983), pp. 1-28. B. Chor and O. Goldreich, An improved algorithm for integer CCD, MIT Laboratory for Computer Science (1985). S. Cook, A Taxonomy of Problems with Fast Parallel Algorithms, Infor mation and Control 64 (1985), pp. 2-22. L. Csanky, Fast parallel matrix inversion algorithm, SIAM J. Comp., 5, 4 (1976), pp. 618-623. W. Diffie and M. Heilman, New Directions in Cryptography, IEEE Trans actions on Information Theory, vol. IT-22, 1976, pp. 644-654. 1). Dobkin, R. Lipton and S. Reiss, Linear programming is logs pace hard for P, Information Processing L e t t e r s 8 (1979), pp. 96-97. C. Dwork, P. Kanellakis and .1. Mitchell, On the sequential nature of unification, ./. Le>gic Programming (198 \), pp. 35-50. W. Eberly, Very fast matrix and polynomial arithmetic, Proc. 25th An nual Symposium on Foundations of Computer Science, IEEE (1984), pp. 21-30. F. Fich and M. Tompa, The parallel complexity of exponentiating poly nomials over finite fields, Technical Report 85-03-01, University of Wash ington, Seattle, 1985. J. von zur Gathen, Parallel powering, Proc. 25th Annual Symposium on Foundations of Computer Science, IEEE (1984), pp. 31-36. M. Goldberg and T. Spencer, A new parallel algorithm for the maximal .independent set problem, Proc. 28th Annual Symposium, on Foundations of Computer Science, IEEE {1987), pp. 161-165. L. M. Goldschlager, A universal interconnection pattern for parallel com puters, ./. ACM (Oct. 1982) vol. 29, no. 4, pp. 1073-1086. ]j. Goldschlager, R. Shaw and ,J. Staples, The maximum flow problem is logs pace complete for P, Theoretical Computer Science 21 (1982), pp. 105-111. Sha.fi Goldwasser and Silvio Micali, Probabilistic Encryption, JCSS, vol. 28, no. 2, 1981, pp. 270-299. R. Kannat), G. Miller and L. Rudolph, Sublinear parallel algorithm for computing the greatest common divisor of two integers, SIA M J. Comp. 16, 1, (1987), pp. 7-16. Sampath Kan nan, Ph.D. Dissertation, University of California, Berkeley, 1990. R. Karp and A. Wigderson, A fast parallel algorithm for the maximal independent set problem, Proc. Kith Annual ACM Symposium on Theory of Computing (1984), pp. 266-272. 
Kompella, K., and Adleman, L., Checkers for Cryptographic Problems, to appear in Advances in Cryptology (CRYPTO 90)- R. Ladner, The circuit value problem is log,space complete for P, SIC A C T News 7, 1 (1975), pp. 18-20. E. Luks, Permutation groups and graph isomorphism, Proc. 27th Annual Sympeesium on the Foundations of Compute r Science, IEEE] (1986), pp. 292-502. CL Miller, Riemann hypothesis and tests for primality, J. Computer and System Science f f 13 (1976), pp. 300-317. V. Pan and J. Reif, Efficient parallel solutions of linear systems, Proc. !7th Annual ACM Symposium on Theory of Computing, (1985) pp. 143- 152. V. Pan and J. Reif, Some polynomial and Toeplitz matrix computa tions, Proc. 28th Annual Symposium on foundations of Computer Sci ence, IEEE (1987) pp. 173-184. ('. Pomerance, Fast, rigorous factorization and discrete logarithm algo rithms, in Discrete Algorithms and Complexity, Academic Press, Florida, 1987. J. Reif, Logarithmic depth circuits for algebraic functions, Proc. 2fth Annual Symposium on foundations of Computer Science, IEEE (1983), pp. 138-115. Revised version, 1984. Ron Rivest, Adi Shamir, and Leonard Adleman, A Method for Obtaining Digital Signatures and Public-Key' Cryptosystems, Communications of the ACM. vol. 21. no. 2, February 1978, pp. 120-126. Rubitifeld, R., private communication. W. Ruzzo, On uniform circuit complexity, .]. Computer and System Sci ences 22, 3 (1981), pp. 365-383. .1. Savage, The Complexity of Computing, Wiley, New York (1976). Jon Sorenson, Polvlog depth circuits for integer factoring and discrete logarithms, Technical Report, Dept, of Clomputer Science, University of Wisconsin, Madison, 1989. Edward 0 . Thorp, “Elementary Probability,” John Wiley and Sons, Inc., 1966. D. H. Wei deman 1 1 , Solving sparse linear equations over finite fields, IEEE Trans. Inform. Thtory, IT-3‘ 2, 1986, pp. 54-62. Andrew Yao, Coherent Functions and Program Checkers, Proc. of the 22nd Annual ACM Symposium on Theory of Computing, 1990, pp. 84-