(c) Thus, a matching converse holds for any $d \in \mathbb{N}^+$, which proves inequality (6.3).

Now we proceed to prove the rest of Lemma 6.1; explicitly, we aim to prove that the recovery threshold of any $T$-private encoding scheme is at least $R_{\mathrm{LCC}}(N,K,f) + T \cdot \deg f$. Inequality (6.3) essentially covers the case $T = 0$; hence, we focus on $T > 0$. To simplify the proof, we prove a stronger version of this statement: when $T > 0$, any valid $T$-private encoding scheme uses at least $R_{\mathrm{LCC}}(N,K,f) + T \cdot \deg f$ workers, i.e., $N \geq R_{\mathrm{LCC}}(N,K,f) + T \cdot \deg f$. Equivalently, we aim to show that $N \geq (K+T-1)\deg f + 1$ for any such scheme. We prove this fact using an inductive approach. To enable an inductive structure, we prove an even stronger converse by considering a more general class of computing tasks and a larger class of encoding schemes, formally stated in the following lemma.

Lemma C.1. Consider a dataset with inputs $X \triangleq (X_1, \ldots, X_K) \in (\mathbb{F}^d)^K$ and an input vector $\beta \triangleq (\beta_1, \ldots, \beta_K)$ which belongs to a given subspace of $\mathbb{F}^K$ with dimension $r > 0$; a set of $N$ workers, each of which can take a coded variable in $\mathbb{F}^{d+1}$ and return the product of its elements; and a computing task where the master aims to recover $Y_i \triangleq X_{i,1} \cdots X_{i,d} \cdot \beta_i$. If the input entries are encoded separately, such that each of the first $d$ entries assigned to each worker is some $T_X$-privately (with $T_X > 0$) linearly coded version of the corresponding entries of the $X_i$'s, and the $(d+1)$th entry assigned to each worker is a $T$-privately⁴ linearly coded version of $\beta$, and if moreover each $\beta_i$ (as a variable) is non-zero, then any valid computing scheme requires $N \geq (T_X + K - 1)d + T + r$.

Proof. Lemma C.1 is proved by induction with respect to the tuple $(d, T, r)$. Specifically, we prove that

(a) Lemma C.1 holds when $(d, T, r) = (0, 0, 1)$;
(b) if Lemma C.1 holds for any $(d, T, r) = (d_0, 0, r_0)$, then it holds when $(d, T, r) = (d_0, 0, r_0 + 1)$;
(c) if Lemma C.1 holds for any $(d, T, r) = (d_0, 0, r_0)$, then it holds when $(d, T, r) = (d_0, T, r_0)$ for any $T$;
(d) if Lemma C.1 holds for $d = d_0$ and arbitrary values of $T$ and $r$, then it holds when $(d, T, r) = (d_0 + 1, 0, 1)$.

Assuming the correctness of these statements, Lemma C.1 follows directly by the principle of induction. We now prove these statements as follows.

(a). When $(d, T, r) = (0, 0, 1)$, we need to show that at least one worker is needed. This follows directly from the decodability requirement: the master aims to recover a variable, and at least one returned variable is needed to provide that information.

(b). Assuming that for any $(d, T, r) = (d_0, 0, r_0)$ and any $K$ and $T_X$, any valid computing scheme requires $N \geq (T_X + K - 1)d_0 + r_0$ workers, we need to prove that for $(d, T, r) = (d_0, 0, r_0 + 1)$, at least $(T_X + K - 1)d_0 + r_0 + 1$ workers are needed. We prove this fact by fixing an arbitrary valid computing scheme for $(d, T, r) = (d_0, 0, r_0 + 1)$. For brevity, let $\tilde{\beta}_i$ denote the coded version of

⁴ For this lemma, we assume that no padded random variable is used for a 0-private encoding scheme.
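[Note on the equivalence invoked above. The two forms of the lower bound coincide under the expression for the Lagrange Coded Computing recovery threshold used in this regime; the following is a minimal check, assuming (as in the achievability part of this chapter) that $R_{\mathrm{LCC}}(N,K,f) = (K-1)\deg f + 1$:

\[
R_{\mathrm{LCC}}(N,K,f) + T \cdot \deg f
  = \bigl((K-1)\deg f + 1\bigr) + T \cdot \deg f
  = (K + T - 1)\deg f + 1,
\]

so requiring $N \geq R_{\mathrm{LCC}}(N,K,f) + T \cdot \deg f$ is the same as requiring $N \geq (K + T - 1)\deg f + 1$.]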