A Fault Tolerance Protocol for Uploads: Design and Evaluation ⋆

L. Cheung (1), C.-F. Chou (2), L. Golubchik (3), and Y. Yang (1)

(1) Computer Science Department, University of Southern California, Los Angeles, CA. {lccheung, yangyan}@usc.edu
(2) Department of Computer Science and Information Engineering, National Taiwan University. ccf@csie.ntu.edu.tw
(3) Computer Science Department, EE-Systems Department, IMSC, and ISI, University of Southern California, Los Angeles, CA. leana@cs.usc.edu

Abstract. This paper investigates fault tolerance issues in Bistro, a wide-area upload architecture. In Bistro, clients first upload their data to intermediaries, known as bistros. A destination server then pulls data from bistros as needed. However, during the server pull process, bistros can be unavailable due to failures, or they can be malicious, i.e., they might intentionally corrupt data. This degrades system performance since the destination server may need to ask for retransmissions. As a result, a fault tolerance protocol is needed within the Bistro architecture. Thus, in this paper, we develop such a protocol which employs erasure codes in order to improve the reliability of the data uploading process. We develop analytical models to study reliability and performance characteristics of this protocol, and we derive a cost function to study the tradeoff between reliability and performance in this context. We also present numerical results to illustrate this tradeoff.

1 Introduction

High demand for some services or data creates hot spots, which is a major hurdle to achieving scalability in Internet-based applications. In many cases, hot spots are associated with real life events. There are also real life deadlines associated with some events, such as submissions of papers to conferences. The demand of applications with deadlines is potentially higher when the deadlines are approaching.

⋆ This work is supported in part by the NSF Digital Government Grant 0091474. It has also been funded in part by the Integrated Media Systems Center, a National Science Foundation Engineering Research Center, Cooperative Agreement No. EEC-9529152. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation. More information about the Bistro project can be found at http://bourbon.usc.edu/iml/bistro.

To the best of our knowledge, however, there are no research attempts to relieve hot spots in many-to-one applications, or upload applications, except for Bistro [1]. Bistro is a wide-area upload architecture built at the application layer, and previous work [2] has shown that it is scalable and secure.

In Bistro, an upload process is broken down into three steps (see Sect. 3 for details) [1]. First, in the timestamp step, clients send hashes of their files, h(T), to the server, and obtain timestamps, σ. These timestamps clock clients' submission time. In the data transfer step, clients send their data, T, to intermediaries called bistros. In the last step, called the data collection step, the server coordinates bistros to transfer clients' data to itself. The server then matches the hashes of the received files against the hashes it received directly from the clients. The server accepts files that pass this test, and asks the clients to resubmit otherwise. This completes the upload procedure in the original Bistro architecture [1,2].
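To make the original protocol's whole-file hash check concrete, the following minimal Python sketch (ours, not code from the Bistro papers; the function names and message contents are hypothetical) shows the destination server matching the hash of a file pulled from a bistro against the hash h(T) it received directly from the client, and flagging the file for resubmission on any mismatch.

    import hashlib

    def timestamp_step_hash(file_bytes):
        # Client computes h(T) over the whole file and sends it to the
        # destination server together with its timestamp request.
        return hashlib.sha1(file_bytes).hexdigest()

    def data_collection_check(pulled_bytes, h_T):
        # Destination server recomputes the hash over the file pulled from a
        # bistro and compares it with h(T) obtained in the timestamp step.
        return hashlib.sha1(pulled_bytes).hexdigest() == h_T

    original = b"client file contents"
    h_T = timestamp_step_hash(original)
    tampered = b"client file CONTENTS"  # any lost or corrupted packet has the same effect
    print(data_collection_check(original, h_T))   # True: file accepted
    print(data_collection_check(tampered, h_T))   # False: client must resubmit the whole file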
We are interested in developing and analyzing a fault tolerance protocol in this paper, in the context of the Bistro architecture. The original Bistro does not make any additional provisions for cases when bistros are not available during the data collection step. In addition, malicious bistros can intentionally corrupt data. Although a destination server can detect corrupted data from the hash check, it has no way of recovering the data. Hence, unavailable bistros and malicious behavior can result in the destination server having to request client resubmissions. In this work, we are interested in using forward error correction techniques to recover corrupted or lost data in order to improve the overall system performance. The fault tolerance protocol, on the other hand, brings in additional storage and network transfer costs due to redundant data. The goal of this paper is to (a) provide better performance when intermediaries fail while reducing the amount of redundant data needed to accomplish this, and (b) evaluate the resulting tradeoff between performance and reliability.

We propose analytical models to evaluate our fault tolerance protocol. In particular, we develop reliability models to analyze the reliability characteristics of bistros. We also derive performance models to estimate the performance penalty of employing our protocol. Moreover, we study the tradeoff between reliability and performance.

The remainder of this paper is organized as follows. Section 2 describes related work. Section 3 describes our fault tolerance protocol. We derive analytical models for this protocol in Sect. 4. Section 5 presents numerical results showing the tradeoff between performance and reliability characteristics of our protocol. Finally, we conclude in Sect. 6.

2 Related Work

This section briefly describes fault tolerance considerations in other large-scale data transfer applications, and discusses other uses of erasure codes in the context of computer networking.

One approach to achieving fault tolerance is through service replication. Replication of DNS servers is one such example. The root directory servers are replicated, so if any root server fails, DNS service is still available. Each ISP is likely to host a number of DNS servers, and most clients are configured with primary and alternate DNS servers. Therefore, even if some DNS servers fail, clients can contact an alternate DNS server to make DNS lookup requests. In Bistro, the service of intermediaries is replicated, where intermediaries provide interim storage of data until the destination server retrieves it.

In storage systems, data redundancy techniques, such as RAID techniques [3], are commonly used for providing better fault tolerance characteristics. In case of disk failures, file servers are able to reconstruct data on the failed disk once the failed disk is replaced, and data is available even before replacing the failed disks. Although data redundancy can provide better fault tolerance characteristics, the storage overhead can be high. We are interested in providing fault tolerance with small storage overhead in this work.

Erasure codes are useful in bulk data distribution, e.g., in [4] clients can reconstruct the data as long as a small fraction of erasure-encoded files are received. This scheme allows clients to choose from a large set of servers, resulting in good fault tolerance and better performance characteristics than traditional approaches. In wireless networking, using forward error correction techniques can reduce packet loss rates by recovering parts of lost packets [5,6].
Packet loss rates in wireless networks are much higher because propagation errors occur more frequently when the data is transmitted through air. Employing forward error correction techniques can improve reliability and reduce retransmissions.

These applications of erasure codes assume that packets are either received successfully or are lost. They assume that there are other ways to detect corrupted packets, e.g., using TCP checksums. In Bistro, however, this assumption is not valid because packets can be intentionally corrupted by intermediate bistros. In Sect. 3, we describe one way to detect corrupted packets using checksums so that we can treat corrupted packets as losses.

3 Fault Tolerance Protocol

This section provides details of our fault tolerance protocol. The protocol is broken down into three parts as in the original Bistro protocol described in [2]. We provide details of each step in this section with focus on the fault tolerance aspects proposed in this paper. We also discuss related design decisions.

3.1 Timestamp Step

The timestamp step verifies clients' submissions. Clients first pass their files, T_o, to erasure code encoders to get the encoded files T = T_1 + T_2 + ... + T_X. Then, clients generate hashes of each part of their data, concatenate the hashes and send the result, H, to the destination server. The destination server replies to clients with tickets, ξ, which consist of timestamps, σ, and the hash of the message clients have just sent, h(H). Tickets are digitally signed by the destination server, so clients can authenticate the destination server. Fig. 1 depicts the timestamp step.

[Fig. 1. Timestamp Step: the client sends H = h(T_1) + h(T_2) + ... + h(T_X) to the destination server, which replies with the ticket ξ = K_priv(h(H), σ).]

In the original protocol, clients send a checksum (or a hash) of the whole file to the destination server in the timestamp step. If any packets are lost or corrupted, the checksum check would fail, and the destination server would have to discard all packets that correspond to that checksum because it does not know which packets are corrupted. This would mean that losing any packet would result in retransmissions of entire files, in the original protocol.

To solve this problem, we send multiple checksums in the fault tolerance protocol, h(T_1) + h(T_2) + ... + h(T_X). Assume that each client has W data packets to send. The data packets are divided into Y FEC (forward error correction) groups of k packets each. For each FEC group, a client encodes the k data packets into n packets (data + parity), arranges the n packets into Z checksum groups each of size g, and generates one checksum for each checksum group using a message digest algorithm such as SHA1. We assume that Z is a factor of n, because we want the size of all checksum groups to be the same (g = n/Z), which simplifies our reliability evaluation in Sect. 4. There are altogether X = YZ checksums, which are concatenated and sent in one message to the destination server. Figure 2 illustrates the relationship between FEC groups and checksum groups.

[Fig. 2. FEC Groups and Checksum Groups: the original file of W packets is divided into FEC groups of k packets each, each FEC group is encoded with an erasure code into n packets, and the n encoded packets are divided into checksum groups of g packets each.]

Note that the size of a checksum group has to be smaller than the number of data packets per FEC group (g < k). Recall that erasure codes do not correct corrupted packets, so we drop all packets in a checksum group if any packet within the checksum group is lost or corrupted, and then we try to recover the dropped packets using an erasure code.
If g ≥ k and a checksum group is dropped, then we lose more than k packets in at least one FEC group, which we would not be able to recover because less than k packets within that FEC group are received, i.e., we would have to ask for retransmissions if any packet in the file is lost or corrupted. So, if g ≥ k, we are back to the problem of the original protocol where losing any packet would result in retransmissions. The above argument also implies that there must be at least two checksum groups per FEC group. This also explains the order in which FEC groups and checksum groups are constructed.

3.2 Data Transfer Step

In the data transfer step, clients send their files to intermediate bistros which are not trusted. Clients first choose B bistros to send their data to and then generate a session key K_ses_i for each chosen bistro, 1 ≤ i ≤ B. After that, clients divide their files into B parts. For each part i, clients encrypt it with the session key K_ses_i and send that part to intermediate bistro i. Clients also send to bistro i the session key, K_ses_i, and ticket, ξ, encrypted with the public key of the destination server. In addition, clients send event IDs, EID, so as to identify that the data is for a particular upload event whose event ID is EID. Each bistro i generates a receipt, ρ_i, and sends it to both the appropriate client and the destination server. The receipts contain the public key of bistro i, K_i,pub, so that both clients and the destination server can decrypt and verify the receipt (see footnote 4). Figure 3 depicts the data transfer step.

[Fig. 3. Data Transfer Step: each client sends K_ses_i(T_i), K_pub(K_ses_i, ξ), and EID to bistro i, 1 ≤ i ≤ B; each bistro i replies with the receipt ρ_i = K_i,priv(EID, K_pub(K_ses_i, ξ)).]

Footnote 4: Note that whether the public key of an intermediate bistro is correct or not does not affect the correctness of the protocol, as in the original Bistro system, since intermediate bistros are not trusted in any case.

In [7], the so-called assignment problem is studied, i.e., how a client should choose a bistro to which it sends its file. However, in that case, only one bistro out of a pool of bistros is chosen. In the case of striping (our case), a client needs to choose B ≥ 1 bistros. As shown in [7], this is a difficult problem even for B = 1. Hence, we leave the choice of which B bistros a client should stripe its file to, and how clients determine the value of B, to future work. In the remainder of this paper, we assume that the B bistros are known.

3.3 Data Collection Step

In the data collection step, the destination server coordinates intermediate bistros to collect data. When the destination server wants to retrieve data from bistro i, it sends a retrieval request along with the receipt ρ_i and the event ID EID. Upon receiving retrieval requests from the destination server, bistro i sends the file T_i along with the encrypted session key and ticket for decryption. Figure 4 depicts the data collection step.

[Fig. 4. Data Collection Step: the destination server sends Retrieve(EID, ρ_i) to each bistro i; bistro i replies with K_ses_i(T_i), K_pub(K_ses_i, ξ), and EID.]

When all packets within a checksum group are received, the destination server computes the checksum of the received checksum group. It then matches this checksum with what it received during the timestamp step. If these two checksums match, the destination server accepts all packets in the checksum group, and discards them otherwise.
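As a concrete illustration of the grouping in Sect. 3.1 and the per-group verification in Sect. 3.3, here is a small Python sketch (ours, not the paper's implementation; the erasure encoder is a stand-in stub, and SHA1 is used only because the paper names it as an example digest). It splits the n encoded packets of one FEC group into Z checksum groups of g packets, computes one checksum per group as in the timestamp message, and drops an entire checksum group at the destination server when its checksum does not match.

    import hashlib

    def make_checksum_groups(encoded_packets, g):
        # Split the n encoded packets of one FEC group into Z = n/g checksum
        # groups of g packets each, and compute one SHA1 checksum per group
        # (these checksums are what the client concatenates into H).
        assert len(encoded_packets) % g == 0
        groups = [encoded_packets[i:i + g] for i in range(0, len(encoded_packets), g)]
        checksums = [hashlib.sha1(b"".join(grp)).hexdigest() for grp in groups]
        return groups, checksums

    def accept_packets(received_groups, expected_checksums):
        # Destination-server side: recompute each group's checksum and keep its
        # packets only if it matches the checksum from the timestamp step; the
        # whole group is dropped if any of its packets was lost or corrupted.
        accepted = []
        for grp, expected in zip(received_groups, expected_checksums):
            if hashlib.sha1(b"".join(grp)).hexdigest() == expected:
                accepted.extend(grp)
        return accepted

    # Stand-in for an erasure encoder: k data packets plus n - k parity packets.
    k, n, g = 10, 20, 5
    data_packets = [bytes([i]) * 8 for i in range(k)]
    encoded = data_packets + [b"parity-%02d" % i for i in range(n - k)]

    groups, checksums = make_checksum_groups(encoded, g)
    groups[1][0] = b"corrupted by a malicious bistro"
    survivors = accept_packets(groups, checksums)
    print(len(survivors))   # 15 packets remain (one group of g = 5 dropped),
                            # still >= k = 10, so erasure decoding can proceed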
After the destination server has retrieved data from all intermediate bistros, it passes the packets that pass the checksum check to an erasure code decoder, if it has received at least k packets from every FEC group. The erasure code decoder then reconstructs the original file T_o. If the destination server receives less than k packets from any FEC group, it contacts the appropriate clients and requests retransmissions of specific FEC group(s).

4 Analytical Models

We propose analytical models to evaluate our fault tolerance protocol in this section. We develop a reliability model to study how reliability characteristics of bistros affect system reliability. We also develop a performance model to estimate the performance penalty of employing our protocol. Lastly, we derive a cost function to study the tradeoff between reliability and performance.

4.1 Reliability Model

Let p_g be the probability that there is no loss within a checksum group. Recall that if a checksum check fails, all packets within that checksum group are discarded because we have no way of determining which of the packets are corrupted. Hence, the probability that at least one packet is lost within a checksum group is 1 - p_g.

Let us assume that losing or corrupting one packet is independent of losing or corrupting other packets within the same checksum group. Let p be the probability that a packet is lost or corrupted. Then,

    p_g = (1 - p)^g.    (1)

Due to the lack of space, we omit the derivation of P_retrans, the probability that retransmission is needed, and simply state the result as follows:

    P_retrans = 1 - \left( \sum_{i=\lceil k/g \rceil}^{Z} \binom{Z}{i} p_g^i (1 - p_g)^{Z-i} \right)^Y.    (2)

The derivation of (2) can be found in [8]. We have also developed other reliability models to evaluate the reliability of our protocol, which are omitted here due to lack of space. They can be found in [8].

4.2 Performance Model

This section describes the performance model used for evaluating our fault tolerance protocol. We limit the evaluation in this paper to the performance penalty in the timestamp step only. This is motivated by the fact that if the performance of the timestamp step is poor, then we are back to the original problems where many clients are trying to send large amounts of data to a server at the same time. We believe that it is more important to consider the potential overloading of network resources, due to sending a greater number of checksums, than the additional computational needs on the server for producing digital signatures of larger timestamp messages. This is again due to the consideration that clients sending large messages to the destination server around the deadline time would take us back to the original problem of a large number of clients trying to send large amounts of data to the server in a short period of time.

Hence, we use the number of checksum groups per data packet, Z/k, as our performance metric. This is derived by considering the total number of checksums, YZ, normalized by the file size, Yk.

4.3 Cost Function

Now that we have a reliability model and a performance model, the question is how to combine the effects of both in order to study the tradeoff between reliability and performance. This section describes a cost function which we propose to use to achieve this goal. Let C_1 be the cost computed using the reliability model, and let C_2 be the cost computed using the performance model in the timestamp step. Thus, our cost function is

    C = w_1 C_1 + w_2 C_2    (3)

where w_1 and w_2 are weights of each factor.
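Equations (1) and (2) translate directly into a few lines of Python; the sketch below is our own transcription of the stated formulas (not code from the paper) and assumes independent packet losses, Z equal-sized checksum groups of g packets per FEC group, and Y FEC groups per file.

    import math

    def p_retrans(p, k, g, Z, Y):
        # Eq. (1): probability that a checksum group of g packets has no loss,
        # assuming packets are lost or corrupted independently with probability p.
        p_g = (1.0 - p) ** g
        # Eq. (2): a FEC group is recoverable if at least ceil(k/g) of its Z
        # checksum groups survive intact; all Y FEC groups must be recoverable,
        # otherwise a retransmission is needed.
        need = math.ceil(k / g)
        p_fec_ok = sum(math.comb(Z, i) * p_g ** i * (1.0 - p_g) ** (Z - i)
                       for i in range(need, Z + 1))
        return 1.0 - p_fec_ok ** Y

    # Example: Y = 5 FEC groups, n = 20 encoded packets each, k = 10, Z = 2
    # checksum groups (so g = 10), per-packet loss probability p = 0.01.
    print(p_retrans(p=0.01, k=10, g=10, Z=2, Y=5))   # approximately 0.045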
Earlier we derived the probability that retransmission is needed, P_retrans, as a reliability metric. We can use this metric as our reliability cost. The performance cost in the timestamp step is given by the number of checksum groups per data packet, Z/k. In the next section, we study how the reliability and performance metrics affect the overall cost function.

5 Numerical Results

This section provides numeric results on varying different parameters of our protocol and their effect on the cost function discussed above. Due to lack of space, we present only a subset of our experiments. Other results, which also support our findings, can be found in [8]. The parameters of interest to our system and its corresponding reliability and performance characteristics include the following.

1. Number of checksum groups per FEC group, Z. Setting Z to be large can provide better reliability because loss of a packet affects fewer other packets, as we drop the entire checksum group whenever any packet from that group is lost or corrupted. On the other hand, large values of Z result in large timestamp messages, which can have adverse effects on network resources.
2. Number of parity packets per FEC group, n - k. For reliability reasons, we want to send a large number of parity packets, but this increases the number of checksums we send as we are interested in adding parity checksum groups.
3. Number of data packets per FEC group, k. Given a file of W packets, we want to study the differences in dividing the file into few large FEC groups or many small FEC groups.
4. Probability of losing a packet, p. We want to see how sensitive the cost function is to p.
5. Weights w_1 and w_2. We are interested in how sensitive the cost function is to the chosen weight values. We performed a number of experiments with different weight values (please refer to [8] for the results). Due to lack of space, we omit these results here and only use representative weight values in the remainder of the paper.

The tradeoff between reliability and performance in the context of varying the number of checksum groups per FEC group, Z, is illustrated in Fig. 5(a), where Y = 5, n = 20, k = 10, p = 0.01, w_1 = 0.9, and w_2 = 0.1. In Fig. 5(a) the cost is high when Z is small because P_retrans is high. The cost decreases when Z is between 1 and 4 because P_retrans is improving. At Z ≥ 4, the cost goes up again because the size of the timestamp message becomes too large.

[Fig. 5. Results for varying different parameters in the cost function: (a) cost vs Z, (b) cost vs n - k, (c) cost vs k, (d) cost vs p.]

We study the tradeoff between reliability and performance of adding parity packets, n - k, in Fig. 5(b), where k = 10, p = 0.01, Z = 2, w_1 = 0.9, and w_2 = 0.1. In Fig. 5(b), when n - k ≤ 10, the cost decreases because P_retrans decreases. At n - k ≥ 10 in Fig. 5(b), the cost increases because Z/k increases while P_retrans approaches 0.
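To see the Fig. 5(a) tradeoff numerically, the following sketch (our own illustration; it instantiates the cost function with C_1 = P_retrans and C_2 = Z/k as described above, using the Fig. 5(a) parameters Y = 5, n = 20, k = 10, p = 0.01, w_1 = 0.9, w_2 = 0.1) sweeps Z over the divisors of n and prints the combined cost.

    import math

    def cost(Z, Y=5, n=20, k=10, p=0.01, w1=0.9, w2=0.1):
        # C = w1*C1 + w2*C2 with C1 = P_retrans from Eq. (2) and C2 = Z/k,
        # the number of checksum groups per data packet.
        g = n // Z                         # equal-sized checksum groups
        p_g = (1.0 - p) ** g
        p_fec_ok = sum(math.comb(Z, i) * p_g ** i * (1.0 - p_g) ** (Z - i)
                       for i in range(math.ceil(k / g), Z + 1))
        return w1 * (1.0 - p_fec_ok ** Y) + w2 * Z / k

    for Z in (1, 2, 4, 5, 10, 20):         # divisors of n = 20
        print(Z, round(cost(Z), 3))
    # The cost is highest at Z = 1 (P_retrans dominates), dips to its minimum
    # around Z = 4, and climbs again toward Z = 20 as the timestamp message grows.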
We study how we should choose k, the number of data packets in each FEC group, in Fig. 5(c). For this experiment we set W = 100, n = 2k, Z = 2, p = 0.01, w_1 = 0.9, and w_2 = 0.1. Figure 5(c) shows that cost is high when k is small, because this results in a lot of checksums. The cost drops when k is between 1 and 10, as we send fewer checksums and the corresponding reliability penalty does not increase as fast. Eventually, when k ≥ 10, cost goes up as k increases since larger FEC groups are not as fault tolerant.

We are interested in looking at how the cost function changes with the probability of losing a packet, p. We set Y = 5, n = 20, k = 10, Z = 2, w_1 = 0.9, and w_2 = 0.1 in Fig. 5(d). Since both Z and k are fixed, changes in cost reflect changes in P_retrans. Cost increases rapidly when p is between 0 and 0.1. When p > 0.1, since P_retrans approaches 1, cost remains fairly constant.

6 Conclusions

Bistro is a scalable and secure wide-area upload architecture that can provide an efficient upload service. The goal of this paper was to develop a fault tolerance protocol that improves performance in the face of failures or malicious behavior of intermediaries in the context of the Bistro architecture. We developed such a protocol using a forward error correction technique. We also evaluated this protocol using proposed analytical models to study its reliability and performance characteristics. We studied the resulting cost, as a function of a number of parameters, including the number of data packets per FEC group, the number of parity packets, and the number of checksum groups per FEC group. In conclusion, we believe that fault tolerance is important in wide-area data upload applications. We believe that the proposed protocol is a step in the right direction, leading to better fault tolerance characteristics with fewer retransmissions due to packet losses or corruptions, resulting in better overall system performance.

References

1. Bhattacharjee, S., Cheng, W.C., Chou, C.F., Golubchik, L., Khuller, S.: Bistro: a framework for building scalable wide-area upload applications. ACM SIGMETRICS Performance Evaluation Review 28 (2000) 29–35
2. Cheng, W.C., Chou, C.F., Golubchik, L., Khuller, S.: A secure and scalable wide-area upload service. In: Proceedings of the 2nd International Conference on Internet Computing. Volume 2. (2001) 733–739
3. Patterson, D.A., Gibson, G., Katz, R.H.: A case for redundant arrays of inexpensive disks (RAID). In: Proceedings of the 1988 ACM SIGMOD International Conference on Management of Data, ACM Press (1988) 109–116
4. Byers, J., Luby, M., Mitzenmacher, M., Rege, A.: A digital fountain approach to reliable distribution of bulk data. In: ACM SIGCOMM. (1998)
5. Ding, G., Ghafoor, H., Bhargava, B.: Resilient video transmission over wireless networks. In: 6th IEEE International Conference on Object-oriented Real-time Distributed Computing. (2003)
6. McKinley, P., Mani, A.: An experimental study of adaptive forward error correction for wireless collaborative computing. In: IEEE Symposium on Applications and the Internet (SAINT 2001). (2001)
7. Cheng, W.C., Chou, C.F., Golubchik, L., Khuller, S.: A performance study of Bistro, a scalable upload architecture. ACM SIGMETRICS Performance Evaluation Review 29 (2002) 31–39
8. Cheung, L., Chou, C.F., Golubchik, L., Yang, Y.: A fault tolerance protocol for uploads: Design and evaluation. Technical Report 04-834, Computer Science Department, University of Southern California (2004)
Description
Leslie Cheung, Cheng-Fu Chou, Leana Golubchik, Yan Yang. "A fault tolerance protocol for uploads: Design and evaluation." Computer Science Technical Reports (Los Angeles, California, USA: University of Southern California. Department of Computer Science) no. 834 (2004).
Asset Metadata
Creator
Cheung, Leslie
(author),
Chou, Cheng-Fu
(author),
Golubchik, Leana
(author),
Yang, Yan
(author)
Core Title
USC Computer Science Technical Reports, no. 834 (2004)
Alternative Title
A fault tolerance protocol for uploads: Design and evaluation
(title)
Publisher
Department of Computer Science, USC Viterbi School of Engineering, University of Southern California, 3650 McClintock Avenue, Los Angeles, California, 90089, USA
(publisher)
Tag
OAI-PMH Harvest
Format
10 pages
(extent),
technical reports
(aat)
Language
English
Unique identifier
UC16269326
Identifier
04-834 A Fault Tolerance Protocol For Upload - Design and Evaluation (filename)
Legacy Identifier
usc-cstr-04-834
Rights
Department of Computer Science (University of Southern California) and the author(s).
Internet Media Type
application/pdf
Copyright
In copyright - Non-commercial use permitted (https://rightsstatements.org/vocab/InC-NC/1.0/)
Source
20180426-rozan-cstechreports-shoaf
(batch),
Computer Science Technical Report Archive
(collection),
University of Southern California. Department of Computer Science. Technical Reports
(series)
Access Conditions
The author(s) retain rights to their work according to U.S. copyright law. Electronic access is being provided by the USC Libraries, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright.
Repository Name
USC Viterbi School of Engineering Department of Computer Science
Repository Location
Department of Computer Science. USC Viterbi School of Engineering. Los Angeles, CA, 90089
Repository Email
csdept@usc.edu
Inherited Values
Title
Computer Science Technical Report Archive
Description
Archive of computer science technical reports published by the USC Department of Computer Science from 1991 - 2017.
Coverage Temporal
1991/2017