THEORETICAL FOUNDATIONS FOR MODELING, ANALYSIS AND OPTIMIZATION OF CYBER-PHYSICAL-HUMAN SYSTEMS

by Mingxi Cheng

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (ELECTRICAL ENGINEERING)

August 2022

Copyright 2022 Mingxi Cheng

Acknowledgements

First and foremost, I would like to express my gratitude to my thesis advisors, Paul Bogdan, for his irreplaceable guidance and priceless motivation throughout my Ph.D. studies at the University of Southern California (USC), and Shahin Nazarian, for his unconditional support and invaluable guidance from the first day of my Ph.D. Thanks to their guidance, patience, support, and inspiration, I have dared to think out of the box, formulate new theoretical frameworks to solve complex problems, and implement novel solutions to bridge the gaps between research disciplines. Thanks to them, I have had the chance to choose research topics that I sincerely love and am passionate about, and to devote my time and energy to research problems that fascinate me and reward me with great pride.

Secondly, I would like to show my appreciation to my committee members, Dr. Jyotirmoy Deshmukh, Dr. Edmond Jonckheere, and Dr. Richard Leahy. They have been there with me from my Ph.D. qualifying exam to the final thesis defense, and I consider myself fortunate to have received invaluable feedback on my research from them. I am particularly grateful to Dr. Jyotirmoy Deshmukh, who has provided countless pieces of precious advice and invested a great amount of time in our research projects. I respect him for his rigorous attitude toward science and warm attitude toward students. I am grateful to Dr. Edmond Jonckheere for offering me insightful advice on the future directions of my research. Last, but not least, I am grateful to Dr. Richard Leahy for giving me valuable and encouraging comments on my research in my exams.

I have been extremely blessed to receive the unconditional love and support of my parents, Xiangying and Jianxin, my cousin Ting, my aunt Xiangyang, and my grandparents, Qixu and Fu. My family members have raised me into a reasonable and strong person and have helped me considerably in pursuing challenging educational goals. I am particularly grateful to my parents, who have shaped my analytical thinking and encouraged me to be curious about science and technology. I still remember the first computer we had back in 1998; my parents taught me to type on it when I was four years old. They are kind-hearted, considerate, honest, righteous, diligent, intelligent, supportive, and they are the best parents one can ever have.

Above all, I would like to extend my special thanks to my fiancé and my cat. Without them, my life as a graduate student at USC would have been much harder. I adopted my cat from a rescue center when he was 8 weeks old in my first semester at USC. Throughout the years of living together, he has given me emotional support and has warmed my days even in the darkest times. I believe having him in my life is one of the best decisions I have ever made. I am very grateful to my fiancé for his love, encouragement, support, and understanding throughout my doctoral studies. Living abroad pursuing a Ph.D. far away from family is difficult, especially for an introverted girl. This journey would have been a thousand times harder without my fiancé.
I would also like to thank all my friends in the States who made this experience away from family and home not unbearable. Finally, I would like to express my appreciation to the agencies that have contributed to the funding of my research, namely USC for my Ph.D. fellowship, the National Science Foundation, and the Defense Advanced Research Projects Agency.

Table of Contents

Acknowledgements
List of Tables
List of Figures
Abstract
1 Introduction
  1.1 Cyber-Physical Systems and Cyber-Physical-Human Systems
  1.2 Challenges and Research Objectives
  1.3 Thesis Organization
2 Trust in Deep Learning
  2.1 Probabilistic Reasoning Preliminaries
  2.2 Trust in Deep Neural Networks
    2.2.1 Trustworthiness
    2.2.2 DeepTrust Formulation
    2.2.3 A good NN topology leads to high projected trust probabilities, even when trained with untrustworthy data
    2.2.4 Uncertainty is not always malicious when evaluating the opinion and trustworthiness of a neural network
    2.2.5 Did you trust those who predicted Trump to lose in the 2016 election?
  2.3 Trust in Convolutional Neural Networks
    2.3.1 Trustworthiness Evaluation in CNNs
    2.3.2 TrustCNet Framework
    2.3.3 Trust Quantification of CNNs
    2.3.4 TrustCNet outperforms CNNs when dealing with noisy input
3 Trust-aware Control in Multi-Agent CPSs
  3.1 A General Trust Framework for Multi-Agent CPSs
    3.1.1 Quantifying Trust in MAS
  3.2 Trust-aware Control for Intelligent Transportation Systems
    3.2.1 Autonomous Intersection Management (AIM)
    3.2.2 AIM-Trust
    3.2.3 AIM vs. AIM-Trust
  3.3 Trust-based Malicious Attacker Detection in CACC Platoons
    3.3.1 CACC Platoons and Attacker Models
    3.3.2 Trust-based Attacker Detection Model
    3.3.3 Proposed Trust-based Model Accurately Detects Attackers
4 Dynamic Trust Quantification for Perceptions
  4.1 Perception and Decision-making
  4.2 Dynamic Trust, Risk, and Uncertainty Quantification
    4.2.1 S1. Proxy Monitors for Perception
    4.2.2 S2. Trust and Risk Quantification of Perception Systems
  4.3 Trust- and Risk-modulated Decision-making
    4.3.1 S3. Trust and Risk Modulation
  4.4 Conservative Trust Modulation
    4.4.1 Depth Modulation
    4.4.2 Trust-Modulated Perception Reduces Collision Rate
5 Misinformation Analysis and Prediction in CPHSs
  5.1 Misinformation and Infodemics
  5.2 VRoC Misinformation Classification Framework
    5.2.1 LSTM-based Variational Autoencoder
    5.2.2 Rumor Classifier
    5.2.3 Experiments
  5.3 Deciphering the Laws of COVID-19 Misinformation Dynamics
    5.3.1 Misinformation Dynamics Analysis Methods
    5.3.2 COVID-19 Misinformation Network Characterization
    5.3.3 COVID-19 Misinformation Network Prediction
    5.3.4 Discussion and Future Directions
6 Gene Mutation Detection and Rumor Detection
  6.1 GAN-based Classifier
    6.1.1 Generative Adversarial Network Architecture
    6.1.2 Model Training Techniques and Hyperparameter Configuration
  6.2 Rumor Detection With Explanations
    6.2.1 Detection Results
    6.2.2 Explanation Results
  6.3 Gene Classification With Mutation Detection
  6.4 Conclusion
7 Conclusion and Future Research Directions
  7.1 Major Contribution of this Thesis
  7.2 Future Research Directions
Reference List

List of Tables

2.1 Comparison of accuracy and projected trust probability between NN_1 (NN_1^D) and NN_2 (NN_2^D).
2.2 Test accuracy and trustworthiness results of CNN blocks.
2.3 Comparison between TrustCNets and their non-trust-aware variants under noisy datasets. In case I, position and intensity information of noisy input are used in input opinion initialization. In case II, only position information is used, and in case III, no noise information is used in opinion initialization.
4.1 Aggregate scores for the evaluation metrics.
5.1 Comparison between VRoC and baselines on the rumor detection task.
5.2 Comparison between VRoC and baselines on the rumor tracking task.
5.3 Comparison between VRoC and baselines on the rumor stance classification task.
5.4 Comparison between VRoC and baselines on the rumor veracity classification task. Lc represents that the news related to Charlie Hebdo is left out while training under the L principle.
6.1 Baselines' architecture setup in both the rumor detection task and the gene classification with mutation detection task.
6.2 Macro-F1 and accuracy comparison between our model and baselines on the rumor detection task. The models are trained on PHEME and tested on both the original dataset PHEME and the augmented dataset PHEME+PHEME'. * indicates the best result from the work that proposed the corresponding model. L represents that the model is evaluated under the leave-one-out principle. Variance results in cross-validations are shown in Table 6.3.
6.3 Variance results in cross-validations on the rumor detection task.
6.4 Examples of D_explain and D_classify's predictions on a rumor (first) and a non-rumor (second). The suspicious words in the rumor predicted by D_explain are marked in bold. D_classify provides a score ranging from 0 to 1, where 0 and 1 represent rumor and non-rumor, respectively.
6.5 Examples of D_explain predicting suspicious words in rumors (marked in bold). D_classify outputs probabilities in the range [0, 1], where 0 and 1 represent rumor and non-rumor, respectively.
6.6 Macro-F1 and accuracy comparison between our model and baselines on the extended 4-class experiments of the rumor detection task on the PHEME dataset. U indicates that the model is trained on PHEME+PHEME'; otherwise it is trained on the original PHEME dataset. All models are tested on PHEME (R/N) and PHEME+PHEME' (R/N/R'/N').
6.7 Macro-F1 and accuracy comparison between our model and baselines on the extended 4-class experiments of the provenance (real/fake) and veracity (true/false) tasks. U indicates that the model is trained on FMG+FMG'; otherwise it is trained on FMG. All models are tested on FMG and FMG+FMG'.
6.8 Examples of D_explain failing to predict suspicious words in some short rumors. D_classify outputs probabilities in the range [0, 1], where 0 and 1 represent rumor and non-rumor, respectively.
6.9 Comparison between our model and baselines on the gene classification with mutation detection task. * indicates the best result from the corresponding paper. 2-class refers to AP, AN for acceptor, and DP, DN for donor. 4-class refers to AP, AN, AP', AN' for acceptor, and DP, DN, DP', DN' for donor. A and D indicate acceptor and donor.
6.10 Examples of the generative model modifying gene sequences and the discriminative model detecting the modifications (marked in bold).

List of Figures

1.1 CPSs integrate computation, communication, and control of physical systems, such as platoons formed by self-driving cars, groups of drones that cooperate together, and robots controlled by an intelligent algorithm. The physical systems communicate with each other while computing. CPHSs involve human agents in the loop.
1.2 a. A four-way intersection. b. An autonomous intersection management system where each vehicle sends requests to the intersection manager, and the manager controls the intersection by some policy. c. Trust-based transportation management. Each vehicle is attached to a time-space buffer in AIM. Trustworthy vehicles have tighter buffers and untrustworthy vehicles have large buffers.
1.3 Research objectives in this thesis: modeling, analysis, and optimization of CPHSs.
2.1 Opinion triangle examples.
2.2 DeepTrust: subjective trust network formulation for multi-layered NNs and NN opinion evaluation. A, Subjective trust network formulation for a multi-layered NN. To quantify the opinion of a network, W_NN, i.e., the human observer's opinion of a particular neural network, W^A_NeuralNetwork, the human observer as an analyst relies on sources, in this case neurons in the network, which hold direct opinions of the neural network. W^A_source and W^source_NeuralNetwork are analyst A's opinion of the source and the source's opinion of the neural network. The derived opinion of the neural network is then calculated as fusion(W^[A;source]_NeuralNetwork). B, NN opinion evaluation.
The dataset in DeepTrust contains Data, i.e., features and labels as in a normal dataset, in addition to the Opinion on each data point. If a data point does not convey the information that other data points do, for example, one of its features is noisy or its label is vague, we consider this data point uncertain and hence introduce uncertainty into the dataset. Given the NN topology, the opinion of the data, and the training loss, DeepTrust can calculate the trust of the NN. Note that the trust of hidden neurons and the trust of output neurons are quantified differently, as shown in this figure. Each neuron in the output layer is a source that provides advice to the analyst, so that the analyst can derive her own opinion of the NN. W^Y_neuron|y is represented by W_y'|y for simplicity. Detailed computation and explanation are summarized in Section 2.2.
2.3 Backpropagation in one neuron and opinion update of weight and output. The backpropagation process in neural network training first compares the true label and the output given by the neuron, then backpropagates the difference to the net, and adjusts the weight accordingly to minimize the error. The weight opinion update process mimics the backpropagation: (i) at the current episode, the opinion of the neuron is the combination of the forward opinion and the backward opinion, which are based on the current W_weight and the current W_output|label, respectively; (ii) in the next episode, the opinion of the neuron is recalculated by taking the updated W_weight and W_output|label into consideration.
2.4 General topology example. The first hidden layer contains hidden neurons N^1_1 and N^1_2, and the second hidden layer contains hidden neurons N^2_1 and N^2_2.
2.5 Opinion comparison between NN_1 and NN_2 under undamaged MNIST data. A, Opinion of NN_1 with topology 784-1000-10. B, Opinion of NN_2 with topology 784-500-500-10. C, Projected trust probability comparison between NN_1 and NN_2. D-M, NN_1^D, with the same topology as NN_1, i.e., 784-1000-10, is trained with damaged data. We randomly take 10% to 100% of the training data and alter labels to introduce uncertainty and noise into the dataset, setting the opinion of a damaged data point to maximum uncertainty: {0, 0, 1, 0.5}. Belief is sparser while disbelief becomes denser in D-M, but there is still belief even when the dataset is 100% damaged. N-O, Normalized cumulative belief and disbelief of NN_1^D under 10% to the largest data damage, averaged over 10 runs.
2.6 Projected trust probability and accuracy comparison of 784-x-10 and 784-{1000}-10 under original and damaged MNIST data. A, The projected trust probability of 784-x-10 reaches 0.8 when increasing the number of hidden neurons from 100 to 2000. Topology highly impacts the projected trust probability, especially when rearranging a certain number of hidden neurons in a varying number of hidden layers. Accuracy hits the highest value with topology 784-2000-10, and the second-best accuracy is given by 784-1400-10. B, Compared to other topologies, the projected trust probability of 784-1000-10 is the highest with value 0.78, while topology 784-500-500-10 outperforms others in terms of accuracy. C, Under 10% data damage, the projected trust probability of 784-x-10 reaches 0.64 when increasing the number of hidden neurons from 100 to 2000. D, Topology 784-1000-10 outperforms others in both accuracy and trust.
E, Under 20% data damage, the projected trust probability of 784-x-10 settles at 0.5 when increasing the number of hidden neurons from 100 to 1900, while 784-2000-10 provides the highest trust probability. F, Topology 784-1000-10 results in the highest trust probability, while topology 784-500-500-10 reaches the highest accuracy.
2.7 Projected trust probability and loss comparison of NN_1^S and NN_2^S. A-B, Projected trust probability comparison of NN_1^S and NN_2^S in the aforementioned cases. Both NN_1^S and NN_2^S are trained under the same process with the same dataset. The training loss comparison is shown in C. The results of NN_1^S and NN_2^S are similar; they reach a given trust probability level at different speeds, and more precisely, NN_1^S reaches a desired projected trust probability level faster.
2.8 NN opinion results of the 2016 election prediction. A-B, Opinion comparison of presidential election predictors NN 1-32-32-1 and NN 9-32-64-32-1. A, NN 1-32-32-1 is trained on the original pre-election poll data. The projected trust probability of this NN in the validation phase reaches 0.38, and its opinion reaches {0.38, 0.60, 0.02, 0.13}. B, NN 9-32-64-32-1 is trained on enriched pre-election poll data. The opinion of this NN is {0.71, 0.26, 0.03, 0.13}, which has a higher belief value and results in more trustworthy predictions.
2.9 Trustworthiness quantification in a conv layer and a max-pooling layer. Trust calculation and feature calculation are accomplished at the same time. In the conv layer, feature calculation includes a convolution calculation as shown in (a), and trust calculation includes a fusion calculation done in parallel as shown in (b). The resulting feature map and trust map have the same shape. A max-pooling layer with a 2x2 window then takes the feature map in layer l+1 and outputs the maximum feature value (e.g., 6) in the window. The corresponding cell in the trust map contains the trust value 0.4, which is the trust value of cell 6 in the feature map.
2.10 Opinion calculation in a dense layer. Layer l is a dense layer, and each neuron in this layer has a feature value and a corresponding opinion and trust value. Both the opinions of neurons and the opinions of weights are used to calculate the forward opinion of a neuron in the next layer using Eq. 2.12. If the next layer is the output layer, the backward opinion is also calculated using Eq. 2.15.
2.11 Max-trust-pooling layer. Unlike a max-pooling layer, which operates on feature maps, a max-trust-pooling layer generates output based on the trust map. In this example, with a 2x2 window, a max-pooling function outputs the maximum value in the feature window, which is 6 with trust value 0.1. A max-trust-pooling function in this case outputs feature value 5 because it has the maximum trust value of 0.4. This demonstrates the difference between max-pooling and max-trust-pooling.
2.12 TrustCNet-n: a building block with n conv layers followed by one max-trust-pooling layer.
2.13 Accuracy and trustworthiness evaluation of CNN blocks. The conv2 - maxpool architecture is the best among the four blocks tested, as it achieves the highest trustworthiness and accuracy.
2.14 a. Accuracy and trustworthiness evaluation of VGG16 and AlexNet. b. Comparison between DeepTrust and our framework. Results are evaluated on VGG16.
3.1 A cloud-based (or edge-based) architecture with trustworthiness quantification in a multi-agent CPHS.
3.2 a. A trust framework where the centralized trust manager A keeps inspecting target agents X. b. A does not directly inspect X but relies on distributed trust authorities, which may or may not be trustworthy. c. Both A and the trust authorities directly inspect X.
3.3 A trust framework in traffic systems. A and the trust authorities keep inspecting the target vehicle X. Both roadside units and other vehicles adjacent to X serve as trust authorities. If we assume roadside units are trustworthy, then the opinion updating equation can be simplified so that the updated W^A_X combines a short-term opinion of X with a long-term opinion extracted from the history H.
3.4 A four-way intersection. Color-shaded areas represent the space-time buffers for each vehicle. Trustworthy vehicles have a tight buffer since they are expected to obey the instructions with small errors. Untrustworthy vehicles have a large buffer because it is highly likely that they will act differently than instructed. The dark red area represents a collision warning in simulated trajectories. In this case, the vehicles are not permitted to enter the intersection and their requests are rejected. The AIM-Trust framework, consisting of IM and TA, is shown on top. A detailed description of each component can be found in Algorithm 1.
3.5 Comparison between AIM and AIM-Trust.
3.6 a. Collision comparison between AIM-Trust, AIM-RL, and AIM-1. b. Throughput comparison between AIM-Trust, AIM-RL, and AIM-Fix. Note that in AIM-Trust and AIM-RL, the buffer size ranges from 0 to 16 in cases with 20% to 60% untrusted vehicles, while it ranges from 5 to 21 in cases with more untrusted vehicles (since the upper bound of 16 is not enough for RL agents to learn a good collision avoidance strategy). This change of action space causes the discontinuity of trends in terms of collisions and throughput between the 60% and 80% cases. c. Collision results of AIM-Trust with 10 test cases that are different from the training set. Collision rates in the test and training sets are consistent and stable even when 100% of vehicles are untrustworthy.
3.7 Collision comparison. The results of RL-based methods contain 10 test cases in each scenario (the untrusted vehicle percentage varies from 20% to 100%) and 10 runs of each case (hence in total 100 data points in each box). Trustworthiness-aware methods have lower collisions in all scenarios. Table 1: AIM-Trust's collision rate decrements compared to baselines, and AIM-Trust's throughput increments compared to AIM-Fix. UV indicates the untrusted vehicle percentage.
3.8 Trustworthiness results (blue lines) and instruction violation results (red areas). In the 20% untrusted case, 2 of 10 vehicles may or may not follow instructions; hence, 2 figures in the first row contain red areas. Our trust calculation precisely captures the instruction violation: a vehicle's trustworthiness increases when it follows the instruction, and decreases otherwise.
3.9 Trust-based attacker detection model with single and bi-directional trust evaluations.
3.10 Single-directional attacker detection experimental results. A 10-vehicle platoon completes 6 trips. Assume that in the first trip all vehicles are new to the trust system and have no trust record. Their records in H start building from trip 1 and are used in the following trips. The sine waves are required accelerations, and the fuzzy parts are acceleration attacks performed by vehicles.
3.11 Bi-directional attacker detection experimental results. A 10-vehicle platoon completes 2 adjacent trips, and attackers 1, 2, and 3 perform similar acceleration attacks. a. All vehicles have no trust history in H. b. Only attacker vehicles have moderate histories in H with trust value 0.25. c. Only attacker vehicles have bad histories with trust value 0.05.
4.1 The proposed trust-modulated decision-making in an autonomous mobile system consists of monitors that evaluate the perception modules and generate property satisfaction verdicts (evidence), which are then utilized in the trustworthiness and risk quantification node to dynamically estimate the trust values of the perception. The trust and/or risk modulator then modulates the perception results and sends the trust- and/or risk-modulated perception outputs to the vehicle decision-making node for further actions. Without the proposed perception evaluation and modulation module, the output of perception Y is directly used in the vehicle decision-making module (dashed red arrow).
4.2 Autonomous software stack with a self-driving car perception example, in which the perception processes a data stream of frames and generates perceived output (there might be errors as shown in the t=1 scene, missing objects in the t=2 scene, and inconsistencies as shown in the t=3 scene, where an ambulance is perceived as a truck and then as an ambulance, and the system fails to perceive the pedestrian in consecutive scenes). Perception tasks such as object recognition, tracking, depth estimation, and trajectory prediction are then performed, sending various outputs to the planning and decision-making node. The decision-making node finally generates decisions about waypoints, which are later used in low-level actuation. In this workflow, we can see how the errors impact the safety and trustworthiness of the perception and decision-making nodes.
4.3 Our framework is composed of five components: an object detection node D_1 to generate the bounding box of the object of interest, a depth prediction node D_2 to predict the distance of the object, a quality check node to evaluate the quality of D_1, a trust calculation node to calculate the trustworthiness of D_1, and a distance modulation node to modulate the output of D_2 accounting for the trustworthiness of D_1. The modulated distance can then be used in later applications such as emergency braking and pedestrian avoidance.
4.4 Pedestrian avoidance using our trust-modulated perception. a. A fatal safety violation happened with the usage of the Direct controller. b. Our trust-aware perception modulation successfully avoids the accident.
c. Proxy monitor satisfaction results and the resulting trustworthiness evaluation of the object detection node. d. Ground truth distance and predicted distance provided by the Direct controller in the example shown in a. e. Ground truth distance (d_G), predicted distance (d), and the modulated distances (d̂ and d*) provided by our controller in the example shown in b. Note that the predicted d in d-e is intermittent because when the object detection module D_1 detects no object in the current frame, the predicted distance is infinity. f. Mean-variance results of d_G, d, d̂, and d* of our controller averaged over 13 trials. Note that in the time steps where d = ∞, we manually set it to a large number (100) to represent that there is no object detected at time t.
5.1 The rumor classification system consists of four components: rumor detection, rumor tracking, rumor stance classification, and rumor veracity classification.
5.2 VRoC: the proposed VAE-aided multi-task rumor classification system. The top half illustrates the VAE structure, and the bottom half shows the four components in the rumor classification system. IN and OUT represent the input layer and output layer, respectively. Numbers in parentheses indicate the dropout rates. Note that the generated text could differ from the original text if the VAE is not perfect.
5.3 Statistics comparison between networks constructed by formulations I-III. (a-c) Node number comparison. (d-f) Edge number comparison. The cumulative misinformation networks (with and without the node deletion mechanism) are larger in scale and contain many more nodes and edges than the daily misinformation networks.
5.4 The fitted power-law model (red dashed line) and log-normal model (green dashed line) of the COVID-19 misinformation mean popularity. a-e, Models fitted for different types of misinformation. f, Models fitted for all COVID-19 misinformation. Log-normal is a plausible data-generating process of the misinformation mean popularity since the plausibility values p_KS are greater than 0.1. Both the goodness-of-fit test and the likelihood ratio test indicate that, compared to the power-law, the log-normal is more plausible. (The detailed hypothesis test procedure is stated in the Methods section, "Power-law and log-normal analysis".) The log-likelihood ratios (R's) and significance values (p's) between the two candidate distributions, log-normal and power-law, are (0.422, 0.429), (0.911, 0.289), (1.832, 0.245), (1.335, 0.352), (1.066, 0.369), (0.565, 0.203) for unreliable, political, bias, conspiracy, clickbait, and all-type misinformation, respectively.
5.5 Misinformation network centrality measures. The mean value curves of the degree centrality (a), closeness centrality (b), and second order centrality (c) for misinformation networks over 60 days across five different misinformation categories: unreliable, clickbait, political, bias, and conspiracy.
5.6 Node fitness and PA function (shown as in-plot) co-estimation for nodes in the misinformation networks at days 10 (a), 20 (b), and 30 (c). The heavy tails of the fitness distributions show the existence of the fit-get-richer phenomenon.
The estimated PA functions imply that the higher the node degree, the more competitive the node is in terms of link competition; this also shows a rich-get-richer phenomenon.
5.7 Probability of attachment (a), network evolution (b), node fitness estimations (c-e), and network centrality measures (f-h) of misinformation networks with the deletion mechanism.
5.8 Top words in S_[0,50] (a) and S_[49,55] (b). Top words are the words with the highest TF-IDF scores and represent the most influential and important words/topics in sentences. We take n = 1 and 2 for n-grams; therefore, the results contain unigrams and bigrams. We find that sentences that survived from day 0 to 50 mainly discussed political topics, while sentences that survived from day 49 to 55 discuss more non-political or medical topics. Specifically, 75.31% in S_[0,50] and 41.50% in S_[49,55] discuss political topics, respectively. This shift of topic may in fact be the reason for the cyclical behavior of the probability of attachment we discovered in Fig. 5.7 (a).
5.9 Centrality predictions of daily misinformation networks. To predict day(s) t's central nodes with respect to degree, closeness, or betweenness centrality, daily misinformation networks prior to day(s) t are used as training data. Instead of the network topology, e.g., the adjacency matrix, we take the natural language embedding of each misinformation item as the input to the DNN. The DNN then predicts which nodes are going to be the top 100 central nodes in day(s) t. E.g., in 1-day prediction, we predict day 10's top nodes based on days 0-9's information; and in 5-day prediction, we predict days 5-10's top nodes based on days 0-5's information.
6.1 Our proposed framework. The generative model (shown on the left-hand side) consists of two generators, G_where and G_replace. The discriminative model (shown on the right-hand side) consists of two discriminators, namely D_explain for explainability and D_classify for classification.
6.2 Macro-F1 (a) and accuracy (b) comparison between our model (our model-CNN and our model-LSTM) and baselines on the rumor detection task. The models are trained on the augmented dataset PHEME+PHEME' and tested on both the original PHEME and the augmented PHEME+PHEME'. L represents that the model is evaluated under the leave-one-out principle.

Abstract

With the recent advances in information technology, we are witnessing a complex intertwining of more and more complex cyber-physical systems (CPSs) in human lives and human activity. Consequently, Cyber-Physical-Human Systems (CPHSs), referring to the intertwining of CPSs and human activity, have evolved rapidly in recent years. CPHSs exploit multi-modal sensing to monitor a wide variety of human activity and physical phenomena, extract relevant information, and communicate it to edge and fog computing devices that construct perception models, investigate the likelihood of anomalies, forecast trends, and perform decision-making and control of a wide variety of CPSs to ensure safety and a higher quality of life in our society. By leveraging human activity, mining observed behavior, and extracting relevant knowledge, CPHSs aim to maximize the trustworthiness and fairness of AI-based solutions.
Towards this end, there is an urgent need to understand the principles and theoretical foundations for designing CPHSs capable of operating correctly, safely, and reliably while supporting a high quality of life. In this thesis, we provide a mathematical framework for quantifying the degree of trustworthiness in CPHSs. More specifically, we first describe a framework to evaluate the trustworthiness of deep neural networks (DNNs) and convolutional neural networks (CNNs), which represent the backbone of several AI-based solutions. Next, we discuss the first trustworthiness-aware optimization of neural network (NN) architectures. We provide systematic methods to quantify and evaluate the trustworthiness of input data, the NN itself, and the NN's output in multiple real-world applications. Going beyond NNs, we propose a general framework to quantify the trustworthiness of agents in multi-agent CPHSs (MACPHSs) and evaluate the efficacy of the proposed trust-aware control in MACPHSs. We demonstrate the ability of our proposed framework empirically through real-world applications, such as intelligent transportation systems and self-driving vehicles. Furthermore, the trustworthiness and credibility of CPHSs can be greatly influenced by rumors, misinformation, and fake news, as in the COVID-19 infodemic. Therefore, we consider the human aspect of a CPHS and tackle the misinformation analysis and prediction problems. We propose a comprehensive rumor classification system for rumor detection and tracking, veracity prediction, and stance prediction. By analyzing the COVID-19 misinformation networks, we propose a deep learning and network science-based framework for misinformation evolution prediction.

In summary, this thesis offers theoretical foundations for the modeling, analysis, and optimization of CPHSs, with a special emphasis on quantifying and optimizing their degree of trustworthiness. These results and conclusions can be further extended to other applications and research areas. One research direction, as we move into the post-pandemic era, is to model, analyze, and optimize future online service systems, such as online medical and healthcare services, online legal services, etc. Another research direction concerns robotics, human-machine, multi-agent, and machine-machine interactions. Relying on our proposed trustworthiness quantification and optimization framework, future CPHSs and their applications can be made more trustworthy, safe, and reliable.

Chapter 1
Introduction

1.1 Cyber-Physical Systems and Cyber-Physical-Human Systems

A Cyber-Physical System (CPS) is composed of sensing, communication, computation, and control of physical systems [Lee10]. It focuses on the complex integration and interdependencies of cyberspace and the physical world, ensures data acquisition and information feedback between the physical system and cyberspace, and provides intelligent management, modeling, and control of physical and engineered systems [LBK15]. Since the first time the term CPS was used to describe systems combining the cyber with the physical world [Gil06], it has been defined and studied from various aspects by the scientific community, with the main focus on the theoretical foundations and implementations of systems to improve performance, autonomy, adaptability, efficiency, functionality, usability, safety, and reliability [PSG+11, Che17, MB10].
For example, practical applications such as intelligent transportation systems, manufacturing, human-robot interaction, control and security, healthcare and medicine, smart cities and smart homes, energy management, and environmental monitoring have all been studied under the CPS umbrella [GPGV14, YX16, HAR14]. Humans, as important actors who use these systems, have been put in the loop, and the term Cyber-Physical-Human System (CPHS) has emerged from a continuous evolution and integration of science and technologies to describe CPSs with people in the loop [LW20].

Figure 1.1: CPSs integrate computation, communication, and control of physical systems, such as platoons formed by self-driving cars, groups of drones that cooperate together, and robots controlled by an intelligent algorithm. The physical systems communicate with each other while computing. CPHSs involve human agents in the loop.

Although many of us may think that in CPHSs people are the ones who control and operate the system, it is often the case that, in complex modern systems such as autonomous vehicles and smart homes, human agents, along with computers, machines, and AI algorithms, cooperate and achieve goals together [SSZ+16]. With people in the loop, it is vital to understand the differences compared with traditional CPSs without human participation. For instance, unlike machines, people may behave differently every time and may choose not to do something in the same situation due to internal desires or outside motivations [SSZ+16]. Leveraging people's capabilities, such as awareness and adaptability, in human-interactive systems is also a matter worth exploring. With people in the loop, it is important to acknowledge and understand the complex heterogeneity, the lack of appropriate abstractions and theoretical foundations, the potentially black-box heterogeneous integration, and the complex requirements for functionality, quality of service, and performance [LW20]. Thus, many new challenges come into view, and new theoretical foundations are needed to work effectively with CPHSs.

1.2 Challenges and Research Objectives

Humans, machines, computers, and AI algorithms often unite to achieve goals in CPHSs. With humans and AI in the loop, one common concern nowadays is how trustworthy the AIs are. Human operators follow a strict educational curriculum and performance assessment that could be exploited to quantify how much we entrust them. To quantify the trust of AI decision-makers, we must go beyond task accuracy, especially when facing limited, incomplete, misleading, controversial, or noisy datasets. Especially for black-box AI algorithms such as deep learning models, while many evaluation metrics have been proposed to quantify performance, trust-, risk-, and fairness-related evaluations are still in great need. Neural networks are known to be an effective tool in many deep learning application areas such as computer vision. Despite their good performance in terms of classical evaluation metrics such as accuracy, the trustworthiness of such models remains unclear [LW20], which raises questions and doubts in applications where trust is an important factor.

Transportation systems of the future can be best modeled as multi-agent CPHSs. A number of coordination protocols, such as autonomous intersection management (AIM) shown in Fig. 1.2 a-b, have been developed with the goal of improving the safety and efficiency of such systems.
The overall goal of these CPHSs is to provide behavioral guarantees under the assumption that the participating agents work in concert with a centralized (or distributed) coordinator. While there is work on analyzing such systems from a security perspective, we argue that there is limited work on quantifying the trustworthiness of individual agents in a multi-agent CPHS.

Figure 1.2: a. A four-way intersection. b. An autonomous intersection management system where each vehicle sends requests to the intersection manager, and the manager controls the intersection by some policy. c. Trust-based transportation management. Each vehicle is attached to a time-space buffer in AIM. Trustworthy vehicles have tighter buffers and untrustworthy vehicles have large buffers.

At a more fine-grained level, such as in modern autonomous driving systems, vehicle safety is contingent on visual perception systems used for environment comprehension and subsequent decision-making while taking into account dynamic, potentially adversarial agents. State-of-the-art perception tasks (e.g., object detection, depth estimation from 2D monocular images) require training on large datasets to minimize estimation error. However, these datasets may be insufficient to account for the uncertainty of real-world environments; situations that are significantly different from those encountered during training may lead to safety violations by the autonomous vehicle. Therefore, evaluating the trustworthiness and safety levels of individual agents is required to ensure the overall safety and trustworthiness of a CPHS.

For CPHSs with humans in the loop, such as social media where humans and bots interact and manipulate information, trust and veracity are also important topics. Social media has become popular and has percolated into almost all aspects of our daily lives. While online posting proves very convenient for individual users, it also fosters the fast spreading of various rumors. The rapid and wide percolation of rumors can cause persistent adverse or detrimental impacts. This vigorous growth of social media contributed not only to a pandemic (fast-spreading and far-reaching) of rumors and misinformation but also to an urgent need for text-based rumor detection strategies. To speed up the detection of misinformation, traditional rumor detection methods based on hand-crafted feature selection need to be replaced by automatic AI approaches. AI decision-making systems are required to provide explanations in order to assure users of their trustworthiness.

While there are many other challenges spanning every aspect of CPHSs and their various applications, this dissertation addresses only a small part of them. Within this scope, this thesis makes contributions through the following major objectives:

• Trustworthiness modeling in deep learning models, trust-based neural network optimization, and trust-aware control in multi-agent CPHSs. A CPHS is a computer system in which a mechanism is controlled or monitored by computer-based algorithms with humans in the loop. The control algorithms, such as deep neural networks, are vital in CPHSs, and we focus on the trustworthiness of such control algorithms. Therefore, we first propose to evaluate the trustworthiness of deep neural networks and agents in a CPHS and then realize trust-aware control in CPHSs and multi-agent CPHSs.

• Misinformation and rumor analysis and classification. In CPHSs, networks of human agents, such as social media networks, are common.
Trustworthiness and credibility in such networks can be greatly impacted by rumors, misinformation, and fake news, as in the COVID-19 infodemic we have seen in 2020. Analyzing misinformation networks, deciphering the laws of such networks, and building models to classify and predict misinformation are all critical and meaningful for maintaining a healthy and credible CPHS.

1.3 Thesis Organization

This thesis focuses on the proposed trustworthiness evaluation framework, trust-aware optimization and control, and misinformation analysis and classification, as demonstrated in Fig. 1.3. In the following, we provide a brief overview of our contributions.

Figure 1.3: Research objectives in this thesis: modeling, analysis, and optimization of CPHSs.

In Chapter 2, we focus on trust in AI, which plays a fundamental role in the modern world, especially when AI is used as an autonomous decision-maker. We describe DeepTrust, which identifies proper deep neural network (DNN) topologies that have high projected trust probabilities, even when trained with untrusted data. We show that an uncertain opinion of data is not always malicious when evaluating a DNN's opinion and trustworthiness, whereas the disbelief opinion hurts the trust the most. Also, trust probability does not necessarily correlate with accuracy. DeepTrust also provides a projected trust probability of the DNN's prediction, which is useful when the DNN generates an over-confident output under problematic datasets. These findings open new analytical avenues for designing and improving the DNN topology by optimizing opinion and trustworthiness, along with accuracy, in a multi-objective optimization formulation, subject to space and time constraints. In addition, we further extend this work to quantify the trust of convolutional neural networks (CNNs). We propose a framework to evaluate the trustworthiness of CNNs and their major building blocks, i.e., the convolutional layer and the pooling layer. We then design a trust-based pooling layer for CNNs to achieve higher accuracy and trustworthiness in applications with noise in the input features. We name our trustworthiness-aware CNN architecture TrustCNet; it can be stacked as a trust-aware CNN architecture or plugged into deep learning architectures to improve their performance. We demonstrate the effectiveness of our TrustCNet empirically on multiple datasets.

In Chapter 3, we go beyond the inner working mechanism of neural networks and focus on multi-agent CPHSs, such as intelligent transportation systems. We propose a framework that uses an epistemic logic to quantify the trustworthiness of agents, and embed the use of quantitative trustworthiness values into control and coordination policies. Our modified control policies can help the multi-agent system improve its safety in the presence of untrustworthy agents (and, under certain assumptions, malicious agents). We empirically show the effectiveness of our proposed trust framework by embedding it into multiple intelligent transportation algorithms. In our experiments, our trust framework accurately detects attackers in cooperative adaptive cruise control platoons and mitigates the effect of untrustworthy agents in autonomous intersection management systems, and trust-aware traffic light control reduces collisions in all cases compared to the vanilla versions of these algorithms.
In Chapter 4, we pay attention to individual agents in a CPHS, such as a self-driving car, and we propose trust quantification for perception in self-driving vehicles. We introduce the notion of quantifiable trust in a given perception system that goes beyond simple metrics like training accuracy. We then develop a self-supervised method to dynamically update the trust in perception modules using logic-based monitors that serve as effective proxies for ground truth labels. Finally, we design a decision-making framework that explicitly uses trustworthiness to make conservative but safe decisions. We test our framework in the context of an autonomous emergency braking (AEB) system that needs to take into account an adversarial agent in the environment, and we empirically show that the trust-modulated controller provides better safety guarantees for systems with unreliable perception.

In Chapters 5-6, we focus on the human factor in CPHSs and provide a misinformation classification framework and a COVID-19 misinformation network dynamics analysis. The rumor classification system aims to detect, track, and verify rumors on social media. Such systems typically include four components: (i) a rumor detector, (ii) a rumor tracker, (iii) a stance classifier, and (iv) a veracity classifier. In order to improve the state of the art in rumor detection, tracking, and verification, we propose VRoC, a tweet-level variational autoencoder-based rumor classification system. We show that VRoC is able to classify unseen rumors with high levels of accuracy. The global rise of the COVID-19 health risk has triggered the related misinformation infodemic. In Chapter 5, we present the first analysis of COVID-19 misinformation networks and determine a few of its implications. Firstly, we analyze the spread trends of COVID-19 misinformation and discover that the COVID-19 misinformation statistics are well fitted by a log-normal distribution. Secondly, we form misinformation networks by taking individual misinformation as a node and similarity between misinformation nodes as links, and we decipher the laws of COVID-19 misinformation network evolution. Lastly, we present a network science-inspired deep learning framework to accurately predict which Twitter posts are likely to become central nodes (i.e., high centrality) in a misinformation network from only one sentence, without the need to know the whole network topology. With the network analysis and the central node prediction, we propose that if we correctly suppress certain central nodes in the misinformation network, the information transfer of the network will be severely impacted.

Finally, we conclude with a summary of this thesis and outline our future work directions in Chapter 7.

Chapter 2
Trust in Deep Learning

"AI is no longer the future–it's now here in our living rooms and cars and, often, our pockets" [IBM15]. Trust is a significant factor in the subjective world and is becoming increasingly critical in Artificial Intelligence (AI). When we behave according to AI's calculated results, how much trust should we put in it? Neural networks (NNs) have been deployed in numerous applications; however, despite their success, such powerful technologies also raise concerns [Ros19]. Incidents such as fatal accidents of self-driving cars have intensified the concerns about NNs' safety and trustworthiness. Research efforts focused on the trustworthiness and safety of NNs include two major aspects: certification and explanation.
The former process takes place before the deployment of the model or product to make sure it functions correctly, while the latter tries to explain the behavior of the model or product during its lifetime [HKK+18]. Verification and testing are the two techniques frequently used in the certification process, but the explainability of systems with machine learning components is still difficult to achieve for AI developers [HKK+18]. Neural network verification determines whether a property, e.g., safety [IWA+19] or local robustness, holds for a neural network. A robust model has the ability to maintain an "acceptable" behavior under exceptional execution conditions [FMP05], such as adversarial samples [SZS+13]. The robustness of NNs has been well studied in the literature [GMDC+18, HKWW17]. The trustworthiness of NNs, however, is a more complicated and abstract concept that needs to be explored. In summary, robustness contributes to trustworthiness, but robustness alone is not sufficient for trustworthiness quantification since it only partially covers the verification requirement.

Formal and empirical metrics have been designed to model trust. Subjective Logic (SL) is a type of probabilistic logic that explicitly takes uncertainty and source trust into account [Jøs16]. It has been used to model and analyze trust networks in social networks and Bayesian networks [JHP06], and to evaluate information from untrustworthy sources [KBdS17]. SL offers significantly greater expressiveness than Boolean truth values and probabilities through its opinion representation, and gives an analyst the ability to specify vague (and subjective) expressions such as "I don't know" as input arguments [Jøs16]. Arguments in SL are subjective opinions that contain four parameters: belief, disbelief, uncertainty, and base rate. For example, believing in a statement 100% is an opinion; in the SL discipline, this case is expressed as belief equal to one and disbelief and uncertainty both equal to zero. Opinions are also related to the belief representation in Dempster–Shafer belief theory (DST) [Dem08]. Compared to monotonic logics, SL provides multiple advantages when handling default reasoning, abductive reasoning, and belief reasoning.

Uncertainty quantification is closely related to trust evaluation. Various works in the scientific literature have explored uncertainty in deep learning and machine learning models. In contrast to uncertainty propagation [TJK18], the term uncertainty in this work refers to the uncertainty value in a subjective opinion of a human or machine observer.

In this chapter, we propose DeepTrust, an SL-inspired framework to evaluate the opinion and trustworthiness of an AI agent, such as a neural network, and its predictions, based on input trust information, a hyperparameter of neural networks (i.e., topology), and parameters of neural networks (such as weights and biases). The questions "How much should we trust AI and input data?" and "Which topologies are more trustworthy?" can be answered by evaluating AI in the SL discipline offered by DeepTrust. The trust quantification in this work is not limited by the linearity and/or nonlinearity of the neural network or by the size or topology of the neural network. By providing the projected trust probability, DeepTrust promises the needed trustworthiness in a wide range of applications involving problematic data. One example is related to the 2016 presidential election prediction.
We show that, differently from almost all the predictors back in 2016, DeepTrust calculates a very low projected trust probability of 0.21 for "Clinton wins" and a higher projected trust probability of 0.5 for "Trump wins". Hence, by quantifying the opinion and trustworthiness, DeepTrust could have predicted that it would be more than twice as trustworthy to predict Trump as the winner, or at least it could have raised alarms about all those strong pre-election results in favor of Clinton. In addition, we propose a follow-up work of DeepTrust to quantify the trust in convolutional neural networks (CNNs).

2.1 Probabilistic Reasoning Preliminaries

Different from binary logic and operators, which are based on true/false logic values and may seem more familiar, SL defines its own set of logic and operators based not only on logic truth but also on probabilistic uncertainty. In the following, we introduce the basic definitions and operators that are used in DeepTrust, specifically binomial opinions and their quantification from evidence, the binomial multiplication operator, and the averaging fusion operator.

Binomial opinions

We start with an example to explain what opinions mean in the real world. Imagine you would like to purchase a product from a website, and you have an opinion about this product, i.e., belief in support of the product being good, belief in support of the product not being good (or disbelief in support of the product being good), uncertainty about the product, and the prior probability of the product being good.

Figure 2.1: Opinion triangle examples.

A binomial opinion about the truth of x, e.g., a product, is the ordered quadruple {belief, disbelief, uncertainty, base rate}, denoted by W_x = {b_x, d_x, u_x, a_x}, with an additional requirement b_x + d_x + u_x = 1, where b_x, d_x, u_x ∈ [0, 1]. The respective parameters are: belief mass b_x, disbelief mass d_x, uncertainty mass u_x, which represents the vacuity of evidence, and base rate a_x, the prior probability of x without any evidence. In what follows, we will use binomial opinion and opinion interchangeably.

Binomial opinion quantification from evidence

Let W_x = {b_x, d_x, u_x, a_x} be a binomial opinion of a random variable X, e.g., a product. To formulate our opinion about this product, we need to rely on evidence about the quality of this product. To calculate the binomial opinion of a random variable X from directly observed evidence, we use the following mapping rule [Jøs16, CA18]:

$$b_x = \frac{r_x}{r_x + s_x + W}, \qquad d_x = \frac{s_x}{r_x + s_x + W}, \qquad u_x = \frac{W}{r_x + s_x + W}.$$

Here r_x and s_x represent the positive and negative evidence of X taking value x, respectively. W is a non-informative prior weight, which has a default value of 2 to ensure that the prior probability distribution function (PDF) is the uniform PDF when r_x = s_x = 0 and a_x = 0.5 [Jøs16]. In our online shopping example, a customer can form his/her opinion by looking at the product reports. For example, if 10 reports show that the product is good and 10 show that it is bad, then the positive evidence is r_x = 10 and the negative evidence is s_x = 10, and the opinion can be calculated from this evidence (with W = 2) as W_x = {10/22, 10/22, 2/22, 1/2}.
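The evidence-to-opinion mapping above is straightforward to operationalize. The following short Python sketch (not part of the thesis; the Opinion container and function names are illustrative) computes a binomial opinion from positive and negative evidence counts and reproduces the online-shopping example:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float

def opinion_from_evidence(r_x: float, s_x: float,
                          base_rate: float = 0.5, W: float = 2.0) -> Opinion:
    """Map positive evidence r_x and negative evidence s_x to a binomial opinion:
    b = r/(r+s+W), d = s/(r+s+W), u = W/(r+s+W), with non-informative prior weight W."""
    total = r_x + s_x + W
    return Opinion(r_x / total, s_x / total, W / total, base_rate)

# Online-shopping example: 10 positive and 10 negative product reports
# give the opinion {10/22, 10/22, 2/22, 1/2}.
print(opinion_from_evidence(10, 10))
```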
To further understand the binomial opinion, we introduce the projected probability $p_x = b_x + u_x a_x$. A binomial opinion is equivalent to a Beta probability density function (PDF). Assume a random variable $X$ is drawn from a binary domain $\{x, \bar{x}\}$. Let $p$ denote a continuous probability function $p: \mathbb{X} \rightarrow [0, 1]$ where $p(x) + p(\bar{x}) = 1$. With $p(x)$ as the variable, the Beta probability density function $\mathrm{Beta}(p(x); \alpha, \beta)$ reads:
$$\mathrm{Beta}(p(x); \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p(x)^{\alpha - 1}\, (1 - p(x))^{\beta - 1}, \qquad \alpha > 0,\ \beta > 0,$$
where $\alpha$ and $\beta$ represent evidence/observations of $X = x$ and $X = \bar{x}$, respectively. Let $r_x$ and $s_x$ denote the number of observations of $x$ and $\bar{x}$; then $\alpha$ and $\beta$ can be expressed as follows:
$$\alpha = r_x + a_x W, \qquad \beta = s_x + (1 - a_x) W.$$
The bijective mapping between a binomial opinion and a Beta PDF emerges from the intuitive requirement that the projected probability of a binomial opinion must be equal to the expected probability of a Beta PDF, i.e., $p_x = b_x + u_x a_x = E[x] = \frac{\alpha}{\alpha + \beta} = \frac{r_x + a_x W}{r_x + s_x + W}$.

Binomial multiplication

The binomial multiplication operator is used to derive the opinion of the conjunction of two opinions. Multiplication in SL corresponds to AND in binary logic. Let $W_x = \{b_x, d_x, u_x, a_x\}$ and $W_y = \{b_y, d_y, u_y, a_y\}$ be binomial opinions about $x$ and $y$, respectively. The opinion of the conjunction $x \wedge y$ is $W_{x \wedge y} = W_x \cdot W_y$, with [Jøs16]:
$$b_{x \wedge y} = b_x b_y + \frac{(1 - a_x)\, a_y\, b_x u_y + a_x (1 - a_y)\, b_y u_x}{1 - a_x a_y},$$
$$d_{x \wedge y} = d_x + d_y - d_x d_y,$$
$$u_{x \wedge y} = u_x u_y + \frac{(1 - a_y)\, b_x u_y + (1 - a_x)\, b_y u_x}{1 - a_x a_y},$$
$$a_{x \wedge y} = a_x a_y.$$
In our online shopping example, assume a customer holds opinions of two different products $x$ and $y$. Then he/she can derive the opinion about the conjunction $x \wedge y$ using this multiplication operator.

Averaging fusion

To combine different people’s opinions about the same domain, we use fusion operators. The fusion operator used in this work is the averaging fusion operator [WZ+17], which is appropriate for circumstances where agent A and agent B observe the same process over the same time period [Jøs16]. In what follows we use fusion and averaging fusion interchangeably. Let $W_X^A = \{b_X^A, d_X^A, u_X^A, a_X^A\}$ and $W_X^B = \{b_X^B, d_X^B, u_X^B, a_X^B\}$ be source agents A’s and B’s binomial opinions about $X$, respectively, for example, customers A’s and B’s opinions about a product. The binomial opinion $W_X^{(A \diamond B)} = \mathrm{fusion}(W_X^A, W_X^B)$ is called the averaged opinion of $W_X^A$ and $W_X^B$, and represents the combined opinion about product $X$ from customers A and B. The averaging fusion operator works as follows.

Case I: $u_X^A \neq 0$ or $u_X^B \neq 0$:
$$b_X^{(A \diamond B)} = \frac{b_X^A u_X^B + b_X^B u_X^A}{u_X^A + u_X^B}, \qquad u_X^{(A \diamond B)} = \frac{2\, u_X^A u_X^B}{u_X^A + u_X^B}, \qquad a_X^{(A \diamond B)} = \frac{a_X^A + a_X^B}{2}.$$

Case II: $u_X^A = 0$ and $u_X^B = 0$:
$$b_X^{(A \diamond B)} = \gamma_X^A b_X^A + \gamma_X^B b_X^B, \qquad u_X^{(A \diamond B)} = 0, \qquad a_X^{(A \diamond B)} = \gamma_X^A a_X^A + \gamma_X^B a_X^B,$$
where
$$\gamma_X^A = \lim_{u_X^A \to 0,\, u_X^B \to 0} \frac{u_X^B}{u_X^A + u_X^B}, \qquad \gamma_X^B = \lim_{u_X^A \to 0,\, u_X^B \to 0} \frac{u_X^A}{u_X^A + u_X^B}.$$

2.2 Trust in Deep Neural Networks

In this section, we define trustworthiness and introduce DeepTrust, our framework for quantifying the opinion and trustworthiness of a neural network. We evaluate the opinion in a simple one-neuron case, then generalize to multi-layered topologies.

2.2.1 Trustworthiness

It is crucial in any stochastic decision-making problem to know whether the information about the probabilistic description is trustworthy and, if so, the degree of trustworthiness. This is even more important when quantifying trust in an artificial-intelligence-based decision-making problem. In scenarios without trust quantification, a neural network can only present its decision as output labels, without any clear measure of trust in those decisions.
However, if the environment is corrupted and the network is trained with damaged data, the decisions generated by this network could be highly untrustworthy, e.g., YES but with 10% belief. Lack of quantification of trust results in a lack of information, and consequently fatal errors. This shows the need for trust quantification. In what follows, we describe DeepTrust for quantifying the opinion and trustworthiness of a multi-layered NN as a function of its topology and the opinion about the training datasets. DeepTrust applies to both classification and regression problems since the value of input does not affect the calculation of the opinion. As long as we have true labels, i.e., the problem is in the realm of supervised learning, DeepTrust can calculate the trustworthiness. Definition 2.2.1. In our DeepTrust, we define the trustworthiness ofx as the projected trust probability ofx, i.e., trustworthinessp x =b x +u x a x , whereb x ,u x ,a x are belief, uncertainty, and base rate ofx, respectively. 16 The intuition for our definition of trustworthiness is as follows. The higher the belief mass and the base rate are, the higher the projected trust probability and hence the higher the trustworthiness is. If a neural network results in a high projected trust probability, then it is considered to be trustworthy. Belief mass, the product of uncertainty mass and base rate can both contribute to projected trust probability. High belief mass comes from a large volume of positive evidence supportingx and a high base rate represents a high prior probability ofx without any evidence. For example, when a large volume of evidence is collected, ifb x = 0, i.e., belief is zero, then it can be concluded that all collected evidence are not supportingx, henced x = 1, i.e., disbelief is one. Now the trustworthiness ofx should be extremely low since no evidence is supporting it, and p x = 0. An opposite case is when no evidence supports or opposesx, the background information aboutx, i.e.,a x , defines the trustworthiness ofx due to the lack of evidence. In this case,b x =d x = 0, i.e., belief and disbelief both equal to zero, uncertaintyu x = 1, andp x = a x , i.e., projected probability equals to base rate. It is noteworthy that our measure of trust based on projected trust probability is in agreement with the main SL reference [Jøs16]. More precisely Josang presents the special case of projected trust probability as 1 (0), like a case with complete trust (distrust) in the source. 2.2.2 DeepTrust Formulation Because we cannot apply SL directly to trust quantification of NNs, we will need to formulate this as an SL problem. For that, we formulate the trust relationships in NN trust quantification as a subjective trust network as shown in Fig. 2.2A, where NN is the target object. A subjective trust network represents trust and belief relationships from agents, via other agents or sources to target entities/variables, where each trust and belief relationship is expressed as an opinion [Jøs16]. For example, a customer wants to purchase a product from a website, so he/she will make a purchase decision by browsing 17 Figure 2.2: DeepTrust: subjective trust network formulation for multi-layered NNs and NN opinion evaluation. A, Subjective trust network formulation for a multi-layered NN. 
To quantify the opinion of networkW NN , i.e., human observer’s opinion of a particular neural networkW A NeuralNetwork , human observer as an analyst relies on sources, in this case, neurons in a network, which hold direct opinions of neuron network.W A source andW source NeuralNetwork are analyst A’s opinion of source and source’s opinion of neural network. Then the derived opinion of neural network is calculated asfusion(W [A;source] NeuralNetwork ). B, NN opinion evaluation. Dataset in DeepTrust containsData, i.e., features and labels the same way as a normal dataset, in addition to theOpinion on each data point. If a data point doesn’t convey the information as other data points do, for example, one of the features is noisy or the label is vague, we consider this data as uncertain, and hence introduce uncertainty into the dataset. Given NN topology, opinion of data, and training loss, DeepTrust can calculate the trust of NN. Note that, the trust of hidden neurons and trust of output neurons are quantified differently as shown in this figure. Each neuron in the output layer is a source that provides advice to analyst, so that the analyst can derive her own opinion of the NN.W Y neuronjy is represented byW y 0 jy for simplicity. Detailed computation and explanation are summarized in Section 2.2. other buyers’ reviews of this product. He/she may or may not trust each and every review. 18 In this case, the customer as an agent forms his/her opinion of the product via other agents’ (i.e., buyers’) opinions. In DeepTrust, a human observer A as an analyst wants to formulate an opinion about a given neural network,W A NeuralNetwork . However, this analyst A doesn’t have a direct trust relationship with the whole neural network and hence needs to gather sources’ opinions of the neural network, W Source NeuralNetwork ’s. In the case of neural networks, the sources are the neurons in the output layer. Therefore, the human observer can only interpret the trustworthiness of a neural network by referring to the neurons in the output layer. We will later prove in Theorem 2.2.1 that taking neurons in all layers into consideration causes an opinion duplication. In the trust network in Fig. 2.2A, analyst A discounts the information provided by the source (neuron) and derives an opinion about the neural network,W [A;Source] NeuralNetwork , i.e., A’s opinion of the neural network through a source. An intuitive underlying relationship is that A trusts the source, and the source trusts the neural network. Analyst A discounts the information given by the source since A may not fully trust the source. If there are more than one sources, i.e., more than one neurons in output layer, analyst A will gather advice from all sources and fuse the discounted opinions by fusion operator we introduced in Section 2.1. A’s opinion to sourceW A source is set to be maximum belief based on the assumption that analyst A trusts the source completely, so A doesn’t discount the source’s information and the derived opinionW [A;Source] NeuralNetwork simply becomesW Source NeuralNetwork . Therefore, A’s opinion of a given neural network with multiple sources reads: W A NeuralNetwork =fusion(W [A;Source] NeuralNetwork ) =fusion(W Source NeuralNetwork ): (2.1) Since neurons in output layer are analyst’s sources, we use notationW neuron in Fig. 2.2B to represent a neuron’s opinionW Source NeuralNetwork in output layer for simplicity, and 19 use notationW H neuron to represent neurons in hidden layers. 
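Before deriving the neuron-level opinions, the two SL operators used throughout this chapter can be sketched in code as a continuation of the earlier `Opinion` snippet; this is an illustrative sketch of the Section 2.1 definitions (the function names are assumptions), and the final comment shows only the two-source case of the fusion in Eq. 2.1.

```python
def multiply(x: Opinion, y: Opinion) -> Opinion:
    """Binomial multiplication W_x * W_y (SL counterpart of logical AND)."""
    denom = 1.0 - x.a * y.a
    b = x.b * y.b + ((1 - x.a) * y.a * x.b * y.u + x.a * (1 - y.a) * y.b * x.u) / denom
    d = x.d + y.d - x.d * y.d
    u = x.u * y.u + ((1 - y.a) * x.b * y.u + (1 - x.a) * y.b * x.u) / denom
    return Opinion(b=b, d=d, u=u, a=x.a * y.a)

def avg_fuse(A: Opinion, B: Opinion) -> Opinion:
    """Two-source averaging fusion of opinions about the same variable."""
    if A.u != 0 or B.u != 0:                       # Case I
        denom = A.u + B.u
        b = (A.b * B.u + B.b * A.u) / denom
        u = 2 * A.u * B.u / denom
        return Opinion(b=b, d=1 - b - u, u=u, a=(A.a + B.a) / 2)
    # Case II (both uncertainties exactly zero): equal limit weights assumed here.
    b = 0.5 * (A.b + B.b)
    return Opinion(b=b, d=1 - b, u=0.0, a=0.5 * (A.a + B.a))

# Eq. 2.1 for a network with two (fully trusted) output neurons:
# W_NN = avg_fuse(W_neuron_1, W_neuron_2)
```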
Derivations ofW neuron and W H neuron are introduced in Section 2.2.2 and 2.2.2. We will omit notation A and denote opinion of neural network asW NN from now on. As shown in Fig. 2.2B, DeepTrust quantifies NN’s opinion based on opinion of dataset, network topology, and training loss. Opinion of dataset is assumed given in this work since it should be quantified and provided by the data collector before the delivery of data (and dataset’s opinion and trustworthiness quantification will be explored in future work). We consider multilayer neural networks in this work, and more complicated neural networks such as convolutional neural networks and recurrent neural networks are left out to be discussed in future work. Opinion evaluation for one neuron To better understand the opinion and trustworthiness evaluation for a multi-layered NN, we will first introduce opinion quantification for one neuron with one input and one output. This is later utilized as a foundation of multi-layered NN opinion and trustworthiness quantification by DeepTrust. To calculate the trustworthiness of a NN, we first need the opinions of the data.W X , the opinion of inputX andW Y , the opinion of true labelY are given along with the dataset and used to evaluate the opinion of one neuronN. In this work, without losing generality, we assume opinions of all data pointsW ~ x ’s andW y ’s are the same. The reason is that for realistic datasets, if a data point is damaged or noisy, we may not be able to determine which feature(s) or label is problematic. We would like to note that the size of~ x is greater than and equal to 1 in general, and we take it as 1 here since here we are considering a neuron with only one input. We first take a look at how the neuron works in an ordinary neural network. When training a neural network, the forward pass takes input, weight, and bias to calculate the total net inputnet asweightinput+bias for hidden neuronN. Activation function such as ReLU is applied tonet to calculate outputout of neuronN, which is denoted byy 0 in 20 one neuron case. In backpropagation, the back pass takes the error and backpropagate it all the way back to the input layer as shown in Fig. 2.3. Inspired by this flow, the opinion of neuron,W N , is calculated based on the forward pass and backward pass operations using two opinions: forward opinion of neuronW X N fromX point of view, and backward opinion of neuronW Y N fromY point of view. We can viewW X N andW Y N as advice from sourcesX andY , respectively. To combine these two opinions, the fusion operator is then used to get the final opinion of neuronW N : W N =fusion(W X N ;W Y N ): (2.2) Here in the one neuron case,N is the only neuron in the hidden and output layer, hence afore-mentionedW neuron andW H neuron are the same and calculated asW N , since this neuron is in hidden/output layer. To calculateW X N in Eq. 2.2, we first look at the original neural network and the calculation of net input: net = ~ w~ x +b, where weight ~ w and bias b are initialized randomly. Inspired by this, the forward opinion ofN,W X N , is calculated as follows: W X N =fusion(W ~ w~ x ;W b ); (2.3) where the fusion operator takes addition’s place in thenet calculation. The opinion of product of ~ w and~ x,W ~ w~ x , is: W ~ w~ x =W ~ w W ~ x : (2.4) W X N is calculated regardless of the activation functions since it doesn’t make sense to apply activation functions on opinions. In this sense, our framework is not limited by the linearity or non-linearity of the neural network. 
During the training process, the weight 21 Figure 2.3: Backpropagation in one neuron and opinion update of weight and output. The backpropagation process in neural network training first compares the true label and output given by the neuron, then back propagates the difference tonet, and adjusts the weight accordingly to minimize the error. The weight opinion update process mimics the backprobagation: (i) At the current episode, the opinion of neuron is the combined opinion of forward opinion and backward opinion, which are based on currentW weight , and currentW outputjlabel , respectively. (ii) Then in the next episode, the opinion of neuron will be recalculated by taking updatedW weight and W outputjlabel into consideration. and bias are updated during the backward propagation based on loss. Therefore, as shown in Fig. 2.3, opinions of weight and bias should be updated simultaneously during the training as well. At the beginning,W ~ w andW b are initialized to have maximum uncertainty due to lack of evidence, and later on updated according to the neuron’s output based on the same rule introduced in Eq. 2.5. Backward opinion of neuronW Y N in Eq. 2.2 is an opinion fromY point of view hence it is calculated based on the opinion of true labelW y . In backpropagation, error is the key factor. Similarly, we use the errorjy 0 yj in computation ofW Y Njy , the conditioned backward opinion of neuron, which is equivalent toW y 0 jy , the opinion of neuron’s output y 0 given the true labely. During the training process, based on the opinion quantification from evidence rules introduced in Section 2.1, if output of the neuron,y 0 , is in some tolerance region of true labely, i.e., there exists a small, s.t.jy 0 yj < , we count 22 this as positive evidencer to formulate the opinion ofy 0 giveny: W y 0 jy . Otherwise, it is negative evidences. Since the positive and negative evidences are calculated along with the training process, it will not cause extra computation expense. Intuitively,r ands represent the numbers of outputs that NN predicts correctly and wrongly, respectively, and they are updated during the training. After each update,W y 0 jy = (b y 0 jy ;d y 0 jy ;u y 0 jy ;a y 0 jy ) is then formulated from these evidences according to the opinion quantification from evidence rules. After deriving the conditioned opinion, the backward marginal opinion of N, i.e.,W Y N , is calculated as follows (similar to the calculation of a marginal probability given a conditional probability): W Y N =W y 0 =W y 0 jy W y : (2.5) Opinion propagation in general topology For a neural network (denoted as NN) with multiple inputs, multiple outputs, and multiple hidden layers, final opinionW NN consists of opinions of all neurons in the final layer, i.e., allW neuron ’s in the output layer, each contains forward part and backward part, similar to those of one neuron case. As shown in Fig. 2.2.B, for a neuron in a hidden layer,W H neuron is calculated asfusion(W b ;W ! 1 W x 1 ;:::). If it is the first hidden layer, thenW input = [W x 1 ;W x 2 ;:::] T represents the opinions of data input. If it is the second or latter hidden layer,W input represents the output opinionsW neuron from the previous layer. For a neuron in the output layer, the opinion is calculated as in Eq. 2.2. The forward opinionW X neuron takes opinions of input (i.e., the output of the previous hidden layer), opinions of weight, and opinions of bias into account. 
Similarly,W Y neuron ’s are the backward opinions of neurons in the output layer, each of which is a function of opinions 23 of true labels and opinions of neuron’s output. After evaluating the opinions of all output neurons,W NN is then calculated as the averaging opinion of allW neuron ’s in the output layer. Theorem 2.2.1. Considering all neurons’ opinions instead of only the neurons in the output layer causes opinion duplication. Proof. Let us consider a simple neural network with one hidden neuron and one output neuron. According to the calculation strategy, the opinion of output neuron is calculated as: W neuron =fusion(W X neuron ;W Y neuron ) =fusion(fusion(W b ;W ! W x );W Y neuron ); whereW x is the output opinion of the previous hidden neuron, i.e.,W x = W H neuron . Therefore, the final opinion formula of the neuron in output layer reads: W neuron =fusion(fusion(W b ;W ! W H neuron );W Y neuron ): If we take opinion of the hidden neuronW H neuron again into consideration when calcu- lating the final opinion of this simple neural network, the final opinion equation of the neural network becomes: W NeuronNetwork =fusion(W neuron ;W H neuron ) =fusion(f(W H neuron );W H neuron ); wheref(W H neuron ) represents thatW neuron is a function ofW H neuron . Hence, we can see the above equation counts the opinion of the hidden neuron twice and causes an opinion duplication. Since all the previous layers’ opinions are propagated to the final output layer, the opinion of the output neuron already takes the opinions of hidden neurons and 24 Figure 2.4: General topology example. The first hidden layer contains hidden neuronsN 1 1 and N 1 2 , and the second hidden layer contains hidden neuronsN 2 1 andN 2 2 . input neurons into account. This means we do not need to double count them again in the calculation of the final opinion of the neural network. Here we describe a concrete example of multi-layer neural network opinion evaluation by using DeepTrust. Let us derive an opinion of a neural network with 2 inputs, 2 hidden neurons, and 2 outputs, as shown in Fig. 2.4. The final opinion of the neural network, W NN , is the fused opinion of all output neurons (N 2 1 andN 2 2 ): W NN =fusion(W N 2 1 ;W N 2 2 ): (2.6) W N i j is the opinion ofj th neuron ini th layer. OpinionW N i j is calculated by Eq. 2.2: W N i j =fusion(W X N i j ;W Y N i j ), more specifically: 8 > > < > > : W N 2 1 =fusion(W X N 2 1 ;W Y N 2 1 ); W N 2 2 =fusion(W X N 2 2 ;W Y N 2 2 ); (2.7) where the opinion of each neuron in output layer takes two parts into consideration: forward part fromX point of view, and backward part formY point of view. Since the 25 forward part comes from the previous layers, the calculation formula ofW X N i j is similar to Eq. 2.3, with the multi-source fusion operator [WZ + 17]: 8 > > > > > > > > > < > > > > > > > > > : W X N 2 1 =fusion(W b 2 ;W w 2 11 N 1 1 ;W w 2 21 N 1 2 ); W X N 2 2 =fusion(W b 2 ;W w 2 12 N 1 1 ;W w 2 22 N 1 2 ); W X N 1 1 =fusion(W b 1 ;W w 1 11 x 1 ;W w 1 21 x 2 ); W X N 1 2 =fusion(W b 1 ;W w 1 12 x 1 ;W w 1 22 x 2 ): (2.8) We can clearly see thatW X N 2 1 combines trust information of biasb 2 , weightsw 2 11 andw 2 21 , and neuronsN 1 1 andN 1 2 in the previous layer. Backward opinion of each neuron inNN’s output layer in Eq. 2.7 is calculated similarly to what Eq. 
2.5 states: 8 > > < > > : W Y N 2 1 =W N 2 1 jy 1 W y 1 W Y N 2 2 =W N 2 2 jy 2 W y 1 : (2.9) Opinion quantification of NN’s output DeepTrust not only has the ability to quantify the opinion of a NN in training phase under the assumption that the training data and the training process are accessible, but it can also be deployed in trust quantification of NN’s decision or output when given a pre-trained neural network. This enables DeepTrust with wider usefulness and deeper impact in real-world implementations. For most of the real-world situations, we only have access to a pre-trained NN, such as online machine learning models provided by cloud service providers, and we need to evaluate the trust of a given pre-trained NN and/or its output. To this end, in addition to the training phase, DeepTrust includes opinion and trustworthiness quantification in validation and test phases, under the assumption that the NN’s topology is known. In the validation phase, given a pre-trained NN (along with its topology) and 26 validation data, DeepTrust can quantify opinion and trustworthiness of the pre-trained NN similarly as in Section 2.2.2 in training phase. After calculating the opinion of the NN, the opinion of its output can be calculated similarly as the forward opinion of the NN, which is similar to generating a prediction in the ordinary NN testing phase. Opinion and trustworthiness quantification of NN’s prediction provides an evaluation of input data and NN inner working’s trustworthiness, and is often useful when NN generates overconfident predictions. Besides accuracy, confidence value, this third dimension not only covers the objective data itself, but also consists NN and data’s subjective trust information. Together with accuracy, multi-objective optimization can be formed in future work. 2.2.3 A good NN topology leads to high projected trust probabilities, even when trained with untrustworthy data Opinion and trustworthiness quantification of a multi-layered NN depends on the opinions about the dataset content, the training loss, and the network topology. Since the training loss is highly correlated with the network topology, in this work, we mainly focus on the effect of the opinion about the dataset’s content and the network topology. Given that the topology is more under the control of the designer, we first start with the impact analysis of the network topology. Case study I: experiment setup To investigate the relationships among network topologies (i.e., we only vary the number of hidden layers and number of hidden units in each layer, other hyper-parameters such as the learning rate, activation functions are the same in all experiments), uncertainty degree of data, and opinion of a NN, we conduct a case study with three parts: 27 • Evaluate opinion of NN 1 with topology 784-1000-10 and NN 2 with topology 784-500-500-10, under original MNIST data with max belief opinion and damaged MNIST data with max uncertainty opinion assigned to the damaged data points. Data damage percentage ranges from 10% to 100%. • Evaluate opinion of network with topology 784-x-10, where the number of hidden neurons x ranges from 100 to 2000, under original MNIST data and damaged MNIST data with 10% to 20% data damage. We evaluate in total 20 different topologies in this step. 
• Evaluate opinion of NN with topology 784-{1000}-10, which represents the neural network with 784 neurons in input layer, 10 neurons in output layer, 1000 hidden neurons in total and distributed in 1 to 5 layers (more specifically, the topologies used are 784-1000-10, 784-500-500-10, 784-300-400-300-10, 784-250-250-250- 250-10, and 784-200-200-200-200-200-10). The NNs are trained under original MNIST data and damaged MNIST data with 10% to 20% data damage. The first part of case study I addresses the relationship between opinion about a NN and uncertainty degree of data by comparing the opinion of different topologies under different data damage degrees. A network topology regarded as among the best for the MNIST database (denoted byNN 1 ), has a topology of 784-1000-10 [SSP03] (where 784 neurons in the first layer are used for the input handwritten digit images, each with 28 28 pixels, the 1000 neurons in the second layer are used as hidden neurons, and the 10 neurons in the third layer are used for the 10 output classes, digit 0 to 9). To contrast the opinion quantification for NN 1 , we also consider NN 2 with a topology of 784-500-500-10 (for which the neurons of the middle layer ofNN 1 are distributed equally to two hidden layers each of which with 500 neurons). To evaluate the opinion of NN 1 andNN 2 , we first train them by feeding the training dataset once, one data point at 28 a time, and then evaluate the opinion of trained networks based on their topologies and the training loss. To better realize the impact of topology, the NNs are trained with datasets with full confidence on the trustworthiness of data, i.e., the opinion about the dataset has the maximum belief to reflect minimum uncertainty of 0 regarding the dataset. We denote this maximum belief and zero uncertainty byf1; 0; 0; 0:5g, which representsbelief = 1, disbelief = 0, uncertainty = 0, and base rate = 0:5. This helps our analysis to concentrate on the impact of topology only. In addition to evaluating the opinion ofNN 1 andNN 2 with the highly trustworthy dataset, we also randomly take a subset of training data and flaw the labels by randomly altering them, and then feed the damaged training dataset to NN D 1 and NN D 2 , which have the same corresponding topologies as those ofNN 1 andNN 2 , but with different parameters values (i.e., bias and weights) as the training data are different. Since the training set is damaged by altering some labels, the opinion of training set should be redefined accordingly to account for the damaged data. The opinion of a damaged data point is set to be maximum uncertainty. More precisely, the opinionf0; 0; 1; 0:5g representsbelief = 0,disbelief = 0,uncertainty = 1, and baserate = 0:5. The level of data damage is varied from 0 to 100%. The second and third parts of this case study investigate the relationship between the projected trust probability of NN and its topology. By varying the number of hidden neurons in one hidden layer and the number of hidden layers under the same amount of hidden neurons as described above, we further explore the impact of topology on opinion and trustworthiness. Opinion setup for damaged data follows the same principle as that stated in the first part of this case study. Case study I: experimental results Remark 2.2.1. Higher percentage of damage in the input dataset results in higher levels of trust degradation, however, the exact level is highly topology dependent. 
A good network topology leads to high trustworthiness even when trained with untrustworthy data.

Table 2.1: Comparison of accuracy and projected trust probability between NN_1 (NN_1^D) and NN_2 (NN_2^D).

Data Damage    Accuracy                           Projected Trust Probability
Percentage     784-1000-10    784-500-500-10      784-1000-10    784-500-500-10
0%             90.29%         91.03%              78.46%         67.54%
10%            81.18%         80.74%              60.74%         44.71%
20%            71.77%         73.53%              50.00%         30.34%
30%            67.90%         65.22%              50.00%         25.09%
40%            61.31%         60.51%              50.00%         25.09%
50%            52.60%         53.15%              49.99%         25.10%
60%            50.46%         52.15%              49.99%         25.09%
70%            46.33%         44.40%              49.99%         25.09%
80%            40.08%         40.40%              49.99%         25.09%
90%            35.02%         38.94%              49.99%         25.09%
100%           36.10%         37.19%              49.99%         25.10%

Our experiments confirm this observation, as shown in Fig. 2.5. Fig. 2.5A-C summarize the opinion and projected trust probability comparison between NN_1 and NN_2. The trust probabilities of NN_1 and NN_2 converge to 0.78 and 0.68, respectively. Fig. 2.5D-M show the opinion of NN_1^D. Training NN_1^D with 10% data damage results in a belief value of 0.6. When the damaged data percentage varies from 20% to 100%, the belief of NN_1^D converges to 0.5, with the disbelief increasing its portion as shown in Fig. 2.5E-M. Training NN_2^D with 10% to 100% data damage results in a relatively lower belief value compared to NN_1^D. When the damaged data percentage varies from 30% to 100%, the belief of NN_2^D converges to 0.25. This confirms that for a robust topology on the MNIST dataset such as NN_1, the impact of damage in the dataset is less severe than for a NN with a frail topology (e.g., NN_2) in terms of the belief and projected trust probability.

Figure 2.5: Opinion comparison between NN_1 and NN_2 under undamaged MNIST data. A, Opinion of NN_1 with topology 784-1000-10. B, Opinion of NN_2 with topology 784-500-500-10. C, Projected trust probability comparison between NN_1 and NN_2. D-M, NN_1^D, with the same topology as NN_1 (i.e., 784-1000-10), is trained with damaged data: we randomly take 10% to 100% of the training data and alter labels to introduce uncertainty and noise into the dataset, and set the opinion of each damaged data point to maximum uncertainty, {0, 0, 1, 0.5}. Belief becomes sparser while disbelief becomes denser in D-M, but some belief remains even when the dataset is 100% damaged. N-O, Normalized cumulative belief and disbelief of NN_1^D under 10% to maximum data damage, averaged over 10 runs.
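The data-damage setup used in this case study can be summarized with a short sketch; the helper below is an illustrative reconstruction of the procedure described above, not the exact experiment code, and the opinion tuples follow the {belief, disbelief, uncertainty, base rate} convention used throughout this chapter.

```python
import random

MAX_BELIEF      = (1.0, 0.0, 0.0, 0.5)  # opinion assigned to trusted, undamaged data points
MAX_UNCERTAINTY = (0.0, 0.0, 1.0, 0.5)  # opinion assigned to damaged data points

def damage_dataset(labels, damage_fraction, num_classes=10, seed=0):
    """Randomly alter a fraction of the labels; return (new labels, per-point opinions)."""
    rng = random.Random(seed)
    n = len(labels)
    damaged_idx = set(rng.sample(range(n), int(damage_fraction * n)))
    new_labels, opinions = [], []
    for i, y in enumerate(labels):
        if i in damaged_idx:
            # flip to a different random class and mark the point as fully uncertain
            y = rng.choice([c for c in range(num_classes) if c != y])
            opinions.append(MAX_UNCERTAINTY)
        else:
            opinions.append(MAX_BELIEF)
        new_labels.append(y)
    return new_labels, opinions

# Example: 20% label damage for the MNIST training labels
# train_labels, train_opinions = damage_dataset(mnist_train_labels, damage_fraction=0.2)
```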
To increase the accuracy, a better training strategy is to feed the entire dataset to the NN multiple times, with a mini-batch of data at a time. DeepTrust works with batch-training as well. Remark 2.2.3. Accuracy and trustworthiness are not necessarily correlated. Fig. 2.6 shows the projected trust probability and accuracy comparison of 784-x-10 and 784-{1000}-10 trained under original MNIST data with maximum belief opinion f1; 0; 0; 0:5g, and under 10% and 20% data damage (maximum uncertainty opinion, i.e., f0; 0; 1; 0:5g). We take low data damage percentage here because in real life severely damaged data will not be used to train the models at all. The results confirm that data damage impacts both trustworthiness and accuracy, however accuracy and trustworthiness are not necessarily correlated, i.e., topologies that result in highest accuracy, may not reach the highest trustworthiness levels. Under original and slightly damaged MNIST data, adding more hidden neurons results in higher projected trust probabilities when 32 the number of layers is fixed, however, the trust probability increasing rate tends to slow down as more neurons are added. When the data damage is higher, e.g., 20%, adding more neurons in one layer doesn’t lead to a significant increase in the trust outcome. On the contrary, while keeping the total number of hidden neurons fixed, changing the number of hidden layers strongly impacts the projected trust probability. Therefore, when the dataset is damaged and the training resource is limited, varying the number of hidden layers rather than the number of hidden neurons is a more efficient strategy to obtain higher levels of projected trust probability outcome. 2.2.4 Uncertainty is not always malicious when evaluating the opin- ion and trustworthiness of a neural network In SL, the lack of confidence in probabilities is expressed as uncertainty mass. Uncer- tainty mass represents a lack of evidence to support any specific value [Jøs16]. An important aspect of uncertainty quantification in the scientific literature is statistical uncertainty. This is to express that the outcome is not known each time we run the same experiment, but we only know the long-term relative frequency of outcomes. Note that statistical uncertainty represents first-order uncertainty, and therefore is not the same type of uncertainty as the uncertainty mass in opinions, which represents second-order uncertainty [Jøs16]. The impact of the topology on opinion quantification of a NN is discussed in the previous section, and the impact of the opinion of data is explored as follows. Case study II: experiment setup To address the impact of the opinion of data on opinion and trustworthiness of a neural network, a case study is designed as follows: 33 • Construct a simple neutral network NN S 1 with topology 3-1-1. Set opinion of training dataset to be six cases: max belief, max disbelief, max uncertainty, neutral, equal belief & disbelief, and more belief than disbelief. Detailed opinion setup is described in Case I to Case VI. • Construct another simple neutral networkNN S 2 with same setup and same training data asNN S 1 , butNN S 2 has 10 hidden neurons. NN S 2 uses a more complicated topology, i.e., 3-10-1, to realize same function asNN S 1 . Both neural networks are trained under the same process and the opinions of both are evaluated after training. This case study focuses on the impact of the opinion of data, hence simple topologies are chosen without loss of generality. 
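To make Remark 2.2.2 concrete, a minimal selection rule along these lines could look like the following sketch; the one-percentage-point accuracy tolerance is an illustrative choice rather than a value prescribed by DeepTrust.

```python
def select_model(candidates, acc_tolerance=0.01):
    """candidates: list of (name, accuracy, projected_trust_probability).
    Prefer higher accuracy; when accuracies are within the tolerance,
    use the projected trust probability as the tie-breaker."""
    best = max(candidates, key=lambda c: c[1])
    near_ties = [c for c in candidates if best[1] - c[1] <= acc_tolerance]
    return max(near_ties, key=lambda c: c[2])

# Values from Table 2.1 at 50% data damage: (accuracy, projected trust probability)
models = [("784-1000-10", 0.5260, 0.4999), ("784-500-500-10", 0.5315, 0.2510)]
print(select_model(models))  # -> picks "784-1000-10": accuracies are near-tied, trust decides
```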
Starting with zero bias may at times generate better outcomes than cases with neutral information. To present the true meaning of this statement, we evaluate the impact of the degree of opinion confidence in the training dataset on the opinion and trustworthiness of NN. We consider the following six cases of opinion for the training dataset: • Case I - Max belief: set the opinion of training datasetOpinion Data to be maximum belief, i.e.,f1; 0; 0;ag. This means the training of NN is performed with the highest level of data trustworthiness. • Case II - Max disbelief: Opinion Data is set to be maximum disbelief, i.e., f0; 1; 0;ag, which means that the dataset is untrustworthy. • Case III - Max uncertainty: Opinion Data is set to be maximum uncertainty, i.e., f0; 0; 1;ag. This setting is used when we do not know whether we can trust the dataset due to a lack of information. • Case IV - Neutral: Opinion Data is set to be neutral:f1=3; 1=3; 1=3;ag. This is similar to Case III in the sense that we lack information on the dataset, however, it 34 presents scenarios where the levels of uncertainty and trustworthiness of data are in the same level. • Case V - Equal belief & disbelief: Opinion Data is set to bef0:5; 0:5; 0;ag. This opinion represents that the belief mass and disbelief mass are both equal to 0:5 with minimum uncertainty of 0, which is the scenario that an agent cannot generate a certain opinion, but there is no uncertainty formulation. • Case VI - More belief than disbelief: Opinion Data is set to bef0:75; 0:25; 0;ag to compare with Case III and further investigate the importance of uncertainty in opinion quantification. This setting contains 3 times more belief mass than disbelief and zero uncertainty. All base rates are set to be 0:5 to represent an unbiased background. Note that cases III and IV are more realistic, whereas cases I and II are more on the extreme sides. Case study II: experimental results Remark 2.2.4. Disbelief hurts the trust the most while uncertainty is not always mali- cious. Our experimental results in Fig. 2.7 confirm that for the same topology and training loss, the afore-mentioned cases generate different levels of projected trust probability in the outcome of the trained NN. According to our experiments, NN in Case III results in a much higher projected trust probability than in Case IV , V and VI, which leads to the conclusion that disbelief in training dataset hurts the final trust probability of NN more than uncertainty. In addition, lack of uncertainty measurement in Case V and VI leads to low projected trust probability, however, this dilemma occurs frequently in real world. It is therefore recommended to set the opinion of a dataset to maximum uncertainty in cases where belief, disbelief, and uncertainty are at similar levels, due 35 Figure 2.6: Projected trust probability and accuracy comparison of 784-x-10 and 784- {1000}-10 under original and damaged MNIST data. A, Projected trust probability of 784-x-10 reaches to 0:8 when increasing the number of hidden neurons from 100 to 2000. Topology highly impacts the projected trust probability, especially when rearranging a certain number of hidden neurons, in a various number of hidden layers. Accuracy hits the highest value with topology 784-2000-10, and the second-best accuracy is given by 784-1400-10. B, Compared to other topologies, the projected trust probability of 784-1000-10 is the highest with value 0:78, while topology 784-500-500-10 outperforms others in terms of accuracy. 
C, Under 10% data damage, projected trust probability of 784-x-10 reaches 0:64 when increasing the number of hidden neurons from 100 to 2000. D, Topology 784-1000-10 outperforms others in both accuracy and trust. E, Under 20% data damage, the projected trust probability of 784-x-10 settles at 0:5 when increasing the number of hidden neurons from 100 to 1900, while 784-2000-10 provides the highest trust probability. F, Topology 784-1000-10 results in the highest trust probability, while topology 784-500-500-10 reaches the highest accuracy. 36 Figure 2.7: Projected trust probability and loss comparison ofNN S 1 andNN S 2 . A-B, Pro- jected trust probability comparison ofNN S 1 andNN S 2 in afore-mentioned cases. BothNN S 1 andNN S 2 are trained under the same process with the same dataset. Training loss comparison is shown in C. The results ofNN S 1 andNN S 2 are similar. NN S 1 andNN S 2 reach a certain trust probability level with different speeds, more precisely,NN S 1 reaches a desired projected trust probability level faster. to lack of information. Furthermore, if an AI decision-maker cannot provide a result with full confidence, uncertainty mass is recommended to be generated along with the decision. We use two simple neural networks,NN S 1 andNN S 2 , both of which implement a function that does binary OR on the first two inputs and ignores the third input, using the 3-1-1 and 3-10-1 topologies, respectively. The simplicity of the function helps us focus on the impact of the opinion of dataset, as both NN S 1 and NN S 2 would very well implement it, given a sufficient number of training episodes. The projected trust probability comparison is illustrated in Fig. 2.7. The projected trust probabilities of trainedNN S 1 andNN S 2 are different in six cases as expected: • Case I: Both NN S 1 and NN S 2 are highly trustworthy for such a scenario. The reasons are twofold: First of all, the simple topologies of NN S 1 and NN S 2 are sufficient to realize the simple two-input OR functionality. This is confirmed by the fact that belief sharply saturates to a maximum and the training loss drops to a minimum. Secondly, training is done with a trustworthy dataset, which contains no noise, glitch or uncertainty. 37 • Case II: BothNN S 1 andNN S 2 are untrustworthy because of the highly untrustwor- thy dataset. Therefore, the trust in the outcome of the network shows maximum disbelief establishment. Remark 2.2.5. Use data with maximum uncertainty opinion to train a NN, then belief mass of this pre-trained NN can be nonzero. • Case III: The results for this case are depicted in Fig. 2.7A-B. They confirm that after all, there is hope for belief, even in the case of maximum uncertainty. The opinions ofNN S 1 andNN S 2 have nonzero belief values, even when the opinion of dataset is set to have maximum uncertainty. This result is helpful in data pre- processing: uncertain data should not be filtered out since uncertainty has its own meaning. Even if the data is fully uncertain, there is still hope after all for belief. • Case IV: The opinion of training dataset is neutralf1=3; 1=3; 1=3; 0:5g, which means belief, disbelief, and uncertainty of the dataset are set to be equal. This neutrality in terms of similar levels of belief, disbelief, and uncertainty in dataset can damage the projected trust probability in the outcome. The results shown in Fig. 
2.7 confirm lower levels of projected trust probability when compared to Case III, which leads to the conclusion that if no information about dataset is given, starting with total uncertainty is actually better than with biased opinion, and even a neutral one. • Case V: In such scenarios, the dilemma of belief and disbelief is brought to maximum when belief = disbelief = 0:5 with zero uncertainty. The results, in this case, settle in much lower belief and projected trust probability values compared to those of Case III and IV . This reveals that the neutral case with uncertainty as in Case IV is much better than this neutral case without uncertainty measurement. 38 • Case VI: The results in this case are comparable to Case III in terms of trust since the belief mass of training data is three times more than disbelief, and the belief contributes the most to the results. However, lack of uncertainty measure in this uncertain case (there exists both belief and disbelief, and none of them plays a major role) leads to low projected trust probability in the end. Although similar to those ofNN S 1 , the results ofNN S 2 show lower projected trust probability levels upon convergence for cases I, III, IV and VI, while showing the same level of convergence, but a slower rate for the rest of the cases. 2.2.5 Did you trust those who predicted Trump to lose in 2016 elec- tion? A significant amount of machine learning-related projects or research activities involve the utilization of pre-trained NNs. The training process is time-consuming, expensive, or even inaccessible. In any case, one crucial question to answer is how much we should trust in the predictions offered by those pre-trained NNs. DeepTrust may shed light on this by providing opinion and trustworthiness quantification of NN’s prediction. To further show the usefulness of DeepTrust, we apply DeepTrust to 2016 election prediction and quantify the opinion (and projected trust probability) of the two major predictions back in 2016: “Trump wins" and “Clinton wins". 2016 presidential election prediction has been called “the worst political prediction” and the erroneous predictions hand us an opportunity to rethink AI political prediction. In this case study we use 2016 presidential pre-election poll data [Fiv16], which contains more than 7000 state-wise pre-election polls conducted by CNN, ABC News, Fox News, etc. We train a NN with structure of 1-32-32-1 to predict the winner between Hillary Clinton and Donald Trump. Input of the NN is state and the output is Clinton vs. Trump. The training accuracy saturates to 63:96% and the trained NN predicts Clinton 39 Figure 2.8: NN opinion results of 2016 election prediction. A-B, Opinion comparison of presidential election predictors NN 1-32-32-1 and NN 9-32-64-32-1. A, NN 1-32-32-1 is trained under original pre-election poll data. The projected trust probability of this NN in validation phase reaches 0:38, and its opinion reachesf0:38; 0:60; 0:02; 0:13g. B, NN 9-32-64-32-1 is trained under enriched pre-election poll data. The opinion of this NN isf0:71; 0:26; 0:03; 0:13g, which has higher belief value and results in more trustworthy predictions. as president by winning 38 states and 426 votes, which is consistent with most of the presidential election predictors back in 2016. 
To quantify the opinion and trustworthiness of this trained NN, we use the 2016 presidential election results in the validation phase based on the assumptions that (i) we are given a trained NN which predicts Clinton to win the election, (ii) the election data has the maximum belief opinion off1; 0; 0; 0:5g because the true election result is a trustworthy fact. The opinion of the NN 1-32-32-1 is shown in Fig. 2.8A. In validation phase, the projected trust probability of this NN is 0:38 with low belief value of 0:38. After calculating the opinion in validation phase, opinion of this NN’s output is quantified in test phase. Input of the NN in test phase has maximum belief value because of the maximum trustworthiness of voters in real election. The opinion of NN’s output isf0:21; 0:77; 0:02; 0:03g, which results in 0:21 projected trust probability. By utilizing DeepTrust, the opinion and trustworthiness of this NN presidential election predictor is quantified and we show that its output, “Clinton wins presidential election”, is untrustworthy. To further show the usefulness and effectiveness of DeepTrust, we quantify the opin- ion and trustworthiness of a NN which predicts Trump as winner in presidential election, 40 and verify that this result is more trustworthy. The 1-32-32-1 NN predictor results in wrong prediction because of the untrustworthy pre-election poll data. Multiple factors related to pre-election poll data such as “shy Trumpers”, lack of voters’ detailed informa- tion such as race, sex, education, income, etc. might have resulted in untrustworthiness and uncertainty in the whole prediction process. To present that this could have been avoided, had DeepTrust been used, we enrich the dataset by adding afore-mentioned detailed information of voters, and construct a dataset with 9 features: state, poll sample size, percentage of black, white, Latino, male, female, percentage of bachelor degree, and average household income. The NN predictor we use for this enriched dataset has the structure of 9-32-64-32-1, and its training accuracy reached 67:66% and predicts Trump as winner by wining 36 states and 336 votes. This result is more accurate than the previous one and closer to the true 2016 election result. This 9-32-64-32-1 NN should be more trustworthy, and we show that DeepTrust verifies this claim. The opinion of NN 9-32-64-32-1 is quantified in validation phase, and the results are shown in Fig. 2.8B. The projected trust probability of this NN reaches 0:71 with 0:70 belief value. In testing phase, the opinion of this NN’s output, “Trump wins presidential election”, is f0:50; 0:46; 0:04; 0:01g with 0:5 projected trust probability, which verifies that this result is relatively more trustworthy than previous “Clinton wins” result given by the 1-32-32-1 NN. 2.3 Trust in Convolutional Neural Networks Within the recent rapid developments of deep learning (DL), the convolutional neural networks (CNNs) emerged as a critical component in various computer vision and machine learning tasks including the novel view synthesis [ZTS + 16], SARS-CoV-2 vaccine design [YBN21], or perception and decision-making modules of autonomous 41 systems [GMJ19, ZKSE15, LYU18, PMB + 21]. Although numerous evaluation metrics for the analysis of neural networks, such as the accuracy, precision-recall, and area under the curve (AUC) exist, they fail to provide a measure of trust / trustworthiness when quality of the dataset and the degree of noise are unknown. 
Consequently, to overcome the challenges related to unknown corrupted or imprecisely curated training datasets as well as to account for the intrinsic uncertainty of dynamic environments not captured in training datasets, the trustworthiness and fairness become critical evaluation metrics for characterizing the performance of DL models [JKGG18, Esh21, DYZH20, MMS + 21]. Fairness-related and trust-related research has received an increasing amount of attention in recent years; these efforts include fairness-aware datasets and models [HBD + 21, MMS + 21, BLCGPO20], as well as trust-aware models in various application areas [DHX + 17, BRG16, CYZ + 21]. Frameworks for fairness and trust quantification for DL and machine learning models remain sparse. Trustworthiness quantification in DL models has emphasized the limitations of relying solely on accuracy when dealing with unknown noise sources. For instance, recent work from Google research proposes to quantify the trust score of a classifier’s prediction on a specific test sample by measuring the agreement between the classifier and a modified nearest-neighbor classifier [JKGG18]. This work does not consider the inner architecture of a classifier nor the trustworthiness of datasets. The DeepTrust [CNB20a] framework provides analytical strategies for quantifying the trustworthiness of deep neural networks (DNNs) by exploiting a binomial subjective opinion [Jøs16]. DeepTrust takes into consideration the trustworthiness of data, the specifics of the DNN architecture, and the learning process. While DeepTrust quantifies the trustworthiness of classical DNNs and is innovative, it is not directly applicable to CNNs. To overcome these shortcomings, we develop a trustworthiness evaluation framework for CNN building blocks and evaluate the trustworthiness of several popular CNN 42 architectures. We further propose a trust-aware CNN block with trust-based max-pooling layer, which, unlike traditional pooling layers work on feature maps, but work on the evaluated trustworthiness maps to achieve higher trustworthiness and accuracy after training. The main contributions of this work are as follows: • We develop a general analytical framework for quantifying the trustworthiness of CNNs with conv, max-pooling, and fully connected layers. • We evaluate and compare the trustworthiness of popular CNN building blocks in computer vision models, e.g., VGG [SZ14], ResNet [HZRS16], etc. • We propose a max-trust-pooling layer which operates based on the trust map instead of the feature map. In addition, we empirically show that with this max- trust-pooling, the trained CNNs achieve higher trustworthiness and accuracy and possess specific noise-tolerant properties. Trust modeling. To model and quantify trust / trustworthiness in various AI decisions, early efforts led to the Dempster-Shafer theory (DST) and more recently to the subjective logic (SL) formalism. In SL, the probability density function is used to quantify a so- called opinion [Jøs16]. In order to form an opinion on a subject, the theory of belief functions, DST, is used for modeling epistemic uncertainty – a mathematical theory of evidence [Sha76]. However, the corresponding fusion operators in DST are controversial and confusing [Sma04]. Therefore, we propose to use the flexible operators in SL for trust and opinion calculations. Note that the core of our work is to quantify the trust in CNNs and propose trust-aware CNN building blocks, which is significantly beyond the realm of SL. Trust in DL. 
Several evidence theories and trust propagation methods in the online reputation systems and network systems are proposed in the literature [GKRT04,SLF + 14, 43 UKD + 19]. However, these methods are not applicable to the field of AI and DL. Uncer- tainty, on the other hand, has been studied in DL research, like uncertainty propagation in deep neural networks in [TJK18], piece-wise exponential approximation in [AN11], and evidential DL in [SKK18]. DeepTrust [CNB20a], an SL-inspired framework, is proposed to evaluate the trustworthiness of DNNs, however, trust is not embedded in the network architecture or utilized in inference. In contrast, in this work, we propose a trust framework for CNNs and design a new trust-based max-pooling function to improve CNNs’ performance in some cases. We leverage trust information in the training and inference of CNNs. In addition, one of the biggest limitations of DeepTrust mentioned by the authors is the requirement of data’s trustworthiness. However, we don’t have this limitation. Uncertainty quantification. Uncertainty quantification (UQ) is a well-studied field in deep learning and machine learning. Generally speaking, there are two sources of uncertainty [GTA + 21, HW21], namely model uncertainty and data uncertainty. Model uncertainty is known as epistemic uncertainty and is reducible with infinite data. Data uncertainty is known as aleatory uncertainty and usually is irreducible, such as label noise or attribute noise. Bayesian approximation such as variational inference [HVC93, BB98, GG16], and ensemble learning techniques such as weight sharing [GIG17] are two popular methods in UQ. In Bayesian methods, model parameters are modeled as random variables, and the parameters are sampled from a distribution in a single forward pass [APH + 21]. In ensemble methods, there are multiple models, and the prediction is the combination of several models’ outputs. In contrast, our framework works with one neural network model and collect evidence for each parameter to measure an opinion (which includes belief mass, disbelief mass, and uncertainty mass). However, our uncertainty mass in opinion is a second-order uncertainty, while epistemic and aleatory uncertainty are first-order uncertainties. More specifically, second-order uncertainty 44 represents a probability density function (PDF) over the first-order probabilities (e.g, p(x)). According to the Beta binomial model in SL, a binomial opinion is equivalent to a Beta PDF (Beta(p(x))) under a specific bijective mapping. The density expresses where the probabilityp(x) is believed to be along the continuous interval [0; 1]. The probability density, is then be interpreted as second-order probability. This interpretation is the foundation for mapping high probability density in Beta PDFs into high belief mass in opinions, therefore, flat probability density in Beta PDFs is represented as uncertainty in opinions [Jøs16]. Since this evidence-based uncertainty reflects the vacuity of information, it is reducible as we collect information during training. The uncertainty mass in opinions is an evidence-based uncertainty and can be char- acterized by the spread of the Beta or Dirichlet PDFs [SZCY20, CLT + 21]. Perhaps the closest uncertainty notion to our work is grounded in a recently developed family of models: Dirichlet-based uncertainty family (DBU) [KCZ + 21]. Evidential deep learning (EDL) [SKK18] is one of the works in DBU models, proposed to quantify classification uncertainty. 
Given a standard neural network classifier, the softmax output of the clas- sifier for a single sample is interpreted as a probability assignment over the available classes. The density of each of such probability assignments is represented by a Dirichlet distribution, and it models second-order probabilities and uncertainty. It has been shown that data, model, and distributional uncertainty, can all be quantified from Dirichlet distributions; and these DBU measures show outstanding performance in the detection of OOD samples [MG18, SKK18, CLT + 21]. In our experiments, we find that OOD and adversarial samples are less trustworthy compared to clean in-distribution data, and this could be further explored in future works for OOD detection and generalization studies. 45 fusion ( * ) conv ( * ) Trust map, layer Feature map, layer Trust kernal Kernal Trust map, layer Feature map, layer 6 5 3 1 6 6 5 3 1 .1 .4 .2 .2 Max pooling .4 0.1 0.4 0.2 0.2 Feature map after max pooling Trust map after max pooling Max pooling a. b. Flatten Dense Dense ... Feature values Opinion values ... ... Figure 2.9: Trustworthiness quantification in conv layer and max-pooling layer. Trust calculation and feature calculation are accomplished at the same time. In conv layer, feature calculation includes a convolution calculate as shown in (a), and trust calculation includes a fusion calculation done in parallel as shown in (b). The resulting feature map and trust map have the same shape. A max-pooling layer with 22 window size then takes the feature map in layerl + 1 and outputs the maximum feature value (e.g., 6) in the window. The corresponding cell in the trust map contains the trust value 0:4, which is the trust value of cell 6 in feature map. 2.3.1 Trustworthiness Evaluation in CNNs We introduce our opinion and trustworthiness quantification framework for CNN building blocks, i.e., conv and pooling layers. We demonstrate our trustworthiness evaluation with a simple CNN architecture as shown in Fig. 2.9. which consists of one conv layer using same padding method and one 22 classical max-pooling layer followed by fully-connected layers. We perform trustworthiness evaluation along with neural network training. Fig. 2.9a shows a normal CNN calculation and Fig. 2.9b shows the trustworthiness calculation. In each layer, there are ordinary feature values saved in cells/neurons and weights, as well as trust values and opinions corresponding to cells/neurons and weights. We denote a neuron or a cell by N, and its feature and opinion/trust values are denoted byC andW N , respectively. 46 Opinion and Trust Evaluation in CNNs The trust evaluation is dependent on opinion quantification. To determine the opinion of a CNN, we first evaluate the opinion of output neurons. On a high level, the opinion of a CNN comes from opinions of the output neurons, i.e., neurons in the output layer [CNB20a]. We denote the opinion of a CNN asW CNN and the opinion of an output neuron asW neuron (orW N in short). With output neurons’ opinions known, we calculate the opinion of a CNN by combining or fusing all output opinions together utilizing the average fusion operator [Jøs16]: W CNN =fusion(W neuron 1 ;:::;W neuronn ); (2.10) wheren is the total number of output neurons. Generally speaking, the fusion operators combine the opinions together and take care of the conflict information expressed in these opinions. 
Opinion Evaluation of a Single Neuron

A single neuron in a neural network takes inputs and weights and produces an output during the forward propagation, while in the backward propagation the weights are updated based on gradients. Inspired by this process, the opinion calculation of a single neuron is quantified in a procedure called opinion propagation. As in classical neural networks, opinion forward propagation is nothing but another "feature" propagation, where the feature is an opinion. Each weight is attached with an opinion as well. In opinion backward propagation, the opinions of the weights are updated. For an output neuron, the opinion ($W_{neuron}$) combines the forward-passing opinion ($W^F_N$) and the backward-updating opinion ($W^B_N$):

$W_{neuron} = \mathrm{fusion}(W^F_N, W^B_N),$   (2.11)

where $N$ is short for neuron. For a hidden neuron, the opinion calculation is simpler, as it only contains the forward-pass opinion [CNB20a]. In the following sections, we introduce the forward opinion propagation to calculate $W^F_N$, and the backward opinion calculation to quantify $W^B_N$ and the opinions of weights $W_w$ (where $w$ is short for weight).

Forward Opinion Propagation

A neuron $N^l$ in layer $l$ takes input from the previous layer, and we denote the opinion of an input from the previous layer by $W^F_{N^{l-1}}$. For neurons connected to the input layer, $W^F_{N^{l-1}}$ is the opinion of the input features ($W_x$; we introduce how to initialize it in Sec. 2.3.4), and for neurons in other layers, $W^F_{N^{l-1}}$ is the calculated opinion of the neuron in the previous layer. Note that only the neurons in the output layer have backward opinions; neurons in all hidden layers only have forward opinions. In this section, we study frequently used layers in CNNs, namely the fully connected layer, the conv layer, and the pooling layer. The opinion evaluation in these architectures is different. We use the two examples shown in Fig. 2.9 and Fig. 2.10 to illustrate the framework.

Figure 2.10: Opinion calculation in a dense layer. Layer $l$ is a dense layer and each neuron in this layer has a feature value and a corresponding opinion and trust value. Both the opinions of neurons and the opinions of weights are used to calculate the forward opinion of a neuron in the next layer using Eq. 2.12. If the next layer is the output layer, the backward opinion is also calculated using Eq. 2.15.

Forward propagation in fully connected layer. Given a neuron $i$ in layer $l$, we assume its feature value and forward opinion are $C^l_i$ and $W^F_{N^l_i}$, respectively, as shown in Fig. 2.10. To calculate the opinion of a neuron $j$ in the next layer, we follow [CNB20a] and take inspiration from the ordinary forward propagation calculation in dense layers, e.g., $C^{l+1}_j = f\big(\sum_i w_{i,j} C^l_i\big)$, where $f(\cdot)$ is an activation function and the $w_{i,j}$ are weights.
The opinion calculation of neuron $j$ in layer $l+1$ takes a similar form:

$W^F_{N^{l+1}_j} = \mathrm{fusion}\big(W_{w^l_{1,j} N^l_1}, \ldots, W_{w^l_{n,j} N^l_n}\big),$   (2.12)

where $W_{w^l_{i,j} N^l_i} = W_{w^l_{i,j}} \cdot W^F_{N^l_i}$, for all $i, j$, is an opinion multiplication. The opinion $W_{w^l_{i,j}}$ represents the opinion of the weight connecting the $i$-th neuron in layer $l$ to the $j$-th neuron in layer $l+1$. To replace the addition operation in $\sum_i w_{i,j} C^l_i$, we use the fusion operator $\mathrm{fusion}(\cdot)$. Neurons in hidden layers only have forward opinions, which are updated along with the training process, while for neurons in the output layer both the forward opinion $W^F_N$ and the backward opinion $W^B_N$ are used to calculate the final opinion of an output neuron $W_{neuron}$. We introduce the backward opinion calculation in Sec. 2.3.1.

Forward propagation in conv layer. The operations in a conv layer are nothing more than multiplication and addition; therefore, the forward propagation in a conv layer is very similar to that in a fully connected layer. As shown in Fig. 2.9b, we calculate the opinion of a neuron in a conv layer by taking inputs from one filter window, e.g., an $m \times m$ filter, and multiplying the corresponding weights. Formally, the forward opinion of a neuron/cell $j$ in layer $l+1$ reads:

$W^F_{N^{l+1}_j} = \mathrm{fusion}\big(W_{w^j_1 N^l_1}, \ldots, W_{w^j_{m \times m} N^l_{m \times m}}\big).$   (2.13)

Forward propagation in max-pooling layer. Classical pooling layers are sometimes sandwiched between conv layers to compress the amount of data and parameters. Their operations are similar to those of conv layers, except that the core of the pooling function is to take the maximum value or the average value of the corresponding positions, for maximum or average pooling, respectively. Max-pooling outputs the maximum value in the pooling window (e.g., a 2×2 window), and its corresponding opinion reads:

$W^F_{N^{l+1}_i} = \max_{feature}\big(W^F_{N^l_{i,1}}, W^F_{N^l_{i,2}}, W^F_{N^l_{i,3}}, W^F_{N^l_{i,4}}\big).$   (2.14)

Fig. 2.9 shows that the max-pooling takes input opinions $(W^F_{N^{l+1}_{i,1}}, W^F_{N^{l+1}_{i,2}}, W^F_{N^{l+1}_{i,3}}, W^F_{N^{l+1}_{i,4}})$ from a 2×2 pooling window in layer $l+1$ and outputs $W^F_{N^{l+2}_i}$, the opinion of the $i$-th cell in layer $l+2$.
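The following sketch shows one way to implement the forward opinion pass of Eqs. 2.12-2.14, reusing the opinion tuples (b, d, u, a) and the `average_fusion` helper from the earlier snippets. The opinion product used for $W_w \cdot W^F_N$ follows the standard SL binomial multiplication (AND) operator as we recall it; all function names are our own.

```python
def opinion_product(Wx, Wy):
    """Binomial opinion multiplication (SL AND operator), used for W_w * W^F_N."""
    bx, dx, ux, ax = Wx
    by, dy, uy, ay = Wy
    a = ax * ay
    d = dx + dy - dx * dy
    b = bx * by + ((1 - ax) * ay * bx * uy + ax * (1 - ay) * ux * by) / (1 - ax * ay)
    u = ux * uy + ((1 - ay) * bx * uy + (1 - ax) * ux * by) / (1 - ax * ay)
    return b, d, u, a

def dense_forward_opinion(input_opinions, weight_opinions):
    """Eq. 2.12: fuse the products of weight opinions and incoming neuron opinions."""
    products = [opinion_product(Ww, Wn) for Ww, Wn in zip(weight_opinions, input_opinions)]
    return average_fusion(products)          # fusion replaces the summation

def conv_forward_opinion(window_opinions, kernel_opinions):
    """Eq. 2.13: same as the dense case, restricted to one m x m filter window."""
    return dense_forward_opinion(window_opinions, kernel_opinions)

def maxpool_forward_opinion(window_opinions, window_features):
    """Eq. 2.14: the pooled opinion follows whichever feature value wins max-pooling."""
    winner = max(range(len(window_features)), key=lambda k: window_features[k])
    return window_opinions[winner]
```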
Backward Opinion Propagation

In the classical neural network training process, model parameters are updated in back propagation. Inspired by this, we update the parameter opinions $W_w$ and the backward neuron opinions $W^B_N$ in the backward opinion propagation phase.

Update parameter opinion. Opinions of weights (and biases) are updated simultaneously during training. In the beginning, parameters are initialized with maximum-uncertainty opinions (i.e., the uncertainty mass takes the value 1) due to the lack of evidence, and later on they are updated based on the partial derivative of the cost function, $\frac{dJ}{dw}$. Based on evidence theory [CNB20a], if a gradient is in a tolerance range, i.e., $\left|\frac{dJ}{dw}\right| \le \epsilon$ (where $\epsilon$ is a hyperparameter), we count it as a positive evidence $r$ to formulate the opinion $W_w$. Otherwise, it contributes to a negative evidence $s$. In simpler terms, if a weight is updated by a larger step, this contributes to lower belief and lower trustworthiness. Intuitively, $r$ and $s$ reflect the magnitude of the parameter change between the current value and the optimal value: positive evidence indicates that the current parameter values are getting close to the optimal parameters. With the evidence quantified, the $W_w$'s are updated. The opinion evaluation and evidence collection at the neuron level also provide future possibilities to incorporate memory in trust quantification.

Update output neuron opinion. Besides the parameter opinions, the backward opinion $W^B_N$ of a neuron in the output layer needs to be updated during the training process. It is formulated based on the absolute error $|y' - y|$ between the neuron's output $y'$ (i.e., the confidence value after the softmax function [GBC16]) and the one-hot encoded ground-truth label $y$. The evidence for the conditional backward opinion of a neuron ($W_{y'|y}$) on each update is determined as follows: a positive evidence $r$ is counted when $|y' - y| < \epsilon$, and a negative evidence $s$ is counted when $|y' - y| \ge \epsilon$. After calculating $W_{y'|y}$, we update $W^B_N$ as follows:

$W^B_N = W_{y'} = W_{y'|y}\, W_y,$   (2.15)

where $W_y$ is the opinion of the true label $y$, initialized based on the quality of the dataset; e.g., for a dataset without label noise, $W_y$ could take the maximum belief mass. With the backward opinion updated, we update the output neurons' opinions following Eq. 2.11, as demonstrated in Fig. 2.10.
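A possible implementation of the evidence bookkeeping described above is sketched below. Gradients within the tolerance ε add positive evidence to a weight's opinion, output errors within ε add positive evidence to the output neuron's backward opinion, and Eq. 2.11 then fuses the forward and backward opinions. We approximate the combination in Eq. 2.15 with the opinion product from the previous sketch, which is a simplification of the conditional-opinion update; all names and the choice of ε are illustrative.

```python
class OpinionTracker:
    """Accumulates positive/negative evidence and exposes the resulting opinion."""
    def __init__(self, a=0.5, W=2.0):
        self.r, self.s, self.a, self.W = 0.0, 0.0, a, W   # starts at maximum uncertainty

    def observe(self, positive):
        if positive:
            self.r += 1
        else:
            self.s += 1

    def opinion(self):
        k = self.r + self.s + self.W
        return self.r / k, self.s / k, self.W / k, self.a

def update_weight_opinion(tracker, grad, eps=0.05):
    """Small gradient -> positive evidence for the weight (parameter close to its optimum)."""
    tracker.observe(abs(grad) <= eps)

def update_output_backward_opinion(tracker, y_pred, y_true, W_y, eps=0.1):
    """Error-based evidence for W_{y'|y}, then combine with the label opinion W_y (Eq. 2.15, simplified)."""
    tracker.observe(abs(y_pred - y_true) < eps)
    return opinion_product(tracker.opinion(), W_y)

def output_neuron_opinion(W_forward, W_backward):
    """Eq. 2.11: fuse the forward and backward opinions of an output neuron."""
    return average_fusion([W_forward, W_backward])
```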
Figure 2.11: Max-trust-pooling layer. Different from the max-pooling layer, which operates on feature maps, a max-trust-pooling layer generates its output based on the trust map. In this example, with a 2×2 window size, a max-pooling function outputs the maximum value in the feature window, which is 6, with a trust value of 0.1. A max-trust-pooling function in this case outputs the feature value 5, because it has the maximum trust value of 0.4. This demonstrates the difference between max-pooling and max-trust-pooling.

2.3.2 TrustCNet Framework

We first propose a novel trustworthiness-based max-pooling (max-trust-pooling) layer. Then, we take a CNN block and substitute the classic max-pooling layer with our max-trust-pooling layer to build our TrustCNet block, a novel trust-aware CNN block.

Max-trust-pooling

Pooling layers usually follow conv layers and take the average or maximum value in a window based on feature values. With opinion and trust evaluation, a conv layer has a trust map in addition to the feature map, as shown in Fig. 2.11. Besides feature-based pooling functions, trust-based pooling comes naturally. Therefore, we propose max-trust-pooling, a new pooling-layer function based on trustworthiness values instead of feature values. Classical max-pooling selects the maximum values from filter windows based on input values to extract the most important features. We argue that, with trustworthiness considerations, features can instead be selected according to their trust values to improve both the trustworthiness of the CNN and its accuracy under certain circumstances. We can interpret this as choosing the most trusted item in every pooling window.

A comparison between max-pooling and max-trust-pooling is shown in Fig. 2.11. Given a feature map $C^l$ and a trust map $W_{N^l}$ in layer $l$, a 2×2 max-pooling function selects the maximum feature value, so the resulting cell in the next layer contains a feature of value 6 whose corresponding trust value is 0.1. If we replace the max-pooling with our proposed max-trust-pooling, the pooling function instead looks into the trust map, and the resulting cell in the next layer contains a trust value of 0.4 (with corresponding feature value 5). Max-trust-pooling does not require much more storage space, and all the computation can be realized within the trust framework. We only need to replace the basic opinion propagation method in the pooling layer with:

$W^F_{N^{l+1}_i} = \max_{trust}\big(W^F_{N^l_{i,1}}, W^F_{N^l_{i,2}}, W^F_{N^l_{i,3}}, W^F_{N^l_{i,4}}\big).$   (2.16)

TrustCNet Block

Our TrustCNet-n block is built from a plain CNN network, e.g., $n$ conv layers followed by a max-pooling layer, where we replace the max-pooling with our proposed max-trust-pooling. As shown in Fig. 2.12, the conv layers have feature maps and trust maps, and a max-trust-pooling layer then takes the trust map and generates the output. During training, the parameters of the model and the corresponding opinions and trust maps are updated synchronously.

Figure 2.12: TrustCNet-n: a building block with $n$ conv layers followed by one max-trust-pooling layer.

As an architecture that operates based on trust values, TrustCNet-n is a basic building block that can be stacked to form a trust-aware CNN, i.e., TrustCNet. By collecting and amplifying more trustworthy feature values across layers, the trust-aware model learns different information and features than plain deep learning models.
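As an illustration, a max-trust-pooling layer can be written as a drop-in replacement for max-pooling that indexes the feature map by the argmax of the trust map. The NumPy sketch below is ours and assumes square, non-overlapping 2×2 windows; a production layer would also propagate the corresponding opinions as in Eq. 2.16.

```python
import numpy as np

def max_trust_pool2d(features, trust, window=2):
    """Max-trust-pooling: in each window, return the feature whose trust is highest,
    together with that trust value (cf. Fig. 2.11 and Eq. 2.16)."""
    h, w = features.shape
    out_h, out_w = h // window, w // window
    pooled_feat = np.zeros((out_h, out_w), dtype=features.dtype)
    pooled_trust = np.zeros((out_h, out_w), dtype=trust.dtype)
    for i in range(out_h):
        for j in range(out_w):
            fwin = features[i*window:(i+1)*window, j*window:(j+1)*window]
            twin = trust[i*window:(i+1)*window, j*window:(j+1)*window]
            k = np.unravel_index(np.argmax(twin), twin.shape)   # most trusted cell
            pooled_feat[i, j] = fwin[k]
            pooled_trust[i, j] = twin[k]
    return pooled_feat, pooled_trust

# The 2x2 example from Fig. 2.11: max-pooling would pick 6 (trust 0.1),
# max-trust-pooling picks 5 because its trust 0.4 is the largest in the window.
f = np.array([[6., 5.], [3., 1.]])
t = np.array([[0.1, 0.4], [0.2, 0.2]])
print(max_trust_pool2d(f, t))   # (array([[5.]]), array([[0.4]]))
```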
2.3.3 Trust Quantification of CNNs

Deep learning model architecture design is an active research area, and researchers have developed various CNN models throughout the years to achieve higher performance in multiple application areas. Many successful models have become benchmarks, such as AlexNet [KSH12], VGG [SZ14], ResNet [HZRS16], Inception [SVI+16], and EfficientNet [TL19]. Various evaluation metrics have been proposed and used to evaluate deep learning models; however, trustworthiness evaluation is still lacking. Therefore, in this section, we use the framework proposed in Sec. 2.3.1 to quantify the trustworthiness of several frequently used CNN building blocks, e.g., $n$ conv layers followed by a max-pooling layer, where $n \in \{1, 2, 3\}$; we denote these CNN blocks as conv$n$-max-pool. We also evaluate the trustworthiness of a CNN residual block. In addition, we provide trustworthiness evaluations of deeper CNNs such as VGG16 and AlexNet. Furthermore, we compare with the state-of-the-art trust quantification framework DeepTrust [CNB20a].

Experiment Setup

In this experiment, we train and test all six CNNs with the same procedure using a subset of the ImageNet dataset [RDS+15, vdOKK16]. The training/test split is 80%/20%. We pre-process the input by scaling the feature values into the range [0, 1] and resizing the images to shape (180, 180, 3). For the CNN building blocks, the Adam optimizer is used with a 0.0005 learning rate, together with a sparse categorical cross-entropy loss. The ReLU activation function [Fuk69] is used in all conv layers. We develop CNNs containing $n$ conv layers followed by a max-pooling layer and a fully connected output layer. For deeper CNNs, we experiment with VGG16 and AlexNet. The architecture details of the CNN blocks are as follows:

• 1 conv layer followed by 1 max-pooling (n = 1): conv (kernel window 3×3, stride 1, same padding, 32 filters) - max-pooling (kernel window 2×2, stride 2) - fully connected. This block type has been seen in AlexNet, ResNet, Inception, EfficientNet, etc.

• 2 conv layers followed by 1 max-pooling (n = 2): conv (kernel window 3×3, stride 1, same padding, 32 filters) - conv (kernel window 3×3, stride 1, same padding, 64 filters) - max-pooling (kernel window 2×2, stride 2) - fully connected. This block type has been used in VGG, Inception, EfficientNet, etc.

• 3 conv layers followed by 1 max-pooling (n = 3): conv (kernel window 3×3, stride 1, same padding, 32 filters) - conv (kernel window 3×3, stride 1, same padding, 64 filters) - conv (kernel window 3×3, stride 1, same padding, 64 filters) - max-pooling (kernel window 2×2, stride 2) - fully connected. This block type has been used in AlexNet, VGG, etc.

• Residual block: we use the first residual block architecture from ResNet [HZRS16].

Figure 2.13: Accuracy and trustworthiness evaluation of CNN blocks. The conv2-max-pool architecture is the best among the four blocks tested, as it achieves the highest trustworthiness and accuracy.

Table 2.2: Test accuracy and trustworthiness results of CNN blocks.
                      Accuracy   Trustworthiness
  conv1 - max-pool    0.5342     0.5167
  conv2 - max-pool    0.5522     0.5506
  conv3 - max-pool    0.5248     0.5174
  Residual block      0.1178     0.0694

Experimental Results

Results of CNN blocks. Fig. 2.13 shows the training accuracy and trustworthiness results of the four CNN building blocks considered in this section. During the training process, training accuracy improves and trustworthiness also increases. The final test results of this experiment are listed in Tab. 2.2. After inspecting the results, we find that the residual block's performance is not ideal, and conv2-max-pool emerges as a clear winner in our experiments, as it achieves the highest accuracy and trustworthiness. These results could be useful in future deep learning model design work.

Figure 2.14: a. Accuracy and trustworthiness evaluation of VGG16 and AlexNet. b. Comparison between DeepTrust and our framework. Results are evaluated on VGG16.

Results of deeper CNNs. The test accuracy and trustworthiness results are shown in Fig. 2.14a. We observe that, although VGG16 results in lower accuracy than AlexNet, it achieves higher trustworthiness. This result shows that accuracy and trustworthiness are not necessarily positively related. The trustworthiness evaluation of a NN in our framework considers several factors, such as data trustworthiness and parameter trustworthiness, which contribute to the final values from the X direction, and the trustworthiness of the output from the Y direction. The evaluation of the output is related to accuracy in some sense, because correct predictions contribute to high trust values. The other factors considered in the evaluation are not directly related to accuracy. This makes the trust metric go beyond accuracy and provide values that are not captured by the accuracy evaluation.

Comparison with DeepTrust. We further quantify the trustworthiness of VGG16 using modified DeepTrust and using our framework. The comparison of trustworthiness results is shown in Fig. 2.14b. We find that DeepTrust generates much higher trustworthiness values than our method. The difference is caused by the difference in parameter opinion evaluation: our evaluation considers the change of the weights during training, while DeepTrust is more output-dominated, as its trust of a weight is set to be the same as the backward opinion of the output. Besides the empirical difference, our framework differs from DeepTrust in many aspects.
First of all, DeepTrust is designed specifically for neural networks with only dense layers, and it is not directly applicable to CNNs. It also does not utilize trust values in inference or training: it only quantifies trust values and does not use them to improve performance. Our framework can quantify the trust of networks and also provides a way to utilize trust values for improvement. Secondly, DeepTrust requires the opinion or trust of the dataset as a prior. Furthermore, we calculate the parameter opinions differently than DeepTrust. In DeepTrust, the opinion of a parameter is the same as the backward opinion of the output, which means all weights and biases receive the same opinion and trustworthiness. To address this, our framework calculates the opinion of each weight based on its gradients during training, which assigns different trust values to different weights. This allows our framework to account for differences between parameters.

2.3.4 TrustCNet outperforms CNNs when dealing with noisy input

In the previous section, we showed that our trust framework generates higher values for clean data and much lower values for damaged or noisy data. In applications where input data are damaged or noisy, deep learning models could generate sub-optimal results. Trust-aware CNNs that operate on both feature maps and trust maps might be able to dampen the effect of the noise infused in the data, and therefore output better results compared to their non-trust-aware versions. Note that we do not focus on adversarial defense but only provide a study involving noisy data. In this section, we develop a CNN architecture, then utilize TrustCNet blocks to construct a trust-aware version, and compare the two in terms of both accuracy and trustworthiness. For noise in the dataset, we study three difficulty levels: given input data, (i) we know both the position and the intensity of the noise, (ii) we know the position but not the intensity of the injected noise, and (iii) both position and intensity are unknown. In addition, we further test our TrustCNet with three different noises under the highest difficulty level.

How Much Do We Know?

Noisy data is a frequent issue in machine learning and DL. Noise can affect the labels, the features, or both. Here, we consider feature noise, e.g., noise in input images in computer vision tasks, and the difficulty level of the problem depends on how much noise information (or prior knowledge of the noise) we have in the test phase. We assume that in the training phase the training data may or may not contain noise and we do not know this information. Therefore, in the training phase, the opinions of an input feature and an input label are $W_x = [f, 0, 1-f, 0.5]$ and $W_y = [1, 0, 0, 0.5]$, respectively, where $f$ is the normalized feature value. Then, in the test phase, we consider three difficulty levels as follows.

I. Known position and known intensity. To calculate the opinion of CNNs, we proposed the trust quantification framework in Sec. 2.3.1, which needs the opinions of the input features ($W_x$) for initialization and updates opinions during propagation. If we know the location and intensity of the noise, in the opinion initialization phase we set the opinion tuple of input features without noise to $W_x = [f, 0, 1-f, 0.5]$, where $f$ is the normalized feature value, and the opinion of features with noise reads $W_x = [f \cdot (1-t), 1 - f \cdot (1-t), 0, 0.5]$, where $t$ is the intensity of the noise.
Intuitively, features without noise obtain a belief value based on the feature value, while features with noise obtain a belief value dampened by the noise level, and they also obtain a disbelief value to fulfill the opinion requirement belief + disbelief + uncertainty = 1.

II. Known position and unknown intensity. When working with noisy input, case I is nearly perfect, as we know a lot of useful information about the noise. However, sometimes we only know the location information of the noise. This key information also proves to be functional. The opinion initialization of the input in this case is as follows. Given an input, we set the opinion of all features with noise to $W_x = [f, 1-f, 0, 0.5]$ (where $f$ is the feature value), and the opinion of all features without noise is $W_x = [f, 0, 1-f, 0.5]$. This setup intuitively represents that, on the premise of knowing the location of the noisy features, we disbelieve those positions containing noise, and hence a low degree of trust in them is propagated during the convolutional trustworthiness computation.

III. Unknown position and unknown intensity. Another common situation when dealing with noise is that we lack information about the noise type. In such a situation, we set the opinion of all input features to $W_x = [f, 0, 1-f, 0.5]$. By assigning the remaining mass to the uncertainty mass, the TrustCNet learns to update its parameters during training so as to obtain the maximum trustworthiness.
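The three initialization schemes above differ only in how the remaining mass is split between disbelief and uncertainty. A small helper along the following lines could produce the input opinions; the noise mask/intensity arguments and the function name are illustrative.

```python
def init_input_opinion(f, noisy=False, intensity=None):
    """Initialize the opinion [b, d, u, a] of one normalized input feature f in [0, 1].

    Case I   (position and intensity known): noisy=True, intensity=t.
    Case II  (position known, intensity unknown): noisy=True, intensity=None.
    Case III (nothing known): noisy=False for every feature.
    """
    a = 0.5
    if noisy and intensity is not None:        # Case I: belief dampened by the noise level t
        b = f * (1.0 - intensity)
        return [b, 1.0 - b, 0.0, a]
    if noisy:                                   # Case II: disbelieve positions known to be noisy
        return [f, 1.0 - f, 0.0, a]
    return [f, 0.0, 1.0 - f, a]                 # Case III / clean: leftover mass is uncertainty
```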
Experiment Setup

The TrustCNet model we develop and test in this section contains a TrustCNet-1 block (i.e., 1 conv layer followed by 1 max-trust-pooling layer), a 3×3 conv layer followed by a 2×2 max-pooling layer, and a fully connected layer. The conv layer in TrustCNet-1 has 8 filters with a 3×3 window size and stride 2 using same padding, and the max-trust-pooling window size is 2×2 with stride 2. The non-trust-aware version of this TrustCNet has the same architecture but with the max-trust-pooling layer replaced by a normal max-pooling layer. We use the MNIST [LBBH98] and CIFAR-10 [KH+09] datasets and insert a self-defined Gaussian-distributed noise into the input features following [ZZGZ17] in training and testing; we randomly add Gaussian noise to a certain number of pixels in the input images. The Adam optimizer is used with a 0.001 learning rate, and a sparse categorical cross-entropy loss is used to update the parameters during the batch training process. ReLU activation is used in all conv layers. We use accuracy (and accuracy improvement) and trust (and trust improvement) as evaluation metrics to compare TrustCNet and its non-trust-aware version. Improvement is calculated as the increase divided by the original value. We conduct the experiments 5 times and report mean and standard deviation results.

Table 2.3: Comparison between TrustCNets and their non-trust-aware variants under noisy datasets. In case I, position and intensity information of the noisy input is used in the input opinion initialization. In case II, only position information is used, and in case III, no noise information is used in the opinion initialization.

                         Max-pooling                            Max-trust-pooling (TrustCNet)
  Dataset    Level     Accuracy          Trustworthiness     Accuracy          Improvement   Trustworthiness   Improvement
  MNIST      Clean     0.9757 ± 0.0144   0.9836 ± 0.0116     0.9643 ± 0.0124   -             0.9540 ± 0.0158   -
             Case I    0.7515 ± 0.0184   0.7450 ± 0.0166     0.9398 ± 0.0125   25.05% ↑      0.9417 ± 0.0104   26.40% ↑
             Case II   0.7824 ± 0.0186   0.7784 ± 0.0118     0.9130 ± 0.0174   16.69% ↑      0.9363 ± 0.0174   20.28% ↑
             Case III  0.7575 ± 0.0146   0.7341 ± 0.0132     0.9027 ± 0.0178   19.16% ↑      0.9228 ± 0.0169   25.70% ↑
  CIFAR-10   Clean     0.7210 ± 0.0106   0.7003 ± 0.0192     0.7047 ± 0.0189   -             0.7124 ± 0.0177   -
             Case I    0.3445 ± 0.0196   0.3211 ± 0.0246     0.5078 ± 0.0191   47.40% ↑      0.5133 ± 0.0247   59.85% ↑
             Case II   0.3390 ± 0.0243   0.3674 ± 0.0285     0.5062 ± 0.0233   49.32% ↑      0.5128 ± 0.0182   39.57% ↑
             Case III  0.3375 ± 0.0214   0.3036 ± 0.0197     0.4816 ± 0.0243   42.69% ↑      0.5112 ± 0.0260   68.37% ↑

Experimental Results

The comparison results of TrustCNet and its non-trust-aware version are shown in Tab. 2.3. Non-trust-aware CNNs achieve similar results in terms of both accuracy and trustworthiness when faced with noisy input; the slight differences are contributed by random initialization. While with clean data TrustCNets and their non-trust-aware versions perform similarly, TrustCNets perform much better than the non-trust-aware ones when there is noise in the data, and different prior noise information leads to different results.

Case I. With known noise position and intensity, TrustCNets achieve much better accuracy and trustworthiness results compared to their non-trust-aware versions. Max-trust-pooling-based TrustCNets operate on both the feature map and the trust map, and hence achieve better results in terms of trustworthiness. The feature-value-based input opinion initialization contributes to the improved accuracy and shows that TrustCNets have noise-tolerant ability.

Case II. As expected, taking the location information of the noisy features as a prior in the opinion initialization helps the model operate on reliable features and achieve higher trustworthiness and accuracy. The benefit comes from assigning disbelief mass to noisy features, which greatly affects the feature propagation in the max-trust-pooling layer.

Case III. Finally, with the least amount of information known about the noise, TrustCNets still manage to improve accuracy and trustworthiness compared to their non-trust-aware variants by assigning uncertainty mass to all input features. The results show that TrustCNets in case III have lower numbers than in cases I and II, which is reasonable because both the location and the intensity of the noise are unknown in case III. Furthermore, we also find that TrustCNets have a certain noise-tolerant ability when used alone or placed towards the beginning of the neural network (close to the input layer). When placed after non-trust-aware layers, TrustCNets no longer possess this ability. This is because the noise has been processed by the preceding layers and the TrustCNet no longer has direct access to the noise information; hence, it cannot improve the results.

Discussion. In this section, we demonstrated that TrustCNets with max-trust-pooling layers are useful in cases involving attribute noise. Besides the CNN architectures containing max-pooling layers, many modern network designs do not contain max-pooling. We would like to briefly discuss some potential future directions on how to utilize trust maps without max-pooling.
Max-trust-pooling is a straightforward way to leverage the quantified trust values in neural network training or design, by pooling with trust values. We can also combine trust with a dropout layer to make a trust-guided dropout; or use it for model architecture selection or tuning, e.g., keep only the essential and trustworthy parts of the architecture to save resources; or infuse trust evaluation into continual learning tasks, identifying parameters that are more important and trustworthy during training in order to reduce catastrophic forgetting and improve model trustworthiness. We believe the trust map and the whole trust quantification framework are worth exploring and useful in many future directions.

Chapter 3
Trust-aware Control in Multi-Agent CPSs

3.1 A General Trust Framework for Multi-Agent CPSs

Multi-agent systems (MASs) consist of multiple, interacting, intelligent cyber-agents [AAMA10, NH11, CNB20a], and the successful behavior of a MAS typically depends on safe coordination between the agents. For autonomous and mobile MASs, such as those found in ground transportation systems or in unmanned aerial vehicles comprising avionic systems, coordination may be used to endow greater safety over human-operated agents, to improve the efficiency of the system (e.g., traffic throughput or increased sensing range), or both. For instance, in the context of traffic light control or autonomous intersection management [DS04], the goal is to improve the throughput of traffic intersections in a safe fashion, for traffic consisting of a mixture of human-driven, semi-autonomous, and autonomous vehicles [AZS15, SS17]. Similarly, there is work on cooperative adaptive cruise control where the objective is to improve traffic flow and fuel consumption while ensuring collision freedom [GLM+12, MSS+13].

An important consideration for MASs is to achieve safe and efficient coordination when the MAS consists of a mixture of trusted and untrusted agents. Here, being trustworthy can encapsulate different things: (i) the agent follows the commands of the coordinator to a high degree of precision, (ii) the agent reports its state (e.g., position, velocity) with consistent accuracy, or (iii) the agent is not malicious, i.e., it does not
purposefully engage in behavior that can endanger system safety. For instance, vehicle platooning systems require AI strategies to analyze platoon members and evaluate their degree of trustworthiness in order to avoid attacks that can lead to accidents. In the works of [Axe16, Fle11, GWWW19], researchers take the forward collision warning, lane departure warning, and autonomous braking system into consideration to construct a trust evaluation framework. However, these approaches analyze individual vehicles in isolation and do not account for communication and cooperation among vehicles. Moreover, in these approaches, the system can react to only one malicious attack: when the platoon is attacked by multiple malicious agents, the system can be deceived and led into a catastrophic state. Existing trust frameworks are ad hoc, which makes it difficult to apply them universally.

In this work, we propose a universal framework based on a logical characterization of trust that allows us to quantify trust in individual agents in a systematic fashion. Our framework considers both short-term and long-term behavioral histories of agents to quantify their trustworthiness. We envision a cloud-based (or edge-based) architecture, as shown in Fig. 3.1, where trust values for agents are stored in a secure fashion, and where authenticated decision-making nodes (such as centralized or distributed coordinators) are able to access the trust values of agents to make real-time decisions. Through quantitative trustworthiness scores, we are able to perform trust-aware decision-making, where a coordinator can explore trade-offs between safety and efficiency when orchestrating coordination for a mixture of trusted and untrusted agents. Our main contributions are as follows:

• We propose a framework to mathematically quantify the trustworthiness of agents in MASs using the formalism of subjective logic. We propose that an agent's trustworthiness is updated using long-term and short-term observations of the agent's behavior.

Figure 3.1: A cloud-based (or edge-based) architecture with trustworthiness quantification in a multi-agent CPHS.

• We provide a trust-aware decision-making framework that uses the quantified trust values to choose coordination policies that achieve the desired trade-off between safety and efficiency.

• We demonstrate the feasibility and applicability of the proposed trust framework by applying it in three MASs: cooperative adaptive cruise control (CACC)-based platoons, an autonomous intersection management (AIM) system, and a reinforcement learning-based traffic light control (TLC) system. With minimal modification of existing MASs (e.g., AIM, TLC), we can formulate trust-aware decision-making strategies and achieve better performance.

Figure 3.2: a. A trust framework where the centralized trust manager $A$ keeps inspecting target agents $\mathcal{X}$. b. $A$ does not directly inspect $\mathcal{X}$ but relies on distributed trust authorities $\Theta$, which may or may not be trustworthy. c. Both $A$ and $\Theta$ directly inspect $\mathcal{X}$.

3.1.1 Quantifying Trust in MAS

In MASs, such as air/drone traffic control systems [Hop17, BCC+17], adaptive cruise control systems [GZP19], multi-agent autonomous traffic management [ASS11], and even federated learning [KMR15] in machine learning, the safety and behavior of one or a subset of agents affect the efficiency and safety of the whole system. Such systems are usually vulnerable to agents that are untrustworthy for various reasons, including operating defects, uncertain operating environments, or purposeful malice. In these cases, a subjective measurement is a must to identify untrustworthy agents. Therefore, we propose a trust quantification framework based on Subjective Logic (SL) [Jøs16]. Our framework interprets agent behaviors and assigns a trustworthiness score to the agents. To explain the basic idea, we assume that the MAS is endowed with a secure and trusted observer, known as the trust manager (denoted $A$), that observes the behavior of agents, extracts knowledge (opinion in SL parlance) from observations (evidence in SL parlance), and computes the agents' trustworthiness. We now provide the definitions required for the calculation of trustworthiness in our proposed trust framework. Given a trust manager $A$ and a specified agent $X$ in a MAS, let $b^A_X$ denote the belief mass that $A$ has in $X$, let $d^A_X$ denote the disbelief mass, let $u^A_X$ denote the uncertainty mass, and let $a^A_X$ denote the base rate. Intuitively, the belief and disbelief loosely correspond to the probabilities of an agent being trustworthy and untrustworthy.
Uncertainty represents the lack of evidence to support any specific probability; e.g., $u^A_X = 1$ represents that we know nothing about the agent's behavior and, by default, with a chance of $a^A_X = 0.5$, it can be trustworthy.

Definition 3.1.1 (Opinion [Jøs16]). In SL, a binomial opinion $W^A_X = \{b^A_X, d^A_X, u^A_X, a^A_X\}$ represents the opinion of an observer $A$ about $X$, where $b^A_X$, $d^A_X$, $u^A_X$, and $a^A_X$ are as previously defined, $b^A_X + d^A_X + u^A_X = 1$ for $a^A_X \in [0, 1]$, and the base rate is akin to a prior.

Definition 3.1.2 (Trustworthiness [Jøs16, CNB20a]). The trustworthiness of $X$ assessed by $A$ is defined as $p^A_X = b^A_X + u^A_X a^A_X$, where $b^A_X$, $u^A_X$ and $a^A_X$ are as defined previously, and $p^A_X \in [0, 1]$.

Definition 3.1.3 (Evidence [Jøs16]). Given a behavioral property $\varphi$, positive evidence $r$ quantifies the satisfaction of the property $\varphi$ by the behavior of $X$ as observed by $A$, and negative evidence $s$ quantifies the violation of $\varphi$ by the observed behavior of $X$. A binomial opinion is formed using evidence based on the principle that $r$ contributes to the belief mass and $s$ contributes to the disbelief mass, using the following equations:

$b^A_X = \dfrac{r}{r + s + \omega}, \quad d^A_X = \dfrac{s}{r + s + \omega}, \quad u^A_X = \dfrac{\omega}{r + s + \omega},$   (3.1)

where $\omega = 2$ is a default non-informative prior weight.

Fig. 3.2a shows our proposed trust-aware MAS, consisting of a centralized manager $A$ that keeps inspecting agents and updates their trustworthiness $p^A_X, \forall X \in \mathcal{X}$ (where $\mathcal{X}$ represents the set of all agents) based on the time-varying $W^A_X, \forall X \in \mathcal{X}$ (which are updated based on observed evidence $r$ or $s$). Instead of keeping a record of all past evidence histories, i.e., $r$ and $s$, we keep a hash table $H$ that records the (long-term) opinions of $\mathcal{X}$ and use a cumulative fusion operator [Jøs16] to merge established (long-term) opinions with newly observed (short-term) opinions.[1]

Definition 3.1.4 (Cumulative Fusion Operator). Assume a long-term opinion about agent $X$, $W^A_X$, is calculated based on previous observations $r_{[0,t]}$ and $s_{[0,t]}$ from time $0$ to $t$, and newly observed evidence $r_{[t,t']}$ and $s_{[t,t']}$ forms a short-term opinion $W^E_X$. The updated opinion takes evidence from the time period $[0, t']$, which is equivalent to the cumulative fusion of $W^A_X$ and $W^E_X$:

$W^A_X \leftarrow W^{A \diamond E}_X = W^A_X \oplus W^E_X.$   (3.2)

[1] Assume the short- and long-term opinions are established by evidences $(r_1, s_1)$ and $(r_2, s_2)$, observed in non-overlapping time periods. Applying cumulative fusion to combine the opinions is equivalent to summing up the evidences $(r_1, s_1)$ and $(r_2, s_2)$.
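For concreteness, the evidence-to-opinion mapping of Eq. 3.1 and the cumulative fusion of Definition 3.1.4 can be implemented directly on evidence counts, since cumulatively fusing two opinions formed from non-overlapping observation windows is equivalent to adding their evidence (footnote 1). The sketch below is ours; the hash table is simply a dictionary keyed by agent ID.

```python
OMEGA = 2.0   # non-informative prior weight in Eq. 3.1

def opinion(r, s, a=0.5):
    """Eq. 3.1: binomial opinion (b, d, u, a) from positive/negative evidence."""
    k = r + s + OMEGA
    return r / k, s / k, OMEGA / k, a

def trust(op):
    """Definition 3.1.2: p = b + u * a."""
    b, d, u, a = op
    return b + u * a

class TrustManager:
    """Centralized manager A: long-term opinions live in a hash table H."""
    def __init__(self):
        self.H = {}                      # agent id -> accumulated evidence (r, s)

    def cumulative_update(self, agent_id, r_new, s_new):
        """Definition 3.1.4: fuse the stored long-term opinion with new short-term evidence."""
        r_old, s_old = self.H.get(agent_id, (0.0, 0.0))
        self.H[agent_id] = (r_old + r_new, s_old + s_new)
        return opinion(*self.H[agent_id])

A = TrustManager()
print(trust(A.cumulative_update("X", r_new=8, s_new=1)))   # mostly positive evidence
print(trust(A.cumulative_update("X", r_new=0, s_new=5)))   # new violations lower the trust
```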
In addition to the centralized trust authority $A$, there are also distributed trust authorities $\Theta$ that help inspect agents and collect evidence, as shown in Fig. 3.2b. The existence of $\Theta$ enlarges the observation range, increases the observation frequency, and relaxes the requirement that $A$ directly inspect agents. $\Theta$ keeps local trust records of the covered agents and sends updates to $A$ regularly. For example, in traffic systems, local roadside units inspect vehicles and report to the department of motor vehicles. To merge the (long-term) opinions from $A$ and $\Theta$, we use the cumulative fusion operator: $W^A_X \leftarrow W^\Theta_X \oplus W^A_X$. Note that, in our trust framework, $A$ can operate alone without helpers, due to the assumption that $A$ can directly inspect agents, as shown in Fig. 3.2a.

We assume that $A$ is always trustworthy, while $\Theta$ may or may not be trustworthy. For example, in a traffic system, if roadside units serve as $\Theta$, then they usually are trustworthy. However, if vehicles serve as $\Theta$, for example, if the trailing vehicle reports to $A$ about the leader vehicle, then $\Theta$ can be untrustworthy and its trust evaluations may not be reliable. To deal with such scenarios, $A$ applies a discounting factor [Jøs16] to take $\Theta$'s own trustworthiness into consideration when relying on $\Theta$'s evaluations.[2]

Figure 3.3: A trust framework in traffic systems. $A$ and $\Theta$ keep inspecting the target vehicle $X$. Both roadside units and other vehicles adjacent to $X$ serve as $\Theta$. If we assume roadside units are trustworthy, then the opinion updating equation can be simplified as $W^A_X \leftarrow (W^{\Theta_1}_X \,\overline{\oplus}\, W^{[A;\Theta_2]}_X \,\overline{\oplus}\, W^A_X) \oplus W^A_X$, where, on the left-hand side, the first $W^A_X$ in the bracket is a short-term opinion and the second $W^A_X$ is a long-term opinion extracted from $H$.

Definition 3.1.5 (Discounting Operator). Assume $A$ would like to develop trust in $X$ and $A$ relies on $\Theta$ for evidence collection and opinion/trust evaluation. $A$'s opinion about $\Theta$ is represented as $W^A_\Theta$, and $\Theta$'s opinion about $X$ is $W^\Theta_X$. Based on the combination of $A$'s trust in $\Theta$ and $\Theta$'s opinion about $X$, $A$ updates its opinion about $X$ using the discounting operator $\otimes$:

$W^{[A;\Theta]}_X = W^A_\Theta \otimes W^\Theta_X.$   (3.3)

$W^{[A;\Theta]}_X$ is a short-term opinion. To merge it with the long-term opinion $W^A_X$, substituting $W^E_X$ with $W^{[A;\Theta]}_X$ in Eq. 3.2 generates the designated result. In cases where both $A$ and $\Theta$ are present, as shown in Fig. 3.2c, or where multiple $\Theta$'s inspect the target agent $X$ at the same time, we need a way to merge multiple short-term opinions together. We can make use of the averaging fusion operator in SL to take the average of two opinions observed at the same time [Jøs16].[3]

Definition 3.1.6. Subject to trust authorities $A$ and $\Theta$, and a specified agent $X$ in a multi-agent system, assume both $A$ and $\Theta$ inspect $X$ in the same time period $[t, t']$ and $\Theta$ may or may not be trustworthy. $A$ and $\Theta$ develop opinions about $X$, $W^A_X$ and $W^\Theta_X$, respectively. The short-term opinion about $X$ combines both authorities' opinions via the averaging fusion operator $\overline{\oplus}$:

$W^{A \,\diamond\, [A;\Theta]}_X = W^A_X \,\overline{\oplus}\, W^{[A;\Theta]}_X.$   (3.4)

If the distributed authority $\Theta$ is trustworthy, then Eq. 3.4 simplifies to $W^{A \,\diamond\, \Theta}_X = W^A_X \,\overline{\oplus}\, W^\Theta_X$. If the observing authorities are both distributed authorities, namely $\Theta_1$ and $\Theta_2$, then Eq. 3.4 reads $W^{[A;\Theta_1] \,\diamond\, [A;\Theta_2]}_X$. Then, to merge with the long-term history of $X$, we use the cumulative fusion operator defined in Definition 3.1.4. A demonstration example of our proposed trust framework in traffic systems is shown in Fig. 3.3, which corresponds to the scenario in Fig. 3.2c.

To demonstrate how the proposed trust framework works in different applications, we first show its feasibility in the context of CACC platoons (Section 3.3), where the distributed trust authorities are not necessarily trustworthy. We show with simulation results that our trust-based attack detection can accurately detect attackers.

[2] Assume $\Theta$'s trustworthiness is $p^A_\Theta$; then $A$ discounts $\Theta$'s opinions by $p^A_\Theta$.
[3] Assume two short-term opinions are established by evidences $(r_1, s_1)$ and $(r_2, s_2)$, observed in the same time period. Applying averaging fusion to combine the opinions is equivalent to taking the average of the evidences $(r_1, s_1)$ and $(r_2, s_2)$.
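Continuing the earlier sketch, the discounting and averaging-fusion steps can be layered on top of the evidence bookkeeping. Footnotes 2 and 3 justify the simplifications used here: discounting scales Θ's reported evidence by A's trust in Θ, and averaging fusion averages evidence gathered over the same time window. Function names are ours.

```python
def discount(p_theta, evidence):
    """Definition 3.1.5 (simplified per footnote 2): scale Theta's evidence by A's trust in Theta."""
    r, s = evidence
    return p_theta * r, p_theta * s

def averaging_fusion(evidence_list):
    """Definition 3.1.6 (per footnote 3): average evidence observed over the same period."""
    n = len(evidence_list)
    r = sum(e[0] for e in evidence_list) / n
    s = sum(e[1] for e in evidence_list) / n
    return r, s

# A and a roadside unit Theta both watched agent X during [t, t'];
# Theta's own trustworthiness, as assessed by A, is 0.8.
own_obs   = (6.0, 0.0)
theta_obs = discount(0.8, (2.0, 3.0))
short_term = averaging_fusion([own_obs, theta_obs])

# Merge the fused short-term evidence into the long-term record (Definition 3.1.4).
print(trust(A.cumulative_update("X", *short_term)))
```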
Figure 3.4: A four-way intersection. Color-shaded areas represent the space-time buffers for each vehicle. Trustworthy vehicles have a tight buffer since they are expected to obey the instructions with small errors. Untrustworthy vehicles have a large buffer because it is highly likely that they will act differently than instructed. The dark red area represents a collision warning in the simulated trajectories. In this case, the vehicles are not permitted to enter the intersection and their requests are rejected. The AIM-Trust framework, consisting of IM and TA, is shown on top. A detailed description of each component can be found in Algorithm 1.

3.2 Trust-aware Control for Intelligent Transportation Systems

3.2.1 Autonomous Intersection Management (AIM)

The intersection traffic in AIM is a simplified version of real-world intersection traffic. Fig. 3.4 illustrates a four-way intersection example with three lanes in each road leading to an intersection area $I$ (marked by the white dotted rectangle). A vehicle agent $A \in \mathcal{V}$ on the road, traveling towards but not already in the intersection $I$, is considered to be on the AIM map $M$. Any $A$ in $M$ communicates with the intersection manager (IM) $C$ by sending a request $y(t)$, which consists of the vehicle identification number, vehicle size, predicted arrival time, velocity, acceleration, and arrival and destination lanes, and then
AIM-Trust has two collision avoidance mechanisms: (i) a vehicle surveillance system to identify untrustworthy behavior of vehicles (withinM), and (ii) an intelligent trust-based buffer adjusting mechanism to help decrease collision risk while maintaining high throughput. Fig. 3.4 and Algorithm 1 show the operation details. Note that if there is no untrustworthy vehicle inM, AIM-Trust will reduce to the original AIM algorithm with a fixed buffer size 1 to ensure efficiency. The TA-IM in AIM-Trust discriminates incoming vehicles into three bins: unpro- cessed, approved, and safe (see Fig. 3.4). A vehicle with its request unapproved by TA-IM is unprocessed. Once its request is approved, the vehicle is approved and under the surveillance of TA-IM for a few time steps. If the vehicle behaves well, then it becomes safe and the surveillance ends. However, if the vehicle violates the approved trajectory with an intolerable error, then the vehicle goes back to the unprocessed bin and TA-IM starts processing its requests all over again. The detailed state transition process can be found in Algorithm 1. Compared to classical AIM where IM stops interacting with a vehicle once the request is approved, AIM-Trust includes a trust-based approve-observe process to decide whether to revoke and remake the approval decision. Trust Calculation In AIM-Trust, TA-IM maintains a trustworthiness / opinion tableH and updatesH by either communicating with RSUs or considering new evidence via cumulative fusion operator () as shown in Algorithm line 3 and 13. The detailed procedure is shown in 74 Algorithm 1: AIM-Trust algorithm. VehicleA sends a request to TA-IMC, which responds based on simulated trajectories. Input :VehicleA’s request messagey(t), i.e., vehicle identification number id A , vehicle size, predicted arrival time, velocity, acceleration, arrival and destination lanes (e A ;o A ). Output :Approve or reject decision ofy(t). 1 Pre-process 2 Pre-processy(t) for new reservation. . Same as AIM 3 Trustworthinessp A trust_calculator(id A ) 4 Vehicle status A unprocessed 5 end 6 State Transition 7 Buffer sizea A buffer_calculator(id A ;e A ;o A ;p A ) 8 Decision AIM_control_policy(a A ) . Same as AIM 9 Post-process Send the decision toA. . Same as AIM 10 if Decision == Approve then 11 A approved, runsurveillance(id A ) 12 ifsurveillance(id A ) ==malicious then Go to Pre-process; 13 A safe,p A trust_calculator(id A ) onceA exitsM. 14 end 15 end Algorithm 2. The first trustworthiness update happens whenA entersM and sends requests to the managerC: W C A = 8 > > > > > > > > > > < > > > > > > > > > > : W UN A ; if vehicleA is unknown, W C A ; if TA-IM(s)C knows vehicleA, W LTA A ; if road side unit knows vehicleA; W LATC A ; if both LTA(s) andC know vehicleA: (3.5) Case I. “A is unknown” represents that the vehicle does not have a record inH. 5 5 We assume W UN A = f1; 0; 0; 0:5g to represent the maximum belief based on autoepistemic logic [Moo85] (the vehicle is not reported to be untrustworthy, so it is trustworthy). 75 Case II. “C knowsA” represents thatH has an entry ofA andA has not been picked up by RSUs after the previous record update. Case III. VehicleA entersM andC receives LTA’s report aboutA. Since RSUs are only activated when undesired behavior happens, we expectW LTA A to be a negative opinion. Case IV . RSU reports aboutA, which already has a record inH, hence, we use a cumulative fusion operator () to merge these two opinions together. 
Algorithm 2:trust_calculator(id A ) 1 if A ==unprocessed then .A entersM 2 W C A Eq. 3.5 3 else if A ==approved then .y(t) approved andsurveillance(id A ) =malicious 4 W E A Def. 3.1.1 . Evidence is collected beforeI 5 W C A W CE A 6 else if A ==safe then .y(t) approved andsurveillance(id A )6=malicious 7 W E A Def. 3.1.1 . Evidence is collected inI 8 W C A W CE A .A exitsM 9 returnp A p C A =b C A +u C A a C A . Definition 3.1.2 These first updates of W C A and p A are now completed and then used by buffer adjustment agent as shown in Algorithm 1 line 7. Then, the AIM control policy generates accept / reject instruction. AfterA receives the instruction, the evidence framework starts monitoringA’s behavior before it entersI. If negative evidence is observed, thenA goes back to pre-process as indicated in Algorithm 1 line 12 andW C A is updated as shown in Algorithm 2 line 3-5. Otherwise,A becomes safe and proceeds toI. OnceA entersI, the surveillance system again observesA’s behavior and the collision situation. Positive / negative evidence based on collision and trajectories is then evaluated and an opinion from evidence collected inI is derived asW E A (which is evaluated byC but we denote the superscript asE to distinguish from long-termW C A ). AfterA exitsM, 76 the trustworthiness ofA is updated again as shown in Algorithm 2 line 6-8 and uploaded toH. Evidence Measurement Framework Exploiting the STL formalism, we define a set of rules to specify a driving behavior to be desired or undesired, i.e., quantify positive (s) or negative (r) evidence for trust estimation. Desired / undesired behavior contributes to positive / negative evidence; hence, they contribute to increasing / deceasing of trustworthiness of an agent. Before the target driver approachesM, RSUs that have observed the target vehicle assess the (undesired) behavior and generate (negative) evidence. When the target vehicle arrives at the intersection, AIM-Trust uses a self-embedded behavior measurement system to quantify the target’s behavior based on trajectory and collision status. Evidence evaluation at road side units (RSUs). Suppose an RSU observes vehicle A’s trajectory and velocity, and the desired behavior is defined by a set of rules, e.g., driving within one lane with negligible deviation and under the designated speed limit. Hence, we define these properties formally as follows: Subject toA, suppose the true trajectory observed by RSU is tr , the requested (or approved) trajectory is tr , the reported trajectory is y tr , and the negligible deviation or error is tr . Similarly, the observed speed of the vehicle is sp , and the designated speed is sp , the reported speed isy sp , and the negligible error is sp . 6 We quantifyr ands as follows: 8 > > < > > : r =r + 1 ; if ( tr ;t)j ='^ ( sp ;t)j = ; s =s + 2 ; otherwise. (3.6) 6 Note that in a MAS where all agents are honest, the agent reportedy is the same as controller observed . 77 ' G [t 1 ;t 2 ] (jtr (t)ytr (t)jtr^jtr (t) tr (t)jtr ) (3.7) G [t 1 ;t 2 ] (jsp(t)ysp(t)jsp^jsp(t) sp(t)jsp) (3.8) These equations indicate that the true (observed) trajectory / speed of a vehicle should not deviate from (i) the requested trajectory / speed and (ii) the reported trajectory / speed by more than tr / sp in time interval [t+t 1 ;t+t 2 ], wheret 1 ,t 2 , tr , sp , 1 , and 2 are hyper parameters. 
Evidence evaluation at TA-IM. When vehicles are under the surveillance of AIM-Trust before entering $I$ (Algorithm 1, line 11), the evidence measurements in Eq. 3.6 are used to quantify the positive and negative evidence. If negative evidence is observed, it means the vehicle violates the approved trajectory by an intolerable error. (For AIM-Trust, when the TA-IM approves $A$'s requested trajectory, $y = \tau$.) Once vehicles enter $I$, a new set of rules is used to take into account the collision status of vehicles and collisions in $I$: $r = r + \delta_1$ if the vehicle follows the approved trajectory and no collision happens; otherwise $s = s + \delta_2$.

RL-based Buffer Adjustment Agent

AIM-Trust operates under the assumption that there may exist untrustworthy vehicles that do not follow the instructions from the TA-IM. In such scenarios, the agents cannot execute potential evasive maneuvers to avoid collisions; thereby, these malicious agents, with fixed and small buffer sizes, threaten other agents and even the whole system. How, then, do we determine the optimal buffer size for agents with different trust values?

Figure 3.5: Comparison between AIM and AIM-Trust.

In order to assess this, and to obtain a buffer allocation policy, we use reinforcement learning (RL) to explore the unknown environment and figure out the appropriate buffer sizes. In this section, we define the RL formulation (deep Q-learning [MKS+15]), including the definitions of states, actions, and rewards. In deep Q-learning, the neural network approximates a Q-learning table, where each entry in the table is updated by $q(s_t, a_t) \leftarrow q(s_t, a_t) + \alpha \big[ r_{t+1} + \gamma \max_a q(s_{t+1}, a) - q(s_t, a_t) \big]$ [DS+90], where $s_t$ is the state, $a_t$ is the action, $r_{t+1}$ is the reward, $\alpha$ is the learning rate, and $\gamma$ is the discounting factor.

State, state transition, and action spaces. We model a four-way intersection with three lanes in each direction, as shown in Fig. 3.4. To simulate real-world scenarios, we explicitly allow vehicles on each lane to either go straight, turn left, or turn right. We define our state as $s_t = (id^1_t, e^1_t, o^1_t, p^1_t, \ldots, id^n_t, e^n_t, o^n_t, p^n_t)^T$, where $(id^i_t, e^i_t, o^i_t, p^i_t)$ are the vehicle identification number, starting point, requested destination, and trustworthiness of vehicle $i \in [1, n]$ at time $t$. In each training time step $t$, vehicles pass through $I$ and fully exit $M$. Within one episode, there are in total $\tau$ steps, which represent the $n$ vehicles passing through $\tau$ intersections. For all $i, t$, $(e^i_t, o^i_t)$ are randomly generated by the simulator, while $p^i_t$ is continuously updated based on Algorithm 2. The state transition equations for the environment are defined as: $id^i_{t+1} \leftarrow id^i_t$, $p^i_{t+1} \leftarrow \text{trust\_calculator}(id^i_t)$, and $e^i_{t+1}, o^i_{t+1} \leftarrow \text{Random}(id^i_{t+1})$, where $\text{Random}(\cdot)$ is the random starting point and destination generator in the simulator. The action is defined as $a_t = (a^1_t, a^2_t, \ldots, a^n_t)^T$, where $a^i_t$ is the buffer size of vehicle $i$ at time $t$. The neural network makes its prediction by assessing the vehicles' positions, trustworthiness, and requests.

Reward function. The goal of our RL agent is to operate the intersection with the lowest collision rate and high throughput. It is known that throughput is sensitive to buffer size, i.e., a large buffer size harms throughput. We take safety as our primary consideration and improve the throughput under the promise of safety. Therefore, the reward function reads:

$r^i_t = \begin{cases} 1 + \eta\,(b_{th} - a^i_t), & \text{if no collision,} \\ (-1)\cdot\big[1 + \eta\,(b_{th} - a^i_t)\big], & \text{otherwise,} \end{cases}$   (3.9)

where $b_{th}$ is a hyperparameter indicating a reasonable upper bound on the buffer size, and $\eta$ is a hyperparameter that balances collision and throughput. A vehicle is removed once it collides, so that it does not block $I$, and is put back in $M$ in the next step. A training episode contains $\tau$ steps, and an episode ends once the maximum step size is reached. The formulation of $r^i_t$ indicates that we want the buffer size to be as small as possible to increase the throughput, while penalizing collisions.
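To illustrate how the pieces fit together, the sketch below encodes the per-vehicle state $(id, e, o, p)$, evaluates the reward of Eq. 3.9, and applies the tabular form of the Q-update quoted above. It is a toy stand-in for the deep Q-network, with illustrative hyperparameters, not the exact training setup used in the experiments.

```python
import numpy as np

B_TH, ETA = 16, 0.05          # illustrative buffer upper bound and balancing factor

def reward(buffer_size, collided):
    """Eq. 3.9: small buffers are rewarded, and a collision flips the sign of the reward."""
    base = 1.0 + ETA * (B_TH - buffer_size)
    return -base if collided else base

def encode_state(vehicles):
    """State s_t = (id, e, o, p) for each of the n vehicles, flattened into one vector."""
    return np.array([v for veh in vehicles for v in (veh["id"], veh["e"], veh["o"], veh["p"])],
                    dtype=np.float32)

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    """Tabular form of the update that the deep Q-network approximates:
    q(s,a) <- q(s,a) + alpha * [r + gamma * max_a' q(s',a') - q(s,a)]."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
vehicles = [{"id": 3, "e": 1, "o": 7, "p": 0.42}, {"id": 5, "e": 2, "o": 9, "p": 0.91}]
s = tuple(encode_state(vehicles))              # hashable key for the toy table
q_update(Q, s, a=6, r=reward(6, collided=False), s_next=s, actions=range(B_TH + 1))
```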
Figure 3.6: a. Collision comparison between AIM-Trust, AIM-RL, and AIM-1. b. Throughput comparison between AIM-Trust, AIM-RL and AIM-Fix. Note that in AIM-Trust and AIM-RL, the buffer size ranges from 0 to 16 in cases with 20% to 60% untrusted vehicles, while it ranges from 5 to 21 in cases with more untrusted vehicles (since the upper bound of 16 is not enough for RL agents to learn a good collision avoidance strategy). This change of action space causes the discontinuity of trends in terms of collisions and throughput between the 60% and 80% cases. c. Collision results of AIM-Trust with 10 test cases different from the training set. Collision rates on the test and training sets are consistent and stable even when 100% of vehicles are untrustworthy.

3.2.3 AIM vs. AIM-Trust

Experiment Setup

We consider an RL training episode in which $n = 10$ vehicles pass through $\tau = 10$ intersections, and we monitor the collisions occurring within these $n \cdot \tau = 100$ passings as $c$. At each intersection (or step) $t$ in an episode, $n$ vehicles enter and leave the intersection following randomly picked start points, $[e^1_t, \ldots, e^n_t]$, and destinations, $[o^1_t, \ldots, o^n_t]$. We generate 1 set of starting points and destinations as the training set, and 10 independent sets as the test set. We consider two performance metrics: the collision rate $\frac{c}{n\tau}$ and the throughput $\frac{n\tau - c}{T}$, where $T$ is the time (in seconds) elapsed in one episode.

Baselines. We first compare our proposed AIM-Trust with 3 AIM-family baselines: the original AIM algorithm with fixed buffer size 1, namely AIM-1; the modified AIM algorithm with a fixed average buffer size, namely AIM-Fix (we manually select the fixed buffer size for this baseline to force it to perform similarly to AIM-Trust in terms of collision rate, and then compare the throughput with AIM-Trust); and a variation of AIM-Trust without the trust factor $p$ in the state space, namely AIM-RL. In addition, we compare AIM-Trust with traffic light-based intersection control methods that are not in the AIM family.

Figure 3.7: Collision comparison. The results of RL-based methods contain 10 test cases in each scenario (the untrusted vehicle percentage varies from 20% to 100%), and 10 runs of each case (hence in total 100 data points in each box). Trustworthiness-aware methods have lower collisions in all scenarios.

Table 1: AIM-Trust's collision rate decrements compared to baselines, and AIM-Trust's throughput increments compared to AIM-Fix. UV indicates the untrusted vehicle percentage.
  UV              20%      40%      60%      80%      100%
  AIM-RL          51.35%   50.00%   64.44%   18.18%   71.69%
  AIM-1           64.00%   82.85%   79.37%   83.84%   89.28%
  TIM-RL          44.46%   53.48%   60.94%   78.11%   79.85%
  TIM-Fix         58.70%   56.51%   64.33%   78.11%   81.55%
  TIM-Trust       42.98%   52.31%   57.22%   71.18%   78.48%
  TIM-Trust 2.0   25.14%   36.58%   54.28%   70.61%   76.86%
  AIM-Fix         16.93%   17.25%   30.13%   3.61%    9.77%
We follow [LDWH18] to construct a deep reinforcement learning (DRL)-based traffic light cycle control method asC to operate the intersection. We denote this method as TIM-RL, i.e., traffic-light intersection man- agement based on RL. Since TIM methods focus on operating the intersection with high efficiency without considering untrustworthy agents, they have no collision avoidance mechanism. To make the baseline more competitive in the scenarios involving untrust- worthy vehicles, we propose two enhanced TIM methods: (i) TIM-Trust, which includes trustworthinessp i t in TIM-RL’s state space, and (ii) TIM-Trust 2:0, which includes trust- worthiness and has collision penalties in reward function. Furthermore, we include a fixed cycle traffic light (no RL agent involved) to replicate the conventional traffic light control method for comparison, which is denoted as TIM-Fix. 82 Experimental Results Collision results. In this section, we first present collision comparison results in AIM family as shown in Fig. 3.6a. We vary the percentage of untrusted vehicles from 20% to 100%. Since RL training embeds the randomness from initialization naturally, we train AIM-Trust 10 times and report the mean-variance results to show the stability. Fig. 3.6a shows that AIM-Trust decreases the collision numbers drastically compared to AIM-1 (see Table 1 for detailed numerical results). Since AIM cannot deal with the violation of assumption (i), the small and fixed buffer size leads to high collision rate. The more untrustworthy vehicles in the system, the more the collisions. AIM-Trust’s adjustable buffer size helps to decrease the collision rate and maintains a stable low collision rate even when all vehicles are untrustworthy. In order to examine the effectiveness of the trustworthiness, we make a baseline AIM-RL by taking out the trust factor from AIM-Trust. Except for the trust factor, AIM-RL is exactly the same as AIM-Trust with adjustable buffer size to decrease the collision rate. In addition, we control the training process of AIM-RL and AIM-Trust to be the same to ensure a fair comparison. The experimental results in Fig. 3.7 and Table 1 demonstrate that the trustworthiness of a vehicle is the key to inferring the appropriate buffer size. To investigate the convergence and robustness of the AIM-Trust agent, we consider the collision performance of AIM-Trust in the test set and the training set is consistent as shown in Fig. 3.6c, which indicates that pre-trained AIM-Trust performs well in unseen traffic scenarios. Next, we show the performance of AIM-Trust compared with non-AIM methods, TIM-Fix, TIM-RL, TIM-Trust, and TIM-Trust 2:0, in Figure 3.7 and Table 1. Compared with conventional traffic light-based intersection control methods, AIM-Trust is advanta- geous since it considers the uncertainty and trustworthiness of vehicles, and decreases 83 collision rate by foreseeing the potential trajectories of trustworthy and untrustworthy vehicles. To demonstrate the significance and advantage of the proposed trustworthiness of agents, we enhance TIM-RL and reveal that the trust-based methods, TIM-Trust and TIM-Trust 2:0, beat TIM-RL in terms of collision rate in all scenarios. Experimental results confirm that in a MAS, when there exist untrustworthy agents, trustworthiness is important for control algorithms to infer the involved uncertainty. Throughput Results. In addition to collision rate, we compare the throughput between AIM-Trust and AIM-Fix. 
For a fair comparison, we let buffer size of AIM-Fix to be [9; 9:5; 11; 13; 21:5] under 20% to 100% untrusted vehicles such that AIM-Fix has similar (slightly larger) collision rates as AIM-Trust. Then, under a similar collision rate, we compare the throughput. As shown in Fig. 3.6b and Table 1, AIM-Trust’s throughput improvement demonstrates that the RL-based buffer adjustment not only decreases the collision rate, but also benefits the throughput. Compared to AIM-Fix, AIM-Trust on average achieves higher throughput in all cases. Note that the collision rate of AIM-Fix is higher than AIM-Trust and based on throughput calculation, a high collision rate actually gives advantages. With different collision rates, the comparison of throughput is unfair since the sacrifice of safety generates better performance in throughput (note that in the simulation we remove the collision vehicle immediately from the map and do not affect the traffic flow). To compare with AIM-RL fairly, we can compare with 80% untrusted vehicle as AIM-Trust and AIM-RL achieve similar collision rate in this case; and the throughput results show AIM-Trust achieves higher throughput. In other words, AIM-Trust with both lower collision rate and higher throughput indicates that AIM-Trust is much better than AIM-Fix and AIM-RL; and the trust factor we defined in this work is the major contributor to this out-performance. Collision-free AIM-Trust. AIM-Trust with the throughput consideration provides sig- nificant collision rate reduction compared to AIM-1. However, it cannot guarantee 84 Figure 3.8: Trustworthiness results (blue lines) and instruction violation results (red areas). In 20% untrusted case, 2 of 10 vehicles may or may not follow instructions, hence, 2 figures in the first row contain red areas. Our trust calculation precisely captures the instruction violation: a vehicle’s trustworthiness increases when it follows the instruction, and decreases otherwise. collision-free due to the safety and throughput dilemma. In AIM-Trust, we consider collision and throughput via balancing factor in the reward function Eq. 3.9. To demonstrate that AIM-Trust can deduce appropriate buffer sizes based on trust, we relax the throughput requirement and modify the reward function of AIM-Trust to ber i t = 1 if no collision andr i t = 40 otherwise. We also let the RL agent choose buffer size in range 0 to 26 (we denote this new version as AIM-Trust 2:0). Through these minor modifications, AIM-Trust 2:0 focuses on collision avoidance and learns to achieve collision-free in training. On average, the resulting buffer sizes of AIM-Trust 2:0 in cases with 20% to 100% untrustworthy vehicles are [14:4; 14; 14:4; 20; 24]. With the same reward function, AIM-RL 2:0 (i.e., AIM-Trust 2:0 without trust factor in state space) also learns to avoid collision completely, but with higher buffer sizes [14; 15; 16; 20; 26] that lead to lower throughput. 85 Trust Results. Here, we present the trustworthiness quantification of vehicles in our experiments. Fig. 3.8 shows the trustworthiness results of 10 vehicles in one of our experiments. Each column corresponds to a vehicle, which may or may not be trust- worthy. Each row represents a full episode of training with 10 time steps (i.e., vehicles passing through 10 intersections). For example, the first row contains the trustworthiness evaluations of all 10 vehicles passing through 10 intersections, and 20% of them are untrustworthy. 
Red areas indicate that a vehicle does not follow the approved trajectory in simulation, and this causes trustworthiness (blue line) decrements. These results show that our trustworthiness evaluation accurately captures the undesired behavior of vehicles, and is significantly helpful when used in control algorithms. Conclusion. AIM is designed to provide a collision-free intersection management with high throughput. However, in the application environment where there exist untrustworthy even malicious vehicles that do not follow the instructions, conventional AIM leads to a large number of collisions (up to 28%). This example reveals the need for a trustworthiness measure in MAS we proposed in this work. We design a trust evaluation framework and propose to use evaluated trustworthiness in control algorithms. We demonstrate in a case study how to embed trustworthiness in intersection management by designing AIM-Trust. To evaluate the effectiveness of the trust factor, we explicitly compare our AIM-Trust with baselines, and the experimental results show that the trust factor reduces the collision rate in all cases. In addition, in trustworthiness results, we directly see that trust scores accurately reflect the behavior quality of vehicles. For future work, we would like to refine our trust framework to be more comprehensive and applicable to a broader range of MAS control algorithms. 86 3.3 Trust-based Malicious Attacker Detection in CACC Platoons 3.3.1 CACC Platoons and Attacker Models Recent advances in vehicle-to-everything (V2X) communication have enabled the devel- opment of platooning to save energy, improve efficiency, and ensure safety [Axe16]. In a platoon, a chain of vehicles equipped with V2X sense the surroundings and maintain a constant inter-vehicle space. The head vehicle controls the platoon by broadcasting its kinematic data, such as its designated velocityv, and inter-vehicle spaced. The mem- ber vehicles follow the head vehicle’s instructions and use beacons from other platoon members to control velocity and inter-vehicle space. 8 Various CACC (platoon) attacks have been proposed in the literature, including jam- ming attacks, V2X data injection, [ARC + 15] and sensor manipulation attacks [vdHLK17]. Attack defense models such as misbehavior detection have also been studied [GWWW19]. In this case study, we aim to detect attackers in platoons, so the core of our attacker model is that attackers gain control over vehicles and their actions are observable by participants. In order to detect adversarial behavior, we focus on V2X data injection attacks: acceleration data injections. 3.3.2 Trust-based Attacker Detection Model We now demonstrate how to apply our trust framework to detect attackers in CACC platoons. We assume a centralized trust authorityA that maintains a trustworthiness tableH. Such an authority could either be a cloud-based service or an edge computing 8 Beacon messages containing vehicle information are communicated by vehicles to increase cruise stability [GWWW19]. 87 Figure 3.9: Trust-based attacker detection model with single and bi-directional trust evaluations. node. We assume that the head vehicle is the leader andA only directly inspects leader to reduce inspection intensity. We also assume that each vehicleX serves as a distributed trust authority and reports toA when evaluating the adjacent vehicles; and it is also a target, when the adjacent vehicles evaluateX. 
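Before detailing how a reporter's own trustworthiness discounts its reports, the bookkeeping just described — a centralized authority $A$ keeping long-term opinions in a table $H$, with each vehicle doubling as a distributed evaluator of its adjacent vehicles — can be pictured with a small data-structure sketch. Class and field names are hypothetical; only the 4-tuple opinion layout and the maximally uncertain initial opinion {0, 0, 1, 0.5} follow the text.

from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

class TrustAuthority:
    # Minimal sketch of A: a long-term table H plus a buffer of incoming reports.
    def __init__(self):
        self.H = {}         # vehicle id -> long-term Opinion held by A
        self.pending = {}   # vehicle id -> list of (reporter id, short-term Opinion)

    def opinion_of(self, vehicle_id):
        # Vehicles never seen before start with maximum uncertainty {0, 0, 1, 0.5}.
        return self.H.get(vehicle_id, Opinion(0.0, 0.0, 1.0))

    def receive_report(self, reporter_id, target_id, short_term):
        # Reports are collected per target; Sec. 3.3.2 explains how they are
        # discounted by the reporter's own trust and fused with H[target_id].
        self.pending.setdefault(target_id, []).append((reporter_id, short_term))

# Toy usage: the vehicle behind X reports a short-term opinion about X.
authority = TrustAuthority()
authority.receive_report(reporter_id="V4", target_id="V3",
                         short_term=Opinion(0.6, 0.1, 0.3))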
Since a reporting vehicle can itself be untrustworthy, when it reports to $A$, we apply the discounting operator defined in Eq. 3.3, Definition 3.1.5. Assume the long-term trust histories of $X$ and of its successor and predecessor vehicles, denoted $\Theta_1$ and $\Theta_2$, are $W^A_X$, $W^A_{\Theta_1}$, and $W^A_{\Theta_2}$, respectively. $\Theta_1$ and $\Theta_2$ use sensors to obtain accurate information, including the sensed inter-vehicle distances $\hat{x}^{\Theta_1}_X$, $\hat{x}^{\Theta_2}_X$ and the sensed speed of $X$, $\hat{sp}^{\Theta_1}_X$, $\hat{sp}^{\Theta_2}_X$. Therefore, they evaluate $X$, and the resulting short-term trust/opinions are $W^{\Theta_1}_X$ and $W^{\Theta_2}_X$. Then the short-term opinion about $X$ reads:

$$W^{[A;\Theta_1]}_X \oplus W^{[A;\Theta_2]}_X = (W^A_{\Theta_1} \otimes W^{\Theta_1}_X) \oplus (W^A_{\Theta_2} \otimes W^{\Theta_2}_X), \qquad (3.10)$$

where $\otimes$ denotes the discounting operator and $\oplus$ denotes opinion fusion. After combining the long-term opinion, the opinion about $X$ reads: $W^A_X \leftarrow (W^{[A;\Theta_1]}_X \oplus W^{[A;\Theta_2]}_X) \oplus W^A_X$. This bi-directional trust evaluation takes information from both the vehicle right before and the vehicle right after the target vehicle, as illustrated in Fig. 3.9.

To reduce communication intensity, this trust evaluation can be downgraded to single-directional, as shown in the top half of Fig. 3.9. In single-directional trust evaluation, each vehicle is only evaluated by its direct predecessor $\Theta$. Hence, the opinion update equation of vehicle $X$ takes a simplified form: $W^A_X \leftarrow W^{[A;\Theta]}_X \oplus W^A_X = (W^A_\Theta \otimes W^\Theta_X) \oplus W^A_X$. We assume that in both single- and bi-directional evaluations, the head vehicle's predecessor is the trustworthy $A$.

In fact, $W^\Theta_X$ is evidence-based, and we now present how to derive the evidence. A vehicle gains accurate position and velocity information about its adjacent vehicles via sensors, along with the information reported in beacons (of other vehicles). Therefore, vehicles acting as evaluators measure evidence using a set of rules to determine whether the adjacent vehicle is trustworthy, under the assumption that the sensor data is always accurate. In what follows, we use $\varphi$, $\psi$, and $\kappa$ to denote behavioral properties in an appropriate formalism such as Signal Temporal Logic (STL) [MN04], and we write $\hat{\cdot}$ for sensed quantities and plain symbols for quantities reported over beacons. For brevity, we omit a detailed explanation of STL; in our notation, $(x_X, t) \models \varphi$ denotes that, starting from time $t$, the behavior $x_X$ satisfies $\varphi$, and $(x_X, t) \models G_{[t_1, t_2]}\,\varphi$ indicates that the formula $\varphi$ holds at all times between $t + t_1$ and $t + t_2$. A set of platoon-specific rules determines positive ($r$) and negative ($s$) evidence:

$$r = r + 1 \ \text{ if } (x_X, t) \models \varphi \ \wedge\ (sp_X, t) \models \psi \ \wedge\ (jk_X, t) \models \kappa; \qquad s = s + 1 \ \text{ otherwise.} \qquad (3.11)$$

$$\varphi \equiv G_{[t_1, t_2]}\ |x_X(t) - d| \leq \epsilon_{space} \ \wedge\ |x_X(t) - \hat{x}_X(t)| \leq \epsilon_{space} \qquad (3.12)$$

$$\psi \equiv G_{[t_1, t_2]}\ |sp_X(t) - v| \leq \epsilon_{speed} \ \wedge\ |sp_X(t) - \hat{sp}_X(t)| \leq \epsilon_{speed} \qquad (3.13)$$

$$\kappa \equiv G_{[t_1, t_2]}\ |jk_X(t)| \leq \epsilon_{jkness} \ \wedge\ |jk_X(t) - \hat{jk}_X(t)| \leq \epsilon_{jkness} \qquad (3.14)$$

Eq. 3.12 indicates that the inter-vehicle space of $X$ reported in beacons, $x_X$, should not deviate from the requested $d$ by more than $\epsilon_{space}$ in the time interval $[t + t_1, t + t_2]$, where $t_1$ and $t_2$ are hyperparameters. Similarly, the reported $x_X$ should not deviate from the sensed $\hat{x}_X$ by more than $\epsilon_{space}$. Similar rules apply to the speed $sp_X$ and the jerk value $jk_X$, which we estimate by taking the difference between the acceleration values of the last and current beacons. High jerk values, or abrupt changes in acceleration, are a safety risk [GWWW19].

Figure 3.10: Single-directional attacker detection experimental results. A 10-vehicle platoon completes 6 trips. Assume that in the first trip all vehicles are new to the trust system and do not have a trust record. Their records in $H$ start building from trip 1 and are used in the following trips. The sine waves are required accelerations, and the fuzzy parts are acceleration attacks performed by vehicles.

3.3.3 Proposed Trust-based Model Accurately Detects Attackers

Experiment setup.
We experiment with 10-vehicle platoons and there exists att2 [1; 2; 3] attacker(s) that are randomly located in the member vehicles. We test our trust-based attacker detection models (both single and bi-directional) with acceleration injection attacks. We generate synthetic acceleration data for member vehicles and evaluate trust opinions in real time. Evaluated trust values are saved inH for long-term record after each trip. Experimental results. Fig. 3.10 shows the experimental results of the single-directional trust model. Trust values of attackers decrease when they perform acceleration attacks. Since our trust framework is aware of long-term history, if no record inH, the initial opinion about vehicle is set tof0; 0; 1; 0:5g, where uncertainty takes its maximum 1 to represent the fact that we don’t know anything (before trip 1). If a vehicle has no history 90 Figure 3.11: Bi-directional attacker detection experimental results. A 10-vehicle platoon com- pletes 2 adjacent trips and attackers 1, 2, and 3 perform similar accelerations attacks. a. All vehicles do not have trust history inH. b. Only attacker vehicles have moderate histories inH with trust value 0:25. c. Only attacker vehicles have bad histories with trust value 0:05. and performs an attack in the beginning of its first trip, then its trust value decreases very fast, e.g., vehicle 1, 2, 3 in trip 1. If a vehicle has good history, and performs an attack or behaves dangerously, then its trust value also decreases but with relatively low rate, e.g., vehicle 9 in trip 5. This is because our framework calculates trustworthiness based on both long-term and short-term history. The longer a vehicle keeps a good record, the slower the penalty comes. On the contrary, when a vehicle with bad history behaves dangerously, its trust value will decrease by a large margin, e.g., vehicle 3 in trip 6. Fig. 3.11 shows results of the bi-directional trust model. Different historical trust record of attackers results in different trust values. When attackers with no or moderate histories perform attacks, the trust evaluations and degradation in Fig. 3.11a-b are similar in Fig. 3.10. When attackers with bad histories perform good in the current trip, they will gain trust slowly as shown in Fig. 3.11c. Note that the middle attacker V2 gains trust slower than V1 and V3 because in our bi-directional trust model, V2’s evaluators are also untrustworthy, hence their evaluations are discounted by their own trustworthiness. Discussion. Our attacker detection model combines long- and short-term trustworthiness history and takes distributed authorities’ own trustworthiness into consideration to enable the detection of multiple attackers in platoons. One improvement could be differentiating the danger level of attackers by manipulating Eq. 3.11. A more dangerous behavior (e.g., 91 behavior leads to crash) should be penalized more than a less dangerous behavior. With this consideration, we will make the trust-based attacker detection be more comprehensive and efficient in follow-up works, such as trust-aware distance control in CACC platoons. In addition, involving trustworthy RSUs is always helpful but costly. With the long-term trust history, vehicles can choose platoons controlled by more trustworthy head vehicles when joining and forming platoons, which is also a meaningful direction in platoon research. In this chapter, we demonstrate the possibility and feasibility of our trust framework and provide backbones for future works to build on. 
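As a closing illustration of how the platoon rules of Eqs. 3.11–3.14 translate into evidence, the following fragment sketches how a follower vehicle could check one beacon against its own sensor readings and record positive or negative evidence. The actual rules hold over a time window $G_{[t_1, t_2]}$ and are evaluated as STL properties; here a single sample is checked, and the thresholds, field names, and jerk approximation are illustrative assumptions rather than the exact implementation.

# Illustrative check of one beacon against local sensor readings (cf. Eqs. 3.11-3.14).
# Thresholds and field names are hypothetical; jerk is approximated from consecutive
# beacon accelerations, as described in the text.

EPS_SPACE, EPS_SPEED, EPS_JERK = 2.0, 1.5, 3.0   # tolerance hyperparameters (assumed)

def evaluate_beacon(beacon, sensed, requested_gap, requested_speed, prev_accel, dt):
    # Return True (positive evidence) if the reported values respect all three rules.
    jerk = (beacon["accel"] - prev_accel) / dt
    gap_ok = (abs(beacon["gap"] - requested_gap) <= EPS_SPACE and
              abs(beacon["gap"] - sensed["gap"]) <= EPS_SPACE)
    speed_ok = (abs(beacon["speed"] - requested_speed) <= EPS_SPEED and
                abs(beacon["speed"] - sensed["speed"]) <= EPS_SPEED)
    jerk_ok = abs(jerk) <= EPS_JERK
    return gap_ok and speed_ok and jerk_ok

def update_evidence(r, s, ok):
    # r, s: running positive / negative evidence counters (Eq. 3.11).
    return (r + 1, s) if ok else (r, s + 1)

# Toy usage: one evaluation step made by a predecessor vehicle about its follower.
r, s = 0, 0
beacon = {"gap": 10.4, "speed": 24.8, "accel": 0.3}
sensed = {"gap": 10.1, "speed": 25.0}
ok = evaluate_beacon(beacon, sensed, requested_gap=10.0, requested_speed=25.0,
                     prev_accel=0.1, dt=0.1)
r, s = update_evidence(r, s, ok)   # consistent beacon -> r = 1, s = 0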
In the following sections, we will demonstrate how to use the calculated trust values in control policies. 92 Chapter 4 Dynamic Trust Quantification for Perceptions While automated driving systems (ADSs) promise safety, efficiency, and comfort [YLCT20, DMDV20], they fundamentally depend on robust perception of the environ- ment and subsequent decision-making. The computer vision community – in part fueled by advances in deep learning – has made tremendous advances in techniques for real-time object detection and semantic segmentation [ZYW + 18,FHSR + 20,GDDM14,GDDM15], object tracking [PT09,RT19], end-to-end path planning [XCG + 20,DTMD08], and pedes- trian recognition and movement prediction [LCB + 18, SG16]. While these techniques are considered vital for safe autonomous driving, the metric of success for such techniques is usually the training accuracy, i.e., the accuracy of the perception module compared to human-labeled ground truth. Unfortunately, recent traffic accidents [Lee18,Tem20,nts19], show that even large datasets are not sufficient to account for imprecisely characterized or unknown real-world scenarios, and high training accuracy may not translate into reliable decision-making. A key reason is that most decision-making algorithms operate under the (often flawed) assumption of perfect observability (i.e., all critical variables are known and accurately measured, the measurement data does not contain conflicting situations/configurations) and that the perception is perfect. To overcome real-world challenges related to incompletely characterized, noisy, or uncertain data, we address the 93 Proxy Monitors Monitor Monitor Monitor ... Perception Opinion Belief Disbelief Uncertainty Ignorance Perception Component 1 Perception Component 2 Perception Component n ... Perception Trust- and Risk- Modulator Vehicle Decision-making Low-level Actuation Input Data Stream Perception Evidence Positive Negative Uncertain Trust and Risk Risk Trust Perception Module Proposed Perception Evaluation and Modulation Module Decision Maker Figure 4.1: The proposed trust-modulated decision-making in autonomous mobile system con- sists of monitors to evaluate the perception modules and generates property satisfaction verdicts (evidence), which are then utilized in the trustworthiness and risk quantification node to dynami- cally estimate the trust values of the perception. Then the trust and/or risk modulator modulates the perception results and sends the trust and/or risk-modulated perception outputs to the vehicle decision-making node for further actions. Without the proposed perception evaluation and modu- lation module, the output of perceptionY is directly used in the vehicle decision-making module (dashed red arrow). following questions: (1) How can we quantify the time-varying trust and risk in the per- ception software modules using metrics that go beyond training accuracy? (2) How can we use trustworthiness values of perception to improve the safety of decision-making? To address the former questions, we develop an evidence-based reasoning framework to dynamically reason about the system’s trust in its perception modules. The pillar of this framework is inspired by subjective logic (SL), an epistemic logic suitable for modeling and analyzing situations involving relatively unreliable sources. In our framework, we consider the perception system itself to be unreliable. 
In the presence of user-labeled ground-truth, every correct perception decision increases our belief in the perception system, while incorrect or missed decisions increase our disbelief in it. However, we desire to assess the performance of a deployed perception system, and obviously, there is no ground truth available when the system is being operated in the field. To address this, we postulate that reasoning based on physical laws and cognitive contexts can serve as a proxy for ground truth data. For example, consider an object detection module that labels an object as a non-moving obstacle in one video frame, fails to recognize it in the 94 next frame, but recognizes it as a non-moving obstacle in the third frame. Clearly, the missing obstacle in the second frame is not possible. In [DADF18], the authors have introduced a logic known as Time Quality Temporal Logic (TQTL) which allows to express properties that serve as proxy for ground truth data. In our paper, we monitor logical properties such as those expressible in an extension of TQTL with spatial operations. Such monitors essentially act as a mechanism for evaluating the reliability of the perception module; e.g., frequent monitor failures lead to an erosion of trust and an increment of risk. We call this part of our framework dynamic trust and risk quantification. We use our dynamic mechanism for judging the trustworthiness and risk of our perception system to then enable safer decision-making. The workflow of this trust-modulated decision-making is shown in Fig. 4.1. In simple terms, the decision-making system can be treated as an agent performing actions based on its perceived environmental state. We wish to avoid scenarios where incorrect perception leads to actions where the agent believes that the action would ensure safety, but should the action be executed, would cause a safety violation. We show that if the decision- making system satisfies certain properties, for every action that it takes, we can formulate a trust-modulated action, which will lead to safe system behavior with a high probability. To solve the above-mentioned challenging problems, we make the following novel contributions: • We design STQL-based monitors for reasoning about perception reasonableness quantitatively, translating observations into quantitative evidence and quantifying trust. • We develop a trust-modulated action framework that improves system safety. 95 Data Stream of Actual Scenes Perceived Scenes (May contain errors and inconsistencies) Perception Planning & Decision- making Waypoints Safe? Trustworthy? Object Recognition Tracking Depth Estimation Trajectory Prediction Bounding boxes Positions (x,y,z) Distances (d1,d2,...) Trajectories Low-level Actuation Pedestrain Bicycle Truck Pedestrain Bicycle Ambulance Pedestrain Bicycle Ambulance Actural Scene Perceived Scene at t=1 Perceived Scene at t=2 Bicycle Ambulance Perceived Scene at t=3 Figure 4.2: Autonomous software stack with a self-driving car perception example, in which the perception processes data stream of frames and generate perceived output (there might be errors as shown int = 1 scene, missing objects int = 2 scene, and inconsistencies as shown in t = 3 scene in the perception process, where an ambulance is perceived as a truck then perceived as an ambulance; and the system failed to perceive the pedestrian in continuous scenes). 
Then, perception tasks such as object recognition, tracking, depth estimation, and trajectory prediction are performed and sending various outputs to the planning and decision-making node. The decision-making node finally generates decisions about way-points which later on is used in low-level actuation. In this workflow, we can see the errors and therefore the safety and trustworthiness of the perception and decision-making node are impacted. • We empirically demonstrate the efficacy of our framework in an AEB setting with an adversarial pedestrian and a potentially untrustworthy perception module. Our trust-aware modulation provides a reduction in collision rate by 69.24%. Problem Statement In order to increase the safety of autonomous vehicles, we raise the following research questions. • RQ1. How do we evaluate the “reasonableness” of the perception system? • RQ2. How do we quantify the trustworthiness, risk, and uncertainty of the perception system? 96 • RQ3. How can trustworthiness, risk, and uncertainty help increase the safety of the decision-making system in autonomous vehicles? In this chapter, we propose the following solutions: • S1. Quantify the reasonableness of the perception system using proxy monitors, which judge the degree of reasonableness or quality of perception via rule-based logic. • S2. Quantify the trustworthiness, risk, and uncertainty of the perception system based on the satisfaction of proxy monitors. • S3. Utilize a trust-modulated control algorithm to operate the system via mode- switching to increase safety. 4.1 Perception and Decision-making The software stack for a typical mobile autonomous system, such as a self-driving car consists of several functionally distinct components. We illustrate the high-level software architecture of such a system in Fig. 4.2. We are interested in modules that perform perception and planning/decision-making. Hence, in this section, we define each of these modules in a formal notation that facilitates discussing our theoretical framework. Perception. Perception modules usually contain a diverse set of components with various functional aspects. The perception can be vision-based: where it takes as input a sequence of images from a monocular or stereo camera [PAD + 17], or LIDAR-based or RADAR- based, where it takes as input a point cloud or a list of points in a 2D or 3D space along with some attributes for each point such as intensity. Perception systems then perform one of several tasks such as semantic segmentation (splitting the given image or point-cloud into semantically disjoint parts), object detection and recognition (identifying if a given 97 image contains an object of a pre-defined class), object tracking (identifying the spatio- temporal trajectory of an object), trajectory prediction (producing predicted trajectories for dynamic objects in its environment), depth prediction (estimating the distance of objects in an image), and feature identification (e.g. identifying lane markings, traffic lights, etc.). At an abstract level, a perception module can be thought of as taking as input a data stream and producing a labeled data stream. We formally define a post-perception data stream below. Definition 4.1.1 (Data Stream, Frames and Data Objects). A post-perception data stream D is a sequence of frameshD 0 ;D 1 ;:::;D n i. The index t of frameD t is called the frame number, assumed to be a non-negative integer. 
A frameD t is a set ofm t data objectsfd 1 ;:::;d mt g, where eachd i is an element of some setO i , also known as the data domain. Example 1. Consider the following data stream: D 0 : d 1 : ((ID; 1); (class;car); (pr; 0:9); (bb;B 1 )) D 1 : d 1 : ((ID; 1); (class;car); (pr; 0:9); (bb;B 1 )) d 2 : ((ID; 2); (class;car); (pr; 0:8); (bb;B 2 )) D 2 : d 1 : ((ID; 1); (class;car); (pr; 0:9); (bb;B 1 )) d 2 : ((ID; 3); (class;pedestrian); (pr; 0:6); (bb;B 3 ))::: Consider the frameD 2 : (d 1 ;d 2 ), where eachd i is defined as a tuple of key-value pairs. For example,d 1 is: ( (ID; 1); (class;car); (pr; 0:9); (bb;B 1 ) ), where the value for the key ID contains a unique (within the frame) positive integer indicating the object id, key class is a string specifying the object type, the key pr points to a real number in [0; 1] indicating the probability ofcar indeed being the label of thisd 1 , andB 1 2Z 4 is a 98 bounding box (bb) containing the top, left, bottom, and right pixels indicating corners of an axis-oriented rectangle thatd 1 represents. Planning/Decision-Making. In most autonomous mobile systems, there are several levels of planning and decision-making. For example, an autonomous vehicle may perform route planning, behavioral planning that selects an appropriate mode for the vehicle (e.g., urban driving, highway driving, emergency braking, evasive maneuver), and path planning which generates a sequence of collision-free way-points for the vehicle. Way-points generated by the vehicle planner are then used by a downstream vehicle actuation controller that appropriately actuates the vehicle’s lateral and longitudinal motion to track the way-points. Here, we are broadly concerned with the behavioral planning and motion planning aspects. At a high-level, the decision-making system can be thought of as a stateful transformer that in any given stateq, takes as input a single datastream frame from the perception module and (1) chooses an appropriate behavioral mode for the vehicle, and (2) produces a motion plan for the vehicle. 4.2 Dynamic Trust, Risk, and Uncertainty Quantifica- tion 4.2.1 S1. Proxy Monitors for Perception While training and evaluating perception modules on datasets allows us to compute the accuracy and precision-recall ratio of the algorithm against that specific dataset, these metrics may not provide information about the types of failure modalities that exists with the algorithm, and how the module performs with unseen environments. To address these shortcomings, we propose the use of proxy monitors: these are runtime perception 99 monitors that dynamically evaluate the reliability of the system in the absence of ground truth information. Definition 4.2.1 (Proxy Monitors for Perception). There might be multiple proxy moni- tors and a proxy monitor' i is a streaming function that takes as input a post-perception data-streamD, and at each frame t, produces either a qualitative answer about the validity of the property being monitored, or a quantitative answer judging the degree of reasonableness or quality of perception. I.e.,' i :DN!f0; 1g or' i :DN!R. Proxy monitors on the annotated bounding boxes outputted by object detection node are defined using STQL [Hek21, BDH + 21], which allows us to temporally reason about spatial functions like distance between bounding boxes, and the intersection and union of bounding boxes. 
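As a concrete, purely illustrative companion to Definition 4.2.1 and Example 1, the fragment below represents a post-perception data stream as a list of frames of key-value records and implements one qualitative monitor that flags objects disappearing between consecutive frames. It is a plain-Python stand-in for intuition only; the monitors actually used in the framework are written in STQL and evaluated with PerceMon, and the specific IDs and bounding boxes are made up.

# A post-perception data stream as in Example 1: each frame is a list of data objects,
# each data object a dict of key-value pairs. The monitor below is a toy qualitative
# proxy monitor phi: (D, t) -> {0, 1}.

stream = [
    [{"ID": 1, "class": "car", "pr": 0.9, "bb": (10, 20, 60, 80)}],
    [{"ID": 1, "class": "car", "pr": 0.9, "bb": (12, 22, 62, 82)},
     {"ID": 2, "class": "car", "pr": 0.8, "bb": (100, 40, 160, 90)}],
    [{"ID": 1, "class": "car", "pr": 0.9, "bb": (14, 24, 64, 84)},
     {"ID": 3, "class": "pedestrian", "pr": 0.6, "bb": (200, 50, 230, 120)}],
]

def phi_consistent_detection(stream, t):
    # 1 if every object present at frame t-1 is still present (same ID) at frame t.
    if t == 0:
        return 1
    prev_ids = {d["ID"] for d in stream[t - 1]}
    curr_ids = {d["ID"] for d in stream[t]}
    return int(prev_ids <= curr_ids)

verdicts = [phi_consistent_detection(stream, t) for t in range(len(stream))]
# -> [1, 1, 0]: object 2 vanishes between frames 1 and 2, so the monitor fails at t = 2.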
We are thus able to define high-level properties on the consistency of detected object classes, the approximate smoothness of object trajectories, potential occlusions, and other properties of interest.

4.2.2 S2. Trust and Risk Quantification of Perception Systems

In this section, we introduce how trustworthiness (and risk) is dynamically quantified utilizing the outcome of proxy monitors, based on a formal trust logic.

Perception evidence. To enable quantitative reasonableness and trustworthiness evaluation of perception, we propose to extract numerical perception evidence based on the previously defined proxy monitors $\varphi = [\varphi_1, \ldots, \varphi_n]$. We define perception evidence to summarize the property validation results, i.e., positive evidence ($s$) represents that all properties are validated; negative evidence ($r$) represents that none of the properties is validated; and uncertain evidence ($\epsilon$) represents that some (but not all) properties are validated.

Definition 4.2.2 (Perception Evidence). Given proxy monitors $\varphi = [\varphi_1, \ldots, \varphi_n]$, $\forall i \in [1, n]$, $\varphi_i = 1$ represents that property $\varphi_i$ is validated, otherwise $\varphi_i = 0$. The perception evidence $E = (s, r, \epsilon)$ then reads:

$$\begin{cases} s = s + 1, & \text{if } \frac{1}{n}\sum_{i=1}^{n} \varphi_i = 1,\\[2pt] r = r + 1, & \text{if } \frac{1}{n}\sum_{i=1}^{n} \varphi_i = 0,\\[2pt] \epsilon = \epsilon + 1, & \text{otherwise.} \end{cases} \qquad (4.1)$$

Perception opinion. To enable trustworthiness evaluations, we propose to extend a probabilistic logic, subjective logic (SL) [Jøs16], to quantify a perception opinion based on perception evidence. Positive evidence represents that all proxy monitors are validated, negative evidence represents the opposite, and uncertain evidence exists whenever there is conflict or inconsistency in perception. Therefore, we define our opinion about perception based on this quantitative evidence. Formally, a perception opinion consists of five components, belief ($b$), disbelief ($d$), uncertainty ($u$), ignorance ($i$), and base rate ($a$), where belief, disbelief, and uncertainty are linked to positive, negative, and uncertain evidence, and ignorance represents the lack of knowledge. The base rate is a prior probability indicating the accuracy of perception in the training phase.

Definition 4.2.3 (Perception Opinion). Subject to a perception, the opinion about this perception is $W = (b, d, u, i, a)$, where $b + d + u + i = 1$ and $a \in [0, 1]$. In particular, a perception opinion can be calculated based on perception evidence using the following rules:

$$b = s/\xi; \quad d = r/\xi; \quad u = \epsilon/\xi; \quad i = \omega/\xi, \qquad (4.2)$$

where $\xi = s + r + \epsilon + \omega$ and $\omega$ is a default non-informative prior weight with value 2.

With the pre-defined proxy monitors $\varphi$, the stream of the perception module's behavior is evaluated and translated into a stream of evidence $E$. With Boolean verdicts, 1's and 0's in $\varphi$ indicate the satisfaction and violation of pre-defined properties. Translated into the perception opinion, positive, negative, and uncertain evidence correspond to the belief, disbelief, and uncertainty in the opinion $W$.

Trustworthiness and risk. In the previous sections, the behavior of the perception module is translated into an opinion with the help of proxy monitors. Next, we can further evaluate the trustworthiness and risk of perception based on the perception opinion. Intuitively, the trustworthiness of perception represents how much we trust the perception to make the right prediction, and the risk of perception indicates the risk of it being incorrect. Therefore, we formally define trustworthiness and risk using the following definitions:

Definition 4.2.4 (Trustworthiness).
Subject to a specified perception, the trustworthi- ness of is defined asp =b +i a , whereb accounts for the belief in to make the right prediction, andi a represents that in unknown or unseen scenarios, we rely on training accuracya to approximate our trust. Definition 4.2.5 (Risk). Subject to a specified perception, the risk of is defined as = d +i (1a ), where d accounts for the disbelief in to make the right prediction, andi (1a ) represents that in unknown or unseen scenarios, we rely on training error rate (1a ) to predict the risk of being incorrect. Lemma 4.2.1. Trustworthiness, risk, and uncertainty about perception are mutually exclusive and sum up to 1, i.e.,p + +u = 1. Based on the formulation of trustworthiness and risk, and the evidence theory, the history of perception is dynamically translated into trust and risk evaluations. Trustwor- thiness and risk (and uncertainty) are bounded within the limit of [0; 1]. 102 4.3 Trust- and Risk-modulated Decision-making Perception in autonomous vehicles perceives the surrounding environment and takes actions based on the post-perception data streamD. The decision-making, mode-switching, and perception process can be abstractly represented by a tuple (Q;X;D;A;T ). In particular, the perception process takes a true inputu2X from the environment, and changes the internal state tox2Q and perceives the input asy2D. Then, based ony the autonomous agent makes a decision about taking an actiona2A. Algorithm 3: Trust- and risk-aware perception modulation and decision-making. Input :Inputu2D to the perception. Output :Modulated output and action of the perception,y anda . 1 Perception 2 y (u) . Perceived output. 3 EvidencesE '(y) . Def. 4.2.2 4 OpinionW (b ;d ;u ;i ;a ) . Def. 4.2.3 5 Trustp b +ia . Def. 4.2.4 6 Risk d +i (1a) . Def. 4.2.5 7 end 8 Trust- and risk-modulated decision-making 9 y f(y;p;) . Modulatey based on a trust- and risk-aware transformf() 10 Take actiona 2A S based on modulated outputy . 11 end 4.3.1 S3. Trust and Risk Modulation Due to the fact thatu may or may not be the same asy, and the decision-making based ony may or may not be safe, modulation of perception makes an effort to modulatey by taking a linear or non-linear transformf() and the resulting modulated output reads y =f(y). Then, the action taken based on the modulated output is denoted asa 2A. Trust and risk-modulated perception takes trustworthinessp and risk calculated based on proxy monitors and modulatesy withy =f(y;p;). Assume that there exists a set 103 Figure 4.3: Our framework is composed of five components: object detection nodeD 1 to generate the bounding box of the object of interest, depth prediction nodeD 2 to predict distance of the object, quality check node to evaluate the quality ofD 1 , a trust calculation node to calculate the trustworthiness ofD 1 , and a distance modulation node to modulate the output ofD 2 accounting the trustworthiness ofD 1 . Then the modulated distance can be used in later applications such as emergency brake and pedestrian avoidance. of actionsA S A, such that actions taken from this subset result in safe behavior under the environment producing a set of inputX. If the trust and risk modulation revises the perceived output asy , then actionsa taken based on this output lie in the safe action setA S . Detailed trust and risk modulation of perception algorithm is defined in Alg. 3. 
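A minimal numerical sketch ties together the quantities defined above: evidence accumulation (Eq. 4.1), opinion formation (Eq. 4.2), and the trust and risk values of Defs. 4.2.4–4.2.5, with the modulation step of Alg. 3 left as an abstract pass-through. The non-informative prior weight of 2 and the evidence rules follow the text; the base rate, the toy monitor verdicts, and all names are illustrative assumptions.

# Sketch of evidence accumulation (Eq. 4.1), opinion formation (Eq. 4.2), and the
# trust/risk values of Defs. 4.2.4-4.2.5. The modulation transform f is a stub.

OMEGA = 2.0          # non-informative prior weight from Eq. 4.2
BASE_RATE = 0.9      # a: training accuracy of the perception module (assumed)

def accumulate(evidence, verdicts):
    # Eq. 4.1: one frame's monitor verdicts update (s, r, eps).
    s, r, eps = evidence
    mean = sum(verdicts) / len(verdicts)
    if mean == 1.0:
        s += 1        # every property satisfied -> positive evidence
    elif mean == 0.0:
        r += 1        # every property violated -> negative evidence
    else:
        eps += 1      # mixed verdicts -> uncertain evidence
    return s, r, eps

def opinion(evidence, a=BASE_RATE):
    # Eq. 4.2: (belief, disbelief, uncertainty, ignorance, base rate).
    s, r, eps = evidence
    total = s + r + eps + OMEGA
    return s / total, r / total, eps / total, OMEGA / total, a

def trust_and_risk(op):
    # Defs. 4.2.4-4.2.5: p = b + i*a and risk = d + i*(1 - a); p + risk + u = 1.
    b, d, u, i, a = op
    return b + i * a, d + i * (1.0 - a)

def modulate(y, p, risk):
    # Placeholder for the trust- and risk-aware transform f(y, p, risk) of Alg. 3.
    return y   # e.g., scale or clamp the perceived output; application-specific

evidence = (0, 0, 0)
for verdicts in [(1, 1), (1, 0), (0, 0)]:    # per-frame monitor outputs (toy values)
    evidence = accumulate(evidence, verdicts)
p, rho = trust_and_risk(opinion(evidence))
y_star = modulate(25.0, p, rho)
# p = 0.56, rho = 0.24, u = 0.20, so p + rho + u = 1 as stated in Lemma 4.2.1.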
In the various tasks of visual perception, such as obstacle detection, line merging, pedestrian prediction, modulation could provide the help with safety ensuring by modulating the perception output accordingly. In the case of pedestrian prediction or intersection cross- ing, the perception system can make progressive or conservative modulations to balance between safety and efficiency. In the case of emergency brake, conservative modulations could ensure the system to operate in a secure and cautious manner. 4.4 Conservative Trust Modulation In this section, we discuss our proposed self-supervised trust-modulated perception with particular object detection (D 1 ) and object-specific depth prediction (D 2 ) tasks. As 104 illustrated in Fig. 4.3, our framework has five components: object detection node, depth prediction node, proxy monitors, dynamic trust quantification, and trust-aware depth modulation node. In particular, we follow trust modulation recipe in Alg. 3 and extend it to trust-aware depth modulation case study as shown in Alg. 4, in which we demonstrate end-to-end our revision procedure. In the remaining of this section, we introduce the five components in depth. Algorithm 4: Self-supervised trust-aware distance revision in visual perception. Input :Monocular imageI(t), training accuracy of object detection nodea D 1 , vehicle velocityv(t). Output :Revised object-specific depthd (t). Initialization :Perception opinion of object detection nodeW D 1 = (0; 0; 0; 1;a D 1 ) 1 Object Detection and Trust Estimation 2 class(t);bb(t) YOLO(I(t)) . Sec. 11 3 Evidences E(t) = (r(t);s(t);(t)) ' (class(t);bb(t);class(t);bb(t)) 4 OpinionW D 1 (t) updated . Sec. 11 5 Trustp D 1 b D 1 +i D 1 a D 1 . Def. 4.2.4 6 end 7 Depth Prediction and Revision 8 Visual feature(t) VGG16 (I(t)) 9 Distance predictiond(t) D 2 ((t);bb(t)) . Sec. 11 10 Trust-aware distance modulationd (t) . Eq. 4.3 & 4.4 11 end Object detection. In ADSs, computational cost is an essential consideration for percep- tion model selection. Thus, in our framework, the object detection moduleD 1 , follows the YOLO architecture [RF18], which is a convolutional neural network (CNN) architec- ture that allows for efficient multi-object detection and classification in a single pass of an input image. In our experiments, we train YOLO on the KITTI dataset [GLSU13]. D 1 then consumes images generated from cameras attached to autonomous vehicle at a fixed rate and outputs a list of tight-fitting bounding boxes (bb’s) and object types (class’s) for each object in the image. These bounding boxes are in-turn used to predict 105 the distance to obstacles, and to compute the trustworthiness of the perception module as a whole. Depth prediction. The task of object depth prediction nodeD 2 in our work is to predict object-specific distance, based on the monocular image. In this work, we build our depth prediction nodeD 2 as shown in Fig. 4.3. We follow [ZF19] to exploit the advanced vision model VGG16 to extract visual features from input images, and combine the bounding boxes (output fromD 1 ) with the visual features to craft the input ofD 2 , then we trainD 2 to estimate and produce object-specific distanced for later usage. Proxy monitors. As discussed in Sec. 4.2, we use STQL properties as proxy monitors that outputs the satisfaction value of some input trace (signal trajectory or sequence of bounding boxes) against a logical formula. 
Specifically, in the presented framework, given a set of properties defined using STQL, we construct monitors for the stream of labeled bounding boxes outputted by the object detection moduleD 1 for following properties: • Consistent Detections' 1 : Object detection algorithms are known to frequently miss detecting objects in consecutive frames or detect them with low confidence after detecting them with high confidence in previous frames. This can cause issues with algorithms that rely on consistent detections, e.g. for obstacle tracking and avoidance. The following property can be described in STQL to detect consistent detections: If there exists some object in the previous frame that is not near the edges of the frame, then the object must be present in the current frame too. 106 We also supplement this property by checking if the object is consistently labeled, i.e., for all objects in the current frame, the corresponding object in the previous frame has the same label. • Smooth Object Trajectories ' 2 : As the bounding box output of D 1 is used to estimate the distance to an obstacle, it is critical that the bounding boxes evolve in a somewhat smooth manner. If the bounding boxes for a single object are away from each other by a large margin in consecutive frames, then the distance measurements may be off too. The following property, if violated, detects such events: For every 5 frames, the bounding boxes for an object in each of the frames must overlap more than 50%. The above properties enforce some sanity checks on the output of the perception module, providing evidence for the reliability (or lack thereof) of the perception system. We use the PerceMon [BDH + 21] tool to monitor the data streams from the perception modules. Trustworthiness estimation. In a system with multiple components, a faulty component may cause the dysfunction of the whole system, especially in pipelined systems with a series of connections. Many perception formulations that produce depth predictions rely on the previous output of object detection node, such as those presented in [HGRDG18, ZF19]. The quality or trustworthiness of the object detection node could severely impact the performance of the depth prediction in a perception module. Therefore, we propose to measure the trustworthiness of object detection nodeD 1 to help modulate the output of later depth prediction nodeD 2 . In the previous section, we introduced proxy monitors to measure the quality ofD 1 . In what follows, we propose to estimateD 1 ’s trustworthiness based on proxy monitor satisfaction results, and calculate a single trust value for each frame inD to use in later trust-based depth modulation. 107 In our trust-aware perception, we maintain a running opinion/trustworthiness of D 1 . As we’ve seen in Sec. 4.2, an opinion is composed of a five-tuple. Hence, we denote the opinion ofD 1 asW D 1 = (b D 1 ;d D 1 ;u D 1 ;i D 1 ;a D 1 ). In the training stage, we trainD 1 with training data that are assumed to be representative, however, we cannot determine if the test environment would contain the same or different patterns as in the training experience. Hence, to reflect the existence of unawareness in test environment, we initialize the opinion ofD 1 asW D 1 = (0; 0; 0; 1;a D 1 ), where belief, disbelief, and uncertainty take minimum value of 0’s, ignorance takes the maximum value of 1, and base rate is equal to training accuracy. 
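For intuition, the smoothness property $\varphi_2$ described above can be approximated outside of STQL by an intersection-over-union test on consecutive bounding boxes of the same object. The sketch below is a simplified stand-in for the PerceMon monitor: the 5-frame window and the 50% overlap threshold come from the property statement, while the IoU measure, box format, and track values are illustrative assumptions.

# Simplified stand-in for the smoothness monitor phi_2: consecutive bounding boxes of
# the same object should overlap by more than 50% (IoU used here as the overlap measure).

def iou(box_a, box_b):
    # Intersection-over-union of two (left, top, right, bottom) pixel boxes.
    l = max(box_a[0], box_b[0]); t = max(box_a[1], box_b[1])
    r = min(box_a[2], box_b[2]); b = min(box_a[3], box_b[3])
    inter = max(0, r - l) * max(0, b - t)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def phi_smooth(track, threshold=0.5, window=5):
    # 1 if, within the last `window` boxes of one object track, consecutive boxes overlap enough.
    recent = track[-window:]
    return int(all(iou(a, b) > threshold for a, b in zip(recent, recent[1:])))

track = [(10, 20, 60, 80), (12, 22, 62, 84), (14, 24, 64, 86)]
verdict = phi_smooth(track)   # -> 1: the track evolves smoothly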
Then this opinion $W_{D_1}$ is updated in real time during testing, based on the evidence calculated from the proxy monitors following Eqs. 4.1–4.2. After the opinion update, the trustworthiness $p_{D_1}$ is calculated based on Def. 4.2.4 and later used in the depth modulation (Eq. 4.4) as a discounting factor.

4.4.1 Depth Modulation

Given the output $d$ from the depth prediction node $D_2$, we perform a depth modulation to ensure a conservative safety guarantee. Considering AEB, the conservative guarantee should output a modulated distance less than or equal to $d$. In addition, the consecutive outputs of $D_2$ should not change abruptly, i.e., the distance between two outputs should be bounded by the real-time velocity multiplied by the sampling time difference. Therefore, our first modulation is as follows.

Definition 4.4.1. Assume we take output from the depth prediction node $D_2$ every $\tau$ seconds, and the average real-time velocity of the autonomous vehicle between time $t - \tau$ and $t$ is $v(t)$; the revised depth is defined as:

$$\hat{d}(t) = \begin{cases} \min\big(d(t),\ d(t-\tau) - \tau\, v(t)\big), & \text{if } d(t) > d(t-\tau),\\[2pt] \min\big(d(t),\ d(t-\tau) - \tau\, v(t),\ \hat{d}(t-\tau) - \tau\, v(t)\big), & \text{otherwise.} \end{cases} \qquad (4.3)$$

Based on our pipeline, the input of $D_2$ is the combination of the original real-time monocular images and the bounding boxes output by $D_1$. Hence, the quality of $D_1$ directly affects the accuracy of $D_2$. To take the trustworthiness of $D_1$ into consideration, we modulate our depth estimation a second time by discounting $\hat{d}$ with $D_1$'s trustworthiness:

$$d^*(t) = p_{D_1}(t)\cdot \hat{d}(t); \qquad p_{D_1}(t) \in [0, 1]. \qquad (4.4)$$

4.4.2 Trust-Modulated Perception Reduces Collision Rate

Simulations. To test the efficacy of our framework, we designed a simulation setup using the Robot Operating System (ROS) [QCG+09] to orchestrate the various modules in the system and the CARLA autonomous car simulation platform [DRC+17] as the environment, running scenarios where the car may need to deploy its autonomous emergency braking (AEB) system.

Pedestrian Avoidance. In this experiment, we refer to the car being controlled as the ego vehicle, and any other "adversarial" agents as ado agents. We have the ego vehicle travel down a road, following some waypoint trajectory using a simple PID controller. The scenario also contains one other ado agent: a pedestrian idling on the sidewalk along the path of the ego car, who suddenly decides to cross the street. Thus, the goal of the scenario is to use the data from $D_1$ and $D_2$ to avoid a collision with the ado pedestrian by engaging the AEB system. During the simulations, we recorded the following data to compute the performance metrics, detailed later:

• $d_G(t)$: ground truth distance to the object.
• $d(t)$: predicted object-specific depth generated by $D_2$.
• $\hat{d}(t)$: predicted object-specific depth after the 1st modulation.
• Ideal controller: this controller is the “ideal controller”, i.e., it is able to query the simulator directly for the distance to impeding obstacles, and can use the true distance to reason about when to engage the AEB. This baseline allows us to check what would be the ideal stopping distance for the controller, and compare it with the average stopping distance for the above controllers. Evaluation Metric. The evaluation process is performed based on the intuition that in our AEB case study, the principled way of modulation is conservative and enhances safety. Therefore, we report and compare the following metrics with the baselines, aggregated across 13 trials for each controller framework with a 95% confidence interval: • : the average distance to the ado when the ego car stops. • v: average velocity during each trial, from when the simulation starts until either the AEB is engaged or collision happens. • : collision rate, which indicates safety. 110 Figure 4.4: Pedestrian avoidance using our trust-modulated perception. a. A fatal safety violation happened with the usage of the Direct controller. b. Our trust-aware perception modulation successfully avoids the accident. c. Proxy monitor satisfaction results and the resulting trustworthiness evaluation of object detection node. d. Ground truth distance and predicted distance provided by Direct controller in example shown in a. e. Ground truth distance (d G ), predicted distance (d), and the modulated distances ( ^ d andd ) provided by our controller in example shown in b. Note that the predictedd in d-e is intermittent because when the object detection moduleD 1 detects that there is no object in the current frame, the distance predicted is infinity. f. Mean-variance results ofd G ,d, ^ d, andd of our controller averaged over 13 trails. Note that in the time steps whered =1, we manually set it to a large number (100) to represent that there is no object detected at timet. Experimental Results In Fig. 4.4a-b, d-e, we present an example that illustrates the difference between the Direct controller and our trust-aware controller. In principle, the Direct controller mimics the behavior of an autonomous vehicle that relies on the perception system to make decisions, i.e., brake or accelerate, in various scenarios. Consider the scene shown in Fig. 4.4b, when the car is 25m away, the pedestrian is visible to the perception module before getting momentarily occluded by the bus stop. Without a perfect perception module, the Direct controller that only relies on instantaneous perception of an obstacle fails to recognize the uncertainty associated with the pedestrian position and thus almost always collides with the pedestrian. 111 In contrast, our method takes into account the “unawareness” of perception, dynami- cally evaluates the trustworthiness of perception and modulates the perceived distance conservatively. The two-step modulation performs well in scenarios where there may exist uncertainty and adversarial agents as shown in Fig. 4.4f. With our trust-modulation, the error betweend G andd is much smaller than original perception error, i.e., error betweend G andd. Fig. 4.4c illustrates how our proxy monitors contribute to trustworthi- ness quantification. Monitors' 1 and' 2 check for consistency and smoothness ofD 1 ’s output, therefore, the intermittent prediction around time step 100 causes violation of proxy monitors and results in trustworthiness decrements. Table 4.1: Aggregate scores for the evaluation metrics. 
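The two-step conservative modulation of Eqs. 4.3–4.4 that our controller applies in these trials can be summarized in a few lines: the raw depth $d(t)$ is first clamped against the previous outputs using the sampling period $\tau$ and the current velocity, and the result is then discounted by the trust in the object detector. The sketch below follows the equations as stated above; variable names, the numerical values, and the AEB decision left to the caller are illustrative assumptions.

# Sketch of the two-step conservative distance modulation (Eqs. 4.3-4.4).
# d_prev is d(t - tau), d_hat_prev is the previous modulated depth; the controller
# engages AEB based on d_star rather than the raw prediction d_t.

def modulate_depth(d_t, d_prev, d_hat_prev, v_t, tau, p_d1):
    # Return (d_hat, d_star) for one sampling step.
    travel = tau * v_t                     # distance covered since the last sample
    if d_t > d_prev:                       # Eq. 4.3, first case
        d_hat = min(d_t, d_prev - travel)
    else:                                  # Eq. 4.3, second case
        d_hat = min(d_t, d_prev - travel, d_hat_prev - travel)
    d_star = p_d1 * d_hat                  # Eq. 4.4: trust-discounted depth
    return d_hat, d_star

# Toy step: the detector momentarily over-estimates the distance while trust is low.
d_hat, d_star = modulate_depth(d_t=30.0, d_prev=24.0, d_hat_prev=22.0,
                               v_t=6.0, tau=0.1, p_d1=0.6)
# d_hat = min(30, 23.4) = 23.4 and d_star = 0.6 * 23.4 = 14.04,
# so the controller brakes earlier than the raw 30 m prediction would suggest.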
(m) v (m=s) (%) Ideal 7:214 0:23 5:88 0:71 0 Direct 5:56 4:78 6:76 0:88 92:31 Ours 18:67 9:54 6:87 2:22 23:07 The conservative modulation provides much better safety guarantee, specifically, our method reduces average collision rate from 92:31% (of Direct controller) to 23:07% as shown in Table. 4.1. The Ideal controller is the best possible controller given the perfect perception. However, this Ideal controller is unrealistic without perfect perception. From comparison of v, we find that the Direct controller maintains a larger average speed compared to the Ideal controller, thereby leading to fatalities. We remark that a hypothetical controller that never allows a car to move is the safest but not very useful. Thus, the consistent average velocity of the car with our controller shows that we can effectively trade-off safety with performance. 112 Chapter 5 Misinformation Analysis and Prediction in CPHSs 5.1 Misinformation and Infodemics Social media and micro-blogging have gained popularity [Est19, Jen09] as tools for gathering and propagating information promptly. About two-thirds of Americans (68%) obtain news on social media [KE18], while enjoying its convenient and user-friendly interfaces for learning, teaching, shopping, etc. Journalists use social media as convenient yet powerful tools and ordinary citizens post and propagate information via social media easily [ZAB + 18]. Despite the success and popularity of online media, the suitability and rapidly-spreading nature of micro-blogs fosters the emergence of various rumors [CGL + 18, AGY19]. Individuals encountering rumors on social media may turn to other sources to evaluate, expose, or reinforce rumors [LES + 12, CSTL15]. The rapid and wide spread of rumors can cause various far-reaching consequences, for example, during the 2016 U.S. presidential election, 529 different rumors about Donald Trump and Hillary Clinton spread on social media [JCG + 17] and reached millions of voters swiftly. Hence, these rumors could potentially influence the election [CGL + 18]. More recently, the rapid spread of rumors about 2019 novel coronavirus [CN20,Mer20,Mat20] (some of which are verified to be very dangerous false claims [Jes20], e.g., those that suggest drinking bleach cures the illness [Ton20]) has made social media companies such as Facebook to find more effective solutions [Zoe20]. If not identified timely, 113 sensational and scandalous rumors could provoke social panic during emergency events, e.g., coronavirus [Jul20], threaten the internet credibility and trustworthiness [FAEC14], with serious implications [Fac20]. Social media rumors are therefore a major concern. Commercial giants, government authorities, and academic researchers heavily invest in diminishing the adverse impacts of rumors [CGL + 18]. The literature defines a rumor as “an item of circulating information whose veracity status is yet to be verified at the time of posting" [ZAB + 18]. On a related note, if the veracity status is confirmed to be false, the rumor can then be considered as fake news. Rumor handling research efforts cast four main elements: rumor detection, rumor tracking, rumor stance classification, and rumor veracity classification [ZAB + 18]. A typical rumor classification system includes all the four elements. As shown in Fig. 5.1, the first step in rumor classification is rumor detection. Identi- fying rumors and non-rumors has been usually formulated into a binary classification problem. 
Among the numerous approaches, there are three major categories: hand- crafted features-based approaches, propagation-based approaches, and neural network approaches [CGL + 18]. Traditional methods mostly utilize hand-crafted features extracted from textural and/or visual contents of rumors. Having applied these features to describe the distribution of rumors, classifiers are trained to detect rumors [CMP11, KCJ + 13]. The approaches based on the structure of social network use message propagation infor- mation and evaluate the credibility of the network [GZH12], but ignore the textual features of rumors. Social bot detection and tracking built on social network structure and user information can be utilized to detect bot-generated rumors. Recent deep neural network (DNN)-based methods extract and learn features automatically and achieve significantly high accuracies on rumor detection [CLYZ18]. Generative models and adversarial training techniques have also been used to improve the performance of rumor detectors [MGW19]. After a rumor is identified, all the related posts or sentences dis- 114 Figure 5.1: Rumor classification system consists of four components: rumor detection, rumor tracking, rumor stance classification, and rumor veracity classification. cussing this rumor should be clustered together for later processing, and other unrelated posts should be filtered out. This rumor tracking task can be formulated into a binary classification problem, which classifies posts as related or unrelated to a rumor. Unlike other popular components in rumor classification system, research in rumor tracking has been scarce. The state-of-the-art work uses tweet latent vector to overcome the limited length of a tweet [HD15]. Once the rumor tracking component clusters posts related to a rumor, the stance classification component labels each individual post by its orientation toward rumor’s veracity. For example, a post or reply can be labeled as support, deny, comment, or query [GBD + 18]. Usually rumor stance classification can be realized as a two to four-way classification problem. Recurrent neural networks (RNNs) with long short-term memory (LSTM) [HS97] cells have been used to predict stance in social media conversations. The authors in [KC19] proposed to use convolution units in Tree LSTMs to realize a four-way rumor stance classification. Variational autoencoders (V AEs) [KW13] have been used to boost the performance of stance classification. The authors in [SWSF19] 115 utilize LSTM-based V AEs to capture the hidden meanings of rumors containing both text and images. The final component in rumor classification is veracity classification, which deter- mines the truth value of a rumor, i.e., a rumor can be true, false, or unverified. Some works have limited the veracity classification to binary classification, i.e., a rumor can either be true or false [ZLLY19]. The initiated research in this direction does not tackle the veracity of rumors directly, but rather their credibility perceptions [CMP11]. Later works in this area dealing with veracity classification take advantage of temporal features, i.e., how rumors spread over time, and linguistic features. More recently, LSTM-based DNNs are frequently used to do veracity classification [KLZ18, KC19]. Similarly, fake news detection is often formulated into a classification problem [OQW18]. Without establishing a verified database, rumor veracity classification and fake news classification perform a similar task. 
Instead of tackling each component of the rumor classification system individually, multi-task classifiers have been proposed to accomplish two or more functions. The Tree LSTM models proposed in [KC19] perform stance classification and rumor detection jointly and propagate the useful stance signal up the tree for follow-up rumor detection. DNNs are trained jointly in [MGW18] to unify the stance classification, rumor detection, and veracity classification tasks. Rumor detection and veracity classification are sometimes executed together, since they can be formulated as a four-way classification problem, as in [MGW18, HD15]: a post can be labeled as a non-rumor, true rumor, false rumor, or unverified rumor. The authors of [KLZ18] proposed an LSTM-based multi-task learning approach that allows joint training of the veracity classification and the auxiliary tasks, rumor detection and stance classification. Previous works in the scientific literature have accomplished one or a few tasks in rumor classification, but none of them provides a complete, high-performance rumor classification system that accounts for all four components. In this work, we propose VRoC to realize all four tasks.

Figure 5.2: VRoC: The proposed VAE-aided multi-task rumor classification system. The top half illustrates the VAE structure, and the bottom half shows the four components of the rumor classification system. IN and OUT represent the input layer and output layer, respectively. Numbers in parentheses indicate the dropout rates. Note that the generated text could differ from the original text if the VAE is not perfect.

The contributions of this work are as follows:
• We propose VRoC, a tweet-level, text-based novel rumor classification system based on variational autoencoders. VRoC realizes all four tasks of the rumor classification system and achieves high performance compared to state-of-the-art works.
• We propose a co-train engine to jointly train the VAEs and the rumor classification components. This engine pushes the VAEs to tune their latent representations to be classifier-friendly. Therefore, higher accuracies are achieved compared to other VAE-based rumor detection approaches.
• We show that the proposed VRoC has the ability to classify previously seen or unseen rumors. Due to the generative nature of VAEs, VRoC outperforms baselines under both training policies introduced in Section 5.2.3.

5.2 VRoC Misinformation Classification Framework

In this section, we first present the problem statement and then describe the details of our VRoC framework. Fig. 5.2 illustrates VRoC, a VAE-aided multi-task rumor classifier that consists of a rumor detector, a rumor tracker, a stance classifier, and a veracity classifier. The VAE in this work is an LSTM-based variational autoencoder model that extracts latent representations of tweet-level text. For each rumor classification component, a VAE is jointly trained to extract meaningful latent representations that are not only information-rich, but also friendly and suitable for that component.

Problem Statement. Rumor classification consists of four components: rumor detection (D), rumor tracking (T), stance (S) classification, and veracity (V) classification, each of which can be formulated as a classification problem. Given a tweet-level text $x$, VRoC provides four outputs $\{y_D, y_T, y_S, y_V\}$, where $y_D \in \{Rumor, Non\text{-}rumor\}$, $y_T \in \{Related, Unrelated\}$, $y_S \in \{Support, Deny, Comment, Query\}$, and $y_V \in \{True, False, Unverified\}$.
The four components are realized independently in this work, but they could also be implemented as one general classifier that jointly produces the four types of outputs, or as two to three classifiers that each realize one or more tasks.

5.2.1 LSTM-based Variational Autoencoder

The VAE model in this work consists of an encoder and a decoder, both of which are LSTM networks because of the sequential nature of language. To extract latent representations from tweet-level data, we take advantage of VAEs and utilize the encoder-extracted latent representations to compress and represent the information in the textual data. The decoder decodes the latent representations generated by the encoder back into text and ensures that the latent representations are accurate and meaningful. The rumor classifier components are co-trained with the VAEs to help the encoders tune their outputs to be more classifier-friendly.

Let us consider a set of tweets $X = \{x_1, x_2, \ldots, x_N\}$, each of which is generated by some random process $p_\theta(x|z)$ with an unobserved variable $z$. This $z$ is generated by a prior distribution $p_\theta(z)$ that is hidden from us. In order to classify or utilize these tweets, we have to infer the marginal likelihood $p_\theta(x)$, which unfortunately is also unknown to us. In VAEs, $z$ is drawn from a normal distribution, and we attempt to find a function $p_\theta(x|z)$ that can map $z$ to $x$ by optimizing $\theta$, such that $x$ looks like what we have in the data $X$. From coding theory, $z$ is a latent representation and we can use a recognition function $q_\phi(z|x)$ as the encoder [KW13]. $q_\phi(z|x)$ takes a value $x$ and provides the distribution over $z$ that is likely to produce $x$. $q_\phi(z|x)$ is also an approximation to the intractable true posterior $p_\theta(z|x)$. The Kullback-Leibler (KL) divergence measures how close $p_\theta(z|x)$ and $q_\phi(z|x)$ are:

$D_{KL}[q_\phi(z|x) \| p_\theta(z|x)] = E_{z \sim q_\phi(z|x)}[\log q_\phi(z|x) - \log p_\theta(z|x)]$. (5.1)

After applying Bayes' theorem, the above equation reads:

$\log p_\theta(x) = D_{KL}[q_\phi(z|x) \| p_\theta(z|x)] + \mathcal{L}(\theta, \phi; x)$, (5.2)

$\mathcal{L}(\theta, \phi; x) = E_{z \sim q_\phi(z|x)}[-\log q_\phi(z|x) + \log p_\theta(x|z) + \log p_\theta(z)]$. (5.3)

We can already see the autoencoder in Eq. 5.3: $q_\phi$ encodes $x$ into $z$ and $p_\theta$ decodes $z$ back to $x$. Since the KL divergence is non-negative, $\mathcal{L}(\theta, \phi; x)$ is called the evidence lower bound (ELBO) of $\log p_\theta(x)$:

$\log p_\theta(x) \geq \mathcal{L}(\theta, \phi; x)$. (5.4)

In VAEs, $q_\phi(z|x)$ is a Gaussian $\mathcal{N}(z; \mu, \sigma^2 I)$, where $\mu$ and $\sigma$ are outputs of the encoder. The reparameterization trick [KW13] is used to express $z \sim q_\phi(z|x)$ as a random variable $z = g_\phi(\epsilon, x) = \mu + \sigma \odot \epsilon$, where the auxiliary variable $\epsilon$ is drawn from a standard normal distribution. A Monte Carlo estimate is then formed for the ELBO as follows:

$\mathcal{L}^{MC}(\theta, \phi; x) = \frac{1}{N} \sum_{n=1}^{N} \left[ -\log q_\phi(z_n|x) + \log p_\theta(x|z_n) + \log p_\theta(z_n) \right]$, (5.5)

where $z_n = g_\phi(\epsilon_n, x)$ and $\epsilon_n \sim \mathcal{N}(0, I)$.

Encoder. The encoder $q_\phi(z|x)$ is realized by an RNN with LSTM cells. The input to the encoder is a sequence of words $x = [w_1, w_2, \ldots, w_T]$, e.g., a tweet of length $T$. The hidden state $h_t$, $t \in [1, T]$, of the LSTM is updated as follows:

$h_t = o_t \odot \tanh(C_t)$, (5.6)
$o_t = \sigma(W_o[h_{t-1}, w_t] + b_o)$, (5.7)
$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$, (5.8)
$f_t = \sigma(W_f[h_{t-1}, w_t] + b_f)$, (5.9)
$i_t = \sigma(W_i[h_{t-1}, w_t] + b_i)$, (5.10)
$\tilde{C}_t = \tanh(W_C[h_{t-1}, w_t] + b_C)$. (5.11)

$\sigma$ is a sigmoid function. $o_t$, $f_t$, $i_t$ are the output, forget, and input gates, respectively. $C_t$ and $\tilde{C}_t$ are the new cell state and candidate cell state, respectively. $W_o$, $W_f$, $W_i$, $W_C$, $b_o$, $b_f$, $b_i$, $b_C$ are parameter matrices. The outputs of the encoder are divided into $\mu$ and $\sigma$.

Decoder. The decoder $p_\theta(x|z)$ is realized by an RNN with LSTM cells whose architecture matches that of the encoder. It takes a $z$ as input and outputs the probabilities of all words. The probabilities are then sampled and decoded into a sequence of words.
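A minimal sketch of the LSTM-based VAE described above is given below. PyTorch is used here only for illustration (the dissertation does not name a framework), and the class and variable names are assumptions; the 32-unit LSTM layers and 32-dimensional latent code match the encoder/decoder sizes reported later in Section 5.2.3.

```python
# Illustrative sketch of the tweet-level LSTM VAE (assumed names, PyTorch).
import torch
import torch.nn as nn

class TweetVAE(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=32, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)                 # EM
        self.enc_rnn = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                               batch_first=True)                         # LSTM32-LSTM32
        self.to_mu = nn.Linear(hidden_dim, latent_dim)                   # D32 -> mu
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)               # D32 -> log sigma^2
        self.dec_in = nn.Linear(latent_dim, hidden_dim)                  # IN
        self.dec_rnn = nn.LSTM(hidden_dim, hidden_dim, num_layers=2,
                               batch_first=True)                         # LSTM32-LSTM32
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)                # Dvs

    def encode(self, x):
        h, _ = self.enc_rnn(self.embed(x))
        h_last = h[:, -1, :]                       # last hidden state summarizes the tweet
        return self.to_mu(h_last), self.to_logvar(h_last)

    def reparameterize(self, mu, logvar):
        eps = torch.randn_like(mu)                 # epsilon ~ N(0, I)
        return mu + torch.exp(0.5 * logvar) * eps  # z = mu + sigma * eps (Eq. 5.5)

    def decode(self, z, seq_len):
        h = self.dec_in(z).unsqueeze(1).repeat(1, seq_len, 1)  # feed z at every step
        out, _ = self.dec_rnn(h)
        return self.to_vocab(out)                  # per-step vocabulary logits

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z, x.size(1)), mu, logvar

def elbo_loss(logits, x, mu, logvar):
    # reconstruction term plus KL(q(z|x) || N(0, I)), i.e., the negative ELBO
    recon = nn.functional.cross_entropy(logits.transpose(1, 2), x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```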
Training. In this work, we propose a co-train engine for the VAEs. Each component of the rumor classifier is trained jointly with a VAE, i.e., there are four sets of VAEs and components, and each set is trained individually. To co-train each set, we modify the loss function of the VAE by adding a classification penalty, which is the loss of the corresponding rumor classification component. By backpropagating the loss of each component to the corresponding VAE, the VAE learns to tune its parameters to provide classifier-friendly latent representations. In addition, this operation introduces randomness into the training process of the VAEs; hence, the robustness and generalization ability of VRoC are improved. The loss function of VRoC then reads:

$\mathcal{L}_{VRoC} = \mathcal{L}^{MC}(\theta, \phi; x) + \lambda_1 L_D + \lambda_2 L_T + \lambda_3 L_S + \lambda_4 L_V$, (5.12)

where $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$ are balancing parameters for the rumor classification components, and $L_D$, $L_T$, $L_S$, $L_V$ are the loss functions of the rumor detector, rumor tracker, stance classifier, and veracity classifier, respectively. Training the VAEs and the components jointly improves the performance of the rumor classifier compared to VAE-based rumor classifiers without our co-train engine. We confirm the improvement by conducting a set of comparison experiments between VRoC and VAE-LSTM in Section 5.2.3.

5.2.2 Rumor Classifier

We introduce all four components of VRoC's rumor classification system and their loss functions here. The loss functions are consistent with the free-energy minimization principle [Fri09].

Rumor Detector. Given a set of social media posts $X = \{x_1, x_2, \ldots, x_N\}$, the rumor detector determines which ones are rumors and which ones are not. The classified posts can then be tracked and verified in later steps. In this work, the rumor detection task is formulated as a binary classification problem. The rumor detector $D$ is realized by an RNN with Bidirectional-LSTM (BiLSTM) cells. It takes a $z$ as input and provides a probability $y_D$ of the corresponding post $x$ being a rumor. The loss function $L_D$ is defined as follows:

$L_D = -E_{y \sim Y_D}[y \log(y_D) + (1-y) \log(1-y_D)]$, (5.13)

where $Y_D \in \{Rumor, Non\text{-}rumor\}$ is the set of ground truth labels.

Rumor Tracker. The rumor tracker $T$ is activated once a rumor is identified. It takes a set of posts $X$ as input and determines whether each post is related or unrelated to the given rumor. The rumor tracking task is formulated as a classification problem in this work, and it is fulfilled by an RNN with BiLSTM cells. Given a $z$, $T$ generates a probability $y_T$ indicating whether the corresponding post is related to the identified rumor. The loss function $L_T$ reads:

$L_T = -E_{y \sim Y_T}[y \log(y_T) + (1-y) \log(1-y_T)]$, (5.14)

where $Y_T \in \{Related, Unrelated\}$.

Stance Classifier. Given a collection of rumors $R = \{r_1, r_2, \ldots, r_N\}$, where each rumor $r_n$ ($n \in [1, N]$) consists of a set of posts discussing it, the stance classifier $S$ determines whether each post is supporting, denying, commenting on, or querying the related rumor $r_n$. In VRoC, we utilize an RNN with BiLSTM cells to perform a four-way rumor stance classification. The loss function $L_S$ is defined as follows:

$L_S = -E_{y \sim Y_S}\left[\sum y \log(y_S)\right]$, (5.15)

where $Y_S \in \{Support, Deny, Comment, Query\}$.

Veracity Classifier. Once rumors are identified, their truth values are determined by the rumor veracity classifier $V$. Instead of being true or false, some rumors could in reality remain unverified for a period of time. Hence, in this work, we provide a three-way veracity classification using an RNN with BiLSTM cells. The loss function $L_V$ is defined as follows:

$L_V = -E_{y \sim Y_V}\left[\sum y \log(y_V)\right]$, (5.16)

where $Y_V \in \{True, False, Unverified\}$. This veracity classifier can also be used for fake news detection since it performs a similar task.
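The following is a minimal sketch, under stated assumptions, of one BiLSTM classification head operating on the latent code $z$ together with the combined co-train loss of Eq. 5.12. PyTorch, the treatment of $z$ as a length-one sequence, and the balancing weights are all illustrative assumptions; in VRoC each component is actually paired with its own VAE and trained individually, so this sketch only shows the form of the loss terms.

```python
# Illustrative BiLSTM heads over the VAE latent code and the Eq. 5.12 loss (assumed names).
import torch
import torch.nn as nn

class BiLSTMHead(nn.Module):
    def __init__(self, latent_dim=32, hidden_dim=32, num_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(latent_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, z):
        # the latent code is treated here as a length-one sequence for the BiLSTM
        h, _ = self.rnn(z.unsqueeze(1))
        return self.out(h[:, -1, :])              # class logits

heads = {
    "D": BiLSTMHead(num_classes=2),   # rumor / non-rumor          (Eq. 5.13)
    "T": BiLSTMHead(num_classes=2),   # related / unrelated        (Eq. 5.14)
    "S": BiLSTMHead(num_classes=4),   # support/deny/comment/query (Eq. 5.15)
    "V": BiLSTMHead(num_classes=3),   # true / false / unverified  (Eq. 5.16)
}
ce = nn.CrossEntropyLoss()            # cross-entropy, as in the component losses

def vroc_loss(neg_elbo, z, labels, lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Co-train loss of Eq. 5.12: negative ELBO plus weighted component losses."""
    task_losses = [ce(heads[k](z), labels[k]) for k in ("D", "T", "S", "V")]
    return neg_elbo + sum(l * t for l, t in zip(lambdas, task_losses))

# Tiny usage example with random latent codes and labels
z = torch.randn(4, 32)
labels = {k: torch.randint(0, heads[k].out.out_features, (4,)) for k in heads}
print(vroc_loss(torch.tensor(1.0), z, labels))
```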
5.2.3 Experiments

Datasets and Evaluation Metrics

We evaluate VRoC on the PHEME dataset [KLZ18]. PHEME has two versions. PHEME5 contains 5792 tweets related to five news events; 1972 of them are rumors and 3820 are non-rumors. PHEME9 is extended from PHEME5 and contains veracity labels. The RumourEval dataset [GBD+18] is derived from PHEME, and its stance labels are used for the rumor stance classification task. We use PHEME5 for the rumor detection and tracking tasks, PHEME5 with stance labels from RumourEval for the rumor stance classification task, and PHEME5 with veracity labels from PHEME9 for the rumor veracity classification task. Due to the class imbalance in the dataset, accuracy alone is not a sufficient evaluation metric. Hence, we use precision, recall, and macro-F1 scores [VA13, LSZ10] together with accuracy as evaluation metrics. Since the baselines are trained under different principles, we carry out two types of training to guarantee fairness. To compare with baselines trained under the leave-one-out (L) principle, i.e., train on four news events and test on the remaining one, we train our models under the same principle. The L principle evaluates the ability to generalize and constructs a test environment close to real-world scenarios. To compare with baselines that do not use this principle, we hold out 10% of the data for model tuning.

Models and Baselines

In VRoC, we co-train each set of VAE and classification component after pre-training the VAEs. To show the effectiveness of our designed co-train engine, we developed a baseline, VAE-LSTM, that trains the VAE first and then uses its latent representations as input to train the LSTM models. VAE-LSTM's architecture is the same as VRoC's, but without the proposed co-train engine. The VAEs used in VRoC and VAE-LSTM are pre-trained with the same procedure. The encoder architecture of the VAE is EM-LSTM32-LSTM32-D32, where EM, LSTM32, and D32 represent an embedding layer, an LSTM layer with 32 neurons, and a dense layer with 32 neurons, respectively. The decoder's architecture is IN-LSTM32-LSTM32-Dvs, where IN and Dvs represent the input layer and a dense layer with vocabulary-size neurons. We chose our VAE architecture by an expert-guided random search (RS) under the consideration of data size. Compared to other computationally expensive neural architecture search methods, such as evolutionary search and reinforcement learning-based approaches, RS is confirmed to achieve competitive performance [EMH18, RAHL19, LSV+17]. An early stopping strategy is used in training. The other state-of-the-art baselines used for comparison are described as follows.

Rumor detection. CRF [ZLP17] is a content and social feature-based rumor detector that, based on linear-chain conditional random fields, learns the dynamics of information during breaking news. GAN-GRU, GAN-BOW, and GAN-CNN [MGW19] are generative adversarial network-based rumor detectors: a generator is designed to create uncertainty, and the complicated sequences force the discriminator to learn stronger rumor representations. DataAUG [HGC19] is a contextual embedding model with data augmentation that exploits the semantic relations between labeled and unlabeled data. DMRF [NDCD19] formulates the rumor detection task as an inference problem in a Markov Random Field (MRF).
It unfolds the mean-field algorithm into a neural network and builds a deep MRF model. DTSL [DVCQ19] is a deep semi-supervised learning model containing three CNNs.

Rumor tracking. As mentioned before, rumor tracking has received little attention in the scientific literature. Hence, we train two baselines to compare with our proposed VRoC. CNN is a convolutional neural network-based baseline. LSTM is an RNN with LSTM cells. We input all five news events in PHEME to all models and perform a five-way classification, i.e., determine which of the five news events the input post is related to. The five news events are: Sydney siege (S), Germanwings (G), Ferguson (F), Charlie Hebdo (C), and Ottawa shooting (O). The L principle is not applicable under this setup.

Stance classification. LinearCRF and TreeCRF are two different models proposed in [ZKL+16] for capturing the sequential structure of conversational threads. They analyse tweets by mining the context from conversational threads. The authors in [MGW18] proposed a unified multi-task (rumor detection and stance classification) model based on multi-layer RNNs, where a shared layer and a task-specific layer are employed to accommodate different types of representations of the tasks and their corresponding parameters. MT-US and MT-ES are multi-task models with the uniform shared-layer and enhanced shared-layer architectures, respectively.

Veracity classification. MT-UA [LZS19] is a multi-task rumor detector that utilizes user credibility information and an attention mechanism. In [KLZ18], a joint multi-task deep learning model with hard parameter sharing is presented. It outperforms sequential models by combining multiple tasks together. We choose the two best performing models from [KLZ18]: MTL2, which combines veracity and stance classification, and MTL3, which combines detection, stance, and veracity classification. TreeLSTM [KC19] is a tree LSTM model that uses convolution and max-pooling units.

Experimental Results

The comparisons between VRoC and the baselines on all four rumor classification tasks are presented below. In all tables, * indicates the best results from the work that proposed the corresponding model. We finally describe the comparison results of VRoC and VAE-LSTM.

Rumor Detection. Comparison results between VRoC and the baselines on the rumor detection task are shown in Table 5.1. Compared to the baselines, VRoC achieves significantly higher accuracy levels and macro-F1 scores, and VAE-LSTM stands as the second best. VRoC outperforms CRF and GAN-GRU by 26.9% and 9.5% in terms of macro-F1. Under the L principle, on average, VRoC and VAE-LSTM outperform the baselines by 13.2% and 14.9% in terms of macro-F1. The VAE's compact latent representations contribute most to these results. Compared to VAE-LSTM, the proposed co-train engine boosts the performance of VRoC one step further.
Table 5.1: Comparison between VRoC and baselines on the rumor detection task.
Model | Accuracy | Precision | Recall | Macro-F1
CRF* | - | 0.667 | 0.556 | 0.607
GAN-BOW* | 0.781 | 0.782 | 0.781 | 0.781
GAN-CNN* | 0.736 | 0.738 | 0.736 | 0.736
GAN-GRU* | 0.688 | 0.689 | 0.688 | 0.687
VAE-LSTM | 0.833 | 0.834 | 0.834 | 0.833
VRoC | 0.876 | 0.877 | 0.876 | 0.876
DataAUG* (L) | 0.707 | 0.580 | 0.497 | 0.535
DMFN* (L) | 0.703 | 0.667 | 0.670 | 0.657
DTSL* (L) | - | 0.560 | 0.794 | 0.615
VAE-LSTM (L) | 0.736 | 0.746 | 0.736 | 0.735
VRoC (L) | 0.752 | 0.755 | 0.752 | 0.752

Rumor Tracking. Comparison results between VRoC and the baselines on the rumor tracking task are shown in Table 5.2. VRoC achieves the highest macro-F1, but it does not outperform the baselines by a large margin. In rumor tracking, raw data might be a preferable data source since they contain keywords and hashtags that can be used to directly track the topic. For long posts, rumor tracking can be effortlessly accomplished by retrieving the hashtags. Compared to models that use raw data, VRoC has advantages when dealing with imbalanced and unseen data since it can extract compact information from a few posts and generalize to a broader range of data.

Table 5.2: Comparison between VRoC and baselines on the rumor tracking task.
Model | Accuracy | Macro-F1 | F1 (S) | F1 (G) | F1 (F) | F1 (C) | F1 (O)
CNN | 0.570 | 0.574 | 0.589 | 0.534 | 0.777 | 0.571 | 0.400
LSTM | 0.585 | 0.585 | 0.607 | 0.352 | 0.804 | 0.711 | 0.453
VAE-LSTM | 0.609 | 0.612 | 0.666 | 0.515 | 0.641 | 0.694 | 0.545
VRoC | 0.644 | 0.632 | 0.611 | 0.520 | 0.640 | 0.685 | 0.703

Stance Classification. Comparison results between VRoC and the baselines on the rumor stance classification task are shown in Table 5.3. Stance classification is the hardest component in rumor classification. The reason is two-fold: four-way classification problems are naturally more difficult than binary classification problems, and imbalanced data exaggerate the difficulty. In addition, stance classification is not as easy as the tracking task: the stance classifier has to extract detailed patterns from a sentence and consider the whole sentence. In the tracking task, posts can be classified together by filtering out the obvious keywords, but stance relates to the semantic meaning of the whole sentence and hence is more complicated. The baselines in the comparison all suffer from an extremely low F1 score on the Deny class, which is caused by the small number of Deny instances in the dataset. Feature extraction from raw data results in severely imbalanced performance among different classes. VRoC's and VAE-LSTM's classifiers are trained on latent representations. Although data imbalance affects the performance of VRoC and VAE-LSTM, the impact is not as drastic as for the other baselines. Compared to state-of-the-art baselines that concentrate on stance classification, VRoC's stance classification component provides the highest macro-F1 scores under both training principles, and VAE-LSTM follows as the second best.

Table 5.3: Comparison between VRoC and baselines on the rumor stance classification task.
Model | Accuracy | Macro-F1 | F1 (Support) | F1 (Deny) | F1 (Comment) | F1 (Query)
MT-US* | - | 0.400 | 0.355 | 0.116 | 0.776 | 0.337
MT-ES* | - | 0.430 | 0.314 | 0.158 | 0.739 | 0.531
VAE-LSTM | 0.464 | 0.461 | 0.447 | 0.395 | 0.588 | 0.416
VRoC | 0.533 | 0.522 | 0.452 | 0.415 | 0.712 | 0.511
TreeCRF* (L) | - | 0.440 | 0.462 | 0.088 | 0.773 | 0.435
LinearCRF* (L) | - | 0.433 | 0.454 | 0.105 | 0.767 | 0.495
VAE-LSTM (L) | 0.467 | 0.459 | 0.463 | 0.423 | 0.567 | 0.384
VRoC (L) | 0.480 | 0.473 | 0.452 | 0.429 | 0.596 | 0.416

Veracity Classification. Comparison results between VRoC and the baselines on the rumor veracity classification task are shown in Table 5.4. VRoC and VAE-LSTM achieve the highest macro-F1 and accuracy compared to the baselines. On average, VRoC outperforms MT-UA and the other baselines under the L principle by 24.9% and 11.9% in terms of macro-F1, respectively. Rumor veracity classification under the L principle is particularly difficult since there is no previously established verified news database.
An unseen news item about a previously unobserved event has to be classified without any knowledge related to that event. For example, having observed some news about event A, you need to verify whether a news item from an unrelated event B is true or not. Without a verified news database, the abstracted textual patterns of sentences are utilized to classify unobserved news. Latent representations extracted by VAEs are hence very helpful for generalizing in veracity classification. The outperformance of VRoC and VAE-LSTM over the baselines under the L principle demonstrates the outstanding generalization ability of VAEs. In addition, VRoC beats VAE-LSTM in terms of both accuracy and macro-F1 in all cases. These results further demonstrate the power of the proposed co-train engine.

Table 5.4: Comparison between VRoC and baselines on the rumor veracity classification task. Lc denotes that the news related to Charlie Hebdo is left out while training under the L principle.
Model | Accuracy | Macro-F1 | F1 (True) | F1 (False) | F1 (Unverified)
MT-UA* | 0.483 | 0.418 | - | - | -
VAE-LSTM | 0.628 | 0.627 | 0.691 | 0.576 | 0.615
VRoC | 0.667 | 0.667 | 0.745 | 0.632 | 0.624
MTL2* (Lc) | 0.441 | 0.376 | - | - | -
MTL3* (Lc) | 0.492 | 0.396 | 0.681 | 0.232 | 0.351
VAE-LSTM (Lc) | 0.507 | 0.503 | 0.545 | 0.449 | 0.515
VRoC (Lc) | 0.531 | 0.513 | 0.564 | 0.434 | 0.480
TreeLSTM* (L) | 0.500 | 0.379 | 0.396 | 0.563 | 0.506
VAE-LSTM (L) | 0.494 | 0.475 | 0.429 | 0.472 | 0.523
VRoC (L) | 0.521 | 0.484 | 0.480 | 0.504 | 0.465

VRoC and VAE-LSTM

As shown in Tables 5.1-5.4, VRoC outperforms all the baselines in terms of macro-F1 and accuracy in all four rumor classification tasks, while VAE-LSTM stands as the second best. On average, across all four tasks, VRoC and VAE-LSTM surpass the baselines by 10.94% and 7.64% in terms of macro-F1 scores, respectively. These results confirm the ability of VAE-based rumor classifiers and demonstrate the advantage of latent representations over raw tweet data. VRoC achieves higher performance than VAE-LSTM because of the designed co-train engine: VRoC's latent representations are more suitable and friendly to the rumor classification components. Furthermore, the co-train engine introduces randomness into the training process of VRoC, hence the robustness and generalization abilities of VRoC are improved. Dimensionality reduction [AC15] is also realized by the VAEs to further aid generalization. Semantically and syntactically related samples are placed near each other in latent space. Although future news items are unobserved, they may contain semantic and/or syntactic features similar to those of observed news. Thus, VRoC can generalize, place the new latent representations close to the old ones, and classify them without the need to retrain. VRoC and VAE-LSTM are also efficient since all four tasks can be performed in parallel. Assuming the serial runtime is $T_s$ and the parallel runtime is $T_p = T_s/p$ (if all four tasks are parallelized and $p = 4$ is the number of processors used in parallel), the efficiency is $E = \frac{Speedup}{p} = \frac{T_s/T_p}{p} = 1$.

5.3 Deciphering the Laws of COVID-19 Misinformation Dynamics

With the SARS-CoV-2 pandemic outbreak, COVID-19-related rumors and the misinformation infodemic have become a serious problem. The rapid spread of COVID-19 misinformation provokes social panic, influences political battles [Don20], and some dangerous false/fake rumors, e.g., drinking bleach to cure coronavirus [Ton20], can cost lives.
Academic researchers and government authorities are working intensively to fight the COVID-19 infodemic by monitoring, identifying, analyzing, and blocking misinformation [Cau20, Sub20, CLNB21]. Commercial giants such as Facebook [Fac21], Twitter [Twi21], and Google [Mag20] are also investing effort in combating misinformation. A recent work introduces a formal mathematical model illustrating that the efforts of governments and social media platforms can dis-incentivize the spread of fake news by social media users [HV20]. Previous works [BM19, Ace19, QRRM11] analysing misinformation or fake news focus on the misinformation sentences themselves, mainly from a natural language processing perspective, i.e., analyzing sentiment, veracity, stance, etc. The social aspect of misinformation, such as how a piece of fake news spreads from one account/website to its vicinity, has also been studied from complex network and statistical perspectives [PPC20]. Related machine learning problems such as fake news classification and social bot detection are also well-studied [CNB20b]. However, understanding how (COVID-19) misinformation evolves and spreads by combining natural language processing techniques with complex network analysis has not been well studied.

Network science has extensively investigated the mathematical characteristics of social (including collaboration and coauthorship [RPP18]), technological (computer, World Wide Web [AB02]), biological, semantic [ST05], and financial networks [DMIC06], and has identified various connectivity mechanisms (e.g., linear and nonlinear preferential attachment [Yul25], node fitness models [BB01a], and weighted multifractal measure models [YB20, XB17]). Various examples exist of complex network techniques applied to natural language processing tasks, and the ways of network construction differ across applications. However, few of these consider full sentences, and to the best of our knowledge, we are among the first to analyze the resulting time-varying networks. For example, to convert a document into a complex network, words are represented as nodes, and relationships between words, such as semantic [SC02], syntactic [iCSK04], and/or co-occurrence [REIK17] relationships, are represented as edges. Another branch of research considers chunks of a document, i.e., sequences of words, as nodes and similarities between sequences as edges [FdANSQM+18]. The use of complex networks in combination with natural language processing is diverse, and most of the time the extracted complex network is time-invariant. In contrast, here we investigate the mathematical characteristics of time-varying COVID-19-related misinformation network representations (we analyze three such network constructions), where the nodes denote the misinformation sentences and the edges capture the sentence-to-sentence similarity. This allows us to decipher the statistical laws that characterize the COVID-19 misinformation phenomenon.

5.3.1 Misinformation Dynamics Analysis Methods

COVID-19 misinformation dataset. We analyzed a COVID-19 misinformation dataset containing misinformation collected from Twitter from March 1st to May 3rd [USC20]. The data were retrieved with the Twitter API service (https://developer.twitter.com/en/docs/tweets/filter-realtime/guides/basic-stream-parameters) using keywords related to COVID-19: 'Covid19', 'coronavirus', 'corona virus', '2019nCoV', 'CoronavirusOutbreak', and 'coronapocalypse', from the platform in real time. We used in total 60798 pieces of identified misinformation to build our misinformation networks.
There are 6 categories in the retrieved dataset: unreliable, clickbait, satire, bias, political, and conspiracy. More specifically, the unreliable category is defined to include false, questionable, rumorous, and unreliable news. The conspiracy category includes conspiracy theories and scientifically dubious news. Clickbait news is misleading, distorted, or exaggerated to attract attention. Political and biased news is in support of a particular point of view or political orientation. In addition, satire is included based on the consideration that satire has the potential to perpetuate misinformation [SSM+20, SQJ+19]. However, because the satire category is extremely small (only 29 tweets are labeled as satire), our analysis focuses only on the other five types. We note that in Fig. 5.4, the last category "misinformation" contains all the misinformation categories, including satire.

Power-law and log-normal analysis. The popularity of a misinformation sentence (tweet) is the number of times it appears on Twitter within the time span of the dataset, March 1st to May 3rd. The mean popularity is taken across all misinformation records. There are 5 major types of COVID-19 misinformation in the dataset: unreliable, political, bias, conspiracy, and clickbait. We analyze the power-law and log-normal fits with regard to all 5 types individually and as a whole. Using the powerlaw Python package [AB14], we perform a statistical hypothesis test analysis as follows: (i) We estimate the parameters, e.g., $x_{min}$ and $\alpha$, of the power-law model and the log-normal model via powerlaw. (ii) We calculate the goodness-of-fit between the mean popularity data and the power-law (and log-normal) model. Specifically, we inspect a plausibility value $p_{KS}$ in the goodness-of-fit test. If $p_{KS}$ is greater than 0.1, the power-law (or log-normal) is a plausible hypothesis for the data. (We describe how to calculate $p_{KS}$ in detail later.) (iii) We compare the two hypotheses, power-law and log-normal, via a likelihood ratio test provided by powerlaw, e.g., R, p = distribution_compare('lognormal', 'power_law'), where $R$ is the log-likelihood ratio between the two candidate distributions. If $R > 0$, the data are more likely to follow the first distribution; otherwise, the data are more likely to obey the second distribution. $p$ is the significance value for that direction. The favored distribution is a strong fit if $p > 0.05$.

We now describe the procedure of the goodness-of-fit test and the calculation strategy for $p_{KS}$ [CSN09]. Given a dataset and the hypothesized distribution, e.g., power-law, from which the data are drawn, we calculate $p_{KS}$ based on a measurement of the "distance" between the distribution of the empirical data and the hypothesized model. This distance $D$ is provided by powerlaw when we fit the data and is the Kolmogorov-Smirnov (KS) statistic. Next, we generate a large number of synthetic power-law datasets with the estimated parameters and fit the synthetic data using powerlaw. After fitting the synthetic data, we obtain the distance between the synthetic data and the hypothesized power-law model (fitted to the synthetic data), denoted $D_{syn}$. We repeat this procedure, generating 50 sets of synthetic data and 50 $D_{syn}$'s. Finally, we calculate $p_{KS}$ as the fraction of cases in which $D < D_{syn}$.
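The sketch below illustrates this procedure with the powerlaw package. The stand-in data, helper function, and the simplified tail-only goodness-of-fit loop (continuous power-law sampling above $x_{min}$) are assumptions made for illustration; only the powerlaw.Fit and distribution_compare calls are the ones referenced in the text.

```python
# Illustrative power-law / log-normal fitting and a simplified p_KS estimate.
import numpy as np
import powerlaw

rng = np.random.default_rng(0)
mean_popularity = rng.lognormal(mean=1.0, sigma=1.0, size=2000)   # stand-in data

# (i) estimate x_min, alpha (power law) and mu, sigma (log-normal)
fit = powerlaw.Fit(mean_popularity)
print("power law: x_min =", fit.xmin, "alpha =", fit.power_law.alpha)
print("log-normal: mu =", fit.lognormal.mu, "sigma =", fit.lognormal.sigma)

# (iii) likelihood ratio test: R > 0 favors the first candidate, strong if p > 0.05
R, p = fit.distribution_compare("lognormal", "power_law")
print("R =", R, "p =", p)

# (ii) goodness-of-fit (simplified): KS distance of the empirical tail vs. the
# fitted model, compared against 50 synthetic tails drawn from that model
def ks_distance(data, xmin, alpha):
    tail = np.sort(data[data >= xmin])
    emp_cdf = np.arange(1, len(tail) + 1) / len(tail)
    model_cdf = 1.0 - (tail / xmin) ** (1.0 - alpha)    # continuous power-law CDF
    return np.max(np.abs(emp_cdf - model_cdf))

xmin, alpha = fit.xmin, fit.power_law.alpha
D_emp = ks_distance(mean_popularity, xmin, alpha)
D_syn = []
for _ in range(50):
    u = rng.random(int(np.sum(mean_popularity >= xmin)))
    sample = xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))  # inverse-transform sampling
    syn_fit = powerlaw.Fit(sample)
    D_syn.append(ks_distance(sample, syn_fit.xmin, syn_fit.power_law.alpha))
p_KS = float(np.mean(D_emp < np.array(D_syn)))
print("p_KS =", p_KS)
```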
Misinformation network formulation I. We form networks of new misinformation with respect to time (days). We construct 60 COVID-19 misinformation networks based on the misinformation identified on Twitter from March 1st to May 3rd [USC20] (days with missing data are discarded). Nodes in a network are sentences, i.e., pieces of COVID-19 misinformation, appearing within one day on Twitter. Two nodes are connected if their sentences have a similarity of more than 70%. To calculate sentence similarity, we first encode the sentences with a sentence Transformer model [RG19] into vectors of length 786; then, we measure sentence similarity based on the cosine distance. Each misinformation network contains the new misinformation that appeared on Twitter in one day, and we analyze the network features of these networks to characterize how misinformation evolves over time. This distinct choice of network construction comes from the fact that we would like to see how public opinion and misinformation trends shift or evolve from a natural language processing point of view, while assuming the emergence of new collective intelligence phenomena. This way of network construction helps us predict the next popular misinformation phenomenon on social media and helps to combat it.

Figure 5.3: Statistics comparison between networks constructed by formulations I-III. (a-c) Node number comparison. (d-f) Edge number comparison. The cumulative misinformation networks (with and without the node deletion mechanism) are larger in scale and contain many more nodes and edges than the daily misinformation networks.

Misinformation network formulation II (without node deletion) with PA and node fitness analysis. We construct a misinformation network to capture the evolution of the misinformation that appeared on Twitter in March 2020. First, we form a base network containing the misinformation extracted on March 1st, with nodes representing misinformation sentences and links indicating the text similarity between two pieces of misinformation. We then add nodes and links to the base network based on the misinformation extracted from Twitter on a daily basis. Note that we connect nodes when the text similarity is more than 80%, to constrain the network size to a reasonable scale for later analysis. Having the misinformation network, we report the first evidence of the co-existence of the rich-get-richer and fit-get-richer effects in COVID-19 misinformation networks by using PAFit [PSS15], a general temporal model. To co-analyze both the PA and the node fitness of a complex network (under the assumption that both fit-get-richer and rich-get-richer exist), the probability of a node attracting a new connection is $P \propto A_k \eta$, where $A_k$ is the PA function and $\eta$ is the node fitness (both are time-invariant). The estimation of $A_k$ and $\eta$ is performed with the R package PAFit.

Misinformation network formulation III (with node deletion) with probability of attachment and node fitness analysis. Similarly to the network growth procedure without node deletion, we have a base network containing the misinformation collected on March 1st. Then, differently from the afore-mentioned monotonic growing process, we include a node deletion mechanism as follows: if a node (sentence) does not attract new connections in $\tau$ consecutive days, we remove this node from the network along with all its edges. The statistics comparison between the networks extracted using formulations I-III is shown in Fig. 5.3. We take $\tau = 3$ in this work, and links exist only when the text similarity of two nodes is over 80%, to keep the misinformation network at a reasonable size.
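A minimal sketch of the sentence-similarity graph construction shared by these formulations is shown below. The function and variable names, the sentence-transformers checkpoint, and the toy sentences are illustrative assumptions; the threshold follows the 70% (daily) and 80% (cumulative) values stated above.

```python
# Illustrative construction of one daily misinformation similarity network.
import networkx as nx
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

def build_daily_network(sentences, threshold=0.70):
    """Build one misinformation network from the sentences observed in one day."""
    model = SentenceTransformer("bert-base-nli-mean-tokens")  # assumed checkpoint
    emb = model.encode(sentences)                 # one embedding vector per tweet
    sim = cosine_similarity(emb)                  # pairwise cosine similarity

    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sim[i, j] > threshold:             # link sufficiently similar tweets
                g.add_edge(i, j)
    return g

# Example with toy sentences: centrality measures of the resulting daily network
g = build_daily_network(["5g towers spread the virus",
                         "the virus is spread by 5g towers",
                         "garlic cures covid"])
print(nx.degree_centrality(g))
print(nx.closeness_centrality(g))
```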
We keep track of this misinformation network from March 1st to May 3rd and estimate the probability of attachment and the node fitness. The general temporal model, PAFit, used to measure the misinformation network without node deletion, assumes the co-existence of fit-get-richer and rich-get-richer effects based on a time-invariant PA function and node fitness. However, it may not be applicable to the misinformation network with node deletion. Therefore, we estimate the probability of attachment of each node every day as $k_i / \sum_j k_j$, following the Barabasi-Albert model, where $i$ is the target node and $j$ runs over all other nodes in the network. Node fitness represents how attractive a node is in the network, and it can be estimated via the growth exponent [KSR08]. Following Kong et al.'s work [KSR08], assume the cumulative degree of a node $i$ at time $t$ is $k(i, t)$; its logarithm reads $\log k(i,t) = \beta_i \log t + B$, where the growth exponent $\beta_i$ depends linearly on the node fitness $\eta_i$ (through model constants) and $B$ is a time-invariant offset. From this relation, the node fitness and the growth exponent are related by a linear transformation; hence, the slope of $\log k(i,t)$ versus $\log t$ gives an estimate of the node fitness value.

Deep learning-based misinformation network measures prediction. We utilize both deep learning and natural language processing techniques to enable fast prediction of network measures. Our DNN takes the daily misinformation networks from day 0 to day $t-1$ as training data and predicts which misinformation in day(s) $t$ will end up as central nodes. The input to our DNN is misinformation sentence embeddings, i.e., BERT embeddings of length 786. The output of our DNN is binary, where 0 and 1 indicate that a tweet (i.e., a node in a misinformation network) has low or high centrality, respectively. Our training data are obtained as follows. With the 60 misinformation networks, we calculate the centralities via traditional complex network analysis, take the nodes with the top 100 centrality measures and label them as 1, and label all other nodes as 0. Hence, the training data are misinformation sentences with binary labels. With this way of labeling, the training data end up with imbalanced classes; therefore, we up-sample the minority class to balance the data prior to training. After data balancing, we train a DNN with 3 hidden layers to perform binary classification, i.e., to classify whether a misinformation sentence is "important" or not. The architecture of our DNN is IN(786)-FC(32)-Dropout(0.5)-FC(32)-Dropout(0.5)-FC(32)-Dropout(0.5)-OUT(2), where IN, FC, Dropout, and OUT represent the input layer, a fully-connected layer, a dropout layer, and the output layer, respectively, and the number in parentheses indicates the number of neurons or the dropout rate. The fully-connected layers all use ReLU as the activation function, and the output layer uses softmax. We utilize early stopping during training to prevent overfitting.
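A minimal sketch of this centrality predictor is given below. PyTorch and the names are illustrative assumptions (the dissertation does not specify a framework); the layer sizes and dropout rates follow the IN(786)-FC(32)-Dropout(0.5)-FC(32)-Dropout(0.5)-FC(32)-Dropout(0.5)-OUT(2) architecture described above.

```python
# Illustrative sketch of the DNN centrality predictor (assumed names, PyTorch).
import torch
import torch.nn as nn

class CentralityPredictor(nn.Module):
    def __init__(self, embed_dim=786, hidden=32, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden, 2),                  # OUT(2): low / high centrality
        )

    def forward(self, x):
        # softmax is folded into the cross-entropy loss during training;
        # apply it explicitly at inference time to obtain class probabilities
        return self.net(x)

model = CentralityPredictor()
embeddings = torch.randn(8, 786)                   # a batch of BERT tweet embeddings
labels = torch.randint(0, 2, (8,))                 # 1 = top-100 central node, else 0
loss = nn.CrossEntropyLoss()(model(embeddings), labels)
loss.backward()
```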
Network centrality. Network centrality measures the importance of a node within a complex network. In this study, the network centralities are calculated with the NetworkX Python package [HSS08]. The degree-, closeness-, and second order-centrality are introduced as follows.

Degree centrality [Bor05] of a node $n$ is defined as

$Degree(n) = \deg(n)$, (5.17)

where $\deg(n)$ is the number of edges connected to node $n$.

Closeness centrality [NBW06] of a node measures its average inverse distance to all other nodes and is a way of detecting nodes that are able to transport information across the network efficiently. The closeness centrality of a node $n$ can be defined as

$Closeness(n) = \frac{1}{\sum_u d(u,n)}$, (5.18)

where $d(u,n)$ is the distance between node $u$ and node $n$. Of note, $u \neq n$.

Second order centrality [KLMST11] is a random walk-based centrality which measures the robustness of the network. The centrality of a given node $n$ is the expectation of the standard deviation of the return times to node $n$ of a perpetual random walk on the graph $G$; the lower that deviation, the more central node $n$ is.

Figure 5.4: The fitted power-law model (red dashed line) and log-normal model (green dashed line) of the COVID-19 misinformation mean popularity. a-e, Models fitted for different types of misinformation. f, Models fitted for all COVID-19 misinformation. Log-normal is a plausible data-generating process of the misinformation mean popularity since the plausibility values $p_{KS}$ are greater than 0.1. Both the goodness-of-fit test and the likelihood ratio test indicate that, compared to power-law, log-normal is more plausible. (The detailed hypothesis test procedure is stated in the Methods section "Power-law and log-normal analysis".) The log-likelihood ratios ($R$) and significance values ($p$) between the two candidate distributions, log-normal and power-law, are (0.422, 0.429), (0.911, 0.289), (1.832, 0.245), (1.335, 0.352), (1.066, 0.369), and (0.565, 0.203) for unreliable, political, bias, conspiracy, clickbait, and all-type misinformation, respectively.

5.3.2 COVID-19 Misinformation Network Characterization

Statistical Laws Characterizing the COVID-19 Misinformation Phenomenon

Researchers have noticed for decades that many measured data retrieved from the biological and social sciences can be described by the log-normal distribution [Sun04, LSA01] and the power-law distribution [Cla18]. In this work, we estimate the log-normal and power-law models for the 5 types of COVID-19 misinformation [USC20]: unreliable, political, bias, conspiracy, and clickbait misinformation. (The data were retrieved from https://usc-melady.github.io/COVID-19-Tweet-Analysis/misinfo.html; detailed dataset information can be found in the Methods section "COVID-19 Misinformation Data".) We use a hypothesis test [CSN09, AB14] to
To further ensure that log-normal rather than power-law distribution is the plausible data generating process, we compare the log-normal distribution and power-law dis- tribution using an appropriately defined likelihood ratio test [CSN09]. The likelihood ratio test provides two values: R, the log-likelihood ratio between the two candidate distributions (log-normal and power-law in our case), andp, the significance value for the favored direction. If the empirical data is more likely to obey the first distribution, thenR is positive; otherwise,R is negative. The favored distribution is a strong fit if p > 0:05 [CSN09]. As we reported in Fig. 5.4, log-normal is the favored distribution since theR values are all positive andp in all likelihood ratio tests are much greater than 0:05. These findings could suggest that the popularity of COVID-19 misinformation could obey a multiplicative process and resembles to the generalized Lotka-V olterra 139 Figure 5.5: Misinformation network centrality measures. The mean value curves of the degree centrality (a), closeness centrality (b) and second order centrality (c) for misinformation networks of 60 days across five different misinformation categories: unreliable, clickbait, political, bias and conspiracy. (GLV) system [Sol99]. GLV systems are often used to model direct competition and trophic relationships between an arbitrary number of species, e.g., a predator-prey rela- tionship [MGM + 95]. In this potential misinformation GLV , all kinds of misinformation and individual misinformation generators, e.g., social bots, may be constantly created (and distinguished), and compete with other members in the system for attention. Misinformation Networks Optimize The Network Information Transfer Overtime To characterize misinformation on the semantic level, we construct misinformation networks where nodes and corresponding edges represent the sentences and their sentence similarity, respectively (see Methods section "Misinformation network formulation I" for network formulation details). The new misinformation captured in a day form a distinct misinformation network. In order to investigate the network information transfer characteristics associated with the dynamics of misinformation networks, we quantify 140 their degree-, closeness- and second order-centrality metrics [OAS10, KLMST11]. Due to the complex networks’ highly heterogeneous structure, some nodes can be recognized as more significant than others, and centrality measures how important a node is. For instance, in a social network, influencers, recognized as influential nodes with higher centrality, have a lot of followers and can easily propagate specific messages to other nodes. Therefore, calculating the centrality about networks sheds light on information transfer analysis in complex networks [YXB + 20]. There are various centrality measures in complex network literature. Degree centrality measures the number of links connected upon a target node and can be utilized to determine the throughput capacity or network localized transmission. The higher the degree centrality is, the higher the chance to receive the information transmitted over a network. Closeness centrality of a node quantifies the average length of the shortest path between the node and all other nodes in a graph and reflects the information transmission latency across a complex network. Thus, the higher the closeness centrality of a node is, the closer it is to other nodes. 
Second order centrality is a random walk-based betweenness centrality which measures the average time for revisiting a node when following a random walk on a complex network. The standard random walk process is defined by Newman [New05], where a single node has a probability of directing to a neighbor node (the probability is picked uniformly at random). The higher the second order centrality of a node, the more information paths pass through it and the less robust the network is to targeted attacks on this node (for details on the degree-, closeness-, and second order-centrality, see Methods section "Network centrality").

Fig. 5.5 (a) illustrates the mean degree centrality estimated from the 60 misinformation networks. Over the first 10 days, the degree centrality of the misinformation networks exhibits an increasing tendency towards higher values. It is known that a node achieves an increase in degree centrality by establishing new connections to its neighboring nodes. A high degree centrality means that a node can propagate the received information in an efficient way. Thus, the increasing trend in the first 10 days demonstrates that the misinformation networks tend to optimize their network topology to support higher information flow across the network over time. In addition, over the last 50 days, the degree centrality enters a relatively stable state, which means that after increasing the degree centrality, the misinformation networks try to maintain this high-speed spreading property.

Along the same lines, Fig. 5.5 (b) shows the mean closeness centrality of the 60 misinformation networks across the 5 misinformation categories. In the first 10 days, the mean closeness centrality of the misinformation networks increases. Higher closeness centrality means that the target node is closer to other nodes and that information sent by the target node can reach other nodes faster. Consequently, this result shows that the misinformation networks tend to optimize their topology to minimize the information transmission latency. In the last 50 days, the mean closeness centrality tends to stay stable, which indicates that the misinformation networks try to maintain superior transmission latency to keep the network in a high-speed transport state. It is worth noting that the degree- and closeness-centrality are two dual approaches for quantifying information transmission across a network and show a similar network performance optimization behavior over the period of our observation.

Fig. 5.5 (c) shows the second order centrality mean value curves for the 5 misinformation categories over 60 days. On social media, some people periodically delete old posts. If a post removed from the network has high second order centrality, the misinformation network has a higher chance of becoming disconnected. In the first 10 days in Fig. 5.5 (c), we observe that the second order centrality exhibits an irregular fluctuation behavior. Over the last 50 days, the second order centrality shows a saturating (slowing increase rate) trend, which means that the misinformation networks become less robust / unhealthy over time (a graph is robust / healthy if it is robust to multiple targeted / random attacks [AJB00]).
In addition, empirically, a robust graph has most of its elements lying close to each other, linked by many paths. We conclude that, since the misinformation networks tend to increase the second order centrality after the early irregular fluctuation, the topology of the misinformation networks becomes more vulnerable to targeted / random attacks over time. In conclusion, the study of the degree-, closeness-, and second order-centrality shows that the COVID-19 misinformation networks tend to optimize information transmission while their topology becomes more fragile over time.

Co-existence of Fit Get Richer and Rich Get Richer Phenomena

Various mechanisms have been studied to explain complex network evolution, such as preferential attachment (PA), node fitness theory, and node birth / death processes. The mapping of network growth onto a Bose-Einstein condensation phenomenon elucidated three phases in network evolution [BB01b]: a scale-free phase, where all nodes in a network have the same fitness; a fit-get-richer phase, where nodes with high fitness / quality are more likely to draw new connections; and a Bose-Einstein condensate phase, where the node with the largest fitness becomes a clear winner and takes all new links. In contrast to the fit-get-richer effect, PA is a rich-get-richer mechanism where nodes with more connections are likely to win more new connections in the link competition [PSS16]. The General Temporal model [PSS15] unifies both PA and node fitness by defining the probability of a node with degree $k$ getting new links as $P \propto A_k \eta$, where $A_k$ is the PA function and $\eta$ is the node fitness (both $A_k$ and $\eta$ are time-invariant).

To show the first evidence of how the misinformation network evolves under the assumption of co-existing PA and node fitness mechanisms, we construct the misinformation network by taking the first day's sentences and building a base network where nodes are sentences and links represent sentence similarity. We then grow the network by adding nodes and links as a function of time (days). New misinformation sentences appearing on the next day connect to nodes in the base network if the sentence similarity is over 80%. We analyze the PA function $A_k$ and the node fitness $\eta$ with PAFit [PSS15], and the results are shown in Fig. 5.6 (detailed network growth and analysis methods are described in Methods section "Misinformation network formulation II").

Figure 5.6: Node fitness and PA function (shown as in-plots) co-estimation for nodes in the misinformation networks at days 10 (a), 20 (b), and 30 (c). The heavy tails of the fitness distributions show the existence of the fit-get-richer phenomenon. The estimated PA functions imply that the higher the node degree, the more competitive the node is in terms of link competition; this also shows a rich-get-richer phenomenon.

The estimated node fitnesses at days 10, 20, and 30 are all centered around 1, while some nodes have slightly higher node fitness. The heavy-tailed distributions serve as a clear sign of the fit-get-richer effect. From Figs. 5.6 (a)-(c), the maximum node fitness increases, which suggests that the fit-get-richer effect becomes stronger, while the overall effect remains low (the maximum value remains in a medium fitness range [1, 2]). By inspecting the estimated PA functions in the in-plots shown in Fig. 5.6,
we make the following two observations: (i) the estimated PA functions $A_k$ at days 10, 20, and 30 are all increasing with respect to the degree $k$, which suggests the existence of a rich-get-richer effect; and (ii) the estimated PA functions exhibit a log-linear trend, which matches the widely used log-linear assumption on the PA function, $A_k = k^\alpha$, as in the extended BA model [KRL00].

Node Fitness and Probability of Attachment

While complex network evolution is heavily studied in the literature, popular models are mostly based on the assumptions that the PA function and node fitness are time-invariant, and the underlying network evolution either does not consider a node deletion mechanism or includes only random node deletion [KSR08]. However, these assumptions are not fully applicable to rapidly changing misinformation networks, where people switch attention from one hot topic to another quickly. Under this consideration, we form our misinformation network with a realistic node deletion mechanism, i.e., when a node's degree does not change for three days, we delete the node (and its attached links) from the network, under the assumption that this sentence / topic is no longer active or popular at that time. (Detailed network formulation and analysis methods are described in Methods section "Misinformation network formulation III".) Based on this network formulation method, we estimate the probability of attachment of nodes, the node fitness, and the network centrality measures, and the results are shown in Fig. 5.7. First, we estimate the probability of attachment of node $i$ as $k_i / \sum_j k_j$, as in the BA model [BA99], where $k$ represents the degree of a node. We find that, unlike other real-world networks such as the WWW and citation networks, the attachment probability in misinformation networks is linear with respect to the node's degree, as shown in Fig. 5.7 (a), instead of log-linear. This implies that the misinformation network evolution with node deletion exhibits a weak rich-get-richer phenomenon. In addition, we observe that the misinformation network evolution experiences expansion-shrink cycles. The slope of the probability of attachment first decreases from day 0 to day 50, then increases at day 55 back to values similar to those at day 0. This sudden change between days 50 and 55 shows that the network experiences
Furthermore, we hypothesize that topic / attention shifting on social media causes this destruction and reconstruction, and we provide evidence in the following discussion and in Fig. 5.8. Next, we investigate the node fitness and observe that at day 51, all sentences from day 0 (used for base network) were deleted except one. It is worth noting that we construct our network based on sentence similarity, if some nodes (sentences) in the network do no relate to the newly emerged misinformation, then these nodes are removed from the network. Equivalently speaking, topics or misinformation that are not gaining attention do not fit anymore and will be removed from the misinformation network. If a large-scale node deletion appeared, the misinformation network may experience a destruction phase as we observed previously. Node fitness measures the node quality and reflects the node competitiveness [BB01b], therefore, we inspect all sentences that survived by day 50 (denoted asS [0;50] ) and disappear on day 51, and estimate their fitness by tracking the node’s accumulated degree over time k(t). The slope of k(t) in a log-log scale, i.e., growth exponent, is therefore equivalent to node fitness [KSR08] (detailed estimation 146 Figure 5.7: Probability of attachment (a), network evolution (b), node fitness estimations (c-e), and network centrality measures (f-h) of misinformation networks with deletion mechanism. strategy of node fitness is given in Methods section "Misinformation network formulation III"). Fig. 5.7 (c-e) present the estimated node fitness values and distributions ofS [0;50] . We find that before a node deletion, it’s fitness is increasing until two days before deletion. This observation is distinct from the fit get richer phenomenon usually assumed in traditional complex networks without node deletions. When rich get richer and fit get richer are both in play, nodes with high fitness have higher probability to attract new links and become rich nodes; then, rich nodes reinforce the effect. However, in our network, the rich get richer effect becomes weaker in a cycle, while fitness grows higher. Then, 147 Figure 5.8: Top words in S [0;50] (a) and S [49;55] (b). Top words are the words with highest TF-IDF scores and represent the most influential and important words / topics in sentences. We take n = 1 and 2 for n-grams, therefore, in the results there exist unigrams and bigrams. We find that sentences that survived from day 0 to 50 mainly discussed political-related topics, and sentences that survived from day 49 to 55 are more discussing non-political- or medical-related topics. Specifically, 75:31% inS [0;50] , and 41:50% inS [49;55] are discussing political-related topics, respectively. This shift of topic may in fact is the reason of cyclical behavior of probability of attachment we discovered in Fig. 5.7 (a). suddenly the nodes with high fitness are deleted at the end of one network evolution cycle. This distinct misinformation network behavior cannot be explained by conventional network models, and may be caused by the rapid attention shift characteristic of social media as we discussed. We further investigate several hot topics in order to validate the above-mentioned hypothesis on misinformation network evolution. We manually inspect the sentences that survived in the network from day 0 to day 50, noted asS [0;50] . 
Since $S_{[0,50]}$ are all deleted from the network on day 51, and considering our misinformation network construction method, no new links will attach to $S_{[0,50]}$. We also study the sentences collected on day 49 that managed to survive to day 55, denoted $S_{[49,55]}$. We compare the top words, i.e., the words with the highest TF-IDF (term frequency-inverse document frequency [RU11]) scores, in $S_{[0,50]}$ and $S_{[49,55]}$, as shown in Fig. 5.8. We find that political words appear most often in the top 30 words of $S_{[0,50]}$ (e.g., "Trump", "president", and "white house"-related phrases appear about 9 times). In comparison, no political words appear in the top 30 words of $S_{[49,55]}$. This evidence shows that public attention shifted from political to non-political content over the period we investigated. Furthermore, we find that "New York"-related phrases, together with medical words such as "deaths", "killed", "patients", and "cases", account for the majority of the top 30 words of $S_{[49,55]}$, which matches the COVID-19 outbreak in New York from April 18th to 24th. These examples confirm that our network construction method with the node deletion mechanism captures the actual misinformation network evolution; moreover, our formulation is more sensitive to rapid network changes, e.g., shifts in public attention, than classical PA or fitness-based network models.

5.3.3 COVID-19 Misinformation Network Prediction

Complex network measures such as centrality are calculated from the network topology, i.e., the adjacency matrix. These metrics are computationally expensive and require the adjacency matrix to be known. In this work we construct misinformation networks whose nodes are sentences, so we hypothesize that network measures can be predicted by deep learning and natural language processing (NLP) methods that take only the sentences as input (without the adjacency matrix). We verify that complex network metrics of misinformation networks can indeed be predicted with high accuracy using deep neural networks (DNNs). In our centrality prediction, to predict day(s) t's central nodes, we take the daily misinformation networks from day 0 up to day t-1 as training data, and the trained DNN outputs predictions for day(s) t. Specifically, we perform 1-day, 5-day, and 10-day prediction; for example, in 5-day prediction, to predict the central nodes from day 20 to day 25 we take the daily misinformation networks from day 0 to day 19 as training data.

Figure 5.9: Centrality predictions of daily misinformation networks. To predict day(s) t's central nodes with respect to degree, closeness, or betweenness centrality, daily misinformation networks prior to day(s) t are used as training data. Instead of the network topology, e.g., the adjacency matrix, we take the natural language embedding of each misinformation sentence as the input to the DNN. The DNN then predicts which nodes are going to be the top 100 central nodes in day(s) t. E.g., in 1-day prediction, we predict day 10's top nodes based on day 0-9's information; in 5-day prediction, we predict days 5-10's top nodes based on day 0-5's information.

In addition, instead of feeding the DNN with the adjacency matrix, we utilize techniques from natural language processing and feed the DNN with sentence embeddings, specifically BERT embeddings (the training setup can be found in Methods section "Deep learning-based misinformation network measures prediction").
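A minimal sketch of this embedding-based prediction pipeline is given below. It assumes the sentence-transformers package as a convenient stand-in for the BERT encoder and a small scikit-learn MLP as a stand-in for the DNN described in the Methods section; `train_sentences`, `train_is_top100`, and `new_sentences` are hypothetical inputs.

```python
# Minimal sketch: predict whether a misinformation sentence will be among the
# top-100 central nodes, using only its text embedding (no adjacency matrix).
# sentence-transformers and the sklearn MLP are stand-ins for the BERT encoder
# and the DNN used in the thesis; the model name below is an illustrative choice.
from sentence_transformers import SentenceTransformer
from sklearn.neural_network import MLPClassifier

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def fit_centrality_predictor(train_sentences, train_is_top100):
    X = encoder.encode(train_sentences)              # (n_sentences, embed_dim)
    clf = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300)
    clf.fit(X, train_is_top100)                      # 1 = became a top-100 central node
    return clf

def rank_candidates(clf, new_sentences):
    X = encoder.encode(new_sentences)
    return clf.predict_proba(X)[:, 1]                # probability of becoming central
```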
Throughout this process, there is no need to run time-consuming network analysis algorithms, and the DNNs predict the network measures with high accuracy in real time. Specifically, in 1-day prediction, our DNN predicts degree centrality, closeness centrality, and betweenness centrality with accuracies of 94.19 ± 0.03%, 94.25 ± 0.04%, and 83.25 ± 0.22%, and AURoCs of 98.54 ± 0.01%, 98.47 ± 0.01%, and 90.44 ± 0.21%, respectively, as shown in Fig. 5.9. The key contributor to this result is the natural language features extracted from the rumors; we believe the trained neural network learns the syntactic and semantic patterns of influential tweets. This finding enables real-time misinformation combat through online identification of fast-spreading, influential misinformation. Combined with an online misinformation detection mechanism, the proposed deep learning-based network measure predictor can quickly identify, filter, and delete significant sentences before they actually become central nodes, thereby breaking the misinformation network before it forms.

5.3.4 Discussion and Future Directions

Researchers have long noticed that many measured data from biological and social systems can be described by a log-normal distribution [Sun04, LSA01], e.g., survival time after cancer diagnosis [HOR87], number of words per sentence for writers [Wil40], and the size of holes in cocoa cake [LSA01]. During the last decade, power-law distributions have often been observed as well, e.g., the size of wars [Cla18]. In this work, we analyze the trends of COVID-19 misinformation spread and discover that the log-normal distribution cannot be rejected as a plausible model for the misinformation mean popularity data. With COVID-19 credible and unreliable information pushed to smart devices in real time across the globe, true and false information constantly compete for finite collective attention in this COVID-19 infodemic. The log-normal distribution may suggest that the popularity of COVID-19 misinformation obeys a multiplicative process and resembles the GLV, where individual misinformation items and generators are born and die, competing for attention. These insights could contribute to future analyses of misinformation collective attention and to GLV-related modeling and control.

To further decipher the laws behind COVID-19 misinformation network evolution, we construct misinformation networks through three different strategies and analyze these networks from the information flow and network evolution perspectives. We first construct misinformation networks whose nodes are misinformation sentences collected within one day and whose links represent sentence similarity. Each network represents the misinformation that appeared on Twitter within one day, and inspecting these networks shows how COVID-19 misinformation evolves. Analysis of the network centrality measures, i.e., degree centrality, closeness centrality, and second-order centrality, shows that misinformation first learns to optimize information transfer to be more efficient and then maintains the fast-spreading property. Compared to true information, misinformation / fake news has been found to spread differently even in its early stages [ZZS+20], and false news is more novel and spreads faster than true news [VRA18]. In our work, we showed from the information transfer perspective that misinformation indeed evolves to be fast-spreading.
However, the optimization of information transfer comes at a price: it sacrifices network robustness. In addition, centrality measures reveal the important nodes, i.e., the influential misinformation in the network, which lays the foundation for misinformation control. Currently, estimating centrality measures is not only time-consuming but also requires complete information about the topology (e.g., the adjacency matrix) of the misinformation network. Therefore, with sentences as nodes and sentence similarity as links, we propose a deep learning method that predicts the centrality measures from the sentences alone. With this method, we can predict the next hot topics or central nodes without knowing the whole network topology [XB19], which allows us to filter potentially influential misinformation before it actually becomes a center of attention. Researchers have expressed the concern that blocking information on COVID-19 can in turn fuel the spread of misinformation [Lar20]. This can be true from the perspective of the network information flow revealed in this work: if the wrong nodes, e.g., certain nodes with low centrality, are deleted from the network, the information transfer of the whole network might even be enhanced; in contrast, if we correctly remove certain central nodes, the information transfer of the network is severely impaired.

After inspecting the misinformation evolution in terms of information transfer, we construct a second series of misinformation networks, where we grow the network from a base network. We first form the base network from day 0's misinformation, then add day 1's misinformation to the base network, and so on, growing the network over time (days). Using well-established network science methods, namely PA and node fitness theory, we find the co-existence of the fit-get-richer and rich-get-richer phenomena. However, this way of constructing the network may not capture the fast-changing nature of the misinformation network because it lacks a node deletion mechanism. Without node deletion, the time dimension is ignored and a hot topic remains popular regardless of time, which contradicts the fact that public attention shifts. To reveal the true nature of the rapidly evolving misinformation network, we propose a third construction that grows the topology from the base network while including a node deletion mechanism to reflect that the public may forget things. The estimated node fitness and probability of attachment show a distinct evolution behavior that is not fully explainable by the fit-get-richer and rich-get-richer effects: some nodes with high fitness do not attract new connections and are deleted from the network. This distinct behavior may be caused by the public attention shifting from one hot topic to another. We also find that, unlike the time-invariant assumptions of node fitness and PA theories, our misinformation network changes rapidly, and so do the node fitness and the probability of attachment. These observations reveal the need for new theoretical network models that can characterize and explain fast-evolving real-world networks such as misinformation networks, and that link collective attention with network science. Furthermore, rumors are likely to fluctuate over time [KCJ+13]. With the node deletion mechanism, we observe evolution cycles of the misinformation network.
The size of the misinformation network expands and shrinks cyclically. We also find that the misinformation topics that survive in the network are mostly politically motivated. Our study provides a comprehensive, data-driven and data-science-based validation and invalidation of the hypotheses enunciated in [Fle20]. Determining potential targets of fake news in advance is an important aspect of misinformation control [VQSZ19]. We hope that by identifying long-lasting, influential, fast-spreading misinformation in the network, we can help fight COVID-19 and future infodemics by breaking the network before increasingly popular nodes become influential, and by controlling the misinformation through inserting combating information into the network. Lastly, through three different network formulations, we expose limitations of currently widely used network models; researchers should investigate alternative novel strategies for properly constructing networks from observations.

We believe the findings and analysis of this work are a valuable addition to the current state of the art in fake news / rumor / misinformation research and to interdisciplinary studies of natural language processing and complex networks. In the future, we foresee that our findings and models can also contribute to technologies that help combat misinformation, identify fake news at early stages, and forecast how popular fake news evolves, spreads, and shifts public opinion during important events. For instance, as exemplified by our deep learning framework, these results can be exploited to develop a technology for detecting and forecasting popular opinions that are likely to become dominant or influential in a fast-evolving heterogeneous network. Building on our network analysis, to make a fake news network destroy itself, we can insert real news into the network at the lowest cost and remove significantly influential false news nodes from the network with the highest reward. Aside from these positives, however, more problems need solutions and more questions require answers. In reality, given that we can only partially observe the misinformation or information network, how can we design accurate and efficient algorithms that reconstruct the whole network from partial, scarce, uncertain, and noisy observations? Given strategies to monitor accounts and information flow, how do we control the network to make users aware of something? How can we control multiple interacting opinion dynamics that evolve rapidly? In our future work, we will make an effort to tackle these issues, in particular the misinformation-combating problem, and study the interaction between true and false information.

Chapter 6
Gene Mutation Detection and Rumor Detection

Sequential synthetic data generation, such as generating text and images that are indistinguishable to human eyes, has become an important problem in the era of artificial intelligence (AI). Generative models, e.g., variational autoencoders (VAEs) [KW13], generative adversarial networks (GANs) [GPAM+14], and recurrent neural networks (RNNs) with long short-term memory (LSTM) cells [HS97], have shown outstanding power in generating fake faces, fake videos, etc. GANs, among the most powerful generative models, estimate generative models via an adversarial training process [GPAM+14]. Real-valued generative models have found applications in image and video generation. However, GANs face challenges when the goal is to generate sequences of discrete tokens such as text [YZWY17].
Given the discrete nature of text, backpropagating the gradient from the discriminator to the generator becomes infeasible [FGD18]. Training instability is a common problem of GANs, especially in discrete settings. Unlike image generation, the autoregressive property of text generation exacerbates the instability, since the loss from the discriminator is only observed after a sentence has been generated completely [FGD18]. To remedy some of these difficulties, several AI approaches (e.g., Gumbel-softmax [JGP16, KHL16], Wasserstein GAN (WGAN) [ACB17, GAA+17], and reinforcement learning (RL) [Wil92, YZWY17]) have been proposed [Goo16, GSW+20]. The Gumbel-softmax uses a reparameterization trick and a softmax calculation to approximate the non-differentiable sampling operation on the generator output, which allows the model to backpropagate while producing discrete outputs that approximate the actual values; GANs with Gumbel-softmax took the first step toward generating very short sequences over a small vocabulary [KHL16]. The WGAN approach for discrete data directly computes the Wasserstein divergence between the discrete labels and the generator's output as the discriminator criterion; as a result, WGAN models can learn the distribution of discrete data and produce some short character-level sentences [GAA+17]. Even so, generating sentences at the natural-language level remains non-trivial. GANs with RL can skirt the information loss in the data conversion by modeling text generation as a sequence of decisions and updating the generator with a reward function; compared to the previous methods, RL helps GANs generate interpretable text closer to natural language [YZWY17]. In addition to these developments in GAN-based text generation, discriminator-oriented GAN-style approaches have been proposed for detection and classification applications such as rumor detection [MGW19]. Unlike the original generator-oriented GANs, discriminator-oriented GAN-based models take real data (instead of noise) as the input to the generator, so the detector can gain performance through the adversarial training technique. Current adversarial training strategies improve robustness against adversarial samples; however, these methods reduce accuracy when the input samples are clean [RXY+19].

Social media and micro-blogging have become increasingly popular [YOYB11, VP17]. The convenient and fast-spreading nature of micro-blogs fosters the emergence of various rumors. Social media rumors / misinformation / fake news are major concerns, especially during major events such as the global rise of COVID-19 and the U.S. presidential election. Some coronavirus rumors have later been verified to be very dangerous false claims, e.g., "those that suggest drinking bleach cures the illness" [Ton20], which has pushed social media companies such as Facebook to look for more effective solutions [Zoe20]. Commercial giants, government authorities, and academic researchers invest great effort in diminishing the negative impacts of rumors [CGL+18]. Rumor detection has been formulated as a binary classification problem by many researchers. Traditional approaches based on hand-crafted features describe the distribution of rumors [CMP11, KCJ+13]; however, such early works require heavy feature-engineering effort.
More recently, with the rise of deep learning architectures, deep neural network (DNN)-based methods extract and learn features automatically and achieve significantly higher accuracy on rumor detection [CLYZ18]. Generative models have also been used to improve the performance of rumor detectors [MGW19] and to build multi-task rumor classification systems [CNB20b] that realize rumor detection, tracking, and stance and veracity classification. However, binary rumor classification lacks explanation: it provides only a binary result without expressing which parts of a sentence could be the source of the problem. The majority of the literature defines a rumor as "an item of circulating information whose veracity status is yet to be verified at the time of posting" [ZAB+18]. Providing explanations is challenging for detectors working on unverified rumors. Fake news, by comparison, is better studied, as it has a verified veracity; attribute information, linguistic features, and the semantic meaning of the post [YPM+19] and/or its comments [SCW+19] have been used to provide explainability for fake news detection. A verified news database has to be established for these approaches, whereas in rumor detection a decision sometimes has to be made based on the current tweet alone. Text-level models with explanations that recognize rumors through feature extraction should therefore be developed to tackle this problem.

Gene classification and mutation detection usually work with textual gene data and also relate to a broad range of real-world applications, such as gene-disease association, genetic disorder prediction, gene expression classification, and gene selection. Machine learning-based classification and prediction tools have been proposed to solve these genetic problems [CM15, SLL+19]. Since a gene sequence is essentially textual in nature, we can process it as text. Gene mutation detection looks for abnormal places in a gene sequence [20220]; hence, we propose to solve this problem with a natural language processing-based mutation detection model. Comparing a gene sequence with a natural language sequence, we observe that mutations in genetic sequences represent abnormalities that make the sequence fit poorly, from a biological perspective, relative to other sequences. The detection and classification of known genetic mutations has been explored effectively in the literature, while detecting and classifying unknown mutations remains a harder problem in both the medical and machine learning fields. To detect and classify unknown mutations, we propose a GAN-based framework that maintains a high performance level when facing unseen data with unknown patterns while providing explainability.

In this work, we propose a GAN-based layered framework that overcomes the aforementioned technical difficulties and provides solutions to (i) text-level rumor detection with explanations and (ii) gene classification with mutation detection. In terms of the technical difficulties, our model keeps the ability to discriminate between real-world and generated samples, and also serves as a discriminator-oriented model that classifies real-world and generated fake samples. We overcome the infeasibility of propagating the gradient from the discriminator back to the generator by applying a policy gradient, similar to SeqGAN [YZWY17], to train the layered generators.
In contrast to prior works, we adopt an RL approach in our framework because combining the GAN and RL algorithmic strategies lets the framework produce textual representations of higher quality and balance the adversarial training. The training instability of long sentence generation is reduced by selectively replacing words in the sentence, and we address the per-time-step error attribution difficulty through word-level generation and evaluation. We show that our model outperforms the baselines in addressing the degraded-accuracy problem when only clean samples are available. Our GAN-based framework consists of a layered generative model and a layered discriminative model. The generative model generates high-quality sequences by first intelligently selecting the items to be replaced and then choosing appropriate substitutes for those items. The discriminative model provides classification output with explanations; for example, in the gene classification and mutation detection task, the generative model mutates part of a genetic sequence, and the discriminative model then classifies this sequence and indicates which genes are mutated. The major contributions of this work are: (i) This work delivers explainable rumor detection without requiring a verified news database. Rumors can stay unverified for a long period of time because of information insufficiency, and explaining which words in a sentence are problematic is critical especially when there is no verified fact. When a verified news database is available, our model can realize fake news detection with minor modifications. (ii) Our model is a powerful textual mutation detection framework. We demonstrate its mutation detection power by applying it to the task of gene classification with mutation detection; the model accurately identifies the tokens in gene sequences that form the mutation and classifies mutated gene sequences with high precision. (iii) The layered structure of our proposed model avoids function mixture and boosts performance. We have verified that using one layer to realize two functions, in either the generative or the discriminative model, causes function mixture and hurts performance.

Figure 6.1: Our proposed framework. The generative model (shown on the left-hand side) consists of two generators, $G_{where}$ and $G_{replace}$. The discriminative model (shown on the right-hand side) consists of two discriminators, namely $D_{explain}$ for explainability and $D_{classify}$ for classification.

6.1 GAN-based Classifier

6.1.1 Generative Adversarial Network Architecture

Figure 6.1 shows the architecture of our proposed model. We have a layered generative model, which takes an input sequence and modifies it intelligently, followed by a layered discriminative model that performs classification and mutation detection. In the rumor detection task, the generators must intelligently construct a rumor that looks like a non-rumor in order to deceive the discriminators. Since a good lie usually contains some truth, we choose to replace only some of the tokens in the sequence and keep the majority. In our framework, intelligently replacing tokens in a sequence takes two steps: i) determine where (i.e., which words / items in the sequence) to replace, and ii) choose what substitutes to use. $G_{where}$ and $G_{replace}$ are designed to realize these two steps. Having constructed the strong generators, the discriminators are designed to provide a defense mechanism. Through adversarial training, the generators and discriminators grow stronger together, in terms of generating and detecting rumors, respectively. In the rumor detection task, given a sentence, two questions need to be answered: i) is it a rumor or a non-rumor, and ii) if it is a rumor, which parts are problematic. $D_{classify}$ and $D_{explain}$ are designed to answer these two questions. We found that realizing two functions in one layer, in either the discriminative or the generative model, hurts performance; hence, our framework embeds a layered structure. Detailed descriptions of the generative and discriminative models follow.

Generative Model

The sequence generation task is performed by the generative model, $G_{where}$ and $G_{replace}$. Given a human-generated real-world input sequence $\mathbf{x} = (x_1, x_2, \ldots, x_M)$ of length $M$, such as a tweet-level sentence containing $M$ words, $G_{where}$ outputs a probability vector $\mathbf{p} = (p_1, p_2, \ldots, p_M)$ indicating the probability of each item $x_i$ ($i \in [1, M]$) being replaced. $\mathbf{p}$ is applied to the input $\mathbf{x}$ to construct a new sequence $\mathbf{x}^{where}$ with some items replaced by blanks; for example, if $x_2$ becomes a blank, then $\mathbf{x}^{where} = (x_1, \_, \ldots, x_M)$:
$$\mathbf{x}^{where} = f(\mathbf{p}) \odot \mathbf{x} = f(G_{where}(\mathbf{x})) \odot \mathbf{x},$$
where $f(\cdot)$ binarizes its input based on a hyperparameter $N_{replace}$, which determines the percentage of words to be replaced in a sentence. The operator $\odot$ works as follows: if $a = 1$, then $a \odot b = b$; if $a = 0$, then $a \odot b = \_$ (a blank). $G_{replace}$ is an encoder-decoder model with an attention mechanism; it takes $\mathbf{x}^{where}$, fills in the blanks, and outputs a sequence $\mathbf{x}^{replace} = (x_1, x^{replace}_2, \ldots, x_M)$. The generative model is not fully differentiable because of the sampling operations in $G_{where}$ and $G_{replace}$; to train it, we adopt policy gradients [SMSM00] from RL to address the non-differentiability.
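As a concrete illustration of the masking step $\mathbf{x}^{where} = f(G_{where}(\mathbf{x})) \odot \mathbf{x}$, the sketch below assumes the per-token replacement probabilities from $G_{where}$ are already available; the top-$N_{replace}$ thresholding rule and the `BLANK` placeholder are illustrative assumptions rather than the exact implementation.

```python
# Minimal sketch of the replacement mask: keep most tokens, blank the
# N_replace fraction with the highest replacement probability. The thresholding
# rule and the BLANK placeholder are assumptions for illustration only.
import numpy as np

BLANK = "_"

def mask_tokens(tokens, p, n_replace=0.10):
    """tokens: list of words; p: per-token replacement probability (same length)."""
    k = max(1, int(round(n_replace * len(tokens))))
    replace_idx = set(np.argsort(p)[-k:])     # the k most replaceable positions
    return [BLANK if i in replace_idx else tok for i, tok in enumerate(tokens)]

# G_replace would then fill the blanks left by this mask.
print(mask_tokens("police report a third shooting downtown".split(),
                  [0.1, 0.2, 0.05, 0.7, 0.9, 0.3], n_replace=0.3))
```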
$G_{replace}$ GRU-based encoder. Gated Recurrent Units (GRUs) [CVMG+14] are improved versions of standard RNNs that use update gates and reset gates to mitigate the vanishing gradient problem of a standard RNN. In our GRU-based encoder, the hidden state $h_t$ is computed as $GRU_{encoder}(x^{where}_t, h_{t-1})$:
$$h_t = (1 - z_t) \circ h_{t-1} + z_t \circ h'_t,$$
$$z_t = \sigma(W^{enc}_z x^{where}_t + U^{enc}_z h_{t-1} + b^{enc}_z),$$
$$h'_t = \tanh(W^{enc}_h x^{where}_t + U^{enc}_h (r_t \circ h_{t-1}) + b^{enc}_h),$$
$$r_t = \sigma(W^{enc}_r x^{where}_t + U^{enc}_r h_{t-1} + b^{enc}_r),$$
where $W^{enc}_z$, $W^{enc}_h$, $W^{enc}_r$, $U^{enc}_z$, $U^{enc}_h$, $U^{enc}_r$, $b^{enc}_z$, $b^{enc}_h$, and $b^{enc}_r$ are encoder weight matrices and bias vectors, $\sigma(\cdot)$ is the sigmoid function, and $\circ$ denotes element-wise multiplication. $z$, $r$, and $h'$ are the update gate, reset gate, and candidate activation of the encoder, respectively.

$G_{replace}$ GRU-based decoder with attention mechanism. Our encoder-decoder $G_{replace}$ uses the attention mechanism [BCB14] to automatically search for the parts of a sentence that are relevant to predicting the target word. The context vector $c_t$ summarizes the information of all words in a sentence; it depends on the annotations $h_t$ and is computed as their weighted sum:
$$c_t = \sum_{j=1}^{M} \alpha_{tj} h_j, \qquad \alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{M} \exp(e_{tk})}, \qquad e_{tj} = a(s_{t-1}, h_j),$$
where $e_{tj}$ scores how well the inputs around position $j$ and the output at position $t$ match, and the alignment model $a$ is a neural network trained jointly with all other components. The GRU decoder takes the previous target $y_{t-1}$ and the context vector $c_t$ as input and computes the hidden state $s_t$ as $GRU_{decoder}(y_{t-1}, s_{t-1}, c_t)$:
$$s_t = (1 - z'_t) \circ s_{t-1} + z'_t \circ s'_t,$$
$$z'_t = \sigma(W^{dec}_z y_{t-1} + U^{dec}_z s_{t-1} + C^{dec}_z c_t),$$
$$s'_t = \tanh(W^{dec}_s y_{t-1} + U^{dec}_s (r'_t \circ s_{t-1}) + C^{dec}_s c_t),$$
$$r'_t = \sigma(W^{dec}_r y_{t-1} + U^{dec}_r s_{t-1} + C^{dec}_r c_t),$$
where $W^{dec}_z$, $W^{dec}_s$, $W^{dec}_r$, $U^{dec}_z$, $U^{dec}_s$, $U^{dec}_r$, $C^{dec}_z$, $C^{dec}_s$, and $C^{dec}_r$ are decoder weight matrices, and $z'$, $r'$, and $s'$ are the update gate, reset gate, and candidate activation of the decoder, respectively. Through this attention-equipped encoder-decoder, $G_{replace}$ intelligently replaces items in sequences and outputs adversarial samples.

Discriminative Model

The generated adversarial samples $\mathbf{x}^{replace}$, combined with the original data $\mathbf{x}$, are fed to the discriminative model. $D_{classify}$ and $D_{explain}$ are trained independently. We note that the two discriminators could also depend on each other, but we leave exploring this dependency to future work. $D_{classify}$ outputs a probability for rumor detection, and $D_{explain}$ outputs the probability of each word in the sentence being problematic. The explainability of our model is gained through adversarial training: we first insert adversarial items into the sequence and then train $D_{explain}$ to detect them. Through this technique, our model can classify not only data with existing patterns but also sequences with unseen patterns that may appear in the future. Adversarial training improves the robustness and generalization ability of our model.

6.1.2 Model Training Techniques and Hyperparameter Configuration

In the rumor detection task, a sequence $\mathbf{x}$ has a true label $Y$ that is either rumor ($R$) or non-rumor ($N$). After the sequence $\mathbf{x}$ has been manipulated, the output of the generative model, $\mathbf{x}^{replace}$, is labeled $R$ since it is machine generated. The objective of the $\theta$-parameterized generative model is to mislead the $\phi$-parameterized discriminators. In our case, $D_{classify}(\mathbf{x}^{replace})$ indicates how likely the generated $\mathbf{x}^{replace}$ is to be classified as $N$, and $D_{explain}(\mathbf{x}^{replace})$ indicates how accurately $D_{explain}$ detects the replaced words in a sequence. Error attribution per time step is achieved naturally, since $D_{explain}$ evaluates each token and therefore provides a fine-grained supervision signal to the generators. Consider, for example, a case where the generative model produces a sequence that deceives the discriminative model: the reward signal from $D_{explain}$ indicates how much the position of each replaced word contributes to the erroneous result, while the reward signal from $D_{classify}$ indicates how well the combination of position and replacement word deceived the discriminator. The generative model is updated by applying a policy gradient to the rewards received from the discriminative model. The rumor generation problem is defined as follows. Given a sequence $\mathbf{x}$, $G_{where}$ produces a sequence of probabilities $\mathbf{p}$ indicating the replacement probability of each token in $\mathbf{x}$, and $G_{replace}$ takes $\mathbf{x}^{where}$ and produces a new sequence $\mathbf{x}^{replace}$. This newly generated $\mathbf{x}^{replace}$ is a partially replaced sentence labeled $R$. At time step $t$, the state $\mathbf{s}$ consists of $\mathbf{s}^{where} = (p_1, \ldots, p_{t-1})$ and $\mathbf{s}^{replace} = (x^{replace}_1, \ldots, x^{replace}_{t-1})$. The policy models $G_{where}(p_t \mid p_1, \ldots, p_{t-1})$ and $G_{replace}(x^{replace}_t \mid x^{replace}_1, \ldots, x^{replace}_{t-1})$ are stochastic.
Following RL, $G_{where}$'s objective is to maximize its expected long-term reward:
$$J_{where}(\theta) = \mathbb{E}[R_T \mid s_0, \theta] = \sum_{p_1} G_{where}(p_1 \mid s^{where}_0)\, Q^{G}_{D}(s^{replace}_0, \mathbf{a}),$$
$$Q^{G}_{D}(s^{replace}_0, \mathbf{a}) = D_{explain}(s^{replace}_0) + D_{classify}(s^{replace}_0),$$
where $Q^{G}_{D}(s_0, \mathbf{a})$ is the accumulative reward obtained by following policy $G$ from state $s_0 = \{s^{where}_0, s^{replace}_0\}$, $D_{explain}(s^{replace})$ indicates how much the generative model misleads $D_{explain}$, $\mathbf{a}$ is an action set that contains the outputs of both $G_{where}$ and $G_{replace}$, and $R_T$ is the reward for a complete sequence. Similarly to $G_{where}$, $G_{replace}$ maximizes its expected long-term reward:
$$J_{replace}(\theta) = \sum_{x^{replace}_1} G_{replace}(x^{replace}_1 \mid s^{replace}_0)\, Q^{G}_{D}(s^{replace}_0, \mathbf{a}).$$
We apply the reward value provided by the discriminative model to the generative model only after the sequence has been produced. The reason is that $G_{replace}$ does not need to generate every word in the sequence but only fills the few blanks created by $G_{where}$; under this assumption, the long-term reward is approximated by the reward gained after the whole sequence is finished. The discriminative model and the generative model are updated alternately. The loss function of the discriminative model is defined as
$$L_D = \lambda^{explain}_D L^{explain}_D + \lambda^{classify}_D L^{classify}_D,$$
$$L^{explain}_D = -\mathbb{E}_{y \sim f(G_{where}(\mathbf{x}))}\big[y \log\big(D_{explain}(\mathbf{x}^{replace})\big) + (1-y)\log\big(1 - D_{explain}(\mathbf{x}^{replace})\big)\big],$$
$$L^{classify}_D = -\mathbb{E}_{y \sim Y}\big[y \log\big(D_{classify}(\mathbf{x}^{replace})\big) + (1-y)\log\big(1 - D_{classify}(\mathbf{x}^{replace})\big)\big],$$
where $\lambda^{explain}_D$ and $\lambda^{classify}_D$ are balancing parameters. We adopt the GAN training procedure to train the networks: in each epoch, the generative model and the discriminative model are updated alternately. Over-training either the discriminators or the generators may cause training to fail, so the hyperparameters G_STEP and D_STEP are introduced to balance the training: in each epoch, the generators are trained G_STEP times and the discriminators are then trained D_STEP times.
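To make the policy-gradient step concrete, the toy example below shows how a reward that is only observed after a discrete sampling step can still produce gradients; it is a minimal sketch assuming PyTorch, with a five-position categorical policy standing in for $G_{where}$ and a fixed scalar standing in for the $D_{explain} + D_{classify}$ reward.

```python
# Minimal sketch of the REINFORCE-style update used to bypass the
# non-differentiable sampling in the generators. All numbers are toy values.
import torch

logits = torch.zeros(5, requires_grad=True)          # toy policy over 5 replacement positions
probs = torch.softmax(logits, dim=0)
dist = torch.distributions.Categorical(probs)

action = dist.sample()                               # discrete choice (non-differentiable)
reward = torch.tensor(1.3)                           # stands in for D_explain + D_classify

loss = -dist.log_prob(action) * reward               # maximize the expected reward
loss.backward()                                      # gradients still reach the policy parameters
print(logits.grad)
```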
Experiment Setup

Model setup. Our model contains a layered generative model, $G_{where}$ and $G_{replace}$, and a layered discriminative model, $D_{explain}$ and $D_{classify}$. $G_{where}$ consists of an RNN with two Bidirectional LSTM (BiLSTM) layers and one dense layer and determines which items in a sequence are to be replaced; in all experiments it has the architecture EM-32-32-16-OUT, where EM and OUT denote the embedding and output layers, respectively. $G_{replace}$ is an encoder-decoder with an attention mechanism and is responsible for generating the substitutes for the items selected by $G_{where}$; the encoder has two GRU layers, the decoder has two GRU layers equipped with attention, and the architecture used in all experiments is EM-64-64-EM-64-64-OUT. $D_{explain}$ has the same architecture as $G_{where}$ and is responsible for determining which items are problematic. $D_{classify}$, used for classification, is a CNN with two convolutional layers followed by a dense layer; its architecture in all experiments is EM-32-64-16-OUT.

Data collection and augmentation. We evaluate our proposed model on a benchmark Twitter rumor detection dataset, PHEME [KLZ18], a misinformation / fake news dataset, FMG [SSSB20], and a splice site benchmark dataset, NN269 [REKH97]. PHEME has two versions: PHEMEv5 contains 5792 tweets related to five news events, of which 1972 are rumors and 3820 are non-rumors; PHEMEv9 contains 6411 tweets related to nine news events, of which 2388 are rumors and 4023 are non-rumors. The maximum sequence length in PHEME is 40, and we pad shorter sequences with zeros. The FMG dataset contains two parts, corresponding to a veracity detection task (i.e., determining whether a news item is true / false) and a provenance classification task (i.e., determining whether a news item is real / fake). In the veracity task, input sequences labeled true are verified facts and false sequences are verified false statements; in the provenance task, sequences labeled real are purely human-written sentences while the fake data are generated with pre-trained language models. We set the maximum sequence length to 1024 and 512 for the true / false and real / fake tasks, respectively, pad shorter sequences with zeros, and apply post-truncation to text longer than the length threshold. The NN269 dataset contains 13231 splice site sequences: 6985 acceptor splice site sequences of length 90 nucleotides, of which 5643 are positive ($AP$) and 1324 are negative ($AN$), and 6246 donor splice site sequences of length 15 nucleotides, of which 4922 are positive ($DP$) and 1324 are negative ($DN$).

In the rumor detection task, we generate a rumor / fake news / misinformation dataset denoted PHEME' (and FMG') and augment the original dataset with the generated sequences. Similarly, for the gene classification with mutation detection task, the proposed model generates a dataset NN269' by replacing nine characters in acceptor sequences and three characters in donor sequences. We label the generated sequences by the following rules. In the rumor detection with explanation task: i) generated rumors based on PHEME are labeled $R$ (rumor) in 2-class classification (corresponding to the results in Table 6.2); ii) in 4-class classification (corresponding to the results in Table 6.6 and Table 6.7), if the input sequence $\mathbf{x}$ has label $Y$, then the output sequence $\mathbf{x}^{replace}$ is labeled $Y'$, indicating that $\mathbf{x}^{replace}$ comes from class $Y$ but has been modified. In the gene mutation detection task, we follow labeling rule ii), and the final classification output of our model is two-fold: $AP$, $AN$ for acceptor, or $DP$, $DN$ for donor. We merge the generated classes $AP'$, $AN'$ and $DP'$, $DN'$ with the original classes to evaluate the noise-resistance ability of our model: given a sequence, our model classifies it into one of the known classes, although the sequence could be either clean or modified.

Baseline description. In the rumor detection task, we compare our model with six popular rumor detectors: an RNN with LSTM cells, a CNN, VAE-LSTM, VAE-CNN, a contextual embedding model with data augmentation (DATA-AUG) [HGC19], and a GAN-based rumor detector (GAN-GRU) [MGW19]. One strength of the proposed model is that, under the layered structure we designed, the choice of component architectures affects the results only mildly. To showcase this, we generate a variation of the proposed model by replacing $G_{replace}$ with an LSTM model as an additional baseline; it uses an LSTM-based encoder-decoder with architecture EM-32-32-EM-32-32-OUT as $G_{replace}$. Our model generates a set of sequences by substituting around 10% of the items in the original sequences. We pre-train $D_{classify}$ with the number of replacements fixed at $N_{replace} = 10\%$; we then freeze $D_{classify}$ and train the other three models. During training, we lower $N_{replace}$ from 50% to 10% to guarantee data balancing for $D_{explain}$ and better results in terms of explanations.
All the embedding layers in the generators and discriminators are initialized with 50-dimensional GloVe [PSM14] pre-trained vectors, and early stopping is applied during training. The generated data in the rumor task are labeled $R$, and we denote this dataset PHEME'. For fairness and consistency, we train the baselines LSTM, CNN, VAE-LSTM, and VAE-CNN on PHEME and on PHEME+PHEME'. For all baselines, we use two evaluation principles: (i) hold out 10% of the data for model tuning, i.e., split the dataset into a training set (90% of the data) and a test set (10%); (ii) the leave-one-out (L) principle, i.e., leave one news event out for testing and train the models on the other events. For example, for PHEMEv5, which contains 5 events, we pick 1 event as the test set and use the remaining 4 events as the training set (similarly, for PHEMEv9 with 9 events, we pick 1 event for testing and use the remaining 8 for training). Moreover, with the L principle, we apply 5-fold and 9-fold cross-validation for PHEMEv5 and PHEMEv9, respectively, and compute the final results as the weighted average of all results. The L principle constructs a realistic testing scenario and evaluates the rumor detection ability on new, out-of-domain data. For DATA-AUG and GAN-GRU, we import the best results reported in their papers. In the gene classification with mutation detection task, we compare our models with five models: an RNN with LSTM cells, a CNN, VAE-LSTM, VAE-CNN, and the state-of-the-art splice site predictor EFFECT [KDJS14]. The first four baselines are trained on NN269+NN269' and tested on both NN269+NN269' and the clean data NN269; we import EFFECT's results from the original work [KDJS14]. The architectures of the baselines LSTM, CNN, VAE-LSTM, and VAE-CNN used in both tasks are defined in Table 6.1. VAE-LSTM and VAE-CNN use a pre-trained VAE followed by an LSTM and a CNN with the architectures defined in Table 6.1. The VAE we pre-trained is an LSTM-based encoder-decoder: the encoder, with architecture EM-32-32-32-OUT, has two LSTM layers followed by a dense layer, and the decoder has the architecture IN-32-32-OUT, where IN stands for the input layer.

Table 6.1: Baselines' architecture setup in both the rumor detection task and the gene classification with mutation detection task.
Model | Gene mutation detection task | Rumor detection task
LSTM | EM-LSTM(64)-LSTM(32)-DENSE(8)-OUT | EM-LSTM(32)-LSTM(16)-DENSE(8)-OUT
CNN | EM-CONV(32)-CONV(64)-DENSE(16)-OUT | EM-CONV(32)-CONV(16)-DENSE(8)-OUT
VAE-LSTM | LSTM(32)-LSTM(32)-DENSE(8)-OUT | LSTM(32)-LSTM(16)-DENSE(8)-OUT
VAE-CNN | CONV(32)-CONV(64)-DENSE(16)-OUT | CONV(32)-CONV(64)-DENSE(16)-OUT

Table 6.2: Macro-f1 and accuracy comparison between our model and the baselines on the rumor detection task. The models are trained on PHEME and tested on both the original dataset PHEME and the augmented dataset PHEME+PHEME'. * indicates the best result from the work that proposed the corresponding model. L indicates that the model is evaluated under the leave-one-out principle. Variance results of the cross-validations are shown in Table 6.3.
Model | PHEMEv5 | PHEME+PHEME'v5 | PHEMEv9 | PHEME+PHEME'v9
(each cell: Macro-f1 / Accuracy)
LSTM | 0.6425 / 0.6542 | 0.4344 / 0.4345 | 0.6261 / 0.6269 | 0.4999 / 0.5283
CNN | 0.6608 / 0.6660 | 0.4792 / 0.4833 | 0.6549 / 0.6552 | 0.5028 / 0.5253
VAE-LSTM | 0.4677 / 0.5625 | 0.2582 / 0.2871 | 0.4454 / 0.4589 | 0.4231 / 0.4326
VAE-CNN | 0.5605 / 0.5605 | 0.4655 / 0.4902 | 0.3859 / 0.5029 | 0.2513 / 0.2778
GAN-GRU* | 0.7810 / 0.7810 | - | - | -
Our model-LSTM | 0.8242 / 0.8242 | 0.6259 / 0.6302 | 0.8066 / 0.8066 | 0.6884 / 0.7044
Our model-CNN | 0.8475 / 0.8476 | 0.6524 / 0.6777 | 0.8084 / 0.8095 | 0.7620 / 0.8085
LSTM (L) | 0.5693 / 0.6030 | 0.5260 / 0.5710 | 0.5217 / 0.5827 | 0.5055 / 0.5906
CNN (L) | 0.5994 / 0.6406 | 0.5324 / 0.5779 | 0.5477 / 0.6035 | 0.5051 / 0.5769
VAE-LSTM (L) | 0.3655 / 0.3996 | 0.3620 / 0.3959 | 0.4256 / 0.5367 | 0.4284 / 0.5397
VAE-CNN (L) | 0.4807 / 0.5190 | 0.4816 / 0.5214 | 0.4316 / 0.4597 | 0.4314 / 0.4587
DATA-AUG (L)* | 0.5350 / 0.7070 | - | - | -
Our model-LSTM (L) | 0.6666 / 0.6866 | 0.5703 / 0.6411 | 0.5972 / 0.6272 | 0.5922 / 0.6371
Our model-CNN (L) | 0.6745 / 0.7016 | 0.6126 / 0.6342 | 0.6207 / 0.6438 | 0.6016 / 0.6400

Table 6.3: Variance results of the cross-validations on the rumor detection task.
Methods | PHEMEv5 | PHEME+PHEME'v5 | PHEMEv9 | PHEME+PHEME'v9
(each cell: Macro-f1 variance / Accuracy variance)
LSTM (L) | 0.0028 / 0.0060 | 0.0003 / 0.0024 | 0.0262 / 0.0036 | 0.0022 / 0.0016
CNN (L) | 0.0022 / 0.0013 | 0.0003 / 0.0012 | 0.0215 / 0.0048 | 0.0017 / 0.0015
VAE-LSTM (L) | 0.0204 / 0.0086 | 0.0001 / 0.0006 | 0.0103 / 0.0082 | 0.0067 / 0.0013
VAE-CNN (L) | 0.0037 / 0.0029 | 0.0013 / 0.0014 | 0.0006 / 0.0031 | 0.0020 / 0.0020
Our model-LSTM (L) | 0.0022 / 0.0025 | 0.0015 / 0.0020 | 0.0095 / 0.0059 | 0.0093 / 0.0066
Our model-CNN (L) | 0.0013 / 0.0023 | 0.0022 / 0.0029 | 0.0101 / 0.0048 | 0.0079 / 0.0051

6.2 Rumor Detection With Explanations

Rumors, defined as "items of circulating information whose veracity status is yet to be verified at the time of posting" [ZAB+18], usually emerge around influential events and spread rapidly with the rise of social media. Far-reaching and fast-spreading rumors can cause serious consequences; for example, they are a growing threat to the democratic process [RvdL19]. Rumor detection suffers from the limited scale of available datasets, and the uncertain nature of rumors makes early detection and classification with explanation challenging. In this section, the proposed discriminator-oriented GAN framework uses the layered generative model to generate an augmented rumor dataset, uses $D_{classify}$ to classify rumors, and relies on $D_{explain}$ to indicate which parts of a sentence are suspicious. The detailed model description can be found in the Method section.

Figure 6.2: Macro-f1 (a) and accuracy (b) comparison between our models (our model-CNN and our model-LSTM) and the baselines on the rumor detection task. The models are trained on the augmented dataset PHEME+PHEME' and tested on both the original PHEME and the augmented PHEME+PHEME'. L indicates that the model is evaluated under the leave-one-out principle.

Table 6.4: Examples of $D_{explain}$ and $D_{classify}$'s predictions on a rumor (first) and a non-rumor (second). The suspicious words in the rumor predicted by $D_{explain}$ are marked in bold. $D_{classify}$ provides a score ranging from 0 to 1, where 0 and 1 represent rumor and non-rumor, respectively.
0.1579 | who's your pick for worst contribution to sydneysiege mamamia uber or the daily tele
0.8558 | glad to hear the sydneysiege is over but saddened that it even happened to begin with my heart goes out to all those affected

6.2.1 Detection Results

Table 6.2 and Figure 6.2 illustrate a comparison between the proposed model's $D_{classify}$ and the baselines for rumor detection.
In this experiment, we use the PHEME data to train our model. During training, our model generates PHEME' to enhance the discriminative model. Data in PHEME are either rumor ($R$) or non-rumor ($N$), and the generated data in PHEME' are all labeled $R$, since we would like $D_{classify}$ to be conservative and filter out human-written non-rumors. Hence, all models in Table 6.2 perform 2-class classification ($R$ / $N$).

Table 6.5: Examples of $D_{explain}$ predicting suspicious words in rumors (marked in bold). $D_{classify}$ outputs probabilities in the range [0, 1], where 0 and 1 represent rumor and non-rumor, respectively.
0.0010 | breaking update 2 hostages escape lindt café through front door 1 via fire door url sydneysiege url
0.0255 | newest putin rumour his girlfriend just gave birth to their child url cdnpoli russia
0.0300 | soldier gets cpr after being shot at war memorial in ottawa url
0.0465 | sydney's central business district is under lockdown as gunman takes hostages at a cafe live stream as it unfolds url
0.2927 | so in 5mins mike brown shaved his head and changed his scandals to shoes i think your being lied too

In real-world applications, the original clean dataset is always available, but modified or adversarial data containing different patterns are not always accessible. Models such as LSTM and CNN lack generalization ability and usually perform worse on adversarial input, whereas generative models such as GANs are more robust. For VAE-LSTM and VAE-CNN, we first pre-train the VAEs and then train the LSTM and CNN on the latent representations of the pre-trained VAEs. Under the first evaluation principle, our model and its LSTM variation outperform all baselines in terms of both macro-f1 and accuracy. Accuracy alone is not sufficient when the test data are unbalanced, so macro-f1 is reported for a comprehensive comparison. Under the first evaluation principle, the robustness and generalization ability of our model are demonstrated by the comparison with the baselines on PHEME+PHEME': our model reaches the highest values on both versions of PHEME+PHEME', and the LSTM variation follows as the second best. Under the leave-one-out (L) principle (i.e., leaving one news topic out for testing and training on the rest), our proposed model and its variation achieve the highest macro-f1 scores in all cases. These results confirm the rumor detection ability of the proposed layered structure on new, out-of-domain data. Adversarial training of the baselines improves their generalization and robustness on PHEME+PHEME' but, as expected, hurts their performance on clean data. Although our model and its variation are trained adversarially, they achieve the highest macro-f1 on the clean data PHEME, confirming that our model outperforms the baselines in addressing the accuracy-reduction problem. Table 6.4 shows two examples that are correctly detected by our model but incorrectly detected by the other baselines. For the first rumor, the baselines CNN, LSTM, VAE-CNN, and VAE-LSTM provide scores of 0.9802, 0.9863, 0.4917, and 0.5138, respectively. Our model provides a very low score for this rumor, while the baselines all generate relatively high scores and even detect it as a non-rumor. This is a very difficult example: from the sentence itself, even human rumor-detection agents cannot confidently pick out the suspicious parts. Our model nevertheless gives a reasonable prediction, showing its ability to understand and analyze complicated rumors.
For the second example, a non-rumor, the baselines CNN, LSTM, VAE-CNN, and VAE-LSTM provide scores of 0.0029, 0.1316, 0.6150, and 0.4768, respectively. In this case, a non-rumor sentence receives a high score from our model but several relatively low scores from the baselines. This example again confirms that our proposed model indeed captures the complicated nature of rumors and non-rumors.

6.2.2 Explanation Results

The decision-explanation component is realized by $D_{explain}$, which offers insight into the detection problem by suggesting suspicious parts of the given rumor texts. Our model's $D_{explain}$ recognizes the modified parts of sequences accurately: in the 2-class PHEME experiments, its macro-f1 on PHEME'v5 and PHEME'v9 is 80.42% and 81.23%, respectively. Examples of $D_{explain}$ predicting suspicious parts of rumors are shown in Table 6.5. In the first rumor, "hostages escape" is the most important part of the sentence, and if these two words are problematic, then the sentence is highly likely to be problematic. Given an unverified or even unverifiable rumor, $D_{explain}$ provides a reasonable explanation without requiring a previously collected verified news database.

Rumor / non-rumor, true / false, and real / fake. Misinformation, disinformation, fake news, and rumor classification have been studied in the literature [RvdL19, CNB20b, PPC20, SCV+18] and frequently suffer from small-scale datasets. The differences between misinformation, disinformation, fake news, and rumor are not well defined, and the labeling in these tasks is sometimes ambiguous and imprecise. In this work, we specifically define a rumor as a piece of information whose veracity is not verified, and its label in the detection task is rumor ($R$) / non-rumor ($N$). When the veracity status is considered, we refer to facts as true ($T$) and to false statements as false ($F$); furthermore, we refer to purely human-written statements as real ($E$) and machine-generated statements as fake ($K$). In the detection section above, we performed binary classification in the rumor detection task: our generative model replaces parts of a sequence and, due to the uncertain nature of rumors, we label the generated (modified) rumors as $R$ and the non-rumors in the original dataset as $N$, emphasizing the goal of filtering out non-rumors in real-world applications. With the real / fake and true / false labeling of misinformation or fake news classification, however, the labeling must be precise, and 2-class labeling is no longer sufficient for the generated (modified) sequences. Specifically, if an input sequence is labeled $Y$, its modified version (i.e., the output of our generative model) is labeled $Y'$ to represent that it was modified from a sequence with label $Y$. In what follows, we perform the following experiments: (i) rumor classification on PHEME, now using the 4-class labels $R$, $R'$, $N$, $N'$; (ii) misinformation (disinformation) classification on FMG (a misinformation / fake news dataset) using the 4-class labels $T$, $T'$, $F$, $F'$; and (iii) fake news classification on FMG using the 4-class labels $E$, $E'$, $K$, $K'$. The experimental results for PHEME (4-class) are shown in Table 6.6. As in the previous PHEME experiment in Table 6.2, we generate a dataset PHEME' for data augmentation; different from before, however, this new PHEME' (4-class) has four labels, $R$, $R'$, $N$, $N'$, and our GAN models are trained for 4-class classification.
In addition, we train the baselines on the augmented dataset PHEME+PHEME' (4-class) and test them on PHEME, and we find that training with augmented data improves the performance of the baselines. Our models (-LSTM and -CNN) still provide the best results compared to the (augmented) baselines.

Table 6.6: Macro-f1 and accuracy comparison between our model and the baselines on the extended 4-class experiments of the rumor detection task on the PHEME dataset. U indicates that the model is trained on PHEME+PHEME'; otherwise it is trained on the original PHEME dataset. All models are tested on PHEME ($R$ / $N$) and PHEME+PHEME' ($R$ / $N$ / $R'$ / $N'$).
Model | PHEMEv5 (2-class) | PHEME+PHEME'v5 (4-class) | PHEMEv9 (2-class) | PHEME+PHEME'v9 (4-class)
(each cell: Macro-f1 / Accuracy)
LSTM | 0.6095 / 0.6259 | 0.2753 / 0.4121 | 0.6304 / 0.6484 | 0.2788 / 0.4179
LSTM (U) | 0.6774 / 0.7480 | 0.5082 / 0.5073 | 0.6836 / 0.7446 | 0.5194 / 0.5205
CNN | 0.6052 / 0.6210 | 0.2766 / 0.4135 | 0.6211 / 0.6396 | 0.2759 / 0.4135
CNN (U) | 0.6760 / 0.7534 | 0.5109 / 0.5083 | 0.6678 / 0.7402 | 0.5239 / 0.5229
VAE-LSTM | 0.5188 / 0.6591 | 0.2464 / 0.2753 | 0.4693 / 0.5205 | 0.1976 / 0.2416
VAE-LSTM (U) | 0.4877 / 0.5810 | 0.2473 / 0.2578 | 0.4879 / 0.5351 | 0.2135 / 0.2602
VAE-CNN | 0.4983 / 0.5629 | 0.2239 / 0.2529 | 0.4303 / 0.7495 | 0.1514 / 0.2504
VAE-CNN (U) | 0.4912 / 0.5361 | 0.2566 / 0.2719 | 0.4813 / 0.5214 | 0.2160 / 0.2617
Our model-LSTM | 0.7776 / 0.8271 | 0.5703 / 0.5678 | 0.7830 / 0.8339 | 0.5631 / 0.5610
Our model-CNN | 0.7485 / 0.8017 | 0.5352 / 0.5419 | 0.7693 / 0.8232 | 0.5558 / 0.5600

Besides rumor detection, we apply our framework to misinformation and fake news detection tasks using a fake news dataset (FMG) [SSSB20], which includes both real / fake and true / false data. In the real / fake task, models differentiate between purely human-written statements and (partially or fully) machine-generated statements, while in the true / false task, models are required to distinguish true statements from false claims. We augment the original dataset (denoted FMG) with our GAN-generated data (denoted FMG') and train several models on the augmented dataset (denoted FMG+FMG'). As in the PHEME (4-class) experiments, we find that models trained on the augmented FMG+FMG' achieve higher performance on the original FMG, as shown in Table 6.7. From these experimental results, we conclude that our framework is effective for data augmentation and helps models achieve higher accuracy. One thing to note is that in this experiment our models do not outperform the augmented LSTM and CNN on the provenance classification task (although they are better than the unaugmented ones); a likely reason is discussed after the tables below.

Table 6.7: Macro-f1 and accuracy comparison between our model and the baselines on the extended 4-class experiments of the provenance (real / fake) and veracity (true / false) tasks. U indicates that the model is trained on FMG+FMG'; otherwise it is trained on FMG. All models are tested on FMG and FMG+FMG'.
Model | Provenance: FMG ($E$ / $K$) | Provenance: FMG+FMG' (4-class) | Veracity: FMG ($T$ / $F$) | Veracity: FMG+FMG' (4-class)
(each cell: Macro-f1 / Accuracy)
LSTM | 0.3963 / 0.3965 | 0.2752 / 0.3745 | 0.4786 / 0.4890 | 0.1792 / 0.2739
LSTM (U) | 0.7062 / 0.7989 | 0.6401 / 0.6450 | 0.6339 / 0.7689 | 0.4985 / 0.5194
CNN | 0.3964 / 0.3965 | 0.2738 / 0.3730 | 0.5478 / 0.6352 | 0.1940 / 0.2984
CNN (U) | 0.7082 / 0.7824 | 0.6287 / 0.6325 | 0.6802 / 0.7724 | 0.5392 / 0.5613
VAE-LSTM | 0.4967 / 0.6305 | 0.2137 / 0.2288 | 0.5099 / 0.6175 | 0.2268 / 0.2740
VAE-LSTM (U) | 0.4871 / 0.6910 | 0.2630 / 0.2797 | 0.5105 / 0.6172 | 0.2793 / 0.2920
VAE-CNN | 0.4624 / 0.5055 | 0.2207 / 0.2494 | 0.4676 / 0.4989 | 0.2075 / 0.2495
VAE-CNN (U) | 0.5122 / 0.6158 | 0.2607 / 0.2615 | 0.5013 / 0.6007 | 0.2644 / 0.2650
Our model-LSTM | 0.6562 / 0.7529 | 0.5027 / 0.5054 | 0.6560 / 0.7524 | 0.5027 / 0.5054
Our model-CNN | 0.5639 / 0.6984 | 0.4543 / 0.4615 | 0.7134 / 0.7779 | 0.5637 / 0.5673

Table 6.8: Examples of $D_{explain}$ failing to predict suspicious words in some short rumors. $D_{classify}$ outputs probabilities in the range [0, 1], where 0 and 1 represent rumor and non-rumor, respectively.
0.0112 | ottawa police report a third shooting at rideau centre no reports of injuries
0.0118 | breaking swiss art museum accepts artworks bequeathed by late art dealer gurlitt url
0.0361 | breaking germanwings co pilot was muslim convert url
0.4451 | germanwings passenger plane crashes in france url
0.5771 | the woman injured last night ferguson url

The likely reason is that provenance classification, by nature, requires distinguishing the patterns of human-written sentences from those of machine-generated ones. In the early stages of our model's training, the training data (generated sequences) fed to our discriminative model are of low quality because the generative model is not yet well trained; these generated sequences carry our own machine-generated noisy patterns, which could make the model converge to suboptimal results.

Limitations and error cases in rumor detection. Examples of error cases of our model in the rumor detection task are presented in Table 6.8. For some short sentences, $D_{explain}$ sometimes fails to predict the suspicious parts. The reason is that the majority of the training data are long sentences, so $D_{explain}$ performs better on long sentences; we can mitigate this problem by feeding more short sentences to our model.

Table 6.9: Comparison between our model and the baselines on the gene classification with mutation detection task. * indicates the best result from the corresponding paper. 2-class refers to $AP$, $AN$ for acceptor and $DP$, $DN$ for donor; 4-class refers to $AP$, $AN$, $AP'$, $AN'$ for acceptor and $DP$, $DN$, $DP'$, $DN'$ for donor. A and D indicate acceptor and donor.
Table 6.9: Comparison between our model and baselines on the gene classification with mutation detection task. * indicates the best result from the corresponding paper. 2-class refers to AP, AN for acceptor and DP, DN for donor. 4-class refers to AP, AN, AP', AN' for acceptor and DP, DN, DP', DN' for donor. A and D indicate acceptor and donor.

Model                | NN269 (2-class)            | NN269+NN269' (2-class)     | NN269+NN269' (4-class)
                     | Macro-f1  Accuracy  AURoC  | Macro-f1  Accuracy  AURoC  | Macro-f1  Accuracy  AURoC
LSTM (A)             | 0.8120    0.8870    0.9305 | 0.7794    0.8580    0.9036 | 0.7800    0.8580    0.9715
CNN (A)              | 0.5663    0.7933    0.6324 | 0.5594    0.7808    0.6131 | 0.5593    0.7808    0.8875
VAE-LSTM (A)         | 0.7664    0.8566    0.8451 | 0.6781    0.8323    0.7780 | 0.6531    0.8342    0.8806
VAE-CNN (A)          | 0.5657    0.7539    0.6135 | 0.5744    0.7651    0.6219 | 0.5379    0.7470    0.8411
EFFECT (A)           | -         -         0.9770 | -         -         -      | -         -         -
Our model-LSTM (A)   | 0.9131    0.9458    0.9781 | 0.8794    0.9243    0.9658 | 0.8758    0.9223    0.9879
Our model-CNN (A)    | 0.9175    0.9494    0.9807 | 0.8831    0.9301    0.9691 | 0.8839    0.9311    0.9894
LSTM (D)             | 0.8336    0.8214    0.9003 | 0.8148    0.7998    0.8802 | 0.7648    0.7530    0.9246
CNN (D)              | 0.9131    0.9393    0.9795 | 0.9025    0.9323    0.9746 | 0.8336    0.8583    0.9596
VAE-LSTM (D)         | 0.8011    0.8515    0.9218 | 0.7336    0.8329    0.8217 | 0.5774    0.7692    0.9194
VAE-CNN (D)          | 0.8386    0.8772    0.9554 | 0.7909    0.8593    0.8528 | 0.5585    0.7415    0.9190
EFFECT (D)           | -         -         0.9820 | -         -         -      | -         -         -
Our model-LSTM (D)   | 0.9272    0.9484    0.9822 | 0.8802    0.9140    0.9766 | 0.8113    0.8580    0.9541
Our model-CNN (D)    | 0.9274    0.9494    0.9810 | 0.8988    0.9296    0.9635 | 0.8119    0.8470    0.9776

Table 6.10: Examples of the generative model modifying gene sequences and the discriminative model detecting the modifications (marked in bold).

Original:   GGTGGGTGTAGCCGTGGCTAGGGCTGACGGGGCCACTTGGGCTTGGCCGCATGCCCCTGTGCCCCACCAGCCATCCTGAACCCAACCTAG
Modified:   GGTGGGTGTAGCCGTGGCTAGGGCTGACGGGGCCACTTGGGCTTGGCAGCATGNNNCTGTGCCCCACCAGCCATGCTGAACCCAACCTAG
Prediction: GGTGGGTGTAGCCGTGGCTAGGGCTGACGGGGCCACTTGGGCTTGGCAGCATGNNNCTGTGCCCCACCAGCCATGCTGAACCCAACCTAG

Original:   GCGCGGGGCGCTGAGCTCCAGGTAGGGCGCGCAGCCTGGTCAGGTGGCAGCCTTACCTCAGGAGGCTCAGCAGGGGTCCTCCCCACCTGC
Modified:   GCGCGGGGCGCTGAGCTCCAGGTAGGGCGCGCAGCCTGGTCAGGTGGCAGGNTTATSTCAGGAGGCTCAGCAGGGGTCATCCCCACCTGC
Prediction: GCGCGGGGCGCTGAGCTCCAGGTAGGGCGCGCAGCCTGGTCAGGTGGCAGGNTTATSTCAGGAGGCTCAGCAGGGGTCATCCCCACCTGC

Original:   TGGTGGCTAATTCAGGAATGTGCTGCTGTCTTTCTGCAGACGGGGGCAAGCACGTGGCATACATCATCAGGTCGCACGTGAAGGACCACT
Modified:   TGGTGGCTAATTCAGGAATGTGNTGNTGTSTTTGTGCAGACGGGGGCAAGCACGTGGCATACATCATCAGGTNGCACGTGAAGGACCACT
Prediction: TGGTGGCTAATTCAGGAATGTGNTGNTGTSTTTGTGCAGACGGGGGCAAGCACGTGGCATACATCATCAGGTNGCACGTGAAGGACCACT

6.3 Gene Classification With Mutation Detection

Genetic sequence classification, gene mutation detection/prediction, and DNA/RNA classification all work with genetic sequences, and deep learning-based methods in the literature take sequential data as input and output the classification results [CM15, SLL+19, LN15]. Since our proposed framework demonstrates very good results for sequential/textual data (as shown in the previous sections), we next adopt a textual representation [LWG+20, Chi08] of gene sequences and investigate a gene mutation phenomenon. Note that a binary representation of genetic sequences is also frequently used in the literature [AVA+19, TWA+20]. In our GAN framework, the input to the models is first encoded into a high-dimensional vector; therefore, the binary formatting does not affect the experimental results. In this experiment, we first perform a mutation in genetic sequences using the generative model, and then use D_classify to classify a genetic sequence and predict which parts of the sequence are mutated.
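As an illustration of this mutation step, the following sketch overwrites N_replace positions of a textual gene sequence and records the ground-truth positions that the discriminators are later asked to recover. The random choices and the assumed symbol alphabet are stand-ins for the learned generative model.

    # Illustrative mutation of a textual gene sequence at n_replace positions.
    # A uniform random choice replaces the learned generator in this sketch.
    import random

    ALPHABET = "ACGTNS"   # assumed symbol set; N and S appear among the edits in Table 6.10

    def mutate(sequence, n_replace, rng):
        positions = sorted(rng.sample(range(len(sequence)), n_replace))
        chars = list(sequence)
        for p in positions:
            chars[p] = rng.choice([c for c in ALPHABET if c != chars[p]])
        return "".join(chars), positions

    rng = random.Random(0)
    original = "GGTGGGTGTAGCCGTGGCTAGGGCTGACGGGGCCACTTGGGCTTGGCC"
    mutated, mutated_positions = mutate(original, n_replace=3, rng=rng)
    print(original)
    print(mutated)
    print("mutated positions:", mutated_positions)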
We find that our framework not only provides high accuracy in the classification task, but also accurately identifies the mutations in the generated sequences. In this experiment, all models are trained on NN269+NN269' (an augmented dataset) to ensure fairness, and we follow the labeling rule used in the misinformation/fake news detection task. When testing with NN269+NN269', there are 8 classes in total: AP, AN, DP, DN from NN269 (the original splice site dataset) and AP', AN', DP', DN' from NN269' (the generated dataset). The detailed experimental setup can be found in the Method Section. If solely clean data from NN269 is accessible during training, then our proposed model and its variation are the only models that can recognize whether a given sequence is modified or unmodified. A comparison between our model's (and the variation's) D_classify and the baselines is shown in Table 6.9. On the long acceptor data, the baselines perform significantly worse than our model and the variation. On the short donor data, our model and the variation achieve the highest AURoCs. This implies that our model and the variation are stronger when the inputs are long sequences. The layered structure and the adversarial training on the augmented dataset give our model the ability to extract meaningful patterns from long sequences. For short sequences, our model and the variation provide the highest AURoC, and simpler models such as CNN can also give good classification results. This is because, for short sequences, textual feature mining and understanding is relatively easier than for long sequences. On NN269', our model's D_classify and D_explain achieve 92.25% and 72.69% macro-f1, respectively. Examples of D_explain's predictions are shown in Table 6.10. The results suggest that our model can not only classify a gene sequence, but also provide an accurate prediction that explains which part of the sequence is modified.
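For reference, the macro-f1 and AURoC values reported in this section are standard metrics; a minimal sketch of their computation with scikit-learn is shown below, using toy labels and scores rather than our experimental outputs.

    # Illustrative computation of macro-F1 and AURoC on toy binary labels/scores.
    import numpy as np
    from sklearn.metrics import f1_score, roc_auc_score

    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                    # e.g., AN = 0, AP = 1
    y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.9, 0.4, 0.6, 0.2])   # model's P(class = 1)
    y_pred = (y_score >= 0.5).astype(int)

    print("macro-F1:", f1_score(y_true, y_pred, average="macro"))
    print("AURoC   :", roc_auc_score(y_true, y_score))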
6.4 Conclusion

Rumor, as a piece of circulating information whose veracity status is not yet verified, is hard to detect, especially when we also have to point out why it is a rumor. Misinformation, whose veracity is determined, can be detected when there exists a verified database containing information about why the misinformation is wrong. Rumor detection is a hard problem, and rumor detectors in the literature usually suffer from low accuracy. The reasons for the unsatisfactory performance are multi-fold: for example, rumor datasets are usually small and imbalanced. Data-driven machine learning detectors do not have sufficient high-quality data to work with, and this data shortage causes low or extremely imbalanced performance. Rumors usually emerge in bursts during urgent national or even international events, and confirming the veracity of rumors can take a long time and a large amount of human resources. Therefore, rumors can circulate as pieces of information with unconfirmed veracity for a long time and provoke social panic, as in the recent coronavirus outbreak. Rumors are associated with different events, so if a detector is trained on previously observed rumors from other events, the detection of unseen rumors associated with a new event usually results in low accuracy because the patterns of the rumors change. Compared to the detection problem, pointing out the problematic parts of a rumor is even more difficult, for similar reasons.

Genetic sequence classification, genetic mutation detection/prediction, gene-disease association, and DNA expression classification all work with gene sequences. Machine learning-based methods such as support vector machines and deep neural networks have already been used to solve these problems. In this work, we propose and verify the applicability of our designed framework to gene classification and mutation detection. The fundamental rationale is that a genetic sequence is essentially textual data. Since our proposed framework is designed to take textual data as input and make classification decisions, it is reasonable to apply the framework to gene data. Mutation detection in gene data aims to find the abnormal places in a gene sequence, and rumor detection with explanation aims to find the abnormal places in a sentence. One problem faced by gene mutation detection is that there might be unknown patterns in the gene sequence, which is similar to the generalization problem in rumor detection: unknown patterns exist in unobserved rumors. Hence, our proposed GAN-based model can alleviate this issue by intelligently augmenting the dataset. From an algorithmic perspective, both rumor detection and gene classification can be formulated as textual sequence classification problems. (Although genetic sequences can be represented in binary format, we have discussed that binary-formatted genetic sequences can be further encoded into vectors as the input to our model, which does not produce different results in our experiments.) Therefore, our framework, as a sequential data classification model, should be applicable to both rumor and gene classification. We can learn which parts of a rumor are suspicious or machine-generated, and this is no different from learning, given a sequence, which parts contain abnormal patterns. Following similar reasoning, in the gene mutation detection task, our model learns which parts of a genetic sequence are abnormal. The difference is that language has intuitive semantic meanings, whereas a genetic sequence may have unknown, hidden semantic meanings. Our goal is to investigate both, even though they are different, in order to provide an example of a methodology for interdisciplinary research and analysis.

In summary, we proposed a layered text-level rumor detector and gene mutation detector with explanations based on a GAN. We used the policy gradient method to effectively train the layered generators. Our proposed model outperforms the baseline models in mitigating the accuracy reduction problem that arises when only clean data are available. We demonstrate the classification ability and generalization power of our model by comparing it with multiple state-of-the-art models on both the rumor detection and the gene classification with mutation detection problems. On average, in the 2-class rumor detection task, our proposed model outperforms the baselines on the clean dataset PHEME and the enhanced dataset PHEME+PHEME' by 26.85% and 17.04% in terms of macro-f1, respectively. Our model provides reasonable explanations without a previously constructed verified news database and achieves significantly high performance. In the gene classification with mutation detection task, our model identifies the mutated gene sequences with high precision. On average, our model outperforms the baselines on both NN269 and NN269+NN269' (2-class) by 10.71% and 16.06% in terms of AURoC, respectively.
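To make the policy-gradient training step mentioned above concrete, the following schematic sketch (in PyTorch, with toy dimensions, a random reward standing in for the discriminator, and a single update step) shows how REINFORCE weights the log-probabilities of sampled tokens by a sequence-level reward; it is an illustration of the idea, not our implementation.

    # Schematic REINFORCE step for a token-level generator with a sequence reward.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab_size, hidden, seq_len = 20, 32, 8
    generator = nn.GRU(input_size=vocab_size, hidden_size=hidden, batch_first=True)
    to_logits = nn.Linear(hidden, vocab_size)
    optimizer = torch.optim.Adam(list(generator.parameters()) + list(to_logits.parameters()), lr=1e-3)

    def reward_fn(tokens):
        # Stand-in for a discriminator's score of the generated sequence, in [0, 1].
        return torch.rand(tokens.shape[0])

    inp = torch.zeros(1, 1, vocab_size)   # batch of one; all-zero vector as start symbol
    h = None
    log_probs, sampled = [], []
    for _ in range(seq_len):
        out, h = generator(inp, h)                                   # out: (1, 1, hidden)
        dist = torch.distributions.Categorical(logits=to_logits(out[:, -1]))
        tok = dist.sample()                                          # sampled token id, shape (1,)
        log_probs.append(dist.log_prob(tok))
        sampled.append(tok)
        inp = F.one_hot(tok, vocab_size).float().unsqueeze(1)        # feed sample back as next input

    sequence = torch.stack(sampled, dim=1)                           # (1, seq_len)
    reward = reward_fn(sequence)                                     # (1,)
    loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print("REINFORCE loss:", float(loss))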
In both the rumor detection and gene mutation detection tasks, our model's ability to generate explanations is demonstrated by identifying the mutations accurately (above 70% macro-f1). We find that using two discriminators to perform classification and explanation separately achieves higher performance than using one discriminator to realize both functions. We also find that the pre-training of D_classify and the varying of N_replace contribute to the high accuracy of D_explain. Despite the high performance in both applications, we do find a limitation of our framework. D_explain sometimes fails to provide explanations in the rumor experiments when the input sentences are very short, even though the corresponding D_classify generates accurate predictions. One potential reason for this result is that the dataset contains only a small number of short sentences, so the model is not trained enough on short-sentence cases. We also observe that D_explain performs slightly worse in the gene mutation detection experiments than in the rumor detection task. This could be caused by the choice of N_replace (the number of items to be replaced in a sequence), which is a hyperparameter that affects the mutation detection ability. As part of our future work, to improve the performance of the discriminators, we would like to choose N_replace intelligently. To enhance the performance of our generators, we would like to explore the application of hierarchical attention networks [YYD+16]. We will also investigate the dependencies between the discriminators of our model so that D_explain can benefit from the accurate D_classify.

Chapter 7
Conclusion and Future Research Directions

7.1 Major Contribution of this Thesis

Nowadays, the need for trustworthiness, fairness, and uncertainty quantification in machine-assisted decision making, human-in-the-loop CPHSs, and AI-driven control, analysis, and optimization systems is rising, especially when the security and safety of humans, intelligent agents, or systems are at stake [BBK19]. The increasing application of AI raises concerns regarding the security and morality of AI. Questions such as “Why should I trust AI?”, “How much should I trust AI?”, and “What are the chances that my trust in AI may result in tragic consequences?” have become daily concerns. Quantification of uncertainty and trust has become a popular research topic due to the growing applications of AI. However, the lack of theory and data is the major obstacle to uncertainty and trust quantification. In this thesis, we set forth the theoretical foundations for modeling, analysis, and optimization of CPHSs by proposing trust modeling frameworks for deep learning and multi-agent CPHSs. We summarize our main contributions as follows.

• We address the trust issues of deep learning by proposing DeepTrust. DeepTrust imports an uncertainty reasoning logic to AI and quantifies the trust opinion of DNNs based on data trustworthiness and the topology of the neural network. We find that the trust opinion of a DNN is affected by both the topology and the trustworthiness of the training data, and that some topologies are more robust than others. More precisely, a robust topology results in higher projected trust probability values, not only when trained with trustworthy data, but also when fed with untrustworthy data. In extreme cases where only uncertain data is available, belief can still be extracted out of pure uncertainty.
Based on DeepTrust, we further extend the framework to generalize to CNNs and propose to involve trust in CNN optimization to improve the overall trust and accuracy performance of CNNs in cases where the input data are noisy, incomplete, and untrustworthy. Designing neural networks is generally a challenging task. We propose to adjust the architecture according to the trust opinion and to optimize neural networks based on trustworthiness. Based on our observations, the accuracy and the trustworthiness of the outcome do not necessarily correlate. DeepTrust and its follow-up CNN trustworthiness work may therefore shed light on the design of neural networks with a focus not only on accuracy but also on trust when dealing with untrustworthy datasets.

• In CPHSs consisting of multiple human and machine agents, such as multi-agent intelligent transportation systems, groups of drones, or self-driving vehicles, safe and efficient coordination is an important consideration. We propose a general trust framework for such multi-agent CPHSs to quantify the trustworthiness of individual agents and to ensure safe and trustworthy control. We also propose a self-supervised trust-evaluation framework for perception systems in self-driving cars to ensure safety. We test our proposed methods in contexts involving a mixture of trustworthy and untrustworthy human and intelligent agents and show that, with our trust-based or trust-modulated control system, safety is significantly improved and potential fatal accidents are prevented.

• In the human-dominated systems of CPHSs, such as social media, various rumors during emergency events threaten internet credibility, provoke social panic, and may lead to long-term negative consequences. Recent rumors on the 2019 novel coronavirus stand as a shocking example. To mitigate such issues provoked by rumors, we propose VRoC, a variational autoencoder-aided rumor classification system consisting of four components: rumor detection, rumor tracking, stance classification, and veracity classification. The novel architecture of VRoC, including its classification techniques suited to the tasks associated with rumor handling and the designed co-train engine, contributes to the high performance and generalization abilities of VRoC. To address the limited and imbalanced data issue, the low-performance problem, and the lack of explanations, we also propose a GAN-based framework, which augments the dataset by generating new rumors/misinformation/fake news and uses the augmented data to train the discriminators to achieve high accuracy. The layered model intelligently makes detection decisions and generates a reasonable explanation.

7.2 Future Research Directions

As discussed in Chapters 2-3, we propose trustworthiness quantification frameworks for DNNs and CNNs and use them to evaluate the trustworthiness of deep learning architectures and to optimize NNs. We believe these works provide an innovative evaluation metric and are useful in deep learning model evaluation and design. In future work, we plan to apply our trust quantification framework to state-of-the-art deep learning models and to further investigate the noise-tolerant property of trust-aware neural networks under more advanced attacks. Quantifying the trustworthiness of machine learning datasets and optimizing deep learning architectures based on both trust and accuracy could also be explored.
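As background for the trust opinions referred to above, the sketch below illustrates the subjective-logic notion of an opinion (belief b, disbelief d, uncertainty u, base rate a, with b + d + u = 1) and its projected probability P = b + a*u. The numbers are illustrative, and the actual propagation of opinions through a network topology in DeepTrust is more involved.

    # Illustrative subjective-logic opinion and its projected trust probability.
    from dataclasses import dataclass

    @dataclass
    class Opinion:
        belief: float
        disbelief: float
        uncertainty: float
        base_rate: float = 0.5

        def projected_probability(self) -> float:
            # Standard subjective-logic projection: P = b + a * u.
            return self.belief + self.base_rate * self.uncertainty

    trusted_data = Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2)
    vacuous_data = Opinion(belief=0.0, disbelief=0.0, uncertainty=1.0)  # pure uncertainty
    for name, op in [("trustworthy data", trusted_data), ("uncertain data", vacuous_data)]:
        print(name, "-> projected trust probability:", op.projected_probability())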
In Chapter 4, we propose general trust frameworks to quantify the trustworthiness of agents in multi-agent CPHSs, at both the system level and the individual agent level. We also use the quantified trustworthiness in control to fulfill security purposes, e.g., reducing collision rates and improving the trustworthiness of systems involving human and non-human agents. An air traffic control system is a great example that could benefit from trust-aware control: with conflicting information collected from different sources, how much should one trust each piece of information, and how should one adapt and make decisions in such scenarios? For future research directions, we are encouraged to go beyond intelligent transportation systems and to develop and apply such trust-focused systems in a broader range of CPHSs, such as medical and healthcare, smart city, and intelligent control systems. In addition, trust-aware control in CPHSs is also a direction worth exploring to make systems adaptive with respect to trustworthiness specifications.

Following up on the misinformation and rumor studies in Chapters 5-6, we provide a complete rumor classification system and an in-depth COVID-19 misinformation network evolution analysis, connect the genetic data processing and natural language processing fields, and provide new angles and opportunities for researchers in both fields to contribute mutually. One future direction could be detecting malicious bots or social media users based on their activities, such as their rumor-related activities and the evaluated trustworthiness of their friend accounts. We believe our proposed VAE and GAN frameworks could be beneficial to numerous textual data-based problems, such as rumor and misinformation detection, review classification for product recommendation, Twitter-bot detection and tracking, false information generation and attack defense, and various genetic data-based applications.

Reference List

[20220] What is a gene mutation and how do mutations occur?, 2020. https://ghr.nlm.nih.gov/primer/mutationsanddisorders/genemutation.
[AAMA10] Faisal Alkhateeb, Eslam Al Maghayreh, and Shadi Aljawarneh. A multi agent-based system for securing university campus: Design and architecture. In 2010 International Conference on Intelligent Systems, Modelling and Simulation, pages 75–79. IEEE, 2010.
[AB02] Réka Albert and Albert-László Barabási. Statistical mechanics of complex networks. Reviews of modern physics, 74(1):47, 2002.
[AB14] Jeff Alstott and Dietmar Plenz Bullmore. powerlaw: a python package for analysis of heavy-tailed distributions. PloS one, 9(1), 2014.
[AC15] Jinwon An and Sungzoon Cho. Variational autoencoder based anomaly detection using reconstruction probability. Special Lecture on IE, 2(1), 2015.
[ACB17] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017.
[Ace19] Alberto Acerbi. Cognitive attraction and online misinformation. Palgrave Communications, 5(1):1–7, 2019.
[AGY19] Hunt Allcott, Matthew Gentzkow, and Chuan Yu. Trends in the diffusion of misinformation on social media. Research & Politics, 6(2):2053168019848554, 2019.
[AJB00] Réka Albert, Hawoong Jeong, and Albert-László Barabási. Error and attack tolerance of complex networks. nature, 406(6794):378–382, 2000.
[AN11] Ramón Fernandez Astudillo and João Paulo da Silva Neto. Propagation of uncertainty through multilayer perceptrons for robust automatic speech recognition.
In Twelfth Annual Conference of the International Speech Communication Association, 2011. [APH + 21] Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazade- gan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. A review of uncertainty quantification in deep learning: Techniques, applications and chal- lenges. Information Fusion, 76:243–297, 2021. [ARC + 15] Mani Amoozadeh, Arun Raghuramu, Chen-Nee Chuah, Dipak Ghosal, H Michael Zhang, Jeff Rowe, and Karl Levitt. Security vulnerabilities of connected vehicle streams and their impact on cooperative driving. IEEE Communications Magazine, 53(6):126–132, 2015. [ASS11] Tsz-Chiu Au, Neda Shahidi, and Peter Stone. Enforcing liveness in autonomous traffic management. In Twenty-Fifth AAAI Conference on Artificial Intelligence, 2011. [A V A + 19] Leon Anavy, Inbal Vaknin, Orna Atar, Roee Amit, and Zohar Yakhini. Data storage in dna with fewer synthesis cycles using composite dna letters. Nature biotechnology, 37(10):1229–1236, 2019. [Axe16] Jakob Axelsson. Safety in vehicle platooning: A systematic literature review. IEEE Transactions on Intelligent Transportation Systems, 18(5):1033–1045, 2016. [AZS15] Tsz-Chiu Au, Shun Zhang, and Peter Stone. Autonomous intersection management for semi-autonomous vehicles. In Routledge Handbook of Transportation, pages 116–132. Routledge, 2015. [BA99] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. science, 286(5439):509–512, 1999. [BB98] David Barber and Christopher M Bishop. Ensemble learning in bayesian neural networks. Nato ASI Series F Computer and Systems Sciences, 168:215–238, 1998. [BB01a] Ginestra Bianconi and A-L Barabási. Competition and multiscaling in evolving networks. EPL (Europhysics Letters), 54(4):436, 2001. [BB01b] Ginestra Bianconi and Albert-László Barabási. Bose-einstein con- densation in complex networks. Physical review letters, 86(24):5632, 2001. 189 [BBK19] Edmon Begoli, Tanmoy Bhattacharya, and Dimitri Kusnezov. The need for uncertainty quantification in machine-assisted medical deci- sion making. Nature Machine Intelligence, 1(1):20, 2019. [BCB14] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. [BCC + 17] Eric K Butler, Anca A Chandra, Pawan R Chowdhary, Susanne M Glissmann-Hochstein, Thomas D Griffin, Divyesh Jadav, Sunhwan Lee, and Hovey R Strong Jr. Drone air traffic control and flight plan management, December 26 2017. US Patent 9,852,642. [BDH + 21] Anand Balakrishnan, Jyotirmoy Deshmukh, Bardh Hoxha, Tomoya Yamaguchi, and Georgios Fainekos. PerceMon: Online Monitoring for Perception Systems. arXiv:2108.08289 [cs], August 2021. [BLCGPO20] Jesús Bobadilla, Raúl Lara-Cabrera, Ángel González-Prieto, and Fer- nando Ortega. Deepfair: deep learning for improving fairness in recommender systems. arXiv preprint arXiv:2006.05255, 2020. [BM19] Alexandre Bovet and Hernán A Makse. Influence of fake news in twit- ter during the 2016 us presidential election. Nature communications, 10(1):1–14, 2019. [Bor05] Stephen P Borgatti. Centrality and network flow. Social networks, 27(1):55–71, 2005. [BRG16] Jorge Bernal Bernabe, Jose Luis Hernandez Ramos, and Antonio F Skarmeta Gomez. Taciot: multidimensional trust-aware access control system for the internet of things. Soft Computing, 20(5):1763– 1779, 2016. [CA18] Jin-Hee Cho and Sibel Adali. 
Is uncertainty always bad?: Effect of topic competence on uncertain opinions. In 2018 IEEE International Conference on Communications (ICC), pages 1–7. IEEE, 2018. [Cau20] T Caulfield. Pseudoscience and covid-19-we’ve had enough already. Nature, 2020. [CGL + 18] Juan Cao, Junbo Guo, Xirong Li, Zhiwei Jin, Han Guo, and Jintao Li. Automatic rumor detection on microblogs: A survey. arXiv preprint arXiv:1807.03505, 2018. 190 [Che17] Hong Chen. Applications of cyber-physical system: a literature review. Journal of Industrial Integration and Management, 2(03):1750012, 2017. [Chi08] Heidi Chial. Dna sequencing technologies key to the human genome project. Nature Education, 1(1), 2008. [Cla18] Aaron Clauset. Trends and fluctuations in the severity of interstate wars. Science advances, 4(2):eaao3580, 2018. [CLNB21] Mingxi Cheng, Yizhi Li, Shahin Nazarian, and Paul Bogdan. From rumor to genetic mutation detection with explanations: a gan approach. Scientific Reports, 11(1):1–14, 2021. [CLT + 21] Charles Corbière, Marc Lafon, Nicolas Thome, Matthieu Cord, and Patrick Pérez. Beyond first-order uncertainty estimation with eviden- tial models for open-world recognition. In ICML 2021 Workshop on Uncertainty and Robustness in Deep Learning, 2021. [CLYZ18] Tong Chen, Xue Li, Hongzhi Yin, and Jun Zhang. Call attention to rumors: Deep attention based recurrent neural networks for early rumor detection. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 40–52. Springer, 2018. [CM15] Devi Arockia Vanitha Ca and Venkatesulu Mc. Gene expression data classification using support vector machine and mutual information- based gene selection. Procedia Computer Science, 47:13–21, 2015. [CMP11] Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. Informa- tion credibility on twitter. In Proceedings of the 20th international conference on World wide web, pages 675–684. ACM, 2011. [CN20] Jon Cohen and Dennis Normile. New sars-like virus in china triggers alarm, 2020. [CNB20a] Mingxi Cheng, Shahin Nazarian, and Paul Bogdan. There is hope after all: Quantifying opinion and trustworthiness in neural networks. Frontiers in Artificial Intelligence, 3:54, 2020. [CNB20b] Mingxi Cheng, Shahin Nazarian, and Paul Bogdan. Vroc: Varia- tional autoencoder-aided multi-task rumor classifier based on text. In Proceedings of The Web Conference 2020, pages 2892–2898, 2020. [CSN09] Aaron Clauset, Cosma Rohilla Shalizi, and Mark EJ Newman. Power- law distributions in empirical data. SIAM review, 51(4):661–703, 2009. 191 [CSTL15] Xinran Chen, Sei-Ching Joanna Sin, Yin-Leng Theng, and Chei Sian Lee. Why students share misinformation on social media: Motiva- tion, gender, and study-level differences. The Journal of Academic Librarianship, 41(5):583–592, 2015. [CVMG + 14] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using rnn encoder-decoder for statisti- cal machine translation. arXiv preprint arXiv:1406.1078, 2014. [CYZ + 21] Mingxi Cheng, Chenzhong Yin, Junyao Zhang, Shahin Nazarian, Jyotirmoy Deshmukh, and Paul Bogdan. A general trust framework for multi-agent systems. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, pages 332–340, 2021. [DADF18] Adel Dokhanchi, Heni Ben Amor, Jyotirmoy V . Deshmukh, and Geor- gios Fainekos. Evaluating Perception Systems for Autonomous Vehi- cles Using Quality Temporal Logic. 
In Christian Colombo and Martin Leucker, editors, Runtime Verification, Lecture Notes in Computer Science, pages 409–416, Cham, 2018. Springer International Publish- ing. [Dem08] Arthur P Dempster. Upper and lower probabilities induced by a multivalued mapping. In Classic Works of the Dempster-Shafer Theory of Belief Functions, pages 57–72. Springer, 2008. [DHX + 17] Shuiguang Deng, Longtao Huang, Guandong Xu, Xindong Wu, and Zhaohui Wu. On deep learning for trust-aware recommendations in social networks. IEEE Transactions on Neural Networks and Learning Systems, 28(5):1164–1177, 2017. [DMDV20] S Devi, P Malarvezhi, R Dayana, and K Vadivukkarasi. A comprehen- sive survey on autonomous driving cars: A perspective view. Wireless Personal Communications, 114:2121–2133, 2020. [DMIC06] Giulia De Masi, Giulia Iori, and Guido Caldarelli. Fitness model for the italian interbank money market. Physical Review E, 74(6):066112, 2006. [Don20] J Donovan. Social-media companies must flatten the curve of misin- formation. Nature, 2020. [DRC + 17] Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, and Vladlen Koltun. CARLA: An open urban driving simulator. In 192 Proceedings of the 1st Annual Conference on Robot Learning, pages 1–16, 2017. [DS + 90] Avinash K Dixit, John JF Sherrerd, et al. Optimization in economic theory. Oxford University Press on Demand, 1990. [DS04] Kurt Dresner and Peter Stone. Multiagent traffic management: A reservation-based intersection control mechanism. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems-Volume 2, pages 530–537, 2004. [DS08] Kurt Dresner and Peter Stone. A multiagent approach to autonomous intersection management. Journal of artificial intelligence research, 31:591–656, 2008. [DTMD08] Dmitri Dolgov, Sebastian Thrun, Michael Montemerlo, and James Diebel. Practical search techniques in path planning for autonomous driving. Ann Arbor, 1001(48105):18–80, 2008. [DVCQ19] Xishuang Dong, Uboho Victor, Shanta Chowdhury, and Lijun Qian. Deep two-path semi-supervised learning for fake news detection. arXiv preprint arXiv:1906.05659, 2019. [DYZH20] Mengnan Du, Fan Yang, Na Zou, and Xia Hu. Fairness in deep learning: A computational perspective. IEEE Intelligent Systems, 2020. [EMH18] Thomas Elsken, Jan Hendrik Metzen, and Frank Hutter. Neural archi- tecture search: A survey. arXiv preprint arXiv:1808.05377, 2018. [Esh21] Birhanu Eshete. Making machine learning trustworthy. Science, 373(6556):743–744, 2021. [Est19] Ortiz-Ospina Esteban. The rise of social media, 2019. https: //ourworldindata.org/rise-of-social-media. [Fac20] FactCheck. Coronavirus misinformation spreads like a virus, 2020. https://www.factcheck.org/2020/01/ coronavirus-misinformation-spreads-like-a-virus/. [Fac21] Facebook. Facebook: combating misinformation, 2021. https: //about.fb.com/news/tag/misinformation/. [FAEC14] Adrien Friggeri, Lada Adamic, Dean Eckles, and Justin Cheng. Rumor cascades. In Eighth International AAAI Conference on Weblogs and Social Media, 2014. 193 [FdANSQM + 18] Henrique Ferraz de Arruda, Filipi Nascimento Silva, Vanessa Queiroz Marinho, Diego Raphael Amancio, and Luciano da Fon- toura Costa. Representation of texts as complex networks: a meso- scopic approach. Journal of Complex Networks, 6(1):125–144, 2018. [FGD18] William Fedus, Ian Goodfellow, and Andrew M Dai. Maskgan: better text generation via filling in the_. arXiv preprint arXiv:1801.07736, 2018. 
[FHSR + 20] Di Feng, Christian Haase-Schuetz, Lars Rosenbaum, Heinz Hertlein, Claudius Glaeser, Fabian Timm, Werner Wiesbeck, and Klaus Diet- mayer. Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges. IEEE Transactions on Intelligent Transportation Systems, 2020. [Fiv16] FiveThirtyEight. 2016 election polls. https://www.kaggle. com/fivethirtyeight/2016-election-polls, 2016. online, accessed 7 April 2019. [Fle11] Bill Fleming. New innovative ics [automotive electronics]. IEEE Vehicular Technology Magazine, 6(2):4–8, 2011. [Fle20] Nic Fleming. Coronavirus misinformation, and how scientists can help to fight it. Nature, 583(7814):155–156, 2020. [FMP05] Jean-Claude Fernandez, Laurent Mounier, and Cyril Pachon. A model- based approach for robustness testing. In IFIP International Confer- ence on Testing of Communicating Systems, pages 333–348. Springer, 2005. [Fri09] Karl Friston. The free-energy principle: a rough guide to the brain? Trends in cognitive sciences, 13(7):293–301, 2009. [Fuk69] Kunihiko Fukushima. Visual feature extraction by a multilayered network of analog threshold elements. IEEE Transactions on Systems Science and Cybernetics, 5(4):322–333, 1969. [GAA + 17] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Improved training of wasserstein gans. In Advances in neural information processing systems, pages 5767–5777, 2017. [GBC16] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. 6.2. 2.3 softmax units for multinoulli output distributions. Deep learning, (1):180, 2016. 194 [GBD + 18] Genevieve Gorrell, Kalina Bontcheva, Leon Derczynski, Elena Kochk- ina, Maria Liakata, and Arkaitz Zubiaga. Rumoureval 2019: Deter- mining rumour veracity and support for rumours. arXiv preprint arXiv:1809.06683, 2018. [GDDM14] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 580–587, 2014. [GDDM15] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Region-based convolutional networks for accurate object detection and segmentation. IEEE transactions on pattern analysis and machine intelligence, 38(1):142–158, 2015. [GG16] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approxima- tion: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050–1059, 2016. [GIG17] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In International Conference on Machine Learning, pages 1183–1192. PMLR, 2017. [Gil06] H Gill. Nsf perspective and status on cyber-physical systems. in national workshop on cyber-physical systems. Austin, TX, 2006. [GKRT04] Ramanthan Guha, Ravi Kumar, Prabhakar Raghavan, and Andrew Tomkins. Propagation of trust and distrust. In Proceedings of the 13th international conference on World Wide Web, pages 403–412. ACM, 2004. [GLM + 12] Andreas Geiger, Martin Lauer, Frank Moosmann, Benjamin Ranft, Holger Rapp, Christoph Stiller, and Julius Ziegler. Team annieway’s entry to the 2011 grand cooperative driving challenge. IEEE Transac- tions on Intelligent Transportation Systems, 13(3):1008–1017, 2012. [GLSU13] A Geiger, P Lenz, C Stiller, and R Urtasun. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11):1231–1237, September 2013. 
[GMDC + 18] Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. Ai2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE, 2018. 195 [GMJ19] Georgia Gkioxari, Jitendra Malik, and Justin Johnson. Mesh r-cnn. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9785–9795, 2019. [Goo16] Ian Goodfellow. Nips 2016 tutorial: Generative adversarial networks. arXiv preprint arXiv:1701.00160, 2016. [GPAM + 14] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information pro- cessing systems, pages 2672–2680, 2014. [GPGV14] V olkan Gunes, Steffen Peter, Tony Givargis, and Frank Vahid. A survey on concepts, applications, and challenges in cyber-physical systems. KSII Transactions on Internet and Information Systems (TIIS), 8(12):4242–4268, 2014. [GSW + 20] Jie Gui, Zhenan Sun, Yonggang Wen, Dacheng Tao, and Jieping Ye. A review on generative adversarial networks: Algorithms, theory, and applications. arXiv preprint arXiv:2001.06937, 2020. [GTA + 21] Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt, Jianxiang Feng, Anna Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, et al. A survey of uncertainty in deep neural networks. arXiv preprint arXiv:2107.03342, 2021. [GWWW19] Keno Garlichs, Alexander Willecke, Martin Wegner, and Lars C Wolf. Trip: Misbehavior detection for dynamic platoons using trust. In 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pages 455–460. IEEE, 2019. [GZH12] Manish Gupta, Peixiang Zhao, and Jiawei Han. Evaluating event credibility on twitter. In Proceedings of the 2012 SIAM International Conference on Data Mining, pages 153–164. SIAM, 2012. [GZP19] Siyuan Gong, Anye Zhou, and Srinivas Peeta. Cooperative adaptive cruise control for a platoon of connected and autonomous vehicles con- sidering dynamic information flow topology. Transportation Research Record, 2673(10):185–198, 2019. [HAR14] Shah Ahsanul Haque, Syed Mahfuzul Aziz, and Mustafizur Rahman. Review of cyber-physical system in healthcare. international journal of distributed sensor networks, 10(4):217415, 2014. 196 [HBD + 21] Caner Hazirbas, Joanna Bitton, Brian Dolhansky, Jacqueline Pan, Albert Gordo, and Cristian Canton Ferrer. Towards measuring fairness in ai: the casual conversations dataset. arXiv preprint arXiv:2104.02821, 2021. [HD15] Sardar Hamidian and Mona T Diab. Rumor detection and classi- fication for twitter data. In Proceedings of the Fifth International Conference on Social Media Technologies, Communication, and Infor- matics (SOTICS), pages 71–77, 2015. [Hek21] Mohammad Hekmatnejad. Formalizing Safety, Perception, and Mis- sion Requirements for Testing and Planning in Autonomous Vehicles. PhD thesis, Arizona State University, 2021. [HGC19] Sooji Han, Jie Gao, and Fabio Ciravegna. Data augmentation for rumor detection using context-sensitive neural language model with large-scale credibility corpus. 2019. [HGRDG18] Muhammad Abdul Haseeb, Jianyu Guan, Danijela Ristic-Durrant, and Axel Gräser. Disnet: a novel method for distance estimation from monocular camera. 10th Planning, Perception and Navigation for Intelligent Vehicles (PPNIV18), IROS, 2018. 
[HKK + 18] Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska, Wenjie Ruan, Youcheng Sun, Emese Thamo, Min Wu, and Xinping Yi. Safety and trustworthiness of deep neural networks: A survey. arXiv preprint arXiv:1812.08342, 2018. [HKWW17] Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. Safety verification of deep neural networks. In International Conference on Computer Aided Verification, pages 3–29. Springer, 2017. [Hop17] V David Hopkin. Human factors in air traffic control. CRC Press, 2017. [HOR87] RONNIE D HORNER. Age at onset of alzheimer’s disease: Clue to the relative importance of etiologic factors? American journal of epidemiology, 126(3):409–414, 1987. [HS97] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. [HSS08] Aric A. Hagberg, Daniel A. Schult, and Pieter J. Swart. Exploring network structure, dynamics, and function using networkx. In Gaël Varoquaux, Travis Vaught, and Jarrod Millman, editors, Proceedings 197 of the 7th Python in Science Conference, pages 11 – 15, Pasadena, CA USA, 2008. [HV20] Kris Hartley and Minh Khuong Vu. Fighting fake news in the covid- 19 era: policy insights from an equilibrium model. Policy Sciences, 53(4):735–758, 2020. [HVC93] Geoffrey E Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In Pro- ceedings of the sixth annual conference on Computational learning theory, pages 5–13, 1993. [HW21] Eyke Hüllermeier and Willem Waegeman. Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning, 110(3):457–506, 2021. [HZRS16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778, 2016. [IBM15] IBM. Building trust in ai. 2015. online, accessed 19 November 2018. [iCSK04] Ramon Ferrer i Cancho, Ricard V Solé, and Reinhard Köhler. Patterns in syntactic dependency networks. Physical Review E, 69(5):051915, 2004. [IWA + 19] Radoslav Ivanov, James Weimer, Rajeev Alur, George J Pappas, and Insup Lee. Verisig: verifying safety properties of hybrid systems with neural network controllers. In Proceedings of the 22nd ACM Inter- national Conference on Hybrid Systems: Computation and Control, pages 169–178. ACM, 2019. [JCG + 17] Zhiwei Jin, Juan Cao, Han Guo, Yongdong Zhang, Yu Wang, and Jiebo Luo. Detection and analysis of 2016 us presidential election related rumors on twitter. In International conference on social com- puting, behavioral-cultural modeling and prediction and behavior representation in modeling and simulation, pages 14–24. Springer, 2017. [Jen09] Lowther Jenn. Microblogging is one of the top four trends in social media, 2009. https: //www.straight.com/article-200494/ microblogging-one-top-four-trends-social-media. 198 [Jes20] McDonald Jessica. Q&a on the wuhan coronavirus, 2020. https://www.factcheck.org/2020/01/ qa-on-the-wuhan-coronavirus/. [JGP16] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144, 2016. [JHP06] Audun Jøsang, Ross Hayward, and Simon Pope. Trust network anal- ysis with subjective logic. In Proceedings of the 29th Australasian Computer Science Conference-Volume 48, pages 85–94. Australian Computer Society, Inc., 2006. [JKGG18] Heinrich Jiang, Been Kim, Melody Y Guan, and Maya Gupta. To trust or not to trust a classifier. 
arXiv preprint arXiv:1805.11783, 2018. [Jøs16] Audun Jøsang. Subjective logic. Springer, 2016. [Jul20] Wernau Julie. Virus sparks chinese panic buy- ing, travel cancellations and social-media misinforma- tion, 2020. https://www.wsj.com/articles/ coronavirus-sparks-chinese-panic-buying-travel-cancellations-and-social-media-misinformation-11579698948. [KBdS17] Andrew Koster, Ana LC Bazzan, and Marcelo de Souza. Liar liar, pants on fire; or how to use subjective logic and argumentation to evaluate information from untrustworthy sources. Artif. Intell. Rev., 48(2):219–235, 2017. [KC19] Sumeet Kumar and Kathleen M Carley. Tree lstms with convolution units to predict stance and rumor veracity in social media conversa- tions. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 5047–5058, 2019. [KCJ + 13] Sejeong Kwon, Meeyoung Cha, Kyomin Jung, Wei Chen, and Yajun Wang. Prominent features of rumor propagation in online social media. In 2013 IEEE 13th International Conference on Data Mining, pages 1103–1108. IEEE, 2013. [KCZ + 21] Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sand- hya Giri, and Stephan Günnemann. Evaluating robustness of predictive uncertainty estimation: Are dirichlet-based models reliable? In Inter- national Conference on Machine Learning, pages 5707–5718. PMLR, 2021. [KDJS14] Uday Kamath, Kenneth De Jong, and Amarda Shehu. Effective auto- mated feature construction and selection for classification of biological sequences. PloS one, 9(7):e99982, 2014. 199 [KE18] Matsa Katerina and Shearer Elisa. News use across social media platforms 2018, 2018. [KH + 09] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009. [KHL16] Matt J Kusner and José Miguel Hernández-Lobato. Gans for sequences of discrete elements with the gumbel-softmax distribution. arXiv preprint arXiv:1611.04051, 2016. [KLMST11] Anne-Marie Kermarrec, Erwan Le Merrer, Bruno Sericola, and Gilles Trédan. Second order centrality: Distributed assessment of nodes criticity in complex networks. Computer Communications, 34(5):619– 628, 2011. [KLZ18] Elena Kochkina, Maria Liakata, and Arkaitz Zubiaga. All-in- one: Multi-task learning for rumour verification. arXiv preprint arXiv:1806.03713, 2018. [KMR15] Jakub Koneˇ cn` y, Brendan McMahan, and Daniel Ramage. Federated optimization: Distributed optimization beyond the datacenter. arXiv preprint arXiv:1511.03575, 2015. [KRL00] Paul L Krapivsky, Sidney Redner, and Francois Leyvraz. Connectivity of growing random networks. Physical review letters, 85(21):4629, 2000. [KSH12] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012. [KSR08] Joseph S Kong, Nima Sarshar, and Vwani P Roychowdhury. Experi- ence versus talent shapes the structure of the web. Proceedings of the National Academy of Sciences, 105(37):13724–13729, 2008. [KW13] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. [Lar20] Heidi J Larson. Blocking information on covid-19 can fuel the spread of misinformation. Nature, 580(7803):306, 2020. [LBBH98] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. 200 [LBK15] Jay Lee, Behrad Bagheri, and Hung-An Kao. 
A cyber-physical systems architecture for industry 4.0-based manufacturing systems. Manufac- turing letters, 3:18–23, 2015. [LCB + 18] Yuanfu Luo, Panpan Cai, Aniket Bera, David Hsu, Wee Sun Lee, and Dinesh Manocha. Porca: Modeling and planning for autonomous driving among many pedestrians. IEEE Robotics and Automation Letters, 3(4):3418–3425, 2018. [LDWH18] Xiaoyuan Liang, Xunsheng Du, Guiling Wang, and Zhu Han. Deep reinforcement learning for traffic light control in vehicular networks. arXiv preprint arXiv:1803.11115, 2018. [Lee10] Edward A Lee. Cps foundations. In Design automation conference, pages 737–742. IEEE, 2010. [Lee18] Timothy B. Lee. Report: Software bug led to death in Uber’s self- driving crash. Ars Technica, May 2018. [LES + 12] Stephan Lewandowsky, Ullrich KH Ecker, Colleen M Seifert, Norbert Schwarz, and John Cook. Misinformation and its correction: Contin- ued influence and successful debiasing. Psychological Science in the Public Interest, 13(3):106–131, 2012. [LN15] Maxwell W Libbrecht and William Stafford Noble. Machine learning applications in genetics and genomics. Nature Reviews Genetics, 16(6):321–332, 2015. [LSA01] Eckhard Limpert, Werner A Stahel, and Markus Abbt. Log-normal dis- tributions across the sciences: keys and clues: on the charms of statis- tics, and how mechanical models resembling gambling machines offer a link to a handy way to characterize log-normal distributions, which can provide deeper insight into variability and probability—normal or log-normal: that is the question. BioScience, 51(5):341–352, 2001. [LSV + 17] Hanxiao Liu, Karen Simonyan, Oriol Vinyals, Chrisantha Fernando, and Koray Kavukcuoglu. Hierarchical representations for efficient architecture search. arXiv preprint arXiv:1711.00436, 2017. [LSZ10] Yanling Li, Guoshe Sun, and Yehang Zhu. Data imbalance problem in text classification. In 2010 Third International Symposium on Information Processing, pages 301–305. IEEE, 2010. [LW20] Zhiming Liu and Ji Wang. Human-cyber-physical systems: concepts, challenges, and research opportunities. Frontiers of Information Tech- nology & Electronic Engineering, 21(11):1535–1553, 2020. 201 [LWG + 20] Howon Lee, Daniel J Wiegand, Kettner Griswold, Sukanya Puntham- baker, Honggu Chun, Richie E Kohman, and George M Church. Photon-directed multiplexed enzymatic dna synthesis for molecular digital data storage. BioRxiv, 2020. [LYU18] Wenjie Luo, Bin Yang, and Raquel Urtasun. Fast and furious: Real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pages 3569–3577, 2018. [LZS19] Quanzhi Li, Qiong Zhang, and Luo Si. Rumor detection by exploit- ing user credibility information, attention and multi-task learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1173–1179, 2019. [Mag20] Miller Maggie. Google to spend 6.5 million in fight against coronavirus misinformation, 2020. https://thehill.com/policy/technology/ 490865-google-to-invest-65-million-to-fight-coronavirus-misinformation. [Mat20] Field Matt. Fake news epidemic: Coronavirus breeds hate and disinformation in india and beyond, 2020. https://thebulletin.org/2020/01/ fake-news-epidemic-coronavirus-breeds-hate-and-disinformation-in-india-and-beyond/. [MB10] Radu Marculescu and Paul Bogdan. Cyberphysical systems: workload modeling and design optimization. IEEE Design & Test of Computers, 28(4):78–87, 2010. 
[Mer20] Livingston Mercey. Coronavirus fact check: How to spot fake reports about the mysterious dis- ease, 2020. https://www.cnet.com/how-to/ false-information-about-coronavirus-here-are-the-top-rumors-spreading-about-it/. [MG18] Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. Advances in neural information processing systems, 31, 2018. [MGM + 95] Johan AJ Metz, Stefan AH Geritz, Géza Meszéna, Frans JA Jacobs, and Joost S Van Heerwaarden. Adaptive dynamics: a geometrical study of the consequences of nearly faithful reproduction. 1995. [MGW18] Jing Ma, Wei Gao, and Kam-Fai Wong. Detect rumor and stance jointly by neural multi-task learning. In Companion Proceedings of 202 the The Web Conference 2018, pages 585–593. International World Wide Web Conferences Steering Committee, 2018. [MGW19] Jing Ma, Wei Gao, and Kam-Fai Wong. Detect rumors on twitter by promoting information campaigns with generative adversarial learning. In The World Wide Web Conference, pages 3049–3055. ACM, 2019. [MKS + 15] V olodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. nature, 518(7540):529–533, 2015. [MMS + 21] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1–35, 2021. [MN04] O. Maler and D. Nickovic. Monitoring temporal properties of continu- ous signals. In FORMATS/FTRTFT, 2004. [Moo85] Robert C Moore. Semantical considerations on nonmonotonic logic. Artificial intelligence, 25(1):75–94, 1985. [MSS + 13] Vicente Milanés, Steven E Shladover, John Spring, Christopher Nowakowski, Hiroshi Kawazoe, and Masahide Nakamura. Coopera- tive adaptive cruise control in real traffic situations. IEEE Transactions on intelligent transportation systems, 15(1):296–305, 2013. [NBW06] Mark Ed Newman, Albert-László Ed Barabási, and Duncan J Watts. The structure and dynamics of networks. Princeton university press, 2006. [NDCD19] Duc Minh Nguyen, Tien Huu Do, Robert Calderbank, and Nikos Deligiannis. Fake news detection using deep markov random fields. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1391–1400, 2019. [New05] Mark EJ Newman. A measure of betweenness centrality based on random walks. Social networks, 27(1):39–54, 2005. [NH11] Muaz Niazi and Amir Hussain. Agent-based computing from multi- agent systems to agent-based models: a visual survey. Scientometrics, 89(2):479–499, 2011. 203 [nts19] Highway Collision Investigation. Collision Investigation HWY18MH010, National Transportation Safety Board, November 2019. [OAS10] Tore Opsahl, Filip Agneessens, and John Skvoretz. Node centrality in weighted networks: Generalizing degree and shortest paths. Social networks, 32(3):245–251, 2010. [OQW18] Ray Oshikawa, Jing Qian, and William Yang Wang. A survey on natural language processing for fake news detection. arXiv preprint arXiv:1811.00770, 2018. [PAD + 17] Scott Drew Pendleton, Hans Andersen, Xinxin Du, Xiaotong Shen, Malika Meghjani, You Hong Eng, Daniela Rus, and Marcelo H Ang. Perception, planning, control, and coordination for autonomous vehi- cles. Machines, 5(1):6, 2017. 
[PMB + 21] John Phillips, Julieta Martinez, Ioan Andrei Bârsan, Sergio Casas, Abbas Sadat, and Raquel Urtasun. Deep multi-task learning for joint localization, perception, and prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4679–4689, 2021. [PPC20] Francesco Pierri, Carlo Piccardi, and Stefano Ceri. Topology com- parison of twitter diffusion networks effectively reveals misleading information. Scientific reports, 10(1):1–9, 2020. [PSG + 11] Radha Poovendran, Krishna Sampigethaya, Sandeep Kumar S Gupta, Insup Lee, K Venkatesh Prasad, David Corman, and James L Pau- nicka. Special issue on cyber-physical systems [scanning the issue]. Proceedings of the IEEE, 100(1):6–12, 2011. [PSM14] Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543, 2014. [PSS15] Thong Pham, Paul Sheridan, and Hidetoshi Shimodaira. Pafit: A statistical method for measuring preferential attachment in temporal complex networks. PloS one, 10(9):e0137796, 2015. [PSS16] Thong Pham, Paul Sheridan, and Hidetoshi Shimodaira. Joint estima- tion of preferential attachment and node fitness in growing complex networks. Scientific reports, 6:32558, 2016. 204 [PT09] Anna Petrovskaya and Sebastian Thrun. Model based vehicle detection and tracking for autonomous urban driving. Autonomous Robots, 26(2):123–139, 2009. [QCG + 09] Morgan Quigley, Ken Conley, Brian Gerkey, Josh Faust, Tully Foote, Jeremy Leibs, Rob Wheeler, and Andrew Y . Ng. ROS: An open- source Robot Operating System. In ICRA Workshop on Open Source Software, volume 3, page 5. Kobe, Japan, 2009. [QRRM11] Vahed Qazvinian, Emily Rosengren, Dragomir Radev, and Qiaozhu Mei. Rumor has it: Identifying misinformation in microblogs. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1589–1599, 2011. [RAHL19] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V Le. Reg- ularized evolution for image classifier architecture search. In Pro- ceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4780–4789, 2019. [RDS + 15] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. [REIK17] Srinivasan Radhakrishnan, Serkan Erbis, Jacqueline A Isaacs, and Sagar Kamarthi. Novel keyword co-occurrence network-based meth- ods to foster systematic reviews of scientific literature. PloS one, 12(3):e0172778, 2017. [REKH97] Martin G Reese, Frank H Eeckman, David Kulp, and David Haussler. Improved splice site detection in genie. Journal of computational biology, 4(3):311–323, 1997. [RF18] Joseph Redmon and Ali Farhadi. Yolov3: An incremental improve- ment. arXiv preprint arXiv:1804.02767, 2018. [RG19] Nils Reimers and Iryna Gurevych. Sentence-bert: Sentence embed- dings using siamese bert-networks. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing. Asso- ciation for Computational Linguistics, 11 2019. [Ros19] Francesca Rossi. Building trust in artificial intelligence. Journal of international affairs, 72(1):127–134, 2019. 205 [RPP18] Guillermo Armando Ronda-Pupo and Thong Pham. 
Linked assets
University of Southern California Dissertations and Theses
Conceptually similar
Theoretical and computational foundations for cyber-physical systems design
Verification, learning and control in cyber-physical systems
Understanding dynamics of cyber-physical systems: mathematical models, control algorithms and hardware incarnations
Data-driven and logic-based analysis of learning-enabled cyber-physical systems
Theoretical foundations and design methodologies for cyber-neural systems
Optimization strategies for robustness and fairness
Dealing with unknown unknowns
Differential verification of deep neural networks
Dynamic graph analytics for cyber systems security applications
Defending industrial control systems: an end-to-end approach for managing cyber-physical risk
Learning logical abstractions from sequential data
Theoretical foundations for dealing with data scarcity and distributed computing in modern machine learning
Learning distributed representations of cells in tables
Federated and distributed machine learning at scale: from systems to algorithms to applications
Assume-guarantee contracts for assured cyber-physical system design under uncertainty
Machine learning for efficient network management
Distribution system reliability analysis for smart grid applications
Learning and decision making in networked systems
Novel graph representation of program algorithmic foundations for heterogeneous computing architectures
Advanced techniques for object classification: methodologies and performance evaluation
Asset Metadata
Creator: Cheng, Mingxi (author)
Core Title: Theoretical foundations for modeling, analysis and optimization of cyber-physical-human systems
School: Viterbi School of Engineering
Degree: Doctor of Philosophy
Degree Program: Electrical Engineering
Degree Conferral Date: 2022-08
Publication Date: 07/26/2022
Defense Date: 05/17/2022
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tag: cyber-physical-human systems, deep learning, misinformation classification, OAI-PMH Harvest, rumor detection, trustworthiness in AI
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Bogdan, Paul (committee chair); Deshmukh, Jyotirmoy (committee member); Jonckheere, Edmond (committee member); Leahy, Richard (committee member); Nazarian, Shahin (committee member)
Creator Email: cmx0608@gmail.com, mingxic@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-oUC111374354
Unique Identifier: UC111374354
Legacy Identifier: etd-ChengMingx-10997
Document Type: Dissertation
Rights: Cheng, Mingxi
Type: texts
Source: 20220728-usctheses-batch-962 (batch); University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright. The original signature page accompanying the original submission of the work to the USC Libraries is retained by the USC Libraries and a copy of it may be obtained by authorized requesters contacting the repository e-mail address given.
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email: cisadmin@lib.usc.edu