USC Digital Library / University of Southern California Dissertations and Theses
Theoretical Foundations and Design Methodologies for Cyber-Neural Systems (USC Thesis)
Theoretical Foundations and Design Methodologies for Cyber-Neural Systems

by

Emily A. Reed

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(ELECTRICAL AND COMPUTER ENGINEERING)

August 2023

Copyright 2023 Emily A. Reed

Dedication

I dedicate this thesis to my family, friends, and my fiancé, for their unending support and belief in me. I also dedicate this thesis in loving memory of my grandmothers Mary Rita Bockhorst and Barbara Jean Reed, who were exemplary role models in my life and always believed in me and loved me.

Acknowledgements

I acknowledge the financial support from the National Science Foundation GRFP DGE-1842487, the University of Southern California Annenberg Fellowship, and a USC WiSE Top-Off Fellowship. I also acknowledge the financial support from the National Science Foundation under the CAREER Award CPS/CNS-1453860 and under Grant Numbers CCF-1837131, MCB-1936775, CMMI-1936624, and CNS-1932620; the U.S. Army Research Office (ARO) under Grant Nos. W911NF-17-1-0076 and W911NF-23-1-0111; the DARPA Young Faculty Award and DARPA Director Award under Grant Number N66001-17-1-4044; a 2021 USC Stevens Center Technology Advancement Grant (TAG) award; an Intel faculty award; and a Northrop Grumman grant. The financial supporters of this work had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript. The views, opinions, and/or findings contained in this dissertation are those of the author and should not be interpreted as representing official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency, the Department of Defense, or the National Science Foundation.
I would like to thank my advisors Paul Bogdan and Sérgio Pequito for their mentorship and for their support of my career aspirations as a tenure-track professor. Without their continual support, help, excellent advice, review and critique of my work and presentations, dedication to meeting with me on a weekly basis, and belief in me, I would not be where I am today. I would like to thank my committee members Edmond Jonckheere, Andrei Irimia, Bhaskar Krishnamachari, and Shrikanth Narayanan for their helpful comments, interactions, and feedback.

I would like to thank my collaborators Sarthak Chatterjee, Guilherme Ramos, Yaoyue Wang, Khaled Al-Zamel, Arian Ashourvan, Carrie Weidner, Jonathan Monroe, Benjamin Sheller, Sophie Schirmer, Frank Langbein, Francisco Valero-Cuevas, Evangelos Theodorou, Marcus Pereira, Pat Wang, and Tianrong Chen. I enjoyed working together to make new discoveries and learn new things. I would like to thank my mentors for their support, including Abhishek Gupta, David McKinnon, Jiankang Wang, and Kevin Passino.
I would like to thank all of my friends for their support, including Zalán Fábián, Sarah Cooney, Buyun Chen, Shushan Arakelyan, Ani Saxena, Dongsheng Ding, Valeriu Balaban, Ridha Znaidi, Rimita Lahiri, Betsy Melenbrink, Christine Cheng, Valerie Swanson, Gaurav Gupta, Jayson Sia, Fernando Valladares Montiero, Dristhi Baid, Elizabeth Ondula, Alex Cathis, Rachel Patton, Mary Berkley, Mahnoor Naqvi, Abby Sharp, Megan Ryan, Alyssa Montalbine, Mary Lenk, Terese Gullo, Emma Lancaster, Tanya Suri, Mary King, Noelle Brobst, Yuna Tan, Andrea Bowers, Colleen Riker, Suzanna Hochevar, Jackie Liu, Lauren and Santiago Casanova, Kathy and John Rosenberry, and Marc Abramson.

Finally, I would like to thank my family for all of their love and support throughout my whole life, including Bob and Rachel Bockhorst, Frank Reed Sr., Bill and Sheila Reed, Larry and Karen Sheipline, Jay and Marcela Reed, Terri and Andy Wheeler, Brooklyn Wheeler, Charlotte Wheeler, Mike Reed, Steve and Brenda Reed, Arête Reed, Charles Reed, Eloise and Bob Cahill, Jeff and Patty Zettler, Patti Jo Zettler, Matt Jordan, Lily Jordan, Edison Jordan, Jeffrey Zettler, Angela Sullivan, Cathy and Donald Leesman, Kevin Leesman, David Leesman and Allison Gardner, Kristin and Thomas McGuire, Sam Bockhorst and Meghan Vickrey, Erica Bockhorst and Johne Hoenle, Monica Bockhorst, Emily Outlaw, Braydon Cunz, Alexis and Amit Gupta, and Onyx Gupta. I would especially like to thank my parents Frances Reed and Frank Reed, my brothers Robert Reed and Ryan Reed, and my fiancé and future husband Marcus Pereira.

Table of Contents

Dedication
Acknowledgements
List of Tables
List of Figures
Abstract
List of Publications
Chapter 1: Introduction
  1.1 Cyber-Neural Systems
  1.2 Fractional-Order Dynamics
  1.3 Designing Cyber-Neural Systems
    1.3.1 Monitoring Brain Behavior
    1.3.2 Characterizing, Detecting, and Predicting Disease
    1.3.3 Mitigating Disease
  1.4 Distributed Computation
  1.5 Thesis Contributions
  1.6 Thesis Organization
Chapter 2: A Fractal Dynamics Approach to Cyber-Neural Systems
  2.1 Fractional Calculus
  2.2 Discrete-Time Fractional-Order Systems
  2.3 Controllability and Observability of Fractional-Order Systems
  2.4 System Identification for Fractional-Order Systems
  2.5 State Estimation for Fractional-Order Systems
  2.6 Model Predictive Control for Fractional-Order Systems
Chapter 3: Controlling Complex Dynamical Networks
  3.1 Minimum Actuator Placement for Fractional-Order Systems
    3.1.1 Structural Controllability of Fractional-Order Dynamical Networks
    3.1.2 Minimal Dedicated Actuation to Ensure Structural Controllability of Fractional-Order Dynamical Networks
    3.1.3 Minimal Dedicated Actuation to Ensure Structural Controllability of Fractional-Order Dynamical Networks in a Given Number of Time Steps T < n
  3.2 On the Effects of the Minimum Number of Actuators
    3.2.1 Long-Term Memory Dynamical Networks Require Fewer Driven Nodes Than Markov Counterparts
    3.2.2 Network Topologies, Not Size, Determine the Control Trends
    3.2.3 Multi-Fractal Spectrum Dictates Savings When Controlling Long-Term Memory Dynamical Networks
    3.2.4 Fewer Required Driven Nodes, Dictated by Network Topology and Multi-Fractal Spectrum
    3.2.5 A Scalable and Robust Framework to Determine the Trade-Offs Between the Minimum Number of Driven Nodes and Time-To-Control for Controlling Large-Scale Long-Term Power-Law Memory Dynamical Networks
    3.2.6 Determining the Existence of Long-Term Memory in Dynamical Networks
    3.2.7 Determining the Effects of the Degree Distribution on the Minimum Number of Actuators
  3.3 Conclusions and Future Works
Chapter 4: Measuring Complex Dynamical Networks
  4.1 Minimum Sensor Placement Problem Formulation
  4.2 Minimum Structural Sensor Placement for Switched LTI Systems with Unknown Inputs
  4.3 Minimum Sensor Placement on the IEEE 5-bus System
  4.4 Conclusions and Future Works
Chapter 5: Distributed Algorithms for Directed Networks
  5.1 Finding Strongly Connected Components and the Finite Diameter of Directed Networks
  5.2 A Scalable Distributed Dynamical Systems Approach to Learn the Strongly Connected Components and Diameter of Networks
  5.3 Applying the Distributed Algorithm to Directed Networks
  5.4 Performance Evaluation on Random Networks
    5.4.1 Finding the Strongly Connected Components of a Directed Network
    5.4.2 Finding the Finite Diameter of a Directed Network
    5.4.3 Analysis of the Computational-Time Complexity of the Distributed Algorithm
  5.5 Conclusions and Future Works
Chapter 6: Stability and Stabilization of Complex Dynamical Networks
  6.1 Sufficient Condition for Global Asymptotic Stability of (Discrete-Time) Fractional-Order Linear Time-Invariant Systems
  6.2 Necessary and Sufficient Conditions for Global Asymptotic Stability of FLTIs
  6.3 Stabilizing FLTIs
    6.3.1 Computationally Efficient and Graphically-Interpretable Sufficient Approximate Solutions for P_1^c and P_2^c
  6.4 Mitigating Epilepsy by Stabilizing FLTIs
    6.4.1 Fractional Dynamics are a Suitable Proxy for Brain Behavior
    6.4.2 Time of Seizure Onset is Related to the Stability of Fractional Dynamics
    6.4.3 Stabilizing Fractional Dynamics Lowers the Energy in the System
    6.4.4 Mathematically Representing Neural Dynamics
    6.4.5 Detecting Seizures by Identifying Stable Fractional Dynamics
    6.4.6 Stabilizing Fractional Dynamics Ultimately Mitigates Seizures
  6.5 Conclusions and Future Works
Chapter 7: Conclusions and Future Work
  7.1 Leveraging Topology to Design Cyber-Neural Systems
  7.2 Designing Private and Secure Cyber-Neural Systems
  7.3 Designing Robust Cyber-Neural Systems
  7.4 Designing Safe Cyber-Neural Systems
  7.5 New Discoveries in Neuroscience
  7.6 Advancing Cyber-Physical Systems
Bibliography

List of Tables

4.1 States and unknown inputs for the IEEE 5-bus system.
5.1 Values of the parameters (P) at each iteration (k) of Algorithm 4 for all nodes v_i when executed on Example 1.
5.2 Values of the parameters (P) at each iteration (k) of Algorithm 4 for all nodes v_i when executed on the complete network.
5.3 Values of the parameters (P) at each iteration (k) of Algorithm 4 for all nodes v_i when executed on the tree network.
6.1 Metadata for the patients.
6.2 Mean-squared error for both the LTI and FOS dynamics.

List of Figures

1.1 The contributions of this thesis provide the tools to design effective cyber-neural systems for treating neurological diseases, such as epilepsy. In particular, we leverage a fractional-order linear time-invariant system with spatial matrix A and fractional-order exponents α to derive new tools to design effective detection and stimulation strategies.
1.2 The left panel shows an intracranial EEG recording from a left frontal seizure. The right panel shows the autocorrelation of this signal, which is a measure of the correlation between the signal and a delayed copy of itself. These signals exhibit long-term memory, as demonstrated by the non-zero, statistically significant autocorrelation. The fractional-order linear time-invariant system efficiently captures this inherent long-term memory.
2.1 The main sections of the thesis and their role and implications in designing cyber-neural systems.
3.1 (a) The number of driven nodes across the time-to-control for the rat brain network for both Markov and long-term memory dynamics. The rat brain network [134] has 503 regions, which are captured by nodes in the network. At 0% of the time-to-control, all nodes in any dynamical network need to be driven nodes (shown in red). In just 20% of the time-to-control for the rat brain network, we see a drastic reduction in the required number of driven nodes for the long-term memory network as compared to the Markov dynamical network. As the time-to-control increases, the number of driven nodes decreases for both network dynamics; however, the decrease is much more pronounced for the long-term memory dynamical network. The relationship between the percent savings and the time-to-control is shown at the bottom (highlighted in green). (b) Percent savings for the power network [135] (60 nodes). (c) Percent savings for the C. elegans network [136, 137] (277 nodes). (d) Percent savings for the cortical brain structure of a macaque having 71 regions [138, 139].
3.2 The behavior of the function Ψ(α, j) for different values of α: (a) integer values of α; (b) non-integer values of α. The non-zero values of Ψ(α, j) for non-integer α enable the fractional-order system to capture long-term memory.
3.3 The average difference (over 100 networks) in the required number of driven nodes across the time-to-control for synthetic networks of different sizes and parameters. For network sizes 250, 500, and 1000, respectively, (a), (b), and (c) show log-log plots of the average difference in the required number of driven nodes (n_T) versus the time-to-control (%) for 100 realizations of Erdős–Rényi networks with various edge parameters; (d), (e), and (f) show the same for Barabási–Albert networks with various k parameters; and (g), (h), and (i) show the same for Watts–Strogatz networks with various p parameters.
3.4 The average difference (over 100 networks) in the required number of driven nodes across the time-to-control for synthetic networks of different sizes and parameter values. For network sizes 250, 500, and 1000, respectively, (a), (b), and (c) show the average difference in the required number of driven nodes (n_T) across the number of edges versus the time-to-control (%) for 100 realizations of Erdős–Rényi networks; (d), (e), and (f) show the same across the k parameter for Barabási–Albert networks; and (g), (h), and (i) show the same across the p parameter for Watts–Strogatz networks.
3.5 The average difference (over 100 networks) in the required number of driven nodes across the time-to-control for synthetic networks of different sizes and parameter values. For network sizes 250, 500, and 1000, respectively, (a), (b), and (c) show the average difference in the required number of driven nodes (n_T) along the size of the network versus the time-to-control (%) for 100 realizations of Erdős–Rényi networks with a fixed set of parameters; (d), (e), and (f) show the same for Barabási–Albert networks with fixed parameters k = 2, 5, 10; and (g), (h), and (i) show the same for Watts–Strogatz networks with fixed parameters p = 0.2, 0.5, 0.8.
3.6 The relationship between the percent difference in the number of driven nodes across the time-to-control (%) and the multi-fractal spectrum of four real-world networks: a power network (60 nodes), a rat brain network [134] (503 nodes), the C. elegans network [136, 137] (277 nodes), and a macaque brain network [138, 139] (71 nodes). (a) Percent difference in the required number of driven nodes (n_T) versus the percent of time-to-control (%). (b) Multi-fractal spectrum of the networks. (c) Spectrum of the difference in the required number of driven nodes (n_T) compared with the multi-fractal spectrum width and the time-to-control (%). (d) The same compared with the multi-fractal spectrum height.
3.7 (a), (b), and (c) show the average difference in the required number of driven nodes (n_T) for networks of varying average degree distributions versus the time-to-control (%) for 100 realizations of Erdős–Rényi networks with a fixed set of parameters; (d), (e), and (f) show the same for Barabási–Albert networks with fixed parameters k = 2, 5, 10; and (g), (h), and (i) show the same for Watts–Strogatz networks with fixed parameters p = 0.2, 0.5, 0.8. We notice that the average degree is the same for all of the networks.
3.8 (a) Mean and standard deviation of the difference in driven nodes versus time-to-control for 100 networks generated from the rat brain network following the progressive Chung–Lu method [150]. (b) Degree distribution of the original rat brain network. (c) Clustering coefficient of the original rat brain network.
4.1 IEEE 5-bus system: the union of the two modes of the continuous-time system with unknown inputs. The SCCs are outlined by two dotted black rectangles, and the target-SCCs are outlined by a dotted blue polygon. The minimum output sensor and its placement are shown by y_1.
5.1 This network has SCCs {5,6}, {3,4}, and {1,2}.
5.2 The complete network contains a single SCC comprising all nodes in the network (i.e., {1,2,3,4,5}).
5.3 The SCCs of the tree are the individual nodes themselves (i.e., {1}, {2}, ..., {9}).
5.4 Network properties of randomly generated Erdős–Rényi networks versus run times for both our proposed algorithm and Kosaraju's algorithm.
5.5 Run-time comparison of our proposed algorithm and Kosaraju's algorithm on several randomly generated Erdős–Rényi networks. Our algorithm performs better on networks with a higher number of nodes.
5.6 Network properties of additional randomly generated Erdős–Rényi networks versus run times for both algorithms.
5.7 Run-time comparison on additional randomly generated Erdős–Rényi networks. Our algorithm performs better on networks with a higher number of nodes.
5.8 Network properties of randomly generated Barabási–Albert networks versus run times for both algorithms.
5.9 Run-time comparison on several randomly generated Barabási–Albert networks. Our algorithm performs better on networks with a higher number of nodes.
5.10 Network properties of additional randomly generated Barabási–Albert networks versus run times for both algorithms.
5.11 Run-time comparison on different randomly generated Barabási–Albert networks.
5.12 Network properties of randomly generated Watts–Strogatz networks versus run times for both algorithms.
5.13 Run-time comparison on several randomly generated Watts–Strogatz networks.
5.14 Network properties of additional randomly generated Watts–Strogatz networks versus run times for both algorithms.
5.15 Run-time comparison on additional randomly generated Watts–Strogatz networks.
5.16 The number of iterations needed before our algorithm terminates versus the diameter plus two, for several randomly generated networks.
5.17 Run-time comparison of computing the finite digraph diameter using our algorithm against the Floyd–Warshall algorithm on several randomly generated networks, including Erdős–Rényi, Barabási–Albert, and Watts–Strogatz.
6.1 The method currently used in NeuroPace's RNS device for detecting and mitigating seizures, which relies on physicians tuning the device through trial-and-error methods and feedback from patients.
6.2 The novel stimulation strategy to mitigate seizures in the brain. As is common in today's technology, the brain's behavior is continuously monitored. The novelty arises in detecting the start of a seizure, which we define as the instability of a fractional-order dynamical network, characterized by the eigenvalues of the system. Next, we devise a method to calculate a stabilizing closed-form feedback. Finally, we demonstrate the mitigation of the seizure after applying the closed-loop feedback.
6.3 Spatial matrix and fractional-order exponents 12 seconds before the patient's seizure.
6.4 Eigenvalues of A_0 = A − D(α, 1) using the DTLFOS parameters 12 seconds before and 48 seconds after the seizure starts.
6.5 Spatial matrix and fractional-order exponents during the seizure, and their updated values and updated eigenvalues after solving P_1^c and P_2^c.
6.6 Updated spatial matrix, fractional-order exponents, and eigenvalues after solving P_1^g and P_2^g.
6.7 Data from the patients analyzed in our experiments.
6.8 HUP64
6.9 HUP68
6.10 HUP72
6.11 HUP86
6.12 MAYO10
6.13 MAYO11
6.14 MAYO16
6.15 MAYO20
6.16 HUP64
6.17 HUP68
6.18 HUP72
6.19 HUP86
6.20 MAYO10
6.21 MAYO11
6.22 MAYO16
6.23 MAYO20
7.1 These tools will enable the unveiling of new insights into epilepsy as well as other neurological diseases and even mental health conditions. Furthermore, these tools will be used to solve problems in other cyber-physical systems, such as the power grid, wildfire management, and healthcare monitoring.
7.2 Possible directions for future research that will make future cyber-neural systems, with the ability to treat neurological disease, augment physical capability, and even augment human intelligence, a reality.

Abstract

Large-scale complex dynamical networks are ubiquitous in our everyday lives, ranging from artificial to natural systems. Examples include power grids, social networks, brain networks, and even cyber-physical systems. Our goal is to understand and design the complex behaviors exhibited by these large-scale systems so as to properly monitor, detect, predict, and mitigate catastrophes in these networks. The behaviors exhibited by these dynamical networks are non-Markovian and nonlinear; hence, these dynamical networks, and many others, possess inherent long-term memory.

To aid in designing these long-term memory dynamical systems, we often employ a mathematical model. In this thesis, we focus on fractional-order linear time-invariant systems to capture the long-term memory (non-Markovian behavior) inherent to many real-world systems. A related dynamical model, switched linear time-invariant systems, is also considered. Unfortunately, adequate methods to efficiently analyze and design such large-scale networks exhibiting complex non-Markovian dynamics are lacking. In this thesis, we aim to fill this gap by deriving theoretical tools that lay the foundation for designing usable and efficient large-scale dynamical networks.
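The fractional-order model mentioned above can be made concrete with a small simulation. The sketch below is illustrative only: it assumes the common Grünwald–Letnikov discretization, in which each state has its own fractional exponent and the update Δ^α x[k+1] = A x[k] unrolls into a sum over the entire state history; the function names and example matrix are hypothetical, not taken from the thesis.

```python
import numpy as np

def gl_coeffs(alpha, count):
    """Grünwald–Letnikov coefficients psi(alpha, j) = (-1)^j * binom(alpha, j),
    via the recurrence psi_0 = 1, psi_j = psi_{j-1} * (j - 1 - alpha) / j."""
    psi = np.zeros(count)
    psi[0] = 1.0
    for j in range(1, count):
        psi[j] = psi[j - 1] * (j - 1 - alpha) / j
    return psi

def simulate_fos(A, alpha, x0, steps):
    """Simulate x[k+1] = A x[k] - sum_{j=1}^{k+1} psi(alpha_i, j) * x_i[k+1-j].

    Every past state enters each update, which is the long-term
    (non-Markovian) memory the text refers to."""
    n = len(x0)
    psi = np.array([gl_coeffs(a, steps + 1) for a in alpha])  # one row per state
    X = np.zeros((steps + 1, n))
    X[0] = x0
    for k in range(steps):
        memory = sum(psi[:, j] * X[k + 1 - j] for j in range(1, k + 2))
        X[k + 1] = A @ X[k] - memory
    return X
```

As a sanity check, with every exponent equal to 1 the memory collapses (psi = [1, -1, 0, 0, ...]) and the update reduces to the ordinary first-difference system x[k+1] = (A + I) x[k]; non-integer exponents keep all past terms active.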
In particular, we focus on designing cyber-neural systems, which describe the underlying mechanisms of neurotechnology, where the hardware, software, and the neural system interact to treat patients experiencing brain-related disorders. We focus in a special way on epileptic patients (approximately 50 million worldwide) by designing techniques to monitor the brain, detect seizures, and mitigate seizures.

We seek to design controllable complex dynamical networks. In particular, in designing suitable neurostimulation strategies, we seek to determine the minimum number of electrodes and their placement required to mitigate seizures. Hence, we develop an efficient algorithm to solve the minimum actuator placement problem for networks exhibiting fractional-order linear time-invariant dynamics in discrete time. We prove that controlling this class of complex dynamical networks is easier than controlling linear time-invariant dynamical networks. This work has important implications in reducing the number of actuator resources necessary to control complex dynamical networks.

To properly measure brain dynamics and other large-scale complex dynamical networks, we establish a method to determine the minimum number of sensors and their placement to achieve structural observability for continuous-time switched linear time-invariant systems with unknown inputs. We apply our algorithm to the IEEE 5-bus power grid network. This work has important consequences in reducing the number of sensor resources needed to adequately measure a complex dynamical network in order to regulate it.

Next, we provide a distributed solution to the minimum actuator and sensor placement problems, as well as to other important machine learning problems, by developing the first distributed algorithm to find the strongly connected components and finite diameter of directed networks.
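For intuition about what such an algorithm computes, the strongly connected components of a small digraph can be found with a classical centralized method. Kosaraju's two-pass algorithm (the baseline the thesis compares against in Chapter 5) is sketched below on a hypothetical six-node network; the edge list is made up for illustration and is not from the thesis.

```python
from collections import defaultdict

def sccs(edges, nodes):
    """Kosaraju's two-pass SCC algorithm: order nodes by DFS finish time on
    the original graph, then collect components by DFS on the transpose."""
    g, gt = defaultdict(list), defaultdict(list)
    for u, v in edges:
        g[u].append(v)
        gt[v].append(u)

    seen, order = set(), []
    def dfs1(u):                      # first pass: record finish order
        seen.add(u)
        for v in g[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs1(u)

    comp = {}
    def dfs2(u, root):                # second pass: sweep the transpose
        comp[u] = root
        for v in gt[u]:
            if v not in comp:
                dfs2(v, root)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)

    groups = defaultdict(set)
    for u, r in comp.items():
        groups[r].add(u)
    return sorted(map(frozenset, groups.values()), key=min)

# Hypothetical digraph with three two-node cycles chained together.
edges = [(1, 2), (2, 1), (2, 3), (3, 4), (4, 3), (4, 5), (5, 6), (6, 5)]
components = sccs(edges, range(1, 7))  # [{1, 2}, {3, 4}, {5, 6}]
```

The thesis's contribution differs in that it is distributed and dynamical-systems-based, so no node needs a global view of the graph; the centralized sketch above only fixes the target object being computed.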
We show that the algorithm outperforms the state of the art for Erdős–Rényi and Barabási–Albert networks. Furthermore, our algorithm has a computational time complexity comparable with the state of the art. This work makes an important contribution as it is the first distributed algorithm of its kind.

Finally, we aim to characterize the stability of complex dynamical networks exhibiting fractional-order linear time-invariant dynamics in discrete time. One leading hypothesis suggests that seizures start in the brain due to an instability. Therefore, we can leverage the stability conditions of fractional-order linear time-invariant systems, whose parameters were calculated from epileptic patient data, to derive a state-feedback control method to stabilize these systems and ultimately mitigate seizures in the brain. We apply this method to an epileptic data set to demonstrate a neurostimulation strategy that mitigates a seizure.

In future work, we aim to develop other theoretical tools, such as contraction theorems, safe control strategies, and methods for observing unknown inputs for fractional dynamics, to continue to uncover new insights into epilepsy and other neurological diseases as well as mental health conditions. By deriving new theoretical tools for fractional-order dynamical networks, we will uncover new insights into epilepsy as well as other neurological diseases and even mental health conditions that will lead to a paradigm shift in the way we treat patients, ultimately allowing them to experience a better quality of life. These new tools will lead us into a future where neurotechnology and brain-machine interface systems are ubiquitous in treating many diseases and conditions in medicine. Finally, these tools will be applied broadly to solve problems spanning cyber-physical systems, including healthcare monitoring, climate-change mitigation, and power grid resilience.

List of Publications

Journal Publications

1. E.A. Reed, Y.
Wang, K. Alzamel, P. Bogdan, S. Pequito, and A. Ashourvan. "The Role of Stability in Pinpointing Seizure Onset and Mitigating Seizures." In preparation.

2. E.A. Reed, G. Ramos, P. Bogdan, and S. Pequito. "The Role of Long-Term Power-Law Memory in Controlling Large-Scale Dynamical Networks." Submitted to Nature Scientific Reports. Impact Factor 4.997.

3. E.A. Reed, S. Chatterjee, G. Ramos, P. Bogdan, and S. Pequito. "Fractional cyber-neural systems: a brief survey." Annual Reviews in Control, Special Section on Analysis and Control Design for Neurodynamics. July 2022. Impact Factor 9.15.

4. E.A. Reed, G. Ramos, P. Bogdan, and S. Pequito. "A scalable distributed dynamical systems approach to learn the strongly connected components and diameter of networks." To appear in Transactions on Automatic Control, Special Issue for Learning and Control. Impact Factor 7.41.

5. E.A. Reed, G. Ramos, P. Bogdan, and S. Pequito. "Minimum Structural Sensor Placement for Switched Linear Time-Invariant Systems and Unknown Inputs." Automatica. September 2022. Impact Factor 5.944.

6. E.A. Reed, P. Bogdan, and S. Pequito. "Quantification of Fractional Dynamical Stability of EEG Signals as a Bio-Marker for Cognitive Motor Control." Frontiers in Control Engineering. November 2021.

Conference Proceedings

1. E.A. Reed, G. Ramos, P. Bogdan, and S. Pequito. "Mitigating Epilepsy by Stabilizing Linear Fractional-Order Systems." In the Proceedings of the IEEE American Control Conference. San Diego, California. May 2023.

Chapter 1
Introduction

Complex dynamical networks pervade our everyday lives, with examples including power grids [1], social networks [2], brain networks [3], biological networks [4, 5], physiological networks [6], cyber-physical systems [7], and even biotechnology networks [8]. A complex dynamical network can be described as a large set of nodes whose states evolve dynamically in time and are updated through exchanges with the states of other nodes.
The overall evolution of such states is often dictated by a model described by a set of differential or difference equations over space (i.e., the different nodes and their states) and time (i.e., the way the states change over time). Since complex dynamical networks are commonplace in society, it is important to develop rigorous methods to measure, analyze, and ultimately control them. This thesis provides the tools to analyze and design complex dynamical networks.

1.1 Cyber-Neural Systems

One of the most complex dynamical systems in the world is the human brain. The human brain is composed of billions of neurons, which interact in a complex and time-varying manner. Because of its size and complexity, the human brain is difficult to study under healthy conditions. However, it becomes even more difficult to study the human brain when it is adversely affected by neurological disease. One of the most common neurological diseases in the world is epilepsy, which is characterized by unprovoked seizures in the brain.

Figure 1.1: The contributions of this thesis provide the tools to design effective cyber-neural systems for treating neurological diseases, such as epilepsy. In particular, we leverage a fractional-order linear time-invariant system with spatial matrix A and fractional-order exponents α to derive new tools to design effective detection and stimulation strategies.

Unfortunately, epilepsy significantly inhibits the quality of life of approximately 50 million patients worldwide [WHO], and in 2016, the disease accounted for $16 billion in annual expenditures in the United States alone [9]. Because 30% of patients are unable to be treated by medication, and surgery is not always an option due to the difficulty of locating the seizure-onset zone, other cyber-neural treatments have been developed, such as neurostimulation, which involves a surgically implanted device that delivers electrical stimulus directly to brain tissue [10, 11].
Many works have proposed methods for neurostimulation treatments of various diseases and conditions, including epilepsy [12, 13], chronic pain [14, 15], and other neurological diseases [11]. However, current neurostimulation devices suffer from fundamental problems, such as the inability to provide efficient and effective care [16].

This thesis focuses on developing the theoretical foundations and design methodologies for cyber-neural systems that are efficient and effective in treating disease and augmenting physical capability and human intelligence. The solution combines cyber-physical systems, control theory, neuroscience, and complex networks to design formal methods that detect and respond to catastrophes in dynamical networks, with a special emphasis on seizures in the brain. The end goal is to build a new generation of neurotechnologies that will effectively treat patients with brain-related disorders.

1.2 Fractional-Order Dynamics

Fractional-order linear systems are able to accurately model many complex networks, including biological networks [17], cyber-physical systems [7, 18], nanotechnology [19], finance [20], quantum mechanics [21], phasor measurement unit (PMU) data in the power grid [1], and networked control systems [22, 23, 24]. Fractional-order linear dynamical systems are appealing mainly due to their representation as a compact spatiotemporal dynamical system with two easy-to-interpret sets of parameters, namely the fractional-order coefficients and the spatial coupling [18]. The fractional-order coefficients capture the long-range memory in the dynamics of each state variable of the system, and the spatial matrix represents the spatial coupling between different state variables.

Our proposed framework to design cyber-neural systems leverages a spatiotemporal model that employs fractional-order dynamics, which, remarkably, have been shown to accurately represent neural behavior [25].
Fractional-order systems are advantageous because they are easy to interpret, since they require only two sets of parameters, namely a spatial matrix and a vector of fractional-order exponents [18]. Most importantly, however, fractional-order systems capture the inherent long-range memory exhibited in neural data, including epileptic seizure data, as seen in Figure 1.2. Hence, fractional-order models outperform traditional integer-order models in accurately representing neural data [26, 17, 27, 28, 29, 30, 31]. Employing these fractional-order models to design new techniques to characterize, detect, and treat seizures requires the development of new tools to analyze, control, and validate these models in the context of epilepsy. These new tools will enable the accurate detection of seizures, the appropriate delivery of stimulation, and an improvement in the quality of life of epileptic patients.

Figure 1.2: The left panel shows the intracranial EEG recording of a left frontal seizure. The right panel shows the autocorrelation of this signal, which is a measure of the correlation between the signal and a delayed copy of itself. The signal exhibits long-term memory, as demonstrated by its non-zero, statistically significant autocorrelation. The fractional-order linear time-invariant system efficiently captures this inherent long-term memory.

1.3 Designing Cyber-Neural Systems

Designing cyber-neural systems requires three main steps: monitoring brain behavior, detecting disease, and mitigating disease. Next, we outline these three steps and their subsequent challenges.

1.3.1 Monitoring Brain Behavior

We investigate the ability to measure the behavior of complex dynamical networks. The challenge in measuring the behavior of a large-scale complex dynamical network is to determine the minimum number of sensors necessary to ensure the system is fully observable, which is called the minimum sensor placement problem.
Solving this problem with the exact parameterization, even in the case of linear time-invariant dynamics, is NP-hard [32]. Furthermore, the exact parameters may be uncertain due to noise; however, we often know the structure of the system or network. Therefore, structural systems have been leveraged to solve many different problems in control theory [33].

While the solution to the minimum sensor placement problem for structural linear time-invariant systems is available [32], no solution exists for switched linear time-invariant systems in continuous time. Switched linear time-invariant dynamics are suitable for modeling many different systems, such as the power grid [34], autonomous systems [35], and biological systems [36]. Therefore, determining a method to properly observe and monitor these systems is important. However, it is often the case that we cannot perfectly model systems due to external factors from the environment, often termed 'unknown inputs.'

To date, only a couple of papers have considered the structural observability of switched linear time-invariant systems with unknown inputs in continuous time [37] and [38, 39]. In particular, [37, 39] consider graph-theoretic necessary and sufficient conditions for generic discrete-mode observability of a continuous-time switched linear system with unknown inputs and propose a computational method to verify such conditions with a complexity of O(n^6), where n is the number of states. The works of [38, 39] present sufficient conditions for the generic observability of the discrete mode of continuous-time switched linear systems with unknown inputs and find an exhaustive location set to place sensors when these conditions are not satisfied, with a computational complexity of O(n^4). However, these methods present only sufficient conditions to verify the structural observability of switched linear time-invariant systems under unknown inputs.
Furthermore, they do not consider the minimum sensor placement problem. In the context of cyber-neural systems, it is important to effectively measure the brain to obtain an accurate depiction of the brain's behavior. In the case of the brain, we cannot model every aspect of the system, so we must consider the impact of unknown inputs, which are described as any unknown forces acting on the system.

1.3.2 Characterizing, Detecting, and Predicting Disease

In analyzing complex dynamical networks, it is important to characterize their stability. Stability describes the long-term behavior of a system [40]. While there are different notions of stability, in effect, a system is stable if its behavior is bounded [41]. Stability conditions and stabilizing methods for linear time-invariant systems are available [40, 41]; however, a framework for the stability analysis of fractional-order linear time-invariant systems is not yet complete.

The prior literature describes conditions for continuous-time fractional-order linear systems [42, 43, 44] and for single-input single-output continuous-time commensurate systems [45]. The work in [46] provides generalized Mittag–Leffler stability conditions for continuous-time fractional-order systems using the Lyapunov direct method. For a more detailed discussion on the stability of continuous-time systems, we refer the reader to [47, 48, 49].

For discrete-time fractional-order systems, the authors of [50] leverage an infinite-dimensional representation of truncated discrete-time linear fractional-order systems (i.e., with finite memory) to give conservative sufficient conditions for stability. While the work in [51] does provide necessary and sufficient conditions for the practical and asymptotic stability of discrete-time fractional-order systems, it considers only commensurate-order systems.
That said, a simple-to-state necessary and sufficient condition for both continuous- and discrete-time non-commensurate systems is still missing. Furthermore, no methods exist to stabilize fractional-order systems. This limits our capability to assess the stability of such systems and their applicability in the context of neural systems and neurological diseases, such as epilepsy. By leveraging notions of stability, we aim to characterize the start of seizures.

1.3.3 Mitigating Disease

We investigate the ability to control complex dynamical networks. In particular, we aim to determine the minimum number of actuators necessary to ensure a fractional-order dynamical network is fully controllable, which is called the minimum actuator placement problem. Solving this problem relies on finding conditions for controllability. Conditions for the controllability of discrete-time fractional-order linear time-invariant systems have been proposed in [52]. Two works have proposed algorithms to design controllable networks exhibiting discrete-time fractional-order linear time-invariant dynamics [53, 54]; however, they suffer from a lack of scalability and robustness, because both works are intractable for large networks and require knowing the precise parameterization of the dynamics.

For example, the work in [53] uses energy-based methods to control long-term memory dynamics modeled as a discrete-time linear fractional-order system; however, it is not tractable for large-scale networks, as it has a computational time complexity of O(N^{5.748}), where N is the number of nodes in the network. Similarly, the work in [54], which uses a greedy algorithm that maximizes the rank of the controllability matrix, relies on the computation of the matrix rank, so it quickly becomes intractable as a network grows in size. In fact, the computational time complexity of the algorithm proposed in [54] is O(N^5), where N is the number of nodes in the network.
Both of these works require the precise dynamics to be known [53, 54]. Simply speaking, these works do not account for the inherent uncertainty in the parametric model for long-term memory dynamics. Hence, tractable conditions to compute the controllability of networks exhibiting discrete-time fractional-order linear time-invariant dynamics are still lacking. In addition, neither of these works assesses the trade-offs between the time-to-control and the required number of driven nodes. Therefore, a comprehensive analysis of the trade-offs between the time-to-control and the number of required actuators has yet to be established for this class of dynamics.

1.4 Distributed Computation

Finally, in designing networks, the issue of distributed computation becomes critical. In the case of large-scale networks, such as social networks, the brain, and robotic swarms, we need distributed algorithms that are scalable, so that we can analyze these large-scale networks and their properties. Furthermore, distributed algorithms will also enable us to efficiently draw conclusions about these large-scale networks as well as design large-scale networks with specific desirable properties.

For example, solving the minimum sensor and actuator placement problems requires finding the strongly connected components of networks [55, 56, 32]. Strongly connected components are maximal subgraphs in which every node has a path to every other node in the subgraph. The algorithm used most often to compute the strongly connected components was proposed by Tarjan [57]; it employs a single pass of depth-first search, and its computational complexity is O(|V| + |E|), where V denotes the set of nodes in the network and E denotes the set of edges in the network. It is worth mentioning that, depending on the network sparsity, the effective computational complexity is O(|V|^2), since E ⊂ (V × V).
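As a concrete illustration of the kind of centralized SCC computation these placement problems rely on (a sketch, not the distributed algorithm developed in this thesis), the following pure-Python snippet finds strongly connected components with Kosaraju's two-pass depth-first search in O(|V| + |E|); the adjacency-pair encoding of the graph is an assumption made only for this example.

```python
from collections import defaultdict

def kosaraju_scc(nodes, edges):
    """Strongly connected components via Kosaraju's two-pass DFS."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)

    # Pass 1: record nodes in order of DFS completion (iterative DFS).
    seen, order = set(), []
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(u)
                stack.pop()
            elif nxt not in seen:
                seen.add(nxt)
                stack.append((nxt, iter(adj[nxt])))

    # Pass 2: DFS on the reversed graph in reverse finishing order;
    # each tree reached this way is one strongly connected component.
    comp, sccs = {}, []
    for s in reversed(order):
        if s in comp:
            continue
        sccs.append([])
        comp[s] = len(sccs) - 1
        stack = [s]
        while stack:
            u = stack.pop()
            sccs[-1].append(u)
            for v in radj[u]:
                if v not in comp:
                    comp[v] = len(sccs) - 1
                    stack.append(v)
    return sccs

# A 3-cycle with a tail node: the components are {1, 2, 3} and {4}.
sccs = kosaraju_scc([1, 2, 3, 4], [(1, 2), (2, 3), (3, 1), (3, 4)])
print(sccs)
```

Note that both passes need the full edge list in one place, which is precisely the global-knowledge assumption that a distributed algorithm must remove.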
Similar to Tarjan's algorithm, Dijkstra's path-based algorithm finds strongly connected components and also runs in linear time (i.e., O(|V| + |E|)) [58]. Finally, Kosaraju's algorithm uses two passes of depth-first search but is also upper-bounded by O(|V| + |E|) [59]. The work in [60] presents an overview of the centralized algorithms that find the strongly connected components, all of which have computational complexity O(|V| + |E|).

A possible alternative is to develop better data-structure algorithms that are suitable for parallelization, which can then lead to implementations with computational complexity equal to O(|V| log(|V|)) [61]; see also [62] for an overview of different parallelized algorithms for SCC decomposition. However, a distributed algorithm to find strongly connected components in a directed network is currently lacking.

Aside from strongly connected components, another important property of directed networks is the finite diameter, which is the longest shortest path in the network. The finite diameter is important in improving internet search engines [63] and in identifying faults in both the power grid [64] and multiprocessor systems [65].

State-of-the-art methods to determine the finite diameter of a directed network include the Floyd-Warshall algorithm, which has a computational complexity of O(|V|^3) [66]. All of these methods require knowledge of the overall structure of the directed network, which may be limiting in certain applications and in large-scale networks. Therefore, a distributed algorithm to compute both the strongly connected components and the finite diameter of directed networks is missing.

1.5 Thesis Contributions

In this thesis, we develop novel tools that are essential for designing and analyzing large-scale complex dynamical networks. In particular, we focus on designing cyber-neural systems. We present important theoretical tools to achieve this goal.
By leveraging the tools presented in this thesis, we unveil new insights into epilepsy and design novel treatments to mitigate seizures.

1. Mitigating Neurological Disease by Controlling Dynamical Networks. In Chapter 3, we derive necessary and sufficient conditions for the structural controllability of discrete-time fractional-order linear time-invariant systems. With these conditions, we provide an algorithm to solve the minimum actuator placement problem. We prove that controlling this class of complex dynamical networks is easier than controlling linear time-invariant dynamical networks [67]. We demonstrate the advantage of fractional-order dynamics over linear time-invariant dynamics in terms of controllability through experimental simulations that determine the effect of different network properties on the required number of actuators for both synthetic and real-world networks.

2. Measuring Brain Behavior by Measuring Complex Dynamical Networks. In Chapter 4, we derive necessary and sufficient conditions to ensure the structural observability of switched linear systems with unknown inputs. We leverage these conditions to propose an algorithm that can determine the minimum sensors and their placement [55]. We apply this algorithm to the IEEE 5-bus power grid network.

3. Distributed Algorithms for Cyber-Neural Systems. In Chapter 5, we provide the first distributed algorithm to find the strongly connected components and finite diameter of directed networks [68]. We show that the algorithm outperforms the state of the art for Erdős–Rényi and Barabási–Albert networks. Furthermore, our algorithm has a computational time complexity comparable with the state of the art.

4. Characterizing and Mitigating Disease by Stabilizing Complex Dynamical Networks. In Chapter 6, we derive sufficient [69] as well as necessary and sufficient conditions to characterize the stability of discrete-time fractional-order linear time-invariant systems.
We leverage these conditions to derive a state-feedback control method to stabilize these systems [70]. We apply this method to an epileptic data set to demonstrate seizure mitigation. The final step in designing cyber-neural systems combines the newly developed tools to create an end-to-end framework that effectively prevents oncoming seizures.

1.6 Thesis Organization

Chapter 2 gives an overview of the current state of the art for estimating, controlling, and analyzing fractional-order systems, including model predictive control, state estimation, and parameter estimation. We also provide an overview of fractional-order systems in the context of cyber-neural systems.

Chapter 3 provides a solution to the minimum actuator placement problem [67] to determine the optimal placement of the control inputs in a fractional-order dynamical network. Remarkably, we find that fractional-order linear time-invariant dynamics require fewer actuators than the same topological network possessing linear time-invariant dynamics [67].

Chapter 4 presents the solution to the minimum sensor placement problem for switched linear time-invariant systems under unknown inputs [55].

In Chapter 6, we present a method to characterize a seizure by quantifying the instability of epileptic data. In particular, we derive global asymptotic stability conditions for fractional-order linear time-invariant systems as a means to characterize seizure onset [70]. We also develop a method to effectively and efficiently mitigate seizures by stabilizing fractional-order linear time-invariant systems, and we demonstrate the efficacy of this method on real-world data [70]. In short, Chapter 6 answers the question of how much stimulation should be delivered to the brain.

To enable a distributed solution to the minimum actuator and sensor placement problems, Chapter 5 presents the first distributed algorithm that finds both the strongly connected components and the finite diameter of a directed network [68].
Chapter 7 presents important future directions.

Chapter 2
A Fractal Dynamics Approach to Cyber-Neural Systems

We have witnessed an increase in the popularity of neurotechnology, which in part has been propelled by several Silicon Valley companies, such as NeuraLink [71] (founded by Elon Musk), Google, and Facebook, to mention just a few. This tendency is now emerging in Europe as well, with a variety of start-up companies across different countries. Yet we have a long path moving forward to commercialize these devices for the clinical domain [72, 73, 74]. Among the different neurotechnologies, the one experiencing the biggest growth is the neurostimulation device, which assesses neural activity (e.g., by tracking the change in electrical potential) and injects a timely stimulus (e.g., current, in electrical neurostimulation devices) that aims to disrupt such activity [75]. These devices consist of tightly integrated hardware and software components that monitor and regulate the dynamics of the neural system. Together, the intertwined behavior generated by the interaction of the hardware/software and the neural system forms the so-called cyber-neural system (CNS).

Neural systems, like the brain, generally exhibit quite diverse activity patterns across subjects under different operational setups [76]. Therefore, it is important to develop tools to translate these behaviors to enable CNS to become tomorrow's reality. Many efforts worldwide, such as the Brain Initiative [77], the Human Connectome Project [78], and the Human Brain Project [79], have sought to understand the brain in health and disease. Additionally, they also seek to provide new insights into how to "reverse-engineer the brain," a grand challenge deemed so by the National Academy of Engineering [80]. Consequently, it is imperative to establish a unified, robust framework to understand and regulate brain activity across individuals and regimes (both healthy as well as diseased/disordered) [79, 78, 81].
Ultimately, this understanding will lead to improved treatments and therapies for diseased patients, which will improve their quality of life. Fortunately, each year we gain new insights into and understanding of life-changing neurological diseases. These advances in understanding neural systems and in providing adequate treatment for these diseases have mostly been achieved with the help of technology that can measure and record neural activity [82]. Scientists and researchers use these measurements of the brain's activity to create models of the brain.

There are different methods to analyze and design neural dynamical systems. One useful modeling tool is the dynamical network system [83]. For example, [84] provides an overview of the recent advances in modeling the multi-scale behavior of the brain using dynamical networks. These models have allowed researchers to draw conclusions regarding the brain's topology and function. While many studies have used linear dynamical systems to model neural behavior [12, 85, 86], these models are unable to capture the nonlinear and non-Markovian behavior exhibited in the brain [87, 88, 89]. On the other hand, several studies have used more complex nonlinear models; however, these models are not easy to interpret and explain in the context of brain dynamics [90, 91].

Fractional-order dynamical systems, which originated in physics and economics and quickly found their way into engineering applications [92, 93, 94, 95, 96, 97, 98], are appealing mainly due to their representation as a compact spatiotemporal dynamical system with two easy-to-interpret sets of parameters, namely the fractional-order coefficients and the spatial coupling. The fractional-order coefficients capture the long-range memory in the dynamics of each state variable of the system, and the spatial matrix represents the spatial coupling between different state variables.
Figure 2.1: This figure illustrates the main sections of the thesis and their role and implications in designing cyber-neural systems.

Fractional-order systems provide an efficient way to model many different systems, including viral spreading [99], heat flux [100, 101], the human bronchus [102], human muscles [103], the nervous system [29], electrocardiogram signals [104], brain activity [30, 31, 28], and anomalous diffusion [105]. Furthermore, fractional-order systems have been used in domains as disparate as biological networks [17], cyber-physical systems [7, 18], nanotechnology [19], finance [20], quantum mechanics [21], phasor measurement unit (PMU) data in the power grid [1], and networked control systems [22, 23, 24], to mention a few.

We focus our attention on neural behavior, which can be accurately represented by fractional-order systems [26, 17, 27, 28, 29, 30, 31]. Fractional-order systems have also been explored in the context of neurophysiological networks constructed from electroencephalographic (EEG), electrocorticographic (ECoG), or blood-oxygen-level-dependent (BOLD) data [13, 106].

We provide an overview of the work that has been done on controlling, estimating, and predicting neural dynamical systems modeled using fractional-order dynamics in the discrete-time domain, toward the next generation of CNS, which is important for advancing the understanding of neural systems and the treatment of neurological diseases. A recent survey is given in [25].

2.1 Fractional Calculus

Consider first the integer-order derivatives of a continuous-time function f(t):
\[ f'(t) = \frac{df}{dt} = \lim_{h\to 0} \frac{f(t) - f(t-h)}{h}. \tag{2.1} \]
Next, we compute the second-order derivative:
\[ f''(t) = \frac{d^2 f}{dt^2} = \lim_{h\to 0} \frac{f'(t) - f'(t-h)}{h} = \lim_{h\to 0} \frac{1}{h}\left( \frac{f(t)-f(t-h)}{h} - \frac{f(t-h)-f(t-2h)}{h} \right) = \lim_{h\to 0} \frac{f(t) - 2f(t-h) + f(t-2h)}{h^2}. \tag{2.2} \]
By induction, we obtain the following relation for any integer n:
\[ f^{(n)}(t) = \frac{d^n f}{dt^n} = \lim_{h\to 0} \frac{1}{h^n} \sum_{j=0}^{n} (-1)^j \binom{n}{j} f(t - jh). \]
(2.3)

Suppose we want to evaluate the derivative at a fixed point x. Then we obtain the following result:
\[ D^{n}_{x}(f(t)) = \left.\frac{d^{n}f}{dt^{n}}\right|_{t=x} = \lim_{h\to 0} \frac{1}{h^{n}} \sum_{j=0}^{n} (-1)^{j} \binom{n}{j} f(t - jh). \tag{2.4} \]
We are interested in generalizing the derivative so that n may be any arbitrary positive real number. To do so, we first need to introduce some key assumptions:

• Choose fixed a < x.
• Choose a positive integer N.
• Set h = (x - a)/N, and let N → ∞.
• Notice that \binom{n}{j} = 0 for j > n.
• Using gamma functions, we have that (-1)^{j} \binom{n}{j} = \frac{\Gamma(j-n)}{\Gamma(-n)\Gamma(j+1)}, where \Gamma(x) = \int_{0}^{\infty} e^{-t} t^{x-1}\, dt.

Hence, we are now able to equate (2.4) to the following (Section 2.2, [107]):
\[ {}_{a}D^{n}_{x}(f(t)) = \lim_{N\to\infty} \left(\frac{x-a}{N}\right)^{-n} \sum_{j=0}^{N-1} \frac{\Gamma(j-n)}{\Gamma(-n)\Gamma(j+1)}\, f\!\left(x - j\,\frac{x-a}{N}\right). \tag{2.5} \]
Hence, with the introduction of (2.5), n can be any arbitrary positive real number. Because of these assumptions, the derivative is defined over an interval [a, x] and is only a local property when n is a positive integer.

In a similar fashion, we will generalize the integer-order integral to a fractional-order integral. We start by examining the single integral operation, where we leverage the definition of an integral as the limit of a sum and maintain the same assumptions as before:
\[ {}_{a}J^{1}_{x}(f(t)) = \int_{a}^{x} f(t)\, dt = \lim_{N\to\infty} \frac{x-a}{N} \sum_{j=0}^{N-1} f\!\left(x - j\,\frac{x-a}{N}\right). \tag{2.6} \]
Next, we compute the double integral:
\[ {}_{a}J^{2}_{x}(f(t)) = \int_{a}^{x}\int_{a}^{x_{1}} f(t)\, dt\, dx_{1} = \lim_{N\to\infty} \left(\frac{x-a}{N}\right)^{2} \sum_{j=0}^{N-1} (j+1)\, f\!\left(x - j\,\frac{x-a}{N}\right). \tag{2.7} \]
By induction, we obtain the following:
\[ {}_{a}J^{n}_{x}(f(t)) = \lim_{N\to\infty} \left(\frac{x-a}{N}\right)^{n} \sum_{j=0}^{N-1} \frac{\Gamma(j+n)}{\Gamma(n)\Gamma(j+1)}\, f\!\left(x - j\,\frac{x-a}{N}\right). \tag{2.8} \]
Hence, we notice that we can generalize the integral to any arbitrary positive real number. Now if we compare (2.5) to (2.8), we find that the only difference is the sign of n. Hence, when n > 0, we can compute the fractional-order derivative from (2.5), and when n < 0, we can compute the fractional-order integral from (2.5).
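As a quick numerical sanity check on this generalization (an illustrative sketch, not material from the thesis), the gamma-ratio weights \Gamma(j-n)/(\Gamma(-n)\Gamma(j+1)) can be computed by the recurrence w_j = w_{j-1}\,(j-1-n)/j with w_0 = 1; for integer n they reduce to the signed binomial coefficients (-1)^j \binom{n}{j} and truncate to zero after j = n, while for fractional n they decay like a power law:

```python
def gl_weights(n, K):
    """Weights Γ(j-n)/(Γ(-n)Γ(j+1)) for j = 0..K, via the stable
    recurrence w_j = w_{j-1} * (j - 1 - n) / j, w_0 = 1."""
    w = [1.0]
    for j in range(1, K + 1):
        w.append(w[-1] * (j - 1 - n) / j)
    return w

# Integer orders recover the signed binomials and truncate:
print(gl_weights(1, 3))   # 1, -1, then zeros (the first difference)
print(gl_weights(2, 4))   # 1, -2, 1, then zeros (the second difference)

# A fractional order never truncates; the weights decay as a power law:
print([round(w, 4) for w in gl_weights(0.5, 5)])
```

This truncation-versus-power-law contrast is exactly the long-term memory property discussed throughout the thesis: an integer-order difference forgets everything beyond n steps, while a fractional-order difference weighs the entire history.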
This leads us to introduce the discrete-time Grünwald-Letnikov fractional differintegration operation [107, 108] as
\[ \Delta^{\alpha} f[k] = \sum_{j=0}^{k} \frac{\Gamma(j-\alpha)}{\Gamma(-\alpha)\Gamma(j+1)}\, f[k-j], \tag{2.9} \]
where α ∈ R is the order of the operation and may be integer or non-integer (i.e., fractional).

2.2 Discrete-Time Fractional-Order Systems

In what follows, we briefly introduce discrete-time fractional-order system models. We start by introducing the Grünwald-Letnikov derivative as
\[ \Delta^{\alpha}\sigma(t) = \lim_{h\to 0} \frac{1}{h^{\alpha}} \sum_{j=0}^{k} (-1)^{j} \binom{\alpha}{j} \sigma(t - jh), \tag{2.10} \]
where Δ^α is the fractional differintegration operator and α ∈ R₊ is the fractional-order exponent [109]. If α > 0, m = ⌈α⌉, and σ is continuously differentiable at least m times, then, for t ∈ (a, b], the Grünwald-Letnikov definition is equivalent to the Riemann-Liouville definition (Theorem 2.25, [110]). Subsequently, it is possible to consider the Grünwald-Letnikov difference equation (or, simply speaking, the discrete derivative) as follows [108]:
\[ \Delta^{\alpha}\sigma[k] = \frac{1}{h^{\alpha}} \sum_{j=0}^{k} (-1)^{j} \binom{\alpha}{j} \sigma[k-j], \tag{2.11} \]
where α ∈ R is the fractional-order exponent, h is the sampling time, and k ∈ N is the sample number for which the derivative is calculated. In what follows, and without loss of generality, we consider unitary sampling time, i.e., h = 1.

A discrete-time linear fractional-order system (DTLFOS) is described as follows:
\[ \Delta^{\alpha} x[k+1] = A x[k] + B u[k] + w[k], \tag{2.12} \]
where x[k] is the state at time step k ∈ N, A ∈ R^{n×n} is the state coupling matrix, and α ∈ R₊^{n} is the vector of fractional-order coefficients. The signal u[k] ∈ R^{n_u} denotes the input corresponding to the actuation signal, and the matrix B ∈ R^{n×n_u} is the input matrix that scales the actuation signal. The term w[k] denotes the process noise or additive disturbance, whose stochastic characterization (or the lack thereof) will be clear from the context in which these systems are being used.
These models are similar to classical discrete-time linear time-invariant (LTI) system models, with the exception of the inclusion of the Grünwald-Letnikov fractional derivative, whose expansion and discretization for the i-th state, 1 ≤ i ≤ n, can be expressed as [111, 108]
\[
\Delta^{\alpha_i} x_i[k] = \sum_{j=0}^{k} \psi(\alpha_i, j)\, x_i[k-j], \tag{2.13}
\]
where α_i is the fractional-order coefficient corresponding to state i and
\[
\psi(\alpha_i, j) = \frac{\Gamma(j-\alpha_i)}{\Gamma(-\alpha_i)\Gamma(j+1)}. \tag{2.14}
\]
Simply put, larger values of the fractional-order coefficients imply a lower dependency on the previous data from that state (i.e., a faster decay of the weights used in the linear combination of previous data).
We now review some essential theory for fractional-order systems, including an approximation of (2.12) with u[k] = 0 for all k ∈ N as an LTI system. Using the expansion of the Grünwald-Letnikov derivative in (2.13), we have
\[
\Delta^{\alpha} x[k] =
\begin{bmatrix} \Delta^{\alpha_1} x_1[k] \\ \vdots \\ \Delta^{\alpha_n} x_n[k] \end{bmatrix}
= \begin{bmatrix} \sum_{j=0}^{k} \psi(\alpha_1, j)\, x_1[k-j] \\ \vdots \\ \sum_{j=0}^{k} \psi(\alpha_n, j)\, x_n[k-j] \end{bmatrix}
= \sum_{j=0}^{k} \underbrace{\begin{bmatrix} \psi(\alpha_1, j) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \psi(\alpha_n, j) \end{bmatrix}}_{D(\alpha, j)}
\begin{bmatrix} x_1[k-j] \\ \vdots \\ x_n[k-j] \end{bmatrix}
= \sum_{j=0}^{k} D(\alpha, j)\, x[k-j]. \tag{2.15}
\]
The above formulation distinctly highlights one of the main peculiarities of DTLFOS: the fractional derivative Δ^α x[k] is a weighted linear combination not just of the previous state but of every state up to the current one, with the weights given by (2.14) following a power-law decay. Plugging (2.15) into the DTLFOS formulation (2.12) with u[k] = 0 for all k ∈ N, we have
\[
\sum_{j=0}^{k+1} D(\alpha, j)\, x[k+1-j] = A x[k] + w[k], \tag{2.16}
\]
or, equivalently,
\[
D(\alpha, 0)\, x[k+1] = -\sum_{j=1}^{k+1} D(\alpha, j)\, x[k+1-j] + A x[k] + w[k], \tag{2.17}
\]
which leads to
\[
x[k+1] = -\sum_{j=0}^{k} D(\alpha, j+1)\, x[k-j] + A x[k] + w[k], \tag{2.18}
\]
since D(α, 0) = I_n, where I_n is the n × n identity matrix. Alternatively, (2.18) can be written as
\[
x[k+1] = \sum_{j=0}^{k} A_j\, x[k-j] + w[k], \qquad x[0] = x_0, \tag{2.19}
\]
where
\[
A_j = \begin{cases} A - \operatorname{diag}\big(\psi(\alpha_1, 1), \ldots, \psi(\alpha_n, 1)\big) & \text{if } j = 0, \\ -D(\alpha, j+1) & \text{if } j \ge 1. \end{cases} \tag{2.20}
\]
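The expansion (2.18) gives a direct way to propagate a DTLFOS: each new state is a linear combination of all past states. A minimal noise-free, zero-input sketch (an illustration, not code from the dissertation):

```python
import numpy as np

def simulate_dtlfos(A, alpha, x0, K):
    """Propagate x[k+1] = -sum_{j=0}^{k} D(alpha, j+1) x[k-j] + A x[k]  (eq. (2.18)),
    with D(alpha, j) = diag(psi(alpha_i, j)), zero input and zero noise."""
    alpha = np.asarray(alpha, dtype=float)
    n = alpha.size
    # psi[j] holds (psi(alpha_1, j), ..., psi(alpha_n, j)), built by the ratio recursion
    psi = np.ones((K + 2, n))
    for j in range(1, K + 2):
        psi[j] = psi[j - 1] * (j - 1 - alpha) / j
    X = np.zeros((K + 1, n))
    X[0] = x0
    for k in range(K):
        X[k + 1] = A @ X[k] - sum(psi[j + 1] * X[k - j] for j in range(k + 1))
    return X
```

For α_i = 1 this collapses to X[k+1] = (A + I) X[k], the classical memoryless case, since ψ(1, j) = 0 for j ≥ 2.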
For the discrete-time linear fractional-order system modeled by (2.12), the system is controllable if there exists a control sequence {u[0], …, u[T−1]} such that x[T] = 0 from any initial state x[0] ∈ R^n in a finite time [52]. To present the conditions for controllability and observability of discrete-time fractional-order systems, we first notice that the discrete-time linear fractional-order system (2.12) can be re-written as (Lemma 2, [112]):
\[
x[k] = G_k\, x[0], \tag{2.21}
\]
where
\[
G_k = \begin{cases} I_n, & k = 0, \\ \sum_{j=0}^{k-1} A_j\, G_{k-1-j}, & k \ge 1, \end{cases} \tag{2.22}
\]
with A_0 = A − D(α, 1), A_j = −D(α, j+1) for j ≥ 1, and
\[
D(\alpha, j) = \operatorname{diag}\big(\psi(\alpha_1, j),\, \psi(\alpha_2, j),\, \ldots,\, \psi(\alpha_n, j)\big). \tag{2.23}
\]

2.3 Controllability and Observability of Fractional-Order Systems

A discrete-time linear time-varying system given by
\[
x[k+1] = A_k x[k] + B_k u[k], \tag{2.24}
\]
where x[k] ∈ R^n, u[k] ∈ R^m, A_k ∈ R^{n×n}, B_k ∈ R^{n×m}, and k ∈ N, is said to be controllable if and only if there exists a control input sequence [u^⊤[K−1], …, u^⊤[0]]^⊤, for some K > 0, that transfers the state x[0] ≠ 0 to x[K] = 0 (Definition 11.2, [40]). Equivalently, such a system is controllable if and only if there exists a finite time K such that rank(W_c(0, K)) = n, where W_c(0, K) = \sum_{j=0}^{K-1} \phi(0, j) B_j B_j^\top \phi^\top(0, j) (Theorem 11.3, [40]) and φ(k, 0) is the state transition matrix taking x[0] to x[k] for k > 0. The control input sequence is given by
\[
\begin{bmatrix} u[K-1] \\ u[K-2] \\ \vdots \\ u[0] \end{bmatrix} = -B_k^\top\, \phi^\top(0, K)\, W_c^{-1}(0, K)\, x[0]. \tag{2.25}
\]
The linear discrete-time fractional-order system modeled by (2.12) is controllable if and only if there exists a finite time K such that rank(W_c(0, K)) = n, where W_c(0, K) = G_K^{-1} \sum_{j=0}^{K-1} G_j B B^\top G_j^\top G_K^{-\top} (Theorem 4, [52]). Furthermore, an input sequence [u^⊤[K−1], u^⊤[K−2], …, u^⊤[0]]^⊤ that transfers x[0] ≠ 0 to x[K] = 0 is given by
\[
\begin{bmatrix} u[K-1] \\ u[K-2] \\ \vdots \\ u[0] \end{bmatrix} = -\big[G_0 B \;\; G_1 B \;\; \cdots \;\; G_{K-1} B\big]^\top G_K^{-\top}\, W_c^{-1}(0, K)\, x[0]. \tag{2.26}
\]
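As a small numerical illustration (function names are hypothetical), the matrices G_k of (2.22) can be built recursively and used to test the rank condition of Theorem 4 in [52]; since the outer factors G_K^{-1} and G_K^{-⊤} are invertible, they do not change the rank and are omitted from the rank test:

```python
import numpy as np

def gk_matrices(A, alpha, K):
    """Return [G_0, ..., G_K] from (2.22), with A_0 = A - D(alpha,1), A_j = -D(alpha,j+1)."""
    alpha = np.asarray(alpha, dtype=float)
    n = alpha.size
    psi = np.ones((K + 2, n))
    for j in range(1, K + 2):
        psi[j] = psi[j - 1] * (j - 1 - alpha) / j
    Aj = [A - np.diag(psi[1])] + [-np.diag(psi[j + 1]) for j in range(1, K + 1)]
    G = [np.eye(n)]
    for k in range(1, K + 1):
        G.append(sum(Aj[j] @ G[k - 1 - j] for j in range(k)))
    return G

def is_controllable(A, B, alpha, K):
    """Rank test on sum_j G_j B B^T G_j^T (invertible outer factors dropped)."""
    G = gk_matrices(A, alpha, K)
    W = sum(G[j] @ B @ B.T @ G[j].T for j in range(K))
    return np.linalg.matrix_rank(W) == A.shape[0]
```

For α = (1, 1) the recursion reduces to G_k = (A + I)^k, matching the classical case.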
Similarly, a system is observable if and only if the initial state x[0] can be uniquely determined from knowledge of the control input and the observations. The linear discrete-time fractional-order system modeled by (2.12) is said to be observable if and only if there exists some K > 0 such that the initial state x[0] at time k = 0 can be uniquely determined from the knowledge of {u[0], …, u[K−1]} and {y[0], …, y[K−1]}. Therefore, by Theorem 5 in [52], the linear discrete-time fractional-order system is observable if and only if there exists a finite time K such that rank(O_K) = n, where O_K = [(C G_0)^⊤, (C G_1)^⊤, …, (C G_{K−1})^⊤]^⊤ or, equivalently, rank(W_o(0, K)) = n, where W_o(0, K) = \sum_{j=0}^{K-1} G_j^\top C^\top C\, G_j. Furthermore, the initial state x[0] is given by
\[
x[0] = W_o^{-1}(0, K)\, \mathcal{O}_K^\top \big[\tilde{Y}_K - M_K \tilde{U}_K\big], \tag{2.27}
\]
where \tilde{U}_K = [u^⊤[0], u^⊤[1], …, u^⊤[K−1]]^⊤, \tilde{Y}_K = [y^⊤[0], …, y^⊤[K−1]]^⊤, and
\[
M_K = \begin{bmatrix}
0 & 0 & \cdots & 0 & 0 \\
C G_0 B & 0 & \cdots & 0 & 0 \\
C G_1 B & C G_0 B & \cdots & 0 & 0 \\
C G_2 B & C G_1 B & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
C G_{K-2} B & C G_{K-3} B & \cdots & C G_0 B & 0
\end{bmatrix}.
\]

2.4 System Identification for Fractional-Order Systems

An important problem is learning the parameters of a fractional-order dynamical system, that is, the fractional-order exponents and the spatial coupling matrix. The maximum-likelihood approach poses limitations due to the nonlinearity of the objective. Nevertheless, some approaches were successfully developed for discrete-time linear fractional-order systems in [112, 113, 114], where an approximate solution is based on a variant of the expectation-maximization algorithm. Nonetheless, such approaches do not enable a finite-time assessment of the uncertainty associated with the parameters, which plays a key role in the context of CNS. A recent approach that relies on a bilevel iterative bisection scheme [115] to perform identification of the spatial and temporal parameters of a linear discrete-time fractional-order system is reviewed next.
First, consider the augmented state vector
\[
\tilde{x}[k] = \begin{bmatrix} x[k] \\ x[k-1] \\ \vdots \\ x[k-p+1] \end{bmatrix}. \tag{2.28}
\]
If we assume that the system is causal, i.e., the state and disturbances are all considered to be zero before the initial time (x[k] = 0 and w[k] = 0 for all k < 0), then we have
\[
\tilde{x}[k+1] = \underbrace{\begin{bmatrix}
A_0 & \cdots & A_{p-2} & A_{p-1} \\
I & \cdots & 0 & 0 \\
\vdots & \ddots & \vdots & \vdots \\
0 & \cdots & I & 0
\end{bmatrix}}_{\tilde{A}} \tilde{x}[k]
+ \underbrace{\begin{bmatrix} I \\ 0 \\ \vdots \\ 0 \end{bmatrix}}_{\tilde{B}_w} w[k]
= \tilde{A}\, \tilde{x}[k] + \tilde{B}_w\, w[k], \tag{2.29}
\]
for all k ≥ 0. Note that (2.29) is an LTI system model, which we refer to as the p-augmented LTI approximation of (2.12).
Having established the p-augmented LTI approximation of a DTLFOS in (2.29), we can consider the two-level iterative bisection-like approach to identify the spatial and temporal parameters of the DTLFOS in (2.12). In particular, we start by noting that, for the Grünwald-Letnikov definition of the fractional derivative provided in (2.13), α_i = 1 and α_i = −1 can be interpreted, respectively, as the discretized versions of the derivative and the integral of integer calculus, for 1 ≤ i ≤ n.
To proceed with a bisection-like approach to identify {α_i}_{i=1}^n and \tilde{A}, we first fix the endpoints of the search space for α_i to be \underline{α}_i = −1 and \overline{α}_i = 1 for 1 ≤ i ≤ n. We also calculate the midpoint α_{c,i} = (\underline{α}_i + \overline{α}_i)/2. Now, given the values of \underline{α}_i, \overline{α}_i, and α_{c,i}, we calculate the corresponding row vectors \tilde{a}_i using the ordinary least squares technique. These row vectors guide the evolution of the states in the p-augmented LTI approximation
\[
\tilde{x}_i[k+1] = \tilde{a}_i\, \tilde{x}_i[k] + \tilde{b}_{w_i}\, w_i[k], \tag{2.30}
\]
where \tilde{a}_i is the row vector obtained for the candidate order under consideration (\underline{α}_i, \overline{α}_i, or α_{c,i}) and \tilde{b}_{w_i} is obtained by extracting the i-th row of \tilde{B}_w for 1 ≤ i ≤ n. Next, we propagate the dynamics according to the obtained values of the parameters \tilde{a}_i and calculate the mean squared error (MSE) between the states obtained from the estimated \tilde{a}_i's and the observed states.
If the MSE is smaller for the \underline{α}_i case, then we set \overline{α}_i = α_{c,i}. If the MSE is smaller for the \overline{α}_i case, then we set \underline{α}_i = α_{c,i}. This procedure is repeated until |\overline{α}_i − \underline{α}_i| does not exceed a pre-specified tolerance ε. Algorithm 1 summarizes the procedure for determining the spatial and temporal components of a DTLFOS using the two-level iterative bisection-like approach outlined above.
In estimating the parameters of a DTLFOS, the iteration complexity of the bisection-like process and the finite-sample complexity of computing the spatial parameters using a least squares approach are both given in [115]. First, numerical and experimental evidence suggests that the computation of the fractional-order exponents of a DTLFOS, using, e.g., a wavelet-like technique described in [116], does not directly depend on the number of samples or observations used in the estimation procedure. Empirical evidence suggests that a small number of samples (usually 30 to 100) suffices to compute the fractional-order exponents {α_i}_{i=1}^n [115]. Furthermore, the work in [115] certifies the iteration complexity of the bisection method for finding the parameters of a DTLFOS. Specifically, the bisection-based technique detailed above is minmax optimal, and the number ν of iterations needed to achieve a specified tolerance ε is bounded above by
\[
\nu \le \log_2 \frac{2}{\varepsilon}. \tag{2.31}
\]

Algorithm 1 Learning the parameters of a DTLFOS
1: for i = 1 to n do
2: Initialize \underline{α}_i = −1, \overline{α}_i = 1, and tolerance ε.
3: Calculate α_{c,i} = (\underline{α}_i + \overline{α}_i)/2.
4: Given the above values of \underline{α}_i, \overline{α}_i, and α_{c,i}, find, using the ordinary least squares (OLS) method, the corresponding row vectors \tilde{a}_i that guide the evolution of the states in the p-augmented LTI approximation \tilde{x}_i[k+1] = \tilde{a}_i \tilde{x}_i[k] + \tilde{b}_{w_i} w_i[k].
5: Propagate the dynamics according to the obtained OLS estimates and calculate the mean squared error (MSE) between the propagated states and the observed state trajectory.
6: if MSE is lower for the \underline{α}_i case then
7: Set \overline{α}_i = α_{c,i}.
8: else if MSE is lower for the \overline{α}_i case then
9: Set \underline{α}_i = α_{c,i}.
10: end if
11: Terminate if |\overline{α}_i − \underline{α}_i| < ε; else return to step 3.
12: end for

Secondly, we can now delve into the problem of identifying the spatial parameter using a least squares-like approach and its finite-time guarantees. We start with the p-augmented LTI model of (2.29), i.e.,
\[
\tilde{x}[k+1] = \tilde{A}\, \tilde{x}[k] + \tilde{B}_w\, w[k], \tag{2.32}
\]
where the process noise w[k] is independent and identically distributed (i.i.d.) zero-mean Gaussian. The OLS method then outputs the matrix \tilde{A}[K] as the solution of the optimization problem
\[
\tilde{A}[K] := \arg\min_{\tilde{A} \in \mathbb{R}^{d\times d}} \sum_{k=1}^{K} \frac{1}{2} \big\| \tilde{x}[k+1] - \tilde{A}\, \tilde{x}[k] \big\|_2^2, \tag{2.33}
\]
by observing the state trajectory of (2.29), i.e., {x[0], x[1], …, x[K+1]}.
Prior to characterizing the sample complexity of the OLS method for the p-augmented LTI approximation of the DTLFOS, we define a few quantities of interest. The finite-time controllability Gramian of the approximated system (2.29), W_t, is defined by
\[
W_t := \sum_{j=0}^{t-1} \tilde{A}^j \big(\tilde{A}^j\big)^\top. \tag{2.34}
\]
Intuitively, the controllability Gramian gives a quantitative measure of how much the system is excited when driven by the process noise w[k] acting as an input to the system. Additionally, given a symmetric matrix A ∈ R^{d×d}, we write λ_max(A) and λ_min(A) for, respectively, the maximum and minimum eigenvalues of A. Lastly, for any square matrix A ∈ R^{d×d}, the spectral radius ρ(A) is given by the largest absolute value of its eigenvalues, and the operator norm of a matrix is ∥A∥_op = inf{c ≥ 0 : ∥Av∥ ≤ c∥v∥ for all v ∈ V}. With these definitions, the work in [115] establishes the following result characterizing the sample complexity of the above OLS method for the DTLFOS approximation.
Fix δ ∈ (0, 1/2) and consider the p-augmented system in (2.29), where \tilde{A} ∈ R^{d×d} is a marginally stable matrix (i.e., ρ(\tilde{A}) ≤ 1) and w[k] ∼ N(0, σ² I). Then, there exist universal constants c, C > 0 such that
\[
\mathbb{P}\!\left[ \big\| \tilde{A}[K] - \tilde{A} \big\|_{\mathrm{op}} \le \frac{C}{\sqrt{K\, \lambda_{\min}(W_k)}} \sqrt{ d \log\frac{d}{\delta} + \log\det\!\big(W_K W_k^{-1}\big) } \right] \ge 1 - \delta, \tag{2.35}
\]
for any k such that
\[
\frac{K}{k} \ge c \left( d \log\frac{d}{\delta} + \log\det\!\big(W_K W_k^{-1}\big) \right) \tag{2.36}
\]
holds.

Remark 1. We note here that although the operator-norm parameter estimation error in (2.35) is stated in terms of \tilde{A}, the operator-norm errors associated with the matrices A_0, A_1, …, A_{p−1} are strictly lower than ∥\tilde{A}[K] − \tilde{A}∥_op, since A_0, A_1, …, A_{p−1} are submatrices of \tilde{A}, and the operator norm of a submatrix is upper bounded by that of the whole matrix (see Lemma A.9 of [117] for a proof). Additionally, it is worth mentioning that a finite-sample complexity bound similar to the one presented above can also be derived when we consider the ordinary least squares identification of the spatial parameters of a DTLFOS with inputs. For instance, within the purview of epileptic seizure mitigation using intracranial EEG data, the objective is to suppress the overall length or duration of an epileptic seizure. Thus, the goal is to steer the state of the neurophysiological system in consideration away from seizure-like activity, using a control strategy such as model predictive control [13].

2.5 State Estimation for Fractional-Order Systems

Most of the estimators that exist for fractional-order dynamical systems are obtained under the assumption that the disturbance and noise have Gaussian distributions [118, 119, 120, 121, 122, 123]. However, such an assumption is not realistic in the context of neural systems, where disturbance frequencies can only lie within a specific frequency band.
Therefore, the work in [124] presents the so-called minimum-energy state estimation, where it is assumed that the disturbance and noise are unknown, but deterministic and bounded, uncertainties. The results are presented next.
Consider a left-bounded sequence {x[k]}_{k∈Z} over k, i.e., with limsup_{k→−∞} ∥x[k]∥ < ∞. Then, the Grünwald-Letnikov fractional-order difference, for any α ∈ R_+, can be re-written as
\[
\Delta^{\alpha} x[k] := \sum_{j=0}^{\infty} c^{\alpha}_j\, x[k-j], \qquad c^{\alpha}_j = (-1)^j \binom{\alpha}{j},
\qquad
\binom{\alpha}{j} = \begin{cases} 1 & \text{if } j = 0, \\[4pt] \displaystyle\prod_{i=0}^{j-1} \frac{\alpha - i}{i+1} = \frac{\Gamma(\alpha+1)}{\Gamma(j+1)\Gamma(\alpha-j+1)} & \text{if } j > 0, \end{cases} \tag{2.37}
\]
for all j ∈ N. The summation in (2.37) is well-defined by the uniform boundedness of the sequence {x[k]}_{k∈Z} and the fact that |c^{\alpha}_j| ≤ α^j/j!, which implies that the sequence {c^{\alpha}_j}_{j∈N} is absolutely summable for any α ∈ R_+ [125, 126].
With the above ingredients, a discrete-time fractional-order dynamical network with additive disturbance can be described, respectively, by the state evolution and output equations
\[
\sum_{i=1}^{l} A_i\, \Delta^{a_i} x[k+1] = \sum_{i=1}^{r} B_i\, \Delta^{b_i} u[k] + \sum_{i=1}^{s} G_i\, \Delta^{g_i} w[k], \tag{2.38a}
\]
\[
z[k] = C'_k\, x[k] + v'[k], \tag{2.38b}
\]
with the variables x[k] ∈ R^n, u[k] ∈ R^m, and w[k] ∈ R^p denoting the state, input, and disturbance vectors at time step k ∈ N, respectively. The scalars a_i ∈ R_+ (1 ≤ i ≤ l), b_i ∈ R_+ (1 ≤ i ≤ r), and g_i ∈ R_+ (1 ≤ i ≤ s) are the fractional-order coefficients corresponding, respectively, to the state, the input, and the disturbance. The vectors z[k], v'[k] ∈ R^q denote, respectively, the output and measurement disturbance at time step k ∈ N. We assume that the (unknown but deterministic) disturbance vectors are bounded as
\[
\| w[k] \| \le b_w, \qquad \| v'[k] \| \le b_{v'}, \qquad k \in \mathbb{N}, \tag{2.39}
\]
for some scalars b_w, b_{v'} ∈ R_+. We also assume that the control input u[k] is known for all time steps k ∈ N. We denote by x[0] = x(0) the initial condition of the state at time k = 0.
In the computation of the fractional-order difference, we assume that the system is causal, i.e., the state, input, and disturbances are all considered to be zero before the initial time (x[k] = 0, u[k] = 0, and w[k] = 0 for all k < 0).
Next, consider the quadratic weighted least-squares objective function
\[
J\big(x[0], \{w[i]\}_{i=0}^{N-1}, \{v'[j]\}_{j=1}^{N}\big) = \sum_{i=0}^{N-1} w[i]^\top Q_i^{-1} w[i] + \sum_{j=1}^{N} v'[j]^\top R_j^{-1} v'[j] + (x[0] - \hat{x}_0)^\top P_0^{-1} (x[0] - \hat{x}_0), \tag{2.40}
\]
subject to the constraints
\[
\sum_{i=1}^{l} A_i\, \Delta^{a_i} x[k+1] = \sum_{i=1}^{r} B_i\, \Delta^{b_i} u[k] + \sum_{i=1}^{s} G_i\, \Delta^{g_i} w[k] \tag{2.41a}
\]
and
\[
z[k] = C'_k\, x[k] + v'[k], \tag{2.41b}
\]
for some N ∈ N, with the weighting matrices Q_i (0 ≤ i ≤ N−1), R_j (1 ≤ j ≤ N), and P_0 chosen to be symmetric and positive definite, and \hat{x}_0 chosen to be the a priori estimate of the system's initial state. The minimum-energy estimation procedure seeks to solve the following optimization problem:
\[
\begin{aligned}
\underset{\{x[k]\}_{k=0}^{N},\, \{w[i]\}_{i=0}^{N-1},\, \{v'[j]\}_{j=1}^{N}}{\text{minimize}} \quad & J\big(x[0], \{w[i]\}_{i=0}^{N-1}, \{v'[j]\}_{j=1}^{N}\big) \\
\text{subject to} \quad & z[k] = C'_k\, x[k] + v'[k], \\
& \textstyle\sum_{i=1}^{l} A_i \Delta^{a_i} x[k+1] = \sum_{i=1}^{r} B_i \Delta^{b_i} u[k] + \sum_{i=1}^{s} G_i \Delta^{g_i} w[k],
\end{aligned} \tag{2.42}
\]
for some N ∈ N.
To derive the solution to (2.42), we first present some alternative formulations of the discrete-time fractional-order system in (2.38a) and (2.38b) and relevant definitions that will be used in the sequel. Then, we present the solution and some additional properties of the derived solution, namely the exponential input-to-state stability of the estimation error. In what follows, we consider the mild technical assumption that \sum_{i=1}^{l} A_i is invertible. Additionally, we consider a truncation of the last v temporal components of (2.38a), which we will refer to as the v-approximation of the DTLFOS.
That being said, the DTLFOS model in (2.38a) can be equivalently written as
\[
x[k+1] = \sum_{j=1}^{\infty} \check{A}_j\, x[k-j+1] + \sum_{j=0}^{\infty} \check{B}_j\, u[k-j] + \sum_{j=0}^{\infty} \check{G}_j\, w[k-j], \tag{2.43}
\]
where \check{A}_j = -\hat{A}_0^{-1} \hat{A}_j, \check{B}_j = \hat{A}_0^{-1} \hat{B}_j, and \check{G}_j = \hat{A}_0^{-1} \hat{G}_j, with \hat{A}_j = \sum_{i=1}^{l} A_i c^{a_i}_j, \hat{B}_j = \sum_{i=1}^{r} B_i c^{b_i}_j, and \hat{G}_j = \sum_{i=1}^{s} G_i c^{g_i}_j. Furthermore, for any positive integer v ∈ N_+, the DTLFOS model in (2.38a) can be recast as
\[
\tilde{x}[k+1] = \tilde{A}_v\, \tilde{x}[k] + \tilde{B}_v\, u[k] + \tilde{G}_v\, r[k], \qquad \tilde{x}[0] = \tilde{x}_0, \tag{2.44a}
\]
\[
y[k+1] = C_{k+1}\, \tilde{x}[k+1] + v[k+1], \tag{2.44b}
\]
where
\[
r[k] = \sum_{j=v+1}^{\infty} \check{A}_j\, x[k-j+1] + \sum_{j=v+1}^{\infty} \check{B}_j\, u[k-j] + \sum_{j=0}^{\infty} \check{G}_j\, w[k-j], \tag{2.45}
\]
with the augmented state vector \tilde{x}[k] = [x[k]^⊤, …, x[k−v+1]^⊤, u[k−1]^⊤, …, u[k−v]^⊤]^⊤ ∈ R^{v(n+m)} and appropriate matrices \tilde{A}_v, \tilde{B}_v, and \tilde{G}_v, where \tilde{x}_0 = [x_0^⊤, 0, …, 0]^⊤ denotes the initial condition. The matrices \tilde{A}_v and \tilde{B}_v are formed using the terms {\check{A}_j}_{1≤j≤v} and {\check{B}_j}_{1≤j≤v}, while the remaining terms {\check{G}_j}_{1≤j<∞} and the state and input components not included in \tilde{x}[k] are absorbed into the term \tilde{G}_v r[k]. Furthermore, we refer to (2.44a) as the v-approximation of the DTLFOS presented in (2.38a).
To obtain the minimum-energy estimator, let us consider the quadratic weighted least-squares objective function
\[
J\big(\tilde{x}[0], \{r[i]\}_{i=0}^{N-1}, \{v[j]\}_{j=1}^{N}\big) = \sum_{i=0}^{N-1} r[i]^\top Q_i^{-1} r[i] + \sum_{j=1}^{N} v[j]^\top R_j^{-1} v[j] + (\tilde{x}[0] - \hat{x}_0)^\top P_0^{-1} (\tilde{x}[0] - \hat{x}_0), \tag{2.46}
\]
subject to the constraints
\[
\bar{x}[k+1] = \tilde{A}_v\, \bar{x}[k] + \tilde{B}_v\, u[k] + \tilde{G}_v\, \bar{r}[k], \tag{2.47a}
\]
\[
y[k+1] = C_{k+1}\, \bar{x}[k+1] + \bar{v}[k+1], \tag{2.47b}
\]
for some N ∈ N. The weighting matrices Q_i (0 ≤ i ≤ N−1) and R_j (1 ≤ j ≤ N) are chosen to be symmetric and positive definite. The term \hat{x}_0 denotes the a priori estimate of the (unknown) initial state of the system, with the matrix P_0 being symmetric and positive definite.
Subsequently, we consider the weighted least-squares optimization problem
\[
\begin{aligned}
\underset{\{\bar{x}[k]\}_{k=0}^{N},\, \{\bar{r}[i]\}_{i=0}^{N-1},\, \{\bar{v}[j]\}_{j=1}^{N}}{\text{minimize}} \quad & J\big(\tilde{x}[0], \{r[i]\}_{i=0}^{N-1}, \{v[j]\}_{j=1}^{N}\big) \\
\text{subject to} \quad & \bar{x}[k+1] = \tilde{A}_v\, \bar{x}[k] + \tilde{B}_v\, u[k] + \tilde{G}_v\, \bar{r}[k], \\
& y[k+1] = C_{k+1}\, \bar{x}[k+1] + \bar{v}[k+1],
\end{aligned} \tag{2.48}
\]
for some N ∈ N. We denote the state vector that corresponds to the solution of the optimization problem (2.48) by \hat{x}[k]. Then, \hat{x}[k] satisfies the recursion
\[
\hat{x}[k+1] = \tilde{A}_v\, \hat{x}[k] + \tilde{B}_v\, u[k] + K_{k+1}\Big( y[k+1] - C_{k+1}\big(\tilde{A}_v\, \hat{x}[k] + \tilde{B}_v\, u[k]\big) \Big), \tag{2.49}
\]
for 0 ≤ k ≤ N−1, with initial conditions specified for \hat{x}_0 and {u[j]}_{j=0}^{k}, and with the update equations
\[
K_{k+1} = M_{k+1} C_{k+1}^\top \big( C_{k+1} M_{k+1} C_{k+1}^\top + R_{k+1} \big)^{-1}, \tag{2.50a}
\]
\[
M_{k+1} = \tilde{A}_v P_k \tilde{A}_v^\top + \tilde{G}_v Q_k \tilde{G}_v^\top, \tag{2.50b}
\]
and
\[
P_{k+1} = (I - K_{k+1} C_{k+1})\, M_{k+1}\, (I - K_{k+1} C_{k+1})^\top + K_{k+1} R_{k+1} K_{k+1}^\top = (I - K_{k+1} C_{k+1})\, M_{k+1}, \tag{2.50c}
\]
with symmetric and positive definite P_0. Notice that the recursion in (2.49) (with the initial condition \hat{x}_0 and the values of {u[j]}_{j=0}^{k} known), together with the update equations (2.50), solves (2.48). It is interesting to note here that the output term y[k+1] presented in (2.47b) and (2.49) is the output of the v-approximated system (2.44), which, in turn, is simply a subset of the outputs z[k+1] obtained from (2.38b), truncated v time steps in the past, provided v[k] and C_k are formed from the appropriate blocks of v'[k] and C'_k for all k ∈ N.
Secondly, the minimum-energy estimator guarantees exponential input-to-state stability of the estimation error. In order to prove this property, we need the following mild technical assumptions: there exist constants \underline{α}, \overline{α}, β, γ ∈ R_+ such that
\[
\underline{\alpha} I \preceq \tilde{A}_v \tilde{A}_v^\top \preceq \overline{\alpha} I, \qquad \tilde{G}_v \tilde{G}_v^\top \preceq \beta I, \qquad C_k^\top C_k \preceq \gamma I, \tag{2.51}
\]
for all k ∈ N.
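One step of the recursion (2.49) with the updates (2.50) has the familiar predict-correct structure of a Kalman-like filter. A minimal sketch (an illustration with hypothetical names; the matrices \tilde{A}_v, \tilde{B}_v, \tilde{G}_v, C, Q, R are assumed given):

```python
import numpy as np

def me_step(xhat, P, u, y, Av, Bv, Gv, C, Q, R):
    """One update of the minimum-energy estimator, eqs. (2.49)-(2.50)."""
    M = Av @ P @ Av.T + Gv @ Q @ Gv.T                 # (2.50b): propagated covariance-like term
    K = M @ C.T @ np.linalg.inv(C @ M @ C.T + R)      # (2.50a): gain
    xpred = Av @ xhat + Bv @ u                        # model prediction
    xnew = xpred + K @ (y - C @ xpred)                # (2.49): correct with the innovation
    Pnew = (np.eye(P.shape[0]) - K @ C) @ M           # (2.50c), short form
    return xnew, Pnew
```

With a near-zero measurement weight R, the gain drives the estimate onto the measurement, as expected from the weighted least-squares interpretation of (2.48).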
Additionally, notice that the state transition matrix for the dynamics in (2.44a) is given by
\[
\Phi(k, k_0) = \tilde{A}_v^{\,(k - k_0)}, \qquad \Phi(k_0, k_0) = I, \tag{2.52}
\]
for all k ≥ k_0 ≥ 0. We also consider the discrete-time controllability Gramian associated with the dynamics (2.44a), described by
\[
W_c(k, k_0) = \sum_{i=k_0}^{k-1} \Phi(k, i+1)\, \tilde{G}_v \tilde{G}_v^\top\, \Phi^\top(k, i+1), \tag{2.53}
\]
and the discrete-time observability Gramian associated with (2.44a), given by
\[
W_o(k, k_0) = \sum_{i=k_0+1}^{k} \Phi^\top(i, k_0)\, C_i^\top C_i\, \Phi(i, k_0), \tag{2.54}
\]
for k ≥ k_0 ≥ 0. We also make the following assumptions regarding complete uniform controllability and complete uniform observability of the v-approximated system in (2.44a). First, the v-approximated system (2.44a) is completely uniformly controllable, i.e., there exist constants δ ∈ R_+ and N_c ∈ N_+ such that
\[
W_c(k + N_c, k) \succeq \delta I, \tag{2.55}
\]
for all k ≥ 0. Similarly, the v-approximated system (2.44a) is completely uniformly observable, i.e., there exist constants ε ∈ R_+ and N_o ∈ N_+ such that
\[
W_o(k + N_o, k) \succeq \varepsilon\, \Phi^\top(k + N_o, k)\, \Phi(k + N_o, k), \tag{2.56}
\]
for all k ≥ 0.
Next, we also present an assumption certifying lower and upper bounds on the weight matrices Q_k and R_{k+1} in (2.46). That is, without loss of generality, we assume that the weight matrices Q_k and R_{k+1} satisfy
\[
\underline{\vartheta} I \preceq Q_k \preceq \overline{\vartheta} I \qquad \text{and} \qquad \underline{\rho} I \preceq R_{k+1} \preceq \overline{\rho} I, \tag{2.57}
\]
for all k ≥ 0 and constants \underline{\vartheta}, \overline{\vartheta}, \underline{\rho}, \overline{\rho} ∈ R_+. Under these assumptions, it is possible to establish lower and upper bounds for the matrix P_k, although this is not required to show that the estimation error is exponentially input-to-state stable. Specifically, the minimum-energy estimation error e[k], given by
\[
e[k] = \hat{x}[k] - \tilde{x}[k], \tag{2.58}
\]
is such that there exist constants σ, τ, χ, ψ ∈ R_+ with τ < 1 for which the estimation error satisfies
\[
\| e[k] \| \le \max\left\{ \sigma \tau^{\,k - k_0} \| e[k_0] \|,\;\; \chi \max_{k_0 \le i \le k-1} \| r[i] \|,\;\; \psi \max_{k_0 \le j \le k-1} \| v[j+1] \| \right\} \tag{2.59}
\]
for all k ≥ k_0 ≥ max{N_c, N_o}.
It is interesting to note that the bound on the estimation error e[k] in (2.59) depends on ∥r[i]∥ for k_0 ≤ i ≤ k−1. In fact, a distinguishing feature of DTLFOS is the presence of a finite non-zero disturbance term in the input-to-state stability bound of the tracking error when tracking a state other than the origin. This disturbance depends on the upper bounds on the non-zero reference state being tracked as well as on the input. While the linearity of the Grünwald-Letnikov fractional-order difference operator allows one to mitigate this issue in the case of tracking a non-zero exogenous state by a suitable change of state and input coordinates, this approach cannot be pursued here, since the state we wish to estimate is unknown. However, it can be shown that as the value of v in the v-approximation increases, the upper bound associated with ∥r[i]∥ decreases drastically, since the v-approximation gives progressively better representations of the unapproximated system. This further implies that ∥r[i]∥ in (2.59) stays bounded, with progressively smaller upper bounds on ∥r[i]∥ (and hence on ∥e[k]∥) as v increases. Lastly, the estimation error associated with the minimum-energy estimation process in (2.58) is defined in terms of the state of the v-approximated system \tilde{x}[k]. In reality, as detailed above, with larger values of v the v-approximated system approaches the real system dynamics, and thus we obtain an expression for the estimation error with respect to the real system in the limiting case, where the input-to-state stability bound as presented in (2.59) holds.

2.6 Model Predictive Control for Fractional-Order Systems

Model predictive control (MPC) is a control strategy that allows the control of processes while satisfying a set of constraints. At its core, MPC uses explicit process models (which may be linear or nonlinear) to predict how a plant will respond to arbitrary inputs.
At each instant of time, an MPC algorithm seeks to optimize future plant behavior by computing a series of control inputs over a time horizon (called the prediction horizon) via the solution of an optimization problem, often with constraints. Once this step is complete, the computed control inputs corresponding to the first subsection of the prediction horizon (called the control horizon) are sent to the plant. This procedure is then repeated at subsequent control intervals [127]. This receding-horizon strategy implicitly introduces closed-loop feedback.
We provide an overview of the case where the predictive model is a linear fractional-order system [13]. Based on the state signal's evolution predicted by the model, and by regarding the impact of an arbitrary control input signal on the state's evolution, we can set out to adapt the stimulation signal in real time, choosing the parameters that keep stimulation signals within a safe range while optimizing some measure of performance that encapsulates the goal of steering abnormal activity to normal ranges. In general, however, the predictive model will not precisely match the real dynamics of the system. Therefore, the proposed stimulation strategy periodically re-evaluates the current estimated state and corresponding predictions and re-computes the appropriate optimal stimulation strategy.
In the fractional-order model predictive control framework presented in [13], the focus is on the design of a model predictive controller for a (possibly time-varying) discrete-time fractional-order dynamical system model
\[
\Delta^{\alpha} x[k+1] = A_k x[k] + B_k u[k] + B^w_k w[k], \tag{2.60}
\]
where w[k] denotes a sequence of independent and identically distributed random vectors following an N(0, Σ) distribution (with covariance matrix Σ ∈ R^{n×n}) and B^w_k denotes the matrix of weights that scales the noise term w[k].
The objective is to design the feedback controller such that it minimizes a quadratic cost functional of the input and state vectors over a finite time horizon P (the prediction horizon). In other words, the objective is to determine the sequence of control inputs u[k], …, u[k+P−1] that solves
\[
\begin{aligned}
\underset{u[k],\ldots,u[k+P-1]}{\text{minimize}} \quad & \mathbb{E}\left\{ \sum_{j=1}^{P} \| x[k+j] \|^2_{Q_{k+j}} + \sum_{j=1}^{P} c^\top_{k+j}\, x[k+j] + \sum_{j=0}^{P-1} \| u[k+j] \|^2_{R_{k+j}} \right\} \\
\text{subject to} \quad & x[k] = \text{observed or estimated current state}, \\
& \Delta^{\alpha} x[k+j+1] = A_{k+j}\, x[k+j] + B_{k+j}\, u[k+j] + B^w_{k+j}\, w[k+j], \quad j = 0, 1, \ldots, P-1, \\
& \text{other linear constraints on } x[k+1], \ldots, x[k+P] \text{ and } u[k], \ldots, u[k+P-1],
\end{aligned} \tag{2.61}
\]
where Q_{k+1}, …, Q_{k+P} ∈ R^{n×n} and R_k, …, R_{k+P−1} ∈ R^{n_u×n_u} are given positive semidefinite matrices. Here, Q ∈ R^{n×n} is positive semidefinite if x^⊤ Q x ≥ 0 for every x ∈ R^n, in which case ∥x∥_Q = \sqrt{x^\top Q x}.
The quadratic term on the input, which represents the electrical neurostimulation signal, is intended to add a penalization for stimulating the patient too harshly, since harsh stimulation may be unsafe, create discomfort for the patient, or result in harmful psychological effects [128]. It is also interesting to note that, even though the above problem requires estimation of the system states, the existence of a separation principle for discrete-time fractional-order systems [129] guarantees that we can perform model predictive control with state estimation for these systems. Note that, here, P is called the prediction horizon, and the framework only deploys the control strategy associated with the first M time steps (referred to as the control horizon). Simply speaking, after we reach state x[k+M−1], we update k to k+M−1 and recompute a new solution.
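A stripped-down illustration of the quadratic program behind (2.61) is sketched below. This is not the controller of [13]: it assumes an unconstrained, certainty-equivalent setting (noise and the linear cost term dropped), an LTI prediction model x[k+1] = A x[k] + B u[k] standing in for the fractional dynamics (e.g., an augmented approximation with the memory terms folded into A), and positive definite R so the condensed problem has a unique minimizer.

```python
import numpy as np

def mpc_first_input(A, B, x0, horizon, Q, R):
    """Condensed unconstrained MPC: stack the predictions x[k+j] = A^j x0 + sum_i A^(j-1-i) B u[i],
    minimize sum_j ||x[k+j]||_Q^2 + ||u[k+j]||_R^2, and return the first optimal input
    (the receding-horizon action that would be applied to the plant)."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, j) for j in range(1, horizon + 1)])
    Gamma = np.zeros((n * horizon, m * horizon))
    for j in range(1, horizon + 1):
        for i in range(j):
            Gamma[(j - 1) * n:j * n, i * m:(i + 1) * m] = np.linalg.matrix_power(A, j - 1 - i) @ B
    Qbar = np.kron(np.eye(horizon), Q)    # block-diagonal state weights
    Rbar = np.kron(np.eye(horizon), R)    # block-diagonal input weights
    H = Gamma.T @ Qbar @ Gamma + Rbar     # quadratic term of the condensed cost
    g = Gamma.T @ Qbar @ Phi @ x0         # linear term induced by the free response
    u = np.linalg.solve(H, -g)            # unconstrained minimizer
    return u[:m]
```

In a receding-horizon loop, only this first input (more generally, the first M inputs of the control horizon) is applied before the problem is re-solved from the newly observed state.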
This way, we obtain robust solutions since, by design, the optimal strategy is constantly re-evaluated based on the short-term implementation of a long-term prediction [130, 131].

Chapter 3
Controlling Complex Dynamical Networks

Dynamical networks, including brain [3], quantum [1], financial [132], gene [4], protein [5], cyber-physical system [7] (e.g., power [1], healthcare [8]), social [133], and physiological networks [6], exhibit not only an intricate set of higher-order interactions, but also distinct long-term memory dynamics where both recent and more distant past states influence the state's evolution. Regulating these dynamical networks in a timely fashion becomes critical to avoid a full-blown catastrophe. Examples include treating epilepsy by arresting a seizure in a human brain, mitigating climate-related power surges in the power grid, anticipating an undesired shock in the financial market, defending against cyber-attacks in cyber-physical systems, and even thwarting the spread of misinformation in social networks. Therefore, we need a general mathematical framework to assess and design controllable large-scale long-term memory dynamical networks within a specified time frame (i.e., time-to-control).
While initial efforts on analyzing the controllability of long-term memory dynamical networks exist in [53, 54], these methods suffer from three main shortcomings. First, they do not assess the trade-offs between the time-to-control and the required number of driven nodes. Second, they require knowledge of the exact parametrization of the system, which makes assessing such trade-offs computationally intractable. Third, these methods are limited to a few hundred states, which prohibits the analysis of real-world large-scale dynamical networks.
In contrast, we propose a novel framework that overcomes all the aforementioned limitations by considering the structure of the system that manifests when the exact system parameters are not known, thus making our framework both robust and scalable.

Figure 3.1: (a) The number of driven nodes is shown across the time-to-control for the rat brain network for both Markov and long-term memory dynamics. The rat brain network [134] has 503 regions, which are captured by nodes in the network. At 0% of the time-to-control, all nodes in any dynamical network need to be driven nodes (shown in red). In just 20% of the time-to-control for the rat brain network, we see a drastic reduction in the required number of driven nodes for the long-term memory network as compared to the Markov dynamical network. As the time-to-control increases, the number of driven nodes decreases for both network dynamics; however, the decrease is much more pronounced for the long-term memory dynamical network. The relationship between the percent savings (in green) and the time-to-control is shown at the bottom (highlighted in green). (b) The percent savings for the power network [135] (60 nodes). (c) The percent savings for the C. elegans network [136, 137] (277 nodes). (d) The percent savings for the cortical brain structure of a macaque having 71 regions [138, 139].

In particular, our framework assesses and designs controllable long-term memory dynamical networks in a given time frame. Furthermore, our framework investigates the trade-off between the time horizon required to steer the network behavior to a desirable state and the amount of resources necessary to correct the evolution of long-term dynamical networks. Subsequently, our framework provides answers to the following important questions: How does the nature of the dynamics (Markov versus long-term memory) affect the required number of driven nodes in networks having the same spatial topology?
In the case of long-term power-law memory systems, how do the interactions between the nodes of a network (i.e., the spatial dynamics) affect our ability to steer the network's evolution to a desired behavior within a given time frame? How does the size of a long-term memory dynamical network affect its ability to be controlled over a specific time horizon? How do the inherent structure (i.e., topology) of a long-term power-law memory dynamical network and its properties affect its ability to be quickly steered toward correct operation within a time horizon?

Figure 3.1 summarizes a few of the important outcomes of our proposed framework. In short, given a network exhibiting either Markov or long-term memory dynamics, we determine the minimum number of driven nodes required to steer the network to a desired goal within a given time horizon. Ultimately, our evidence suggests that long-term power-law memory dynamical networks require fewer driven nodes to steer the network to a desired behavior, regardless of the given time-to-control.

3.1 Minimum Actuator Placement for Fractional-Order Systems

We consider a fractional-order dynamical network driven by a control input, described as follows:

Δ^α x[k+1] = A x[k] + B u[k],   (3.1)

where k ∈ ℕ is the time step, n is the number of nodes in the network, x[k] ∈ ℝ^n denotes the state, A ∈ ℝ^{n×n} is the coupling matrix that describes the spatial relationship between different states, u[k] ∈ ℝ^n is the input vector, B ∈ ℝ^{n×n} is the coupling matrix that describes the spatial relationship between the inputs and the states, α ∈ ℝ^n collects the fractional-order exponents encoding the memory associated with the different state variables, and Δ^α is the Grünwald–Letnikov discretization of the fractional derivative (Chpt. 2, [140]). Fractional-order dynamical networks possess long-term memory.
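Before stating the expansion formally, a quick numeric illustration of these memory weights: the Grünwald–Letnikov operator weighs the state j steps in the past by ψ(α, j) = Γ(j − α)/(Γ(−α)Γ(j+1)) (defined precisely below), which satisfies ψ(α, 0) = 1 and ψ(α, j) = ψ(α, j−1)(j − 1 − α)/j. A minimal sketch in plain Python (the function name is ours, not the dissertation's code):

```python
from math import gamma

def psi(alpha: float, j: int) -> float:
    """Grunwald-Letnikov weight psi(alpha, j) = Gamma(j - alpha) /
    (Gamma(-alpha) * Gamma(j + 1)), computed by a stable recursion."""
    w = 1.0  # psi(alpha, 0) = Gamma(-alpha) / Gamma(-alpha) = 1
    for m in range(1, j + 1):
        w *= (m - 1 - alpha) / m
    return w + 0.0  # "+ 0.0" normalizes a possible negative zero

# Cross-check the recursion against the Gamma-function ratio for one value.
assert abs(psi(0.5, 3) - gamma(3 - 0.5) / (gamma(-0.5) * gamma(3 + 1))) < 1e-12

# alpha = 1: weights vanish for j >= 2, i.e. no long-term memory (Markov).
print([round(psi(1.0, j), 4) for j in range(5)])  # [1.0, -1.0, 0.0, 0.0, 0.0]

# 0 < alpha < 1: weights decay slowly (power-law), so distant past states
# still influence the present state.
print([round(psi(0.5, j), 4) for j in range(5)])
# [1.0, -0.5, -0.125, -0.0625, -0.0391]
```

The α = 1 case reproduces the Markov behavior noted in the discussion of Figure 3.2, while non-integer α yields the slowly decaying power-law weights that encode long-term memory.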
For each i-th state (1 ≤ i ≤ n), the fractional-order operator acting on x_i leads to the following expression:

Δ^{α_i} x_i[k] = ∑_{j=0}^{k} ψ(α_i, j) x_i[k − j],   (3.2)

where ψ(α_i, j) = Γ(j − α_i) / (Γ(−α_i) Γ(j + 1)), with Γ(·) denoting the Gamma function [141]. The weight ψ(α_i, j) encodes the importance placed on each past state. To illustrate the effect that ψ(α_i, j) has on introducing long-term memory into the system, we plot the behavior of ψ(α_i, j) in Figure 3.2. For 0 < α_i < 1, the values of ψ(α_i, j) weigh the previous state values, where a larger index j indicates a state further back in the past. Furthermore, the weights ψ(α_i, j) decay according to a power-law. We notice that for α_i = 1 there is no long-term memory, which matches the behavior of a Markov dynamical network.

Figure 3.2: These figures show the behavior of the function ψ(α, j) for different values of α. (a) shows ψ(α, j) for integer values of α (α = 1, 3, 5). (b) shows ψ(α, j) for non-integer values of α (α = 0.1, 0.3, 0.5). The non-zero values of ψ(α, j) for non-integer values of α enable the fractional-order system to capture long-term memory.

We aim to determine the minimum number of state nodes, and their placement, that need to be driven to ensure the structural controllability of the fractional-order dynamical network. A fractional-order dynamical network is said to be controllable if there exists a sequence of inputs such that any initial state of the system can be steered to any desired state in a finite number of time steps. Therefore, assuming that the system is actuated during T time steps, we can describe the system (3.1) by the tuple (α, A, B, T). Controllability associated with the system described in (3.1) can be characterized as follows.

Definition 2.
(Controllability in T time steps) The fractional-order dynamical network described by (α, A, B, T) is said to be controllable in T time steps if and only if there exists a sequence of inputs u[k] (0 ≤ k ≤ T − 1) such that any initial state x[0] ∈ ℝ^n can be steered to any desired state x_desired[T] ∈ ℝ^n in T time steps. ◦

Due to the uncertainty in the system's parameters, we adopt a structural systems approach that relies solely on the system's sparsity pattern (i.e., the locations of the zero and non-zero parameters). Consider the class of possible tuples with a predefined structure ([ᾱ], [Ā], [B̄], T), with [ᾱ] = {α ∈ ℝ^n} and [M̄] = {M ∈ ℝ^{m1×m2} : M_{i,j} = 0 if M̄_{i,j} = 0}, where M̄ ∈ {0, ⋆}^{m1×m2} is a structural matrix with fixed zeros and arbitrary scalar parameters. Specifically, in the context of this paper, we seek to assess structural controllability, defined as follows:

Definition 3. (Structural Controllability) The fractional-order dynamical network with structural pattern (ᾱ, Ā, B̄, T) is said to be structurally controllable in T time steps if and only if there exists a tuple (α′, A′, B′, T) ∈ ([ᾱ], [Ā], [B̄], T) that is controllable in T time steps. ◦

Remark 4. If a system is structurally controllable, then almost all (α″, A″, B″, T) ∈ ([ᾱ], [Ā], [B̄], T) are controllable in T time steps, by invoking similar density arguments to those in [56]. ⋄

From the above discussion, it readily follows that structural controllability depends on the system's structure and the actuation capabilities being deployed. We consider the following assumption:

A1: All state variables can be directly controlled by dedicated actuators (i.e., there is a one-to-one mapping between an actuator and a state variable).
Thus, the input matrix I_n^J ∈ ℝ^{n×n}, where J = {1, …, n} is the set of all state variables, is a diagonal matrix whose i-th diagonal entry is non-zero (i.e., I_n^J(i, i) ≠ 0, i ∈ {1, …, n}) if and only if the associated actuator u_i is connected to the associated state variable x_i. Hence, the minimum set of state variables that need to be connected to dedicated actuators to ensure structural controllability is denoted by J* ⊆ J. Formally, we seek the solution J* to the following problem: given (ᾱ, Ā) and a time horizon of T time steps,

min_{J ⊆ {1,…,n}} |J|  s.t. (ᾱ, Ā, Ī_n^J, T) is structurally controllable in T time steps.   (P1)

3.1.1 Structural Controllability of Fractional-Order Dynamical Networks

We start by providing the graph-theoretical necessary and sufficient conditions that ensure structural controllability of fractional-order dynamical networks in T time steps. With these conditions, we solve P1 by characterizing all the minimum combinations of state variables that satisfy them. Let us start by recalling that the fractional-order dynamical network in (3.1) can be written as follows [25]:

x[k+1] = A x[k] − ∑_{j=1}^{k+1} D(α, j) x[k+1−j] + B u[k],   (3.3)

where D(α, j) = diag(ψ(α_1, j), ψ(α_2, j), …, ψ(α_n, j)). In fact, it admits a compact representation given by

x[k+1] = ∑_{j=0}^{k} A_j x[k−j] + B u[k],   (3.4)

where A_0 = A − D(α, 1) and A_j = −D(α, j+1) for j ≥ 1. Thus, the fractional-order dynamical network can be re-written in closed form as follows:

x[k] = G_k x[0] + ∑_{j=0}^{k−1} G_{k−1−j} B u[j],   (3.5)

with

G_0 = I_n,  G_k = ∑_{j=0}^{k−1} A_j G_{k−1−j} for k ≥ 1,   (3.6)

where I_n is the identity matrix of size n. Hereafter, the following remark plays a key role.

Remark 5. The matrix G_k in (3.6) corresponds to the transition matrix Φ(k, 0) of the fractional-order dynamical network.
In particular, G_k is a combination of powers of A_0 and diagonal matrices that depend on the fractional-order exponents. For example,

G_3 = ∑_{j=0}^{2} A_j G_{2−j} = A_0 G_2 + A_1 G_1 + A_2 G_0 = A_0^3 − A_0 D(α, 2) − D(α, 2) A_0 − D(α, 3). ⋄

To provide necessary and sufficient graph-theoretical conditions, we need to introduce the following terminology. A directed graph (digraph) is described by G = (V, E), where V denotes the set of vertices (or nodes) and E the (directed) edges between the vertices in the graph. A walk is any sequence of edges where the last vertex of one edge is the first vertex of the next edge. Notice that a walk may include repeated vertices. A path is a walk in which no vertex is repeated. If the beginning and ending vertices of a path coincide, then we obtain a cycle. Additionally, a sub-digraph G_s = (V′, E′) is any subcollection of vertices V′ ⊂ V together with the edges between them (i.e., E′ ⊂ E). If a subgraph has the property that there exists a path between any pair of its vertices, then it is a strongly connected digraph. A maximal strongly connected subgraph forms a strongly connected component (SCC), and any digraph can be uniquely decomposed into SCCs, which can be seen as nodes in a directed acyclic graph. A source SCC is an SCC whose vertices have no incoming edges from other SCCs. Now, we introduce the following notion of structural equivalence, which plays a key role in the derivation of our main results.

Definition 6. (Structural Equivalence) Let M̄ and N̄ be two n × n structural matrices. A structural matrix M̄ dominates N̄ if N̄_{i,j} = ⋆ implies M̄_{i,j} = ⋆ for all i, j ∈ {1, …, n}, which we denote by M̄ ≥ N̄. Also, if M̄ ≥ N̄ and N̄ ≥ M̄, then we say that M̄ is structurally equivalent to N̄. ◦

Now, we provide the first main result of our paper.

Theorem 7.
(Structural equivalence of fractional-order dynamical networks to linear time-invariant dynamical networks) The structural fractional-order dynamical network (ᾱ, Ā), described by its transition matrix Ḡ_k in (3.5) and (3.6), is structurally equivalent to the structural linear time-invariant dynamical network described by the system matrix Ā_0, where A_0 = A − D(α, 1). ◦

Proof. First recall Remark 5, and notice that G_k in (3.6) is a combination of powers of A_0 and diagonal matrices that depend on the fractional-order exponents. In fact, some of the powers of A_0 might be multiplied on the left or right by these diagonal matrices, which does not change the structural pattern of the outcome (i.e., D A_0^k and A_0^k D are structurally equivalent to Ā_0^k, where D is a diagonal matrix). Therefore, Ḡ_k is structurally equivalent to Ā_0^k. For a linear time-invariant system with system matrix A_0, the state evolution is described by x[k] = A_0^k x[0] + ∑_{j=0}^{k−1} A_0^{k−j−1} B u[j]. By comparing this state evolution with the one in (3.5), and because Ḡ_k in (3.6) is structurally equivalent to Ā_0^k, the structural fractional-order dynamical network described by Ḡ_k is structurally equivalent to the structural linear time-invariant network described by the system matrix Ā_0.

Next, we show that the structural matrix Ā_0 generically has a non-zero diagonal.

Theorem 8. (Generic non-zero diagonal) The structural matrix Ā_0 generically has a non-zero diagonal.

Proof. We have by definition that A_0 = A − D(α, 1). A zero diagonal entry may appear in A_0 if there exists an i ∈ {1, …, n} such that α_i = 0 and the corresponding diagonal entry of A is zero (i.e., a_{i,i} = 0). Another instance occurs if there exists an i ∈ {1, …, n} such that a_{i,i} ≠ 0, but a particular combination of parameters with α_i ≠ 0 results in a perfect cancellation of the diagonal entry.
These two cases occur with probability zero (whenever the parameters are sampled from a continuous distribution on ℝ or ℂ), by invoking density arguments. Hence, the matrix Ā_0 generically has non-zero diagonal entries.

From Theorems 7 and 8, given the structural characterization, we can associate a fractional-order dynamical network, characterized by (α, A, B), with a system digraph G ≡ G(Ā_0, B̄) = (V, E), where V = X ∪ U, with X = {x_1, …, x_n} and U = {u_1, …, u_n} being the state and input vertices, respectively. Furthermore, we have that E = E_{X,X} ∪ E_{U,X}, where E_{X,X} = {(x_j, x_i) : Ā_0(i, j) ≠ 0} and E_{U,X} = {(u_j, x_i) : B̄(i, j) ≠ 0} are the state and input edges, respectively. Similarly, we can define the state digraph G(Ā_0) = (X, E_{X,X}), characterized by (α, A).

Remark 9. We remark that, due to the structural equivalence notion introduced in this paper, the fractional-order exponents play an important role in capturing the memory of the state variables, which is structurally equivalent to nodal self-dynamics in a linear time-invariant system. Ultimately, by Theorems 7 and 8, considering fractional-order dynamics leads to a system digraph with self-loops almost always. ⋄

Subsequently, by invoking Theorem 7, we provide the graphical conditions that ensure structural controllability of fractional-order dynamical networks.

Theorem 10. (Structural controllability of fractional-order dynamical networks) Given a structural fractional-order dynamical network (ᾱ, Ā, B̄, T = n), this network is structurally controllable in T = n time steps if and only if at least one state variable in each of the source SCCs of G(Ā_0) is connected to an incoming input in the system digraph G(Ā_0, B̄). ◦

Proof. From Theorems 7 and 8, it follows that we only need to guarantee that the linear time-invariant network described by (Ā_0, B̄) is structurally controllable.
Therefore, to attain structural controllability of (Ā_0, B̄), we need to guarantee two conditions on G(Ā_0, B̄) [33]: (i) all state variables belong to a disjoint union of cycles, and (ii) G(Ā_0, B̄) has at least one state variable in each of the source SCCs of G(Ā_0) that is connected to an incoming input. Notice that the first condition is generically fulfilled since all the states have self-loops, see Remark 9. Subsequently, the fractional-order dynamical network is structurally controllable if and only if G(Ā_0, B̄) has at least one state variable in each of the source SCCs of G(Ā_0) that is connected to an incoming input.

3.1.2 Minimal Dedicated Actuation to Ensure Structural Controllability of Fractional-Order Dynamical Networks

With the result in Theorem 10, we readily obtain the following corollary, required for ensuring the feasibility of P1.

Corollary 11. A fractional-order dynamical network (ᾱ, Ā, Ī_n^J, T = n) is structurally controllable if and only if J contains the index of at least one state variable in each of the source SCCs in G(Ā_0). ◦

Proof. The result follows from invoking Theorem 10. Therefore, by guaranteeing that at least one state per source SCC is actuated, we guarantee that G(Ā_0, Ī_n^J) is accessible and, hence, structurally controllable.

Consequently, we obtain the solution to P1.

Theorem 12. (Solution to P1) Consider a fractional-order dynamical network (ᾱ, Ā, Ī_n^J, T = n). The solution to P1 is as follows:

J* = {i_1, …, i_l},

where {i_1, …, i_l} denotes the set of indices corresponding to the l states x_{i_1}, …, x_{i_l} that each belong to a different source SCC in G(Ā_0). ◦

Proof. First, notice that Corollary 11 establishes the feasibility of the solution to P1. Therefore, to achieve the minimum feasible set, we select one state variable from each of the different source SCCs in G(Ā_0) to be actuated.
The minimal number of variables is equal to the number of source SCCs, and hence the result follows.

Theorem 13. Any network modeled as a fractional-order system as in (3.1) requires at most as many driven nodes as the same network possessing linear time-invariant dynamics.

Proof. Based on the results in Theorem 12 and the results in [32], linear time-invariant networks must verify one additional condition for structural controllability beyond the sole condition required for fractional-order networks. Therefore, a network possessing linear time-invariant dynamics must have at least as many driven nodes as the topologically equivalent network possessing fractional-order dynamics.

Finally, we provide the computational-time complexity of solving P1.

Theorem 14. The computational-time complexity of the solution to P1 is O(n²).

Proof. Based on Theorem 12, the solution to P1 reduces to finding the source strongly connected components. Tarjan's algorithm finds all the strongly connected components of a directed network with a computational-time complexity of O(|V| + |E|) [57], where |V| is the number of vertices and |E| is the number of edges in the network. Hence, by performing another pass of depth-first search, which also has a computational-time complexity of O(|V| + |E|), the strongly connected components that do not have an incoming edge, i.e., the source strongly connected components, can be identified. We notice that E ⊆ V × V. Hence, O(|V| + |E|) = O(|V|²) = O(n²), and the result follows.

3.1.3 Minimal Dedicated Actuation to Ensure Structural Controllability of Fractional-Order Dynamical Networks in a Given Number of Time Steps T < n

Next, we provide the solution to finding the minimum combination of state variables that ensures structural controllability of fractional-order dynamical networks within a given number of time steps T < n, formulated as follows:

min_{J ⊆ {1,…,n}} |J|  s.t.
(ᾱ, Ā, Ī_n^J, T < n) is structurally controllable.   (P2)

In the next result, we provide a solution to P2.

Theorem 15. (Structural controllability of fractional-order dynamical networks with a given horizon T < n) A fractional-order dynamical network (ᾱ, Ā, B̄, T < n) is structurally controllable for a given horizon T < n if and only if the following two conditions are satisfied:

1. there is at least one state variable in each source SCC in G(Ā_0) connected to an input, and

2. the length of the longest shortest path from the starting node of any source SCC in G(Ā_0, B̄) is less than or equal to T. ◦

Proof. The first condition follows directly from Theorem 10. The second condition ensures that the system is controllable in T < n time steps, since the network can only propagate information as fast as the longest shortest path from the input to the last node in the network.

While Theorem 15 provides an exact characterization of P2, computing its minimizer is NP-hard. We prove this claim in the next result.

Theorem 16. Problem P2 is NP-hard.

Proof. We need to show that there exists a polynomial reduction from a problem known to be NP-hard to our problem. The known NP-hard problem that we consider is the graph partitioning problem [142], which aims to determine the minimum decomposition of G = (V, E) into p connected directed subgraphs G_i = (V_i, E_i), with i ∈ {1, …, p}, such that |V_i| ≤ T, V_i ∩ V_j = ∅ for i ≠ j, and ∪_{i=1}^{p} V_i = V. If we partition the network G(Ā_0) into p subgraphs such that each subgraph has |V_i| ≤ T, then we can ensure that the longest shortest path from the starting node of any source SCC in each subgraph is less than or equal to T, because each subgraph has at most T nodes, which satisfies condition 2 in Theorem 15. Furthermore, the source SCCs can be found in polynomial time [68], which satisfies condition 1 of Theorem 15. Together, this method provides a solution to P2.
Hence, our problem is at least as difficult as the graph partitioning problem, which is known to be NP-hard, so P2 is NP-hard.

Since P2 cannot be solved exactly in polynomial time (unless P = NP), we propose an approximate solution to P2, which is employed in our simulations and shown in Algorithm 2. Briefly, Algorithm 2 takes a fractional-order dynamical network and a given number of time steps T < n and finds a minimal set of state variables J to ensure structural controllability. First, the algorithm computes the digraph from the fractional-order dynamical network. Next, the software package METIS [142] is used to partition the graph into ⌈n/T⌉ subgraphs of roughly equal size T. Finally, all of the source SCCs are found in each subgraph, and a single node from each source SCC is added to the set J.

Algorithm 2 Find the minimum set of state variables J to ensure structural controllability of fractional-order dynamical networks for a given time horizon T < n
1: Input: fractional-order dynamical network (ᾱ, Ā, T) and network size n
2: Output: the set of state variables J ⊆ {1, …, n}
3: Initialization: compute G(Ā_0) from the fractional-order dynamical network (ᾱ, Ā, T) and network size n
4: Step 1: Using METIS [142], partition the digraph G(Ā_0) into ⌈n/T⌉ partitions denoted by G_i = (V_i, E_i), where each partition is roughly equal sized, i.e., |V_i| ≤ T.
5: Step 2: Find all the source SCCs S_{i,j} in each partition G_i, where j indexes the source SCCs of subgraph G_i.
6: Step 3: Add one state from each source SCC S_{i,j} to the set J.

Next, we provide a lower bound on the optimal solution to P2.

Theorem 17. The minimum number of driven nodes d required to solve P2 for a given time horizon T satisfies

d ≥ ⌈n/T⌉.   (3.7)

Proof. When partitioning the graph into subgraphs, we ensure each subgraph has |V_i| ≤ T. Therefore, there are at least ⌈n/T⌉ subgraphs.
Each subgraph can have as few as one source SCC, so the lower bound on the number of driven nodes equals the minimum number of subgraphs, i.e., ⌈n/T⌉.

Finally, we present the computational-time complexity of Algorithm 2.

Theorem 18. The computational-time complexity of Algorithm 2 is O(n² log(n)).

Proof. The complexity of this sequential algorithm is determined by the step with the maximum computational-time complexity. The initialization step has a complexity of O(n²), since we construct the network from its adjacency matrix Ā_0. Step 1 has a computational-time complexity of O(n² log(n)) [143]. Step 2 has a computational-time complexity of O(n²) [57]. Step 3 has a computational-time complexity of O(n), since we select a single node out of all the nodes in a source SCC, which could contain up to n nodes. Hence, Step 1 has the largest computational-time complexity, which dictates the overall complexity of the algorithm, and the result follows.

3.2 On the Effects of the Minimum Number of Actuators

We performed experiments on both synthetic and real-world networks. In each experiment, we find the difference between the required number of driven nodes (n_T) for the Markov dynamics and for the long-term memory dynamics at a given time-to-control. Since the available time-to-control varies with the size of a particular network, we express the time-to-control as a percentage by dividing it by the size of the network.

3.2.1 Long-Term Memory Dynamical Networks Require Fewer Driven Nodes Than Markov Counterparts

Our experimental results confirm our theoretical results that long-term power-law memory dynamical networks require equal or fewer driven nodes than Markov dynamical networks having the same structure. In all of the experiments, we find that the difference in the required number of driven nodes (n_T) is non-negative.
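For concreteness, the driven-node set for the long-term memory dynamics at T = n follows the source-SCC construction of Theorem 12: decompose the state digraph into SCCs, keep the SCCs with no incoming edges from other SCCs, and actuate one state per source SCC. A minimal sketch of that computation (plain Python with Kosaraju's algorithm; all function names are ours, not the dissertation's code):

```python
def sccs(n, edges):
    """Strongly connected components of a digraph on nodes 0..n-1
    (iterative Kosaraju: DFS finish order, then DFS on the transpose)."""
    adj = [[] for _ in range(n)]; radj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); radj[v].append(u)
    def dfs(u, g, seen, out):
        stack = [(u, iter(g[u]))]; seen[u] = True
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                stack.pop(); out.append(v)
            elif not seen[w]:
                seen[w] = True; stack.append((w, iter(g[w])))
    order, seen = [], [False] * n
    for u in range(n):
        if not seen[u]:
            dfs(u, adj, seen, order)
    seen, comps = [False] * n, []
    for u in reversed(order):
        if not seen[u]:
            c = []; dfs(u, radj, seen, c); comps.append(c)
    return comps

def min_driven_nodes(n, edges):
    """One driven node per source SCC of the state digraph (Theorem 12)."""
    comps = sccs(n, edges)
    comp_of = {v: i for i, c in enumerate(comps) for v in c}
    has_incoming = set()
    for u, v in edges:
        if comp_of[u] != comp_of[v]:
            has_incoming.add(comp_of[v])
    return [c[0] for i, c in enumerate(comps) if i not in has_incoming]

# Two source SCCs: the cycle {0, 1} and the self-looped node 4; nodes 2 and 3
# are reachable from them, so only two driven nodes are needed.
edges = [(0, 1), (1, 0), (1, 2), (2, 3), (4, 4), (4, 3)]
print(sorted(min_driven_nodes(5, edges)))  # one node from {0, 1}, plus node 4
```

Counting driven nodes this way for the fractional-order dynamics, versus the stricter linear time-invariant conditions for the Markov dynamics, yields the non-negative differences reported throughout this section.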
In fact, the experiments for the Erdős–Rényi networks (Figures 3.3 (a), (b), and (c)) show that the average (over 100 random networks) difference in the number of driven nodes generally scales as a power-law with the time-to-control. For the Barabási–Albert networks (Figures 3.3 (d), (e), and (f)), the average difference in the number of driven nodes initially scales as a power-law but then quickly drops to nearly zero as the time-to-control increases. Finally, the average difference in the number of driven nodes for the Watts–Strogatz networks (Figures 3.3 (g), (h), and (i)) initially scales as a power-law and then levels out to a linear relationship towards the final time-to-control. The results in Figure 3.3 show that, depending on the network topology, a Markov dynamical network may require up to a power-law factor more driven nodes to achieve structural controllability than a long-term memory dynamical network having the same spatial structure.

For several real-world dynamical networks, we notice similar trade-offs between the difference in the number of driven nodes and the time-to-control; see Figure 3.6 (a). We note that we consider the percent difference in the number of driven nodes by dividing the difference in the number of driven nodes by the size of the network. We find that the rat brain network [134] exhibits approximately a 60% difference in the minimum number of driven nodes, achieved at around 20% of the time-to-control (Figure 3.6 (a)). These results are significant considering that the rat brain network has 503 total nodes. The degree distribution and clustering coefficient of the rat brain network may play a role in achieving such a high difference in the required number of driven nodes. The Caenorhabditis elegans (C.
elegans) network [136, 137], which has 277 nodes, exhibits approximately a 12% difference in the number of driven nodes at around 20% of the time-to-control

Figure 3.3: These figures show the relationship between the average difference (over 100 networks) in the required number of driven nodes across the time-to-control for different types of synthetic networks with different sizes and parameters. For network sizes 250, 500, and 1000, respectively, (a), (b), and (c) show the log-log plot of the average difference in the required number of driven nodes (n_T) versus the time-to-control (%) for 100 realizations of Erdős–Rényi networks with various edge parameters (300, 375, and 500 edges for N = 250; 800, 1000, and 1200 edges for N = 500; 1200, 1500, and 2000 edges for N = 1000).
For network sizes 250, 500, and 1000, respectively, (d), (e), and (f) show the log-log plot of the average difference in the required number of driven nodes (n_T) versus the time-to-control (%) for 100 realizations of Barabási–Albert networks with various k parameters (k = 2, 5, 10). For network sizes 250, 500, and 1000, respectively, (g), (h), and (i) show the log-log plot of the average difference in the required number of driven nodes (n_T) versus the time-to-control (%) for 100 realizations of Watts–Strogatz networks with various p parameters (p = 0.2, 0.5, 0.8).

Figure 3.4: These figures plot the average difference (over 100 networks) in the required number of driven nodes across the time-to-control for different types of synthetic networks having different sizes and parameter values. For network sizes 250, 500, and 1000, respectively, (a), (b), and (c) show the average difference in the required number of driven nodes (n_T) across the number of edges in the network versus the time-to-control (%) for 100 realizations of Erdős–Rényi networks. For network sizes 250, 500, and 1000, respectively, (d), (e), and (f) show the average difference in the required number of driven nodes (n_T) across the k parameter versus the time-to-control (%) for 100 realizations of Barabási–Albert networks. For network sizes 250, 500, and 1000, respectively, (g), (h), and (i) show the average difference in the required number of driven nodes (n_T) across the p parameter versus the time-to-control (%) for 100 realizations of Watts–Strogatz networks.

Figure 3.5: These figures plot the average difference (over 100 networks) in the required number of driven nodes across the time-to-control for different types of synthetic networks having different sizes and parameter values.
For network sizes 250, 500, and 1000, respectively, (a), (b), and (c) show the average difference in the required number of driven nodes (n_T) along the size of the network versus the time-to-control (%) for 100 realizations of Erdős–Rényi networks for a fixed set of parameters. For network sizes 250, 500, and 1000, respectively, (d), (e), and (f) show the average difference in the required number of driven nodes (n_T) along the size of the network versus the time-to-control (%) for 100 realizations of Barabási–Albert networks for a fixed set of parameters k = 2, 5, 10. For network sizes 250, 500, and 1000, respectively, (g), (h), and (i) show the average difference in the required number of driven nodes (n_T) along the size of the network versus the time-to-control (%) for 100 realizations of Watts–Strogatz networks for a fixed set of parameters p = 0.2, 0.5, 0.8.

(Figure 3.6 (a)). However, the percent difference in the number of driven nodes decreases as the time-to-control increases in the C. elegans network. The power network [135], with only 60 nodes,

Figure 3.6: These figures show the relationship between the percent difference in the number of driven nodes across the time-to-control (%) and the multi-fractal spectrum of four real-world networks, including a power network (60 nodes), a rat brain network [134] (503 nodes), a C. elegans network [136, 137] (277 nodes), and a macaque brain network [138, 139] (71 nodes).
(a) shows the plot of the percent difference in the required number of driven nodes (n_T) versus the percent of time-to-control (%) for several real-world networks. (b) shows the plot of the multi-fractal spectrum f(α) versus the Lipschitz–Hölder exponent α for the same networks. (c) shows the spectrum of the difference in the required number of driven nodes (n_T) compared with the multi-fractal spectrum width and the time-to-control (%) for the same four real-world networks. (d) shows the spectrum of the difference in the required number of driven nodes (n_T) compared with the multi-fractal spectrum height and the time-to-control (%) for the same four real-world networks.

has approximately an 8% difference in the number of driven nodes at the final time-to-control (Figure 3.6 (a)). Lastly, the macaque brain network [138, 139], containing 71 regions (and, subsequently, 71 nodes), gives less dramatic results, with only a peak of a 5% difference in the number of driven nodes, achieved at 25% of the time-to-control (Figure 3.6 (a)). These real-world networks show the capability of saving up to 91% of resources as early as 20% of the total time-to-control when controlling large-scale networks exhibiting long-term memory dynamics.

3.2.2 Network Topologies, Not Size, Determine the Control Trends

We generated 100 random networks having the same size and synthetic parameters and found the average difference in the number of driven nodes (n_T) versus the time-to-control (see Figure 3.3). The results in Figure 3.3 show that the same overall trend occurs for each of the three types of random networks (Erdős–Rényi, Barabási–Albert, and Watts–Strogatz), independent of the network size. These results suggest that the network topology affects the controllability significantly more than other network attributes, such as the size of the network. We examine the effect of the parameters of the three types of random networks on the required number of driven nodes and the time-to-control.
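The horizon-limited counts n_T compared in these experiments are produced by Algorithm 2. As a rough stand-in for that pipeline (we substitute a naive contiguous partition for METIS purely for illustration, so the resulting blocks differ from METIS output; all names are ours), the sketch below partitions the nodes into blocks of at most T, then actuates one state per source SCC of each block, and checks the lower bound of Theorem 17:

```python
def kosaraju_source_sccs(nodes, edges):
    """Source SCCs (no incoming edges from other SCCs) of the sub-digraph
    induced by `nodes`, via iterative Kosaraju."""
    nodes = list(nodes); idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    E = [(idx[u], idx[v]) for u, v in edges if u in idx and v in idx]
    adj = [[] for _ in range(n)]; radj = [[] for _ in range(n)]
    for u, v in E:
        adj[u].append(v); radj[v].append(u)
    def dfs(u, g, seen, out):
        stack = [(u, iter(g[u]))]; seen[u] = True
        while stack:
            v, it = stack[-1]
            w = next(it, None)
            if w is None:
                stack.pop(); out.append(v)
            elif not seen[w]:
                seen[w] = True; stack.append((w, iter(g[w])))
    order, seen = [], [False] * n
    for u in range(n):
        if not seen[u]:
            dfs(u, adj, seen, order)
    seen, comps = [False] * n, []
    for u in reversed(order):
        if not seen[u]:
            c = []; dfs(u, radj, seen, c); comps.append(c)
    comp_of = {v: i for i, c in enumerate(comps) for v in c}
    sources = set(range(len(comps)))
    for u, v in E:
        if comp_of[u] != comp_of[v]:
            sources.discard(comp_of[v])
    return [[nodes[v] for v in comps[i]] for i in sorted(sources)]

def algorithm2_sketch(n, edges, T):
    """Steps 1-3 of Algorithm 2 with a naive contiguous partition in place
    of METIS: chop {0,...,n-1} into blocks of <= T nodes, then actuate one
    state per source SCC of each block."""
    J = []
    for start in range(0, n, T):
        block = range(start, min(start + T, n))
        for src in kosaraju_source_sccs(block, edges):
            J.append(src[0])
    return J

# Directed ring of 6 nodes, horizon T = 3: two blocks {0,1,2} and {3,4,5};
# each induced block is a directed path whose single source SCC is one node.
ring = [(i, (i + 1) % 6) for i in range(6)]
J = algorithm2_sketch(6, ring, 3)
print(sorted(J))                 # [0, 3]
assert len(J) >= -(-6 // 3)      # Theorem 17: d >= ceil(n / T)
```

This toy run also illustrates Theorem 17: with n = 6 and T = 3, at least ⌈6/3⌉ = 2 driven nodes are required, and the sketch returns exactly two.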
We notice similar trends for the Erdős–Rényi and Watts-Strogatz networks and drastically different results for the Barabási–Albert networks (Figure 3.4). The results in Figure 3.4 suggest that varying the parameters of the Erdős–Rényi and Watts-Strogatz networks, in general, gives similar behavior for the difference in the number of driven nodes across the time-to-control. However, for the Barabási–Albert network, the parameters greatly impact the difference in the number of driven nodes across the time-to-control. For example, on the one hand, a lower k parameter in the Barabási–Albert network gives a higher difference in the number of driven nodes at higher time-to-control values. On the other hand, a higher k parameter gives a lower difference in the number of driven nodes at higher time-to-control values (Figure 3.4). These results provide further evidence that the topology of the network has a significant influence on the controllability of the network. We seek to understand the effect on the average difference in the required number of driven nodes as both the size of the network and the time-to-control increase. For the same three sets of parameters, we generate 100 random networks and find the average difference in the required number of driven nodes. We notice that the difference in the required number of driven nodes increases as both the size of the network and the time-to-control increase. The results in Figure 3.5 suggest that the difference in the number of driven nodes scales proportionally with the number of nodes in the network for the Erdős–Rényi and Watts-Strogatz networks even across different parameters, whereas the difference in the number of driven nodes for the Barabási–Albert network is heavily dependent on the parameter k.
These results support the claim that the network topology plays a relevant role in determining the controllability of the network, whereas the network size does not seem to play a relevant role.

3.2.3 Multi-Fractal Spectrum Dictates Savings When Controlling Long-Term Memory Dynamical Networks

The multi-fractal spectrum of a network gives a complete characterization of that network [144, 145]. We investigate the relationship between the multi-fractal spectrum and the difference in the required number of driven nodes in several real-world networks, namely the rat brain [134], C. elegans [136, 137], macaque brain [138, 139], and power networks [135]. We pay particular attention to the width and height of the spectrum. The results indicate that, in general, the wider and higher the multi-fractal spectrum, the larger the percent difference in the number of driven nodes (see Figures 3.6 (a) and (b)). Figures 3.6 (c) and (d) plot the relationship between the multi-fractal spectrum height and width, respectively, the time-to-control, and the percent difference in the number of driven nodes for the four real-world networks. These figures, Figures 3.6 (c) and (d), support the notion that, in general, the greater the width and height of the multi-fractal spectrum, the greater the difference in the number of driven nodes for nearly all time-to-control horizons. Hence, the greater the multi-fractal spectrum width and height, the greater the savings in the number of driven nodes when controlling long-term memory dynamical networks. Therefore, the multi-fractal spectrum is an indicator of the total savings of driven nodes that can be achieved when controlling long-term memory dynamical networks.
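To illustrate how the width and height of a multi-fractal spectrum can be extracted, the sketch below applies the standard numerical Legendre transform to samples of a mass exponent τ(q): α(q) = dτ/dq and f(α) = qα − τ(q). This is a generic textbook construction under assumed inputs, not the specific spectrum estimator used in the thesis:

```python
def spectrum_from_tau(qs, taus):
    """Numerical Legendre transform of the mass exponent tau(q):
    alpha(q) = d tau / d q (central differences), f(alpha) = q*alpha - tau(q).
    qs must be sorted; endpoints are dropped by the central-difference stencil."""
    alphas, fs = [], []
    for i in range(1, len(qs) - 1):
        alpha = (taus[i + 1] - taus[i - 1]) / (qs[i + 1] - qs[i - 1])
        alphas.append(alpha)
        fs.append(qs[i] * alpha - taus[i])
    return alphas, fs

def spectrum_width_height(alphas, fs):
    """Width = spread of Lipschitz-Holder exponents, height = peak of f(alpha)."""
    return max(alphas) - min(alphas), max(fs)
```

For a monofractal with τ(q) = d·q − d, the spectrum collapses to the single point (α, f) = (d, d), i.e., width 0 and height d; a genuinely multi-fractal network yields a curved spectrum with positive width.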
3.2.4 Fewer Required Driven Nodes, Dictated by Network Topology and Multi-Fractal Spectrum

Following from the results of Theorem 7, we proved that a long-term power-law memory dynamical network requires at most as many driven nodes as the same network possessing Markov dynamics. In our experimental results, we showed that long-term power-law memory dynamical networks can provide a significant advantage in terms of the resources for controlling networks. In particular, the results in Figure 3.3 showed that, depending on the network topology, a Markov dynamical network requires up to a power-law factor more driven nodes to achieve structural controllability than a long-term power-law memory dynamical network having the same spatial structure. Long-term power-law memory dynamical networks can require 91% fewer driven nodes at 20% of the time-to-control compared to Markov dynamical networks. More specifically, the rat brain network showed a 60% difference in the minimum number of driven nodes at approximately 20% of the time-to-control (Figure 3.6 (a)). Hence, we provided evidence that long-term power-law memory dynamical networks may be easier to control than Markov dynamical networks. Our experimental results showed that the network topology, not the size of the network, determined the overall trends of the minimum resources needed for control over the time-to-control (see Figures 3.3, 3.4, and 3.5). The results in Figure 3.3 showed that the same overall trend occurs for each of the three types of random networks (Erdős–Rényi, Barabási–Albert, and Watts-Strogatz) independent of the network size. The results in Figure 3.4 suggest that varying the parameters of the Erdős–Rényi, Barabási–Albert, and Watts-Strogatz networks produced higher variation in the difference in the minimum number of driven nodes across the time-to-control than varying the size of the network.
Finally, the results in Figure 3.5 showed that the difference in the minimum number of driven nodes scaled proportionally with the number of nodes in the network for the Erdős–Rényi and Watts-Strogatz networks even across different parameters, whereas the difference in the minimum number of driven nodes for the Barabási–Albert network is heavily dependent on the parameter k. Lastly, we find that the height and width of the multi-fractal spectrum serve as an indication of the total savings in the minimum number of driven nodes for a given network, when considering long-term memory dynamics over Markov dynamics (see Figure 3.6). In particular, Figures 3.6 (c) and (d) support the notion that, in general, the greater the width and height of the multi-fractal spectrum, the greater the difference in the minimum number of driven nodes for nearly all time-to-control horizons.

3.2.5 A Scalable and Robust Framework to Determine the Trade-Offs Between the Minimum Number of Driven Nodes and Time-To-Control for Controlling Large-Scale Long-Term Power-Law Memory Dynamical Networks

The work in [32] characterized the minimum number of driven nodes for controlling Markov networks and used these results to provide algorithms to design controllable Markov networks. Liu et al. showed the relationship between the required number of driven nodes in Markov networks and the average degree distribution for different network topologies [146]. The work in [147] showed the importance of nodal dynamics in determining the minimum number of inputs (i.e., an input can be connected to multiple nodes) to achieve structural controllability in the case of Markov networks. Recent work by Lin et al. claims to control Markov networks with only two time-varying control inputs under the assumption that these sources can be connected to any node at any time; however, this is not a realistic solution in the case of many physical systems, such as the power grid or the brain [148].
Most notably, though, none of these previously mentioned works take into account the long-term memory dynamics exhibited in many real-world complex dynamical networks. Furthermore, only the work in [149] has analyzed the trade-offs between the time-to-control of a dynamical network and the required number of driven nodes, but this study is limited to only considering Markov dynamical networks. Here, we have examined the effect that long-term memory dynamics plays on the ability to control the network while considering the time-to-control. There are only two works [53, 54] that have examined the control of networks exhibiting long-term memory dynamics, but they have several shortcomings, including scalability and lack of robustness. For example, the work in [53], which uses energy-based methods to control long-term memory dynamics modeled as a linear time-invariant fractional-order system in discrete time, is not tractable, as it has a computational-time complexity of O(n^7.748). Similarly, the work in [54], which uses a greedy algorithm that maximizes the rank of the controllability matrix, relies on the computation of the matrix rank, so it quickly becomes intractable as a network grows in size, and has a computational-time complexity of O(n^5). Furthermore, both of these works require that the precise dynamics are known [53, 54]. Simply speaking, these works do not account for the inherent uncertainty in the parametric model for long-term memory dynamics. Hence, we present the first scalable and robust framework to determine the trade-offs between the minimum number of driven nodes and time-to-control for controlling large-scale long-term power-law memory dynamical networks with a computational-time complexity of O(n^2 log(n)) – see Theorem 14.
Our novel framework assesses the trade-offs in controlling a large-scale long-term memory dynamical network with unknown parameters in a given number of time steps, which leads us to provide answers to fundamental questions regarding the relationships between the number of driven nodes, the time-to-control, the network topology, and the size of the network.

3.2.6 Determining the Existence of Long-Term Memory in Dynamical Networks

Our framework, which assesses the trade-offs between the minimum number of driven nodes and the time-to-control for controlling long-term power-law dynamical networks, can determine the existence of long-term memory in dynamical networks. We can achieve this by considering whether a network can be controlled with the minimum number of driven nodes given by the proposed framework. For example, suppose that we want to control a dynamical network in a finite amount of time, where we assume that we know the structure of the network, including its size (i.e., number of nodes) and the relationship between nodes in the network (i.e., the edges and their placement in the network), but we do not know the dynamics of the system. In this case, our framework can be used to determine whether the dynamical network possesses long-term memory by first trying to control the network with the minimum number of driven nodes in a specified time frame, which can be computed using our novel framework. If the dynamical network can indeed be steered to a desired behavior with the minimum number of driven nodes in a given time-to-control as computed by our novel framework, then the dynamical network indeed possesses long-term power-law memory dynamics. On the other hand, if the network cannot be controlled with the minimum number of driven nodes in a specified time frame as computed using our novel framework, then the network must not possess long-term power-law memory dynamics, as it requires more driven nodes to control the network.
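To make the contrast between Markov and long-term power-law memory dynamics concrete, the sketch below simulates one common discrete-time fractional-order model from the literature, Δ^{α_i} x_i[k+1] = (A x[k])_i, expanded through Grünwald–Letnikov weights. The model form and conventions are assumptions for illustration (not necessarily the exact model of this thesis); note that with α_i = 1 the weights vanish beyond one step and the memoryless update x[k+1] = (A + I) x[k] is recovered, whereas fractional α_i yields weights decaying as a power law in the lag:

```python
def gl_coeff(alpha, j):
    """Grunwald-Letnikov weight psi(alpha, j) = (-1)^j * binom(alpha, j),
    via the stable recursion psi(alpha, j) = psi(alpha, j-1) * (1 - (alpha+1)/j)."""
    psi = 1.0
    for i in range(1, j + 1):
        psi *= 1.0 - (alpha + 1.0) / i
    return psi

def simulate_fractional(A, alphas, x0, steps):
    """Simulate x[k+1] = A x[k] - sum_{j=1}^{k+1} psi(alpha_i, j) x_i[k+1-j]:
    each node i carries power-law memory of its own past with exponent alpha_i.
    (The full history is kept here; truncating the sum trades accuracy for speed.)"""
    n = len(x0)
    xs = [list(x0)]
    for k in range(steps):
        x_next = [sum(A[i][j] * xs[-1][j] for j in range(n)) for i in range(n)]
        for i in range(n):
            for j in range(1, k + 2):
                x_next[i] -= gl_coeff(alphas[i], j) * xs[k + 1 - j][i]
        xs.append(x_next)
    return xs
```

As a sanity check, psi(1, j) = 0 for all j ≥ 2, confirming that α = 1 has no long-term memory, while |psi(0.5, j)| decays slowly and monotonically, so distant past states keep influencing the update.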
This is a powerful result that provides a significant advantage over current state-of-the-art methods to determine the existence of long-term memory dynamics in large-scale networks, which rely on first finding the exact parameterization of the known dynamics of the system from data [7, 112, 114, 115]. In particular, these approaches estimate the two parameters of the long-term power-law dynamical networks, namely the fractional-order exponents and the spatial matrix. Investigating whether the estimated values of the fractional-order exponents are indeed fractional inherently determines the existence of long-term memory in the dynamics (see Figure 3.2). There are several works that have proposed methods to estimate the parameters of long-term power-law dynamical networks. The work in [7] proposes an approximate approach to estimate the fractional-order exponents and, based on this result, finds the spatial matrix using a least-squares approach. The work in [116] leverages wavelets to find the fractional-order exponents. Using this wavelet approach, the work in [112] proposes an expectation-maximization framework to estimate the spatial matrix and the unknown inputs of a fractional-order system. In a similar vein, the work in [114] proposes a method to estimate from data the parameters of a fractional-order system having partially unknown states. Finally, the work in [115] considers a finite-sized augmented fractional-order system and proposes an iterative algorithm to find the fractional-order exponents and a least-squares approach to find the spatial matrix. While these works rely on computing the parameters of long-term power-law dynamical networks from data, the advantage of our proposed framework is that it can determine the existence of long-term memory dynamics in large-scale networks without knowing the exact dynamics of the system.
3.2.7 Determining the Effects of the Degree Distribution on the Minimum Number of Actuators

We investigate the relationship between the average degree and the average difference in the required number of driven nodes for the three random networks. The results are shown in Figure 3.7. With the exception of the Watts-Strogatz networks, which have the same degree for each of the generated networks, the required number of driven nodes stays relatively similar as the average degree of the network increases. We examine the rat brain network since this gave the highest difference in the required number of driven nodes. In particular, we examine the degree distribution and clustering coefficient distribution for the rat brain network to gain insight as to why this network gives such a significant improvement in the required number of driven nodes when considering the fractional-order dynamical network model – see Figure 3.8 (b) and (c). We notice that the rat brain network has a wide range of degrees and a fairly high clustering coefficient. We conjecture that these properties play a role in achieving a high difference in driven nodes.

Figure 3.7: (a), (b), and (c) show the average difference in the required number of driven nodes (n_T) for networks of varying average degree distributions versus the time-to-control (%) for 100 realizations of Erdős–Rényi networks for a fixed set of parameters. (d), (e), and (f) show the average difference in the required number of driven nodes (n_T) for networks of varying average degree distributions versus the time-to-control (%) for 100 realizations of Barabási–Albert networks for a fixed set of parameters k = 2, 5, 10.
(g), (h), and (i) show the average difference in the required number of driven nodes (n_T) for networks of varying average degree versus the time-to-control (%) for 100 realizations of Watts-Strogatz networks for a fixed set of parameters p = 0.2, 0.5, 0.8. We notice that the average degree is the same for all of these networks.

Using the progressive Chung–Lu method developed in [150], we generate 100 networks that are on average similar in degree distribution to the rat brain network. From the results in Figure 3.8 (a), we see that the mean and standard deviation of the difference in driven nodes for the generated networks drastically differ from the results for the original rat brain network.

Figure 3.8: (a) shows the mean and standard deviation of the difference in driven nodes versus time-to-control for 100 networks generated from the rat brain network following the progressive Chung–Lu method [150]. (b) shows the degree distribution for the original rat brain network. (c) shows the clustering coefficient distribution for the original rat brain network.

As a way to understand why we see this drastic difference in results, we performed the Spearman Rank Test (Chapter 8.5, [151]), which tests whether any two real-valued vectors of equal length are independent. In particular, the null hypothesis states that the two vectors are indeed independent. Hence, if the p-value is large, then the null hypothesis cannot be rejected, whereas if the p-value is small, then the null hypothesis is rejected. For each of the 100 generated networks, we compare the distribution of the in-degree, out-degree, and total degree for the generated network with those for the rat brain network.
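The Spearman rank correlation that underlies this test can be computed in a few lines of pure Python: rank each vector (ties receive their average rank) and take the Pearson correlation of the rank vectors. This is a generic implementation of the statistic only (the p-value, which requires the null distribution, is omitted), not the code used in the thesis:

```python
def _ranks(v):
    """1-based average ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    ranks = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1                                  # extend the tie block
        avg = (i + j) / 2 + 1                       # mean rank over the tie block
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A value of rho near ±1 indicates a strong monotone association between the two degree sequences, while rho near 0 is consistent with the independence null hypothesis.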
First, we provide the results of the Spearman Rank Test when considering the in-degree distribution. With 99% confidence, only 55% of the generated networks have vertex in-degree distributions that are independent of the vertex in-degree distribution for the rat brain network. With 95% confidence, 80% of the generated networks have vertex in-degree distributions that are independent of the vertex in-degree distribution for the rat brain network. With 90% confidence, 87% of the generated networks have vertex in-degree distributions that are independent of the vertex in-degree distribution for the rat brain network. Therefore, we can say with high confidence that most of the generated networks have in-degree distributions that are independent of the in-degree distribution of the rat brain network. This may provide an explanation as to why the number of driven nodes needed for the generated networks differs drastically from the required number for the rat brain network.

Next, we provide the results of the Spearman Rank Test when considering the out-degree distribution. With 99% confidence, only 26% of the generated networks have vertex out-degree distributions that are independent of the vertex out-degree distribution for the rat brain network. With 95% confidence, 52% of the generated networks have vertex out-degree distributions that are independent of the vertex out-degree distribution for the rat brain network. With 90% confidence, 63% of the generated networks have vertex out-degree distributions that are independent of the vertex out-degree distribution for the rat brain network. Surprisingly, we can say with high confidence that very few of the generated networks have out-degree distributions that are independent of the out-degree distribution of the rat brain network.

Finally, we provide the results of the Spearman Rank Test when considering the total degree distribution.
With 99% confidence, 70% of the generated networks have total degree distributions that are independent of the total degree distribution for the rat brain network. With 95% confidence, 94% of the generated networks have total degree distributions that are independent of the total degree distribution for the rat brain network. With 90% confidence, 94% of the generated networks have total degree distributions that are independent of the total degree distribution for the rat brain network. Therefore, we can say with high confidence that more than 70% of the generated networks have total degree distributions that are independent of the total degree distribution of the rat brain network. This provides evidence to support that the number of driven nodes needed for the generated networks would differ drastically from the required number of driven nodes for the rat brain network.

3.3 Conclusions and Future Works

In this section, we presented a novel framework that can assess the trade-offs in controlling a large-scale long-term memory dynamical network with unknown parameters in a given amount of time. This has important implications in a wide range of applications spanning cyber-physical systems [7], biological networks [6], and power networks [1]. Our newly proposed framework enables us to find the minimum number of driven nodes (i.e., the state vertices in the network that are connected to one and only one input) and their placement to control a long-term power-law memory dynamical network given a specific time horizon, which we define as the 'time-to-control'. Remarkably, we provide evidence that long-term power-law memory dynamical networks require considerably fewer driven nodes to steer the network's state to a desired goal for any given time-to-control as compared with Markov dynamical networks. Finally, our framework can be used as a tool to determine the existence of long-term memory dynamics in networks.
In future work, we aim to design a framework that can account for unknown inputs. Furthermore, we seek to understand the trade-offs between the number of required actuators and the required control energy to effectively control the dynamical network.

Chapter 4 Measuring Complex Dynamical Networks

We are interested in understanding the behavior of complex dynamical systems. To do so, scientists and engineers develop models of systems by describing the nature of their dynamics and the environment in which they interact. One powerful tool to model complex switching dynamics is to adopt a switched linear time-invariant framework. This model assumes that the system under scrutiny transitions between different (yet known) linear time-invariant dynamics, where such transitions are discrete in nature and are captured by a switching signal for which the sequence of the switches may not be known a priori. Examples of such systems include the electric power grid [152], where the change in dynamics may be dictated by a faulty transmission line [153, 154], or a multi-agent system [155, 156], where the dynamics may change due to a loss in communication among agents. However, when modeling a process, it is common to neglect the fact that the interaction of a dynamical system with its environment introduces errors. We can describe these external environmental errors as unknown inputs entering the dynamical system. For instance, in the power grid, the generated power and/or the customer demand behave as unknown inputs. Similarly, in multi-agent robotic systems, particularly in surface vehicles, friction behaves as an unknown input, whereas in the context of unmanned vehicles, airflow or ocean currents act as unknown inputs. An alternate scenario is in networked systems, where the unknown input is due to the interconnections with the remaining hidden network [157, 158, 159, 160, 161, 113].
As is evident in the previously mentioned examples, a recurring practice in control engineering is to model the unknown inputs in a latent space that can capture the main features of the incoming signal but does not model the system from which the unknown input originates. Monitoring such switched linear time-invariant systems under unknown inputs requires us to assess both the state and the inputs by guaranteeing that the system is state and input observable [162]. Often, however, we cannot accurately know the parameters of the system. Moreover, even if the parameters are known, the study of controllability and/or observability properties leads to NP-hard problems [163]. Hence, we assume that only the structure of the system is known, meaning that a system parameter is either zero or could take on any real scalar value [33]. In this context, we can rely on the notion of structural state and input observability, which yields state and input observability for almost all system parameterizations. Previous work has provided the necessary and sufficient conditions to ensure structural state and input observability for discrete-time systems under unknown inputs [164]. Nonetheless, the counterpart for continuous-time switched linear time-invariant systems under unknown inputs was only studied in [37] and [38, 39]. In particular, [37, 39] consider the graph-theoretic necessary and sufficient conditions for generic discrete-mode observability of a continuous-time switched linear system with unknown inputs and propose a computational method to verify such conditions with a complexity of O(n^6), where n is the number of states. The works [38, 39] present sufficient conditions for the generic observability of the discrete mode of continuous-time switched linear systems with unknown inputs and find an exhaustive location set to place sensors when these conditions are not satisfied, with a computational complexity of O(n^4).
However, none of these works considered the minimum number of required sensors and their placement to guarantee structural state and input observability, as we consider in this work. This problem is important in designing control schemes for large-scale systems and is often referred to as minimum sensor placement. While this problem has been studied for a variety of systems [32], to the best of the authors' knowledge, it has not been studied in the context of continuous-time switched linear time-invariant systems under unknown inputs.

The main contributions are as follows. We first provide necessary and sufficient conditions for structural state and input observability of continuous-time switched linear time-invariant systems under unknown inputs. Moreover, we can verify these conditions in O((m(n + p))^2), where n is the number of state variables, p is the number of unknown inputs, and m is the number of modes. Furthermore, we address the minimum sensor placement for these systems using a feed-forward analysis and an algorithm with a computational complexity of O((m(n + p) + α)^2.373), where the n × n matrix multiplication algorithm with the best asymptotic complexity runs in O(n^ς), with ς ≈ 2.3728596 [165], and where α is the number of target strongly connected components of the system's digraph representation. Finally, we provide a real-world example from power systems to illustrate our results.

4.1 Minimum Sensor Placement Problem Formulation

Here, we consider a continuous-time switched linear time-invariant (LTI) system with (unknown) inputs that can be described as follows:

ẋ(t) = A_{σ(t)} x(t) + F_{σ(t)} d(t),   (4.1a)
ḋ(t) = Q_{σ(t)} d(t),   (4.1b)
y(t) = C_{σ(t)} x(t) + D_{σ(t)} d(t),   (4.1c)

where x(t) ∈ R^n is the state, d(t) ∈ R^p represents the unknown inputs, y(t) ∈ R^n is the output, and σ(t) : [0, ∞) → M ≡ {1, ..., m} is the unknown switching signal.
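For intuition about how trajectories of (4.1) evolve under a given switching signal, the following is a minimal forward-Euler simulation sketch. It is an illustration only (the analysis in this chapter is structural and requires no simulation); the function name, the per-step switching signal, and the step size are assumptions of this sketch:

```python
def simulate_switched(modes, sigma, x0, d0, dt, steps):
    """Forward-Euler integration of the switched system (4.1):
       x' = A_k x + F_k d,  d' = Q_k d,  y = C_k x + D_k d,  with k = sigma(step).
    modes[k] = (A, F, Q, C, D) as nested lists; returns the output sequence y."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
    def axpy(a, u, v):          # elementwise u + a * v
        return [ui + a * vi for ui, vi in zip(u, v)]
    x, d, ys = list(x0), list(d0), []
    for k in range(steps):
        A, F, Q, C, D = modes[sigma(k)]
        ys.append(axpy(1.0, matvec(C, x), matvec(D, d)))   # y = C x + D d
        x = axpy(dt, x, axpy(1.0, matvec(A, x), matvec(F, d)))  # x += dt*(A x + F d)
        d = axpy(dt, d, matvec(Q, d))                            # d += dt*(Q d)
    return ys
```

For example, a single mode with A = 0, F = 1, Q = 0 integrates the constant unknown input: the output ramps linearly, which is exactly the kind of disturbance signature a state-and-input observer must be able to reconstruct.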
System (4.1) contains m possible known subsystems, also known as modes, which we denote by the tuple (A_k, F_k, Q_k, C_k, D_k), where σ(t) = k ∈ M. Lastly, we implicitly assume that the dwell time of each mode is greater than zero. In what follows, we seek to assess and determine the minimum sensor placement that ensures state and input observability for the continuous-time switched LTI system with unknown inputs in (4.1).

Definition 19 (State and Input Observability [166]). The switched LTI system described by (A_{σ(t)}, F_{σ(t)}, Q_{σ(t)}, C_{σ(t)}, D_{σ(t)}, σ(t); T_f) is said to be state and input observable for a time horizon T_f if and only if the initial state x(t_0) and the unknown inputs d(t), where t ∈ [t_0, T_f], can be uniquely determined, given (A_{σ(t)}, F_{σ(t)}, Q_{σ(t)}, C_{σ(t)}, D_{σ(t)}, σ(t); T_f) and measurements y(t) (t_0 ≤ t ≤ T_f). ◦

We focus on the sensor placement problem. For the sake of simplicity, we assume that the measurements take the following form

y(t) = C x(t) + D d(t).   (1c')

Simply speaking, we assume that the output and feed-forward matrices are the same across all modes. Notice that this assumption can be waived, as we discuss in the following Remark 20.

Remark 20. We can consider a fixed set of measurements represented by C and D without loss of generality, since taking the union of the measurements made in different modes, represented by C_{σ(t)} and D_{σ(t)}, will result in the total set represented by C and D. ⋄

We assume that each sensor is dedicated, meaning that each sensor can measure only one state or only one input. Considering an arbitrary set of sensors would lead to an NP-hard problem, as is the case for linear time-invariant systems [32]. We state this formally in the following assumption.

A1 The output matrix and feed-forward matrix are written as C = I_n^{J_x} and D = I_p^{J_d}, where I_n^{J_x} is a matrix whose rows are canonical identity matrix rows, each multiplied by an arbitrary value.
These canonical rows are indexed by J_x ⊆ {1, ..., n}. Similarly, I_p^{J_d} is a matrix whose rows are canonical identity matrix rows, each multiplied by an arbitrary value. These canonical rows are indexed by J_d ⊆ {1, ..., p}. Due to uncertainty in the system's parameters, we consider a structural systems framework [33]. We introduce the following definition for a structural matrix.

Definition 21. (Structural Matrix) A matrix M̄ ∈ {0, ⋆}^{m_1 × m_2} is referred to as a structural matrix if M̄_{ij} = 0 implies M_{ij} = 0, and M̄_{ij} = ⋆ implies M_{ij} ∈ R, so M_{ij} is any arbitrary real number, and M_{ij} is assumed to be independent of M_{i'j'} for all i, j, i', j' such that i ≠ i' and j ≠ j'.

With this notion in mind, we next define structural state and input observability for the switched LTI system with unknown inputs in (4.1).

Definition 22. (Structural State and Input Observability) The switched LTI system with unknown inputs described by the structural matrices (Ā_{σ(t)}, F̄_{σ(t)}, Q̄_{σ(t)}, C̄_{σ(t)}, D̄_{σ(t)}, σ(t); T_f) is said to be structurally state and input observable for a time horizon T_f if and only if there exists a system described by (A_{σ(t)}, F_{σ(t)}, Q_{σ(t)}, C_{σ(t)}, D_{σ(t)}, σ(t); T_f) that is state and input observable and satisfies the structural pattern imposed by the structural matrices (Ā_{σ(t)}, F̄_{σ(t)}, Q̄_{σ(t)}, C̄_{σ(t)}, D̄_{σ(t)}). ◦

Subsequently, the problem statement we seek to address in this paper is as follows: given Ā_{σ(t)}, F̄_{σ(t)}, Q̄_{σ(t)}, which are the known structural matrices of the system in (4.1a) and (4.1b), and a time horizon T_f, we aim to find the minimum set of states J_x and inputs J_d that need to be measured to ensure structural state and input observability. We present this formally as

min_{J_x ⊆ {1,...,n}, J_d ⊆ {1,...,p}}  |J_x| + |J_d|
s.t.  (Ā_{σ(t)}, F̄_{σ(t)}, Q̄_{σ(t)}, Ī_n^{J_x}, Ī_p^{J_d}, σ(t); T_f) is structurally state and input observable.
(P1)

For the sake of clarity, we assume that the matrix F̄_{σ(t)} does not have zero columns, as this would correspond to having disturbances that do not affect the dynamics of the system.

4.2 Minimum Structural Sensor Placement for Switched LTI Systems with Unknown Inputs

First, we provide necessary and sufficient conditions for the feasibility of the optimization problem P1. Next, we characterize the minimal solution of problem P1. Lastly, we develop an algorithm to obtain a solution to P1, and we assess its computational complexity. We start by introducing the notion of generic rank, which allows us to provide conditions for structural state and input observability of continuous-time switched LTI systems with unknown inputs.

Definition 23. (Generic rank) The generic rank (g-rank) of an n_1 × n_2 structural matrix M̄ is

g-rank(M̄) = max_{M ∈ [M̄]} rank(M),

where [M̄] = {M ∈ R^{n_1 × n_2} : M_{ij} = 0 if M̄_{ij} = 0}. ◦

Next, we introduce several graph-theoretical and algebraic definitions required for stating the conditions for structural state and input observability of switched LTI systems with unknown inputs. A directed graph associated with any structural system matrix M̄ is constructed in the following manner. A directed graph is written as G(M̄) = (V, E), where V denotes the set of vertices (or nodes), such that V = M_x, and E denotes the (directed) edges between the vertices in the graph, such that E = E_{M_x,M_x} = {(m_j, m_i) : M̄(i, j) ≠ 0}. For a specific time t' such that σ(t') = k, we associate the system in (4.1a) and (4.1b) with a system digraph G ≡ G(Ā_k, F̄_k, Q̄_k, I_n^{J_x}, I_p^{J_d}) = (V, E^k), where V = X ∪ D ∪ Y, and X = {x_1, ..., x_n}, D = {d_1, ..., d_p}, and Y = {y_1, ..., y_n} are the state, unknown input, and output vertices, respectively.
Furthermore, we have that E_k = E^k_{X,X} ∪ E^k_{D,X} ∪ E^k_{D,D} ∪ E_{X,Y} ∪ E_{D,Y}, where E^k_{X,X} = {(x_j, x_i) : Ā_k(i, j) ≠ 0}, E^k_{D,X} = {(d_j, x_i) : F̄_k(i, j) ≠ 0}, E^k_{D,D} = {(d_j, d_i) : Q̄_k(i, j) ≠ 0}, E_{X,Y} = {(x_j, y_i) : I_n^{J_x}(i, j) ≠ 0}, and E_{D,Y} = {(d_j, y_i) : I_p^{J_d}(i, j) ≠ 0} are the state, input, and output edges, respectively.

Next, we introduce a mathematical operator that plays a key role in stating the conditions for structural state and input observability of switched LTI systems with unknown inputs.

Definition 24. (Union of structural matrices) The operator ∨ is an entry-wise operation such that the structural matrix Ā = ⋁_{k=1}^m Ā_k = Ā_1 ∨ Ā_2 ∨ ··· ∨ Ā_m has a non-zero entry at (i, j) if at least one of the matrices Ā_k has a non-zero entry at that same location (i, j), and Ā(i, j) = 0 otherwise. ◦

With this definition, we introduce the directed graphs G(⋁_{k=1}^m Ā'_k) and G(⋁_{k=1}^m Ā'_k, C̄'). More specifically, G(⋁_{k=1}^m Ā'_k) = (X', E_{X',X'}), where E_{X',X'} = {(x'_j, x'_i) : (⋁_{k=1}^m Ā'_k)_{i,j} ≠ 0}. In addition, G(⋁_{k=1}^m Ā'_k, C̄') = (V, E), where V = X' ∪ Y' and E = E_{X',X'} ∪ E_{X',Y'}, such that E_{X',X'} = {(x'_j, x'_i) : (⋁_{k=1}^m Ā'_k)_{i,j} ≠ 0} and E_{X',Y'} = {(x'_j, y'_i) : C̄'_{i,j} ≠ 0}. We next introduce the necessary and sufficient conditions for structural state and input observability of continuous-time switched LTI systems with unknown inputs.

Theorem 25. (Necessary and sufficient conditions for structural state and input observability) A continuous-time switched LTI system with unknown inputs in (4.1a), (4.1b), and (1c') is structurally state and input observable if and only if the next two conditions hold:

(i) G(⋁_{k=1}^m Ā'_k, C̄')
has all state vertices that access at least one output vertex;

(ii) g-rank([Ā'_1; ...; Ā'_m; C̄']) = n + p;

where, for k ∈ M, the matrices Ā'_k and C̄' are defined as Ā'_k = [Q̄_k 0; F̄_k Ā_k] and C̄' = [D̄ C̄]. ◦

Proof. The continuous-time switched LTI system with unknown inputs described in (4.1a), (4.1b), and (1c') may be re-written as the following augmented continuous-time switched LTI system with augmented state x' = [d^⊺ x^⊺]^⊺:

    ẋ'(t) = [Q_σ(t) 0; F_σ(t) A_σ(t)] x'(t) = A'_σ(t) x'(t)   and   y'(t) = [D C] x'(t) = C' x'(t).    (4.2)

Moreover, let M = {1, ..., m} be the ordered finite set of modes on which the function σ(t) is constant. Then, we have that A'_k = [Q_k 0; F_k A_k] and C' = [D C] for all k ∈ M. Therefore, the system in (4.1a), (4.1b), and (1c') is structurally state and input observable if and only if the system in (4.2) is structurally state observable. Interestingly, although observability and controllability are not dual in general for switched LTI systems, the authors of [167] showed in Theorem 4 that switched LTI systems are dual in the case of circulatory switching (see Definition 3 in [167]). From Remark 3 in [168], it readily follows that, in the context of structural switched LTI systems, the order of the switches does not play a role in attaining structural controllability. Therefore, in particular, structural controllability can be attained in the case of circulatory switching. As such, we can leverage Theorem 4 in [167] and invoke duality between structural controllability and structural (state) observability. Hence, by Theorem 3 of [169], the system in (4.2) is structurally observable whenever conditions (i) and (ii) hold. □

Remark 26. Consider a switching signal that ensures the structural observability of a switched linear continuous-time system. The order of transitions of the system modes does not influence its structural observability.
This property comes from the fact that:

• the "∨" operation in condition (i) of Theorem 25 is commutative;
• a permutation of the matrices in condition (ii) of Theorem 25 yields the same g-rank. ⋄

Next, we define a few other important graph-theoretic concepts. A bipartite graph, denoted B, associates a matrix M of dimension n_1 × n_2 with two vertex sets V_r = {1, ..., n_1} and V_c = {1, ..., n_2}, the sets of row and column vertices, respectively. The non-zero entries of the matrix M relate the vertex sets V_r and V_c through the edge set E_{V_c,V_r} = {(v^c_j, v^r_i) : M_{ij} ≠ 0}, thereby allowing the bipartite graph of the matrix M to be written as B(V_c, V_r, E_{V_c,V_r}). A matching is a collection of edges, none of them self-loops, in which no two edges share a vertex. A maximum matching is a matching with the maximum number of edges among all possible matchings. A weighted bipartite graph of a matrix M, denoted B((V_c, V_r, E_{V_c,V_r}), w), has weights w : E_{V_c,V_r} → R associated with the edges of the bipartite graph. Finding a maximum matching that minimizes the sum of the edge weights is called the minimum weight maximum matching (MWMM) problem.

Now, we introduce the notions of a strongly connected component and non-accessible states. Let Z_{≥0} denote the set of non-negative integers. First, we define a path of size l ∈ Z_{≥0} as a sequence of vertices p_s = (v_1, v_2, ..., v_l), where the vertices do not repeat (v_i ≠ v_j for i ≠ j) and (v_i, v_{i+1}) is an edge of the directed graph for i = 1, ..., l − 1. A subgraph, denoted G(V', E'), is a subset of vertices V' ⊂ V together with its corresponding edges E' ⊂ E of a particular graph G(V, E). A connected component is any subgraph in which any two vertices are connected by a path.
A connected component is said to be a strongly connected component (SCC) if it is maximal, i.e., it is not strictly contained in any larger strongly connected subgraph. A sink SCC is a strongly connected component that is connected to an output vertex. A source SCC is a strongly connected component that is connected to an input vertex. A target-SCC is a strongly connected component that does not have any outgoing edges. We note that every digraph can be represented as a directed acyclic graph (DAG), where each node of the DAG represents an SCC of the digraph. Finally, a non-accessible state is one that does not have a path to an output vertex (measuring either a state or an input).

We present graph-theoretic conditions for structural state and input observability of continuous-time switched LTI systems with unknown inputs.

Corollary 27. A switched LTI continuous-time system (4.1) is structurally observable if and only if the next two conditions hold:

(i) there exists an edge from one state variable of each target-SCC of G(⋁_{k=1}^m Ā'_k) to an output variable of G(⋁_{k=1}^m Ā'_k, C̄');

(ii) B([Ā'_1; ...; Ā'_m; C̄']) has a maximum matching of size n + p;

where, for k ∈ M, the matrices Ā'_k and C̄' are defined as Ā'_k = [Q̄_k 0; F̄_k Ā_k] and C̄' = [D̄ C̄]. ◦

Proof. First, we construct the augmented system (4.2) from the original system (4.1). Second, we need to ensure that the conditions of Theorem 25 are satisfied. The digraph G(⋁_{k=1}^m Ā'_k, C̄') having no non-accessible state vertices is equivalent to the existence of an edge from a state variable in each target-SCC of G(⋁_{k=1}^m Ā'_k) to an output vertex of G(⋁_{k=1}^m Ā'_k, C̄'). Thus, condition (i) is equivalent to condition (i) of Theorem 25. Subsequently, we recall the result from [170], which states that for M̄ ∈ {0, ⋆}^{n_1 × n_2}, g-rank(M̄) = min{n_1, n_2} if and only if there exists a maximum matching of B(M̄) of size min{n_1, n_2}.
Hence, by the previous result, condition (ii) is equivalent to condition (ii) of Theorem 25. □

In the following remark, we give the computational complexity of verifying the conditions of Corollary 27.

Remark 28. We can verify the two conditions of Corollary 27 in O((m(n + p))^2), where n is the number of state variables, p is the number of unknown inputs, and m is the number of modes (Section 3.3, [168]). We notice that the number of variables required to be measured is always less than or equal to n + p. ⋄

With the graph-theoretic conditions for structural state and input observability enumerated, we introduce Algorithm 3. Briefly, the algorithm finds the minimum set of state and input variables that ensures the conditions of Corollary 27 are satisfied. First, the algorithm finds the maximum collection of variables satisfying the conditions of Corollary 27 by constructing the MWMM of B([Ā'_1; ...; Ā'_m; T̄]), where T̄ has as many rows as there are target-SCCs, and the non-zero column entries of T̄ specify the indices of the augmented states that make up each target-SCC. Furthermore, weights are placed on the edges of the bipartite graph: all edge weights are zero, except that an edge incident to a vertex established by T̄ has weight one. If an edge of the MWMM has weight one, then the index of the column vertex of that edge is the index of an augmented state variable that satisfies both conditions of Corollary 27. The algorithm then proceeds to find the minimum set of variables from the maximum collection that still ensures the conditions of Corollary 27.
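The core primitive behind this selection — testing whether the stacked structural pattern admits a large enough matching, i.e., condition (ii) of Corollary 27 — can be sketched on 0/1 pattern matrices. The sketch below is a minimal stand-in, assuming list-of-lists patterns and using Kuhn's augmenting-path matching in place of the weighted (Hungarian) routine that Algorithm 3 itself requires; the function names are illustrative and not part of the original development.

```python
# Sketch: check condition (ii) of Corollary 27 on 0/1 pattern matrices.
# The rows of all modes' augmented patterns [Q_k 0; F_k A_k] are stacked
# together with the output pattern [D C]; condition (ii) holds iff the
# bipartite graph of the stacked pattern has a matching of size n + p.
# Kuhn's augmenting-path algorithm is used here for brevity (an assumption;
# the text relies on weighted-matching machinery instead).

def max_matching_size(rows, n_cols):
    """Maximum matching of the bipartite graph with one left vertex per row."""
    match_of_col = [-1] * n_cols  # column -> matched row, or -1

    def try_augment(r, seen):
        for c in range(n_cols):
            if rows[r][c] and not seen[c]:
                seen[c] = True
                if match_of_col[c] == -1 or try_augment(match_of_col[c], seen):
                    match_of_col[c] = r
                    return True
        return False

    size = 0
    for r in range(len(rows)):
        if try_augment(r, [False] * n_cols):
            size += 1
    return size

def grank_condition(A_primes, C_prime):
    """Condition (ii): the stacked pattern has a matching of size n + p."""
    n_cols = len(C_prime[0])  # n + p columns in the augmented patterns
    stacked = [row for Ak in A_primes for row in Ak] + list(C_prime)
    return max_matching_size(stacked, n_cols) == n_cols
```

For instance, a single mode whose pattern covers only the first two augmented states fails the test unless the output pattern supplies the missing column.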
Algorithm 3 Dedicated solution to P1

1: Input: A structural switched LTI system with M = {1, ..., m} modes described by {Ā_1, ..., Ā_m, F̄_1, ..., F̄_m, Q̄_1, ..., Q̄_m}, where Ā_k ∈ {0, ⋆}^{n×n}, F̄_k ∈ {0, ⋆}^{n×p}, Q̄_k ∈ {0, ⋆}^{p×p}, for all k ∈ {1, ..., m}
2: Output: C̄ = Ī_n^{J_x} and D̄ = Ī_p^{J_d}, where J = J_x ∪ J_d, J_d = {i ∈ J : i ≤ p}, and J_x = {i ∈ J : i > p}
3: Set Ā'_k = [Q̄_k 0; F̄_k Ā_k]
4: Compute the α target-SCCs of G(⋁_{k=1}^m Ā'_k) = (X', E_{X',X'}), denoted by {S_1, ..., S_α}
5: Build the bipartite graph B([Ā'_1; ...; Ā'_m; T̄]) = (V_c, V_r, E_{V_c,V_r}), where T̄ ∈ {0, ⋆}^{α×(n+p)}, with T̄_{i,j} = ⋆ if x'_j ∈ S_i and T̄_{i,j} = 0 otherwise. We denote the rows of the matrix Ā'_k by {r^k_1, ..., r^k_{n+p}} and the rows of T̄ by {t_1, ..., t_α}.
6: Set the weight of each edge e ∈ E_{V_c,V_r} to 1 if e ∈ ({t_1, ..., t_α} × V_c) ∩ E_{V_c,V_r}, and to 0 otherwise
7: Find a MWMM M' of the bipartite graph constructed in Step 5, with the edge costs defined in Step 6
8: Set the column vertices associated with T̄ belonging to M', i.e., J' = {i : (t_j, c_i) ∈ M', j ∈ {1, ..., α} and c_i ∈ V_c} and T = {j : (t_j, c_i) ∈ M', j ∈ {1, ..., α}}
9: Set J'' = {1, ..., n + p} \ {i ∈ {1, ..., n + p} : (r^k_j, c_i) ∈ M', k ∈ {1, ..., m}, j ∈ {1, ..., n + p}}
10: Set J''' to contain one and only one index of a state variable from each target-SCC in {S_s : s ∈ {t_1, ..., t_α} \ T}
11: Set J = J' ∪ J'' ∪ J'''
12: Set J_d = {i ∈ J : i ≤ p} and J_x = {i ∈ J : i > p}

In the next result, we show that Algorithm 3 finds a minimum set of states and inputs that ensures structural state and input observability.

Theorem 29. Algorithm 3 is sound, i.e., it provides a solution to P1, and the computational complexity of Algorithm 3 is O((m(n + p) + α)^ς), where ς < 2.373 is the exponent of the best known computational complexity of multiplying two square matrices [165]. ◦

Proof.
To address problem P1, we augment the system in (4.1a) and (4.1b) as in (4.2), with x' = [d^⊺ x^⊺]^⊺. With this augmented system, Algorithm 3 constructs three minimum sets of dedicated outputs that combine to satisfy the two conditions outlined in Theorem 25, which guarantee structural state and input observability. Minimality of the combined sets is ensured because we use maximum matchings to build the three sets.

Subsequently, we use Algorithm 3 with the structural switched LTI system with M = {1, ..., m} modes described by the matrices {Ā'_1, ..., Ā'_m}, where Ā'_k = [Q̄_k 0; F̄_k Ā_k] for all k ∈ M. First, we observe that J' comprises a minimum set of dedicated outputs that maximizes g-rank([Ā'_1; ...; Ā'_m; I_{(n+p)}^{J'}]), where I_{(n+p)}^{J'} is a diagonal matrix whose entries indexed by J' are non-zero. Concatenating [Ā'_1; ...; Ā'_m] with I_{(n+p)}^{J'} increases the generic rank by |J'| and produces dedicated outputs assigned to state variables in distinct target-SCCs. In fact, B([Ā'_1; ...; Ā'_m; I_{(n+p)}^{J'}]) yields a MWMM M with weight 0 and size |M|. Hence, by the result from [170] used in the proof of Corollary 27, it follows that g-rank([Ā'_1; ...; Ā'_m; I_{(n+p)}^{J'}]) = |M|. Next, a MWMM M' of B([Ā'_1; ...; Ā'_m; T̄]) has size |M'|. This corresponds to an increase of |M'| − |M| in g-rank([Ā'_1; ...; Ā'_m; I_{(n+p)}^{J' ∪ J''}]) over g-rank([Ā'_1; ...; Ā'_m; I_{(n+p)}^{J'}]). Observe that, by the construction of the matrix T̄, I_{(n+p)}^{J''} corresponds to dedicated outputs assigned to state variables in distinct target-SCCs. This means that |J''| target-SCCs will have outgoing edges to different outputs of the system digraph. This is necessary to satisfy condition (i) of Theorem 25, but it may not be sufficient. Therefore, we finally consider a third set, J''', to ensure that condition (i) is fulfilled.
In other words, there might still be target-SCCs that are not accounted for by the state variables indexed in J' ∪ J'', and these are accounted for in J'''. By minimizing the number of additional dedicated outputs I_{(n+p)}^{J''} in step 9, we satisfy condition (ii) of Theorem 25, since g-rank([Ā'_1; ...; Ā'_m; I_{(n+p)}^{J' ∪ J''}]) = n + p. Additionally, the set J''' of minimum extra dedicated outputs, found in step 10, ensures that there are no state vertices that fail to access at least one output vertex in G(⋁_{k=1}^m Ā'_k, I_{(n+p)}^{J}), where J = J' ∪ J'' ∪ J''', thereby fulfilling condition (i) of Theorem 25. Notice that the outputs I_{(n+p)}^{J''} are not assigned to previously assigned target-SCCs, as those would have been considered in I_{(n+p)}^{J'}. Consequently, by this construction, setting J = J' ∪ J'' ∪ J''' in step 11 yields a solution I_{(n+p)}^{J} that is minimal and ensures both conditions of Corollary 27. Notice that the produced solution readily translates to a solution of the original problem P1 by setting C̄ = I_n^{J_x} and D̄ = I_p^{J_d}, where J_x = {i ∈ J : i > p} and J_d = {i ∈ J : i ≤ p}.

The computational complexity of Algorithm 3 is determined by the step with the highest computational cost (step 7), since the remaining steps of the algorithm have lower complexity. Step 7 can be solved by resorting to the Hungarian algorithm [171], which finds a MWMM of B([Ā'_1; ...; Ā'_m; T̄]) in O(max{|V_r|, |V_c|}^ς), where V_r and V_c are defined in step 5 and ς < 2.373 is the exponent of the best known computational complexity of multiplying two square matrices. Since |V_c| ≤ |V_r|, this results in a computational cost of O(|V_r|^ς) = O((m(n + p) + α)^ς). □

Remark 30. The computational complexity presented in Theorem 29 might not be amenable to ensuring the sensor placement for very large systems.
Nonetheless, there are some particular classes of systems for which algorithms with lower computational complexity can be devised – see [172] for further details. ⋄

4.3 Minimum Sensor Placement on the IEEE 5-bus System

In this section, we find the minimum sensor placement for a real-world example from power systems by considering the IEEE 5-bus system [153], which has three generators and two loads. Through linearization, we can model this system as a continuous-time switched LTI system with unknown inputs by considering two modes. The union of the two modes is shown in Figure 4.1. One mode is the working system, and the second mode contains a fault that disconnects generator 1 from load 1, which corresponds to eliminating the connection between x_14 and x_10. The unknown inputs d_1 and d_2 capture the unknown amounts of load consumed by loads 1 and 2, respectively. Table 4.1 describes the states and unknown inputs of the network. The shaded rows of the table correspond to the unknown inputs. The variables/nodes that are not listed in the table but appear in the system digraph correspond to the internal variables that connect the different buses, generators, and loads. The blue nodes correspond to load 1. The orange nodes correspond to load 2. The green nodes correspond to generator 1. The red nodes correspond to generator 2. The gray nodes correspond to generator 3.
Description                                  | Node
frequency of G1                              | x1
turbine output mechanical power of G1        | x2
steam valve opening position of G1           | x3
frequency of G2                              | x4
turbine output mechanical power of G2        | x5
steam valve opening position of G2           | x6
frequency of G3                              | x7
turbine output mechanical power of G3        | x8
steam valve opening position of G3           | x9
unknown uncertainty of L1                    | d1
load consumed by L1                          | x10
unknown uncertainty of L2                    | d2
load consumed by L2                          | x12

Table 4.1: States and Unknown Inputs for the IEEE 5-bus system

Since the system possesses nodal dynamics on all the inputs and states, we only need to perform steps 1–4 and steps 10–12 of Algorithm 3 to find the minimum set of dedicated sensors that achieves structural observability, which involves only finding the target-SCCs. We start by finding the union of the modes – see Figure 4.1. Next, we augment the system and relabel it with the state x' = [d^⊺ x^⊺]^⊺ – see Step 3 of Algorithm 3. With the system properly combined and augmented, we continue by finding the target-SCCs of G(⋁_{k=1}^m Ā'_k) – see Step 4 of Algorithm 3. We find that there are 3 SCCs, which are outlined in dashed polygons in Figure 4.1. We also find that there is 1 target-SCC, outlined in a blue dashed polygon in Figure 4.1, implying that α = 1. Next, we find a single state in the target-SCC and add its index to J – see Step 10 of Algorithm 3. Hence, J_x = {12} or J_x = {10} – see Step 12 of Algorithm 3. These solutions correspond to measuring the load consumed by either of the two loads in the considered power grid example, which matches the physical intuition for the power grid.
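The target-SCC computation carried out above (Steps 3–4 of Algorithm 3) can be sketched generically: decompose the union digraph into SCCs and keep the components with no outgoing edges in the condensation. The sketch below uses Kosaraju's two-pass decomposition; the adjacency dictionary in the test is a small hypothetical example, not the actual IEEE 5-bus pattern.

```python
# Sketch: find the target-SCCs of a digraph given as adj: vertex -> successors.
# SCCs are computed with Kosaraju's two-pass method (a standard stand-in; the
# thesis does not prescribe a particular centralized SCC routine here).

def sccs(adj):
    order, seen = [], set()

    def dfs1(v):  # first pass on G: record vertices in order of finish time
        seen.add(v)
        for w in adj.get(v, ()):
            if w not in seen:
                dfs1(w)
        order.append(v)

    for v in list(adj):
        if v not in seen:
            dfs1(v)
    radj = {v: [] for v in adj}  # reversed graph
    for v, ws in adj.items():
        for w in ws:
            radj.setdefault(w, []).append(v)
    comps, assigned = [], set()
    for v in reversed(order):  # second pass on G^T in reverse finish order
        if v in assigned:
            continue
        stack, group = [v], set()
        while stack:
            u = stack.pop()
            if u in assigned:
                continue
            assigned.add(u)
            group.add(u)
            stack.extend(radj.get(u, ()))
        comps.append(group)
    return comps

def target_sccs(adj):
    """SCCs with no edge leaving the component, i.e., sinks of the DAG."""
    comps = sccs(adj)
    of = {v: i for i, c in enumerate(comps) for v in c}
    has_out = [False] * len(comps)
    for v, ws in adj.items():
        for w in ws:
            if of[v] != of[w]:
                has_out[of[v]] = True
    return [c for i, c in enumerate(comps) if not has_out[i]]
```

For instance, on the toy digraph {1: [2], 2: [1, 3], 3: [4], 4: [3]} the SCCs are {1, 2} and {3, 4}, and only {3, 4} is a target-SCC, so α = 1 there as well.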
Figure 4.1: IEEE 5-bus system: This figure shows the union of the two modes of the continuous-time system with unknown inputs for the IEEE 5-bus system. The SCCs are outlined by two dotted black rectangles, and the target-SCC is outlined by a dotted blue polygon. The minimum output sensor and its placement are shown by y_1.

4.4 Conclusions and Future Works

In this chapter, we investigated the structural state and input observability of continuous-time switched LTI systems under unknown inputs. To this end, we derived necessary and sufficient conditions for the structural state and input observability of continuous-time switched LTI systems. These conditions can be verified in polynomial time, more precisely in O((m(n + p))^2), where n is the number of state variables, p is the number of unknown inputs, and m is the number of modes. Additionally, adopting a novel feed-forward analysis, we addressed the minimum sensor placement for these systems by designing an algorithm with a computational complexity of O((m(n + p) + α)^2.373), where α is the number of target strongly connected components. Finally, we applied our algorithm to find the minimum sensor placement for a real-world example in power systems to illustrate our results.
In future work, we will examine the case when a network is partially hidden and seek to derive conditions for observable subspaces. We will also examine the impact of these results under a stochastic framework.

Chapter 5

Distributed Algorithms for Directed Networks

Strongly connected components (SCCs) and the finite diameter of directed networks are important in many control theory problems. Nowadays, the networks associated with data are becoming increasingly larger, which demands scalable and distributed algorithms that enable an efficient determination of the SCCs and diameter of such networks.

The applications of distributively finding the strongly connected components include distributively monitoring and regulating power systems, physiological networks, and swarms of unmanned vehicles. For example, these systems are often represented as structural systems [33] and are distributively controlled [173]. More specifically, strongly connected components are important in determining structural systems properties (e.g., controllability and observability) [33] and play a key role in guaranteeing that distributed control algorithms function properly [173]. In the wide range of large-scale applications mentioned above, it is common that only local information is available at each node, and therefore distributed algorithms, like the one proposed hereafter, must be considered.

Computing the finite diameter is important in improving internet search engines [63], quantifying the multifractal geometry of complex networks [174], and identifying faults in both the power grid [64] and multiprocessor systems [65]. More specifically, the finite diameter of the world wide web determines the maximum number of clicks between any two web pages [63]. Finding this number in a distributed fashion is important when there are multiple processors in a computer conducting different searches at the same time.
In the case of identifying faults in systems such as the power grid [64] or multiprocessor systems [65], the finite diameter of the system is calculated in real time and compared with the known finite diameter to determine whether a fault has occurred. In this case, determining the finite diameter distributively is key to quickly diagnosing where in the network the fault has occurred. Finally, the finite diameter is important in quantifying the multifractal geometry of networks [174], and it becomes necessary to calculate the finite diameter distributively when the nodes of the network only have access to local information.

Identifying the different SCCs in a directed network (directed graph – digraph for short) leads to a unique decomposition of the digraph G = (V, E), where V denotes the nodes and E the set of directed edges. We may find this decomposition, for instance, using the classic algorithm by Tarjan [57], which employs a single pass of depth-first search and whose computational complexity is O(|V| + |E|). It is worth mentioning that, depending on the network sparsity, the effective computational complexity is O(|V|^2), since E ⊂ (V × V). Similar to Tarjan's algorithm, Dijkstra introduced a path-based algorithm to find strongly connected components, which also runs in linear time (i.e., O(|V| + |E|)) [58]. Finally, Kosaraju's algorithm uses two passes of depth-first search but is also upper-bounded by O(|V| + |E|) [59]. The work in [60] presents an overview of the centralized algorithms that find the strongly connected components, all of which have computational complexity O(|V| + |E|). A possible alternative is to develop better data-structure algorithms that are suitable for parallelization, which can lead to implementations with computational complexity equal to O(|V| log(|V|)) [61] – see also [62] for an overview of different parallelized algorithms for SCC decomposition.
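For reference, the centralized single-pass baseline discussed above can be sketched in a few lines. This is a recursive rendering of Tarjan's algorithm under an adjacency-dictionary convention (an illustrative sketch; for very large graphs an iterative variant would be needed to avoid recursion limits).

```python
# Sketch of Tarjan's single-pass SCC algorithm [57].
# adj maps each vertex to a list of its successors.

def tarjan_sccs(adj):
    index, low = {}, {}          # discovery index and low-link of each vertex
    stack, on_stack = [], set()
    comps = []
    counter = 0

    def strongconnect(v):
        nonlocal counter
        index[v] = low[v] = counter
        counter += 1
        stack.append(v)
        on_stack.add(v)
        for w in adj.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:   # v is the root of an SCC
            comp = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.add(w)
                if w == v:
                    break
            comps.append(comp)

    for v in list(adj):
        if v not in index:
            strongconnect(v)
    return comps
```

Each vertex and edge is visited once, matching the O(|V| + |E|) bound cited above.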
The solutions mentioned above require knowledge of the overall structure of the system digraph, which may not be suitable for large-scale applications in control systems or in machine learning. Subsequently, we propose for the first time a scalable distributed algorithm to determine the SCCs that relies solely on control systems tools, specifically max-consensus-like dynamics. Furthermore, our algorithm converges in D + 2 iterations and thereby enables us to determine the finite diameter D of the network. State-of-the-art methods to determine the finite diameter of a directed network include the Floyd–Warshall algorithm, which has a computational complexity of O(|V|^3) [66]. The main contributions are as follows.

Main contributions:

• We provide for the first time a scalable distributed algorithm to find the strongly connected components and finite diameter of a directed graph with computational time-complexity O(N · D · d_max^in-degree), and
• we provide ample numerical evidence of the outperformance of our algorithm against the state of the art on several random networks, including Erdős–Rényi, Barabási–Albert, and Watts–Strogatz networks.

Consider a directed graph (digraph) G = (V, E), where V is the set of vertices with |V| = N, and E ⊂ V × V is the set of edges, where the maximum number of edges is |E| = |V × V| = N^2. Given G = (V, E), the in-degree of a vertex v ∈ V is d_in-degree(v) = |{(u, u') : (u, u') ∈ E, u' = v}|, and we denote the maximum in-degree of G by d_max^in-degree = max_{v ∈ V} d_in-degree(v). Moreover, given a vertex v ∈ V, we define the set of its in-neighbors as N⁻_v = {u : (u, v) ∈ E}. A walk in a digraph is any sequence of edges where the last vertex of one edge is the beginning of the next edge, except for the beginning vertex of the first edge and the ending vertex of the last edge. Notice that a walk does not exclude the repetition of vertices.
In contrast, a path is a walk in which the same vertex is not the beginning or the ending of two nonconsecutive edges in the sequence. The size of a path is the number of edges that constitute it. If the beginning and ending vertices of a path are the same, then we obtain a cycle. Additionally, a sub-digraph G_s = (V', E') is any sub-collection of vertices V' ⊂ V and the edges E' ⊂ V' × V' between them. If a subgraph has the property that there exists a path between any two of its vertices, then it is a strongly connected (di)graph. Next, we provide the definition of a strongly connected component.

Definition 31. A strongly connected component (SCC) is any maximal strongly connected subgraph.

Any digraph can be uniquely decomposed into SCCs. A digraph G' = (V', E') is said to span G = (V, E), denoted by G' = span(G), if V' = V and E' ⊆ E. Finally, given a digraph G = (V, E), we define its finite digraph diameter D.

Definition 32. The finite digraph diameter is the size of the longest shortest path between any two vertices in V for which such a path exists.

5.1 Finding Strongly Connected Components and the Finite Diameter of Directed Networks

We propose to address the following two problems.

(P1): Given a digraph G = (V, E), determine the unique decomposition into m ∈ N SCCs by finding the maximal subgraphs G_s = (V_s, E_s), s = 1, ..., m, where each subgraph is an SCC such that V_s ∩ V_q = ∅ for s ≠ q with q = 1, ..., m, V_s, V_q ⊂ V, E_s ⊂ (E ∩ (V_s × V_s)), and ∪_{s=1}^m G_s ≡ (∪_{s=1}^m V_s, ∪_{s=1}^m E_s) = span(G).

(P2): Given a digraph G = (V, E), determine the finite digraph diameter D.

Next, we provide the solution to the above problems in both a centralized and a distributed fashion, enabling a scalable approach to determine all the SCCs and the finite digraph diameter of a given network. Notice that a digraph has a unique decomposition into m SCCs, but we do not require a priori knowledge of this number.
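A centralized baseline for (P2) follows directly from Definition 32: run a breadth-first search from every vertex and keep the largest finite distance, skipping vertex pairs with no connecting path. A minimal sketch, assuming an adjacency-dictionary representation; it serves only as a reference point against the distributed approach developed next.

```python
from collections import deque

def finite_diameter(adj):
    """Longest shortest path over all ordered pairs (u, v) with v reachable
    from u; pairs with no finite path between them are simply skipped."""
    diameter = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:                      # BFS from source s
            u = q.popleft()
            for w in adj.get(u, ()):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        if len(dist) > 1:             # s reaches at least one other vertex
            diameter = max(diameter, max(dist.values()))
    return diameter
```

Repeated BFS costs O(|V|(|V| + |E|)), which already improves on the O(|V|^3) Floyd–Warshall baseline mentioned earlier for sparse graphs.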
5.2 A Scalable Distributed Dynamical Systems Approach to Learn the Strongly Connected Components and Diameter of Networks

To determine a solution to (P1) and (P2), we leverage a max-consensus-like protocol.

Definition 33. [175] Consider G = (V, E), where each vertex v_i ∈ V, i = 1, ..., N, has an associated state r_i[k] ∈ R at any time k ∈ N. Then, we have the following max-consensus-like update rule

    r_i[k + 1] = max_{v_j ∈ N⁻_{v_i} ∪ {v_i}} r_j[k],    (5.1)

for each node v_i, where N⁻_{v_i} denotes all of the nodes v_j such that there is an edge (v_j, v_i) ∈ E. We simply say that consensus is achieved if there exists a time instant h such that for all h' ≥ h, r_i[h'] = r_j[h'] for all v_i, v_j ∈ V, i ∈ {1, ..., N}, j ∈ {1, ..., N}, and for all initial conditions r[0] = [r_1[0]^⊺ ... r_N[0]^⊺]^⊺. ◦

Definition 33 is similar to the standard max-consensus update, but in addition to the information from the neighbors, Definition 33 also considers the information from the node itself. Furthermore, it is worth emphasizing that, by Definition 33, every node only needs to be able to receive information from its in-neighbors (i.e., the nodes connected to it). Hence, each node only needs local information, which is pertinent to distributed algorithms. Furthermore, we note that each node uses a unique ID, which for simplicity consists of N consecutive positive integers from 1 to N. A node is able to obtain the information from its in-neighbors, including their unique IDs.

Next, we present Algorithm 4, which can be used to find the solutions to (P1) and (P2). Algorithm 4 is performed on each node v_i and obtains a set S*_i, which consists of the nodes that belong to the same SCC as node v_i, and a scalar k_i, which is one more than the number of iterations.
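The update (5.1) can be simulated synchronously in a few lines. A minimal sketch, assuming each node stores its in-neighbor set locally; the stopping test below is centralized purely for convenience of simulation and is not part of Definition 33.

```python
# Sketch of the max-consensus-like update (5.1): every node replaces its
# state with the maximum over its in-neighbors and itself, synchronously,
# until no state changes. in_nbrs[i] is the set N^-_{v_i}.

def max_consensus(in_nbrs, r0):
    r, steps = dict(r0), 0
    while True:
        nxt = {i: max([r[i]] + [r[j] for j in in_nbrs.get(i, ())])
               for i in r}
        steps += 1
        if nxt == r:          # centralized stop test (simulation convenience)
            return r, steps
        r = nxt
```

On a strongly connected digraph every state reaches the global maximum of the initial conditions, and the no-change detection adds one extra sweep beyond the spreading time.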
Algorithm 4 Find the SCCs and finite diameter distributively
Input: N⁻_{v_i}, the set of in-neighbors of node v_i
Output: Each node v_i obtains a set S*_i, which contains the nodes belonging to the same SCC as node v_i, and a scalar k_i, which is one more than the number of iterations
Initialization: Set S*_i = ∅; k_i = 0; x_i[0] = {i}; y_i[0] = 1; z_i[0] = ∅; and w_i[0] = FALSE
while w_i[k_i] == FALSE do
    Step 1: x_i[k_i + 1] = ∪_{v_j ∈ N⁻_{v_i} ∪ {v_i}} x_j[k_i]
    Step 2: y_i[k_i + 1] = |x_i[k_i + 1]|
    Step 3: z_i[k_i + 1] = { v_j : y_i[k_i + 1] == y_j[k_i] ∧ v_j ∈ ∪_{v_l ∈ N⁻_{v_i} ∪ {v_i}} x_l[k_i] }
    Step 4: w_i[k_i + 1] = (y_i[k_i + 1] == y_i[k_i]) ∧ (|z_i[k_i + 1]| == |z_i[k_i]|)
    Step 5: k_i = k_i + 1
    Step 6: S*_i = z_i[k_i]
end while

Briefly speaking, Algorithm 4 works as follows. For each node v_i, we first find the set of nodes that have a directed path ending in node v_i. Next, we record the size of this set. Finally, we add the nodes contained in the same SCC as node v_i to the set S*_i.

More specifically, Algorithm 4 starts by initializing the local (i.e., at node v_i) sets and parameters: S*_i = ∅, k_i = 0, x_i[0] = {i}, y_i[0] = 1, z_i[0] = ∅, and w_i[0] = FALSE. At each iteration, Step 1 finds the set of node IDs (or, equivalently, node indices) that form directed paths ending in node v_i. Step 2 records the size of this set. Step 3 determines the nodes that are contained in the same SCC as node v_i, where ∧ denotes the logical 'and' operation. In Step 4, if the maximum size of the set of directed paths to v_i has been reached, then an indication to end the algorithm for node v_i is provided. Step 5 tracks the iterations, which is important for finding the finite digraph diameter; the algorithm terminates when no new information is received. Lastly, Step 6 sets S*_i, the set of nodes contained in the same SCC as node v_i.
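Algorithm 4 is stated per node; the following sketch simulates all nodes synchronously in a single process, so it illustrates the protocol rather than providing a truly distributed implementation. The function name and data layout are our own choices. The test graph is the six-node network of Fig. 5.1 (Section 5.3), with its in-neighbor sets read off from the trace in Table 5.1.

```python
def simulate_algorithm4(in_neighbors):
    """Synchronous single-process simulation of Algorithm 4 on every node.
    in_neighbors: dict node -> set of in-neighbors.
    Returns the set of SCCs and the number of synchronous rounds (which,
    per the discussion of Theorem 40, exceeds the finite diameter by two)."""
    nodes = list(in_neighbors)
    x = {i: {i} for i in nodes}       # Step 1 state: ids that can reach i
    y = {i: 1 for i in nodes}         # Step 2 state: |x_i|
    z = {i: set() for i in nodes}     # Step 3 state: SCC candidates for i
    done = {i: False for i in nodes}  # Step 4 flag w_i
    rounds = 0
    while not all(done.values()):
        rounds += 1
        nx = {i: x[i].union(*(x[j] for j in in_neighbors[i])) for i in nodes}
        ny = {i: len(nx[i]) for i in nodes}
        # v_j joins z_i when y_i[k+1] == y_j[k] and v_j can reach v_i
        nz = {i: {j for j in nx[i] if ny[i] == y[j]} for i in nodes}
        for i in nodes:
            done[i] = (ny[i] == y[i]) and (len(nz[i]) == len(z[i]))
        x, y, z = nx, ny, nz
    return {frozenset(z[i]) for i in nodes}, rounds

# Fig. 5.1 network (edges 1<->2, 2->3, 3<->4, 4->5, 5<->6, per Table 5.1):
fig51 = {1: {2}, 2: {1}, 3: {2, 4}, 4: {3}, 5: {4, 6}, 6: {5}}
# -> SCCs {1,2}, {3,4}, {5,6} after 7 rounds (diameter 5 plus two)
```

Running this reproduces the trace in Table 5.1: seven rounds, which is the finite diameter of 5 plus two.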
The following lemma is key in proving the correctness of Algorithm 4.

Lemma 34. If, for any two nodes v_i and v_j, we have that y_i[k_i + 1] = y_j[k_i] (as in Step 2) and v_j ∈ ∪_{v_l ∈ N⁻_{v_i} ∪ {v_i}} x_l[k_i] (as in Step 1), then v_i and v_j are in the same SCC.

Proof. Suppose, for a contradiction, that y_i[k_i + 1] = y_j[k_i] and v_j ∈ ∪_{v_l ∈ N⁻_{v_i} ∪ {v_i}} x_l[k_i], but v_i and v_j are not in the same SCC. This would mean that there is no directed path from v_i to v_j, or none from v_j to v_i. However, if v_j ∈ ∪_{v_l ∈ N⁻_{v_i} ∪ {v_i}} x_l[k_i], then v_j can reach node v_i, so there is a directed path from v_j to v_i. Furthermore, if y_i[k_i + 1] = y_j[k_i], then there must also be a directed path from v_i to v_j, or we would have y_i[k_i + 1] > y_j[k_i]. Therefore, there is a directed path from v_i to v_j and from v_j to v_i, so v_i and v_j must be in the same SCC.

The next lemma provides a stopping criterion for Algorithm 4.

Lemma 35. If y_i[k_i + 1] == y_i[k_i] and |z_i[k_i + 1]| == |z_i[k_i]|, for all i = 1, ..., N (as in Step 4), then all the SCCs have been found.

Proof. At each iteration of the algorithm, the number of elements in the set x_i either increases or remains the same. If, from one iteration to the next, the number of elements in x_i stays the same for all nodes v_i, then every node has received all of the information that it possibly can. Furthermore, each node knows the other nodes that can reach it, as captured by the set x_i. Since y_i records the size of the set of nodes that can reach node v_i, if y_i remains the same from one iteration to the next, then the number of elements in x_i also remains the same. Consequently, if y_i remains the same from one iteration to the next for every node, then the network cannot communicate any new information.
Additionally, if the set z_i, which is the set of nodes contained in the same SCC as node v_i, remains the same from one iteration to the next, then all SCCs have been found.

In the following theorem, we prove the correctness of Algorithm 4.

Theorem 36. Let S*_i be the set of nodes that results after Algorithm 4 is executed on node v_i ∈ V. Then, ∪_{i ∈ {1,...,N}} (S*_i, (S*_i × S*_i) ∩ E) is a solution to (P1). ◦

Proof. The algorithm iterates until y_i[k_i + 1] = y_i[k_i] and |z_i[k_i + 1]| = |z_i[k_i]|, for all i = 1, ..., N, at which point all of the SCCs have been found (see Lemma 35). At each iteration, Step 1 forms the set of nodes that reach node v_i, recorded in the set x_i. Step 2 finds the cardinality of the set x_i. Step 3 finds the set of nodes that are contained in the same SCC as node v_i (see Lemma 34), recorded in the set z_i. Step 4 determines whether the maximum set of nodes that can reach node v_i has been found and whether the size of the set of nodes contained in the same SCC has increased; it serves as a stopping criterion for the algorithm. Step 5 tracks the iterations, and Step 6 records the set of nodes contained in the same SCC as node v_i in the set S*_i. Finally, the SCCs are formed as the subgraphs G*_i = (S*_i, (S*_i × S*_i) ∩ E), as mentioned in the statement of Theorem 36. Any duplicate subgraphs are eliminated by taking the union of all the subgraphs ∪_{i ∈ {1,...,N}} G*_i. Hence, we obtain the SCCs of G = (V, E).

Next, we show the computational time-complexity of Algorithm 4.

Theorem 37. Algorithm 4 has computational time-complexity O(N^2 D d_in^max), where N is the number of vertices, D is the finite diameter of the network, and d_in^max is the maximum in-degree of the network. ◦

Proof. Algorithm 4 executes for all nodes v_i ∈ V, i = 1, ..., N.
Furthermore, Algorithm 4 contains a single while loop, where the number of iterations is upper-bounded by the diameter, since x_i finds the longest shortest path to node v_i. The steps inside the while loop are Steps 1-6. Step 1 is upper-bounded by the size of the network times the maximum in-degree of the network (i.e., O(N d_in^max)), since the union has complexity O(N) and we take the union over all of the in-neighbors. Step 2 is upper-bounded by a constant. Similar to Step 1, Step 3 is upper-bounded by O(N d_in^max). Finally, Steps 4, 5, and 6 are upper-bounded by a constant. Hence, the computational time-complexity is O(N^2 D d_in^max), where N is the number of nodes, D is the finite digraph diameter of the network, and d_in^max is the maximum in-degree of the network.

The following result demonstrates the scalability of Algorithm 4.

Corollary 38. Algorithm 4 can be implemented in a distributed fashion and is scalable, with computational time-complexity O(N D d_in^max). ◦

Proof. This readily follows from Theorem 37 and from noticing that Steps 1-6 can be performed locally at each node v_i, where i = 1, ..., N. Therefore, the algorithm can be computed in a distributed fashion, which eliminates a factor of N in the complexity of Theorem 37.

While we have provided the computational complexity of the algorithm for the worst-case scenario, we observe that, intuitively, the maximum in-degree is negatively correlated with the diameter, so (on average) this may allow for a lower time complexity. The space-complexity of performing Algorithm 4 on each node v_i, where i ∈ {1, ..., N}, in a distributed fashion is O(N), where N is the number of nodes in the network.

Next, we give the computational time-complexity to find the SCCs.

Theorem 39. Finding the sets of nodes that contain the SCCs, ∪_{i ∈ {1,...,N}} S*_i, requires a computational time-complexity of O(N D d_in^max).

Proof.
As shown in Theorem 36, for each node v_i, where i = 1, ..., N, Algorithm 4 finds the set of nodes S*_i that are contained in the same SCC as node v_i, which, according to Corollary 38, has a computational time-complexity of O(N D d_in^max) when executed distributively. To find the sets of nodes that contain the SCCs, we compute ∪_{i ∈ {1,...,N}} S*_i, which adds a term N, giving O(N D d_in^max + N). This readily reduces to O(N D d_in^max).

Next, we provide a table comparing the computational time-complexities of several algorithms that find the SCCs of a given directed network.

Algorithm to compute SCCs            Computational Time Complexity
Tarjan [57]                          O(N + |E|)
Dijkstra [58]                        O(N + |E|)
Kosaraju [59]                        O(N + |E|)
Gabow [176]                          O(N + |E|)
Our proposed distributed algorithm   O(N D d_in^max)

While asymptotically the computational time-complexity of our distributed algorithm does not outperform the state-of-the-art centralized algorithms that find the SCCs (since the number of edges |E| in the graph is bounded by N × d_in^max), our proposed algorithm can be implemented in a distributed manner, whereas the other algorithms in the table above cannot. Furthermore, as we will show in the simulation results, our centralized algorithm empirically outperforms Kosaraju's algorithm on several randomly generated networks.

In the next result, we give a solution to (P2).

Theorem 40. By running Algorithm 4 on every node v_i ∈ V, with i ∈ {1, ..., N}, we obtain the solution to (P2) as D = max_{v_i ∈ V} k_i − 3. ◦

Proof. We will show that Algorithm 4 converges after D + 2 iterations, where D is the finite digraph diameter of the input digraph. From Lemma 35, the algorithm terminates when no new information is received by any node from its neighbors (or itself) at a subsequent time step and all of the SCCs have been found.
If the digraph has finite diameter D, then there exists a pair of nodes u and v such that the size of the shortest path from u to v is D. First, we show that new information can still arrive at iteration D but not earlier than the shortest paths allow. Suppose that node v receives all of the information of node u in k < D iterations, where information travels to the neighbors of each node in exactly one iteration. Then there must be another path from u to v with k edges, which contradicts the fact that the shortest path from u to v has size D. Now, suppose that new information is communicated to some node in the network after k > D iterations; that is, there is information from node u that only reaches node v after k iterations. Since information is sent to the neighbors at each iteration, the shortest path from u to v would have size k, which contradicts the fact that the longest shortest finite path has size D. Therefore, no new information is communicated after D iterations, which is verified in the next iteration, i.e., the (D+1)-th iteration. Then, z_i is finished updating after D + 1 iterations, since it depends on all of the information having been received, and it is verified that z_i is finished updating at the next iteration, i.e., the (D+2)-th iteration. Hence, the diameter is two less than the maximum number of iterations among all nodes. Since Step 5 increments k_i before terminating, k_i denotes the number of iterations (for v_i) plus one. Conveniently, D = max_{v_i ∈ V} k_i − 3, since max_{v_i ∈ V} k_i is precisely the maximum number of iterations plus one among all nodes v_i, so three is subtracted to obtain the finite digraph diameter D.

We emphasize that computing the finite digraph diameter requires the number of iterations of Algorithm 4 for each node. Furthermore, we notice that we can recover both the finite digraph diameter and the (standard) digraph diameter.
For example, if there is more than one SCC, then the digraph diameter is infinite; otherwise, the two quantities are equivalent.

Next, we give the computational time-complexity of computing the finite digraph diameter.

Theorem 41. Computing the finite digraph diameter requires a computational time-complexity of O(N D d_in^max).

Proof. Following Theorem 40, we obtain the finite digraph diameter by executing Algorithm 4 on every single node v_i ∈ V. Hence, from Corollary 38, Algorithm 4 has a computational time-complexity of O(N D d_in^max) when executed distributively. Determining the maximum number of iterations among all of the nodes v_i, where i ∈ {1, ..., N}, adds a final term N to the complexity, giving O(N D d_in^max + N). This readily reduces to O(N D d_in^max).

We provide a table comparing the computational time-complexity of the state-of-the-art Floyd-Warshall algorithm to that of our proposed distributed algorithm. We notice that our proposed distributed algorithm performs no worse than the Floyd-Warshall algorithm.

Algorithm to find the finite diameter   Computational Time Complexity
Floyd-Warshall [66]                     O(N^3)
Our proposed distributed algorithm      O(N D d_in^max)

Finally, we explore the expected computational time-complexity of Algorithm 4 on some special random networks.

Corollary 42. For an Erdős–Rényi network with N nodes and m edges, the expected time-complexity of Algorithm 4 is O( ( (log(N) − γ) / log(2m/N) + 1/2 ) 2m ), where γ is the Euler–Mascheroni constant.

Proof. The average degree of an Erdős–Rényi network is 2m/N, and the average path length is (log(N) − γ) / log(2m/N) + 1/2 [177]. Thus, by Corollary 38, the expected time-complexity is O( ( (log(N) − γ) / log(2m/N) + 1/2 ) 2m ), where γ is the Euler–Mascheroni constant.

Corollary 43. For a Barabási–Albert network with N nodes and m edges added to a new vertex at each step, the expected time-complexity of Algorithm 4 is O( 2mN ( (log(N) − log(m/2) − 1 − γ) / (log(log(N)) + log(m/2)) + 3/2 ) ).

Proof.
Since the average degree is 2m and the average path length is (log(N) − log(m/2) − 1 − γ) / (log(log(N)) + log(m/2)) + 3/2 [177], by Corollary 38 the expected time-complexity reduces to O( 2mN ( (log(N) − log(m/2) − 1 − γ) / (log(log(N)) + log(m/2)) + 3/2 ) ).

Corollary 44. For a Watts-Strogatz network with N nodes, K edges per vertex, and rewiring probability p, the expected time-complexity of Algorithm 4 is O(N^2/2) as p → 0 and O(N K log(N)/log(K)) as p → 1.

Proof. The Watts-Strogatz network has an average degree of K, and the average path length is N/(2K) as p → 0 and log(N)/log(K) as p → 1 [178]. Hence, by Corollary 38, the expected time-complexity of Algorithm 4 for the Watts-Strogatz network is O(N^2/2) as p → 0 and O(N K log(N)/log(K)) as p → 1.

5.3 Applying the Distributed Algorithm to Directed Networks

In this section, we present several pedagogical examples to illustrate how Algorithm 4 works and to demonstrate its computational time-complexity. In what follows, when referring to each of the SCCs, we will only mention the indices of the nodes contained in that particular SCC (i.e., if v_i ∈ V_s, then with some abuse of notation we refer to that node as i ∈ V_s), as we implicitly assume that their edges are formed by E_s = ((V_s × V_s) ∩ E).

Fig. 5.1 shows a network with six nodes that contains the following strongly connected components: {1,2}, {3,4}, and {5,6}.

Figure 5.1: This network has SCCs {5,6}, {3,4}, and {1,2}.

Table 5.1 shows the trace of running Algorithm 4 on Example 5.1 for each node v_i, where P is the set of parameters for the algorithm and k is the total number of iterations. Column one shows that it takes seven iterations (k = 7) to identify the SCCs of Example 5.1. Here, the diameter of the network is 5, which is two less than the total required iterations and is consistent with the results in Theorem 40.

Fig. 5.2 shows a complete network with five nodes, so there is a single SCC containing all of the nodes (i.e., {1,2,3,4,5}).
In Table 5.2, we see that three iterations are necessary, which is two more than the diameter of the network (see Theorem 40).

Figure 5.2: The complete network contains a single SCC, which is made up of all of the nodes in the network (i.e., {1,2,3,4,5}).

Fig. 5.3 shows a tree with nine nodes, so the SCCs are the individual nodes themselves (i.e., {1}, {2}, ..., {9}).
Table 5.1 layout (entries ordered v_1 through v_6; T/F abbreviate TRUE/FALSE):

k = 0:  x = ({1}, {2}, {3}, {4}, {5}, {6});  y = (1, 1, 1, 1, 1, 1);  z = ({1}, {2}, {3}, {4}, {5}, {6});  w = (F, F, F, F, F, F)
k = 1:  x = ({1,2}, {1,2}, {2,3,4}, {3,4}, {4,5,6}, {5,6});  y = (2, 2, 3, 2, 3, 2);  z = ({}, {}, {}, {}, {}, {});  w = (F, F, F, F, F, F)
k = 2:  x = ({1,2}, {1,2}, {1,2,3,4}, {2,3,4}, {3,4,5,6}, {4,5,6});  y = (2, 2, 4, 3, 4, 3);  z = ({1,2}, {1,2}, {}, {3}, {}, {5});  w = (F, F, F, F, F, F)
k = 3:  x = ({1,2}, {1,2}, {1,2,3,4}, {1,2,3,4}, {2,3,4,5,6}, {3,4,5,6});  y = (2, 2, 4, 4, 5, 4);  z = ({1,2}, {1,2}, {3}, {3}, {}, {3,5});  w = (T, T, F, F, F, F)
k = 4:  x = ({1,2}, {1,2}, {1,2,3,4}, {1,2,3,4}, {1,2,3,4,5,6}, {2,3,4,5,6});  y = (2, 2, 4, 4, 6, 5);  z = ({1,2}, {1,2}, {3,4}, {3,4}, {}, {5});  w = (T, T, F, F, F, F)
k = 5:  x = ({1,2}, {1,2}, {1,2,3,4}, {1,2,3,4}, {1,2,3,4,5,6}, {1,2,3,4,5,6});  y = (2, 2, 4, 4, 6, 6);  z = ({1,2}, {1,2}, {3,4}, {3,4}, {5}, {5});  w = (T, T, T, T, F, F)
k = 6:  x = ({1,2}, {1,2}, {1,2,3,4}, {1,2,3,4}, {1,2,3,4,5,6}, {1,2,3,4,5,6});  y = (2, 2, 4, 4, 6, 6);  z = ({1,2}, {1,2}, {3,4}, {3,4}, {5,6}, {5,6});  w = (T, T, T, T, F, F)
k = 7:  x = ({1,2}, {1,2}, {1,2,3,4}, {1,2,3,4}, {1,2,3,4,5,6}, {1,2,3,4,5,6});  y = (2, 2, 4, 4, 6, 6);  z = ({1,2}, {1,2}, {3,4}, {3,4}, {5,6}, {5,6});  w = (T, T, T, T, T, T)

Table 5.1: This table enumerates the values of the parameters (P) at each iteration (k) of Algorithm 4 for all nodes v_i when executed on Example 5.1.

Table 5.2 layout (entries ordered v_1 through v_5; T/F abbreviate TRUE/FALSE):

k = 0:  x = ({1}, {2}, {3}, {4}, {5});  y = (1, 1, 1, 1, 1);  z = ({1}, {2}, {3}, {4}, {5});  w = (F, F, F, F, F)
k = 1:  x = ({1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5});  y = (5, 5, 5, 5, 5);  z = ({}, {}, {}, {}, {});  w = (F, F, F, F, F)
k = 2:  x = ({1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5});  y = (5, 5, 5, 5, 5);  z = ({1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5});  w = (F, F, F, F, F)
k = 3:  x = ({1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5});  y = (5, 5, 5, 5, 5);  z = ({1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5}, {1,2,3,4,5});  w = (T, T, T, T, T)

Table 5.2: This table enumerates the values of the parameters (P) at each iteration (k) of Algorithm 4 for all nodes v_i when executed on the complete network.

In Table 5.3, we see that five iterations are required to identify the SCCs of the tree network, which is two more than the diameter of the network (see Theorem 40).

Figure 5.3: The SCCs of the tree are the individual nodes themselves (i.e., {1}, {2}, ..., {9}).
Table 5.3 layout (entries ordered v_1 through v_9; T/F abbreviate TRUE/FALSE):

k = 0:  x = ({1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9});  y = (1, 1, 1, 1, 1, 1, 1, 1, 1);  z = ({1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9});  w = (F, F, F, F, F, F, F, F, F)
k = 1:  x = ({1}, {1,2}, {1,3}, {2,4}, {2,5}, {2,6}, {3,7}, {4,8}, {4,9});  y = (1, 2, 2, 2, 2, 2, 2, 2, 2);  z = ({1}, {}, {}, {}, {}, {}, {}, {}, {});  w = (T, F, F, F, F, F, F, F, F)
k = 2:  x = ({1}, {1,2}, {1,3}, {1,2,4}, {1,2,5}, {1,2,6}, {1,3,7}, {2,4,8}, {2,4,9});  y = (1, 2, 2, 3, 3, 3, 3, 3, 3);  z = ({1}, {2}, {3}, {}, {}, {}, {}, {}, {});  w = (T, F, F, F, F, F, F, F, F)
k = 3:  x = ({1}, {1,2}, {1,3}, {1,2,4}, {1,2,5}, {1,2,6}, {1,3,7}, {1,2,4,8}, {1,2,4,9});  y = (1, 2, 2, 3, 3, 3, 3, 4, 4);  z = ({1}, {2}, {3}, {4}, {5}, {6}, {7}, {}, {});  w = (T, T, T, F, F, F, F, F, F)
k = 4:  x = ({1}, {1,2}, {1,3}, {1,2,4}, {1,2,5}, {1,2,6}, {1,3,7}, {1,2,4,8}, {1,2,4,9});  y = (1, 2, 2, 3, 3, 3, 3, 4, 4);  z = ({1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9});  w = (T, T, T, T, T, T, T, F, F)
k = 5:  x = ({1}, {1,2}, {1,3}, {1,2,4}, {1,2,5}, {1,2,6}, {1,3,7}, {1,2,4,8}, {1,2,4,9});  y = (1, 2, 2, 3, 3, 3, 3, 4, 4);  z = ({1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9});  w = (T, T, T, T, T, T, T, T, T)

Table 5.3: This table shows the values of the parameters (P) at each iteration (k) of Algorithm 4 for all nodes v_i, executed on the tree network.

5.4 Performance Evaluation on Random Networks

In this section, we compare the performance of our centralized algorithm with current state-of-the-art algorithms that find the SCCs and the finite digraph diameter.

5.4.1 Finding the Strongly Connected Components of a Directed Network

We start by comparing the run times of our algorithm against Kosaraju's algorithm [59] for finding all the SCCs on a series of random networks, including Erdős–Rényi, Barabási–Albert, and Watts-Strogatz networks. We compare the run times of both our algorithm and the Kosaraju algorithm as we vary the parameters of the networks, including the diameter, the maximum in-degree, the
We ran all the algorithms using Wolfram Mathematica on a MacBook Pro with an Apple M1 and 8GB RAM. For each type of random network (i.e., Erd˝ os-R´ enyi, Barab´ asi-Albert, and Watts-Strogatz), we randomly generated 50 networks in the following manner. For five different sets of nodes, we randomly generated ten different networks, where the sets of nodes were 100, 200, 300, 400, and 500 nodes. Furthermore, to generate the random networks, we selected two different sets of parameters for each type of random network. The Erd˝ os-R´ enyi network requires two parameters, the number of nodes and the number of edges. For the first set of parameters (Figs. 5.4 and 5.5), the number of nodes were chosen to be 100, 200, 300, 400, 500, and the number of edges were chosen to be the number of nodes raised to the 2/3 power. In the second set of parameters (Figs. 5.6 and 5.7), again the number of nodes remained the same, but the number of edges was fixed to 500 for all the sets of nodes. In Fig. 5.4, we see the comparison between different properties of the network, including the maximum in-degree, diameter, and total number of SCCs, with the run times of both our algorithm and Kosaaraju’s algorithm. In this first set of parameters, the number of SCCs plays a much larger role in determining the run-time of the algorithm. In Fig. 5.5, we see the comparison between the run times of both our algorithm and Kosaraju’s algorithm on different randomly generated Erd˝ os-R´ enyi networks using the first set of parameters. Our algorithm outperforms Kosaraju’s. The results from the second set of parameters for Erd˝ os-R´ enyi networks are shown in Figs. 5.6 and 5.7. In these networks, the diameter and maximum in-degree are much larger, so they increase the runtime of our algorithm. Fig. 5.7 shows that the runtime of our algorithm only outpeforms the Kosaraju algorithm when there are more nodes in the network. 
The Barabási–Albert networks require two parameters, the number of nodes and the number of edges added to a new vertex at each step. For the first set of parameters (Figs. 5.8 and 5.9), the numbers of nodes were fixed to 100, 200, 300, 400, and 500, and the numbers of edges added at each step were chosen to be the numbers of nodes divided by 5.

Figure 5.4: These figures show the relationship between the network properties of some randomly generated Erdős–Rényi networks and their run times for both our proposed algorithm and Kosaraju's algorithm.

In the second set of parameters (Figs. 5.10 and 5.11), again the number of nodes remained the same, but the number of edges added at each step was fixed to 50 for all the sets of nodes.

Figure 5.5: This figure compares the run times of both our proposed algorithm and Kosaraju's algorithm for several randomly generated Erdős–Rényi networks. We see that our algorithm performs better on networks with a higher number of nodes.

In Fig. 5.8, we see that the maximum in-degree plays a larger role in determining the run time of the algorithm. In Fig. 5.9, we see the comparison between the run times of our algorithm and Kosaraju's algorithm on different randomly generated Barabási–Albert networks. Our algorithm performs better for networks with more nodes. The results for the second set of parameters for Barabási–Albert networks are shown in Figs. 5.10 and 5.11. The results from the two sets of parameters do not present much difference. Again, our algorithm performs better on networks with a higher number of nodes.

Finally, the Watts-Strogatz networks require the following two parameters, the number of nodes and the rewiring probability. The first set of parameters included the node counts 100, 200, 300, 400, and 500 with a rewiring probability of 0.8, and the results are shown in Figs. 5.12 and 5.13. For the second set of parameters, the set of nodes remained the same, but the rewiring probability was reduced to 0.2, with the results shown in Figs. 5.14 and 5.15. In Fig. 5.12, we see that the diameter dominates the complexity. In Fig. 5.13, we see the comparison between the run times of our algorithm and Kosaraju's algorithm on the randomly generated Watts-Strogatz networks. Our algorithm underperforms compared to Kosaraju's.

Figure 5.6: These figures show the relationship between the network properties of some randomly generated Erdős–Rényi networks and their run times for both our proposed algorithm and Kosaraju's algorithm.
The results for the second set of parameters for the Watts-Strogatz networks are shown in Figs. 5.14 and 5.15. The results from the two sets of parameters do not present much difference.

Figure 5.7: This figure compares the run times of both our proposed algorithm and Kosaraju's algorithm for several randomly generated Erdős–Rényi networks. We see that our algorithm performs better on networks with a higher number of nodes.

It is clear that Kosaraju's algorithm outperforms our algorithm. We believe that the reason Kosaraju's algorithm outperforms ours has to do with the number of edges in the network.

5.4.2 Finding the Finite Diameter of a Directed Network

From the results of Theorem 40, our algorithm can determine the finite digraph diameter of the network. Here, we illustrate the relationship between the number of iterations required before our algorithm terminates and the finite digraph diameter plus two. Fig. 5.16 shows the results from running our algorithm on the random networks using the second set of parameters. We see that the number of required iterations is identical to the network diameter plus two.

Finally, we compared the run time of our algorithm with that of the Floyd-Warshall algorithm [66] on the Erdős–Rényi, Barabási–Albert, and Watts-Strogatz networks. We randomly generated ten different Erdős–Rényi networks using 25 nodes and 50 edges. For the Barabási–Albert network, we used 25 nodes and 3 edges added to each new vertex at each time step to generate ten different networks. Finally, we generated ten different Watts-Strogatz networks using 25 nodes and a rewiring probability of 0.2. In Fig. 5.17, we see that our algorithm outperforms the Floyd-Warshall algorithm for all networks.

Figure 5.8: These figures show the relationship between the network properties of some randomly generated Barabási–Albert networks and their run times for both our proposed algorithm and Kosaraju's algorithm.
Figure 5.9: This figure compares the run times of both our proposed algorithm and Kosaraju's algorithm for several randomly generated Barabási–Albert networks. We see that our algorithm performs better on networks with a higher number of nodes.

5.4.3 Analysis of the Computational-Time Complexity of the Distributed Algorithm

The proposed distributed algorithm to compute the SCCs of a digraph can achieve a lower time complexity than the Kosaraju algorithm in instances where the number of edges exceeds the number of nodes times the finite diameter times the maximum in-degree. In other instances, the Kosaraju algorithm will have a better time complexity. For example, in the worst case, a network can have its maximum in-degree and finite diameter equal to the number of nodes in the network, which would mean that the time complexity of our distributed algorithm would be O(N^3). However, the time complexity of the Kosaraju algorithm in the worst case is O(N^2), since the number of edges could be on the order of N^2. Therefore, we conclude that our distributed algorithm may or may not outperform Kosaraju's, depending on the network's topology.

In the case of the centralized algorithm, we provide evidence through exhaustive simulations suggesting that our algorithm outperforms the state-of-the-art Kosaraju algorithm on certain network topologies. However, we remark that this may not always be the case.
[Figure 5.10, panels (a)–(f): scatter plots omitted — runtime (secs) of our algorithm and of Kosaraju's algorithm versus maximum in-degree, diameter, and number of SCCs for Barabási–Albert networks of 100–500 nodes.]

Figure 5.10: These figures show the relationship between the network properties of some randomly generated Barabási–Albert networks and their run times for both our proposed algorithm and Kosaraju's algorithm.

For instance, when considering the same worst-case network, where the maximum in-degree and finite diameter are equal to the number of vertices in the network, our proposed centralized algorithm will have a time
complexity of O(N^4), which is worse than the worst-case time complexity of the Kosaraju algorithm (i.e., O(N^2)).

[Figure 5.11: scatter plot omitted — Kosaraju runtime (secs) versus our algorithm's runtime (secs) for Barabási–Albert networks of 100–500 nodes.]

Figure 5.11: This figure compares the run times of both our proposed algorithm and Kosaraju's algorithm for different randomly generated Barabási–Albert networks.

When contrasting our proposed distributed algorithm to compute the diameter of a digraph with the state-of-the-art Floyd-Warshall algorithm, we notice that in the worst case our algorithm's time complexity is no worse than that of the Floyd-Warshall algorithm (i.e., O(N^3)). The worst-case time complexity of our centralized algorithm is O(N^4), which is worse than that of the Floyd-Warshall algorithm; however, our simulations suggest that our centralized algorithm can outperform the Floyd-Warshall algorithm in some instances. It is important to remark that the results presented here focused on the time complexity to enable a direct comparison with the algorithms in the literature. Nonetheless, the nature of distributed algorithms requires the assessment of the communication complexity. Depending on the protocol used to exchange information (e.g., IDs), it could further increase the complexity by O(N log(k)), where k is the number of bits needed to transmit the Nth ID. Further investigation should consider the design of suitable communication protocols within their specific applications to improve the overall performance of a new class of distributed algorithms, for which the foundation is laid out in this paper.
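To make the crossover concrete, the asymptotic counts above can be compared with back-of-the-envelope arithmetic (a sketch; the function names and the sample topology numbers are ours, purely for illustration):

```python
def distributed_scc_bound(n, diameter, max_in_degree):
    """Operation count scaling as O(N * D * d_max) for the distributed algorithm."""
    return n * diameter * max_in_degree

def kosaraju_bound(n, num_edges):
    """Operation count scaling as O(N + E) for Kosaraju; E can reach N^2."""
    return n + num_edges

# Sparse digraph with small diameter and in-degree but many edges:
# the distributed bound wins.
print(distributed_scc_bound(500, 6, 10))   # 30000
print(kosaraju_bound(500, 100_000))        # 100500

# Worst case: diameter and maximum in-degree on the order of N,
# so the distributed bound degrades to O(N^3).
print(distributed_scc_bound(500, 500, 500))  # 125000000
```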
[Figure 5.12, panels (a)–(f): scatter plots omitted — runtime (secs) of our algorithm and of Kosaraju's algorithm versus maximum in-degree, diameter, and number of SCCs for Watts-Strogatz networks of 100–500 nodes.]

Figure 5.12: These figures show the relationship between the network properties of some randomly generated Watts-Strogatz networks and their run times for both our proposed algorithm and Kosaraju's algorithm.
[Figure 5.13: scatter plot omitted — Kosaraju runtime (secs) versus our algorithm's runtime (secs) for Watts-Strogatz networks of 100–500 nodes.]

Figure 5.13: This figure compares the run times of both our proposed algorithm and Kosaraju's algorithm for several randomly generated Watts-Strogatz networks.

5.5 Conclusions and Future Works

We provided for the first time a scalable distributed algorithm to find the strongly connected components and finite diameter of a directed network. The proposed solution has a time complexity of O(N·D·d_in^max), where N is the number of vertices, D is the finite diameter, and d_in^max is the maximum in-degree of the network. We demonstrated the performance of our centralized algorithm on several random networks. We compared the runtime of our centralized algorithm against Kosaraju's algorithm and found that our centralized algorithm outperformed Kosaraju's for certain network topologies. Additionally, we provided exhaustive simulations supporting that our centralized algorithm outperformed Floyd-Warshall's in computing the finite digraph diameter on every tested random network.

In our future work, we seek to understand how the time complexity may be improved for different types of dense directed networks. Furthermore, we plan to examine how to find the SCCs and diameter while taking privacy into consideration, such that the IDs of the nodes can be hidden in the process of sharing information. Finally, our future work will focus on developing possible asynchronous protocols capable of determining the strongly connected components and finite diameter of a digraph.
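For completeness, the Kosaraju baseline referenced throughout this comparison is a two-pass depth-first search over the graph and its reverse; a compact, self-contained sketch (an illustrative implementation, not the exact code used in our experiments) is:

```python
def kosaraju_sccs(adj):
    """Strongly connected components of a digraph given as an adjacency
    list {node: [successors]} using Kosaraju's two-pass DFS."""
    nodes = list(adj)
    # Pass 1: iterative DFS on the original graph, recording finish order.
    visited, order = set(), []
    for s in nodes:
        if s in visited:
            continue
        visited.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if w not in visited:
                    visited.add(w)
                    stack.append((w, iter(adj[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    # Pass 2: DFS on the reversed graph in reverse finish order;
    # each tree discovered is one SCC.
    radj = {v: [] for v in nodes}
    for v in nodes:
        for w in adj[v]:
            radj[w].append(v)
    assigned, sccs = set(), []
    for v in reversed(order):
        if v in assigned:
            continue
        assigned.add(v)
        comp, stack = [], [v]
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in radj[u]:
                if w not in assigned:
                    assigned.add(w)
                    stack.append(w)
        sccs.append(comp)
    return sccs

# Cycle 1 -> 2 -> 3 -> 1 plus a tail node 4: two SCCs.
print(sorted(sorted(c) for c in kosaraju_sccs({1: [2], 2: [3], 3: [1, 4], 4: []})))
```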
[Figure 5.14, panels (a)–(f): scatter plots omitted — runtime (secs) of our algorithm and of Kosaraju's algorithm versus maximum in-degree, diameter, and number of SCCs for Watts-Strogatz networks of 100–500 nodes.]

Figure 5.14: These figures show the relationship between the network properties of some randomly generated Watts-Strogatz networks and their run times for both our proposed algorithm and Kosaraju's algorithm.
[Figure 5.15: scatter plot omitted — Kosaraju runtime (secs) versus our algorithm's runtime (secs) for Watts-Strogatz networks of 100–500 nodes.]

Figure 5.15: This figure compares the run times of both our proposed algorithm and Kosaraju's algorithm for several randomly generated Watts-Strogatz networks.

[Figure 5.16, panels (a)–(c): scatter plots omitted — iterations versus D+2 for Erdős–Rényi, Barabási–Albert, and Watts-Strogatz networks of 100–500 nodes.]

Figure 5.16: This figure shows the relationship between the number of iterations needed before terminating our algorithm and the diameter plus two of several randomly generated networks.

[Figure 5.17: scatter plot omitted — Floyd-Warshall runtime (secs) versus our algorithm's runtime (secs) for the three random network families.]

Figure 5.17: This figure compares the runtimes of computing the finite digraph diameter when using our algorithm against the Floyd-Warshall algorithm on several randomly generated networks, including Erdős–Rényi, Barabási–Albert, and Watts-Strogatz.
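The Floyd-Warshall baseline for the finite digraph diameter can be sketched in a few lines (a textbook implementation; the finite diameter is taken as the largest finite shortest-path distance):

```python
import math

def finite_diameter_floyd_warshall(adj, n):
    """Finite diameter of a digraph: the longest finite shortest-path
    distance. adj is an adjacency list {node: [successors]} over 0..n-1."""
    dist = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for u, succs in adj.items():
        for v in succs:
            dist[u][v] = 1
    for k in range(n):          # classic O(N^3) triple loop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    # Pairs with no directed path (infinite distance) are ignored.
    return max(d for row in dist for d in row if d < math.inf)

# Directed 4-cycle: the longest finite shortest path has length 3.
print(finite_diameter_floyd_warshall({0: [1], 1: [2], 2: [3], 3: [0]}, 4))  # 3
```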
Chapter 6

Stability and Stabilization of Complex Dynamical Networks

Systems biology and biomedicine have been studied in conjunction with control theory over several decades [179]. By applying the tools developed in control theory to biological and biomedical systems, new discoveries have been made that enhance our understanding of both the function and dysfunction of biological systems [180]. One such control-theoretic tool is stability, which has been particularly useful in uncovering critical new insights in a multitude of biological and biomedical applications [85, 181, 182]. Stability assesses the long-term behavior of a dynamical model and its equilibrium properties, which are critical to understanding the nature of the underlying biological or biomedical process and its characteristics. However, the accuracy of the conclusions drawn about the properties of the biological or biomedical process under observation depends on the quality of fit of the model used to describe the underlying process. Assessing the goodness of fit requires measurements of the biological process.

There are many different tools to measure neurophysiological signals, including the electrocardiogram (ECG), electromyogram (EMG), electroencephalogram (EEG), and blood-oxygenation-level-dependent (BOLD) signal, and their models have been studied in the literature [183, 184, 112, 185, 26]. Many studies have opted to use linear dynamical models to describe the underlying neural dynamics exhibited in the brain [12, 86]; however, these models fail to capture the long-term memory clearly evidenced in neural signals [88, 89]. This inherent memory is characterized by the current state depending on an infinite combination of the previous states, where the absolute values of the weights on these previous states decay with time and are enclosed in an envelope described by a power law [18].
As an alternative to linear systems, fractional-order systems, which encapsulate this so-called long-term memory, have been shown to accurately represent neural behavior [17, 27, 26, 54, 29, 31]. These nonlinear spatiotemporal models rely on only a few parameters [183, 44, 186]. These parameters include a spatial matrix and a set of temporal scale coefficients called fractional exponents. The spatial matrix describes the spatial dependencies between different state variables. The fractional exponents describe the long-term memory of the dynamical process across different state variables. With these few required and easy-to-interpret parameters, together with their superior goodness of fit, these fractional-order linear time-invariant (FLTI) models present distinct advantages over both linear time-invariant (LTI) and (generically) nonlinear models. Specifically, LTI systems are limited in their ability to capture the different time scales occurring in biological applications, which are inherently modeled by the spatial component (i.e., the system's autonomous matrix) [184, 187, 188]. On the other hand, spatiotemporal nonlinear systems have been widely accepted as tools capable of accurately modeling neural data but may lead to over-complicated and difficult-to-interpret models that often over-fit the data.

Despite the advantages of FLTI models, there are few results regarding the stability of fractional-order systems (FOS). The prior literature describes conditions for continuous-time FOS [42] and for single-input single-output continuous-time commensurate systems (i.e., systems with equal fractional exponents across state variables) [45]. We focus on discrete-time systems since measurements of neural behavior are inherently discrete in nature. In [50], the authors leverage an infinite-dimensional representation of truncated FLTI systems (i.e., with finite memory) to give conservative sufficient conditions for stability.
While the work in [51] does provide necessary and sufficient conditions for practical and asymptotic stability of FLTI systems, it only considers commensurate-order systems (i.e., α is the same for all state variables). While recent work has investigated conditions for the stability of discrete-time fractional-order linear systems [69], these conditions are not tractable for analysis, let alone for design, as they are dependent on each time step. In short, a comprehensive analysis of FLTI stability is lacking. Hence, we aim to analyze the stability of FLTI systems and their relevance in studying epilepsy.

Epilepsy significantly diminishes the quality of life of approximately 50 million patients worldwide, and in 2016, the disease accounted for $16 billion in annual expenditures in the United States alone [9]. Unfortunately, 15 million of those patients are unresponsive to medication [189], and surgery success rates are in the range of 30%-70% due to our limited understanding of how and where the disease originates [190]. Fortunately, for these patients, other treatments, such as neurostimulation, exist. Many works have proposed methods for neurostimulation treatments in various diseases and conditions, including epilepsy [12, 13], chronic pain [14, 15], and other neurological diseases [11]. However, effective and efficient stimulation techniques rely on first properly detecting seizure onset in the brain so that the appropriate amount of stimulation can be delivered to the correct location in the brain at the precise moment in time to inhibit an oncoming seizure. Physicians characterize the start of a seizure, also known as the 'unequivocal onset of seizure,' as the time at which the iEEG pattern typically associated with seizure first becomes unquestionably clear [191].
Thus, to identify seizures from a patient's neural recordings, physicians use a method known as 'earliest iEEG change,' which relies on visual cues to find the point in time prior to the start of the seizure at which the first clear, sustained change from the patient's iEEG baseline is detected [191]. However, these methods of identifying the 'unequivocal onset of seizure' and 'earliest iEEG change' may yield different results when the data is analyzed by different clinicians.

Currently, NeuroPace's Responsive Neurostimulator System (RNS), an FDA-approved neurostimulation device, relies on physicians to pre-program the device to respond to pre-specified abnormal brain patterns – see Figure 6.1. While the initial studies involving the RNS device claim to reduce seizures [10], indications that the device detects and stimulates too frequently are a contributing cause for concern about the efficacy of this device. In fact, critics of the technology have pointed out that high-frequency electrical stimulation may decrease seizure intensity but may lead to an increase in the frequency of seizures [16].

Figure 6.1: This figure describes the method currently used in NeuroPace's RNS device for detecting and mitigating seizures, which relies on physicians to tune the device based on trial-and-error methods and feedback from patients.

To improve detection of seizures and thereby reduce unnecessary stimulation of the brain, many works have turned to machine-learning, data-driven detection methods to identify the start of seizure [192, 193, 194], but these methods lack interpretability and often require tuning to generalize across patients. Hence, to date, there is still no widely accepted formal definition that accurately characterizes the start of seizure. Without one, patients may be receiving too much or too little stimulation, leading to undesired consequences, such as an increase in the frequency of seizures.
Recently, some works have begun to uncover insights toward a rigorous definition of the start of seizure. For example, the work in [195] showed that prior to a critical transition, such as the start of seizure, a phenomenon known as 'critical slowing' occurs, in which the dominant eigenvalue becomes zero and the pattern of fluctuations exhibits an increase in autocorrelation. Other works have characterized the start of seizure as an instability and have studied the energy in the neural signals to uncover its relationship to the start of seizure and its location in the brain [190, 85]. Still others have studied the spectrum of the signals as a means to characterize the start of seizure [86, 12]. One of the most widely used seizure-detection techniques investigates the dynamic changes in electrical activity and frequency by calculating the line length, which is a proxy for the fractal dimension [196]. While this technique has the ability to capture the long-term temporal changes in the system, it is unable to characterize these changes in the spatial context of the brain. Hence, new tools are needed to capture the spatio-temporal aspects of brain behavior to accurately detect seizures.

While some works give clues about the characterization of the start of seizure [195, 190, 85, 86, 12, 196], none provide a formal definition that applies generally across all epileptic patients and all seizures. This fundamental limitation inhibits our ability to design effective neurostimulation techniques to properly treat epileptic patients. This chapter aims to fill this gap by first developing a non-Markovian, model-based framework that can accurately and efficiently characterize the start of seizure. Then, by leveraging this framework and the patient's data, we plan to design a safe and stabilizing cyber-neural system that employs closed-loop stimulation strategies to effectively and efficiently mitigate seizures.
One leading hypothesis suggests that a seizure starts because of an instability in the brain [190, 197]. Furthermore, seizures in the brain can be characterized as critical shifts in dynamical behavior, which have been shown to correspond to an increase in long-term memory [195]. Therefore, considering the long-term memory of fractional-order systems and the stability of these systems may be beneficial in assessing, predicting, and mitigating seizures. In the context of neural systems, the work in [198] provides a survey of recent results about fractional-order models in cyber-neural systems. From here on, we focus on the stability of fractional-order systems and the stabilization of these systems as a mechanism to mitigate seizures in the brain.

Subsequently, the main contributions of this chapter are as follows. First, we introduce sufficient stability conditions for multivariate FLTI systems with arbitrary fractional exponents. Because these conditions are only sufficient, we then provide necessary and sufficient stability conditions for multivariate FLTI systems with arbitrary fractional exponents. Then, we leverage these conditions to solve two design problems to stabilize FLTI systems. We apply this stabilization method to an epileptic dataset to provide a novel stimulation strategy to mitigate seizures and to unveil new insights into epilepsy.

6.1 Sufficient Condition for Global Asymptotic Stability of (Discrete-Time) Fractional-Order Linear Time-Invariant Systems

In this section, we provide a sufficient condition for the global asymptotic stability of discrete-time fractional-order linear time-invariant systems. Hence, we consider a (discrete-time) non-commensurate fractional-order linear time-invariant system (FLTI) described by

Δ^α x[k+1] = A x[k],   (6.1)

where x[k] ∈ R^n denotes the state, A ∈ R^{n×n} is the spatial matrix, α ∈ R^n collects the fractional-order exponents, and Δ^α is the Grünwald-Letnikov discretization of the fractional derivative.
The Grünwald-Letnikov discretization for any i-th state (1 ≤ i ≤ n) can be expressed as

Δ^{α_i} x_i[k] = Σ_{j=0}^{k} ψ(α_i, j) x_i[k−j],

where α_i ∈ R is the fractional-order exponent of the i-th state and ψ(α_i, j) = Γ(j−α_i) / (Γ(−α_i) Γ(j+1)), with Γ(·) denoting the Gamma function [141]. From here onward, we will use the tuple (A, α) to represent the FLTI (6.1).

Notice that System (6.1) is a (finite-dimensional) non-Markovian system: it considers a weighted linear combination of all the previous states from the beginning of time to the current time. The fractional-order exponents determine the weights on the previous states that are needed to compute the next state. These weights decay as a power law. As the fractional-order exponents approach 1, the next state depends less on states in the distant past, and the system ultimately becomes LTI if all exponents are equal to 1 [18]. Thus, when learned, if the exponents are significantly different from one, then the system is inherently not an LTI system. Intuitively, a larger fractional-order coefficient (i.e., α closer to 1) implies a lower dependency on the previous data, meaning that the weights decay at a faster rate. Furthermore, as the fractional-order coefficient α approaches 1, the system becomes closer to an integer-order linear time-invariant system.

Towards deriving the stability conditions for FLTI systems, we start by noticing that the FLTI (6.1) can be re-written as (Lemma 2, [112]):

x[k] = G_k x[0],   (6.2)

where

G_k = I_n for k = 0, and G_k = Σ_{j=0}^{k−1} A_j G_{k−1−j} for k ≥ 1,   (6.3)

with A_0 = A − D(α,1), A_j = −D(α, j+1) for j ≥ 1, and

D(α, j) = diag(ψ(α_1, j), ψ(α_2, j), ..., ψ(α_n, j)).   (6.4)

In particular, we observe that (6.3) leads to the following realizations.
G_0 = I_n,
G_1 = A − D(α,1),
G_2 = A² − A D(α,1) − D(α,1) A + D(α,1)² − D(α,2),
G_3 = A³ − A² D(α,1) − A D(α,1) A − D(α,1) A² + A D(α,1)² + D(α,1) A D(α,1) + D(α,1)² A − D(α,1)³ − A D(α,2) − D(α,2) A + D(α,1) D(α,2) + D(α,2) D(α,1) − D(α,3),
⋮   (6.5)

Using (6.2), we show that a FLTI as in (6.1) can be represented as a discrete-time linear time-varying (DTLTV) system under mild assumptions. First, we notice that a DTLTV system is described by

x[k+1] = M_k x[k],   (6.6)

with a state-transition matrix (that follows from using (6.6) recursively)

x[k] = (∏_{i=0}^{k−1} M_{k−1−i}) x[0],   (6.7)

which provides a mapping from the initial state to the state at any time k. Consequently, from (6.2) and (6.7), we find the conditions under which a FLTI can be represented as a DTLTV system by equating the state-transition matrices: G_k = ∏_{i=0}^{k−1} M_{k−1−i} for all k.

Next, we observe that M_k = G_{k+1} G_k^{−1} under the assumption that G_k is invertible for all k. In particular, we notice that

∏_{i=0}^{k−1} M_{k−1−i} = ∏_{i=0}^{k−1} G_{k−i} G_{k−1−i}^{−1} = (G_k G_{k−1}^{−1})(G_{k−1} G_{k−2}^{−1})(G_{k−2} G_{k−3}^{−1}) ⋯ G_1 G_0^{−1} = G_k G_0^{−1} = G_k.

Notice that the assumption on the invertibility of G_k for all k can be seen as a mild one by invoking measure-theoretic arguments. Simply speaking, consider any random matrix A, which is invertible almost surely, since a non-invertible A would require the determinant of A to be zero, an event that occurs with probability zero. From (6.5), we see that a very specific combination would be needed to make G_k have a determinant of zero, and thus be non-invertible, which again occurs with probability zero. Since the specific combinations required to make G_k have a zero determinant are different for each k, we do not define them explicitly; however, we can give some intuition as to what such a combination might be by providing some examples.
We remark that for G_1, the diagonal entries of D(α,1) would need to be such that, when subtracted from A, the matrix A − D(α,1) is not invertible. Similarly, for G_3, there would need to be a specific combination of the matrices A, D(α,1), D(α,2), D(α,3), and certain powers of these that yields a non-invertible matrix G_3. From these examples, we see that the specific combination making a matrix G_k non-invertible depends on A, D(α,i), and powers of these up to i, where i ∈ {1, ..., k}.

By showing that the FLTI can be re-cast as a DTLTV system under certain assumptions, we can now leverage the stability results for DTLTV systems to derive a stability condition for FLTI systems. We start by introducing the definition of global asymptotic stability.

Definition 45. Global Asymptotic Stability (Definition 5.4.1, [199]): A FLTI system (6.1) is said to be globally asymptotically stable when the following two conditions hold:

1. for every ε > 0 and k_0 ∈ N, there exists δ = δ(ε, k_0) > 0 such that for every x[k_0] ∈ R^n, if ∥x[k_0]∥_2 < δ, then ∥x[k]∥_2 < ε for all k ≥ k_0;

2. lim_{k→∞} ∥x[k]∥_2 = 0,

where ∥·∥_2 is the Euclidean norm.

We provide a sufficient condition for the global asymptotic stability of the FLTI (6.1).

Theorem 46. A FLTI (6.1), where G_k is invertible for all k ∈ N, is globally asymptotically stable if there exists a constant 0 < λ < 1 such that

∥G_k G_{k−1}^{−1}∥_2 ≤ λ   (6.8)

for all k ≥ k_0, with k_0 ∈ N.

Proof. Assuming that G_k is invertible for all k ∈ N, it follows from the previous derivation that we can represent the FLTI as a LTV system, i.e., x[k+1] = M_k x[k], where M_k = G_{k+1} G_k^{−1}. Leveraging this representation, we know that a discrete-time LTV system is globally asymptotically stable if, for any k_0, lim_{k→∞} ∥Φ(k, k_0)∥_2 = 0, where k ≥ k_0 and Φ(k, k_0) is the state-transition matrix from the state at k_0 to the state at k (Lemma 1, [200]).
Therefore, to show sufficiency, we notice that

lim_{k→∞} ∥Φ(k, k_0)∥_2 = lim_{k→∞} ∥M_{k−1} ⋯ M_{k_0}∥_2 ≤ lim_{k→∞} ∥G_k G_{k−1}^{−1}∥_2 ∥G_{k−1} G_{k−2}^{−1}∥_2 ⋯ ∥G_{k_0+1} G_{k_0}^{−1}∥_2 ≤ lim_{k→∞} λ^{k−k_0} = 0.

6.2 Necessary and Sufficient Conditions for Global Asymptotic Stability of FLTIs

In contrast with the conditions given in [69], we provide a closed-form condition to assess the global asymptotic stability of FLTI systems.

Theorem 47. A non-commensurate FLTI (6.1) is globally asymptotically stable if and only if, for A_0 := A − D(α,1), where D(α, j) = diag(ψ(α_1, j), ψ(α_2, j), ..., ψ(α_n, j)), we have |λ| < 1 for all λ ∈ σ(A_0), where σ(A_0) is the set of eigenvalues of the matrix A_0.

Proof. To present the global asymptotic stability conditions of the non-commensurate FLTI, we start by re-writing the FLTI in (6.1) as

x[k+1] = Σ_{j=0}^{k} A_j x[k−j],   (6.9)
By induction and Theorem 2.12 in [202], it readily follows that spec(A)= ∞ [ σ(A 0 ), where the symbol∞ indicates the union of a countable collection of sets. Subsequently, it follows that spec(A)=σ(A 0 ). Therefore, by the definition of globally asymptotic stability (Def- inition 5.4.1, [199]) and the stability conditions stated in Theorem 8.3 in [40], the FLTI is globally asymptotically stable if and only if|λ|< 1,∀λ∈σ(A 0 ). 123 6.3 Stabilizing FLTIs As previously mentioned, it is believed that a seizure starts because of a critical transition due to an instability [190, 197]. Under this assumption, we seek to develop methods to stabilize the brain dynamics, when modeled as a FLTI–for which we later provide evidence to be a suitable model. That said, towards stabilization, one approach is to alter the interconnections between different states, which could represent the inter-dependencies among neuronal populations. Thus, we introduce the first problem as follows: given (A,α), find ˜ A that satisfies the following minimize ˜ A∈R n× n ∥ ˜ A∥ 0 s.t. (A+ ˜ A,α) is globally asymptotically stable, (P 1 ) where∥·∥ 0 represents the zero quasi-norm, which measures the number of non-zero entries in a matrix or vector. Ifα = 1, then we would be dealing with a LTI system, and the problem could be solved using the methods in [203, 204]. Yet, whenα̸= 1, we wonder if it is possible to alter the fractional-order exponents to achieve stability. This leads us to introduce the following problem: given(A,α), find ˜ α that satisfies the following minimize ˜ α∈R n ∥ ˜ α∥ 0 s.t. (A,α+ ˜ α) is globally asymptotically stable. (P 2 ) Notice that this problem is somewhat unconventional. It means that we may be able to change the memory dependency of specific brain regions, which then suggests that the lack of asymptotic stability is the result of either too much or too little integration of the memory in a neural region. 
124 By invoking Theorem 47, we can rewrite P 1 and P 2 as minimize ˜ A∈R n× n ∥ ˜ A∥ 0 s.t. ρ(A+ ˜ A− D(α,1))< 1, (P 1 ) and minimize ˜ α∈R n ∥ ˜ α∥ 0 s.t. ρ(A− D(α+ ˜ α,1))< 1, (P 2 ) where ρ(M) := max{|λ| :λ ∈σ(M)} is the spectral radius, which is the largest eigenvalue in magnitude of arbitrary matrix M∈R n× n . Unfortunately, the objective functions of P 1 and P 2 are nonconvex, so we propose a convexifi- cation by considering the sparsity promoting 1− norm [205]. Specifically, we obtain respectively, the following objectives for both problems: minimize ˜ A∈R n× n ∥ ˜ A∥ 1 s.t. ρ(A+ ˜ A− D(α,1))< 1, (P c 1 ) and minimize ˜ α∈R n ∥ ˜ α∥ 1 s.t. ρ(A− D(α+ ˜ α,1))< 1. (P c 2 ) Next, we present the solutions to P c 1 and P c 2 . 125 Proposition 1. The solution to P c 1 is given by ˜ A= L 1 P − 1 1 , (6.12) where P 1 and L 1 are found by solving the following convex optimization problem: minimize P 1 ∈{P 1 ∈R n× n :P 1 >0},L 1 ∈R n× n ∥P 1 ∥ 1 +∥L 1 ∥ 1 s.t. P 1 P 1 A ⊺ 0 + L ⊺ 1 A 0 P 1 + L 1 P 1 > 0. Proof. To solve P c 1 , we notice that by invoking Theorem 8.4 in [40], P c 1 can be restated as minimize P 1 ∈{P 1 ∈R n× n :P 1 >0}, ˜ A∈R n× n ∥ ˜ A∥ 1 s.t. (A 0 + ˜ A) ⊺ P 1 (A 0 + ˜ A)− P 1 < 0. (P c 1 ) By applying Theorem 3 in [204], the problem becomes minimize P 1 ∈{P 1 ∈R n× n :P 1 >0},L 1 ∈R n× n ∥P 1 ∥ 1 +∥L 1 ∥ 1 s.t. h P 1 P 1 A ⊺ 0 +L ⊺ 1 A 0 P 1 +L 1 P 1 i > 0, (P c 1 ) where ˜ A= L 1 P − 1 1 . Since P c 1 is convex, it can be easily solved for L 1 and P 1 by using the interior points method [206]. In contrast, the solution to P c 2 is as follows. Proposition 2. A suboptimal solution to P c 2 is given by ˜ α i =Γ(2)(L 2 P − 1 2 ) i,i − α i (6.13) 126 for all i∈{1,...,n}, where P 2 and L 2 are found by solving the following convex optimization problem: minimize P 2 ∈{P 2 ∈R n× n :P 2 >0},L 2 ∈R n× n ∥P 2 ∥ 1 +∥L 2 ∥ 1 s.t. h P 2 P 2 A ⊺ +L ⊺ 2 AP 2 +L 2 P 2 i > 0 and P 2 ,L 2 diagonal. Proof. Similarly to Proposition 1, we begin to solve P c 2 . 
Hence, by Theorem 8.4 in [40], we can restate $P^c_2$ as

$$\min_{P_2\in\{P_2\in\mathbb{R}^{n\times n}:\,P_2>0\},\ \tilde{\alpha}\in\mathbb{R}^{n}} \|\tilde{\alpha}\|_1 \quad \text{s.t.} \quad (A-D(\alpha+\tilde{\alpha},1))^{\top} P_2 (A-D(\alpha+\tilde{\alpha},1)) - P_2 < 0. \qquad (P^c_2)$$

We again apply Theorem 3 in [204] to obtain the following:

$$\min_{P_2\in\{P_2\in\mathbb{R}^{n\times n}:\,P_2>0\},\ L_2\in\mathbb{R}^{n\times n}} \|P_2\|_1+\|L_2\|_1 \quad \text{s.t.} \quad \begin{bmatrix} P_2 & P_2 A^{\top} + L_2^{\top} \\ A P_2 + L_2 & P_2 \end{bmatrix} > 0, \qquad (P^c_2)$$

where $D(\alpha+\tilde{\alpha},1)=-L_2 P_2^{-1}$. Since $D(\alpha+\tilde{\alpha},1)$ is diagonal, we restrict $L_2$ and $P_2$ to be diagonal, which imposes $2(n^2-n)$ additional linear constraints. $P^c_2$ is convex and can be solved for $L_2$ and $P_2$ by using the interior-point method [206].

From $D(\alpha+\tilde{\alpha},1)=-L_2 P_2^{-1}$, we need a way to easily obtain $\tilde{\alpha}$. From (6.10), we notice that $D(\alpha+\tilde{\alpha},1)$ depends on $\psi(\alpha_i+\tilde{\alpha}_i,1)=\frac{\Gamma(1-(\alpha_i+\tilde{\alpha}_i))}{\Gamma(-(\alpha_i+\tilde{\alpha}_i))\,\Gamma(2)}$ for all $i\in\{1,\ldots,n\}$. Yet, remarkably, from the relationship $\Gamma(1+z)=z\,\Gamma(z)$, we have that $\psi(\alpha_i+\tilde{\alpha}_i,1)$ can be simplified by cancelling the common factor $\Gamma(-(\alpha_i+\tilde{\alpha}_i))$:

$$\frac{\Gamma(1-(\alpha_i+\tilde{\alpha}_i))}{\Gamma(-(\alpha_i+\tilde{\alpha}_i))\,\Gamma(2)}=\frac{-(\alpha_i+\tilde{\alpha}_i)}{\Gamma(2)}. \qquad (6.14)$$

Hence, $D(\alpha+\tilde{\alpha},1)$ becomes

$$D(\alpha+\tilde{\alpha},1)=\begin{bmatrix} \frac{-(\alpha_1+\tilde{\alpha}_1)}{\Gamma(2)} & 0 & \cdots & 0 \\ 0 & \frac{-(\alpha_2+\tilde{\alpha}_2)}{\Gamma(2)} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{-(\alpha_n+\tilde{\alpha}_n)}{\Gamma(2)} \end{bmatrix}.$$

By equating the diagonal entries of $L_2 P_2^{-1}$ to the diagonal entries of $D(\alpha+\tilde{\alpha},1)$, we solve for $\tilde{\alpha}$ and obtain the result.

Notice that, in contrast with the solution to $P^c_1$, in the suboptimal solution to $P^c_2$ we only allow the elements of $\tilde{\alpha}$ to be possibly nonzero. In turn, $\tilde{\alpha}$ corresponds to the diagonal entries of $L_2 P_2^{-1}$, which in all likelihood are only numerically attainable when both matrices $L_2$ and $P_2$ are restricted to be diagonal and $P_2$ is positive definite.

6.3.1 Computationally Efficient and Graphically-Interpretable Sufficient Approximate Solutions for $P^c_1$ and $P^c_2$

In this section, we present graphically-interpretable sufficient approximate solutions to solve $P^c_1$ and $P^c_2$.

Proposition 3.
The following problem formulation is sufficient for solving $P^c_1$:

$$\min_{\tilde{A}\in\mathbb{R}^{n\times n}} \|\tilde{A}\|_1 \quad \text{s.t.} \quad \left|a_{i,i}+\tilde{a}_{i,i}+\frac{\alpha_i}{\Gamma(2)}\right| < 1-\sum_{j\in N\setminus\{i\}}|a_{i,j}+\tilde{a}_{i,j}|. \qquad (P^g_1)$$

Proof. We start by invoking Gershgorin's theorem [207] and the reverse triangle inequality, which combine to say that for all $\lambda\in\sigma(A+\tilde{A}-D(\alpha,1))$ there exists a positive integer $i\in\{1,\ldots,n\}$ such that $|\lambda|\le|a_{i,i}+\tilde{a}_{i,i}-\psi(\alpha_i,1)|+\sum_{j\in N\setminus\{i\}}|a_{i,j}+\tilde{a}_{i,j}|$.

Hence, in providing sufficient graphical conditions to solve $P^c_1$, we seek to find all $\tilde{a}_{i,j}$, for all $i\in\{1,\ldots,n\}$ and $j\in\{1,\ldots,n\}$, such that $|a_{i,i}+\tilde{a}_{i,i}-\psi(\alpha_i,1)|+\sum_{j\in N\setminus\{i\}}|a_{i,j}+\tilde{a}_{i,j}|<1$. This implies that

$$\left|a_{i,i}+\tilde{a}_{i,i}-\frac{\Gamma(1-\alpha_i)}{\Gamma(-\alpha_i)\,\Gamma(2)}\right| < 1-\sum_{j\in N\setminus\{i\}}|a_{i,j}+\tilde{a}_{i,j}|. \qquad (6.15)$$

Interestingly, we notice that the sufficient graphical constraint in (6.15) implicitly requires that $1-\sum_{j\in N\setminus\{i\}}|a_{i,j}+\tilde{a}_{i,j}|>0$, which shows that the problem may not always be feasible. By the same relationship used in (6.14), the term $\frac{\Gamma(1-\alpha_i)}{\Gamma(-\alpha_i)\,\Gamma(2)}$ reduces to $\frac{-\alpha_i}{\Gamma(2)}$, which yields $(P^g_1)$.

Similarly, we present the graphically-interpretable sufficient approximate solution for $P^c_2$.

Proposition 4. The following problem formulation is sufficient for solving $P^c_2$:

$$\min_{\tilde{\alpha}\in\mathbb{R}^{n}} \|\tilde{\alpha}\|_1 \quad \text{s.t.} \quad \left|a_{i,i}+\frac{\alpha_i+\tilde{\alpha}_i}{\Gamma(2)}\right| < 1-\sum_{j\in N\setminus\{i\}}|a_{i,j}|. \qquad (P^g_2)$$

Proof. Similar to the proof of Proposition 3, we again start by invoking Gershgorin's theorem [207] and the reverse triangle inequality, which combine to say that for all $\lambda\in\sigma(A-D(\alpha+\tilde{\alpha},1))$ there exists a positive integer $i\in\{1,\ldots,n\}$ such that $|\lambda|\le|a_{i,i}-\psi(\alpha_i+\tilde{\alpha}_i,1)|+\sum_{j\in N\setminus\{i\}}|a_{i,j}|$.

Hence, in providing sufficient graphical conditions to solve $P^c_2$, we seek to find $\tilde{\alpha}_i$, for all $i\in\{1,\ldots,n\}$, such that $|a_{i,i}-\psi(\alpha_i+\tilde{\alpha}_i,1)|+\sum_{j\in N\setminus\{i\}}|a_{i,j}|<1$, which implies that

$$\left|a_{i,i}-\frac{\Gamma(1-(\alpha_i+\tilde{\alpha}_i))}{\Gamma(-(\alpha_i+\tilde{\alpha}_i))\,\Gamma(2)}\right| < 1-\sum_{j\in N\setminus\{i\}}|a_{i,j}|.
(6.16)

The sufficient graphical constraint in (6.16) implicitly requires that $1-\sum_{j\in N\setminus\{i\}}|a_{i,j}|>0$. Therefore, the problem may not always be feasible. By the same relationship used in (6.14), the term $\frac{\Gamma(1-(\alpha_i+\tilde{\alpha}_i))}{\Gamma(-(\alpha_i+\tilde{\alpha}_i))\,\Gamma(2)}$ reduces to $\frac{-(\alpha_i+\tilde{\alpha}_i)}{\Gamma(2)}$, which yields $(P^g_2)$.

We remark that these solutions are more computationally efficient, as only a single $n\times n$ matrix (in the case of $P^g_1$) or a single vector of size $n$ (in the case of $P^g_2$) needs to be found, whereas in $P^c_1$ and $P^c_2$ two $n\times n$ matrices must be found. Furthermore, $P^c_1$ has $4n^2$ constraints and $P^c_2$ has $6n^2-4n$ constraints, whereas both $P^g_1$ and $P^g_2$ have only $n$ constraints. These sufficient approximate solutions are advantageous in the context of epilepsy, as there may be many sensors measuring brain activity, so the network may be very large. Besides, they provide neuroscientists and physicians with criteria that are graphically intuitive.

6.4 Mitigating Epilepsy by Stabilizing FLTIs

Figure 6.2: This figure illustrates the novel stimulation strategy to mitigate seizures in the brain. As is common in today's technology, the brain's behavior is continuously monitored. The novelty arises in detecting the start of a seizure, which we define as the instability of a fractional-order dynamical network, characterized by the eigenvalues of the system. Next, we devise a method to calculate a stabilizing closed-form feedback. Finally, we demonstrate the mitigation of the seizure in the brain after applying the closed-loop feedback.

We illustrate the usefulness of our framework by applying it to a dataset from an epileptic patient [208]. Specifically, we unveil new insights into novel treatments for epilepsy.
We studied the first 6 channels of electrocorticography data from patient HUP64 ictal block 1 in the International Epilepsy Electrophysiology Portal [208]. The data was recorded at a sampling rate of 512 Hz, it was marked by clinical experts, and it was pre-processed according to the procedure outlined in [12]. We verified that the data exhibit long-range dependence by computing the Hurst exponents for each channel, $\{0.66, 0.66, 0.77, 0.72, 0.75, 0.7\}$, which motivates the use of a DTLFOS as a suitable model for the data [209]. Next, we estimated the DTLFOS' parameters from the data using the methods in [112] and a time window of 1 second. We then examined the parameters 12 seconds before the seizure, in Figs. 6.3 (a) and (b), and during the seizure, specifically 48 seconds after its start, in Figs. 6.5 (a) and (b).

Figure 6.3: Spatial matrix ($A$) and fractional-order exponents ($\alpha$) 12 seconds before the patient's seizure.

By Theorem 47, we find that both systems (before and during the seizure) are unstable, yet with different dynamical properties, as captured by the systems' eigenvalues before and during the seizure; see Fig. 6.4 (a) and (b), respectively.

Figure 6.4: Eigenvalues of $A_0 = A - D(\alpha,1)$ using the DTLFOS parameters 12 seconds before and 48 seconds after the seizure starts.

Next, to stabilize the DTLFOS during the seizure, we solve $P^c_1$ and $P^c_2$ as proposed in Proposition 1 and Proposition 2, respectively, using CVX in MATLAB [210] and the parameters during the seizure. We present the results in Fig. 6.5. On the one hand, after solving $P^c_1$, the diagonal values of $\tilde{A}$ and of the updated spatial matrix $A+\tilde{A}$ are lower as compared to the diagonal values of $A$, as shown in Figs. 6.5 (a), (c), and (e). Here, we can interpret the spatial matrix as the level of activity between neuron populations. Therefore, having lower values might indicate that there is less activity between neuron populations. On the other hand, after solving $P^c_2$, we notice that the values of the altered $\tilde{\alpha}$ and of the updated fractional-order exponents $\alpha+\tilde{\alpha}$ are, in general, lower than the values of $\alpha$, as shown in Figs. 6.5 (b), (d), and (f). The original fractional-order exponents ($\alpha$) during the seizure are close to 1. However, the updated fractional-order exponents have lower values in the range of $(-0.6, 0.5)$, which provides converging evidence for the empirical results presented in [195]. After solving $P^c_1$ and $P^c_2$, we find the eigenvalues of the new systems, as shown in Figs. 6.5 (g) and (h), which verify the feasibility of our problems by invoking Theorem 47.

Finally, for illustration, we obtain the solutions to $P^g_1$ and $P^g_2$ given by Proposition 3 and Proposition 4, respectively, depicted in Fig. 6.6. We again find that the diagonal values of the new spatial matrix are lower than the diagonal values of $A$. Similarly, the values of the new fractional-order exponents are lower than the values of $\alpha$. Once again, feasibility is ensured by invoking Theorem 47.

We now analyze the implications that our framework has on providing potential treatments for epilepsy. We notice in Fig. 6.4 that, while both systems before and during the seizure are unstable, the system during the seizure has more eigenvalues outside of the unit circle. This might indicate that the system moves further from stability [197, 190].
Figure 6.5: Spatial matrix and fractional-order exponents during the seizure, and their updated values and updated eigenvalues after solving $P^c_1$ and $P^c_2$.

Therefore, to mitigate the seizure, it is hypothesized that we must correct for the instability [197, 190]. Our framework achieves this by altering either the fractional-order exponents or the spatial matrix. In practice, changing the system parameters could be achieved through the targeted release of a drug [211], electrical or ultrasound neurostimulation [13], optogenetics [212], or even through the regulation of the glial astrocytes [213]. Our method is an event-triggered state-feedback control, i.e., $u[k]=Kx[k]$, where $u[k]$ is the control input, and it is employed when a seizure is detected. For example, in the case of $P^c_1$, this is as simple as setting $K=\tilde{A}$. For $P^c_2$, we simply set the diagonal entries $K_{i,i}=\frac{-\tilde{\alpha}_i}{\Gamma(2)}$ for all $i\in\{1,\ldots,n\}$, and the off-diagonal entries to zero.
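The construction of the event-triggered gain $K$ can be sketched directly. The following Python/numpy snippet builds $K$ for both cases; $A_0$ and the stabilizing perturbation $\tilde{A}$ below are hand-picked toy values for illustration, not solutions obtained from $P^c_1$ or $P^c_2$:

```python
import math
import numpy as np

GAMMA2 = math.gamma(2.0)  # Gamma(2) = 1

def gain_from_spatial(A_tilde: np.ndarray) -> np.ndarray:
    """Feedback gain for the P^c_1 case: K = A_tilde."""
    return A_tilde

def gain_from_exponents(alpha_tilde: np.ndarray) -> np.ndarray:
    """Feedback gain for the P^c_2 case: K_ii = -alpha_tilde_i / Gamma(2),
    with zero off-diagonal entries."""
    return np.diag(-np.asarray(alpha_tilde) / GAMMA2)

# Toy unstable A0 = A - D(alpha, 1) and a hand-picked stabilizing perturbation.
A0 = np.array([[1.2, 0.3],
               [0.0, 1.1]])
A_tilde = np.array([[-0.6, 0.0],
                    [0.0, -0.5]])

K = gain_from_spatial(A_tilde)
rho_open = max(abs(np.linalg.eigvals(A0)))
rho_closed = max(abs(np.linalg.eigvals(A0 + K)))
print(rho_open >= 1.0, rho_closed < 1.0)  # True True: feedback moves the spectral radius below 1
```

The same spectral-radius check serves as an event trigger: the gain is applied only while $\rho(A_0)\ge 1$.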
We conduct a detailed study to validate fractional-order systems as a suitable model for neural behavior, to characterize the relationship between stability and seizure onset, and to evaluate the effectiveness of our feedback control strategy in mitigating seizures. We examined a real-world data set from eight epileptic patients exhibiting similar seizures in the International Epilepsy Electrophysiology Portal [208]. The patients from the Mayo Clinic are referred to as 'MAYO,' and the patients from the Hospital at the University of Pennsylvania are referred to as 'HUP.'

Figure 6.6: These figures show the updated spatial matrix, fractional-order exponents, and eigenvalues after solving $P^g_1$ and $P^g_2$.

The electrocorticography data was recorded at a sampling rate of 500/512 Hz (MAYO/HUP), it was marked by clinical experts, and it was pre-processed according to the procedure outlined in [12]. We performed our experiments on eight different epileptic patient data sets. The meta-data for the different patients and their seizures are given in Table 6.1. The data from the eight epileptic patients that we analyze in this study are shown in Figure 6.7.
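The model comparison in the next subsection fits a model per sliding window. The DTLFOS estimator of [112] is not reproduced here; the following is a hedged sketch of only the LTI baseline, fitting $A$ in $x[k+1]=Ax[k]$ by least squares over one window (Python/numpy; all names and the synthetic data are illustrative):

```python
import numpy as np

def fit_lti_window(X: np.ndarray) -> np.ndarray:
    """Least-squares fit of A in x[k+1] = A x[k] from an (n, T) window X."""
    X_past, X_next = X[:, :-1], X[:, 1:]
    # Solve X_past^T A^T = X_next^T in the least-squares sense.
    A_hat_T, *_ = np.linalg.lstsq(X_past.T, X_next.T, rcond=None)
    return A_hat_T.T

def one_step_mse(X: np.ndarray, A: np.ndarray) -> float:
    """Mean-squared one-step-ahead prediction error over the window."""
    err = X[:, 1:] - A @ X[:, :-1]
    return float(np.mean(err ** 2))

# Synthetic sanity check: noise-free data from a known A is recovered.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [-0.2, 0.8]])
X = np.empty((2, 100))
X[:, 0] = rng.standard_normal(2)
for k in range(99):
    X[:, k + 1] = A_true @ X[:, k]
A_hat = fit_lti_window(X)
print(one_step_mse(X, A_hat))  # near zero for noise-free data
```

On real ECoG the same fit is repeated per 1-, 2-, or 5-second window, and the mean-squared errors are accumulated across channels and windows, as reported in Table 6.2.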
6.4.1 Fractional Dynamics are a Suitable Proxy for Brain Behavior

We provide evidence that fractional dynamics are a suitable model to represent neural behavior by examining the mean-squared error between the data and two different models, namely the fractional-order system (FOS) and the linear time-invariant (LTI) system.

Patient | Sex | Age (Onset/Surgery) | Seizure Onset | Etiology | Seizure Type | Seizures (N) | Imaging | Outcome | SOZ Localization
HUP64 | M | 0.3/20 | left frontal | dysplasia | CP+GTC | 1 | L | ENGEL-I | L
HUP68 | F | 15/26 | right temporal | meningitis | CP, CP+GTC | 5 | NL | ENGEL-I | L
HUP72 | F | 11/27 | bilateral | left mesial temporal sclerosis | CP+GTC | 1 | L | NR | L
HUP86 | F | 18/25 | left temporal | cryptogenic | CP+GTC | 2 | NL | ENGEL-II | L
MAYO10 | F | 0/13 | left frontal | neonatal injury | CP+GTC | 2 | L | NF | L
MAYO11 | F | 10/34 | right mesial frontal | unknown | CP | 2 | NL | NF | L
MAYO16 | F | 5/36 | right temporal orbito-frontal | unknown | CP+GTC | 3 | NL | ILAE-IV | L
MAYO20 | M | 05/10 | right frontal | unknown | CP+GTC | 4 | NL | ILAE-IV | L

Table 6.1: This table shows the metadata for the patients.

We compare three different window sizes, that is, 1, 2, and 5 seconds (see Table 6.2), as well as three different prediction horizons, that is, 1, 2, and 5 steps ahead. We compute the total mean-squared error across all channels and all windows for the eight patients. In only two of the patients do we find that the mean-squared error for the FOS is smaller than the mean-squared error for the LTI system. However, we find that the total mean-squared error for the FOS is relatively similar to that of the LTI system, with the exception of patient MAYO16. For both the LTI system and the FOS, the models generalize well to different prediction horizons, since the mean-squared error is relatively similar across prediction horizons. For the LTI system, a smaller time window produces a smaller mean-squared error.
However, in the case of the FOS, there are instances where a larger time window produces a smaller mean-squared error, as for patient HUP64 when comparing the 1-second time window to the 2-second time window, and for patient HUP72 when comparing the 5-second time window with the 1- and 2-second time windows. This indicates that the FOS may need a longer time window to accurately capture the inherent long-term memory prevalent in these signals.

Figure 6.7: These figures show the ECoG data from the patients that we analyze in our experiments (HUP64, HUP68, HUP72, HUP86, MAYO010, MAYO011, MAYO016, MAYO020 ictal blocks; amplitude in µV versus time in seconds).

The results from studying the mean-squared error indicate that the fractional-order system can be used to represent neural behavior from electrocorticography recordings.

6.4.2 Time of Seizure Onset is Related to the Stability of Fractional Dynamics

We seek an autonomous method to pinpoint the time of seizure onset by studying the stability of the fractional-order system. The stability of fractional-order systems depends upon the eigenvalues of the system.
Hence, using the data from the channels associated with the time of seizure onset (as clinically marked by physicians), we estimate the parameters of the fractional-order system using the method outlined in [112], and we compute the eigenvalues of the fractional-order system from these parameters.

Patient | Window Size | MSE LTI (1 step) | MSE LTI (2 steps) | MSE LTI (5 steps) | MSE FOS (1 step) | MSE FOS (2 steps) | MSE FOS (5 steps)
HUP64 | 1 second | 1.091e8 | 1.091e8 | 1.091e8 | 1.072e8 | 1.072e8 | 1.072e8
HUP64 | 2 second | 1.485e8 | 1.485e8 | 1.485e8 | 1.054e8 | 1.054e8 | 1.054e8
HUP64 | 5 second | 1.687e8 | 1.687e8 | 1.687e8 | 1.173e8 | 1.173e8 | 1.173e8
HUP68 | 1 second | 1.198e8 | 1.198e8 | 1.198e8 | 2.174e9 | 2.174e9 | 2.174e9
HUP68 | 2 second | 1.521e8 | 1.521e8 | 1.521e8 | 3.185e9 | 3.185e9 | 3.185e9
HUP68 | 5 second | 1.697e8 | 1.697e8 | 1.697e8 | 3.232e9 | 3.232e9 | 3.232e9
HUP72 | 1 second | 1.956e7 | 1.956e7 | 1.956e7 | 8.500e9 | 8.500e9 | 8.500e9
HUP72 | 2 second | 1.957e7 | 1.958e7 | 1.958e7 | 5.494e7 | 5.494e7 | 5.494e7
HUP72 | 5 second | 2.077e7 | 1.078e7 | 2.078e7 | 2.49e7 | 2.49e7 | 2.49e7
HUP86 | 1 second | 2.490e8 | 2.490e8 | 2.490e8 | 8.490e7 | 8.490e7 | 8.490e7
HUP86 | 2 second | 3.168e8 | 3.168e8 | 3.168e8 | 1.243e8 | 1.243e8 | 1.243e8
HUP86 | 5 second | 3.563e8 | 3.563e8 | 3.563e8 | 1.494e8 | 1.495e8 | 1.495e8
MAYO10 | 1 second | 4.984e9 | 4.927e9 | 4.893e9 | 9.88e10 | 9.92e10 | 1.004e11
MAYO10 | 2 second | 4.297e9 | 4.297e9 | 4.297e9 | 1.230e11 | 1.230e11 | 1.231e11
MAYO10 | 5 second | 4.212e9 | 4.197e9 | 4.188e9 | 1.071e11 | 1.071e11 | 1.073e11
MAYO11 | 1 second | 8.447e7 | 8.459e7 | 8.466e7 | 1.996e10 | 1.996e10 | 1.998e10
MAYO11 | 2 second | 8.495e7 | 8.51e7 | 8.521e7 | 4.943e10 | 4.943e10 | 4.944e10
MAYO11 | 5 second | 9.074e7 | 9.099e7 | 9.114e7 | 4.948e10 | 4.949e10 | 4.985e10
MAYO16 | 1 second | 3.064e9 | 2.863e9 | 2.741e9 | 1e23 | 1e23 | 2.946e27
MAYO16 | 2 second | 1.936e9 | 1.982e9 | 2.009e9 | 2.946e27 | 2.946e27 | 2.946e27
MAYO16 | 5 second | 1.939e9 | 1.983e9 | 2.01e9 | 2.946e27 | 2.946e27 | 2.946e27
MAYO20 | 1 second | 4.266e7 | 4.361e7 | 4.417e7 | 8.26e11 | 8.534e11 | 5.118e12
MAYO20 | 2 second | 5.507e7 | 5.526e7 | 5.537e7 | 5.167e12 | 5.167e12 | 5.167e12
MAYO20 | 5 second | 6.165e7 | 6.209e7 | 6.235e7 | 5.173e12 | 5.173e12 | 5.222e12

Table 6.2: This table shows the results of the mean-squared error for both the LTI and FOS dynamics.

The magnitudes of the eigenvalues provide a measure of the energy in the system. We investigate the evolution of the magnitude of the eigenvalues over the time of the seizure. Furthermore, we study the evolution of the number of stable and unstable modes over the time of the seizure, which gives an indication of the nature of the stability of the system over time; see Figures 6.8-6.15.

By examining the magnitude of the eigenvalues and the stable modes over the time of the seizure, we observe a distinct pattern at the time of seizure onset, in which the magnitudes of the eigenvalues drop below 1, indicating that the system is stable. The eigenvalues remain in the stable region for several seconds before spiking again. This pattern is demonstrated most clearly in patients HUP68, HUP72, MAYO10, and MAYO16. In the time leading up to the seizure, the stability of the system oscillates between stable and unstable, and the magnitudes of the eigenvalues spike. We observe this pattern across different time windows; however, it is seemingly most pronounced in the 2-second time window. While MAYO20, MAYO11, HUP86, and HUP64 do not seem to exhibit this clear pattern, for patients MAYO20 and MAYO11 there is an oscillation between stable and unstable modes that occurs before the seizure, and for patients HUP86 and HUP64 there is a spiking in the magnitude of the eigenvalues that occurs after the start of the seizure.
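The stable/unstable mode counts underlying this analysis follow directly from the eigenvalues of each per-window estimate. A minimal sketch (Python/numpy; the helper names and toy matrices are hypothetical) of counting stable ($|\lambda|<1$) versus unstable modes per window, together with a simple onset rule based on unstable modes outnumbering stable ones:

```python
import numpy as np

def count_modes(A0: np.ndarray):
    """Split the eigenvalues of one per-window A0 into stable (|lambda| < 1)
    and unstable (|lambda| >= 1) modes."""
    mags = np.abs(np.linalg.eigvals(A0))
    stable = int(np.sum(mags < 1.0))
    return stable, A0.shape[0] - stable

def onset_index(windows):
    """First window where unstable modes outnumber stable ones, taken here
    as a proxy for seizure onset; returns -1 if no such window exists."""
    for i, A0 in enumerate(windows):
        stable, unstable = count_modes(A0)
        if unstable > stable:
            return i
    return -1

# Toy sequence of per-window matrices (illustrative values).
windows = [np.diag([0.8, 0.9]), np.diag([0.7, 1.1]), np.diag([1.2, 1.3])]
print([count_modes(W) for W in windows])  # [(2, 0), (1, 1), (0, 2)]
print(onset_index(windows))               # 2
```

In the experiments, each `A0` would be the estimate $A-D(\alpha,1)$ for one 1-, 2-, or 5-second window of data.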
Leveraging these observations, we determine the start of the seizure to be the shift from predominantly stable modes to predominantly unstable modes, which is consistent with the spiking behavior in the magnitude of the eigenvalues that occurs after the eigenvalues dip below 1. We will use this pattern to determine the time to administer neurostimulation to mitigate seizures.

Figures 6.8-6.15 (one per patient: HUP64, HUP68, HUP72, HUP86, MAYO10, MAYO11, MAYO16, MAYO20): evolution of the magnitude of the eigenvalues and of the number of stable and unstable modes over the time of the recording, for estimation time windows of 1, 2, and 5 seconds.

6.4.3 Stabilizing Fractional Dynamics Lowers the Energy in the System

We evaluate the model-based framework developed in [70] that aims to mitigate seizures by stabilizing fractional-order dynamics. In particular, using the estimated parameters of the fractional-order system, the control scheme in [70] computes an updated spatial matrix $\tilde{A}$. This updated matrix $\tilde{A}$ is used as a feedback control $u[k]=\tilde{A}x[k]$, where $k$ is the time step, $x[k]$ is the vector of measurements from the electrocorticography recordings, and $u[k]$ is the vector of the control input.
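The feedback $u[k]=\tilde{A}x[k]$ can be exercised on a toy rollout to illustrate the claimed energy reduction. The following is a hedged sketch (Python/numpy); $A_0$ and $\tilde{A}$ below are hand-picked illustrative values, not estimates from patient data:

```python
import numpy as np

def simulate(A0: np.ndarray, K: np.ndarray, x0: np.ndarray, steps: int) -> np.ndarray:
    """Roll out x[k+1] = A0 x[k] + u[k] with state feedback u[k] = K x[k]."""
    X = [x0]
    for _ in range(steps):
        X.append(A0 @ X[-1] + K @ X[-1])
    return np.array(X)

A0 = np.array([[1.1, 0.2],
               [0.0, 1.05]])       # unstable open loop (toy values)
A_tilde = -0.5 * np.eye(2)         # hypothetical stabilizing perturbation
x0 = np.array([1.0, 1.0])

open_loop = simulate(A0, np.zeros((2, 2)), x0, 50)
closed_loop = simulate(A0, A_tilde, x0, 50)
print(np.abs(open_loop[-1]).max() > 1.0)    # True: open loop diverges
print(np.abs(closed_loop[-1]).max() < 1.0)  # True: feedback contracts the state
```

The lower-amplitude blue traces in Figures 6.16-6.23 correspond to exactly this kind of closed-loop rollout, computed one step ahead from the recorded data.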
Figures 6.16-6.23 show the data (in red) during 1-, 2-, and 5-second time windows starting 60 seconds into the recording, together with the 1-step-ahead predicted samples from the fractional-order system using the updated spatial matrix (in blue). In these figures, we see that the controlled signals (in blue) have a lower amplitude and lower frequency as compared with the data (in red) across all patients and all window sizes. These results provide evidence that the stabilizing control scheme has the ability to lower the overall energy in the system.

Figures 6.16-6.18 (HUP64, HUP68, HUP72): ECoG data and controlled signals for window sizes of 1, 2, and 5 seconds.

6.4.4 Mathematically Representing Neural Dynamics

There are different methods to represent neural signals.
One useful tool in modeling is a dynamical network system [83]. For example, [84] provides an overview of the recent advances in modeling the multi-scale behavior of the brain using dynamical networks. These models have allowed researchers to draw conclusions regarding the brain's topology and function.

Figures 6.19-6.21 (HUP86, MAYO10, MAYO11): ECoG data and controlled signals for window sizes of 1, 2, and 5 seconds.

While many studies have used linear dynamical systems to model neural behavior [12, 85, 86], these models are unable to capture the nonlinear and non-Markovian behavior exhibited in the brain [87, 88, 89]. On the other hand, several studies have used more complex nonlinear models; however, these models are not easy to interpret and explain in the context of brain dynamics [90, 91].
[Figure 6.22: MAYO16. ECoG data (red) and controlled signals (blue) for MAYO016-ictal-block-2; panels (a)–(c) show window sizes of 1, 2, and 5 seconds (amplitude in µV).]

[Figure 6.23: MAYO20. ECoG data (red) and controlled signals (blue) for MAYO020-ictal-block-3; panels (a)–(c) show window sizes of 1, 2, and 5 seconds (amplitude in µV).]

Fractional-order dynamical systems, which originated in physics and economics and quickly found their way into engineering applications [92, 93, 94, 95, 96, 97, 98], are appealing mainly due to their representation as a compact spatiotemporal dynamical system with two easy-to-interpret sets of parameters: the fractional-order coefficients and the spatial coupling matrix. The fractional-order coefficients capture the long-range memory in the dynamics of each state variable of the system, and the spatial matrix captures the spatial coupling between different state variables. Fractional-order systems provide an efficient way to model many different systems [25]. Here, we provide evidence that fractional dynamics are suitable for mathematically representing electrocorticography recordings from epileptic patients.

6.4.5 Detecting Seizures by Identifying Stable Fractional Dynamics

Recently, some works have begun to uncover insight into developing a rigorous definition for the start of seizures.
For example, the work in [195] showed that prior to a critical transition, such as the start of seizures, a phenomenon known as 'critical slowing' occurs, which is when the dominant eigenvalue becomes zero and is associated with an increase in autocorrelation in the pattern of fluctuations. Other works have characterized the seizure onset as an instability and have studied the energy in the neural signals to uncover its relationship to the seizure onset and location in the brain [190, 85]. Still others have studied the spectrum of the signals as a means to characterize the start of seizure [86, 12].

One of the most widely used techniques to detect seizures investigates the dynamic changes in electrical activity and frequency by calculating the line length, which is a proxy for the fractal dimension [196]. While this technique has the ability to capture the long-term temporal changes in the system, it is unable to characterize these changes in the spatial context of the brain. Hence, new tools are needed to capture the spatio-temporal aspects of brain behavior to accurately detect seizures.

While some works give clues about the characterization of the start of seizure [195, 190, 85, 86, 12, 196], none provide a formal definition that generally applies across all epileptic patients and all seizures. This fundamental limitation inhibits our ability to design effective neurostimulation techniques to properly treat epileptic patients. We believe that the eigenvalues of the fractional-order system and the number of stable modes can be used to effectively characterize the seizure onset and to design effective future prediction schemes for seizures.
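As a point of reference, the line-length feature mentioned above is simply the windowed sum of absolute first differences of the signal. The sketch below is a minimal illustration on a synthetic signal; the sampling rate, window size, and variable names are assumptions for illustration, not taken from the thesis:

```python
import numpy as np

def line_length(x, win):
    """Sliding-window line length: the sum of absolute first differences
    over each window of `win` samples (a proxy for fractal dimension)."""
    d = np.abs(np.diff(x))
    c = np.concatenate(([0.0], np.cumsum(d)))  # cumulative sum for O(1) windows
    return c[win:] - c[:-win]

fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)                # 4 seconds of signal
x = np.sin(2 * np.pi * 5 * t)              # low-frequency "background"
x[2 * fs:] = 3 * np.sin(2 * np.pi * 25 * t[2 * fs:])  # higher-energy segment
ll = line_length(x, win=fs)                # 1-second windows
print(ll[:fs].mean() < ll[-fs:].mean())    # True: line length rises sharply
```

In practice the window slides over each channel independently, which is exactly the limitation noted above: the feature is purely temporal and carries no information about spatial coupling between channels.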
The pattern that we identified in the results section, where the magnitude of the eigenvalues dips below 1 during the start of seizure and eventually spikes again, is associated with the notion of critical slowing [195, 214], which is the gradual increase in both signal variance and auto-correlation that leads to a critical transition. Furthermore, in critical slowing, the dominant eigenvalue becomes 1, so the system becomes increasingly slow in recovering from small perturbations. Therefore, the “calm before the storm” that we observe in our results, when the magnitude of the eigenvalues dips below 1 and the system becomes stable, is a hallmark of the critical slowing phenomenon and further validates the use of the fractional-order system in studying seizures.

6.4.6 Stabilizing Fractional Dynamics Ultimately Mitigates Seizures

Many works have proposed methods for neurostimulation treatments [12, 13, 14, 15, 11]. The work that we investigate in [70] stabilizes fractional-order dynamics by designing the spatial matrix. We observe that stabilizing the epileptic signals across different patients and different time windows lowers the amplitude and frequency of these signals and thereby lowers the overall energy in the system in all cases. Hence, we hypothesize that this method will indeed mitigate seizures in humans when used during the time of seizure onset.

6.5 Conclusions and Future Works

We provided computationally efficient necessary and sufficient conditions for the stability of discrete-time linear fractional-order systems. Leveraging these conditions, we developed a framework using linear matrix inequalities to stabilize fractional-order systems. We applied our framework to a real-world dataset of an epileptic patient to show that we can impose stability on these systems, with the hope that these methods will lead to the development of effective future treatments of epilepsy and other neurological diseases.
We provide evidence that fractional-order systems are suitable for mathematically representing electrocorticography data from epileptic patients. We compute the stability and eigenvalues of fractional-order systems estimated from epileptic data to identify a distinct pattern in the magnitude of the eigenvalues and the stable modes that indicates the time of seizure onset and is associated with critical slowing. Finally, we evaluated the effectiveness of a control framework that stabilizes fractional-order systems in mitigating seizures.

In future work, we plan to devise a framework to predict the time of seizure onset, to devise a feedback control strategy that accounts for safety constraints, and to test this method in vivo. Future work also includes developing a framework that can simultaneously design both parameters of the fractional-order systems – the fractional-order exponents and the spatial matrix – to ensure stability.

Chapter 7 Conclusions and Future Work

This thesis presents several novel tools that lay the foundation for the design of future cyber-neural systems that aim to effectively treat brain-related disorders, like epilepsy. These tools can also be used in cyber-neural systems that seek to augment human physical capability and even human intelligence.

Chapter 1 gave an introduction to cyber-neural systems, fractional dynamics, and distributed computation. Chapter 1 also presented the challenges associated with designing cyber-neural systems, including monitoring brain behavior, detecting disease, and safely mitigating disease. Chapter 2 provided an overview of fractional calculus, fractional dynamics, and the current state of the art for controlling and estimating fractional-order dynamics.

Chapter 3 presented a framework to design controllable complex dynamical networks with minimal resources.
In the context of cyber-neural systems, this amounts to determining the minimum number of electrodes needed to effectively, efficiently, and safely stimulate the brain or nervous system. In particular, we solved the minimum actuator placement problem for discrete-time non-commensurate fractional-order linear time-invariant systems by first proposing novel conditions for structural controllability. Then, we leveraged these conditions to show that FLTI dynamical networks require fewer actuated nodes than LTI dynamical networks. This result implies that FLTI dynamical networks are easier to control than LTI networks. We demonstrated the results of our theorem on several random and real-world networks to gain an understanding of the relationship between the required number of nodes and the time-to-control, and the variation in this relationship when considering the size, topology, and multifractal spectrum of the dynamical network.

Chapter 4 focused on developing a framework to design observable complex dynamical networks with minimal resources. This is an important problem to address in the context of cyber-neural systems, where having an accurate depiction of the brain's activity is critical for analysis and control. More specifically, we proposed conditions to ensure the structural observability of continuous-time switched linear time-invariant systems. We leveraged these conditions to design an algorithm that solves the minimum sensor placement problem for this class of systems. Furthermore, we demonstrated the results of our algorithm on the IEEE 5-bus power grid system.

Chapter 5 presented the first distributed algorithm to find the strongly connected components and finite diameter of directed networks. Finding strongly connected components is important for designing cyber-neural systems because they are fundamental in solving the minimum actuator/sensor placement problems.
Furthermore, the finite diameter is critical for designing complex dynamical networks that are robust to failures and faults. In this chapter, we demonstrated that our algorithm outperforms the state-of-the-art on several random networks, including Erdős–Rényi and Barabási–Albert networks. Furthermore, we showed that the computational time complexity is comparable to the state-of-the-art.

Chapter 6 presented several results to analyze the stability of fractional-order linear time-invariant systems and a comprehensive framework to stabilize these systems. These results have far-reaching implications in designing safe complex dynamical networks, which is fundamental in designing safe cyber-neural systems, where safety in terms of stability is critical when interfacing with the human brain and nervous system. In particular, in this chapter, we proposed sufficient as well as necessary and sufficient novel conditions to characterize the stability of discrete-time non-commensurate fractional-order linear time-invariant systems. We leveraged these conditions to design feedback control methods to stabilize these systems. Finally, we demonstrated these results in simulation on several epileptic data sets to show their ability to mitigate seizures in real-world patients.

Figure 7.1: These tools will enable the unveiling of new insights into epilepsy as well as other neurological diseases and even mental health conditions. Furthermore, these tools will be used to solve problems in other cyber-physical systems, such as the power grid, wildfire management, and healthcare monitoring.

The theoretical foundations presented in this thesis are a stepping stone to better understand complex dynamical networks and design safe, effective, and efficient cyber-neural systems. Subsequently, there are many future works – see Figures 7.1 and 7.2. I look forward to working on these new frontiers as a future faculty member at Texas Tech University.
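As a centralized point of comparison for the distributed strongly-connected-component algorithm summarized above, the same decomposition can be computed with Kosaraju's two-pass depth-first search. This is only an illustrative baseline, not the thesis's distributed protocol, and the graph and function names are assumptions:

```python
from collections import defaultdict

def strongly_connected_components(edges, nodes):
    """Kosaraju's two-pass DFS over a directed graph given as an edge list.
    A centralized reference computation; the thesis's algorithm is
    distributed, so this serves only as an illustrative baseline."""
    g, gr = defaultdict(list), defaultdict(list)
    for u, v in edges:
        g[u].append(v)
        gr[v].append(u)          # reverse graph for the second pass

    def dfs(u, adj, seen, out):
        # iterative depth-first search collecting vertices in post-order
        stack = [(u, iter(adj[u]))]
        seen.add(u)
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(adj[w])))
                    break
            else:
                stack.pop()
                out.append(node)

    order, seen = [], set()
    for u in nodes:              # pass 1: finish-time order on g
        if u not in seen:
            dfs(u, g, seen, order)
    sccs, seen = [], set()
    for u in reversed(order):    # pass 2: sweep the reverse graph
        if u not in seen:
            comp = []
            dfs(u, gr, seen, comp)
            sccs.append(sorted(comp))
    return sccs

# Two 2-cycles joined by a one-way edge give two components.
edges = [(0, 1), (1, 0), (1, 2), (2, 3), (3, 2)]
print(strongly_connected_components(edges, range(4)))  # [[0, 1], [2, 3]]
```

A centralized sweep like this requires global knowledge of the edge list, which is precisely what the distributed setting of Chapter 5 rules out.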
7.1 Leveraging Topology to Design Cyber-Neural Systems

There are many different kinds of networks that have different topologies, degree distributions, multi-fractal spectra, and clustering coefficients. In designing cyber-neural systems, it is important to leverage the knowledge of the network to create efficient algorithms. Hence, in future work, we want to understand how the time complexity of our distributed algorithm may be improved in the case of different types of densely directed networks, various multi-fractal spectra, degree distributions, and clustering coefficients.

Figure 7.2: This figure illustrates possible directions for future research that will make future cyber-neural systems, with the ability to treat neurological disease, augment physical capability, and even augment human intelligence, a reality.

This thesis demonstrated the trade-offs between the topology of a network and the minimum number of resources needed to control the dynamical network. In future work, we seek to understand the trade-offs between the minimum number of actuators and the control effort required to effectively control the dynamical network.

Another measure of a network's topology is the Ollivier-Ricci curvature [215]. In future work, we plan to analyze the relationship between the minimum number of actuators to control a network and the Ollivier-Ricci curvature.

By leveraging the knowledge of a network topology, we hope to develop asynchronous protocols capable of determining the strongly connected components and finite diameter of a digraph to improve efficiency.

7.2 Designing Private and Secure Cyber-Neural Systems

Security and privacy are important issues to consider when designing complex dynamical networks, especially cyber-neural systems. By law, the data of patients must remain private and secure.
Therefore, in our future work, we aim to develop a method to find the strongly connected components and finite diameter while taking privacy into consideration, such that the IDs of the nodes can be hidden in the process of sharing information. Furthermore, we aim to design a stabilizing mitigation strategy that ensures the patient's data remain secure from any potential cyber-attacks.

7.3 Designing Robust Cyber-Neural Systems

Robustness is an important issue in designing safe cyber-neural systems. A system is robust if it can operate under unknown disturbances or unknown inputs. Hence, in future work, we plan to design a sensing-actuating-stabilizing framework that can account for unknown disturbances and inputs. In particular, we will examine the effects of unknown inputs on our proposed stimulation schemes. In addition, we will investigate the impact of our results when considering a stochastic framework.

Another important direction of future research is considering the impact of our results when the network is partially hidden or unobservable [216]. This is an important problem to study if a system is attacked and nodes lose communication with certain parts of the network. Hence, in seeking to be robust to attacks, future sensing, actuating, and stabilizing frameworks should consider the impact of a hidden or partially observable network.

7.4 Designing Safe Cyber-Neural Systems

While stability can be viewed as a kind of safety metric, in designing cyber-neural systems it is very important to consider constraints on the control effort, as there may be unsafe levels of stimulation that cannot be delivered to the brain. Hence, in future work, we plan to develop safe mitigation schemes that constrain the control effort. We will investigate the role that control barrier functions [217] can play in achieving the goal of designing safe stimulation strategies.
Related to this effort, we will explore how stimulation can be administered over time to reduce the control effort at any given time instance.

To deploy these methods in practice, it is important to study how feedback controls can be translated into stimulation modalities that the brain can understand, including non-invasive stimulation techniques [13, 212, 213]. We also plan to develop a spatio-temporal sensing framework that combines data modalities, including non-invasive recording methods such as electrodermal recordings [218], to not only accurately detect seizures but also predict them.

7.5 New Discoveries in Neuroscience

We aim to develop a prediction scheme using change-point detection and the stability of fractional-order systems to anticipate oncoming seizures [219]. We are also interested in researching stimulation schemes that can prevent seizures altogether.

We look forward to deriving new theoretical results for fractional-order dynamical systems. In particular, we plan to extend our framework to design a stabilizing method to ensure stability in finite time and seek to determine whether finite-time stability is a better biomarker for seizure onset. Because the brain continually changes over time, a better model would be a time-varying or switched dynamical system. Hence, we are interested in deriving conditions for finite-time stability of switched fractional-order linear time-invariant systems to more accurately characterize the seizure onset time and the seizure onset zone, which is the spatial location of the seizure in the brain. We also look forward to exploring contraction theory in the context of fractional-order systems, which can potentially provide new insights into the evolution of neurological disease. Future work includes developing a stabilization scheme that can simultaneously design both parameters of the fractional-order systems – the fractional-order exponents and the spatial matrix – to ensure stability.
7.6 Advancing Cyber-Physical Systems

Using these theoretical tools, I will uncover new insights into epilepsy as well as other neurological diseases and even mental health conditions that will lead to a paradigm shift in the way we treat patients, which will ultimately allow them to experience a better quality of life. These new tools will successfully lead us into the future where neurotechnology and brain-machine interface systems are ubiquitous in treating many diseases and conditions in medicine. Finally, these tools will be applied broadly to solve problems spanning cyber-physical systems, including healthcare monitoring, climate-change mitigation, and power grid resilience.

Bibliography

[1] Laith Shalalfeh, Paul Bogdan, and Edmond A Jonckheere. “Fractional Dynamics of PMU Data.” IEEE Transactions on Smart Grid 12.3 (2020), pp. 2578–2588.
[2] Zhen Kan, John M Shea, and Warren E Dixon. “Influencing emotional behavior in a social network.” Proceedings of the 2012 American Control Conference (2012), pp. 4072–4077.
[3] Mark J Cook, Andrea Varsavsky, David Himes, Kent Leyde, Samuel Frank Berkovic, Terence O’Brien, and Iven Mareels. “The dynamics of the epileptic brain reveal long-memory processes.” Frontiers in Neurology 5 (2014), p. 217.
[4] Siddharth Jain, Xiongye Xiao, Paul Bogdan, and Jehoshua Bruck. “Generator based approach to analyze mutations in genomic datasets.” Nature Scientific Reports 11.1 (2021), pp. 1–12.
[5] Paul Bogdan. “Taming the unknown unknowns in complex systems: challenges and opportunities for modeling, analysis and control of complex (biological) collectives.” Frontiers in Physiology 10 (2019), p. 1452.
[6] Paul Bogdan, András Eke, and Plamen Ch Ivanov. Fractal and Multifractal Facets in the Structure and Dynamics of Physiological Systems and Applications to Homeostatic Control, Disease Diagnosis and Integrated Cyber-Physical Platforms. 2020.
[7] Yuankun Xue, Saul Rodriguez, and Paul Bogdan.
“A spatio-temporal fractal model for a CPS approach to brain-machine-body interfaces.” Proceedings of the 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE) (2016), pp. 642–647.
[8] Paul Bogdan, Siddharth Jain, and Radu Marculescu. “Pacemaker control of heart rate variability: A cyber physical system perspective.” ACM Transactions on Embedded Computing Systems (TECS) 12.1s (2013), p. 50.
[9] Charles E Begley and Tracy L Durgin. “The direct cost of epilepsy in the United States: a systematic review of estimates.” Epilepsia 56.9 (2015), pp. 1376–1387.
[10] George P Thomas and Barbara C Jobst. “Critical review of the responsive neurostimulator system for epilepsy.” Medical Devices (Auckland, NZ) 8 (2015), p. 405.
[11] Christine A Edwards, Abbas Kouzani, Kendall H Lee, and Erika K Ross. “Neurostimulation devices for the treatment of neurologic disorders.” Mayo Clinic Proceedings 92.9 (2017), pp. 1427–1444.
[12] Arian Ashourvan et al. “Model-based design for seizure control by stimulation.” Journal of Neural Engineering 17.2 (Mar. 2020).
[13] Sarthak Chatterjee, Orlando Romero, Arian Ashourvan, and Sérgio Pequito. “Fractional-order model predictive control as a framework for electrical neurostimulation in epilepsy.” Journal of Neural Engineering 17.6 (2020), p. 066017.
[14] Guanghao Sun, Fei Zeng, Michael McCartin, Qiaosheng Zhang, Helen Xu, Yaling Liu, Zhe Sage Chen, and Jing Wang. “Closed-loop stimulation using a multiregion brain-machine interface has analgesic effects in rodents.” Science Translational Medicine 14.651 (2022), eabm5868.
[15] X Moisset, M Lanteri-Minet, and D Fontaine. “Neurostimulation methods in the treatment of chronic pain.” Journal of Neural Transmission 127.4 (2020), pp. 673–686.
[16] Ivan Osorio. “The NeuroPace trial: missing knowledge and insights.” Epilepsia 55.9 (2014), pp. 1469–1470.
[17] Bruce J West, Malgorzata Turalska, and Paolo Grigolini. Networks of Echoes.
Cham, Switzerland: Springer International Publishing AG, 2016.
[18] Yuankun Xue and Paul Bogdan. “Constructing compact causal mathematical models for complex dynamics.” Proceedings of the 8th International Conference on Cyber-Physical Systems (2017), pp. 97–107.
[19] Dumitru Baleanu, Ziya Burhanettin Güvenç, J.A. Tenreiro Machado, et al. New trends in nanotechnology and fractional calculus applications. Springer, 2010.
[20] Enrico Scalas, Rudolf Gorenflo, and Francesco Mainardi. “Fractional calculus and continuous-time finance.” Physica A: Statistical Mechanics and its Applications 284.1-4 (2000), pp. 376–384.
[21] A.M. Shahin, E. Ahmed, and Yassmin A. Omar. “On fractional order quantum mechanics.” International Journal of Nonlinear Science 8.4 (2009), pp. 469–472.
[22] Yongcan Cao, Yan Li, Wei Ren, and YangQuan Chen. “Distributed coordination of networked fractional-order systems.” IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 40.2 (2009), pp. 362–370.
[23] YangQuan Chen. “Fractional calculus, delay dynamics and networked control systems.” Proceedings of the 2010 3rd International Symposium on Resilient Control Systems (2010), pp. 58–63.
[24] Wei Ren and Yongcan Cao. Distributed Coordination of Multi-agent Networks: Emergent Problems, Models, and Issues. Vol. 1. Springer, 2011.
[25] Emily Reed, Sarthak Chatterjee, Guilherme Ramos, Paul Bogdan, and Sérgio Pequito. “Fractional cyber-neural systems—A brief survey.” Annual Reviews in Control (2022).
[26] Dumitru Baleanu, José António Tenreiro Machado, and Albert CJ Luo. Fractional dynamics and control. Springer Science & Business Media, 2011.
[27] Francis C Moon. Chaotic and Fractal Dynamics: An Introduction for Applied Scientists and Engineers. Hoboken, NJ, USA: Wiley, 2008.
[28] Brian N Lundstrom, Matthew H Higgs, William J Spain, and Adrienne L Fairhall. “Fractional differentiation by neocortical pyramidal neurons.” Nature Neuroscience 11.11 (Oct. 2008), pp. 1335–1342.
[29] Gerhard Werner. “Fractals in the nervous system: Conceptual implications for theoretical neuroscience.” Frontiers in Physiology 1 (July 2010).
[30] Stefan Thurner, Christian Windischberger, Ewald Moser, Peter Walla, and Markus Barth. “Scaling laws and persistence in human brain activity.” Physica A: Statistical Mechanics & Its Applications 326.3–4 (2003), pp. 511–521.
[31] Malvin C Teich, Conor Heneghan, Steven B Lowen, Tsuyoshi Ozaki, and Ehud Kaplan. “Fractal character of the neural spike train in the visual system of the cat.” Journal of Optical Society of America 14.3 (Mar. 1997), pp. 529–546.
[32] Sergio Pequito, Soummya Kar, and A Pedro Aguiar. “A framework for structural input/output and control configuration selection in large-scale systems.” IEEE Transactions on Automatic Control 61.2 (2015), pp. 303–318.
[33] Guilherme Ramos, A Pedro Aguiar, and Sérgio Pequito. “An overview of structural systems theory.” Automatica 140 (2022), p. 110229.
[34] Zhendong Sun. Switched linear systems: control and design. Springer Science & Business Media, 2006.
[35] Daniel Liberzon. Switching in systems and control. Vol. 190. Springer, 2003.
[36] Rafal Goebel, Ricardo G Sanfelice, and Andrew R Teel. “Hybrid dynamical systems.” IEEE Control Systems Magazine 29.2 (2009), pp. 28–93.
[37] Taha Boukhobza and Frédéric Hamelin. “Observability of switching structured linear systems with unknown input. A graph-theoretic approach.” Automatica 47.2 (2011), pp. 395–402.
[38] Taha Boukhobza, Frédéric Hamelin, G Kabadi, and Samir Aberkane. “Discrete mode observability of switching linear systems with unknown inputs. A graph-theoretic approach.” IFAC Proceedings Volumes 44.1 (2011), pp. 6616–6621.
[39] Taha Boukhobza. “Sensor location for discrete mode observability of switching linear systems with unknown inputs.” Automatica 48.7 (2012), pp. 1262–1272.
[40] Joao P Hespanha. Linear Systems Theory. Princeton University Press, 2018.
[41] William L Brogan.
Modern control theory. Pearson Education India, 1991.
[42] Abdellah Benzaouia, Abdelaziz Hmamed, Fouad Mesquine, Mohamed Benhayoun, and Fernando Tadeo. “Stabilization of continuous-time fractional positive systems by using a Lyapunov function.” IEEE Transactions on Automatic Control 59.8 (2014), pp. 2203–2208.
[43] Yan Li, YangQuan Chen, and Igor Podlubny. “Mittag–Leffler stability of fractional order nonlinear dynamic systems.” Automatica 45.8 (2009), pp. 1965–1969.
[44] Concepción A Monje, YangQuan Chen, Blas M Vinagre, Dingyu Xue, and Vicente Feliu-Batlle. Fractional-order systems and controls: fundamentals and applications. Springer Science & Business Media, 2010.
[45] Ali Ahmadi Dastjerdi, Blas M Vinagre, YangQuan Chen, and S Hassan HosseinNia. “Linear fractional order controllers; A survey in the frequency domain.” Annual Reviews in Control 47 (2019), pp. 51–70.
[46] Yan Li, YangQuan Chen, and Igor Podlubny. “Stability of fractional-order nonlinear dynamic systems: Lyapunov direct method and generalized Mittag–Leffler stability.” Computers & Mathematics with Applications 59.5 (2010), pp. 1810–1821.
[47] Denis Matignon. “Stability results for fractional differential equations with applications to control processing.” Computational engineering in systems applications 2.1 (1996), pp. 963–968.
[48] YangQuan Chen, Hyo-Sung Ahn, and Igor Podlubny. “Robust stability check of fractional order linear time invariant systems with interval uncertainties.” IEEE International Conference Mechatronics and Automation, 2005 1 (2005), pp. 210–215.
[49] Manuel Duarte Ortigueira. “Introduction to fractional linear systems. Part 1: Continuous-time case.” IEEE Proceedings-Vision, Image and Signal Processing 147.1 (2000), pp. 62–70.
[50] Andrzej Dzieliński and Dominik Sierociuk. “Stability of discrete fractional order state-space systems.” Journal of Vibration and Control 14.9-10 (2008), pp. 1543–1556.
[51] M Busłowicz and A Ruszewski.
“Necessary and sufficient conditions for stability of fractional discrete-time linear state-space systems.” Bulletin of the Polish Academy of Sciences. Technical Sciences 61.4.4 (2013).
[52] Said Guermah, Said Djennoune, and Maamar Bettayeb. “Controllability and observability of linear discrete-time fractional-order systems.” International Journal of Applied Mathematics and Computer Science 18.2 (2008), pp. 213–222.
[53] Panagiotis Kyriakis, Sérgio Pequito, and Paul Bogdan. “On the effects of memory and topology on the controllability of complex dynamical networks.” Nature Scientific Reports 10.1 (2020), pp. 1–13.
[54] Qi Cao, Guilherme Ramos, Paul Bogdan, and Sérgio Pequito. “The actuation spectrum of spatiotemporal networks with power-law time dependencies.” Advances in Complex Systems 22.07n08 (2019), p. 1950023.
[55] Emily A Reed, Guilherme Ramos, Paul Bogdan, and Sérgio Pequito. “Minimum structural sensor placement for switched linear time-invariant systems and unknown inputs.” Automatica 146 (2022), p. 110557.
[56] Sérgio Pequito, Paul Bogdan, and George J Pappas. “Minimum number of probes for brain dynamics observability.” Proceedings of the 54th IEEE Conference on Decision and Control (2015), pp. 306–311.
[57] Robert Tarjan. “Depth-first search and linear graph algorithms.” SIAM Journal on Computing 1.2 (1972), pp. 146–160.
[58] Edsger Wybe Dijkstra. A Discipline of Programming. Vol. 613924118. Prentice Hall Englewood Cliffs, 1976.
[59] Micha Sharir. “A strong-connectivity algorithm and its applications in data flow analysis.” Computers & Mathematics with Applications 7.1 (1981), pp. 67–72.
[60] D Frank Hsu, Xiaojie Lan, Gabriel Miller, and David Baird. “A Comparative Study of Algorithm for Computing Strongly Connected Components.” Proceedings of the 2017 IEEE 15th International Conference on Dependable, Autonomic and Secure Computing, (DASC/PiCom/DataCom/CyberSciTech) (2017), pp. 431–437.
[61] Lisa K Fleischer, Bruce Hendrickson, and Ali Pınar.
“On identifying strongly connected components in parallel.” International Parallel and Distributed Processing Symposium (2000), pp. 505–511.
[62] Jiří Barnat, Jakub Chaloupka, and Jaco Van De Pol. “Distributed algorithms for strongly connected component decomposition.” Journal of Logic and Computation 21.1 (2011), pp. 23–44.
[63] Réka Albert, Hawoong Jeong, and Albert-László Barabási. “Diameter of the world-wide web.” Nature 401.6749 (1999), pp. 130–131.
[64] Lu Zongxiang, Meng Zhongwei, and Zhou Shuangxi. “Cascading failure analysis of bulk power system using small-world network model.” Proceedings of the International Conference on Probabilistic Methods Applied to Power Systems (2004), pp. 635–640.
[65] Jon G Kuhl and Sudhakar M Reddy. “Distributed fault-tolerance for large multiprocessor systems.” Proceedings of the 7th Annual Symposium on Computer Architecture (1980), pp. 23–30.
[66] Robert W. Floyd. “Algorithm 97: Shortest Path.” Communications of the ACM 5.6 (June 1962), p. 345. ISSN: 0001-0782. DOI: 10.1145/367766.368168.
[67] Emily A Reed, Guilherme Ramos, Paul Bogdan, and Sérgio Pequito. “The Role of Long-Term Power-Law Memory in Controlling Large-Scale Dynamical Networks.” Submitted to Nature Scientific Reports (2023).
[68] Emily A Reed, Guilherme Ramos, Paul Bogdan, and Sérgio Pequito. “A scalable distributed dynamical systems approach to compute the strongly connected components and diameter of networks.” IEEE Transactions on Automatic Control (2022).
[69] Emily A Reed, Paul Bogdan, and Sérgio Pequito. “Quantification of Fractional Dynamical Stability of EEG Signals as a Bio-Marker for Cognitive Motor Control.” Frontiers in Control Engineering (2022).
[70] Emily Ann Reed, Guilherme Ramos, Paul Bogdan, and Sérgio Pequito. “Mitigating Epilepsy by Stabilizing Linear Fractional-Order Systems.” Proceedings of the 2023 American Control Conference (2023).
[71] Antonio Regalado. Elon Musk’s Neuralink is neuroscience theater. Aug. 2020.
[72] Tanya Lewis. “Elon Musk’s Pig-Brain Implant Is Still a Long Way from ‘Solving Paralysis’.” Scientific American (Sept. 2020), online.
[73] Jose Carmena, Paul Sajda, and Jacob Robinson. Future Neural Therapeutics: Closed-Loop Control of Neural Activity Technology Roadmap White Paper. Nov. 2019.
[74] Ricardo Chavarriaga. Standards Roadmap: Neurotechnologies for Brain-Machine Interfacing. Feb. 2020.
[75] Adam Rodgers. Neuralink Is Impressive Tech, Wrapped in Musk Hype. Sept. 2020.
[76] Cornelia Bargmann, William Newsome, A Anderson, E Brown, Karl Deisseroth, J Donoghue, Peter MacLeish, E Marder, R Normann, J Sanes, et al. “BRAIN 2025: a scientific vision.” Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Working Group Report to the Advisory Committee to the Director, NIH (2014).
[77] Thomas R. Insel, Story C. Landis, and Francis S. Collins. “The NIH brain initiative.” Science 340.6133 (2013), pp. 687–688.
[78] David C Van Essen, Stephen M Smith, Deanna M Barch, Timothy EJ Behrens, Essa Yacoub, Kamil Ugurbil, WU-Minn HCP Consortium, et al. “The WU-Minn human connectome project: an overview.” Neuroimage 80 (2013), pp. 62–79.
[79] Henry Markram. “The human brain project.” Scientific American 306.6 (2012), pp. 50–55.
[80] NAE. Grand Challenges - Reverse-Engineer the Brain. 2022. URL: http://www.engineeringchallenges.org/challenges/9109.aspx (visited on 04/14/2022).
[81] Joseph LeDoux. The emotional brain: The mysterious underpinnings of emotional life. Simon and Schuster, 1998.
[82] Stephen H Fairclough and Fabien Lotte. “Grand challenges in neurotechnology and system neuroergonomics.” Frontiers in Neuroergonomics 1 (2020), p. 2.
[83] Danielle S Bassett and Olaf Sporns. “Network Neuroscience.” Nature Neuroscience 20.3 (2017), pp. 353–364.
[84] Charley Presigny and Fabrizio De Vico Fallani. “Colloquium: Multiscale modeling of brain network organization.” Reviews of Modern Physics 94.3 (2022), p. 031002.
[85] Adam Li, Sara Inati, Kareem Zaghloul, and Sridevi Sarma. “Fragility in epileptic networks: the epileptogenic zone.” Proceedings of the 2017 American Control Conference (2017), pp. 2817–2822. [86] S´ ergio Pequito, Arian Ashourvan, Danielle Bassett, Brian Litt, and George J Pappas. “Spectral control of cortical activity.” Proceedings of the 2017 American Control Conference (2017), pp. 2785–2791. [87] Xuefeng Zhang and Yangquan Chen. Remarks on fractional order control systems. IEEE, 2012. [88] Bruce J West. Fractional Calculus View of Complexity: Tomorrow’s Science. Boca Raton, FL, USA: CRC Press, 2016. [89] Michael F Shlesinger, George M Zaslavsky, and Joseph Klafter. “Strange kinetics.” Nature 363.6424 (1993), pp. 31–37. 162 [90] Bruce J West, Malgorzata Turalska, and Paolo Grigolini. “Fractional calculus ties the microscopic and macroscopic scales of complex network dynamics.” New Journal of Physics 17.4 (2015), p. 045009. [91] B Bonilla, Margarita Rivero, Luis Rodr´ ıguez-Germ´ a, and Juan J Trujillo. “Fractional differential equations as alternative models to nonlinear differential equations.” Applied Mathematics and Computation 187.1 (2007), pp. 79–88. [92] Anatoli˘ ı Aleksandrovich Kilbas, Hari M Srivastava, and Juan J Trujillo. Theory and applications of fractional differential equations. V ol. 204. Elsevier, 2006. [93] Duarte Val´ erio, Juan J Trujillo, Margarita Rivero, JA Tenreiro Machado, and Dumitru Baleanu. “Fractional calculus: A survey of useful formulas.” The European Physical Journal Special Topics 222.8 (2013), pp. 1827–1846. [94] Bruce J West. “Colloquium: Fractional calculus view of complexity: A tutorial.” Reviews of Modern Physics 86.4 (2014), p. 1169. [95] Dumitru Baleanu, Kai Diethelm, Enrico Scalas, and Juan J Trujillo. Fractional calculus: models and numerical methods. V ol. 3. World Scientific, 2012. [96] JATMJ Sabatier, Ohm Parkash Agrawal, and JA Tenreiro Machado. Advances in fractional calculus. V ol. 4. Springer, 2007. 
[97] Igor Podlubny. Fractional differential equations: an introduction to fractional derivatives, fractional differential equations, to methods of their solution and some of their applications. Elsevier, 1998. [98] Ivo Petr´ aˇ s. “Fractional-order chaotic systems.” In: Fractional-Order Nonlinear Systems: Modeling, Analysis and Simulation. Berlin, Heidelberg, Germany: Springer-Verlag, 2011, pp. 103–184. [99] Alain Oustaloup, Franc ¸ois Levron, St´ ephane Victor, and Luc Dugard. “Non-integer (or fractional) power model of a viral spreading: application to the COVID-19.” arXiv preprint arXiv:2102.13471 (2021). [100] Jean-Luc Battaglia, Ludovic Le Lay, Jean-Christophe Batsale, Alain Oustaloup, and Olivier Cois. “Heat flux estimation through inverted non-integer identification models; Utilisation de modeles d’identification non entiers pour la resolution de problemes inverses en conduction.” International Journal of Thermal Sciences 39 (2000). [101] St´ ephane Victor, Pierre Melchior, Rachid Malti, and Alain Oustaloup. “Robust motion planning for a heat rod process.” Nonlinear Dynamics 86.2 (2016), pp. 1271–1283. 163 [102] Jean-Franc ¸ois Duh´ e, St´ ephane Victor, Pierre Melchior, Youssef Abdelmounen, and Franc ¸ois Roubertie. “Modeling thermal systems with fractional models: human bronchus application.” Nonlinear Dynamics (2022), pp. 1–17. [103] Pierre Melchior, Mathieu Pellet, Julien Petit, Jean-Marie Cabelguen, and Alain Oustaloup. “Analysis of muscle length effect on an S type motor-unit fractional multi-model.” Signal, Image and Video Processing 6.3 (2012), pp. 421–428. [104] Robert G. Turcott and Malvin C. Teich. “Fractal character of the electrocardiogram: Distinguishing heart-failure and normal patients.” Annals of Biomedical Engineering 24.2 (Mar. 1996), pp. 269–293. [105] Wen Chen, Hongguang Sun, Xiaodi Zhang, and Dean Koroˇ sak. “Anomalous diffusion modeling by fractal and fractional derivatives.” Computers & Mathematics with Applications 59.5 (Mar. 
2010), pp. 1754–1758. [106] Richard L Magin. Fractional calculus in bioengineering. V ol. 2. 6. Begell House Redding, 2006. [107] Keith Oldham and Jerome Spanier. The fractional calculus theory and applications of differentiation and integration to arbitrary order. Elsevier, 1974. [108] Andrzej Dzielinski and Dominik Sierociuk. “Adaptive feedback control of fractional order discrete state-space systems.” Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation, and International Conference on Intelligent Agents, Web Technologies and Internet Commerce 1 (Nov. 2005), pp. 804–809. [109] Igor Podlubny. “Fractional-order systems and PI λ D µ -controllers.” IEEE Transactions on automatic control 44.1 (1999), pp. 208–214. [110] Kai Diethelm. The analysis of fractional differential equations: An application-oriented exposition using differential operators of Caputo type. Springer Science & Business Media, 2010. [111] BM Vinagre, I Podlubny, A Hernandez, and V Feliu. “Some approximations of fractional order operators used in control theory and applications.” Fractional calculus and applied analysis 3.3 (2000), pp. 231–248. [112] Gaurav Gupta, Sergio Pequito, and Paul Bogdan. “Dealing with unknown unknowns: Identification and selection of minimal sensing for fractional dynamics with unknown inputs.” Proceedings of the 2018 Annual American Control Conference (2018), pp. 2814–2820. [113] Gaurav Gupta, S´ ergio Pequito, and Paul Bogdan. “Re-thinking EEG-based non-invasive brain interfaces: Modeling and analysis.” (2018), pp. 275–286. 164 [114] Gaurav Gupta, S´ ergio Pequito, and Paul Bogdan. “Learning latent fractional dynamics with unknown unknowns.” Proceedings of the 2019 American Control Conference (2019), pp. 217–222. [115] Sarthak Chatterjee and S´ ergio Pequito. “On learning discrete-time fractional-order dynamical systems.” Proceedings of the 2022 American Control Conference (2022), pp. 4335–4340. 
[116] Patrick Flandrin. “Wavelet analysis and synthesis of fractional Brownian motion.” IEEE Transactions on Information Theory 38.2 (1992), pp. 910–917. [117] Simon Foucart and Holger Rauhut. A Mathematical Introduction to Compressive Sensing. Applied and Numerical Harmonic Analysis. New York, NY , USA: Birkh¨ auser, 2013, pp. I–XVIII, 1–625. ISBN: 978-0-8176-4947-0. [118] Jocelyn Sabatier, Christophe Farges, Mathieu Merveillaut, and Ludovic Feneteau. “On observability and pseudo state estimation of fractional order systems.” Eur. J. Control 18.3 (May 2012), pp. 260–271. [119] Dominik Sierociuk and Andrzej Dzieli´ nski. “Fractional Kalman filter algorithm for the states, parameters and order of fractional system estimation.” Int. J. Appl. Math. Comput. Sci. 16.1 (Mar. 2006), pp. 129–140. [120] Behrouz Safarinejadian, Nasrin Kianpour, and Mojtaba Asad. “State estimation in fractional-order systems with coloured measurement noise.” Transactions of the Institute of Measurement and Control 40.6 (Apr. 2018), pp. 1819–1835. DOI: 10.1177/0142331217691219. [121] Behrooz Safarinejadian, Mojtaba Asad, and Mokhtar Sha Sadeghi. “Simultaneous state estimation and parameter identification in linear fractional order systems using coloured measurement noise.” International Journal Control 89.11 (Apr. 2016), pp. 2277–2296. DOI:10.1080/00207179.2016.1155237. [122] Nadica Miljkovi´ c, Nenad Popovi´ c, Olivera Djordjevi´ c, Ljubica Konstantinovi´ c, and Tomislav B ˇ Sekara. “ECG artifact cancellation in surface EMG signals by fractional order calculus application.” Computer Methods and Programs in Biomedicine 140 (Mar. 2017), pp. 259–264. [123] Slaheddine Najar, Mohamed Naceur Abdelkrim, Moufida Abdelhamid, and Aoun Mohamed. “Discrete fractional Kalman filter.” Proceedings of 2nd IFAC Conference on Intelligent Control Systems and Signal Processing 42.19 (2009), pp. 520–525. [124] Sarthak Chatterjee, Andrea Alessandretti, Antonio Pedro Aguiar, and S´ ergio Pequito. 
“Discrete-time fractional-order dynamical networks minimum-energy state estimation.” IEEE Transactions on Control of Network Systems 10.1 (2022), pp. 226–237. 165 [125] Andrea Alessandretti, Sergio Pequito, George J Pappas, and A Pedro Aguiar. “Finite-dimensional control of linear discrete-time fractional-order systems.” Automatica 115 (2020), p. 108512. [126] Pantelis Sopasakis and Haralambos Sarimveis. “Stabilising model predictive control for discrete-time fractional-order systems.” Automatica 75 (Jan. 2017), pp. 24–31. [127] S Joe Qin and Thomas A Badgwell. “A survey of industrial model predictive control technology.” Control Eng. Pract. 11.7 (July 2003), pp. 733–764. [128] Sofia Moratti and Dennis Patterson. “Adverse Psychological Effects to Deep Brain Stimulation: Overturning the Question.” Amer. J. Bioethics Neuroscience 5.4 (Oct. 2014), pp. 62–64. [129] Sarthak Chatterjee, Orlando Romero, and S´ ergio Pequito. “A Separation Principle for Discrete-Time Fractional-Order Dynamical Systems and Its Implications to Closed-Loop Neurotechnology.” IEEE Control Systems Letters 3.3 (2019), pp. 691–696. [130] B. Wayne Bequette. “Algorithms for a Closed-Loop Artificial Pancreas: The Case for Model Predictive Control.” Journal of Diabetes Science and Technology 7.6 (Nov. 2013), pp. 1632–1643. DOI:10.1177/193229681300700624. [131] Ivo Petr´ aˇ s. “Novel fractional-order model predictive control: State-space approach.” IEEE Access 9 (June 2021), pp. 92769–92775. [132] Sergio Picozzi and Bruce J West. “Fractional Langevin model of memory in financial markets.” Physical Review E 66.4 (2002), p. 046118. [133] JA Tenreiro Machado and Ant´ onio M Lopes. “Complex and fractional dynamics.” Entropy 19.2 (2017), p. 62. [134] Wolfram Data Repository. Wolfram Research Rat Brain Graph 1. 2016. URL: https://datarepository.wolframcloud.com/resources/Rat-Brain-Graph-1. [135] Wolfram Research. Wolfram Research USA Electric System Operating Network. 2021. 
URL:https://reference.wolfram.com/language/ref/GeoGraphPlot.html. [136] Yoonsuck Choe, Bruce H. McCormick, and Wonryull Koh. “Network connectivity analysis on the temporally augmented C. elegans web: A pilot study.” Society for Neuroscience Abstracts 921.9 (2004). [137] John G White, Eileen Southgate, J Nichol Thomson, and Sydney Brenner. “The structure of the nervous system of the nematode Caenorhabditis elegans.” Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 314.1165 (1986), pp. 1–340. 166 [138] Mikail Rubinov and Olaf Sporns. “Complex network measures of brain connectivity: uses and interpretations.” Neuroimage 52.3 (2010), pp. 1059–1069. [139] Malcolm P Young. “The organization of neural systems in the primate cerebral cortex.” Proceedings of the Royal Society of London. Series B: Biological Sciences 252.1333 (1993), pp. 13–18. [140] Igor Podlubny. “Chapter 2 - Fractional Derivatives and Integrals.” In: Fractional Differential Equations: An Introduction to Fractional Derivatives, Fractional Differential Equations, to Methods of their Solution and some of their Applications. V ol. 198. Mathematics in Science and Engineering. Elsevier, 1999, pp. 41–119. DOI: https://doi.org/10.1016/S0076-5392(99)80021-6. [141] Andrzej Dzielinski and Dominik Sierociuk. “Adaptive Feedback Control of Fractional Order Discrete State-Space Systems.” Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation (2005), pp. 804–809. [142] George Karypis and Vipin Kumar. “A fast and high quality multilevel scheme for partitioning irregular graphs.” SIAM Journal on Scientific Computing 20.1 (1998), pp. 359–392. [143] Brian W Kernighan and Shen Lin. “An efficient heuristic procedure for partitioning graphs.” The Bell System Technical Journal 49.2 (1970), pp. 291–307. [144] Ruochen Yang and Paul Bogdan. 
“Controlling the multifractal generating measures of complex networks.” Nature Scientific Reports 10.1 (2020), pp. 1–13. [145] Ruochen Yang, Frederic Sala, and Paul Bogdan. “Hidden network generating rules from partially observed complex networks.” Nature Communications Physics 4.1 (2021), pp. 1–12. [146] Yang-Yu Liu, Jean-Jacques Slotine, and Albert-L´ aszl´ o Barab´ asi. “Controllability of complex networks.” nature 473.7346 (2011), p. 167. [147] Noah J Cowan, Erick J Chastain, Daril A Vilhena, James S Freudenberg, and Carl T Bergstrom. “Nodal dynamics, not degree distributions, determine the structural controllability of complex networks.” PloS One 7.6 (2012), e38398. [148] Yihan Lin, Jiawei Sun, Guoqi Li, Gaoxi Xiao, Changyun Wen, Lei Deng, and H Eugene Stanley. “Spatiotemporal Input Control: Leveraging Temporal Variation in Network Dynamics.” IEEE/CAA Journal of Automatica Sinica 9.4 (2022), pp. 635–651. [149] S´ ergio Pequito, Victor M Preciado, Albert-L´ aszl´ o Barab´ asi, and George J Pappas. “Trade-offs between driving nodes and time-to-control in complex networks.” Nature Scientific Reports 7.1 (2017), pp. 1–14. 167 [150] Guilherme Ramos and S´ ergio Pequito. “Generating complex networks with time-to-control communities.” PloS One 15.8 (2020), e0236753. [151] Myles Hollander, Douglas A Wolfe, and Eric Chicken. Nonparametric statistical methods. John Wiley & Sons, 2013. [152] Dongsheng Du, Bin Jiang, and Peng Shi. Fault tolerant control for switched linear systems. Springer, 2015. [153] Guilherme Ramos, S´ ergio Pequito, A Pedro Aguiar, Jaime Ramos, and Soummya Kar. “A model checking framework for linear time invariant switching systems using structural systems analysis.” 2013 51st Annual Allerton Conference on Communication, Control, and Computing (2013), pp. 973–980. [154] Guilherme Ramos, S´ ergio Pequito, A Pedro Aguiar, and Soummya Kar. 
“Analysis and design of electric power grids with p-robustness guarantees using a structural hybrid system approach.” 2015 European Control Conference (ECC) (2015), pp. 3542–3547. [155] Yuangong Sun, Yazhou Tian, and Xue-Jun Xie. “Stabilization of positive switched linear systems and its application in consensus of multiagent systems.” IEEE Transactions on Automatic Control 62.12 (2017), pp. 6608–6613. [156] Guilherme Ramos, Daniel Silvestre, and Carlos Silvestre. “General Resilient Consensus Algorithms.” International Journal of Control 0.ja (2020), pp. 1–27. [157] R Matthew Hutchison, Thilo Womelsdorf, Elena A Allen, Peter A Bandettini, Vince D Calhoun, Maurizio Corbetta, Stefania Della Penna, Jeff H Duyn, Gary H Glover, Javier Gonzalez-Castillo, et al. “Dynamic functional connectivity: promise, issues, and interpretations.” Neuroimage 80 (2013), pp. 360–378. [158] Rajeev Alur. Principles of cyber-physical systems. MIT Press, 2015. [159] ML Corradini and A Cristofaro. “A sliding-mode scheme for monitoring malicious attacks in cyber-physical systems.” IFAC-PapersOnLine 50.1 (2017), pp. 2702–2707. [160] Chun-Hua Xie and Guang-Hong Yang. “Secure estimation for cyber-physical systems with adversarial attacks and unknown inputs: An L 2-gain method.” International Journal of Robust and Nonlinear Control 28.6 (2018), pp. 2131–2143. [161] Faezeh Farivar, Mohammad Sayad Haghighi, Alireza Jolfaei, and Mamoun Alazab. “Artificial Intelligence for Detection, Estimation, and Compensation of Malicious Attacks in Nonlinear Cyber-Physical Systems and Industrial IoT.” IEEE Transactions on Industrial Informatics 16.4 (2019), pp. 2716–2725. 168 [162] Shreyas Sundaram and Christoforos N Hadjicostis. “Structural controllability and observability of linear systems over finite fields with applications to multi-agent systems.” IEEE Transactions on Automatic Control 58.1 (2012), pp. 60–73. [163] Guilherme Ramos, S´ ergio Pequito, and Carlos Caleiro. 
“The robust minimal controllability problem for switched linear continuous-time systems.” 2018 Annual American Control Conference (2018), pp. 210–215. [164] Shreyas Sundaram and Christoforos N Hadjicostis. “Designing stable inverters and state observers for switched linear systems with unknown inputs.” Proceedings of the 45th IEEE Conference on Decision and Control (2006), pp. 4105–4110. [165] Josh Alman and Virginia Vassilevska Williams. “A refined laser method and faster matrix multiplication.” Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (2021), pp. 522–539. [166] B Molinari. “Extended controllability and observability for linear systems.” IEEE Transactions on Automatic Control 21.1 (1976), pp. 136–137. [167] Bin Meng. “Observability conditions of switched linear singular systems.” 2006 Chinese Control Conference (2006), pp. 1032–1037. [168] Xiaomeng Liu, Hai Lin, and Ben M Chen. “Structural controllability of switched linear systems.” Automatica 49.12 (2013), pp. 3531–3537. [169] S´ ergio Pequito and George J Pappas. “Structural minimum controllability problem for switched linear continuous-time systems.” Automatica 78 (2017), pp. 216–222. [170] Christian Commault, Jean-Michel Dion, and Jacob W van der Woude. “Characterization of generic properties of linear structured systems for efficient computations.” Kybernetika 38.5 (2002), pp. 503–520. [171] Harold W Kuhn. “The Hungarian method for the assignment problem.” Naval Research Logistics Quarterly 2.1-2 (1955), pp. 83–97. [172] Emily A. Reed, Guilherme Ramos, Paul Bogdan, and S´ ergio Pequito. Minimum Structural Sensor Placement for Switched Linear Time-Invariant Systems and Unknown Inputs. 2021. arXiv:2107.13493[eess.SY]. [173] Francesco Bullo, Jorge Cort´ es, and Sonia Martinez. Distributed control of robotic networks: a mathematical approach to motion coordination algorithms. V ol. 27. Princeton University Press, 2009. [174] Yuankun Xue and Paul Bogdan. 
“Reliable multi-fractal characterization of weighted complex networks: algorithms and implications.” Nature Scientific Reports 7.1 (2017), pp. 1–22. 169 [175] Jorge Cort´ es. “Distributed algorithms for reaching consensus on general functions.” Automatica 44.3 (2008), pp. 726–737. [176] Harold N Gabow. “Path-Based Depth-First Search for Strong and Biconnected Components; CU-CS-890-99.” CS Technical Reports. 837. (1999). [177] Agata Fronczak, Piotr Fronczak, and Janusz A. Hołyst. “Average path length in random networks.” Physics Review E 70 (5 Nov. 2004), p. 056110. DOI: 10.1103/PhysRevE.70.056110. [178] Duncan J Watts and Steven H Strogatz. “Collective dynamics of ‘small-world’networks.” Nature 393.6684 (1998), pp. 440–442. [179] Norbert Wiener. Cybernetics or Control and Communication in the Animal and the Machine. MIT press, 2019. [180] Babatunde Ogunnaike, Julio R. Banga, David Bogle, and Robert Parker. “Editorial: Biological Control Systems and Disease Modeling”. Frontiers in Bioengineering and Biotechnology 9 (2021). ISSN: 2296-4185. DOI:10.3389/fbioe.2021.677976. [181] Zhanshan Wang and Huaguang Zhang. “Global asymptotic stability of reaction–diffusion Cohen–Grossberg neural networks with continuously distributed delays.” IEEE Transactions on Neural Networks 21.1 (2009), pp. 39–49. [182] Xian Zhang, Ligang Wu, and Jiahua Zou. “Globally asymptotic stability analysis for genetic regulatory networks with mixed delays: an M-matrix-based approach.” IEEE/ACM Transactions on Computational Biology and Bioinformatics 13.1 (2015), pp. 135–147. [183] Y . Xue, S. Rodriguez, and P. Bogdan. “A spatio-temporal fractal model for a CPS approach to brain-machine-body interfaces.” Proceedings of the 2016 Design, Automation & Test in Europe Conference Exhibition (2016), pp. 642–647. [184] S. Pequito, P. Bogdan, and G. J. Pappas. “Minimum number of probes for brain dynamics observability.” 2015 54th IEEE Conference on Decision and Control (CDC) (Dec. 2015), pp. 306–311. 
DOI:10.1109/CDC.2015.7402218. [185] Richard L Magin. “Fractional calculus in bioengineering: A tool to model complex dynamics.” Proceedings of the 13th International Carpathian Control Conference (2012), pp. 464–469. [186] Riccardo Caponetto. Fractional order systems: modeling and control applications. V ol. V ol. 72. World Scientific, 2010. 170 [187] Yuankun Xue, Sergio Pequito, Joana R Coelho, Paul Bogdan, and George J Pappas. “Minimum number of sensors to ensure observability of physiological systems: A case study.” Proceedings of the 54th Annual Allerton Conference on Communication, Control, and Computing (2016), pp. 1181–1188. [188] Yuankun Xue and Paul Bogdan. “Constructing Compact Causal Mathematical Models for Complex Dynamics.” Proceedings of the 8th International Conference on Cyber-Physical Systems (2017), pp. 97–107. DOI:10.1145/3055004.3055017. [189] Epilepsy: a public health imperative. Last accessed March 17, 2022. 2019. URL:https: //www.who.int/publications/i/item/epilepsy-a-public-health-imperative. [190] Adam Li, Chester Huynh, Zachary Fitzgerald, Iahn Cajigas, Damian Brusko, Jonathan Jagid, Angel O Claudio, Andres M Kanner, Jennifer Hopp, Stephanie Chen, et al. “Neural fragility as an EEG marker of the seizure onset zone.” Nature Neuroscience 24.10 (2021), pp. 1465–1474. [191] Brian Litt, Rosana Esteller, Javier Echauz, Maryann D’Alessandro, Rachel Shor, Thomas Henry, Page Pennell, Charles Epstein, Roy Bakay, Marc Dichter, et al. “Epileptic seizures may begin hours in advance of clinical onset: a report of five patients.” Neuron 30.1 (2001), pp. 51–64. [192] Mohammed Imran Basheer Ahmed, Shamsah Alotaibi, Sujata Dash, Majed Nabil, Abdullah Omar AlTurki, et al. “A Review on Machine Learning Approaches in Identification of Pediatric Epilepsy.” SN Computer Science 3.6 (2022), pp. 1–15. [193] Sani Saminu, Guizhi Xu, Shuai Zhang, Isselmou Ab El Kader, Hajara Abdulkarim Aliyu, Adamu Halilu Jabire, Yusuf Kola Ahmed, and Mohammed Jajere Adamu. 
“Applications of Artificial Intelligence in Automatic Detection of Epileptic Seizures Using EEG Signals: A Review.” Artificial Intelligence and Applications (2022). [194] Ijaz Ahmad, Xin Wang, Mingxing Zhu, Cheng Wang, Yao Pi, Javed Ali Khan, Siyab Khan, Oluwarotimi Williams Samuel, Shixiong Chen, and Guanglin Li. “EEG-based epileptic seizure detection via machine/deep learning approaches: A Systematic Review.” Computational Intelligence and Neuroscience (2022). [195] Marten Scheffer, Jordi Bascompte, William A Brock, Victor Brovkin, Stephen R Carpenter, Vasilis Dakos, Hermann Held, Egbert H Van Nes, Max Rietkerk, and George Sugihara. “Early-warning signals for critical transitions.” Nature 461.7260 (2009), pp. 53–59. [196] Rosana Esteller, Javier Echauz, T Tcheng, Brian Litt, and Benjamin Pless. “Line length: an efficient feature for seizure onset detection.” 2001 Conference Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2 (2001), pp. 1707–1710. 171 [197] Duluxan Sritharan and Sridevi V Sarma. “Fragility in dynamic networks: application to neural networks in the epileptic cortex.” Neural Computation 26.10 (2014), pp. 2294–2327. [198] Emily Reed, Sarthak Chatterjee, Guilherme Ramos, Paul Bogdan, and S´ ergio Pequito. “Fractional cyber-neural systems—A brief survey.” Annual Reviews in Control (2022). [199] Ravi P Agarwal. Difference equations and inequalities: theory, methods, and applications. CRC Press, 2000. [200] Bin Zhou and Tianrui Zhao. “On asymptotic stability of discrete-time linear time-varying systems.” IEEE Transactions on Automatic Control 62.8 (2017), pp. 4274–4281. [201] Gilbert Strang. Introduction to Linear Algebra. V ol. 3. Wellesley-Cambridge Press Wellesley, MA, 1993. [202] Walter Rudin et al. Principles of Mathematical Analysis. V ol. 3. McGraw-Hill New York, 1964. [203] MC De Oliveira, JC Geromel, and Liu Hsu. 
“LMI characterization of structural and robust stability: the discrete-time case.” Linear Algebra and Its Applications 296.1-3 (1999), pp. 27–38. [204] Maurıcio C De Oliveira, Jacques Bernussou, and Jos´ e C Geromel. “A New Discrete-Time Robust Stability Condition.” Systems & Control Letters 37.4 (1999), pp. 261–265. [205] Emmanuel J Cand` es, Justin Romberg, and Terence Tao. “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information.” IEEE Transactions on Information Theory 52.2 (2006), pp. 489–509. [206] Stephen Boyd, Laurent El Ghaoui, Eric Feron, and Venkataramanan Balakrishnan. Linear Matrix Inequalities in System and Control Theory. SIAM, 1994. [207] Richard S Varga. Gerˇ sgorin and his Circles. V ol. 36. Springer Science & Business Media, 2010. [208] Joost B Wagenaar, Benjamin H Brinkmann, Zachary Ives, Gregory A Worrell, and Brian Litt. “A multimodal platform for cloud-based collaborative research.” 6th Inter. IEEE/EMBS Conf. on Neural Engineering (2013), pp. 1386–1389. [209] Espen AF Ihlen. “Introduction to multifractal detrended fluctuation analysis in Matlab.” Frontiers in Physiology 3 (2012), p. 141. [210] Michael Grant and Stephen Boyd. CVX: Matlab Software for Disciplined Convex Programming, version 2.1.http://cvxr.com/cvx. Mar. 2014. 172 [211] Edward Beamer, Wolfgang Fischer, and Tobias Engel. “The ATP-gated P2X7 receptor as a target for the treatment of drug-resistant epilepsy.” Frontiers in Neuroscience 11 (2017), p. 21. [212] Karl Deisseroth. “Optogenetics.” Nature Methods 8.1 (2011), pp. 26–29. [213] Maiken Nedergaard, Bruce Ransom, and Steven A Goldman. “New roles for astrocytes: redefining the functional architecture of the brain.” Trends in Neurosciences 26.10 (2003), pp. 523–530. [214] Matias I Maturana, Christian Meisel, Katrina Dell, Philippa J Karoly, Wendyl D’Souza, David B Grayden, Anthony N Burkitt, Premysl Jiruska, Jan Kudlacek, Jaroslav Hlinka, et al. 
“Critical slowing down as a biomarker for seizure susceptibility.” Nature Communications 11.1 (2020), p. 2172. [215] Jayson Sia, Edmond Jonckheere, and Paul Bogdan. “Ollivier-ricci curvature-based method to community detection in complex networks.” Nature Scientific Reports 9.1 (2019), p. 9800. [216] Yuankun Xue and Paul Bogdan. “Reconstructing missing complex networks against adversarial interventions.” Nature Communications 10.1 (2019), p. 1738. [217] Aaron D Ames, Samuel Coogan, Magnus Egerstedt, Gennaro Notomista, Koushil Sreenath, and Paulo Tabuada. “Control barrier functions: Theory and applications.” 2019 18th European Control Conference (ECC) (2019), pp. 3420–3431. [218] Ant´ onio Banganho, Marcelino Santos, and Hugo Pl´ acido da Silva. “Electrodermal activity: Fundamental principles, measurement, and application.” IEEE Potentials 41.5 (2022), pp. 35–43. [219] J Sia, E Jonckheere, Laith Shalalfeh, and Paul Bogdan. “PMU change point detection of imminent voltage collapse and stealthy attacks.” Proceedings of the 2018 IEEE Conference on Decision and Control (2018), pp. 6812–6817. 173
Asset Metadata
Creator: Reed, Emily Ann (author)
Core Title: Theoretical foundations and design methodologies for cyber-neural systems
School: Viterbi School of Engineering
Degree: Doctor of Philosophy
Degree Program: Electrical Engineering
Degree Conferral Date: 2023-08
Publication Date: 01/05/2024
Defense Date: 05/09/2023
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tag: control theory, cyber-neural systems, distributed algorithms, epilepsy, fractional-order systems, neurotechnology, OAI-PMH Harvest
Format: theses (aat)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Bogdan, Paul (committee chair); Irimia, Andrei (committee member); Krishnamachari, Bhaskar (committee member); Narayanan, Shrikanth (committee member); Pequito, Sérgio (committee member)
Creator Email: emilyree@usc.edu, ereed1272@gmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-oUC113262029
Unique Identifier: UC113262029
Identifier: etd-ReedEmilyA-12023.pdf (filename)
Legacy Identifier: etd-ReedEmilyA-12023
Document Type: Dissertation
Rights: Reed, Emily Ann
Internet Media Type: application/pdf
Type: texts
Source: 20230706-usctheses-batch-1062 (batch); University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright.
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email: cisadmin@lib.usc.edu