Behavioral Modeling and Computational Synthesis of Self-Organizing Systems

James Humann
December 16, 2015

A dissertation submitted to the faculty of the graduate school at the University of Southern California in partial fulfillment of the degree Doctor of Philosophy in Mechanical Engineering.

Dissertation Committee:
Dr. Yan Jin, Chair (AME)
Dr. Geoffrey Shiflett (AME)
Dr. Azad Madni (ASTE)

Abstract

Engineered systems are facing requirements for increased adaptability, the capacity to cope with change. This includes flexibility to fulfill multiple purposes over long lifespans, robustness to environmental changes, and resilience to system change and damage. This dissertation investigates the use of self-organization as a tool for the design of adaptable systems. Self-organizing systems have no central or outside controller. They are built up from the interactions of autonomous agents, such as a swarm of robots. Because the agents are autonomous and self-interested, they can fulfill complex functional requirements. They are able to grow and rearrange themselves; different segments of the system can adapt locally to nonuniform terrain; and if the systems are made of homogeneous agents, they can be resilient to the failure of several components, as other identical agents can take their place. Moreover, this complex functionality can be found in the interactions of fairly simple agents, decreasing the cost of manufacturing these systems at large scales.

The main challenge in the design of self-organizing systems is designing an agent's behavior at a local level such that the system fulfills its function at a higher level. In order to overcome these challenges, this dissertation presents a design ontology and a computational synthesis framework. The design ontology identifies the fundamental elements in the design of self-organizing systems and groups them into a cohesive methodology.
This ontology can guide designers at the conceptual design stage to create parametric behavioral models for self-organizing agents. Computational synthesis, based on multi-agent simulation for analysis and a genetic algorithm for optimization, can complete the detail design work. The optimized systems can then be deployed in diverse simulated scenarios. This dissertation presents four case studies on the design of self-organizing systems: a flocking system, a protective convoy, a foraging system, and a box-pushing system. The results of the case studies validate the design approach. They show that there are significant tradeoffs in the design of adaptable systems. Designers must sacrifice some efficiency and repeatability for adaptability. The ability to scale systems with constant conceptual designs was shown to be possible, but scaling with a constant detail design was shown to incur large fitness penalties. These penalties were more severe when systems scaled up in size rather than down.

Acknowledgments

After 21 years of formal education and countless hours of teamwork and collaboration, it seems strange that I should write only 2 pages to acknowledge the help that I've received from others, and 174 pages to claim as my own work. In all fairness, those amounts should be switched.

To my thesis committee members. To Dr. Shiflett, who has advised the authors of an entire series of dissertations on self-organizing systems and brought that expertise and continuity to bear on my work. To Dr. Madni, who has framed my thinking about complex systems, guided my professional development, inspired my research with a mixture of practical and theoretical insights, and in everything advocated tirelessly and selflessly for my success. Especially to Dr.
Jin, my thesis chair, who has mentored me from day one, brought me into the design theory community, allowed me to grow as an independent researcher within the themes established in his lab, given me a once-in-a-lifetime opportunity to live and teach design in Shanghai, and helped me to understand the larger implications of our work even as I was wrestling with the complex details. You have kept me on track and provided invaluable input into my research. As a first-year student, I could not see a clear path to reach this point, but you have guided me all the way here.

To my friends, classmates, and teammates. Especially to my colleagues in the Impact Lab, who share my strange fascination with dots moving on a screen. To Winston, Chang, Jonathan, and Newsha before me, and to Dizhou and Vincent behind me. You are the source of some of my best memories of USC. To spend 5 years in a new city would have been impossible without you. You have supported me and kept my morale high even without knowing it.

To the authors and volunteer community of the open-source software that I have used, including NetLogo, MASON, LaTeX, TeXstudio, Apache Commons Math, and others, whom I may never meet. Your hard work and expertise have benefited me tremendously. If I can ever return the favor, please don't hesitate to ask.

To my family. To my siblings, Vicki, Renée, Nicole, Brian, Michelle, and Steph, who have faithfully cheered me on. To Jon, who provided the electricity that generated most of this thesis. To my parents, Jim and Beth, who have always had high expectations of me but gave me plenty of room to pursue my own path. To my mom, who was my most important teacher. I have learned more valuable lessons from you than anyone could teach me in school.

To my God, who has had a plan for me since before I was born. You give me strength, and through you I can do all things.
To all of you, and to countless others whom I haven't mentioned by name. My debt to you is so great that I could never possibly repay it. I can only hope that you recognize the depth of my gratitude, and that I have made you proud.

My sincerest thanks,
James Humann
August 18, 2015

Table of Contents

Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures

I Introduction and Motivation

1 Introduction
1.1 Traditional Engineering Design
1.1.1 Systematic Design
1.1.2 Total Design
1.1.3 Axiomatic Design
1.1.4 Taguchi Methods and Design for X
1.1.5 Unifying trends and limits to traditional design
1.2 Ontology in engineering design
1.3 Complexity
1.3.1 Emergence
1.3.2 Global-to-local mapping
1.3.3 Self-organization
1.4 Cellular Self-Organizing Systems
1.5 Multi-agent simulation
1.6 Genetic algorithms
1.7 Overview of this dissertation

2 Adaptability in Self-Organizing Systems
2.1 A market "pull" and a technology "push"
2.1.1 Market pull
2.1.2 Technology push
2.2 Flexibility
2.2.1 Flexibility through emergence
2.2.2 Flexibility in flocking systems
2.3 Robustness
2.3.1 Robustness through situatedness and a short memory
2.3.2 Robustness in foraging systems
2.4 Resilience
2.4.1 Resilience through distributed functionality
2.4.2 Resilience in self-healing and sacrificial systems
2.5 Applications in Cellular Self-Organizing Systems

II Previous Work

3 Related Work
3.1 Natural self-organizing systems
3.1.1 Food and habitat exploitation in the social insects
3.1.2 Growth and development
3.1.3 Flocking and synchronized behavior
3.1.4 Other natural self-organizing systems
3.2 Artificial self-organizing systems
3.2.1 Formation control
3.2.2 Regulatory networks and artificial intelligence
3.2.3 Gathering and building
3.2.4 Reconfiguration
3.3 Computational tuning of complex system parameters
3.4 Theory of organizations
3.5 Summary of related work

4 CSO Systems: Review and Status
4.1 Road to the present
4.1.1 Early work: reconfiguration
4.1.2 Flocking and emergent formation
4.1.3 Function before form
4.1.4 Logical agents
4.1.5 Summary of previous CSO System accomplishments
4.2 A new approach
4.2.1 Limitations of previous work
4.2.2 Strategy to move forward

III Theory and Methods

5 The Dual Nature of Complexity
5.1 Natural adaptable systems
5.2 The Law of Requisite Variety
5.2.1 The variety of a controller must match the variety of its environment
5.2.2 Variety in complex systems
5.3 Creative complexity
5.3.1 Source of creativity
5.4 From agent complexity to system complexity and performance
5.4.1 Complexity gains from simple agents
5.4.2 Short descriptive lengths
5.5 Simple agents can interact to form creative systems

6 Computational Synthesis in Self-Organizing Systems
6.1 Agent-based modeling for complex system analysis
6.1.1 Agents working with and within the complex system
6.1.2 Agents as the complex system
6.1.3 Multi-agent system example: seating layout design
6.1.4 Practical takeaways from seating case study
6.2 Genetic algorithm for detail design
6.2.1 Evaluation
6.2.2 GA parameters
6.2.3 GA in the design of Cellular Self-Organizing Systems
6.3 Integration of multi-agent simulation with optimization

7 Design Ontology for Self-Organizing Systems
7.1 Introduction to ontology
7.1.1 Need for ontology in self-organizing systems design
7.2 Defining self organization
7.2.1 The elusive nature of organization
7.2.2 Function as the missing subjective link
7.3 Related ontologies
7.4 Requirements and approach
7.4.1 Ontology requirements
7.4.2 Balance between generality and practicality
7.5 System, environment, and observer
7.5.1 Designer as observer: a subjective definition of order
7.6 Relevant characteristics of the design process
7.7 Measuring performance: system and state
7.8 Architectural levels
7.9 Self-organization vs. top-down design
7.9.1 Behavioral design at the agent level is the key to creating self-organizing systems
7.10 Behavioral design
7.10.1 Behavior capacity vs. behavior regulation
7.10.2 Two-field based behavior regulation
7.11 Behavior selection
7.12 DNA-based behavior representation
7.13 Summary of ontology: building a parametric behavioral model
7.14 Design methodology
7.15 Research implications

IV Case Studies

8 Flocking and Exploration
8.1 Practical applications
8.2 Research questions
8.3 The COARM behavioral model
8.4 A field-based flocking model
8.5 Simulation and optimization
8.5.1 NetLogo simulation platform
8.5.2 Flocking simulation specifications
8.5.3 Genetic algorithm specifications
8.5.4 Results
8.5.5 Flocking with A = R
8.6 Exploration
8.6.1 Results
8.6.2 Percentage-targeted exploration
8.7 Discussion

9 Protective convoy
9.1 Convoy task
9.2 Research questions
9.3 Design of self-organizing protective convoy
9.3.1 Behavioral design
9.3.2 Optimization
9.3.3 Optimized system-level behavior
9.4 Discussion

10 Foraging
10.1 Significance of foraging problem
10.1.1 Practical applications
10.1.2 Key features for designers
10.2 Foraging task and simulation
10.3 Ant and flocking-inspired design
10.3.1 Hardware constraint analysis
10.3.2 System state and perspective
10.3.3 Functional design
10.3.4 Behavioral capacity
10.3.5 Behavioral selection
10.3.6 Simulation and optimization
10.4 Rework with boundary detection added
10.5 Test for scalability
10.5.1 Scalability assessment
10.5.2 Research questions
10.5.3 Extended optimization
10.5.4 Scalability of conceptual design
10.5.5 Scalability of detail design
10.5.6 Scalability in system with boundary detection
10.6 Tests for resilience
10.6.1 Research questions
10.6.2 Results
10.7 Using system complexity metrics to expand GA search
10.7.1 Research question
10.7.2 Sinha and de Weck's system complexity metric
10.7.3 Incorporating topological complexity into fitness function
10.7.4 Results
10.8 Discussion

11 Box Pushing
11.1 Significance of box pushing task
11.1.1 Practical applications
11.1.2 Key features for designers
11.2 Details of the box-pushing task and design process
11.2.1 Task overview
11.2.2 Simulation physics
11.2.3 Agent hardware assumptions
11.2.4 Potential pitfalls identified through simulation
11.3 Conceptual design of cooperative box-pushing system
11.3.1 TRIZ application calls for state changes
11.3.2 Behavioral design
11.3.3 Summary of agent behavior
11.4 Detail design by simulation and optimization of dDNA
11.4.1 DNA encoding
11.4.2 Genetic operators
11.4.3 Fitness function
11.4.4 Finding optimal systems
11.5 Scenarios and results
11.5.1 Baseline: random initial positions and stepping order
11.5.2 RNG seed attached to candidates
11.5.3 Ideal initial positions
11.6 Discussion
11.6.1 Revisiting research questions
11.6.2 Limitations

V Conclusion

12 Findings and Contributions
12.1 Collected research questions
12.1.1 Validation of design ontology and computational synthesis approach
12.1.2 Adaptability of self-organizing systems
12.1.3 Generational learning
12.2 Primary contributions
12.2.1 Design theory and methodology community
12.2.2 Self-organizing systems community
12.3 Research scope recap

13 Summary and Future Work
13.1 Dissertation summary
13.1.1 Context
13.1.2 Ontology and methodology
13.1.3 Case studies
13.1.4 Implications for adaptability
13.2 Future work
13.2.1 Optimization
13.2.2 Improved simulations
13.2.3 Ontology
13.2.4 Classifying adaptability
13.2.5 Physical implementation
13.3 Final remarks

Bibliography

List of Tables

1.1 Genetic algorithm–DNA comparison
6.1 Comparison of Two Seating Layout Design Options
7.1 Comparison of natural and design DNA
7.2 Summary of ontological terms
8.1 Best dDNA set from final generation for 100% exploration
8.2 Fittest high-O low-M dDNA set from final generation for 25% exploration
8.3 Fittest high-A high-M dDNA set from final generation for 25% exploration
9.1 Optimized design variables for protective convoy
10.1 Flocking behavioral parameters
10.2 Mapping functions between binary numbers and behavioral parameters
10.3 Parameters of best candidate of first generation
10.4 Parameters of best candidate of 200th generation
10.5 Scalability cross-testing of systems without boundary detection
10.6 Scalability cross-testing of systems with boundary detection
10.7 Results of cross-testing systems for resilience
10.8 Systems optimized for opposite levels of topological complexity
11.1 Mapping GA DNA encoding to behavioral parameters
11.2 Optimized parameter set for baseline scenario
11.3 Optimized parameter set for RNG seed added scenario
11.4 Optimized parameter set for ideal initial conditions scenario
11.5 Performance tradeoffs for ideal vs. random initial conditions

List of Figures

1.1 Axiomatic Design mapping and decomposition
4.1 Early CSO reconfiguration from spider to snake
4.2 Box moving simulation sequence
4.3 Box-pushing task with narrow corridor
6.1 Two seating layout options
6.2 Fitness value distribution for middle-aisle and side-aisle layouts
6.3 Results of simulation when all customers arrive in groups of 4
6.4 Simulation-optimization loop
7.1 Related ontologies
7.2 Sources of ontological entities, from abstract to applied
7.3 Observer demarcating a boundary between a system and its environment
7.4 Comparison of two rafts, one self-organized, one designed
7.5 Field-based behavior regulation
7.6 Flocking agent behavior as design DNA
7.7 Self-organizing systems design methodology
8.1 Agent and neighbor frames of reference
8.2 Typical flocking fitness evolution
8.3 Typical flocking average COARM parameter evolution
8.4 Screenshot sequence of optimized system's flock formation
8.5 Typical GA fitness evolution with system restricted by A = R
8.6 Typical dDNA parameter evolution with system restricted by A = R
8.7 High-O high-M exploration system
8.8 Screenshot sequence showing a fan and sweep technique
8.9 High-A, low-M for 25% exploration
8.10 System-level behavior of optimized dDNA for 25% exploration
8.11 Typical dDNA evolution across generations for 50% exploration
8.12 Parameter evolution that converged to a high-O for 50% exploration
8.13 Flock maintenance behavior for 50% exploration
9.1 Initial setup showing cargo, protectors, and bullets
9.2 Fitness evolution across 40 GA generations
9.3 Action screenshot of optimized system midway through a cargo run
10.1 Initial configuration of 1-row foraging simulation
10.2 GA results across 200 generations for first foraging system
10.3 Best foraging candidate from first generation
10.4 Best candidate of final generation
10.5 GA fitness evolution for systems with boundary detection
10.6 Behavior of system with boundary detection
10.7 Optimized fitness for each number of agent rows
10.8 Results of detail design scalability test
10.9 Behavior of RO1 6-row system
10.10 Behavior of RO6 6-row system
10.11 System-level behavior of RO6 1-row system for timesteps 500–1000
10.12 Scalability tests for detail design of systems with boundary detection
10.13 Jamming of RO2 6-row system
10.14 Comparison of GAs seeking high vs. low topological complexity
10.15 System-level structure when optimized for high vs. low complexity
11.1 Initial conditions for box-pushing with agents placed randomly
11.2 Exploratory box-pushing results
11.3 Representation of field box value at every NetLogo patch
11.4 Box cooperation zones
11.5 Success rate as a function of energy budget
11.6 Retrial fitness histogram for system with encoded RNG seed
11.7 System with ideal initial conditions

Part I
Introduction and Motivation

Chapter 1
Introduction

Engineered systems have become increasingly complex, and with increasing global competition and technology, there is no sign that this trend will soon stop [11]. This increasing complexity is driven by more sophisticated customer needs, demands for adaptability, the increasing specialization of engineering knowledge, and the push toward system deployment in hostile environments. As engineered systems become incredibly large (communications infrastructure, skyscrapers, missile defense systems) and small (nanotechnology, genetic engineering) in scale, their environmental stressors are becoming harder to predict, and they become increasingly difficult to design in a top-down manner. This difficulty is the result of a mismatch between the phenomena of complex systems and the classical approach to engineering design.

Self-organization of simple robot swarms has been suggested as a tool to create complex adaptable systems [250, 105]. Self-organization has been studied extensively as a naturally occurring phenomenon, but less effort has been focused on researching self-organization as an intentional design mechanism. These two approaches are complementary: the former can determine how these systems work, while the latter asks whether this knowledge can be abstracted and applied to the design of other systems.

This dissertation is focused on the design of complex systems through self-organization. The design process uses a general ontology of self-organizing systems for conceptual design and combined simulation and optimization for detail design. It draws on the vocabulary and ideas of traditional design theory and methodology (DTM), complexity, multi-agent simulations, ontologies, and evolutionary optimization algorithms. Key terminology and an overview of the dissertation are given in this chapter.
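The combination of multi-agent simulation for analysis and a genetic algorithm for optimization of agent parameters, described above, can be sketched in miniature. The following Python sketch is illustrative only: the toy one-dimensional "flock," the parameter names (`cohesion`, `noise`), and the GA operators are hypothetical stand-ins, not the dissertation's actual NetLogo/MASON models or dDNA encodings.

```python
import random

random.seed(0)

def simulate(params, steps=50):
    """Toy stand-in for a multi-agent simulation: 20 agents on a line,
    each step moving toward the group center (cohesion) with random
    perturbation (noise). Returns the final spread of the group."""
    cohesion, noise = params
    agents = [random.uniform(-10, 10) for _ in range(20)]
    for _ in range(steps):
        center = sum(agents) / len(agents)
        agents = [a + cohesion * (center - a) + random.uniform(-noise, noise)
                  for a in agents]
    return max(agents) - min(agents)

def fitness(params):
    # A tighter flock (smaller spread) is better, so negate the spread.
    return -simulate(params)

def genetic_algorithm(pop_size=20, generations=30):
    # Each candidate "dDNA" is a pair (cohesion, noise), both in [0, 1].
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple((x + y) / 2 for x, y in zip(a, b))   # crossover
            child = tuple(min(1.0, max(0.0, g + random.gauss(0, 0.05)))
                          for g in child)            # bounded mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
```

Run end to end, the loop evolves toward high-cohesion, low-noise parameter sets, mirroring the overall architecture (simulation inside the fitness evaluation of an evolutionary optimizer) even though every detail here is simplified.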
1.1 Traditional Engineering Design

Engineering design is a mixture of art and science, the practical application of theoretical knowledge. It is an abductive reasoning process wherein a designer attempts to create an artifact that will cause a desired outcome, as determined by the laws of nature. Research in engineering design must focus not only on understanding the technical and scientific phenomena in the systems of interest, but also on the human element, supporting the designer in the task of improving these systems [34].

Current design theory and methodology (DTM) follows a reductionist approach. Reductionism can be defined colloquially as "divide and conquer": a large task is subdivided into several smaller tasks, and each of these tasks is completed and optimized somewhat independently. This approach is described at length in many fundamental and respected design textbooks. It can be seen in the function structure diagrams of Total Design [178] and Systematic Design [165], and in the Independence Axiom of Axiomatic Design [209]. Many informal design methods, often company-specific, also follow the top-down reductionist approach. According to Bar-Yam et al. [11], the classical approach offers the following desirable characteristics: stability, predictability, reliability, transparency, and controllability; to this list I would add efficiency.

1.1.1 Systematic Design

Systematic Design is the design process described by German professors Pahl and Beitz [165]. Elements of this design process are often taught in undergraduate Mechanical Engineering Design classes. It is a very practical methodology that was based on years of observing professional designers. In this scheme, product development consists of four sequential tasks:

1. Planning and task clarification: an engineer must first gather enough information about the task given to him.
This information can take the form of specific customer requirements, constraints, and the status of competitors, for example. The output of this stage is a requirements list to constrain the rest of the design.

2. Conceptual Design: a designer must then create a concept solution that will fulfill the requirements list. This solution may be only an abstract representation of the behavior of the system, or possibly a sparsely detailed drawing. There could be multiple working concepts.

3. Embodiment Design: starting from the working concept, a designer must concretize the information by developing preliminary layouts and structures of the concept solutions. During this phase, a final concept is selected among several variants, as the design is detailed enough to perform basic economic and performance analysis.

4. Detail Design: finally, the designer must output design specifications in enough detail that the product can be manufactured. This phase requires the development of exact dimensions, material selection, and manufacturing processes. The output is a specific set of drawings, bills of material, and manufacturing instructions.

1.1.2 Total Design

Total Design [178] is a similar methodology that is commonly taught and used. In this construction, the design process consists of a "Design Core," comprising sequentially Market, Specification, Concept Design, Detail Design, Manufacture, and Sell. This inner core is enveloped by the product design specification (PDS) and receives inputs from some domain-independent concerns such as optimization and market analysis. The PDS is the foundation of this design methodology; it must be a thorough compendium of performance requirements, environmental constraints, cost requirements, etc. The PDS should reflect the voice of the customer and fully guide the design process.
While this process is meant to be performed sequentially, in practice there will be interruptions caused by the inputs, and iterative loops due to errors or new information, and the PDS may evolve during the course of development.

1.1.3 Axiomatic Design

Figure 1.1: Axiomatic Design mapping and decomposition

Nam Suh's Axiomatic Design (AD) methodology [209] is perhaps the most influential work in this field. Axiomatic Design attempts to find fundamental laws of design, whereas most other methodologies rely on heuristics. This methodology rests on two axioms: the Independence Axiom and the Information Axiom. The Independence Axiom states that the best designs are achieved by mapping functional requirements to design parameters in such a way that the requirements can be met by tuning parameters individually, with no mutual dependency. The Information Axiom states that the best design is the independent design that requires the least information content; that is, it has the highest probability of success.

AD is a bold attempt at systematizing design. Its axioms produce corollaries and proofs in much the same way that mathematical theorems do. If the two fundamental axioms are universally true, then a designer should be confident in using this methodology. At this point, however, and until AD researchers can prove otherwise, the axioms are merely good advice for designers but not fundamental laws. The four domains of AD will be referenced throughout this paper as they represent a good framework for the various mappings inherent in the design process. These four domains are listed as follows [207]:

Customer Domain: Customer Attributes (CA) are determined early so that a designer can pinpoint a certain need in a segment of the population to fulfill.

Functional Domain: Functional Requirements (FR) are specific functions that a system must perform. The FRs are specified by the designer to fulfill the CAs.
Physical Domain: Design Parameters (DP) are the physical (or software) components of the system that are designed to enable the FRs. An example of a DP could be a module such as a condenser in a refrigeration loop, or a dimension such as the diameter of a pipe.

Process Domain: Process Variables (PV) are the variables that govern the production of the DPs.

As shown in Figure 1.1, Axiomatic Design views the design process as a mapping from social needs in the Customer Domain to technical specifications in the Process Domain, with decomposition into finer and finer details occurring within domains.

1.1.4 Taguchi Methods and Design for X

There are many methodologies that focus only on a certain stage of the design process or a specific attribute of the product. These include Design for Manufacturing, Design for Assembly/Disassembly, Design for Recycling, Design for Serviceability, and many others [180]. As a group, they can be referred to as Design for X (DFX¹). Taguchi methods [212] are a prominent example of Design for Robustness. Genichi Taguchi outlined a process of deliberate experimental design to maximize product information at a minimal cost. His philosophy was that quality must come from the design process; it cannot be "inspected into" the product via quality control sampling [186]. The robustness comes from careful selection of parameters that make the manufacturing process impervious to uncontrollable factors such as machine variance. All of the DFX strategies have an important place in design, and some will be emphasized more in certain industries and by certain types of companies. They are usually not comprehensive design methodologies, but serve as valuable references during the design process.

¹ Remich [180] has suggested that DFX could stand for "Design for Excellence."

1.1.5 Unifying trends and limits to traditional design

No design process exactly follows the approaches described in these sections.
They are general prescriptive guides that designers have found useful through trial and error. A descriptive model of the design process would show many iterative loops between different design tasks and a considerable amount of informal, self-organized behavior among design teams [46]. Some tasks may be shortened or skipped altogether due to time and budget constraints or the capabilities of the design team.

These classical methodologies have a few common themes. They rely on a mapping process across various realms of knowledge, from customer needs to manufacturing. System functionality (FR-fulfillment) is the main goal, and various DFX methods are used à la carte to add certain desirable characteristics such as recyclability, robustness, etc. Traditional DTM also relies on decomposition of solutions into elementary units. A high-level concept for the purpose of the design is all that a designer may have to start with. Throughout the design process, a designer will progressively refine this idea until it is decomposed into manageable components. In Figure 1.1, the mapping across realms of knowledge corresponds to horizontal movement, and decomposition corresponds to vertical movement. For these methodologies to work, a design should have constant FRs and environmental assumptions, and the pieces of the design must be amenable to analysis in isolation from the whole system. The latter constraint implies that a reductionist approach can be applied to the system.

The reductionist approach has worked well for many applications. It will continue to work well in these areas, and this dissertation does not argue otherwise, but using a conventional, top-down design approach may carry some risk. Unintentional dependencies may arise and dominate the performance of the built system. The system may serve only one or a limited number of functions, and the system cannot deal with an unpredictable environment, an unstable environment, or task changes.
These systems can be economical but fragile [74], as every component is a possible failure point in the system [217]. Scalability is also a concern, as many product failures result from naïve geometric scaling of a design without anticipating the new failure modes that size begets [172]. Dealing with the complexity inherent in unknowable or dynamic environments presents unique challenges to the designer, which may be better solved using an alternate design method.

1.2 Ontology in engineering design

An ontology is a formal language for describing a knowledge domain. Engineering design is fundamentally an exercise in information processing, so having the information structured in ontological form can be very valuable to the process. Having a general ontology also allows for a description of diverse systems with the same language, so that fair comparisons can be made and analogical knowledge transfer facilitated. If the ontology specifies the entities and relationships of a class of system, then designers can be guided through the process of formally specifying new instances of the system. Various researchers have proposed ontologies for concepts important to engineers. Gero has proposed an ontology of artifacts consisting of function, behavior, and structure [72], which can be used to describe the various states and transformations of the design process [71]. The Functional Basis [94] is a detailed exploration of the definition of function, trying to identify a "minimal set" of terms for engineers to use during functional design, to ensure common levels of specificity between different designers. With common decompositions of function, and design repositories of successfully engineered structures that fulfill those functions, researchers have demonstrated automated searches that can be used to generate structural designs from higher-level functional requirements [28, 35].
With a similar description of biological systems, these knowledge stores have even been used as aids to generate design concepts by analogy to nature [154].

1.3 Complexity

Complexity is found wherever a system has parts whose many interactions are difficult to analyze. Complex systems are usually found in the mesoscale; that is, they must have more than a few parts, so that the interactions among parts (whose number scales nonlinearly with the number of components) are too numerous to analyze conveniently, and they must not have so many parts that the system can be described simply on a macro level according to a thermodynamic analysis [9]. Another phenomenon of complex systems is that the behavior of their components in isolation may be very different from their behavior as part of the system.

Complex systems, in the engineering sense, are designed systems that have many parts with strong and unintuitive interactions. The interactions and dependencies among components are so pervasive that a complex engineered system is difficult to analyze from a reductionist approach. This is in contrast to merely complicated systems, such as an automobile, which have many parts, but whose boundaries, dependencies, and interactions among the parts are well defined and predictable. In a complex system, the interfaces and relationships among components are where the designer can add value to the system [11, 138]. Complex systems can be hard to design and control, because reactions are not necessarily proportional to inputs, and massive interdependencies can cause cascading failures or unintended consequences in seemingly unrelated areas of the system [89]. For the engineer, problems with different levels of complexity may require qualitatively different solution strategies [138].

1.3.1 Emergence

Complexity also exhibits the phenomenon of the whole being more than the sum of its parts.
This means that the behavior of a complex system cannot simply be inferred from the initial conditions and properties of its parts [196]. This phenomenon is termed emergence [233, 9]. For example, flocking can be thought of as an emergent property of a complex system. While birds in a flock may be simple organisms that behave according to simple rules [183], it is the surprising interaction of these rules that leads to flocking behavior. The study of a solitary bird would not lead to a prediction of flocking behavior; the flocking is a property of the interactions among birds, not of the birds themselves.

Lower-level interactions can lead to complicating emergence or simplifying emergence. For example, consider cellular automata, which in their simplest form are tables of on/off (or 1/0) values with update rules that only consider a cell's nearest neighbors. From such a simple, deterministic rule set, complex, chaotic, or seemingly random patterns can be formed [243]. Also consider the motion of the planets in our solar system. Astronomers need not consider the gravitational force between every atom in the earth and sun, but only the emergent force that arises from the total lumped mass of the elements of the system. Thus, a system that is very complex at one scale can show emergent simplicity at a higher scale, or vice versa [9].

1.3.2 Global-to-local mapping

In order to properly engineer a complex system, a designer must prevent the emergence of system-level pathologies and encourage the emergence of desired functionality. However, emergence is very hard to predict from traditional, reductionist analysis. Recall that emergence arises from the interactions among components, rather than the characteristics of the components themselves. Thus the reductionist approach, which analyzes parts in isolation, is not well suited to predict or control emergence.
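The elementary cellular automaton described above can be sketched in a few lines. This is an illustrative sketch, not code from this dissertation: it uses Wolfram's standard rule-numbering convention for nearest-neighbor binary automata, and the choice of Rule 30 (a rule known to produce chaotic patterns from a single live cell) is arbitrary.

```python
# Minimal sketch of an elementary cellular automaton: each cell is 0 or 1,
# and its next state depends only on itself and its two nearest neighbors.
# The 8 possible neighborhoods index into the bits of the rule number.

def step(cells, rule=30):
    """Apply one synchronous update with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((rule >> index) & 1)              # look up bit in rule
    return out

# A single 1 in a field of 0s; Rule 30 grows a chaotic triangular pattern.
row = [0] * 15
row[7] = 1
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row, rule=30)
```

Even this tiny deterministic rule set illustrates the global-to-local problem: predicting which global pattern a given rule number produces, or finding a rule for a desired pattern, is not tractable by inspecting the rule table alone.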
In fact, using cellular automata as an example, there is no guarantee that a local rule can be found to produce a certain global effect, nor that such a rule is unique [9]. Human intuition often fails to create optimal local rules in emergent systems [47]. An engineer can use testing and simulation to map local behaviors to global action, but the inverse, global-to-local mapping, is an open problem in complex systems research and is the key challenge of complex systems design.

1.3.3 Self-organization

This dissertation will use a fairly general definition of self-organizing systems: a self-organizing system is a system that shows the emergence of structure and organization, due to the interactions of simple agents with no global knowledge, that is not an obvious outgrowth of the constituent components and is not prescribed or choreographed by a larger force outside the system. Here the definition of an agent is also quite general; it could refer to any low-level components, from molecules self-organizing into galaxies to buffalo self-organizing into migrating herds.

Self-organization is a way of modeling a system by assuming certain elements to be simple. In the case of modeling self-organizing insect colonies, entomologists can assume that the process of flying to a source of nectar and returning it to the nest is simple. Of course, to someone interested in flight mechanics and control, this process is very complex [22]. This is a view of the system that allows study and design with minimal mental and computational exertion. A self-organizing model shows that the global phenomena of interest can be explained by making the assumption of simplicity at a lower level. The view level should be low enough that salient details are not ignored, and high enough that irrelevant complexity is averaged out. Only if the description fails should it be made more complex [22].
Self-organization is present in a diverse range of system types, including physical, biological, and social systems. The hierarchical structure of materials can be thought of as a self-organization of atoms and molecules. "Hive mind" behavior in insects arises from simple agents with local interactions [111]. Even human social systems such as freeway traffic or evacuation exhibit emergent flocking behavior that can be re-created in simulation using self-organizing principles [146]. In fact, compared to traditionally engineered monolithic systems, self-organizing systems are far more common, but less understood [57].

1.4 Cellular Self-Organizing Systems

There is a new approach to design known as Cellular Self-Organization (CSO). In CSO systems, the design is "bottom-up." The fundamental unit is the mechanical cell (mCell). Each cell in the system is a simple entity, perhaps a small robot, with limited sensory perception and computational ability. Each cell is encoded with behavioral rules, and from the interaction of these many cells the overall desired system behavior will emerge. This route is less direct than conventional design, so we can expect it to be less efficient, but the loss in efficiency is justified by a gain in adaptability.

There are two overarching goals in CSO research [104]. The first goal is to develop tools for designing adaptable systems, and the second is to gain greater understanding of the processes underlying the "designs" found in nature. The CSO approach is inspired by the self-organization found in natural systems. Treating living systems as designed products, we could say that the top-level FR is "to survive." This could be decomposed into sub-FRs such as "maintain body temperature, ingest nutrients, release waste, etc." All sub-FRs serve only to assist the organism in fulfilling its top-level functional requirement.
The biological cells are not self-aware and do not know the top-level FR, but simply follow the rules encoded in their DNA, and the interactions among these many cells lead to emergent life. The designer of an artificial system can take advantage of this focus, but change the top-level FR to whatever is useful for his purposes. The "instincts" of the mCells will cause the cells to interact in a way that fulfills the top-level FR. Engineered self-organizing systems such as CSO systems may be less efficient and predictable than a traditionally engineered monolithic system, but they also may exhibit some attractive features:

Low cost and easy manufacturability: CSO systems are made of very simple agents. These agents could be manufactured cheaply in large batches.

Flexibility: a CSO system is not locked into one mode of operation. By a change in interaction rules, different top-level functionality can be obtained.

Robustness: robust systems can respond to internal or external perturbations [221] and are not dependent on exact environmental stimuli. CSO systems are not optimized for a single environment, but instead react to whatever current sensory information they are receiving. This allows them to adapt to changing environments or environments that were unanticipated during the design process.

Scalability: since CSO systems rely only on local interactions, there is no computational upper limit to the number of cells in a system. A cell is content to react to stimuli in its local neighborhood, regardless of the size of the entire system. Thus, the number of mCells can be dramatically increased or decreased without systemic problems.

Resilience: resilient systems display acceptable performance in diverse contexts and degrade gracefully, not catastrophically, if damaged [158].
CSO systems will have the capability to withstand damage to components without total system failure, since the constituent agents are mostly homogeneous and there is redundancy in the large number of cells.

All of these characteristics are difficult to engineer, but abundant in nature. The drawbacks of self-organized engineering include stochasticity and a lack of rigorous mathematical proofs of convergence and design validation. Traditional design techniques are suitable to ensure the proper functioning of an individual mCell, but it is difficult or impossible to prove convergence or effectiveness of emergent systems [240, 48], and designers of such systems must often resort to statistical techniques to establish confidence in the system's emergent behavior. The combination of strengths and weaknesses described in this section will make CSO systems attractive for FRs and environments that place a high premium on adaptability, to make up for the loss in efficiency and predictability.

1.5 Multi-agent simulation

A multi-agent based simulation (MAS, also multi-agent system) is a computer program that simulates the interactions of several autonomous agents. The agents must be capable of some independent action or decision-making. The aggregate behavior of many agents acting independently is often difficult or even impossible to evaluate analytically, but simulations of such behavior can be performed on computers to give valuable information about the emergent properties of the system. An MAS generally should rely on a model of agents and interactions that is simple enough to highlight the most salient characteristics, and rich enough to reflect reality.

For an MAS to be truly multi-agent, it is important to restrict the agents' knowledge. If the agents were omniscient, they could simply reason and act in concert as one super-entity and essentially become one agent [166].
Thus, in most MAS studies, agents are assumed to have knowledge of only their own internal state and their local neighborhood. MASs have been used to study flocking and schooling in the animal kingdom [183, 239], disaster relief operations [156], the mechanical design process [128, 106], the growth and decline of civilizations [3], and many other phenomena that can be modeled as the emergent results of interactions among many autonomous entities.

1.6 Genetic algorithms

A genetic algorithm (GA) is a stochastic optimization algorithm that uses DNA, natural selection, and evolution as a useful metaphor for efficient optimization. Some of the most influential works on GAs were authored by John Holland of the University of Michigan and his protégé David Goldberg [77, 96]. GAs are well suited to explore nonlinear and discontinuous optimization search spaces and are much faster than exhaustive search methods. This becomes important when the function evaluation is computationally expensive.

A GA works by generating an initial field of candidate solutions, or genomes. These genomes are typically binary strings. The genomes are mapped to specific arguments of a fitness function, and the fitness function evaluates each candidate's worth. Candidates with high fitness are more likely to be selected for reproduction, so that their genes, possibly mixed with genes from other fit parents, survive to the next generation. Mutations can be introduced by randomly flipping a bit in a genome. This process is repeated for a set number of generations or until a candidate with suitable fitness is identified. Goldberg's explanation of GA features is summarized in Table 1.1 [76].

Table 1.1: Genetic algorithm – DNA comparison

Nature        Genetic Algorithm
Chromosome    String
Gene          Character
Allele        Value
Locus         String position
Genotype      Structure
Phenotype     Decoded structure
Epistasis²    Nonlinearity
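The generate-evaluate-select-recombine-mutate loop just described can be sketched in a few lines. This is a minimal illustrative implementation, not the algorithm used in this dissertation: the OneMax fitness (count of 1 bits in the genome), the tournament selection, and all parameter values are arbitrary choices for demonstration.

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=60,
           p_mutation=0.02, seed=1):
    """Minimal generational GA over binary-string genomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament of two: the fitter of two random candidates wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(length):             # bit-flip mutation
                if rng.random() < p_mutation:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of 1s, so sum() serves directly.
best = evolve(sum)
print(sum(best), "ones out of 20")
```

Note that the fitness function is the only problem-specific element; swapping in an expensive simulation-based fitness changes nothing in the loop, which is why GAs pair naturally with multi-agent simulations.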
Because GAs act on a population of candidate solutions, and not just a single point, they have an ability to explore possibly remote areas of a search space in parallel while avoiding local maxima. The Schema Theorem states that by simultaneously testing multiple "schemata," which are small, partially defined segments of the overall solution, GAs can take advantage of "implicit parallelism" to quickly find and propagate schemata that cause high fitness [76]. Once some elementary "building blocks" of the solution are uncovered, the GA can recombine them through crossover to find more potent combinations [79]. There are many different specific implementations of GA, each with its own strengths and weaknesses. Some hew closely to the Darwinian inspiration of the original algorithms, while others take liberties with the form of the genome, the number of parents involved in reproduction, cloning, etc. In even the most exotic algorithms, the basis is still selection (fitness), recombination (mating), and mutation.

1.7 Overview of this dissertation

The focus of this research is the computational synthesis of complex systems through self-organization, genetic algorithms, and a generalized behavioral model. This chapter has explained some concepts and vocabulary that are the foundation for this dissertation. The traditional engineering design process was explained along with its limitations. Self-organization was introduced as a possible design mechanism for the synthesis of adaptable systems, and CSO systems were highlighted as a self-organizing approach to design. Part I concludes with further motivation for the use of self-organization in design.
Part II relates relevant work in the areas of self-organizing systems, complex system optimization, and formal organizations research, concluding with Chapter 4, which summarizes the previous research in CSO design and points toward a new approach using computational synthesis at the detail design level while a general self-organizing model is developed for use at the conceptual design level. This behavioral model and computational framework are developed in Part III. Part IV reports the results of several case studies on computational synthesis of CSO systems. The case studies introduce research questions and focus on numerical and qualitative results, but the deeper implications are discussed in Chapter 12 so that they can be grouped by theme rather than application. Chapter 13 summarizes this dissertation and lists possible areas for future work. A reader who wants only the highlights of this research could focus on Chapters 4 and 7, any one chapter from Part IV, and Chapter 12.

² Epistasis is the phenomenon of multiple genes affecting a single phenotype simultaneously in a non-linear way. For example, where normal genetic analysis would predict type B or AB blood, one recessive gene at a separate locus can cause a person's blood type to be type O [118].

Chapter 2

Adaptability in Self-Organizing Systems

Natural systems, composed bottom-up of cells and further organizing into hives and colonies, show remarkable adaptability. They are able to grow, heal themselves, satisfy multiple competing survival needs, and move to and thrive in new environments. These are capabilities that could revolutionize engineering if machines could display them with similar success. In this chapter, I will lay out an argument for self-organizing engineered systems based on a need for adaptability. "Adaptability" in the engineering literature has several related definitions, and here I will use it as an umbrella term for different ways to cope with change.
Recall from Chapter 1 that traditional engineering DTM is an efficient way of designing for static functional requirements and constraints, but that these techniques have difficulty coping with complexity and change [171]. In particular, this dissertation will focus on three main facets of adaptability:

Flexibility: Resistance to changes in functional requirements
Robustness: Resistance to changes in the environment and initial conditions
Resilience: Resistance to changes (e.g. damage) within the system itself

Note that these definitions are notional and are simply meant to guide the reader's thoughts for this dissertation. There are many other definitions in the literature, and many cases can be interpreted as combinations of these terms.

2.1 A market "pull" and a technology "push"

Groundbreaking inventions are often the result of economic demand that "pulls" them out of the R&D lab, or of new enabling technologies that "push" them out to the population before they even know to ask for them [42]. Currently, both forces are acting on engineered self-organizing systems.

2.1.1 Market pull

Design of adaptable systems

The market motivation for self-organizing systems is to create adaptable systems. In many applications, engineered systems must perform in dynamic environments or fulfill changing requirements, e.g., biomedical applications [176], extraterrestrial environments [170], and military and search-and-rescue missions. Systems may also face extreme environmental stressors or serve long lifespans that require new functionality after deployment. Adaptability is also required in systems operating at scales so small that they are difficult to control or repair [181, 95]. Thus, the ability to cope with change is now often ordered as a system requirement [158, 221]. This demand for adaptability is the market pull.
Self-organizing systems have the potential to display adaptability through redundancy, scalability, and soft connections. Because the components of the system are autonomous, there are no long causal chains of system functionality that can be broken by the malfunction of a single part. If a self-organizing agent breaks down or is removed, another identical agent can take its place. Because the systems are based on local interactions, there is no shared resource (e.g. energy storage or communication bandwidth) that acts as a hard limit to the size of the system. Agents can form local teams of appropriate sizes, independent of the entire system's size, to adaptably scale the system. The agents' ability to form and disband interactions also allows the system to re-organize to better meet changing conditions.

Pervasive self-organizing phenomena

Self-organization is a phenomenon that is already occurring in manmade systems, as miniaturization has made communicating systems more interconnected and pervasive [98, 173, 126]. For example, passenger vehicles are increasingly being equipped with technology to locally communicate information about their velocity and location [184]. As more consumer devices communicate and interact, unintended consequences may arise, and knowledge of self-organizing principles can help to manage this emergent behavior [111]. In other words, new self-organizing systems with surprising behavior are being created with increasing regularity, even if this is not by design, and new techniques are required to harness their emergent power.

2.1.2 Technology push

Self-organization has been studied extensively as a naturally occurring phenomenon [33, 5], but less effort has been focused on researching self-organization as an intentional design mechanism.
As the mechanical slowly merges with the natural [17, 111], the ability to purchase a swarm of small, cheap, insect-like robots with some of the capabilities envisioned by many self-organization researchers is becoming more likely. This is the technology push. Enabling technologies like Kilobots [187] and the open-source "Jasmine" robots [121] provide simple robotic agents that can form swarms. Another possible substrate for SO systems is hybrid biological-mechanical systems that take advantage of natural organisms' size and locomotion but allow for algorithmic control. For example, researchers have demonstrated digital control over live cockroaches [190] and miniature swimming robots made with jellyfish membranes [157]. Unmanned vehicles for the air and sea are also now commercial off-the-shelf products, and these can work together like flocks and schools. These relatively recent technologies can be incorporated into a designer's toolbox if he has a methodology for leveraging the interactions among the simple agents to create adaptable systems.

2.2 Flexibility

Flexibility is a system's ability to adapt to changing functional requirements. This requires a system to display different behaviors at different points in time as requirements change. The failure of many pieces of heavy infrastructure can be attributed to a lack of flexibility, as their long lifespans inevitably lead to changes in demand from their customer base.

2.2.1 Flexibility through emergence

In a self-organizing system, flexibility can be achieved through the emergence of system-level behavior from the interactions of simpler agents. Because the agents are cheaper and less complex than the system as a whole, it is simple for an engineer to change the agent behavior, especially if it is software-based or parameterized. From small changes in agent behavior, it is possible to cause large changes in system-level behavior [244].
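A parameterized behavioral model of the kind just described can be sketched as a weighted sum of behavioral primitives. The three primitives below (cohesion, separation, alignment) and the weight values are hypothetical stand-ins for illustration, not a specific model from the literature; the point is only that system-level behavior is retuned by changing a few agent-level weights, without touching the hardware or the control structure.

```python
import math

def steer(pos, vel, neighbors, weights):
    """Steering vector for one agent from weighted behavioral primitives.

    pos, vel  -- this agent's (x, y) position and velocity
    neighbors -- list of (position, velocity) pairs in the local neighborhood
    weights   -- (w_cohesion, w_separation, w_alignment)
    """
    if not neighbors:
        return (0.0, 0.0)
    n = len(neighbors)
    # Cohesion: steer toward the centroid of the neighbors.
    cx = sum(p[0] for p, _ in neighbors) / n - pos[0]
    cy = sum(p[1] for p, _ in neighbors) / n - pos[1]
    # Separation: steer away from neighbors closer than a unit threshold.
    sx = sy = 0.0
    for (nx, ny), _ in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 0 < d < 1.0:
            sx += dx / d
            sy += dy / d
    # Alignment: steer toward the average neighbor velocity.
    ax = sum(v[0] for _, v in neighbors) / n - vel[0]
    ay = sum(v[1] for _, v in neighbors) / n - vel[1]
    wc, ws, wa = weights
    return (wc * cx + ws * sx + wa * ax, wc * cy + ws * sy + wa * ay)

# The same agent, two "designs": cohesion-dominant agents cluster, while
# separation-dominant agents disperse to fill space.
flock = steer((0.0, 0.0), (0.0, 0.0), [((5.0, 0.0), (0.0, 0.0))], (1.0, 1.5, 0.5))
print(flock)
```

Each weight vector is a candidate genome for the GA described in Chapter 1, which is one concrete way the global-to-local mapping problem can be attacked computationally.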
Even if changing the behavior of existing agents is not possible, engineers can introduce new agents into the system with diverse behavioral models. The way the agents' actions ripple through the system can cause a large variety of emergent behavior, which is exactly what is necessary for flexibility. Of course, this is necessary but not sufficient, as the designers must ensure that the emergent behaviors are suited to the system's function.

2.2.2 Flexibility in flocking systems

As an example, flocking behaviors (described in more detail in Chapter 8) can be altered at the agent level, with a large variety of observed effects at the system level. In [40], flocking was composed of 5 behavioral primitives, with relative weights among the 5 governing agents' behaviors. Proper combinations of relative weights can lead to emergent functionality such as search and surround, missile grouping, and space filling, all from the same hardware assumption and parameterized behavioral model [39]. In a similar approach [204], changing the angle of vision or turning radius of flocking agents was shown to have measurable effects on the dynamics of the flock at a system level.

2.3 Robustness

A robust system can continue to function well in the face of uncertainty in its inputs, initial conditions, or environmental conditions. This ability is necessary in autonomous and remote systems that cannot be continuously monitored and corrected when exigencies arise.

2.3.1 Robustness through situatedness and a short memory

One strategy for achieving robustness is to form a system out of reactive agents with a local sphere of sensing and influence. A situated robot is one that does very little planning or advanced logical operations on its sensory information. Instead, it simply reacts to whatever its local stimuli are [64]. This makes it quick to adjust when it encounters a new environment.
Systems with short memories and no lasting state changes are also able to quickly adapt to new conditions [182]. One difficulty in situated design is the lack of goal-orientation [54], and a situated agent must be designed such that it follows only local information, so that goal fulfillment is actually a byproduct of the situated behavior. Behavioral variety can be gained from agents' situated responses to their own local stimuli, so that subsections of the system adapt independently of one another [6], even if they have otherwise identical control mechanisms. If swarms of these robots work together, the soft connections among the agents can also insulate parts of the system from the volatility of the environment in remote areas of the system.

Short memories are complementary to situated behaviors. If an agent has no memory of past states, its past behavior cannot influence its current behavior (at an algorithmic level, at least; physically, an agent's prior decisions, such as moving between walls and getting trapped, may affect its current behavior). Pheromone-based systems [53, 170, 14, 210] exploit this effect by allowing pheromones to evaporate and eventually cease to affect the system. In this way, any pheromones laid down to guide the system at a previous time period cannot lead the system astray if the environment has since changed.

2.3.2 Robustness in foraging systems

The pheromone example is most easily seen in ant foraging. Ants behave based on local stimuli but are able to coordinate large foraging parties to create short lines between their nest and food source. To do this, they lay down pheromones (chemical scent trails) as they search for food. When an individual ant finds food, it returns to the nest while laying down a different pheromone. Other ants can follow this "food pheromone" to the same food source, laying down pheromones of their own as they return food to the nest.
This creates a positive feedback loop that strengthens the pheromone trail and recruits more agents to exploit the food source. After the resource has been exhausted, however, the ants do not continue traveling to the depleted zone, as the pheromone trail is transient. It evaporates shortly after the food source is exhausted, since ants are no longer triggered to lay down more pheromone. This causes the colony to disperse once again and find a new source of food, eventually converging on the new source in a similar manner. This behavior is robust to the daily uncertainty of when and where a colony's food might appear.

2.4 Resilience

2.4.1 Resilience through distributed functionality

When self-organizing systems are composed of a large number of agents, they can be resilient to the failure of any particular agent. This is due to redundancy. If one agent fails or is deactivated, another agent can step in to take its place, with no system-level breakdown. There is also no critical node in the system whose failure can immediately halt all downstream functionality. Parts do not necessarily have to be identical to be redundant [230]. It is not just physical redundancy that leads to resilience, but functional redundancy as well. "Degeneracy" is the ability of a system to perform the same function using different components or functional pathways [60]. It is found in complex systems ranging from the genetic code to populations of organisms. The variety of emergent behaviors that self-organizing systems can display allows functional redundancy in addition to physical redundancy.

2.4.2 Resilience in self-healing and sacrificial systems

Multi-cellular organisms are remarkably resilient to damage. Blood clots to stanch bleeding. Skin cells form to heal cuts. This is possible because there is an abundance of the needed raw materials (blood platelets and skin cells, respectively) available in the body.
The raw materials are actually created by organisms in such cases, but similar effects can be designed for engineered self-organizing systems if there is a sufficiently large number of identical agents. Such systems are called self-healing [182]. Systems may also purposefully sacrifice agents on the periphery to absorb damage so that the damaging forces will not reach the core of the system. This has been shown to be a viable engineered strategy for a self-organized protective convoy [102].

2.5 Applications in Cellular Self-Organizing Systems

This chapter has summarized the motivation for studying the design of self-organizing systems. In Part II, many examples of self-organizing systems will be noted. Special emphasis is placed on Cellular Self-Organizing Systems, which are swarms of unsophisticated "mechanical cells." The previous work on cellular self-organizing systems has mainly focused on flexibility (accomplishing many different tasks with the same basic cellular hardware assumptions) and robustness (moving through environments with randomly placed obstacles). This approach's major results, limitations, and avenues for extension are summarized in Chapter 4.

Part II: Previous Work

Chapter 3: Related Work

The work described in this dissertation can be classified as a branch of engineering design theory and methodology (DTM), which was described in detail in Section 1.1. It also builds directly on previous work in the USC Impact Lab on CSO systems, described more fully in Chapter 4. In this chapter, I give a brief overview of several other research areas that are fundamental to this work: natural and artificial self-organizing systems, design and optimization of complex systems, and theory of organizations.

3.1 Natural self-organizing systems

Natural systems often use self-organization as a means to achieve complex tasks.
In general, the top-level functional requirement (FR) of a natural system is simply to survive. This survival is an emergent property of many interacting cells and organs. One source of inspiration for self-organizing design is social insects. In particular, ants, termites, and bees show tremendous ability in collective foraging and construction, even though their individual behaviors are quite basic. This collective intelligence is emergent and not dependent on the intelligence of any one agent.

3.1.1 Food and habitat exploitation in the social insects

Termites

Termite mounds are some of the most impressive structures in the animal kingdom [120]. To build a nest, a termite will mix mud with saliva and deposit it near the queen, because the queen emits an attracting pheromone. The saliva also contains a pheromone, increasing the concentration of pheromones in the area where a block of mud has been placed. This increasing pheromone level is a form of positive feedback that attracts even more building material. Soon, several columns of earth are formed, and the faint traces of pheromone wafting out from each column cause columns to be built at a slight inclination towards one another. Eventually, the pillars meet and the nest is fully enclosed.

Relative to the length scale of the builders, termite mounds are the second-tallest freestanding structures in the animal kingdom [120]. Bettered only by human construction, termites can be considered more "intelligent" builders than primates or any other animal. Termite mound construction is aided by the near-decomposability [196] of the task. Termites can self-organize into semi-autonomous units focused on building columns of the nest. The tasks are only weakly linked by the pheromone signals wafting between columns, allowing a local intensity of focus while receiving only the minimal necessary amount of information from remote tasks.
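The pheromone mechanics behind this kind of construction (deposit, diffusion to neighboring sites, evaporation, and attraction to high concentration) can be sketched as a simple one-dimensional field update. The rates and the roulette-wheel site choice below are illustrative assumptions, not parameters from the cited biology.

```python
import random

def step_pheromone(field, deposits, evaporation=0.05, diffusion=0.1):
    """One update of a 1-D pheromone field (a list of concentrations).

    deposits: list of (cell_index, amount) pairs added this step.
    A fraction `diffusion` of each cell's content is split between its
    two neighbors (wrapping around), then everything evaporates slightly."""
    for cell, amount in deposits:
        field[cell] += amount
    n = len(field)
    diffused = [0.0] * n
    for i, c in enumerate(field):
        share = diffusion * c / 2.0
        diffused[i] += c - diffusion * c
        diffused[(i - 1) % n] += share
        diffused[(i + 1) % n] += share
    return [c * (1.0 - evaporation) for c in diffused]

def choose_build_site(field):
    """Builders are drawn to high concentration: sample a cell with
    probability proportional to its pheromone level (positive feedback)."""
    total = sum(field)
    if total == 0:
        return random.randrange(len(field))
    r = random.uniform(0, total)
    acc = 0.0
    for i, c in enumerate(field):
        acc += c
        if acc >= r:
            return i
    return len(field) - 1
```

Because deposits raise the very concentrations that attract further deposits, activity snowballs at a few sites (columns), while evaporation lets abandoned sites fade, which is the same transience that makes the ant-foraging example below robust.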
Not only are termite mounds large compared to the size of the termite, but they also display impressive regulation of ambient air properties such as humidity, oxygen levels, and temperature [24, 53]. Macrotermitinae nests include such subsystems as brood chambers, a royal gallery, and rooms to grow edible fungi [53]. The ability to create these architectures emerges from the interactions of the simple insects; it is not found in the intellect of the builder.

Ants

Ant colonies engage in collective foraging. They are able to search the vicinity of their nest and quickly exploit food resources as a group, with no direct communication. They work by laying down pheromones as they return to their nest with food [22]. This attracts other ants to follow the path back to the food. These ants further deposit pheromones on the same trail, leading to still heavier usage. Since pheromones eventually evaporate, rich food sources close to the nest are exploited first, and as these food sources dwindle, ants begin to explore for more sites.

Biologists [162] describe only a few primitive behaviors of the ant species Pheidole dentata. Though their individual behavioral options are limited, as a collective they are able to efficiently forage for food, build nests, care for the queen and their young, and organize military raids. Pheidole dentata is also an example of a caste species, where major and minor workers display physiological differences, despite sharing similar DNA. The different castes are caused by environmental influences on developing insects. The amount of food and the types of pheromones that a young insect is exposed to determine its development into different physical castes in adulthood.

3.1.2 Growth and development

Biological development is another example of self-organization, where the basic units are cells. Development is the process of cell division that leads to the growth from a fertilized egg to an adult organism in a multicellular creature.
This self-organization relies on instructions in the form of DNA, encoded in every cell. DNA contains the information required to form the enzyme molecules necessary for life. In a multicellular organism, cells must differentiate to perform separate functions. In the human body, there are trillions of cells [25], and a pressing question in developmental biology is why certain cells specialize, and how this complexity arose from a single primitive zygote.

All cells, even specialized cells, contain all DNA; that is, all cells contain the information necessary to form the entire body. The differentiation is a result of selective expression. Though every cell contains the same information, some cells will only use part of that information. The genes responsible for the production of insulin, for example, are turned on in the pancreas and turned off elsewhere. Generally, no more than 5% of the genetic information available to a cell is active [25]. Thus biological development can be thought of as a process of turning on and off the expression of specific genes at the appropriate times. There is no outside force telling different parts of a body to differentiate from stem cells, but there is an internal mechanism that governs this development.

One model for how this organization takes place is morphogenesis [219]. A morphogen is a chemical (or signal) that instructs cells to act on only certain parts of the DNA code. An experiment with a small organism known as a hydra is described in [87]. In this experiment, the head of the hydra is removed and transplanted to some other area in the hydra's body. If the head is replaced near its original location, the hydra, disfigured, survives as normal. If the head is placed at a remote area of the body, a new head will form where the original head was.
This implies that the head is releasing some inhibiting chemical that depresses the formation of new heads, but that there was a residual amount of a catalyst chemical at the old location that spurs the growth of the new head. If the head is replaced near its original location, the inhibitor wins out and no new head is formed. If the head is placed at a remote location, the activator self-catalyzes and a new head is formed. This is known as a reaction-diffusion system. The inhibitor diffuses quickly and suppresses the concentration of some chemical, while the activator diffuses more slowly and accelerates the concentration. The mathematical relations of these competing forces form a "pre-pattern" of chemical concentration in the field, and the relevant biological processes (e.g., segments of DNA, color display) are activated according to the local level of concentration. This morphogen process had been a useful model for years even before there was any empirical biological evidence for it [219], but recently there has been some success in identifying specific morphogens responsible for several processes of biological differentiation, including organ formation and skin pattern formation [119].

3.1.3 Flocking and synchronized behavior

The flocking of birds is well known. Reynolds [183] is credited with creating the first ultra-simple flocking algorithm. In his Boids Algorithm, the flocking agents followed only a few simple rules: move toward the center of a flock, avoid neighbors, and align with neighbors. From these simple rules he was able to generate computer animations of flocking with minimal computational effort. Similar emergent behavior is seen in herding or schooling [110], and flocking behavior has been used to simulate crowd control [153]. The behavior is robust to disturbances from obstacles and predators, and massively scalable [29]. The publications on flocking are too numerous to list.
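A minimal weighted-sum reading of the three rules can be sketched as follows. The specific weights, the dict-based agent representation, and the choice to sum the rules (rather than prioritize them as Reynolds's original formulation did) are assumptions for illustration.

```python
def boids_velocity(me, flock,
                   w_cohesion=0.01, w_separation=0.1, w_alignment=0.05):
    """One velocity update for agent `me` among `flock`.

    Agents are dicts with 'pos' and 'vel' 2-tuples.  Each rule produces
    a steering vector; the new velocity is their weighted sum added to
    the current velocity.  Weights are illustrative, not Reynolds's."""
    neighbors = [b for b in flock if b is not me]
    n = len(neighbors)
    if n == 0:
        return me["vel"]
    # Rule 1: cohesion -- steer toward the local center of mass
    cx = sum(b["pos"][0] for b in neighbors) / n - me["pos"][0]
    cy = sum(b["pos"][1] for b in neighbors) / n - me["pos"][1]
    # Rule 2: separation -- steer away from neighbors to avoid crowding
    sx = sum(me["pos"][0] - b["pos"][0] for b in neighbors)
    sy = sum(me["pos"][1] - b["pos"][1] for b in neighbors)
    # Rule 3: alignment -- match the average heading of neighbors
    ax = sum(b["vel"][0] for b in neighbors) / n - me["vel"][0]
    ay = sum(b["vel"][1] for b in neighbors) / n - me["vel"][1]
    return (me["vel"][0] + w_cohesion * cx + w_separation * sx + w_alignment * ax,
            me["vel"][1] + w_cohesion * cy + w_separation * sy + w_alignment * ay)
```

Note that every quantity is computed from neighbors only; no agent knows the flock's global shape, yet coherent flocking emerges when this update is applied to all agents each timestep.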
The important notes are that from very simple local rules, global flocking can emerge, and that this flocking is important for species survival.

3.1.4 Other natural self-organizing systems

Many other examples of self-organization in biological systems have been cited in the literature. The emergent phenomena include the formation of spirals and other geometric patterns in mold colonies, clustering of penguins, synchronized rhythmic flashing of fireflies covering entire trees, and aggregation of nest sites in non-social insects, to name just a few [33].

3.2 Artificial self-organizing systems

An artificial self-organized system can be either a swarm of robots or a computer simulation. Many impressive self-organizing robotic systems have been developed, but the tasks they accomplish are often trivial or strictly academic. Simulated self-organizing systems have shown promise of accomplishing complex tasks, but only in a virtual world, and the jump from simulation to hardware can be quite difficult [103]. Both simulated and robotic systems will be discussed in this section.

3.2.1 Formation control

Formation control of autonomous vehicles is the goal of many self-organization researchers. The military in particular is interested in this type of control [161]. The research is often inspired by flocking [183], but has applications to manmade systems such as traffic flow [146] and pursuit. Rather than taking a controls or game-theoretic approach [194], vehicles can be modeled as self-organizing units interacting with near neighbors according to societal rules of the road. If the driverless car [223] is to become mainstream, it may need to rely on reactive flocking behaviors to avoid collisions with other cars with which it cannot communicate.

Trianni [217] and Groß [84] demonstrated self-organized behavior in "s-bots," which can join together to form "swarm-bots." These s-bots are equipped with tracks for locomotion, grippers, LEDs, and light sensors.
They have shown, with swarms of 2–8 robots, the ability to aggregate, form search lines, cooperatively move large objects, flock, and form shapes. Combinations of these tasks then lead to interesting system behavior. By combining search lines with moving large objects, passive objects in the environment can be found and returned. By combining flocking and shape formation, swarm-bots can cooperatively traverse rough terrain or bridge gaps, etc.

Bai and Breen [4] showed SO formation of a diverse array of shapes using chemotaxis (following gradients of increasing chemical concentration) as inspiration. An example of coordinated object-moving is given in Zhang et al. [247], where a team of robotic fish was used to move a box toward a goal.

3.2.2 Regulatory networks and artificial intelligence

Genes continue to act as a control system even after the body is fully formed. The artificial abstraction of this process is known as a Genetic Regulatory Network (GRN). A GRN consists of genes which control actions, but also control one another. Genes emit and receive "proteins," which can be sensory information, internal signals, or actions. Kumar [124] used a GRN to control a robot's path through obstacles. The GRN was optimized with a genetic algorithm and involved proximity sensors sending proteins to genes which controlled the motion of the robot. In the same work, a GRN development process was also used to grow cells into predefined shapes such as planes or cubes. Thus, the author demonstrates that the GRN is capable of producing both form and function.

A parallel trend has also been reported in the Artificial Intelligence literature. Some now believe that modeling the brain as a complex emergent property of massively interconnected simple agents, similar to the organization of neurons, is a more constructive approach to true AI than the use of sophisticated monolithic algorithms and reasoning methods [167, 27].
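The basic GRN mechanism described above (genes regulating one another through emitted proteins, with sensory inputs injected as extra stimulation) can be sketched as a synchronous network update. The topology, gene names, and sigmoid response below are hypothetical, not Kumar's actual network.

```python
import math

def grn_step(proteins, weights, inputs):
    """One synchronous update of a toy genetic regulatory network.

    proteins: dict gene -> current protein concentration in [0, 1]
    weights:  dict (src, dst) -> regulatory weight
              (positive = activation, negative = repression)
    inputs:   dict gene -> external (e.g. sensory) stimulation
    Each gene's new expression is a saturating (sigmoid) function of
    the weighted protein levels regulating it plus external input."""
    new = {}
    for gene in proteins:
        drive = inputs.get(gene, 0.0)
        for (src, dst), w in weights.items():
            if dst == gene:
                drive += w * proteins[src]
        new[gene] = 1.0 / (1.0 + math.exp(-drive))   # saturating expression
    return new
```

For instance, wiring a "sensor" gene to repress an "advance" gene and activate a "turn" gene yields obstacle avoidance of the flavor described above: when the sensor protein is high, expression shifts from advancing to turning.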
Some of the greatest success in artificial self-organizing systems is in the use of simulations of natural systems as optimization strategies. These strategies include genetic algorithms, ant colony optimization [22, 55], neural networks [149], particle swarm optimization [112], and more.

3.2.3 Gathering and building

Another common task in artificial self-organizing systems is "gathering." Consider a field with a large number of target objects on it. How can a distributed system retrieve or aggregate all of these objects? One early example of a physical gathering system is given in Beckers et al. [13]. In this study, robots were asked to aggregate pucks on a small field. The robots had fixtures in front similar to a snow plow so that they could gather and push the pucks around the field. The self-organizing algorithm was simple: move in a straight line until you hit an obstacle or other robot, and if you have three pucks in your possession at one time, drop them in place and turn around. This simple behavior was based on the concept of stigmergy, a word that means "work-creating-work." The work that the robots did in creating small piles of pucks then affected the way the other robots worked, because they were more likely to find a third puck in a location with a high puck density, i.e., a location where another robot had left pucks. This stigmergic process caused a positive feedback loop at locations of high puck density such that by the end of the run, a single pile of pucks was created.

This gathering task was done by a team of simple robots, with no communication. The authors also note that there was an upper limit to the number of robots that could efficiently work in the arena. With an increasing number of robots, the system's total gathering time decreased, until a fifth robot was added.
At that point, the robots spent too much time avoiding collisions with one another, and their gathering capability actually decreased.

The gathering task has been revisited and expanded since the work of [13]. Song et al. [200] recap advances in this field and give an example of a clustering algorithm that shows two improvements over most others: the ability to cluster non-circular objects, and an enhanced ability to form clusters in the center of the field, rather than at the edges. Their first approach was similar to the Beckers experiments. A robot would move randomly until it sensed that it was pushing multiple boxes. Then it would leave the boxes in place and move on. This resulted in clusters forming around the boundaries of the arena. In their modified algorithm, they introduced task allocation and heterogeneity. In order to cluster boxes in the middle of the field, some robots were programmed to be "twisters," and others were programmed to be "diggers." The pushing and dropping behavior remained mostly unchanged, but the behavior near walls was modified. Twisters would hit the corners of boxes that were stuck along a wall, rotating them so that diggers could get between them and the wall to push them toward the center. By moving from a homogeneous system to a heterogeneous system, the robots were able to take advantage of specialization.

Werfel [234] demonstrated a promising application for SO systems: collective construction. Drawing inspiration from social insects such as termites [120], Werfel [235] developed a system of swarm robots that can build a pre-specified 2D shape out of square bricks. Localization and gripping are the main barriers to this type of construction, but Werfel introduced several simple remedies for both problems.

3.2.4 Reconfiguration

Requicha and Arbuckle [182] introduced "active self-assembly." In their paradigm, the self-organizing agent is a very small, possibly nanoscale robot that cannot move under its own power.
They are assumed to undergo Brownian motion in a fluid. Even though all robots are running the same program, by a process of signal passing they are able to self-assemble into a wide variety of pre-defined shapes in simulation. Moreover, these shapes show the ability to self-repair. If there is an abundance of extra robots in the fluid, breaking a shape apart causes both halves to re-assemble into full, separate parts. Another impressive feature of this work is a global-to-local compiler, which could take a desired shape as input and output the required agent rules by decomposing the shape into edges.

Modular or self-reconfigurable robotic systems can also use self-organizing principles. Shen et al. [195] introduce a hormone-inspired control algorithm for a self-reconfiguring robot. Their robot was made of many identical autonomous modules which could attach and send signals similar to the transfer of hormones among biological cells. They demonstrated the ability for a robot to reconfigure into various shapes, with a different form of locomotion for each shape. The concept of pheromones has also been used by other robotics researchers [170]. Nagpal [155] relies on origami rules and local communication for self-organized shape construction. If the desired shape can be assembled from origami transformations, Nagpal can deduce the needed local rules with a global-to-local compiler. Claytronics [80] is a program that envisions millions of tiny interacting robots with no moving parts which can use one another as anchor points for locomotion and ultimately become a new, 3-D medium for communication.

3.3 Computational tuning of complex system parameters

Many complex systems have been studied and optimized using metaheuristic optimization methods. General heuristic algorithms can perform well on a diverse array of problems, but an ad hoc optimization will usually perform the best on a specific problem.
This implies that there is a tradeoff between generality and performance [18]. The strategy is to avoid an exhaustive search by focusing the search effort in areas where candidates of high fitness are likely to be found. The methods used include genetic algorithms (GA), their close relative genetic programming (GP), linear programming, and others. The fundamentals of GA are described in Section 1.6. Other optimization methods will be briefly introduced as necessary in this section.

Evolving complex systems using AI presents several challenges to the researcher. The irreducibility of most complex problems requires that solutions be evaluated using multi-agent simulation, rather than analytical formulas. Running the large number of simulations required for optimization can be time-consuming. The architecture underlying most multi-agent simulation platforms is partly stochastic, even if there are no specific random parameters programmed into the simulation. This stochasticity can arise from random initial agent placements, stepping orders, or initial conditions. The indirect global-to-local mapping makes it difficult to assign rewards for specific actions in reinforcement-based learning [218, 116]. Also, the global behavior that one wishes to capture may be transient, emergent, or difficult to measure quantitatively [31].

Despite these difficulties, there has been a large amount of success in recent years in discovering desirable emergent behavior using AI techniques. Calvez and Hutzler [32] introduce an approach they call Adaptive Dichotomic Optimization. This approach uses parallel sampling and search space discretization to efficiently explore the search space of local behavior parameters in an ant foraging simulation. This method not only outputs an optimized set of simulation parameters, but also information about the search space, as more influential parameters were discretized into smaller intervals.
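For readers unfamiliar with the machinery, a minimal real-valued GA for tuning behavioral parameters might look like the following. The operator choices (truncation selection, blend crossover, Gaussian mutation) are one common setup, not the specific algorithms used in the works cited in this section; in practice the fitness function would be the mean of several stochastic multi-agent simulation runs rather than a closed-form score.

```python
import random

def genetic_search(fitness, bounds, pop_size=20, generations=40,
                   mutation_sigma=0.1, seed=0):
    """Minimal real-valued GA over a box-bounded parameter space.

    fitness: maps a parameter list to a score (higher is better).
    bounds:  list of (lo, hi) pairs, one per parameter."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection
        pop = list(parents)                         # elitism: parents survive
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2.0 for x, y in zip(a, b)]      # blend crossover
            child = [min(max(g + rng.gauss(0, mutation_sigma), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]    # mutation, clamped
            pop.append(child)
    return max(pop, key=fitness)

# Example: recover a known optimum of a toy, deterministic "simulation" score.
best = genetic_search(lambda p: -((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2),
                      bounds=[(0.0, 1.0), (0.0, 1.0)])
```

Note the expense pattern the text warns about: `fitness` is called for every individual in every generation, which is why time-consuming simulations dominate the cost of such searches.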
Miller [147] used GA to evolve parameters for a highly nonlinear model of world population dynamics known as World3. Here he demonstrated the use of GA not only for optimization of the final state, but also for sensitivity analysis. To study sensitivity, the fitness function for maximizing world population was modified by penalizing large changes from a set of baseline parameters. Thus only the parameters that provided the highest return for the least amount of change survived the modified fitness function. Ueyama et al. [220] used a GA to optimize a path-planning algorithm in a simulation of their distributed CEBOT robotic swarm.

Genetic programming is an evolutionary technique inspired by GA. Rather than operating on genomes of fixed length, a GP adds functions and nodes to a computer program that is meant to evaluate an input or perform a task. The fitness function judges the appropriateness of the GP's output. GP has been used to implement firefly synchronization and ant foraging algorithms [222]. GP has been shown to give human-competitive results in the field of lens design, at times re-discovering patented inventions or creating new designs [122]. GP has been used to develop signaling strategies for self-organized shape formation [4]. These authors use GP to evolve mathematical formulas for field generation. Each cell in their simulation propagates a field in its local vicinity and senses the fields generated by others. Agents then follow the gradients of the field. The authors were able to successfully generate functions which would cause systems of 500 cells, initially placed randomly, to form into specific shapes such as stars or letters.

Crutchfield et al. [47] used GA to evolve rules for cellular automata [242] to perform the "majority classification" task.
The point of the task is for the cellular automata to converge to a steady state of all 1s if the initial condition is majority 1s, or all 0s if the initial condition is majority 0s. The first generations of the GA created simple rules that expanded groups of contiguous 1s or 0s, but later generations showed some "cleverness" in creating rules that caused structures to be built in the cellular automata that could communicate local conditions to other locations on the CA.

Trianni [217] devotes several chapters to the optimization of an artificial neural network (ANN) via GA, to develop self-organizing behavior in s-bots. An ANN is a machine learning method modeled after the architecture of the human brain [149, 248]. Each bit of information to be processed is known as a perceptron. Perceptrons are aggregated to form some perception, which can be a decision or output. Rather than running a sequential calculation like a computer program, an ANN will consider all inputs simultaneously, and may be based on certain excitation thresholds that are reached by a proper combination of perceptron values and relative weights. Trianni used an ANN to couple the robots' sensory arrays (inputs) with their actuators (outputs) and used a GA to tune the relative weights of each perceptron. He was able to evolve control algorithms for coordinated motion, hole avoidance, and synchronization.

3.4 Theory of organizations

While nature often makes use of self-organization, most formal organizations in society follow a prescribed structure and chain of command. Formal organizations exist to operate some technology that is too complicated for a person¹ to operate independently, where "technology" can broadly mean any complicated task from manufacturing spacecraft to educating a nation's children [216]. Organizations offer certain advantages over individual endeavors. Naturally, large groups performing similar tasks can have a higher output than an individual.
More importantly, an organization can bring together agents with diverse skill sets, enabling more complex functionality. Thus an organization brings individuals together for the sake of supplementary similarities and complementary differences [92].

A large body of descriptive and prescriptive literature exists that focuses on formal organizations. Generally, organizations will be structured according to the nature of their technology and task environment. A hierarchical decomposition of human organizations is common, as hierarchies show some attractive properties when dealing with complex problems [196]. It is argued in [216] that organizations, subject to the norms of rationality, will form specific structures according to the external environment they face and the interrelationships of the organization's internal functional units.

Thompson [216] defines three types of dependence relationships: pooled, sequential, and reciprocal. Pooled dependence is general dependence where each part is affected by the organization's whole operation. We would expect to find this type of dependence in any business where each unit is affected by the ability of the whole to make a profit. Sequential dependence is defined as a time dependence between units. If unit B cannot be productive without the output from unit A, they are sequentially dependent. Reciprocal dependence is the hardest to overcome. Two units that are reciprocally dependent will need to constantly adjust their behavior to each other, while also adjusting to the adjustments of each other, and so on. Note the similarity between the definition of reciprocal dependence and the definition of complexity.

Even though the structure of an organization may be carefully defined to cope with its environment, the actual actors within the organization often display self-organization by forming flat hierarchies and informal working groups to solve problems.
A high-level classification of organizations according to their structure and purpose is given in [148]. Formal organizations have devised methods to deal with these dependencies. Pooled dependence can be overcome by creating functional units that have a homogeneous mission (e.g., decomposing into a manufacturing division and a sales division) to facilitate standardization. Units with sequential or reciprocal dependence are usually grouped together into smaller, local, semi-autonomous departments to facilitate coordination and local decision-making among the units [216]. These decompositions are organization design decisions and usually result in a hierarchical organizational structure where each unit is responsible for a certain internal task or interfacing with a certain segment of a heterogeneous environment.

[Footnote 1: "Person" will be used interchangeably with "agent" in this section, because it draws from the literature of formal organization and multi-agent systems. Interestingly, these two fields show a wide theoretical overlap.]

Agents can also form organizations. The research relationship between human and agent organizations is twofold. Research on human formal organizations can be used to improve multi-agent systems, and multi-agent simulations can be used to learn more about the nature of human organizations. Drawing on ideas from formal organization theory and distributed robotics, the field of agent organizations considers the relationships among computer agents and how to optimize them for certain tasks. Just as in formal organizations, agent relationships may determine authority, communication flow, resource and attention allocation, etc. [97]. Agent-based systems can be arranged hierarchically, where each level filters out information and only sends requests up the tree when necessary.
Hierarchical organization is convenient when the task to be accomplished can also be composed hierarchically [97]. A holarchy is slightly less formal, where groups of agents band together in functional groups called holons (a "holon" is a combination of the Greek words for "part" and "whole"), and these functional groups can be approximated as units by other holons. Agents can form less formal coalitions or teams if they derive an advantage by working together. These groups may be short-lived. One important form of organization is the society. A society is open; members may come and go as they choose, but there are behavior expectations of any agent in the society. These cultural norms can facilitate cooperation and efficiency in the system. Many other types of organizations exist, and in fact, many systems are hybrids of several of these types [97]. Agent organizations have the added flexibility of moving along the dimension of heterogeneity. Whereas human organizations must rely on diverse members, an agent organization can be as homogeneous or heterogeneous as the designer chooses. Some formal organizations do attempt to homogenize their workforce somewhat through socialization [216], but this is much easier to enforce in agent systems. Also, while it is difficult for formal organizations to prevent communication and coalition formation among members, this can be accomplished quite easily in agent organizations. This leads to a natural taxonomy of agent organizations along the dimensions of communicating vs. non-communicating, and heterogeneous vs. homogeneous [202]. The entropy of an organization can qualitatively be described as the amount of disorder within the organization. An organization with low entropy will keep members locked into predefined tasks. The activities of the organization at any one time will be predictable. Organizations with higher entropy may be less predictable, but more open to changes in the environment.
High entropy may be helpful in the search for marketable ideas, but low entropy is necessary in the factories that actually produce the ideas. If an organization stays in a high-entropy state for too long, its lack of efficiency will cause it to succumb to market forces. If an organization is in a permanent low-entropy state, it will not be adaptable, and soon its technology will become obsolete. Rather than oscillating between entropy levels, most organizations will instead maintain a low-entropy "technical core" for production, and a higher-entropy, creative "managerial level" that interfaces with the environment and attempts to isolate the technical core [216]. The structure of an organization can significantly affect its task completion [97]. Although a rigid structure may be present in the design of an organization, in practice there are often informal channels of power and influence that truly drive it [148]. Some organizations can even be described oxymoronically as "organized anarchies," where agents within the organization are given wide latitude to choose which problems to solve and which solutions to use [43]. In addition to the structure of the organization, the design of the reward system is important [164], as members may behave suboptimally if they feel they do not receive fair credit for the success of the organization. In cooperative agent organizations, the same problem can arise when a designer attempts to "teach" agents by giving credit through reinforcement learning.

3.5 Summary of related work

Natural systems can often be modeled as self-organizing systems, and they display certain adaptive properties such as flexibility, robustness, and resilience. The adaptability of natural systems is desirable but difficult to obtain in artificial systems. Nature often inspires design, and we see that many natural systems were abstracted and recreated as artificial systems.
These systems were either built for the sake of study or some task completion. As biology and design become more intertwined [17, 111], there will be more practical uses of artificial self-organizing systems, and a need for a more thorough understanding of their underlying processes. Self-organizing systems were shown to rely on a balance of long-range and short-range forces. Often, short-range activators will amplify random fluctuations to build local structures, and long-range inhibitors will suppress this construction at remote locations. This was especially prevalent in the reaction-diffusion model. Near-decomposability was shown to be an effective organizational approach. If agents can work in semi-autonomous groups, they need only be concerned with some general signaling from other groups, and unconcerned with the internal dynamics of other groups [169]. The differentiation can also be physical, as selective gene expression can cause agents with identical DNA to display disparate phenotypes. Artificial self-organizing systems show great promise. Basic capabilities have been demonstrated, and grand future applications have been imagined, even space colonization [19]. Robotic self-organizing systems have shown the ability to cluster, shape-shift, flock, gather, and forage, either in simulation or in the real world. Heterogeneity and system size were shown to have an important impact on FR completion in some systems. Seemingly intelligent FR completion was shown that resulted simply from the local interactions of primitive agents. The state of the art in practical systems is still far from fulfilling this potential, however, and more advanced techniques are needed. The references showed that self-organizing systems should be given the capability to form some structure, so that they can take advantage of heterogeneity and near-decomposability.
Also, the size of the system should match the FR and environment, as these systems may only be scalable to a certain point. The survey of organizations shows a continuum from flat swarms to hierarchies, and self-organizing systems tend to lie nearer the swarm end, but there is a wide middle ground where a system can use self-organization but still display some task-based structure. The design of an organization can determine its effectiveness to a large degree. The appropriate design depends on the nature of the environment and task the organization is responsible for. Human and agent organizations respond to complex tasks and environments by decentralizing, pushing their organization closer to a loose swarm than a rigid hierarchy. They respond to predictable task environments by differentiating and standardizing. Agent systems have the added capability of controlling their degree of heterogeneity and imposing tighter restrictions on unauthorized information sharing among agents. Computational metaheuristics were shown to be a viable approach to the optimization of nonlinear systems, MASs in particular. Whether the objective was model optimization or data-fitting, the common theme among the works cited in this chapter is that they succeeded in adjusting more variables than a human mind could comfortably cope with simultaneously. The techniques used in the literature vary, as researchers have used diverse methods from GA to hill-climbing to simulated annealing, but the results show an efficient search of a vast, nonlinear space. In addition to the research areas discussed in this chapter, many other topics are relevant to engineering design and have affected or guided this research to varying degrees. Decision Theory [179, 129], Game Theory [130, 59], Artificial Intelligence [149], and Biology [36, 118] are just four examples among many.
To expound in detail on these topics is beyond the scope of this dissertation, but a reader well-versed in them will certainly recognize their influence throughout this work.

Chapter 4
CSO Systems: Review and Status

This chapter reviews the research on self-organizing systems that has been done at the USC Impact Lab and calls for extending this research. The design approach described here is Cellular and Self-Organizing (CSO). The agents of a CSO system are simple robots called mechanical cells (mCells). The metaphor of a robot in a swarm as a cell is not new [15] and is used to convey the idea that these robots are simple and mostly interchangeable, with their real value arising from their interactions and self-organized ability to work together. The mCells are mobile and able to sense each other and their environment within a local radius of detection. They are autonomous and have the ability to make decisions with simple calculations. mCells do not have a unique identification and do not send messages to individuals, but communicate via one-to-many signaling (if they communicate at all). The CSO approach is meant to parallel traditional design from a biological perspective, by using a cell-based, bottom-up approach, rather than a component-based, top-down approach. The design and deployment of large-scale self-organizing systems is still a long-term goal, and this research is not meant to compete with traditional, top-down design in the short term or for simple products. It is meant to exist alongside conventional design to aid in the design of distributed and adaptable systems, or for the modeling and analysis of existing complex systems. CSO research has two main goals [104]. The first goal is to design systems that show adaptability. The second goal is to gain understanding of the self-organizing process and "design" in nature and their relevance to the engineering design community.
4.1 Road to the present

CSO research began as the synthesis of specific self-organized structures [250], and then branched into studies based on scalable amorphous flocks [39] and studies with a focus on behavior and FR fulfillment [38, 104], then combinations of these approaches [114].

Figure 4.1: Example simulation from [250] showing initial blank state, configuration into spider shape, and reconfiguration into snake shape

4.1.1 Early work: reconfiguration

The early work of Zouein [107, 249] introduced the concept of design DNA and gave a detailed mapping of natural to artificial systems. The design DNA, which is stored within every mCell, is a set of instructions for building a global shape. This DNA encoded agent rules that caused agents to form specific shapes such as "spiders" or "snakes," as shown in Figure 4.1. The reported simulations showed that a system of mCells could indeed reconfigure its shape in response to obstacles in the field, so that the more slender snake shape would maneuver through narrow pathways, while the spider shape would move through the open fields. Zouein reasoned that in nature, function can be inferred from form [249]. Being able to change form or shape during FR completion can certainly be a very valuable means to an end. For example, consider an airplane with a variable wing profile, or a presentation pointer with a telescoping length. However, very rarely will a system's sole purpose be to assume a certain shape, as form is usually a means to fulfill function.

4.1.2 Flocking and emergent formation

A more resilient and scalable approach was demonstrated in the flocking research of Chiang [39]. In this work, mCells were endowed with relationship rules that govern how they react to one another.
Rather than creating a certain shape, the goal of this research was to allow the self-organization of flocks, groups of mCells clustered in space with a common heading. This work attempted to resolve the dual problems of emergent functionality: the analysis problem of predicting global behavior based on local interactions, and the design problem of choosing local interactions that give rise to a desired emergent behavior. The approach was to parameterize the behaviors of the agents and give them a relative weight [40]. The five behaviors given to each mCell are:

Cohesion: step toward the center of mass of neighboring agents.
Avoidance: step away from agents that are too close.
Alignment: step in the direction that neighboring agents are headed.
Randomness: step in a random direction.
Momentum: step in the same direction as the last timestep.

The acronym COARM is used to refer to these behaviors. An agent calculates the Cohesion, Avoidance, and Alignment vectors according to the following formulas:

\vec{C} = \frac{1}{N} \sum_{i \in \Omega} \vec{x}_i \quad (4.1)

\vec{O} = -\frac{1}{N} \sum_{i \in \Omega} \frac{\vec{x}_i}{\|\vec{x}_i\|^2} \quad (4.2)

\vec{A} = \frac{1}{N} \sum_{i \in \Omega} \frac{\vec{v}_i}{\|\vec{v}_i\|} \quad (4.3)

where i \in \Omega signifies that agent i is in the neighborhood \Omega of the agent calculating its direction, \vec{x}_i is the vector from an agent to its neighbor, and \vec{v}_i is the velocity of a neighbor. All agents make their stepping decisions in parallel, and their interactions can cause complex system-level behavior. The interactions of the behavioral tendencies and environmental stimuli cause the individual actions, and the interactions among agents cause the emergent functionality. A study of the relationships among the various interactive behaviors is known as the Meta-Interaction Model (MIM). This approach was able to generate many qualitatively different behaviors from systems with the same hardware assumptions by simply tuning the relative weights of the behavioral parameters [41].
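A minimal sketch of one COARM stepping decision, assuming a weighted sum of the five behavior vectors with a unit-normalized output (this code and its weight names are my own illustration, not code from the original work):

```python
import numpy as np

def coarm_step(pos, neighbors_pos, neighbors_vel, prev_dir, weights, rng):
    """One COARM stepping decision for a single mCell (illustrative sketch).

    pos: this agent's position; neighbors_pos/neighbors_vel: (N, 2) arrays for
    agents within the detection radius; prev_dir: last step's unit direction;
    weights: relative weights keyed "C", "O", "A", "R", "M" (names assumed).
    """
    x = neighbors_pos - pos                                       # vectors to each neighbor
    C = x.mean(axis=0)                                            # cohesion, Eq. (4.1)
    O = -(x / np.linalg.norm(x, axis=1, keepdims=True) ** 2).mean(axis=0)   # avoidance, Eq. (4.2)
    v = neighbors_vel
    A = (v / np.linalg.norm(v, axis=1, keepdims=True)).mean(axis=0)         # alignment, Eq. (4.3)
    R = rng.standard_normal(2)                                    # randomness
    M = prev_dir                                                  # momentum
    step = (weights["C"] * C + weights["O"] * O + weights["A"] * A
            + weights["R"] * R + weights["M"] * M)
    return step / np.linalg.norm(step)                            # unit step direction
```

All five weighted tendencies are summed and normalized, matching the weighted-sum behavioral selection described above; tuning the five weights moves the system among qualitatively different emergent behaviors.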
The MIM design approach was a repetitive trial-and-error method where the author would systematically vary parameters, run simulations, and record the emergent behavior. This local-to-global mapping having been done, a future designer is then free to use it as a lookup table to perform the inverse global-to-local mapping as the need arises.

Figure 4.2: Box-moving simulation sequence from Chen [37], where the more favorable field locations are marked by green in the heat map. The system's FR is to move the box past obstacles to the goal on the right side of the arena.

These systems relied almost exclusively on agent interactions, with little or no interaction with the environment. In the flocking studies, the agents simply formed a flock and moved in empty space. In searching studies, the agents would find an object and attach to it, but not manipulate it. This approach showed the ability to form certain structures, but more interaction with the environment may be necessary to complete useful FRs.

4.1.3 Function before form

The work of Chen on path-finding and box-pushing [37, 104] has a stronger focus on FR completion than its predecessors. In his research, the system task is the most important concept, and the actual structure of the system need not be predetermined, as long as it can complete the FR. Adopting the language of system dynamics, he views CSO design as the process of creating desirable basins of attraction. These basins of attraction are system states wherein the system-level FR has been accomplished. In order to enter such states, the mCells follow a path through a "field," which they sense locally. He calls this process Field-Driven Behavior Regulation (FBR). If the field and signals are properly defined, and there are no trapping attractors of the field, multiple agents can follow signals in the field in parallel, leading to FR completion.
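A hedged sketch of such a field for box-pushing (my own illustration; the actual field formulas, weights, and sensing model in [37] differ): the scalar field below combines attraction to a pushing position behind the box, attraction to the goal, and repulsion from obstacles, and each agent simply steps to the locally lowest field value it can sense:

```python
import numpy as np

def field_value(p, goal, box, obstacles):
    """Hypothetical FBR-style scalar field; lower values are more favorable.
    Attraction to the side of the box opposite the goal, attraction to the
    goal, and repulsion from obstacles (all weights are illustrative)."""
    push_point = box + (box - goal) / np.linalg.norm(box - goal)  # point behind the box
    f = np.linalg.norm(p - push_point)            # attraction to the pushing position
    f += 0.5 * np.linalg.norm(p - goal)           # attraction to the goal
    for obs in obstacles:
        f += 2.0 / (np.linalg.norm(p - obs) + 1e-6)   # repulsion from each obstacle
    return f

def best_local_step(p, goal, box, obstacles, step=0.1):
    """Choose the neighboring point with the lowest field value (local sensing only)."""
    candidates = [p + step * np.array([np.cos(a), np.sin(a)])
                  for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
    return min(candidates, key=lambda q: field_value(q, goal, box, obstacles))
```

Because every agent descends the same field in parallel using only local samples, the box-pushing FR can be approached without direct agent-to-agent coordination, provided the field has no trapping attractors.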
In the most difficult task, a set of homogeneous agents completed the system FR to push a box toward a goal, with ubiquitous knowledge of the goal location and local sensing of the box and obstacles. The field formula caused attraction toward the side of the box opposite the goal, attraction to the goal, and repulsion from obstacles, as shown in Figure 4.2. These agents had almost no purposeful interactions with one another beyond collision avoidance; they focused primarily on the task object.

Figure 4.3: Box-pushing task from [114] with a narrow corridor that forces agents to coordinate and rotate the box to reach the goal

4.1.4 Logical agents

In Khani's work [114, 113], agents have a combined focus on coordination and task completion. The task she chose to study is a box-pushing task that is more difficult than Chen's because it requires rotating the box around an obstacle (Figure 4.3). Her argument is that system order can arise from rule-based interactions among agents. Rather than continuously following mathematical field functions, agents react to one another through a set of logical rules. She shows that logical interactions can help the system to organize properly and move and rotate a box toward a goal. It was also shown that these new rules cause system overhead and potential inefficiency if improperly applied.

4.1.5 Summary of previous CSO system accomplishments

To summarize, the shape formation work of Zouein introduced the mCell concept and established the biological foundation for self-organizing systems. The demonstrated adaptability came from the soft connections among agents and their ability to reconfigure. Chen's box-pushing work introduced field-based behavioral regulation and an emphasis on function before form.
The flocking work of Chiang relied almost entirely on social interactions, and the MIM was established as a design technique to manage interactions that relied on a parametric description of the agents' behaviors. Although he did not use the term, in light of Chen's work, the social interactions could be thought of as a social field as opposed to Chen's task field. Khani relied on a combination of task field and logic-based agent interactions to perform a more complex self-organizing box-pushing task.

4.2 A new approach

4.2.1 Limitations of previous work

In Chiang's dissertation [39], the COARM set of behavioral primitives was parametrically optimized to display the emergent behaviors of flocking, surrounding, and space-filling. His parametric approach provides flexibility, but exhaustive local-to-global mapping through the MIM is very time consuming and would be computationally intractable for testing all the permutations of larger parameter sets. Also, other system-level FRs, say foraging or collective construction, are not contained within the parameter space. To obtain these other emergent functions, it may be necessary to choose primitives beyond the COARM behaviors, or to use a behavioral selection method that is not a weighted sum. The choice of behavioral primitives and the behavioral selection method is a conceptual design problem. How to form the conceptual design is a difficult question, and very little guidance is given in the literature. The CSO simulations in the literature, and most of the other self-organizing systems described in Section 3.1, have arisen from an ad hoc design process. The main focus is on just getting the system to work. While much can be learned from building specific systems, from a design perspective, it would be beneficial to have a general design methodology similar to traditional DTM.
4.2.2 Strategy to move forward

No universal modeling capability is given in the CSO literature. The three main pillars, shape reconfiguration, flocking, and box-pushing, all started from a conceptual design and explored its capabilities. Without general modeling capabilities, the systems are locked in by their conceptual designs, and can only be optimized to a certain point. To add to the body of research, there is a need for a unified modeling approach which can describe not only the previous work in the CSO literature but also other diverse self-organizing systems. The general modeling method should take the form of a design ontology for self-organizing systems, where conceptual behavioral designs can be derived from the ontology. Integrating the ontology with a computational synthesis approach to detail design would eliminate time-consuming trial and error and provide automated global-to-local mapping. This computational synthesis at the detail design level would move the CSO design effort to the conceptual design level. Taken all together, these form an overall SO design methodology that could be a more systematic approach than those reported in the literature, allowing easier design representation, analogical knowledge transfer, and better system-level understanding of artificial self-organizing systems. Chapters 6 and 7 develop this proposed framework in detail.

Part III: Theory and Methods

Chapter 5
The Dual Nature of Complexity

A complex system is one with many parts that interact in a non-intuitive way [196]. In most cases, designers attempt to avoid or reduce the complexity of the systems that they create [8, 208]. This is because complexity causes challenges in design, analysis, and deployment. Complex systems have input-output relationships that are nonlinear and offset in time and space. Their 2nd- and 3rd-order effects may be more impactful than their 1st-order effects.
They can display runaway behavior due to hidden positive feedback loops, or their overall behavior can be chaotic. With engineering emphases on safety and reliability, such phenomena can be drivers of system failure. But complexity can also be the source of added system functionality, if it is harnessed correctly. Complex functional requirements may be forced on the designer, and the only way to achieve them is through somehow matching the complexity of the requirements with the complexity of the system. In this chapter, the potential positive effects of complexity are explored, with an emphasis on complexity-driven variety and creativity.

5.1 Natural adaptable systems

As a proof of concept and motivation, it is well known that natural systems must adapt to randomness and disturbances such as changes in weather and the appearance of predators, so as a starting point for the design of adaptive self-organizing systems, we should look to successful "designs" that are found in nature. One example is the growth of multi-cellular organisms, known as embryogeny: they self-organize by multiplying from a single cell, and then differentiating into a heterogeneous system of organs with no direct guidance from the outside, only iterative protein synthesis from the DNA stored in every cell [87]. Another good example is the behavior of social insects such as ant colonies. Again there is no hierarchical social structure that tells an individual ant where to forage for food and when to return it home, but instead a flat organization emerges where ants are in constant contact with one another and with the pheromone trails left behind by previous workers. Even without outside or centralized control, their individual behavior leads to complex and efficient foraging at a system level through the formation of trails between their nest and a food source [52].
These two short examples show remarkable capabilities of growth and functionality amid an unpredictable environment. They achieve this through self-organization of the components of the system and their relationships, two strategies that are very rarely used in conventional engineered systems. These systems are also highly complex, even though their substrate (cells and ants, respectively) is comparatively simple. How can we import the adaptability from natural systems into engineered systems? The answer is not yet complete, but at least we have an intuition that it will require a move away from fixed system structure and relationships, and a way to manage, even embrace, randomness and chaos [163].

5.2 The Law of Requisite Variety

5.2.1 The variety of a controller must match the variety of its environment

Ashby's Law of Requisite Variety states that for a system to adapt to an environment that can be in 1 of N states, the system controller must also have N possible states [1]. Simply put, the variety of the system must at least match the variety of its environment to ensure stable performance.

5.2.2 Variety in complex systems

Variety in behavior can be driven by complexity. In order to achieve this variety, steps must be taken to make the system more complex, such as adding more components, giving components more autonomy, or allowing more interactions among components. This has been shown in the theory of formal organizations, where it is known that uncertain environments require decentralized control. This is because a large organization that is controlled by an individual has variety no greater than that individual, whereas a large organization with decentralized control can have complexity that bubbles up from the interactions among its many components [216, 10].
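A toy illustration of the Law of Requisite Variety (my own construction, not from the text): let the environment choose a disturbance d in 0..N-1, let the controller answer with a response r, and let the outcome be (d + r) mod N with goal outcome 0. The controller can counter every disturbance only if its response repertoire matches the environment's variety:

```python
def requisite_variety_demo(n_env_states, responses):
    """Returns True if, for every disturbance, some available response
    steers the outcome (d + r) % n to the goal outcome 0."""
    n = n_env_states
    return all(any((d + r) % n == 0 for r in responses) for d in range(n))

# A repertoire matching the environment's variety always succeeds:
assert requisite_variety_demo(4, responses=[0, 1, 2, 3])
# With fewer responses than environmental states, some disturbance goes uncorrected:
assert not requisite_variety_demo(4, responses=[0, 1])
```

However the game is parameterized, the conclusion is the same: a controller with fewer distinct responses than the environment has states cannot guarantee stable performance.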
Strong control relationships among components lessen the variety of the system: the components become dependent upon one another, and one would need to know the state of only one component to infer the states of the others, lessening the information necessary to describe the system. Variety is necessary, but of course not sufficient, for robustness to environmental conditions [10]. The various states must be appropriately matched with the environmental conditions. A large repertoire of behaviors is not enough.

5.3 Creative complexity

Creativity and adaptability are closely related concepts [30]. Adaptability is the ability to cope with change. Creativity, in a problem-solving context, is the ability to generate solutions that are novel (new and/or surprising) and useful [44]. If a system can rearrange its own organization to become more suited to its purpose, it is a creative system. If it does this in response to system damage (resilience) or a change in functional requirements (flexibility) or inputs (robustness), it is an adaptable system.

5.3.1 Source of creativity

Creativity also comes partly from the proper handling of stochastic processes [175]. In systems that are chaotic or not fully deterministic, the dynamics may be bounded, but within these bounds there is room for probabilistic events. These events are not pre-ordained, and may be considered a choice. Even natural systems of inert particles can undergo bifurcations in their dynamics, thus making a "choice" even though there is no central cognitive controller for these actions. Bifurcations are common in complex systems [9]. Irreversible bifurcations can be constructive sources of order in the system [174], and these choices can be influenced by tiny random fluctuations in the system dynamics if the system is already near a bifurcation point [160]. Creative systems will bifurcate to reinforce stochastic behavior that is novel and useful.
This is in fact a model of how creativity works in human problem solving. The "geneplore" model of human cognition [63] treats cognition as a combination of two processes: to gene-rate partial solutions (preinventive structures) and ex-plore these structures by modifying them or combining them into full solutions. The mechanism behind the formation of the preinventive structures is unknown, but could rely on stochastic processes. In fact, there is a growing field of practical creativity research that encourages the use of random or absurd ideas as the starting point of innovation [81]. In the field of genetic algorithms, which are computational problem-solving algorithms, Goldberg [79] talks of a similar "physics of innovation" that must be followed. This is the random generation of "building blocks," which are partial solutions to problems, and then their mutation and mixing together, which create innovative new problem solutions. Moving to a more physical substrate, the human brain itself is considered to be one of the most complex systems that has ever been studied [8]. The functionality of the brain is distributed through billions of neurons in a highly interconnected mesh [229]. This mesh is resilient to misfires and failures of neurons, and as proved by human development, capable of adaptation over the course of a lifetime. Out of this complexity arises great creativity, and since the activities of the brain (memory, perception, and learning) are parallel and emergent, they cannot be directly congruent to storage, input, and change, the serial and repeatable activities of a computer, and thus computers may never be as creative as a human mind [228].

5.4 From agent complexity to system complexity and performance

The emergent variety of a system has to somehow arise from the actions of its components.
If the components are allowed to freely interact with one another, their interactions can create a highly complex system, even if the components themselves are comparatively simple.

5.4.1 Complexity gains from simple agents

In order for a system to display complex emergent behavior, its constituent agents must have a threshold level of complexity. Below this level, the system is degenerate and can only display simple behaviors [159]. This has been shown in cellular automata, where there is a certain threshold of complexity in rules that must be crossed for system complexity to emerge, and below this threshold, only periodic or nested behavior is found. But this threshold is not high, and complex system-level behavior can be created through the interactions of relatively simple components [244, 10]. Taking a linguistics perspective, Pattee [169] describes the nature of the couplings in a hierarchical system, showing how a huge variety of system-level behaviors can be obtained from a small set of interacting behavioral primitives. The lowest level of a hierarchical representation is made up of "tokens." A specific set of tokens is an alphabet. For example, a class of molecules known as proteins, which exhibit incredible diversity, can be built using an alphabet of only 20 amino acids, and the number of materials that can be formed from the alphabet of 92 natural elements is incalculable. Thus we see that great variety can be obtained from simple alphabets. The agents of a self-organizing system provide a flexible alphabet of behaviors, which can be combined in novel ways, even if the agents have only a few behavioral profiles. In addition to the alphabet of behaviors, complexity can also be gained from the number of interactions among agents. Because the number of possible interactions increases proportionally to the square of system size, an N-fold increase in size can yield a greater than N-fold increase in capacity [159].
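The arithmetic behind these two claims is easy to check (the chain length and agent counts below are my own illustrative numbers):

```python
from math import comb

# Variety from a small alphabet: distinct chains of length 100
# drawn from an alphabet of 20 amino acids.
chains = 20 ** 100          # an astronomically large design space
assert chains > 10 ** 130

# Interactions grow with the square of system size:
# the number of pairwise links among N agents.
def pairwise_links(n):
    return comb(n, 2)       # n * (n - 1) / 2

assert pairwise_links(10) == 45
assert pairwise_links(100) == 4950   # a 10-fold size increase, a ~110-fold link increase
```

The second calculation shows why growth in system size outpaces growth in capacity linearly counted: links, not agents, carry the combinatorial weight.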
5.4.2 Short descriptive lengths

Seeking complexity from simple agents has certain benefits. One is that the simple agents require a shorter description than the entire system. In engineering terms, this is a less direct approach to synthesis, an indirect encoding. The product of design for self-organization is a set of interaction rules that govern how the system can build itself, whereas most common engineering design processes result in a full description of the system. The description of the system goes through another mapping process before the structure of the system is determined. This mapping process is the self-organizing structure formation and task completion. This is useful for design and optimization, because the search space of the short description is much smaller than that of a large description, but the results are still complex [16]. This indirect approach is similar to the concept of evolutionary embryogeny, which has been used to evolve growth rules for vertical structures that must support a horizontal load [245].

5.5 Simple agents can interact to form creative systems

To summarize this chapter, complexity is often avoided in engineered systems, because it can lead to difficulty in design and control, but sometimes complexity is forced on the designer due to requirements for adaptability. This adaptability may be needed to face environmental change or shifting functional requirements. To be adaptable, the system's variety must match the variety of its environment. One avenue for increasing the complexity of the system is to allow loose and numerous connections among its constituent parts. These parts do not even need to be inherently complex themselves, but can be simple agents or robots. The complexity of the system, if properly channeled, can actually be a creative force, as the system will be able to discover emergent functionality at runtime.
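The indirect-encoding idea above can be illustrated with a toy rewriting system (a sketch of my own, unrelated to the specific growth rules of [245]): a two-rule "description" expands into a far longer structure through repeated application, so the designer searches over the short rule set rather than over the full structure:

```python
def grow(rules, axiom, steps):
    """Indirect encoding sketch: a short rule set (the 'description')
    expands into a much larger structure through repeated rewriting."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

rules = {"A": "AB", "B": "A"}       # two short rules...
structure = grow(rules, "A", 10)
assert len(structure) == 144        # ...yield a 144-symbol structure (Fibonacci growth)
```

Optimizing over the two rules is a far smaller search than optimizing over all 144-symbol strings, yet the mapping from rules to structure can still produce complex results.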
The design and optimization of the system can also be simplified by the short length of the description required to specify the agents, compared to the description that would have been necessary to describe the entire system. An ontology and methodology for exploiting these theoretical insights are given in Chapter 7, and practical applications of the framework are given in Part IV.

Chapter 6 Computational Synthesis in Self-Organizing Systems

This chapter gives an introduction to the techniques of agent-based modeling and evolutionary optimization, with a justification for their use in the design and analysis of complex systems, including self-organizing systems.

6.1 Agent-based modeling for complex system analysis

Agent-based modeling is potentially useful in both analyzing and designing complex systems [134, 137]. This capability, in part, stems from the study of interaction links between components of a system. Any link between system components can be characterized by a degree of dependence and a degree of control. If links in a system have high dependence (that is, the behavior of one component relies strongly on the behavior of another) but very little control over that behavior, then the system can be effectively modeled as a collection of interacting, autonomous agents. For complex systems, which typically have a large number of interconnections, agent-based modeling and simulation offers a viable approach. This is especially true of self-organizing systems, in which constituent parts have a great deal of local autonomy, but no direct control over, or even knowledge of, overall system behavior. An agent is a discrete, situated, autonomous entity [133] with an internal logic governing its reactions to outside stimuli. Because the behavioral model is only concerned with a single agent, it can be quickly built and simulated on a computer. A simulation based on the behavior of a collection of interacting agents is called a multi-agent simulation (MAS).
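The agent definition above (a discrete, situated, autonomous entity reacting only to local stimuli) can be sketched in a few lines; the class and function names are illustrative stand-ins, not the dissertation's implementation.

```python
# A minimal sketch of an agent and a multi-agent simulation (MAS) loop:
# each agent's internal logic maps local stimuli to a state update, with
# no access to global system state.
from dataclasses import dataclass

@dataclass
class Agent:
    state: float

    def step(self, neighbor_states: list[float]) -> None:
        """React only to local stimuli: the agent's own state and the
        states of nearby agents."""
        if neighbor_states:
            local_mean = sum(neighbor_states) / len(neighbor_states)
            self.state += 0.5 * (local_mean - self.state)  # relax toward neighbors

def run_mas(states: list[float], steps: int) -> list[float]:
    """MAS loop: every agent reacts to its two ring neighbors each step,
    using a synchronous snapshot of the previous states."""
    agents = [Agent(s) for s in states]
    for _ in range(steps):
        snapshot = [a.state for a in agents]
        for i, agent in enumerate(agents):
            agent.step([snapshot[i - 1], snapshot[(i + 1) % len(agents)]])
    return [a.state for a in agents]

# Purely local interactions drive the ring toward a global consensus,
# an emergent outcome that no single agent computes.
print(run_mas([0.0, 1.0, 2.0, 3.0], steps=50))
```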
In a truly multi-agent simulation, it is necessary to restrict agents' knowledge [166]; otherwise, the collection of agents would be acting as one monolithic entity. Therefore, in most simulations, agents are assumed to have knowledge of only their own internal state, certain external signals from other agents, and their local environment. The data generated by agent-based simulations can provide useful insights about the behavior of complex systems and uncover hidden interactions, including those that lead to unintended consequences. Higher-order interactions of agents, not apparent in an architectural object diagram, can become apparent in a simulation as the indirect effects of agent-agent interactions propagate through the system on a time scale that engineers can understand and track. This approach can be helpful for influencing the "design of emergence." The results from such simulations for various "what-if" conditions or parameter changes can be combined with powerful optimization methods to explore the design trade space. This exploration allows for a more detailed elaboration of the trade space, and the uncovering of interesting behaviors that can lead to asking deeper questions. Many complex systems can be studied using multi-agent simulations. Agent-based modeling has been used to study a diverse range of systems and phenomena, including flocks, schools, and crowds [183, 146]; mechanical design teams [106]; robot swarms [234]; and even the rise and fall of civilizations [3], to name just a few. Agent-based modeling is useful when the interactions between system components are complex or nonlinear, when the population is heterogeneous, when the topology of interactions is heterogeneous and complex, and when agents exhibit learning and adaptation [23]. Because the engineer does not know a priori what the results of the simulation may be, there is a chance of being surprised.
Because of the complexity of the system being modeled, it is often difficult to distinguish whether this surprising behavior is the result of an incorrect model or an unanticipated result of a sound model [68]. The production of a simulation is a design process in itself, and as the "product" moves from vague conceptualization to computer code, care must be taken to minimize the introduction of errors and artifacts at every step of the design process [68].

6.1.1 Agents working with and within the complex system

Humans and other entities that constitute the system's environment can be modeled as agents [136], thereby enabling analysis of the complex feedback loops that connect system performance and user behavior [135]. For example, the erratic behavior of untrained users, AI stand-ins for trained operators, and the decision-making processes of consumers can all be modeled as agents working with and within complex systems [135]. For such systems, it may not be possible to derive smooth analytical functions for system performance, but MAS can make the analysis more tractable.

6.1.2 Agents as the complex system

In certain highly complex systems, such as self-organizing systems, elements of the system itself can be modeled as agents, to analyze the nonlinear relationship between local design variables and system-level output. This allows the engineer to focus on the local problem of accurately modeling agent interaction rules while leaving the larger problem of system-level analysis to the computer.

6.1.3 Multi-agent system example: seating layout design

Here I will present a short case study from [102] to illustrate the type of emergent behavior that an agent-based model can uncover and its relevance to designers. The design under consideration is a seating layout for a theater. Customers are allowed to choose their seats, and their overall experience determines their satisfaction when leaving the building.
The design question is, "Will customers be better served by a wide aisle down the center of the seats, or by two narrow aisles at the sides?" In practice, it may be very difficult to model a user's satisfaction, but by application of microeconomic theory, observations, questionnaires, and interviews, rough estimates can be formed [50]. Therefore, marketing personnel typically target a particular demographic and tailor the design toward that demographic, or elicit customer desires using surveys and focus groups. Several techniques, such as Quality Function Deployment and the House of Quality [88], are useful for aggregating diverse customer needs. In an agent-based approach, customers can be modeled as agents that can interact with the design. Their diversity can be captured through distributions of parameter values that vary throughout the population. Thus, their preferences do not need to be aggregated to set design targets, but can be simulated so that design alternatives can be virtually tested to measure the emergent satisfaction of users. For example, in the domain of sports stadium design, guidelines exist for choosing the seating capacity based on local socioeconomic factors and the arrangement based on spectator sightlines [108]. With an agent-based approach it is possible to evaluate design choices with regard to spectator social desires as well, moving beyond a purely structural analysis.

Customers' seating preferences

For a seating scenario, the customers are assumed to have five main preferences:

1. Groups of customers (friends) prefer to sit together
2. Customers prefer not to sit directly next to other customers outside their group (strangers)
3. Customers prefer an unobstructed view of the stage

Figure 6.1: Two seating options: the middle-aisle layout (left) and the side-aisle layout (right).
The squares represent seats, the circles represent structural columns, and the stage is represented by the thick line at the bottom of the frame.

4. Customers prefer to sit close to the stage
5. Customers prefer to sit along the centerline of the stage

These preferences can be roughly observed in any situation where there is open seating to view a performance (e.g., a movie or a sporting event).

Designer's seating options

The seating section under consideration needs to be built around two existing load-bearing columns that may obstruct the view of the audience. The customers' satisfaction then depends on their proximity to the stage, the customers sitting next to them, and whether or not the columns obstruct their view. There are a total of 52 seats, with room left for aisles. The systems engineer needs to choose between two candidate designs. One design alternative leaves the center of the seating area empty for aisles, while the other design alternative places aisles on the sides. The two options are shown in Figure 6.1. The side-aisle layout provides the most unobstructed seating and the most seating near the centerline of the stage. The middle-aisle layout has more edge seats, giving a lower likelihood that a customer would have to sit by strangers. It can also accommodate more mid-sized groups, as its seat sets are all between 4 and 6 seats, rather than varying from 2 to 12 seats as in the case of the side-aisle configuration. A numerical comparison of the two options is summarized in Table 6.1.

Simulation of aggregate customer seating behavior

In the simulation, a group of customers enters the venue and finds seats at each time step. To select their seats, the group's leader first finds a group of seats (if any) large enough for his entire group to sit together. The leader then chooses the most desirable of these seats for himself. The next group member sits in the next most desirable seat adjacent to the leader, and so on, until all are seated.
If there are no groups of seats large enough for the whole group of friends, the leader will select the largest seat group available for a subset of the group, while several group members have to find other seats, some distance away from their friends. The desirability U of a seat is calculated as follows:

U = (u_d + u_f u_g + u_o) / 3    (6.1)

u_d = 3 / (6.6 d)

u_g = 1 if sitting with main group; 0.5 if separated from main group

u_o = 1 if view is unobstructed; 0 if view is obstructed

where the value of u_f is given in the following table:

Total friend neighbors   Total stranger neighbors   u_f
1 or 2                   0                          1
1                        1                          0.5
0                        0, 1, or 2                 0

Equation 6.1 is used to map the customers' qualitative preferences to a mathematical algorithm that can be implemented in simulation. It can be seen that agents choose where to sit based on the desirability of the available seats, and the desirability of the remaining seats in turn depends on where the agents have chosen to sit. This feedback loop between system performance and user evaluation is common in complex systems. To simulate aggregate behavior, groups of 2–12 friends were randomly created and allowed to choose seats until 48 of the 52 seats were filled. The fitness of each simulation was taken to be the sum of the desirability of every occupied seat. Five hundred simulations were run for each layout option. The fitness frequencies for each option are given in Figure 6.2.
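The desirability model can be sketched directly in code. Note that Equation 6.1 is reconstructed here from a garbled source: the product form u_f u_g and the expression u_d = 3 / (6.6 d) are my best reading of the original typesetting, and the function and parameter names are my own.

```python
# A hedged sketch of the seat-desirability model of Equation 6.1, as
# reconstructed from the text (u_d form and the u_f * u_g product are
# assumptions inferred from the garbled source).
def u_friends(friend_neighbors: int, stranger_neighbors: int) -> float:
    """Tabulated social utility u_f from the two adjacent seats."""
    if friend_neighbors >= 1 and stranger_neighbors == 0:
        return 1.0
    if friend_neighbors == 1 and stranger_neighbors == 1:
        return 0.5
    return 0.0

def desirability(d: float, friend_neighbors: int, stranger_neighbors: int,
                 with_group: bool, unobstructed: bool) -> float:
    u_d = 3.0 / (6.6 * d)                # closer to the stage is better
    u_f = u_friends(friend_neighbors, stranger_neighbors)
    u_g = 1.0 if with_group else 0.5     # penalty for a split group
    u_o = 1.0 if unobstructed else 0.0   # obstructed views score zero
    return (u_d + u_f * u_g + u_o) / 3.0

# A front-row seat (d = 1) beside one friend, with the group, unobstructed:
print(desirability(1.0, 1, 0, True, True))
```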
Table 6.1: Comparison of Two Seating Layout Design Options

                                    Middle Aisle Design   Side Aisle Design
Total Number of Seats               52                    52
Number of Obstructed Seats          26                    22
Number of Edge Seats                20                    18
Number of Seats in Largest Group    6                     12
Number of Seats in Smallest Group   4                     2

Figure 6.2: Fitness value distribution for middle-aisle and side-aisle layouts (histogram of fitness frequencies, N = 500 per layout)

Figure 6.3: Results of simulation when all customers arrive in groups of 4. Sets of friends are indicated by a common color.

Due to the side-aisle layout's inherent structural advantage, it outperforms the middle-aisle layout on average, but the middle-aisle layout is superior in several specific cases that involve many mid-sized (4–6) groups of friends. For example, if all customers come in groups of 4 (Figure 6.3), some of the best front-row seats of the side-aisle configuration are left unoccupied, giving it a fitness of 28.75, whereas the middle-aisle layout leaves the worst seats empty, for a slightly higher fitness of 29.37.

6.1.4 Practical takeaways from seating case study

This section was intended as a motivation for using agent-based modeling in systems engineering and design. With an agent-based approach, a systems engineer can look beyond a static, structural model of the system and study users and their complex interactions with the system, with the end goal of increasing system performance and customer satisfaction. In the case study, the agent-based approach yielded two significant contributions: a nuanced analysis of a socially emergent phenomenon that was not apparent in a structural analysis, and a clue to where more information and market research effort could be focused.
The total desirability of a configuration depends not only on the physical arrangement of seats, but also on both the sizes of customer groups and the order in which they enter the venue. A static analysis of the problem would have chosen the side-aisle layout as the clear favorite because it provides the most unobstructed seating and the most seating near the centerline of the stage. A dynamic analysis of the problem with customers modeled as agents, however, indicates that the middle-aisle layout may be preferable in particular circumstances, indicated by the overlap in Figure 6.2. As importantly, an agent-based model can help the designer to highlight important assumptions and gaps in information (Do we serve more couples or families of four? Do our customers place more emphasis on sitting close to the stage, or on having an unobstructed view?). Answering such questions enables more efficient use of investigative resources. If taken a step further, the agent-based method could also afford the opportunity to simulate the effects of interventions such as priority seating groups or assigned seating. It is also important to recognize that an agent-based simulation is never the end goal, but merely an important step within the system development process [168]. The tools used here are adopted in service of the overall design. While not addressed in this dissertation, the time, cost, and effort of developing such simulations must always be measured and traded off against the value they provide in the design context.

6.2 Genetic algorithm for detail design

The agent-based analysis of a seating layout from Section 6.1.3 could be taken a step further.
The multi-agent simulation was used to analyze the relative merits of two predefined seating layouts, but the designer will always be thinking ahead to leverage the power of even more advanced computational techniques, such as linking the agent model of customers with a software system for automated interior layout generation [145, 70] and an optimization algorithm. Such an approach would allow end-to-end AI-supported design, with the engineer focused on controlling the interfaces among the software components. (Such a case study is not given here, but is left for future work, as the rest of this dissertation will focus on the use of system components modeled as agents, rather than system users.) A similar approach is taken here with the design of self-organizing systems. Here I focus on the use of genetic algorithms [96, 76, 78, 79] because they are useful for optimizing large parameter sets and have been proven capable in similar applications [31, 204]. There are many other optimization algorithms available to the designer of complex systems; genetic programming [122], hill climbing [204], simulated annealing [149, 83], active nonlinear testing [147], and all the variations thereof are just a small subset of the possibilities. The basics of genetic algorithms (GAs) were given in Section 1.6. To recap, they operate on a population of candidate solutions. The solutions are evaluated according to a fitness function, and the fittest solutions are selected for recombination with other solutions and/or mutation. The selection, recombination, and mutation result in a new generation of candidate solutions that ideally has a higher fitness than the prior generation. This process is repeated for as many generations as it takes to find suitably fit solutions. The following sections explain several more aspects of the GA that are important to designers.
6.2.1 Evaluation

The system performance must be evaluated by a fitness function. In this work, the global results are the primary concern; the particular self-organizing strategy that leads to the results is usually treated as a black box. The following aspects must be considered when creating a fitness function:

Search space: This is the range of variables that the GA can test for fitness. Both the number of variables and the range of acceptable values affect the size of the search space. Ideally, the search space will be densely populated with highly fit solutions.

Speed: Possibly thousands of evaluations will be run, so the evaluation should not add significant time costs to the algorithm.

Depth: The system should obey the spirit of the task requirements, not just the "letter of the law."

Responsiveness: The fitness function should be able to make fine distinctions between candidates that exhibit similar performance, and should function even if the evaluation is clouded by noise.

A designer must also decide which level of the system is being evaluated. A self-organizing system displays behavior at different levels of decomposition. A fitness function could be applied either to the success of an individual or to the system. Also, if the system's FR can be decomposed into sub-FRs, the designer must decide whether to account for the completion of the sub-FRs or just the top-level function. In general, it is best to apply the fitness function holistically to the system-level behavior. Just as the hive or colony can be treated as a super-organism subject to competition and natural selection [111], so can the agent swarm be evaluated as a single entity. In fact, decomposing the fitness function and selecting for sub-FRs may lead to optimized partial solutions that integrate into a sub-optimal global effect [236].
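The holistic, noise-tolerant evaluation described above can be sketched as follows; the stand-in simulation, the "ideal" parameter values, and all names here are hypothetical illustrations, not the dissertation's fitness function.

```python
# A hedged sketch of a system-level fitness function: the swarm is scored
# holistically on its aggregate outcome (not per-agent sub-functions), and
# noisy evaluations are averaged so fine distinctions remain visible.
import random

def simulate_swarm(genome: list[float], seed: int) -> float:
    """Stand-in for a multi-agent simulation: returns a noisy system-level
    measurement for a candidate behavior genome."""
    rng = random.Random(seed)
    ideal = [0.25, 0.5, 0.75]  # hypothetical best parameter values
    error = sum((g - t) ** 2 for g, t in zip(genome, ideal))
    return -error + rng.gauss(0.0, 0.01)  # higher fitness is better

def fitness(genome: list[float], trials: int = 5) -> float:
    """Average over repeated simulations so the GA can make fine
    distinctions even when evaluation is clouded by noise."""
    return sum(simulate_swarm(genome, seed) for seed in range(trials)) / trials

print(fitness([0.25, 0.5, 0.75]) > fitness([0.9, 0.9, 0.9]))
```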
6.2.2 GA parameters

A designer must specifically consider certain parameters that govern the GA's performance and balance it between breadth-focused and depth-focused search.

Population size: the number of candidates that the GA will create at each generation

Final generation: the number of generations that a GA will run before it stops

Elitism: the number of candidates that will be directly cloned to the next generation

Crossover percentage: the percentage of selected candidates whose genes are recombined

Crossover point: how to determine which genes come from which parents in a crossover operation

Mutation percentage: the probability that a bit of genome will mutate during procreation

Elitism is used so that the stochastic algorithm does not randomly lose the knowledge of highly fit candidates. Mutation can be used to maintain diversity in the population so that the GA does not prematurely converge on local optima. Higher population sizes and final generation cutoffs generally lead to discovery of better candidates, at the cost of increased computation time. These parameters are often set by experience or informal experimentation. It is best to do multiple GA runs with varying parameters to test for sensitivity. Some have suggested methods to optimize these parameters, even using a GA to optimize the parameters of another GA [82], but this can lead to unlimited levels of meta-optimization [143] unless a non-heuristic optimization algorithm is ultimately used.

6.2.3 GA in the design of Cellular Self-Organizing Systems

The concept of cellular systems with design DNA meshes quite well with the GA approach, because where you have DNA, you would expect natural selection, crossover, and mutation. Moving past this superficial level, the parametric approach used by Chiang [39] is simple to encode in a binary digital genome, as the chromosome interpreter simply needs to scan at constant intervals to read values.
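The parameters listed in Section 6.2.2 can be wired together in a compact GA sketch on a binary genome. This is an illustrative minimal implementation with a toy objective, not the dissertation's GA; the selection scheme (truncation) and all names are my own choices.

```python
# A minimal GA sketch combining population size, final generation, elitism,
# single-point crossover, and per-bit mutation on a binary genome.
import random

rng = random.Random(0)

def fitness(genome: list[int]) -> int:
    return sum(genome)  # toy objective: maximize the number of 1 bits

def crossover(a: list[int], b: list[int]) -> list[int]:
    point = rng.randrange(1, len(a))  # single crossover point
    return a[:point] + b[point:]

def mutate(genome: list[int], rate: float) -> list[int]:
    return [bit ^ 1 if rng.random() < rate else bit for bit in genome]

def run_ga(bits=32, pop_size=30, generations=40, elites=2, mutation_rate=0.02):
    population = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        next_gen = [g[:] for g in population[:elites]]  # elitism: clone the best
        while len(next_gen) < pop_size:
            a, b = rng.sample(population[:pop_size // 2], 2)  # truncation selection
            next_gen.append(mutate(crossover(a, b), mutation_rate))
        population = next_gen
    return max(population, key=fitness)

best = run_ga()
print(fitness(best))  # should approach the optimum of 32
```

Because the elites are cloned without mutation, the best fitness in the population never decreases between generations, which is the point of elitism noted in the text.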
At the most fundamental level, the GA in some ways mimics creative cognition in humans. Goldberg [79] talks of a "physics of innovation" that GAs must follow. Well-written GAs all follow these rules, whether or not they closely resemble the biological process of reproduction. Innovation is the mixing of "building blocks," where a building block is a segment of the virtual chromosome that produces positive behavior. Similarly, certain elementary ideas that exist in a person's memory are retrieved during problem solving and mixed together. The "geneplore" model of human cognition [63] states that while thinking creatively, one will generate "preinventive structures" and explore these structures by modifying them or combining them. These creative thoughts must be assessed according to their novelty and appropriateness [44]. Other cognitive researchers describe "blends" of elementary ideas that result in emergent creative thoughts [67, 62]. Just as humans blend and morph preinventive structures to be assessed for novelty and appropriateness while trying to avoid fixation [117] on inferior early ideas, so the GA mixes and mutates building blocks to be evaluated according to a fitness function while attempting to escape premature convergence [79]. So we see that the human and GA creative processes have certain fundamental similarities [78, 7], and study of one can aid understanding of the other. During conceptual design, the human designer must creatively develop a model that is endowed with enough capacity for elementary building blocks such that either a human engineer or a GA can optimize it during detail design. Thus, the trouble that a GA has in optimizing the design should give insight into the appropriateness of the model.
Having such a neutral designer also allows for evaluation of the adaptability of a conceptual model if the same GA is used to optimize it multiple times for various scenarios, and it allows for comparing the optimized scores across scenarios.

6.3 Integration of multi-agent simulation with optimization

Multi-agent simulations can be combined with optimization to optimize agent behavior for system-level effects. Because there may be multiple variables to optimize, and the outputs may have highly nonlinear relationships with the inputs, the simulation environment presents challenges to the system designer that are distinct from those that confront standard optimization methods [191], but with recent advances in computing power [65] and advanced optimization techniques such as genetic algorithms, the marriage of these two tools is feasible. In this thesis, I propose an integration of multi-agent simulation with a genetic algorithm for the computational synthesis of self-organizing systems. This approach is summarized in Figure 6.4, where the task is the designer's primary concern because it links the system performance to the user's needs. Each stage in Figure 6.4 is a potentially rich area for research, and not all of it can be covered in one dissertation. If we assume that the agent hardware is simple enough that it can be built through a routine design process and that the task can be simulated with sufficient fidelity using multi-agent simulation, then the research effort can focus on the evaluation, optimization, and description of the self-organizing system. In particular, I will focus on the description of the system and how these descriptions can be generated during the conceptual design stage. To do this, a method for modeling self-organizing systems is required. This model must be general enough to describe a
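The proposed simulation-optimization loop (generate a description, perform the task in a multi-agent simulation, evaluate, optimize) can be sketched end to end with toy components; everything here is an illustrative stand-in, not the dissertation's implementation.

```python
# A hedged sketch of the simulation-optimization loop: an evolutionary
# optimizer proposes short agent-rule descriptions, a multi-agent
# simulation performs the task, and evaluated outcomes drive the search.
import random

rng = random.Random(1)

def simulate(rule: float, n_agents: int = 10, steps: int = 30) -> float:
    """Toy MAS: agents on a ring each move toward their left neighbor with
    gain 'rule'; the task is to aggregate (minimize spread)."""
    x = [rng.uniform(0.0, 10.0) for _ in range(n_agents)]
    for _ in range(steps):
        x = [xi + rule * (x[i - 1] - xi) for i, xi in enumerate(x)]
    return -(max(x) - min(x))  # fitness: a tighter cluster scores higher

def optimize(pop_size: int = 16, generations: int = 20) -> float:
    """Loop: generate descriptions -> perform task -> evaluate -> optimize."""
    population = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=simulate, reverse=True)
        parents = population[: pop_size // 2]
        children = [min(1.0, max(0.0, p + rng.gauss(0.0, 0.05))) for p in parents]
        population = parents + children
    return max(population, key=simulate)

best_rule = optimize()
print(best_rule, simulate(best_rule))
```

Note that each evaluation re-randomizes the initial agent positions, so the optimizer is searching under the same kind of noisy fitness a real self-organizing task would present.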
wide class of systems while remaining specific enough for practical applications. It must also fit into the computational synthesis framework of this chapter. To this end, a design ontology of self-organizing systems is proposed in Chapter 7.

Figure 6.4: Simulation-optimization loop. The genetic algorithm generates descriptions of the self-organizing system, the system performs the task, and the evaluation of task performance drives further optimization.

Chapter 7 Design Ontology for Self-Organizing Systems

Chapter 4 alluded to the need for a design ontology, with an emphasis on system-level understanding, for the design of self-organizing (SO) systems. This chapter will explain the basic concepts of an ontology and why they would be helpful for designers. Then an ontology specific to the design of self-organizing systems is developed in the middle ground between ontologies that are too specific to lead to new discoveries and those that are too general to have practical application. Finally, the terms of the ontology are composed into a methodology, along with agent-based simulation and optimization, for an integrated process to bring SO systems from concept through detail design.

7.1 Introduction to ontology

An ontology is a knowledge structure applied to a domain of interest. Ontologies identify the most important concepts within a domain, define the concepts uniquely, and establish rules for relating concepts to one another. Ontologies define formal languages, easing information transfer and clarifying semantics. They are abstract, detailing only the important (to specific users) details of a domain, and should be "explicit," giving a precise definition of the concepts and relationships contained within the ontology [86]. Ontologies are often used for consistency in knowledge transfer between cooperating entities, whether they be humans [232] or computer agents [125]. Most importantly, ontologies can be used to define other models.
In engineering design, if an ontology has properly specified entities and relationships, portions of it can be built up into models of systems, aiding in the conceptual design of new systems.

7.1.1 Need for ontology in self-organizing systems design

There are many examples of SO systems in the literature (e.g., [13, 100, 217]), but most researchers have taken an ad hoc approach to their design, demonstrating individual successes but few general results [176]. This makes it difficult to transfer lessons learned from one application to the design of another. Also, designers in different organizations have to start from scratch when creating models for conceptual design. With a regimented design ontology for SO systems, various successful systems can be categorized, and their important self-organizing mechanisms can be documented for possible adoption to achieve other, similar functions. Behavioral modeling and design can also be facilitated with a generally applicable ontology of the agents that form the system. In Prokopenko's introduction to the 2008 book Advances in Applied Self-Organizing Systems [176], he comments on the small number of practical self-organizing systems, stating, ". . . the lack of a common design methodology for these applications across multiple scales indicates a clear gap in the literature." Tellingly, his introduction to the 2013 edition of the same book [177] contains the same verbiage word for word, indicating that not much progress had been made in the five intervening years. There are many methodologies for engineering design (e.g., [165, 209]), which are applicable at a high level to conventional systems and SO systems alike. The gap in knowledge is not a lack of systematic design methods, but a lack of specific adaptation to SO systems.
Currently engineers do not have a guideline for the elements of an SO system that must be specified during design, or a model of SO system behavior. Some helpful methodologies have been proposed in this area [73, 51], but no standard exists.

7.2 Defining self-organization

The users of the design ontology in this chapter are expected to create self-organizing systems. To properly define the ontology, an understanding of the salient features of self-organization is necessary.

7.2.1 The elusive nature of organization

According to Ashby [2], for a system to self-organize can mean one of two things: the formation of relationships that did not exist before (forming an organization), or the reformulation of relationships that leads to a better organization. With the former definition, there is no guarantee that self-organization will be helpful to the engineer, and with the latter, there is great difficulty in defining just what is better "organized" or more "ordered." Gershensen and Heylighen [75] showed that, depending on the partition of the system or the aspects measured, any system can be shown to be increasing or decreasing in order. Ashby [2] claims that order is a relationship between the size of a system and the length of its description, and thus its order is subjective, since it depends on the choice of language used to describe it. Von Foerster [227], echoed by Gershensen and Heylighen, even goes so far as to say that there "are no such things as self-organizing systems!" These more extreme statements are hyperbole, of course, as von Foerster's main point is that a system cannot self-organize in isolation (without gaining order from an environment and exporting entropy), and Gershensen and Heylighen's point was that only certain classes of systems are usefully modeled as self-organizing, while it is not an inherent property of any system.
I agree with this last point; there is no way to describe self-organization as an objective, measurable property of a system as we do with density or temperature. Self-organization is inherently subjective. If we use Ashby's second definition of the self-organizing system, a system that changes from a bad organization to a good organization, we need to define what is bad and good. From the standpoint of engineering design, this demarcation follows quite naturally from the concept of "function."

7.2.2 Function as the missing subjective link

Subjectivity is required for any practical definition of self-organization. Designers have valuable experience with working under subjective goals. There are three fundamental concepts [71] that form the basis of design theory: function, behavior, and structure (FBS):

Function: what a system is supposed to do
Behavior: what a system actually does
Structure: what a system is, in a tangible sense

A good summary of design is that it is the process of specifying a system's structure such that its behavior fulfills its function. The subjectivity enters in the definition of the function, as different stakeholders can have different opinions on what a system is supposed to do. Nonetheless, definition of the function is one of the first and most important steps in design methodology [165, 178, 209, 138]. Once the system's functional requirements are set, the designer proceeds to specify a structure in such a way that the structure's behavior fulfills the system's function. In a self-organizing system, the designer has no direct control over the global structure of the system, and must take advantage of his control over the local behaviors of system components so that they interact to create order. So an SO system is a system that is designed at the local level, but fulfills a specified function at the global level, without external or central control.
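The function-behavior-structure relationship above can be expressed as a tiny executable sketch: design searches for a structure whose modeled behavior satisfies the stated function. The encoding, the beam example, and its toy stiffness model are my own illustrations, not part of the FBS literature cited here.

```python
# A minimal, illustrative encoding of the FBS view: a structure is
# acceptable when its behavior fulfills the function.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DesignProblem:
    function: Callable[[float], bool]   # what the system is supposed to do
    behavior: Callable[[float], float]  # what a given structure actually does

    def fulfills(self, structure: float) -> bool:
        """Evaluate the structure's behavior against the function."""
        return self.function(self.behavior(structure))

# Hypothetical example: the function is "deflection under load stays below
# 0.01"; the behavior model maps a beam thickness (structure) to deflection.
problem = DesignProblem(
    function=lambda deflection: deflection < 0.01,
    behavior=lambda thickness: 1.0 / (thickness ** 3),  # toy stiffness model
)
print(problem.fulfills(5.0))  # 1/125 = 0.008, below the limit
print(problem.fulfills(4.0))  # 1/64 ~ 0.0156, above the limit
```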
[Figure 7.1: Related ontologies. The design ontology for self-organizing systems draws on mechanical design (Gero 2002), system dynamics, biology, and previous CSO work, contributing entities such as function, behavior, structure, system state, system levels, DNA encoding, DNA transcription, field-based behavior regulation, and behavior parameterization.]

7.3 Related ontologies

Artificial self-organizing systems are designed. They are dynamic systems. They are also bio-inspired. Thus, the ontology of this chapter will primarily draw from ontologies of mechanical design, dynamic systems, and biological systems. It will also incorporate useful discoveries from the previous work on Cellular Self-Organizing Systems. An overview of the anchors of the ontology is given in Figure 7.1.

7.4 Requirements and approach

This section outlines the base ontology that is used to categorize SO systems. The explanation is complicated by the fact that the ontology must serve two overlapping purposes: description and synthesis. Ontology and modeling are a mixture of art, science, and philosophy, so one cannot simply derive a universal ontology as a consequence of natural laws. Instead, I aim to validate the ontology through practical use in synthesis and through its ability to describe various systems in the literature, so it is necessary that the ontology's dividing lines be placed where the designer has the most leverage to affect the system and where they will highlight the most important entities within the design domain.

7.4.1 Ontology requirements

The ontology should cover the domain of designed self-organizing systems. It should be general enough to describe the huge variety of self-organizing systems from the literature, yet specific enough that designers can use it as a practical tool when creating SO systems.
A general ontology should enable the capture of the most important lessons learned through various case studies and facilitate the transfer of these SO mechanisms to other domains.

7.4.2 Balance between generality and practicality

Ontologies must strike a balance between generality and specificity. A more general ontology can describe a greater domain of knowledge, but a more specific ontology will provide more immediately practical advice to users. The danger in being too general is that an ontology may be unwieldy or offer no practical use, and the danger in being too specific is that the ontology may only describe what we already know, with no possibility for integrating new knowledge or composing new systems.

To begin, the fundamentals of describing a system are extracted from studies in physics, philosophy, self-organization, and organization theory. This identifies a system, an environment, and an observer. The ontology is then refined by exploring the implications of a designer as the observer, which introduces subjective terms related to the system's purpose. Consideration of the typical design process required to synthesize such systems leads to consideration of abstraction, recursion, iteration, and uncertainty. I integrate terms from the literature for ontological distinctions that designers have made to align with these characteristics. The ontology is finished by adding details from specific experiences in the design of Cellular Self-Organizing (CSO) Systems, which are systems composed of homogeneous agents with only simple hardware and computational abilities. In this way, the ontology is built from fundamental to practical by following this progression of designer needs: description, evaluation, synthesis. Figure 7.2 shows a brief outline of the construction of this ontology.
The upper portion of each box describes the characteristics of design and self-organization that drive the ontological distinctions, and the lower portion lists some of the fundamental entities at each stage.

[Figure 7.2: Sources of ontological entities, from abstract (description) to applied (synthesis): philosophy, physics, and self-organization (system, observer, environment); observer as designer (purpose, function); design of self-organizing systems (agents, behavior); design of CSO systems (parametric behavioral model, design DNA).]

7.5 System, environment, and observer

We begin with the very simple assumption that there exist a system, an environment, and an observer (Figure 7.3).

[Figure 7.3: Observer demarcating a boundary between a system and its environment]

A system is a persistent arrangement of interacting components. It is the arrangement of the components that distinguishes the system, rather than the physical substrate that the system is specifically made from [142, 226, 231]. If it is not fully insulated, the system will receive inputs from the environment. While it is possible to give an objective definition of boundary for certain well-defined classes of systems, in general it requires an observer to subjectively demarcate where a system ends and the environment begins. With no observer to distinguish between system and environment, the entire universe could be described as an un-bordered collection of interactions. According to Heylighen [91], this existence can be described as simply actions and states, where actions are what cause the change from one state to another. This description may be philosophically fulfilling, but more specificity is required for practical application that may be useful to engineers.
7.5.1 Designer as observer: a subjective definition of order

Designers are interested not only in how systems operate, but also in how they can be built and improved [21]. If the observer of a system is a designer, this introduces the subjective ideas of value and purpose into the discussion. What we commonly describe as the "purpose" or "function" of a system is really the purpose or function that we, as observers, ascribe to the system [26]. We colloquially talk about the function of a system as if it were an inherent property, but the function of an artifact is only determined by its user or designer. For example, consider a Roman chariot in a museum. Is the function of the chariot to transport soldiers, or to attract tourists? The former may have been true millennia ago, while the latter is true now. The artifact itself has not changed, but its purpose has, because the observer has changed. Drastically different methods of warfare, transportation technology, and educational opportunities have caused the modern observer to ascribe a different purpose to the artifact. If the observer is an engineering designer, the purpose of the system can be described using the familiar term "function." As mentioned in Section 7.2.2, it is useful to think of systems in terms of their FBS: function, behavior, and structure [71], where the function is the purpose of the system, what it is supposed to do. Since the function of a system depends on the observer of the system, and is not an inherent property of the system, so too does the concept of self-organization. If the "order" that arises from self-organization relates to its function, then, depending on an observer's subjective opinion of what the function is, the system can be described as self-organizing or disorganizing [75].

7.6 Relevant characteristics of the design process

Design is a process of mapping from societal needs to physical specifications, with decreasing levels of abstraction in all domains.
The Axiomatic Design [209] model of engineering design describes the process as a transition from customer needs, to functional requirements, to design parameters, to process variables, with the artifact being described hierarchically within each domain. There are many other notable explanations of the design process, such as Total Design [178] and Systematic Design [165], which identify different phases and use different terminology, but in general terms most methodologists agree that the process transforms societal needs into technical solutions while considering the system at increasing levels of specificity.

Design is an inherently iterative process [46], because the result of one design iteration is more detailed knowledge of the system, which can be fed back in the form of modified constraints and goals in the next iteration [131]. For example, in automobile design, an estimate for weight is required at the very beginning of the design process so that high-level power and energy calculations can be performed. However, the weight is only known with accuracy at the end of the design process, after all of the components have been specified. The weight calculated at the end of the design process can then replace the original estimate as a more accurate assumption during further iterations.

Design is carried out amid uncertainty. Uncertainty can be caused by randomness in material properties or in interactions with the environment and suppliers. It can also arise from incomplete understanding of the system. This factor is especially pronounced in the design of self-organizing systems. Self-organizing systems operate on different levels [169, 75]: the designer only has firm control over the agent level, and the system-level behavior is emergent.
The interactions among agents may be so complex that an analytical solution to their emergent results is impossible to obtain [61], so the designer must rely on less direct methods such as simulation. Taken together, these properties of needs-solutions mapping, decreasing abstraction, uncertainty, and iteration present unique challenges to the designer of any technical system, and especially to the designer of SO systems.

7.7 Measuring performance: system and state

A system's behavior must be measured to determine whether or not it is organizing to fulfill its function, so it is important to classify the system in terms of its possible states. In general, a system can be described in dynamic terms through state transitions [90]. Each state is characterized by values assigned to the attributes that describe the system. This leads to the following definition:

System state (S_sys): the attributes and values of a system at a particular time

So the system state is a set of ⟨attribute, value⟩ pairs:

S_sys = {⟨attribute, value⟩_i | i = 1, 2, 3, ..., N}

where N is the total number of ⟨attribute, value⟩ pairs necessary to fully describe the system. Attributes are descriptors of the system, and values are the specific values assigned to the attributes. For example, a middle-aged man's state could have the attribute "age" with the value "40 years."

As the system evolves through its states during deployment, it may have a tendency to reach certain specific subsets of the state space and stay there. Any such collection of states where this is true (that does not also contain a smaller subset meeting the same definition) is called an attractor:

Attractor: the most compact subset of the state space that the system can reach, but cannot leave

A set of states outside of an attractor that inevitably lead to the attractor is called a basin. Note that systems can have more than one attractor.
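As a toy illustration of these definitions (a hypothetical sketch, not an example from the dissertation's case studies), consider a system whose state space is six discrete states with a fixed transition map. Iterating the map from any initial state reveals the attractor as the set of states that recur, and the remaining states form its basin.

```python
# Hypothetical sketch: a discrete dynamical system on states 0..5.
# The transition map sends every state toward the cycle {2, 3}, so {2, 3}
# is an attractor and the states outside it form the attractor's basin.
transition = {0: 1, 1: 2, 2: 3, 3: 2, 4: 3, 5: 4}

def find_attractor(state, steps=100):
    """Iterate the transition map and return the set of recurring states."""
    seen = []
    for _ in range(steps):
        if state in seen:
            # Everything from the first repeated state onward recurs forever.
            return set(seen[seen.index(state):])
        seen.append(state)
        state = transition[state]
    return set()
```

Here every initial state reaches the same attractor, but as noted above, a system can have several attractors, in which case different basins lead to different ones.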
An observer will not be able to comprehend every possible attribute of the system. Even if we assume a supernatural observer who could handle infinitely many evolving system attributes, many of the possible attributes are likely irrelevant and should be ignored anyway. An observer will be biased toward tracking certain relevant attributes and ignoring others, so we can say that the observer has a filter.

Filter: the observer's bias causing him to consider only a subset of the possible attributes in S_sys

If the observer of the system is a designer, he will filter out the S_sys attributes that are irrelevant to the fulfillment of the system's function and constraints. Unless the system is fully random or chaotic, it will eventually find and reach an attractor; thus the designer must identify satisfactory and unsatisfactory states and determine which types are contained within the attractors of the system.

Satisfactory state: a state of the system where the values of the attributes are within the acceptable range of the observer

Unsatisfactory state: a state of the system where the values of the attributes are not within the observer's acceptable range

A piece of practical advice is that a designer should define satisfactory states at the system level, without considering the underlying agents. This is because the individual agents may rearrange, malfunction, etc. during run time, but if the system's purpose is still being fulfilled, the system state is still satisfactory. In many practical cases of SO system design, the system can reach multiple attractors, and it is not immediately obvious which attractor will be reached. However, the future performance can sometimes be predicted (within a margin of error) based on the system's current state. If leading indicators of important attributes are found, the designer can consider not just performance attributes but also prediction attributes.
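The filter and the satisfactory-state check can be sketched directly from the definitions above. This is a minimal hypothetical illustration; the attribute names and ranges are invented for the example.

```python
# Hypothetical sketch: a designer's filter keeps only the attributes relevant
# to the system's function, and a state is satisfactory when every filtered
# attribute's value lies in the observer's acceptable range.
system_state = {"coverage": 0.93, "energy": 41.0, "color": "red"}

# The filter, expressed as the relevant attributes with acceptable ranges;
# "color" is filtered out as irrelevant to the function.
acceptable = {"coverage": (0.90, 1.00), "energy": (0.0, 100.0)}

def is_satisfactory(state, acceptable):
    """True if all function-relevant attribute values are in range."""
    return all(lo <= state[attr] <= hi for attr, (lo, hi) in acceptable.items())
```

Note that the check never inspects individual agents, in keeping with the advice to define satisfactory states at the system level.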
Performance attribute: a property of the system indicating how well it is fulfilling its function

Prediction attribute: an aspect of the system that is not directly relevant to the functional performance of the system, but indicates how the performance will evolve over time

If a designer cannot create a system whose attractors contain only satisfactory states, then he must consider control mechanisms for guiding the system toward satisfactory states during the evolution of the system, and mechanisms for recognizing and rescuing systems stuck in unsatisfactory attractor states.

7.8 Architectural levels

Traditionally engineered systems are hierarchically decomposed into subsystems and components [197]. In the design of SO systems, the designer is also concerned with multiple levels of system architecture, but the substrate of the system is agents. In both cases, the main functionality is a system-level goal. From an outsider's perspective, it does not matter how this functionality is accomplished. The "living architecture" of ants [69] that attach to one another to form rafts fulfills the same system-level function as a lifeboat, but one was "designed" and manufactured through self-organization [151], whereas the other is a commercial product built at a factory. In the design of a lifeboat, the designer can consider all levels of the system in synthesis and analysis, but if a designer were to mimic the ant rafts, he would only have control over the ant "hardware" and behaviors. This comparison is illustrated in Figure 7.4.

[Figure 7.4: Comparison of two rafts, one built by self-organization, and one by conventional manufacturing. For the self-organized raft, analysis occurs at the system level while control and design occur only at the local (agent) level; for the conventional raft, analysis, control, and design can occur at both the system and local levels. (Photo credit: Mlot and Hu, Georgia Institute of Technology [150])]
So we see that in SO system design, evaluation can occur at the system level, but design and control only occur at the agent level. This gives two broad levels of system architecture that the designer must consider:

Agent level: a perspective considering only one agent
Group level: a perspective considering multiple agents, up to the size of the entire system

Group levels may display emergent properties that are beyond the capability of any one agent. The highest group level is the system level, where the system-level function must be fulfilled. The agent-level perspective also implies an agent state:

Agent state (S_agent): the attributes and values of an agent at a particular time

An agent can be described in terms of its actions, which affect the system state. The emergent properties of the system are built up from the actions and interactions of the agents. The system state subsumes the agents' states, so any action that affects an agent state by extension also affects the system state.

Action: a change in system state
Agent: a component of the system whose actions are controlled by an inner behavioral logic

Both words, "agent" and "action," come from the same Latin root¹ meaning "to do." The function of the system is fulfilled at the system level, but the agents perform the physical actions that make it happen. Without agents performing actions, the system's state would not change, and it would not have any interesting or useful behavior. In a self-organizing system, if there is a failure at the system level, we can assume that there is a deficiency in the capability of the agents or in the design of their interactions.

7.9 Self-organization vs. top-down design

The design of self-organizing systems differs from traditional design in a few important respects. Rather than rigid hardware, the substrate of SO systems is active components, such as robots or cells, which can have internal intentions and satisfaction.
Because the agents that form the basis of the system are not inert components, their behavior depends on both their internal control logic and the physical forces surrounding them. In this way, the analysis of their behavior is more similar to the analysis of a computer program than of a cantilever beam. The designer is able to actually design the behavior of the system components, rather than just designing a structure whose behavior is a deterministic response to its environment and inputs.

7.9.1 Behavioral design at the agent level is the key to creating self-organizing systems

Agent behaviors are the foundation of all self-organizing systems. From a philosophical perspective, Heylighen [91] argues that the essential building blocks of the universe are actions and interactions, rather than matter and energy. A state is a collection of all possible actions that can occur during that state. An "agent" in his ontology can be anything from molecules, to cells, to organizations. It is simply that part of the state which is necessary for the performance of an action and which persists after the action is performed. The physical substrate of the agents and states is never mentioned in his ontology.

(Footnote 1: The infinitive form agere and the passive form actus, respectively.)

This argument is supported by practical applications from the SO systems literature. Doursat [57] advises that as systems become more complex, engineers should focus less on rigid design and instead spend effort on meta-design of the conditions that allow self-assembly and self-regulation to take place. Bar-Yam [11] argues that in a highly complex system, it is the interactions among components that require the most engineering effort, rather than the components themselves.
Also, a designer may wish to re-use agents or buy off-the-shelf robots; with no freedom to specify the hardware, behavioral design would be the engineer's only point of intervention. In light of Heylighen's work, robotic agents are much more concrete, in that they can be physically demarcated by their structure, but their actions and interactions do form the base of the system-level functionality. So what was intuitive to designers working with self-organizing systems seems to be a consequence of deeper philosophical inquiry. Self-organizing systems design requires endowing the agents with fundamental capabilities and creating a rich search space from which emergent suitable behavior will be found [58]. If the behaviors are sufficiently simple, creating the agent hardware is a task in routine² mechanical design.

7.10 Behavioral design

7.10.1 Behavior capacity vs. behavior regulation

As mentioned in Section 7.6, at the beginning of the design, system functions are decomposed into smaller sub-functions that will fulfill the top-level function. This decomposition is usually done to transform a large, intractable problem into a series of smaller, more manageable problems that can be assigned to components. In self-organizing systems, system-level functions rarely map cleanly to sub-functions. It may be necessary to design for emergent system-level functions and use a combination of agent-level and group-level functions in the decomposition. While performing decomposition, there are a few capabilities that the designer can confidently declare must be present in the agents. For example, in Trianni's work on a cooperative lifting task [217], the basic capacity of attaching to the object to be lifted was an obvious necessity and was included in the design of the agents, while the interactive mechanisms for cooperative lifting were left for later design.
(Footnote 2: This is not to say that the process of hardware design is trivial, only that it is one in which engineers have traditionally excelled.)

In Reynolds' Boids simulation [183], the actions of centering, velocity matching, and collision avoidance were assigned to flocking agents, and Reynolds explored different methods for choosing among them. The capability to perform these actions can be justified, but the control mechanism for the timing and execution of these actions is another complicated phase of design, so we separate the agent behavior into two components: behavior capacity and behavior regulation.

Behavior capacity: the set of actions that an agent is able to perform
Behavior regulation: the method for choosing an action at a particular time

Specifying the behavior capacity and behavior regulation is quite difficult, since the link between individual action and global behavior can be nonlinear and unintuitive. Nonetheless, it can be accomplished by a combination of analogy, analysis, recombination, and designer intuition.

7.10.2 Two-field-based behavior regulation

Behavior regulation design is the process of creating the decision strategies that agents use to choose among the elements of their behavior capacity. In order to bring greater detail to the agent behavior portion of the ontology, I include field-based behavior regulation, a useful concept that has been successful in the design of several SO systems [101, 104] and is general enough to describe a wide class of systems.

Field: a mathematical abstraction of influence acting in space

In nature, fields of morphogens cause the self-organized formation of organs and bodies. Gravitational fields determine the orbits of the celestial bodies around the sun. In a more figurative sense, the course of one's career path may appear to follow certain trajectories as if it were being acted upon by an external field [140].
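The capacity/regulation split can be made concrete with a small sketch. This is a hypothetical, Boids-flavored illustration (the action names and the priority rule are invented for the example, not Reynolds' actual method): the capacity is a fixed set of actions, and regulation is the separate logic that picks one at each time step.

```python
import random

# Hypothetical sketch: behavior capacity is the set of actions an agent can
# perform; behavior regulation is the method of choosing among them.
def cohere(agent):   return "move toward neighbors' center"
def align(agent):    return "match neighbors' velocity"
def separate(agent): return "steer away from nearest neighbor"

behavior_capacity = [separate, cohere, align]  # separate listed first (highest priority)

def regulate(agent, crowded):
    """Behavior regulation: collision avoidance preempts the other actions;
    otherwise pick among the remaining capacity at random."""
    return behavior_capacity[0] if crowded else random.choice(behavior_capacity[1:])
```

Changing the capacity (adding or removing actions) and changing the regulation (the choice rule) are independent design decisions, which is exactly why the ontology separates them.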
An example of field-based control is given in [4], where researchers simulate a system-level function of shape formation. The agents' behavior capacity was to move, propagate a field, and sense a field. Agents collectively followed one another's locally sensed field gradients in order to form the proper global shape. The most general way for agents to sense the field is to have them artificially calculate a field based on their external stimuli and internal state. This field can take any form and is not limited to the differential equations, continuity, or attenuation that many physical fields display. Another method is to have agents sense a field which is natural, such as light intensity, chemical concentration, fluid velocity, and the like. Another possibility, less often discussed in the literature, is to create a literal field to be sensed, but at a higher level than the agents can affect. For example, a designer could create a light intensity field on a laboratory floor with spotlights, while small robots react to the field. Or, in possible medical applications of self-organization, the work performed inside the body by agents would necessarily be self-organized, as the environment is dynamic and unpredictable. However, the sterile and orderly doctor's office could house a machine capable of creating useful fields to guide agents. In this way the designer creates an artificial physics: a literal and tangible field that exists independently of agent sensation³ and that agents can react to as they would physical fields. Fields can be more than simple attraction and repulsion; they can cause discrete actions as well. Also, fields do not have to be mathematically "well-behaved" like physical fields if they are artificially calculated by the agent; they can be discontinuous, non-monotonic, or infinite. Of course, there may be a convincing design rationale to use fields that are mathematically simple or that mimic what is found in nature.
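An artificially calculated field of the simple attraction/repulsion kind can be sketched in a few lines. This is a hypothetical illustration (the goal and obstacle coordinates are invented, not taken from any case study in this chapter): a goal contributes an attractive potential, each obstacle a repulsive one, and the agent acts by moving toward the lowest-potential neighboring point.

```python
import math

# Hypothetical sketch of an artificially calculated field: potential grows
# with distance to the goal (attraction) and near obstacles (repulsion).
GOAL = (10.0, 10.0)
OBSTACLES = [(5.0, 5.0)]

def field(x, y):
    """Scalar potential at (x, y); lower values are better for the agent."""
    attract = math.hypot(x - GOAL[0], y - GOAL[1])
    repel = sum(1.0 / (0.1 + math.hypot(x - ox, y - oy)) for ox, oy in OBSTACLES)
    return attract + repel

def step(x, y, d=0.5):
    """A field-regulated action: move to the lowest-potential neighbor."""
    candidates = [(x + d, y), (x - d, y), (x, y + d), (x, y - d)]
    return min(candidates, key=lambda p: field(*p))
```

Because the field is computed by the agent itself, nothing requires it to be continuous or physically realizable; the same agent code works unchanged for any scalar function of position.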
According to [224], a good field control strategy should be decentralized, adaptable, independent of the number of agents, and computationally inexpensive. Whether the field is figurative or literal, natural or calculated, it can be used to determine how an agent within it will act. If an agent considers its internal state as well as its field, then behavior regulation is a function of state and field:

action_{t+1} = FBR(S_{agent,t}, field_t)

where FBR stands for field-based behavior regulation. This ontology distinguishes between two sources of field stimuli: internal and external to the system. Any stimuli that form part of the system's environment or are relevant to its function are included in the task field (tField). Any stimuli coming from other agents are considered in the social field (sField). All agents sense their own positions and fields; the total of all tField information is the system's emergent knowledge of its environment, whereas the total of all sField information represents the system's (limited) knowledge of itself.

Task field

The task field of a CSO system represents the agents' perception of the environment and objects involved in the system's tasks. This includes terrain, laws of nature, responses to obstacles, and attraction toward the system's goal (when applicable). Chen's work [38] relies almost exclusively on the task field paradigm. In his box-pushing simulation, the agents do not communicate in any substantial way. With the exception of basic collision avoidance, they are entirely unconcerned with the presence of other agents. In this case, the task field was sufficiently described as a potential field caused by attracting (goal) and repelling (obstacle) objects in the task environment.

(Footnote 3: This approach would lose one advantage of self-organization: the lack of a centralized weak point in the system that could be compromised.)
(Footnote 3, continued: Nonetheless, it may still be useful in situations where autonomy and self-organization are necessary at a small scale, but traditionally designed, purpose-built systems are feasible at a higher scale.)

In some self-organizing systems, the agents interact with the object of the tasks in such a way that updating the system state gives agents information on how to further update the state toward the final goal; this is called "stigmergy." Stigmergy, or work-creating-work, can be used in collective robotics for gathering tasks [13, 200] and has been shown to be the mechanism by which termites organize to build their mounds [120]. "Extended stigmergy" [234] is a more extreme example, where blocks in a building task are encoded with RFID chips or other mechanisms for conveying more precise messages from agent to agent. In each case of stigmergic self-organization, the interactions among agents were indirect and unsynchronized, meaning that there was a possible delay between the time a message was sent and the time it was received. Since the medium of signaling was also the task object of interest, we can classify this self-organizing behavior as a response to a changing task field.

Social field

In addition to the task, an agent's behavior can be affected by other agents. The social field arises from agents' influence on one another and forms another layer of information for the system to use. Explicitly distinguishing between task field and social field allows the design of needed social behaviors for agents to form structures that aid in task completion. To cope with a given task field, an agent can explore and develop its social relations so that unfavorable global structures can be transformed into favorable ones, leading to the emergence of productive global behavior. Researchers have used various communication strategies among agents to explore social fields.
In some systems [40, 48, 183], agents sense one another's presence and react directly while traveling mostly through empty space. Thus they have little concern with a task field, and the social field dominates. Signals can also be used to build a social field: communication strategies such as "pherobots" [170] or hormone-inspired robotics [195] can be described as different ways of creating a social field. An extreme example of a social field is given in [4]. These authors use genetic programming (GP) to evolve mathematical formulas for field generation. Each cell in their simulation propagates a field in its local vicinity and senses the fields generated by others. Agents then follow the gradients of the field. In self-organizing systems, communication among agents is often not one-to-one. Rather, agents create a field of influence in their vicinity. The omission of one-to-one messaging makes it possible to have a truly homogeneous, distributed, and decentralized system, where an individual agent does not even have a unique identification. This adds resilience to the system, as the failure of one agent does not immediately imply failure of the system; another identical agent can always take its place.

Combined fields

A general trend in the survey of self-organization literature is that social field manipulation is associated with a self-organized structure of the system, whereas task field manipulation is used to cause the emergent fulfillment of system tasks. Many self-organizing systems display both structure and emergent functionality. In fact, the fulfillment of the task requirements often depends on the structure of the system. Organization theory claims that organizations form structures that are dependent on their task and environment [216].
This suggests that better system functionality can be achieved through the self-organized emergence of complementary social structure and behavior, which is why task and social fields are distinguished but used in tandem in this ontology.

Task field (tField): the influence of stimuli in the system's environment and task on agent behavior
Social field (sField): the influence of agents upon one another

field = {tField, sField}

7.11 Behavior selection

Field-based regulation can be decomposed into four parts: sensing, field generation, field transformation, and behavior selection.

Field generation: creating a field to abstract information from the task and environment
Field transformation: assigning a quantitative preference score to each available action
Behavior profile: the set of ⟨action, preference⟩ pairs output by the field transformation
Behavior selection: choosing an action to perform

Taking these steps together, the behavior can be represented as a nested function:

act_{t+1} = FBR_BS(FBR_FT(FLD_T(SNS_T), FLD_S(SNS_S, S_agent)))

where SNS_{T/S} is the output of the agents' sensors corresponding to the tField or sField, FLD_{T/S} is the field generation operator, FBR_FT is the field transformation operator, and FBR_BS is the behavior selection. This equation is shown as a flowchart in Figure 7.5. The steps of field transformation and behavior selection are considered separately as another reaction to uncertainty in the design process. A more straightforward approach would be simply to choose the ⟨action, preference⟩ pair with the highest preference in the behavior profile. Some systems, however, may benefit from a different selection process. For example, in a goal-seeking simulation, an agent with only local environmental object sensing was less likely to get trapped by obstacles if its behavior
selection was to choose randomly among the top 40% of ⟨action, preference⟩ pairs in its behavior profile than if it deterministically chose the best [104]. Choosing the highest preference score may be a valid selection strategy, but it is not the only option.

Figure 7.5: Behavior regulation as a transformation of task and social environments into actions

7.12 DNA-Based behavior representation

Finally, the designer must decide on a representation scheme for the agent's behavior. This behavior encoding is specified by the designer, and interpreted by the agent when it is deployed. Again, nature can be used as an inspiration. The previous work on CSO systems [104] introduced notions of behavioral DNA (bDNA) and design DNA (dDNA). The DNA is the description of the system that is carried by the agents of the system. In biological cells, DNA is dynamically interpreted through the process of transcription, which creates the protein building blocks of the organism. The interactions of the products of DNA build up to the system-level behavior. This comparison is summarized in Table 7.1.

Table 7.1: Comparison of natural and design DNA

                 Natural DNA                    Design DNA
Encoding         AGTC (nucleotide) sequence     Decision logic
Interpretation   Transcription                  Agent behavior
Output           Proteins                       Actions
Adaptation       Mutation, natural selection    Detail design, optimization

Knowing that the design process works by decreasing levels of abstraction from conceptual designs to detailed designs, the concept of dDNA should be refined for the design ontology.
Design DNA (dDNA): the portion of the agent's behavior encoding that is not fixed during conceptual design, but is subject to later revision and optimization

Transcription: the portion of the agent's behavior encoding that interprets and applies the information in the dDNA

In practice, the behavior encoding will normally take the form of a computer program in the design of robotic or simulated self-organizing systems. This approach can be used to describe the work of Trianni [217], who optimized control mechanisms for self-organizing robots using an artificial neural network. In that work, the connections among nodes of the neural network and their outputs to the robots' actuators are the transcription, and the weights of the connections (what was optimized) are the dDNA.

Many designers wish to create SO systems from the interactions of homogeneous agents. This is particularly attractive when one considers that the scale of such systems could be immense. As the number of agents in a system increases, there is a higher probability of at least one agent failure. If all agents are running the same behavior algorithms (same dDNA and same transcription), then there is an inherent redundancy in the system, as any nearby agent can take the place of the failed agent. This is one of the positive attributes of self-organizing systems that is found in many examples from the literature [40, 229, 182, 217]. Note that sharing the same dDNA does not imply that all agents will display identical behavior. Just as the cells of the human body differentiate into skin cells, neurons, etc., so the "cells" of a mechanical SO system can differentiate based on their individual state and local stimuli.

7.13 Summary of ontology: building a parametric behavioral model

Parametric design is a technique to bridge the conceptual and detailed design phases [101].
If agent behavior is parameterized, the decision structure can be left intact after the conceptual design stage while the parameters are optimized, perhaps by a software algorithm. During detail design, bounds on the parameter values must be respected so that any combination results in a working (but not necessarily optimal) agent behavioral model. This parameter range, coupled with agent interactions, creates a rich search space with the possibility to find novel, useful emergent behavior. Figure 7.6 shows an example behavior encoding for agents' stepping behavior from the previous work on flocking [40], separated into transcription and dDNA, where the dDNA can be optimized as parameters of the parametric behavioral model (PBM). Different parameter sets lead to different emergent behaviors. In Figure 7.6, the first term is cohesion (agents moving toward one another), the second is avoidance (agents moving away from one another), and the third is alignment (agents setting their headings equal to one another). C, O, and A are the relative weights of these behavioral primitives, encoded as dDNA. The mathematical terms from the transcription are a representation of Boids flocking [183, 39]. The ability to change and optimize the dDNA leads to flexibility in system-level behavior. Flocking is studied in much greater depth in Chapter 8. All of the terms defined in this section are summarized in Table 7.2. Most of the terms are entities that the designer must consider, but the perspective levels are modifiers that may be used to clarify which level of the entity is being considered, and the field-based regulation operators are functions. If a designer accounts for these entities, he should have a functionally complete description of the agent behavior and a model of the system for simulation and analysis.
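To make the parametric behavioral model concrete, the field-transformation, behavior-profile, and behavior-selection steps can be sketched in code. This is only an illustrative sketch, not the dissertation's implementation: the scoring formula, function names, and values are hypothetical stand-ins for a transcription, with the dDNA weights {C, O, A} left as tunable parameters.

```python
import math
import random

def field_transformation(candidate_headings, neighbors, dDNA):
    """Score each candidate action (a heading angle, in radians) against all
    neighbors, producing the behavior profile: a list of (action, preference)
    pairs. The structure of the formula plays the role of the transcription;
    the weights in dDNA are the optimizable design DNA."""
    profile = []
    for theta in candidate_headings:
        score = 0.0
        for dist, bearing, heading in neighbors:  # (distance, bearing, heading) per neighbor
            score += dDNA["C"] * dist * math.cos(theta - bearing)  # cohesion: favor the direction of far neighbors
            score -= dDNA["O"] / dist * math.cos(theta - bearing)  # avoidance: penalize the direction of close neighbors
            score += dDNA["A"] * math.cos(theta - heading)         # alignment: favor neighbors' headings
        profile.append((theta, score))
    return profile

def behavior_selection(profile, top_fraction=0.4, rng=random):
    """Choose randomly among the top fraction of the behavior profile; with a
    tiny top_fraction this reduces to deterministically taking the argmax."""
    ranked = sorted(profile, key=lambda pair: pair[1], reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return rng.choice(ranked[:k])[0]
```

An agent would call field_transformation once per timestep (say, over 36 candidate headings) and feed the resulting profile to behavior_selection; swapping in a different dDNA dictionary changes the emergent behavior without touching the decision structure.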
Figure 7.6: Flocking agent behavior, where different dDNA parameter values result in different emergent behaviors. The transcription is a weighted sum of cohesion, avoidance, and alignment terms; example dDNA sets {C, O, A} are High Cohesion {1, 0.1, 0.1}, High Avoidance {0.1, 1, 0.1}, and High Alignment {0.1, 0.2, 1}. The lines represent motion traces painted by the moving agents.

Table 7.2: Summary of ontological terms (term, syntax, definition)

Function (Entity): the purpose of the system, what it is supposed to do
Behavior (Entity): what a system actually does
Structure (Entity): what a system is, physically
S_sys (Entity): system state, the attributes and values of a system at a particular time
Attractor (Entity): most compact subset of the state space that the system can reach, but cannot leave
Filter (Entity): observer's bias causing him to consider only a subset of the possible attributes in an S_sys
Satisfactory state (Entity): a state of the system where the values of the attributes are within the acceptable range of the observer
Unsatisfactory state (Entity): a state of the system where the values of the attributes are not within the observer's acceptable range
Performance attribute (Entity): property of the system indicating how well it is fulfilling its function
Prediction attribute (Entity): aspect of the system that is not directly relevant to the functional performance of the system, but indicates how the performance will evolve over time
Agent level (Modifier): perspective considering only one agent
Group level (Modifier): perspective considering multiple agents, up to the size of the entire system
S_agent (Entity): agent state, the attributes and values of an agent at a particular time
Action (Entity): a change in system state
Agent (Entity): component of the system whose actions are controlled by an inner behavioral logic
Behavior capacity (Entity): set of actions that an agent is able to perform
Behavior regulation (Entity): method for choosing an action at a particular time
Behavior encoding (Entity): set of instructions that an agent will interpret to regulate its behavior
Field (Entity): mathematical abstraction of influence acting in space
tField (Entity): task field, influence of stimuli in the system's environment and task on agent behavior
sField (Entity): social field, influence of agents upon one another
Field generation (Function): creating a field to abstract information from the task and environment
Field transformation (Function): assigning a quantitative preference score to each available action
Behavior selection (Function): choosing an action to perform
Behavior profile (Entity): set of ⟨action, preference⟩ pairs output by the field transformation
dDNA (Entity): design DNA, the portion of an agent's behavior encoding that is not fixed during conceptual design, but is subject to later revision and optimization
Transcription (Entity): portion of an agent's behavior encoding that interprets and applies information found in dDNA

Figure 7.7: Self-organizing system design methodology, integrating entities from the design ontology

7.14 Design methodology

Figure 7.7 shows a proposed design methodology that integrates the entities from the ontology. The only terms in the methodology that have not previously been defined are the constraint analysis, simulation, and optimization processes. Constraint analysis involves analyzing the intended agent hardware to determine what can feasibly be included in the behavior capacity. The methodology is similar to other design or systems engineering methodologies (e.g. [178, 138]) but with several distinctive features tailored to SO systems:

Agent behavioral design: This is the defining feature of SO systems. The system components are not inert. They are programmable, and their behavior, not just their physical form, can be designed [100].
The behavioral capacity is a set of behavioral primitives, and the behavioral selection is an algorithm for applying the behavioral primitives at a given time. Field-based behavioral design is recommended for this stage. The output of this step is a parametric behavioral model that can be simulated and optimized.

Simulation/Optimization: Simulation of system behavior is included as a de facto necessity because mathematical local-to-global analysis is difficult for complex SO systems. Without mathematical convergence proofs, it is necessary to do extensive simulation and testing of complex systems to build confidence in their performance [61]. Optimization can be combined with simulation to perform detailed design of dDNA [102]. This technique is very powerful if the optimization method can handle nonlinear systems with many interacting components [31, 204, 66].

As with any methodology, the design process will not be as linear as indicated in Figure 7.7. There may be many iterations that result from new discoveries at each design phase. Nonetheless it provides a good starting point for systematic design of self-organizing systems, with the ontology focusing designers' efforts on the most important features of the system.

7.15 Research implications

Every decision made in the creation of this ontology opens up research questions. The definition of entities and the points of focus are subject to debate and inquiry. The overall ontology and computational synthesis approach also create a platform for inquiry into the functionality of specific systems. Most importantly, features of adaptability can be captured and measured using this approach. Example applications following this ontology and methodology, research questions, and results are given in Part IV of this dissertation.

Part IV

Case Studies

Chapter 8

Flocking and Exploration

Reynolds [183] is credited with creating the first simple flocking algorithm.
His motivation was to reduce the amount of time necessary for a computer animator to accurately portray a dynamic flock of birds. Instead of prescribing every path for every bird, he relied on independent agents, following a combination of three simple rules:

Collision Avoidance
Velocity Matching
Centering

where "Centering" is an agent's desire to stay near the center of the flock. From these simple local behaviors, an emergent behavior instantly recognizable as flocking could be animated with relative ease. Such behavior is pervasive in natural systems. The flocking of birds is well known. Similar emergent behavior is seen in herding, schooling [110], or movement of human crowds [153]. The behavior is robust to disturbances from obstacles and predators. Even when disturbed, the formations adapt and regroup [183]. Flocking is also massively scalable, as locust flocks (perhaps "plagues" is a more accurate word) can reach a size of 10^9 insects [29]. Embedding these capabilities in engineered systems could have several practical advantages.

8.1 Practical applications

Coordination of autonomous vehicle groups as flocks is a potential application of flocking insights. The military is particularly interested in this type of control [161]. This knowledge can also be applied to other man-made phenomena such as urban traffic flow [146] or even coordination of assistive transportation devices for the disabled [205]. Rather than taking a controls approach [194], vehicles can be modeled as self-organizing units interacting only with near neighbors. For driverless cars to become mainstream [223], they may need to rely on reactive flocking behaviors to avoid collisions with other cars if they do not have a common communication protocol.

8.2 Research questions

Is field-based behavior regulation a viable approach?
The ontology and methodology developed in Chapter 7 will be tested on a well-known SO pattern, to attempt to recreate the flocking behaviors of Reynolds [183] and Chiang [39]. If the behavioral primitives can be captured through field-based regulation, and the behavioral model parameterized and optimized, it will validate the design approach in this dissertation.

How flexible is a self-organizing flocking system?

If the system is parameterized, can we develop behaviors besides flocking from the parametric behavioral model (PBM)?

How repeatable is the genetic algorithm?

GAs are partially stochastic. Because they do not exhaustively search a parameter space, there is no guarantee that multiple GAs will converge to the same parameter set. In this chapter, I perform at least 5 GAs for each optimization in order to investigate the different evolved parameter sets.

What kind of learning takes place across GA generations?

The agents have no global knowledge of the system or its environment. They operate only according to local stimuli. In order to achieve higher organization and functionality, their interactions must combine to reflect some overall properties of their task and environment, such as the size of the field. A successful optimization will embed this knowledge implicitly in the dDNA parameters across generations.

8.3 The COARM behavioral model

An in-depth description of flocking capabilities in a CSO system is given in [39], where various relative weights given to flocking behaviors at the local level result in standard flocking, spreading, obstacle avoidance, or searching activity at the system level. Agent behavior is a weighted sum of 5 competing desired step vectors. The 5 behavioral primitives are defined here:

Cohesion: step toward the center of mass of neighboring agents
Avoidance: step away from agents that are too close
Alignment: step in the direction that neighboring agents are headed
Randomness: step in a random direction
Momentum: step in the same direction as the last timestep

The acronym COARM is used to refer to these behaviors. An agent calculates the Cohesion, Avoidance, and Alignment vectors according to the following formulas:

C⃗ = (1/N) Σ_{i∈η} x⃗_i    (8.1)

O⃗ = (1/N) Σ_{i∈η} x⃗_i / ‖x⃗_i‖²    (8.2)

A⃗ = (1/N) Σ_{i∈η} v⃗_i / ‖v⃗_i‖    (8.3)

where i ∈ η signifies that agent i is in the neighborhood of the agent calculating its direction, x⃗_i is the vector from an agent to its neighbor, and v⃗_i is the velocity of a neighbor. The direction vectors are added together according to a set of relative weights. All agents make their step decisions in parallel.

8.4 A field-based flocking model

As a proof of concept for the field-based behavioral design framework, here I re-create the behavior of the COARM system using a social field. In the field-based approach, agents calculate an artificial field in their vicinity according to Equation 8.4.

FLD(r, θ, φ) = C (1/N) Σ_{i∈η} r_i + O (1/N) Σ_{i∈η} (1/r_i) + A (1/N) Σ_{i∈η} |v⃗_i| cos(θ − φ_i) + R s_max cos(θ − RA) + M |v⃗_0| cos(θ)    (8.4)

where v⃗_0 is an agent's current velocity (previous step size and direction), RA is an angle output by a random number generator, s_max is the agent's maximum step size, θ is the angle from an agent's current heading, v⃗ is a neighbor's velocity, φ is a neighbor's current heading relative to the agent, and r is the scalar distance from the neighbor. Note that terms involving r are measured from the neighbor's position, and terms involving angles are measured with the agent at the origin with a heading of 0°. These variables are displayed in Figure 8.1. The social field policies are applied to every neighbor in an agent's radius of detection, and all terms from Equation 8.4 are added together to calculate the sField. The COARM acronym returns in Equation 8.4, this time to represent the parameters in the field transformation equation.
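Under the stated assumptions (2-D vectors, with the neighborhood η already determined), the direction vectors of Equations 8.1–8.3 can be sketched directly. The following Python fragment is an illustration with names of my choosing, not the dissertation's NetLogo code:

```python
import math

def coarm_vectors(rel_positions, neighbor_velocities):
    """Compute the Cohesion, Avoidance (O), and Alignment vectors of
    Eqs. 8.1-8.3 for one agent. rel_positions holds the vectors x_i from the
    agent to each neighbor in its neighborhood; neighbor_velocities holds
    each neighbor's velocity v_i. All vectors are 2-D (x, y) tuples."""
    N = len(rel_positions)
    C = [sum(x[k] for x in rel_positions) / N for k in (0, 1)]              # Eq. 8.1
    O = [sum(x[k] / (x[0] ** 2 + x[1] ** 2) for x in rel_positions) / N     # Eq. 8.2
         for k in (0, 1)]
    A = [sum(v[k] / math.hypot(v[0], v[1]) for v in neighbor_velocities) / N  # Eq. 8.3
         for k in (0, 1)]
    return C, O, A
```

The agent's step is then a weighted sum of these vectors, plus the Randomness and Momentum contributions, using the COARM weights.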
Equation 8.4 maps each point in the plane to a preference value, creating the necessary ⟨behavior, preference⟩ set for behavior selection. The behavior selection is to simply move to the location with the highest preference within the maximum stepping distance. In this chapter, the maximum stepping distance will always be 0.7 pw, where 1 pw is the width of a patch in the NetLogo simulation environment (for reference, in Figure 8.4, an agent would fit inside a 1 pw diameter circle, and the arena is 100 pw wide).

Figure 8.1: Agent and neighbor frames of reference

8.5 Simulation and optimization

8.5.1 NetLogo simulation platform

NetLogo [238] is a free multi-agent simulation platform that is well suited to study distributed systems with emergent behavior. NetLogo has an optional API controller that can be run on a Java virtual machine. A simple command() function in the Java program can send arguments and commands to the NetLogo software. In this way, all system parameters can be set, and the simulations can be called by a Java GA program with no further input from the human operator.

8.5.2 Flocking simulation specifications

The agents in this case study represent small robots moving on a 2-D surface in a toroidal world. Each robot has a limited sensory range and very little memory. At each timestep, an agent will sense the positions and headings of all the other agents within its radius of vision and react according to its field-based behavior algorithm. To initialize the system, an empty field is populated with 30 agents with random initial coordinates and headings. The COARM parameters are set by the GA, and the simulation is allowed to run for 250 timesteps. At the end of the simulation, the momentum of the entire flock M⃗ and total fitness are calculated assuming unit mass for each agent:
M⃗ = Σ_{i=1}^{N} v⃗_i    (8.5)

fitness = ‖M⃗‖ / N    (8.6)

where N is the total number of agents.

8.5.3 Genetic algorithm specifications

The genetic algorithm is a custom program written in Java that controls a multi-agent simulation using the NetLogo API. A string of 40 bits (five 8-bit binary numbers) is interpreted as the weights of the five COARM directions. These 8-bit numbers in the range 0–255 can be mapped to the COARM weights in the range 0.02–50 by the function f(x) = 1.0313^(x−127). The relative weight of Randomness was fixed at 1.00 because it was primarily the ratios of weights that mattered, not the absolute magnitude. These weights are assigned homogeneously to every agent in a simulation before the simulation starts, and they remain constant during a simulation. The GA assigns the parameters, initializes the simulation, gathers the results, assigns fitness, applies the GA operators, and begins the next generation. Fitness scaling [188] was used so that the best candidate in any generation had a fitness 30% greater than the average of all candidates in that generation. The best candidate at each generation was cloned to the next generation, and the remaining candidates were randomly selected, with replacement, for single-point crossover with probability proportional to their fitness, until the next generation of 15 candidates was full. All non-clones were allowed to mutate, with a probability of 1% per bit of genome. This process was repeated for 40 generations. A full GA run required 11–14 minutes on a laptop computer with an Intel 2.2 GHz dual core processor and 4 GB of RAM.

8.5.4 Results

Figure 8.2 shows the fitness progression across GA generations. This run of the GA, which allowed unrestricted manipulation of the COARM parameters, displayed intuitive results. As one would expect, the Alignment parameter was quickly maximized (Figure 8.3) so that all agents would have an overwhelming tendency to match their heading to their neighbors.
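The genome decoding of Section 8.5.3 and the fitness measure of Equations 8.5–8.6 can be sketched as follows. The dissertation's GA was a Java program driving NetLogo; this is a Python illustration with my own names, assuming the weight mapping f(x) = 1.0313^(x−127):

```python
import math

def decode_genome(bits):
    """Decode a 40-bit string (five 8-bit binary numbers) into the five COARM
    weights: each 8-bit value x in 0-255 maps to 1.0313**(x - 127), spanning
    roughly 0.02 to 50."""
    assert len(bits) == 40
    values = [int(bits[i:i + 8], 2) for i in range(0, 40, 8)]
    return {name: 1.0313 ** (x - 127) for name, x in zip("COARM", values)}

def flock_fitness(velocities):
    """Eqs. 8.5-8.6: sum the agent velocities into the flock momentum vector
    (unit mass per agent) and divide its magnitude by the number of agents."""
    Mx = sum(v[0] for v in velocities)
    My = sum(v[1] for v in velocities)
    return math.hypot(Mx, My) / len(velocities)
```

A perfectly aligned flock moving at a common velocity maximizes this fitness, while agents moving in opposing directions cancel one another's contribution to the momentum sum.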
Cohesion and Avoidance generally tracked one another, with the C/O ratio varying from 0.2–0.5, which allowed the agents to keep a suitable separation distance from one another: not too close, to avoid collisions and flock segmentation, and not too far, to avoid flock dispersion. Figure 8.4 shows this system's emergent behavior. Other flocking GA runs produced results that were qualitatively similar in relative parameter values and system-level behavior.

Figure 8.2: Typical flocking fitness evolution

Figure 8.3: Typical flocking average COARM parameter evolution

Figure 8.4: Screenshot sequence of optimized system's flock formation (t = 0, t = 100, t = 250)

8.5.5 Flocking with A = R

The high-A optimization strategy seemed obvious, as the Alignment behavior causes agents to match their heading with their neighbors. Given enough time, it seems intuitive that a system of agents focused on local alignment will eventually reach a state of global alignment, so in order to further challenge the algorithm, a new flocking behavioral model was established with the restriction that the relative weights of Alignment and Randomness must be equal (set to 1 in this instance). In the original COARM formulation, R was meant to model system limitations such as noise in sensors or error in motor output [39], so here it is used to constrain a designer. Any relative importance placed on Alignment (which in practice would require more costly and precise sensors and actuators) will be paired with increased Randomness (representing the reduced performance of the rest of the hardware).
This makes the optimization much more difficult, as Randomness tends to counteract the tendency toward coherent flocking that Alignment builds. The results are given in Figures 8.5 and 8.6. It can be seen from these results that the flocking task with A = R is a much more difficult problem to solve than pure flocking. These results showed more inconsistency when compared to the unrestricted flocking example, but by the final generation, the best-of-generation candidates did display high fitness values, indicating that they were moving as a group with a single heading. The "strategy" found by the GA was to make both Alignment and Randomness mostly irrelevant by keeping the Momentum weight very high. As long as the Cohesion and Avoidance balanced each other (it was sufficient that they differ by a factor of less than 5), the Momentum could act as a sort of system memory, allowing the Alignment tendency to slowly build up during the course of the simulation, while the Randomness effects canceled themselves out.

Figure 8.5: Typical GA fitness evolution with system restricted by A = R

Figure 8.6: Typical dDNA parameter evolution with system restricted by A = R

Table 8.1: Best dDNA set from final generation for 100% exploration

Parameter   C        O       A        R   M
Value       0.6862   17.99   0.3242   1   25.64

Figure 8.7: High-O high-M exploration system, showing random distributed exploring behavior

8.6 Exploration

One of the goals in designing self-organizing systems is flexibility, so it would be useful for designers to use the same hardware assumptions and behavioral model as the flocking simulation to perform a different task: exploration. The change in functionality is a result of the change in relative parameter values.
Here, 11 agents are initially placed in a line in the center of an arena filled with white patches. Agents discover patches by darkening all patches within a 1.5 pw radius at each timestep. A simulation lasts 200 timesteps, and the fitness of the simulation depends on how closely the system comes to discovering the entire field.

8.6.1 Results

Table 8.1 shows the optimized parameter set for full exploration. The high O and M predictably lead to the agents' spreading out quickly in all directions and continuing in a set direction until they sense another cell. Then the two colliding cells turn and travel in other directions. This allows agents to work in parallel to discover new territory, stay out of one another's way, and avoid rediscovering trodden ground. This happened in most GAs, and intuitively fits what a human designer might try if he were asked to assign a set of parameters to accomplish exploration. The system-level behavior is shown in Figure 8.7.

As is often the case in emergent systems, what is intuitive to the human designer may fail to create optimal local rules [47]. This intuitive high-O high-M strategy performs well (fitness score in the 0.8–0.9 range), but improvement was found in other GA runs. At the end of certain GA runs, the dominant parameters were Alignment and Momentum, rather than Avoidance and Momentum. Intuitively, it would seem that such high A values would lead to a single flock which was too cohesive and thus could not send individual cells to explore new territory, but the "clever" output of the algorithm actually resulted in precisely balanced parameters whose C/O ratio, along with high Alignment, caused an emergent fanning and sweeping behavior, as shown in Figure 8.8.

Figure 8.8: Screenshot sequence of high-A, high-M system showing a fan and sweep technique for exploration (t = 0, t = 50, t = 100, t = 200)
Systems using the high-A high-M strategy bested the high-O strategy by uncovering about 95% of the field. The agents in these systems were able to spread to the width of the arena and complete a full lap within the 200 timestep limit. Due to the stochastic nature of the algorithm, it is impossible to determine a priori which set of parameters a GA will converge to, if it converges at all. This is why it is important to run multiple GAs on one problem, or insert extra GA steps to ensure a diverse population is maintained. Note also that this sweeping behavior is dependent on the initial conditions of the flock (all in a straight line, facing up), and should not be expected to arise in a population with a different initial configuration. If we can expect this initial condition in a real-world deployment of exploratory robots, then the optimization is valid and fan/sweep is a useful emergent strategy for exploration. However, if that assumption is false, then the designer has fallen for the common trap of optimizing to a specific condition and fitness function, rather than a general design intent [31]. It is quite possible that in other initial configurations, other emergent problem-solving patterns would be evolved.

8.6.2 Percentage-targeted exploration

Encouraged by the results of the exploration GA, the behavioral model was further tested by selecting for parameter sets that would discover only a certain portion of the field within the allotted 200 timesteps. This forces the system to exhibit some sort of restraint or throttling, as the previous results showed that full-on exploration can reliably discover at least 85% of the field. The target exploration values range from 0% to 75%. To achieve this, the fitness function is modified from full exploration.
An example fitness function that optimizes for 25% exploration is given:

fitness = { 0.75 + P_exp,   0 ≤ P_exp < 0.25
          { 1.25 − P_exp,   0.25 ≤ P_exp ≤ 1        (8.7)

where P_exp is the percentage of the arena that the agents explored. This fitness function is modified for every exploration level so that it has a maximum of 1 for its respective target level.

Targeting 0%

The search for no exploration (0%) predictably led to high-C behavior which caused immediate clustering and prohibited any agents from leaving the initial pack. This clustering behavior is sometimes discussed in the literature as a desirable behavior in self-organizing systems [84], but here it evolves when the system requirements are to not explore the system's surroundings. This pattern was repeated for every run of the GA, and there was almost always at least one candidate randomly discovered in the first generation which achieved the maximum possible fitness. These results may not be surprising or theoretically interesting, but they are included to illustrate the point that sometimes a task is actually too simple to justify the computational cost of a GA, and a designer must take these time tradeoffs into account.

Targeting 25%

The search for 25% exploration produced more interesting results. These GAs tended to converge to one of two behavioral profiles. Table 8.2 and Figure 8.9 illustrate a common high-O low-M behavioral profile that was evolved. The strategy for this flock was to immediately disperse so that the agents were outside of one another's field of vision. Then, once they were "on an island," the Random behavior dominated, and they randomly moved about a point (Randomness and Momentum are the only behaviors that act when an agent has no neighbors), uncovering just enough new territory to move the system near 25%.
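Equation 8.7 extends naturally to the other target levels. One plausible generalization, consistent with the requirement that the fitness peak at 1 for the target level, can be sketched as follows (an illustration, not the dissertation's exact code):

```python
def targeted_fitness(p_explored, target):
    """Piecewise fitness in the spirit of Eq. 8.7, generalized to an arbitrary
    target fraction: rises linearly to a maximum of 1 at p_explored == target,
    then falls linearly as the system over-explores."""
    if p_explored < target:
        return (1.0 - target) + p_explored
    return (1.0 + target) - p_explored
```

With target = 0.25 this reproduces Equation 8.7 exactly: 0.75 + P_exp below the target and 1.25 − P_exp above it.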
Table 8.2: Fittest high-O low-M dDNA set from final generation for 25% exploration

Parameter   C        O       A        R   M
Value       0.4964   35.85   0.4792   1   0.1878

Figure 8.9: High-O, low-M system for 25% exploration

Another typical evolved behavioral profile is illustrated in Table 8.3 and Figure 8.10. The figure shows high-C/O, high-A behavior, which causes tightly packed, single-file flocks to emerge. The high C/O ratio causes the tight packing, and the high A value ensures that the most stable configurations are long trains. These flocks do not venture far from the initial configuration, but extend out and uncover a narrow swath of territory, exploring only enough to reliably uncover 25% of the field's territory within 200 steps. Again we see that a GA can converge to one of two (or more) qualitatively different strategies. The first strategy (high-O low-M) is dependent on the agents' field of vision, which was fixed for all experiments. If the radius of vision were larger, we should expect this strategy to uncover too much of the field, as the agents would pass the 25% mark before their random behavior came to dominate. The second behavioral profile (high-C/O high-A) was dependent on the number of timesteps that the system was allowed, as the agents were still active at the 200th step, and would have discovered too much of the field if allowed to run further. These results remind us of the importance of considering fixed values in the agents' capabilities and the translation from simulation to reality in the formation of the fitness function.

Table 8.3: Fittest high-A high-M dDNA set from final generation for 25% exploration

Parameter   C       O         A       R   M
Value       6.762   0.05472   22.24   1   6.297

Figure 8.10: System-level behavior of optimized dDNA for 25% exploration, with emergent single-file area coverage.
Recall that the simulation world is toroidal, so agents that exit out the top of the screen reappear at the bottom of the screen, and vice versa.

Targeting 50%

The GA that selected for 50% exploration gave results typically resembling Figure 8.11 or Figure 8.12. The GAs that optimized a high-O strategy showed behavior similar to Figure 8.9 for 25% exploration, but with a higher Momentum value so that the flock would spread out more before the Randomness behavior dominated. The high-A, high-M behavior allowed a single flock to form and simply travel ahead while maintaining a roughly constant separation distance. This works because in the flock's initial state, the agents' vision covers about half of the horizontal row. If the agents maintain this formation and simply sweep the field once, they will reliably uncover about 50% of the available field. The end result is shown in Figure 8.13.

Targeting 75%

The results for 75% exploration were very similar to the first results shown in Section 8.6.1 for 100% exploration. This strategy (high-M, high-O) was found in all runs of the GA. It results in system behavior of spreading out quickly and then traveling in straight lines while avoiding neighbors.

8.7 Discussion

The case studies in this chapter were meant to validate the proposed design ontology and methodology by re-creating well-known flocking behaviors and testing for repeatability of the results.
These local rules in this chapter were successfully tuned by a GA, but an engineer must take a critical look at the results from any automated design algorithm, because computational synthesis cannot yet replace the human designer entirely.

Figure 8.11: Typical dDNA evolution across generations for 50% exploration

Figure 8.12: Parameter evolution that converged to a high-O strategy for 50% exploration

Figure 8.13: Flock maintenance behavior for 50% exploration, relying on high alignment and momentum

In many cases, the GAs were not repeatable, as repeated GA runs converged to very different parameter sets (e.g., the expansion vs. fan/sweep behaviors for 100% exploration). Some results may be spurious or optimized to the peculiarities of the fitness function rather than the real-world task requirements. Designers must be careful to ensure that the GA is truly optimizing for the attributes that the system will need in the real world, because there is a danger that the GA will simply take advantage of quirks in the initial conditions or fitness function.

Repeatability of any particular set of optimized agent parameters was not found to be an issue. This is because clones and near-clones of the best candidates are continually retested across generations. Thus, only the reliably successful candidates survive to the final generation. This is a useful result of using evolutionary optimization, but it should not be generalized or taken for granted. A designer of self-organizing systems will always need to carefully evaluate the repeatability of system behavior, whether empirically or through mathematical proofs of convergence. Further implications of these results are discussed in Chapter 12.
Chapter 9

Protective convoy

The system of interest in this case study is a self-organizing protective convoy modeled as a set of agents.

9.1 Convoy task

In this scenario, a slow-moving, important cargo must be transported across a field while bullets are fired in an attempt to destroy the cargo, and protectors attempt to block the attackers. This simple example is representative of several real-world problems, such as a barge being transported across the ocean while under attack by torpedoes, with a shield of smaller vessels intercepting the torpedoes. Such a system would be constantly changing and deteriorating as the protectors absorb damage from the torpedoes. Thus, this system needs to be resilient. Designing the protectors as a self-organizing system is a viable means of achieving this resilience.

The simulation environment used in this case study is NetLogo [238], an open-source multi-agent modeling software package. The optimization algorithm is a GA (written in Java) that can control NetLogo through its API. It treats the simulation as a black box, providing design variables as inputs and receiving global performance measures to evaluate fitness.

Figure 9.1 shows the key elements of the simulation. A large blue ship in the middle represents the cargo. The cargo is surrounded by protectors. Circular bullets attack the cargo from various directions. If a bullet reaches the cargo, it is recorded as a hit. If a bullet is intercepted by a protector, the bullet is neutralized, and the protector sustains some damage. Protectors can absorb up to 4 hits before they break down. As the cargo crosses the field, 60 bullets are fired in total, and with 15 protectors protecting the cargo, a perfect strategy could block all bullets.

Figure 9.1: Initial setup showing cargo, protectors, and bullets. The inset shows an enlarged view of bullets near the cargo.
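The black-box arrangement described above, where the GA only sees design variables in and a fitness score out, can be sketched as follows. This is an illustrative Python stand-in for the dissertation's Java GA; `simulate` stands in for a headless NetLogo run, and all names and operator choices are mine:

```python
import random

def optimize(simulate, bounds, pop_size=25, generations=40, seed=0):
    """Minimal black-box GA: `simulate` maps a list of design variables
    to a scalar fitness; `bounds` is one (low, high) pair per variable.

    Uses elitism (best candidate is cloned unchanged), tournament
    selection, one-point crossover, and per-gene mutation.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=simulate, reverse=True)
        nxt = [scored[0][:]]                      # elitism: best survives intact
        while len(nxt) < pop_size:
            p1 = max(rng.sample(scored, 3), key=simulate)   # tournament of 3
            p2 = max(rng.sample(scored, 3), key=simulate)
            cut = rng.randrange(1, len(bounds))             # one-point crossover
            child = p1[:cut] + p2[cut:]
            for j, (lo, hi) in enumerate(bounds):           # per-gene mutation
                if rng.random() < 0.05:
                    child[j] = rng.uniform(lo, hi)
            nxt.append(child)
        pop = nxt
    return max(pop, key=simulate)
```

In the real setup each `simulate` call is an expensive full simulation run, which is why population size and generation count are the dominant cost drivers.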
9.2 Research questions

Can a self-organizing system adapt and maintain its functionality in the face of damage or loss of agents? This question has implications for the resilience of SO systems. In many cases, it may be advantageous to sacrifice inexpensive autonomous agents for the sake of saving some more important item. This strategy can only work if the remaining agents form a resilient system that can still provide protection.

What balance should a protective system strike between discipline and aggression? This is a common tradeoff in process design and strategy. Protectors have the option to pursue bullets in their task field or to stay close to the cargo and maintain a formation through social field relationships. Many heuristic arguments can be made for aggressive or defensive strategies [211], but here the GA is allowed to decide implicitly between the two by selecting for dDNA that produces aggressive or defensive emergent behavior.

9.3 Design of self-organizing protective convoy

The protector hardware is assumed to be fixed, and the behaviors of the cargo and bullets are treated as given. Thus, the domain of the system designer is defined by the rules of interaction among the protectors. The protectors are modeled as agents with limited sensory capability. They can only sense bullets and other protectors on their side of the cargo. The protectors move at twice the speed of the cargo, while the bullets move at three times the speed of the cargo.

Table 9.1: Optimized design variables for protective convoy dDNA

Parameter   Optimized Value
p_d         23.86
c_d         3.212
w_p         3.030
w_c         3.491
w_b         0.05897

9.3.1 Behavioral design

The agents calculate "fields" of influence around the cargo, other protectors, and the bullets, and react at each timestep. Since the protectors only consider the bullets and other protectors that are on the same side of the ship as they are, the stimuli in their detection neighborhood can be expected to change throughout the simulation.
Their behavior is to step to the point, within the maximum step size, that has the highest field value. The field function is defined as:

f(x, y) = \frac{w_p}{N} \sum_{i \in N} \phi_{p_i} + w_c \phi_c + w_b \cos\theta   (9.1)

\phi_{p_i} = \begin{cases} d_i / p_d, & d_i < p_d \\ p_d / d_i, & d_i \ge p_d \end{cases}   (9.2)

\phi_c = \begin{cases} d_c / c_d, & d_c < c_d \\ c_d / d_c, & d_c \ge c_d \end{cases}   (9.3)

where \phi_{p_i} is the field contribution of one protector in the agent's neighborhood, d_i is the distance from the point to that protector, the summation is carried out over all N protectors in the agent's neighborhood, \phi_c is the cargo's contribution, d_c is the distance from the point to the cargo, and \theta is the angle formed by the point, the agent, and the nearest bullet. The design variables are the desired distances to the protectors (p_d) and cargo (c_d), and the relative weights w_p, w_c, and w_b of the three terms in the equation. Each agent applies Equation 9.1 in parallel at each timestep in the simulation.

9.3.2 Optimization

The optimized design is taken to be the best-performing set of design variables in the last generation. The optimized variables are given in Table 9.1. The GA operates on the 5 design variables. The desired distances are allowed to vary from 1 to 50 units, and the relative weights are allowed to vary from 1 to 20. Figure 9.2 shows the evolution of fitness as the GA operated on a population of 25 candidates for 40 generations. The fitness at the end of a simulation run is the percentage of bullets blocked as the cargo crosses the field. Repeated GA runs gave qualitatively similar results.

Figure 9.2: Fitness evolution across 40 GA generations

The simulation environment NetLogo is partially stochastic with regard to agent stepping order and choices among equal options, so the optimized system was re-tested for reliability. Through 20 repeated tests, it blocked, on average, 88.6% of incoming bullets.
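Evaluating Equations 9.1–9.3 at one candidate point can be sketched as below. The helper names are mine, and the sketch assumes θ is measured at the agent, between the candidate point and the nearest bullet, so candidate points in the bullet's direction score higher:

```python
import math

def phi(d: float, desired: float) -> float:
    """Unimodal attraction term (Eqs. 9.2-9.3): rises linearly to 1 at the
    desired distance, then decays as desired/d beyond it."""
    return d / desired if d < desired else desired / d

def field_value(point, agent, protectors, cargo, bullet,
                p_d, c_d, w_p, w_c, w_b):
    """Field value f(x, y) of Eq. 9.1 at a candidate point.

    All positions are (x, y) tuples; `protectors` lists the neighbors on
    the agent's side of the cargo. The agent steps to whichever point in
    its step radius scores highest.
    """
    n = len(protectors)
    prot_term = (w_p / n) * sum(phi(math.dist(point, q), p_d)
                                for q in protectors) if n else 0.0
    cargo_term = w_c * phi(math.dist(point, cargo), c_d)
    # cos(theta): theta at the agent between the candidate point and the
    # nearest bullet, so stepping toward the bullet raises the field value
    v1 = (point[0] - agent[0], point[1] - agent[1])
    v2 = (bullet[0] - agent[0], bullet[1] - agent[1])
    norm = math.hypot(*v1) * math.hypot(*v2)
    cos_theta = (v1[0] * v2[0] + v1[1] * v2[1]) / norm if norm else 0.0
    return prot_term + cargo_term + w_b * cos_theta
```

Because phi peaks at the desired distance, a protector is pulled outward when too close and inward when too far, which is what produces the stable perimeter described below.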
9.3.3 Optimized system-level behavior

To illustrate the system's behavior, Figure 9.3 shows a screenshot of this system from the mid-point of the cargo's trip, after 5 protectors have been destroyed and the cargo has sustained 1 hit. The evolved strategy was for each protector to hew closely to the cargo to establish a tight perimeter, with an equilibrium distance of 3.212 pw to the cargo (1.5 pw is the minimum to avoid collisions), and to focus system resources on maintaining equilibrium distances from the cargo and other protectors. The relative weight of the field generated by bullets was weak: less than 2% of the weight placed on maintaining equilibrium distances. Increased parameter values for the bullet field in earlier GA generations actually caused the protectors to chase bullets and break their formation, letting other bullets through the gaps. In this case, over-aggressiveness caused system vulnerabilities, and the GA selected for more conservative, defensive strategies.

Further study of the GA's early generations uncovered other designs with varying degrees of success, such as wider perimeters and trailing crescent formations, and several plainly unsuccessful parameter combinations that led to immediate collapse of the protectors and abandonment of the cargo. By using the design parameters of successful later-generation runs, the protectors were able to achieve the system-level goal of protecting the cargo by maintaining tight formations while solely maximizing a local utility function at each timestep.

Figure 9.3: Action screenshot of optimized system midway through a cargo run

9.4 Discussion

By enacting a defensive strategy, the system was able to intercept almost 90% of all bullets. Rather than selecting for systems that chased bullets, the GA selected for systems that "stayed home" and kept a formation around the cargo. This allowed the agents to be in a good position to intercept bullets when they came close.
It is important to note that the formation held even as agents took damage and were destroyed during the cargo run. By maintaining functionality even as its constituent agents dwindled in number, the optimized system clearly showed the potential for resilience in self-organizing systems. Further tests of resilience and emergent strategies are carried out in the case studies of Chapters 10 and 11. The results are compared and discussed further in Chapter 12.

Chapter 10

Foraging

Self-organized foraging is a behavior most notably displayed by ants, which communicate via pheromones. When a forager ant finds a food source, it lays down a pheromone trail as it carries the food back to its nest [14, 215, 33, 85]. This pheromone trail attracts more ants, which in turn find the same food source and lay down even more pheromone. In a positive feedback loop, the pheromone scent gets stronger until the majority of foraging ants are on a straight trail between their nest and the food source.¹

Most studies on foraging fixate on the pheromone aspect. Since depositing pheromones requires the storage and release of chemicals, it would be advantageous from an engineering perspective to solve this problem through other means. The design goal in this chapter is to accomplish foraging without the use of pheromones or individual memory of the food location. Investigation of this task leads to questions about system heterogeneity, scalability, and resilience.

10.1 Significance of foraging problem

Engineering a foraging system is another example of bio-inspired design. Several practical tasks could be aided by the indirect communication used by ants, and by extending the behavioral model from Chapter 8, more sophisticated systems can be developed from simple flocking-based agents.

10.1.1 Practical applications

Foraging is an essential task performed in a non-repeatable environment.
For example, throughout the lifecycle of an ant colony, foragers must continually find and return food to the nest. On any given attempt, they do not know where their daily bread may appear, but their survival depends on their ability to find it. An analogy to practical applications would be to use ant-like SO systems to "forage" for waste in cleanup tasks, or to harvest crops when they reach their peak ripeness. The only requirement for these strategies to work is that there be a large amount of the target resource in an area where it is first detected, because the agents will recruit more workers to the places they have already been. If the resource is uniformly randomly distributed, or agents can return the entire amount in one trip, then this recruitment does not help the system. Research implementations have been shown in [13, 200], where SO robot swarms were able to gather a distributed set of pucks or boxes into one area with no specific coordination among robots.

¹The interested reader can re-create this behavior in simulation using the Ant Model [238] bundled with the open-source NetLogo [237] software.

The stigmergic interactions of agents are actually so adept at solving search problems that they have been abstracted into an optimization algorithm, known as ant colony optimization (ACO) [55]. In ACO, virtual ants are sent to explore an optimization search space, leaving pheromone trails as they find areas of higher fitness. These pheromone trails eventually attract other virtual ants, concentrating the search effort in areas of high fitness. In this way, the optimization moves from a wide search to a local optimization, similar to the strategy of GAs and simulated annealing [149]. ACO is a general optimization algorithm, but it is particularly useful in applications where the user is searching for a path, as in traveling salesman problems [206].
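The pheromone feedback loop can be made concrete with a toy ACO run on a small traveling salesman instance. This is an illustrative sketch, not the algorithm of [55] in full; all names and parameter values are mine:

```python
import random

def aco_tsp(dist, n_ants=20, n_iters=50, evap=0.5, alpha=1.0, beta=2.0, seed=0):
    """Toy ant colony optimization over a symmetric TSP distance matrix.

    Virtual ants build tours city by city, choosing the next city with
    probability proportional to pheromone^alpha * (1/distance)^beta.
    Shorter tours deposit more pheromone, so later search concentrates
    around the best routes found so far.
    """
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        for i in range(n):                        # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - evap
        for tour, length in tours:                # deposit, inverse to length
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += 1.0 / length
                tau[j][i] += 1.0 / length
    return best_tour, best_len
```

Evaporation plays the role that pheromone decay plays in real colonies: it prevents early, mediocre trails from locking the search in place.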
This is actually a common pattern in the study of SO systems: first they are discovered and studied in nature; after they are understood, they are re-created analytically and through simulation; after the mechanisms are described and vetted, they are abstracted and applied to other fields; eventually, the mechanisms and applications may be changed and optimized to the point that they bear little resemblance to the original natural inspiration but show remarkable functionality. See [79] for another example of this progression, with genetic algorithms as the system of interest.

10.1.2 Key features for designers

This chapter describes an extension of the flocking primitives to a new application. Work on flocking has traditionally been based on Reynolds' Boids algorithm [183] of centering, velocity matching, and collision avoidance, while work on foraging has centered on random movement and pheromones. To bridge the two, how can a designer create a sufficient set of behavioral primitives? The answer will rely on a mixture of analogy, intuition, ontology-guided design, and iteration.

In the work leading up to this point, the focus of CSO research was either on sField relations, as in [41], or tField stimuli, as in [38]. In order to reach a higher level of system functionality, it may be necessary to combine the two, so that sField relations can create system structures while tField reactions govern their deployment in space. The designer then has to decide how to distinguish between sFields and tFields, and whether the agents should consider them simultaneously or separately. As with any extension of a behavioral model, there may be downsides to adding this complexity.

Figure 10.1: Initial configuration of 1-row foraging simulation. The red circle indicates the detection range of an agent.

Ants differentiate into different behavior modes of searching and returning food.²
They lay down a homing pheromone when they are searching for food and a food pheromone when they are carrying food [85]. This means that at any given time, different ants may be operating in different modes, performing different sub-functions; in the flocking studies, by contrast, all agents were homogeneous. To extend the behavioral model, this case study will introduce heterogeneity at the behavioral level through state changes.

Foraging is a perpetual function. If a foraging system is allowed to operate for more time, it is expected to find more food, unlike in one-off pass/fail tasks. Measures must be developed to rate the effectiveness of a foraging system that take into account not just whether food was returned, but how much was returned, and how quickly. The efficiency of the system must also be measured against the size of the system, and scalability must be studied to understand the risks of enlarging a system to meet new speed goals.

10.2 Foraging task and simulation

Figure 10.1 shows the initial setup of the foraging task. The objective is to maximize the amount of food returned to home within a time limit. Agents can sense food and other agents within 3 pw. When an agent moves onto a patch that contains food, it extracts 5 units of food from the resource and changes its color to green. If it carries the food back to the home base, it deposits the food and changes its color back to brown, and the system stores the food. Agents have no individual memory of the food location, and they can only sense it when it is within their radius of detection, but they can sense the direction toward home at all times.

²In fact, there are also soldier ants, nurse ants, drones, builders, and the queen, and most of this differentiation is triggered by the way they are nurtured as infants; it is not inherited [162].
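The pickup/drop rules just described amount to a two-state machine per agent, with color as the broadcast channel. A minimal sketch (the class and attribute names are mine; the 5-unit capacity and brown/green colors follow the task description):

```python
class Forager:
    """Two-state foraging agent: 'searching' (brown) or 'carrying' (green).

    State flips automatically on reaching a food patch or the home base;
    the color change is how neighbors learn an agent's state.
    """
    CAPACITY = 5  # units of food extracted per pickup

    def __init__(self):
        self.carrying = 0
        self.color = "brown"

    def on_food_patch(self):
        """Pick up food if not already carrying; broadcast 'has food'."""
        if self.carrying == 0:
            self.carrying = self.CAPACITY
            self.color = "green"

    def at_home(self):
        """Deposit any carried food and revert to searching; returns the
        amount deposited so the system can tally its total."""
        returned = self.carrying
        self.carrying = 0
        self.color = "brown"
        return returned
```

Because neighbors react to color rather than to any stored food coordinate, no individual memory of the food location is needed, consistent with the constraint above.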
This simulates the situation where there is a central beacon (ants have the large concentration of pheromones emanating from their nest) signaling to all agents simultaneously, but not controlling any individual's actions. In a practical situation, the beacon could be broadcast from a boat in a search-and-rescue mission, from a disposal zone in a beach cleanup task, or from a silo in a harvesting system.

10.3 Ant and flocking-inspired design

Although the task is inspired by ants, the behavioral design in this chapter is still based on the flocking algorithms described in Chapter 8. The mixture of these two approaches will lead to a new behavioral algorithm suitable for deployment in artificial systems.

10.3.1 Hardware constraint analysis

For practical purposes, self-organizing systems will not be economically competitive with most other systems unless they are based on very simple hardware. This simplicity allows for mass-scale manufacturing, and little economic loss from the failure of any single unit. The same assumptions of limited sensory radius and simple hardware found in the previous work on Cellular Self-Organizing Systems [41, 100] are used here to limit the possible behavior capacity of the agents. The result of these assumptions is that the designer is limited to agents that can only sense each other, environmental objects, and food within a local radius, and that have very little on-board memory. Agents can sense the direction toward their home base at all times.

10.3.2 System state and perspective

The system's function is to retrieve food, so the state of the system will include attributes for the total amount of food returned and the amount of food carried. At the outset, the system-level behavior is unknown; it will be uncovered through simulation. The structure will be a swarm of 30 foraging robots.

System state
- Performance variable: food returned, food_r
- Prediction attribute: food carried, food_c
- Satisfactory state³: food_r > 0 at t = 1000
- Unsatisfactory state: food_r = 0 at t = 1000

Perspective
- System-level evaluation
- Group-level sub-functions
- Agent-level design

Function: Retrieve food

Structure: 30 primitive robots, initially aligned near the home base

10.3.3 Functional design

The functional requirement of the foraging system is to "retrieve food." The system-level function could be decomposed into the four sub-functions "find food," "pick up food," "carry food to home," and "drop food." All of these functions are within the capability of a single agent. We know, however, from the study of social insects, that the function "find food" can be made much easier if there is a corresponding sub-function "indicate food location." The food may be a relatively large distance away from the home base. If this distance is many times the radius of an agent's sensory capabilities (see Figure 10.1), then directing agents between the food and home is outside the capabilities of a single agent. This requires a group-level function, which will be created through proper behavioral design.

10.3.4 Behavioral capacity

Social insects have a behavioral capacity of emitting pheromones, which can lead other insects to food stores. The switch between homing pheromones and food pheromones is triggered when the ants find a food source, so by analogy, "change state" must be in the agent behavior capacity, and this action will be regulated by whether or not the agents are carrying food. The agents will also have the capacity to broadcast their state. For the purposes of on-screen visualization, agents use color changes to broadcast their state, but in practice, any electromagnetic or acoustic signal would be equivalent. Decomposed system-level functions related to picking up, moving, and dropping food can be directly mapped to agent capacity. The group-level function "indicate food location" requires social interactions among agents.
There was an expectation that flocking would be an efficient way for the agents to explore the field, so social flocking behaviors such as cohesion, nonlinear avoidance, and alignment were also added to the capacity. A random component of agent movement was added to prevent agents from getting stuck at local maxima of tField locations. These capacities for action combine to become the total capacity of an agent.

³Of course, the performance could be much higher than 0, and performance close to 0 is probably unacceptable, but at early stages of conceptual design, it may be difficult to estimate just how good the performance could be. This is why design is often an iterative process, where requirements and expectations change as designers learn more about the system [131].

10.3.5 Behavioral selection

To regulate behavior, agents are designed based on a two-field parametric behavioral model, in which the agents respond to a task field, generated by the stimuli of their home base and food, and a social field caused by other agents. It is important to note that the fields in this case study are artificial fields. They are spontaneously calculated by each agent. There is no physical force attracting the agents toward food; rather, each agent mathematically calculates a field around itself. Both the formulation and effects of this field are the product of design. The relevant variables of the behavior regulation are summarized as follows:

tField stimuli
- Home location
- Food location

sField stimuli: other agents
- Color
- Location
- Heading

Agent state
- is carrying food (TRUE/FALSE)
- random num [0, 1)

dDNA
- 18 parameters governing the agent decision algorithm, given in Table 10.1

Since the agents are allowed to switch their behavior and color due to state changes, the behavioral model can be enlarged from the model in the flocking task.
Because states depend on whether or not an agent is carrying food, there will be two different classes of agent. This leads to 4 relationship types. The parameters that govern agent-to-agent relationships are C, O, and A, so each of these can take 1 of 4 values. All together, there are 18 dDNA parameters to include in the PBM (Table 10.1).

Table 10.1: Flocking behavioral parameters

Agent State   Neighbor State   Cohesion   Avoidance   Alignment   Randomness   Home   Food
Food          Food             C_1        O_1         A_1         R_1⁴         H_1    F_1
              No Food          C_2        O_2         A_2
No Food       Food             C_3        O_3         A_3         R_2          H_2    F_2
              No Food          C_4        O_4         A_4

⁴In this chapter, most optimizations will fix both R_1 and R_2 at 1. Because they encode relative weights, it is primarily the ratios between parameters that matter, not the absolute values. The other parameters will still be optimized to be some ratio of the R values, and the search space is reduced by 2 variables.

The equations for generating the social and task fields are given in Equations 10.1 and 10.2:

FLD_s(r, \theta, \phi) = \frac{C_1}{N} \sum_{i \in N} r_i + \frac{O_1}{N} \sum_{i \in N} \frac{1}{r_i} + \frac{A_1}{N} \sum_{i \in N} |v_i| \cos(\theta - \phi_i)   (10.1)

FLD_t(f, h, \theta) = s_{max} \left( F \cos(\theta - f) + H \cos(\theta - h) \right)   (10.2)

where r_i is the distance from an agent to its neighbor; \theta is the agent's current angular heading; \phi_i is the neighbor's current heading; s_max is the agent's maximum step size; f is the angle toward food; h is the angle toward home; and C, O, A, F, and H are dDNA parameters.

The sField relationships are triggered by proximity within 3 pw. Once an agent has identified its neighbors and their type, it will apply the parameters of its policy for each neighbor. In this case study, agents use a normalized flocking aggregation. That is, each set of neighbors in a particular state is treated as a different flock, and once the Cohesion, Alignment, and Avoidance field values are calculated for each flock, they are simply added together.
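A minimal sketch of evaluating Equations 10.1 and 10.2 for one neighbor class, assuming the cosine terms compare the agent's heading θ against the neighbor heading φ_i and the food/home bearings. The tuple layout and function names are illustrative, not from the dissertation's NetLogo code:

```python
import math

def social_field(theta, neighbors, C, O, A):
    """FLD_s of Eq. 10.1 for one class of neighbors (e.g. 'carrying food').

    theta is the heading implied by stepping toward the candidate point;
    each neighbor is (r_i, speed_i, phi_i): distance, speed |v_i|, heading.
    """
    n = len(neighbors)
    if n == 0:
        return 0.0  # Randomness/Momentum take over when no neighbors are seen
    cohesion = C / n * sum(r for r, _, _ in neighbors)
    avoidance = O / n * sum(1.0 / r for r, _, _ in neighbors)
    alignment = A / n * sum(s * math.cos(theta - phi)
                            for _, s, phi in neighbors)
    return cohesion + avoidance + alignment

def task_field(theta, food_angle, home_angle, s_max, F, H):
    """FLD_t of Eq. 10.2: heading-dependent attraction toward food and the
    home beacon; negative F or H turns attraction into repulsion."""
    return s_max * (F * math.cos(theta - food_angle)
                    + H * math.cos(theta - home_angle))
```

Under the normalized aggregation described above, `social_field` would be called once per neighbor class (with that class's C, O, A values) and the results summed.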
Since the flocking behaviors are normalized before addition, a large flock with no food will have the same relative influence as a small flock with food,⁵ and vice versa. The agents generate their tField by sensing the home beacon and food, and their reactions to the elements of this field are simple attraction or repulsion. The field generation is performed according to Equation 10.1 for every point within an agent's stepping radius. The field transformation simply pairs a stepping action toward each point with its field value as a preference. An agent's actions of picking up or dropping food are automatic reactions to finding the food or home locations. Finally, the behavior selection is to step to the point with the highest preference, and to pick up or drop food if applicable.

10.3.6 Simulation and optimization

The 18 parameters that the GA can optimize are given in Table 10.1. Any particular set of 18 parameters fixes the dDNA and determines the agents' behavior regulation. Agent behavior is simulated in a NetLogo MAS. For this case study, a system of 30 agents was placed on a field between their home base and a food source. At each timestep, every agent would sense its local neighborhood and apply its behavior regulation. If an agent found a patch containing food, it would pick up and carry five units of food. If it carried the food to the home base, it would drop the food, and this would count toward the system's total fitness. This was repeated for 1000 timesteps. The fitness is then calculated according to Equation 10.3.

⁵Many other aggregation strategies are possible: proportional flocking where each neighbor's influence counts equally, nonlinear influences, etc. Detailed investigation of these options was not found to be necessary, but it could be an interesting area for future research.
fitness = food_r + \frac{1}{N} \sum_{i=1}^{N} food_{c,i}   (10.3)

where the summation is carried out over each agent, subscript r represents the food returned to home before the time limit, and subscript c represents the food being carried by the agents at the time limit. This equation includes both the performance and prediction attributes identified in the system state representation. The summation in Equation 10.3 is used to differentiate systems early in the GA, when only a few systems return any amount of food. A design gets "partial credit" for at least finding food, and this behavior is eventually combined with other positive behaviors to create more successful systems in later generations. Merely finding food can never give a fitness higher than actually returning food, however, so in more successful systems, the prediction attribute is responsible for a negligible percentage of the total fitness score.

An initial population was randomly seeded as 200 binary strings of 144 bits each. Every 8 bits of the genome corresponds to one of the 18 agent parameters. With 18 parameters to optimize, an exhaustive search would be very computationally expensive. A naive parameter sweep, with just three levels for each variable (e.g., low, medium, high), would require more than 3.8 million simulation runs. The use of an optimization algorithm reduces this number substantially while still generating capable candidates. All of these GA experiments used fewer than 50,000 repeated simulation runs. The binary numbers were mapped to decimal numbers as described in Table 10.2. The best candidate of each generation was cloned directly to the next generation, and the remaining candidates were created using the same fitness scaling, selection, and crossover of Section 8.5.3. The mutation probability was 0.5% per bit of genome for all non-clones.
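The gene decoding of Table 10.2, reading each 8-bit gene as an integer x in 0–255, can be sketched as follows (function names are mine). The exponential mappings concentrate resolution near zero, where small ratio changes matter most, while still reaching the extremes of each range:

```python
def map_co(x: int) -> float:
    """Map an 8-bit gene to a C or O parameter, roughly 0.02 to 50,
    with x = 127 decoding to exactly 1 (Table 10.2)."""
    return 1.0313 ** (x - 127)

def map_ahf(x: int) -> float:
    """Map an 8-bit gene to an A, H, or F parameter, roughly -50 to 50:
    negative for x < 128 and positive otherwise, with magnitudes
    shrinking toward ~0.02 near the sign boundary (Table 10.2)."""
    if x < 128:
        return -(1.063 ** (64 - x))
    return 1.063 ** (x - 191)
```

Decoding a full candidate is then a matter of splitting the 144-bit string into eighteen 8-bit genes and applying the appropriate mapping to each.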
With a dual-core processor, every fitness evaluation required approximately 0.75 seconds of computation time on average, so a 200-candidate, 200-generation GA run (40,000 total fitness evaluations) required 8 hours to complete.

GA Results

It can be seen from Figure 10.2 that the GA showed gradual improvement in the best-of-generation and average fitness values until it reached a fitness plateau of about 225 units of food. The best candidate of the first generation returned 60 units of food (12 round trips), and the best of the final generation returned 210 units of food (42 round trips). The best fitness found in any generation was 305.8, in the 167th generation. Due to the elitism component of the GA, this candidate was cloned to the next generation, but it was unable to reliably reproduce such strong results, and it was eventually overtaken; its genes did, however, propagate to future candidates. Other GA runs produced qualitatively similar results, with no obvious improvement in runs lasting longer than 200 generations.

Table 10.2: Mapping functions between binary numbers and behavioral parameters

Parameter   Raw Range   Mapping Function                               Parameter Range
C, O        0-255       f(x) = 1.0313^(x-127)                          0.02 to 50
A, H, F     0-255       f(x) = -1.063^(64-x)  for 0 <= x < 128;        -50 to 50
                        f(x) = 1.063^(x-191)  for 128 <= x <= 255

Figure 10.2: GA results across 200 generations for first foraging system

Because there were 18 variables to optimize, and the populations showed considerable diversity, it is not illuminating to show a plot of the average parameter values across generations. Instead, the randomly chosen parameters of the best candidate of the first generation will be compared to the evolved parameters of the last generation's top candidate. Table 10.3 shows the parameter values of the first generation's most successful candidate.
As shown in Figure 10.3, this system's behavior was to break into small groups. The groups could explore the field separately while displaying flocking behaviors within the group. This led to a distributed search, but when one member of a group found the food, several other group members would follow to pick up food as well. Because this search method was slow, only 1-2 groups would find the food during a simulation, returning about 60 units. Also, notice that this randomly generated candidate had a negative H1 value, meaning it was actually repelled from home while carrying food. This repulsion slowed its rate of food return, as food was only returned when flocking behaviors randomly caused the food carriers to reach home.

Table 10.3: Parameters of best candidate of first generation

Agent State  Neighbor State  Cohesion  Avoidance  Alignment  Randomness  Home     Food
Food         Food            3.760     0.0536     10.85      1           0.7831   0.5105
Food         No Food         0.03940   0.2016     0.9407
No Food      Food            6.347     4.811      3.610      1           -0.4250  -0.02403
No Food      No Food         10.07     48.47      4.079

(The Randomness, Home, and Food values are shared across both neighbor states for a given agent state.)

Table 10.4 shows the parameters that were evolved for the best candidate in the last generation. Note the very large negative Alignment parameter between agents that both had no food.

Table 10.4: Parameters of best candidate of 200th generation

Agent State  Neighbor State  Cohesion  Avoidance  Alignment  Randomness  Home     Food
Food         Food            0.4924    7.404      -0.832     1           44.24    0.1414
Food         No Food         0.3008    21.76      -0.7367
No Food      Food            0.7816    1.3607     0.1042     1           -21.25   22.59
No Food      No Food         0.05040   3.646      -41.62

The system-level behavior that was found in the final generation did in fact fulfill the system-level function to retrieve food.

Figure 10.3: System-level behavior of best candidate from the first generation, with agents that have found food shown in green. There are several small groups exploring independently. The large group on the right has discovered food.

The "strategy"
Figure 10.4: Foraging behavior for best candidate of the final generation at t = 0, 18, 50, and 1000, showing lines forming at the boundary

evolved was to use the edges of the arena to guide the agents toward the food, since the food was placed in a corner opposite the home base. To accomplish this, the agents without food had negative Alignment values toward one another. As shown in Figure 10.4, the negative alignment caused an initial shuffling period, as the group could not reach an equilibrium flocking heading. Eventually the agents spread far enough apart that their negative Home tendency dominated and drove them toward the top and right edges of the field, with a few agents randomly finding the food area. The system eventually reached a state where most agents without food were on an edge, but the negative A4 values ensured that they did not get stuck. If a new agent arrived at the edge near another, one would have to change direction so that they could maintain opposite headings. This caused a chain reaction of agents bumping each other off the edge until one reached the food, at which point its strong positive Home tendency would take over. After returning the food, an agent's negative Home tendency would cause it to move toward an edge again, starting another chain reaction. This configuration persisted until the 1000-timestep limit, allowing the system to return 210 units of food. In these lines of agents on the edge of the arena we can see the sField giving rise to task-based structure which allowed the system to complete its FR.

10.4 Rework with boundary detection added

The lines at the edges were the optimized foraging strategy found within the behavioral model given to the GA, but the use of so many static agents on the edges is inefficient, because at any given time only a fraction of the agents may actually be moving between food and home. This led to a reformulation of the agent capacity.
A new element was added to the agents' behavior capacity: boundary detection. With this new agent capability, the behavior regulation was updated to include a parameterized attraction or repulsion from the boundary. This added two new parameters (B1 and B2) to the dDNA. The optimization results are shown in Figure 10.5.

With this addition, the optimized system behavior changed. Agents relied on flocking in small groups. This approach was only viable with boundary detection, because without it, the small flocks might have gotten stuck on a boundary. The optimized system fluidly searched the area and returned food, as shown in Figure 10.6. Comparing Figures 10.2 and 10.5 clearly shows that the conceptual design with boundary detection was better suited to the task of foraging within a square, bounded environment (footnote 6). In the final generation of optimization, the conceptual design with boundary detection had a fitness of 571.3, compared to 210.5 for the design without boundary detection, an increase of 171%.

Footnote 6: Figures that show systems with boundary detection are indicated by the thick black borders around the arena.

Figure 10.5: Performance of top candidate found by the GA at each generation of optimization of the system with boundary detection

Figure 10.6: Behavior of system with boundary detection. The agents formed small groups that flocked together (left). The right panel shows a motion trace of the agents.

10.5 Test for scalability

One way of achieving flexibility is through scalability [185], as scalable systems can change in size to meet changing requirements.
Scalability is often hailed as a promising feature of SO systems [56, 241], but naïvely scaling systems without understanding the possible pitfalls may lead to system failures [13, 172]. So in this section, I investigate the conditions required for scalability in a self-organizing foraging system by applying the PBM developed in this chapter to systems with more than one row of agents (see Figure 10.9), where each row contains 30 agents.

10.5.1 Scalability assessment

To test for scalability, behavioral parameters are optimized and tested for each of 1-6 rows of agents. Then, keeping the parameters constant, the systems are tested in other scenarios. In this way, the parameter set is tested outside of the range for which it was optimized. If increasing size leads to improved performance, we can call this system scalable. Because integration costs in SO systems are negligible, the vast majority of the cost of adding new agents comes simply from the agent hardware. If the performance of the system scales proportionally to the cost, we can call it linearly scalable. In this scheme, there are three possible system classifications:

Linear: Neutrally scalable. The performance increases linearly with the number of agents.

Superlinear: Better than linearly scalable. The performance increases with the number of agents, and the rate of this increase also increases.

Sublinear: Worse than linearly scalable. The system gets diminishing returns from adding more agents, or the performance may even deteriorate.

These can be measured by the concavity of the performance vs. number-of-agents curve. A curve that is concave up is superlinear, and a curve that is concave down is sublinear. This measurement can be applied at two levels of design abstraction: the conceptual design and the detail design. Evaluation at the conceptual design level allows for parameter changes to combat the changing number of agents, but evaluation at the detail design level locks in a particular parameter set for all cases.
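The concavity test above can be mechanized with second differences of the performance curve sampled at equal size steps. A sketch (the function name, tolerance, and the "mixed" fallback for curves with no consistent concavity are my assumptions):

```python
def classify_scalability(performance, tol=1e-6):
    """Classify a performance-vs-system-size curve sampled at equal steps.

    Second differences approximate concavity: all positive -> concave up
    (superlinear), all negative -> concave down (sublinear), all near zero
    -> linear. Anything else is reported as 'mixed'.
    """
    second = [performance[i + 1] - 2 * performance[i] + performance[i - 1]
              for i in range(1, len(performance) - 1)]
    if all(abs(d) <= tol for d in second):
        return "linear"
    if all(d > tol for d in second):
        return "superlinear"
    if all(d < -tol for d in second):
        return "sublinear"
    return "mixed"
```

For example, a curve like [1, 2, 3, 4] classifies as linear, while [1, 2, 4, 8] is superlinear and a flattening curve is sublinear.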
Note: a distinction must be made in the text between systems optimized for a certain size and systems that actually are a certain size. Systems optimized for a certain size will be referred to as ROX, where X is the number of rows for which they were optimized. A system that actually is a certain size will be referred to as an X-row system. For example, an RO2 system could be scaled and tested as a 6-row system.

10.5.2 Research questions

What are the consequences of over-design? With fast simulation-optimization loops, it is feasible to procedurally generate nearly infinite deployment scenarios in which to test the system. The designer could program the simulation to change agent numbers, environment size, food location, agent reliability, sensor noise, etc. Is there a danger in optimizing for all of these cases? Should some of them be ignored if there is little chance that they will be found in actual deployment? A quantitative comparison of systems optimized in the face of uncertainty vs. those optimized in largely repeatable conditions will give clues to an answer.

How much information does the designer need to know about the system deployment before optimization? A goal of genetic optimization is adaptability. It is assumed that any candidate that has survived multiple generations of perturbations and competition will be robust to changing conditions. This cannot be guaranteed, however. How accurately must the designer predict these changes beforehand? If these changes are unpredictable, how much can he trust the GA to create a robust system?

Is scalability directional? Is it easier to scale a system up or down? Does it matter? This has implications for resilience. If a system can be easily scaled down, it can be assumed that the loss of several agents from the original system will not result in catastrophic failure. It will instead result in graceful degradation.
If the designer is unsure about how large the system must be, he needs to know whether to target his optimization toward the higher or lower end of an estimated range.

10.5.3 Extended optimization

The investigation in this section requires that an optimal candidate be found for each scenario. In a complex system, it is very difficult to objectively determine which solution set is optimal [61], for several reasons. The search space is enormous (footnote 7). The simulations are partially stochastic, so to get statistical confidence, many trials would have to be performed. The GA is also partially stochastic, and multiple GAs are not guaranteed to converge to the same parameter values. Perhaps it is not even appropriate to talk of absolutely optimal candidates, and instead the language should be restricted to optimized candidates only.

Footnote 7: A 160-bit string has more than 1.46 × 10^48 possible parameter combinations. Even at a rate of millions of trials per second, exhaustively checking every combination would take billions of years.

In order to determine the optimized candidate, we developed countermeasures for each of the aforementioned difficulties. The choice of genetic optimization is an attempt to counteract the huge search space of the problem. GAs are population-based, rather than point-based, so they can perform a wide search of the search space without getting trapped in local optima. The randomness of the GA convergence is mitigated by performing multiple optimizations for each design. This allows several optimized parameter sets to be found and compared to one another. The fittest candidates from these runs can even be used as seeds of another GA for further incremental optimization. The randomness of the individual simulation can be overcome by large numbers of generations within the GA.
These long run times require clones and offspring of fit candidates to repeatedly achieve high fitness scores, increasing the odds that only reliably fit candidates will survive until the end. With these strategies in mind, the method proposed in this work rests on repeated GA runs, followed by re-testing of a "hall of fame" of optimized candidates from the preliminary GA rounds. Finally, the best candidates are put through a reliability test, where their 30th-percentile performance is taken to be their final fitness score. This reliability level is meant to ensure that the system will perform at its rated fitness at least 70% of the time. This process is used to determine the optimized candidates for every scenario and conceptual design in this case study:

1. 4 standard GA runs
2. Select 3 candidates from each standard run:
   (a) Best candidate from the final generation
   (b) Runner-up candidate from the final generation
   (c) Best candidate from any generation
3. 1 seeded GA run, using the 12 previously selected candidates as seeds
4. Select 3 candidates from the seeded run
5. Reliability test on the 15 previously selected candidates:
   (a) Repeat the simulation 100 times for each candidate
   (b) Assign percentile scores for each candidate's performance
6. Select the candidate with the best 30th-percentile performance from the reliability test as the optimized parameter set

10.5.4 Scalability of conceptual design

Figure 10.7 shows the fitness of the optimized systems for each of 1-6 row systems without boundary detection. The system is almost perfectly linearly scalable at the conceptual design level. The R² value for the linear regression is 0.995. Remember that there is very little overhead in adding agents to the system, because the connections are soft and there is no system-wide communication. It is as simple as putting more agents in the vicinity of the system and changing system parameters.
In practice, it may be difficult to change the behavioral parameters of the existing agents, but one possible solution is for the new parameters to be spread virally between agents as new agents enter the population with updated rulesets.

Figure 10.7: Optimized fitness for each number of agent rows, showing a linear relationship between performance and system size (y = 320.96x - 56.828, R² = 0.9953)

10.5.5 Scalability of detail design

What can be done if the system operator is not allowed to change behavioral parameters after design or deployment? This would be the case in fully autonomous systems. In such a scenario, a system with parameters optimized for 1 row of agents may be scaled to 6 rows, or vice versa. This design constraint was explored by taking the optimized parameter sets from Section 10.5.1 and testing them for each of 1-6 rows of agents. Thus some were tested outside of the range for which they were optimized, and analysis of their performance will lead to a measurement of their scalability. The results are summarized in Figure 10.8.

Unexpectedly, there were several instances where a parameter set optimized for a certain system size actually outperformed all other candidates at a different system size (footnote 8). It can also be seen from Figure 10.8 that the systems optimized for small system sizes often had trouble scaling up, but the systems optimized for large system sizes could very smoothly scale down, as indicated by the nearly linear behavior of the RO5 and RO6 systems and the drastic drops with increasing system size of the other curves. Even though their fitness at small system sizes did not represent catastrophic failure, there was a significant difference in fitness scores between systems optimized for large sizes (RO5 and RO6) and those optimized for small sizes (RO1 and RO2).
At both size scales, systems suffered a performance penalty compared to systems that had been optimized for that scale. A summary of the performance penalties in the extreme cases is given in Table 10.5.

Footnote 8: For example, in the 5-row test, the RO5 candidate had a fitness of 1502.5, but the RO6 candidate had a higher fitness of 1505.9. This indicates a slight shortcoming in the optimization algorithm, in that it was not always able to find the absolute best parameter sets in its first attempt. Nonetheless, any such cases only manifested by a slim margin. The most extreme example was the 2-row test, where the RO2 system had a fitness of 542.1, but the RO1 system had a fitness of 566.2, 4.44% higher. In all other cases, the expected parameter set had the highest fitness in its respective test.

Figure 10.8: Results of the scalability test, where each curve (RO1-RO6) represents one set of behavioral parameters, tested at each level of system size

Table 10.5: Results of cross-testing systems without boundary detection optimized for large size in small-scale deployment, and vice versa

System size (rows)  Optimized fitness  Test system  Test fitness  Penalty (%)
1                   290.0              RO5          110.0         62.1
                                       RO6          110.2         62.0
2                   542.1              RO5          370.3         31.7
                                       RO6          350.3         35.4
5                   1502.5             RO1          308.0         79.5
                                       RO2          1027.6        31.6
6                   1881.1             RO1          204.0         89.2
                                       RO2          525.2         72.1

Figure 10.8 indicates that the RO1 system had the most extreme fitness crash when scaling up to large system sizes, so here I will compare its behavior to the smoothly scaling RO6 system. The RO6 system scaled almost linearly (R² = 0.998), whereas the RO1 system was highly nonlinear. A closer inspection of the RO1 6-row system's behavior shows that jamming was the main cause of poor performance at large sizes.
This can be seen in Figure 10.9, where by the final timestep so much congestion had developed around the home base that the whole system suffered a slowdown in the amount of food that it could return.

Figure 10.9: Behavior of the RO1 6-row system at t = 100, 250, 550, and 1000, with jamming developed by the final timestep.

Figure 10.10: Behavior of the RO6 6-row system at t = 0, 50, 200, and 400-600. The 4th frame shows a motion trace of all agents over 200 timesteps.

Figure 10.11: System-level behavior of the RO6 1-row system for timesteps 500-1000

Figure 10.10 shows the RO6 6-row system's behavior. This system also found a solution to the problem of sticking at the arena walls. It had such a large number of agents that it could afford to sacrifice some of them as barriers along the wall, so that other agents would avoid them and be more effectively channeled toward the food. This caused the system to essentially break into two distinct components: a static barrier around the arena, and a circulating inner core that repeatedly made trips between food and home. This can best be seen in the 4th panel of Figure 10.10, where the motion trace of the inner core is clearly shown in contrast to the motionless barrier agents. When this same RO6 system was tested as a 1-row system, its fitness greatly declined, but qualitatively its behavior was similar. See Figure 10.11 for a motion trace of the system's agents. The same strategy was not as effective in the 1-row case, because a larger proportion of agents had to be allocated to maintaining the boundary, so we see that a system-level structure that was evolved for large systems was ineffective at smaller sizes.
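The penalty figures in Table 10.5 are simple percentage losses relative to the size-matched optimum. A quick check, using the first row of the table:

```python
def penalty(optimized_fitness, test_fitness):
    """Percent fitness lost when a mismatched parameter set is deployed,
    relative to the fitness of the set optimized for that system size."""
    return 100 * (optimized_fitness - test_fitness) / optimized_fitness

# First row of Table 10.5: RO5 parameters deployed as a 1-row system.
print(round(penalty(290.0, 110.0), 1))  # -> 62.1
```

The same calculation reproduces the other entries, e.g. RO1 deployed as a 6-row system: (1881.1 - 204.0) / 1881.1 = 89.2%.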
10.5.6 Scalability in system with boundary detection

When the same optimization procedure and scalability tests were performed on systems with boundary detection, similar results were found, as shown in Figure 10.12. The conceptual design was again almost perfectly linearly scalable (footnote 9). Although the general shape of the curves in Figure 10.12 is similar to the scalability test results of the system without boundary detection, the fitness penalties are less severe at smaller sizes and more severe at larger sizes. The fitness penalties of the extreme cases are summarized in Table 10.6. The performance penalty suffered by the RO6 system in a 1-row deployment is only 26.6%, but the lack of scalability in the RO1 and RO2 systems tested in a 6-row deployment caused them fitness penalties of 99.1% and 97.1%, respectively. These extreme fitness crashes can be explained by jamming of the system, as agents crowded around the food location (footnote 10), blocking all progress, as shown in Figure 10.13.

Footnote 9: R² = 0.999. This can be visualized by connecting the top points at each x-coordinate in Figure 10.12.

Footnote 10: In retesting, jamming would sometimes occur around the home base as well, or only partial jamming would allow some of the agents to leak out and forage for food. Recall that the numbers reported here are the 30th-percentile fitness scores out of 100 tests, so there may be large variability in each trial and fitness score. For example, the RO2 6-row system had 10th and 90th percentile scores of 50.9 and 1040.0, with mean and standard deviation of 439.6 and 397.2, respectively.

Figure 10.12: Scalability tests for detail design of systems with boundary detection

Table 10.6: Results of cross-testing systems with boundary detection optimized for small size in large-scale deployment, and vice versa

System size (rows)  Optimized fitness  Test system  Test fitness  Penalty (%)
1                   697.4              RO5          502.3         28.0
                                       RO6          511.8         26.6
2                   1249.1             RO5          1147.0        8.17
                                       RO6          1117.5        10.5
5                   2896.7             RO1          25.75         99.1
                                       RO2          117.5         95.9
6                   3323.3             RO1          30.45         99.1
                                       RO2          97.4          97.1

Figure 10.13: Jamming of the RO2 6-row system. All agents carrying food are hemmed in at the food location by the agents searching for food, halting all system progress.
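The 30th-percentile scoring used throughout these tests (step 5 of the procedure in Section 10.5.3) can be sketched as follows. The nearest-rank percentile convention and function names are my assumptions; the intent, per the text, is that at least 70% of runs meet or exceed the rated score:

```python
import math

def percentile_score(scores, pct=30):
    """Nearest-rank pct-th percentile of a list of simulation fitness scores."""
    ordered = sorted(scores)
    idx = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[idx]

def meets_rating(scores, rated):
    """Fraction of runs that reach or exceed the rated (30th-percentile) fitness."""
    return sum(s >= rated for s in scores) / len(scores)
```

In use, each finalist candidate would be simulated 100 times, scored by `percentile_score`, and the candidate with the best 30th-percentile score selected as the optimized parameter set.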
10.6 Tests for resilience

The results of the scalability study have implications for the resilience of systems. Recall that a resilient system is one that can tolerate internal faults and damage and still maintain core functionality. If the damage causes a resilient system's performance to degrade, this degradation is slight and gradual, not catastrophic. Fully resilient systems would show no degradation and return to their fully functional state after disturbances [246]. To give a practical example, an automobile is not resilient to the failure of inexpensive parts such as a battery or spark plug. Contrast this with a tree: its leaves and fruit can be eaten; its limbs can fall off in storms; it can even suffer burns and live for centuries, a remarkable example of resilience.

In this section, I test the resilience of the foraging system by deactivating agents in the middle of the simulation run. The systems begin with 3 rows of agents and act for 500 timesteps. Then, 2 rows of agents are removed from service, and the system acts for another 500 timesteps. The system uses a total of ((90 + 30)/2) × 1000 = 60,000 agent-timesteps, equivalent to the resources of a 2-row system.

10.6.1 Research questions

If it is known that a system's agents may break or be deactivated in service, at what size should the system be optimized?
As shown in Section 10.5.5, the size for which a system is optimized has an effect on its performance if the deployed system size is different. We saw that there is less danger in scaling a system down than up, but there was still a performance penalty for overdesign when the RO6 system was deployed with only 1 row of agents. Similar considerations must be made for a system that will lose agents in service. Should it be optimized for its starting size, its estimated ending size, or perhaps some size in between?

10.6.2 Results

The optimized system for 3-to-1-row resilience had a 30th-percentile fitness score of 1154.5. This is lower than the fitness obtained by the RO2 system that had a constant 2-row size. The results of cross-testing the RO1-RO3 parameter sets on the resilience scenario, and vice versa, are given in Table 10.7.

Table 10.7: Results of cross-testing systems for resilience

System size (rows)  Optimized fitness  Test system  Test fitness  Penalty (%)
1                   697.7              RES          598.7         14.2
2                   1249.1             RES          1184.3        5.2
3                   1768.9             RES          1769.0        (0.006)
3-1                 1154.5             RO1          381.8         66.9
3-1                 1154.5             RO2          1020.2        11.6
3-1                 1154.5             RO3          1093.0        5.3

It can be seen from Table 10.7 that there was no performance penalty when the resilience-optimized system was deployed as a 3-row system, and only a small penalty when the RO3 system was deployed in an environment that required resilience, even though it was not optimized for that scenario. The RO1 system did show poor performance when deployed in the resilient scenario. This indicates that optimizing for the largest system size is a good strategy, even if it is possible that the system will lose agents in service.

10.7 Using system complexity metrics to expand GA search

The fitness function of a GA is usually treated as a "black box." It takes behavioral parameters as inputs, and gives only a measure of emergent system performance as output.
This is convenient for the designer, because it does not bias the GA towards any specific strategy for achieving a goal, and it leaves room for surprising discoveries by the GA [100, 122]. If there is enough time during the detail design phase, however, it may be fruitful to add conditions to the fitness function that force the GA to search for specific strategies or emergent forms. In this section, I use a system-level complexity metric to bias the GA search in two directions: towards tightly coupled (complex) systems, and towards loosely coupled systems.

10.7.1 Research question

Can system-level metrics be used to guide the GA toward certain qualitative behavioral strategies? In the exploration case study of Chapter 8, we saw that qualitatively distinct emergent behaviors could be found when the GAs were run multiple times. These distinct behaviors may be useful to the designer, but if the candidates are replaced by the generational churn of the GA, this knowledge can be lost. If quantitative metrics are devised so that the computer can automatically sort behaviors into separate qualitative strategies, then the diverse strategies can be recorded and preserved, or the GA can be biased toward certain strategies at the outset.

10.7.2 Sinha and de Weck's system complexity metric

To include system complexity metrics in the fitness score, they must have a quantitative value. In this dissertation, the system-level complexity value comes from the work of Sinha and de Weck [199, 198]. Those authors study system complexity from three perspectives: component complexity, interface complexity, and topological complexity. They state that component complexity can be measured in terms of TRL (footnote 11). Interface complexity rises with the number of connections between system components, and topological complexity is a holistic measure of system integration. The measure of topological complexity rests on a calculation of the energy of the adjacency matrix.

Footnote 11: Technology Readiness Level. This is a NASA and Department of Defense classification commonly used in Systems Engineering.
In a system adjacency matrix, every row and column represents a system component. The off-diagonal cells represent interactions between system components. The matrix energy is the sum of the singular values of the matrix, and it indicates how difficult the system is to construct and divide. A more complex system will be more difficult to cleanly subdivide and will thus have higher matrix energy. System complexity is calculated according to Equation 10.4:

C(n, m, A) = Σ_{i=1}^{n} α_i + ( Σ_{i=1}^{n} Σ_{j=1}^{n} β_{ij} A_{ij} ) γ E(A)    (10.4)

A_{ij} = 1 if component i affects component j, and 0 otherwise

where n is the number of components, m is the number of interfaces, A is the system adjacency matrix, α_i is the complexity of component i, β_{ij} is the complexity of the interface between components i and j, γ is a scaling factor, and E(A) is the matrix energy of A. This complexity metric was chosen because it has been empirically shown to be predictive of practical engineering concerns, such as assembly time (footnote 12).

10.7.3 Incorporating topological complexity into fitness function

Equation 10.4 was developed in the context of systems engineering. It is very general and can be applied to systems with heterogeneous parts. Applying it to a system of homogeneous agents actually allows for several simplifications. Because the fitness function only needs to discriminate between systems, not provide an absolute score, the term for component complexities can be dropped (all systems in this test will have the same value for this term). Likewise, since all connections are of the same complexity, and system size does not change, the terms in front of the matrix energy function can be set to 1. Incorporating the system complexity into the original foraging fitness function then yields Equations 10.5 and 10.6.
fitness = food_r × (1 + E(A)/70)    (10.5)

fitness = food_r / (1 + E(A)/70)    (10.6)

where the matrix energy term (footnote 13) is calculated from the interactions among agents at the final simulation timestep. Matrix energy appears as a multiplying factor for GAs that seek high complexity, and as a divisor for GAs that seek low complexity (footnote 14).

Footnote 12: In [199], it was shown that test subjects take longer to assemble molecular models with higher topological complexity, and this relationship is actually nonlinear.

Footnote 13: The singular value decomposition required for this term was performed using Apache Commons Math [45], an open-source mathematics library written in Java.

Footnote 14: The number 70 in Equations 10.5 and 10.6 was determined through informal experimentation. In preliminary tests on randomly generated systems, a matrix energy of approximately 70 was commonly found. This makes typical matrix energy values increase or decrease the fitness score by a factor of 2.

Figure 10.14: Comparison of top fitness found at each generation of two GAs, one seeking tightly coupled systems and the other seeking low-complexity systems

10.7.4 Results

The new fitness function was used to optimize a 2-row system with boundary detection. The results of the optimization are given in Figure 10.14. It can be seen from Figure 10.14 that both optimizations converged toward approximately the same performance, even though they were optimized toward opposite ends of the complexity spectrum. This is an example of an SO system with multiple qualitative strategies for achieving the same functionality.
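Matrix energy and the two biased fitness functions can be sketched with a standard SVD routine. This uses numpy for illustration (the dissertation's implementation used Apache Commons Math in Java), and the function names are mine; note that when E(A) is near the typical value of 70, the bias term (1 + E(A)/70) is close to 2, as described in the footnote:

```python
import numpy as np

def matrix_energy(adjacency):
    """Sum of singular values of the adjacency matrix (Sinha and de Weck)."""
    singular_values = np.linalg.svd(np.asarray(adjacency, dtype=float),
                                    compute_uv=False)
    return float(singular_values.sum())

def fitness_seek_complex(food_returned, adjacency):
    """Matrix energy as a multiplying factor: rewards tightly coupled systems."""
    return food_returned * (1 + matrix_energy(adjacency) / 70)

def fitness_seek_simple(food_returned, adjacency):
    """Matrix energy as a divisor: rewards loosely coupled systems."""
    return food_returned / (1 + matrix_energy(adjacency) / 70)
```

For a system with no agent interactions at the final timestep, the adjacency matrix is all zeros, its energy is 0, and both functions reduce to the plain food-returned score.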
Figure 10.15 shows the optimized behavior from each GA, and Table 10.8 summarizes the numerical results.

Table 10.8: Results for systems optimized for opposite levels of topological complexity

Optimized for              Low Topological Complexity  High Topological Complexity
Complexity (mean)          24.5                        94.2
Fitness (30th percentile)  808.4                       2787.2
food_r (30th percentile)   1090                        1330

The difference in food returned between the high-complexity and low-complexity systems was much lower as a percentage than their difference in complexity: 22.0% vs. 284.4%. Qualitatively, as seen in Figure 10.15, they had very different emergent behaviors. This indicates that the GA was successfully biased toward two different qualitative strategies, with comparable fitness scores, by the inclusion of the topological complexity metric.

Figure 10.15: System-level structure at the final timestep for systems optimized for low complexity (left) and high complexity (right).

It can be seen from Figure 10.15 that the system optimized for low topological complexity did in fact show a more distributed architecture than the system optimized for high complexity. In the low-complexity system, agents distributed themselves more widely throughout the arena and relied on a strategy of using the walls for guidance toward the food. In the high-complexity system, the agents formed a tightly packed foraging line. The agents without food used the line of agents with food as a guide to find the food again, through a negative A3 value. This behavior led to a minimal travel time between food and home, and it is the most similar to actual ant foraging of any of the optimized behaviors found in this chapter. Another interesting result is that the system optimized for high complexity actually outperformed the RO2 system that was optimized with the standard fitness function (1245 units of food returned) by 6.83%.
With an additional constraint on the system, the GA was biased toward a high-fitness area of the search space that it could not find in the previous optimizations.

10.8 Discussion

In this chapter, the two-field behavioral model was shown to be a viable approach to SO systems design. The task and social fields designed served distinct but complementary purposes. Figure 10.4 shows one beneficial interaction between sField and tField, where the sField parameters caused agents to misalign with one another while seeking food. This group behavior was actually a surprising repurposing of the original flocking primitives included in the behavior capacity. The alignment behavior was originally included with the expectation of group flocking, for efficient group movement. In fact, the GA evolved a large negative value for the A_4 parameter, causing agents without food to set their heading opposite the heading of their neighbors. The physics of the simulation stopped agents along the boundary, creating lines of agents, and the sField relationships caused a chain reaction of agents directing each other toward the food. These structures created by sField relationships allowed agents to reliably find the food, after which their tField behaviors would dominate and lead them to carry the food back to the home base. This indicates that the social interactions can be designed partially independently from the tField, with the goal of designing useful emergent structures. The tField can then be used to deploy these structures in the proper place and time. Another example of this was shown in Figure 10.15, where the high-complexity system deployed groups of flocking agents (structure) that used the directional cues from the home base and boundaries (tField) to efficiently forage for food. The use of a simulation-optimization loop also provided useful results, as it allowed an objective determination of optimized candidates within a conceptual model.
This same approach is used in Chapter 11, and the implications are discussed in Chapter 12.

Chapter 11 Box Pushing

In this chapter, a self-organizing system is designed to push a box toward a goal. It is an expansion of previous work on box-pushing Cellular Self-Organizing Systems [114, 115]. In those papers, the self-organizing agents were given the ability to broadcast the conditions in their local vicinity, so that all the agents could coordinate to move the box. The agents would individually find the worst locations for the box and move there. Then, when all agents had taken a place, they would push it toward the best locations. The process would repeat until the system was successful or the time limit ran out. At a system level, it looked like a synchronized alternation between formation and pushing. The expansion in this dissertation is to complete this task without the aid of system-wide synchronization. Note that synchronization is possible to achieve in SO systems (e.g., [33, 217]) but is not guaranteed and requires dedicated hardware and/or algorithms. Because one objective in designing SO systems is to make the hardware as simple as possible [109], good performance without the synchronization overhead represents a notable improvement. Exploration of this problem uncovered an interesting sensitivity to initial conditions and internal perturbations. This is an inherent property of complex systems, and thus a risk in the design of SO systems, but it was effectively covered up by the system synchronization in the previous work. TRIZ separation principles [214] were used to counteract these problems in the unsynchronized system as well. Using a genetic algorithm for optimization, it was also possible to determine the performance penalty for systems that were designed for robustness vs. systems that were optimized for particular initial conditions.
Using this approach will allow designers to make informed tradeoffs between robustness and performance according to their systems' requirements and environmental constraints.

11.1 Significance of the box-pushing task

11.1.1 Practical applications

It is possible to achieve impressive system functionality from minimalist agents [109]. A motivating case study can be found in nature, where the advantage of coordinated object-moving is evident in the behavior of foraging ants, which have been reported to transport food objects that are heavier than a single worker ant by a factor of 5000¹ ([152], cited in [123]). By analogy, in the engineered world, debris removal, mining, or construction could all be aided by cooperating robots. This type of task is often described as the "piano mover's problem" or path planning, which has been studied in the robotics, artificial intelligence, and control theory communities [127]. While this problem is often approached from a mathematical or topological perspective [193], in the present work the agents are trying to solve the problem collectively using local information. Path planning bears strong similarities to general problem solving, as both require a system to find a sequence of actions that map an initial state into a desired state [127], so with proper knowledge of the constraints and obstacles in a problem, this approach could also be deployed to deal with more abstract "path-planning" problems, such as design problems or resource allocation.

11.1.2 Key features for designers

With a large box and simple agents, it is necessary for agents to work together to move the box. This tight coordination is difficult to design in a system if it is assumed that agents' communication is limited. How can this coordination be achieved by independent agents? What is the minimum communication required? There are also several physical aspects of the task for agents to consider.
The task has a goal, walls, and possibly obstacles. How can the agents sense and react? Should walls be treated differently from obstacles if it requires more sophisticated hardware to differentiate between them? In this work, agents' reactions to stimuli are parameterized, leading to the question of whether or not each wall and obstacle should be assigned a different parameter. Success in the task is partially stochastic. Agent initial conditions and internal perturbations, common in complex and SO systems [230], will affect the system's behavior. How can the system be made more robust to these perturbations? And does it lose efficiency by seeking adaptability?

¹ Or heavier than the collective team of ants by a factor of 50.

Figure 11.1: Initial conditions for the box-pushing task, with agents placed randomly on the left side of the box

11.2 Details of the box-pushing task and design process

11.2.1 Task overview

Figure 11.1 shows the simulated environment for the box-pushing task. The simulation is run using NetLogo [238], a popular platform for agent-based simulations. The brown box is to be pushed to the goal (concentric black circles) by the agents (green squares). They can push the box to move and rotate it. They cannot pull the box, and the box cannot be pushed through the red obstacle or the blue walls. The system is limited by the total distance traveled by agents (representing battery power or abstract efficiency metrics) before it turns off. In keeping with the previous work [115], if the centroid of the box reaches the same x-coordinate as the goal, the attempt is considered a success.

11.2.2 Simulation physics

Because the NetLogo simulation software is dimensionless, all simulation dimensions are measured in patch-widths (pw) throughout this chapter. A patch is an elementary square area of the NetLogo environment.
For reference, the agents are 1 pw long along the diagonal; the box is 5 pw wide; and the starting distance from the center of the box to the goal is 30 pw.

The simulation rests on a simplified physical model. At every simulation timestep, the forces from every pushing agent are summed. Every push carries an equal force, and a vector sum of 11 pushes will move the box 1 pw in a given direction. The torque is applied around the centroid of the box, and a push with a moment arm of 5 pw will rotate the box 1°. The translation and torque scale linearly with the number of agents pushing and their moment arms, respectively. If the box is moved into an agent, that agent is pushed out of the way. If the box or an agent is moved into walls or obstacles, its motion is halted, but no other reaction force is simulated. Agents cannot move through the box, walls, or obstacles.

11.2.3 Agent hardware assumptions

An agent is assumed to be able to move in any direction in the horizontal plane. Agents can broadcast and receive information using one-to-many signaling, but not direct one-on-one communication. They can differentiate other agents from the environment and discriminate among environmental stimuli. They can measure distances and directions, and they have enough computing power to perform simple reasoning algorithms. They have enough data storage to govern state changes. They have a maximum speed of 3 pw per simulation timestep. These assumptions are in keeping with the previous work on CSO systems [41], and they are similar to the definition of a "minimalist" robot [109]. They are also reasonable with respect to current swarm robot hardware (e.g., [84, 144]).

11.2.4 Potential pitfalls identified through simulation

Refer to Figure 7.7 for a summarized design methodology for self-organizing systems. The methodology, with a few iterative loops, is followed in this chapter.
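The simplified physics in Section 11.2.2 amounts to a linear force/torque update, which can be sketched as follows (a sketch only; the function and argument names are mine, and pushes are assumed to have unit magnitude as in the text):

```python
def box_step(pushes):
    """One physics update for the box.

    pushes: one (fx, fy, arm) tuple per pushing agent, where (fx, fy) is a
    unit-magnitude push direction and arm is the signed moment arm (pw) of
    the push about the box centroid.

    Returns (dx, dy, dtheta): 11 aligned unit pushes translate the box 1 pw,
    and a single unit push with a 5-pw moment arm rotates it 1 degree; both
    effects scale linearly, as in Section 11.2.2.
    """
    fx = sum(p[0] for p in pushes)
    fy = sum(p[1] for p in pushes)
    net_torque = sum(p[2] for p in pushes)
    return fx / 11.0, fy / 11.0, net_torque / 5.0
```

For example, `box_step([(1.0, 0.0, 0.0)] * 11)` translates the box exactly 1 pw in the +x direction with no rotation.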
During exploratory work, design of the system was treated as a flocking problem. As in other work on flocking [192, 40, 100, 204], flocking primitives were parameterized to create a behavioral search space. In this case, the box was treated as a special member of the flock. Agents' reactions to it were unique, but still based on attraction and repulsion, which are classical primitives in flocking [183]. The agents also simultaneously considered attraction to the goal and repulsion from walls and obstacles. These parameters were tuned using a genetic algorithm. The fitness function counted the number of times that the system successfully moved the box to the goal before the agents had moved a total of 40 000 pw (the task was reset with random agent initial positions after every success). I will not give details of this exploratory behavioral algorithm here, but a visual summary with motion traces of the box of systems optimized by the GA is given in Figure 11.2. It can be seen in Figure 11.2 that the system occasionally behaved in critically counterproductive ways, pushing the box backwards or jamming it against a wall.

Figure 11.2: (a) task completion with obstacle collision; (b) task completion using walls as a guide; (c) task failure, pushing the box backwards; (d) task failure, jamming the box against a wall

Another behavior, not shown, was system failure due to abandoning the box and moving to the goal without it. Even in successful attempts, the box usually collided with walls or the obstacle. Note that all the behaviors described here were found in systems after optimization. Because the GA could only optimize within the parameter space given to it, it was clear that the conceptual design of the system needed to be reworked to minimize failures. Even collisions could be problematic.
If the agents were required to move a fragile box (or alternatively, if they were working in a fragile environment, as in a medical application), then pushing the box into a wall or obstacle could damage it and be another driver of system failure. With these potential pitfalls in mind, a formal behavioral model for a box-pushing system was developed, as described in the next section.

11.3 Conceptual design of a cooperative box-pushing system

Design is rarely linear [71]. Iterative feedback loops also exist due to knowledge generation throughout the process. In the case of the preliminary work mentioned earlier, unsatisfactory results after optimization and new knowledge of the emergent system pitfalls led to rework and a new strategy at the behavioral design phase.

11.3.1 TRIZ application calls for state changes

TRIZ (a Russian acronym that can be translated as "Theory of Inventive Problem Solving") is a tool for solving design problems. In TRIZ, problems are modeled as contradictions, where one desired aspect of the system is in conflict with another. These contradictions can be solved using separation principles, where the conflicting goals are separated in time, space, system architectural levels, or according to conditions [93]. This section will briefly explain the use of TRIZ in mitigating the counterproductive behaviors discussed in Section 11.2.4. The contradiction in the system was the agents' need to be in two different places. An agent wants to be near the box (to push), but it also wants to be near the goal (to complete the task). These ideas were built into the agent with the idea that they should complement each other so that the agents would push the box toward the goal. In practice, this was not always the case. For example, when an agent was between the box and goal, the desires pulled it in opposite directions.
Sometimes, when all the agents were in this zone, there would be critical failures, such as all agents moving to the goal (abandoning the box) or all agents pushing the box (pushing the box away from the goal, Figure 11.2). The previous work [114] avoided this scenario by synchronization. In that study, agents would choose a location at the perimeter of the box first. Then, when all agents had found a suitable location, they would communicate and decide whether or not to push the box. This is implicit TRIZ separation in time: one goal is fulfilled by the system, then the next, without attempting to fulfill both at the same time. In the current work, which does not rely on system synchronization, the TRIZ principles of separation in time and separation by condition were applied to individual agents. The agents' decisions whether to seek a position around the box (formation keeping) or push toward the goal are governed by a state change, so that an agent is only attempting to fulfill one function at a time.

11.3.2 Behavioral design

Agent behaviors are designed using the field-based approach, as described in Chapter 7. A field is a mathematical abstraction of every stimulus that an agent considers. Smooth field functions allow agents to follow local gradients toward goals. A task field and a social field are used. The task field (tField) is a response to objects in an agent's task and environment, and the social field (sField) arises from agent-agent interactions. The fields are treated as two separate concepts, because the sField can be used to dynamically create task-based structures, while the tField controls where these structures should be deployed [101]. The stimuli in the task field are the box, walls, goal, and obstacle. Agents need to consider the desirability of their own location within the field, and the desirability of the box's location within the field.
Field for box

The field governing the desirability of the box's location is a pure tField, whose function is given here:

field_box = V_goal · e^(−λ_goal · d_goal) + V_obs · e^(−λ_obs · d_obs) + V_wall · e^(−λ_wall · d_wall)   (11.1)

where the V and λ terms are behavioral design parameters (design DNA, or dDNA), and d is the distance to the goal, the center of the obstacle, or the nearest point on the wall, respectively. Because the terms in the field equation decay exponentially, the influence of any one stimulus can peak at a point and be negligible elsewhere. This makes it simple to create a local maximum around the goal, and local minima near the walls and obstacle. Thus, with proper selection of decay constants, following an increasing gradient can lead through the environment, between walls and obstacles, and toward the goal. One graphical example applied to the simulation environment is given in Figure 11.3.

Figure 11.3: Example representation of the field_box value at every NetLogo patch. White patches have the highest field values, darker shades of red have lower values, and the black patches represent walls and obstacles.

Field for agents

Agents cannot simply follow the field described in Equation 11.1, as they must be mindful of the box's location and orientation within the field as well. The field governing agent movement should distribute agents around the box, so that they can protect it from collisions and collectively push it toward the goal.
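Evaluating the box field of Equation 11.1 at a point is straightforward; a sketch (the λ notation for the decay constants and the function name are mine, and the default values are taken from the baseline optimized parameter set in Table 11.2):

```python
import math

def field_box(d_goal, d_obs, d_wall,
              V_goal=5.107, V_obs=-28.04, V_wall=-0.8970,
              lam_goal=0.5059, lam_obs=2.553, lam_wall=2.6706):
    """Eq. 11.1: exponentially decaying attraction to the goal and
    repulsion from the obstacle and the nearest wall point (distances in pw)."""
    return (V_goal * math.exp(-lam_goal * d_goal)
            + V_obs * math.exp(-lam_obs * d_obs)
            + V_wall * math.exp(-lam_wall * d_wall))
```

With these values the field rises toward the goal and dips sharply near the obstacle, so gradient-following steers the box between hazards and toward the goal.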
The agents use a combined tField and sField to calculate their own movements:

field_agent = w_p · Σ_{i∈Φ} p_i + w_b · b   (11.2)

p_i = d_ip / d_pn   if d_ip ≤ d_pn;   p_i = d_pn / d_ip   if d_ip > d_pn
b   = d_b / d_bn    if d_b ≤ d_bn;    b   = d_bn / d_b    if d_b > d_bn

where Φ is the set of the calculating agent's neighbors; d_ip and d_b are the distances (pw) to neighboring agent i and to the box, respectively; d_bn and d_pn are design parameters representing the nominal distance that an agent attempts to maintain from the box and other agents, respectively; and w_p and w_b are relative weights that an agent places on maintaining a distance toward its neighbors or the box, respectively, if both cannot simultaneously be the nominal distance. Similar to [102], the equilibrium distances d_bn and d_pn are used so that agents remain near the box but also spread out from one another. The intention is for agents to maintain a formation surrounding the box. As shown in Figure 11.4, the box is divided into 6 zones to focus the agents' attention on important interactions and to minimize interference within the system [114]. For an agent to determine its neighborhood Φ, it seeks agents in its own zone, and, if it is on a long side of the box, neighbors in the adjacent zone. For example, in Figure 11.4, where the red agent is calculating its move, Φ would contain the blue agents.

Figure 11.4: LEFT: the box is divided into 6 zones; the zones determine which agents work together. RIGHT: the red agent is calculating its move. Since it is in zone 3, it considers information from the blue agents, which are in zones 2 and 3, and the green agents, which are in zone 5.

State changes

The green agents in Figure 11.4 are opposite the red agent and govern the red agent's state change. State changes separate in time the agent's formation keeping from box pushing.
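Equation 11.2 and this state change can be sketched together (a hedged reconstruction from the text; the function names, the piecewise ratio form, and the rng argument are mine):

```python
import random

def ratio(d, d_nom):
    """Piecewise term from Eq. 11.2: peaks at 1.0 when d == d_nom and
    decays toward 0 as the distance deviates from nominal."""
    return d / d_nom if d <= d_nom else d_nom / d

def field_agent(neighbor_dists, d_box, w_p, w_b, d_pn, d_bn):
    """Eq. 11.2: weighted preference for nominal spacing from neighbors
    (sField) and from the box (tField)."""
    return w_p * sum(ratio(d, d_pn) for d in neighbor_dists) + w_b * ratio(d_box, d_bn)

def next_state(state, cross_field_values, own_field_value, switch_prob, rng=random):
    """Switch to pushing if the opposite side of the box sits higher in the
    box field; otherwise keep formation. With no cross neighbors, flip at
    random with probability switch_prob (%)."""
    if cross_field_values:
        mean_cross = sum(cross_field_values) / len(cross_field_values)
        return "pushing" if mean_cross > own_field_value else "forming"
    if rng.random() * 100 < switch_prob:
        return "pushing" if state == "forming" else "forming"
    return state
```

The ratio form makes an agent equally unwilling to crowd its neighbors or to drift away from them, which is what produces the surrounding formation.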
The agent's logic for switching between the two states is as follows: at any timestep, the agent will receive the box field values from its neighbors on the opposite side of the box (its cross neighbors). If the average of these values is higher than the box field at its own location, it will set its state to pushing. Otherwise, it will set its state to forming. If there are no cross neighbors, the agent will randomly switch states, according to the probability switch_prob.

11.3.3 Summary of agent behavior

At a given simulation timestep, an agent will first sense its tField, broadcast its tField value, determine its neighbors on its own side of the box, and determine its cross neighbors on the opposite side of the box. It will then possibly change its state. If it is in the pushing state, it will move toward the box, and if it reaches the box, it will push. If it is in the forming state, it will move to the point within 3 pw with the highest field value according to Equation 11.2. All agents run the same behavioral algorithm in parallel at every timestep.

11.4 Detail design by simulation and optimization of dDNA

This section describes the dDNA encoding and the setup of the genetic algorithm.

11.4.1 DNA encoding

The behavioral parameters are encoded as 8-bit binary numbers. This is the "design DNA" of the system that the GA can optimize. Table 11.1 shows the values and mapping functions that the GA used for determining behavioral parameters.

Table 11.1: Mapping GA DNA encoding to behavioral parameters

Parameter               Raw Range   Mapping Function          Parameter Range
V_goal                  0–255       f(x) = 1.0369^(x−127)     0.01 to 100
V_obs, V_wall           0–255       f(x) = −1.0369^(x−127)    −0.01 to −100
λ_goal, λ_obs, λ_wall   0–255       f(x) = 3x/255             0 to 3
w_p, w_b                0–255       f(x) = 1.0369^(x−127)     0.01 to 100
d_pn, d_bn              0–255       f(x) = 1 + 49x/255        1 to 50
switch_prob             0–255       f(x) = 100x/255           0 to 100
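The decoding step of Table 11.1 maps each raw 8-bit gene to a behavioral parameter; a sketch (the exponential form 1.0369^(x−127) is reconstructed from the table's base and ranges, so the orientation of the exponent is an assumption; function names are mine):

```python
def decode_weight(x):           # V_goal, w_p, w_b: exponential, ~0.01 to ~100
    return 1.0369 ** (x - 127)

def decode_negative_weight(x):  # V_obs, V_wall: mirrored, ~-0.01 to ~-100
    return -(1.0369 ** (x - 127))

def decode_decay(x):            # lambda_goal/obs/wall: linear, 0 to 3
    return 3.0 * x / 255.0

def decode_distance(x):         # d_pn, d_bn: linear, 1 to 50 pw
    return 1.0 + 49.0 * x / 255.0

def decode_percent(x):          # switch_prob: linear, 0 to 100 %
    return 100.0 * x / 255.0
```

Note that `decode_weight(127)` is exactly 1, and each increment of the raw gene multiplies the weight by 1.0369, spanning roughly 0.01 to 100 over the 0–255 range.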
The exponential mappings were used for parameters that encode a relative weight, because it is not the raw value of the parameter that matters but the ratios among the relative weights. With an exponential mapping, such a parameter has a median value of 1 and can increase or decrease by a factor of 100.

11.4.2 Genetic operators

The GA used in this research can be described as a Simple Genetic Algorithm (SGA) [79] with fitness scaling, uniform crossover, and elitism. An SGA randomly generates a population of candidate solutions (the binary strings encoding behavioral parameters) and evaluates them according to a fitness function (a measure of simulated global performance). With fitness scaling, the raw fitness scores are scaled so that the best candidate has a score that is a predetermined factor of the average score (15, in this study), and all other candidates' fitness scores are scaled linearly. After the first randomly generated generation of solutions, the GA creates further generations using selection, crossover, and mutation. Using elitism, the top candidate from a generation is cloned (copied bit-by-bit) to the next generation. To fill the rest of the next generation, candidates are randomly selected to mate with one another; the selection probability is proportional to their scaled fitness. Under uniform crossover (as defined in [201]), a bit is randomly chosen from either parent to write each bit of the offspring genome. For all non-clones, every bit also has a possibility of mutation (flipping a bit from 1 to 0 or vice versa) according to a predetermined probability (1%). These operators of selection, crossover, and mutation are meant to generate better and better candidates in progressive generations. The optimization is continued until suitably fit candidates are found or the predetermined maximum generation is evaluated.
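The SGA loop described above (linear fitness scaling, elitism, fitness-proportional selection, uniform crossover, and per-bit mutation) can be sketched generically; this is not the dissertation's implementation, and the toy fitness function in the usage note stands in for the simulated global performance:

```python
import random

def run_sga(fitness, bits=88, pop_size=30, generations=40,
            scale=15.0, mut_rate=0.01, seed=0):
    """Simple GA: linear fitness scaling, elitism, roulette selection,
    uniform crossover, per-bit mutation. Returns (best_score, best_genome)."""
    rng = random.Random(seed)

    def scaled(scores):
        mean, best = sum(scores) / len(scores), max(scores)
        if best <= mean:                 # degenerate (all-equal) population
            return [1.0] * len(scores)
        # shift so the best maps to scale * (shifted mean); clip at zero
        c = (best - scale * mean) / (scale - 1.0)
        return [max(s + c, 0.0) for s in scores]

    def select(popn, weights):
        r = rng.random() * sum(weights)
        acc = 0.0
        for ind, w in zip(popn, weights):
            acc += w
            if acc >= r:
                return ind
        return popn[-1]

    popn = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(g) for g in popn]
        weights = scaled(scores)
        nxt = [popn[scores.index(max(scores))][:]]       # elitism: clone the best
        while len(nxt) < pop_size:
            p1, p2 = select(popn, weights), select(popn, weights)
            child = [rng.choice((a, b)) for a, b in zip(p1, p2)]         # uniform crossover
            child = [bit ^ (rng.random() < mut_rate) for bit in child]   # per-bit mutation
            nxt.append(child)
        popn = nxt
    scores = [fitness(g) for g in popn]
    best = max(scores)
    return best, popn[scores.index(best)]
```

For instance, `run_sga(sum, bits=40, pop_size=20, generations=30)` maximizes the number of 1-bits in the genome; elitism guarantees that the best score never decreases between generations.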
11.4.3 Fitness function

The pass/fail nature of the task makes it difficult to derive figures of merit. In its most basic form, the task is simply to push the box toward a goal, with no stipulation on how this is to be done. A wide variety of strategies can be used successfully, and the success of any one attempt relies partially on random chance. At the end, the result is either a pass or a fail, sorting all systems into one of only two categories. To aid in differentiation of systems, more criteria were added to the system performance metrics: effort, collisions, and reliability.

Effort is the total distance traveled by the agents in the system. Systems can reduce their effort by taking the shortest possible route between the origin and the goal, and by not backtracking. It is included in the fitness function as a simulation stopping condition: after the system has expended a predetermined amount of energy, the simulation is stopped and the fitness is calculated.

Collisions is the number of times that the box collided with the walls or obstacles. It is included in the fitness function as a penalty that reduces the score of a success.

Reliability is tested by allowing the system to complete the task multiple times before the effort budget is exhausted. After a success, the task and environment will reset, allowing another opportunity for the system to complete the task and to obtain a higher fitness with every subsequent success.

For clarity, in the remainder of this chapter, any single attempt at pushing the box to the goal will be referred to as a sprint. The set of sprints that occur before the effort budget is expired will be referred to as a trial. Systems can add to their fitness score with each successful sprint, but the GA only considers the final fitness score at the end of a trial. A trial is ended after the agents have collectively traveled 40 000 pw. The fitness is calculated according to Equation 11.3.
fitness = 100 · Σ_{i=1}^{M} 0.75^(x_i)   (11.3)

where M is the number of successful sprints in a trial, and x_i is the number of collisions that occur within sprint i. It can be seen that the collision penalty is 25%: a collision-free sprint adds 100 to the fitness, while a sprint with, for example, two collisions adds only 100 · 0.75² = 56.25.

11.4.4 Finding optimal systems

Because the behavior of the system is partially stochastic, determining the optimal system is not as simple as choosing the system with the highest fitness value found by the GA. The mean and variance of the system behavior must also be considered. Also, different GAs with the same fitness function and search space can converge to different parameter sets. In this work, multiple GAs are run to identify highly fit candidate systems. These systems are then re-tested for reliability, and the system with the highest 30th percentile score (out of 100 trials) is chosen as the optimal candidate. This means that any fitness score reported for an optimized parameter set can be interpreted as having 70% reliability. The full process is identical to the extended foraging simulation optimization described in Section 10.5.3.

11.5 Scenarios and results

11.5.1 Baseline: random initial positions and stepping order

The baseline setup for the system is for the agents to assume a random initial position on the left side of the box and for the simulation to update the agent positions in a random order.

Research questions

This scenario was chosen to include several sources of randomness, and to compare to the previous work in [114]. Several questions guided this case study: Can an unsynchronized field-based system be as reliable as a logic-based system with synchronization? What are the performance advantages and disadvantages of each approach?

Optimized system

Table 11.2 shows the optimized parameter set found in the baseline scenario.
Table 11.2: Optimized parameter set for the baseline scenario

Parameter     Value
V_goal        5.107
V_obs         -28.04
V_wall        -0.8970
λ_goal        0.5059
λ_obs         2.553
λ_wall        2.6706
w_p           29.43
w_b           16.55
d_pn          40.01
d_bn          2.729
switch_prob   100.0

In 100 trials, this system had a 30th percentile fitness of 1565.9. Its minimum, mean, and maximum fitness scores were 0, 1572.7, and 1901.3, respectively.

Comparison to previous work

Compared with the previous work, the unsynchronized system was found to be quite competitive, and more efficient from an energy viewpoint. It was, however, more prone to collisions. Figure 11.5 shows three trends as a function of the energy budget of the system: the raw success rate of the current work, the success rate if sprints with collisions are not counted, and the success rate of the previous work (in which collisions never occurred in 100 trials). It can be seen from Figure 11.5 that the unsynchronized system could complete the task much more reliably with a small energy budget, but it was prone to collisions. At higher energy levels, the synchronized system had more collision-free successes. This is because the synchronized system expended a lot of energy to create formations around the box, and because it showed a lot of backtracking and oscillation in box motion. In the synchronized system, agents could move to an entirely new location around the box during every pushing sequence. In the unsynchronized system, when agents found a location around the box, they tended to stay in the vicinity, minimizing formation-keeping movement. Also, there were fewer oscillations and less backtracking, so the box was pushed to the goal more quickly.

11.5.2 RNG seed attached to candidates

The next optimization removed the randomness from the simulation by allowing candidate solutions to fix their random number generator (RNG) seed. To do this,
To do this, 151 Behavioral Modeling and Computational Synthesis of Self-Organizing Systems 0 10 20 30 40 50 60 70 80 90 100 0 5000 10000 15000 20000 25000 30000 35000 40000 45000 50000 Success Rate (%) Agent Travel Limit (pw) Optimized Candidate (collisions allowed) (Khani, Humann, and Jin 2015) Optimized candidate (no collisions allowed) Figure 11.5: Success rate as a function of energy budget, comparing the optimized, unsynchronized system with the synchronized system the parameter sets were appended with 8 more bits to encode the RNG seed, and this seed was passed to NetLogo at the beginning of a trial. This seed controls the three sources of randomness within the simulation: 1. Random initial positions 2. Random stepping order of agents 3. Random switching between states The random stepping order is an artifact created by the simulation software. As a serial-processing machine, the computer can only emulate the parallel actions of SO systems by updating the system state in small intervals, and agent states must be updated one at a time to check for collisions and interference. Nonetheless, this simulation artifact does emulate some of the internal perturbations of distributed hardware systems such as unequal agent speeds, missed communication, interference, and unsynchronized update times. By encoding the RNG seed with the parameters, initial positions and stepping orders can be maintained in clones and ospring of highly t candidates. They are still generated based on the RNG seed (i.e. not specically designed), but for any given seed they are repeatable. Intuitively, this could lead to one of two scenarios: An RNG seed is found that enables advantageous initial positions and stepping orders for a large swath of candidates. 152 Chapter 11. 
Or a co-evolved pairing of the RNG seed with parameter values is found that gives high fitness, even though individually the parameter values or the RNG seed may not be generally helpful.

Note: NetLogo uses the Mersenne Twister RNG, as developed by Matsumoto and Nishimura [141] and implemented in Java code by Luke [132].

Research questions

The random initial positions represent a designer's uncertainty about the system state at deployment. Many SO systems are envisioned to be deployed in environments where precise control is impossible [19, 95, 139]. How can a GA optimize in the face of random initial conditions and internal perturbations? What is the performance penalty caused by the use of random initial positions? What is the danger of over-optimization? These questions have implications for design tradeoffs and optimization approaches. If the designer has to pay a high premium for precise control over the system's initial condition, then it may be wise to forgo that precision if the system is robust to changing initial conditions.

Optimized system

Table 11.3 shows the optimized dDNA set for the RNG-encoded system.

Table 11.3: Optimized parameter set for the RNG-seed-added scenario

Parameter     Value
V_goal        0.01926
V_obs         -0.1576
V_wall        -0.01118
λ_goal        0.2471
λ_obs         1.635
λ_wall        2.212
w_p           0.02371
w_b           0.01286
d_pn          23.48
d_bn          2.537
switch_prob   90.98
RNG seed      210000

This system evolved a small equilibrium distance with the box of 2.53 pw. A distance of 1 pw is necessary to avoid colliding with the box. It was generally quick to surround the box, and it pushed the box toward the goal with minimal stalling or collisions. With its evolved RNG seed of 210000, it was able to achieve a fitness of 2228.4 with 25 complete sprints.
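Attaching the RNG seed to the genome makes a trial repeatable: re-running with the stored seed regenerates the same "random" initial positions and stepping order. A minimal illustration using Python's Mersenne Twister (NetLogo likewise uses a Mersenne Twister; the coordinate bounds here are illustrative assumptions, not the simulation's actual limits):

```python
import random

def initial_positions(seed, n_agents=20):
    """Regenerate reproducible 'random' starting positions from a stored seed."""
    rng = random.Random(seed)
    # assumed deployment region: left of the box (illustrative bounds, in pw)
    return [(rng.uniform(-30.0, -5.0), rng.uniform(-15.0, 15.0))
            for _ in range(n_agents)]

# the same evolved seed always reproduces the same deployment...
assert initial_positions(210000) == initial_positions(210000)
# ...while a different seed gives a different one
assert initial_positions(210000) != initial_positions(123)
```

This is why a clone or offspring carrying the seed 210000 inherits not just the behavioral parameters but the exact perturbation history that made its parent fit.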
Figure 11.6: Retrial fitness histogram comparing the system optimized with an RNG seed and the system from the baseline scenario, both tested with random RNG seeds (N = 100)

Retesting without the optimized RNG seed

This performance was highly dependent on the RNG seed, however. To determine this dependence, a retest was performed without the evolved RNG seed (a random new seed was generated for each trial). Qualitatively, this parameter set was prone to the same errors as other optimized systems, occasionally jamming the box against a wall or pushing it backwards away from the goal. The particular evolved RNG seed had just prevented the system from displaying this error within the time limit. Although the optimized fitness was 2228.4, the maximum of the retested trials was 1899.3, or 14.8% lower. The mean and standard deviation were 1312.5 and 501.6, respectively. The worst trial had no successful sprints, achieving a fitness of 0. A histogram of the results compared to the baseline optimized candidate is given in Figure 11.6.

The baseline optimized candidate had a maximum fitness of 1901.3; the candidate with the attached RNG seed had a fitness of 2228.4. This indicates that there is at least a 14.7% performance penalty due to random initial conditions and perturbations, if the designer is only concerned with a best-case scenario (a maximax optimization). The candidate that was not optimized in the face of perturbations was less robust when retested with perturbations. This lack of robustness gave it a mean performance of 1312.5; compared to the baseline mean of 1572.3, this is a 16.5% performance penalty due to lack of robustness to an uncertain environment.
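The two penalty figures follow directly from the trial statistics quoted above:

```python
def penalty_pct(reference, achieved):
    """Relative shortfall of `achieved` versus `reference`, in percent."""
    return 100.0 * (reference - achieved) / reference

best_case = penalty_pct(2228.4, 1901.3)  # ~14.7%: baseline best vs. RNG-seeded fitness
mean_case = penalty_pct(1572.3, 1312.5)  # ~16.5%: retested mean vs. baseline mean
```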
11.5.3 Ideal initial positions

Because successful systems tended to surround the box and signal when it was approaching a wall or obstacle, task completion could be aided by intentional arrangement of agents in positions surrounding the box. The most advantageous initial formation has agents on all four sides, regularly spaced, with more agents on the left than the right (because they need to push toward the right).

Research questions

The obvious advantage here is that the designer can set initial conditions that are conducive to system functionality, rather than making the system assemble itself. The possible downsides are again related to cost and robustness. This precise control may come at a high price if the system is deployed in remote or harsh environments. Also, if the system behaviors are designed for ideal initial conditions, they may fail in off-nominal cases. As is often the case in design, a cost/benefit decision must be made. These research questions aim to explore the cost/benefit tradeoffs: What performance gains are achievable by prescribed initial conditions? How severe is the loss in performance when a system optimized for ideal initial conditions is deployed in an off-nominal configuration?

Optimized system

The candidate optimized for ideal initial conditions is shown in Table 11.4. It had a fitness of 6500, much higher than any other system mentioned in this chapter. It achieved this by successfully completing 65 sprints without colliding with any walls or obstacles. The candidates evolved in this scenario not only had high fitness, but also were remarkably reliable.
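An "agents on all four sides, more on the left" starting formation can be generated programmatically. The sketch below is only an illustration of such a formation generator; the counts, standoff gap, and coordinate conventions are assumptions for the example, not the exact layout used in the study:

```python
def ring_positions(box_w, box_h, counts, gap=1.0):
    """Place agents around a box centered at the origin.

    counts = (left, right, top, bottom); more agents on the left push the
    box rightward toward the goal. `gap` is the standoff from each face.
    All values here are illustrative, not the dissertation's exact layout.
    """
    left, right, top, bottom = counts
    def spread(n, length):
        # n evenly spaced coordinates centered on zero along a face
        return [(i + 0.5) * length / n - length / 2 for i in range(n)]
    pos = []
    pos += [(-box_w / 2 - gap, y) for y in spread(left, box_h)]    # left face
    pos += [(box_w / 2 + gap, y) for y in spread(right, box_h)]    # right face
    pos += [(x, box_h / 2 + gap) for x in spread(top, box_w)]      # top face
    pos += [(x, -box_h / 2 - gap) for x in spread(bottom, box_w)]  # bottom face
    return pos

agents = ring_positions(10, 6, counts=(6, 2, 3, 3))
assert len(agents) == 14   # 6 left + 2 right + 3 top + 3 bottom
```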
Table 11.4: Optimized parameter set for ideal initial conditions scenario

    Parameter     Value         Parameter     Value
    V_goal        16.88         w_p           17.78
    V_obs         -0.02975      w_b           74.99
    V_wall        -2.2193       d_pn          39.62
    goal          0.2471        d_bn          1.000
    obs           0.8000        switch_prob   86.27
    wall          1.082

[Figure 11.7: (LEFT) Initial designed formation. (RIGHT) Behavior of the optimized system. A wire trace is shown of 5 sprints, with a different color for each sprint.]

Figure 11.7 shows the initial formation and a wire trace of 5 sprints from this candidate. All 100 subsequent trials of the optimized candidate displayed the same behavior as in Figure 11.7. There were no collisions with the wall or obstacle. The only notable variation from sprint to sprint was whether the system would go above or below the obstacle. The system achieved such high fitness by maintaining a very tight formation around the box. Its d_bn value (governing the equilibrium distance from the box) was 1.0, the lowest possible value in the parameter range. The formation surrounding the box and the state-switching strategy reliably caused agents to move the box to the goal, and the tight equilibrium distance minimized the distance that agents would travel, allowing them to conserve their energy budget for more sprints.

Retesting without ideal initial positions

The optimized parameter set, shown in Table 11.4, was highly dependent on the initial conditions. In a re-test of the optimized system (N = 100) with the random initial positions of the baseline scenario, the average fitness was 0.004323, with only 54% of trials resulting in any successful sprints. The best trial resulted in a fitness of 0.1784 due to the high number of collisions. Agents did a poor job of distributing themselves around the box, often grouping together on only 1 or 2 sides and collectively jamming the box into a wall.
11.6 Discussion

11.6.1 Revisiting research questions

Can a field-based system be as reliable as a logic-based system with synchronization? The answer to this question is a qualified "yes". Certainly in the case where ideal initial positions can be prescribed, the unsynchronized system in this paper is more reliable than the synchronized system of the previous work. Even in the baseline case, the unsynchronized system was very reliable with small energy budgets, and was close to 100% reliable at higher energy budgets.

What are the performance advantages and disadvantages of each approach? The synchronized system was always free from collisions with the walls and obstacles, but the baseline system was prone to collisions, which could be a major problem, depending on the application. This weakness could be mitigated by better control of initial conditions, or it may be possible to use the same parameter ranges with a new optimization that has a higher fitness penalty for collisions to find systems that are less prone to collisions. The deeper implications of the findings on robustness and resilience are discussed in Chapter 12 alongside the results from other case studies.

11.6.2 Limitations

There are several limitations to this work due to computing constraints. The quantitative findings presented here rest on the assumption that the GA was able to find the optimal parameter set for every scenario and behavior encoding. As with any heuristic search algorithm, this cannot be guaranteed. Given the huge search space, it would be impossible to perform an exhaustive search to prove optimality, but these are the best candidates that could be found within the time and computing power constraints that we faced.

The simulation physics could be made more realistic. The physical model is still first-order, with forces corresponding directly to movements. A more realistic model would have forces creating accelerations.
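The difference between the first-order model used here and a second-order alternative can be sketched in a few lines. The drag coefficient, mass, and time step below are illustrative choices, not values from the simulation:

```python
def step_first_order(pos, force, dt=1.0):
    """Current model: force maps directly to displacement (velocity ~ force)."""
    return pos + force * dt

def step_second_order(pos, vel, force, mass=1.0, drag=0.1, dt=1.0):
    """Higher-fidelity model: force creates acceleration; velocity persists."""
    acc = (force - drag * vel) / mass
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Same constant force, qualitatively different trajectories:
p1 = 0.0
p2, v2 = 0.0, 0.0
for _ in range(3):
    p1 = step_first_order(p1, 1.0)
    p2, v2 = step_second_order(p2, v2, 1.0)
assert p1 == 3.0   # first-order: displacement is simply proportional to force
assert p2 > p1     # second-order: momentum accumulates and the agent overshoots
```

The persistence of velocity in the second-order model is exactly what would complicate pushing dynamics: agents could no longer stop or reverse instantaneously.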
This would complicate the dynamics of the system, and would require a more sophisticated model of how friction affects the box motion. The collision detection is not foolproof, especially when the sides of the box are flat against a wall. This is due to NetLogo's collision detection, which models all moving entities as points with a surrounding circle. This can only approximately represent the rectangular box, and a large number of small circles is used in this simulation to fill in the box for collision detection. This number had to be limited to ensure fast run time, however, affecting the accuracy of collision detection. Reaction forces are also not modeled with high fidelity. Currently, collisions just act as absolute stops on the box motion, but in reality, a reaction force would develop from any collision, resulting in a torque which would affect the rotation of the box. Accurately modeling this behavior may actually make the task easier, as agents would be able to rotate the box around obstacles more easily. Again, this was sacrificed in order to speed the simulation time.

Table 11.5: Performance tradeoffs for ideal vs. random initial conditions

    Optimized IC    Test IC    Optimized Fitness    Test Fitness    Fitness Penalty (%)
    Random          Ideal      6500                 1689.6          74.0
    Ideal           Random     1565.9               0               100

Part V
Conclusion

Chapter 12
Findings and Contributions

Part IV of this dissertation presented four case studies on the design of self-organizing systems. In this chapter, the research questions from the case studies are brought back to be discussed in greater detail. This will serve as a summary of the major theoretical insights gleaned from this research. The chapter closes by highlighting the contributions of this research in various fields.
12.1 Collected research questions

Although the case studies were organized by application, there were several common themes that cut across applications, so here the research questions are taken from all case studies and organized thematically.

12.1.1 Validation of design ontology and computational synthesis approach

Validation of the ontology concerns whether or not its entities and relationships were useful for description and synthesis in design. Validation of the computational synthesis process concerns not just the effectiveness of the approach in generating high marks on a fitness test but also how the knowledge generated can be successfully transferred to ensure effective real-world deployment.

Effectiveness

Although most design processes were not a clean linear application of the methodology given in Section 7.14, the process was effective with several iterations. The knowledge gained from the behavioral strategy of the protective convoy system was easy to transfer to the box-pushing system because it was described using the entities from the ontology and field-based design. In all the simulations, the dDNA encoding was successfully optimized through the combined effects of the multi-agent simulation and the genetic algorithm. The optimization was possible by separating the parameters of the behavioral algorithm from the logic that interprets the parameters, allowing the GA to choose any permutation of dDNA without breaking the agents' decision structures. The two-field approach was also validated, especially in the foraging and box-pushing case studies, where the emergent formation of the system, governed by sField relations, was as important as attraction to task stimuli, governed by the tField.

Repeatability

The GAs in the exploration and foraging case studies occasionally converged to drastically different behavioral profiles even if all initial conditions were kept equal.
In the first exploration case study, several GAs found high-Avoidance, high-Momentum strategies that caused agents to spread out and explore individually, while others found high-Alignment, high-Momentum strategies that caused agents to maintain a flock spread across the width of the field to explore as a unit. Similarly, when targeting 25% exploration, some GAs optimized for systems that would immediately spread out and then stop exploring, and other GAs optimized for systems that formed single-file lines so that they uncovered territory very slowly, only reaching 25% exploration right before the time limit. Attempts to capture these qualitatively different but equally effective strategies led to this research question from the foraging case study: can system-level metrics be used to guide the GA toward certain qualitative behavioral strategies? It was shown that a system-level complexity metric could be included in the GA fitness function to bias the GA toward different regions of the search space. Specifically, the GA that optimized for low topological complexity found a system that could return 1090 units of food, with a topological complexity of 24.5, and the system optimized for high complexity could return 1330 units of food, with a topological complexity of 95.2. Compared to the low-complexity system, the high-complexity system's performance was only 22% higher, but its topological complexity was 289% higher. The qualitative behaviors were dramatically different, as shown in Figure 10.15. This result is useful to designers who want to preserve the knowledge of these different behavioral strategies without losing them to the churn of the GA, or designers who want to use variations of the complexity metric to force the GA into diverse areas of the search space.
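One way to read these numbers is that even a simple complexity term in the fitness function is enough to flip which strategy the GA prefers. The sketch below is illustrative; the linear form and the weights are assumptions, not the dissertation's exact metric combination:

```python
def biased_fitness(food_returned, complexity, w):
    """GA fitness with a complexity bias term.

    w > 0 rewards topologically complex strategies; w < 0 rewards simple
    ones. The linear combination is an illustrative choice.
    """
    return food_returned + w * complexity

# Numbers quoted in the text:
low  = dict(food=1090, cx=24.5)    # low-complexity forager
high = dict(food=1330, cx=95.2)    # high-complexity forager

# A positive bias favors the complex strategy; a negative bias flips the
# ranking, even though the raw food difference is only 22%.
assert biased_fitness(high["food"], high["cx"], 5) > biased_fitness(low["food"], low["cx"], 5)
assert biased_fitness(high["food"], high["cx"], -5) < biased_fitness(low["food"], low["cx"], -5)
```

Sweeping `w` across a range is one way to force the GA into qualitatively different regions of the search space, as the text suggests.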
Simulation artifacts

It is important that the designer distinguish between results that can be transferred to real-world deployments and results that are spurious or adapted to quirks of the simulation architecture and fitness function. In the flocking examples, the GA quite often found equally functional emergent strategies that displayed markedly different behaviors. The designer must determine the validity of each strategy. For example, the random searching behavior would be valid in a large range of applications, but the fan/sweep searching behavior would only work in a field with the particular width of the simulation. If this width is to be expected in actual deployment, the fan/sweep behavior would offer better exploration, but if it was simply an assumption made to create the simulation, then the random behavior would be more robust. Similarly, many of the foraging examples relied on the walls of the simulation to guide agents toward the food, which may or may not exist in reality. An exception is the high-complexity foraging system, which built a line directly between the food and home, with less functionality outsourced to potential simulation artifacts. There is always a "reality gap" between simulations and the real world [103], and designers must ensure that this is not exploited by the optimization algorithm. The fitness function must accurately reflect reality as well. The simplest way to make this possible is to use "black box" fitness functions that account only for top-level functionality. This is largely the approach used in this dissertation. But several times, other aspects were considered, such as collisions in the box-pushing case study and food carried but not returned in the foraging case study. In every case, the percentage that these auxiliary factors affect the fitness function must be carefully evaluated, as multi-objective fitness functions have mathematical pitfalls [129].
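One common way to mix such auxiliary terms into an otherwise black-box fitness, while keeping their influence bounded, is a generation-dependent weight that decays to zero. This sketch is illustrative; the linear decay schedule and the starting weight are assumptions, not the dissertation's exact scheme:

```python
def fitness(black_box, aux_penalties, generation, total_gens, w0=0.5):
    """Black-box fitness minus auxiliary penalties whose weight decays to zero.

    Early generations get guidance from auxiliary signals (e.g. collision
    counts, food carried but not returned); later generations are judged
    almost purely on top-level function.
    """
    w = w0 * (1.0 - generation / total_gens)   # linear decay, illustrative
    return black_box - w * sum(aux_penalties)

early = fitness(100.0, [20.0, 10.0], generation=0, total_gens=50)
late  = fitness(100.0, [20.0, 10.0], generation=50, total_gens=50)
assert early == 85.0    # auxiliary penalties matter at generation 0
assert late == 100.0    # pure black-box fitness by the final generation
```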
One takeaway from this dissertation is that the auxiliary factors can be used during early generations to bootstrap the GA and distinguish among candidate pools filled with otherwise poor solutions, while fading in importance during later generations as the black-box fitness dominates. Even this approach may bias the GA during early generations, however, so it must still be used with care.

GA as design guide

At its worst, a GA will evolve a trivial set of parameters to "deceive" the fitness function and offer no practical use, but at its best, a GA can allow a designer to quickly search a design space and even highlight errors in the original problem specification. In the foraging task, all the successful candidate solutions evolved negative alignment behaviors between agents that did not have food. This meant that the agents were actively trying to set their heading to the opposite of their neighbors, preventing any flocking. It could have been useful for them to flock together, but the negative alignment was used to correct for a design error: the lack of boundary detection. The agents were not endowed with the ability to detect or react to the boundary of the arena, so many would simply move to an edge and stay there without doing any useful work. The negative alignment helped to rescue some of the agents that got stuck on the edge, because if a new agent reached the edge close to another, one would immediately turn away from the boundary so they could maintain opposite headings. This gave the agent a chance to move back toward the center of the field. Some systems showed a chain reaction of this behavior, where this disturbance would slowly move along a boundary until an agent found food and changed its state.
Because the food was placed in the corner of the arena, the most successful systems actually used this behavior to send agents along the edges to the food source, a strategy unanticipated by the human designer of the simulation. Because the GA evolved behavior that consumed so many resources to overcome the boundary problem, a designer could use that as evidence that basic boundary detection should be added to the agents' behavioral capacity, as shown in further case studies, which raised the optimized fitness from 210 to 570.

12.1.2 Adaptability of self-organizing systems

The main motivation for this research was the potential for adaptability in self-organizing systems. The systems in the case studies were shown to be adaptable, but at a cost. There were tradeoffs of adaptability vs. repeatability in all systems, efficiency vs. scalability and efficiency vs. resilience in the foraging case study, and efficiency vs. robustness in the box-pushing case study. With few exceptions, the systems that were optimized for adaptability were not as efficient in tame environments as systems optimized for those tame environments. This should serve as a warning to the designer not to over-design for unlikely scenarios or create a rigid system that is too tightly adapted to one scenario. The successes and failures of the design approach are recapped here along with the results of using the integrated simulation and optimization to analyze the adaptability of behavioral models.

Flexibility

The case studies showed that self-organizing systems created with a parametric behavioral model are flexible enough to fulfill changing functional requirements with minimal hardware adjustments. All case studies used the assumption of Cellular Self-Organizing systems, and the behavioral models had many features in common.
The software changes in behavioral models across studies led to emergent flocking, protecting, foraging, and box-pushing. In the foraging and exploration tasks, even while the behavioral model was held constant, parameter changes allowed a system to complete flocking or exploration tasks. One aspect of flexibility is scalability, which was investigated quantitatively in the foraging case study. The conceptual model of foraging behavior was shown to be linearly scalable. A linear regression of optimized fitness for the system with boundary detection at each system size (Figure 10.7) had an R^2 value of 0.995, indicating almost perfect linearity of system performance vs. size. This scalability is an important result for designers of systems that may need to grow or shrink over time. This did not mean that any particular set of behavioral parameters could be used for all sizes, however, as the linear relationship only held for the conceptual design. In systems whose behaviors are primarily parametric and software-based, changing behavior at runtime by loading new parameter values is feasible and inexpensive. For other systems, whose behavior is not so easily changed, Figure 10.8 should serve as a warning that not all SO systems can be scaled up smoothly to larger sizes. This is a point that has often been overlooked in the literature on SO systems design, as scalability is often assumed as a given from the soft connections among agents. This case study showed that it must be explicitly designed. The disparity in scalability of the PBM vs. the detailed behavioral model can be explained by the qualitatively different strategies employed by systems of different sizes. The negative A_4 value evolved for the RO1 system without boundary detection helped it to avoid getting stuck at the walls when the system was small, but caused a lack of coordination and eventual jamming of the system at larger sizes.
This lack of scalability incurred a fitness penalty of 89.2% compared to the SO6 system. The SO6 system relied on a separation into 2 functional units: static barrier agents along the walls, and circulating agents that moved between the food and home. The overhead in creating the barrier agents consumed the majority of the system resources when deployed as a 1-row system, rendering this strategy ineffective and incurring an over-design penalty of 62%. These behavioral "phase transitions" are commonly observed when scaling SO systems [14, 5]. The GA can uncover them and exploit their useful new behaviors, but the designer needs to be mindful of the size bounds for which they are valid. The insights here could also be applied to the design of other organizations. The key concept is that jamming (agents that have not learned to coordinate and stay out of one another's way) can cause catastrophic failure at large system sizes, but differentiation can be compensated for (with some performance penalty) at small system sizes when agents are sophisticated enough to accomplish more than one task. Organizational leaders and managers should take this into account when contemplating growing their business or laying off employees.

Robustness

Section 12.1.1 on simulation artifacts has implications for robustness, as a system must be able to cope with a changing environment and set of initial conditions, even if those changes are only in the translation from simulation to reality. This was a concern for every case study, but the box-pushing case study provided the research questions and quantitative data that most directly targeted robustness. What is the performance loss caused by random initial positions and perturbations? And is there a performance penalty if the system is optimized for robustness to initial conditions? The RNG-encoded optimization produced a system with high fitness.
However, when the same candidate was re-tested without its optimized RNG seed, its maximum fitness out of 100 trials was 14.7% lower, and its mean fitness was 41.1% lower, indicating substantial performance drops due to randomness and perturbations. The RNG-encoded optimized candidate had a fitness 17.2% higher than any trial of the optimized baseline system, so the performance penalty from optimizing for robustness is that the GA is restricted from seeking unique cases that cause high fitness. In re-testing, the baseline optimized candidate had high mean values, but its peak value was lower than the RNG-encoded system's. This would be interpreted as a performance penalty in a maximax optimization. From an optimization standpoint, this also serves as a warning to include sufficient randomness in the optimization algorithm if there is uncertainty in the environment, because the optimized parameter set was uniquely suited to the optimized RNG seed, but was not robust to changing conditions.

What performance gains are achievable by prescribed initial conditions? There is a significant performance gain when the designer is allowed to pre-specify initial conditions. The optimized system with agents surrounding the box in the beginning completed 65 sprints with no collisions in 1 trial, for a fitness of 6500, and it reliably repeated this performance 100 times in further testing. Compare this to the optimized baseline system, which achieved a maximum fitness of 1901.7 in 100 trials, and was prone to collisions in almost 10% of its sprints.

How severe is the loss in performance when a system optimized for ideal initial conditions is deployed in an off-nominal configuration?
The tradeoff to the performance gain from ideal initial conditions is that systems optimized with these conditions have critically degraded performance when deployed in environments with random initial conditions, indicating that they are brittle, or non-robust. As evidence, when re-testing the optimized candidate from the ideal initial conditions scenario with random initial conditions, it failed to complete a single sprint in 54% of its trials, and it was so prone to collisions that it never reached a fitness score above 0.1784 (indicating 22 collisions). The robustness tradeoffs confirm findings from research in Complexity Theory. As a system becomes more optimized for its particular operating environment, its relationships become more rigid. It is more efficient, but less adaptable. According to [99], systems that are "most adapted to given constraints tend to have a lower complexity than adaptable ones, as the development of specific connections between their parts will likely lower the diversity of their structures." This thought is explained in Systems Theory [225] as an expression of potentiality, where every such expression is a branch in the system's possible dynamics and closes off other possibilities. As a system persists, it builds up too much information and becomes too tightly coupled to its environment [189]. This tight coupling causes the performance penalty in situations that call for adaptability, and the designer must carefully balance these competing needs.

Resilience

The protective convoy case study was shown to be resilient. As the cargo moved across the field, the protectors continually took damage and were removed from the swarm. Nonetheless, by the end of the simulation run, the optimized protective system was able to reliably intercept about 90% of the attackers. Sacrificing most of the agents led to overall system success. The flocking systems were also shown to be resilient to partial system failures.
Even when 2/3 of the system was destroyed at runtime, the system maintained some functionality. A system that was forced to drop from 90 agents to 30 agents had an optimized fitness of 1154.5, and the system optimized for a constant 2 rows had a fitness score of 1249.1 (the total number of agent-steps taken in simulation is equal for either case), so the fitness penalty incurred by the non-constant system size was only 7.6%. Recall that the scalability studies showed that it is easier to scale a system down after it has been optimized for a large number of agents than it is to do the opposite. Combining that result with the low fitness penalties discussed in this section leads to this advice for designers: if a designer must create a system that will potentially lose agents during deployment, it is important to optimize for the large end of the estimated size range.

12.1.3 Generational learning

Since the agents did not learn from their behavior within a particular simulation run, the only avenue for learning was through trial and error across generations with the GA. Every case study has implications for the type of generational learning that can take place. What kind of generational learning takes place during evolutionary optimization? The GA was shown to embed global knowledge into the dDNA. In the foraging task, the location of the food relative to home was embedded in the agents' home aversion (go right/up) and negative A_4 value (go up/right). When this information is embedded as dDNA, it is not necessary that agents have any specific global knowledge, only that they can recreate the proper structures required through cooperative, iterative application of local rules. Similarly, the width of the field was encoded as a Cohesion/Avoidance ratio in several of the exploration systems.
Specifically in the systems that were optimized for a fan/sweep strategy (Figure 8.8), the agents could spread to the width of the field and sweep across it once before the time limit ran out, reliably discovering 95% of the patches. Again, no single agent knew the width of the field, but by reacting to one another with properly tuned dDNA, they were able to spread to it at a system level.

What balance should a protective system strike between discipline and aggression? This research question was explored in the protective convoy case study. In that case study, GAs selected for systems that place a high emphasis on maintaining a formation, rather than chasing bullets. This indicates, at least in this one instance, that the scale should be tilted toward discipline and formation-keeping, as aggressively chasing one bullet does no good from a system perspective if the vacated space lets two other bullets pass through. This was "learned" by allowing the more aggressive candidates to fail during early generations and be removed from the gene pool. A similar result was uncovered in the box-pushing case study. There was no protection involved, but the agents did still have to maintain a formation around the box while they were attracted to a goal. The direct tradeoff between chasing the goal or maintaining a formation was not discussed in detail, but the inability of the preliminary GA to find a balance led to some of the undesirable behavior reported in Figure 11.2 and Section 11.2.4, such as abandoning the box. Ultimately, the tradeoff was resolved through TRIZ principles by separating the formation-keeping and goal-seeking by a state change.

12.2 Primary contributions

This dissertation has made contributions in the broad area of Design Theory and Methodology (DTM), with specific contributions for researchers in the area of design of self-organizing systems and Cellular Self-Organizing Systems.
12.2.1 Design theory and methodology community

The bottom-up design approach of this dissertation provides more tools for the design of adaptable systems, an important goal in the DTM community. While self-organization is not meant to compete with the traditional design approach, it can exist alongside traditional design to be used when the task environment shows a need for adaptability. There are many potential applications of hybrid systems that show some self-organization [177], and these can be aided by the tools explained here. This dissertation has also provided new understanding of bio-inspired design. One of the key mechanisms in natural systems is DNA. By controlling mechanical systems through a design DNA, meaning that the system is distributed and the agents are homogeneous, it is possible to create adaptable systems. Moreover, the DNA can be optimized through selection, crossover, and mutation. This technique for behavioral design can be added to the arsenal of tools for bio-inspired design, which currently has a primary focus on structure and materials.

12.2.2 Self-organizing systems community

The primary contribution for the self-organizing systems research community is the design ontology and methodology given in Chapter 7. This ontology provides a definition of order and self-organization, and it clearly delineates self-organizing systems from traditionally designed systems. The entities identified in the ontology can be used to classify a broad range of systems, so that fair comparisons can be made across application domains in order to facilitate knowledge transfer. The field-based approach can be used for the control of many systems. It was validated on the common flocking system, and then used to derive three others. In its most basic form, it only requires identification of the relevant stimuli in an agent's task and social environments.
From there, designers can create fields whose attractors draw agents to create useful formations and apply them to the task. Knowledge of advantageous formations and field functions can also be reused in future design after they are discovered. The case studies also provided valuable examples of field-based formations that could be useful in other simulations. Flocking behaviors were used in the exploration and foraging case studies, and future designers could reuse these behaviors in an ontology-based design of a UAV swarm, for example. The sField formations were shown to be especially important in the protective convoy and box-pushing case studies, where agents formed a tight perimeter around the task object. These behaviors could also be abstracted and applied to other shield systems or teams of bodyguards. The examples have been validated, and the ontology makes their reuse convenient. The computational synthesis framework can be applied to many self-organizing systems. The integration of optimization with multi-agent simulation leads to automated detail design and allows designers to focus on conceptual design. Objectively determining an optimized system for a given system size or initial conditions allows for determination of fitness penalties when optimizing for adaptability. The adaptability tradeoffs framework based on penalties for lack of adaptability is also novel and can be used to make and justify high-level design decisions. Taken all together, these contributions to the self-organizing systems community bring the state of the art of design of self-organizing systems closer to the systematic design processes already in place for routine design.

Cellular Self-Organizing Systems

This dissertation is the latest in a series on the design of Cellular Self-Organizing Systems (CSO Systems) from the USC Impact Lab [249, 39, 37, 113]. CSO research focuses on the design of self-organizing systems of simple homogeneous robots.
The main contributions to this community are the ontological generalization of the previous work and the four new case studies. Previous research had taken an ad hoc approach to CSO design, where the effort was mainly focused on getting the system to work, with less emphasis on system-level adaptability metrics or knowledge capture. By automating the detail design through computational synthesis, this work allowed for the extension of previous CSO models. A flocking behavioral model was translated into field functions and then reused for exploration. The model was further extended to create the foraging simulations. A protective convoy behavioral model was created with ontological guidance. It was then reused to extend the previous work on box-pushing by eliminating the need for synchronization. In the foraging and box-pushing studies, the computational synthesis framework was used to measure the systems' adaptability.

12.3 Research scope recap

This research has led to valuable insights in the design of complex systems and created several case studies for further inquiries into design for adaptability. There is still room for much more research in these fields, and several possible avenues for future research are highlighted in Chapter 13.

Chapter 13
Summary and Future Work

This chapter briefly recaps the research recorded in this dissertation, highlighting the areas that could possibly be extended in future work.

13.1 Dissertation summary

13.1.1 Context

This research is in the field of engineering design theory and methodology. The complementary goals are to better understand complex systems and to design adaptable systems. Self-organizing systems were chosen as an avenue to design adaptable systems because there has been a recent trend toward miniaturization and swarm architectures that make it feasible to deploy large groups of simple robotic agents.
Computational synthesis through multi-agent simulation and a genetic algorithm was incorporated into an overall design methodology. With computational synthesis at the detail design level, the methodology allows designers to focus effort on the conceptual design.

13.1.2 Ontology and methodology

A design ontology of self-organizing systems was proposed to guide designers during the conceptual design phase. The ontology focuses on the function, behavior, and structure of a self-organizing system at the agent, group, and system levels. The agent-level behavior was described in greatest detail. The two-field approach to behavioral design allows social field specification to create system-level structures, and task field specification to deploy the agents and structures appropriately in space. The behavior is encoded using a DNA representation. The design DNA is a set of parameters that can be optimized during detail design. Transcription is the process of interpreting the design DNA when the system is deployed. In all of these case studies, the transcription took the form of a computer program that an agent executes during runtime. Taken together, the design DNA and transcription form a parametric behavioral model that can be quickly simulated and optimized.

13.1.3 Case studies

The ontology and computational synthesis approach were validated through four case studies on the design of self-organizing systems. A flocking system was created to re-create a common self-organizing system through a social field-based behavior algorithm. The design DNA was then retuned to display exploration behaviors. A protective convoy case study was created to test the resilience of self-organized system-level structure. A foraging case study showed the advanced capabilities that are possible through combining task and social fields.
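The design DNA / transcription split described above can be illustrated with a small sketch (the three behavior names and the linear blend are assumptions for illustration, not the dissertation's actual encoding): the dDNA is a flat parameter vector, and transcription turns it into the program an agent executes at runtime.

```python
def transcribe(ddna):
    """Interpret a dDNA parameter vector as an executable agent behavior.

    Here the dDNA is assumed to be three weights on predefined behavior
    primitives (cohesion, alignment, separation), blended linearly into
    one velocity command. Only the weights are optimized; the primitives
    themselves are fixed."""
    w_cohesion, w_alignment, w_separation = ddna

    def agent_behavior(neighbors_info):
        cohesion, alignment, separation = neighbors_info  # 2-D vectors
        return tuple(
            w_cohesion * c + w_alignment * a + w_separation * s
            for c, a, s in zip(cohesion, alignment, separation)
        )

    return agent_behavior

# One dDNA, one runnable behavior:
behavior = transcribe((0.5, 1.0, 2.0))
vx, vy = behavior(((1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)))
# vx = 0.5*1.0 + 1.0*0.0 + 2.0*(-1.0) = -1.5
# vy = 0.5*0.0 + 1.0*1.0 + 2.0*0.0  =  1.0
```

Because the dDNA is just a vector of numbers, it can be handed directly to an optimizer, while transcription stays fixed as the parametric model.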
And a box-pushing case study investigated the proper balance between responsiveness to task and social field stimuli.

13.1.4 Implications for adaptability

It was shown that self-organizing systems can display complex emergent behavior even if their constituent agents are comparatively simple. They are flexible, either through software parameter tuning or through scalability. The scalability is not guaranteed, however. It was shown that there is a fitness penalty for designing scalable systems compared to point designs. This penalty is smaller, though, than the fitness penalty incurred by scaling up systems that were not designed to be scalable. It was also shown that it is easier to scale systems down than up. This last point has implications for resilience, as systems that lost agents during runtime displayed acceptable performance when they were optimized for the largest number of agents they could be expected to have. Robustness to initial conditions is possible, but to achieve this, the system must be optimized in the face of perturbations, as the systems optimized for ideal initial conditions performed very poorly when deployed with random initial conditions.

13.2 Future work

Several assumptions were made to narrow the research effort and expedite computations. These assumptions can be relaxed and challenged with greater computational resources or a change in focus. Also, several interesting research leads were uncovered that could be explored in future research.

13.2.1 Optimization

The genetic algorithm could be made more sophisticated using techniques demonstrated in the literature in the last few decades. Many GA techniques [79] are available to improve the algorithm and ameliorate some potential downfalls of the stochastic optimization process.
In fact, the GA approach itself was chosen at the start of this research as a promising direction, but the GA is of course only one of many possible optimization tools, and others should be considered in comparison studies. For some tasks, different runs of the GA would produce qualitatively different emergent behavior in the final generations. It would be useful to develop a means of automatically identifying these different ways of achieving the same goal (when all the emergent behaviors have high fitness) so that they can be preserved for later study and are not lost as the GA population reaches homogeneity. The studies that incorporated emergent system-level metrics independent of performance may aid work in this area. The GA could also work at a higher level on the self-organizing rules, altering the transcription and not just the dDNA. Currently, the behaviors are defined, and the GA only optimizes the relative weights of those behaviors. It may be possible via evolutionary optimization to evolve the local rules themselves, in addition to their relative importance.

13.2.2 Improved simulations

The simulations can be improved through faster speeds and higher fidelity, but these are usually competing goals. The simulations intentionally ignored some of the real-world physics of a case study in order to speed computation time. Realism could come from more nuanced simulation of robot mechanics or aerodynamic analysis of flocks. The protective convoy simulation could be made more realistic by a more detailed damage calculation, including the location and power of hits, and the fluid-body interactions of explosions [20]. The box-pushing task can be made more realistic by considering geometries other than a simple box, such as multi-bodied, deformable, or hinged objects that are considered in the broader path-planning literature (e.g. [193]). Faster simulation and optimization could be enabled by simplified models of the agent-based systems.
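The baseline GA-over-weights loop that the extensions above would modify can be sketched as follows. This is a generic sketch, not the dissertation's actual algorithm: the population size, operators, and the toy fitness function (standing in for a full multi-agent simulation run) are all illustrative assumptions.

```python
import random

def evaluate(ddna):
    """Stand-in for a multi-agent simulation that returns system fitness.

    Here: a toy objective peaked at behavior weights (1.0, 2.0, 3.0);
    in the real framework this call would run the simulation."""
    target = (1.0, 2.0, 3.0)
    return -sum((g - t) ** 2 for g, t in zip(ddna, target))

def ga(pop_size=40, gens=60, mut=0.2):
    random.seed(0)  # reproducible run for this sketch
    pop = [[random.uniform(0, 4) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=evaluate, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)
            child = a[:cut] + b[cut:]           # one-point crossover
            child = [g + random.gauss(0, mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=evaluate)

best = ga()  # best dDNA found; feeds back into transcription
```

Note that only the weight vector evolves here; evolving the transcription (the rules themselves), as proposed above, would require a richer genome than this flat parameter list.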
It may be possible to speed up the flocking simulations by treating agents as particles in an N-body problem [12]. Other approaches such as "surrogate models" [49] and "small models" [79] should be explored, if they are able to generate the same insights into emergence as agent-based simulation.

13.2.3 Ontology

The ontology focused on behavioral design at the agent level of system architecture. To enrich the ontology, future effort could be focused on detailed definitions of function and structure as well. System-level structure would be especially helpful, and a regimented ontology that contains both agent-level behavior and system-level structure could aid in global-to-local mapping and analogical cross-disciplinary knowledge transfer. At a lower level, a set of general behavioral primitives, similar to the Functional Basis in DTM [203], could be generated to aid in populating an agent's behavioral capacity. A structured ontology of system-level function would also mesh well with task complexity metrics, an area of ongoing work in the USC Impact Lab [114, 115]. Coupled with behavioral complexity metrics, this could lead to studies of complexity matching between a system and its task.

13.2.4 Classifying adaptability

The studies on adaptability used fitness penalties and performance changes across scenarios to measure adaptability. This form of measurement is peculiar to the system being studied, and only gives a percentage score. Future work should develop more general measures of complexity so that all systems can be measured on the same scale. With genetic optimization, evolvability metrics of the behavior encoding, such as those proposed by [213], may need to be considered along with adaptability metrics.
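The fitness-penalty style of measurement mentioned above can be expressed as a simple percentage. The formula below is a paraphrase for illustration, not necessarily the dissertation's exact metric.

```python
def fitness_penalty(point_fitness, adaptable_fitness):
    """Percentage of fitness given up by a design optimized for
    adaptability, relative to a point design optimized for one fixed
    scenario. Illustrative formula; assumes positive fitness values."""
    return 100.0 * (point_fitness - adaptable_fitness) / point_fitness

# e.g. a scalable design scoring 90 against a point design's 100:
penalty = fitness_penalty(100.0, 90.0)  # 10.0 (percent)
```

The limitation noted above is visible here: the score is relative to one system's own point design, so penalties from different systems are not directly comparable.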
13.2.5 Physical implementation

Of course, the ultimate goal of this line of research is a physical SO system that performs a function or displays adaptability unachievable in conventionally designed systems. The modeling and simulation are merely means to this end. The next step in the path is an implementation of these case studies and others on a swarm of physical robots. There is ongoing research within the IMPACT Lab at USC working toward this goal, and it will require a substantial fraction of the SO systems research effort going forward.

13.3 Final remarks

The future of engineering design is exciting. New systems are being demanded by a growing world population facing ever more complex challenges. In a positive feedback loop, the technologies created in response to these challenges lead to greater demands and enable further technological innovation. It is my hope that this research will contribute to the state of the art in the design of future complex systems through self-organization. If nothing else, the respect for complexity, appeal to biology, and use of advanced computational tools are helpful in the design of a wide range of systems. For further reading, please see my previous papers: [100, 101, 102, 114]. Thank you to the members of my dissertation committee, and to you, the reader.

Bibliography

[1] William R. Ashby. Requisite variety and its implications for the control of complex systems. Cybernetica, 1(2):83–99, 1958.
[2] William R. Ashby. Principles of the self-organizing system. Principles of Self-organization, pages 255–278, 1962.
[3] Robert L. Axtell. Population growth and collapse in a multiagent model of the Kayenta Anasazi in Long House Valley. Proceedings of the National Academy of Sciences, 99(90003):7275–7279, May 2002.
[4] Linge Bai and David Breen. Chemotaxis-Inspired Cellular Primitives for Self-Organizing Shape Formation. In Morphogenetic Engineering: Toward Programmable Complex Systems, pages 141–156. 2012.
[5] Per Bak.
How nature works: the science of self-organized criticality. Springer Science & Business Media, 2013.
[6] Gianluca Baldassarre, Stefano Nolfi, and Domenico Parisi. Evolving mobile robots able to display collective behaviors. Artificial Life, 9(3):255–267, 2003.
[7] Amit Banerjee, Juan C. Quiroz, and Sushil J. Louis. A Model of Creative Design Using Collaborative Interactive Genetic Algorithms. In John S. Gero and Ashok K. Goel, editors, Design Computing and Cognition '08, pages 397–416. Springer Netherlands, January 2008.
[8] Y. Bar-Yam. Unifying principles in complex systems. Converging Technology (NBIC) for Improving Human Performance, M. C. Roco and W. S. Bainbridge, Eds., Kluwer, 2003.
[9] Yaneer Bar-Yam. Dynamics of complex systems. Addison-Wesley, Reading, Mass., 1997.
[10] Yaneer Bar-Yam. Multiscale variety in complex systems. Complexity, 9(4):37–45, 2004.
[11] Yaneer Bar-Yam, Ali Minai, and Dan Braha. Complex Engineered Systems: A New Paradigm. In Complex engineered systems: science meets technology, Springer complexity, pages 23–39. Springer, Berlin; New York, 2006.
[12] Natalie N. Beams, Luke N. Olson, and Jonathan B. Freund. A Finite Element Based P3M Method for N-body Problems. arXiv preprint arXiv:1503.08509, 2015.
[13] Ralph Beckers, Owen Holland, and Jean-Louis Deneubourg. From local actions to global tasks: Stigmergy and collective robotics. In Artificial Life IV, pages 181–189. MIT Press, 1994.
[14] Madeleine Beekman, David J. T. Sumpter, and Francis L. W. Ratnieks. Phase transition between disordered and ordered foraging in Pharaoh's ants. Proceedings of the National Academy of Sciences, 98(17):9703–9706, August 2001.
[15] Gerardo Beni. The concept of cellular robotic system. In IEEE International Symposium on Intelligent Control, 1988. Proceedings, pages 57–62, August 1988.
[16] Peter Bentley.
Three ways to grow designs: A comparison of embryogenies for an evolutionary design problem. In Proceedings of the Genetic and Evolutionary Computation Conference, pages 35–43. Morgan Kaufmann, 1999.
[17] Peter Bentley. Digital biology: how nature is transforming our technology and our lives. Simon & Schuster, New York, 2001.
[18] Mauro Birattari and Marco Dorigo. The problem of tuning metaheuristics as seen from a machine learning perspective. 2004.
[19] Brian Birge. Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation. August 2012.
[20] Neal Bitter and Joseph Shepherd. Dynamic buckling and fluid–structure interaction of submerged tubular structures. In Blast Mitigation, pages 189–227. Springer, 2014.
[21] Lucienne T. M. Blessing and Amaresh Chakrabarti. DRM, a design research methodology. Springer, Dordrecht, 2009.
[22] Eric Bonabeau. Swarm intelligence: from natural to artificial systems. Oxford University Press, New York, 1999.
[23] Eric Bonabeau. Agent-Based Modeling: Methods and Techniques for Simulating Human Systems. Proceedings of the National Academy of Sciences of the United States of America, 99(10):7280–7287, May 2002.
[24] Eric Bonabeau, Guy Theraulaz, Jean-Louis Deneubourg, Nigel R. Franks, Oliver Rafelsberger, Jean-Louis Joly, and Stéphane Blanco. A model for the emergence of pillars, walls and royal chambers in termite nests. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 353(1375):1561–1576, 1998.
[25] James Bonner. Hierarchical Control Programs in Biological Development. In Hierarchy Theory: The Challenge of Complex Systems, pages 51–70. 1973.
[26] Philip Brey. The Epistemology and Ontology of Human-Computer Interaction. Minds and Machines, 15(3-4):383–398, November 2005.
[27] Rodney A. Brooks. Intelligence without reason. In Computers and Thought, IJCAI-91, pages 569–595. Morgan Kaufmann, 1991.
[28] Cari Bryant, Robert Stone, Daniel McAdams, Tolga Kurtoglu, and Matthew Campbell. Concept generation from the functional basis of design. In International Conference on Engineering Design, ICED, volume 5, pages 15–18, 2005.
[29] Jérôme Buhl, David J. T. Sumpter, Iain Couzin, J. J. Hale, E. Despland, E. R. Miller, and S. J. Simpson. From Disorder to Order in Marching Locusts. Science, 312(5778):1402–1406, June 2006.
[30] Mark A. Bunco. Tension, adaptability, and creativity. Affect, creative experience, and psychological adjustment, page 165, 1999.
[31] Benoît Calvez and Guillaume Hutzler. Automatic Tuning of Agent-Based Models Using Genetic Algorithms. In Jaime Sichman and Luis Antunes, editors, Multi-Agent-Based Simulation VI, volume 3891 of Lecture Notes in Computer Science, pages 41–57. Springer Berlin / Heidelberg, 2006.
[32] Benoît Calvez and Guillaume Hutzler. Ant Colony Systems and the Calibration of Multi-Agent Simulations: a New Approach. In Multi-Agents for modelling Complex Systems (MA4CS'07) Satellite Workshop of the European Conference on Complex Systems 2007 (ECCS'07), page 16, Allemagne, 2007.
[33] Scott Camazine. Self-organization in biological systems. (Princeton studies in complexity). Princeton University Press, Princeton, N.J., 2001.
[34] Amaresh Chakrabarti. A course for teaching design research methodology. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 24(03):317–334, July 2010.
[35] Amaresh Chakrabarti, Kristina Shea, Robert Stone, Jonathan Cagan, Matthew Campbell, Noe Vargas Hernandez, and Kristin L. Wood. Computer-Based Design Synthesis Research: An Overview. Journal of Computing and Information Science in Engineering, 11(2):021003, 2011.
[36] Amaresh Chakrabarti and L.H. Shu. Biologically inspired design. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 24(04):453–454, October 2010.
[37] Chang Chen.
Building cellular self-organizing system (CSO): a behavior regulation based approach. PhD thesis, University of Southern California, Los Angeles, CA, 2012.
[38] Chang Chen and Yan Jin. A Behavior Based Approach to Cellular Self-Organizing Systems Design. In Proceedings of the ASME 2011 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Washington, DC, 2011.
[39] Winston Chiang. A Meta-interaction Model for Designing Cellular Self-Organizing Systems. PhD thesis, University of Southern California, Los Angeles, CA, 2012.
[40] Winston Chiang and Yan Jin. Toward a Meta-Model of Behavioral Interaction for Designing Complex Adaptive Systems. In ASME IDETC/CIE 2011, pages 1077–1088. ASME, 2011.
[41] Winston Chiang and Yan Jin. Design of Cellular Self-Organizing Systems. Chicago, Illinois, 2012.
[42] Shyam Chidamber and Henry Kon. A research retrospective of innovation inception and success: the technology-push, demand-pull question. International Journal of Technology Management, 9(1):94–112, 1994.
[43] Michael D. Cohen, James G. March, and Johan P. Olsen. A Garbage Can Model of Organizational Choice. Administrative Science Quarterly, 17(1):1–25, March 1972.
[44] Daniel Collado-Ruiz and Hesamedin Ostad-Ahmad-Ghorabi. Influence of environmental information on creativity. Design Studies, 31(5):479–498, 2010.
[45] Commons Math Developers. Apache Commons Math 3.2, 2015.
[46] Nigel Cross. Designerly ways of knowing. Birkhäuser; Springer, January 2007.
[47] James P. Crutchfield, Melanie Mitchell, and Rajarshi Das. Evolving cellular automata with genetic algorithms: A review of recent work. Moscow, Russia, 1996.
[48] Felipe Cucker and Stephen Smale. Emergent Behavior in Flocks. Automatic Control, IEEE Transactions on, 52(5):852–862, May 2007.
[49] Adam Cutbill, Kambiz Haji Hajikolaei, and G. Gary Wang. Visual HDMR model refinement through iterative interaction.
In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pages V03BT03A002–V03BT03A002. American Society of Mechanical Engineers, 2013.
[50] Mehdi Dastani, Nico Jacobs, Catholijn M. Jonker, and Jan Treur. Modelling user preferences and mediating agents in electronic commerce. Knowledge-Based Systems, 18(7):335–352, November 2005.
[51] Tom De Wolf and Tom Holvoet. Towards a methodology for engineering self-organising emergent systems. Frontiers in Artificial Intelligence and Applications, 135:18, 2005.
[52] Jean-Louis Deneubourg, Serge Aron, Simon Goss, and Jacques Marie Pasteels. The self-organizing exploratory pattern of the argentine ant. Journal of insect behavior, 3(2):159–168, 1990.
[53] Jean-Louis Deneubourg, Guy Theraulaz, Ralph Beckers, P. Bourgine, and E. Varela. Swarm made architectures. In 1st European Conference on Artificial Life, pages 123–133. MIT Press, 1992.
[54] Ralf Der. Self-organized acquisition of situated behaviors. Theory in Biosciences, 120(3-4):179–187, 2001.
[55] Marco Dorigo and Christian Blum. Ant colony optimization theory: A survey. Theoretical Computer Science, 344(2-3):243–278, November 2005.
[56] Marco Dorigo, Vito Trianni, Erol Şahin, Roderich Groß, Thomas H. Labella, Gianluca Baldassarre, Stefano Nolfi, Jean-Louis Deneubourg, Francesco Mondada, Dario Floreano, et al. Evolving self-organizing behaviors for a swarm-bot. Autonomous Robots, 17(2-3):223–245, 2004.
[57] René Doursat. The myriads of Alife: Importing complex systems and self-organization into engineering. In 2011 IEEE Symposium on Artificial Life (ALIFE), pages 1–8, April 2011.
[58] René Doursat, Hiroki Sayama, and Olivier Michel. Morphogenetic Engineering: Reconciling Self-Organization and Architecture. In René Doursat, Hiroki Sayama, and Olivier Michel, editors, Morphogenetic Engineering, Understanding Complex Systems, pages 1–24. Springer Berlin Heidelberg, January 2012.
[59] Lee Alan Dugatkin and Hudson Kern Reeve. Game Theory and Animal Behavior. Oxford University Press, February 1998.
[60] Gerald M. Edelman and Joseph A. Gally. Degeneracy and complexity in biological systems. Proceedings of the National Academy of Sciences, 98(24):13763–13768, November 2001.
[61] Bruce Edmonds. Using the experimental method to produce reliable self-organised systems. In Engineering Self-Organising Systems, pages 84–99. Springer, 2005.
[62] Gilles Fauconnier and Mark Turner. The Way We Think: Conceptual Blending And The Mind's Hidden Complexities. Basic Books, March 2003.
[63] Ronald A. Finke, Thomas B. Ward, and Steven M. Smith. Creative Cognition: Theory, Research, and Applications. A Bradford Book, January 1996.
[64] Regina Frei, Richard McWilliam, Benjamin Derrick, Alan Purvis, Asutosh Tiwari, and Giovanna Di Marzo Serugendo. Self-healing and self-repairing technologies. The International Journal of Advanced Manufacturing Technology, 69(5-8):1033–1061, 2013.
[65] Michael C. Fu. Optimization for simulation: Theory vs. practice. INFORMS Journal on Computing, 14(3):192–215, 2002.
[66] Michael C. Fu, Fred W. Glover, and Jay April. Simulation optimization: a review, new developments, and applications. In Simulation Conference, 2005 Proceedings of the Winter, pages 13–pp, 2005.
[67] Liane Gabora. Cognitive mechanisms underlying the creative process. In Proceedings of the 4th conference on Creativity & cognition, C&C '02, pages 126–133, New York, NY, USA, 2002. ACM.
[68] José Manuel Galán, Luis R. Izquierdo, Segismundo S. Izquierdo, José Ignacio Santos, Ricardo del Olmo, Adolfo López-Paredes, and Bruce Edmonds. Errors and Artefacts in Agent-Based Modelling. Journal of Artificial Societies & Social Simulation, 12(1), 2009.
[69] Simon Garnier, Tucker Murphy, Matthew Lutz, Edward Hurme, Simon Leblanc, and Iain D. Couzin.
Stability and Responsiveness in a Self-Organized Living Architecture. PLoS Comput Biol, 9(3):e1002984, March 2013.
[70] Tobias Germer and Martin Schwarz. Procedural Arrangement of Furniture for Real-Time Walkthroughs. Computer Graphics Forum, 28(8):2068–2078, 2009.
[71] John S. Gero and Udo Kannengiesser. The situated function–behaviour–structure framework. Design Studies, 25(4):373–391, July 2004.
[72] John S. Gero and Michael A. Rosenman. A conceptual framework for knowledge-based design research at Sydney University's Design Computing Unit. Artificial Intelligence in Engineering, 5(2):65–77, 1990.
[73] Carlos Gershenson. A general methodology for designing self-organizing systems. arXiv preprint nlin/0505009, 2005.
[74] Carlos Gershenson. Design and control of self-organizing systems. PhD thesis, Vrije Universiteit Brussel, 2007.
[75] Carlos Gershenson and Francis Heylighen. When can we call a system self-organizing? In Advances in Artificial Life, pages 606–614. Springer, 2003.
[76] David E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional, 1 edition, January 1989.
[77] David E. Goldberg. Messy genetic algorithms: Motivation, analysis, and first results. Clearinghouse for Genetic Algorithms, Dept. of Mechanical Engineering, University of Alabama, 1989.
[78] David E. Goldberg. Genetic Algorithms as a Computational Theory of Conceptual Design. In G. Rzevski and R. A. Adey, editors, Applications of Artificial Intelligence in Engineering VI, pages 3–16. Springer Netherlands, January 1991.
[79] David E. Goldberg. The Design of Innovation. Springer, 1 edition, June 2002.
[80] Seth Goldstein and Todd Mowry. Claytronics: A Scalable Basis For Future Robots. Robosphere, 2004.
[81] William Gordon. Synectics: The development of creative capacity. 1961.
[82] John Grefenstette. Optimization of Control Parameters for Genetic Algorithms. IEEE Transactions on Systems, Man and Cybernetics, 16(1):122–128, 1986.
[83] Claas Groot, Diethelm Würtz, and Karl Heinz Hoffmann. Optimizing complex problems by nature's algorithms: Simulated annealing and evolution strategy – a comparative study. In Hans-Paul Schwefel and Reinhard Männer, editors, Parallel Problem Solving from Nature, volume 496, pages 445–454. Springer-Verlag, Berlin/Heidelberg.
[84] Roderich Groß, M. Bonani, Francesco Mondada, and Marco Dorigo. Autonomous self-assembly in swarm-bots. Robotics, IEEE Transactions on, 22(6):1115–1130, 2006.
[85] Laszlo Gulyas, Laszlo Laufer, and Richard Szabo. Measuring stigmergy: the case of foraging ants. In Engineering Self-Organising Systems, pages 50–65. Springer, 2007.
[86] Maja Hadzic, Pornpit Wongthongtham, Tharam Dillon, and Elizabeth Chang. Introduction to Ontology. In Ontology-Based Multi-Agent Systems, number 219 in Studies in Computational Intelligence, pages 37–60. Springer Berlin Heidelberg, January 2009.
[87] Hermann Haken. Synergetics: an introduction: nonequilibrium phase transitions and self-organization in physics, chemistry, and biology. Springer-Verlag, Berlin; New York, 1978.
[88] John R. Hauser and Don Clausing. The house of quality. 1988.
[89] Dirk Helbing. Systemic Risks in Society and Economics. Technical report, Santa Fe Institute, 2009.
[90] Francis Heylighen. The science of self-organization and adaptivity. The encyclopedia of life support systems, 5(3):253–280, 2001.
[91] Francis Heylighen. Self-organization of complex, intelligent systems: an action ontology for transdisciplinary integration. Integral Review, 2011.
[92] Herbert G. Hicks, C. Ray Gullett, Susan M. Phillips, and William S. Slaughter. Organizations: theory and behavior. McGraw-Hill, New York, 1975.
[93] Jack Hipple. The Ideal Result. Springer Verlag, DE, 2012 edition, 2012.
[94] Julie Hirtz, Robert B. Stone, Daniel A. McAdams, Simon Szykman, and Kristin L. Wood.
A functional basis for engineering design: reconciling and evolving previous efforts. Research in engineering Design, 13(2):65–82, 2002.
[95] Tad Hogg. Distributed control of microscopic robots in biomedical applications. In Advances in applied self-organizing systems, pages 179–208. Springer, 2013.
[96] John Holland. Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press, Cambridge, Mass., 1992.
[97] Bryan Horling and Victor Lesser. A survey of multi-agent organizational paradigms. The Knowledge Engineering Review, 19(4):281–316, 2004.
[98] Philip N. Howard. Pax technica: how the internet of things may set us free or lock us up. Yale University Press, New Haven, 2015.
[99] Bernardo A. Huberman and Tad Hogg. Complexity and Adaptation. Physica D: Nonlinear Phenomena, 22(1):376–384, 1986.
[100] James Humann and Yan Jin. Evolutionary Design of Cellular Self-Organizing Systems. In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, pages V03AT03A046–V03AT03A046. American Society of Mechanical Engineers, 2013.
[101] James Humann, Newsha Khani, and Yan Jin. Evolutionary computational synthesis of self-organizing systems. AI EDAM, 28(Special Issue 03):259–275, August 2014.
[102] James Humann and Azad M. Madni. Integrated Agent-based modeling and optimization in complex systems analysis. Procedia Computer Science, 28:818–827, 2014.
[103] Nick Jakobi, Phil Husbands, and Inman Harvey. Noise and The Reality Gap: The Use of Simulation in Evolutionary Robotics. In Advances in Artificial Life: Proc. 3rd European Conference on Artificial Life, pages 704–720. Springer-Verlag, 1995.
[104] Yan Jin and Chang Chen. Field Based Behavior Regulation for Self-Organization in Cellular Systems. 2012.
[105] Yan Jin and Chang Chen. Cellular self-organizing systems: A field-based behavior regulation approach.
AI EDAM, 28(Special Issue 02):115–128, May 2014.
[106] Yan Jin and Raymond E. Levitt. The virtual design team: A computational model of project organizations. Computational & Mathematical Organization Theory, 2(3):171–195, 1996.
[107] Yan Jin, George Zouein, and Stephen C-Y. Lu. A synthetic DNA based approach to design of adaptive systems. CIRP Annals-Manufacturing Technology, 58(1):153–156, 2009.
[108] Geraint John. Stadia: the design and development guide. Routledge, New York, fifth edition, 2013.
[109] Chris Jones and Maja J. Matarić. Adaptive division of labor in large-scale minimalist multi-robot systems. In Intelligent Robots and Systems, 2003 (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on, volume 2, pages 1969–1974. IEEE, 2003.
[110] Yael Katz, Kolbjørn Tunstrøm, Christos C. Ioannou, Cristián Huepe, and Iain D. Couzin. Inferring the structure and dynamics of interactions in schooling fish. Proceedings of the National Academy of Sciences, 108(46):18720–18725, November 2011.
[111] Kevin Kelly. Out of control: the new biology of machines, social systems and the economic world. Addison-Wesley, Reading, Mass., 1994.
[112] James Kennedy and Russell Eberhart. Particle swarm optimization. In Neural Networks, 1995. Proceedings., IEEE International Conference on, volume 4, pages 1942–1948, December 1995.
[113] Newsha Khani. Dynamic Social Structuring in Cellular Self-Organizing Systems. PhD thesis, University of Southern California, 2015.
[114] Newsha Khani, James Humann, and Yan Jin. Effect of Social Structuring in Self-Organizing Systems. Passed initial review at Journal of Mechanical Design, 2016.
[115] Newsha Khani and Yan Jin. Dynamic Structuring in Cellular Self-Organizing Systems. In John S. Gero and Sean Hanna, editors, Design Computing and Cognition '14, pages 3–20. Springer International Publishing, Cham, 2015.
[116] Newsha Khani and Kagan Tumer. Fast multiagent learning: Cashing in on team knowledge. Intel. Engr.
Systems Through Artificial Neural Nets, 18:3–11, 2008.
[117] Mark Kilgour and Scott Koslow. Why and how do creative thinking techniques work?: Trading off originality and appropriateness to make more creative advertising. Journal of the Academy of Marketing Science, 37(3):298–309, 2009.
[118] William Klug and Michael Cummings. Essentials of genetics. Pearson Education, Upper Saddle River, NJ, 2005.
[119] Shigeru Kondo and Takashi Miura. Reaction-Diffusion Model as a Framework for Understanding Biological Pattern Formation. Science, 329(5999):1616–1620, September 2010.
[120] Judith Korb. Termite Mound Architecture, from Function to Construction. In David Edward Bignell, Yves Roisin, and Nathan Lo, editors, Biology of Termites: a Modern Synthesis, pages 349–373. Springer Netherlands, January 2011.
[121] Sergey Kornienko, Olga Kornienko, A. Nagarathinam, and Paul Levi. From real robot swarm to evolutionary multi-robot organism. In Evolutionary Computation, 2007. CEC 2007. IEEE Congress on, pages 1483–1490. IEEE, 2007.
[122] John R. Koza. Human-competitive results produced by genetic programming. Genetic Programming and Evolvable Machines, 11(3-4):251–284, May 2010.
[123] C. Ronald Kube and Eric Bonabeau. Cooperative transport by ants and robots. Robotics and autonomous systems, 30(1):85–101, 2000.
[124] Sanjeev Kumar. On Form and Function: The Evolution of Developmental Control. In Philip F. Hingston, Luigi C. Barone, and Zbigniew Michalewicz, editors, Design by Evolution, Natural Computing Series, pages 223–241. Springer Berlin Heidelberg, January 2008.
[125] Yannis Labrou, Tim Finin, and Yun Peng. Agent communication languages: the current landscape. IEEE Intelligent Systems and their Applications, 14(2):45–52, 1999.
[126] Heiner Lasi, Peter Fettke, Hans-Georg Kemper, Thomas Feld, and Michael Hoffmann. Industry 4.0. Business & Information Systems Engineering, 6(4):239–242, August 2014.
[127] Steven Michael LaValle. Planning algorithms. Cambridge University Press, Cambridge; New York, 2006.
[128] Raymond E. Levitt, Jan Thomsen, Tore R. Christiansen, John C. Kunz, Yan Jin, and Clifford Nass. Simulating Project Work Processes and Organizations: Toward a Micro-Contingency Theory of Organizational Design. Management Science, 45(11):1479–1495, 1999.
[129] Kemper E. Lewis. Decision Making in Engineering Design. American Society of Mechanical Engineers, October 2006.
[130] Kevin Leyton-Brown and Yoav Shoham. Essentials of game theory: a concise, multidisciplinary introduction. Morgan & Claypool, San Rafael, Calif., 2008.
[131] Stephen C-Y. Lu and Jian Cai. A collaborative design process model in the sociotechnical engineering design framework. AI EDAM, 15(01):3–20, 2001.
[132] Sean Luke. The ECJ Owner's Manual. Department of Computer Science, George Mason University, zeroth edition, 2010.
[133] Charles M. Macal and Michael J. North. Tutorial on agent-based modeling and simulation. In Proceedings of the 37th conference on Winter simulation, WSC '05, pages 2–15, Orlando, Florida, 2005. Winter Simulation Conference.
[134] Azad M. Madni. Cross-Cultural Decision Making Training Using Behavioral Game Theoretic Framework. In 3rd International Conference on Applied Human Factors and Ergonomics, 2010.
[135] Azad M. Madni. Integrating Humans With and Within Complex Systems. CrossTalk, page 5, 2011.
[136] Azad M. Madni and Carla C. Madni. Intelligent Agents As Synthetic Role Players In Scenario-Based Training. Journal of Integrated Design and Process Science, 12(1):39–54, 2008.
[137] Azad M. Madni, Assad Moini, and Carla C. Madni. Cross-Cultural Decision Making Training Using Behavioral Game Theoretic Framework. In Advances in cross-cultural decision making. CRC Press, Boca Raton, 2010.
[138] Mark W. Maier and Eberhardt Rechtin. The art of systems architecting. CRC Press, Boca Raton, 3rd edition, 2009.
[139] Marco Mamei and Franco Zambonelli.
Theory and practice of eld-based motion coordination in multiagent systems. Applied Articial Intelligence, 20(2-4):305{ 326, 2006. [140] John Levi Martin. What Is Field Theory? American journal of sociology, 109(1):1{49, 2003. [141] Makoto Matsumoto and Takuji Nishimura. Mersenne twister: a 623- dimensionally equidistributed uniform pseudo-random number generator. ACM Transactions on Modeling and Computer Simulation (TOMACS), 8(1):3{30, 1998. [142] Humberto R. Maturana. Autopoiesis and cognition: the realization of the living. Number v. 42 in Boston studies in the philosophy of science. D. Reidel Pub. Co, Dordrecht, Holland; Boston, 1980. [143] Chris McKenna. Community (Season 2: Episode 21) paradigms of Human Memory. Originally broadcast April 21, 2011 on NBC. [144] James McLurkin. Experiment Design for Large Multi-Robot Systems. 185 Behavioral Modeling and Computational Synthesis of Self-Organizing Systems [145] Paul Merrell, Eric Schkufza, Zeyang Li, Maneesh Agrawala, and Vladlen Koltun. Interactive furniture layout using interior design guidelines. ACM Transactions on Graphics, 30(4), 2011. [146] Alexander S. Mikhailov. From Swarms to Societies: Origins of Social Organiza- tion. In Hildegard Meyer-Ortmanns and Stefan Thurner, editors, Principles of Evolution, pages 367{380. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011. [147] John H. Miller. Active Nonlinear Tests (ANTs) of Complex Simulation Models. Management Science, 44(6), 1998. [148] Henry Mintzberg. Power in and Around Organizations. Prentice Hall, Englewood Clis, N.J, 1983. [149] Tom Michael Mitchell. Machine Learning. McGraw-Hill, New York, 1997. [150] Nathan Mlot and David Hu. The ant raft, 2015. [151] Nathan Mlot, Craig Tovey, and David Hu. Fire ants self-assemble into water- proof rafts to survive oods. Proceedings of the National Academy of Sciences, 108(19):7669{7673, 2011. [152] Mark W. Moett. Cooperative food transport by an Asiatic ant. National Geographic Research, 4(3):386{394, 1988. 
[153] Mehdi Moussad, Dirk Helbing, and Guy Theraulaz. How simple rules determine pedestrian behavior and crowd disasters. Proceedings of the National Academy of Sciences, 108(17):6884{6888, April 2011. [154] Jacquelyn K.S. Nagel and Robert B. Stone. A computational approach to bio- logically inspired design. Articial Intelligence for Engineering Design, Analysis and Manufacturing, 26(02):161{176, April 2012. [155] Radhika Nagpal. A catalog of biologically-inspired primitives for engineering self-organization. In Engineering Self-Organising Systems, pages 53{62. Springer, 2004. [156] Giuseppe Narzisi, Venkatesh Mysore, and Bud Mishra. Multi-objective evo- lutionary optimization of agent-based models: An application to emergency response planning. In The IASTED International Conference on Computational Intelligence (CI 2006), 2006. [157] Janna C. Nawroth, Hyungsuk Lee, Adam W. Feinberg, Crystal M. Ripplinger, Megan L. McCain, Anna Grosberg, John O. Dabiri, and Kevin Kit Parker. A tissue-engineered jellysh with biomimetic propulsion. Nature Biotechnology, 30(8):792{797, 2012. [158] Robert Neches and Azad M. Madni. Towards aordably adaptable and eective systems. Systems Engineering, 2012. 186 Bibliography [159] John von Neumann and Arthur W. Burks. Theory of self-reproducing automata. 1966. [160] Gregoire Nicolis. Exploring complexity: an introduction. W.H. Freeman, New York, 1989. [161] Dustin J. Nowak, Gary B. Lamont, and Gilbert L. Peterson. Emergent architec- ture in self organized swarm systems for military applications. In Proceedings of the 2008 GECCO conference companion on Genetic and evolutionary computa- tion, GECCO '08, pages 1913{1920, New York, NY, USA, 2008. ACM. [162] George Oster and Edward Wilson. Caste and Ecology in the Social Insects. Princeton University Press, Princeton, NJ, 1978. [163] Edward Ott, Celso Grebogi, and James A. Yorke. Controlling chaos. Physical Review Letters, 64(11):1196{1199, March 1990. [164] William G. Ouchi. 
A Conceptual Framework for the Design of Organizational Control Mechanisms. Management Science, 25(9):833{848, September 1979. [165] Gerhard Pahl, Ken Wallace, and Lucinne Blessing. Engineering design: a systematic approach. Springer, London, 2007. [166] Liviu Panait and Sean Luke. Cooperative Multi-Agent Learning: The State of the Art. Autonomous Agents and Multi-Agent Systems, 11:2005, 2005. [167] H. Van Dyke Parunak and Sven A. Brueckner. Engineering swarming systems. In Methodologies and Software Engineering for Agent Systems, pages 341{376. Springer, 2004. [168] H. Van Dyke Parunak, Robert Savit, and Rick L. Riolo. Agent-Based Modeling vs. Equation-Based Modeling: A Case Study and Users Guide. In Jaime Simo Sichman, Rosaria Conte, and Nigel Gilbert, editors, Multi-Agent Systems and Agent-Based Simulation, number 1534 in Lecture Notes in Computer Science, pages 10{25. Springer Berlin Heidelberg, January 1998. [169] Howard Pattee. Hierarchy Theory: the Challenge of Complex Systems. George Braziller, New York, 1973. [170] David Payton, Mike Daily, Regina Estowski, Mike Howard, and Craig Lee. Pheromone Robotics. 2001. [171] Michael J. Pennock and Jon P. Wade. The top 10 illusions of systems engineering: A research agenda. Procedia Computer Science, 44:147{154, 2015. [172] Henry Petroski. Design paradigms : case histories of error and judgment in engineering. Cambridge University Press, Cambridge [England]; New York, N.Y., 1994. 187 Behavioral Modeling and Computational Synthesis of Self-Organizing Systems [173] Jeremy Pitt. This pervasive day: the potential and perils of pervasive computing. Imperial College Press, London, 2012. [174] Ilya Prigogine. Non-linear Science and the Laws of Nature. Journal of the Franklin Institute, 334(5):745{758, 1997. [175] Ilya Prigogine. Creativity in Art and Nature. New Perspectives Quarterly, 21(1):12{15, 2004. [176] Mikhail Prokopenko. Advances in applied self-organizing systems. Springer, 2008. [177] Mikhail Prokopenko. 
Advances in Applied Self-Organizing Systems. Springer London, January 2013. [178] Stuart Pugh. Total design. Addison-Wesley, 1990. [179] Howard Raia. Decision analysis: introductory lectures on choices under uncer- tainty. Random House, 1968. [180] Norman C. Remich. DFX. Appliance Manufacturer, 46(8):100, August 1998. [181] Ari Requicha. Swarms of Self-Organized Nanorobots. In Nanorobotics, pages 41{49. Springer, 2013. [182] Ari Requicha and Daniel Arbuckle. Issues in Self-Repairing Robotic Self- Assembly. In Morphogenetic Engineering: Toward Programmable Complex Systems, pages 141{156. 2012. [183] Craig Reynolds. Flocks, herds, and schools: A distributed behavioral model. In ACM SIGGRAPH '87 Conference Proceedings, volume 25-34. 21, 1987. [184] Christina Rogers. U.S. to Propose Vehicle-to-Vehicle, Crash-Avoidance Systems. The Wall Street Journal, February 2014. [185] Adam M. Ross, Donna H. Rhodes, and Daniel E. Hastings. Dening changeabil- ity: Reconciling exibility, adaptability, scalability, modiability, and robustness for maintaining system lifecycle value. Systems Engineering, 11(3):246{262, 2008. [186] Ranjit K. Roy. A primer on the Taguchi method. Competitive manufacturing series. Van Nostrand Reinhold, New York, 1990. [187] M. Rubenstein, C. Ahler, and R. Nagpal. Kilobot: A low cost scalable robot system for collective behaviors. In 2012 IEEE International Conference on Robotics and Automation (ICRA), pages 3293 {3298, May 2012. [188] Farzad Sadjadi. Comparison of tness scaling functions in genetic algorithms with applications to optical processing. volume 5557, pages 356{364. SPIE, 2004. 188 Bibliography [189] Stanley N. Salthe and Koichiro Matsuno. Self-organization in hierarchical systems. Journal of Social and Evolutionary Systems, 18(4):327{338, 1995. [190] Carlos J. Sanchez, Chen-Wei Chiu, Yan Zhou, Jorge Gonzalez, S. Bradleigh Vinson, and Hong Liang. Locomotion control of hybrid cockroach robots. 
Journal of The Royal Society Interface, 12(105):20141363{20141363, March 2015. [191] Susan M. Sanchez. Work smarter, not harder: guidelines for designing simulation experiments. In Proceedings of the 37th conference on Winter simulation, pages 69{82, 2005. [192] Jen Schellinck and Tony White. A review of attraction and repulsion models of aggregation: Methods, ndings and a discussion of model validation. Ecological Modelling, 222(11):1897{1911, 2011. [193] Jacob T Schwartz and Micha Sharir. On the piano mover's problem. II. General techniques for computing topological properties of real algebraic manifolds. Advances in Applied Mathematics, 4(3):298{351, September 1983. [194] Sriram Shankaran, Dusan M. Stipanovi, and Claire J. Tomlin. Collision Avoid- ance Strategies for a Three-Player Game. In Advances in Dynamic Games, pages 253{271. Springer, 2011. [195] Wei-Min Shen, Behnam Salemi, and Peter Will. Hormone-inspired adaptive communication and distributed control for CONRO self-recongurable robots. IEEE Transactions on Robotics and Automation, 18(5):700 { 712, October 2002. [196] Herbert Simon. The Architecture of Complexity. Proceedings of the American Philosophical Society, 106(6):467{482, 1962. [197] Herbert Simon. The sciences of the articial. MIT Press, Cambridge, Mass., 1996. [198] Kaushik Sinha and Olivier L. de Weck. A network-based structural complexity metric for engineered complex systems. In Systems Conference (SysCon), 2013 IEEE International, pages 426{430. IEEE, 2013. [199] Kaushik Sinha and Olivier L. de Weck. Structural Complexity Quantication for Engineered and Complex Systems and Implications on System Architecture and Design. In Proceedings of the ASME 2013 International Design Engi- neering Technical Conferences and Computers and Information in Engineering Conference, 2013. [200] Yong Song, Jung-Hwan Kim, and Dylan Shell. Self-organized Clustering of Square Objects by Multiple Robots. 
In Marco Dorigo, Mauro Birattari, Christian Blum, Anders Christensen, Andries Engelbrecht, Roderich Gro, and Thomas Sttzle, editors, Swarm Intelligence, volume 7461 of Lecture Notes in Computer Science, pages 308{315. Springer Berlin / Heidelberg, 2012. 189 Behavioral Modeling and Computational Synthesis of Self-Organizing Systems [201] William M. Spears and Vic Anand. A study of crossover operators in genetic programming. Springer, 1991. [202] Peter Stone and Manuela Veloso. Multiagent Systems: A Survey from a Machine Learning Perspective. Autonomous Robots, 8(3):345{383, 2000. [203] Robert B. Stone and Kristin L. Wood. Development of a Functional Basis for Design. Journal of Mechanical Design, 122(4):359{370, August 1999. [204] Forrest Stonedahl and Uri Wilensky. Finding Forms of Flocking: Evolutionary Search in ABM Parameter-Spaces. In Proceedings of the MABS workshop at the Ninth International Conference on Autonomous Agents and Multi-Agent Systems, 2010. [205] Daniel Stuart, Keith Christensen, Anthony Chen, Ke-Cai Cao, Caibin Zeng, and Yang-Quan Chen. A Framework for Modeling and Managing Mass Pedestrian Evacuations Involving Individuals With Disabilities: Networked Segways as Mobile Sensors and Actuators. In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Con- ference, pages V004T08A011{V004T08A011. American Society of Mechanical Engineers, 2013. [206] Thomas St utzle and Marco Dorigo. ACO algorithms for the traveling salesman problem. Evolutionary Algorithms in Engineering and Computer Science, pages 163{183, 1999. [207] Nam P. Suh. The principles of design. Number 6 in Oxford series on advanced manufacturing. Oxford University Press, New York, 1990. [208] Nam P. Suh. A Theory of Complexity, Periodicity and the Design Axioms. Research in Engineering Design, 11(2):116{132, August 1999. [209] Nam P. Suh. Axiomatic design: advances and applications. Oxford University Press, New York, 2001. 
[210] David J. T. Sumpter. The principles of collective animal behaviour. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 361(1465):5{ 22, January 2006. [211] Sunzi, Lionel Giles, and Don Mann. The art of war: the oldest military treatise in the world. Skyhorse Pub Co Inc, New York, 2013. [212] Genichi Taguchi. System of experimental design: engineering methods to opti- mize quality and minimize costs. UNIPUB/Kraus International Publications ; American Supplier Institute, White Plains, N.Y.: Dearborn, Mich, 1987. [213] Danesh Tarapore and Jean-Baptiste Mouret. Evolvability signatures of generative encodings: beyond standard performance benchmarks. Information Sciences, 313:43{61, 2015. 190 Bibliography [214] John Terninko, Alla Zusman, and Boris Zlotin. Systematic Innovation: An Introduction to TRIZ (Theory of Inventive Problem Solving). CRC Press, April 1998. [215] Guy Theraulaz, Jacques Gautrais, Scott Camazine, and Jean-Louis Deneubourg. The formation of spatial patterns in social insects: from simple behaviours to complex structures. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 361(1807):1263{1282, June 2003. [216] James Thompson. Organizations in Action. McGraw-Hill, New York, 1967. [217] Vito Trianni. Evolutionary swarm robotics: evolving self-organising behaviours in groups of autonomous robots. (Studies in computational intelligence ; v. 108). Springer, Berlin, 2008. [218] Kagan Tumer and Newsha Khani. Learning from actions not taken in multiagent systems. Advances in Complex Systems, 12(04n05):455{473, 2009. [219] Alan Turing. The Chemical Basis of Morphogenesis. Philosophical Transactions of the Royal Society of London, 237(264):37{72, 1953. [220] Tsuyoshi Ueyama, Toshio Fukuda, and Fumihito Arai. Structure Congu- ration using Genetic Algorithm for Cellular Robotic System. 
In IEEE/RSJ International Conference on Intelligent Systems, volume 3 of 1549, page 1542, 1992. [221] Arnold B. Urken, Arthur Buck Nimz, and Tod M. Schuck. Designing evolvable systems in a framework of robust, resilient and sustainable engineering analysis. Advanced Engineering Informatics, 26(3):553{562, August 2012. [222] Sjors van Berkel, Daniel Turi, Andrei Pruteanu, and Stefan Dulman. Automatic discovery of algorithms for multi-agent systems. In Proceedings of the fourteenth international conference on Genetic and evolutionary computation conference companion, GECCO Companion '12, pages 337{344, New York, NY, USA, 2012. ACM. [223] Ashlee Vance. Google's Self-Driving Robot Cars Are Ruining My Commute. BusinessWeek: technology, March 2013. [224] Mirko Viroli, Matteo Casadei, Sara Montagna, and Franco Zambonelli. Spatial coordination of pervasive services through chemical-inspired tuple spaces. ACM Transactions on Autonomous and Adaptive Systems, 6(2):14, 2011. [225] Ludwig Von Bertalany. General system theory. General systems, 1(1):11{17, 1956. [226] Ludwig von Bertalany. General system theory: Foundations, development, applications. Braziller. New York, 1968. 191 Behavioral Modeling and Computational Synthesis of Self-Organizing Systems [227] Heinz von Foerster. On self-organizing systems and their environments. In Understanding Understanding, pages 1{19. Springer, 2003. [228] Heinz von Foerster. Understanding understanding: Essays on cybernetics and cognition. Springer Science & Business Media, 2007. [229] John von Neumann. The role of high and extremely high complication. Theory of Self-Reproducing Automata, 1966. [230] Andreas Wagner. Robustness and evolvability in living systems. Princeton studies in complexity. Princeton University Press, Princeton, N.J, 2005. [231] Nigel Warburton. Philosophy: the basics. Routledge, London ; New York, 5th ed edition, 2013. [232] Nicole Washington and Suzanna Lewis. Ontologies: Scientic data sharing made easy. 
Nature Education, 1(3), 2008. [233] Peter H. Welch, Kurt Wallnau, Adam T. Sampson, and Mark Klein. To boldly go: an occam- mission to engineer emergence. Natural Computing, 11(3):449{474, September 2012. [234] Justin Werfel. Collective Construction with Robot Swarms. In Morphogenetic Engineering: Toward Programmable Complex Systems, pages 115{140. 2012. [235] Justin Werfel and Radhika Nagpal. Extended stigmergy in collective construction. Intelligent Systems, IEEE, 21(2):20{28, 2006. [236] Shimon Whiteson, Nate Kohl, Risto Miikkulainen, and Peter Stone. Evolving Keepaway Soccer Players through Task Decomposition. In Machine Learning, pages 356{368, 2003. [237] Uri Wilensky. NetLogo, 1998. [238] Uri Wilensky. NetLogo Ant Model, 1998. [239] Uri Wilensky. NetLogo Flocking Model, 1998. [240] Alan F. T. Wineld, Christopher J. Harper, and Julien Nembrini. Towards Dependable Swarms and a New Discipline of Swarm Engineering. In Erol ahin and William M. Spears, editors, Swarm Robotics, number 3342 in Lecture Notes in Computer Science, pages 126{142. Springer Berlin Heidelberg, January 2005. [241] Lars Wischhof, Andr e Ebner, and Hermann Rohling. Information dissemination in self-organizing intervehicle networks. Intelligent Transportation Systems, IEEE Transactions on, 6(1):90{101, 2005. [242] Stephen Wolfram. Statistical mechanics of cellular automata. Reviews of Modern Physics, 55(3):601{644, July 1983. 192 Bibliography [243] Stephen Wolfram. Universality and complexity in cellular automata. Physica D: Nonlinear Phenomena, 10(12):1{35, January 1984. [244] Stephen Wolfram. A new kind of science, volume 5. Wolfram media Champaign, 2002. [245] Or Yogev, Andrew A. Shapiro, and Erik K. Antonsson. Engineering by funda- mental elements of evolution. In Proceedings of the ASME 2008 International Design Engineering Technical Conferences, 2008. [246] Byeng Youn, Chao Hu, and Pingfeng Wang. Resilience-driven system design of complex engineered systems. 
Journal of Mechanical Design, 133(10):101011, 2011. [247] Dandan Zhang, Long Wang, and Junzhi Yu. A Coordination Method for Multiple Biomimetic Robotic Fish in Underwater Transport Task. In American Control Conference, 2007. ACC'07, pages 1870{1875. IEEE, 2007. [248] Jinming Zou, Yi Han, and Sung-Sau So. Overview of Articial Neural Networks. In John M. Walker and David J. Livingstone, editors, Articial Neural Networks, volume 458, pages 14{22. Humana Press, Totowa, NJ, 2008. [249] George Zouein. A Biologically Inspired DNA-based Cellular Approach to Devel- oping Complex Adaptive Systems. PhD thesis, University of Southern California, 2009. [250] George Zouein, Chang Chen, and Yan Jin. Create Adaptive Systems through \DNA" Guided Cellular Formation. In Design Creativity 2010. 2010. 193
Asset Metadata
Creator: Humann, James
Core Title: Behavioral modeling and computational synthesis of self-organizing systems
School: Viterbi School of Engineering
Degree: Doctor of Philosophy, Mechanical Engineering
Publication Date: 09/17/2015
Defense Date: 08/28/2015
Publisher: University of Southern California
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c40-184031