Behavioral form finding using multi-agent systems: a computational methodology for combining generative design with environmental and structural analysis in architectural design
Doctor of Philosophy in Civil & Environmental Engineering
Faculty of the USC Graduate School, University of Southern California

Behavioral Form Finding Using Multi-Agent Systems: A computational methodology for combining generative design with environmental and structural analysis in architectural design

by Evangelos Pantazis

Diploma in Architectural Engineering, Aristotle University of Thessaloniki, School of Architecture, 2010
M.A.S. in Computer Aided Architectural Design, Swiss Federal Institute of Technology, ETH Zurich, 2013

August 2019
© 2019 University of Southern California. All rights reserved.

David Jason Gerber, D.Des.
Associate Professor of Practice, School of Architecture / Dept. of Civil and Environmental Engineering, University of Southern California
Thesis Supervisor

Burcin Becerik-Gerber, D.Des.
Associate Professor, Department of Civil and Environmental Engineering, University of Southern California
Thesis Reader

Lucio Soibelman, Ph.D.
Professor and Chair of the Department of Civil and Environmental Engineering, University of Southern California
Thesis Reader

Marcos Novak, Ph.D.
Professor, Department of Media Arts and Technology, University of California, Santa Barbara
Thesis Reader

Abstract

Current architectural design models are generally incapable of integrating design formation processes that are directly informed by building performance simulation and construction processes. Developing computational design methods that consider building design holistically and help reduce the growing complexity of the architecture, engineering and construction (AEC) fields is crucial as we move toward the digitalization of our built environment. This dissertation presents a behavioral design methodology for exploring architectural design alternatives in the early design stage, based on the coupling of geometric parameters with environmental and structural analysis as well as autonomous construction processes.
The methodology is implemented in a multi-agent systems framework in which building elements are represented as agents. The framework considers the design-to-construction process holistically, and to facilitate an integrative design process, domain-specific data relating to different design phases (architectural, structural and environmental design) can be used to inform the agents' behaviors. The proposed framework targets the early design stage, and especially design problems that traditionally require close collaboration between architects and engineers, such as façade design and shell structures. The objective is to facilitate passing data between the different types of models (architectural, structural, environmental) and to enable architectural designers to explore larger solution spaces and gain deeper insight into complex building design decisions. To achieve this, a prototype design tool has been developed that allows design problems to be decomposed into agents and behaviors which, by interacting in user-specified geometric and data environments, automatically generate and evaluate design alternatives against performance targets set by the designer in the form of heuristic functions. For the purposes of analysis and validation of the methodology, this dissertation presents a series of experimental designs of varying complexity in which this approach and the design tool have been applied.

Acknowledgments

A work of this nature owes a large debt of gratitude to the many people who have directly or indirectly contributed to it. During the period in which this research was taking shape, I had the privilege of working closely with talented engineers, computer scientists and architects from both academia and the profession. The first and foremost debt of gratitude is owed to Professors E. Chatzi and L. Hovestadt, who were the first to support me in pursuing my Ph.D. research interests.
Professor Hovestadt’s broad thinking on architecture, mathematics and philosophy and his approach to computational methods were eye-opening, and his thoughts resonated throughout my graduate studies, while Professor Chatzi was instrumental in developing both my engineering curiosity and thinking. I also express my gratitude to my advisor, Professor David Gerber, who highlighted my weak points and persistently challenged me to push my boundaries. At times we had arguments about work and procedure that, in retrospect, evolved into a very educational process. Last, but not least, I want to thank the members of my Doctoral Committee, Professors L. Soibelman, B. Becerik-Gerber, M. Novak and Y. Jin, for their guidance and support along what has become a meandering path to the completion of this thesis. I am both literally and metaphorically indebted to all the organizations, institutes and companies that accepted my proposals and believed in my research vision. I would therefore like to thank the National Science Foundation in the USA, which partially funded my research (grant no. 1231001), as well as the Onassis Foundation in Greece. My special thanks go to Ioanna Kailani, who assisted me with the paperwork and patiently communicated with me over the years. In addition, I would like to thank both the Myronis Fellowship and the Gerondelis Foundation for recognizing my research efforts. I also thank Autodesk, Inc. and the Institute for Advanced Architecture of Catalonia (IAAC) for awarding me the BUILDers in Residence Fellowship. Specifically, I would like to thank Areti Markopoulou and Mathilde Marengo from the IAAC, Rick Rundell and Athena Moore from Boston’s BUILD Space for their support throughout the residency, and Glenn Katz and Erin Bradner from Autodesk, Inc. for their help during my IDEA residency.
I gratefully thank Ron Elad from BuroHappold Engineering, who offered me an opportunity to apply my computational skills in practice. I also thank my colleagues Arsalan Heydarian and Leandro Soriano Marcolino for being great collaborators, for making me see things from a different perspective, and for enriching my communication skills. I also want to thank my fellow Ph.D. students from USC, namely Biayna Bogosian, Nikos Kalligeris, Eyup Koc, Ashrant Aryal, Joao Carneiro, Ali Kazemian and Pouyan Hosseini, for patiently listening to my ideas and thoughts. Additionally, I would like to thank graduate and undergraduate students Nick Morof, Rheseok Kim, Brian Herrera, Kevin Daley, Punit Das, Justin Yang and Francine Ngo for their contributions in preparing and running the experiments and for their illustrations of the results. I am grateful to my friends, including Robson Morgan, Oriol Carasco, Jose Sanchez, Stelios Koumlis, Ioannis Bertsatos, Constantinos Schinas, Spyros Panagiotopoulos and Christina Tsakiri, for dedicating their time to discussing, reading and commenting on my work. As always, I send many thanks to my parents, Spiros Pantazis and Eleni Margaritidou, and my siblings, Iason and Magdalena, who have always shined light on my path and supported me even when they did not fully agree with my decisions. Finally, I thank the reader for taking the time to read this thesis and for having the interest to reflect on the thoughts contained herein.

Preface

I would like to introduce the reader to the general context of the work, its intended audience, my personal motivation and the chronology of this thesis. The work presented in this thesis was funded by a grant from the National Science Foundation (NSF Grant no. 1231001) under the Sustainable Energy Pathways (SEP) Program, and involved conducting fundamental research on the application of rigorous computational methods in architectural design.
The aim, therefore, is not to produce a final solution, but rather to conduct methodological design research on how computational methods can be utilized a) to reduce the energy footprint of buildings, b) to integrate design performance analysis in the early design stage, and c) to extend the creative capacity of architects. My personal motivation for conducting this Ph.D. research stems from the work done for my diploma thesis at Aristotle University of Thessaloniki, titled Bridging Digital Design and Fabrication, which discusses the impact of digital design and digital fabrication techniques on architecture. I became interested in geometry rationalization and in finding computational approaches for improving the performance of building designs, particularly after working as an architect and parametric designer in multiple architectural studios across the globe, where I had the opportunity to participate in the design of large-scale projects with complex geometry. I also came to recognize the complexities involved in realizing such projects, the inefficiencies in communication between architects and engineers, and the limitations of existing design tools. I therefore decided to enrich my knowledge of computation and robotics by pursuing a Master of Advanced Studies in Computer Aided Architectural Design at the Swiss Federal Institute of Technology, ETH Zurich, where I engaged rigorously in the development of generative design toolkits and the design of robotic construction workflows through the realization of two full-size structures. This combination of empirical experience and theoretical grounding, which extended beyond the realm of architecture and ranged from mathematics to philosophy and computer science, gave me an overview of the strengths and limitations of both existing parametric design and digital fabrication methods. I began my Ph.D. studies inspired by swarm intelligence algorithms and recent advances in the use of robots in architecture.
I intended to focus on investigating how using industrial robotic arms in building construction can address the discrepancy between what can be designed and what can be digitally fabricated and constructed. I soon realized that when dealing with multiple robots (which is inevitable in construction sites) complexity and computation both increase in scale exponentially as the number of both robot to robot and robot to structure interactions increase. Another realization was that the research using agent-based modelling and robotic fabrication in architecture mostly focused on the fabrication of a structure without putting a lot of effort into the quality of the structure's performance. Therefore, I got interested in how computational methods, and specifically the modularity and distributed nature of multi-agent systems can be used to integrate information from different fields into the early design. My vision is to develop a holistic design approach that allow designers to explore the full potential of multi-agent systems without introducing and compounding complexity in the design and fabrication process. In this dissertation, I present the results of my investigations of how engineering parameters and fabrication constraints can be represented abstractly and become design drivers that lead to complex yet performative design outcomes that would not be possible to design and build using traditional methods. Page | 7 Table of Contents Part I 1. Introduction .............................................................................................................................................. 13 1.1. Increasing Complexity of Building Design ........................................................................................ 13 1.2. The Direction of Architectural Design ............................................................................................... 14 1.3. 
Problem Statement .......................................................................................................................... 17 1.4. Motivation ........................................................................................................................................ 17 1.5. Hypothesis and Research Questions ............................................................................................... 18 1.6. Contribution ..................................................................................................................................... 18 1.7. Organization of this Thesis .............................................................................................................. 19 Part II 2. Background and Related Literature .......................................................................................................... 21 2.1. Design Models for Architectural Design ........................................................................................... 21 Models of Design ...................................................................................................................... 21 Digital Design Models in Architectural Design ........................................................................... 26 Summary .................................................................................................................................. 30 2.2. A Brief Historical Review of Complexity and its Relationship to Architectural Design ....................... 31 Theoretical Framework for Approaching Design Complexity ..................................................... 35 Sources of Complexity .............................................................................................................. 37 Underlying Assumptions of Complexity ..................................................................................... 
37 Defining Complexity .................................................................................................................. 39 Measuring Complexity .............................................................................................................. 41 Decomposing Complexity in the AEC ....................................................................................... 42 Tools for Managing Complexity ................................................................................................ 46 Complexity and Complex Adaptive Systems............................................................................. 48 A Holistic Design Approach for Managing Complexity .............................................................. 49 Summary .................................................................................................................................. 49 2.3. Multi-Agent Systems (MAS) ............................................................................................................. 50 Complex Adaptive Systems and MAS for Design ..................................................................... 50 Models of Agents ...................................................................................................................... 51 Agent Ontologies ...................................................................................................................... 52 Agent Classes .......................................................................................................................... 53 Applications of MAS in AEC ..................................................................................................... 55 Part III 3. Research Methodology ............................................................................................................................ 59 3.1. 
Generic Design Problem Solvers Using MASs ................................................................................. 59 Design Problem Decomposition................................................................................................ 60 Hypothesis ............................................................................................................................... 61 3.2. Framework Development for Integrating Multiple Design Phases .................................................... 62 Task 1: Development of Agent Classes .................................................................................... 64 Task 2: Search Space Exploration Using Heuristic Algorithms ................................................. 66 Task 3: Agent Coordination, Evaluation and Negotiation .......................................................... 70 Task 4: Prototype Development (MAS Design Tool) ................................................................. 71 Task 5: Develop a GUI to Allow Interactivity Between Designer and MAS ................................ 72 Page | 8 Part IV 4. Design Experiments ................................................................................................................................. 75 4.1. Experimental Design 1: Bridging Digital Agent-based Modelling and Simulation with Physical Robotic Systems (Agents) ............................................................................................................... 76 Embodied Swarm Behavior ...................................................................................................... 76 Design Process ........................................................................................................................ 77 Results and Analysis ................................................................................................................ 82 4.2. 
Experimental Design 2: Agent Based Façade Design in Office Buildings ......................................... 86 Façade Design Exploration Using Heuristic Algorithms ............................................................ 88 Daylight Metrics and Design Performance Goals ...................................................................... 88 Façade Design Optimization Using Robotic Simulations .......................................................... 90 Case Study A: Façade Generation for an Office Building in Los Angeles ................................. 92 Case Study B: Façade Construction Simulation of a Bay on a Generic Office Building ........... 100 4.3. Experimental Design 3: Agent-Based Shell Structure Design ........................................................ 105 Structural Form Finding .......................................................................................................... 106 Background and Context of Reciprocal Frames ...................................................................... 107 Case Study A: Fabrication Aware Form Finding ..................................................................... 110 Case Study B: Environmentally Aware Reciprocal Frames ..................................................... 116 4.4. Experimental Design 4: Revisiting an Existing Shell Structure Using Behavioral Form Finding ...... 123 Case Study: Sports Center by Heinz Isler ............................................................................... 125 Design Process ...................................................................................................................... 125 Results and Analysis .............................................................................................................. 128 Part V 5. Overall Results and Analysis ................................................................................................................. 132 5.1. 
Summary of Results ...................................................................................................................... 132 5.2. Contributions ................................................................................................................................. 136 5.3. Future Work ................................................................................................................................... 137 Part VI 6. List of Relevant Publications .................................................................................................................. 139 7. References ............................................................................................................................................ 141 Page | 9 TABLE OF FIGURES Figure 1 Timeline illustrating the introduction and full adoption of different types of complexity in relation to applicable system types and significant advancements .............................................................................................. 14 Figure 2 Timeline illustrating the evolution of design tools, based a) on whether they are computer based or computational and b) their method of integration ........................................................................................................ 15 Figure 3 Diagrams of methodological design models central to computational design, namely: a) design as a problem solving process (diagram based on plesk’s cyclic approach), b) design as a simulation, c) design as an optimization, and d) design as a space state exploration. The diagrams are based on definitions developed by plesk, cryer, asimow, gero and sosa ............................................................................................................................ 
22 Figure 4 Timeline showing an evolution form reduced to complex design models in relation to the type of model (dynamic/static) and whether it considers or not environmental parameters .............................................................. 25 Figure 5 Timeline illustrating the increasing complexity in building structures in terms of the design approach and building paradigm used ................................................................................................................................................ 31 Figure 6 Diagram illustrating the research methodology implemented for this literature survey ......................................... 32 Figure 7 Number of publications including the term complexity in architecture and related domains reviewed by the authors based on a library built from querying databases of architectural design computing communities (250 publications) .................................................................................................................................................................. 33 Figure 8 Plots of the appearance of key terms relating to complexity (above) and different types of complexity (below) as n-grams in the google books library (25,000,000 publications). ............................................................... 34 Figure 9 Diagram illustrating the main characteristics of complex systems with regards to building design ..................... 35 Figure 10 The term complexity disassembled into different levels based on where it applies (2nd level), where it arises (3rd and 4th levels), and keys properties in addressing and managing it (5th level) ......................................... 36 Figure 11 A taxonomy of different complexity types based on whether they regard complexity as relative or as an absolute quantity. 
The taxonomy is built upon definitions from the fields of general systems theory, cybernetics, information theory, computer science, complexity theory and architecture, engineering and construction literature. ....................................................................................................................................................................................... 38 Figure 12 Diagram illustrating phases of a design to construction process in relation to the definition of architectural complexity provided by mitchell and the holistic definition of architectural complexity provided by the author ........ 45 Figure 13 Diagram showing types of agent models (ontologies) with an increasing level of complexity (from a-d) ........... 53 Figure 14 Diagram illustrating the overall multi-agent systems for design approaches including design problems, designer interaction, results, feedback loops, and the decomposition of the system into subdomains including design generation, simulation, analysis and evaluation ............................................................................................... 62 Figure 15 MAS framework diagram showing agent classes, and interdependencies between agents. Numbers in each component indicate the process workflow .......................................................................................................... 63 Figure 16 Diagram illustrating the typical agents’ internal structure, agent types and agent hierarchy within the mas for design ...................................................................................................................................................................... 65 Figure 17 Diagram showing the basic steps for implementing a heuristic search computationally, namely (from left to right) the definition of design parameters and a set of performance measures. 
The design parameters are related to the measures via a heuristic function which forms the solution space/landscape that is being traversed using stochastic algorithms .................................................................................................................................................... 67 Figure 18 MAS for design system architecture: mas environments, inputs, data transfers among software platforms, and actions and relationships among agents ...................................................................................................................... 71 Figure 19 Prototype version of the developed graphical user interface for interacting with the mas toolkit ...................... 72 Figure 20 Flowchart showing the description of a design in grasshopper. The shaded boxes represent agent classes of the MAS framework and white boxes represent input data ......................................................................................... 73 Figure 21 Framework diagram illustrating the design phases, parameters and tools developed ....................................... 77 Figure 22 Diagram illustrating the established experimental testbed .................................................................................. 78 Figure 23 Diagram with main design parameters of the robots, part and the environment ................................................ 80 Figure 24 Taxonomy of robot and part designs ................................................................................................................... 81 Page | 10 Figure 25 Geometric configurations of different environments where the active agents (bristlebots) interact with the passive agents (parts) ................................................................................................................................................... 
82 Figure 26 Plots showing the relationship of the bristlebots' design parameters with regards to velocity........................... 83 Figure 27 Plots of the number and size of parts clusters over time in 6 experimental runs. The top row shows results of the experimental runs with circular parts in the 3 different arenas while the bottom row shows results of the runs with the hexagonal parts ............................................................................................................................................... 84 Figure 28 Images showing different robot behaviors: a) a single robot navigating a path defined by variation of friction, b) robots forming a chain by pushing parts in synchronous motion, and c) emergent clustering of hexagonal robots, parts and collective transport ....................................................................................................................................... 84 Figure 29 Diagram illustrating the passing of the problem into the system and its decomposition into subdomains including design generation, simulation, analysis and evaluation ................................................................................ 87 Figure 30 Experimental design setup that describes the environment, design parameters and the heuristic function that couples the extrusion of each façade panel with the analysis plane based on the degree (percentage) that each panel affects each specific virtual sensor point ............................................................................................................ 89 Figure 31 Design of the experiment for the simulation of the robotic fabrication segmentation, positioning, reachability, and collision parameters of the construction agent ..................................................................................................... 
90 Figure 32 Schematic of simple building data model: relationship between components and object attributes (left) and table with local and global design parameters of the facade panel agent (right) ......................................................... 91 Figure 33 Diagram describing the specific design context of the building with (a) the design surface of one building bay façade module, (b) the design behaviors of the agent-based fenestration, and (c) the analysis surface with the virtual sensor points (d) of this experimental case study ........................................................................................................ 92 Figure 34 Rules of the generative façade panel agent(s) (e-h) and a typical generation process (i) shown in six sequential steps (from t=0, to t=t) including representative window openings ............................................................................. 93 Figure 35 A set of evolutionary façade designs where the same agent panel is applied on different input surfaces ......... 94 Figure 36 A set of evolutionary façade designs on a planar surface. The design parameters that change from to bottom are the panel type probability and sequence, length and extrusion of each panel...................................................... 96 Figure 37 Comparative graph of experimental runs showing three different search methods for placing openings on a south facing facade: linear search (all possible solutions), hill climbing and simulated annealing .............................. 97 Figure 38 Design alternatives presented to the designer along a series of environmental performance metrics in a parallel line plot .......................................................................................................................................................................... 98 Figure 39 A rendering showing the facades designs generated for the southwest facade of the one wilshire building in downtown los angeles. 
................................................................................................................................................. 99 Figure 40 Diagram illustrating the different geometries of a single bay depending on the global geometry of the tower structure ...................................................................................................................................................................... 100 Figure 41 Diagram illustrating all the workflows and relationships between the mas agents ............................................ 101 Figure 42 a subset of generated facade panel alternatives with 2 and 3 openings, which are highly ranked in terms of environmental performance and therefore were further tested for their constructability........................................... 103 Figure 43 Plots illustrating 3 design generation cycles, with variable angle (bold, dark lines) and without variable angle (thin, light lines); and the constructability score improvement over time per geometric iteration. ............................ 104 Figure 44 Historical timeline of form active and form passive shell structures .................................................................. 105 Figure 45 Timeline with examples of structures realized using reciprocal frames dating from antiquity to april 2019 ..... 107 Figure 46 Photo of assembled reciprocal structure in situ ................................................................................................. 109 Figure 47 Diagram showing the form finding process (left) and curvature analysis of alternative form found shells (right) ..................................................................................................................................................................................... 110 Figure 48 Diagram with the panels developed flat and assembled in the final structure .................................................. 
112
Figure 49 Fabrication process diagram showing a) production of raw material (7.5mm birch plywood), b) nesting of components into panels of variable size (5 sizes), c) the CNC milling process, d) preheating of the component, and e) the thermo-forming of panels in a press mold ........................................................................................................ 113
Figure 50 Photos of the final structure in situ .................................................................................................................... 114
Page | 11
Figure 51 Diagram of the perforation system agents' behavior showing a) the agents' motion towards a local maximum based on sun radiation, b) the agents’ path trajectory, and c) the generated milling pattern based on the agents’ paths ........................................................................................................................................................................... 115
Figure 52 Diagram illustrating the input parameters, two different design behaviors, and the resulting environmental analysis ........................................................................................................................................................................ 116
Figure 53 Diagram describing the design steps of the case study .................................................................................... 117
Figure 54 Graphs showing maximum element stress in relation to length, thickness and geometry, as well as the allowable stress ........................................................................................................................................................... 118
Figure 55 Design parameters for the reciprocal frame and stress distribution of different units based on the number of elements.
..................................................................................................................................................................... 120
Figure 56 (Above): structural analysis of global geometries generated by varying agent behaviors; (below): solar radiation analysis of global geometries generated with form finding and different agent behaviors .......................... 121
Figure 57 Four different shell structures designed by Isler and constructed in Switzerland between 1978-1988 ............ 123
Figure 58 Photos showing physical models (top left), the construction phase (top right), and the contemporary condition (bottom) of the Heimberg sports center by H. Isler ..................................................................................................... 124
Figure 59 Flowchart diagram of the proposed behavioral form finding workflow .............................................................. 125
Figure 60 Experimental design setup that describes the environment, design parameters and the heuristic function that couples the form finding with the photophilic behavior, and three different types of analysis (structural, radiation and thermal energy analysis) ............................................................................................................................................. 126
Figure 61 Graphical user interface of the alpha version of the tool. On the top left side (control panel) are all the input parameters, in the middle is the geometry viewport, and on the right is the window used to interact with Rhinoceros 3D and Grasshopper. In the bottom panel the results generated are shown using a parallel lines plot .................... 128
Figure 62 Analytical results of the four different structures ................................................................................................
129
Figure 63 A subset of design alternatives presented to the designer, highlighting the one that best meets the design objectives based on the available analytical metrics .................................................................................................. 130
Figure 64 Schematic representation of the "termite" MAS framework which combines top-down and bottom-up design approaches ................................................................................................................................................................. 133
Figure 65 A diagram showing how structural hierarchies and different layers of abstraction can come together to produce a complex outcome (i.e. a concert, a building) ............................................................................................ 134
Page | 12
Part I Introduction
Page | 13
1. Introduction
1.1. Increasing Complexity of Building Design
It is arguable that complexity is one of the most challenging intellectual, scientific and technological topics of the 21st century (Wolfram, 2002). The complexity of building design nowadays can be witnessed in mixed-use superstructures towering over modern cities in the United Arab Emirates, while geometric complexity has been celebrated in many sophisticated cultural and residential buildings around the world. From an architectural point of view, it is remarkable to observe the evolution of building construction in recent years. Next to the iconic stone cathedrals and buildings of past centuries now stand prismatic and freeform steel structures equipped with building systems capable of responding to their environments in a dynamic fashion (Figure 1). Undoubtedly, computational advancements have played a big role in the transformation of the built environment, from the materialization of drawings into constructed forms as introduced in the Renaissance to the materialization of digital information (Mitchell, 2005).
The introduction of embedded building systems and sensor technologies is turning buildings from static structures into cyber physical systems that can sense and respond to climatic or temporal changes of the environment (Malkawi, 2005, Rahman, 2010). Despite the fact that information technologies have radically changed the way we design and construct our buildings, architectural design thinking has not been greatly affected. Unlike many scientific fields, such as biology and physics, where interest in computational methods and complexity theory arose due to the incapacity of existing methodologies to efficiently tackle complex problems, in architecture the first digital tools were not used much for solving design problems (Kalay, 2004) but rather for automating the production of drawings of complex shapes. With a few exceptions, in its early manifestations digitally mediated architecture focused on geometry and approached complexity in a diagrammatic instead of a scientific manner. Digital design approaches remained rooted in representation-based design paradigms instead of developing a deeper understanding of complexity and reconsidering the design process in the light of computation (Jencks, 2000). To date, each domain in AEC still provides largely independent solutions that are already defined outcomes before being passed from one discipline to another. This lack of integration between disciplines and inefficiencies in the design process have resulted in a built environment that accounts for 40% of global energy consumption and up to 30% of global greenhouse gas (GHG) emissions. Static building models which do not consider the parameter of time (a building’s life cycle) are still primarily used. Therefore, up to 46% of the buildings' energy consumption is locked in for long periods of time due to the life span of these buildings (Block, 2009).
As concerns about the environmental footprint of buildings have increased and environmental performance standards have become more stringent, architectural designers have gained the important responsibility of employing design and creativity to reduce energy use and to achieve more sustainable solutions. This is achievable by employing smart and creative design solutions that reduce building complexity. Mr. A. Pottinger, the lead structural engineer of the Buro Happold engineering firm, commenting on the successful completion of the Abu Dhabi Louvre, a challenging project in terms of both architecture and engineering, stated that “One of the absolutely overriding things we had to do was to find simplicity amongst the complexity. If we didn't do that the project wouldn't be buildable.”
Page | 14
Figure 1 Timeline illustrating the introduction and full adoption of different types of complexity in relation to applicable system types and significant advancements
1.2. The Direction of Architectural Design
Architectural design thus far has been rooted in descriptive (perspective) modelling, which in turn is then passed to engineering disciplines to conduct further analysis prior to commencing construction. Advances in digital design and fabrication have presented an exciting opportunity to merge digital and physical tools and processes for constructing non-standard building structures with better design performance (Figure 2). This opportunity has also created a demand for architects to address the complex character of design problems in a more methodological and integrated way in order to provide sustainable solutions that can cater for environmental and structural parameters as well as user preferences (Rahman, 2010). The ability to evaluate a larger number of possible design alternatives (solution space) is essential in architectural design, as has been shown by Woodbury and Burrow (Woodbury et al., 1999) and Gero and Sosa (Gero and Sosa, 2008).
To enable architects to explore larger solution spaces and generate designs that reduce the energy footprint of both the design-to-construction process and the life cycle of buildings, computational design approaches are considered crucial to successfully manage the conflicting inter-domain constraints of building design. Gero and Sosa argue that agent-based approaches can be used to increase the capacity of designers to explore a larger number of designs and also to enable them to develop complex designs by formalizing design goals into agents’ behaviors. Early research efforts emphasized developing computer aided design (CAD) tools that reduced the complexities relating to drafting and the automation of drawing production, rather than developing new design methods and tools (Scheurer, 2010). Parametric and subsequently performance-based design emerged as an integrated approach, which allows designers to consider environmental and structural parameters in the early design stage. With this has come a new maturity that promises to transcend the formal and geometric innovations that were mainly driving the architects' interest in using digital technologies and transfer the focus to how computational techniques can be used to transcribe fundamental formative processes into architectural design (Oxman, 2008).
Page | 15
Figure 2 Timeline illustrating the evolution of design tools, based on a) whether they are computer-based or computational and b) their method of integration
More recently, research in the field of architectural design and building engineering has largely focused on bridging the gap between the physical and digital realms and the integration of fabrication and material constraints in the early design stage (Oosterhuis, 2011). Despite the successful integration of parametric design models into building practice, parametric design for large-scale and complex building projects remains labor intensive and rather manual.
Unfortunately, although current design practices rely on increased computational capacity, they can be characterized as computer-based yet not computational. Recently, a number of researchers have started developing computationally-based approaches for exploring architectural forms based on the concepts of form finding and optimization (Adriaenssens et al., 2014, Block, 2009), evolutionary computation and emergent behavior (Menges, 2007), digital fabrication (Oxman, 2007), and rule-based models (Fricker et al., 2007) (see Figure 2). However, there is still a lack of integrated computational tools that would further extend the designers’ creativity by integrating form generation with design performance feedback (such as structural or daylight analysis) and physical feedback (fabrication) in the early design stage (Von Bülow, 2007). The shift towards a maturity stage of this digital era in architecture, also referred to as the 3rd digital turn by architectural theorist M. Carpo (Carpo, 2013), requires architectural researchers to develop a deeper understanding of complexity and evolutionary processes via computational means if they wish to use them wisely for architectural purposes (Oxman, 2008, Oxman, 2006).
Page | 16
This approach suggests that the future direction of architectural design should be based upon the holistic consideration of the design process as well as the comprehensive study of computational morphogenesis with regard to building complexity, inclusive of design performance, dynamic occupant behavior and construction processes. By holistic design I refer to a design approach where a design object (i.e. a building) is considered as an interconnected whole that is part of a larger world and not just a sum of its parts. In such an approach, all parts of a system are intimately interconnected and explicable only by reference to the whole.
Page | 17
1.3. Problem Statement
From the discussion above, two main issues arise:
1.
Due to the paradigm shift that information technologies have brought to the ways we conceive, design and construct buildings, and due to the introduction of sophisticated smart building systems (sensors, smart devices, etc.) that can capture dynamic changes in the environment (occupant behavior), the complexity of building design is rapidly increasing, and so is the quantity of data that architects need to take into consideration while designing a new building.
2. Although a number of sophisticated computational tools which are valuable for analytical purposes have been made available to designers, they are not as effective when considering the creative aspect of design. Therefore, there is a lack of computational tools which couple form generation with design performance in the early design stage and relate evolutionary processes with the principles of geometric forms, structure, material and constructability.
1.4. Motivation
A variety of examples can be observed in nature, where complex, large-scale functional nests are built collectively by millions of social insects, without any global coordination or blueprint, and by using the minimum resources possible. For instance, the routes termites choose to take when foraging or building their colony may appear to be complex; but on closer analysis, they can be seen as a series of simple responses to environmental conditions. The resulting complexity of a termite colony arises due to environmental parameters, not the “program” which steers the termite (Simon, 1991). As the design of cyber physical systems and buildings responding to their environment and users is further investigated, research efforts should be focused on developing computational design methods as well as on synthesizing design systems that can handle complexity while remaining intuitive and robust (Halfawy and Froese, 2005).
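The termite example can be made concrete with a minimal stigmergy sketch (a hedged illustration only; the grid size, evaporation rate and deposit amount are invented for this sketch and are not parameters of the thesis' tool). Each agent follows one trivial local rule: move to the neighboring cell with the most pheromone and deposit some more, while the environment evaporates old trails. Coherent trails nonetheless emerge without any global plan, which is exactly Simon's point that the complexity lies in the environment, not in the agent's program.

```python
import random

GRID = 20           # toroidal grid size (illustrative)
EVAPORATION = 0.95  # fraction of pheromone kept each step
DEPOSIT = 1.0       # pheromone an agent leaves per move

def step(pheromone, agents, rng):
    """One tick: each agent moves to the neighboring cell with the most
    pheromone (ties broken randomly), then deposits pheromone there."""
    for i, (x, y) in enumerate(agents):
        neighbors = [((x + dx) % GRID, (y + dy) % GRID)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        best = max(pheromone[n] for n in neighbors)
        agents[i] = rng.choice([n for n in neighbors if pheromone[n] == best])
        pheromone[agents[i]] += DEPOSIT
    for cell in pheromone:           # the environment erodes old trails
        pheromone[cell] *= EVAPORATION

rng = random.Random(42)
pheromone = {(x, y): 0.0 for x in range(GRID) for y in range(GRID)}
agents = [(rng.randrange(GRID), rng.randrange(GRID)) for _ in range(30)]
for _ in range(100):
    step(pheromone, agents, rng)

# trails emerge: pheromone concentrates on a small subset of cells
busy = sum(1 for v in pheromone.values() if v > 1.0)
print(busy, "cells carry strong trails out of", GRID * GRID)
```

No agent knows the grid, the other agents, or any goal; the trail pattern is an emergent property of deposit and evaporation alone.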
Literature suggests that such design approaches can be realized by developing methods that integrate multiple domains and that, given a set of specifications, are capable of generating design alternatives which proactively balance the requirements of the primary domain (e.g. architectural design) with the impact on the energy domain (e.g. environmental and structural engineering) and the process domain (e.g. construction). Current advances in robotic engineering suggest that the use of construction robots will become prevalent in the near future. If this is the case, complex adaptive systems using biologically inspired principles for robustness and scalability, such as termites or beavers, can serve as exemplary cases in the engineering and controlling of collectives of robots that achieve human-specified goals (Bonabeau et al., 1999, Werfel et al., 2014). Motivated by the efficiency and robustness of complex adaptive systems, in this work I focus on developing a methodology for building evolutionary, decentralized and modular design tools capable of aiding architects and providing bottom-up solutions to complex design problems. A design methodology is proposed which couples the principles of design generation with the principles of design performance by employing behavioral modelling techniques and stochastic search methods. A multi-agent systems framework for architectural engineering design is described and a prototype tool is developed and applied to typical architectural design problems which require the close collaboration of architects and civil engineers (i.e. façade design, shell design).
Page | 18
1.5. Hypothesis and Research Questions
The objective of this work is to propose and test a computational methodology which reduces design complexity by decomposing problems into elementary design agencies and by formulating behaviors that incorporate geometric variables with numerical analysis, fabrication and material parameters.
Another objective is to develop a multi-agent systems design tool that enables the automated generation of design alternatives based on the coupling of bottom-up rules that relate to design intentions and top-down rules that relate to regulations and constraints. The hypothesis of this work is that in order to manage the increasing complexity of building design, the current analogy which exists in digital design between the designer (user) and machine (digital tool) should be reversed. In this analogy, the designer acts like an apprentice using a specific language (Python, C++, Java) or interface to pose questions to the master (computer) and respectfully awaits the answer. In order to promote the designers’ creativity, computational design tools need to be conceived not as drafting aids but as the user’s/designer’s apprentice, which, when given a set of specifications, is capable of generating proposals (design alternatives) that the user/designer (master craftsman) can evaluate and critique. In such a situation, the tool is expected to build knowledge through its interaction with the user and the processing of multiple data sets. The specific research questions that this proposal considers are the following:
1. How can architects handle complexity in building design via the formalization of design thinking and by means of computable functions and the holistic consideration of building design?
2. How can the coupling of generative design approaches with analytical solvers in the early design stage enable architects to search larger solution spaces more efficiently?
3. How can designers explore emergent forms by applying stochastic search methods to architectural design problems via their methodological decomposition into autonomous agencies?
4. What types of stochastic search methods are best suited for architectural design problems?
5.
How can autonomous and adaptive models that integrate geometric properties with performance simulations and robotic construction be used for architectural form exploration?
1.6. Contribution
If the objectives as stated are fulfilled, this proposal will contribute to fundamental knowledge in the areas of agent-based modelling, building engineering and architectural design. In general, the development of this proposal aims to:
1. Extend the notion of architectural complexity based on rigorous research into complexity and introduce a computational design methodology based upon behavioral modelling that considers building design holistically.
2. Enable designers to explore larger solution spaces faster by establishing feedback loops between computational form generation and the analysis of design performance measures.
Page | 19
3. Introduce different types of agent ontologies in design in addition to swarm intelligence models. The Belief Desire Intention (BDI) agent model, for instance, is used in this work to formulate behaviors which combine realistic constraints with design intentions into generative design routines.
1.7. Organization of this Thesis
Chapter 2 contains a literature review that contextualizes the methodology used with regards to complexity theory and complex adaptive systems (CAS). The underlying assumptions of complexity theory are reviewed, and a taxonomy of complexity definitions is provided. Concepts and tools for managing complexity which are derived from CAS are correlated with contemporary digital design practices and computational design models in architecture, engineering and construction (AEC). Additionally, theory and applications of multi-agent systems (MAS) and the basic programming blocks of Distributed Artificial Intelligence (DAI) are briefly analyzed along with some examples of biologically inspired evolutionary programming approaches.
Chapter 3 presents the proposed computational design methodology based on emergent design models and describes a MAS framework for architectural design. The internal structure of the proposed agent-based methodology is analyzed in combination with the types of agents and the related design algorithms. In Chapter 4, a set of design experiments is presented, in which the developed framework is used for design generation, analysis, and the evaluation of design alternatives. In the first design experiment the implemented MAS tool is applied to the development of a digital simulation using data resulting from the physical implementation of a swarm robotic system. In the following design experiments the agents are used to represent building components and the tool is used for a) the generation and evaluation of façade designs and b) the form-finding of shell structures. The results from each of the design experiments are discussed separately in each section. Lastly, in Chapter 5 the results and findings of the experiments are summarized along with the contributions of this thesis and suggestions for future work.
Page | 20
Part II Background
Page | 21
2. Background and Related Literature
This section reviews the literature relevant to this proposal, which includes: a) established methodological models of design, b) digital design models in architecture, c) complexity theory, and d) the state of the art in multi-agent systems. The discussion begins by briefly describing methods for representing design processes computationally and specifically discusses existing digital design models in architecture. Next, there is a critical overview of complexity theory and its underlying principles, correlating them to contemporary building design.
The section ends with a review of the existing literature on MAS and a discussion of how the development of computational design frameworks using a MAS approach can help to reduce the complexity of building design and extend designers’ creativity in the future.
2.1. Design Models for Architectural Design
Although design is an activity that can be found in all aspects of human endeavor, it is a relatively recent arrival in academia; before the mid-20th century it was not understood as a process as widely as it is today (Fischer, 2014). As with complexity, there exists no unified theory of design, but there do exist multiple working models in every domain that sufficiently describe what design is. The emphasis on theoretical as opposed to methodological description of design has created a lack of clarity with respect to the methodological nature and contributions of digital design methods. The increasing use of computers has created the necessity to represent design problems to computers and has therefore forced both theoreticians and design researchers to formalize the design process and explicitly describe its theoretical aspects. The following section provides a brief overview of theoretical design models in general (2.1) and specifically in architecture (2.2).
Models of Design
Design has been modeled as a language (semiotics), as a logic system (Archer, 2006), as a problem-solving process (Newell and Simon, 1972), by procedural methods (Asimow, 1962), by reference to typologies (Moneo, 1978), and by simulation and optimization techniques. Despite the many approaches that have been developed around design, it can be generally agreed that design as an activity is a) purposeful, b) goal oriented, and c) creative. Purposeful means there is always a purpose which provides a reason for the design (i.e. design a structure that provides shelter). Without a purpose, it is impossible to establish design goals.
Beyond having a purpose, design aims to achieve goals, which involves searching for solutions that best meet the criteria described in the design goals. Last, but not least, design is also a creative activity that seeks new solutions. If creativity is lacking, then either an existing solution or no solution will be applied to the problem. Creative design is sometimes defined as the novel combination of old ideas, but creativity also means exploring not only the realm of known ideas, but also untested and unexplored realms and ideas. Literature shows that a large number of digital tools have been developed that focus on the first two characteristics of design (purposeful, goal oriented) but fewer tools have focused on the creative aspect of design (Von Bülow, 2007). In the following section a list of design models that are particularly relevant to developing computational design tools is provided, namely: a) Design as Process, b) Design as Simulation, c) Design as Optimization, and d) Design as Space State Exploration (Figure 3).
Page | 22
2.1.1.1. Design as a Process
Much effort has been applied to developing a method that describes design as a prescriptive process. By modeling design as a series of steps the designer is provided with a method which not only gives structure to the activity, but also helps to sharpen the general understanding of what design is. Herbert Simon also considered the design activity as a problem-solving process and distinguished between well-structured and ill-structured problems. He based this distinction upon whether or not the problem at stake can be solved by general problem-solving procedures (i.e. analysis). Simon claims that design problems are ill-structured because at the outset of the process the problem space is not fully specified (Simon, 1973). In fact, some parameters of the problem may only “occur” to the designer after considerable work and research.
Simon outlines a procedure for solving ill-structured problems by decomposing them into smaller, tractable problems and by following a cyclic approach in which the designer alternately solves part of the problem and, while doing so, assimilates new information about it. Specifically, in regard to architectural design, Simon states that:
…there is no definite criterion to test a proposed solution, much less a mechanize-able process to apply such criterion. Additionally, the solution space is not defined in any meaningful way, for a definition would have to encompass all kinds of structures the architect might at some point consider (i.e. a geodesic dome, a truss roof, arches ...), all considerable materials (wood, metal, concrete, carbon fiber, ...), all design processes and organizations of design processes (start with floor plans, start with list of functional needs, start with façade, ...). Such a problem definition would make the problem impossible to compute (Simon, 1973).
Figure 3 Diagrams of methodological design models central to computational design, namely: a) design as a problem solving process (diagram based on Plesk’s cyclic approach), b) design as a simulation, c) design as an optimization, and d) design as a space state exploration. The diagrams are based on definitions developed by Plesk, Cryer, Asimow, Gero and Sosa
Page | 23
Similar to Simon, M. Asimow tried to formalize design as a process based on its programmatic description, and proposed a three-step cycle which includes: a) analysis, b) synthesis, and c) evaluation (Asimow, 1962). The design process in this case can be described as a kind of computer system, as a series of interlinking cyclic processes in which the introduction of new knowledge or new events can cause regression to a previous point from any point in the process (evaluation).
These cycles repeat in every phase from designing to building a structure: a) the conceptual phase, b) the design development phase, c) the construction document phase, and d) construction (Cryer, 1994). As the project advances from concept design to construction, the introduction of new constraints and new information in every phase results in the decrease of design freedom and the increase of design knowledge in each of the cycles (Plesk and Wilson, 2001).
2.1.1.2. Design as Simulation
When design is considered as a simulation, the aspects of the problem that need to be considered are described with a certain degree of abstraction into a model. This model may be a physical model, a mathematical model or even an abstract thought model. Physical models have a long tradition in architecture for proving ideas and design concepts, unlike mathematical models, which have only recently been introduced, mainly due to the wide application of information technologies. In a simulation model it is assumed that if enough of the important parameters are incorporated in the model, the simulation will serve as a testing ground for the system, in which a large amount of information can be observed during a short period of time. This allows designers to test design ideas faster and thus facilitates the decision-making process. Digital simulation models have been used in architecture extensively either as a way to simulate form-finding and dynamic behaviors between particles (Senatore and Piker, 2015), or as a way to simulate daylight and energy consumption (Roudsari et al., 2014), pedestrian flows (Chen, 2009), or construction build-up processes (4D simulations) (Eastman et al., 2011).
2.1.1.3. Design as Optimization
Designers strive to provide solutions that best match a number of the design criteria, and therefore all design can be seen as optimization in some form or other.
For this reason, optimization methods have received much attention in many different fields that employ design. Mathematical methods of optimization include many techniques, the best known being Linear Programming (Von Bülow, 2007, Keough and Benjamin, 2010). Common to all such methods is the necessity to be able to describe the design parameters as mathematical variables. As a result, design problems are usually converted to "well structured" analysis problems for optimization by fixing the decision variables. The formulation of most optimization methods can be described by the following terms: decision variables set the context for the design, performance variables define particular solutions within a design space, and objective functions define the particular solution of interest in the design space. Thus, only objectives that can be written as functions are considered, and even those are usually simplified when expressed as objective functions (Rutten, 2014). The "optimum" that is found is therefore not actually the optimum for the real problem, but only for the limited and simplified problem. Nonetheless, optima found in this way can give satisfactory results if a sufficient number of the primary objectives are reasonably defined. Consider, for example, a structural design problem such as the following: find the correct size of a member to minimize the overall weight of a given structure. In such a problem, a single objective optimization problem can be easily formulated.
Page | 24
The decision variables in this case are the width and height (w, h) of the element, the performance variable is the overall weight (W), and the objective function is f(w, h) = W, to be minimized subject to constraints on w and h. However, in architectural design, most often the aim is to satisfy multiple design objectives, which might also be conflicting. Thus, it becomes key to form objective functions which lead to desirable design solutions.
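The single-objective member-sizing example can be written out as a short script. This is a hedged sketch only: the axial load, material density and allowable stress are invented placeholder values, and exhaustive search over a discretized grid stands in for a proper optimization routine. The decision variables are the section dimensions (w, h), the performance variable is the weight W, and the stress limit turns the ill-structured sizing question into a well-structured search.

```python
# Minimize the weight of a rectangular member under an axial load,
# subject to an allowable stress. All numbers are illustrative.
DENSITY = 500.0      # kg/m^3 (e.g. a light timber)
LENGTH = 3.0         # m, member length
FORCE = 50_000.0     # N, axial load
SIGMA_ALLOW = 10e6   # Pa, allowable stress

def weight(w, h):
    """Performance variable W as a function of the decision variables."""
    return DENSITY * w * h * LENGTH

def feasible(w, h):
    """Stress constraint: axial stress F/(w*h) must not exceed the limit."""
    return FORCE / (w * h) <= SIGMA_ALLOW

# exhaustive search over a discretized decision space (10 mm .. 300 mm)
sizes = [i / 1000 for i in range(10, 301, 5)]
W, w, h = min((weight(w, h), w, h)
              for w in sizes for h in sizes if feasible(w, h))
print(f"best section: {w*1000:.0f} x {h*1000:.0f} mm, weight {W:.1f} kg")
```

Because the objective and constraint are explicit functions, the solver reports an optimum for this simplified problem only; adding a second objective (e.g. daylight) is what turns this into the multi-objective setting discussed next.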
Recently, researchers have been focusing on multi-objective optimization routines for addressing the complexity of building design (Gerber and Lin, 2013, Keough and Benjamin, 2010).

2.1.1.4. Design as State Space Exploration

Design has also been regarded as a decision-making activity and not solely as a problem-solving process (Gero and Sosa, 2008). This approach is related to decision theory and takes into account the fact that design goals are not always defined at the initialization of the design process. In fact, in many cases determining the goals is itself part of the design process. In this approach, design is described as a "forward looking" search activity which is a) exploratory, b) constrained (computable), c) involves decision-making, and d) involves learning. Search is the common process used to inform decision-making. Decision-making implies choosing, and choices are framed by parameters that can be considered as variables. This set of variables relates to the problem decomposition and the design context. In ordering these variables, Gero (Gero, 2002) represents design as comprising three state spaces: function, a definition of an object's purpose/teleology or what it does, which can be seen as the result of behavior and is distinct from purpose (the "why it does"); behavior (the performance space), or how it does something, that is, the description of how the design behaves in a particular environment; and structure (the decision space), or "what is," which represents the physical object itself as described by material, topology, geometry and physical characteristics. Although heuristic/stochastic search methods can be used to find the values of variables in a specific state, what is also critical to design is the determination of the state space within which to search. This is called exploration of the state space and is akin to changing the problem space within which the decision-making occurs.
Learning implies the restructuring of knowledge based on the iterative cycles of the analysis-synthesis-evaluation steps (Gero, 2000, Gero, 1996, Gero, 2002). Considering the creative aspect of design as a problem with multiple state spaces allows us to formulate it as a Markov Decision Problem (MDP) (Kaelbling et al., 1998).

Figure 4 Timeline showing an evolution from reduced to complex design models in relation to the type of model (dynamic/static) and whether it considers environmental parameters or not

An MDP is defined as a tuple <S, A, P, R>, in which: S is a set of states (Si), which represent the state of the world (function, behavior, structure); A is a set of actions (Ai) available to the agent in states of the world (i.e. generate design, analyze); P is a transition function/table, where P(S'|S, A) is the probability of reaching state S' given that action A is taken in state S; and R is a reward function, where R(S) is the cost or reward of taking any action A in state S. MDPs provide a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of the decision maker. This makes MDPs a fitting framework for design problems where decision-making involves the designer and also other decision-making entities who might not be known beforehand (legislators, occupants, stakeholders, etc.). Rather than trying to find optimal solutions to a simplified problem, one might better seek satisfactory solutions to multiple sub-problems (Simon, 1996). This is the approach taken with the stochastic/heuristic techniques used in artificial intelligence (AI) methods. The key characteristic of formulating design problems as MDPs is that it allows the decomposition of the problem into sub-problems or design agencies.
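The <S, A, P, R> formalism can be made concrete with a toy example. The three abstract design states, two actions, transition probabilities and rewards below are invented purely for illustration; the sketch solves the resulting MDP by standard value iteration, not by any method claimed in the thesis.

```python
# A minimal, hypothetical illustration of the <S, A, P, R> tuple:
# three abstract design states, two actions, solved by value iteration.
# All states, probabilities and rewards are invented for demonstration.

S = ["concept", "analyzed", "resolved"]
A = ["generate", "analyze"]

# P[s][a] -> list of (next_state, probability)
P = {
    "concept":  {"generate": [("concept", 0.4), ("analyzed", 0.6)],
                 "analyze":  [("concept", 1.0)]},
    "analyzed": {"generate": [("analyzed", 0.5), ("resolved", 0.5)],
                 "analyze":  [("concept", 0.3), ("analyzed", 0.7)]},
    "resolved": {"generate": [("resolved", 1.0)],
                 "analyze":  [("resolved", 1.0)]},
}

# R[s]: reward of being in state s (negative = cost)
R = {"concept": -1.0, "analyzed": -0.5, "resolved": 1.0}

def value_iteration(gamma=0.9, tol=1e-6):
    """Iterate V(s) = R(s) + gamma * max_a sum_s' P(s'|s,a) V(s')."""
    V = {s: 0.0 for s in S}
    while True:
        V_new = {s: R[s] + gamma * max(
                    sum(p * V[s2] for s2, p in P[s][a]) for a in A)
                 for s in S}
        if max(abs(V_new[s] - V[s]) for s in S) < tol:
            return V_new
        V = V_new

V = value_iteration()
policy = {s: max(A, key=lambda a: sum(p * V[s2] for s2, p in P[s][a]))
          for s in S}
```

Under these toy numbers, value iteration ranks the "resolved" state highest and the greedy policy chooses "generate" in both unresolved states.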
Each agency is comprised of a number of autonomous rule-based programmable entities (agents), which have heuristic search objectives encoded into their structures and must choose a sequence of actions in each state accordingly. At each iteration, agents receive a utility for the whole sequence of actions they have performed. By interacting, the agents can decide the probability that an answer to a sub-problem is true or false. By doing this iteratively for every sub-problem and communicating their decisions to other agents, they can inductively arrive at a solution to the global problem. The MDP formalism assumes that in order to solve a problem, agents receive information through their sensors and decide the probability of a state S' based on the current state alone and not on the history of states the agent has passed through. Based on the selected state, the agent performs one action or a set of actions and evaluates whether it improved the world state or not.

Digital Design Models in Architectural Design

In the last 20 years, the wide application of digital design models has rapidly transformed the design and construction of our buildings from being the materialization of paper-based drawings to the materialization of digital information. The realization of buildings such as the Guggenheim Museum in Bilbao established a precedent for the bridging of design and construction, as an increasing number of projects following it have been designed, documented, fabricated and assembled with the assistance of digital means (Mitchell et al., 2007, Mitchell, 2005). This has been possible due to increasing computational capacity and advancements in the field of computer graphics, accompanied by the availability of parametric design tools and advanced manufacturing processes (Kotnik, 2010).
In more recent years, a growing interest in performance-based models and data-driven design, along with the rapid integration of building information modeling (BIM), indicates a maturing of the use of digital tools beyond formal complexity and towards more performative design solutions (Oxman, 2008, Kalay, 1999). The integration of multiple design models through collaborative BIM platforms has proved to be beneficial for reducing building costs and improving the design process, but has also highlighted the lack of existing methodologies for dealing with complexity (Kalay, 2004). Despite the great level of sophistication that has been witnessed in the development of CAD tools, the dominant mode of design remains manual even today and rarely relies on computation (Duro-Royo et al., 2015). Regardless of the sophisticated underlying infrastructure of current design tools, as long as the modelling process is done manually, we cannot regard current design tools as either computational or intelligent. Therefore, current design tools can be considered computational only in a narrow sense, that of being computer programs whose infrastructure relies on computers for the creation of digital geometric objects. Only during the last two decades have we seen an increasing number of design tools that are based upon computational design methods, and the majority of the most prominent ones deal with form-finding (Rippmann et al., 2012, Gerber and Lin, 2012, Senatore and Piker, 2015, Caldas, 2008). To provide clarity, in this thesis non-computational design models are defined as those models that, while making use of computer programs, mostly support manual modelling processes, and computational design models are defined as those programs that rely on computational procedures as the essential part of the modelling/designing process (Marincic, 2016).
It is also important to clarify that one impact of the "computerization" of architectural design processes during the first digital turn was the automation of geometrical transformation and the mechanization of producing drawings (Carpo, 2013). On the other hand, the impact of using "computational" design processes, which is currently under way, can lead to a reconsideration of architectural design as a whole, to the complete formalization of the design process into code, and to the exploration of multiple solution spaces using stochastic techniques. This distinction becomes particularly important if we take into account that the complexity of the (manual) modelling process in a digital environment increases proportionally, if not exponentially, with the size and complexity of the geometric object in question. In an algorithmic modelling process, however, this is not true. In an attempt to categorize digital design models according to the various relationships between the designer, the conceptual content, the design processes applied, and the design object itself, Oxman distinguishes between five paradigmatic classes of digital design models, namely: computer aided design models, formation models, generative models, performance models and integrated compound models. In Oxman's approach, CAD models are descriptive and isomorphic to paper-based design methods, and therefore they have offered little to design thinking. Formation models, which can be developed using parametric modelling or animation tools (i.e. Autodesk Maya, 3d Studio Max, Rhinoceros 3d, Generative Components, etc.), are considered to be topological, as geometric modelling requires the designer to follow a formalized digital design process. In formation models, the geometric properties of objects can be manipulated digitally by the designer via a high level of interaction and parametric control (see Figure 4).
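What parametric control means in practice can be sketched with a toy model, not drawn from Oxman's text: the geometry is entirely the output of a function of its parameters, so editing a parameter regenerates the object. The parameter names and values below are invented for illustration.

```python
# A toy parametric model (hypothetical parameters, for illustration
# only): the geometry is fully determined by the input parameters,
# so changing a parameter regenerates the design.

def support_positions(span_mm: int, divisions: int) -> list:
    """Return x-coordinates (mm) of evenly spaced supports along a span."""
    if divisions < 1:
        raise ValueError("divisions must be >= 1")
    step = span_mm / divisions
    return [round(i * step) for i in range(divisions + 1)]

print(support_positions(12000, 4))  # a 12 m span split into 4 bays
```

Changing `divisions` from 4 to 5 regenerates all support positions at once, which is precisely the kind of high-level parametric manipulation the formation models above afford.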
In generative models, the formation of geometry is associated with a computational mechanism (algorithm), and most of the time a fitness function controls the generative process. Genetic algorithms are a typical example of a generative design model and have been extensively used as a design driving mechanism with multiple applications (Caldas and Norford, 2002, Bukhari et al., 2010, Dimcic, 2012, Gerber et al., 2012, Malkawi et al., 2005, Pugnale and Sassone, 2007). Performance-based design models are considered as a process of formation or generation that is driven by a desired performance. This is achieved by defining a set of measures and goals that must be satisfied (Oxman, 2008). Kotnik provides a more general classification and states that digital design models can be considered using the broad sense of a computer, i.e. a machine (hardware) that manipulates data according to a set of instructions (software). In such an approach, every piece of data which is included in a design model can be coded as a natural number. A piece of software (s) that operates on a computer can thus be considered as a function (f) that operates on a subset of inputs (in) which belong to the natural numbers (N) (Kotnik, 2010). The function describes the relationship between the inputs and the set of all possible solutions/outputs (out) such that f(in) = out, with in, out ∈ N. Based on that assumption, Kotnik distinguishes between three general classes of design models, namely representational, parametric and algorithmic. His formalism encapsulates Oxman's classification: CAD and formation models fall under the representational class, while generative, performance-based and integrated compound models are all subclasses of the parametric class. The algorithmic class includes design models where the concept of computational functions is implicitly coupled with the definition of relationships between architectural elements.
Through this coupling, a computational generalization of geometry is achieved which allows for new forms of creative expression but also requires a methodological formalization of design thinking. This ties back to the general design models described in the previous section. In this thesis, we are interested in the algorithmic class. In order to better understand it, the author describes below the basic algorithmic approaches and categorizes them into three different categories, namely: feedback-based design models, within the tradition of cybernetics and systems theory (adaptive, generative, form finding); rule-based design models, within the tradition of transformational grammars and axiomatic designs (rule-based, grammar-based); and intelligence-based design models, within the tradition of artificial intelligence (emergent/cellular, logic-based). The growing interest in algorithmic models in recent years has resulted in the development of multiple paradigms and has also emphasized the lack of understanding of the underlying principles that have shaped digital design tools, such as information theory, topology, cybernetics and complexity theory, to name a few. Although reviewing these fields goes beyond the scope of this work, a review of those topics and how they relate to design complexity is provided in section 2.2. In the next sections, a short description of fundamental algorithmic models is provided.

2.1.2.1. Control/Feedback-Based Models

Control-based design models are computational models whose unified system of procedures governs the creation and manipulation of spatial forms and whose design method allows and controls this unified system as a whole. Control-based design models are an exclusively top-down modeling paradigm affirming the concepts of control automation, regulation and optimization, thus following the tradition of cybernetics.
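A minimal, non-authoritative sketch of a feedback-based model in this tradition (not code from the thesis): a hanging chain of particles, connected by stiff springs and relaxed under gravity, settles into a catenary-like funicular curve; the forces computed at each step feed back into the geometry of the next step. All constants below are arbitrary.

```python
# Illustrative dynamic relaxation: a chain of particles with pinned
# ends sags under gravity into a catenary-like shape. Parameters
# (stiffness, gravity, damping, step count) are arbitrary.

def relax_chain(n=11, span=10.0, k=100.0, gravity=0.1,
                damping=0.9, steps=4000, dt=0.05):
    """Return list of [x, y] node positions after relaxation."""
    rest = span / (n - 1) * 1.1          # slack so the chain can sag
    pos = [[span * i / (n - 1), 0.0] for i in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    for _ in range(steps):
        force = [[0.0, -gravity] for _ in range(n)]
        for i in range(n - 1):           # spring forces between neighbours
            dx = pos[i + 1][0] - pos[i][0]
            dy = pos[i + 1][1] - pos[i][1]
            length = (dx * dx + dy * dy) ** 0.5
            f = k * (length - rest) / length
            force[i][0] += f * dx; force[i][1] += f * dy
            force[i + 1][0] -= f * dx; force[i + 1][1] -= f * dy
        for i in range(1, n - 1):        # end nodes stay pinned
            for d in (0, 1):
                vel[i][d] = damping * (vel[i][d] + force[i][d] * dt)
                pos[i][d] += vel[i][d] * dt
    return pos

shape = relax_chain()
```

Inverting the relaxed curve gives a compression-only arch, the classical funicular trick; the "design" is never drawn directly but emerges from the force/geometry feedback loop.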
Based on the implementation of the feedback mechanism, we can distinguish between adaptive, parametric and generative models. Control-based parametric models were implemented in commercial software packages such as Dassault's CATIA, which has been the archetype for almost all current parametric design tools (Kalay, 2004). Currently, one of the most prominent research fields in feedback-based design models is form finding. Form finding is a design approach which is rooted in the physical modelling of structures and the long tradition of funicular shells (i.e. domes). Form finding, which is already an extension of purely analytical processes in structural design, has been used to generate geometric models that couple form configuration with forces in a closed feedback loop. In their simplest set-up, form finding methods, both computational and non-computational, generate catenary curves and funicular shapes, which are the direct outcome of the force distribution (Adriaenssens et al., 2014). However, unlike civil structures where, for instance, the design of a bridge is conditioned solely by structural performance, in architecture most design problems require the consideration of multiple parameters such as local resources, historical context, occupant behavior, environmental impact and constructability. Current form finding methods focus on structural design goals but do little to consider the combination of different design goals such as environmental performance (Kilian, 2006).

2.1.2.2. Rule-Based Models

Rule-based design models incorporate a set of rules whose application to a particular case helps to evolve the design of the object. The structure of such models involves two distinct levels and the dependence between them. The first level is the definition of the rules, and the second level is the form of their application.
Rule-based design methods include logic-based, grammatical/recursive and emergent/cellular models, and were among the first methods to be researched in computational design. Logic-based models utilize formal logical reasoning as the principal driver for design. Such models have used set theory to codify design decisions in propositional forms, which are then translated to geometry. The first prominent example of logic-based models can be found in Christopher Alexander's book, Notes on the Synthesis of Form (Alexander, 1964). Alexander developed the abstract idea of a "pattern" as a separate entity that encodes a set of relationships which are independent from all others. Following this approach, the design process can be divided into: a) defining the requirements of the design, b) identifying patterns that can address the requirements, and c) applying the patterns in ways that can lead to a design solution that satisfies the requirements. Grammatical/recursive models are another type of model, extensively researched in design by G. Stiny through the perspective of "shape grammars." In his approach, the geometric rules of an architectural style can be described to a machine, which can regenerate them by the recursive manipulation and transformation of 2D and 3D shapes (Stiny, 1980). Stiny's shape grammars are based upon Lindenmayer's research on modelling plants and Chomsky's attempt to encode the complexity of language using simple rules (Prusinkiewicz and Lindenmayer, 2012). Although shape grammars have been successfully used to generate designs based on existing typologies, such as Prairie-style villas in the manner of Frank Lloyd Wright, the approach was limited to simple examples by the combinatorial complexity involved. A more recent implementation is the sequential shape grammar approach, which was developed at ETH in Switzerland.
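The recursive rewriting that underlies Lindenmayer systems (and, by extension, the grammar models discussed above) can be sketched in a few lines. The axiom and rules below are Lindenmayer's classic algae grammar, used here purely as a demonstration; they are not a shape grammar from the architectural literature.

```python
# Minimal grammatical/recursive rewriting in the spirit of L-systems:
# every symbol is replaced in parallel by its rule each generation.

def rewrite(axiom: str, rules: dict, generations: int) -> str:
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

algae = {"A": "AB", "B": "A"}      # Lindenmayer's algae grammar
print(rewrite("A", algae, 4))      # prints ABAABABA
```

Shape grammars work analogously, except that the symbols are 2D/3D shapes and the rules are geometric transformations, which is where the combinatorial complexity noted above comes from.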
This approach has been implemented in a software package called CityEngine and has been used for the procedural modelling of architecture and cities, with applications mainly in gaming (Müller et al., 2006). Emergent design models are based upon the local interaction of simple, rule-following programming entities (automata/agents) and follow the tradition of von Neumann's research into cellular automata and A. Turing's work on morphogenesis (Marincic, 2016). The design processes in such models are divided into two levels. At the first level, a number of autonomous and modular code blocks called agents are programmed to interact and follow a set of local behaviors. At the second level, the accumulated behavior of the agents is considered within a specific environment, and the consideration of all interactions globally leads to emergent phenomena. Such emergent phenomena are usually rendered into geometry as points, trajectories or three-dimensional (3D) solid objects. A prominent example of emergence-based modelling is the implementation of the swarm behavior model by C. Reynolds (Popov, 2011). Additionally, J. Frazer used bottom-up concepts to define a computational model based upon emergence and evolutionary programming, which he called "evolutionary architecture" (Frazer, 1995). Following von Neumann's legacy on cellular automata (CA), Frazer's approach investigated morphogenesis in the natural world and used it as an abstraction for developing fundamental form-generating processes in architecture. However, Frazer's "Evolutionary Architecture" did not manage to efficiently describe how such morphogenetic natural processes can be transcribed and appropriated in order to develop emergent architectural design models. 2.1.2.3.
Artificial Intelligence (AI) Based Models

Models inspired by AI were based on the idea of creating design systems which can develop some sort of "intelligence." The concept of developing computing machines that can be programmed to think on their own was established by A. Turing in his paper "Computing Machinery and Intelligence." M. Minsky, who built one of the first artificial neural networks, defined AI as "the science of making machines do things that would require intelligence if done by man" (Minsky, 1961). Greatly inspired by these advancements, Negroponte believed that the formalization of communication between humans and computers could transform the design process into one where intelligent machines (digital design tools) learn how to adapt to the designer and his style, and at the same time learn some objective truths via induction (Negroponte, 1970). On a similar but more practical path, Sutherland's Sketchpad, the first digital design tool, was quite progressive in that it had embedded in it an elementary notion of intelligence, which had been totally missing in design tools up until that time. Sketchpad allowed the designer to roughly draw a circle by adding points with a light pen, and the software would recognize the designer's intention and transform it into an exact circle. Beyond the work of the researchers mentioned above, who set the foundations and envisioned how AI-based models could be applied in architectural design, there was a gap of almost 40 years during which research efforts stagnated due to the fact that AI-based models had failed to deliver significant results. Only after the 2000s has a resurgence of interest in AI models been observed, due to the wide availability of data and increased computational capacity (Hebron, 2017). Case Based Reasoning (CBR), for instance, is an approach based on the assumption that in order to solve a certain problem one should refer to similar problems that have already been solved.
This requires the existence of a large database of existing cases, which was not possible during the early days of computers. Nowadays, however, with developments in the field of machine learning and the availability of building information models (BIM), the main hindrance of CBR is no longer a problem, as algorithms can be trained to perform feature searches. AI approaches that are similar to CBR, such as distributed constrained reasoning, have been applied in game theory and more recently in office building design (Marcolino, 2013).

Summary

In this proposal I am investigating rule-based models, particularly emergent models, and combining them with AI-inspired models and principles. The brief description of design models above shows that although logic- and grammar-based models were the first types to be developed, they have not been successfully integrated into design tools. On the contrary, models based on control/feedback have received a lot of attention, and a number of CAD tools have been developed based on these approaches. Emergent design models have only recently been studied in architecture, and therefore their use is still limited. However, emergent design approaches have a longer tradition in engineering, where their modularity, robustness and distributed character have made them appropriate for solving complex problems, especially nowadays when there is an abundance of computing power. Moreover, emergent design models have been shown to be appropriate for developing systems that account for dynamic changes in their environment and can accommodate design complexity (Kolarevic, 2016). In order to understand how emergent design models can be used in the field of architectural design, we need a deeper understanding of complexity theory and complex adaptive systems as well as their underlying assumptions.
In the next section, I will attempt to define the term design complexity by bringing together definitions from domains that range from architecture, engineering and construction (AEC) to information theory and general systems theory. Distinguishing between different types of complexity is considered essential, due to the fact that complexity in architectural design has mainly been related to geometric complexity and has not been considered from a complexity theory perspective.

Figure 5 Timeline illustrating the increasing complexity in building structures in terms of the design approach and building paradigm used

2.2. A Brief Historical Review of Complexity and its Relationship to Architectural Design

It is arguable that complexity is one of the most challenging intellectual, scientific and technological topics of the 21st century (Wolfram, 2002). The complexity of building design nowadays can be witnessed in mixed-use superstructures towering above the modern cities in the United Arab Emirates, while geometric complexity has been celebrated in many sophisticated cultural and residential buildings around the world. From an architectural point of view, it is remarkable to observe the evolution of building construction in recent years. Next to iconic stone cathedrals and buildings of past centuries now stand prismatic and freeform steel structures equipped with building systems capable of responding to their environment in a dynamic fashion (Figure 5). The introduction of embedded building systems and sensor technologies is turning buildings from static structures into complex cyber-physical systems that can sense and respond to climatic or temporal changes in the environment (Malkawi, 2005, Rahman, 2010). Our environment has become increasingly complex, and therefore complexity theory and computation have been radically influencing research in nearly all disciplines in both the sciences and the humanities (Bundy, 2007).
A good example is the paradigmatic shift in sciences such as physics and biology, which resulted from the study of complexity through the adoption of computers as primary tools for simulating and modelling natural processes. Reductionist models have been successively modified or replaced as the predominant paradigm of research in recent years. To further clarify this, the mechanistic worldview of nature, which relies on the continuous top-down reduction of a whole into its parts, is being replaced by the correlation of local interactions and the identification of patterns that can bring the parts into equilibrium as an emergent property of the overall system (Kauffman, 1993). For example, scientists and biologists have closely investigated complex adaptive systems (CAS) such as termite colonies, and by tracking how termites forage and collectively build their habitats they have developed mathematical models in order to understand how the complex geometry of the habitat (termite mounds) is related to the environmental conditions, the termites' method of locomotion and the locally available materials (Perna and Theraulaz, 2017).

Figure 6 Diagram illustrating the research methodology implemented for this literature survey

Unlike the fields of biology and physics, architectural design thinking has not been greatly affected by computational thinking, even though information technologies have radically changed the way we design and construct buildings. With a few exceptions, in its early manifestations digitally mediated architecture focused on geometry and approached complexity in a diagrammatic rather than a scientific manner. Digital design approaches remained rooted in representation-based design paradigms instead of developing a deeper understanding of complexity and reconsidering the design process in the light of computation (Jencks, 2000).
To approach a topic as broad as complexity theory and draw conclusions which can be useful to the design computing community, the methodology illustrated in Figure 6 was implemented. Our review includes research papers from scientific fields that go beyond Architecture, Engineering and Construction and range from Biology and Physics to Complexity theory. The bibliographic research (ca. 250 publications) was organized into two levels. On a "local" level, the literature within the architectural computer aided design communities (Cumincad, CAADria, eCAADe, ACADia) was queried based on keywords relating to complexity (i.e. complexity theory, design complexity, architectural complexity, etc.), and the main references and key terms were extracted. On a "global" level, the largest available corpus of digitized books (ca. 10,000 publications dating from 1910-2010) was queried using the N-Gram Viewer for different types of complexity (i.e. architectural complexity) and related key terms (i.e. feedback, emergence, self-organization) which were extracted from the local bibliographic search. The N-Gram Viewer is an online graphing tool that implements a probabilistic Markov model for predicting combinations of characters in a database (a collection of books) given an input sequence of characters, and charts annual counts of words and their possible combinations accordingly (i.e. database = Google Books, input sequence = design complexity, n-grams: ...design, complexity, design complexity...) (Weiss, 2015).

Figure 7 Number of publications including the term complexity in architecture and related domains reviewed by the authors, based on a library built from querying databases of architectural design computing communities (250 publications)

This "global" search was used as a form of validation to highlight how the types of complexity and related terms extracted from publications in the architectural communities appear in the global literature.
The results of the author's local analysis of the literature are illustrated in Figure 7, in which the appearance of the word complexity in publications from different disciplines during the past 100 years is plotted. From the graph, the reader can clearly observe a constant stream of publications relating to complexity from 1935-2010 in a variety of scientific fields, including mathematics, physics and biology as well as cybernetics, information theory and computer science. In the architectural literature, apart from Venturi's famous book "Complexity and Contradiction," which addresses complexity as a reaction to the uniformity and reductionism imposed by modernism (Venturi, 1977), there were very few publications dealing with the topic in a rigorous and scientific manner. However, in the early 1990s, a significant increase in the number of publications dealing with complexity in architecture and engineering occurred, as shown in Figure 7.

Figure 8 Plots of the appearance of key terms relating to complexity (above) and different types of complexity (below) as n-grams in the Google Books library (25,000,000 publications)

Our review suggests that in architecture this was a consequence of the wide application of digital design tools, while in engineering and science it was a consequence of the first validated results coming out of the Santa Fe Institute (SFI). The SFI was founded in 1984 and was the first institution dedicated to providing a common ground for researching complexity theory. It demonstrated how computational methods and complexity theory can be successfully applied across disciplines to solve real-world problems (Gell-Mann, 1992). In Figure 8, the key terms which relate to complexity, as they appear in the global literature, are extracted and plotted.
The graph shows there has been a steady increase in the use of these terms since the 1940s; before that date, most of the terms, except for structural complexity and adaptation, were almost absent from the literature. In the following sections, a brief historical overview of the evolution of the term is provided and the core underlying assumptions of complexity are described in order to better understand it. A taxonomy of different types of complexity is then devised, and measures developed to manage it within the contemporary architectural context are described.

Figure 9 Diagram illustrating the main characteristics of complex systems with regard to building design

Theoretical Framework for Approaching Design Complexity

There have been terms for complexity in everyday language since antiquity; however, the idea of treating it as a coherent scientific concept is quite new (Wolfram, 2002). Nonetheless, in the late 19th century, scientific progress supported by the technological advancements brought about by the industrial revolution questioned the linearity and reductionism of the Newtonian paradigm that existed in traditional sciences such as mathematics, biology and physics (Holland, 1992a, Wolfram, 2002). The establishment of new theories in the 20th century provided researchers with new tools for studying how living organisms evolve (i.e. molecular biology) and how complex adaptive systems behave (i.e. a beehive), and put complexity on the scientific landscape (Weaver, 1948). In the 1930s, Alan Turing was the first to associate complexity with the amount of information needed to describe a process, thereby providing a new perspective on complexity (Turing, 1936). This led Shannon to formulate Information Theory in the 1940s, associating the amount of information exchanged between the feedback mechanisms of different systems with the accomplishment of a given task (Shannon, 1948).
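As an illustrative aside (not a measure used in this thesis), Shannon's entropy makes the link between information and complexity concrete: it quantifies, in bits per symbol, the information content of a message.

```python
# Shannon entropy H = -sum(p_i * log2(p_i)) over symbol frequencies,
# shown here as a toy information/complexity measure on strings.
from math import log2
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Return the entropy of a string in bits per symbol."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(shannon_entropy("aaaa"))   # pure repetition carries no information
print(shannon_entropy("abab"))   # two equiprobable symbols: 1 bit/symbol
```

A highly ordered message scores near zero while a varied one scores higher, which is one formal sense in which "more information needed to describe" means "more complex."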
In 1950, Bertalanffy introduced General Systems Theory (GST), which dealt with systems holistically and considered their complexity in relation to the number of their parts and their relationships (Bertalanffy, 1950). John von Neumann mathematically described the logic and structure of automata and considered communication systems as stochastic processes for solving complex problems (Von Neumann, 1951). In the late 1940s, Wiener introduced Cybernetics and focused on analyzing the complex behaviors of systems that operate across multiple domains such as biology, physics and architecture (Wiener, 1961) (Figure 10). From the 1970s onwards, complexity theory started to formalize as a separate discipline due to the incapacity of existing models to explain how biological organisms and complex adaptive systems function (Gell-Mann, 1992). In more recent years, the emerging fields of software engineering and systems management brought about an interest in defining different types and measures of complexity (Bennett, 1995, Feldman and Crutchfield, 1998) relating to the amount of computing resources and the number of steps needed to perform computational tasks.
Figure 10 The term complexity disassembled into different levels based on where it applies (2nd level), where it arises (3rd and 4th levels), and key properties in addressing and managing it (5th level)
The multiplicity of definitions is an impediment to developing a clear understanding of complexity and indicates the lack of a unifying framework before the founding of the SFI (Figure 7). In fact, many of the complexity definitions reviewed here represent variations of a few underlying schemes (Crutchfield, 1994, Feldman and Crutchfield, 1998). A historical analog to the problem of defining and measuring complexity is the problem of describing electromagnetism before Maxwell's equations.
In the context of electromagnetism, factors such as electric and magnetic forces that arose in different experimental contexts were originally considered fundamentally different (Lloyd, 2001). It is now understood that electricity and magnetism are in fact closely related aspects of the same fundamental quantity, the electromagnetic field. Similarly, researchers in architecture, biology, computer science and engineering have been faced with issues of complexity but have naturally considered them within the context of their own disciplines. To date, the most comprehensive body of work on complexity stems from the research at the Santa Fe Institute (Crutchfield, 1994). Since its founding, the SFI has laid the foundations for most topics relating to the study of complexity theory, such as evolutionary computation and agent-based modeling, to name just a few (Gell-Mann, 1995, Holland, 1992a, Kauffman, 1991). Our survey indicates that the existence of a common body of research on complexity, such as the SFI's, has fostered research in different fields as it offers a common point of reference; therefore, more research efforts should be focused on the development of a more unified framework for understanding complexity.
Sources of Complexity
The main source of complexity is undoubtedly nature, which produces complex structures even in simple situations and can obey simple laws even in complex systems (Goldenfeld and Kadanoff, 1999). Complexity arises whenever one or more of the following six attributes are found in a system: 1) the existence of many parts, relationships, and/or degrees of freedom, 2) multiple states/communication, 3) broken symmetry (differential growth), 4) emergent properties, 5) non-linearity, and 6) a lack of robustness (Yates, 1978).
Consequently, the number of components in a building system, the tight coupling of all connected elements on multiple levels (social, structural, functional, geometrical) and the establishment of specific hierarchies among the elements significantly increase the complexity of such a system. Living systems such as organisms, communities and coevolving ecosystems are the paramount examples of organized complexity (Holland, 1992b). For example, the genomic system of a higher metazoan cell encodes on the order of 10,000 to 100,000 structural and regulatory genes whose joint, orchestrated activity constitutes the developmental program underlying the ontogeny of a fertilized egg (Kauffman, 1993). However, apart from examples in nature and human life (e.g., the behavioral, social and environmental sciences), instances of systems with characteristics of organized complexity are also abundant in applied fields such as architecture and engineering (Klir, 1985). Jane Jacobs states that an essential quality shared by all living cities is a high degree of organized complexity (Jacobs, 1961), while Gordon Pask considers buildings not as "machines for living" but as complex environments within which the inhabitants cooperate and in which they perform their mental processes. Pask thus considers architects as "system designers," and was one of the first researchers to identify the demand for systems-oriented thinking in order to respond to the complex nature of architectural design. It is important to point out that there is another level of complexity within systems such as buildings and cities which goes beyond analyzing and understanding how they function, and which relates to the complexity of creating something that does not exist. A number of researchers have recently adopted a more systemic approach and suggest that living cities and inhabited buildings should be considered as complex holistic systems (Salingaros, 2000).
In Figure 9, a building is represented as a complex system in which tightly interacting subunits are composed and assembled on many different levels of scale, with hierarchies that transcend socio-economic and cultural relationships down to geometrical forms and the basic structure of materials.
Underlying Assumptions of Complexity
To be able to study design complexity beyond the context of architectural design, the association of complexity theory with a number of underlying assumptions that are not considered within the classic scientific paradigm has to be taken into account (Crutchfield, 1994, Dent, 1999). Classic science is based upon the assumptions that: a) an entity can be divided into component parts and a cumulative explanation of the parts and their relations can fully explain the entity (reductionism); b) phenomena can be studied objectively (objectivity), which means that if different observers look at the same phenomena in the same way they will create similar descriptions; and finally, c) there is linear causality between phenomena, which means a cause leads to one or multiple effects in a linear fashion from the initiation to the finalization of a process (Dent, 1999). The seminal work of Alan Turing and J. von Neumann laid the foundation of complexity theory by relating complexity to the bulk of information exchange, which was defined as the length of the shortest algorithmic description for executing a given task (Von Neumann, 1951, Turing, 1936). Shannon formulated a general theory of communication relating the amount of information exchanged between the feedback mechanisms of different systems to the accomplishment of a given task (Shannon, 1948) and is considered to be the father of Information Theory (IT). Along this path, both N. Wiener and R.
Fisher viewed communication systems as stochastic or random processes (Wiener, 1961, Fisher, 1956) and helped define IT mathematically by introducing the concepts of disorder and entropy from thermodynamics. Stochastic processes have since been central in modelling and solving complex problems with unknown structures and boundaries and are therefore of great interest for design exploration and evolutionary computation (Oxman, 2008). General Systems Theory (GST) acknowledges the similarity of principles which apply to systems regardless of the nature of their parts or the relations and "forces" between them (Von Bertalanffy, 1973). Following this principle, gas particles in a container are a clear example of a physical system, while self-organized assemblies of organisms such as a beehive, an anthill or a human community can be considered typical examples of a complex adaptive system.
Figure 11 A taxonomy of different complexity types based on whether they regard complexity as a relative or as an absolute quantity. The taxonomy is built upon definitions from the fields of general systems theory, cybernetics, information theory, computer science, complexity theory and the architecture, engineering and construction literature.
GST defines a system as some circumscribed portion of the world that can be recognized as "itself" in spite of the fact that its constituent parts are subject to perpetual change (Rapoport, 1986). Different systems can be characterized based on four fundamental components, namely: structure, behavior, communication and hierarchy (levels of organization) (Gerard, 1958). Wiener focused on the relationships among a system's components and the manipulation of the hierarchies that exist within them rather than on analyzing each of the components in isolation (Wiener, 1961). Notions such as feedback and control were more central to the discipline than any law of traditional physics or mathematics.
By embracing nonlinearity via circular causality (feedback) and by introducing concepts such as "forward looking search" in system design, cybernetics contributed to the holistic understanding of complex natural phenomena (Holland, 1992a). Cybernetics offered a new framework within which all individual systems may be ordered, related, and understood based on concepts such as behavior, feedback and hierarchy, and consequently its contribution to the field of complexity has been tremendous (Heylighen and Joslyn, 2001). These "younger" scientific disciplines, which mainly appeared in the second half of the 20th century and became the cornerstones of complexity theory, introduced an alternative paradigm to that of classic science, one that is non-deterministic and non-linear (Fischer, 2014). In Figure 10, the term complexity is decomposed into multiple levels based on this worldview. Unlike classical scientific approaches, this new worldview is based on three basic assumptions. First, an entity can be best understood by considering it in its entirety (holism), and it has characteristics that belong to the system as a whole and not to any of its parts individually. Second, the observer is not independent of the phenomena, and the observer's experiences add to the perceived reality (subjectivity). Lastly, there exists circular causality (feedback), which means that cause and effect in different phenomena are not always linked linearly and that there is a dynamic (non-linear) exchange between action and experience (Maturana and Varela, 1987).
Defining Complexity
Based on the set of assumptions described in the previous section, Weaver identifies three types of complexity: organized simplicity, disorganized complexity and organized complexity (Weaver, 1948). In Figure 11, a taxonomy of the different classes and subclasses of complexity that have been surveyed in the literature is provided.
Organized simplicity applies mostly to "designed physical systems" such as the ones engineers were modelling in the 19th century (i.e., the mechanical loom). Disorganized complexity applies to both physical and artificial systems whose behavior is almost impossible to predict (i.e., the motions of a million particles in a gas container) (Klir, 1985). Organized complexity is encountered in complex adaptive systems (i.e., a beehive) and in designed abstract systems (i.e., a building) and is therefore of interest to designers, architects and engineers. By surveying the scientific and architectural literature, it is possible to distinguish four main types of organized complexity which are typical in such systems, namely: structural (also organizational), probabilistic (also deterministic), algorithmic, and computational (Schuh and Eversheim, 2004, Suh, 2005b, Suh, 2005a).
2.2.4.1. Complexity as an Absolute Quantity
In biology, a living organism can be classified as structurally complex because it has many different working parts, each formed by variations in the implementation of the same genetic coding (Goldenfeld and Kadanoff, 1999). If we consider an organism as a system, probabilistic complexity is the sum of the inter-relationships, inter-actions and inter-connectivity of parts within the system and between the system and its environment (Wrona, 2001). Based on the definition of a Turing machine, Kolmogorov defined algorithmic complexity as the length of the description provided to a computer system in order to perform and complete a task (Kolmogorov, 1965). This highly compressed description of the regularities in the observed system, also called a "schema," can be used to define the complexity of a system or artificial intelligence computing machine (Minsky, 1961).
Algorithmic complexity is also called descriptive, or Kolmogorov, complexity in the literature, depending on the scientific community, and is defined as finding the universally shortest description of an object or process (Chaitin, 1990). If we consider a computer with particular hardware and software specifications, then algorithmic complexity is defined as the length of the shortest program that describes all the necessary steps for performing a process, e.g., printing a string. Algorithmic complexity, in many cases, fails to meet our intuitive sense of what is complex and what is not. For instance, if we compare Aristotle's works to an equally long passage written by the proverbial monkeys, the latter is likely to be more random and therefore have much greater complexity. Bennett introduced logical depth as a way to extend algorithmic complexity; it averages the number of computational steps over the relevant programs using a natural weighting procedure that heavily favors shorter programs (Bennett, 1995, Lloyd and Pagels, 1988). Suppose you want to do a task of trivial algorithmic complexity, such as printing a message consisting only of 0s; then the depth is very small. If, instead, the example above with the random passage from the monkeys is considered, the algorithmic complexity is very high, but the depth is again low, since the shortest program is simply Print followed by the message string itself. In the field of mathematical problem solving, computational complexity is defined as the difficulty of executing a task in terms of computational resources (Cover and Thomas, 2012). In computer science, computational complexity is the amount of computational effort that goes into solving a decision problem starting from a given problem formulation (Traub et al., 1983).
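The earlier intuition, that a regular message admits a short description while a random "monkey-typed" passage does not, can be illustrated with a general-purpose compressor as a crude but computable stand-in for Kolmogorov complexity (which is itself uncomputable). The sketch below is illustrative only and is not part of the methodology developed in this dissertation:

```python
import random
import zlib

def description_length(data: bytes) -> int:
    """Length of the zlib-compressed data: an upper bound on, and crude
    proxy for, the length of the shortest program that prints the data."""
    return len(zlib.compress(data, level=9))

# A message of trivial algorithmic complexity: 10,000 zeros.
regular = b"0" * 10_000

# A "monkey-typed" passage: pseudo-random bytes of the same length.
random.seed(0)
monkeys = bytes(random.randrange(256) for _ in range(10_000))

# The regular message compresses to a tiny description ("print 10,000 zeros"),
# while the random passage barely compresses at all.
print(description_length(regular) < 100)   # True
print(description_length(monkeys) > 9_000) # True
```

The exact byte counts depend on the compressor, but the ordering (regular message far shorter than random passage) mirrors the algorithmic-complexity argument in the text.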
Within this classification, nondeterministic polynomial time (NP) is one of the most fundamental complexity classes and is defined as the set of all decision problems for which the instances where the answer is "yes" have efficiently verifiable proofs of the fact that the answer is indeed "yes" (Horgan, 1995, Barton and Ristad, 1987). In other words, computational complexity describes how the time required to solve a problem using a currently known algorithm increases as the size of the problem increases. Depending on that relationship, problems are classified as Polynomial Time (P), Non-Deterministic Polynomial Time (NP), NP-Complete, or NP-Hard, which describes whether a problem can be solved and how quickly. For NP-complete problems, for instance, although a solution can be verified as correct, there is no known way to solve the problem efficiently (Cobham, 1965).
2.2.4.2. Complexity as a Relative Quantity
Mitchell defined architectural complexity in a digital context as the ratio of added design content to added construction content (Mitchell, 2005). In Mitchell's definition, design content is the joint product of the information already encoded in a computer-aided design system and the information added by the designer in response to the conditions and requirements of the context at hand. The construction content of a building is defined by Mitchell as the length of the sequence that starts with the fabrication description of a component and ends with the assembly of the whole building (Mitchell, 1990). Per the above, Mitchell's definitions overlap with that of algorithmic complexity. Design content refers to the length of the description necessary for describing to a computer system a set of instructions to create a 3D geometry. Construction content refers to the length of the description necessary to generate toolpaths for the fabrication and onsite assembly of the design content.
In Mitchell's definition, the designer, by operating a CAD system, handles the complexity of defining the architectural shape, and therefore his definition does not appropriately capture the computational complexity of creating "a design" (i.e., decision-making during the design process). In engineering, Suh introduced axiomatic design and divided complexity into two domains, namely the functional and the physical. The functional domain includes a set of constraints, attributes and desires coming from the user as well as a set of functional requirements that a design object needs to fulfill. The physical domain includes a set of design parameters and a set of fabrication and construction processes (Suh, 2005b, Suh, 2005a, Suh, 1990). In the physical domain, the complexity of an object is related to the coupling of design parameters and the available construction processes and therefore can be described as an absolute quantity. Within the functional domain, complexity is regarded as a measure of uncertainty in achieving a set of goals defined by a set of functional requirements, which are coupled with a set of design parameters. According to Suh, a design is considered complex when its probability of success is low: that is, when the information content required to satisfy a number of functional requirements by a number of design parameters is high. With this definition of engineering complexity, Suh provided a tool to view the complexity of designed and engineered systems from a scientific rather than a purely empirical standpoint, and aimed to create a higher level of abstraction in order to enable designers to synthesize and operate complex systems without making them overly complex. Lastly, in the field of construction, complexity is defined as a function of the size and uncertainty of the project on the one hand and the combination of organizational and technological complexity on the other (Baccarini, 1996).
These two types of complexity are classified in terms of differentiation and interdependency: organizational complexity by differentiation refers to the number and diversity of parts involved in a construction process, while organizational complexity by interdependency refers to the degree of interaction between a given project's elements (Morris and Hough, 1987). Technological complexity by differentiation refers to the range of construction tasks, while technological complexity by interdependency refers to the relationships between a network of tasks, teams, technologies and construction activities.
Measuring Complexity
As can be observed in the sections above, contemporary researchers in information theory, biology, engineering and computer science have developed different definitions and measures of complexity, but there also seems to be overlap, as they were asking the same questions about complexity within their own disciplines. For an extensive analysis of complexity measures, see (Levin, 1976, Gell-Mann and Lloyd, 1996). By reviewing the literature, we can conclude that the questions that appear most frequently across disciplines for quantifying the complexity of an object, an organism, a problem, a process, or even an investment are the following:
1. How hard is it to describe?
2. How hard is it to create?
3. What is its degree of organization?
The difficulty of description (i.e., logical depth) can be measured in bits (i.e., effective complexity), while the difficulty of creation (i.e., design content) can be measured in time and energy (i.e., entropy). Lastly, the difficulty of organization can be subdivided into two groups: one which measures the difficulty of describing an organizational structure, and another which relates to the amount of information shared between the parts of a system as a result of its structure.
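Two of the information-theoretic measures mentioned above can be made concrete: Shannon entropy measures the difficulty of description in bits, and mutual information quantifies the amount of information shared between the parts of a system. The following is a minimal illustrative sketch (all function names are hypothetical and chosen for clarity, not drawn from any cited work):

```python
from collections import Counter
from math import log2

def shannon_entropy(seq) -> float:
    """H(X) in bits per symbol: how hard the source is to describe."""
    n = len(seq)
    return 0.0 - sum((c / n) * log2(c / n) for c in Counter(seq).values())

def mutual_information(xs, ys) -> float:
    """I(X;Y) = H(X) + H(Y) - H(X,Y): the information shared between
    two parts of a system, in bits per symbol."""
    joint = list(zip(xs, ys))
    return shannon_entropy(xs) + shannon_entropy(ys) - shannon_entropy(joint)

print(shannon_entropy("aaaaaaaa"))                 # 0.0: a fully ordered part
print(shannon_entropy("abababab"))                 # 1.0: one bit per symbol
print(mutual_information("abababab", "abababab"))  # 1.0: identical parts share everything
print(mutual_information("abababab", "aaaaaaaa"))  # 0.0: a constant part shares nothing
```

The last two lines mirror the second subgroup of organizational difficulty described in the text: information shared between parts as a result of structure.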
Decomposing Complexity in the AEC
Admittedly, the design, construction and management of a building is a challenging problem involving multiple disciplines, and it is therefore hard to create an absolute definition, as well as a measure, of complexity. Following W. Mitchell's and N. Suh's definitions, design complexity in the AEC will be examined under the scope of two domains. The first is the virtual/functional domain, which is directly related to two levels of complexity: the complexity of design problems and the complexity of design processes. The second is the real/physical domain, which is directly related to construction (fabrication and assembly) and building systems integration (BSI). We will consider the design-to-construction process holistically and discuss how complexity arises in these subtopics while examining the possible ways it could be addressed. In doing so, the researcher's aim is to clarify architectural complexity and to translate achievements from other fields for design purposes.
2.2.6.1. Complexity of Design Problem Representation
In design, as well as in science, when given a specific problem one has to deal with many interconnected variables, often derived from functional requirements (Rittel and Webber, 1973). In contrast to science, however, in the design world problems are "wicked," which means there exists no clear formulation that contains all the information the problem-solving mechanism needs for understanding and solving the problem (Rittel and Webber, 1973). The lack of clarity in many of these parts increases the complexity of this kind of problem. Through the act of designing, architects and designers face two different aspects of complexity (Glanville, 2007, Glanville, 2001). One aspect relates to the lack of complete information about the design problem, which makes the formulation of a universal design solution difficult (Suh, 2005a).
The other aspect relates to the fact that the target is to create something new, which means that the solution is not specified. The paradox is that if a new and innovative design procedure can be specified, how can the resulting outcomes be innovative? Architectural design has a long history of addressing complex programmatic requirements through a series of steps without a specific design target (Terzidis, 2006). Unlike other fields such as engineering, where the target is to solve a particular problem in the best possible way, architectural design problems, because of this novelty aspect, are open-ended, in a state of flux, uncertain and therefore ill-structured (Rittel and Webber, 1973). For instance, the task of designing a house leans towards the side of ill-structured problems; the amount of uncertainty involved makes the specification of the problem hard and thus the solution becomes complex (Simon, 1977). Simon supported the idea that the degree of complexity of any given problem critically depends on the description of the problem. Holland described optimization problems in domains as broad and diverse as ecology, evolution, psychology, economic planning and artificial intelligence, and by abstracting from the specific field he examined commonalities relating only to the complexity and uncertainty of such problems (Frazer, 1995, Holland, 1992b). Although designing a house is not an optimization problem, the design methodology required to approach such a problem using digital means can share common features of adaptation and self-organization with an optimization problem in biology, such as the construction of ant hills (Theraulaz and Bonabeau, 1995). 2.2.6.2.
Complexity in the Design Process
Although there is no well-formulated consensus model of the design process in architecture, a typical model has emerged with the following features: a) the assumption that most design problems are by definition ill-defined (wicked) problems; b) the recognition of the importance of pre-structures, presuppositions or proto-models as the origins of solution concepts; c) the emphasis on a conjecture-analysis cycle in which the designer and the other participants refine their understanding of both the solution and the problem in parallel; and finally d) the display of essentially spiral and non-linear characteristics (Cross and Roozenburg, 1992). Despite the fact that the use of computational tools offers an opportunity to formalize the design process, there are no formal architectural design methods that follow the above model in a systematic way. The design process can be considered as one in which the architect navigates through an ill-defined problem domain, employs various strategies to elaborate the problem description, iteratively generates and evaluates design alternatives, and, after a number of iterations (i.e., when given a time constraint), proposes a solution (Gero, 1996). In computational terms, the design process can be described as a purposeful (not random), constrained, decision-making, exploratory and learning activity. Decision-making implies a set of variables that relate to the problem definition and context. Search is the common process used in decision-making. Exploration, in this case, is akin to changing the problem space within which the decision-making occurs. Learning implies the restructuring of knowledge based on the pre-supposition-conjecture-analysis-evaluation cycle (Gero, 2000).
The ill-structured nature of design problems, the existence of changing contextual factors and the engagement of the human factor do not allow a clear definition of the solution space to be explored, and therefore increase the complexity of the design process. Non-linearity and the number of interconnected design parameters between the conjecture-analysis cycles also increase the complexity. In an attempt to improve the latter, both research and professional practice have focused on automating traditional manual methods of production using computer-aided design and algorithmic design tools (Gero, 1996, Scheurer, 2007). Current parametric design systems have facilitated the design and management of non-standard geometries, and at first sight these systems seem to reduce the complexities of the design process, at least in terms of algorithmic complexity. This is easily measured if we consider that the printout of the code for a parametric model, together with a table of all the parameter sets, is much shorter than all the workshop drawings (Scheurer, 2010). However, complexities relating to the description of the problem and the definition of efficient design strategies remain largely unresolved. For instance, in biological systems the blueprint of an organism, that is its genetic code, is considered a set of instructions that depends on a particular environmental context for its interpretation and manifestation and is subject to evolution and adaptation. In architecture, digital design tools were developed to streamline the production of the blueprints of buildings, and thus focused less on formalizing the encoding process during which the designer interacts with the computer in order to manifest his/her ideas.
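Scheurer's compression argument can be made concrete with a toy parametric model: a short generating program plus a small parameter table stands in for the explicit description of every building element. The sketch below is purely illustrative; all names and numbers are hypothetical and do not come from any cited project:

```python
from dataclasses import dataclass

@dataclass
class Panel:
    width: float    # m
    height: float   # m
    rotation: float # degrees, varies per row

def generate_facade(rows: int, cols: int, w: float, h: float, twist: float):
    """The parametric 'printout': rows * cols panel descriptions are
    recovered from only five parameters, which is why the code plus its
    parameter table is far shorter than the set of workshop drawings
    enumerating every panel individually."""
    return [Panel(w, h, r * twist) for r in range(rows) for c in range(cols)]

# One row of a hypothetical parameter table:
panels = generate_facade(rows=20, cols=40, w=1.2, h=2.4, twist=1.5)
print(len(panels))          # 800 panel descriptions from 5 parameters
print(panels[-1].rotation)  # 28.5 (the last row is twisted 19 * 1.5 degrees)
```

In terms of algorithmic complexity, the description length of the facade is the length of this program plus its parameter table, not the 800 individual panel drawings it generates.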
Although the architectural design process has been computer-based for more than 20 years, only recently has rigorous research leaned towards the adaptation of computational design techniques for design exploration (Von Bülow, 2007). In order to leverage the power of computation, more emphasis should be put on researching how design abstractions can be formally described to computers so that, as in biology, evolutionary and learning mechanisms can be used to extend the cognitive capacity of designers and thereby explore new design schemes or evolve existing ones based on previous knowledge and/or experience.
2.2.6.3. Complexity in Construction Processes
Construction projects are invariably complex and are becoming increasingly more so because of the fragmented nature of the industry and its capability to both generate and collect large amounts of data (Soibelman et al., 2008, Bennett, 1991). Building construction is typically characterized by the engagement of multiple, separate and diverse groups such as architects, engineers, consultants and contractors for a finite period of time. On a higher level, organizational complexity in a project arises when the number and level of differentiation and interdependency of all the contributing organizations increase (Beyer and Trice, 1979). The differentiation can be either vertical, referring to the level of detail the activities of a project might entail, or horizontal. Horizontal differentiation refers to the number of formal units involved, such as departments or groups, or to the way the tasks are structured in terms of labor subdivision and the level of specialization required for each task (Gidado, 1993).
For instance, the organizational complexity of a project can increase if the number of different occupational specializations utilized to accomplish a project is high, or when specialists are working at different times during the project life cycle and/or at geographically separated offices. The dynamic and distributed character of the construction environment increases complexity as a result of the required amount and types of information exchanged between all contributing parties (i.e., designers, engineers, contractors) (Teicholz, 2000). The multitude of different disciplines, the lack of integrated frameworks, and the reliance on classification methods conducted through human protocols hinder this communication exchange and have caused inefficiencies as well as project cost and time overruns. Furthermore, quality and maintainability are reduced, design intent is diminished, and efficient access to objects and information in a timely manner is hindered (Halfawy and Froese, 2005, Caldas and Soibelman, 2003). On a lower level, complexity in construction occurs when dealing with the fabrication of non-standard geometries and non-repetitive assembly methods. The fabrication process relates to the manipulation of raw material for the production of discrete elements, while the assembly process refers to the combination of discrete elements into systems (Mitchell, 2005). Complexity in fabrication can be described as the length of the translation of a specific geometry or shape description into a sequence of instructions for a computer numerically controlled (CNC) machine or a robotic arm that will fabricate such a geometry. Additionally, complexity in the assembly process can be described as the number and diversity of steps required to combine discrete fabricated elements into a structure.
Consequently, if expressed in terms of algorithmic complexity, the number and descriptive intricacy of elements and/or the steps needed for their fabrication and assembly increase construction complexity. However, the dynamic environment of the construction site and the errors which appear in the construction process result in non-linearity and high levels of uncertainty, making it harder to describe construction processes in terms of algorithmic complexity; relatively speaking, they can be better described in terms of entropy.
2.2.6.4. Complexity in Building (Control) Systems
The 20th century saw the introduction of many building systems technologies such as electrification, air-conditioning, fire protection systems, active structural damping, automatic control systems, computer networks and high-performance glazing, to name only a few (Shen et al., 2010). These technologies have transformed buildings from simple structures providing shelter into complex material systems that react to environmental parameters through automated façade systems and, to a certain degree, respond to their users' needs (Rahaman and Tan, 2011). Although it is remarkable how new technologies have been sequentially incorporated into the building construction process, there still remain open issues that need to be addressed, such as the interactions between different building systems, processes and occupants (Jazizadeh et al., 2012). The general vision is an intelligent building which is responsive to the requirements of the occupants, the environment and society by being functional and productive for occupants in terms of energy consumption and CO2 emissions (Clements-Croome, 2004).
Figure 12 Diagram illustrating phases of a design to construction process in relation to the definition of architectural complexity provided by Mitchell and the holistic definition of architectural complexity provided by the author

Due to the complexity and diversity of behavioral patterns and preferences, the influence of occupant behavior is considered only in simulation (Halfawy and Froese, 2005). Moreover, the level of sophistication of the system's components, combined with the fragmented nature of the construction industry, has not allowed the integration of different systems in the early design phase or later in the building's life cycle (Shen et al., 2010). Consequently, a great challenge remains: to what extent do building systems perform at the level they are intended to, both in terms of occupant satisfaction and in terms of energy consumption?

2.2.6.5. Extending the Definition of Architectural Design Complexity

Based on the types of complexity described above, it is clear that in order to encapsulate the complexity of architectural design holistically in a digital context it is important to consider the definition of architectural complexity beyond the realm of architecture. Thus, the terms of design and construction content by Mitchell are decomposed in more detail and extended with concepts from engineering, adopting Suh's concept of dividing complexity into functional and physical domains and integrating it with Mitchell's definition of complexity. Following Suh's definition of engineering complexity, the design content lies in both the virtual and functional domains and can be further subdivided into architectural design and engineering design content. Architectural design content includes constructing a design model which combines the constraints, environmental conditions and design intentions (design approach) set forward by the occupants/stakeholders/decision-makers with a set of functional requirements (FRs).
This design model maps a set of constraints and attributes to the set of FRs and couples them with a set of design parameters by considering process variables. The engineering content includes finding the shortest description for coupling the functional parameters with design parameters by considering process variables. Construction content includes the mapping of design parameters to process variables such as available resources (material), building technology, construction activities, time and cost. Therefore, its complexity lies in the physical domain. The complexity of construction content is extended to include the number of operations necessary to realize a design and also to encompass: a) the level of differentiation of tasks, b) the interdependency among the functions of the tasks, c) the degree of labor skill each task requires, and d) the level of uncertainty in completing a task. Risk, and the uncertainty of how design parameters are coupled with a set of process variables (building paradigm), can be considered as a measure of complexity in the physical domain. The complexity of the design content in the virtual domain depends: a) on the number of design decisions required to formulate a design model, and b) on the uncertainty of satisfying the functional requirements given a set of design parameters. Moreover, entropy can be considered a measure of the randomness of operations in the design process and can be used for measuring the potential of generative design systems to generate novel solutions based on a set of design decisions (Gero and Sosa, 2008).

Tools for Managing Complexity

The survey of the literature across different disciplines indicates that the main research tools for managing complexity include a) abstraction, b) modularity and c) scalability. Abstraction can be used as a tool to compare data by treating them as generic entities that we can compare, encapsulate and draw generalizations from.
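The entropy measure mentioned above (Gero and Sosa, 2008) can be made concrete with a short sketch. Treating the decisions made by a generative process as draws from a distribution, Shannon entropy quantifies the randomness of the process; the rule names used below are purely illustrative assumptions.

```python
import math
from collections import Counter

def shannon_entropy(decisions):
    """Shannon entropy (in bits) of a sequence of design decisions.

    Higher entropy indicates a more varied generative process, which
    Gero and Sosa relate to the potential for novel solutions.
    """
    counts = Counter(decisions)
    total = len(decisions)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A process that always applies the same rule carries no surprise,
# while one choosing uniformly among four rules reaches the 2-bit maximum.
repetitive = shannon_entropy(["extrude"] * 8)                     # zero entropy
varied = shannon_entropy(["extrude", "rotate", "mirror", "scale"] * 2)
print(varied)  # 2.0
```

The measure says nothing about the quality of the generated designs; it only characterizes the variety of the decision-making, which is exactly the sense in which it is used as a proxy for novelty potential here.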
Modularity can be used as a concept which enables the development of functionally specific components that are specialized to solve particular aspects of problems. Lastly, multi-scalability is a concept that allows one to formally express features and principles (rules) that may be present across different levels but may have completely different effects according to the specificities of the scale. For instance, a fundamental rule at one scale may, on a much larger scale, reveal itself to be a frozen accident. In order to emphasize the importance of abstraction, modularity and scalability, let us consider an example from physics by P. Ferreira (Ferreira, 2001). Suppose you want to describe the particles of a given entity with the many-body Schrödinger equation:

\left[ -\sum_{i} \frac{\hbar^{2}\nabla_{i}^{2}}{2m_{e}} - \sum_{j} \frac{\hbar^{2}\nabla_{j}^{2}}{2m_{n}} + \frac{e^{2}}{4\pi\varepsilon_{0}} \sum_{i_{1}<i_{2}} \frac{1}{|r_{i_{1}}-r_{i_{2}}|} + \frac{Z^{2}e^{2}}{4\pi\varepsilon_{0}} \sum_{j_{1}<j_{2}} \frac{1}{|R_{j_{1}}-R_{j_{2}}|} - \frac{Ze^{2}}{4\pi\varepsilon_{0}} \sum_{i,j} \frac{1}{|r_{i}-R_{j}|} \right] \psi = E\psi

In the equation above, which describes matter at the atomic level, the indices i and j run over the electrons and nuclei, whose number in a human body is of the order of 10^20. It is therefore complicated to solve this equation for ψ, the wave function of the particles. In fact, it is impossible to solve this equation analytically even for the helium atom (i = 2 and j = 1). So how can we proceed when faced with such problems? Although we are unable to solve this equation analytically, physicists use abstraction to explain the dynamics of larger particles. Physicists consider the problem at different levels, for example, at the molecular level, and develop models from there, considering the characteristics of molecules that they are able to observe, such as mass, charge, polarity, etc. Equations and simulation models in physics are often solved by approximation using large amounts of computing resources and power. With the introduction of computers, physicists were able to computationally simulate and predict molecular behaviors which could not be observed otherwise.
Thus, in many scientific disciplines computation has been considered not as an exploratory tool but rather as a neutral tool, and it has advanced the discipline's mathematization (Kotnik, 2010). However, in architecture, despite the fact that the use of digital design models is now widespread and there exists the capacity to develop complex building models across scales, digital design models are not used to predict occupant behaviors and building performance; rather, they serve as descriptive models that represent and communicate an idea more easily and quickly instead of substantially affecting current design thinking. At this point it is important to emphasize the difference between computer-based and computational design tools. Up until the early 2000s, the majority of digital design tools were computer based and thus automated and mechanized data handling within the design process. More recently, we have seen the introduction of computational tools that promote design exploration and attempt to extend the designer's intellect by correlating data in novel ways (Von Bülow, 2007). The generalization of geometry via computational tools and methods requires a higher level of formalization of design thinking, but it also provides new forms of creative expression. Suh, who introduced the axiomatic design approach, argues that via design formalization, and by systematically incorporating scientific principles in design, there is the potential to inform empiricism and intuition in design and to evaluate the complexity of a design problem holistically (Suh, 1990). Along the same lines, Kotnik suggests that the consideration of digital design as computable functions offers an opportunity to systematize design knowledge and compare existing methods in design (Kotnik, 2010).
This is due to the fact that mathematical functions make the governing of cause and effect explicit, and therefore connecting methods of digital design with the concept of computational functions offers a platform for directly transferring formal mathematical concepts into architecture. However, there are two approaches regarding this transfer of mathematical concepts in contemporary digital practices. One approach is related to performance-based design techniques, which are directly related to optimization problems. This approach, although quite prevalent nowadays, can easily direct design thinking towards the parametric manipulation of optimization routines. The other approach to this transfer of mathematical concepts is based on the very idea of computation and the algorithmic description; it offers the possibility of precisely controlling the relationships of the functional requirements and design parameters between architectural elements in unique ways rather than providing optimal solutions. The formalization of design thinking cannot replace the design process, but it can act as a framework for exchanging knowledge between fields of science and design as well as for a more systemic examination of contemporary design practices. The tools for managing complexity outlined in the section above can offer a high-level framework for managing the multiple levels of complexity that are included in building design using digital design methods. Following the axiomatic design approach, design can be broadly defined as the creation of a synthesized solution which satisfies a set of perceived needs through the mapping of processes between functional requirements which exist in the functional domain and design parameters which exist in the physical domain. 
Through this perspective, concepts such as self-organization, autonomy, topology, holism and entropy from the field of complexity theory can be used to correlate the multiple levels of complexity across the different fields of architecture, engineering and construction.

Complexity and Complex Adaptive Systems

One of the most important characteristics of complex non-linear systems is that they cannot, in general, be successfully analyzed by determining in advance a set of properties or aspects that are studied separately and then combining those partial approaches in an attempt to form a picture of the whole. Instead, it is necessary to look at the whole system, even if that means taking a crude look, and then allow possible simplifications to emerge from the work. This makes it clear that the study of complex adaptive systems has a lot in common with the design process. Complex adaptive systems are particularly interesting for managing design complexity because it has been shown that a model or schema in relatively few dimensions can explore a gigantic strategy space far from any optimum or equilibrium (Bonabeau et al., 1999). Think, for example, of a computer learning to play chess. In the not so distant past, chess was an unsolved problem, as the game Go long remained, and the adaptive computer learning necessary for chess was not yet available. Nowadays, reinforcement learning models have made great progress in computer Go play. Similar approaches could prove helpful for design problems as well. In order to better understand how complex adaptive systems can be utilized in the design process, we need to ask a number of questions and relate CAS to other complex phenomena which do not share the same properties. How does a complex system operate?
How does it engage in passive learning about its environment, in the prediction of future impacts of the environment, and in the prediction of how the environment will react to its own behavior? Another question that arises is how it differs from a system like turbulent flow in a fluid, a complex phenomenon but not one that is likely to be adaptive. Yet in turbulent flow there are eddies that give rise to smaller eddies, and so on. Certain eddies have properties that enable them to survive in the flow and have offspring, while others do not and die out. Why is turbulent flow not regarded as an evolutionary system? The answer lies in the way information about the environment is recorded. In complex adaptive systems, information is not merely listed in what computer scientists call a look-up table. Instead, the regularities of experience are encapsulated in highly compressed form as a model or theory or schema. Such a schema is usually approximate, and sometimes wrong, but it may be adaptive if it can make useful predictions, including interpolations, extrapolations and sometimes generalizations to situations very different from those previously encountered. It is important to note that the adaptive process need not always be extremely effective in achieving apparent success at the genotypic level, to put it in terms of biological evolution. It is more important that the adaptive process is effective at the phenotypic level, which is a combination of the expression of an organism's genetic code (its genotype) and the influence of environmental factors that can affect the characteristics and behavior of an organism. A number of examples stemming from research on complex adaptive systems demonstrate that simple organisms have the capacity to improve their predictions and behaviors over time through the appropriate encapsulation of information related to their environment (Petersen, 2014).
In computational design research and in academic circles today, most researchers have yet to take a crude look at the whole. Instead, much computational design research is focused on specialization (i.e. performance-based design, robotics), and it is taken for granted that serious work can be done only by looking at one or a few aspects of a building (i.e. geometry). Yet every architect needs to make decisions while pretending that all aspects of a design scenario have been considered, including all the possible interactions among them (design, engineering, construction). However, if there are only disconnected specialists to consult, the collation of their opinions and information cannot always reflect a fair picture of the whole, and the final result will not always be well orchestrated. If the complexity that arises from the interaction of these specializations is not itself managed, the potential of digital media and computation in architectural design is not fully leveraged.

A Holistic Design Approach for Managing Complexity

The analysis in the previous sections shows that the complexity of creating a building design description (added design content) lies in the functional domain and is conditioned by the a) time, b) information exchange (bits) and c) energy (i.e. entropy) required to come up with a design proposal, and d) the level of uncertainty (i.e. probability) of fulfilling a set of design goals defined by functional requirements and selected design parameters. On the other hand, the complexity of constructing a given design description (construction content) lies in the physical domain and can be measured as an absolute quantity with field-specific dimensions.
Therefore, the construction content defined by Mitchell is extended to include a) the description of a sequence of fabrication and construction activities, as well as b) the range of construction activities, c) the level of interdependency between activities and existing resources, d) the technological sophistication required, and e) the level of risk in delivering it. By embracing design complexity on multiple levels, and particularly in both the physical and functional domains, we can determine the complexity of a design to construction process by considering not only geometric complexity and the available technologies (i.e. construction methods, digital fabrication) but also environmental parameters (location, orientation, azimuth), building performance data (heating and cooling loads), dynamic user behavior (space occupancy), and specific construction tasks. Practical applications of complexity theory may be found in many disparate disciplines and have included modeling approaches with both systemic scientific and utilitarian objectives. It is not operationally meaningful to view complexity as an intrinsic property of an object (i.e. a building); instead, it is better to consider complexity holistically and assume that the level of complexity arises from, or exists in, abstractions of the world. In the field of software engineering, for instance, by breaking down complexity into structured (computable) situations, engineers have managed to deal with both complex problems and new technology.

Summary

To summarize, this section has provided a critical review of the key assumptions and definitions of complexity in architecture and other disciplines. The review of the literature shows that the lack of a unifying theory for complexity has hindered research efforts and has resulted in researchers from different disciplines creating similar definitions of complexity, because they were considering complexity only within the bounds of their domain.
The review also shows that in the field of digital architecture, the lack of scientifically based approaches has resulted in approaching complexity diagrammatically and solely through the perspective of geometry. The main sources of complexity have been described along with their underlying assumptions. Additionally, a taxonomy of complexity definitions was presented, along with a classification of complexity measures, to help design researchers situate the complexity of buildings on the general map of complexity theory. The term complexity in design has been decomposed in the domains of AEC, and the existing definition of architectural complexity in digital design (Mitchell, 2005) has been extended with notions of complexity from the field of axiomatic design in engineering and the field of construction management. From the discussions in this section, we can conclude that understanding complexity is important for developing computational methodologies which go beyond geometry and for helping designers develop new kinds of design abstractions which are based on a deep understanding of natural form making processes (i.e. nest building) rather than the figurative replication of their form (i.e. the free form shape of a nest). There is a need to develop design tools that can account for multiple aspects of design (structural, material, environmental) early in the design stage and handle complexity without the tools being overly complex themselves. To achieve this, the decomposition of design problems into subproblems is vital in order to help develop multi-agent systems that can address them. Last, but not least, the investigation of mechanisms that govern complex adaptive systems is considered crucial to the development of custom MAS for design.

2.3. Multi-Agent Systems (MAS)

A multi-agent system (MAS) is defined as a computerized system composed of multiple interacting agents within an environment.
Agents can act together in order to achieve more complex goals than an agent can achieve on its own. Minsky used the term "agency" for a group of agents acting together, an idea which later evolved within the field of MAS (Minsky, 1961, Kalay, 2004). The main advantage of MAS approaches is that, due to their modularity and distributed nature, they are able to solve complex problems. MAS are considered an example of Distributed Artificial Intelligence (DAI), and such systems have shown the capacity to develop intelligence via heuristic search methods and reinforcement learning. Despite the fact that there is considerable overlap between Agent Based Modeling and Simulation (ABMS) and MASs, it is important to point out that a MAS is not always the same as an ABMS. The main difference is that the latter is used to gain insight into the collective behavior of physical agents (i.e. bees, termites), which, by simply following local rules and without overall coordination, produce complex global phenomena. Classic examples of ABMS can be encountered in complex adaptive systems and include, among others, flocking behaviors and ant colony optimization models. Most of these models fall under a more general research area known as swarm intelligence (Bonabeau et al., 1999). In contrast, research on MASs targets the solution of specific engineering problems, and the agents' behaviors and structure can be modelled according to the problem rather than according to a natural system (Macal and North, 2009). The ABMS terminology tends to be used in scientific fields such as biology, while MAS is more common in engineering and technology. Typical applications of MAS research range from online trading and disaster response to manufacturing and modelling pedestrian flows. The next section discusses how the study of complex adaptive systems can help designers develop MAS.
Section 2.3.2 contains a list of different models and ontologies of agents, the agent being the basic programming unit of a MAS.

Complex Adaptive Systems and MAS for Design

A popular analogy used to describe the potential of using MAS approaches for solving complex problems is that of social insects. Termites, for instance, have been colonizing a large portion of the world for several million years, collectively building sustainable structures that utilize the available resources. The success of social insects and other creatures (i.e. beavers) in building their own habitats (Hansell, 1984) can serve as a starting point for developing new abstractions in architecture and engineering using a MAS approach. This research direction has been greatly supported by findings from researchers working at the Santa Fe Institute, which helped establish complexity theory as a separate field. Following the founding of SFI there has been a growing interest in developing models that simulate complex adaptive systems, which can be used to reproduce some features of a natural system, or be applied to another system to predict its behavior (Bonabeau et al., 1999). Necessary steps toward achieving this goal are a) understanding the underlying assumptions, b) understanding the mechanisms that generate collective behaviors in nature, and c) developing metaphors that show how similar mechanisms can be applied to architectural design. Regarding the underlying assumptions, the previous section analyzed different assumptions and how they can be used to form abstractions. Regarding the mechanisms, this is where modeling plays an important role. Modeling an artificial system, such as a building, is very different from modelling an ant colony.
In the former, the model is used to represent an idea and demonstrate its constructability, while in the latter the model is used to uncover what actually happens in a complex adaptive system and to use that knowledge to make testable predictions. Digital models in architecture have been successfully used for formal explorations, but only in a few cases have they been used to make predictions for other types of structures or for studying the construction process and performance of the structure (Kilian, 2014). However, as building complexity increases, developing predictive models based on concepts such as distributed artificial intelligence and emergence is considered helpful for avoiding inefficiencies resulting from centralized top-down planning. Such concepts form the basis of MAS and can serve as a platform to develop new kinds of abstractions for design. Emergence is a set of dynamic mechanisms whereby structures appear at the global level of a system from interactions among the system's constituent units. Actions/tasks are executed on the basis of purely local information, without reference to the global pattern, which is an emergent property of the system rather than a property imposed upon the system by an external ordering influence (Beni and Wang, 1993, Bonabeau et al., 1999). To better understand how emergence occurs in MAS, the basic characteristics of MAS are listed below.

2.3.1.1.1. MAS Characteristics

A community of agents, whether digital or physical, is generally referred to as a multi-agent system. Multiple definitions of MASs exist, and an extensive review of agent ontologies is provided by (Weiss, 1999). According to N. Kasabov (Kasabov and Kozma, 1998), in order for a MAS to be considered intelligent it should exhibit the following characteristics: 1. the capacity to accommodate general problem rules incrementally, 2. adapt in real time to its environment, 3.
be capable of analyzing itself in terms of behavior, error and success (state awareness), 4. learn and improve through its interaction with the environment (embodiment), 5. learn by processing large amounts of data, 6. have a memory that can be used for storage and/or retrieval, and 7. have parameters that represent short-term memory, long-term memory, age, etc.

Models of Agents

Agents are often described as abstract functional systems, such as a computer program that can carry out tasks on behalf of its users, or as physical systems, such as an autonomous vehicle (e.g. a drone) or a robot that acts based on signals it receives from its sensors. For this reason, software agents may be referred to as abstract intelligent agents (AIAs) to distinguish them from physical implementations such as computer systems, robotic systems, and biological systems. In the field of distributed artificial intelligence (DAI), an agent is an autonomous entity which can "receive" perceptions from its environment through sensors and acts upon the environment using actuators. This work will focus on intelligent agents as a way to lay the foundation for developing DAI-driven design tools based on MASs. Agents may vary from being very simple to being very complex. An agent interacts with other agents in its environment and can adjust its actions to achieve a goal. In the case of an "intelligent" agent, the agent can learn or use knowledge (i.e. existing databases) to achieve goals. An example of a basic yet intelligent agent is a reflex machine, such as the thermostat in a building.

2.3.2.1. Mathematical Structure of Agents

A very simple agent program can be defined mathematically as an agent function which maps every possible percept sequence to a possible action the agent can perform, or to a coefficient, a feedback element, a function, or even a constant that ultimately affects the action.
f : P* → A

An agent function f maps the set of percept sequences P* to the set of actions A, and it can incorporate various principles of decision-making, such as the calculation of the utility of individual options, deduction over logic rules, probabilities (Tambe, 1998), or fuzzy logic (Gero and Brazier, 2004). An agent is therefore an abstract concept, which makes it particularly interesting for design purposes, as it can be used to express multiple things such as requirements, constraints, etc. The program of an agent maps every possible percept the agent receives into an action. The word percept refers to the agent's perceptual inputs from its environment at any given instant, which usually come from a set of sensors (i.e. light sensor, pressure sensor, etc.). The following sections briefly discuss basic agent architectures. Simply put, an agent can be anything which perceives its environment through sensors and acts upon that environment via actuators.

Agent Ontologies

Weiss, who has extensively studied DAI and specifically MASs, classifies agents into four main categories (Weiss, 1999):
1. logic based agents (LBs): in which the decision about what action to perform is made via logical deduction;
2. reactive agents (RAs): in which decision-making is implemented in some form of direct mapping from situation to action;
3. belief-desire-intention (BDI) agents: in which decision-making depends upon the manipulation of data structures representing the beliefs, desires, and intentions of the agent; and finally,
4. layered architecture (LA) agents: in which decision-making is realized via various software layers, each of which is more or less explicitly reasoning about the environment at different levels of abstraction.
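The agent-function abstraction described above can be sketched minimally in code. The class names and the rule table below are illustrative assumptions, not an established API; the reactive agent is shown as the simplest concrete case, mapping the latest percept directly to an action.

```python
from abc import ABC, abstractmethod

class Agent(ABC):
    """Minimal agent skeleton: the agent function maps the sequence of
    percepts received so far (P*) to the next action (an element of A)."""

    def __init__(self):
        self.percept_history = []   # P*: everything perceived so far

    def perceive(self, percept):
        """Record a new percept and return the chosen action."""
        self.percept_history.append(percept)
        return self.act()

    @abstractmethod
    def act(self):
        """Choose an action based on the percept history."""

class ReactiveAgent(Agent):
    """Reactive agent (Weiss's second category): decision-making is a
    direct situation-to-action mapping; history is kept but ignored."""

    def __init__(self, rules):
        super().__init__()
        self.rules = rules          # situation -> action table

    def act(self):
        latest = self.percept_history[-1]
        return self.rules.get(latest, "idle")

agent = ReactiveAgent({"hot": "cool_down", "cold": "heat_up"})
print(agent.perceive("hot"))    # cool_down
print(agent.perceive("mild"))   # idle (no matching rule)
```

A logic-based or BDI agent would replace the table lookup in `act` with logical deduction or with reasoning over belief/desire/intention structures, while keeping the same outer `f : P* → A` interface.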
Figure 13 Diagram showing types of agent models (ontologies) with an increasing level of complexity (from A-D)

Agent Classes

Depending on their complexity and degree of perceived intelligence and capability, the BDI and LA agents from the aforementioned categories can be divided further into five main classes according to Russell (Russell et al., 2003). A brief overview of these five main classes is given below.

2.3.4.1. Simple Reflex Agents

Simple reflex agents act only on the basis of the current percept, ignoring the rest of the percept history. The agent function formulated above is based on the condition-action rule: if a certain condition exists, then take a certain action. This agent function only succeeds when the environment is fully observable. Some reflex agents can also contain information on their current state, which allows them to disregard conditions whose actuators are already triggered. If the environment is only partially observable, infinite loops are often unavoidable for simple reflex agents. Such issues can be avoided by randomizing the actions of the agents at certain intervals, enabling agents to escape from infinite loops. Weinstein and Parunak have successfully used simple reflex agents for sorting text databases and making them appropriate for data mining (Weinstein et al., 2004).

2.3.4.2. Model-Based Reflex Agents

Unlike simple reflex agents, model-based agents can handle partially observable environments. Their current state is stored inside the agent, which maintains some kind of structure describing the part of the world that cannot be seen. This knowledge about "how the world works" is called a model of the world, hence the name model-based agent. A model-based reflex agent has an internal model that is related to the sequence of percepts the agent receives and therefore reflects some of the unobserved aspects of the current state.
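The model-based reflex idea can be sketched with the building thermostat mentioned earlier; the scenario, names, setpoint and drift value below are hypothetical assumptions. The environment is partially observable (the sensor reading is sometimes missing), so the agent applies its condition-action rule to an internally modelled state rather than to the raw percept.

```python
class ModelBasedReflexAgent:
    """Model-based reflex agent sketch: the temperature is reported only
    intermittently, so the agent keeps an internal model (last known
    temperature plus an assumed drift per unobserved step) and applies
    its condition-action rule to the modelled state."""

    def __init__(self, setpoint=21.0, drift=-0.2):
        self.setpoint = setpoint
        self.drift = drift          # assumed cooling per unobserved step
        self.estimate = setpoint    # internal model of the world state

    def step(self, reading=None):
        if reading is not None:     # sensor available: correct the model
            self.estimate = reading
        else:                       # sensor missing: predict via the model
            self.estimate += self.drift
        return "heat_on" if self.estimate < self.setpoint else "heat_off"

agent = ModelBasedReflexAgent()
print(agent.step(22.0))  # heat_off: observed reading is above the setpoint
print(agent.step())      # heat_off: the model predicts roughly 21.8, still warm
# After enough unobserved steps, the modelled temperature falls below
# the setpoint and the rule fires even without a new sensor reading.
```

A simple reflex agent in the same situation would have to either do nothing or loop on its last action when the reading is missing; the internal model is what lets the agent keep acting sensibly under partial observability.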
The agent's internal model allows it to use the input (percept) history and to estimate the impact of its actions on the environment; the agent then chooses an action based on that estimated impact and on the history of percepts it has received.

2.3.4.3. Goal-Based Agents

Another important class is that of goal-based agents. This class further expands on the capabilities of model-based agents by using goals or targets. Goals describe desirable situations, which allows the agent to choose between different behaviors by selecting one that reaches a goal state. Search and planning are the subfields of AI that study the decision-making process of selecting actions that lead the agent to achieve its goals. The advantage of this class of agents is its flexibility: the knowledge that drives the decision-making is represented explicitly and can be easily modified. A typical and widely used example of a goal-based agent is the distributed flocking model developed by C. Reynolds (Reynolds, 1987).

2.3.4.4. Utility-Based Agents

Goal-based agents only distinguish between goal states and non-goal states. It is, however, possible to define a measure of how desirable a particular state is. This measure can be obtained through a utility function that maps a state to a measure of the utility of that state. A more general performance measure should allow a comparison of different world states according to exactly how well the agent would attain a goal state. A rational, utility-based agent chooses the action that maximizes the expected utility of the action outcomes; that is, what the agent expects to derive, on average, given the probabilities and utilities of each outcome. A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning.

2.3.4.5.
Intelligent/Learning Agents

Learning has the advantage that it allows agents to initially operate in unknown environments and to become more competent than their initial knowledge alone might allow. Learning agents have a more complex structure: apart from percepts, actuators and states, they also have a) a learning, b) a critic, c) a performance and d) a problem generator component. The most important distinction is between the learning component, which is responsible for making improvements, and the performance component, which is responsible for selecting external actions. Using feedback about the current state of the agent, the critic determines how the performance component should be modified to do better in the future. The performance component is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. The last component of the learning agent is the problem generator, which is responsible for suggesting actions that will lead to new and informative experiences.

2.3.4.6. Hierarchies of Agents

In order to actively perform their function and achieve their goals, intelligent agents today are normally organized in a hierarchical structure containing many sub-agents. Intelligent sub-agents process and perform lower level functions. Together, intelligent agents and their sub-agents form a multi-agent system that can accomplish difficult tasks or goals with behaviors and responses that display a form of intelligence.

Applications of MAS in AEC

Research has shown that the complexity and uncertainty often encountered in design problems can be effectively addressed with distributed computation and artificial intelligence (Macal and North, 2009, Yezioro et al., 2008, Holland, 1992a). Design problems are by nature ill-structured, and therefore, computationally, designers must engage in defining abstractions in order to explore and optimize designs (Simon, 1973, Rittel and Webber, 1973).
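As a minimal illustration of the expected-utility selection described in section 2.3.4.4: the design scenario, probabilities and utility values below are invented solely for illustration.

```python
def choose_action(actions, outcomes, utility):
    """Utility-based agent sketch: pick the action maximizing expected
    utility, i.e. the sum over outcomes of P(outcome | action) * U(outcome)."""
    def expected_utility(action):
        return sum(p * utility[o] for o, p in outcomes[action].items())
    return max(actions, key=expected_utility)

# Hypothetical design scenario: two candidate roof forms with
# uncertain structural outcomes.
actions = ["vault", "flat_slab"]
outcomes = {
    "vault":     {"low_stress": 0.7, "high_stress": 0.3},
    "flat_slab": {"low_stress": 0.4, "high_stress": 0.6},
}
utility = {"low_stress": 1.0, "high_stress": -1.0}

# EU(vault) = 0.7 - 0.3 = 0.4; EU(flat_slab) = 0.4 - 0.6 = -0.2
print(choose_action(actions, outcomes, utility))  # vault
```

Unlike a goal-based agent, which would only test whether a state is or is not a goal, the utility function here grades states, so the agent can trade off partially satisfactory outcomes against each other.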
The capacity of distributed systems, in this case MASs, to abstractly model requirements as agent goals and to adapt to local conditions has rendered them appropriate for solving a large class of real-world problems in a number of domains, including software engineering, financial markets, security and game theory (Weiss, 1999, Wooldridge and Jennings, 1995, Tambe, 1996). MASs are also closely related to CASs, which are characterized by their ability to self-organize and dynamically reorganize their components in different ways and across multiple scales (Jin and Li, 2007). This process allows the agents to negotiate, survive, and adapt within their environments (Macal and North, 2009), but it also requires a basic understanding of evolutionary and generative mechanisms in nature and the development of mathematical models that simulate physical processes (Bonabeau et al., 1999). A number of properties, such as aggregation, nonlinearity, flows, and diversity, and mechanisms such as planning, tagging, internal models, and building blocks are common to MASs and serve as a reference for designing and developing agent-based models that can be synthesized to form a MAS (Holland, 1995, Macal and North, 2009, Tambe, 1998, Weiss, 1999). Due to their modularity, MASs are well suited for producing portable, extensible, and transferable algorithms, with better integrated development environments and more applications (Macal and North, 2009, Sycara, 1998). The application of MASs in the AEC industry has been less pervasive. Beetz et al. classify MASs in AEC under three domains: knowledge capturing and pattern recognition, the simulation and performance of building designs, and collaborative environments (Beetz et al., 2004).
In the fields of engineering and construction, researchers have been exploring the applicability of MASs from different perspectives, such as collaborative design, construction scheduling and structural optimization, to name a few (Anumba et al., 2001b, Beetz et al., 2004, Leach, 2009, Jennings et al., 1995). Agent-based modelling and simulation has been used in digital fabrication and building construction for its ability to abstract, adapt and simplify real-time complexities into simple basic rules (Scheurer, 2007). Additionally, there has been significant research in developing MASs for autonomous collective construction, at the level of algorithms as well as at the level of hardware (Werfel et al., 2006, Werfel et al., 2014). In the field of architecture and computational design, research on the use of computational methods has to date focused mostly on design generation (form and aesthetics) and simulation (Schwinn et al., 2014, Gero and Brazier, 2004). Approaches to design generation can be classified as linear or non-linear, based on algorithms that operate either in a top-down or a bottom-up fashion (Gero, 2000, Herr, 2002, Simeone et al., 2013). Many researchers have argued that top-down approaches offer control at the expense of design flexibility, as they operate on fixed design topologies that are sequentially decomposed (Sugihara, 2011). On the other hand, bottom-up algorithms can be challenging to apply for design purposes and often exhibit a lack of control over the design outcome (Sugihara, 2014). In the architectural literature, the majority of the research using the bottom-up design approach has focused on the generative aspect of agent-based simulations and performance models, and has mainly implemented swarm or boid algorithms (Maher and Kim, 2004, Aranda and Lasch, 2006, Ednie-Brown and Andrasek, 2006, Leach, 2009, Carranza and Coates, 2000, Ireland, 2009, Tsiliakos, 2012).
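The boid algorithms cited above build on Reynolds' three local steering rules: separation, alignment and cohesion. A minimal 2D sketch of one synchronous update step is given below; the neighborhood radius, rule weights and time step are illustrative values, not taken from any of the cited implementations:

```python
# A minimal 2D sketch of Reynolds' boid steering rules (separation, alignment,
# cohesion). All boids are updated synchronously from the previous state.
import math

def boid_step(boids, radius=10.0, w_sep=1.5, w_ali=1.0, w_coh=1.0, dt=1.0):
    """boids: list of dicts with 'pos' and 'vel' as (x, y) tuples."""
    new_boids = []
    for b in boids:
        neighbors = [o for o in boids if o is not b and
                     math.dist(b["pos"], o["pos"]) < radius]
        ax = ay = 0.0
        if neighbors:
            n = len(neighbors)
            # cohesion: steer toward the neighbors' center of mass
            cx = sum(o["pos"][0] for o in neighbors) / n - b["pos"][0]
            cy = sum(o["pos"][1] for o in neighbors) / n - b["pos"][1]
            # alignment: steer toward the neighbors' average velocity
            vx = sum(o["vel"][0] for o in neighbors) / n - b["vel"][0]
            vy = sum(o["vel"][1] for o in neighbors) / n - b["vel"][1]
            # separation: steer away from each neighbor
            sx = sum(b["pos"][0] - o["pos"][0] for o in neighbors)
            sy = sum(b["pos"][1] - o["pos"][1] for o in neighbors)
            ax = w_coh * cx + w_ali * vx + w_sep * sx
            ay = w_coh * cy + w_ali * vy + w_sep * sy
        vel = (b["vel"][0] + ax * dt, b["vel"][1] + ay * dt)
        pos = (b["pos"][0] + vel[0] * dt, b["pos"][1] + vel[1] * dt)
        new_boids.append({"pos": pos, "vel": vel})
    return new_boids
```

With these weights, two boids closer than the separation-dominated spacing drift apart, while a lone boid simply continues along its velocity; complex flocking patterns emerge only from repeated application of these purely local rules.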
Snooks argues that “Swarm Intelligence” (SI) can enable the encoding of design requirements either into the agent behaviors of different populations that belong to interrelated subsystems, or within a single population with adjustable or differentiated behaviors (Snooks, 2011, Leach, 2009, Leach et al., 2004). The distributed nature of agent-based models enables the mutual negotiation of relationships between different design parameters, such as program and form, or structure and ornament (Snooks, 2011). Focusing more on pattern recognition and the representational aspect of design problems, Achten has proposed a MAS framework for graphic-unit recognition in technical drawings. This approach suggests that singular agents may specialize in graphic-unit recognition and MASs can address problems of ambiguity through negotiation mechanisms (Achten and Jessurun, 2002). Menges uses swarm-based agent models to establish communication across different design environments (architectural design, structural design) and/or different hierarchical levels (global geometry, material structure), thus allowing an uninterrupted flow of information from input parameters to multiple design constraints (Parascho et al., 2013, Menges, 2007). Contrary to other researchers, Menges and the Institute for Computational Design in Stuttgart have used agent-based models to realize a number of prototypical structures and have used empirical data and the experience gained to develop an interactive, agent-based framework for integrative planning in architectural design (Groenewolt et al., 2018). In the field of structural engineering, Soibelman et al. implemented an agent-based reasoning model to enable designers to more rapidly explore conceptual structural designs for tall buildings. In their approach, a MAS (M-RAM) provides the designer with previously adapted solutions for evaluation.
The solutions are generated by a distributed multi-constraint reasoning mechanism (Soibelman and Pena-Mora, 2000). Dijkstra and Timmermans have created a custom platform, AMANDA, to simulate pedestrian flows in buildings and urban environments (Dijkstra et al., 2001). Meissner has used agent-based simulations for the support and integration of fire protection engineering in the planning process (Meissner et al., 2004), while Klein et al. have used MASs in combination with MDPs in order to develop alternative building management and control systems in relation to occupant habits and preferences (Klein et al., 2012). However, according to Anumba et al., encoding design requirements (i.e. building design requirements) into agent behaviors and defining an agent upon the decomposition of a given design problem is most often highly complex and consequently hard to achieve (Anumba et al., 2001b, Anumba et al., 2001a). Addressing that issue, Marcolino et al. present a novel approach based on the development of collaborative MAS environments, which combines alternative agent models (social choices) with number theory. Instead of using an agent-based modelling approach to generate geometry, in this case the MAS is applied to optimize building designs generated in a parametric environment (Revit) using a genetic algorithm (Marcolino et al., 2015). This approach presents teams of uniform and diverse agent populations with different design and performance goals. The system developed aggregates the agents' opinions, which relate to a predefined range of design requirements, in order to provide designers with a larger number of Pareto-optimal design solutions. 2.3.5.1.
Gap Analysis Using MAS in AEC The review of the literature shows that there are two critical impediments to the furtherance of MASs in general and in architecture specifically: first, there is a lack of methodology to enable (software) designers to clearly specify and model their applications as MASs, and second, there is a lack of widely available MAS toolkits that support designers in effectively exploring larger solution spaces. Thus, more research effort is required to develop MAS toolkits that combine generative processes with analytical processes and user-related data (e.g. light) in a distributed fashion, as well as tools that enable designers to use and adjust more sophisticated agent-based algorithms such as MDPs and team formation models. The lack of toolkits has led to duplicated effort, as much of the work in the field of architecture uses different implementations of ABMS methods, specifically boid and swarm intelligence algorithms. In addition, apart from a few exceptions, most agent-based modelling approaches have not coupled the agent models with actual building components (Groenewolt et al., 2018) or with databases of existing buildings (Soibelman and Pena-Mora, 2000). Even though there have been a number of examples of agent-based models being used to simulate user behavior, such as circulation patterns, the AEC industry has done little to adapt techniques that accurately incorporate the end-users' behavioral and performance information during the design phase of buildings (Kalay, 1999, Simeone et al., 2013). Studies have shown that if buildings are designed according to their users' needs, behavior and preferences, there is potential to reduce the total energy consumption of the building during its operational phase (Fabi et al., 2013).
Additionally, research within other domains and industries, such as security (Abbasi et al., 2015, Tambe, 1997), economics (Mullen and Wellman, 1996), and game theory (Jordan, 1992), has shown that user-centered designs could significantly increase the efficiency of building systems. A common approach to incorporating user-related information during the design phase is the use of behavioral models to simulate users' movements, interactions and responses within the designed environment. Such simulations are also designed to estimate the building's energy consumption more accurately based on its occupants' possible interactions, comfort levels and preferences (Kavulya et al., 2011, Klein et al., 2011). Although such simulations have been promising and provide a more user-centered analysis of a building's operations, in many cases, due to the complexities of human behavior (e.g., preferences, personalities, etc.), they do not provide accurate and realistic representations of the actual occupants' behavior during the operational phase; therefore, in some cases the building could be less energy efficient and fail to accommodate the occupants' needs (Bullinger et al., 2010). A holistic approach is therefore often lacking. Such an approach would offer designers the capacity to integrate generative design rules with (a) user-related information (e.g. preferences); (b) multiple analyses and performance criteria, such as environmental and lighting analysis; (c) building and material constraints; and (d) performance optimization functions. Part III A Multi-Agent Systems Framework for Environmentally Aware Form Finding 3. Research Methodology Until recently, most existing 3D modelling design tools could do only what the designer instructed them to do, while many optimization tools required large amounts of data in order to operate properly.
However, in many cases architects need to make decisions and evaluate design models without having all the parameters fixed. Performance-based parametric tools have been used to help search through the large space of design solutions, but this can lead to largely inefficient and time-consuming processes. Beyond enabling geometric and information modelling, design tools should also extend the capability of architects to solve difficult problems. In order to do so, design tools should have capabilities that cover the following five areas: search, pattern recognition, learning, planning and induction. The suggested methodology and the corresponding framework serve three purposes: first, to enable architects to build design models that can accommodate changes as the project progresses, instead of requiring architects to rebuild the design model at each design phase (as is often the case with parametric design models); second, to provide a structured environment in which to develop agents and behaviors for application to architectural problems, and to develop a computational apparatus that facilitates agent-based design research on the one hand and comparison of results with existing design approaches on the other; and finally, to allow the reduction of solution spaces using heuristic methods, improving the efficiency of traversing multiple solution spaces by combining the designers' input with pattern-recognition techniques. By using planning methods, we may obtain a fundamental improvement by replacing the solution space with a much more appropriate solution landscape. This work investigates and develops an agent-based framework for the computational exploration of design alternatives in the early design stage.
Although field-specific, it combines concepts from areas that range from architectural design, structural and environmental engineering to complexity theory and multi-agent systems, and it is adaptable enough to be used by a variety of disciplines that involve geometric modelling in design or engineering. Recent advancements in AEC, such as the increasingly rapid adoption of building information modelling (BIM) (Eastman et al., 2011) and performance-based design approaches (Oxman, 2008), the introduction of integrative planning methods (Menges, 2013), and robotic construction platforms such as the Digital Construction Platform (DCP) (Keating et al., 2017), indicate that research into methods and tools that deal holistically with the design-to-construction process is necessary. 3.1. Generic Design Problem Solvers Using MASs Developing algorithms and computational tools that solve specific problems is a major area in both engineering and computer science, and a number of algorithms have been presented which are able to find optimal solutions to problems. Although such approaches can be very efficient for specific engineering problems (e.g. structural optimization), they are applied after a design has been formulated. To date, computational design tools have therefore not dealt with the creative aspect of design in the early design stage, but rather focus on the later design stages and rely on a given design space after the problem is well defined. Although these tools have proven invaluable for finding optimal solutions to specific sub-problems, they provide little design intuition and feedback for addressing bigger problems. The growing interest in complex adaptive systems is due to the fact that, by observing mechanisms that exist within communities of social insects such as termites, researchers have developed algorithms that can be applied to a variety of problems and domains.
Such algorithms can be considered generic problem solvers and are of particular interest for architectural design, in which almost every problem is unique. Creating agent-based models and developing algorithmic solutions for addressing design problems is a relatively new research topic in architecture, mainly due to their computational complexity. This work specifically focuses on the application of new agent models for combining different fields in AEC. In the following section the main elements of the methodology are described. Design Problem Decomposition As described in Section 2.2.6, design problems are often described as ill-defined, and in terms of their computational complexity one could classify them as NP-Hard: it is not known how to generate a correct solution; it is not known how to test the correctness of a proposed solution; but it is possible to compare two proposed solutions and select the better one. Based on this classification of problems, a number of computational methods (solvers) have been developed for solving them. Following the categorization of problems, algorithms are also categorized as: a) greedy, b) deterministic, c) stochastic, d) exact, e) approximate, f) progressive, g) adaptive, h) specific, i) generic and j) open (Rutten, 2014). This work implements fundamental algorithms that fall under the broad categories of generic, stochastic, adaptive and open algorithms, and applies them in the field of design using agent-based modeling. Since there are multiple classifications and descriptions of algorithms, brief descriptions of these categories are given below for clarity. Stochastic algorithms include a random component, and their results can be predicted probabilistically. It should be noted that on digital computers all processes are inherently deterministic, but pseudo-randomness is sufficient to classify an algorithm as stochastic.
Hill climbing (HC), simulated annealing and swarm algorithms are fundamental stochastic algorithms. Adaptive algorithms can operate on a changing set of constraints and inputs; these algorithms run continuously within a dynamic environment. Self-organizing maps are an example of such algorithms. Generic algorithms are designed to tackle a wide variety of problems; this flexibility is accompanied by a significant drop in performance. Divide and conquer is an example of a basic generic algorithm. Open algorithms allow external entities (be they human beings or other algorithms) to participate in the solution process; seemingly unpromising lines of inquiry can be investigated upon the request of an external agent. Research suggests that generic solvers are more appropriate for design, as they are capable of solving almost any problem. But what enables a generic solver to deal with many problems? Does that not require a large amount of intelligence or data? It certainly does, but the important point is that the intelligence (knowing) does not need to exist within the solver itself but rather in the way the problem is decomposed and the solver is configured. If the solver is open, it can be extended by another algorithm or interactively by the user, who can fill in knowledge gaps. For example, a generic solver does not need to “know” anything about geometry in order to determine whether an opening is properly placed; it only needs a collaborating algorithm that takes care of geometric representations (i.e. the NURBS environment). By implementing design systems which are open and dividing the “knowing” aspect (i.e. design generation, design knowledge, intuition) and the solving aspect (analysis, evaluation) into disjoint algorithms, it becomes easier to develop generic problem-solving approaches. Due to this decoupling, such algorithms are also easier to repurpose.
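The division between a generic solver and a collaborating, domain-aware algorithm can be sketched as two components that exchange nothing but numbers. The function names, parameter ranges and the daylight example below are hypothetical, chosen only to illustrate the decoupling:

```python
# Sketch of a generic solver decoupled from domain knowledge: the solver only
# exchanges numeric ratings with a collaborating fitness function and never
# "knows" what the parameters mean.
import random

def generic_solver(fitness, lower, upper, iterations=500, seed=1):
    """Maximize fitness(x) over [lower, upper] by random perturbation."""
    rng = random.Random(seed)
    best_x = rng.uniform(lower, upper)
    best_q = fitness(best_x)            # q = f(x): the only message exchanged
    for _ in range(iterations):
        x = min(upper, max(lower, best_x + rng.gauss(0, 0.1 * (upper - lower))))
        q = fitness(x)
        if q > best_q:                  # the solver compares ratings, nothing more
            best_x, best_q = x, q
    return best_x, best_q

# Collaborator: all domain knowledge lives here, e.g. rating how close a
# (hypothetical) opening position is to a daylight target of 0.6.
daylight_rating = lambda position: -abs(position - 0.6)

x, q = generic_solver(daylight_rating, 0.0, 1.0)
# x converges near 0.6, the position the collaborator rates highest
```

Swapping `daylight_rating` for a structural or acoustic rating function repurposes the same solver without changing a single line of it, which is the point of the decoupling described above.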
Communication between the algorithms becomes key, and the interaction among them defines the problem-solving process. The process can be briefly described as follows. The generic algorithm generates alternative design solutions based on a problem specification. The companion algorithm processes each solution and assigns a quality rating (e.g. cold, warm, hot). The generic algorithm is responsible for interpreting the messages and the collaborator algorithm is responsible for computing the “fitness” of each solution. The algorithms communicate by exchanging numbers and characters (messages), and their communication can be described mathematically: q = f(x) (1) f : X → ℝ (2) The first equation (1) is called the fitness or heuristic function and defines the language between the two algorithms mathematically. The output of f, labelled q, describes the fitness as a numeric value. The second equation (2) shows the mapping of this function, which specifies the type of data that goes in and out of it: f has to consume data in the form of a candidate solution x ∈ X and must return a real number (ℝ). The combination of heuristic function and mapping creates the solution space (landscape) for a given problem, which can be explored with the aforementioned types of algorithms. The process includes iterative runs during which the generic solver(s) search and find their way around these landscapes and converge on high ground as quickly as possible (Rutten, 2014). The designer's responsibility becomes developing meaningful problem spaces and setting up relationships between design elements and design targets (performance) using one or more fitness functions. Hypothesis The agent-based framework revolves around the hypothesis that designers will need to develop new types of abstractions (i.e.
mathematical models) that will be represented as agents and used to describe design problems in generic problem-solving algorithms, in order to deal with the increasing complexity of building design. An agent-based design framework for computational morphogenesis is presented, in which the modeling of design requirements from different design domains into agent behaviors is suggested. The framework focuses on the early design stage, and the objective is to enable designers to couple geometry with different types of numerical analysis in an agent-based fashion and to automatically generate and evaluate design alternatives using principles of evolutionary programming. The implementation of custom types of agents allows designers to traverse the solution space, extend existing form finding methods, such as particle spring systems, and couple them with analytic data via heuristic functions. Strategies for exploring the solution space can be realized via appropriate problem decomposition and by introducing task-specific agents, their behaviors, hierarchies and the heuristic functions between them. The fundamental novelty of this methodology is that different types of agents are implemented for each aspect of the design cycle, namely synthesis, analysis and evaluation. To allow for extensibility, the framework is composed of a set of class libraries organized around a core agent library, which is described in the following section. 3.2. Framework Development for Integrating Multiple Design Phases In order for an agent-based computational design tool to be able to generate design solutions, the design problem needs to be described to a number of agents and represented in a tractable way. This is a particularly difficult task because problem requirements are not formalized until the later stages of the building design process.
In architecture, the majority of agent-based design approaches have focused on adapting the basic behaviors of the swarm intelligence models developed by C. Reynolds to fit the context of specific design problems (i.e. simulation) (Reynolds, 1987). This work focuses on situations with multiple types of agents in which the variables and constraints are distributed among the agents, so that no agent controls all the variables. In such situations, the design problem is defined as a distributed constraint satisfaction problem, and each agent may interact only with a few agents in the system. Local interactions become a feature of the agent; the agents within such networks can therefore be part of a team and must cooperate with each other to achieve a design goal, or they may each have individual targets and goals. Figure 14 Diagram illustrating the overall multi-agent systems for design approach, including design problems, designer interaction, results, feedback loops, and the decomposition of the system into subdomains including design generation, simulation, analysis and evaluation Figure 15 MAS framework diagram showing agent classes and interdependencies between agents. Numbers in each component indicate the process workflow To provide clarity, an agent is denoted as a software-based programming block and/or computer system that shares the following properties: (a) an agent exists within an environment and responds to it while interacting with other agents.
Therefore the agent is situated, in that its behavior is based on the current state of its interactions with both the population of agents and the environment; (b) an agent may have explicit objectives that condition its behavior and are directly related to specific performance criteria, in which the goals are not solely targeted at maximizing effectiveness but are used to assess and improve the decision-making process; (c) an agent can adapt and change its behavior based on a utility function that uses analytical data or the agent's own evolution and interaction history. In this case, individual adaptation requires agents to have sufficient memory to keep track of their actions, usually in the form of a dynamic agent parameter (utility); and therefore (d) an agent has resource parameters that indicate its current stock of one or more resources (energy, material, information) (Macal and North, 2009). Each of the aforementioned properties is expressed as a layer within the internal structure of an agent. The typical agent structure established in this work includes: a) an interface layer through which the agent communicates with its environment, b) a definition layer which describes the set of states and goals of each agent, c) an organization layer which decides the type of actions to be taken by the agent at a given time based on analytical data, d) a coordination layer which keeps track of past and current decisions, and finally e) a communication layer which establishes that the agents are able to communicate among themselves (Figure 16). The key assumption is that a given architectural design problem can be distributed among agents which cater to different aspects of the problem under consideration.
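A minimal skeleton of this five-layer agent structure might look as follows. The layer names follow the text, while the class design, method names and the façade example are illustrative assumptions rather than the framework's actual implementation:

```python
# Skeleton of the five-layer agent structure described above; method bodies
# are illustrative placeholders.
class Agent:
    def __init__(self, goals, resources):
        # definition layer: states and goals of the agent
        self.state = {}
        self.goals = goals
        # resource parameters (energy, material, information)
        self.resources = resources
        # coordination layer: memory of past decisions (utility history)
        self.history = []
        self.inbox = []                      # communication layer buffer

    # interface layer: perceive the environment and other agents
    def perceive(self, environment):
        self.state.update(environment)

    # organization layer: choose an action based on goals/analytical data
    def decide(self, actions):
        scored = [(self.utility(a), a) for a in actions]
        utility, action = max(scored)
        self.history.append((action, utility))  # coordination layer update
        return action

    def utility(self, action):
        # dynamic utility parameter: how well an action serves a goal
        return self.goals.get(action, 0.0)

    # communication layer: exchange messages with other agents
    def send(self, other, message):
        other.inbox.append((self, message))


# e.g. a hypothetical facade agent whose goal values favour adding an opening
a = Agent(goals={"add_opening": 1.0, "place_panel": 0.2}, resources={"panels": 10})
chosen = a.decide(["add_opening", "place_panel"])
```

Each layer maps to one attribute or method group, so specializing an agent for a new design domain means overriding `utility` and the action set while the interface, coordination and communication machinery stays unchanged.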
In particular, this model suggests the creation of separate agent classes which have goals related to different steps of the design process, namely: a) synthesis: the generative agent class, b) analysis: the specialist agent class, c) evaluation: the evaluator agent class, and d) coordination: the coordinator agent class (core agents). In the interface layer, a set of control variables and constraints is defined that relates to the design problem (e.g. design a façade). These variables and constraints are connected with a number of actions the agent can perform (e.g. add an opening, place a façade panel) in the definition layer. The plan and schedule of the agents' actions are defined in the organization layer, where rewards and penalties are also defined in order to evaluate the outcome of each action. In the communication layer, the type and amount of data communicated among different agents is defined, while the success or failure of each action is measured using rewards and costs. Using Distributed Constraint Reasoning (DCR), a set of agents can form teams (e.g. generative with specialist) and cooperate by coordinating their actions, plans and schedules (Tambe, 1997). In the proposed multi-agent systems framework, the four generic agent types, each focusing on different design goals, are applied to different design cases. Each agent type pertains to a different design domain, and its goals are defined by the designer based on available data and the design intentions. This work presents case studies in which agents are simple programming modules that have the aforementioned structure and are linked to design software in order to perform different design actions (e.g. generate geometry) based on their type and state (Gerber et al., 2017). The agents implement core geometric functions that exist within the design software by accessing specific application programming interfaces (APIs) such as RhinoCommon.
However, there have been some experiments in which whole computer systems have been considered as agents, with the agents' actions relating to the implementation of specific commands within the system (L. S. Marcolino, 2013). The parameters in the research include: the decomposition of simple design problems into different design agencies that form agent networks; the generative capacity of the MAS framework to create unique and complex outcomes, which improve with iteration, and to integrate geometry formation with environmental performance criteria as they relate to established standards and user preferences; and the integration of geometry rationalization based on the coupling of structural performance with environmental performance criteria. This work is a preliminary step toward developing an integrated MAS toolkit that utilizes computational methods to find solutions through discovery rather than precise analysis. Figure 15 shows the proposed framework, emphasizing the main steps and processes involved in this methodology. Task 1: Development of Agent Classes This approach is developed in two stages: first, the generative aspect of design, in which agents act autonomously; and second, the optimization of generated outcomes, in which agents act collaboratively and negotiate to find optimal solutions. The system explores the complexly coupled relationships between the generative and analytical design processes. First, a set of generative agents and behaviors is modeled, based on a given design site's location and orientation, a building façade bounding context, and designer-defined parameters that pertain to building components (length, width, thickness, type). Initially, the system's agents act autonomously and develop design alternatives which satisfy local rules and constraints from the geometric domain, avoiding specific areas that are reserved for window openings and views, and checking for collisions to ensure constructability.
Figure 16 Diagram illustrating the typical agents' internal structure, agent types and agent hierarchy within the MAS for design During a second loop, the designs are analyzed by a set of specialist and user preference agents, which communicate their data back to the generative agents in order to adjust parameters and regenerate design alternatives based on specific user preferences and performance goals. Five different classes of agents are modeled with actions, properties, states and goals. The agent classes include: 1. generative agents that relate to the design intention and geometric properties of the building component and are responsible for generating façade panels that regulate the amount of light entering the office space, 2. specialist agents with a number of different sub-classes (based on the types of analysis) for analyzing and evaluating the generated designs' performance, 3. simulation agents that are responsible for simulating analytical results and/or user preferences and presenting them to the designer, 4. evaluation agents that are responsible for collecting the available analytical data and are based on heuristic functions that evaluate and rank design alternatives, and 5. a coordination agent that ensures that each agent is aware of the other agents' states and is responsible for the communication and coordination of the different classes. All agent types have the same number of layers, but their definition is based upon a specific domain (e.g. generative design, environmental or structural engineering) and a basic set of principles that are related to that domain and affect the agent's behavior. Behavioral rules may vary in complexity and in the levels of information taken into account during the decision-making process. The level of information needed for each agent can be based either on established analytical methods for environmental and structural design or on the designer's input.
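The workflow formed by these agent classes can be sketched as a simple class hierarchy. The simulation agent class is omitted for brevity, and the stand-in analysis (summing panel parameters) is a placeholder for the real solvers accessed through APIs such as RhinoCommon; all names are illustrative:

```python
# Hypothetical simplification of the agent-class hierarchy described above.
class DesignAgent:
    def __init__(self, name):
        self.name = name
        self.state = "idle"

class GenerativeAgent(DesignAgent):
    def generate(self, params):
        # produce a candidate facade panel from designer-defined parameters
        return {"panel": params}

class SpecialistAgent(DesignAgent):
    def analyze(self, design):
        # stand-in for a daylight/structural solver: returns a numeric value
        return sum(design["panel"].values())

class EvaluationAgent(DesignAgent):
    def rank(self, results):
        # order design alternatives by their analytical score, best first
        return sorted(results, key=lambda r: r[1], reverse=True)

class CoordinationAgent(DesignAgent):
    def run(self, generator, specialist, evaluator, param_sets):
        # coordinate the generate -> analyze -> evaluate loop
        results = []
        for p in param_sets:
            design = generator.generate(p)
            results.append((design, specialist.analyze(design)))
        return evaluator.rank(results)

coordinator = CoordinationAgent("c")
ranked = coordinator.run(GenerativeAgent("g"), SpecialistAgent("s"),
                         EvaluationAgent("e"),
                         [{"width": 1.0, "depth": 0.2},
                          {"width": 2.0, "depth": 0.5}])
```

The coordination agent is the only class aware of all the others, mirroring its role of keeping each agent informed of the other agents' states.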
For instance, the behavior of a generative agent that represents a façade panel depends on a set of input parameters that are coupled with basic principles found across environmental design methods, such as: a) orientation, b) sun positions, or c) the level of light needed with regard to the use of the space. Task 2: Search Space Exploration Using Heuristic Algorithms As a first step towards steering the behavior of the generative agent for producing design alternatives, its input parameters were coupled with analytical values obtained by a specialist agent using stochastic search algorithms. By developing simple heuristic functions, the designer specified the relationship between design parameters and analytical values. This relationship was based either on the designer's experience or on a sensitivity analysis that allowed the designer to check which design parameters have the greatest impact on the specific analysis. For example, if the designer performs a daylight analysis, the specialist agent will collect a numerical value that reflects lux values (Gerber et al., 2017), and if the designer performs a structural analysis, the specialist agent will collect a numerical value that reflects displacement (Pantazis and Gerber, 2016). The specialist agent passed the input parameters to a commercial analytical solver, collected the analytical values and communicated them to the generative agent. At each iteration, the specialist agent was responsible for performing the analysis and communicating the analytical values to the generative agent (updating the values of the generative agent). In cases in which there were multiple analyses, a weighted value was attached to each design parameter to indicate the impact of that design parameter on the analysis. For each parameter that was updated, a credit (e.g., +1) was attributed to the agent if the value obtained at the sensor point with the highest impact was closer to the target value (Figure 17).
If the obtained value was further from the target value, a penalty was attributed (i.e., -10). Based on this credit or debit, the generative agent decided in which direction to update a design parameter. The experiment began by implementing four search optimization procedures and selecting the one that performs best for a given design target in terms of time, design diversity and applicability to other problems. Initially, two basic local search algorithms were implemented, hill climbing and simulated annealing, since they are fundamental heuristic algorithms that are applicable to, and perform quite well for, a wide range of problems. Two more complex population-based algorithms were then implemented, namely a particle swarm optimization (PSO) algorithm, which is a distributed local optimization algorithm, and a stochastic diffusion search (SDS) algorithm. Unlike hill climbing and simulated annealing, PSO does not use selection; rather, improved solutions appear via the interaction of the agent population. Although the first two algorithms are fundamental in artificial intelligence research, as noted by M. Minsky in his seminal paper “Steps Toward Artificial Intelligence,” and have been widely used in engineering, little research has been done on their application to design problems (Turrin et al., 2011). Therefore, it was essential, as a first step towards building an intelligent MAS tool, to apply them to design problems and compare them to population-based approaches such as PSO and SDS. For each of the approaches, the researcher has developed a custom heuristic function that associates a design feature (i.e. window position on a façade) with one or more analytical results (i.e. the amount of daylight entering a space). An example of a simple heuristic with two analytical values was presented in the previous section.
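The credit/penalty mechanism described above can be sketched as follows. The function and variable names, the lux example and the step value are illustrative assumptions; only the +1/-10 scheme comes from the text:

```python
# Illustrative sketch of the credit/penalty update described above:
# a parameter earns +1 credit when the analysis value at the highest-impact
# sensor point moves closer to the target, and a -10 penalty otherwise.
CREDIT, PENALTY = 1, -10

def score_update(previous_value, new_value, target):
    """Return the credit or debit for one parameter update."""
    if abs(new_value - target) < abs(previous_value - target):
        return CREDIT
    return PENALTY

def choose_direction(param, step, previous_value, new_value, target):
    """Keep the step direction on credit, reverse it on penalty."""
    if score_update(previous_value, new_value, target) > 0:
        return param + step          # keep moving the same way
    return param - step              # back off and try the other direction

# Example (assumed numbers): target illuminance 500 lux; the last step moved
# the analysis value from 420 to 460 lux, so the agent keeps its direction.
new_param = choose_direction(param=0.4, step=0.05,
                             previous_value=420, new_value=460, target=500)
```

The asymmetry between credit (+1) and penalty (-10) biases the agent strongly away from parameter moves that worsen the analysis result.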
In order to draw conclusions, the researcher examined how the different algorithms perform when a) generating unique solutions each time they run or b) operating on a small versus a large sample size. In order to validate the results, the problem was reduced to finding the optimal position of a generic façade surface and searching linearly (brute force) to map the whole solution space. The researcher then observed the relationship between the parametrization of the design features and the analytical results, and attempted to use this information to guide the search more efficiently based on this relationship and on the performance of each of the approaches discussed below.

Figure 17 Diagram showing the basic steps for implementing a heuristic search computationally, namely (from left to right) the definition of design parameters and a set of performance measures. The design parameters are related to the measures via a heuristic function which forms the solution space/landscape that is being traversed using stochastic algorithms

3.2.2.1. Hill Climbing
Hill climbing (HC) is a basic local search method best suited for convex and constrained optimization problems. It is an iterative algorithm that starts with an arbitrary solution to a problem (i.e. the position of an opening on a façade) and then attempts to find a better solution by incrementally changing a single element of the solution. If the change produces a better solution, an incremental change is applied to the new solution, repeating until no further improvements can be found. HC is a good method for finding a local optimum (a solution that cannot be improved by considering a neighboring configuration), but it does not guarantee that the best possible solution (the global optimum) is found out of all possible solutions (the search space), as it can easily be trapped in local optima. This drawback can be addressed by using repeated local searches (shotgun HC).
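The shotgun (random-restart) variant can be sketched as follows. The toy one-dimensional objective, parameter range and restart count are illustrative assumptions standing in for the analysis-driven heuristic:

```python
# Illustrative sketch of shotgun (random-restart) hill climbing on a toy
# 1D heuristic; the objective and parameter range are assumptions.
import random

def heuristic(x):
    # Toy stand-in for an analysis-based heuristic with two local optima:
    # a global optimum at x = 3 and a weaker local optimum at x = 8.
    return -(x - 3) ** 2 if x < 5 else -(x - 8) ** 2 - 1

def hill_climb(start, step=1, low=0, high=10):
    """Climb until no neighboring value improves the heuristic."""
    current = start
    while True:
        neighbors = [n for n in (current - step, current + step)
                     if low <= n <= high]
        best = max(neighbors, key=heuristic)
        if heuristic(best) <= heuristic(current):
            return current           # local optimum reached
        current = best

def shotgun_hill_climb(restarts=20, low=0, high=10):
    """Repeat hill climbing from random starts; keep the best local optimum."""
    rng = random.Random(42)
    results = [hill_climb(rng.randint(low, high)) for _ in range(restarts)]
    return max(results, key=heuristic)

best = shotgun_hill_climb()
```

A single run started near x = 8 gets trapped on the weaker hill; repeated restarts make it overwhelmingly likely that at least one run finds the global optimum at x = 3.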
Despite the above disadvantages, hill climbing is an algorithm with a wide field of application due to its simplicity and good performance; the researcher therefore considers it as an optimization search approach. The algorithm is briefly described below:

Step 1: Choose an opening position (i.e. the x,y position of the window) at random. Call this position (string) the best evaluated (in the experimental case study the initial string chosen was evenly spaced openings).
Step 2: Choose another window position a step size away from the previous position (i.e. for two windows, there are a maximum of 8 possible steps to search in). If the change of the window position parameter leads to an equal or higher value of the heuristic, then set best-evaluated to the resulting string.
Step 3: Go to step 2.

3.2.2.2. Simulated Annealing (SA)
Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given heuristic function. It is suited to problems with a large solution space in which finding an approximate global optimum in a fixed amount of time matters more than finding a precise local optimum. It can be used for solving both unconstrained and bound-constrained stochastic optimization problems (Cutler et al., 2008, Shea et al., 2006). To clarify, consider the example of placing openings onto a façade: the positions of the window openings are coupled with the total radiation behind the façade. At each step, the algorithm changes the position and size of the window and measures the amount of light entering the space behind the façade based on the heuristic mentioned in section 1.3.1. The algorithm selects a new value from the parameter range based on whether the obtained analysis value was closer to or further from the desired value (i.e. maximizing natural light availability). The algorithm is briefly described below.

Step 1: Choose a new parameter value at random to generate a geometry.
The distance of the new point from the current point, or the extent of the search (budget), is based on a probability distribution with a scale proportional to the light distribution inside the room (heuristic function).
Step 2: Receive a numeric value from the analysis (i.e. environmental analysis).
Step 3: Accept all new points that increase the heuristic, but also accept, with a certain probability, points that decrease it (sub-optimal points further from the objective); this probability decreases over the iterations. By accepting points that decrease the objective, the algorithm avoids being trapped in local minima in early iterations and is able to explore globally for better solutions.
Step 4: Terminate when there is no more budget left or when the maximum number of iterations has been reached.

3.2.2.3. Particle Swarm Optimization (PSO)
Particle swarm optimization is a population-based stochastic algorithm for optimization. It consists of three basic steps, namely 1) generate the particles' positions and velocities, 2) update the velocities, and finally 3) update the particles' positions. Here, a particle refers to a point in the design space that changes its position from one move (design iteration) to another based on velocity updates. Unlike evolutionary algorithms (i.e. a genetic algorithm), particle swarm optimization does not use selection; rather, it evaluates the interactions between the design parameters and a defined heuristic and iteratively updates them with the aim of improving the quality of problem solutions over time. In this respect, PSO is similar to the genetic algorithm, but its main advantages are that: a) it can converge faster toward optimal solutions (Felkner et al., 2013) and b) it performs better computationally. The algorithm operates on a collection of design iterations called "particles" that move in steps throughout a surface domain.
The steps are described briefly below:

Step 1: Begin by generating n initial particles with different opening positions and assigning different velocities (length of panel) and probabilities to each generation type.
Step 2: For each generated design, perform an environmental analysis and retrieve the results.
Step 3: Evaluate the results based on the given heuristic function; determine the best (user-defined, desired) opening positions, the new velocity, and the best generation probability for the agents.
Step 4: Update (iteratively) the opening positions (the new location is the old one plus the new probability, modified to keep the façade surface filled), the probabilities of types and the neighbor agents.
Step 5: Iterate through steps 2 and 3 until the algorithm reaches a stopping criterion, which could be a maximum number of iterations or a point where the designer is satisfied.

3.2.2.4. Stochastic Diffusion Search (SDS)
Stochastic diffusion search is a “parallel probabilistic pattern-matching algorithm” capable of rapidly identifying the best instantiation of a target pattern in a noisy search space (De Meyer, 2000). Unlike stigmergic or swarm behaviors, which rely on the modification of physical properties of a simulated environment, SDS uses a form of direct (one-to-one) communication between agents, similar to the tandem calling mechanism employed by one species of ant, Leptothorax acervorum (Nasuto and Bishop, 1999). In SDS, based on the problem decomposition, agents formulate a simple hypothesis which serves as a candidate solution to the search problem. The agents iteratively perform cheap, partial evaluations of their hypotheses and share information about them (diffusion of information) through direct one-to-one communication with other agents. As a result of the diffusion mechanism, high-quality solutions can be identified from clusters of agents holding the same hypothesis. Below is a brief description of the operation of SDS.
Step 1: Populate a surface (i.e. façade) with agents who avoid specific areas (i.e. openings).
Step 2: Generate an initial population of solutions with different opening positions on the façade.
Step 3: Create an agent for each design solution. The agent maintains a hypothesis for the placement of the openings in the form of a message (i.e. this window position will increase daylight).
Step 4: Perform an analysis (i.e. daylight factor analysis) for each generated design and retrieve the results.
Step 5: Evaluate the results based on the given heuristic function.
Step 6: If the fitness criteria are satisfied, select another position randomly and repeat steps 3 and 4.
Step 7: If the condition is not satisfied, report the position and ask another agent whether it was able to satisfy the condition. If the answer is yes, the agent saves this position.
Step 8: Repeat until the majority of the generated solutions satisfy the fitness criteria.

Task 3: Agent Coordination, Evaluation and Negotiation
Communication and negotiation mechanisms among the agents are established in order to update the behavior of the generative agent(s) and improve their geometric results. Text file messages carry updated values and/or actions used to negotiate across the different agents. At each iteration, the coordinator agent class checks the state of the other agents, reports their state (i.e. analysis has finished), calculates their utility, and predicts future actions based on that utility. The effect of their actions and behaviors on other agents depends on the hierarchy established among the different agents (i.e., a generative agent is higher in the hierarchy than the specialist agent) or on the degree of importance of each agent’s related behavior, which is expressed as a utility. The target of the negotiation for each agent is to satisfy its own goal while minimizing the negative side effects on the other agents. The satisfaction of each goal is measured by the increase or decrease of each agent’s utility.
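The negotiation target described above (satisfy one's own goal while minimizing negative side effects on other agents) can be sketched as a simple scoring rule. The proposal structure and the numeric utility changes below are illustrative assumptions:

```python
# Illustrative sketch of utility-based negotiation: each proposal records the
# utility change it causes for every agent; the winning proposal maximizes
# the proposer's own gain minus the harm it inflicts on the other agents.
# All names and numbers are assumptions for exposition.

def negotiate(proposals):
    """Pick the proposal with the best own-gain-minus-side-effects score."""
    best_name, best_score = None, float("-inf")
    for name, deltas in proposals.items():
        own_gain = deltas[name]
        side_effects = sum(d for other, d in deltas.items()
                           if other != name and d < 0)
        score = own_gain + side_effects      # negative side effects penalize
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# The generative agent's proposal gains it a lot but hurts the specialist;
# the specialist's proposal gains less but causes little harm, so it wins.
proposals = {
    "generative": {"generative": +3.0, "specialist": -2.5},
    "specialist": {"generative": -0.5, "specialist": +2.0},
}
winner = negotiate(proposals)
```

In the actual system the hierarchy (generative above specialist) would additionally weight these scores; the sketch shows only the side-effect trade-off.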
Hierarchy among the agents is established and applied by a coordinating agent that communicates with and controls the rest of the agents. Figure 16 illustrates the basic structure of an agent and the established hierarchies among the system’s agent classes. The designer is responsible for designing the interaction mechanisms between agents and the different input design parameters. The designer also couples different types of analyses with design parameters and establishes trade-off processes among the agents.

Task 4: Prototype Development (MAS Design Tool)
The system prototype is built on top of a number of open source software platforms and tools, as well as commercial software that offers open application programming interfaces (APIs). The core agent types/classes of the system are developed in Java on the Eclipse platform, which serves as the common interface for diverse integrated development environment (IDE) based products to facilitate our integrations (Rivières and Wiegand, 2004). The custom-programmed MAS utilizes libraries and classes from Processing, a Java-based programming language with its own IDE (Reas and Fry, 2007). For the geometric adaptation and transformation of the building components, the iGeo library is implemented, which has been developed to offer automatic data management of Non-Uniform Rational B-Spline (NURBS)-based geometry as agents, as well as method chaining for coding efficiency (Sugihara, 2014). A custom Java applet and a graphical user interface (GUI) are created to generate geometric configurations, which are then imported into the Rhino 3D NURBS design environment for further analysis (Robert McNeel and Associates, 2019).
Figure 18 MAS for design system architecture: MAS environments, inputs, data transfers among software platforms, and actions and relationships among agents

Designs are analyzed and evaluated using Grasshopper, a visual scripting editor within Rhinoceros, and specifically two Python-based environmental simulation plugins, Honeybee and Ladybug (Roudsari et al., 2014). Automation functionalities are added via custom programming in Python in order to obtain environmental analysis results from specialized stand-alone software (OpenStudio, EnergyPlus, Radiance, and Daysim). Karamba 3D (Preisinger and Heimrath, 2014) is used to perform finite element analysis (FEA) on generated designs, while Kuka|prc (Johannes Braumann, 2011) is implemented to generate robotic construction simulations and evaluate designs based on their constructability. The data obtained is saved as .xml or .txt files, which are used to model and form the parameter bounds of an environmental set of agents. The Python programming language is used to handle the calls to the different platforms, while a “federated” system architecture is used to relate the multiple software environments. Extensible Markup Language (XML) is deployed to control and manage the agents' properties and states, because it provides a flexible and adaptable information identification method that allows designing a customized markup language for almost any type of document (Shea et al., 2005). Figure 18 illustrates the overall system architecture, a description of the platforms used, and the corresponding relationships within this system.

Task 5: Develop a GUI to Allow Interactivity Between Designer and MAS
The ability to interact with the optimization routines visually permits considerable control by the designer over the forms of solutions generated by a stochastic optimization method, and also helps to avoid the algorithm getting stuck in local optima (Radford and Gero, 1980).
Radford notes that designers can improve the performance of stochastic algorithms through the interactive manipulation of parameters. In order to engage the designer and allow him or her to interact with the system, the researcher has developed “Termite,” an applet that operates within Processing but can also be called from the visual programming editor, Grasshopper, and allows for easier building of agent-based models within the 3D environment of Rhinoceros 3D. The Termite toolkit contains packaged agent classes that are programmed in Java and Python and have a wrapper for Grasshopper, which allows the designer to easily build agent-based simulations by combining different components and to visualize them in the viewport of Rhinoceros (Figure 20).

Figure 19 Prototype version of the developed graphical user interface for interacting with the MAS toolkit

To allow for more integration beyond the bounds of a specific 3D environment, the researcher has also started the development of a prototypical Java-based GUI, which is stand-alone and can call different types of software based on the design objective. The GUI allows the designer to: a) define input geometries, b) select available analytic solvers, and c) define heuristic functions that correlate geometry generation with the analysis using XML files. The designer can then initialize the search process and evaluate the results by visually assessing both the geometry and the corresponding performance. By integrating Parallel Line and Pareto Front plots in the definition of the simulation agent, the designer can select which analyses he or she wishes to correlate and choose how to visualize them. By graphically showing correlations between the geometry of the generated designs and their impact on the analytical results, designers can gain insights into how the adjustment of design parameters affects the performance of the design.
After gaining insights into the parameter range, the designer can call one of the heuristic algorithms to run iteratively and present design alternatives that meet the pre-defined design performance targets.

Figure 20 Flowchart showing the description of a design in Grasshopper. The shaded boxes represent agent classes of the MAS framework and white boxes represent input data

Part IV Design Experiments

4. Design Experiments
To demonstrate, test, measure and iterate upon the MAS framework, a series of design experiments is pursued. The selected design experiments have been built around the literature review and primarily address two crucial gaps found in it. First, swarm intelligence methods have been used in architectural design for generating bottom-up design simulations, but there are few examples of how such techniques can be used for developing and coordinating swarms of robots in the physical world in ways that lead to emergent structures. The second gap is that most of the preceding work in design has used agent-based modelling without considering environmental parameters such as weather data, resulting in agents operating in a generic 3D box or on surface geometries. Therefore, the experiments begin by focusing on how emergent structures can be studied by empirically observing physical robots, and on using this data to inform abstract agents and digital swarm simulations. The experiments also focus on developing agents that represent building components whose environment is composed both of geometry (i.e. building envelope) and of data (i.e. weather data), and that address design problems which traditionally require the close collaboration of architects and engineers, such as façade design and shell design.
The methodology of the experimental designs has been to work in an incremental fashion toward the overall objective, which is to be able to measure improvements in the design process and in design outcomes, in formal terms as well as in design performance terms. In the case studies, a methodological decomposition of the design problems is performed across different agencies, in which a number of design parameters are coupled with structural and environmental performance targets as well as process and constructability metrics in order to drive design exploration. Specifically, the design experiments include the following:

1. the design exploration of emergent structures, using agent-based modelling simulations and behaviors that are developed based on empirical observations of a physical swarm of low-level robots, rather than generic ones. This study develops an experimental testbed to investigate how real-life constraints can be used to inform the digital simulations of abstract agents and how such simulations can be used to provide better insights into how to coordinate large swarms of robots to construct functional structures;
2. façade design exploration using agents that represent façade panels and that are informed by environmental simulations. Façade panels for an office building are probabilistically generated, and their geometry and placement on the façade are informed by daylight and energy analysis; and
3. shell structure design exploration using environmental parameters. Free-form shell structures with different topologies are generated and their shape is augmented based on environmental and structural analysis.

Obviously, these experiments capture only a small portion of what architectural design problems entail; the reason for this selection is that such problems are more constrained and therefore more suitable for being described computationally and for drawing conclusions.
The design experiments are developed in such a way that they allow the researcher to measure the capacity of the framework to generate design alternatives by coupling, in the early design stage, design parameters with functional requirements that relate to design performance. Through the synthesis of the experiments, the research begins to: apply the developed framework to different design problems and evaluate it; point to the successes and failures of the application of agent-based modelling approaches for early stage design, and also its implications for autonomous construction; and allow the researcher to begin to draw conclusions on the affordances assumed through the behavioral modeling approach in combination with analytical methods, as well as on necessary refinements and future directions.

4.1. Experimental Design 1: Bridging Digital Agent-based Modelling and Simulation with Physical Robotic Systems (Agents)
Reviewing the literature in Section 2.3.5 indicated that the most common paradigm of agent-based modelling in the field of architecture is that of swarm intelligence. Despite the fact that there have been a number of design projects using this ABMS approach as a generative mechanism, in most cases the agents are abstract and have no physical representation. A brief summary of a typical ABMS design process would be: designers implement an existing agent-based (flocking) simulation and, in the majority of cases, provide a custom geometric environment that triggers the bottom-up generation of a design. The designer then manipulates the agents' behavior based on the simulation and their design intention, and at some point freezes the simulation and proceeds to the design materialization following a top-down approach.
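The core of such a flocking simulation is a simple per-agent update loop. The sketch below is a minimal, one-dimensional, cohesion-only illustration of that bottom-up mechanism (real boids models add separation and alignment rules); it is not the code of any project discussed here:

```python
# Minimal illustrative flocking-style update: each agent steers toward the
# average position of its neighbors (cohesion only). All constants are
# illustrative assumptions.
def step(positions, radius=2.0, rate=0.1):
    new_positions = []
    for i, p in enumerate(positions):
        neighbors = [q for j, q in enumerate(positions)
                     if j != i and abs(q - p) <= radius]
        if neighbors:
            center = sum(neighbors) / len(neighbors)
            p += rate * (center - p)      # steer toward local center of mass
        new_positions.append(p)
    return new_positions

# Agents drift toward one another over repeated steps: the global clustering
# emerges from the purely local rule, with no top-down control.
pts = [0.0, 1.0, 2.0]
for _ in range(50):
    pts = step(pts)
spread = max(pts) - min(pts)
```

The global clustering is not encoded anywhere; it emerges from the local rule, which is exactly the property the bottom-up generative approach exploits.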
However, in complex adaptive systems these behaviors are a direct result of physical constraints and forces, as in the case of termites, where the dynamic feedback between environment and behavior is what gives shape to the emergent structure of the mound. The first experiment addresses this gap by transcribing empirical observations and analysis of a robotic system into behaviors and simulations, and investigates how to:

1. develop a reciprocal relationship between digital swarm simulations and a swarm of low-level robots that operate in the physical world. We study how we can inform the behaviors of agents in digital simulations by developing an experimental setup that allows us to observe a swarm of robots in the physical world and use the collected data to inform our digital simulation model (Figure 21), and
2. explore whether and how the relationship between the environment and a swarm of active agents (in this case, low-level robots) and passive agents (building blocks) can result in self-organizing structures that present desirable characteristics for architecture and construction.

The hypothesis of the experiment states that by observing how such a simple robotic system operates in physical space and how the locomotion (behavior) of the robots is affected by the environment and the blocks, the researcher can develop agent-based simulations with behaviors informed by physical observations. The capacity to achieve collective behaviors without the need for top-down control is particularly valuable in the context of robotic construction, where robustness, adaptation and scalability are of great importance. The objective of this experiment is to mechanically embed intelligence in the bodies of the robots and parts so as to allow the formation of emergent structures that exhibit desirable characteristics over time (i.e. stability, clustering, collective transportation) without extra software control.
Embodied Swarm Behavior
In this experiment, a simple type of walking robot, also known as a bristlebot, is used as a hardware platform to investigate how different behaviors can be mechanically encoded in the body of an agent by changing the geometry/shape of the robots and the environment they are operating in. This experimental design is based on an interdisciplinary workshop that included architects, computer engineers and roboticists working on collective construction (Andréen et al., 2016). The bristlebots (active agents) operate in a 2D arena filled with differently shaped building blocks (passive agents/parts). The geometry of both the robots and the parts can be parametrically adjusted in order to test different basic behaviors (push brick).

Figure 21 Framework diagram illustrating the design phases, parameters and tools developed

Design Process
Through physical experimentation and video analysis, the relationships between the properties of the emergent patterns (size, temporal stability) and the geometry of the robots/parts are studied. This work couples our MAS framework with a robust robotic system and a set of simulation and analysis tools used for generating and actualizing emergent 2D structures. By controlling the ratio of blocks (i.e. number of blocks * area of each block) in relation to the area of the arena (i.e. panel area), the researcher can define the level of occupancy in the arena, while the interaction of the bristlebots with the blocks results in the formation of emergent structures. The design process consists of the following steps (Figure 21):

1. designing custom bodies for a swarm of simple and robust robots, namely bristlebots,
2. simulating their motion in a 2D environment filled with passive blocks of a certain shape,
3. investigating how the relationship between the robots' and blocks' geometry affects the formation of emergent structures,
4. performing experimental runs with 20-100 robots and 2 different types of block geometries in 3 different types of environments (circular, triangular, rectangular), and
5. tracking the motion of the robots using computer vision algorithms and comparing the clustering with the simulated results.

As a first step, the researcher implements a small swarm of bristlebots and investigates the level of clustering that different geometries of the parts can achieve and how the robot geometry can affect the manipulation of the parts and the locomotion of the robot. Different shapes of robots and parts are studied along with different types of boundary geometries. Using Kalman filtering, the positions and motion paths of the bristlebots and the parts are identified, as well as their level of clustering (Kalman, 1960). By collecting data from multiple experimental runs, the relationship between the shape of the robots’ bodies and the parts is analyzed, with the objective of investigating how these geometric features can affect the formation of 2D structures with desirable characteristics, such as the area of clustering, cluster stability, etc. In the second stage, in order to gain more control over the robotic swarm, the researcher develops his own bristlebot design (hprbot), which is equipped with a photo-resistor, a hall sensor, and a microcontroller that allows for remotely controlling the bristlebot. The experimental setup and design process are described next.

4.1.2.1. Experimental Setup
The experimental setup is based on a workflow that was initially implemented at the Smart Geometry Conference 2016 (Andréen et al., 2016). It includes a reconfigurable two-dimensional environment (arena) in which simple robots (agents) move and push around passive building blocks (parts) for specific time intervals (5-45 minutes) or until they reach a state of equilibrium (Figure 22).
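The Kalman filtering mentioned above estimates each robot's position and velocity from noisy camera measurements. The following is a minimal one-dimensional constant-velocity sketch of that update in NumPy; the noise covariances and measurement sequence are illustrative assumptions, not the values used in the actual tracking pipeline:

```python
# Minimal 1D constant-velocity Kalman filter sketch for tracking a robot's
# position from noisy camera measurements. Noise covariances are illustrative
# assumptions, not the parameters used in the actual experiments.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])              # we only measure position
Q = np.eye(2) * 1e-4                    # process noise (assumed)
R = np.array([[0.25]])                  # measurement noise (assumed)

x = np.array([[0.0], [0.0]])            # initial state estimate
P = np.eye(2)                           # initial covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                          # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Feed measurements of a robot moving one unit per frame; the estimate
# converges toward position ~10 and velocity ~1 after ten frames.
for t in range(1, 11):
    x, P = kalman_step(x, P, np.array([[float(t)]]))
```

For a 2D arena the state simply grows to [x, y, vx, vy] with the same predict/update structure.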
The shapes of the arena in which the robots operate can be considered as the boundary of an architectural panel to be designed. The design parameters studied at this stage include the shape of the boundary, the geometry and number of the robots, the shape and number of blocks, the ratio of area to number of blocks (i.e. degree of transparency), and the runtime.

Figure 22 Diagram illustrating the established experimental testbed

The motion of both agents and blocks is tracked using a high-definition camera (GoPro) and analyzed using a kit of custom-developed computer vision tools (Python, OpenCV, Matplotlib), which allows for the performative evaluation of the emerging structures and behaviors. In the first stage, the bristlebots used are cheap and commercially available (www.hexbug.com/nano) and can move at an approximate speed of 11 mm/s by combining a simple vibrating motor and 6-14 angled soft legs. The way the bristlebots move is affected by the number and geometry of the bristles, the friction of the surface and the obstacles they encounter. Depending on the variability of the ground surface and the obstacles they encounter, the bristlebots can move relatively straight or follow random trajectories. Since there is no microcontroller on the hexbugs, as a first step the bristlebots are “programmed” by altering their body geometry and the environment in which they operate. The body geometry of the bristlebots is altered by adding covers with variable geometry. Initial experiments showed that big and/or front-heavy covers severely alter the locomotion of the bristlebots and consequently the patterns they generate (Andréen et al., 2016). Based on this first observation, the researcher developed different covers/bodies to test how bristlebots grab and move blocks around the arena. In conjunction with the bristlebots' cover geometry, the researcher developed alternative geometric configurations for the passive parts/blocks (Figure 23).
For simplicity, the experiment starts by selecting two primitive shapes, the circle and the hexagon, and developing different types of blocks by topologically altering these shapes. The bristlebots operate in an environment (arena) that can be reconfigured in different shapes.

4.1.2.2. Simulation Tool
An agent-based generative tool was developed in order to explore different design alternatives based on the physical setup described above. Unlike existing agent-based tools that implement swarm behaviors but are not connected to the physical world, in this case the tool is modelled after the researcher's experimental setup and is used as a platform to explore more quickly how different boundary conditions and agent geometries can lead to different self-organizing configurations. The tool was implemented in Processing (Reas, 2007), using Box2D as a physics engine, and allows for fast iterations and design alterations, which can then be tested physically. Additionally, the tool enables the designer to test the scalability of different configurations. It also enables the observation of the global impact of local rules when a large number of agents interact in the environment. The aim is to use the data from the physical experiments to inform the simulation so that it matches the behavior of the robots and their interactions in the physical world. By doing this, the global behavior of the system can easily be explored. The tool consists of a 2D environment populated with active agents and building blocks. The designer can parametrically alter the design of both the boundary geometry and the robot/part geometry using the Dynamo visual scripting editor (DynamoBIM.org) and import them directly into Processing as a .json file.
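In the simulation, each robot's vibrating-motor propulsion can be modeled as an applied forward force with noise. A minimal Python sketch of such an update loop follows; the constants (force, noise, friction) and the simple Euler integration are illustrative assumptions, independent of the actual Box2D implementation:

```python
# Illustrative sketch of a robot agent propelled by a constant forward force
# with heading noise, integrated with simple Euler steps. Constants (force,
# noise, friction) are assumptions, not the calibrated simulation values.
import math
import random

class BristlebotAgent:
    def __init__(self, x, y, heading, seed=0):
        self.x, self.y = x, y
        self.heading = heading           # radians
        self.rng = random.Random(seed)

    def step(self, force=1.0, noise=0.05, friction=0.8, dt=0.1):
        # Variable ground friction is approximated by jittering the heading.
        self.heading += self.rng.gauss(0.0, noise)
        speed = force * friction * dt    # crude stand-in for force integration
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)

bot = BristlebotAgent(0.0, 0.0, heading=0.0)
for _ in range(100):
    bot.step()
# With small heading noise the bot drifts mostly along its initial heading;
# larger noise values produce the random, wandering trajectories observed.
```

Calibrating the noise and friction constants against the video-tracked trajectories is what ties the simulated locomotion back to the physical robots.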
Figure 23 Diagram with main design parameters of the robots, parts and the environment
To achieve a realistic simulation of the locomotion, the designer can control, apart from the geometry, the: a) ground friction, b) object-object friction, c) object density and d) coefficient of restitution. The vibrating motor which propels the robots forward is modeled as a vector force applied to the active agents, with some noise to account for the variable ground friction that affects the trajectory of the robots. The robots are placed randomly in the arena, and their position (x,y coordinates) at specific time intervals is exported as a .csv file for visualization purposes. The video data is processed using blob detection algorithms implemented with OpenCV, an open source computer vision library for the C++ and Python languages with classes for analyzing video in real time (Bradski and Kaehler, 2008). The aim of the analysis is to see how the geometry of the passive blocks, the robots and the shape of the boundary affects: a) the trajectory of the robots in the arena, b) the formation of larger clusters of blocks through physical interlocking and c) the collective transport of blocks by the robots. Additionally, the video analysis is used to calibrate all the above parameters (a-d) and improve the accuracy of the simulation. Changing the geometry of the robots affects how they interact with the passive blocks (i.e. grab and push one block at a time) and also with other robots (i.e. forming chains). By fine-tuning the geometric parameters of the blocks (parts), the formation, size and stability of the clusters can be controlled to a certain level.
4.1.2.3. Design of the Robot (Active Agent)
By changing the shape and center of mass (body position) of the body which is fitted onto the hexbug, different speeds and levels of agility can be achieved.
The robots' motion is influenced by the weight of their bodies and by the shape, weight and total number of parts in the arena. The main design parameters that were found to affect the locomotion of the bristlebots are illustrated in Figure 23. Additionally, simple behaviors can be encoded mechanically by modifying the robot body. For example, by introducing a U shape (grabber) at the front of the robot body, it can easily grab and push a brick.
Figure 24 Taxonomy of robot and part designs
Different robot body designs were developed and tested in relation to the shape of the parts, and a taxonomy was created, which is shown in Figure 24. Thin acrylic bodies with grabbers did not significantly reduce the robots' speed, provided directionality and enabled one-to-one engagement with the passive parts (push behavior). Additionally, by introducing a tail at the end that has a complementary shape to the front grabber, the bristlebots demonstrated a cohesive behavior, during which they formed chains of 3-4 robots by pushing each other. Robot bodies without grabbers would not engage with the part for long and would demonstrate a wandering behavior by diverting their trajectory, depending on which obstacle they encountered. Cardboard robot bodies were heavier and slowed down the bristlebots significantly, but they provided more stability and direction to the movement and the robots demonstrated a collective transportation behavior. An additional design parameter which emerged was the available energy in the system. Figure 26 shows a plot of the robots' speed in relation to the robots' design parameters. When the bristlebots' batteries were charged, the robots could traverse the arena faster and engage with parts, but once the batteries discharged the motor's rotation (rpm) slowed down and the bristlebots would often move in circles without sufficient force to push any parts.
4.1.2.4.
Design of the Part (Passive Agent)
As far as the parts are concerned, two different types of geometries were tested along with their topological variations: a) one set of designs based on a hexagon, with four different types of block designs, and b) a second set of geometries based on circles, with four different types of blocks created through the topological variation of 2, 3 and 4 circles (see Figure 24). The geometrical and topological variations aimed at making the parts form bigger, more stable clusters and at creating connection points with higher friction. The main design parameters are the weight of the parts and the parts' ground contact surface, which affects their friction. Moreover, the number and length of the edges of the parts affect their connectivity with other parts. Hexagonal parts form more stable structures due to the friction of the sharp edges, while circular parts did not cluster for long periods of time. However, circular parts are easier for the robots to transport as they provide multiple grabbing points. Lastly, the topological transformations of the shapes helped the creation of bigger clusters but constrained the flexibility of the robots to move around the arena. The researcher calculated the size and number of parts in clusters over time to see how the boundary of the arena and the robot-part geometry affect the formation of clusters (self-organizing patterns).
4.1.2.5. Design of the Environment
Another major parameter that affects the behavior of the bristlebots is the shape and size of their environment. Three basic shapes were tested, namely a rectangular, a circular and a triangular form, to explore the impact of the global geometry on the behavior of the bristlebots (Figure 25). Apart from the shape, the texture of both the sidewall and the floor of the environment influenced the locomotion of the robots significantly. For example, by adding more textured materials (i.e.
fabric) the bristlebots could be directed along specific paths without an extra control mechanism until they reached an endpoint. An additional parameter that was considered was the introduction of fixed parts (islands) in the arena; these reduced the variability of the clusters and also, depending on their number (1-3), constrained the motion of the swarm to specific areas. Lastly, although the researcher only investigated flat 2D arenas, different sloping configurations were tested: if the slope was bigger than 8.5% the bristlebots could not move the parts, while the bristlebot itself was not able to move forward if the floor slope was above 15%.
Figure 25 Geometric configurations of different environments where the active agents (bristlebots) interact with the passive agents (parts)
Results and Analysis
This first design experiment bridges the gap between agent-based swarm simulations in the digital realm and a physical manifestation of a (swarm) robotic system. An experimental setup was presented, allowing the researcher to study and develop agent behaviors based on the empirical observation of a robotic system. The initial experiment explored the generation of 2D self-organizing patterns using a swarm of basic bristlebots, which are commercially available (hexbugs).
Figure 26 Plots showing the relationship of the bristlebots' design parameters with regards to velocity
Working with a simple and cheap robotic platform did not allow the robots to perform any sophisticated tasks, but it enabled the researcher to explore the scalability of this approach and physically test the interactions of a large number of robots. Although more experiments need to be performed in order to reach statistical significance, the first indications show that the behaviors persist even if the researcher increases the number of robots 10-fold (from 20 to 200 robots in the arena).
Approximately 200 experimental runs were conducted in the 3 different arenas, testing different geometries of robots and parts. The experimental runs were analyzed by measuring the number and size (area) of the clusters over time using the computer vision techniques mentioned in the previous section. The experimental results in Figure 26 show that there is a direct relationship between the velocity and the design parameters of the bristlebots. This correlation between the basic selected design parameters and the actual velocity of the robots will allow the researcher to build a more reliable simulation of the agents. Figure 27 plots the size and number of part clusters over time for the experimental runs, showing how the boundary of the arena and the robot-part geometry affect the formation of clusters (self-organizing patterns). This analysis helps researchers develop an understanding of potential future applications of such a system, such as the collective transportation of parts or the assembly of 2D structures. By analyzing the graphs, we can observe that the hexagonal components form larger structures than the circular components and that the emergent clusters remain together for longer durations. We can also observe that in the triangular arena the robots formed bigger and more stable clusters (Figure 27 d, e, f).
Figure 27 Plots of the number and size of part clusters over time in 6 experimental runs.
The top row shows results of the experimental runs with circular parts in the 3 different arenas, while the bottom row shows results of the runs with the hexagonal parts.
Figure 28 Images showing different robot behaviors: a) a single robot navigating a path defined by variation of friction, b) robots forming a chain by pushing parts in synchronous motion, and c) emergent clustering of hexagonal robots and parts, and collective transport
In terms of the robot's shape, designs with "grabbers" proved to be reliable in consistently engaging with one part at a time and pushing it forward. Robot designs that had the same geometry as the parts but did not have grabbers became part of the assembly and moved together with the cluster of parts. However, in more than 50% of the runs these robots clustered only with other robots, and the lack of directionality of their shape resulted in one robot counteracting the motion of another, so that they moved in circles without advancing. Regarding the geometry of the environment, the initial experiments used one point of entry that was manually closed once all the robots were in the arena. If the entry point was left open, bristlebots would eventually exit the environment. Different initial configurations for the parts were also tested, namely: randomly placing parts in the whole arena, placing parts in one cluster in the middle of the arena, and placing parts at the edge of the arena. The introduction of fixed parts (obstacles) in the arena was also tested and led to the quick formation of clusters; depending on the number of obstacles (1-3), this significantly constrained the motion of the swarm. The robots tended to gather towards the boundaries and were able to trace consistent trajectories following the boundary of the arena independent of their geometry. Sharp angles (θ<90º) tended to trap the robots.
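The cluster measurements behind these plots can be illustrated without OpenCV. The sketch below labels 4-connected blobs in a thresholded frame and derives the per-frame cluster count and mean size; binary 2D lists stand in for video frames, and all names and the `min_area` threshold are assumptions for illustration.

```python
def blob_areas(frame):
    """Return the pixel area of every 4-connected blob in a binary
    frame (a 2D list of 0/1), a stand-in for OpenCV blob detection."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] and not seen[r][c]:
                stack, area = [(r, c)], 0      # flood-fill one blob
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

def cluster_series(frames, min_area=2):
    """Per-frame cluster count and mean cluster size, ignoring blobs
    smaller than min_area (an isolated part is not a cluster)."""
    counts, means = [], []
    for frame in frames:
        clusters = [a for a in blob_areas(frame) if a >= min_area]
        counts.append(len(clusters))
        means.append(sum(clusters) / len(clusters) if clusters else 0.0)
    return counts, means
```

The real pipeline performs the thresholding and blob extraction on video frames with cv2; only the counting logic is reproduced here.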
By controlling a set of input parameters, namely the population size, the shape of the parts, the shape of the robots, the shape of the arena (environment) and the materials, the researcher was able to test which parameters have a greater impact on the emergent structures. Control over the locomotion of the robots can be achieved without global coordination, either by developing specific body types or by introducing “soft” boundaries which have a higher level of friction (i.e. textile). It was demonstrated that by changing the agents' bodies, the shape of the parts, the ratio of parts to robots inside the arena, and the boundary condition of the environment where the agents interact, the researcher can mechanically encode basic behaviors in the robots (agents) without the need for extra programming. The agents' (robots') behaviors are directly connected to the geometry of the robot and are as simple as: engage with a part, push a part, move towards an attractor, form a chain (Figure 28).
4.2. Experimental Design 2: Agent Based Façade Design in Office Buildings
Façades are among the most complex architectural systems, combining aesthetic, structural, environmental and construction concerns (Bechthold et al., 2011). Building façades, in large part, determine the amount of available direct and indirect natural lighting in the building (Bechthold et al., 2011, Reinhart et al., 2006). Since lighting accounts for 20-25% of the total electrical use in buildings, and more specifically 30-50% in commercial buildings (Ander, 2003, Phillips, 2004), the researcher believes that by reconsidering the performance of the façade, architects can improve both buildings' energy efficiency and occupants' comfort. Therefore, in this experimental design, the focus is on the generation of architectural façade designs.
The experimental design investigates how façade designs that improve the design and performance of a building can be created by combining a) environmental analysis data, specifically solar radiation and luminance, with user preferences for light intensity within the office environment, and b) robotic simulations. The key assumption is that a given architectural design problem can be decomposed among agents which cater to different aspects of the problem under consideration in the early design stage. In particular, the implementation of separate agent classes as presented in the previous section is proposed. Five different classes of agents are implemented, with actions, properties, states and goals. These agent classes have goals related to different steps of the design process, namely: a) synthesis: generative agent, b) analysis: specialist agent, c) evaluation: evaluator agent, d) simulation: construction simulation agent, and e) coordination: coordinator agent. Each type of agent relates to a different design domain and communicates with a different type of software. By defining different agent classes and behaviors and selecting a design scheme depending on a combination of the designer’s experience and predictions about the performance goals of the design (i.e. based on previous analytical data), the solution space can be effectively reduced, allowing the exploration of multiple design solutions. This experiment is developed in two stages: initially it examines the generative aspect of design, in which the agents act autonomously; then it tests the optimization of generated outcomes, in which agents act collaboratively and negotiate to find optimal solutions. First, a set of generative agents and behaviors are modeled, based on a given design site’s location and orientation, a building façade bounding context, and designer-defined parameters regarding the building's components (i.e. length, width, thickness, type).
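As a structural sketch only (not the dissertation's implementation), the five agent classes might be laid out as below; the method bodies are placeholders standing in for the real generative, analysis, evaluation, construction and coordination logic, and every name and value is an assumption.

```python
class Agent:
    def __init__(self, goal=None):
        self.goal = goal
        self.state = "idle"

class GenerativeAgent(Agent):          # a) synthesis
    def act(self, parameters):
        return {"design": dict(parameters)}   # produce a candidate design

class SpecialistAgent(Agent):          # b) analysis (e.g. daylight)
    def act(self, design):
        return {"daylight": 0.6}       # stand-in analytical value

class EvaluatorAgent(Agent):           # c) evaluation against goals
    def act(self, analysis):
        return analysis["daylight"] >= self.goal

class ConstructionAgent(Agent):        # d) construction simulation
    def act(self, design):
        return {"constructability": 1.0}

class CoordinatorAgent(Agent):         # e) coordination of one loop
    def act(self, agents, parameters):
        design = agents["generate"].act(parameters)
        analysis = agents["analyze"].act(design)
        return agents["evaluate"].act(analysis)
```

The point of the decomposition is that each class talks to a different tool (generation, simulation, evaluation) while the coordinator closes the loop.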
Initially, the agents acted autonomously and developed design alternatives that satisfied the local rules and constraints from the geometric domain, avoiding specific areas reserved for window openings and views, and collision checking for constructability. During the second loop, the designs were analyzed by a set of specialist and user preference agents that communicated their data back to the generative agents in order to adjust the parameters and regenerate design alternatives based on specific user preferences and performance goals. Two case studies are presented: in the first case the specialist agent uses daylight simulations to evaluate each design, and in the second case the specialist uses a robotic simulation to evaluate each design. The objective of this experimental design is to demonstrate that: a) stochastic optimization can be combined with agent-based architectural modeling to enable design exploration, b) analytical values and construction simulations can be used to inform and steer the agents' behavior towards the generation of better performing design alternatives, and c) heuristic functions that are not coupled to specific geometrical features (the shape of openings) but rather to more abstract relationships (glazing ratio, number of openings) can be developed and used for multiple design cases in an early design stage.
Figure 29 Diagram illustrating the passing of the problem into the system and its decomposition into subdomains including design generation, simulation, analysis and evaluation
Façade Design Exploration Using Heuristic Algorithms
In order to drive the behavior of the generative agent towards producing design alternatives that meet the design objectives, the researcher coupled its input parameters with the analytical values obtained by a specialist agent.
At each iteration, the specialist agent was responsible for performing a daylight analysis and communicating the analytical values back to the generative agent. In this case, there are multiple analyses, and a weighted value is attached to each design parameter that indicates the impact of that design parameter on an analysis. For each of the parameters that are updated, a credit (i.e., +1) is attributed to the agent if the value obtained at the sensor point with the highest impact is close to the target value. If the value obtained is further from the target value, a penalty is attributed (i.e., -10). Based on this credit or debit, the generative agent can decide how to update a design parameter. Initially, two basic local search algorithms were implemented: hill climbing and simulated annealing. These were chosen in order to test two fundamental heuristic algorithms that have proved applicable to a wide range of problems. For each of the approaches, the researcher developed a custom heuristic function that associates a design feature (i.e. window position on a façade) with one or more analytical results (i.e. the amount of daylight entering a space). In order to draw conclusions, the researcher evaluates how the different algorithms performed vis-à-vis a) generating unique solutions each time they run, and b) operating on a small versus a large sample size. In order to validate the results, the researcher reduced the problem to finding optimal opening positions in a generic façade surface and linearly searched (brute force) the whole solution space. The researcher then observed the relationship between the parametrization of the design features and the analytical results and attempted to use this information to more efficiently guide the search, based on this relationship and the agents' performance in each of the approaches discussed below.
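The simulated-annealing variant of this local search can be sketched as follows, minimizing the distance between an analytical value and its target. The toy `evaluate` function stands in for the daylight simulation, and the cooling schedule, step size and all names are assumptions for illustration.

```python
import math
import random

def anneal(evaluate, target, x0, steps=300, t0=1.0, rng=None):
    """Minimize |evaluate(x) - target| over a single design parameter
    by simulated annealing; returns the best parameter value seen."""
    rng = rng or random.Random(0)
    cost = lambda p: abs(evaluate(p) - target)
    x = best = x0
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-9      # linear cooling schedule
        cand = x + rng.uniform(-1.0, 1.0)       # perturb the parameter
        d = cost(cand) - cost(x)
        if d < 0 or rng.random() < math.exp(-d / t):
            x = cand                            # accept (sometimes uphill)
        if cost(x) < cost(best):
            best = x
    return best

# Example: find a parameter p whose (fake) analysis value p*p is close
# to the target 9 -> best should approach |p| = 3.
best = anneal(lambda p: p * p, target=9.0, x0=0.0)
```

Hill climbing is the same loop with the uphill-acceptance branch removed; annealing's occasional uphill moves are what let it escape local optima.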
Daylight Metrics and Design Performance Goals
Analytical values calculated by the specialist agents were used to steer the behavior of the generative agents. The fact that different disciplines focus on different aspects of using daylight makes it difficult to evaluate a design strategy and therefore to provide “goal” values. To enable a comprehensive understanding of the tradeoffs between daylighting performance and how different design features affect it, five different types of daylight metrics are considered. Of those, three are already established daylight metrics and relate to a) illuminance levels, such as daylight factor analysis (DFA), b) energy consumption with regards to artificial lighting, such as continuous daylight autonomy (CDA), and c) illuminance levels with regards to lighting preferences defined by the building's occupants, such as useful daylight illuminance (UDI). Reinhart provides an in-depth description of the calculation of the above metrics and their integration in sustainable building design (Reinhart et al., 2006). Two additional metrics were introduced to calculate the maximum light variance and the level of light diffusion within a space. The variance was calculated by selecting the minimum and maximum light values over the specified time across the office space, and the level of light distribution was measured by comparing the daylight values between neighboring sensor points. Last but not least, to help the designer build a holistic understanding of how specific design decisions about the amount of daylight affect the energy efficiency of the building, the researcher performed an energy analysis to measure how the heating and cooling loads of the office space change based on the different façade alternatives presented by the agents.
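Two of these metrics can be sketched at a single sensor point, assuming an hourly illuminance series in lux and designer-supplied thresholds. This is a simplified reading of the published definitions (CDA awards partial credit below the target; UDI counts hours inside a preference band), not the simulation engine's code; the threshold values are illustrative.

```python
def continuous_daylight_autonomy(lux_series, target=300.0):
    """CDA at one sensor point: full credit when lux >= target,
    partial credit lux/target otherwise, averaged over the hours."""
    credits = [min(lux / target, 1.0) for lux in lux_series]
    return sum(credits) / len(credits)

def useful_daylight_illuminance(lux_series, low=100.0, high=2000.0):
    """UDI at one sensor point: fraction of hours whose illuminance
    falls inside the user-preferred [low, high] band."""
    hits = [1 for lux in lux_series if low <= lux <= high]
    return len(hits) / len(lux_series)
```

In the experiment these values are produced per sensor point by the Radiance-based engines behind Ladybug/Honeybee; the functions above only illustrate the arithmetic of the two metrics.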
Figure 30 Experimental design setup that describes the environment, design parameters and the heuristic function that couples the extrusion of each façade panel with the analysis plane based on the degree (percentage) that each panel affects each specific virtual sensor point
All the metrics have been calculated using simulation software developed by the Department of Energy (DOE), as accessed via the Ladybug and Honeybee tools. The designer needed to provide some specifications for the experiments. For the CDA the designer defined the minimum acceptable lux values, and for the UDI an upper and a lower boundary goal were defined. The designer also specified which periods of the day and year she was interested in analyzing in the generated design: i.e. 9:00 am-12:00 pm, 9:00 am-5:00 pm, and winter, summer, spring, autumn. The designer also defined the size and height of the analysis plane and a specific resolution that was divided into a number of sensor points (in this case 2,623 points). At each sensor point, for the DFA the researcher calculated the amount of incident daylight in lux, and for the CDA and UDI a percentage was calculated based on whether the amount of daylight provided illuminance within the user-specified range for a specific time period (part of the day, part of the year). For calculating the final goal based on the illuminance on the sensor plane, the researcher averaged the values received for the CDA and UDI at each sensor point, accepting a configuration when the averaged value falls within a tolerance band of the target: goal(s) = (CDA(s) + UDI(s)) / 2, with goal(s) within target ± ε at each sensor point s.
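The goal computation described above (averaging CDA and UDI per sensor point and testing against a target band) might be sketched as follows; the tolerance value and all names are assumptions for illustration.

```python
def sensor_goal(cda, udi, target, tol):
    """cda, udi: per-sensor-point percentages (0-1). Returns the mean
    of the averaged per-point scores and whether that mean falls
    within target +/- tol."""
    scores = [(c + u) / 2.0 for c, u in zip(cda, udi)]
    mean = sum(scores) / len(scores)
    return mean, abs(mean - target) <= tol
```

In the credit/penalty scheme described earlier, the boolean result would translate into a +1 credit (inside the band) or a penalty (outside it) for the generative agent.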
Figure 31 Design of the experiment for the simulation of the robotic fabrication segmentation, positioning, reachability, and collision parameters of the construction agent
The office space was analyzed for each season by simulation and the results were used to a) inform the position of the openings, b) change the probability of placing an agent type, and c) change the depth of the façade.
Façade Design Optimization Using Robotic Simulations
Apart from the analytical values relating to daylight performance, and to add another layer of complexity and enable a comprehensive understanding of the tradeoffs between design generation, daylighting performance and constructability, the researcher describes a class of construction simulation agents. This class of agents is tasked with simulating the robotic construction of the generated designs and evaluating them based on their constructability. To develop the robotic simulations, the researcher assumes an offsite robotic construction process using an industrial robotic arm. Based on that assumption, a number of constraints emerge that relate to the working volume of the robot, the positions of the robot, pickup locations, and robot self-collisions. These constraints are used to develop a heuristic function which is used by the construction simulation agent to evaluate each design alternative. The list of measurements and definitions in the heuristic function includes: i) Pos(x): the number of positions needed to assemble the whole façade and the maximum number of panels (Panels(n)) that the robot can place from a given position. The best position is the one from which the robot can place the maximum number of panels; ii) Col(j): the number of collisions between the robot and the panels already placed; iii) Sing(k): singularity positions that the robot can take while trying to reach a point in space.
Singularity positions may result in self-collisions and therefore need to be avoided; iv) D(t): the sum of the travel distances from the pickup locations to the panel placements (Figure 31). Finally, the constructability function is defined to enable measuring the performance of the construction process and ranking the designs, and depends on the parameters defined in steps i-iv: Constructability = Panels(n)/Pos(x) − w·ΣSing(k) − ΣCol(j) − ΣD(t). The definition of the heuristic above was based on empirical knowledge gained via two robotic workshops held at ETH Zurich (Pawlofsky et al., 2012) and includes the following assumptions: more robot placement/calibration positions increase the construction time and thus result in a lower score, and singularity positions may damage the robot and are thus defined as further impacting the negative score. Based on a user-defined number of maximum collisions when placing the panels, the researcher eliminates (deletes) panels or alters the probability factor for each panel that is passed as input to the generative agent for the next iteration. Each generated design is analyzed based on its construction simulation, and the constructability score is used to a) change the probability of placing an agent type in order to avoid collisions, b) inform the position of the openings, and c) segment the design in ways that are better for its construction by the robot.
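One way the measurements i-iv could be combined into a single score is sketched below: panels placed per robot position raise the score, while collisions, singularities and travel distance lower it. The weights `w_sing` and `w_dist` are illustrative placeholders, not the calibrated values from the workshops.

```python
def constructability(panels, positions, collisions, singularities,
                     travel, w_sing=2.0, w_dist=0.01):
    """Rank a design for robotic assembly: reward panels placed per
    robot position; penalize collisions Col(j), singularity
    configurations Sing(k) (weighted higher, since they can damage
    the robot) and total travel distance D(t)."""
    score = panels / max(positions, 1)       # Panels(n) / Pos(x)
    score -= sum(collisions)                 # Col(j)
    score -= w_sing * sum(singularities)     # Sing(k), weighted higher
    score -= w_dist * sum(travel)            # D(t), scaled down
    return score
```

A higher score therefore means fewer repositionings of the robot, fewer dangerous configurations and less travel for the same number of panels.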
Figure 32 Schematic of simple building data model: relationship between components and object attributes (left) and table with local and global design parameters of the façade panel agent (right)
Figure 33 Diagram describing the specific design context of the building with (a) the design surface of one building bay façade module, (b) the design behaviors of the agent-based fenestration, and (c) the analysis surface with the virtual sensor points (d) of this experimental case study
Case Study A: Façade Generation for an Office Building in Los Angeles
Next, the researcher considered the design of a façade for an office space located in Los Angeles, California. The office space is to be located on the 4th floor of a 30-year-old commercial building, measures 50 square meters and faces south. The MAS toolkit was used for the generation of alternative façade designs on a single bay (12.00 x 3.00 m) of a generic office building geometry using three different types of panels. The system is applied to the generation of whole façade geometries, and the objective was to determine the optimal positioning of windows with regards to performance goals relating to daylight distribution and energy consumption. A basic design setup was defined with 4 global and 8 local parameters. The global parameters were: a) the location of the building, b) the input design surface, c) the orientation of the surface and d) the number of openings. The global parameters defined the geometric (surface domain) and data (weather data, sun position) environment of the agent. The local parameters included the following: a) generative angle, b) panel type, c) panel length, d) extrusion length, e) extrusion type, f) extrusion angle and g) clearance between the panels (Figure 32). The hypothesis of the design experiment is that by running daylight simulations on a base case in a given location (i.e.
Los Angeles), the researcher can couple the design parameters with the performance goals via agent behaviors. For instance, the extrusion position of the panels was coupled with the amount of daylight. By observing the relationships between the design parameters and the goals, the researcher can extract “interpolation” functions that allow the system to predict the performance of the generated design solutions on any given design surface (in the same location). The case study is divided into two sections: (1) the development of a generative agent-based façade paneling system that is optimized based on environmental performance analysis, and (2) the adaptation of designs based on specific user preferences that are collected and profiled based on how they relate to lighting conditions within the office space.
Figure 34 Rules of the generative façade panel agent(s) (E-H) and a typical generation process (I) shown in six sequential steps (from t=0 to t=T) including representative window openings
The collected light preferences are used as an input and a means to formulate goals that drive the system towards optimality. The lighting preference data was collected from 89 participants. Details about the collection of the user preferences used can be found in (Heydarian et al., 2015b). For simplicity at this stage, as well as to push the results towards more energy efficient designs, the researcher used the preferences of a single group of users who preferred more natural light (23% of all participants). Although this group does not reflect the most preferred lighting setting among the sample, it was selected in order to test the ability of the system to generate more energy efficient solutions as one of the initial validation experiments.
The increase of natural light, its distribution inside the office and the satisfaction of user preferences were defined as design requirements; the objective was to develop a design system that generated alternative façade designs addressing the environmental performance criteria of program and location. The building façade describes the design domain, while the number and size of the openings are described as areas where the components cannot be placed (Figure 33(b)). A construction sequence is then simulated by placing one component after the other sequentially, which implies fabrication and erection constraints. Each component type is based on a probability factor (explained in Section 3.6), and the components connect sequentially while ensuring that they do not self-intersect and that the whole façade remains an interconnected structure covering the whole design surface (Figure 33(b)). The interior of the office is analyzed through 2,623 virtual sensor points (Figure 33(d)) that measure light intensity within the space at height levels that the designer can specify (i.e., floor level, tabletop level). These virtual sensor points exist within the simulation environment of the analytical feedback loop. The generated designs are simulated and analyzed environmentally and their performance is then coupled with user preferences (Heydarian et al., 2015a).
4.2.4.1. Design Process
The designer must initially provide a geometry which describes the whole building envelope (massing model) or one part of it (one bay), as well as a file with the location and weather data. She or he then defines a number of desired openings, the building component and a basic generative mechanism (local design rules). A generative mechanism can include more or less complex design parameters, which in most cases pertain to the design problem.
In this design experiment, the design parameters are the following: panel type, angle, length, extrusion length, extrusion uniformity, and the maximum number of components. Through a GUI the designer can test and visually evaluate different aggregations of components on the façade surface. When satisfied, the designer can save the configuration in an xml file. This file holds the core design information and is then run for a number of iterations (defined by the designer), and the values are updated and optimized based on the analysis performed. Each panel is defined as an agent and has 3 different states and 8 design parameters (type, probability, angle, length, extrusion, extrusion type, extrusion angle, and clearance), as listed in Figure 32. The panels populate the design surface in different configurations (based on probability) while trying to avoid areas reserved for the openings (window, clear glazing). The number, size and relative position of the openings and the extrusion length are updated by the stochastic algorithm at each iteration. To be able to search for optimal alternatives, the designer defines: a) the type of analysis (i.e. DFA, CDA, UDI, etc.), b) the resolution of the analysis and c) the analysis period (i.e. daily, annual). He or she inputs user-collected data relating to user preferences and sets the targets for the heuristic function.
Figure 35 A set of evolutionary façade designs where the same agent panel is applied on different input surfaces
The optimality can be adjusted by the designer, and in this experiment it relates to the following goals: a) decreasing building energy use (annually) by increasing the amount of natural light available, b) providing more distributed light during the day (daily cycle), and c) meeting the levels of light preferred by users. Finally, the designer runs the system and selects a) the type of search method and b) the type of analysis output.
Once the system completes a cycle of iterations, it can suggest possible positions for openings based on the defined performance goals. Additionally, the designer can evaluate the results both aesthetically (geometry) and quantitatively (energy performance) through graphs that show the tradeoffs between different goals (i.e. amount of daylight and total heating load). Below, the four phases of the design process that happen within the system are described in detail. In the first phase, the generative agent iteratively grows 2D lines on the panel surface while trying to avoid areas that are reserved for specified window openings. In the second phase, the lines are transformed into 3D surfaces via extrusion, finalizing the window panel pattern. In the third phase, the position and size of the openings are changed, triggering the regeneration of panel pattern configurations. For each design generated, the design parameters that affect the environmental analyses the most, such as the depth of the panels, are kept constant. In the last phase, the aforementioned design parameters are altered in order to optimize the environmental analysis and converge towards the users' preferences. In the first phase, the parameters are: L, which defines the length of each line; p1, p2, p3, which are the probabilities of each agent's behavior; the connection angle between the lines; the maximum number of agents; and the number, size and position of the openings. In the second phase, the designer specifies d, the maximum extrusion length, and θ, the maximum offset in the vertical direction. Hence, the lines are not only transformed into 3D surfaces according to the length, but are also able to twist in order to better filter the light. In the third phase, the position and size of the openings are changed in order to find an optimal configuration for bringing more direct light into the space.
In the fourth phase, a new type of specialist agent is created for each of the selected analyses. We consider 3 different types of environmental analysis, and thus 3 specialist agents are implemented: a daylight factor analysis agent (DFAa), a continuous daylight autonomy agent (CDAa), and a useful daylight illuminance agent (UDIa). The designer specifies a utility "u" for each of the analyses performed and the user preference profiles, thus defining the most important analysis and adding a weighting factor to the agents' behaviors. The weighting factor is calculated as a percentage over all performed analysis types, depending on the performance target that the designer sets. All of these aspects affect the amount (measured in lux) and type of sunlight (direct or indirect) that enters the room, changing the illumination inside the space. As shown in Figure 33(I), in the first phase of the algorithm the agent starts at a user-defined initial point on the panel and performs a series of iterations. In each iteration, the agent grows one line from its current position and moves to the end of that new line (Figure 33(G)). The agent can grow three different types of lines according to three different behaviors: straight, left-curved, or right-curved, based on an angle (θ) as shown in Figure 33(G). At the beginning of each iteration, the agent picks its next behavior according to the (user-)specified probabilities p1, p2 and p3. However, the agent must also obey four constraints: (1) the new line must not intersect a previously constructed line, (2) the agent must not leave the boundaries of the given surface, (3) the agent must avoid specific areas reserved for openings, and (4) the number of generated agents cannot exceed a maximum specified by the designer (Figure 33(I)). If the probabilistically chosen behavior violates these constraints, a new behavior of left, right, or straight is selected until the behavior is valid.
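The probabilistic behavior choice with constraint re-sampling can be sketched as a minimal Python loop. The function and variable names are assumptions for illustration; `violates_constraints` stands in for the four geometric checks described in the text (self-intersection, surface boundary, reserved openings, agent budget).

```python
import random

# Illustrative sketch of the first-phase growth step: pick a behavior
# (straight, left-curved, right-curved) by the probabilities p1, p2, p3
# and re-sample until the constraints are satisfied.
BEHAVIORS = ["straight", "left", "right"]

def grow_step(probs, violates_constraints, max_tries=100):
    for _ in range(max_tries):
        behavior = random.choices(BEHAVIORS, weights=probs, k=1)[0]
        if not violates_constraints(behavior):
            return behavior
    return None  # no valid behavior found; terminate this agent

# Example: forbid left curves (e.g. they would cross a reserved opening).
choice = grow_step([0.4, 0.3, 0.3], lambda b: b == "left")
```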
More specifically, the agent checks the history of all previous behaviors as well as the life value of each agent; it then changes to the behavior whose ratio is furthest from the desired one according to the given probabilities p1, p2 and p3, which naturally induce a ratio. The above probabilities are scaled based on the life state of the agent, which is influenced by whether the agent was created in a valid area (i.e., if an agent is created in a non-valid area, its life state decreases significantly). This phase is terminated either after a pre-specified number of iterations is reached or once the maximum number of agents has been created. In the second phase, shown in Figure 33(E-H), the lines are extruded into 3D geometries. For each line, a length and angulation are chosen according to the following equations: d' = d * w and θ' = θ * w, where 0 ≤ w ≤ 1 is a weight given by the current sun radiation entering the panel at the position of the line. Hence, each line will have a different d' and θ' bounded by the preference of the user. The designer, according to the user preference, can specify two different types of extrusion: uniform or non-uniform (Figure 33(H)). The non-uniform case differs from the previous description in that the user can also specify a control point which affects the "degree" of the curves, generating the surface as shown in Figure 33(F). Finally, these parameters define the aperture a' between surfaces (Figure 33(E)), which in turn influences the amount and type of light that enters the space.

Figure 36 A set of evolutionary façade designs on a planar surface.
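The extrusion scaling d' = d * w, θ' = θ * w can be written out directly. The normalization of the radiation value into the weight w is an assumption for illustration; the dissertation only states that w lies in [0, 1] and is given by the current sun radiation at the line's position.

```python
def extrude_params(d_max, theta_max, radiation, rad_max):
    """Scale extrusion length and angle by the normalized solar radiation
    weight w (0 <= w <= 1), per d' = d * w and theta' = theta * w.
    The normalization by rad_max is an illustrative assumption."""
    w = max(0.0, min(1.0, radiation / rad_max))
    return d_max * w, theta_max * w

d_prime, theta_prime = extrude_params(d_max=0.5, theta_max=0.3,
                                      radiation=600.0, rad_max=800.0)
# here w = 0.75, so d' = 0.375 and theta' = 0.225
```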
The design parameters that change from top to bottom are the panel type probability and sequence, and the length and extrusion of each panel.

In the third phase, the designer assigns (1) a specific set of days throughout the year (i.e., summer and winter solstice), (2) the minimum and maximum desired luminance obtained for the majority of the sensor points inside the office, and (3) a specific time period throughout the day (i.e., 9:00 am-5:00 pm) as properties for the specialist agents (DFAa, CDAa, UDIa); the goal is to find the optimal combination of generated façade panels and (4) the positioning of n openings in order to provide luminance within the desired thresholds. For each run, the specialist agents (DFAa, CDAa, UDIa) collectively search the whole solution space for possible positionings (x, y) of a number of openings (n=2 in this data set) on a given surface. The goal is to generate façade panel configurations that provide natural light availability above a designer-defined level (i.e., CDA > 150 lux) and closer to a user-defined level (i.e., 150 < UDI < 1200). A run is then automated for an annual simulation, which calculates the amount of natural light available on the specific dates as well as the average values throughout the year. The optimal and sub-optimal solutions from each run are selected and passed as inputs for the next phase, along with the corresponding analyses, to the CDAa and UDIa, whose goals are to alter specific design parameters (i.e., extrusion) in order to improve the environmental performance of the design. A heuristic function as described above is defined for changing the design parameters based on the analysis values. Finally, the UPa takes as input the best-ranked designs in terms of the CDAa analysis and combines them with user-simulated data; it attributes a credit to each agent that is proportional to how much the analysis of each virtual sensor point deviates from the user preferences (Figure 30).
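A hedged sketch of how a candidate opening placement might be scored against the designer's and user's daylight targets (CDA > 150 lux for the majority of sensor points, 150 < UDI < 1200) follows. The "majority" fraction of 0.5 and the function name are assumptions, not the dissertation's actual heuristic.

```python
# Illustrative check of per-sensor daylight values against the thresholds
# named in the text; the majority fraction (0.5) is an assumption.
def meets_targets(cda_values, udi_values, cda_min=150,
                  udi_lo=150, udi_hi=1200, majority=0.5):
    cda_ok = sum(v > cda_min for v in cda_values) / len(cda_values)
    udi_ok = sum(udi_lo < v < udi_hi for v in udi_values) / len(udi_values)
    return cda_ok >= majority and udi_ok >= majority

# Example with a handful of virtual sensor readings (lux):
ok = meets_targets([200, 180, 120, 300], [400, 90, 800, 1000])
```

In the actual system each of the 2,623 virtual sensor points would contribute one reading per analysis type.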
At this stage, the user preferences reflect the preferred lux value of a user. Based on this utility factor, all the above parameters are re-calibrated and the agents iteratively negotiate in order to better meet the performance values while ensuring the minimum number of conflicts with the other specialist agents.

Figure 37 Comparative graph of experimental runs showing three different search methods for placing openings on a south facing facade: linear search (all possible solutions), Hill Climbing and Simulated Annealing

Figure 38 Design alternatives presented to the designer along a series of environmental performance metrics in a parallel line plot

4.2.4.2. Experimental Runs and Analysis

At first, the researcher tested the capacity of the system to generate and evolve designs on different input geometries with different design parameters for the generative agent. Multiple iterations were performed with different initial conditions and different parameters to illustrate the generative capacity of the system (Figure 35). In Figure 36, a set of design alternatives is shown that were generated by the system and vary from normative horizontal louvers to complex panel designs. The diversity of the designs is achieved either through the environment (the curvature of the design surface) or by encoding different agent behaviors (i.e. photophilic behavior: change the position of the opening to increase the amount of daylight available). Second, to compare the performance of generated designs and traverse the solution space more efficiently, the design parameters are coupled to the performance goals. By doing this, the experiment explored whether there is any relationship between the design parameters and the environmental performance that can be mathematically described in a heuristic function. Given the set of performance goals defined by the designer, two separate heuristic searches were implemented as described below.
Initially, the system executed a linear search (brute force) to map the extent of the solution space in relation to the positioning of n openings (n=2). Each design alternative was given a unique identity (hash ID), and at each step the system checked whether the geometry already existed in order to avoid generating duplicate geometries. Subsequently, a dictionary was created with all the IDs for easy lookup. Once the solution space was mapped, the system ran iteratively using two basic stochastic algorithms, namely simulated annealing and hill climbing. Although these algorithms are not the most efficient in computational terms, they are suitable for open-ended problems (Simon, 1991) and form a good starting point for testing the framework. In Figure 37 the performance of each algorithm is graphed with regard to the daylight factor analysis. It is evident that using either of the two algorithms significantly cuts down the search space relative to the linear search, by factors of 0.023 and 0.006 respectively. Additionally, the combined results from multiple analyses are plotted and visually communicated to the designer using parallel line plots (Clevenger and Haymaker, 2009). The plot includes all the design alternatives with their hash IDs, the environmental analysis performed, façade orientation, heuristic search algorithm used, and simulation time. The designer can interactively adjust the boundaries of each analysis and filter out alternatives that do not meet their requirements. In this way, the designer can a) gain insight into how design parameters affect environmental performance, b) compare whether there is diversity among the solutions of each run, and c) evaluate how well each algorithm performs in a multi-objective context. In this case, diversity means design alternatives that have similar performance characteristics but distinct geometries.
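The combination of a hash-ID dictionary for duplicate avoidance with a stochastic search can be sketched as follows. This is a minimal simulated-annealing skeleton under a toy objective, not the system's actual daylight-driven search; all names and constants are illustrative assumptions.

```python
import math
import random

# Sketch of the stochastic search over opening positions (n = 2), with a
# dictionary of hash IDs so duplicate geometries are scored only once.
def hash_id(openings):
    return hash(tuple(sorted(openings)))

def anneal(score, start, neighbor, t0=1.0, cooling=0.95, steps=200):
    seen = {}                      # hash ID -> cached score (lookup dict)
    current, best = start, start
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        h = hash_id(cand)
        if h not in seen:          # skip re-analyzing duplicates
            seen[h] = score(cand)
        delta = seen[h] - score(current)
        # accept improvements always, worse moves with probability e^(dE/T)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = cand
            if score(current) > score(best):
                best = current
        t *= cooling
    return best, len(seen)

# Toy objective: prefer openings near (0.3, 0.7) on a unit facade domain.
score = lambda xs: -sum((x - c) ** 2 for x, c in zip(sorted(xs), (0.3, 0.7)))
neighbor = lambda xs: tuple(min(1, max(0, x + random.uniform(-0.05, 0.05)))
                            for x in xs)
best, evaluated = anneal(score, (0.5, 0.5), neighbor)
```

A hill-climbing variant is the special case that only ever accepts improving moves (i.e., drops the `random.random()` branch).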
Figure 39 A rendering showing the façade designs generated for the southwest facade of the One Wilshire building in downtown Los Angeles

4.2.5. Case Study B: Façade Construction Simulation of a Bay on a Generic Office Building

This case study also considers the design and construction of a façade for a generic office space located in Los Angeles, California. The framework was extended to explore how a robotic assembly simulation could become an integral part of the early design stage workflow and be integrated with daylight analysis in order to measure construction efficiencies. This study is a progression from the previous case study; the MAS approach is further extended to include the geometric constraints of the complex façade fenestration or louver pattern, and also to include robotic simulations as a way of validating the constructability of the designs generated. The objective of this case study was to initially search the design solution space for environmentally efficient façade alternatives based on different window configurations and variable design parameters such as angulation. The most efficient results were passed to the construction simulation for testing and feasibility evaluation. Through this feedback loop, at each iteration the generative process was informed by both simulations, and tradeoffs between the agents were established through each agency's utility functions. In this way, the system iteratively limited the solution space of possible better performing designs (environmentally) by constraining the design parameters to those that yielded outcomes that were both constructible and environmentally efficient. The agent classes mentioned in Section 3.2 were implemented after specifying the design, analysis and fabrication processes. These included: a) a generative agent, b) a specialist environmental agent, c) a construction simulation agent, and d) a coordinator agent.
The generative agent had the same set of design parameters described in the previous case study. This agent was tasked with creating alternative geometries iteratively, based on the input design parameters and context as updated on each run. The design parameters of the generative agent were defined by the designer and were later conditioned by the constraints imposed by the construction simulation agent.

Figure 40 Diagram illustrating the different geometries of a single bay depending on the global geometry of the tower structure

The specialist agent performed two types of environmental analysis on the designs generated and had the objectives of combining the analyses and passing the analysis data (lux values, CDA scores) to the coordinator agent along with a set of messages in the form of a report. The goal for the specialist agent was to increase the amount of natural daylight entering a typical office space (i.e. the DLA) while keeping the lux values within a threshold (i.e. CDA) defined by the space typology and dimensions. The construction simulation agent received as inputs the generated designs that had been evaluated as performing within a specified range in terms of environmental analysis. The system then performed the construction simulation and checked for constructability. Based on the constructability heuristic function described in Section 4.1.3, the designs were ranked and, when necessary, the design parameters were updated and passed to the coordinator agent, forming one of the system's feedback loops. Given a design, the goal of the implemented construction simulation agent is to decide how to segment it to ease construction, and to find optimal positions for the robot that speed up the construction and eliminate the robot's self-collisions.
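The constructability heuristic itself is defined in Section 4.1.3 and is not reproduced here; the following Python sketch only illustrates the general idea of ranking designs by penalizing simulated collisions, singularities and construction time. All function names, weights and the normalization are hypothetical.

```python
# Hedged sketch of a constructability ranking in the spirit of Section
# 4.1.3: faults detected in the robotic simulation are registered as
# penalties, and designs are ranked by the resulting score.
def constructability_score(n_panels, collisions, singularities,
                           build_time, w_col=0.1, w_sing=0.05, w_time=0.01):
    penalty = w_col * collisions + w_sing * singularities + w_time * build_time
    return n_panels / (1.0 + penalty)   # more panels placed, fewer faults

def rank_designs(designs):
    """designs: list of dicts of simulation measurements, best first."""
    return sorted(designs,
                  key=lambda d: constructability_score(**d), reverse=True)

ranked = rank_designs([
    {"n_panels": 40, "collisions": 5, "singularities": 2, "build_time": 30},
    {"n_panels": 38, "collisions": 0, "singularities": 0, "build_time": 25},
])
```

Under these illustrative weights, the fault-free design outranks the slightly larger but collision-prone one.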
Finally, the coordination agent (also developed using Python) established communications between the generative process (generative agent), the analytical process (specialist agent), and the simulation process (construction agent) by passing the analysis data; it passed the simulation results and messages back to the generative process, thus forming the culminating feedback loop in the system. In terms of technical implementation, the Python and Grasshopper visual scripting editors are used to call the different agent processes that have been implemented in Java using Processing libraries (Reas, 2007). For the creation and management of geometry created by the agencies, the system uses the iGeo library (Figure 41a, b), an open source NURBS-based library, in order to output geometries that can be further used for fabrication purposes (Sugihara, 2014). For the construction agent, the author used KUKA|prc, a simulation plugin for Grasshopper, to simulate the robotic construction sequence and the complex fenestration assembly process.

Figure 41 Diagram illustrating all the workflows and relationships between the MAS agents

The specialist agents were developed using Python and use the Ladybug and Honeybee environmental simulation tools (Figure 41 g, h) to combine three different simulation engines: EnergyPlus, Daysim and OpenStudio (Roudsari et al., 2014). A series of simulation experiments were conducted to collect data and analyze how the simulation results inform the generative process of an environmentally optimized shading system for a typical office environment. The workflow is applied on the south facing façade of three different generic office building typologies whose geometry varies from orthogonal to free-form. The geometry of a single bay of each tower type is shown in Figure 40.

4.2.5.1. Design Process

Similar to the previous case study, the design process was divided into two phases.
In the first phase the designer a) developed an initial design component of a façade and defined a subset of alternative panel types (3 in total for this experiment); b) provided the following data as inputs for the generative MAS design system: length, angle, probability for each panel type, and depth or extrusion of panel; c) ran N iterations using a hill-climbing optimization algorithm; and d) generated a solution space of design alternatives that were then evaluated for their performance across DLA and CDA metrics. In the second phase, a) the best performing designs were passed to a robotic simulation software that explored different construction strategies, b) collisions and errors were registered as negative scores in the ranking equation, c) the designs were ranked based on their constructability in terms of the time needed for construction, and, finally, d) these scores were passed back to the generative system for further optimization. The following steps were followed in the design of this experiment: 1. First, the designer set the input parameters and initialized a run of the generative system. The system used a hill climbing algorithm that searched for optimal window positions. In each iteration the position of the windows was changed by adding a small increment to the surface domain of the facade. For each set of window positions, the system output one unique design in which the position of the openings provided the most daylight to the interior, which was described as optimal. The data was collected by the system for 6 different generative angles: π/2, π/4, π/6, π/8, π/10, π/12. The generated designs were automatically passed to the specialist agent for lighting analysis. The system then performed two kinds of analysis for the designs generated: a daylight factor analysis (DLA) and a continuous daylight autonomy (CDA) analysis. 2. Once the analysis was completed, the designs with the highest scores were sorted and passed for robotic simulation. 3.
The user selected the robot, in this case the 6-axis KUKA KR60, and defined the pick-up location and the robot movement axis, in this case parallel to the design surface. 4. A planning process was selected, and the designs generated were segmented into groups based on the maximum reachability of the robotic arm. For the purpose of this research, only two different construction strategies were defined, dividing the designs into rectangular and circular segmentations. 5. The robotic simulation was run and the system measured collisions, singularities, panels, and positions in order to develop the heuristic for the constructability agency.

Figure 42 A subset of generated facade panel alternatives with 2 and 3 openings, which are highly ranked in terms of environmental performance and were therefore further tested for their constructability

4.2.5.2. Results and Analysis

This section presents the initial results based on running the generative and analytical loops of the system for 450 cycles. The duration of each generation and analysis cycle is approximately 5 minutes, including the geometry generation and the environmental analysis, while the robotic simulation cycle is approximately 3 minutes. Six different generative angles were tested to explore the system's capacity to create an expanded and highly varied solution space of 450 unique geometries. The 10 highest-ranking designs from each cycle were selected and passed to the robotic construction agent. One pick-up location for all 3 types of panels was set; the maximum number of segmentation positions was set to 4 and the minimum to 2. The segmentation position refers to the way a generated design was divided into parts based on the work volume of a specified robot. This study only considered circular design segmentation based on the maximum reach of the available industrial arm.
Based on the simulation and the constructability heuristic function, the designs generated were improved for construction purposes either by eliminating panels or by changing the sequence of panel types. The updated parameters for each design were stored in XML files and passed back to the generative agent. Figure 43 presents graphs of the constructability score of the initial and improved geometries for all 6 generation angles. It is observed that the constructability score is highly variable under the initial conditions but becomes a line with zero or positive slope for the improved geometries across all the generation configurations.

Figure 43 Plots illustrating 3 design generation cycles, with variable angle (bold, dark lines) and without variable angle (thin, light lines), and the constructability score improvement over time per geometric iteration

The results show that geometries with a generation angle of π/10 performed better overall in terms of constructability score over time, as the deviation between the initial and final scores is the largest compared to the rest of the runs, by 6%. In the initial runs, the constructability score from one design iteration to the next ranges from 0.1 to 0.6, while in the optimized cases it drops below 0.15 and is constantly improved. The system continues to be tested, validated, and further integrated. In this case study a number of limitations should be pointed out.
First, for reasons of clarity, only one type of design segmentation was simulated and tested on a zero curvature (façade) surface domain. Moreover, at present the system is not fully automated and still requires the designer to work across multiple interfaces. On occasion, during the generation of a design alternative, the criterion of creating a complete fenestration pattern across the domain of a surface was not met, or the hill climbing algorithm was faced with a local minimum; in those cases, the manual deletion of collision points enabled the continuation of the pattern and design. Last but not least, in order to validate the results the researcher is planning to perform a physical experiment to cross-reference the simulated values with actually measured values.

4.3. Experimental Design 3: Agent-Based Shell Structure Design

In this experimental design, the applicability of the framework was tested for designing thin shell structures, a design problem that requires the close collaboration of architects and engineers to create articulated and efficient design solutions. The focus is on form finding by describing agents as structural building components, where a number of design parameters are coupled to both the building's environmental and structural performance. The design parameters were derived from a vernacular structural system and informed the shell forms devised by the system. The structural system, known as reciprocal frames, was selected due to its ability to span areas much larger than the length of its structural elements. Another reason for selecting this structural system is the fact that its inherent complexity has been a hindrance to modeling it and using it in practice.
The objective was twofold: a) to extend the MAS so that the design generation considers two types of analysis, namely environmental and structural analyses; and b) to compress the two steps of the design process used in the first experiment (design generation and design optimization) into one. In an attempt to operate in an incremental fashion, the researcher initially conducted a pilot physical case study using a parametric form finding approach to generate shell geometries that take into account a vernacular structural system as well as material and economic parameters. He then used an agent-based simulation to create perforations and optimize the effect of solar radiation underneath the structure. The objective of the pilot study was to show the limitations of parametric modelling and demonstrate how a MAS approach could best be utilized in the context of form finding for addressing environmental parameters. Based on the results of the pilot study, the researcher developed another case study in simulation in which the MAS framework was used for form finding shell structures informed by environmental parameters, such as the position of the sun, as well as the structural parameters described in Section 4.2.2.

Figure 44 Historical timeline of form active and form passive shell structures

4.3.1. Structural Form Finding

Structures can be categorized in many different ways according to their shape, function and materiality. Shells are structures defined by a curved surface that is thin in the direction perpendicular to the surface, although there is no absolute rule as to how thin the surface has to be. This definition includes a wide variety of structures, from concrete shells to grid shells and bird's eggs. Shell structures can be further classified as form active, when the form adapts to different loadings (i.e. a spider's web or a sail), and form passive, if the shape does not change significantly (i.e.
a dome) (Figure 44). The challenge of structurally efficient shell design lies in determining the "right" structural shape that will resist loads within its surface without the need for extra structural systems (Adriaenssens et al., 2014). Of all the traditional structural design parameters, such as material choice, section profiles, node type, global geometry and support conditions, the global geometry predominantly dictates whether a shell will be stable, safe and stiff. Preceding work in the field includes the works of A. Gaudi, P.L. Nervi, F. Candela, E. Torroja, F. Otto and the contemporary work of J. Ochsendorf, A. Kilian, P. Block, C. Williams and M. Sasaki, to name but a few (Pottmann et al., 2015; Chilton and Isler, 2000; Adriaenssens et al., 2014). As just one example from these precedents, Heinz Isler made extensive use and analysis of physical scale models that were cast in plaster upside down and then scaled to full size. Isler believed that physical models ensure a more holistic simulation of the problem, although they posed the ensuing challenges of accuracy and the scalability of material and mass. In part inspired by Gaudi's physical hanging chain models, A. Kilian at MIT introduced the use of particle spring systems for digitally simulating the behavior of hanging chain models in order to find structural forms composed only of axial forces (Kilian, 2006). Another critical precedent is the work of Daniel Piker; he introduced an intuitive visual scripting tool, Kangaroo, that enables digital form finding. Kangaroo, a non-linear "physics based" engine, is embedded directly within the Rhinoceros-Grasshopper computer-aided design environment, thus enabling geometric forms to be shaped by material properties and applied forces, and to be interacted with in real time. By embedding rapid iteration and simulation in the early-stage design process, Kangaroo allows a faster feedback loop between the modification of a design and the engineering analyses (Piker, 2013).
This is particularly useful for the design of structures involving large deformations of material from their rest state, such as tensile membranes, bent-timber grid shells and inflatable structures. Kangaroo can also be applied to the interactive optimization of geometric and aesthetic qualities that may not themselves be intrinsically physical. Another research group, led by Philippe Block, has developed a structural form finding software package, RhinoVAULT, that implements the Thrust Network Approach (TNA) to create and explore compression-only structures. It uses projective geometry, duality theory and linear optimization, and provides a graphical and intuitive method that adopts the advantages of graphic statics for three-dimensional problems (Block and Ochsendorf, 2007). RhinoVAULT is based on relationships between form and forces expressed through diagrams that are linked through simple geometric constraints: a form diagram, representing the geometry of the structure, reaction forces and applied loads, and a force diagram, representing both global and local equilibrium of forces acting on and in the structure (Van Mele et al., 2012). RhinoVAULT takes advantage of the relationships between force equilibrium and three-dimensional forms and explicitly represents them by geometrically linking form and force diagrams. From these precedents, the author has observed an evolution from physical simulation methods and historical form finding techniques to a series of contemporary form finding tools that implement contemporary mathematical models and digital simulations for computing and analyzing shell structures. In the case of TNA, linear optimization is used; in the case of Kangaroo the geometry optimization is non-linear. Moreover, TNA's reduction of the problem into two dimensions offers a more efficient computational model for computing the force distribution.
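The particle-spring idea behind Kilian's digital hanging models (and, in generalized form, Kangaroo) can be illustrated with a minimal, self-contained sketch, not taken from either implementation: particles under gravity, connected by Hookean springs, relaxed with damped explicit integration until a hanging form emerges. All constants are arbitrary demo values.

```python
# Minimal particle-spring relaxation of a hanging chain: interior
# particles fall under gravity while springs between neighbors resist
# stretching; heavy damping drives the system toward equilibrium.
def relax_chain(n=11, rest=1.0, k=50.0, g=-0.5, dt=0.02, damping=0.9,
                steps=2000):
    xs = [float(i) for i in range(n)]        # horizontal positions (fixed)
    ys = [0.0] * n                           # vertical positions
    vy = [0.0] * n
    for _ in range(steps):
        fy = [g] * n                         # gravity on every particle
        for i in range(n - 1):               # spring forces along the chain
            dx, dy = xs[i + 1] - xs[i], ys[i + 1] - ys[i]
            L = (dx * dx + dy * dy) ** 0.5
            f = k * (L - rest)               # Hooke's law
            fy[i] += f * dy / L
            fy[i + 1] -= f * dy / L
        for i in range(1, n - 1):            # end particles act as supports
            vy[i] = damping * (vy[i] + fy[i] * dt)
            ys[i] += vy[i] * dt
    return ys

ys = relax_chain()   # sagging, roughly catenary-like profile
```

Inverting the relaxed shape (mirroring it about the supports) yields a compression-only arch, which is the logic behind the hanging-model precedents discussed above.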
Although these tools have facilitated the design of shell structures, which, while they can be free-form, are pre-rationalized, there is still a need for computational tools that enable designers to steer away from purely form-found geometries and help them explore alternative forms by a) allowing them to introduce more architectural parameters in the early design stage and b) integrating constraints related to the fabrication and construction processes.

4.3.2. Background and Context of Reciprocal Frames

The principle of reciprocity in structural design and construction, i.e. the use of load bearing elements to compose a spatial configuration wherein they mutually support one another, has been known since antiquity (Pizzigoni, 2010). Etymologically, reciprocity derives from the Latin reciprocus, which is composed of the two parts recus, meaning backwards, and procus, meaning forwards. The word reciprocity implies the practice of exchanging things with others for mutual benefit; such a definition emphasizes the obligatory return of a certain action. The development of reciprocal frames has not had a linear history, and the evidence of their knowledge and application around the world seems unrelated. However, a common point in the use of this system is the use of timber as a construction material in both Occidental and Oriental cultures. It is worth noting that in Europe its use was driven more by practical construction issues, namely the development of planar spanning configurations, while in Asia it was used more for the ceremonial realization of three-dimensional structures. An overview of reciprocal frame structures dating from antiquity until today (April 2019) is illustrated in Figure 45.
Figure 45 Timeline with examples of structures realized using reciprocal frames dating from antiquity to April 2019

The first reciprocal frame structures are traced back to Chinese and Japanese religious architecture of the 12th century, as seen in the wood-constructed roof support systems of the mandala roof. In Europe, the concept of spanning distances longer than the length of the available timber beams was the main reason for the use and development of the reciprocity principle (Larsen, 2008). During the 13th century, Villard de Honnecourt conceptualized roof support structures based on this principle in his sketches. Later, in the 16th century, Leonardo da Vinci, who laid the foundation for a scientific study of reciprocal structures, explored at least five different spatial configurations based on the principle of reciprocity, experimenting with regular and non-regular 2D and 3D geometrical configurations. Sebastiano Serlio addressed the problem of planar roof construction with short beams in his book on architecture dated 1556. A comparable structural system made of reciprocally supporting bar-shaped elements is the Zollinger system, which is mainly used in timber roof construction; Friedrich Zollinger obtained a patent for it in 1923 (Kohlhammer and Kotnik, 2011). It was the development of sophisticated timber products such as glulam trusses and plywood, which produce long-spanning structural elements through adhesive technology, that led to the replacement and lack of further development of reciprocal frames and similar structural systems. Currently, the principle of reciprocity continues to stimulate the interest of designers and researchers and has again become a topic for academic research (Kohlhammer, 2010).
Architectural applications can be found around the world and include the Mill Creek Public Housing project by Louis Kahn (1952-53), the Bunraku Puppet Theater by Kazuhiro Ishii (1994), the Pompidou Metz museum by Shigeru Ban (2008) and the Apple Store at 5th Avenue, New York City, USA by Bohlin Cywinski Jackson, to name a few. Moreover, a set of experimental works addressing structural, geometric and constructive issues of reciprocal structures has appeared, such as the Forest Park structure by Shigeru Ban and ARUP AGU, the Serpentine pavilion by Cecil Balmond and Alvaro Siza (2005), the H-edge pavilion by Cecil Balmond and students from the University of Pennsylvania (Pugnale et al., 2011) and the research pavilions at EPFL Lausanne (Nabaei and Weinand, 2011). In these projects the reciprocal principle has been explored using different materials, element sections, joints and planar or three-dimensional configurations, providing fundamental evidence that adaptations of this typology should be further investigated for a diversity of architectural styles, patterns, performance characteristics and local sensibilities. Based on a specific type of reciprocal frame, the researcher re-examined the applicability of such a vernacular structural system by analyzing its functional and material behavior. The research models a design system that fosters design exploration by incorporating issues of recyclability and material efficiency coupled with design and spatial comfort performance objectives. This is partially achieved by implementing digital fabrication techniques and also through the incorporation of agent-based design technologies enabling emergent and intrinsic performance.

4.3.2.1. Defining Reciprocal Frames

A reciprocal frame is a structural system formed by a number of short bars that are connected using friction only. Most importantly, a reciprocal frame can span many times the length of its individual bars.
"Reciprocal" refers to the fact that such structures are composed of a number of elements (also referred to as short beams) that structurally interact through simple support binding in order to create more complex structures of dimensions much greater than the single elements from which they are composed. Page | 109 The application of the reciprocity principle requires: a) the presence of at least two elements allowing the generation of forced interactions; b) that each element of the assembly must support and be supported by another one; and c) that every supported element must meet its support along the span and never at the vertices in order to avoid the generation of a space grid with pin joints(Larsen, 2008). The space structures that conform to the above requirements are called reciprocal and are constituted of at least two interlaced linear elements where the final form is relative to a basic component type in its material as well as the connection technology. The components can be identical or non-identical but should follow a specific global tessellation pattern. The joining of the components at the node points can generally be carried out without mechanical connections, solely by pressure and friction. To support the frictional forces simple connection techniques such as tying together or notching of the elements can be used. In fact, in the context of wood processing techniques it is recognized that the complexity of the connection technology becomes an important feature in order to distinguish different structural propositions both from financial and structural aspects (Nabaei and Weinand, 2011). From a structural point of view, each individual element in the system works as a single beam. This beam lies at each of its edges either on another component or, if it forms the edge of the structure, on the supports of the entire system. Each element bears the supporting force of one of the neighboring elements and optional dead loads or live loads. 
The interest of such a structural system lies in the fact that the "global" form is determined by the "local" condition of the building elements.

Figure 46 Photo of assembled reciprocal structure in situ

Case Study A: Fabrication Aware Form Finding

The first case study used existing parametric form finding methods to generate the shell geometries and analyze them as reciprocal frames. Agent-based modelling was then used to create perforations in the shell in order to improve the solar radiation conditions underneath the structure. The emphasis of this case study was on using an integrated design-to-construction workflow and investigating the topic of material-aware form finding through the perspective of a traditional structural system, that of reciprocal frames, as well as the affordances relating to file-to-factory processes, all while working with real-world site, material, assembly, and cost constraints. A self-standing shell structure at 1:1 scale was designed, fabricated and constructed on the rooftop of a cultural center in the center of Athens, Greece (Figure 46). The MAS framework in this case study was not used for form finding; instead, a parametric modelling approach was used, and agent-based simulations were used to create perforations. Kangaroo, a physics-based solver, was used for form finding the shell geometries by providing boundary conditions and material weight (Piker, 2013). The form-found shells were discretized parametrically into structural elements using the reciprocity principle described in the section above. A solar radiation analysis was performed on the generated geometries to evaluate their environmental performance. The environmental data were passed to a generative agent that was tasked with creating perforations on the shell (in the Processing 2.0 environment).
The case study was realized following a design-to-production workflow that proceeds through the following steps: 1) form finding of a canopy shell surface with two support conditions through the use of a mesh relaxation algorithm; 2) discretization of the generated surface into its iso-curves, which are populated with interlocking building components, where each component follows the principle of reciprocity and is modeled with an associative parametric geometry modeler; 3) informing the geometry of the basic element based on the selected material (thermoformed plywood) to optimize its transversal section and render it resistant to the forces under which the structure would be placed; 4) performing a solar radiation analysis, and using this data to inform the trajectory of the agents in order to control the permeability of the structure by introducing perforations probabilistically on the building components; 5) examining the fundamental mechanical properties of a single arch using the Finite Element Method (FEM) while considering the non-linear contact boundary condition; this is performed for the static behavior of the structure under the self-weight load case; and 6) constructing a prototype at 1:1 scale using a new type of thermoformable plywood panels, in which the material usage was optimized by using the minimum allowable thickness and by grouping the components to be fabricated in panels with non-standard dimensions that correspond to the available veneer width rather than fixed plywood panel dimensions.

Figure 47 Diagram showing the form finding process (left) and curvature analysis of alternative form found shells (right)

4.3.3.1. Design Process

As the first step, the design of the experiment(s) began with setting the boundary and support conditions of the structure. By implementing a mesh relaxation algorithm, a series of surfaces was generated with the same support conditions.
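The mesh relaxation principle can be illustrated with a reduced, one-dimensional analogue: a chain of particles connected by linear springs, relaxed under gravity between two pinned supports (inverting the hanging result gives a compression form, as in Isler's method). This is a hedged sketch of dynamic relaxation, not the Kangaroo solver itself; all parameter values are illustrative.

```python
import numpy as np

def relax_chain(n=11, rest_len=0.1, k=50.0, g=-0.05, steps=4000, damping=0.9):
    """Relax a chain of n particles under gravity; endpoints are fixed."""
    pos = np.zeros((n, 2))
    pos[:, 0] = np.linspace(0.0, 1.0, n)   # start as a straight horizontal chain
    vel = np.zeros_like(pos)
    for _ in range(steps):
        force = np.zeros_like(pos)
        force[:, 1] += g                    # gravity on every particle
        for i in range(n - 1):              # linear springs between neighbors
            d = pos[i + 1] - pos[i]
            length = np.linalg.norm(d)
            f = k * (length - rest_len) * d / length
            force[i] += f
            force[i + 1] -= f
        vel = damping * (vel + force * 0.01)
        vel[0] = vel[-1] = 0.0              # pin the two supports
        pos += vel * 0.01
    return pos
```

On a full mesh the same update runs over all mesh edges and three coordinates; the boundary and support conditions mentioned above enter exactly as the pinned particles do here.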
Based on a curvature analysis, a selection of these initially designed surfaces was parametrically discretized; their iso-curves were extracted and populated with a basic component type in an automated and parametric fashion (Figure 47). Interlocking components were placed sequentially in pairs on the division points of each iso-curve and were oriented parallel to the tangent vectors. The panels were further analyzed with Ladybug, another Grasshopper plugin that runs a Radiance-based simulation. All the components were numbered and unrolled into flat panels with all the perforation and notch lines projected. Thus, the material calculations were completed, relating the covered area of the shell surface to the simulation data, which was exported as a text file. The text file was passed to the custom flocking algorithm in the Processing 3D environment, where the agents read the vertices of the component surfaces and their coupled data values. The agents were spawned and programmed to make movement and trajectory decisions based on local information, including the intensity values from the sun simulation, proximity to neighbors, and trails left by other agents. Each agent had the capacity to read the data from the simulation, which was paired with its corresponding point in a mesh object, as well as data related to its neighbors and constraints, as trajectories in time. Thus, the agents' environment was a collection of points within which they were constrained, and each point was assigned an intensity score based on the data from the simulation. The agents' trajectories then became a generative geometry for material organization and re-organization, which happened in a collective recursion. The agent-based trails were exported again as a text file into the Rhinoceros/Grasshopper scripts to be incorporated as another layer of information that controls the permeability of the panels through a process of CNC-driven material erosion through milling (Figure 49).
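A drastically simplified, hypothetical version of this trail-generation logic can be sketched as a hill-climbing walk over the scored mesh points. The actual Processing implementation also includes flocking cohesion, separation, and stigmergic trails; the grid representation and parameters below are assumptions.

```python
import random

def agent_trails(points, intensity, n_agents=3, steps=50, seed=1):
    """Each agent hops to the unvisited neighbouring grid cell with the
    highest radiation score and stops at a local maximum of the field.
    `points` is a set of (i, j) cells; `intensity` maps cell -> score."""
    rng = random.Random(seed)
    trails = []
    for _ in range(n_agents):
        cell = rng.choice(sorted(points))   # random spawn point
        trail, seen = [cell], {cell}
        for _ in range(steps):
            i, j = cell
            nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)
                    and (i + di, j + dj) in points
                    and (i + di, j + dj) not in seen]
            if not nbrs:
                break
            best = max(nbrs, key=lambda c: intensity[c])
            if intensity[best] <= intensity[cell]:
                break                       # local maximum of the field reached
            cell = best
            trail.append(cell)
            seen.add(cell)
        trails.append(trail)
    return trails
```

The returned trails play the role of the exported text file: polylines over the mesh that downstream scripts translate into milling paths.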
The third step was to inform the component design based on the selected material and the condition of the neighboring elements. Furthermore, each component was analyzed to determine the amount of solar radiation it receives during a given period, and a value (kWh/m²) was assigned to each mesh point. The weighted values were passed on to the multi-agent design system that controlled the permeability of the structure by drawing on the surface the reaction of the agent swarm as paths marking areas to protect or areas from which to remove material. Agents were created at the mesh points of each component, and the associated value at each point influenced the agents' flocking behavior. The path of the agents was exported and translated into a perforation pattern via dashed slits for improved solar radiation protection and enhanced comfort beneath the structure (Figure 51). The last step included creating the prototype of the whole structure at 1:1 scale using curved thermoformable plywood panels that were CNC-milled flat and subsequently thermoformed.

4.3.3.2. Part-to-Whole Relationship

A basic module of a two-element reciprocal frame structure was parametrically defined and investigated as the basic structural system. Instead of traditional linear or planar elements, mutually supported curved panels were used, both for reasons of formal design aspiration and for creating a thickened pillow shell that provided a more complex shading condition. The self-standing reciprocal frame structure presented in this study shows an example of a design practice in which the final form is driven by the connection technology: a relaxed modular global form is discretized by means of mutually supported panels. The proposed slide connection scheme inspires a new family of reciprocal frames in which, instead of linear members (beams or bars), folded or formed members are mutually supported.
The connection between the building components is integrated as a notch with a specified angle within the geometry of its members, unlike the traditional reciprocal frame system in which the connecting members are regular. A V-form module of given angle is fabricated through the thermoforming of plywood panels and is then spatially multiplied using consecutive rotations and translations that follow the tangent vectors of a discretized catenary arch. The structure can be decomposed into three principal module types, each consisting of curved panels (convex and concave) with locally specific angles in relationship to their generatively form-found neighbors. These modules are then interlocked sequentially along their uniquely milled U-shaped cuts to form an arch (Figure 48). The inter-panel stability is provided by the rigidity of the slide connection and the axial contact of reciprocal panels. The structural performance of the whole structure improves when more than two arches are connected together. This is due to the fact that they then act like a truss, with only axial compressive and tensile forces; bending moments and shear forces are minimized in network arches (Tveit, 1987).

Figure 48 Diagram with the panels developed flat and assembled in the final structure

4.3.3.2.1. From Computational Modelling to 1:1 Physical Prototyping

In order to test the feasibility of such a design approach, a prototype at 1:1 scale was produced. A selection of photos illustrating the project is shown in Figure 50. The design and programming of the structure was at first optimized for discretizing an irregular surface of curved panels using a single pressing mold. The size of the structure was parameterized based on a maximum amount (area) of material, that of 54 m².
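The module placement described earlier in this section (consecutive rotations and translations along the tangent vectors of a discretized catenary arch) can be sketched numerically. The function below is an illustrative approximation with assumed span and curve parameters, not the project's Grasshopper definition; it returns, for each division point of an inverted catenary, the position and tangent angle that would orient one V-module.

```python
import math

def catenary_frames(span=6.0, a=2.0, n=12):
    """Discretize an inverted catenary y = y0 - a*cosh(x/a), shifted so the
    two supports sit at y = 0, and return (x, y, tangent_angle) per node."""
    xs = [-span / 2 + span * i / n for i in range(n + 1)]
    y0 = a * math.cosh((span / 2) / a)
    frames = []
    for x in xs:
        y = y0 - a * math.cosh(x / a)           # arch height at x
        angle = math.atan(-math.sinh(x / a))    # slope dy/dx of inverted curve
        frames.append((x, y, angle))
    return frames
```

Each returned angle would drive the rotation applied to the next module, which is how a single mold geometry can populate the whole arch.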
The whole production workflow included: 1) material processing, 2) manufacturing of the pressing mold, 3) CNC cutting of the components, 4) thermoforming and post-processing of the components, and 5) the final assembly and erection on site. Through this approach to the reciprocal frame, the researcher investigated the potential of using curved plywood components instead of planar ones. For that reason, he selected an innovative material called UPM Grada, which is specifically designed for the manufacturing of form-pressed plywood panels. UPM Grada uses an adhesive film that allows the plywood to be thermoformed after production by applying pressure through a custom-made mold. The UPM Grada technology allows the fabrication of custom panels, as the film can be cut according to the veneers available. Of great significance for material efficiency was the creation of plywood panels in sizes that corresponded to the available veneer sizes rather than standard dimensions. This saved material, as plies were used "as is," and reduced material processing time, as multiple plies did not need to be stitched together into larger panels before being glued to form the final plywood panel that was CNC-milled. Moreover, the module's geometry was optimized to be fabricated with a single mold in order to reduce production costs. The panels were first thermo-glued flat and milled afterwards using a 5-axis CNC machine. The cut pieces were then reheated and pressed consecutively in the mold to take their final shape.
Figure 49 Fabrication process diagram showing a) production of raw material (7.5 mm birch plywood), b) nesting of components into panels of variable size (5 sizes), c) the CNC milling process, d) preheating of the component, and e) the thermoforming of panels in a press mold

A master parametric 3D model was developed in Grasshopper/Rhinoceros that generated all the cutting files for the 148 components in a file-to-factory process, in which custom routines were developed to give each component its precisely calculated slots for the sliding joints, all in gradually shifting positions and variable angles in order for the pavilion to achieve its irregular funicular form. All components were uniquely labeled and numbered in order to facilitate assembling and dismantling of the structure without the need for experts or detailed drawings. The plywood components were manufactured in Greece at a factory outside Athens and transported to the site.

4.3.3.3. Results and Analysis

The thin self-standing shell structure with a footprint of 24 m² required 54 m² (0.4 cubic meters) of thermoformed birch plywood. On-site assembly of the pavilion was completed in two days by a group of five non-expert workers. It was empirically discovered that the film adhesive and thermoforming process produced panels with more elastic behavior than the traditional gluing technique, which uses phenol-formaldehyde-based wood glue and high-voltage electricity for curing the glue while the panel is being pressed. This facilitated the assembly process, as it allowed the panels to slide into place more easily. Given the common cost and material constraints locally (and arguably globally) for projects like these, the project sought to be highly resourceful with material usage, aided by computational design tools and CNC manufacturing.
Figure 50 Photos of the final structure in situ

Figure 51 Diagram of the perforation system and agents' behavior showing a) the agents' motion towards a local maximum based on sun radiation, b) the agents' path trajectory, and c) the generated milling pattern based on the agents' paths

The total scrap wood did not exceed 5 m² and was used to create the seating below the structure. In further comparison to the preceding work (Weinand, 2011), the researcher attempted to define benchmarks and measures for the affordances of this approach. By adapting the principle of reciprocity and a specific family of reciprocal frames with mutually supporting curved panels, the project benchmarked the construction of a maximum 600 cm span out of 45 cm long, 7.5 mm thick interlocking panels that required no additional joinery. A previous structure following the same principle, realized by Professors Y. Weinand and S. Nabaei and students in Lausanne, had achieved a span of 740 cm with 21 mm thick panels (Nabaei and Weinand, 2011). While the project is clearly indicative of an improved span-to-material-depth ratio, the researcher expects to continue investigations into further efficiencies of span, material usage, and applicability to complex curvature and other design vernaculars. Another major thrust of the researcher's future investigations is the impact of the swarm-generated patterning and its influence on the structural and environmental performance of the structure. Figure 51 shows a diagram of the perforation approach and its impact on the solar radiation analysis underneath the structure. This case study is the first built experiment that combines research towards the development of a holistic multi-agent system framework, using a basic agent model of flocking behavior and combining it with principles of parametric form finding and material properties.
The self-standing structure, from its inception through to its use, incorporates real-world material, assembly, cost, and human constraints that have informed the future development of the MAS system. It can be observed that if the internal forces are known, each element in such a structure can be adjusted to its local stress, and therefore an optimized material consumption can be achieved.

Figure 52 Diagram illustrating the input parameters, two different design behaviors, and the resulting environmental analysis

This pilot study explored the applicability of a bottom-up MAS approach for architecture through the combination of parametric form finding techniques with an agent-based design system and digital fabrication processes. The aim of this design experiment was to test an agent-based workflow combined with a fabrication protocol that has the potential to combine design intentions with a structural system and material constraints. The focus was on showing the limitations of parametric modelling: its top-down nature was restrictive in terms of adjusting the position and shape of the joints to facilitate easier assembly. The study emphasized how agent-based modelling can be used to encapsulate the intrinsic features of the material properties and assembly methods in the form of simple local rules. In such an approach, the design intentions can be represented by different agencies, and local constraints can be described as agent behaviors that lead to global characteristics, instead of imposing "architectural gestures" in a top-down fashion. The design optimization becomes an iterative process in which both the solution and the starting condition(s) constantly oscillate towards an equilibrium defined by multiple performance criteria, in response to the given topology and the user's design intentions.
This experiment has presented an initial analysis of the potential generative and automation possibilities of integrating digital fabrication workflows driven by the MAS design framework.

Case Study B: Environmentally Aware Reciprocal Frames

This case study was an attempt to extend purely form finding techniques through the integration of a generative (form finding) agent with two specialist agents: one for structural design and one for environmental design. The form-found geometries are discretized geometrically into reciprocal frames and are analyzed structurally.

Figure 53 Diagram describing the design steps of the case study

As described in Section 4.2.2, the structural system relies on the principle of reciprocity, and its morphology is further analyzed in this case study. The advantage of this system is that it can be used to cover large spans by using short structural elements. The system is versatile and can be used either with planar or free-form geometries, but its complexity has restricted its use. In an attempt to work in an incremental fashion, the research proceeded from the simplest version of the system with two interlocking planar elements (which was studied in the previous case study) to a reciprocal frame system with three elements and variable linear geometries. The objective was twofold: on a global level, to use the MAS framework to explore geometries informed by the position of the sun at specific times of the year, and on a local level, to study the morphology of the system and extract rules to be used for informing the global configurations.

4.3.4.1. Design Process

The design process for this case study included four phases.
The first phase included the definition of typical parameters in structural design, such as: F, the outline of the provided footprint in the form of a polyline; P, the topology of the network; S(n,t), the number and type of support conditions; E, the material stiffness; G, the material weight; M, the maximum number of agents; and L, the loads. Additionally, the designer provided: LL, the longitude and latitude of the site location; O, the orientation (N, S, E, W); weather data (.epw file); and EB, the environmental behavior (i.e. providing shadow beneath the structure in the morning).

Figure 54 Graphs showing maximum element stress in relation to length, thickness and geometry as well as the allowable stress

In the second phase, based on the provided footprint and topology, generative agents were created whose behavior was informed by a particle spring system. The behavior of the agents was further extended to account for environmental parameters (i.e. sun positions) that were extracted from the weather file input. At each time step, the position of each agent was updated based on the summation of the forces acting upon it, including gravity, stiffness, and sun attraction/repulsion. Introducing specific positions of the sun as "virtual forces" directly linked the environmental parameters with form finding. In this phase, the designer specified the duration of the form finding process and the force value of the sun, which is related to the environmental behavior. Once a global equilibrium was reached, a shell geometry was generated based on the optimal force distribution. In the third phase, the generated shell geometry was exported and passed on for environmental and structural analysis. The shell was analyzed as a structural system, in this case reciprocal frames. At each generation, the design parameters that affected the daylight analysis, such as the sun force and the position of supports, were updated based on the daylight analysis values.
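The per-step force summation described for the second phase can be sketched as follows. This is an assumed, minimal re-implementation for illustration (not the framework's actual solver): the sun enters as a constant virtual force along the current sun direction, added to gravity and the spring pulls toward anchor points.

```python
import numpy as np

def step_agent(pos, vel, anchors, sun_dir, k=1.0,
               g=np.array([0.0, 0.0, -0.1]),
               sun_force=0.05, dt=0.05, damping=0.95):
    """One integration step of a form-finding agent: gravity + springs to
    anchor points + a 'virtual' sun attraction along sun_dir (photophilic
    for positive sun_force, photophobic for negative)."""
    f = g.copy()
    for a in anchors:                       # spring pull toward each anchor
        f += k * (a - pos)
    f += sun_force * sun_dir                # environmental bias term
    vel = damping * (vel + f * dt)
    return pos + vel * dt, vel
```

With symmetric anchors the spring forces cancel, so whether the agent rises toward the sun or sags under gravity depends only on the relative magnitude of the sun force, which is exactly the parameter the designer tunes in this phase.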
The designer was able to interactively update the parameters in order to steer the form based on his or her intentions. In the last phase, once the desired daylight performance was reached, the topology and size of the structural members were updated in order to satisfy the structural requirements (Figure 52, Figure 53). Two types of specialist agents were created, one for each of the analyses performed; they communicate with the generative (form finding) agent: the structural analysis agent (SAa), which accounts for displacement, and the daylight factor analysis agent (DFAa), which accounts for the amount of sunlight on and beneath the shell structure. The local analysis of different types of reciprocal frames provided the designer with insight into how the design parameters of the selected structural system (i.e. the number, size, and geometry of the structural members) affected its structural behavior. The designer specified a utility, "u," for each of the analyses performed; this defined the most important agent type, and the designer also added a weighting factor to the agents' behaviors. The weighting factor was calculated as a percentage over all the analysis types performed and depended on the performance target that the designer set. All the above aspects affected the amount of sunlight (measured in lux) projected on and beneath the surface, as well as the maximum displacement of the shape (measured in millimeters).

4.3.4.2. Results and Analysis

The results of this work, so far, are on both a local and a global level. On the local level, the researcher has tested different section profiles that range from standard to non-standard, and has observed that the structural performance of each reciprocal element is largely affected by its cross-sectional profile.
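The utility weighting described in the design process above amounts to a normalized weighted sum over the specialist analyses. The sketch below is an illustrative reading of that scheme; the agent names (SAa, DFAa) come from the text, while the score scale and weights are assumptions.

```python
def combined_score(results, utilities):
    """Weighted multi-objective score for one design candidate.

    results:   maps agent name -> normalized performance in [0, 1] (1 = best)
    utilities: maps agent name -> designer-assigned utility "u"
    Utilities are renormalized to percentages over all analyses performed,
    as described for the weighting factor above.
    """
    total = sum(utilities.values())
    return sum(results[k] * (utilities[k] / total) for k in results)
```

For example, a designer who values structural displacement three times as much as daylight would pass utilities of 3 and 1, giving the SAa result a 75% share of the combined score.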
Six different types of profiles were tested, including: a circular pipe, a rectangular element, and four types of planar elements with different width-to-height proportions and with one transformation (torsion along the main axis and offset along the main axis). Figure 54 shows a plot of the stresses on reciprocal elements with different geometries in relation to varying lengths and thicknesses. Only one type of element (the twisted planar one) fails to meet the requirements for allowable stress based on the American Building Code, while the rest of the profile types show an almost linear relationship between their length and thickness and the stress. As the length increases the stress also increases, but as the profile increases the stresses on the element decrease. In terms of the unit, we can see in Figure 54 that as the number of elements (valency) increases, the stress decreases. For a unit (reciprocal frame) of n=3 and a pipe profile the stresses can be as high as 1620 kN, while for n=6 they drop as low as 703 kN (a reduction of roughly 57%). At this stage, for simplicity and the clarity of the experiment, only one profile type and one reciprocal frame unit (n=3) were tested and applied to the global geometry. In terms of global geometry, four different topological cases were generated based on the support conditions and on whether or not the geometry had an opening. One of the topologies was tested for spans of both 50 and 100 meters. The researcher then analyzed both the form-found and the environmentally influenced geometries, and measured the following principal quantities: compression, stress, moments, displacement and cross-section utilization (see Figure 56). The utilization of the cross section is the ratio of applied stress over the yield strength of the element; using a safety factor of 200%, the elements were sized in order to achieve a target utilization between −50% and +50%. Additionally, the maximum displacement was measured and was within limits (0.5 to 5 cm).
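The utilization criterion can be written directly as code. This is a hedged sketch of the sizing check described above, under the assumption that the 200% safety factor halves the usable yield strength; the numeric inputs are illustrative.

```python
def utilization(stress, yield_strength, safety_factor=2.0):
    """Cross-section utilization: applied stress over the factored yield
    strength. Negative stress denotes compression. Returns the ratio and
    whether it falls inside the target band of -50% to +50%."""
    u = stress / (yield_strength / safety_factor)
    return u, -0.5 <= u <= 0.5
```

An element whose utilization falls outside the band would be resized (larger section to reduce the ratio, smaller to increase it) before the next analysis iteration.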
The structural analysis shows that geometries with a hole had a significant increase in moments (a 130-200% increase); it was observed that the number of support conditions did not significantly affect the magnitude of the resulting principal stresses but rather their position and shape (see Figure 56). The environmentally informed shells were subject to higher principal forces, as their form was modified away from the purely form-found one; however, they performed better in the environmental analysis (see Figure 56). Although the designed workflow is not yet fully automated in terms of the generation and transfer of geometrical data for finite element analysis, the intent is to lay a foundation that can be automated for generation and analysis to form a singular structural agent, which will have the analytical results as an input. With proper automation, ample results can be generated for different profiles, different notch styles, varying material parameters, varying cross-sectional dimensions and varying frame morphologies. The next steps include the development of an autonomous structural agent, which will be able to analyze, visualize and evaluate structural analysis data from the generated geometries and communicate information to the generative agent in order to continually optimize the next generative iterations. The final objective is to develop a multi-agent system for architectural, structural and environmental optimization from design to construction.

Figure 55 Design parameters for the reciprocal frame and stress distribution of different units based on the number of elements

Figure 56 Structural analysis of global geometries generated by varying agent behaviors; (below): solar radiation analysis of global geometries generated with form finding and different agent behaviors

This case study demonstrates how the presented MAS framework can be applied to designing form-found shell structures that consider environmental parameters.
The simulation results showed improvements in terms of energy efficiency, by form finding shells whose global geometry increased the amount of available daylight. In terms of the design process, the designer was provided with a methodology for describing a structural system as an agent class and with an approach to steer away from pure form finding by integrating environmental parameters as behaviors. This approach was extended by implementing an agent class that enables form finding and allows the creation of behaviors based on environmental parameters. The experimental results are presented at two scales of investigation: 1) on the local level, the study of the critical design parameters of the reciprocal frame morphology (valency) and of the cross-sectional and element lengths; and 2) on the global level, the design of a shell structure. The geometric configurations of the design solutions vary, but the coupling of the generative with the analytical and optimizing processes in an agent-based logic ensured the satisfaction of the prescribed goals even if the design setup changed. The objective has remained: to demonstrate the value of the emergent, non-standard, and geometrically intricate as a viable post-Fordist solution for form finding and performance-driven design. The MAS framework was able to generate a shell as a reciprocal frame configuration, which provided longer spans based on short, self-similar elements. The experiments have demonstrated that architects and designers can benefit from implementing an integrated bottom-up design approach to the design of building components that lead to optimal global configurations. By combining analytical methods with agent-based modelling, and by properly formulating and passing analytical data automatically to a generative process, the MAS system provides designers with a larger pool of complex yet well-performing design solutions that could not be modelled manually.
Figure 57 Four different shell structures designed by Isler and constructed in Switzerland between 1978-1988

4.4. Experimental Design 4: Revisiting an Existing Shell Structure Using Behavioral Form Finding

This experiment tested the framework by applying it to an existing structure that was designed in the 1970s by Heinz Isler using form finding methods. The objective was to extend existing form finding techniques by representing additional design intentions as agent behaviors, and the aim was to allow designers to consider multiple design objectives beyond structure and material in the early design stage. The topology of the structure, as well as behaviors relating to the positions of the sun throughout the year, namely photophilic/photophobic behaviors, were investigated in addition to typical parameters such as stiffness, support conditions and loads. Another objective was to evaluate the approach of experiment 3 by applying it to an actual case study, evaluating the energy footprint of the structure (not only solar radiation), and compressing form finding and environmental optimization into one step. Isler devised a model for physical form finding that exploited the deformation of a flexible hanging cloth under gravity to generate a surface in pure tension under self-weight. The resulting form, when inverted, is in pure compression under equivalent loading. Although he had been experimenting with this method since the 1950s, Isler only started building structures in the late 1960s (Chilton, 2010). The virtue of this technique is that it can generate an infinite number of potential surface forms, even with the same supports and loads, just by altering the type and orientation of the cloth. Another advantage of the technique is that, instead of pushing the form towards a desired geometry, Isler let the form find itself based on the design criteria he had set.
Following that, precise measurements of the physical model, on many occasions a large-scale model, were made, and the model was load tested to determine the stress distribution and buckling behavior. Since geometry is the key factor in assessing the structural behavior of a shell, this stage of the modelling process enabled Isler to modify the shell form according to its predicted performance. Once a working model showed the desirable performance, it was selected and reused for other similar structures. It is clear that although this process proved rigorous and accurate, it is also time consuming and labor intensive, so only a small number of design iterations were possible within the limits of building design. As was briefly described in Section 4.3, a number of researchers have been inspired by Isler and have developed computational form finding methods. When Isler presented his seminal work on his physical form finding method at the First Congress of the International Association for Shell Structures (IASS) in 1959, he listed five key aspects of shell design: 1) the functional, 2) the shaping, 3) the architectural expression, 4) the statics, and 5) others, such as acoustics and light (Isler, 1959). The review of the literature indicates that the most emphasis has been put on the first four aspects and on acoustics, but little research has been done on the impact of light. Therefore, a drawback of Isler's and other contemporary form finding methods is that they do not consider contextual and environmental parameters. Although computational tools offer designers the opportunity to simulate the behavior of fabrics digitally, they also provide an opportunity to model additional behaviors which relate to sun, humidity or temperature, for instance.
Thus, the hypothesis here (similar to the previous experiment) is that, by developing design behaviors based on environmental parameters in the early design stage, designers can augment purely form-found shapes in order to reduce annual energy consumption and improve the daylight factor of a structure without sacrificing its structural performance.

Figure 58 Photos showing physical models (top left), the construction phase (top right), and the contemporary condition (bottom) of the Heimberg Sports Center by H. Isler

Figure 59 Flowchart diagram of the proposed behavioral form finding workflow

Case Study: Sports Center by Heinz Isler

Isler used fabric and physical form finding to design a number of structures (Chilton and Chuang, 2017). Among the most widely used of Isler's designs are the tennis and sports halls that he built in various locations in Switzerland (Figure 57). In this case study, a tennis hall that was built in 1978 in Heimberg, a small town in Switzerland, is revisited. The thin shell structure has a span of 48 meters and a length of 72 meters and is supported at 10 points. It is made of 100 mm thick reinforced concrete, has a footprint of approximately 3,000 m², and is still in use. Isler developed different designs for one bay (48 x 18 m) to test different design parameters such as the fabric density and orientation. He then evaluated the performance of the structure and selected one design that was replicated four times to create the final structure (Figure 58). The structure served as an archetype for three more shell structures that were designed and built in the following decade. The structures are all located in Switzerland and have exactly the same span, but their overall length and orientation vary. The material of all the structures is untreated concrete cast on top of 50 mm insulated Styrofoam panels, while the openings are single-panel curtain walls.
Design Process

Although little information is publicly available about the detailed geometry of the Heimberg shell, by accessing information about the shell via an online database (www.structurae.com) one can obtain the basic design parameters, simulate the structure, and generate a 3D model using Rhinoceros 3D and the Kangaroo particle physics solver. The first step after generating the 3D models is to simulate the existing structure and analyze it structurally.

Figure 60 Experimental design setup that describes the environment, design parameters and the heuristic function that couples the form finding with the photophilic behavior, and three different types of analysis (structural, radiation and thermal energy analysis)

The researcher modelled the concrete material and analyzed the structure using Karamba, a finite element analysis software geared towards interactive use in the visual scripting editor Grasshopper (Preisinger and Heimrath, 2014). He performed the same step for all four structures that were designed based on the same model. Apart from modeling the structure parametrically using the Kangaroo particle physics solver, the author also developed an agent-based modelling approach in order to explore more design alternatives. The established MAS framework was used to integrate the environmental parameters in parallel to form finding. This approach was based on the following assumptions: the generative (form finding) agents are represented as particles that are interconnected to represent a mesh surface, and each connection among the agents is modelled as a linear elastic spring with variable stiffness. Figure 60 graphically illustrates this approach. Forces are applied to each particle, but instead of only applying gravity and dead loads, as is typical in existing form finding methods, the researcher also modeled virtual forces, such as sun attraction, each scaled with a weight factor to adjust its impact.
The hypothesis is that agents can attain an equilibrium state and that, through iterative calculations, the form can be optimized not only for weight but also based on its environmental performance. The design process can be summarized as follows:

1. The typical parameters in structural design are defined, such as: the outline of the provided footprint (F) in the form of a polyline (P); the number and type of support conditions S(n,t); the material stiffness (E); material properties (G); and loads (L).

2. Next, the maximum number of agents and the topology of their connections was determined. This discretized the given input outline and set the initial geometric configuration of the mesh surface for the form finding. In this case four different topological variations were developed, namely: orthogonal, triangular, hexagonal and rhomboidal. The orthogonal (rectangular) variation was selected for the purpose of clarity.

3. The designer provided: the location's longitude and latitude (LL), the orientation (O) of the structure (N, S, E, W), and a weather data file (.epw). Additionally, the designer provided a generic use of the space (e.g. school, office, gym).

4. Next, the designer developed an environmental behavior for the agents, i.e. a photophilic or photophobic behavior depending on the design objective. In this case the behaviors are very basic, such as being attracted to selected sun positions to allow direct sunlight in the morning.

5. External physical loading (self-weight, dead load) was applied on the surface and external virtual loading (i.e. sun attraction) was used to derive the shape of the shell. The stiffness, weight and level of attraction of nodes were adjusted to find the equilibrium state. This is the main difference between the suggested approach and Isler's physical modelling or computational form finding approaches.
At each time step, the position of each agent is updated based on the summation of gravity forces and the additional forces that act upon it. By introducing specific positions of the sun as "virtual forces," this work directly links environmental parameters with form finding, which the designer can adjust by providing "weights" for each force. In this step, the designer specifies the duration of the form finding process and the "weight" of the photophilic or photophobic behavior.

6. Once a global equilibrium is reached and the velocity of each agent is close to 0, a NURBS geometry is generated based on the optimal force distribution. This can be directly exported to Rhinoceros 3D for further analysis.

7. The generated geometry is automatically passed to analytical software for structural and environmental analysis using Grasshopper, in conjunction with the environmental simulation and analysis software which was used to model the existing structures. The results are collected and used to inform the weight of the environmental behavior.

8. The process is repeated iteratively until the design objectives are met or the user stops it when satisfied.

Three types of specialist agent were created for the analyses performed, and they communicated with the generative (form finding) agent, namely: the structural analysis agent (SAa), which accounts for displacement; an energy analysis agent (EAa), which accounts for the total thermal energy required annually by the structure; and a daylight factor analysis agent (DFAa), which accounts for the amount of sunlight on and beneath the shell structure. A coordinator agent ensures the passing of information between the agents and the different software platforms. The flowchart in Figure 59 graphically outlines the steps of the design process.
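The particle-spring mechanics behind the steps above can be sketched in a few dozen lines. This is a minimal, pure-Python illustration of the general idea (explicit integration, Hooke's-law springs, gravity plus a weighted "photophilic" attraction toward sun positions); it is not the Kangaroo solver or the actual Termite implementation, and all names and constants are assumptions.

```python
import math

class Particle:
    """A form-finding agent: a mesh node with position, velocity and a
    'fixed' flag for support nodes, which do not move."""
    def __init__(self, pos, fixed=False):
        self.pos = list(pos)        # [x, y, z]
        self.vel = [0.0, 0.0, 0.0]
        self.fixed = fixed

def spring_force(a, b, rest, stiffness):
    """Linear elastic spring (Hooke's law) acting on particle a toward b."""
    d = [b.pos[i] - a.pos[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in d)) or 1e-9
    f = stiffness * (length - rest)
    return [f * c / length for c in d]

def sun_attraction(p, sun_pos, n_pts, weight):
    """Virtual 'photophilic' force pulling a node toward one sun position,
    scaled by the number of attractor points, distance and a weight."""
    d = [sun_pos[i] - p.pos[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in d)) or 1e-9
    scale = weight / (n_pts * dist)
    return [scale * c for c in d]

def step(particles, springs, suns, weight, gravity=-9.81, dt=0.01, damping=0.95):
    """One integration step; springs are (i, j, rest_length, stiffness)
    tuples. Returns the maximum node speed so the caller can stop once
    the system is near equilibrium (velocity close to 0, step 6)."""
    for p in particles:
        if p.fixed:
            continue
        force = [0.0, 0.0, gravity]                       # self-weight
        for (i, j, rest, k) in springs:
            if particles[i] is p:
                f = spring_force(p, particles[j], rest, k)
                force = [force[n] + f[n] for n in range(3)]
            elif particles[j] is p:
                f = spring_force(p, particles[i], rest, k)
                force = [force[n] + f[n] for n in range(3)]
        for s in suns:                                    # photophilic behavior
            f = sun_attraction(p, s, len(suns), weight)
            force = [force[n] + f[n] for n in range(3)]
        p.vel = [damping * (p.vel[n] + force[n] * dt) for n in range(3)]
        p.pos = [p.pos[n] + p.vel[n] * dt for n in range(3)]
    return max(math.sqrt(sum(v * v for v in p.vel)) for p in particles)
```

Calling `step` repeatedly until the returned speed drops below a tolerance corresponds to the relaxation loop of steps 5-6; setting `weight` to zero recovers pure form finding.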
In this case, the MAS framework is used to rapidly develop alternatives and enable the designer, via a GUI, to build an understanding of how different behaviors and configurations affect both the structural and environmental performance of the shell design (Figure 61).

Results and Analysis

In order to analyze the structures environmentally, the author used the coordinates of each structure to extract the solar path and compile a weather file for each area using Ladybug and Honeybee. He provided data such as the use of the space (program), schedules, and basic material properties in order to build the energy analysis model, and analyzed each structure. In Figure 62 the author has tabulated and compared the analytical results for each of Isler's structures. As one would expect, structurally they behave almost the same, with only small differences between them due to the differences in their sizes. However, their environmental performance varies quite significantly. For instance, Case D, which is the longest structure and is oriented along a southeast/northwest axis, has the lowest average daylight factor.

Figure 61 Graphical user interface of the alpha version of the tool. On the top left side (control panel) are all the input parameters, in the middle is the geometry viewport, and on the left is the window used to interact with Rhinoceros 3D and Grasshopper. In the bottom panel the results generated are shown using a parallel lines plot

Figure 62 Analytical results of the four different structures

Although the simulations are not based on detailed 3D models, in order to validate the results the researcher compared them against the results of a survey from the Chair of Ecological Systems Design at ETH Zurich, which catalogs the embodied environmental impact of building stock in Switzerland since the 1920s (Ostermeyer et al., 2018).
This case study's annual energy simulation results fall within the survey's indicated range of average energy consumption for buildings of that age. These results were then used as a baseline, and Case A was selected. To apply this researcher's approach, the results of the other three cases were used as design targets and the MAS system was used to generate new shell shapes. In order to augment the purely form-found shapes, a photophilic or photophobic behavior was assigned to specific points that corresponded to sun positions of a given location, selected by the designer. To clarify this: depending on the location, the designer may assign a photophilic behavior to the positions of the sun during the morning hours, which increases the amount of daylight, and assign a photophobic behavior to the positions of the sun during the afternoon hours, which would otherwise increase daylight but also significantly increase heat gain and thereby the structure's total energy consumption. In order to ensure that the behavior does not lead to undesired results, the light attraction force is scaled according to the number of attractor points (Npt), the distance (D) of each point to the structure, and a weight pb(w), as seen in the equation below:

SunAttractionForce = Npt ∗ D ∗ pb(w)

The weight w is related to a heuristic function in which the analytical results are used to calculate its value. The heuristic function uses the total energy consumption (tE), the total radiation (tR) underneath the structure, and the maximum displacement in order to calculate the value of the weight, and it adjusts the value of the force according to the desired result. The closer a solution's analytical results are to the desired ones, as compared to a base case (i.e. the purely form-found case), the higher the weight of the photophilic behavior becomes. A penalty is subtracted to reduce the weight if a point is moved to an undesired location:

pb(w) = (tE_base / tE + tR / tR_base) − Σ p

where tE_base and tR_base are the values of the base case and p denotes the penalty terms.

Using this heuristic, the author ran the system. The designer can interactively change the value of the behavior based on the assessment of the analytical results and the geometry. Once she or he sets a value for the behavior, the system runs for a specified number of iterations in order to generate design alternatives. Figure 63 displays a parallel line plot showing a subset of the behaviorally form-found shapes. The blue lines indicate the values of the four existing case studies, while the red ones show the design alternatives of Case Study A after applying a photophilic behavior. The system was able to generate alternatives that have the same maximum displacement as the base case, yet decrease the average annual energy consumption by 12%, increase the daylight factor by 9%, and increase the average solar radiation underneath the structure by 102%.

Figure 63 A subset of design alternatives presented to the designer, highlighting the one that best meets the design objectives based on the available analytical metrics

Part V Conclusions

5. Overall Results and Analysis

The MAS framework presented in this dissertation has tremendous potential for developing expert design systems in which the designer defines the components and low-level relationships and the system generates and evaluates design alternatives based on the designer's interactive feedback. The open-ended character of the framework allows the designer to extend it to a variety of design problems.

5.1. Summary of Results

The results of this dissertation are discussed in two parts: 1. the general development of the methodology based upon multi-agent systems, and 2. the necessity of developing an understanding of complexity and developing new kinds of abstractions and holistic approaches for managing architectural complexity.

5.1.1.1.
Multi-Agent Systems Framework for Architectural Design

This dissertation has presented Behavioral Form Finding, a novel design methodology for exploring agent-based modelling and simulation methods for architectural design. The framework allows the user to model building components as agents based on a decomposed architectural design problem (e.g. façade design, shell design). Through the integration of data sources and the combined use of analytical tools and construction constraints, the designer controls the behavior of the agents that generate the design alternatives. This was demonstrated via a set of design experiments in which a number of agent classes were implemented that dealt with the generation of geometry, the translation of the geometric models into structural, environmental and construction models, and the data passing between them. These classes can be combined into teams of agents to facilitate design exploration, helping designers explore larger solution spaces and reach solutions which would not be attainable via conventional design and building methods. A number of heuristic methods were investigated for informing the agents and predicting better behaviors depending on a set of objectives. It is necessary to note that a MAS approach differs from conventional design approaches in that the designer does not draw the geometry directly but rather develops and adjusts agent behaviors which lead to the generation of geometries. Behavioral form finding can be considered a viable extension of existing agent-based modeling techniques in that it offers a framework for combining agent behaviors with analytical results and a way to model design objectives as forces that act upon the agents. In Figure 64 the suggested framework is graphically illustrated.
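One way the agent-class abstraction described above could be organized in code is sketched below. This is a hypothetical illustration of the pattern (weighted behaviors summed into an agent's state update, with a coordinator passing shared data between agents), not the actual Termite implementation; all class and parameter names are assumptions.

```python
class Behavior:
    """A named, weighted rule that proposes a state change for an agent."""
    def __init__(self, name, weight, rule):
        self.name, self.weight, self.rule = name, weight, rule

    def propose(self, agent, environment):
        # The rule sees the agent and shared environment data (e.g. analysis
        # results) and returns a raw contribution, scaled by the weight.
        return self.weight * self.rule(agent, environment)

class Agent:
    """A building component modelled as an agent: its state is driven by the
    weighted sum of its behaviors rather than being drawn directly."""
    def __init__(self, state, behaviors):
        self.state = state
        self.behaviors = behaviors

    def update(self, environment):
        self.state += sum(b.propose(self, environment) for b in self.behaviors)
        return self.state

class Coordinator:
    """Steps a team of agents, standing in for the coordinator agent that
    passes analysis data between agents and software platforms."""
    def __init__(self, agents):
        self.agents = agents

    def run(self, environment, steps):
        for _ in range(steps):
            for a in self.agents:
                a.update(environment)
        return [a.state for a in self.agents]
```

The same three roles (behavior, agent, coordinator) recur in each experiment; swapping the `rule` callables is what turns a generic agent into a structural, energy or daylight specialist.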
The Termite framework provides a way to combine top-down and bottom-up design approaches and offers a library of abstract and physical agents as well as a number of behaviors. It also provides methods for forming agent behaviors using computational methods that range from geometric rules to heuristic methods and agent team formation. Lastly, the MAS framework provides a set of abstractions for mapping signals and cues that can be collected via sensors.

Figure 64 Schematic representation of the "Termite" MAS framework which combines top-down and bottom-up design approaches

The multi-agent systems investigated here, whether physical or digital, are scalable, because each additional agent (e.g. robot) does not increase the interaction complexity. Instead of directly dictating commands to each robot individually, this framework allows the designer to specify high-level rules and spatially varying design parameters. In the proposed MAS framework, the designer does not need to design the final structure explicitly but acts more like a conductor: creating a robotic construction setup, deducing geometric rules from it, starting the system, and then controlling the behaviors of the players (robots) and adding or removing robots as necessary. High-level constraints can be site specific, such as avoiding an area (the window openings we saw in experiment 2) or moving towards areas with more solar radiation, or they can include preferences such as regions where material density is unwanted. The advent of digital fabrication and additive manufacturing has had a great impact on the practice of design. As we move towards the fourth Industrial Revolution and the application of robotic construction becomes more tangible, we should consider the concepts from the field of complexity that were outlined in Section 2. Additionally, digital fabrication should be viewed through the perspective of a group of robots and not just single robotic arms.
Current robotic simulation and design practice requires the designer to provide a geometry and a related toolpath for a robotic arm to construct a structure. It is apparent that this easily becomes an intractable design task if we have a group of robots: with each additional robotic arm, the overall complexity due to robot-to-robot and robot-to-structure interactions increases exponentially. Experiment 1 demonstrated how simple behaviors can be deduced from a very simple robotic system in order to develop simulations based on it. This was a pilot study to demonstrate how kinematic and dynamic constraints can be coupled with the physical limitations of a specific robotic system. By introducing these constraints, the design tool prevents the generation of physically infeasible designs and of design alternatives that would be impossible to fabricate. Due to the lack of machine-specific design tools, generative design examples to date have been complex to construct. The typical process has involved generating a design digitally and exporting it to a design platform that uses a top-down approach to break it down into pieces that can be fabricated and assembled into the final structure. This requires a significant logistical effort and coordination that increases the complexity of construction. By coupling the design tool with a robotic system, the design process is simplified while still allowing the generation of complex and site-specific design alternatives. The experimental design has shown the potential of the MAS framework to integrate generative design protocols with robotic construction, coupling design generation with abstract agents and a fabrication system.

5.1.1.2.
Towards New Kinds of Abstractions: A Building as a Finely Tuned Orchestra

In an attempt to summarize how the principles of holism and complexity (reviewed in Section 2.2) and concepts such as abstraction and modularity can be used to develop tools that enable designers to produce cohesive architectural proposals, let us draw a metaphor between designing and constructing a building and preparing and delivering a symphonic concert. Consider a symphonic orchestra with our "abstraction hats" on (see Figure 65). An orchestra is an instance of hierarchy and abstraction at the same time. It is analogous to many things we see every day, including information technology systems, games, factories, cars, buildings and cities. They all have a common thread: collections and layers of rules, specifications, archetypes and abstractions. In an orchestra, the conductor leads the system. She or he is not playing an instrument, but directs the events and acts as the quality controller. The conductor is the highest level in a hierarchy of abstractions. If we look at a musical score, which is similar to architectural drawings, we see embedded instructions. The notes are written, but they are not the sounds; they encapsulate the rules of the music: the sequence, pitch, volume and timing. But nothing happens until we get to the executive layer: the individual musicians and their instruments.

Figure 65 A diagram showing how structural hierarchies and different layers of abstraction can come together to produce a complex outcome (i.e. a concert, a building)

The score contains the high-level sequential instructions necessary to execute, in specific ways, the lower-level instructions required to get the intended result. The musicians do that job and perform a role analogous to that of architectural designers. The musicians (designers) execute parts (automation) within the score (high-level commands) on a particular instrument (operating system/design tool).
The musician translates the encoded information into sounds (a tool) necessary for the audience to appreciate the outcome (business function), all managed by the top layer (orchestration) and intended by the composer (IT service owner) to please the audience (clients). These abstractions enable useful hierarchies. Abstractions use interfaces between layers and are built on trusted behaviors. The system works because each layer is aware of the rules agreed to by the adjacent layers. There is a common language between those layers: music. A transformation happens within each layer when something more specific is added. If the conductor had to play all the instruments, the outcome would be impossible. This allows us to see specialization and abstraction working together to form a useful hierarchy, from a general case to a specific one. There are technical elements, encoding elements and transformational elements. Without these structures, we would not have an orchestra. Abstract layers link together via a common language and internally take care of translation. While we are all aware of this example on the surface, it is worth examining it in the light of our new awareness. This idea is as natural to us as breathing, but the sophistication of the abstractions would not have happened without recognizing the incremental disciplines involved. Humans built this musical result after years of evolved thinking. It is an example in which far more has been realized because of the structural rules imposed. It has yielded far greater returns than if we simply sat the entire group of musicians in a room and shouted, "Play!" Instead, it is as if we sat the musicians down and had them agree to a set of operating principles for an agreed-upon common target. Then we filled the agreed structure with the rules of musical notation (software) that we had enabled and tested a priori, and then we executed the sequence within them.
The result is robust, recognizable and repeatable, and can deliver far more impact than random sounds, or even one musician trying to play each instrument individually. If we make an analogy between the delivery of an opera and a building project, we observe that many complexities and issues arise in the latter because the "musicians" of the construction industry (i.e. architects, engineers and construction crews), although they agree to a set of operating principles (project brief, contracts, etc.), are sitting in different rooms; each one is using their own notation and playing alone.

5.2. Contributions

The main contribution of this work is a design methodology for integrative architectural design in the form of a multi-agent systems framework. In this proposed methodology, agents, that is, modular programming blocks, are used to represent building elements and enable the seamless combination of generative design with environmental and structural analysis, as well as construction constraints. The proposed methodology differs qualitatively from conventional design approaches; instead of drawing explicit geometry, the architect develops agents and designs behaviors that generate the geometry. The role of the designer then becomes a) to decompose design problems appropriately so that solutions can be generated by a group of agents, and b) to evaluate the behaviors developed based on a set of design targets. The framework is manifested in "Termite," an agent-based design and simulation toolkit, which serves as an apparatus for developing and prototyping behaviors and for studying the relationship of low-level agent behaviors to emerging high-level system behaviors. As developing and testing agent behaviors is an open-ended task that can be tedious to control, the author has investigated how analytical methods and statistics can be used to evaluate both user-defined agent behaviors and the emerging system behavior.
Due to both visual and numerical feedback, the designer can intuitively understand the impact of the agents' behaviors on the different types of performance metrics that she or he has defined. Thanks to the integration of domain-specific data (i.e. weather files, material characteristics, geometric constraints) and the communication with external analysis software (such as finite element analysis and energy analysis software), the system can quickly evaluate designs and present to the designer those that meet the defined targets. The framework is applied to a number of experimental designs that deal with a variety of problems. To be able to draw conclusions, the case studies focused on sub-problems of building design that have traditionally required the tight collaboration of architects and engineers to achieve cohesive solutions. Façade design, as well as free-form shell design, are typical problems of that class, and the experiments have demonstrated how low-level agent behaviors can lead globally to emergent design outputs. The research also shows how, by developing and controlling behaviors which relate to building elements or construction processes, better performing design alternatives can be achieved both quantitatively and qualitatively. Due to its unique ability to accommodate change, the proposed agent-based framework can be effectively employed in the early design stage, thus enabling architects to explore design alternatives by considering not only geometrical aspects but also environmental, structural and construction aspects. Last, but not least, another important contribution of this work is bridging the gap between behaviors developed digitally and the actual behavior of physical robots, by establishing an experimental setup which relies on both physical experimentation and computational analysis.
The first experiment demonstrated how swarm-like behaviors can be implemented physically via the manipulation of the geometric features of robots, leading to results similar to those observed in digital simulations. Specifically, the author developed tools for gathering data from physical experiments with low-level robots and used the data collected to design and test how the shape of the agents can lead to the formation of emergent structures. The author considers this the very first step towards the development of a systemic approach in building design for investigating novel building systems and autonomous robotic construction processes. This step has been motivated by the increasing integration of sensors in robotic construction, which has opened up the possibility of expanding multi-agent systems approaches from the digital realm to actual physical implementations. Despite this, during the last decade the majority of the research on the application of robots in building construction has focused solely on industrial robotic arms and not on other types of robotic systems.

5.3. Future Work

In the near term the MAS toolkit may be improved by several additions. The algorithmic framework could incorporate more agent classes and further facilitate the formation of teams of agents by retrieving data from existing projects. A generic building component class, through which the designer could directly import a building element from BIM software instead of designing it, would greatly improve the integration of the toolkit with BIM platforms. Additionally, to capture a building's future performance holistically, a specialist agent class focusing on human comfort could be developed. Another improvement would be a more rigorous integration of multiple user preferences through interactive visualization of design alternatives using augmented reality.
Last, but not least, although designers can inform decision-making by coupling the assessment of geometric features with the visualization of analytical results, correlating all the high-dimensional datasets generated is challenging for human cognition (Harding, 2016). Implementing dimensionality reduction (DR) methods is considered crucial for aiding designers by reducing high-dimensional data while maintaining the nonlinear associations between design parameters. Therefore, extending the framework to include non-linear approaches for feature extraction, such as self-organizing maps (Kohonen, 1998), is an essential step for integrating more types of design analysis. In terms of the Termite toolkit, in the long term it will benefit from further user testing and from being applied to more real-world cases; this will improve the reliability of the framework in producing meaningful solutions and also test its scalability. Another major advance would be to create a web interface to visualize both results and geometry, as well as a cloud-based database for storing and easily querying all the solutions generated. The way the framework has been implemented allows for this extension, which is considered an important step towards developing artificially intelligent tools. For example, architects could develop their own library of designs, with all the analytical data structured in a way that could be easily accessed by machine learning algorithms, so that when they are working on a new project the system could retrieve similar previous projects and use the existing data to generate design alternatives. Lastly, based on the preliminary study with bristlebots and given the open-ended character of the multi-agent systems framework, a long-term goal is to model, simulate and control a more sophisticated cyber-physical robotic construction system and apply it to the construction of 1:1 scale structures.
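As a sketch of how such a non-linear DR step might work, the following is a minimal self-organizing map in the spirit of Kohonen (1998). It is illustrative code under assumed grid sizes and learning schedules, not part of the Termite toolkit:

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: projects high-dimensional design
    vectors onto a 2-D grid while preserving local similarity."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.uniform(data.min(), data.max(), (h, w, dim))
    # grid coordinates, used by the neighborhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood
        x = data[rng.integers(len(data))]     # random training sample
        # best-matching unit (BMU): grid cell whose weight is closest to x
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), (h, w))
        # Gaussian neighborhood around the BMU pulls nearby cells toward x
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def project(weights, x):
    """Map one design vector to its best-matching cell on the 2-D grid."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

Each design alternative, encoded as a vector of parameters and analysis results, could then be placed on the 2-D grid so that similar alternatives cluster in neighboring cells.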
Current research using robotic arms and additive manufacturing has opened up possibilities for novel building methods and is also taking a step further towards making them acceptable to building codes (Schwinn and Menges, 2015). It is the author's firm belief that viable autonomous robotic construction in the real world will be achieved through the combined research of abstract agents, physical robots, and vernacular building systems. Thus, the vision, which is supported by a number of peer roboticists and design researchers, is to develop a robotic platform which is powered by sunlight, can navigate in 3D space while dealing with external disturbances such as uneven surfaces and dust, and can manipulate generic building blocks found in its environment in order to build aggregate structures.

Part VI Bibliography

6. List of Relevant Publications

The following publications were completed as part of the author's Ph.D. studies. There are 22 peer-reviewed international publications, of which 5 are journal papers and the other 17 are long conference papers. In addition, there are two publications currently in the review process and one in the submission process. Last, but not least, there is the release of Termite, the alpha version of a multi-agent systems toolkit for architectural design.

Peer-Reviewed Journal Papers

Published

[1] Pantazis, E. and Gerber, D. (2018): "A framework for generating and evaluating façade designs using a multi-agent systems approach," International Journal of Architectural Computing, 16, pp. 248-270.

[2] Gerber, D., Pantazis, E. and Wang, A. (2017): "A multi-agent approach for performance-based architecture: Design exploring geometry, user, and environmental agencies in façades," Automation in Construction, 76, pp. 45-58.

Currently under review

[3] Pantazis, E. and Gerber, D.
(2019): "Beyond Geometric Complexity: A Critical Review of Complexity Theory in Architecture," Architectural Science Review, Vol 60, 7. (accepted with minor revisions)

[4] Pantazis, E. and Gerber, D.: "Behavioral Form Finding: Interactive Shell Design Using Multi-Agent Systems," ArchiDoc. (accepted with minor revisions)

Ready for Submission

[5] Pantazis, E. and Gerber, D. (to be submitted in April 2019): "Autonomous Collective Construction Using a Swarm Robotic System," Automation in Construction.

Peer-Reviewed Conference Papers

1. Pantazis, E. and Gerber, D. (2017): "Emergent order through swarm fluctuations: A framework for exploring self-organizing structures using swarm robotics," in ShoCK: Sharing Computational Knowledge! Proceedings of the 35th eCAADe Conference, A. Fioravanti, et al., Editors, Sapienza University of Rome, Italy, pp. 75-84.

2. Heydarian, A., Pantazis, E., Gerber, D. and Becerik-Gerber, B. (2016): "Defining Lighting Settings to Accommodate End-user Preferences while Reducing Energy Consumption in Buildings," Proc. Construction Research Congress, pp. 1122-1132.

3. Pantazis, E. and Gerber, D. (2016): "Design Exploring Complexity in Architectural Shells: Interactive Form Finding of Reciprocal Frames through a Multi-Agent System," in 34th eCAADe: Complexity and Simplicity, A. Herneoja, et al., Editors, Oulu, Finland.

4. Marcolino, L. S., Xu, H., Gerber, D., Kolev, B., Price, S., Pantazis, E. and Tambe, M. (2016): "Multi-Agent Team Formation for Design Problems," in Coordination, Organizations, Institutions, and Norms in Agent Systems XI, V. Dignum, et al., Springer.

5. Pantazis, E., Gerber, D. and Wang, A. (2016): "A Multi-Agent System for Design: Geometric Complexity in Support of Building Performance," in Symposium for Modelling and Simulation in Architecture and Urban Design (SimAUD), Ramtin, A., et al., Editors, pp. 137-143, London, UK.

6.
Gerber, D. and Pantazis, E. (2016): "A Multi-Agent System for Design: A Design Methodology for Design Exploration, Analysis and Simulated Robotic Fabrication," in ACADIA 16: Posthuman Frontiers, Velikov, K., et al., Editors, pp. 12-21, University of Michigan, Ann Arbor, Michigan.

7. Gerber, D., Pantazis, E. and Marcolino, L. S. (2015): "Design Agency," in Computer Aided Architectural Design Futures: The Next City - New Technologies and the Future of the Built Environment, Celani, G., et al., Editors, Springer, pp. 213-235, Sao Paulo, Brazil.

8. Heydarian, A., Carneiro, J. P., Pantazis, E., Gerber, D. and Becerik-Gerber, B. (2015): "Default Conditions: A Reason for Design to Integrate Human Factors," Sustainable Human-Building Ecosystems, pp. 54-62.

9. Heydarian, A., Pantazis, E., Carneiro, J. P., Gerber, D. and Becerik-Gerber, B. (2015): "Towards Understanding End-user Lighting Preferences in Office Spaces by Using Immersive Virtual Environments," The International Workshop on Computing in Civil Engineering, pp. 475-482.

10. Gerber, D., Pantazis, E., Marcolino, L. and Heydarian, A. (2015): "A Multi-Agent Framework for Simulation of Cyber Physical Social Feedback for Architecture," The Symposium on Simulation for Architecture and Urban Design (SimAUD), pp. 205-212.

11. Heydarian, A., Carneiro, J. P., Pantazis, E., Gerber, D. and Becerik-Gerber, B. (2015): "Default Conditions: A Reason for Design to Integrate Human Factors," in Sustainable Human-Building Ecosystems, pp. 54-62.

12. Heydarian, A., Pantazis, E., Carneiro, J. P., Gerber, D. and Becerik-Gerber, B. (2015): "Towards Understanding End-User Lighting Preferences in Office Spaces by Using Immersive Virtual Environments," in Computing in Civil Engineering, ASCE, pp. 475-482.

13. Heydarian, A., Pantazis, E., Gerber, D. and Becerik-Gerber, B. (2016): "Defining Lighting Settings to Accommodate End-User Preferences while Reducing Energy Consumption in Buildings," in Construction Research Congress 2016, pp. 1122-1132.

14.
Marcolino, L., Price, S., Pantazis, E. and Tambe, M. (2015): "Multi-Agent Team Formation for Design Problems," in Coordination, Organizations, Institutions, and Norms in Agent Systems XI: COIN 2015 International Workshops, COIN@AAMAS, Springer, 354, Istanbul, Turkey.

15. Marcolino, L. S., Gerber, D., Kolev, B., Price, S., Pantazis, E., Tian, Y. and Tambe, M. (2015): "Agents Vote for the Environment: Designing Energy-Efficient Architecture," in AAAI Workshop on Computational Sustainability.

16. Marcolino, L. S., Xu, H., Gerber, D., Kolev, B., Price, S., Pantazis, E. and Tambe, M. (2015): "Agent Teams for Design Problems," in COIN@AAMAS, pp. 189-195, Istanbul, Turkey.

17. Pantazis, E. and Gerber, D. (2014): "Material Swarm Articulations - New View Reciprocal Frame Canopy," in 32nd eCAADe: Fusion, E. M. Thompson, et al., Editors, pp. 463-473, Newcastle, England.

7. References

ABBASI, Y. D., SHORT, M., SINHA, A., SINTOV, N., ZHANG, C. & TAMBE, M. Human Adversaries in Opportunistic Crime Security Games: Evaluating Competing Bounded Rationality Models. Proceedings of the Third Annual Conference on Advances in Cognitive Systems (ACS), 2015. 2.

ACHTEN, H. & JESSURUN, J. An Agent Framework for Recognition of Graphic Units in Drawings. 20th eCAADe Conference Proceedings: Connecting the Real and the Virtual - design e-ducation, 18-20 September 2002, Warsaw, Poland. 246-253.

ADRIAENSSENS, S., BLOCK, P., VEENENDAAL, D. & WILLIAMS, C. 2014. Shell Structures for Architecture: Form Finding and Optimization, London and New York, Taylor & Francis - Routledge.

ALEXANDER, C. 1964. Notes on the synthesis of form, Harvard University Press.

ANDER, G. D. 2003. Daylighting performance and design, John Wiley & Sons.

ANDRÉEN, D., JENNING, P., NAPP, N. & PETERSEN, K. 2016. Emergent Structures Assembled by Large Swarms of Simple Robots. In: VELIKOV, K., AHLQUIST, S., CAMPO, M. & THUN, G.
(eds.) ACADIA 2016: Posthuman Frontiers. Ann Arbor, Michigan.

ANUMBA, C., UGWU, O., NEWNHAM, L. & THORPE, A. 2001a. A multi-agent system for distributed collaborative design. Logistics Information Management, 14, 355-367.

ANUMBA, C. J., NEWNHAM, L., UGWU, O. O. & REN, Z. 2001b. Intelligent Agent Applications in Construction Engineering, Creative Systems in Structural and Construction Engineering, CRC Press.

ARANDA, B. & LASCH, C. 2006. Flocking. Tooling. New York: Princeton Architectural Press.

ARCHER, A. 2006. Multimodal semiotic resources in an Engineering curriculum. Academic Literacy and the Languages of Change, 130.

ASIMOW, M. 1962. Introduction to design, Prentice-Hall.

AXEL KILIAN. 2006. Design Exploration through bidirectional modeling of Constraints. Doctor of Philosophy in Architecture, Design and Computation, MIT.

BACCARINI, D. 1996. The concept of project complexity—a review. International Journal of Project Management, 14, 201-204.

BARTON, G. E., BERRYWICK, R. C. & RISTAD, E. S. 1987. Computational Complexity and Natural Language.

BECHTHOLD, M., KING, N., KANE, A. O., NIEMASZ, J. & REINHART, C. Integrated environmental design and robotic fabrication workflow for ceramic shading systems. 28th International Association for Automation and Robotics in Construction (ISARC), 2011, Seoul, Korea. 70-75.

BEETZ, J., VAN LEEUWEN, J. & DE VRIES, B. 2004. Towards a Multi Agent System for the Support of Collaborative Design, Developments in Design & Decision Support Systems in Architecture and Urban Planning, Eindhoven University of Technology.

BENI, G. & WANG, J. 1993. Swarm intelligence in cellular robotic systems. Robots and Biological Systems: Towards a New Bionics? Springer.

BENNETT, C. H. 1995. Logical depth and physical complexity, Springer.

BENNETT, J. 1991. International construction project management: general theory and practice, New Hampshire, USA, Butterworth-Heinemann.

BERTALANFFY, L. V. 1950.
An Outline of General System Theory. The British Journal for the Philosophy of Science, 1, 134-165.

BEYER, J. M. & TRICE, H. M. 1979. A reexamination of the relations between size and various components of organizational complexity. Administrative Science Quarterly, 48-64.

BLOCK, P. 2009. Thrust Network Analysis: Exploring Three-dimensional Equilibrium. Doctor of Philosophy, MIT.

BLOCK, P. & OCHSENDORF, J. 2007. Thrust network analysis: A new methodology for three-dimensional equilibrium. International Association for Shell and Spatial Structures, 155, 167.

BONABEAU, E., DORIGO, M. & THERAULAZ, G. 1999. Swarm intelligence: from natural to artificial systems, New York, USA, Oxford University Press.

BRADSKI, G. & KAEHLER, A. 2008. Learning OpenCV: Computer vision with the OpenCV library, O'Reilly Media, Inc.

BUKHARI, F., FRAZER, J. H. & DROGEMULLER, R. Evolutionary algorithms for sustainable building design. The 2nd International Conference on Sustainable Architecture and Urban Development, 12-14 July 2010, Amman, Jordan.

BULLINGER, H.-J., BAUER, W., WENZEL, G. & BLACH, R. 2010. Towards user centred design (UCD) in architecture based on immersive virtual environments. Computers in Industry, 61, 372-379.

BUNDY, A. 2007. Computational thinking is pervasive. Journal of Scientific and Practical Computing, 1, 67-69.

CALDAS, C. H. & SOIBELMAN, L. 2003. Automating hierarchical document classification for construction management information systems. Automation in Construction, 12, 395-406.

CALDAS, L. 2008. Generation of energy-efficient architecture solutions applying GENE_ARCH: An evolution-based generative design system. Advanced Engineering Informatics, 22, 59-70.

CALDAS, L. G. & NORFORD, L. K. 2002. A design optimization tool based on a genetic algorithm. Automation in Construction, 11, 173-184.

CARPO, M. 2013. The digital turn in architecture 1992-2012, John Wiley & Sons.

CARRANZA, P. M. & COATES, P.
Swarm modelling: The use of Swarm Intelligence to generate architectural form. 3rd Generative Art Conference, 2000, Milan, Italy. AleaDesign Publisher.

CHAITIN, G. J. 1990. Information, randomness & incompleteness: papers on algorithmic information theory, New York, USA, World Scientific.

CHEN, C.-H. A prototype using Multi-Agent Based Simulation in Spatial Analysis and Planning. 14th Annual Conference of the Association of Computer Aided Architectural Design (CAADRIA), 2009, Douliu, Taiwan. 513-521.

CHILTON, J. 2010. Heinz Isler's Infinite Spectrum: Form-Finding in Design. Architectural Design, 80, 64-71.

CHILTON, J. & CHUANG, C.-C. 2017. Rooted in nature: aesthetics, geometry and structure in the shells of Heinz Isler. Nexus Network Journal, 19, 763-785.

CHILTON, J. & ISLER, H. 2000. Heinz Isler, London, Thomas Telford Publishing.

CLEMENTS-CROOME, D. 2004. Intelligent buildings: design, management and operation, Thomas Telford.

CLEVENGER, C. M. & HAYMAKER, J. Frameworks and metrics for assessing the guidance of design processes. ICED'09, 24-27 August 2009, Stanford, CA, USA.

COBHAM, A. The intrinsic computational difficulty of functions. In: BAR-HILLEL, Y., ed. Logic, Methodology and Philosophy of Science: Studies in Logic and the Foundations of Mathematics, 1965. North-Holland Publishing, 24-30.

COVER, T. M. & THOMAS, J. A. 2012. Elements of information theory, John Wiley & Sons.

CROSS, N. & ROOZENBURG, N. 1992. Modelling the design process in engineering and in architecture. Journal of Engineering Design, 3, 325-337.

CRUTCHFIELD, J. P. 1994. The calculi of emergence: computation, dynamics and induction. Physica D: Nonlinear Phenomena, 75, 11-54.

CRYER, J. N. 1994. Design Team Agreements 3.43. The Architect's Handbook of Professional Practice, 2.

CUTLER, B., SHENG, Y., MARTIN, S., GLASER, D. & ANDERSEN, M. 2008. Interactive selection of optimal fenestration materials for schematic architectural daylighting design.
Automation in Construction, 17, 809-823.

DE MEYER, K. 2000. Explorations in stochastic diffusion search: Soft- and hardware implementations of biologically inspired spiking neuron stochastic diffusion networks. Technical Report KDM/JMB/2000.

DENT, E. B. 1999. Complexity science: A worldview shift. Emergence, 1, 5-19.

DIJKSTRA, J., TIMMERMANS, H. J. & JESSURUN, A. 2001. A multi-agent cellular automata system for visualising simulated pedestrian activity. Theory and Practical Issues on Cellular Automata. Springer.

DIMCIC, M. 2012. Structural optimization of grid shells based on genetic algorithms. Ph.D., Stuttgart.

DURO-ROYO, J., MOGAS-SOLDEVILA, L. & OXMAN, N. 2015. Flow-based fabrication: An integrated computational workflow for design and digital additive manufacturing of multifunctional heterogeneously structured objects. Computer-Aided Design, 69, 143-154.

EASTMAN, C. M., EASTMAN, C., TEICHOLZ, P. & SACKS, R. 2011. BIM handbook: A guide to building information modeling for owners, managers, designers, engineers and contractors, John Wiley & Sons.

EDNIE-BROWN, P. & ANDRASEK, A. 2006. Continuum: A Self-Engineering Creature-Culture. Architectural Design, 76, 18-25.

FABI, V., ANDERSEN, R. V., CORGNATI, S. P. & OLESEN, B. W. 2013. A methodology for modelling energy-related human behaviour: Application to window opening behaviour in residential buildings. Building Simulation, 6, 415-427.

FELDMAN, D. P. & CRUTCHFIELD, J. 1998. A survey of complexity measures. Santa Fe Institute, USA, 11.

FELKNER, J., CHATZI, E. & KOTNIK, T. Interactive particle swarm optimization for the architectural design of truss structures. Computational Intelligence for Engineering Solutions (CIES), 2013 IEEE Symposium on, 2013. IEEE, 15-22.

FERREIRA, P. 2001. Tracing Complexity Theory. Research Seminar in Engineering Systems. Pittsburgh, PA: Carnegie Mellon University.

FISCHER, T. Wiener's refiguring of a cybernetic design theory.
Norbert Wiener in the 21st Century (21CW), 2014 IEEE Conference on, 2014. IEEE, 1-7.

FISHER, R. A. 1956. Statistical methods and scientific inference.

FRAZER, J. H. 1995. An Evolutionary Architecture. Themes.

FRICKER, P., HOVESTADT, L., BRAACH, M., DILLENBURGER, B., DOHMEN, P., RÜDENAUER, K., LEMMERZAHL, S. & LEHNERER, A. 2007. Organised Complexity, Frankfurt am Main, Germany, 25th eCAADe Conference Proceedings.

GELL-MANN, M. Complexity and complex adaptive systems. In: GELL-MANN, M. & HAWKINS, J. A., eds. Proceedings of the Santa Fe Institute Studies in the Sciences of Complexity, 1992, California Institute of Technology, Pasadena. Addison-Wesley Publishing Co, 176-177.

GELL-MANN, M. 1995. What is complexity? The Quark and the Jaguar. MA, USA: John Wiley & Sons Ltd.

GELL-MANN, M. & LLOYD, S. 1996. Information measures, effective complexity, and total information. Complexity, 2, 44-52.

GERARD, R. 1958. Concepts and principles of biology. Initial Working Paper. Behavioral Science, 3, 95-102.

GERBER, D. J. & LIN, S.-H. E. Designing-in performance through parameterization, automation, and evolutionary algorithms: 'H.D.S. BEAGLE 1.0'. In: FISCHER, T., DE BISWAS, K., HAM, J. J., NAKA, R. & HUANG, W. X., eds. CAADRIA 2012: Beyond Codes and Pixels, 25-28 April 2012, Chennai, India. 141-150.

GERBER, D. J. & LIN, S.-H. E. 2013. Designing in complexity: Simulation, integration, and multidisciplinary design optimization for architecture. Simulation, 1-24.

GERBER, D. J., LIN, S.-H. E., PAN, B. P. & SOLMAZ, A. S. Design optioneering: Multi-disciplinary design optimization through parameterization, domain integration and automation of a genetic algorithm. In: NIKOLOVSKA, L. & ATTAR, R., eds. SimAUD 2012, 26-30 March 2012, Orlando, FL, USA. 23-30.

GERBER, D. J., PANTAZIS, E. & WANG, A. 2017. A multi-agent approach for performance based architecture: Design exploring geometry, user, and environmental agencies in façades. Automation in Construction, 76, 45-58.
GERO, J. S. 1996. Creativity, emergence and evolution in design. Knowledge-Based Systems, 9, 435-448.

GERO, J. S. 2000. Computational models of innovative and creative design processes. Technological Forecasting and Social Change, 64, 183-196.

GERO, J. S. 2002. Computational models of creative designing based on situated cognition. Proceedings of the 4th Conference on Creativity & Cognition. Loughborough, UK: ACM.

GERO, J. S. & BRAZIER, F. M. T. 2004. Intelligent agents in design. Artificial Intelligence for Engineering Design, Analysis and Manufacturing (AIEDAM), 18, 113.

GERO, J. S. & SOSA, R. 2008. Complexity measures as a basis for mass customization of novel designs. Environment and Planning B: Planning and Design, 35, 3-15.

GIDADO, K. 1993. Numerical index of complexity in building construction to its effect on production time. University of Brighton.

GLANVILLE, R. 2001. An intelligent architecture. Convergence: The International Journal of Research into New Media Technologies, 7, 12-24.

GLANVILLE, R. 2007. Designing complexity. Performance Improvement Quarterly, 20, 75-96.

GOLDENFELD, N. & KADANOFF, L. P. 1999. Simple Lessons from Complexity. Science, 284, 87-89.

GROENEWOLT, A., SCHWINN, T., NGUYEN, L. & MENGES, A. 2018. An interactive agent-based framework for materialization-informed architectural design. Swarm Intelligence, 12, 155-186.

HALFAWY, M. & FROESE, T. 2005. Building Integrated Architecture/Engineering/Construction Systems Using Smart Objects: Methodology and Implementation. Journal of Computing in Civil Engineering, 19, 172-181.

HANSELL, M. H. 1984. Animal architecture and building behaviour.

HARDING, J. 2016. Dimensionality Reduction for Parametric Design Exploration. In: ADRIAENSSENS, S., GRAMAZIO, F., KOHLER, M., MENGES, A. & PAULY, M. (eds.) Advances in Architectural Geometry. Zurich, Switzerland: vdf.

HEBRON, P. 2017. Rethinking Design Tools in the Age of Machine Learning.
Artists and Machine Intelligence [Online]. Available: https://medium.com/artists-and-machine-intelligence/rethinking-design-tools-in-the-age-of-machine-learning-369f3f07ab6c [Accessed April 2018].

HERR, C. M. 2002. Generative Architectural Design and Complexity Theory. 5th International Conference of Generative Art. Milan, Italy.

HEYDARIAN, A., CARNEIRO, J. P., GERBER, D. & BECERIK-GERBER, B. 2015a. Immersive virtual environments, understanding the impact of design features and occupant choice upon lighting for building performance. Building and Environment, 89, 217-228.

HEYDARIAN, A., CARNEIRO, J. P., GERBER, D., BECERIK-GERBER, B., HAYES, T. & WOOD, W. 2015b. Immersive virtual environments versus physical built environments: A benchmarking study for building design and user-built environment explorations. Automation in Construction, 54, 116-126.

HEYLIGHEN, F. & JOSLYN, C. 2001. Cybernetics and second order cybernetics. Encyclopedia of Physical Science & Technology, 4, 155-170.

HOLLAND, J. H. 1992a. Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence, Cambridge, MA, MIT Press.

HOLLAND, J. H. 1992b. Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence, Ann Arbor, A Bradford Book.

HOLLAND, J. H. 1995. Hidden order: How adaptation builds complexity, Basic Books.

HORGAN, J. 1995. From complexity to perplexity. Scientific American.

IRELAND, T. Emergent space diagrams: The application of swarm intelligence to the problem of automatic plan generation. In: TIDAFI, T. & DORTA, A. T., eds. CAADFutures: Joining Languages, Cultures and Visions, 2009, Montreal, Canada. 245-258.

ISLER, H. New Shapes for Shells. In: TORROJA, E., ed. International Association for Shell and Spatial Structures, 1959, Madrid. North-Holland, 5.

JACOBS, J. 1961. The death and life of great American cities, Vintage.
JAZIZADEH, F., KAVULYA, G., KWAK, J., BECERIK-GERBER, B., TAMBE, M. & WOOD, W. Human-building interaction for energy conservation in office buildings. Proc. of the Construction Research Congress, 2012. 1830-1839.

JENCKS, C. 2000. Architecture 2000 and Beyond: Success in the art of prediction, Wiley.

JENNINGS, N. R., CORERA, J. M. & LARESGOITI, I. Developing Industrial Multi-Agent Systems. ICMAS, 1995. 423-430.

JIN, Y. & LI, W. 2007. Design concept generation: a hierarchical coevolutionary approach. Journal of Mechanical Design, 129, 1012-1022.

BRAUMANN, J. & BRELL-COKCAN, S. Parametric Robot Control: Integrated CAD/CAM for Architectural Design. ACADIA 2011: Integration through Computation, 2011. 242-251.

JORDAN, J. S. 1992. The exponential convergence of Bayesian learning in normal form games. Games and Economic Behavior, 4, 202-217.

KAELBLING, L. P., LITTMAN, M. L. & CASSANDRA, A. R. 1998. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101, 99-134.

KALAY, Y. E. 1999. Performance-based design. Automation in Construction, 8, 395-409.

KALAY, Y. E. 2004. Architecture's new media: principles, theories, and methods of computer-aided design, MIT Press.

KALMAN, R. E. 1960. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82, 35-45.

KASABOV, N. & KOZMA, R. 1998. Introduction: Hybrid intelligent adaptive systems. International Journal of Intelligent Systems, 13, 453-454.

KAUFFMAN, S. A. 1991. Antichaos and adaptation. Scientific American, 265, 78-84.

KAUFFMAN, S. A. 1993. The origins of order: Self organization and selection in evolution, Oxford University Press.

KAVULYA, G., GERBER, D. J. & BECERIK-GERBER, B. 'Designing in' complex system interaction: Multi-agent based systems for early design decision making. ISARC 2011, 29 June - 2 July 2011, Seoul, Korea. 694-698.

KEATING, S. J., LELAND, J. C., CAI, L. & OXMAN, N. 2017.
Toward site-specific and self-sufficient robotic fabrication on architectural scales. Science Robotics, 2, eaam8986.

KEOUGH, I. & BENJAMIN, D. Multi-objective optimization in architectural design. Proceedings of the 2010 Spring Simulation Multiconference, 2010. Society for Computer Simulation International, 191.

KILIAN, A. 2006. Design innovation through constraint modeling. International Journal of Architectural Computing, 4, 87-105.

KILIAN, A. 2014. Steering of form. Shell Structures for Architecture: Form Finding and Optimization, 131.

KLEIN, L., KAVULYA, G., JAZIZADEH, F., KWAK, J., BECERIK-GERBER, B. & TAMBE, M. 2011. Towards optimization of building energy and occupant comfort using multi-agent simulation. 28th International Symposium on Automation and Robotics in Construction (ISARC).

KLEIN, L., KWAK, J.-Y., KAVULYA, G., JAZIZADEH, F., BECERIK-GERBER, B., VARAKANTHAM, P. & TAMBE, M. 2012. Coordinating occupant behavior for building energy and comfort management using multi-agent systems. Automation in Construction, 22, 525-536.

KLIR, G. 1985. Architecture of systems problem solving, New York, Plenum Press.

KOHLHAMMER, T. & KOTNIK, T. 2011. Systemic behaviour of plane reciprocal frame structures. Structural Engineering International, 21, 80-86.

KOHONEN, T. 1998. The self-organizing map. Neurocomputing.

KOLAREVIC, B. 2016. Simplexity (and Complicity) in Architecture.

KOLMOGOROV, A. N. 1965. Three approaches to the definition of the concept "quantity of information". Problems of Information Transmission, 1, 3-11.

KOTNIK, T. 2010. Digital architectural design as exploration of computable functions. International Journal of Architectural Computing, 8, 1-16.

MARCOLINO, L. S., JIANG, A. X. & TAMBE, M. 2013. Multi-agent Team Formation: Diversity Beats Strength? In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013). Beijing, China.

LARSEN, O. P. 2008. Reciprocal frame architecture, Routledge.

LEACH, N. 2009.
Swarm Urbanism. Architectural Design, 79, 56-63.

LEACH, N., TURNBULL, D. & WILLIAMS, C. 2004. Digital tectonics, University of Michigan, Wiley.

LEVIN, L. A. 1976. Different Measures of Complexity of finite objects (Axiomatic Description). DOKLADY AKADEMII NAUK SSSR, 227, 804-807.

LLOYD, S. 2001. Measures of complexity: a nonexhaustive list. IEEE Control Systems Magazine, 21, 7-8.

LLOYD, S. & PAGELS, H. 1988. Complexity as thermodynamic depth. Annals of Physics, 188, 186-213.

MACAL, C. M. & NORTH, M. J. Agent-based modeling and simulation. Winter Simulation Conference, 2009. 86-98.

MAHER, M. L. & KIM, M. Supporting Design Using Self-Organizing Design Knowledge. CAADRIA 2004 - Proceedings of the 9th International Conference on Computer Aided Architectural Design Research in Asia, 2004, Seoul, Korea.

MALKAWI, A. 2005. Performance simulation: research and tools. In: KOLAREVIC, B. & MALKAWI, A. (eds.) Performative architecture: beyond instrumentality. New York: Spon Press.

MALKAWI, A. M., SRINIVASAN, R. S., YI, Y. K. & CHOUDHARY, R. 2005. Decision support and design evolution: integrating genetic algorithms, CFD and visualization. Automation in Construction, 14, 33-44.

MARCOLINO, L. S., GERBER, D., TAMBE, M., VEETIL P, S., MUSIL, J., KOLEV, B. & PRICE, S. 2015. Agents vote for the environment: Designing energy-efficient architecture. AAAI Workshop on Computational Sustainability. Texas.

MARINCIC, N. 2016. Towards Communication in CAAD: Spectral Characterisation and Modelling with Conjugate Symbolic Domains. Doctor of Sciences, ETH.

MATURANA, H. R. & VARELA, F. J. 1987. The tree of knowledge: The biological roots of human understanding, Boston, MA, New Science Library/Shambhala Publications.

MEISSNER, U., RÜPPEL, U. & THEISS, M. Network-Based Fire Engineering Supported by Agents. Proceedings of the Xth International Conference on Computing in Civil and Building Engineering (ICCCBE-2004), 2004, Weimar, Germany.

MENGES, A.
Computational morphogenesis. Proceedings of the 3rd International ASCAAD Conference, 2007, Alexandria, Egypt. 725-744.

MENGES, A. 2013. Morphospaces of Robotic Fabrication. In: BRELL-COKCAN, S. & BRAUMANN, J. (eds.) RobArch. Springer.

MINSKY, M. 1961. Steps toward artificial intelligence. Proceedings of the IRE, 49, 8-30.

MITCHELL, J., WONG, J. & PLUME, J. Design collaboration using IFC. In: DONG, A., MOERE, A. V. & GERO, J. S., eds. Computer-Aided Architectural Design Futures (CAADFutures 2007), 2007, Dordrecht. Springer Netherlands, 317-329.

MITCHELL, W. J. 1990. The logic of architecture: Design, computation, and cognition, Cambridge, MA, MIT Press.

MITCHELL, W. J. 2005. Constructing complexity. Computer Aided Architectural Design Futures 2005. Springer.

MONEO, R. 1978. On Typology. Oppositions Summer 13. MIT Press.

MORRIS, P. W. & HOUGH, G. 1987. The anatomy of major projects. Wiley.

MULLEN, T. & WELLMAN, M. P. 1996. Some issues in the design of market-oriented agents. Intelligent Agents II: Agent Theories, Architectures, and Languages. Springer.

MÜLLER, P., WONKA, P., HAEGLER, S., ULMER, A. & VAN GOOL, L. Procedural modeling of buildings. ACM Transactions on Graphics (TOG), 2006. ACM, 614-623.

NABAEI, S. S. & WEINAND, Y. 2011. Geometrical description and structural analysis of a modular timber structure. International Journal of Space Structures, 26, 321-330.

NASUTO, S. & BISHOP, M. 1999. Convergence analysis of stochastic diffusion search. Parallel Algorithms and Applications, 14, 89-107.

NEGROPONTE, N. 1970. The architecture machine: towards a more human environment, MIT Press, Cambridge, MA.

NEWELL, A. & SIMON, H. A. 1972. Human problem solving, Prentice-Hall, Englewood Cliffs, NJ.

OOSTERHUIS, K. 2011. Towards a New Kind of Building: A Designer's Guide for Non-Standard Architecture, Netherlands, NAi Uitgevers/Publishers.

OSTERMEYER, Y., NÄGELI, C., HEEREN, N. & WALLBAUM, H. 2018.
Building inventory and refurbishment scenario database development for Switzerland. Journal of Industrial Ecology, 22, 629-642.

OXMAN, N. Digital craft: Fabrication-based design in the age of digital production. UbiComp 2007, 16-19 September 2007, Innsbruck, Austria. 534-538.

OXMAN, R. 2006. Theory and design in the first digital age. Design Studies, 27, 229-265.

OXMAN, R. 2008. Performance-based design: Current practices and research issues. International Journal of Architectural Computing, 6, 1-17.

PANTAZIS, E. & GERBER, D. J. 2016. Design Exploring Complexity in Architectural Shells: Interactive form finding of reciprocal frames through a multi-agent system. In: HERNEOJA, A., OSTERLUND, T. & MARKKANEN, P. (eds.) eCAADe. Oulu, Finland.

PARASCHO, S., BAUR, M., KNIPPERS, J. & MENGES, A. 2013. Design Tools for Integrative Planning. In: STOUFFS, R. & SARIYILDIZ, S. (eds.) Proceedings of the 31st eCAADe Conference. Delft, The Netherlands: Delft University of Technology.

PAWLOFSKY, T., PANTAZIS, E., ROMAN, M., ALVAREZ, D., RODRIGUEZ, M., PSALTIS, S., MEZARI, M., AZARIADI, S., AGEEVA, K. & STANISLAVA, P. 2012. Brickolage. In: FALLACARA, G. & D'AMATO, C. (eds.) Stereotomy: Stone Architecture and New Research. Paris, France: Presses des Ponts.

PERNA, A. & THERAULAZ, G. 2017. When social behaviour is moulded in clay: on growth and form of social insect nests. Journal of Experimental Biology, 220, 83-91.

PETERSEN, K. H. 2014. Collective Construction by Termite-Inspired Robots.

PHILLIPS, D. 2004. Daylighting: natural light in architecture, Burlington, MA, Elsevier, Architectural Press.

PIKER, D. 2013. Kangaroo: form finding with computational physics. Architectural Design, 83, 136-137.

PIZZIGONI, A. 2010. Leonardo & the Reciprocal Structures.

PLESK, P. & WILSON, T. 2001. Complexity, leadership, and management in healthcare organizations. BMJ, 323, 746-749.

POPOV, N. 2011. Generative sub-division morphogenesis with Cellular Automata and Agent-Based Modelling.
POTTMANN, H., EIGENSATZ, M., VAXMAN, A. & WALLNER, J. 2015. Architectural geometry. Computers & Graphics, 47, 145-164. PREISINGER, C. & HEIMRATH, M. 2014. Karamba—A toolkit for parametric structural design. Structural Engineering International, 24, 217-221. Page | 148 PRUSINKIEWICZ, P. & LINDENMAYER, A. 2012. The algorithmic beauty of plants, Springer Science & Business Media. PUGNALE, A., PARIGI, D., KIRKEGAARD, P. H. & SASSONE, M. S. The principle of structural reciprocity: history, properties and design issues. The 35th Annual Symposium of the IABSE 2011, the 52nd Annual Symposium of the IASS 2011 and incorporating the 6th International Conference on Space Structures, 2011. PUGNALE, A. & SASSONE, M. 2007. Morphogenesis and structural optimization of shell structures with the aid of a genetic algorithm. JOURNAL-INTERNATIONAL ASSOCIATION FOR SHELL AND SPATIAL STRUCTURES, 155, 161. RADFORD, A. D. & GERO, J. S. 1980. Tradeoff diagrams for the integrated design of the physical environment in buildings. Building and Environment, 15, 3-15. RAHAMAN, H. & TAN, B.-K. 2011. Interpreting digital heritage: A conceptual model with end-users' perspective. International Journal of Architectural Computing, 9, 99-114. RAHMAN, M. 2010. Complexity in building design. In: RUBY, I. & RUBY, A. (eds.) Re-inventing Construction. Berlin: Ruby Press. RAPOPORT, A. 1986. General system theory: essential concepts & applications, CRC Press. REAS, C. 2007. Processing : a programming handbook for visual designers and artists, Cambridge, MA, MIT Press. REAS, C. & FRY, B. 2007. Processing: A programming Handbook for Visuakl Designers and Artists, Boston, MIT Press. REINHART, C. F., MARDALJEVIC, J. & ROGERS, Z. 2006. Dynamic daylight performance metrics for sustainable building design. Leukos, 3, 7-31. REYNOLDS, C. W. 1987. Flocks, herds and schools: A distributed behavioral model. ACM SIGGRAPH computer graphics, 21, 25-34. RIPPMANN, M., LACHAUER, L. & BLOCK, P. 2012. Interactive vault design. 
International Journal of Space Structures, 27, 219-230. RITTEL, H. & WEBBER, M. M. 1973. 2.3 planning problems are wicked. Polity, 4, 155-69. RIVIÈRES, J. D. & WIEGAND, J. 2004. Eclipse: A platform for integrating development tools. IBM Systems Journal, 43, 371-383. ROBERT MCNEEL & ASSOCIATES. 2019. Modeling Tools for Designers [Online]. Miami. Available: www.rhino3d.com [Accessed February 2019]. ROUDSARI, M., PAK, M. & SMITH, A. 2014. Ladybug: A Parametric Environmental Plugin for Grasshopper to Help Designers Create an Environmentally-Conscious Design. RUSSELL, S. J., NORVIG, P., CANNY, J. F., MALIK, J. M. & EDWARDS, D. D. 2003. Artificial intelligence: a modern approach, Prentice hall Upper Saddle River. RUTTEN, D. 2014. Navigating multi-dimsnesional landscapes in foggy weather as an analogy for generic problem solving. Inrternational Conference on Geometry and Graphics. Innsbruck, Austria. SALINGAROS, N. 2000. Complexity and Urban Coherence. Journal of Urban Design, 5, 291-316. SCHEURER, F. 2007. Getting complexity organised Using self-organisation in architectural construction. Automation in Construction, 78 – 85. SCHEURER, F. 2010. Materialising Complexity. Architectural Design, 80, 86-93. SCHUH, G. & EVERSHEIM, W. 2004. Release-Engineering—An Approach to Control Rising System-Complexity. CIRP Annals-Manufacturing Technology, 53, 167-170. SCHWINN, T. & MENGES, A. 2015. Fabrication Agency: Landesgartenschau Exhibition Hall. Architectural Design, 85, 92-99. SCHWINN, T., MENGES, A. & KRIEG, D. O. 2014. Behavioral Strategies: Synthesizing design computation and robotic fabrication of lightweight timber plate structures. ACADIA: Design Agency. Los Angeles. Page | 149 SENATORE, G. & PIKER, D. 2015. Interactive real-time physics: an intuitive approach to form-finding and structural analysis for design and education. Computer-Aided Design, 61, 32-41. SHANNON, C. E. 1948. A mathematical theory of communication. 
Mobile Computing and Communication Review (SIGMOBILE ), 5, 3-55. SHEA, K., AISH, R. & GOURTOVAIA, M. 2005. Towards integrated performance-driven generative design tools. Automation in Construction, 14, 253-264. SHEA, K., SEDGWICK, A. & ANTONUNTTO, G. 2006. Multicriteria optimization of paneled building envelopes using ant colony optimization. Intelligent Computing in Engineering and Architecture. Springer. SHEN, W., HAO, Q., MAK, H., NEELAMKAVIL, J., XIE, H., DICKINSON, J., THOMAS, R., PARDASANI, A. & XUE, H. 2010. Systems integration and collaboration in architecture, engineering, construction, and facilities management: A review. Advanced Engineering Informatics, 24, 196-207. SIMEONE, D., KALAY, Y. E., SCHAUMANN, D. & HONG, S. W. Modelling and Simulating Use Processes in Buildings. eCAADe 2013: Computation and Performance, September 18-20, 2013 2013 Delft, The Netherlands. Faculty of Architecture, Delft University of Technology; eCAADe (Education and research in Computer Aided Architectural Design in Europe). SIMON, H. 1991. The Architecture of Complexity. Facets of Systems Science. New York, USA: Springer. SIMON, H. A. 1973. The structure of ill-structured problems. Artificial Intelligence Models of discovery, 181-201. SIMON, H. A. 1977. The structure of ill-structured problems. Models of discovery. Springer. SIMON, H. A. 1996. The sciences of the artificial, MIT press. SNOOKS, R. Encoding Behavioral Matter. In: IKEDA, Y., ed. International Symposium on Algorithmic Design for Architecture and Urban Design (ALGODE), March 11-16 2011 Tokyo, Japan. 10. SOIBELMAN, L. & PENA-MORA, F. 2000. Distributed multi-reasoning mechanism to support conceptual structural design. Journal of Structural Engineering, 126, 733-742. SOIBELMAN, L., WU, J., CALDAS, C., BRILAKIS, I. & LIN, K.-Y. 2008. Management and analysis of unstructured construction data types. Advanced Engineering Informatics, 22, 15-27. STEFAN WRONA, A. G. 
Complexity in Architecture - How CAAD can be involved to Deal with it. In: KOENRAAD, N., PROVOOST TOM, VERBEKE JOHAN & (, V. J., eds. AVOCAAD - Added Value of Computer Aided Architectural Design, 2001 ogeschool voor Wetenschap en Kunst - Departement Architectuur Sint-Lucas, Campus Brussel. STINY, G. 1980. Introduction to shape and shape grammars. Environment and planning B: planning and design, 7, 343-351. SUGIHARA, S. Comparison between Top-Down and Bottom-Up Algorithms in Computational Design Practice. International Symposium on Algorithmic Design for Architecture and Urban Design, ALGODE 2011 Tokyo. SUGIHARA, S. iGeo: Algorithm Development Environment for Computational Design Coders with Integration of NURBS Geometry Modeling and Agent Based Modeling. In: GERBER, D. J., SANCHEZ, J. & HUANG, A., eds. ACADIA 14: Design Agency, 23-25 October, 2014 2014 Los Angeles. eVolo, 23-32. SUH, N. P. 1990. The principles of design, Oxford University Press New York. SUH, N. P. 2005a. Complexity in Engineering. CIRP Annals - Manufacturing Technology, 54, 46-63. SUH, N. P. 2005b. Complexity: Theory and application, New York, Oxford University Press. SYCARA, K. 1998. Multiagent systems. AI magazine, 19, 79-92. TAMBE, M. 1996. Teamwork in real-world, dynamic environments, University of Southern California, Information Sciences Institute. TAMBE, M. 1997. Towards flexible teamwork. Journal of Artificial Intelligence Research, 83-124. TAMBE, M. 1998. Implementing agent teams in dynamic multiagent environments. Applied Artificial Intelligence, 12, 189-210. Page | 150 TEICHOLZ, P. Vision of future practice. Berkeley-Stanford Workshop on Defining a Research Agenda for AEC Process/Product Development in, 2000. TERZIDIS, K. 2006. Algorithmic architecture, Oxford ;Burlington, MA, Architectural Press. THERAULAZ, G. & BONABEAU, E. 1995. Coordination in distributed building. Science, 269, 686. THOMAS KOHLHAMMER, T. K. 2010. Systemic Behaviour of Plane Reciprocal Frame Structures. 
Structural Engineering International, 80-86. TRAUB, J. F., WASILKOWSKI, G. W. & WOŹNIAKOWSKI, H. 1983. Information, uncertainty, complexity, Addison- Wesley Publishing Company, Advanced Book Program/World Science Division. TSILIAKOS, M. 2012. Swarm Materiality: A multi-agent approach to stress driven material organization. Digital Physicality - Proceedings of the 30th eCAADe Conference. TURING, A. M. 1936. On computable numbers, with an application to the Entscheidungsproblem. J. of Math, 58, 5. TURRIN, M., VON BUELOW, P. & STOUFFS, R. 2011. Design explorations of performance driven geometry in architectural design using parametric modeling and genetic algorithms. Advanced Engineering Informatics, 25, 656-675. TVEIT, P. 1987. Considerations for Design of Network Arches. Journal of Structural Engineering, 113, 2189-2207. VAN MELE, T., LACHAUER, L., RIPPMANN, M. & BLOCK, P. 2012. Geometry-based understanding of structures. Journal of the International Association of Shell and Spatial Structures, 53, 285-295. VENTURI, R. 1977. Complexity and contradiction in architecture, New York, USA, The Museum of modern art. VON BERTALANFFY, L. 1973. The meaning of general system theory. General system theory: Foundations, development, applications, 30-53. VON BÜLOW, P. 2007. An intelligent genetic design tool (IGDT) applied to the exploration of architectural trussed structural systems. Ph.D. Ph.D. Dissertation, University of Stuttgart. VON NEUMANN, J. 1951. The general and logical theory of automata. Cerebral mechanisms in behavior, 1-41. WEAVER, W. 1948. Science and complexity. American Scientist, 36, 449-456. WEINAND, Y. 2011. Innovative Timber Constructions. IABSE-IASS 2011. London, Great-Britain. WEINSTEIN, P., VAN PARUNAK, H., CHIUSANO, P. & BRUECKNER, S. Agents swarming in semantic spaces to corroborate hypotheses. Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems-Volume 3, 2004. IEEE Computer Society, 1488-189. WEISS, A. 
2015. Google N-gram viewer. The Complete Guide to Using Google in Libraries: Instruction, Administration, and Staff Productivity, 1, 183. WEISS, G. 1999. Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. 619. WERFEL, J., BAR-YAM, Y., RUS, D. & NAGPAL, R. 2006. Distributed Construction by Mobile Robots with Enhanced Building Blocks. New England: Complex Systems Institute. WERFEL, J., PETERSEN, K. & NAGPAL, R. 2014. Designing Collective Behavior in a Termite-Inspired Robot Construction Team. Science, 343, 754-758. WIENER, N. 1961. Cybernetics or Control and Communication in the Animal and the Machine, Cambridge, Massachusets, MIT press. WOLFRAM, S. 2002. A new kind of science, Wolfram media Champaign. WOODBURY, R., BURROW, A., DATTA, S. & CHANG, T.-W. 1999. Typed feature structures and design space exploration. AI EDAM, 13, 287-302. WOODRIDGE, M. & JENNINGS, N. R. 1995. Intelligent Agents: Theory and Practice. Knowledge Engineering Review, 10, 115-152. YATES, F. E. 1978. Complexity and the limits to knowledge. American Journal of Physiology-Regulatory, Integrative and Comparative Physiology, 235, R201-R204. Page | 151 YEZIORO, A., BING, D. & LEITE, F. 2008. An applied artificial intelligence approach towards assessing building performance simulation tools. Energy and Buildings, 40, 612-620.
Asset Metadata
Creator: Pantazis, Evangelos (author)
Core Title: Behavioral form finding using multi-agent systems: a computational methodology for combining generative design with environmental and structural analysis in architectural design
School: Viterbi School of Engineering
Degree: Doctor of Philosophy
Degree Program: Civil Engineering
Publication Date: 08/15/2019
Defense Date: 08/15/2019
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tags: computational design, digital fabrication, form finding, generative design, multi agent systems, OAI-PMH Harvest, robotic construction
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Gerber, David Jason (committee chair); Becerik-Gerber, Burcin (committee member); Novak, Marcos (committee member); Soibelman, Lucio (committee member)
Creator Email: epantazi@usc.edu, vague@topotheque.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c89-211407
Unique Identifier: UC11663199
Identifier: etd-PantazisEv-7797.pdf (filename); usctheses-c89-211407 (legacy record id)
Legacy Identifier: etd-PantazisEv-7797.pdf
Dmrecord: 211407
Document Type: Dissertation
Rights: Pantazis, Evangelos
Type: texts
Source: University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA